Sorry if this is unclear, but I'm trying to set up a script that downloads a file. Currently, my method of downloading the file is by clicking on a link like so:
https://www.URL.com/view?downloadFile=AcctItemFiles\1234567890.txt
I tried using a `wget` command, but that obviously didn't work because that link is not an actual file location. Any ideas on how to either figure out the actual file location, or how to download the file with that link, would be helpful.
`wget` should have no problem fetching it, since it performs the GET command just fine on a URL like that. The only problems I can think of that you might be having are:

- `\` is a special shell character, and you will need to put the URL in quotes in order to prevent the shell from converting `\1` to `1`. Better yet, escape `\` characters as `%5C`.
- It will save the download as `view?downloadFile=AcctItemFiles\1234567890.txt` instead of something sane like your web browser does. Either use the `-O filename` option to force it to write all the downloaded data to a specified filename, e.g. `wget -O 1234567890.txt 'https://www.URL.com/view?downloadFile=AcctItemFiles\1234567890.txt'`.
Or, use the `--content-disposition` option to tell it to save using the filename provided by the server in the header (read and understand the warning about it being buggy and about it requiring two requests; do not use this if the target script does not support the HEAD command).

Alternatively, use `curl -O -J ...` instead of `wget`, where `-O -J` together instruct it to read the output filename from the header. The documentation does not say that curl requires two requests, but curl recently had a vulnerability due to trusting invalid filenames, so "buggy" may still apply.

Try using curl with the `-L` flag instead of wget.
Curl and wget complement each other.
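Putting the curl options mentioned above together: a sketch that only assembles and prints the command rather than running it (the URL is the placeholder from the question, with the backslash percent-encoded as `%5C` so no shell quoting surprises remain):

```shell
# -O : save to a file instead of writing the body to stdout
# -J : with -O, take the filename from the Content-Disposition header
# -L : follow any redirects on the way to the file
url='https://www.URL.com/view?downloadFile=AcctItemFiles%5C1234567890.txt'
cmd="curl -O -J -L $url"
printf '%s\n' "$cmd"   # inspect before running; the host is a placeholder
```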