I'd like to know what command to use to gunzip all files in a target directory recursively. I tried the unzip command, but it didn't work.
I also tried the command from "Unzip all zip files in a target folder?".
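For reference, the closest I've come up with is a find-based sketch, assuming the files are ordinary .gz archives (the /target/dir path is just a placeholder):
find /target/dir -type f -name '*.gz' -exec gunzip {} +
I'm not sure whether this is the right general approach, though.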
I'm trying to use the --delete option in rsync to delete files in the target directory which aren't present in the original directory.
Here is the command I'm using:
rsync -avz --ignore-existing --recursive --delete /var/www/* [email protected]:/var/www
So my question is: how can I delete all files in the target directory which aren't present in the original directory?
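For what it's worth, my only guess so far is that the wildcard might be the problem, since rsync then only sees the individual files passed to it rather than the directory as a whole. A sketch of what I was thinking of trying instead (the remote host here is just a placeholder):
# trailing slash on the source syncs its contents, so --delete can compare whole directories
rsync -avz --ignore-existing --delete /var/www/ user@remotehost:/var/www
I haven't tested this, so I don't know whether it actually deletes the extra files.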
Okay, so I have looked up the existing answers here and elsewhere, but what I can't find out is: if I use the --ignore-existing option along with the --delete option, will I still be able to have rsync delete files from the target if they no longer exist in the source, AND still prevent rsync from overwriting existing files in the target?
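The only way I can think of to check this myself is a dry run with made-up paths, something like:
# -n (--dry-run) reports what would be copied or deleted without changing anything
rsync -avzn --ignore-existing --delete /source/dir/ user@remotehost:/target/dir/
But I'd still like to know how the two options are supposed to interact.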
Thanks
I have a directory called 'existing_folder' and another directory called 'temp'.
I want to replace the contents of 'existing_folder' with those of 'temp', along with any subdirectories.
Because the directory contains web pages, this has to be done in a way that ensures minimal downtime.
Is there a way to do this, and what command should I use to achieve it?
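Two ideas I've sketched out, though I don't know which (if either) keeps downtime lowest:
# idea 1: overwrite in place, removing anything not present in temp
rsync -a --delete temp/ existing_folder/
# idea 2: swap the directories so the change is close to atomic
mv existing_folder existing_folder.old && mv temp existing_folder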
Currently I have the following command, which will send a folder and its contents recursively:
rsync -avz --ignore-existing --recursive /var/files/_site [email protected]:/var/www
What I want is for only the contents of folder _site to be sent to folder www, not the _site folder itself.
Is it possible to send only the contents of folder _site via rsync?
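The only variation I've thought of is adding a trailing slash to the source, which I've read tells rsync to send the directory's contents rather than the directory itself:
rsync -avz --ignore-existing --recursive /var/files/_site/ [email protected]:/var/www
Is that the correct way to do it, or is there a better option?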
Thanks
I'm using the following command to download web pages into static html files:
wget --quiet http://mytestdomain.com/sitemap.xml --output-document - | egrep -o "http?://[^<]+" | wget -i -
But it just outputs each file like this:
index.html
index.html.1
index.html.2
My question is: is it possible to modify this command so that each saved file uses the original page's title instead of index.html?
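The closest I've gotten is a sketch that fetches each page twice, once to read its <title> and once to save it under that name. It assumes every title fits on one line and contains only characters that are safe in filenames, so I doubt it's robust:
wget -qO- http://mytestdomain.com/sitemap.xml | egrep -o "https?://[^<]+" | while read -r url; do
  # pull the page title, falling back to "page" if none is found
  title=$(wget -qO- "$url" | sed -n 's:.*<title>\(.*\)</title>.*:\1:p' | head -n1)
  wget -q "$url" -O "${title:-page}.html"
done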
Thanks
I'm trying to mirror and download as static html files all of the links in an XML sitemap file.
I found the following command which is supposed to accomplish what I'm trying to achieve but it doesn't actually download anything:
wget --quiet http://www.mydemosite.com/sitemap.xml --output-document - | egrep -o "https?://[^<]+" | wget -i -
I found this thread here:
https://stackoverflow.com/questions/17334117/crawl-links-of-sitemap-xml-through-wget-command
So my question is, how can I mirror and download as static html files all of the links in an XML sitemap file using wget?
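One variation I've been meaning to try pulls the URLs out of the <loc> tags instead and asks wget to save each page with an .html extension and locally usable links, though I don't know whether it fixes the underlying problem:
# -E adds .html extensions, -p grabs page requisites, -k rewrites links for local browsing
wget -qO- http://www.mydemosite.com/sitemap.xml | grep -oP '<loc>\K[^<]+' | wget -E -p -k -i -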
Thanks
I'm currently using the following wget command to download from an FTP server using a list of URLs in a file:
wget --user=mylogin --password='mypassword' -P /home/ftp/ -i /var/www/file/url.txt -N
But now I need a way to download multiple files simultaneously. I'm trying to use aria2 for this, and I tried the following command:
aria2c -x 5 -i /var/www/file/url.txt
But I can't seem to find a way to get aria2 to log in to the FTP server first.
So my question is: is there an option for aria2 to log in to the FTP server first and then download from the list of URLs?
Alternatively, is there a tool better suited to my task?
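From skimming the aria2 documentation, my best guess is that --ftp-user and --ftp-passwd are what I need, combined with -j for parallel downloads, so something like this (untested):
# -j 5: run up to five downloads in parallel; -d: directory to save into
aria2c --ftp-user=mylogin --ftp-passwd='mypassword' -j 5 -d /home/ftp/ -i /var/www/file/url.txt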
Thanks