I need advice.
I have a webserver VM (LAN-only, not on the internet) that hosts two wikis: HomeWorkWiki and GameWiki.
I want to wget only the HomeWorkWiki pages, without crawling into the GameWiki.
My goal is to grab just the .html files (ignoring all other files: images, CSS, etc.) with wget. (I don't want to do a mysqldump or a MediaWiki export; my non-IT boss just wants to double-click the HTML files.)
How can I run wget so that it only crawls the HomeWorkWiki, and not the GameWiki, on this VM?
Thanks
The solution was either to use HTTrack and customize its wizard carefully, or a one-liner with wget.
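The exact command wasn't captured here, but here is a minimal sketch of what such a wget one-liner could look like, assuming each wiki lives under its own URL path (the hostname webserver and the /HomeWorkWiki/ path are hypothetical placeholders for your setup):

    # --no-parent keeps the crawl inside /HomeWorkWiki/, so /GameWiki/ is never touched
    # --adjust-extension saves pages with an .html suffix so they open on double-click
    # --convert-links rewrites links to point at the local copies
    # --reject skips images and other non-HTML assets
    # --restrict-file-names=windows replaces characters like '?' that Windows filenames forbid
    wget --recursive --no-parent --convert-links --adjust-extension \
         --reject 'jpg,jpeg,png,gif,svg,css,js,ico,pdf' \
         --restrict-file-names=windows \
         http://webserver/HomeWorkWiki/

If the wiki serves pages as index.php?title=... URLs (as MediaWiki does by default), the combination of --restrict-file-names=windows and --adjust-extension is what makes the saved files double-clickable on a Windows machine.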