I often find myself having to set up a static copy of a website. If I have command-line access to the host, it’s simple enough to tar up the filesystem and database, but if I only have web access then this little gem saves me a lot of time.
I’ve used “wget” for years, but had never bothered to read its man page. It turns out there’s a primo little option you can use to mirror a website, and fittingly enough that option is --mirror.
wget --mirror URL
This will essentially grab all the HTML source files, plus any files they link to, such as images, CSS and so on. It even recreates the appropriate directory structure, so the links all work and the images aren’t broken.
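In practice I’d pair --mirror with a few companion options (all standard wget flags from its man page; the URL here is just a placeholder) to make the copy more browsable offline:

```shell
# --convert-links:    rewrite links so they point at the local copies
# --page-requisites:  also fetch the images, CSS and scripts each page needs
# --adjust-extension: save pages with an .html extension
# --no-parent:        don't wander above the starting directory
wget --mirror --convert-links --page-requisites --adjust-extension --no-parent \
     https://example.com/
```

--convert-links in particular is what lets you open the mirrored pages straight from disk without a web server.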
The only thing I’ve found that needs doing afterwards is manually grabbing any background images specified in the CSS files.
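To find those stragglers, something like this works for me: pull the url(…) references out of the mirrored stylesheets, then fetch each one with another wget call. This is a sketch, not part of the original tip; the example.com directory and style.css file are stand-ins for whatever the mirror actually produced.

```shell
# Set up a stand-in for a mirrored site (wget names the directory after the host)
mkdir -p example.com
cat > example.com/style.css <<'EOF'
body { background: url("images/bg.png"); }
h1   { background-image: url('images/header.jpg'); }
EOF

# Pull out every url(...) target, strip the wrapper and quotes, dedupe
grep -hoE 'url\([^)]+\)' example.com/*.css \
  | sed -E "s/url\(['\"]?([^'\")]+)['\"]?\)/\1/" \
  | sort -u
```

Feed the resulting paths back into wget (relative to the stylesheet’s URL) and the backgrounds stop 404ing.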
Hope this saves you some time too. Props to Helmuth W. Kump for sharing his knowledge.