Ask HN: How do I scrape a website?

1 point | by terry_hc | about 1 month ago
I wish to scrape and preserve the blog of a dead author, before their domain and writings expire. What would be the most accurate wget invocation for obtaining all their articles, all images, et cetera, that reside under the website's domain, such that the whole site (bar external content) can be browsed locally?
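A common starting point for this kind of archival mirror is wget's recursive mode with link rewriting. The sketch below uses `https://example.com/` as a placeholder for the author's domain, and the flag choices (politeness delay included) are one reasonable combination rather than a definitive answer:

```shell
# Mirror a whole site for offline browsing.
# --mirror           recursive download with infinite depth and timestamping
# --convert-links    rewrite links in saved pages so they work locally
# --adjust-extension save pages with .html extensions where appropriate
# --page-requisites  also fetch images, CSS, and other assets each page needs
# --no-parent        never ascend above the starting directory
# --wait=1           pause between requests to be polite to the server
wget --mirror \
     --convert-links \
     --adjust-extension \
     --page-requisites \
     --no-parent \
     --wait=1 \
     https://example.com/
```

By default wget stays on the starting host, so external content is excluded automatically; if the blog serves images from a separate subdomain, adding `--span-hosts --domains=example.com,img.example.com` (hypothetical hostnames) would pull those in as well.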