How to Use HTTrack to Download a Website
Posted by negraru on Mon, 15 Dec 2025
HTTrack is a free, open-source tool that downloads a website to your computer so you can browse it offline just as you would online.
What HTTrack Does
- Downloads HTML pages
- Grabs images, CSS, and JavaScript files
- Rewrites links so the site works locally
- Mirrors the folder structure
Install HTTrack
On Ubuntu or Debian:
sudo apt update
sudo apt install httrack
On macOS using Homebrew:
brew install httrack
On Windows:
Download the installer from the official HTTrack website and run the setup file.
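If the installation worked, httrack should now be on your PATH. Printing its built-in help is a quick way to confirm this on any platform:
httrack --help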
Basic Command Example
Here is a simple example that downloads a website and sets a custom browser user agent:
URL='https://example.com/'
httrack "$URL" -F "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
Command Breakdown
- URL='https://example.com/'
  Stores the target URL in a shell variable.
- httrack "$URL"
  Starts the download using the URL variable.
- -F "Mozilla/5.0 (...)"
  Sets the user-agent string so the request looks like it came from a real Chrome browser on Windows.
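As a rough sketch, the same pattern can be wrapped in a tiny shell script that takes the target URL as an argument (the file name mirror.sh and the fallback URL are placeholders, not part of HTTrack itself):
#!/bin/sh
# mirror.sh - minimal HTTrack wrapper (illustrative sketch)
# Usage: ./mirror.sh https://example.com/
URL="${1:-https://example.com/}"   # first argument, or a placeholder default
UA="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
# Quoting the variables keeps URLs with special characters intact
httrack "$URL" -F "$UA"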
Download to a Specific Folder
httrack https://example.com/ -O ./my-website-copy
The -O option tells HTTrack where to save the mirrored website.
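HTTrack normally writes a small index.html into that output folder linking to the mirrored pages. Assuming the download above has finished, you can open it straight from the command line:
xdg-open ./my-website-copy/index.html   # Linux
open ./my-website-copy/index.html       # macOS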
Limit Download Depth
httrack https://example.com/ -r2
The -r2 option limits the mirror depth to two link levels from the start page, which keeps the crawl from ballooning far beyond the pages you actually need.
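Depth control combines naturally with the other options. The sketch below also adds a scan-rule filter; HTTrack's documentation uses the same +*.domain/* pattern to accept files from any host under the target domain, but it is optional here:
httrack "https://example.com/" -O ./my-website-copy -r2 "+*.example.com/*"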
Things to Keep in Mind
- Always check the website terms of service before copying content.
- Do not overload servers with aggressive settings.
- Use delays and bandwidth limits if you are downloading large sites (a throttled example follows this list).
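As a rough sketch of polite settings (the numbers are arbitrary and worth tuning per site): -c2 limits HTTrack to two simultaneous connections, -%c1 allows at most one new connection per second, and -A50000 caps the transfer rate at roughly 50 KB/s.
httrack "https://example.com/" -O ./my-website-copy -r2 -c2 -%c1 -A50000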
Summary
HTTrack lets you copy websites for offline use. Install it, run the command with a target URL, and adjust options as needed. Keep your usage responsible and legal.