In daily development and operations work, we often need to fetch files from remote servers. Whether it’s retrieving a text file, downloading a binary package, or pulling resources in bulk through automation scripts, cURL is a very practical tool. It not only supports multiple protocols (HTTP, HTTPS, FTP, etc.) but also provides a wide range of options to flexibly handle various download scenarios.
This article will guide you through how to use cURL to download files, along with detailed explanations of common issues.
The simplest usage is to run the following in your terminal:
curl https://cliproxy.com/file.txt
This will print the content of the remote file directly to the terminal instead of saving it locally. If you just want to quickly check the file content, this is very convenient.
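If you only want a quick peek rather than the full dump, you can pipe the output to another command. A minimal sketch (the URL is just the placeholder from above):
curl -s https://cliproxy.com/file.txt | head -n 20
The -s flag silences the progress meter so only the file content reaches the pipe.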
If you want to save the file locally, you can use the -O or -o options:
-O: Save using the original filename from the remote server.
-o: Save with a custom filename.
Example: save with a specific filename:
curl -o do-bots.txt https://cliproxy.com/robots.txt
Even if the remote filename differs, you can name the file however you prefer.
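For comparison, the same download with -O keeps the server's original name, so the file lands in the current directory as robots.txt:
curl -O https://cliproxy.com/robots.txt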
Some download links involve redirects (e.g., short links redirecting to the actual file). By default, cURL does not follow redirects automatically, so you need to add the -L flag:
curl -L -O https://short.url/file
This ensures you get the final file correctly.
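If you are unsure whether a link redirects at all, you can inspect the response headers first. A quick check (short.url is the placeholder from the example above):
curl -sIL https://short.url/file | grep -i '^location'
-I fetches only the headers, and combined with -L it prints every Location header along the redirect chain.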
Some downloads require authentication, which usually falls into two categories: username/password and token-based.
curl -u username:password -O https://cliproxy.com/protected/file.zip
The -u option is used to pass the credentials.
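Putting the password directly on the command line can expose it in your shell history and process list. If you give -u only the username, cURL will prompt for the password interactively instead:
curl -u username -O https://cliproxy.com/protected/file.zip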
In API scenarios, bearer tokens are commonly used:
curl -H "Authorization: Bearer <your_token>" -O https://cliproxy.com/api/download
This method is more secure and widely adopted.
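To keep the token out of the command itself, you can read it from an environment variable. A small sketch, where TOKEN is just an assumed variable name:
curl -H "Authorization: Bearer $TOKEN" -O https://cliproxy.com/api/download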
To set an overall time limit so a slow or stalled transfer does not hang indefinitely:
curl --max-time 60 -O https://cliproxy.com/largefile.zip
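--max-time caps the entire transfer. If you also want to give up quickly when the server never answers, you can combine it with --connect-timeout, for example:
curl --connect-timeout 10 --max-time 60 -O https://cliproxy.com/largefile.zip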
When the network is unstable, you can add retries:
curl --retry 5 -O https://cliproxy.com/largefile.zip
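By default the retries back off automatically; if you prefer a fixed pause between attempts, --retry-delay sets it in seconds:
curl --retry 5 --retry-delay 10 -O https://cliproxy.com/largefile.zip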
If the download is interrupted, you can resume with:
curl -C - -O https://cliproxy.com/largefile.zip
This avoids starting over from scratch.
If you need to download multiple files, write a simple script:
#!/bin/bash
urls=(
  "https://cliproxy.com/file1.zip"
  "https://cliproxy.com/file2.zip"
)
for url in "${urls[@]}"; do
  curl -O "$url"
done
Save as download.sh and run it.
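To run it, make the script executable first (assuming it is saved in the current directory):
chmod +x download.sh
./download.sh
If your URLs live in a plain text file instead, one per line, xargs achieves the same without a script (urls.txt is an assumed filename):
xargs -n 1 curl -O < urls.txt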
You can also list multiple URLs directly in the command line:
curl -O https://cliproxy.com/file1.zip -O https://cliproxy.com/file2.zip
This downloads all files sequentially.
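When the filenames follow a numeric pattern, cURL's built-in URL globbing saves even more typing; the bracket range below expands to file1.zip and file2.zip:
curl -O "https://cliproxy.com/file[1-2].zip"
Quoting the URL keeps the shell from interpreting the brackets itself.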
If the target site has certificate issues (e.g., self-signed), cURL may throw an error. You can temporarily bypass this with -k:
curl -k -O https://cliproxy.com/file.zip
⚠️ Note: Using -k reduces security, so use it only in test environments.
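A safer alternative, if you have the server's certificate on hand (the path below is just an assumption), is to point cURL at it explicitly instead of disabling verification:
curl --cacert /path/to/server.pem -O https://cliproxy.com/file.zip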
In short, with cURL remember -L if redirects are required and --retry or -C - for resuming. wget, by contrast, is built for recursive downloads:
wget -r -np -k https://example.com
This downloads the whole site and converts links for offline browsing. cURL, on the other hand, excels at crafting custom requests:
curl -X POST -H "Content-Type: application/json" \
-d '{"username":"test","password":"123456"}' \
https://api.example.com/login
This simulates an API login request — something wget cannot do.
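In practice you would usually capture the response and reuse part of it, for instance extracting a token with jq. This is only a sketch and assumes the API returns a JSON body with a token field:
TOKEN=$(curl -s -X POST -H "Content-Type: application/json" \
  -d '{"username":"test","password":"123456"}' \
  https://api.example.com/login | jq -r '.token')
echo "$TOKEN"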
1. What’s the difference between -O and -o in cURL?
-O: Save using the remote filename.
-o: Save using a custom filename.
2. How do I resume an interrupted download with cURL?
Use the -C - option, for example:
curl -C - -O https://cliproxy.com/file.zip
3. Can cURL be used on Windows?
Yes. Windows users can install the official cURL package or use it via Git Bash / WSL.
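On recent versions of Windows 10 and Windows 11, curl.exe ships with the system. Just note that in PowerShell the bare name curl is an alias for Invoke-WebRequest, so call the binary explicitly:
curl.exe -O https://cliproxy.com/file.zip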
This article introduced multiple methods for downloading files with cURL, including saving files, handling redirects, authentication, resuming downloads, batch downloads, and SSL issues. It also compared wget and cURL.
👉 If you’re mainly downloading files, wget may be better.
👉 But if you need API debugging or flexible download scenarios, cURL is the best choice.
Start your Cliproxy trial