Python Proxy Server Integration Guide
Python is the most popular programming language for web scraping, data collection, and automation. Whether you're building a web crawler, collecting data for machine learning, or making anonymous HTTP requests, a rotating proxy server is essential for avoiding IP bans and rate limits.
ProxyMesh provides rotating anonymous proxy servers that work with all major Python HTTP libraries. Our proxy servers automatically rotate IP addresses, so each request can appear from a different IP without any code changes. This guide covers configuring Python applications to use proxy servers for reliable, scalable data collection.
Quick Start: Python Proxy Setup
Get started with ProxyMesh rotating proxies in three steps:
- Sign up for a free ProxyMesh trial to get your proxy credentials
- Install your preferred HTTP library (most already support proxies)
- Configure the proxy URL with your ProxyMesh server address
Basic Example with Requests
```python
import requests

# ProxyMesh proxy URL with authentication
proxy_url = "http://username:password@PROXYHOST:PORT"

response = requests.get(
    "https://api.ipify.org?format=json",
    proxies={"http": proxy_url, "https": proxy_url},
)
print(response.json())  # Shows the proxy's IP address
```
Each request through ProxyMesh can use a different IP address from our rotating pool, helping you avoid IP blocking and rate limiting.
With Environment Variables
Set proxy configuration once and use it across your entire application:
```bash
export HTTP_PROXY="http://username:password@PROXYHOST:PORT"
export HTTPS_PROXY="http://username:password@PROXYHOST:PORT"
```

```python
import requests

# Automatically uses the proxy from the environment
response = requests.get("https://api.ipify.org?format=json")
print(response.json())
```
Python HTTP Libraries with Proxy Support
ProxyMesh works with all major Python HTTP libraries. Choose the one that fits your project:
Requests
The most popular Python HTTP library with a simple, elegant API. Perfect for scripts, prototyping, and API integrations. Built-in proxy support via the `proxies` parameter.
```bash
pip install requests
```
httpx
Modern HTTP client with both sync and async APIs, plus HTTP/2 support. Native proxy configuration through the `proxy` parameter. Great for modern Python apps.
```bash
pip install httpx
```
aiohttp
High-performance async HTTP client for concurrent requests. Make thousands of simultaneous requests through rotating proxies. Best for high-volume web scraping.
```bash
pip install aiohttp
```
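A hedged async sketch: aiohttp takes the proxy per request via the `proxy` argument, so concurrent requests can share one session while all exiting through the rotating proxy (placeholder credentials as above):

```python
import asyncio
import aiohttp

PROXY_URL = "http://username:password@PROXYHOST:PORT"  # placeholder

async def fetch_ip(session: aiohttp.ClientSession) -> str:
    # aiohttp passes the proxy per request rather than per session
    async with session.get("https://api.ipify.org", proxy=PROXY_URL) as resp:
        return await resp.text()

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Fire several concurrent requests; each may exit from a different IP
        ips = await asyncio.gather(*(fetch_ip(session) for _ in range(5)))
        print(ips)

if __name__ == "__main__":
    asyncio.run(main())
```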
Scrapy
The most powerful Python web crawling framework. Built-in middleware for proxy rotation, automatic throttling, and data pipelines. Ideal for large-scale crawling.
```bash
pip install scrapy
```
urllib3
Low-level HTTP client that powers Requests. Provides connection pooling, thread safety, and fine-grained control over HTTP connections.
```bash
pip install urllib3
```
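A sketch with urllib3's `ProxyManager`; `make_headers` builds the Proxy-Authorization header from the placeholder credentials:

```python
import json
import urllib3

PROXY_URL = "http://PROXYHOST:PORT"  # placeholder host and port
AUTH_HEADERS = urllib3.make_headers(proxy_basic_auth="username:password")

def fetch_ip() -> dict:
    # ProxyManager routes every request through the proxy;
    # proxy_headers carries the Proxy-Authorization credential.
    http = urllib3.ProxyManager(PROXY_URL, proxy_headers=AUTH_HEADERS)
    resp = http.request("GET", "https://api.ipify.org?format=json")
    return json.loads(resp.data.decode())

if __name__ == "__main__":
    print(fetch_ip())
```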
PycURL
Python bindings for libcurl offering maximum performance. Direct access to all libcurl features including detailed timing info. Best for performance-critical applications.
```bash
pip install pycurl
```
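A PycURL sketch using libcurl's proxy options; the import is deferred into the function so the module is only required at call time, and host, port, and credentials are placeholders:

```python
from io import BytesIO

PROXY_HOST = "PROXYHOST"          # placeholder
PROXY_PORT = 31280                # placeholder port
PROXY_AUTH = "username:password"  # placeholder credentials

def fetch_ip() -> str:
    import pycurl  # deferred import: pip install pycurl
    buf = BytesIO()
    curl = pycurl.Curl()
    curl.setopt(pycurl.URL, "https://api.ipify.org")
    curl.setopt(pycurl.PROXY, PROXY_HOST)
    curl.setopt(pycurl.PROXYPORT, PROXY_PORT)
    curl.setopt(pycurl.PROXYUSERPWD, PROXY_AUTH)
    curl.setopt(pycurl.WRITEDATA, buf)
    curl.perform()
    curl.close()
    return buf.getvalue().decode()

if __name__ == "__main__":
    print(fetch_ip())
```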
CloudScraper
Automatically bypass Cloudflare anti-bot protection while using rotating proxies. Built on Requests with automatic challenge solving.
```bash
pip install cloudscraper
```
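Because CloudScraper is built on Requests, the proxy configuration is the familiar `proxies` dict. A sketch with a deferred import and placeholder credentials:

```python
def fetch_protected(url: str) -> str:
    import cloudscraper  # deferred import: pip install cloudscraper
    proxy = "http://username:password@PROXYHOST:PORT"  # placeholder
    # create_scraper() returns a Requests-compatible session that
    # solves Cloudflare JavaScript challenges automatically.
    scraper = cloudscraper.create_scraper()
    resp = scraper.get(url, proxies={"http": proxy, "https": proxy})
    return resp.text

if __name__ == "__main__":
    print(fetch_protected("https://api.ipify.org"))
```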
AutoScraper
Smart automatic web scraper that learns extraction rules from examples. No CSS selectors or XPath needed. Combine with proxies for scalable data extraction.
```bash
pip install autoscraper
```
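A hedged AutoScraper sketch: `build()` learns extraction rules from example values, and `request_args` forwards a Requests-style `proxies` dict to the underlying HTTP call. The URL, example values, and credentials are all placeholders:

```python
def build_scraper(url: str, wanted: list):
    from autoscraper import AutoScraper  # deferred import: pip install autoscraper
    proxy_url = "http://username:password@PROXYHOST:PORT"  # placeholder
    scraper = AutoScraper()
    # build() infers extraction rules from the example values in `wanted`;
    # request_args passes the proxies dict through to Requests.
    scraper.build(url, wanted_list=wanted,
                  request_args={"proxies": {"http": proxy_url,
                                            "https": proxy_url}})
    return scraper
```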
Why Use ProxyMesh for Python Web Scraping?
Automatic IP Rotation
Every request through ProxyMesh can use a different IP address from our rotating pool. Each location maintains multiple IPs that rotate throughout the day, giving you access to hundreds of unique addresses without code changes. This automatic rotation helps you:
- Avoid IP bans when scraping websites
- Bypass rate limiting based on IP address
- Distribute requests for better anonymity
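The rotation is easy to observe from code: repeated requests through the same proxy URL can return different origin IPs. A sketch with Requests (placeholder credentials):

```python
import requests

PROXY_URL = "http://username:password@PROXYHOST:PORT"  # placeholder
PROXIES = {"http": PROXY_URL, "https": PROXY_URL}

def sample_ips(n: int = 3) -> list:
    # Same proxy URL every time, but ProxyMesh may answer each
    # request from a different outbound IP in the rotating pool.
    return [requests.get("https://api.ipify.org",
                         proxies=PROXIES, timeout=10).text
            for _ in range(n)]

if __name__ == "__main__":
    print(sample_ips())  # may print several distinct IPs
```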
Multiple Geographic Locations
ProxyMesh operates proxy servers in 16 locations worldwide:
- Los Angeles, CA, US
- Seattle, WA, US
- Paris, France
- Tokyo, Japan
- Sydney, Australia
- Frankfurt, Germany
- Amsterdam, Netherlands
- Singapore
- Chicago, IL, US
- Dallas, TX, US
- Washington DC, US
- New York, NY, US
- London, UK
- Zurich, Switzerland
- Orlando, FL, US
- Mumbai, India

In addition to these locations, ProxyMesh offers an Open Proxy Server, a World Proxy Server, and a US ISP Proxy Server.
Access geo-restricted content or test how your application appears from different regions by connecting to the appropriate proxy location.
Simple Authentication
ProxyMesh supports two authentication methods:
- Username/Password: include credentials in the proxy URL, e.g. http://user:pass@proxy:port
- IP Authentication: whitelist your server's IP in the dashboard, so no credentials are needed in code
Works with Any Python Library
ProxyMesh uses standard HTTP proxy protocol, so it works with any Python HTTP library that supports proxies—which is virtually all of them. No special SDKs or proprietary integrations required.
Advanced: Custom Proxy Headers
For advanced use cases, ProxyMesh supports custom HTTP headers that control proxy behavior:
- `X-ProxyMesh-Country` - Route through a specific country (only with the World Proxy or Open Proxy)
- `X-ProxyMesh-IP` - Request a specific IP address for session consistency
- `X-ProxyMesh-Not-IP` - Exclude specific IPs from rotation
Sending these headers over HTTPS requires special handling due to how HTTPS tunneling works. We provide extension libraries for this:
- python-proxy-headers - For Requests, httpx, aiohttp, urllib3, PycURL
- scrapy-proxy-headers - For Scrapy
See the individual library guides for details on using these extensions.
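For plain http:// URLs no extension is needed, because the header travels with the request and the proxy can read it directly. A hedged sketch with Requests; the country code and credentials are placeholders, and country routing assumes a World Proxy or Open Proxy endpoint:

```python
import requests

PROXY_URL = "http://username:password@PROXYHOST:PORT"  # placeholder

def fetch_from_country(country: str) -> str:
    # Over plain HTTP the proxy sees ordinary request headers, so
    # X-ProxyMesh-Country reaches it without any tunneling tricks.
    # For https:// URLs, use the python-proxy-headers extension instead.
    resp = requests.get(
        "http://api.ipify.org",
        proxies={"http": PROXY_URL},
        headers={"X-ProxyMesh-Country": country},
        timeout=10,
    )
    return resp.text

if __name__ == "__main__":
    print(fetch_from_country("FR"))  # illustrative country code
```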
Documentation and Resources
ProxyMesh Documentation
- HTTP Proxy Configuration Guide
- Proxy Server Locations
- Proxy Authentication Options
- Custom Headers Reference
Frequently Asked Questions
What is the best Python library for proxy requests?
For most use cases, Requests is the best Python library for proxy configuration due to its simple API, extensive documentation, and wide adoption. For async applications, httpx or aiohttp provide better concurrency. For large-scale web crawling, Scrapy is the industry standard.
How do I configure a rotating proxy in Python?
Configure a rotating proxy by setting the `proxies` parameter in your HTTP client to a ProxyMesh server address. ProxyMesh automatically rotates IP addresses on each request—no code changes required. Simply use `proxies={"https": "http://user:pass@us-wa.proxymesh.com:31280"}` and each request may use a different IP.
Does ProxyMesh work with all Python HTTP libraries?
Yes, ProxyMesh works with all Python HTTP libraries that support standard HTTP proxies. This includes Requests, httpx, aiohttp, urllib3, Scrapy, PycURL, CloudScraper, AutoScraper, and any other library using standard proxy protocols. No special SDK or proprietary integration required.
How do I avoid IP bans when web scraping with Python?
To avoid IP bans, use a rotating proxy service like ProxyMesh that automatically cycles through different IP addresses. Combine this with appropriate request delays, proper user-agent headers, and respecting robots.txt. ProxyMesh provides multiple geographic locations and automatic IP rotation to distribute requests and avoid detection.
Can I use ProxyMesh with async Python code?
Yes, ProxyMesh fully supports async Python HTTP libraries including httpx (AsyncClient), aiohttp, and Scrapy. Configure the proxy URL the same way as synchronous code, and make thousands of concurrent requests through rotating IPs.
What proxy authentication methods does ProxyMesh support?
ProxyMesh supports two authentication methods: username/password authentication (included in the proxy URL) and IP whitelisting (register your server's IP in the dashboard for credential-free access). Both methods work with all Python HTTP libraries.