NetNut is a leading proxy provider offering residential, static residential, mobile, and datacenter proxies. NetNut stands out for its high reliability, global IP pool, and a unique onboarding process that requires contacting sales to activate your account and free trial. This guide will walk you through setting up NetNut proxies, optimizing bandwidth, and integrating with Scrapfly Proxy Saver for maximum efficiency.
Proxies act as intermediaries between your device and the internet, masking your real IP address and routing requests through different servers. This is essential for web scraping, geo-targeted data collection, account management, and avoiding IP-based blocks.
To cover these use cases, NetNut offers several proxy types: rotating residential, static residential, mobile, and datacenter proxies.
NetNut is a premium proxy service with over 85 million residential IPs and 1 million mobile IPs worldwide. Unlike most providers, NetNut requires you to contact their sales team to activate your account and free trial, ensuring a tailored experience for your needs.
In the dashboard, you'll find your proxy server address, port, username, and password. NetNut supports HTTP, HTTPS, and SOCKS5 protocols.
HTTP Example: `USERNAME:PASSWORD@gw-am.netnut.net:5959`

SOCKS5 Example: `USERNAME:PASSWORD@gw-socks-am.netnut.net:9595`

Username Structure: `userID-type-country` (e.g., `ticketing123-res-us`), where the type is `res` (rotating residential), `stc` (static residential), or `dc` (datacenter). A session ID can be appended as `-SID-12345678` (e.g., `ticketing123-stc-us-SID-435765`).

To test the connection with cURL:

```bash
curl -x http://USERNAME:PASSWORD@gw-am.netnut.net:5959 https://httpbin.dev/anything
```
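To make the username scheme easier to work with, here is a small illustrative helper (not part of any NetNut SDK) that assembles a proxy URL from the placeholder values shown above:

```python
# Illustrative helper: build a NetNut proxy URL from the
# userID-type-country[-SID-xxxx] username scheme described above.
def build_netnut_proxy(user_id, password, proxy_type="res", country="us",
                       session_id=None, host="gw-am.netnut.net", port=5959):
    username = f"{user_id}-{proxy_type}-{country}"
    if session_id:  # e.g. a static residential session
        username += f"-SID-{session_id}"
    return f"http://{username}:{password}@{host}:{port}"

# Rotating US residential vs. a static residential session
print(build_netnut_proxy("ticketing123", "PASSWORD"))
print(build_netnut_proxy("ticketing123", "PASSWORD", proxy_type="stc", session_id="435765"))
```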
The same request in Python with the requests library:

```python
import requests

proxy = {
    'http': 'http://USERNAME:PASSWORD@gw-am.netnut.net:5959',
    'https': 'http://USERNAME:PASSWORD@gw-am.netnut.net:5959'
}

response = requests.get('https://httpbin.dev/anything', proxies=proxy)
print(response.json())
```
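If you prefer the SOCKS5 gateway, requests can use it as well, provided the optional SOCKS dependency is installed (`pip install requests[socks]`). A minimal sketch using the SOCKS5 endpoint and port from the examples above:

```python
import requests

# SOCKS5 gateway; the socks5h:// scheme also resolves DNS through the proxy
socks_proxy = "socks5h://USERNAME:PASSWORD@gw-socks-am.netnut.net:9595"

response = requests.get(
    "https://httpbin.dev/anything",
    proxies={"http": socks_proxy, "https": socks_proxy}
)
print(response.json())
```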
Optimizing your proxy usage is crucial for minimizing costs and maximizing efficiency. Here are several techniques to reduce bandwidth consumption:
Request only what you need:
```python
# Minimal headers and compressed responses keep transfers small
headers = {
    "User-Agent": "Mozilla/5.0",
    "Accept": "text/html",
    "Accept-Encoding": "gzip, deflate"
}

response = requests.get(url, proxies=proxy, headers=headers)
```
Use persistent sessions to avoid repeated handshakes:
```python
# Reuse one connection through the proxy instead of reconnecting per request
session = requests.Session()
session.proxies = proxy
session.headers = headers

response = session.get("https://example.com")
```
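If you also want every request in the session to exit through the same IP (useful for logged-in account flows), you can pair the persistent session with a NetNut session-ID username like the `-SID-` example shown earlier. A sketch with placeholder credentials:

```python
import requests

# Placeholder credentials; the -SID- suffix is meant to keep a sticky session
sticky_proxy = "http://ticketing123-stc-us-SID-435765:PASSWORD@gw-am.netnut.net:5959"

session = requests.Session()
session.proxies = {"http": sticky_proxy, "https": sticky_proxy}

# Both requests reuse the same connection and the same proxy session
print(session.get("https://httpbin.dev/anything").status_code)
print(session.get("https://httpbin.dev/anything").status_code)
```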
If using browser automation (e.g., Selenium), block heavy resources such as images, CSS, and scripts. The Chrome preferences below disable images and JavaScript; see the CDP-based sketch after this block for stylesheets and other URL patterns:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# 2 = block; disabling images and JavaScript cuts most of the bandwidth
prefs = {
    "profile.managed_default_content_settings.images": 2,
    "profile.managed_default_content_settings.javascript": 2
}
options.add_experimental_option("prefs", prefs)

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
```
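To block stylesheets, fonts, or other URL patterns as well, Selenium 4's Chrome DevTools Protocol interface can drop matching requests at the network layer. A sketch, with the URL patterns chosen only as examples:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

driver = webdriver.Chrome(options=Options())

# Block matching URLs via the Chrome DevTools Protocol (Chromium-based browsers)
driver.execute_cdp_cmd("Network.enable", {})
driver.execute_cdp_cmd("Network.setBlockedURLs", {
    "urls": ["*.png", "*.jpg", "*.gif", "*.css", "*.woff2"]
})

driver.get("https://example.com")
```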
Caching responses helps minimize redundant bandwidth usage: store static or rarely-changing content locally so you don't download it again on every run.
```python
import os
import hashlib
import requests

def get_cached_response(url, session, cache_dir="/tmp/netnut_cache"):
    os.makedirs(cache_dir, exist_ok=True)
    cache_file = os.path.join(cache_dir, hashlib.md5(url.encode()).hexdigest())
    if os.path.exists(cache_file):
        with open(cache_file, "rb") as f:
            return f.read()
    response = session.get(url)
    with open(cache_file, "wb") as f:
        f.write(response.content)
    return response.content

# Usage
session = requests.Session()
session.proxies = proxy
content = get_cached_response("https://example.com", session)
```
Conditional requests let the server tell you when content hasn't changed: send the ETag you received previously in an `If-None-Match` header, and a `304 Not Modified` response means you can reuse your cached copy without re-downloading the body.
```python
session = requests.Session()
session.proxies = proxy

# First request to get ETag
response = session.get(url)
etag = response.headers.get("ETag")

# Next request with If-None-Match
headers = {"If-None-Match": etag} if etag else {}
response = session.get(url, headers=headers)

if response.status_code == 304:
    print("Content not modified, use cached version.")
else:
    print("Content updated, process new data.")
```
Setting timeouts and retry logic keeps your scraper from hanging on slow or unresponsive requests and retries transient failures automatically instead of giving up on the first error.
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
session.proxies = proxy

# Retry transient errors (rate limits, server errors) with exponential backoff
retry_strategy = Retry(
    total=3,
    backoff_factor=1,
    status_forcelist=[429, 500, 502, 503, 504],
    allowed_methods=["HEAD", "GET", "OPTIONS"]
)
adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount("http://", adapter)
session.mount("https://", adapter)

try:
    response = session.get(url, timeout=10)
    print(response.status_code)
except requests.exceptions.Timeout:
    print("Request timed out")
```
Scrapfly Proxy Saver is a powerful middleware that optimizes your proxy usage by reducing bandwidth and improving reliability. It works seamlessly with NetNut proxies.
To route traffic through Proxy Saver, point your client at the Proxy Saver endpoint, using your Proxy Saver proxy ID as the username and your Scrapfly API key as the password:

```python
import requests

proxy_url = "http://proxyId-ABC123:scrapfly_api_key@proxy-saver.scrapfly.io:3333"

headers = {
    "User-Agent": "Mozilla/5.0",
    "Accept": "text/html",
    "Accept-Encoding": "gzip, deflate"
}

response = requests.get(
    "https://example.com",
    proxies={"http": proxy_url, "https": proxy_url},
    headers=headers,
    verify=False  # TLS verification is disabled for the Proxy Saver connection
)
print(response.status_code)
```
You can fine-tune Proxy Saver with parameters in the username:
```
proxyId-ABC123-Timeout-30-FpImpersonate-firefox_mac_109@proxy-saver.scrapfly.io:3333
```
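For example, the same requests call with a parameterized username; this sketch assumes the credential layout from the earlier example, with the API key still in the password position:

```python
import requests

# Parameters (Timeout, FpImpersonate) are appended to the proxy ID in the username,
# as shown above; this assumes the API key still goes in the password position.
proxy_url = (
    "http://proxyId-ABC123-Timeout-30-FpImpersonate-firefox_mac_109"
    ":scrapfly_api_key@proxy-saver.scrapfly.io:3333"
)

response = requests.get(
    "https://httpbin.dev/anything",
    proxies={"http": proxy_url, "https": proxy_url},
    verify=False,  # TLS verification disabled for the Proxy Saver connection
)
print(response.status_code)
```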
Setting up Scrapfly Proxy Saver with your NetNut proxies is straightforward and can be done entirely from the Scrapfly dashboard. This allows you to optimize your NetNut proxy usage with bandwidth saving, fingerprinting, and advanced connection management. Here's how to do it:
- Set the protocol to `HTTP` (or `SOCKS5` if using NetNut SOCKS proxies).
- Enter the NetNut gateway address (`gw-am.netnut.net` for HTTP, `gw-socks-am.netnut.net` for SOCKS5).
- Enter the port (`5959` for HTTP, `9595` for SOCKS5).
- Enter your NetNut username (e.g., `ticketing123-res-us`, or with a session ID for static proxies).
- Leave the remaining settings, such as `USERNAME`, at their defaults unless you have a specific need to change them.

For more details and advanced options, see the official Proxy Saver documentation.
NetNut offers several proxy types, each suited for different use cases and budgets. The table below compares the main proxy types, their pricing, performance, and ideal applications.
| Type | Example Package Price | Speed | Detection Risk | Ideal For |
|---|---|---|---|---|
| Rotating Residential | 72GB – $210, 350GB – $850 | ★★★★☆ | ★☆☆☆☆ | Web scraping, geo-targeted tasks |
| Static Residential | 72GB – $210, 350GB – $850 | ★★★★☆ | ★☆☆☆☆ | Account management, session persistence |
| Mobile | 72GB – $210, 350GB – $850 | ★★★☆☆ | ★☆☆☆☆ | Mobile-specific scraping, social apps |
| Datacenter | 150K+ IPs – Contact Sales | ★★★★★ | ★★★☆☆ | High-volume, non-sensitive scraping |
Whichever plan you choose, Scrapfly Proxy Saver can sit in front of it as a middleware that optimizes your existing proxy connections, reducing bandwidth costs while improving performance and stability. Here is how NetNut compares to other popular proxy providers:
| Feature | NetNut | Bright Data | Oxylabs | Webshare |
|---|---|---|---|---|
| IP Pool Size | 85M+ residential, 1M+ mobile | 72M+ | 100M+ | 30M+ |
| Free Trial/Plan | Contact sales for trial | Limited usage quota | 5 datacenter IPs | 10 free proxies (permanent) |
| Starting Price | $$$ (Enterprise-focused) | $$$ (Enterprise-focused) | $$ (Mid-range) | $ (Budget-friendly) |
| Dashboard | User-friendly, sales-assisted | Advanced, feature-rich | Modern, comprehensive | Simple, intuitive |
| Authentication | Username/Password, session ID | Zone-based system | Username/Password, IP whitelist | Username/Password, IP whitelist |
| Customer Support | Live chat, messaging, email | 24/7 dedicated support | 24/7 dedicated support | Email, help center |
| Ideal For | Enterprise, geo-targeted, scale | Enterprise, large-scale needs | Professional scraping projects | Budget-conscious users, SMBs |
To recap the key steps:

- Contact NetNut sales after registering to discuss your use case and activate your free trial.
- Pick the proxy type that fits your workload: rotating residential, static residential, mobile, or datacenter.
- Use your NetNut proxy as the upstream in Scrapfly Proxy Saver and configure parameters as needed.
- Verify connectivity with cURL or Python using your credentials (see the examples above).
NetNut offers a robust proxy platform with a unique onboarding process that ensures you get the right solution for your needs. By following the setup and optimization tips in this guide—and integrating with Scrapfly Proxy Saver—you can maximize efficiency, reduce bandwidth costs, and scale your web scraping or automation projects with confidence.