Proxy Pilot (By BlazingSEO): Read the Manuals Before You Buy


What It Is, and Is Not

One point of confusion we hear from users of Proxy Pilot is what Proxy Pilot does, and does not, do. Here is our General Troubleshooting.

Therefore, it is important to draw the distinction between how Proxy Pilot can help you, versus its inability to prevent many common anti-scraping technologies when you use your own software.

What it does:

What it does not do:

  • It does not guarantee a 100% success rate if your proxy pools and settings are not appropriate.

  • Example: if you want to scrape 1M requests/hour and only enter 10 proxies into your proxy pool, you will most likely receive a ban on the target website. When this happens, all 10 of your proxies go into a “ban cooldown”, and Proxy Pilot will return a ‘No Proxies’ error message.

  • Proxy Pilot does not “charge per successful scrape”. If you would like to offload all parts of scraping to us, we recommend you consider our Scraping Robot API. Our Scraping Robot API handles all browser management and proxy management, and ensures 100% success back to your software. Proxy Pilot is only a proxy manager, and it is highly dependent on the proxies you provide it. If you provide low-quality proxy IP addresses, or configure your software incorrectly, then you will get low-quality results.

  • Proxy Pilot does not provide you free proxies or access to a specific proxy pool. You must provide it with the proxies you wish to use. Again, if you do not want to purchase proxies or manage them at all, then our Scraping Robot API would be the recommendation.
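As a rough illustration of the pool-sizing concern from the example above, here is the arithmetic in Python. The per-IP request budget is an assumed number for illustration, not a Proxy Pilot setting — real limits vary per target site:

```python
import math

def min_pool_size(requests_per_hour, safe_requests_per_ip_per_hour):
    """Smallest proxy pool that keeps each IP under an assumed
    per-IP request budget for the target site."""
    return math.ceil(requests_per_hour / safe_requests_per_ip_per_hour)

# 1,000,000 requests/hour against an assumed budget of 600 requests/IP/hour:
print(min_pool_size(1_000_000, 600))  # → 1667
```

With 10 proxies at that volume, each IP would have to absorb 100,000 requests/hour — which is why all of them end up in cooldown.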

Proxy Pilot Setup Instructions

Technical Setup Explanation – How Does It Work?

If you haven’t read What Is Proxy Pilot?, we recommend reading that business-overview article first.

This article outlines the technical details of how to implement Proxy Pilot. First, let’s define how it works:

Key components:

  1. Install a custom certificate in your software. For most software, this takes 1-2 lines of code.

    Once installed, this allows us to emulate what a man-in-the-middle attack does, decrypting your HTTPS traffic so we can read the HTML. Once we are able to read the full HTML of your requests, we can detect bans and do the appropriate retries.

  2. You connect to a central Proxy Pilot server (self-hosted or managed hosting).

    We will provide you a single proxy IP (ip:port with IP authorization, or ip:port:user:pass for user:pass authorization). You will send all requests to this single proxy gateway, and from there the Proxy Pilot system takes over and forwards your request to the appropriate proxy.

  3. Your actual proxy list

    As mentioned in What Is Proxy Pilot?, you must provide your own proxies to the system. These proxies are the ones that Proxy Pilot forwards your requests to.
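Taken together, the three components mean your code only ever talks to one proxy endpoint. A minimal client-side sketch in Python (the gateway URL and credentials are placeholders; the same idea appears in each per-language section below):

```python
import requests

def gateway_session(gateway_url):
    """Build a requests Session that routes all traffic through the single
    Proxy Pilot gateway. verify=False skips certificate validation so the
    gateway can decrypt HTTPS and detect ban pages (step 1 above)."""
    s = requests.Session()
    s.proxies = {"http": gateway_url, "https": gateway_url}
    s.verify = False
    return s

# Placeholder credentials — substitute the gateway details provided to you.
session = gateway_session("http://PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT")
print(session.proxies["https"])
```

From your software’s perspective this is an ordinary proxy; the fan-out to your real proxy list happens entirely on the gateway side.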

Programming Language Implementations

Please see the following links for the programming language of your choice to implement Proxy Pilot in. For most languages, it requires fewer than 2 lines of code to install the custom certificate, and from that point on you use the proxy gateway the same way you use a normal proxy.

See setup instructions for the following languages:

1. Node.js (Request)


You should have the following installed:

Example code

Required lines to use Proxy Pilot:

  • rejectUnauthorized: false
    • rejectUnauthorized: false "ignores" the certificate warnings
  • (OPTIONAL) – If you would like a secure connection between your server and our server, you can install our certificate and use it in Node.js. For most users this is not necessary and you can simply "ignore" the certificate errors. You can download the certificate here.
    • // const cert = fs.readFileSync(path.resolve(__dirname, './public/ca.pem'));
    • // ca: cert
    • // tunnel: true,
    • (OPTIONAL – do not use if you are not geo-targeting)
      • // proxyHeaderWhiteList: ['X-Sprious-Region'],
        headers: {
        // 'X-ProxyPilot-Region': 'GB',
const fs = require('fs');
const path = require('path');

const request = require('request');

request(
    {
        url: '',
        proxy: 'http://PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT',
        followAllRedirects: true,
        timeout: 60000,
        method: "GET",
        rejectUnauthorized: false,
        gzip: true,
        headers: {
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        },
    },
    (err, response, body) => {
        console.log(err, body);
    }
);

2. Node.js with Puppeteer


You should have the following installed:

Example code

Required lines to use Proxy Pilot:

  • const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox', '--proxy-server=' + anonymizeProxy, '--ignore-certificate-errors'],
    • By ignoring the certificate errors on your server you do not need to install the certificate.
const puppeteer = require('puppeteer');
const proxyChain = require('proxy-chain');

(async () => {
    const anonymizeProxy = await proxyChain.anonymizeProxy('http://PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT');

    const browser = await puppeteer.launch({
        args: ['--no-sandbox', '--disable-setuid-sandbox', '--proxy-server=' + anonymizeProxy, '--ignore-certificate-errors'],
    });
    const page = await browser.newPage();
    await page.goto('');

    let pageTitle = await page.title();
    let element = await page.$("#priceblock_ourprice");
    let price = await (await element.getProperty('textContent')).jsonValue();

    console.log(price + ' - ' + pageTitle);

    await browser.close();
    await proxyChain.closeAnonymizedProxy(anonymizeProxy, true);
})();

3. Curl


You should have the following installed:

Example code

Required lines to use Proxy Pilot:

  • -k --compressed
    • By using the "-k" parameter, curl will IGNORE the custom certificate requirement to use Proxy Pilot.
  • (OPTIONAL) If you want to use the geo-targeting feature, please pass:
    • --proxy-header 'X-Sprious-Region: US'
curl -s '' \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36' \
-k --compressed \
-x 'PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT'

4. Python (Requests)


You should have the following installed:

Example code

Required lines to use Proxy Pilot:

  • r = requests.get(url, headers=headers, proxies=proxies, verify=False)
    • verify=False "ignores" the certificate warnings
  • (OPTIONAL)  # r = requests.get(url, headers=headers, proxies=proxies, verify='./public/ca.pem')
    • If you would like a secure connection between your server and our server, you can install our certificate and use it in Python. For most users this is not necessary and you can simply "ignore" the certificate errors. You can download the certificate here.
import requests

url = ""
proxies = {
    "https": "http://PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT/",
    "http": "http://PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT/"
}
headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
    'content-encoding': 'gzip'
}

# r = requests.get(url, headers=headers, proxies=proxies, verify='./public/ca.pem')
r = requests.get(url, headers=headers, proxies=proxies, verify=False)

print(f"Requesting {url}\n"
      f"Response Time: {r.elapsed.total_seconds()}\n"
      f"Response Code: {r.status_code}\n"
      f"Response Headers: {r.headers}\n\n"
      f"Response Cookies: {r.cookies.items()}\n\n"
      f"Response Body: {r.text}\n")

5. Firefox Browser


You should have the following installed:

You should download the custom certificate:

How to use Proxy Pilot in Firefox

NOTE OF CAUTION: By following the instructions below you are essentially allowing our server to read the pages that you visit. This is dangerous if you intend to use your browser for normal activity like accessing your bank account. Please only proceed if you intend to use our certificate for your proxy activities such as web scraping – DO NOT use it for personal browsing!

In Firefox you can import the certificate by following these steps:

  1. Settings → Privacy & Security → “View Certificates” → “Import”

  2. Select the CA.pem file that you saved earlier

  3. Check the checkbox “Trust this CA to identify websites” → ‘OK’

  4. Click ‘OK’

Then you need to configure your browser to use the Proxy Pilot proxy server:

  1. Settings → General → Network Settings → “Settings”

  2. Manual proxy configuration


  4. Select “Also use this proxy for FTP and HTTPS”

  5. Click on “OK”

Go to the site you want to visit. When the browser asks for a login and password, enter:

  • login: PROXY_LOGIN
  • password: PROXY_PASS

Proxy Manager Error Codes

500 Error retry limit

Proxy Pilot reached its retry limit while attempting to fetch a particular URL.

You may retry your request again.

500 No proxies

There are currently not enough proxies in your proxy pool. This might either indicate that all proxies are in cooldown, or, if the geo-targeting header is specified, that there are no proxies for that region in your proxy pool.

Either retry your request again, or check whether the specified geo-targeting header matches the proxies available in your proxy pool. Read more about What Proxy Pilot Is, and Is Not here.
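Since both conditions above are reported as HTTP 500 responses, client code can treat them as retryable. Here is a hedged sketch in Python; the message strings are taken from the error names above, and the exact response bodies your gateway returns may differ:

```python
import time

# Assumed substrings of the two retryable gateway errors described above.
RETRYABLE_MESSAGES = ("Error retry limit", "No proxies")

def should_retry(status_code, body):
    """Retry only when the gateway reports a retryable 500 condition."""
    return status_code == 500 and any(m in body for m in RETRYABLE_MESSAGES)

def fetch_with_retry(fetch, attempts=3, backoff=2.0):
    """Call fetch() until it succeeds or attempts run out.
    fetch must return a (status_code, body) tuple."""
    for attempt in range(attempts):
        status, body = fetch()
        if not should_retry(status, body):
            return status, body
        # Give cooled-down proxies time to return to the pool.
        time.sleep(backoff * (attempt + 1))
    return status, body
```

Backing off before retrying matters for the ‘No proxies’ case in particular, since hammering the gateway while every proxy is in cooldown cannot succeed.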

How Do Retries Work? Are You Doing the Scraping on My Behalf?

This question is a common one given the intricacies of what is going on in the solution. The simple answer: no, we are not scraping on your behalf.

When the following flow happens:

  1. You send a request to scrape to the Proxy Pilot gateway

  2. Proxy Pilot forwards your request to proxyA

  3. proxyA returns a banned HTML page back to Proxy Pilot

  4. Proxy Pilot sees this is a ban, and then sends this same request to proxyB

  5. proxyB returns a successful HTML page to Proxy Pilot

  6. Proxy Pilot returns the successful HTML back to you (the client)
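The six steps above can be sketched as a small loop from the gateway’s point of view. This is an illustration of the logic only, not Proxy Pilot’s actual implementation — every name here is made up:

```python
def handle_request(request, proxies, looks_banned, forward):
    """Illustrative gateway loop: forward the client's same request through
    proxies until one returns a non-banned page (steps 2-6 above)."""
    for proxy in proxies:
        html = forward(request, proxy)   # steps 2 and 4: try the next proxy
        if not looks_banned(html):
            return html                  # step 6: successful HTML back to the client
    raise RuntimeError("No proxies")     # every proxy failed or is in cooldown

# Tiny demonstration with fake proxies: proxyA is "banned", proxyB succeeds.
responses = {"proxyA": "<html>captcha</html>", "proxyB": "<html>product page</html>"}
result = handle_request(
    "GET /item", ["proxyA", "proxyB"],
    looks_banned=lambda html: "captcha" in html,
    forward=lambda req, proxy: responses[proxy],
)
print(result)  # → <html>product page</html>
```

Note that throughout this loop your software’s connection to the gateway stays open, which is exactly the tunnel-holding behavior described below.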

… at step #4 we get a common question: are we using our server resources to do the scraping, or are your server’s compute resources doing the scraping? The answer is that your server is still performing the act of scraping.

The best way to think about it: when your internet disconnects midway through a connection to a website, your browser shows a longer-than-usual “Loading” icon while it attempts a retry. That is basically what is happening with Proxy Pilot: as it makes retries on your behalf, your software holds the connection tunnel open while it waits for a response from Proxy Pilot.

The compute consumption that does take place on Proxy Pilot is resending the exact same request headers and body. Because we resend the exact same headers and body, we have confirmed through extensive testing that this does not affect the results of your scraping (e.g. if you are using Puppeteer on Chromium).

The best way to prove this yourself with Proxy Pilot: connect to a JavaScript-heavy website (like Google Maps) with your browser. You will notice that you are able to load the page, because the JavaScript is still being executed by your browser while that tunnel connection remains open.

Confusing? We agree! Please sign up for Proxy Pilot and we will be happy to give you free proxies to trial it.

General Troubleshooting for Proxy Pilot

In this article we discuss some steps you can take to troubleshoot unexpected issues when using your proxies via Proxy Pilot. As a reminder, Proxy Pilot is a tool that relies on proper configuration by the end user in order to work properly. If you set bad headers or cookies, use bad proxies, and so forth, then you will still get poor results.

At the core of web scraping: if you can’t load a request in your browser using your home/work IP address, then it is unlikely you will be able to scrape that page using software + a proxy source.

There are many ways to detect scraping software (see example1 and example2), so the more customization you add to loading a website (your software + proxies), the bigger your footprint will be, and the easier it will be to detect you.

If you do not want to worry about such anti-scraping battles, please consider our Scraping Robot API. It was built to solve this exact issue: allowing you to focus on your core business, instead of fighting with anti-scraping technologies.

If you prefer to manage your own proxies, use developer resources, and pay for server compute power, then Proxy Pilot will help with (but not solve!) some of these common scraping issues for you.

Example of Bad vs. Good Scraping Requests

Below is an example of a very bad scraping request to Amazon (or any website, really). Proxy Pilot’s purpose is not to fix bad requests – it is still up to the developer’s code to send good requests and avoid being banned.

curl -s '' \
     -x 'PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT' \
     -k --compressed -v

The reason the above request would result in a ban is not Proxy Pilot, or even your proxies, but rather that normal browser requests carry more headers. Specifically, Amazon checks that the request has at least a ‘User-Agent’ header, so no matter which proxies you run this request through, it will most likely get blocked.

By simply adding a user-agent to your request you can significantly decrease the ban rate for your request:

curl -s '' \
     -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36' \
     -x 'PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT' \
     -k --compressed -v

Tip #1:  Replicate Your Request in a Browser to Verify It Works There First

Description:  As mentioned above, the best way to know whether your software is causing issues is to run your request in a browser on your local machine. Because your local machine is a pure “residential IP”, and a browser is not customized software, you should be able to load all pages successfully. However, if you can’t load a page in your browser using the steps below, it means you are passing incorrect headers or cookies to the target URL and will need to debug on your side to find the correct headers/cookies.

Steps to troubleshoot:

  1. Open a Chrome incognito tab and make sure you clear the cookies
    Since most scraping software starts with no previous browsing history and no cookies, this is the best way to replicate how your software would behave

  2. Replicate the URL you are going to scrape by simply pasting it into the address field

  3. Make sure it loads as expected. If it fails, you should take this into account when designing your scraping software
    In some cases the site might ban you even at this step, simply because you have no previous browsing history (and no cookies).

  4. In Chrome Dev Tools, open the Network tab and check the first request (it will likely have a ‘document’ type). Check that the URL of that request matches the one you just made, then right-click it and choose ‘Copy as cURL’
    The cURL command sent by the browser should appear in your clipboard and will look something like this:
    curl '' \
    -H 'authority: ' \
    -H 'sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="91", "Chromium";v="91"' \
    -H 'sec-ch-ua-mobile: ?0' \
    -H 'upgrade-insecure-requests: 1' \
    -H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36' \
    -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9' \
    -H 'sec-fetch-site: none' \
    -H 'sec-fetch-mode: navigate' \
    -H 'sec-fetch-user: ?1' \
    -H 'sec-fetch-dest: document' \
    -H 'accept-language: en-US,en;q=0.9'

Note how many headers the browser sends even from incognito mode. With cookies, the request can easily be several times bigger.

Note: cURL is available by default on macOS and most Linux distributions, as well as in the latest Windows 10 updates.

If you are running an older version of Windows, you can install curl from its official website.

Tip #2:  Replicate the Same Request from the Browser via Proxy Pilot

Description: After step #4 in the previous tip you should have a perfect cURL request with a perfect set of headers, which you can try to replicate via Proxy Pilot by simply adding this parameter to the cURL request:  -x 'PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT' -v -k

with the Proxy Pilot credentials provided to you earlier. This will send the same request via Proxy Pilot.

You may also consider adding the parameter
-o test.html
This will save the result into a test.html page, so you can open it with a browser and inspect its content to make sure everything is working properly.
If it returns proper content at this stage, it means Proxy Pilot is working fine and is taking care of managing proxies, retrying when a request is banned via some proxy, and so on.

If the request works directly (without routing through Proxy Pilot via the -x flag) but stops working via Proxy Pilot, please tell us about it and let us know which curl request you were sending.

Tip #3:  Replicate the Same Behavior via Your Software

Description: Once you have tested your request via the browser and via Proxy Pilot, you can apply the same request in your own scraping software. The integration with Proxy Pilot is almost as simple as using regular proxies for data scraping. More details and some code examples for different languages and frameworks can be found here.

Please note that if, while integrating, the same request that worked via cURL stops working in your software, the most probable reason is the set of headers. Many sites implement sophisticated anti-scraping solutions that take into account not only cookies and user-agents, but also the specific order of headers, compression algorithms, and browser market share (e.g. Chrome v41 is rarely used, so sending it as your user-agent would look suspicious to the target website).
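One practical way to close that gap is to copy the full header set from the ‘Copy as cURL’ output in Tip #1 into your software. A sketch using Python’s requests library — the header values here are examples taken from the browser output above, not required values:

```python
import requests

def browser_like_request(url, proxy):
    """Build a prepared GET request carrying a browser-like header set,
    as copied from the 'Copy as cURL' output in Tip #1."""
    headers = {
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36",
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "accept-language": "en-US,en;q=0.9",
        "upgrade-insecure-requests": "1",
    }
    proxies = {"http": proxy, "https": proxy}
    return requests.Request("GET", url, headers=headers).prepare(), proxies

# Placeholder gateway credentials, as in the curl examples above.
prepared, proxies = browser_like_request(
    "https://example.com", "http://PROXY_LOGIN:PROXY_PASS@PROXY_IP:PROXY_PORT")
print("user-agent" in prepared.headers)  # → True
```

Listing the headers in the same order the browser sends them removes one more fingerprinting signal, although header ordering guarantees vary by HTTP client library.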

