How To Use A Proxy In Python

How to's, Proxies, Python | Nov 15, 2022 | 5 mins read


We often come across the term ‘proxy’ when working in the computer science field. When connected to the Internet, every computer gets a unique Internet Protocol (IP) address that identifies it and its approximate geographic location. Whenever your computer needs information from the Internet, it sends out a request. The request goes to a target computer, which checks what kind of information is being asked for and sends it back if it is allowed to serve that information to your IP address. Sometimes, though, you want to get information from the Internet without being identified, or the information is blocked for your IP address. In those cases, you can use a proxy, which acts as an intermediary between the client and the server machine.

Clients usually use a proxy server to browse web pages and request resources anonymously, because the proxy stands in for the client's identity between the client computer and the Internet.

Proxy servers have become quite popular with growing concerns about online security and data theft. How is a proxy server connected to the security of our system? A proxy server adds an additional layer of security between our machine and the external world, and this extra layer helps protect our system from a breach.

How To Use A Proxy In Python?

To use proxies with the Python requests library, follow the steps below.

Import requests

Import the requests package, a simple HTTP library. It lets you send requests easily without manually adding query strings to your URLs. You can import requests with the command below.

import requests

Create a Dictionary

Next, create a proxies dictionary defining the HTTP and HTTPS connections. You can give the dictionary variable any name, such as “proxies”; it maps each protocol to the corresponding proxy URL. You also need to set a url variable to the website you want to scrape.

proxies = {
    "http": "http://203.190.46.62:8080",
    "https": "https://111.68.26.237:8080",
}
url = "https://httpbin.org/ip"

Here, the dictionary defines a proxy URL for each of the two protocols, i.e., HTTP and HTTPS.
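If your proxy requires authentication, requests also accepts credentials embedded in the proxy URL in the username:password@host:port form. The username, password, and address below are placeholders for illustration, so substitute your own; the dictionary is then used exactly like the one above.

proxies = {
    "http": "http://user:password@203.0.113.10:8080",   # placeholder credentials and address
    "https": "http://user:password@203.0.113.10:8080",
}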

Create a Response Variable

Next, create a response variable using any of the requests methods. The method takes two arguments:

  • The URL that you created
  • The dictionary you defined
response = requests.get(url, proxies=proxies)
print(response.json())

The output is a small JSON object from httpbin showing the origin IP address it saw for the request.
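In practice, it is worth checking whether the request actually went through, since an unreachable or refused proxy raises an exception. The sketch below is one way to handle that, reusing the same url and proxies values from above.

import requests

proxies = {"http": "http://203.190.46.62:8080", "https": "https://111.68.26.237:8080"}
url = "https://httpbin.org/ip"

try:
    response = requests.get(url, proxies=proxies, timeout=10)
    response.raise_for_status()            # raise an error for 4xx/5xx answers
    print(response.json())                 # httpbin echoes back the IP it saw
except requests.exceptions.ProxyError:
    print("Could not connect to the proxy")
except requests.exceptions.RequestException as err:
    print(f"Request failed: {err}")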

Requests Methods

There are a number of requests methods like:

  • GET – It retrieves information from the given server using the given URL. 
  • POST – It asks the given web server to accept the data enclosed in the body of the request message and store it.
  • PUT – It requests that the enclosed data be stored under the given URL.
  • DELETE – It sends a DELETE request to the given URL.
  • PATCH – It makes partial changes to an existing resource. 
  • HEAD – It sends a HEAD request to the given URL; use it when you only need the HTTP headers or the status_code, not the file content.

You can use the syntax below for the requests methods once the URL is specified. Here, our URL is the same one we used in the code above, i.e., https://httpbin.org/ip.

response = requests.get(url)
response = requests.post(url, data={"a": 1, "b": 2})
response = requests.put(url)
response = requests.delete(url)
response = requests.patch(url)
response = requests.head(url)
response = requests.options(url)
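Each of these methods returns a Response object and accepts the same proxies argument. As a small sketch using the proxy addresses and URL from above, a HEAD request through the proxy can be inspected like this:

import requests

proxies = {"http": "http://203.190.46.62:8080", "https": "https://111.68.26.237:8080"}

# HEAD fetches only the status line and headers, not the response body.
response = requests.head("https://httpbin.org/ip", proxies=proxies)
print(response.status_code)                     # e.g. 200
print(response.headers.get("Content-Type"))     # e.g. application/json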

Proxy Sessions

If you want to scrape data from websites that use sessions, you can follow the steps given below.

Step#01

Import the requests library.

import requests

Step#02

Create a session object by setting a session variable to requests.Session(), then assign the proxies dictionary to the session's proxies attribute.

session = requests.Session()

session.proxies = {
    'http': 'http://10.10.10.10:8000',
    'https': 'http://10.10.10.10:8000',
}

url = 'http://mywebsite.com/example'

Step#03

Send the request through the session using one of the requests methods and pass the URL as an argument; the session's proxies are applied automatically.

response = session.get(url)
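Putting the three steps together, a minimal sketch could look like the following. The proxy address and URL are the same placeholders used above, so replace them with your own values.

import requests

# Using the session as a context manager closes its connections when done.
with requests.Session() as session:
    session.proxies = {
        'http': 'http://10.10.10.10:8000',
        'https': 'http://10.10.10.10:8000',
    }
    # Every request made through this session now goes via the proxy.
    response = session.get('http://mywebsite.com/example')
    print(response.status_code)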

Main Types Of Proxies

Let’s discuss the two essential types of proxies:

  • Static Proxies
  • Rotating Proxies

Static Proxies

We can define static proxies as datacenter IP addresses assigned under a contract with an Internet Service Provider (ISP). They are designed to keep you connected to one proxy server for a set amount of time. The name “static” implies that you can operate as a residential user with the same IP address for as long as required. 

In short, with the use of static proxies, we get the speed of datacenter proxies and the high anonymity of residential proxies. Furthermore, a static proxy allows us to avoid IP address rotation, making its use significantly simpler.

Unlike regular datacenter proxies, static IP services are not created using virtual machines. These proxies, also known as sticky IP addresses, look like genuine consumers to almost all websites. 

Rotating Proxies

We can define proxy rotation as a feature that changes our IP address with every new request we send.

When we visit a website, we send a request that reveals a lot of data to the destination server, including our IP address. For instance, when we gather data using a scraper (for generating leads, say), we send many such requests. If most of them come from the same IP, the destination server gets suspicious and bans that IP. 

Therefore, we need a way to change our IP address with each request we send, and that is exactly what a rotating proxy does. So, to avoid the needless hassle of building IP rotation into the scraper ourselves, we can get rotating proxies and let our provider take care of the rotation.
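Many rotating proxy providers expose a single gateway address and rotate the exit IP behind it, so the code looks the same as using a single proxy. If all you have is a plain list of proxy addresses, you can also rotate through them yourself. The sketch below uses a hypothetical proxy list, so substitute the addresses from your provider.

import itertools
import requests

# Hypothetical proxy pool -- replace with addresses from your provider.
proxy_pool = itertools.cycle([
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
])

for _ in range(3):
    proxy = next(proxy_pool)                 # pick the next proxy in the cycle
    proxies = {"http": proxy, "https": proxy}
    try:
        response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
        print(proxy, "->", response.json())  # each request may show a different origin IP
    except requests.exceptions.RequestException:
        print(proxy, "failed, trying the next proxy")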

Why Do You Need To Use Proxies?

Following are the reasons to use various types of proxies.

  • Social media managers appreciate proxies for letting them stick to a single IP address. If users constantly log in to their accounts from changing IP addresses, the social media platform will get suspicious and block their profiles.
  • E-commerce sites might show different data to users from other locations and to returning visitors. Also, the server becomes alert if a buyer logs in to their account multiple times from various IP addresses, so proxies are useful for online shopping.
  • We need proxies for manual marketing research when a specialist wants to check the required data through the eyes of a user in a particular location. 
  • Ad verification allows advertisers to check whether their ads are displayed on the right websites and seen by the right audiences. Constantly changing IP addresses lets them access many different websites and verify ads without running into IP blocks.
  • The same content can look different, or may not be available at all, when accessed from specific locations. Proxies allow us to access the necessary data regardless of our geo-location. 
  • We can also use proxies to speed up browsing, as many of them have a good caching system.

Conclusion

So far, we have discussed that a proxy acts as a relay between the client and the server machine. Whenever you request information, your computer sends the request to the proxy, which forwards it to the target computer using a different IP address, so your own IP address remains confidential. Further, you can use proxies with the requests module in Python and perform various actions depending on your needs. If you need the speed of datacenter proxies and the high anonymity of residential proxies, with an IP address that stays the same across requests, static proxies are the way to go. Rotating proxies, on the other hand, provide benefits in testing and scraping.