A Small Crawler Program: Scraping Xici's Free High-Anonymity IPs to Build a Proxy Pool
There are many anti-crawling mechanisms. One of them is for the web server to record how often each IP address hits the server and use that frequency to decide whether the IP belongs to a crawler. To keep your own IP from being banned, and to make data collection more stable, you can send requests through third-party IP addresses; for long-term stability it is worth building your own proxy pool. The program below scrapes the free high-anonymity IPs listed on the Xici proxy site (xicidaili.com) and turns them into the proxy-pool toolkit used in the later examples. The program is given below, after a few notes on proxy anonymity levels:
Note:
1> High anonymity: the web server sees only the proxy IP address. It cannot see the client's own address, and it cannot even tell that a proxy is being used; every request appears to come from a real client IP (the first choice for crawlers).
2> Anonymous: the web server sees the proxy IP address and knows that a proxy is being used, i.e. it knows someone is visiting through a proxy but not which client IP is actually behind it.
3> Transparent: the web server sees both the proxy IP address and the user's own real IP.
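To make the difference concrete, http://httpbin.org/get echoes back the origin IP and headers it receives, so it can be used to check what the target server actually sees. The sketch below is not part of the original program; the proxy address 1.2.3.4:8080 is a made-up placeholder.

import requests

# What the server sees without a proxy: your own public IP.
direct = requests.get('http://httpbin.org/get', timeout=5).json()
print('Direct request, origin seen by server:', direct['origin'])

# Placeholder proxy; with a high-anonymity proxy, 'origin' shows the proxy IP instead.
proxies = {
    'http': 'http://1.2.3.4:8080',
    'https': 'https://1.2.3.4:8080',
}
try:
    proxied = requests.get('http://httpbin.org/get', proxies=proxies, timeout=5).json()
    print('Proxied request, origin seen by server:', proxied['origin'])
except requests.exceptions.RequestException as exc:
    print('Proxy request failed:', exc)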
import requests
from fake_useragent import UserAgent
from lxml import etree
import random
import time


class Spider(object):
    def __init__(self, start, end):
        self.base_url = 'https://www.xicidaili.com/nn/{}'
        # Holds all scraped proxy IPs
        self.proxies_list = []
        self.start = start
        self.end = end

    # Get a random User-Agent
    def get_random_ua(self):
        ua = UserAgent()
        return ua.random

    # Scrape IPs from the Xici proxy site
    def get_ip_list(self, url):
        # Send the request
        html = requests.get(url, headers={'User-Agent': self.get_random_ua()}).text
        # Parse the response
        parse_html = etree.HTML(html)
        tr_list = parse_html.xpath('//*[@id="ip_list"]//tr')
        for tr in tr_list[1:]:
            ip = tr.xpath('./td[2]/text()')[0].strip()
            port = tr.xpath('./td[3]/text()')[0].strip()
            proxies = {
                'http': 'http://{}:{}'.format(ip, port),
                'https': 'https://{}:{}'.format(ip, port),
            }
            self.proxies_list.append(proxies)
        return self.proxies_list

    def main(self):
        for i in range(self.start, self.end + 1):
            url = self.base_url.format(str(i))
            self.get_ip_list(url)
            time.sleep(random.randint(1, 10))
        return self.proxies_list


if __name__ == '__main__':
    spider = Spider(1, 2)
    proxies_list = spider.main()
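Many of the free IPs listed on Xici are already dead or very slow by the time they are scraped. As an optional extension that is not part of the original module, the pool returned by Spider.main() can be filtered down to responsive proxies before it is used; a minimal sketch, assuming http://httpbin.org/get as the test URL and a 3-second timeout:

import requests

def filter_alive(proxies_list, test_url='http://httpbin.org/get', timeout=3):
    # Keep only the proxies that answer the test request within the timeout.
    alive = []
    for proxies in proxies_list:
        try:
            requests.get(test_url, proxies=proxies, timeout=timeout)
            alive.append(proxies)
        except requests.exceptions.RequestException:
            pass  # drop dead or slow proxies
    return alive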
Assume the program above is saved as the module xici_proxy_pools.py. Now use it to test against http://httpbin.org/get, with the following requirements:
1. First check whether the proxy pool that was built is empty;
2. If the chosen IP exceeds the time limit during the test, remove it and pick a new IP from the proxy pool.
The code is as follows:
from xici_proxy_pools import *
import requests
import random
from fake_useragent import UserAgent

url = 'http://httpbin.org/get'
headers = {'User-Agent': UserAgent().random}
spider = Spider(1, 3)
proxies_list = spider.main()

while True:
    # Stop if the proxy pool is empty
    if not proxies_list:
        print('The proxy pool is empty')
        break
    proxies = random.choice(proxies_list)
    try:
        html = requests.get(url, proxies=proxies, headers=headers, timeout=3).text
        print(html)
        break
    except requests.exceptions.RequestException:
        # Remove the failed IP first, then pick another one from the pool
        proxies_list.remove(proxies)
        print('{} has been removed'.format(proxies))
        continue
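The same retry-and-remove pattern appears again in the next example, so it can optionally be pulled out into a small helper. This refactor is not in the original post; it is only a sketch that assumes the pool is a list of proxy dicts like the one built above.

import random
import requests

def fetch_via_pool(url, proxies_list, timeout=3, **kwargs):
    # Try random proxies from the pool, removing any that fail, until one succeeds.
    while proxies_list:
        proxies = random.choice(proxies_list)
        try:
            return requests.get(url, proxies=proxies, timeout=timeout, **kwargs).text
        except requests.exceptions.RequestException:
            proxies_list.remove(proxies)
    raise RuntimeError('Proxy pool exhausted')

Usage is then a single call, e.g. html = fetch_via_pool('http://httpbin.org/get', proxies_list, headers=headers).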
Next, use the IP proxy pool to crawl a site whose page content can only be fetched after client login authentication (the code below passes the credentials through requests' HTTP Basic auth parameter). The code is as follows:
import requests
from lxml import etree
from fake_useragent import UserAgent
from xici_proxy_pools import *
import random


class NoteSpider(object):
    def __init__(self):
        self.base_url = 'your-target-url-here'  # fill in the URL of the site you want to crawl
        self.headers = {'User-Agent': UserAgent().random}
        self.auth = ('username', 'password')
        self.proxies_list = Spider(1, 2).main()

    # Fetch the first-level page
    def get_one_page(self):
        while True:
            # Pick a random proxy; stop if the pool is empty
            if self.proxies_list:
                proxies = random.choice(self.proxies_list)
            else:
                break
            try:
                html = requests.get(self.base_url, proxies=proxies, headers=self.headers,
                                    auth=self.auth, timeout=5).text
                self.parse_one_page(html)
                break
            except requests.exceptions.RequestException:
                # Remove the failed proxy and try another one
                self.proxies_list.remove(proxies)
                continue

    # Parse the page
    def parse_one_page(self, html):
        parse_html = etree.HTML(html)
        result_list = parse_html.xpath('/html/body/pre/a/text()')[1:]
        print(result_list)


if __name__ == '__main__':
    spider = NoteSpider()
    spider.get_one_page()
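For reference, the auth=('username', 'password') tuple passed to requests.get above is shorthand for HTTP Basic authentication. The explicit equivalent is shown below as a small sketch; the URL and credentials are placeholders, not values from the original post.

import requests
from requests.auth import HTTPBasicAuth

# Same effect as auth=('username', 'password') in the NoteSpider code above.
resp = requests.get('http://example.com/protected/',
                    auth=HTTPBasicAuth('username', 'password'),
                    timeout=5)
print(resp.status_code)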