Scrapy Middleware
Introduction to middleware
1. What middleware does
Middleware hooks into specific steps of Scrapy's run, letting you adapt the framework's behavior to your own project.
For example, Scrapy's built-in HttpErrorMiddleware handles HTTP responses that come back with error status codes.
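Concretely, HttpErrorMiddleware drops non-2xx responses before they ever reach the spider; a spider can opt in to handling specific error codes itself through the handle_httpstatus_list attribute. A minimal sketch (the spider name and URL are made up for illustration):

import scrapy

class ErrorAwareSpider(scrapy.Spider):
    name = 'error_aware'  # hypothetical spider name
    start_urls = ['https://example.com/maybe-missing']
    # Ask HttpErrorMiddleware to pass 404 responses through
    # to parse() instead of silently dropping them.
    handle_httpstatus_list = [404]

    def parse(self, response):
        if response.status == 404:
            self.logger.warning('Got 404 for %s', response.url)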
2. How to use middleware
Enable and configure it in settings.py; see the Scrapy documentation at https://doc.scrapy.org for details.
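In practice, enabling a middleware means mapping its import path to an order number in the relevant settings dict. A minimal sketch (the myproject paths are placeholders, not from a real project):

# settings.py
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.MySpiderMiddleware': 543,      # placeholder path
}
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.MyDownloaderMiddleware': 543,  # placeholder path
    # Mapping a class to None disables it, including built-ins:
    # 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}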
Kinds of middleware
Scrapy theoretically has three kinds of middleware (Scheduler Middleware, Spider Middleware, and Downloader Middleware); in practice you will generally use the following two:
1. Spider middleware (Spider Middleware)
Its main job is to process data flowing into and out of the spider while the crawl runs.
2. Downloader middleware (Downloader Middleware)
Its main job is to process requests on their way to the downloader and responses as pages are downloaded.
Middleware methods
1. Spider Middleware manages the following hook methods (a minimal sketch follows this list):
- process_spider_input: receives each Response object and processes it;
its position in the data flow is Downloader -> process_spider_input -> Spiders (Downloader and Spiders are components in Scrapy's official architecture diagram)
- process_spider_exception: called when the spider raises an exception
- process_spider_output: called with the results the spider returns after processing a response
- process_start_requests: called once at startup with the spider's start requests;
its position is Spiders -> process_start_requests -> Scrapy Engine (Scrapy Engine is a component in Scrapy's official architecture diagram)
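To make the hook positions concrete, here is a minimal spider middleware sketch that filters the spider's results in process_spider_output (the class name and the 'title' field are assumptions for illustration, not from the original project):

class RequiredFieldMiddleware(object):
    # Hypothetical middleware: drop dict items that lack a 'title' field.
    def process_spider_output(self, response, result, spider):
        for item_or_request in result:
            # Requests pass through untouched; only dict items are checked.
            if isinstance(item_or_request, dict) and not item_or_request.get('title'):
                spider.logger.debug('Dropping item without title from %s', response.url)
                continue
            yield item_or_request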
2. Downloader Middleware manages the following hook methods:
- process_request: called for each request as it passes through the downloader middleware
- process_response: called as each download result passes back through the middleware
- process_exception: called when an exception is raised during the download
When writing middleware, think about which stage of the process best fits the feature you want to implement, and write the corresponding method; the sketch below is one example.
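For instance, retrying a request that came back throttled belongs in process_response, since that is the first point where the downloaded status code is visible. A sketch (the status codes and the 'already_retried' meta flag are assumptions):

class RetryBadStatusMiddleware(object):
    # Hypothetical downloader middleware: re-queue throttled responses
    # once instead of handing them to the spider.
    def process_response(self, request, response, spider):
        if response.status in (429, 503) and not request.meta.get('already_retried'):
            retry_req = request.replace(dont_filter=True)  # bypass the dupe filter
            retry_req.meta['already_retried'] = True
            return retry_req
        return response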
Middleware can be used to process requests, process results, or coordinate methods with signals. It can also add project-specific features on top of an existing crawler, though the same goal can be reached with an extension; extensions are more decoupled, so they are the recommended approach (see the sketch below).
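For comparison, a minimal extension sketch (the class name and log messages are made up; from_crawler and the signals API are standard Scrapy):

from scrapy import signals

class SpiderLifecycleLogger(object):
    # Hypothetical extension: log spider open/close via signals.
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        spider.logger.info('%s opened', spider.name)

    def spider_closed(self, spider):
        spider.logger.info('%s closed', spider.name)

An extension like this is enabled through the EXTENSIONS setting rather than the middleware dicts.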
Code examples
Downloader middleware example
from scrapy.http import HtmlResponse
from scrapy.http import Request


class Md1(object):
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        print('md1.process_request', request)

        # 1. Return a Response
        # import requests
        # result = requests.get(request.url)
        # return HtmlResponse(url=request.url, status=200, headers=None, body=result.content)

        # 2. Return a Request
        # return Request('https://dig.chouti.com/r/tec/hot/1')

        # 3. Raise an exception
        # from scrapy.exceptions import IgnoreRequest
        # raise IgnoreRequest

        # 4. Modify the request in place (*)
        # request.headers['user-agent'] = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        print('m1.process_response', request, response)
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass
Configuration
DOWNLOADER_MIDDLEWARES = {
    # 'xdb.middlewares.XdbDownloaderMiddleware': 543,
    # 'xdb.proxy.XdbProxyMiddleware': 751,
    'xdb.md.Md1': 666,
    'xdb.md.Md2': 667,
}
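The commented-out xdb.proxy.XdbProxyMiddleware is not shown in the original; a middleware in that spirit might look roughly like this (a sketch with made-up proxy addresses, relying on the standard request.meta['proxy'] key):

import random

class XdbProxyMiddleware(object):
    # Hypothetical sketch of the proxy middleware referenced above.
    PROXIES = [
        'http://127.0.0.1:8888',  # made-up proxy addresses
        'http://127.0.0.1:8889',
    ]

    def process_request(self, request, spider):
        # Route each outgoing request through a randomly chosen proxy.
        request.meta['proxy'] = random.choice(self.PROXIES)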
Spider middleware example
Define the class
class Sd1(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    # Runs only once, when the spider starts.
    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r
Configuration
SPIDER_MIDDLEWARES = {
    # 'xdb.middlewares.XdbSpiderMiddleware': 543,
    'xdb.sd.Sd1': 666,
    'xdb.sd.Sd2': 667,
}
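A note on the order numbers (666 and 667 above): lower values sit closer to the engine and higher values closer to the spider (or to the downloader, for DOWNLOADER_MIDDLEWARES), which determines the order the hooks fire in. Mapping a middleware's path to None in the same dict disables it.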