【Python】Fixing the HTTP Error 503 returned by urllib
I wanted to download some web pages automatically, so I wrote a simple crawler, but it ran into a 503 error.
The code looked like this:
site = "http://XXXX" local_filename, headers = urllib.request.urlretrieve(site) html = open(local_filename, encoding=\'UTF-8\') soup = bs4.BeautifulSoup(html) names = soup.find_all(\'a\')
I have always written it this way in the past, but this time the call to urlretrieve came back with a 503. The fix is the code below, which imitates a browser's behavior when making the request.
import gzip
import urllib.request
from io import BytesIO

import bs4

# Browser-like request headers; without them the server answers 503.
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
       'Accept-Encoding': 'gzip, deflate',  # only advertise encodings we can decode ('br' would need an extra library)
       'Accept-Language': 'zh-CN,zh;q=0.9',
       'Connection': 'keep-alive'}

site = "https://XXXX"  # placeholder URL
r = urllib.request.Request(site, headers=hdr)
response = urllib.request.urlopen(r)
page = response.read()

# Because of Accept-Encoding the body comes back gzip-compressed,
# so decompress it before handing it to BeautifulSoup.
buff = BytesIO(page)
f = gzip.GzipFile(fileobj=buff)
res = f.read().decode('utf-8')

soup = bs4.BeautifulSoup(res, 'html.parser')
names = soup.find_all('a')
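One caveat with the snippet above: it assumes the server really did gzip the body, and GzipFile will fail if the response comes back uncompressed. A slightly more defensive sketch checks the Content-Encoding response header before decompressing (the URL is again a placeholder, and the headers are trimmed to the essentials):

import gzip
import urllib.request
from io import BytesIO

import bs4

hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36',
       'Accept-Encoding': 'gzip, deflate'}
site = "https://XXXX"  # placeholder URL

r = urllib.request.Request(site, headers=hdr)
response = urllib.request.urlopen(r)
page = response.read()

# Only gunzip when the server says the body is gzip-encoded;
# otherwise the bytes are already plain HTML.
if response.headers.get('Content-Encoding') == 'gzip':
    page = gzip.GzipFile(fileobj=BytesIO(page)).read()

res = page.decode('utf-8')
soup = bs4.BeautifulSoup(res, 'html.parser')
names = soup.find_all('a')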