Long time no see, everyone. [/smirk]

It's been quite a while since my last post, so let me finally put out the 2.0 version I promised. (After all, the comments section has started asking for it, and I can't keep putting it off...)

Hmm... I won't put the specific page link in the body of the post; it's in the comment at the top of the code.


Enough small talk; here is the code for this update:

# Target site: https://www.xiurenb.com
# Imports
import time, os, requests
from lxml import etree
from urllib import parse

# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36 Edg/96.0.1054.62'
}

# Initialize the lists
img_list = []
url_list = []
page_list = []

# URL-encode the input
human_unencode = input('Enter the human_name:')
human_encode = parse.quote(human_unencode)

# Search URL built from the encoded name
url_human = 'https://www.xiurenb.com/plus/search/index.asp?keyword=' + str(human_encode) + '&searchtype=title'

# Get the number of result pages for this person's albums
res_first = requests.get(url=url_human, headers=headers)
tree_first = etree.HTML(res_first.text)
Num_first = len(tree_first.xpath('/html/body/div[3]/div[1]/div/div/ul/div[3]/div/div[2]/a'))
print(f'Page_total:{Num_first}')

# Get the URL of every album on the chosen result page and store it in a list
i = input('Enter the PageNumber:')
print(f'Getting the page-{i}...')
res_human = requests.get(url_human + '&p=' + str(i), headers=headers)
tree_human = etree.HTML(res_human.text)
jihe_human = tree_human.xpath('/html/body/div[3]/div[1]/div/div/ul/div[3]/div/div[1]/div/div[1]/h2/a/@href')
for page in jihe_human:
    page_list.append(page)
time.sleep(2)

# Fetch every picture of every album
for Page_Num in page_list:
    url = 'https://www.xiurenb.com' + str(Page_Num)
    Num_res = requests.get(url=url, headers=headers)
    Num_tree = etree.HTML(Num_res.text)
    Num = len(Num_tree.xpath('/html/body/div[3]/div/div/div[4]/div/div/a'))
    url_list.append(url)
    for i in range(1, int(Num) - 2):
        url_other = url[:-5] + '_' + str(i) + '.html'
        url_list.append(url_other)
    # Collect all image URLs of this album
    for url_img in url_list:
        res = requests.get(url=url_img, headers=headers)
        tree = etree.HTML(res.text)
        img_src = tree.xpath('/html/body/div[3]/div/div/div[5]/p/img/@src')
        for img in img_src:
            img_list.append(img)
        time.sleep(0.5)
    # Create the save directory
    res = requests.get(url=url_list[0], headers=headers)
    res.encoding = 'utf-8'
    tree = etree.HTML(res.text)
    path_name = tree.xpath('/html/body/div[3]/div/div/div[1]/h1//text()')[0][11:]
    print(path_name)
    if not os.path.exists(f'C:/Users/liu/Pictures/{human_unencode}'):
        os.mkdir(f'C:/Users/liu/Pictures/{human_unencode}')
    the_path_name = f'C:/Users/liu/Pictures/{human_unencode}/' + path_name
    if not os.path.exists(the_path_name):
        os.mkdir(the_path_name)
    # Save the image data
    num = 0
    for j in img_list:
        img_url = 'https://www.xiurenb.com' + j
        img_data = requests.get(url=img_url, headers=headers).content
        img_name = img_url.split('/')[-1]
        finish_num = str(num) + '/' + str(len(img_list))
        with open(f'C:/Users/liu/Pictures/{human_unencode}/' + path_name + '/' + img_name, 'wb') as f:
            print(f'Downloading the img:{img_name}/{finish_num}')
            f.write(img_data)
        num += 1
        time.sleep(0.5)
    # Reset the lists before the next album
    img_list = []
    url_list = []
else:
    # for-else: runs once the loop above has gone through every album
    print('gone>>>')
    # Reset the lists again
    img_list = []
    url_list = []

# Done
print('Finished!')

The code is fairly long this time, so I won't explain it line by line. One thing to watch out for: remember to change the save path to your own, since your username is different from mine.
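If you'd rather not edit the hard-coded path by hand, here is a minimal sketch of one way to build it from the current user's home directory instead. It is just an idea, not part of the script above; the name base_dir and the sample value of human_unencode are mine:

import os

# Build the save root from the current user's home folder instead of
# hard-coding 'C:/Users/liu' (assumes you want it under the Pictures folder).
human_unencode = 'example_name'   # in the script above this value comes from input()
base_dir = os.path.join(os.path.expanduser('~'), 'Pictures', human_unencode)
os.makedirs(base_dir, exist_ok=True)   # creates missing parents, no error if it already exists
print(base_dir)   # e.g. C:\Users\<your username>\Pictures\example_name on Windows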

This version searches the photo albums by a person's name, for example 唐安琪. When you run the code, enter the name you want to search for, and then enter the number of the result page you want to download.
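In case the encoding step looks mysterious: the name you type is percent-encoded before it is appended to the search URL. Here is a small standalone example of that step, using the same parse.quote call as in the script; the printed URL is just what the script builds internally:

from urllib import parse

name = '唐安琪'              # the search term you would type at the prompt
encoded = parse.quote(name)  # percent-encodes the UTF-8 bytes of the name
print(encoded)               # %E5%94%90%E5%AE%89%E7%90%AA
url_human = ('https://www.xiurenb.com/plus/search/index.asp?keyword='
             + encoded + '&searchtype=title')
print(url_human)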

If you have any other questions, feel free to ask in the comments.

Of course, if I can't solve it I'll go do my homework first [/sob]; after all, I haven't been learning Python for very long...