Distributed Crawl of the Entire JD.com Site: An Analysis

dukuan 2018-01-23

1. Overall approach

1.1 Getting the list of all JD products

https://www.jd.com/allSort.aspx

There are roughly 1,190 category entries.

The links have the format: https://list.jd.com/list.html?cat=xxx

The page looks like this (screenshot omitted). The product price, title, URL, ID, brand, shop URL, and model can all be read directly from this page, so these fields are parsed as soon as the page is fetched.

1.2 Data to collect

This crawl mainly collects the fields below (product information is stored per review; reviews and products could also be stored in separate tables):

CREATE TABLE `JDAll` (
    `shop_id` VARCHAR(16) NOT NULL,                  -- product ID
    `url` VARCHAR(255) DEFAULT NULL,                 -- product URL
    `title` VARCHAR(1024) DEFAULT NULL,              -- product title
    `brand` VARCHAR(255) DEFAULT NULL,               -- brand
    `brand_url` VARCHAR(1024) DEFAULT NULL,          -- shop URL
    `price` VARCHAR(16) DEFAULT NULL,                -- price
    `comment_num` INT(16) DEFAULT NULL,              -- total number of reviews
    `good_comment_rate` VARCHAR(16) DEFAULT NULL,    -- positive-review rate
    `good_comment` VARCHAR(16) DEFAULT NULL,         -- number of positive reviews
    `general_count` VARCHAR(16) DEFAULT NULL,        -- number of neutral reviews
    `poor_count` VARCHAR(16) DEFAULT NULL,           -- number of negative reviews
    `hot_comment_dict` LONGTEXT,                     -- hot-review tag info
    `default_comment_num` VARCHAR(16) DEFAULT NULL,  -- number of default positive reviews
    `comment_id` VARCHAR(32) NOT NULL,               -- review ID, primary key
    `comment_context` LONGTEXT,                      -- review content
    `comnent_time` VARCHAR(255) DEFAULT NULL,        -- review time
    `comment_score` VARCHAR(8) DEFAULT NULL,         -- review star rating
    `comment_source` VARCHAR(255) DEFAULT NULL,      -- review source (client)
    `produce_size` VARCHAR(255) DEFAULT NULL,        -- product size/model
    `produce_color` VARCHAR(255) DEFAULT NULL,       -- product color
    `user_level` VARCHAR(32) DEFAULT NULL,           -- user membership level
    `user_exp` VARCHAR(32) DEFAULT NULL,             -- user JD experience points
    `comment_thumpup` VARCHAR(8) DEFAULT NULL,       -- review upvote count
    `comment_reply_content` LONGTEXT,                -- seller reply
    `comment_reply_time` VARCHAR(255) DEFAULT NULL,  -- seller reply time
    `append_comment` LONGTEXT,                       -- buyer follow-up review
    `append_comment_time` VARCHAR(255) DEFAULT NULL, -- follow-up review time
    PRIMARY KEY (`comment_id`)
) ENGINE = INNODB DEFAULT CHARSET = utf8;
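Because the product fields are duplicated onto every review row, product-level information can be recovered with a simple GROUP BY. A quick sanity-check query over the table above (illustrative, not from the original post):

    -- Count stored reviews per product and compare with the reported total
    SELECT shop_id, title, comment_num, COUNT(*) AS stored_reviews
    FROM JDAll
    GROUP BY shop_id, title, comment_num
    ORDER BY stored_reviews DESC
    LIMIT 10;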

1.3 Fetching product reviews

JD's JSON endpoint is used: https://club.jd.com/comment/skuProductPageComments.action?callback=fetchJSON_comment98vv46561&productId=4207732&score=0&sortType=5&page=0&pageSize=10&isShadowSku=0&fold=1

Tunable parameters: productId=4207732 (product ID), page=0 (review page number), pageSize=10 (reviews per page).
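A minimal sketch of calling this endpoint with requests, using the parameters above (the callback parameter shown in the URL is dropped, so the server returns plain JSON rather than a JSONP wrapper, which is what the spider below relies on):

    import json
    import requests

    COMMENT_API = ("https://club.jd.com/comment/skuProductPageComments.action"
                   "?productId={product_id}&score=0&sortType=5"
                   "&page={page}&pageSize=10&isShadowSku=0&fold=1")

    def fetch_comment_page(product_id, page=0):
        # Without the callback parameter the endpoint returns plain JSON
        resp = requests.get(COMMENT_API.format(product_id=product_id, page=page),
                            headers={'User-Agent': 'Mozilla/5.0'})
        data = json.loads(resp.text)
        print(data['productCommentSummary']['commentCount'])
        return data

    # fetch_comment_page(4207732)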

The response contains three parts (screenshots omitted): the basic review summary (productCommentSummary), the hot-review tags (hotCommentTagStatistics), and the main review list (comments).

2. Implementation

2.1 Main spider code

# -*- coding: utf-8 -*-

import json

import requests
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy_redis.spiders import RedisSpider
from scrapy_splash import SplashRequest

from ArticleSpider.items import JDAllItem

# Subclass RedisSpider: scrapy-redis handles request deduplication and
# distributes URLs to every crawler process through redis
class JdAllSpider(RedisSpider):
    name = "jd_all"
    redis_key = "jd:start_urls"
    allowed_domains = ["jd.com"]
    # Default headers, shared by Splash requests and the comment API
    header = {
        'Host': 'club.jd.com',
        'Connection': 'keep-alive',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0',
        'Accept-Encoding': 'gzip, deflate, sdch',
        'Accept-Language': 'zh-CN,zh;q=0.8',
    }
    # Parse the category page for list links
    def parse(self, response):
        # Use a LinkExtractor to pick out the list URLs
        pattern = r"https://list\.jd\.com/list\.html\?cat=.*"
        le = LinkExtractor(allow=pattern)
        links = le.extract_links(response)
        print("Found [%s] list pages" % len(links))
        for i in links:
            print("-------------------->%s" % i.url)
            # Hand each list link to next_page, which walks its pagination
            yield scrapy.Request(i.url, callback=self.next_page)

    def next_page(self, response):
        # Total number of pages in this list
        page_total = int(response.css('span.fp-text i::text').extract_first(default='1'))
        print("Walking the list pagination")
        for page in range(1, page_total + 1):
            # Build each paginated list URL
            page_url = "%s&page=%s" % (response.url, page)
            print("Fetching list [%s], page [%s]." % (response.url, page))
            # Render with Splash, then parse the basic product info in parse_shop
            yield SplashRequest(page_url, args={'wait': 0.5, 'images': 0}, callback=self.parse_shop, splash_headers=self.header)

    def parse_shop(self, response):
        # All products on the current list page. A list page holds 60 items;
        # browsing a category directly shows 30 per page and lazy-loads the
        # other 30 on scroll, but opening the page from the list URL returns
        # all 60 at once, which avoids the dynamic loading.
        sel_list = response.xpath('//div[@id="plist"]').xpath('.//li[@class="gl-item"]')
        for sel in sel_list:
            # Iterate over every product
            print("Parsing product info from the list page")
            url = "http:%s" % sel.css(".p-name a::attr('href')").extract_first()
            shop_id = url.split("/")[-1].split(".")[0]
            title = sel.css(".p-name a em::text").extract_first().strip()
            brand = sel.css(".p-shop span a::attr('title')").extract_first()
            brand_url = sel.css(".p-shop span a::attr('href')").extract_first()
            price = sel.css(".p-price strong i::text").extract_first()
            # Note: this is a blocking call inside the spider; a non-blocking
            # alternative is sketched after this code block
            session = requests.Session()
            print("Fetching the review page for %s" % title)
            comment_url = "https://club.jd.com/comment/skuProductPageComments.action?productId={shop_id}&score=0&sortType=5&page={page_num}&pageSize=10&isShadowSku=0&fold=1".format(shop_id=shop_id, page_num=0)
            # Fetch page 0 once to collect the summary info, so it is not
            # re-fetched for every review page
            html = session.get(comment_url, headers=self.header)
            print("Parsing the review JSON")
            try:
                comment_json = json.loads(html.text)
            except ValueError:
                continue
            # Review summary
            public_comment = comment_json['productCommentSummary']
            # Total review count
            comment_num = public_comment['commentCount']
            # Positive-review rate
            good_comment_rate = public_comment['goodRate']
            # Positive reviews
            good_comment = public_comment['goodCount']
            # Neutral reviews
            general_count = public_comment['generalCount']
            # Negative reviews
            poor_count = public_comment['poorCount']
            # Default positive reviews
            default_comment_num = public_comment['defaultGoodCount']
            # Hot-review tags
            hot_comment = comment_json['hotCommentTagStatistics']
            if len(hot_comment) == 0:
                hot_comment_dict = "Null"
            else:
                hot_comment_dict = {}
                for i in hot_comment:
                    hot_comment_dict[i['id']] = {'name': i['name'], 'count': i['count']}
                hot_comment_dict = json.dumps(hot_comment_dict)
            shop_info = {
                'url': url,
                'shop_id': shop_id,
                'title': title,
                'brand': brand,
                'brand_url': brand_url,
                'price': price,
                'comment_num': comment_num,
                'good_comment_rate': good_comment_rate,
                'good_comment': good_comment,
                'general_count': general_count,
                'poor_count': poor_count,
                'hot_comment_dict': hot_comment_dict,
                'default_comment_num': default_comment_num,
            }
            # Each review page holds 10 reviews
            page_num = (comment_num + 9) // 10
            # Fetch at most 100 review pages per product
            if page_num >= 100:
                page_num = 100
            print("[%s] has [%s] review pages" % (title, page_num))
            for page in range(0, page_num):
                comment_url = "https://club.jd.com/comment/skuProductPageComments.action?productId={shop_ids}&score=0&sortType=5&page={page_nums}&pageSize=10&isShadowSku=0&fold=1".format(shop_ids=shop_id, page_nums=page)
                print("Yielding review page %s" % page)
                yield scrapy.Request(comment_url, meta=shop_info, headers=self.header, callback=self.parse_comment)

    # Review parsing callback
    def parse_comment(self, response):
        print("Parsing reviews")
        shop_id = response.meta.get("shop_id")
        url = response.meta.get("url")
        title = response.meta.get("title")
        brand = response.meta.get("brand")
        brand_url = response.meta.get("brand_url")
        price = response.meta.get("price")
        comment_num = response.meta.get("comment_num")
        good_comment_rate = response.meta.get("good_comment_rate")
        good_comment = response.meta.get("good_comment")
        general_count = response.meta.get("general_count")
        poor_count = response.meta.get("poor_count")
        hot_comment_dict = response.meta.get("hot_comment_dict")
        default_comment_num = response.meta.get("default_comment_num")
        try:
            comment_json = json.loads(response.text)
        except ValueError:
            shop_info = {
                'url': url,
                'shop_id': shop_id,
                'title': title,
                'brand': brand,
                'brand_url': brand_url,
                'price': price,
                'comment_num': comment_num,
                'good_comment_rate': good_comment_rate,
                'good_comment': good_comment,
                'general_count': general_count,
                'poor_count': poor_count,
                'hot_comment_dict': hot_comment_dict,
                'default_comment_num': default_comment_num,
            }
            # If the JSON failed to parse, re-download the page; dont_filter
            # is needed so the dupefilter does not drop the retried URL
            yield scrapy.Request(response.url, meta=shop_info, headers=self.header, callback=self.parse_comment, dont_filter=True)
        else:
            comment_info = comment_json['comments']
            for comment in comment_info:
                JDItem = JDAllItem()
                # Primary key: review ID
                comment_id = comment['id']
                comment_context = comment['content']
                comnent_time = comment['creationTime']
                # User star rating
                comment_score = comment['score']
                # Review source (client); empty means it was not posted from
                # a mobile client
                comment_source = comment['userClientShow']
                if not comment_source:
                    comment_source = "non-mobile"
                # Size/model, not present for every product
                produce_size = comment.get('productSize', "None")
                # Color, not present for every product
                produce_color = comment.get('productColor', "None")
                # User membership level
                user_level = comment['userLevelName']
                # Buyer follow-up review, if any
                try:
                    append_comment = comment['afterUserComment']['hAfterUserComment']['content']
                    append_comment_time = comment['afterUserComment']['created']
                except (KeyError, TypeError):
                    append_comment = "no follow-up"
                    append_comment_time = "None"
                # User JD experience points
                user_exp = comment['userExpValue']
                # Review upvote count
                comment_thumpup = comment['usefulVoteCount']
                # Seller reply, if any
                comment_reply = comment.get('replies', [])
                if len(comment_reply) == 0:
                    comment_reply_content = "Null"
                    comment_reply_time = "Null"
                else:
                    comment_reply_content = comment_reply[0]["content"]
                    comment_reply_time = comment_reply[0]["creationTime"]
                # Populate the Item
                JDItem["shop_id"] = shop_id
                JDItem["url"] = url
                JDItem["title"] = title
                JDItem["brand"] = brand
                JDItem["brand_url"] = brand_url
                JDItem["price"] = price
                JDItem["comment_num"] = comment_num
                JDItem["good_comment_rate"] = good_comment_rate
                JDItem["good_comment"] = good_comment
                JDItem["general_count"] = general_count
                JDItem["poor_count"] = poor_count
                JDItem["hot_comment_dict"] = hot_comment_dict
                JDItem["default_comment_num"] = default_comment_num
                JDItem["comment_id"] = comment_id
                JDItem["comment_context"] = comment_context
                JDItem["comnent_time"] = comnent_time
                JDItem["comment_score"] = comment_score
                JDItem["comment_source"] = comment_source
                JDItem["produce_size"] = produce_size
                JDItem["produce_color"] = produce_color
                JDItem["user_level"] = user_level
                JDItem["user_exp"] = user_exp
                JDItem["comment_thumpup"] = comment_thumpup
                JDItem["comment_reply_content"] = comment_reply_content
                JDItem["comment_reply_time"] = comment_reply_time
                JDItem["append_comment"] = append_comment
                JDItem["append_comment_time"] = append_comment_time
                print("yield评价")
                yield  JDItem            
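One caveat on the design: the session.get() call in parse_shop is synchronous and blocks the Twisted reactor while waiting for the comment API, which throttles the whole crawl. A minimal sketch of a non-blocking alternative, assuming the summary parsing moves into a hypothetical intermediate callback parse_summary (names here are illustrative, not from the original code):

    # Sketch: yield the first review page as a normal Request instead of
    # fetching it synchronously with requests.Session
    def parse_shop(self, response):
        for sel in response.xpath('//div[@id="plist"]//li[@class="gl-item"]'):
            url = "http:%s" % sel.css(".p-name a::attr('href')").extract_first()
            shop_id = url.split("/")[-1].split(".")[0]
            first_page = ("https://club.jd.com/comment/skuProductPageComments.action"
                          "?productId=%s&score=0&sortType=5&page=0&pageSize=10"
                          "&isShadowSku=0&fold=1" % shop_id)
            # parse_summary would read productCommentSummary and then yield
            # the remaining review pages, exactly as parse_shop does above
            yield scrapy.Request(first_page, headers=self.header,
                                 meta={'url': url, 'shop_id': shop_id},
                                 callback=self.parse_summary)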

2.2 Item definition

# Item definition
class JDAllItem(scrapy.Item):
    # Product info
    shop_id = scrapy.Field()
    url = scrapy.Field()
    title = scrapy.Field()
    brand = scrapy.Field()
    brand_url = scrapy.Field()
    price = scrapy.Field()
    comment_num = scrapy.Field()
    good_comment_rate = scrapy.Field()
    good_comment = scrapy.Field()
    general_count = scrapy.Field()
    poor_count = scrapy.Field()
    hot_comment_dict = scrapy.Field()
    default_comment_num = scrapy.Field()
    # Primary key: review ID
    comment_id = scrapy.Field()
    comment_context = scrapy.Field()
    comnent_time = scrapy.Field()
    # User star rating
    comment_score = scrapy.Field()
    # Review source (client)
    comment_source = scrapy.Field()
    # Size/model
    produce_size = scrapy.Field()
    # Color
    produce_color = scrapy.Field()
    # User membership level
    user_level = scrapy.Field()
    # User JD experience points
    user_exp = scrapy.Field()
    # Review upvote count
    comment_thumpup = scrapy.Field()
    # Seller reply
    comment_reply_content = scrapy.Field()
    comment_reply_time = scrapy.Field()
    append_comment = scrapy.Field()
    append_comment_time = scrapy.Field()

    def get_insert_sql(self):
        insert_sql = """
            INSERT INTO JDAll(shop_id, url, title, brand, brand_url, price,
                comment_num, good_comment_rate, good_comment, general_count,
                poor_count, hot_comment_dict, default_comment_num, comment_id,
                comment_context, comnent_time, comment_score, comment_source,
                produce_size, produce_color, user_level, user_exp,
                comment_thumpup, comment_reply_content, comment_reply_time,
                append_comment, append_comment_time)
            VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
        """
        params = (
            self["shop_id"], self["url"], self["title"], self["brand"],
            self["brand_url"], self["price"], self["comment_num"],
            self["good_comment_rate"], self["good_comment"], self["general_count"],
            self["poor_count"], self["hot_comment_dict"], self["default_comment_num"],
            self["comment_id"], self["comment_context"], self["comnent_time"],
            self["comment_score"], self["comment_source"], self["produce_size"],
            self["produce_color"], self["user_level"], self["user_exp"],
            self["comment_thumpup"], self["comment_reply_content"],
            self["comment_reply_time"], self["append_comment"],
            self["append_comment_time"])
        print("Returning SQL statement")
        return insert_sql, params
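Because comment_id is the primary key, re-crawling a product makes every already-stored review fail with a duplicate-key error, which the pipeline below only logs. If updated counts are wanted instead, one option (a sketch, not the original code) is to append an upsert clause to insert_sql inside get_insert_sql:

    # Sketch: update the mutable fields on duplicate review IDs instead of
    # failing the insert
    insert_sql = insert_sql + """
        ON DUPLICATE KEY UPDATE
            comment_thumpup = VALUES(comment_thumpup),
            comment_reply_content = VALUES(comment_reply_content),
            comment_reply_time = VALUES(comment_reply_time),
            append_comment = VALUES(append_comment),
            append_comment_time = VALUES(append_comment_time)
    """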

2.3 Custom pipeline: saving data to MySQL asynchronously

import MySQLdb
import MySQLdb.cursors
from twisted.enterprise import adbapi

class MysqlTwistedPipline(object):
    def __init__(self, dbpool):
        self.dbpool = dbpool

    @classmethod
    def from_settings(cls, settings):
        dbparms = dict(
            host=settings["MYSQL_HOST"],
            db=settings["MYSQL_DBNAME"],
            user=settings["MYSQL_USER"],
            passwd=settings["MYSQL_PASSWORD"],
            charset='utf8',
            cursorclass=MySQLdb.cursors.DictCursor,
            use_unicode=True,
        )
        # Twisted's adbapi wraps MySQLdb in a thread pool, so inserts do not
        # block the crawl
        dbpool = adbapi.ConnectionPool("MySQLdb", **dbparms)
        return cls(dbpool)

    def process_item(self, item, spider):
        query = self.dbpool.runInteraction(self.do_insert, item)
        query.addErrback(self.handle_error, item, spider)
        return item  # pass the item on to the next pipeline (RedisPipeline)

    def handle_error(self, failure, item, spider):
        print(failure)

    def do_insert(self, cursor, item):
        print("Writing to the database")
        insert_sql, params = item.get_insert_sql()
        cursor.execute(insert_sql, params)
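One portability note: the MySQLdb driver (mysqlclient) can be troublesome to install on some systems; if so, pymysql can stand in for it with two lines at the top of the module (assuming pymysql is installed):

    import pymysql
    pymysql.install_as_MySQLdb()  # make "import MySQLdb" resolve to pymysql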

2.4 Settings: enabling the required components

SPIDER_MIDDLEWARES = {
    # Enable the SplashDeduplicateArgsMiddleware
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DOWNLOADER_MIDDLEWARES = {
    # Required by scrapy-splash
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
ITEM_PIPELINES = {
    # Save items to MySQL asynchronously
    'ArticleSpider.pipelines.MysqlTwistedPipline': 404,
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
# Database settings
MYSQL_HOST = "xxx"
MYSQL_DBNAME = "xxx"
MYSQL_USER = "xxx"
MYSQL_PASSWORD = "xxx"
# Redis settings
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Note: with scrapy-redis the DB index is normally selected in the URL
# itself, e.g. 'redis://192.168.1.241:6379/1'
REDIS_URL = 'redis://192.168.1.241:6379'
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
REDIS_HOST = "192.168.1.241"
REDIS_PORT = 6379
SCHEDULER_PERSIST = True
REDIS_DB_INDEX = 1
# Splash setting; an nginx reverse proxy balances several Splash instances here
SPLASH_URL = 'http://192.168.1.234'

2.5 main.py, used to launch the spider

from scrapy.cmdline import execute

# JOBDIR persists crawl state so the job can be paused and resumed
execute(["scrapy", "crawl", "jd_all", "-s", "LOG_LEVEL=DEBUG", "-s", "JOBDIR=job_info/jd_all"])

2.6 Creating the database table

Storage is review-centric, with each row holding one review together with its product information; the table is created with the CREATE TABLE statement shown in section 1.2.

2.7 Starting Splash

# Note: a long-running Splash instance gradually eats a lot of memory and can
# start returning 502/504 errors. For this crawl the Splash containers were
# therefore split across two servers (three on one, two on the other), fronted
# by an nginx reverse proxy, and the Docker containers were restarted on a schedule.
docker run -tid -p 8050:8050 scrapinghub/splash
docker run -tid -p 8051:8050 scrapinghub/splash
docker run -tid -p 8052:8050 scrapinghub/splash
docker run -tid -p 8053:8050 scrapinghub/splash
docker run -tid -p 8054:8050 scrapinghub/splash
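A minimal sketch of the nginx side, assuming the five containers are spread over hosts 192.168.1.235 and 192.168.1.236 (all addresses and ports here are illustrative, not from the original setup):

    # /etc/nginx/conf.d/splash.conf -- round-robin over the Splash instances
    upstream splash {
        server 192.168.1.235:8050;
        server 192.168.1.235:8051;
        server 192.168.1.235:8052;
        server 192.168.1.236:8050;
        server 192.168.1.236:8051;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://splash;
            proxy_read_timeout 90s;  # Splash renders can be slow
        }
    }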

2.8 Starting the crawler

python main.py

After starting, the spider sits idle at this point; the corresponding key must be pushed to redis.

Push the start-URL key:
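A one-liner on the redis host is enough; jd:start_urls is the redis_key defined in the spider, and the seed URL is the category page from section 1.1:

    redis-cli lpush jd:start_urls https://www.jd.com/allSort.aspx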

The crawler then proceeds normally.

The redis dedup set, request queue, and item queue were shown in a screenshot here (omitted).

3. Data storage results

I started five crawler processes in total and let them crawl overnight. (Screenshot of the stored rows omitted.)

4. JD crawl results

Product information and review information screenshots (omitted).


Copyright notice: this is an original article by dukuan, licensed under CC 4.0 BY-SA; when republishing, please include the original source link and this notice.
Original link: https://www.cnblogs.com/dukuan/p/8336670.html
