Web Scraping Study Notes (17): Splash and AJAX, 2020.5.21

Preface

This section covers Splash and AJAX.

References:
Splash basics for web scraping
Ajax explained

1. Splash

Introduction

Splash is a JavaScript rendering service.

  • It ships with a built-in browser and an HTTP API.
  • It is built on Python 3 and the Twisted engine.
  • It can process tasks asynchronously.

Installation (via Docker; the official docs cover Linux and macOS):
https://splash.readthedocs.io/en/stable/install.html

First, install Docker.

  • Docker is an open-source container project implemented in Go, first released in early 2013.
  • A container can be thought of as an isolated compartment (like the sealed cup of an instant-noodle pack).
Install the Splash image:
docker pull scrapinghub/splash
Run it:
docker run -p 8050:8050 -p 5023:5023 scrapinghub/splash
Then visit:
http://localhost:8050

HTTP requests

Splash exposes an HTTP API.
For scraping web pages, the most important endpoint is render.html:

curl 'http://localhost:8050/render.html?url=http://www.baidu.com/&timeout=30&wait=0.5'
# url: required, the page to request
# timeout: optional, overall timeout in seconds
# wait: optional, seconds to wait after the page finishes loading
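When the target URL itself carries a query string, concatenating the endpoint by hand can mix the target's parameters with Splash's own. A small stdlib-only sketch of building the render.html URL safely (the values are just the ones from the curl example above):

```python
from urllib.parse import urlencode, quote

# Build the render.html endpoint URL with every parameter value escaped,
# so the target URL's own query string cannot leak into Splash's parameters
params = {'url': 'http://www.baidu.com/', 'timeout': 30, 'wait': 0.5}
endpoint = 'http://localhost:8050/render.html?' + urlencode(params, quote_via=quote)
print(endpoint)
```

The resulting string can be passed straight to requests.get or curl.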

Scrape Toutiao and compare the results with and without rendering

import requests
from lxml import etree

# Fetch through Splash so the JavaScript-rendered titles are present
url = 'http://localhost:8050/render.html?url=https://www.toutiao.com&timeout=30&wait=0.5'
# url = 'https://www.toutiao.com'  # direct request, without rendering
response = requests.get(url)
print(response.text)
tree = etree.HTML(response.text)
article_titles = tree.xpath('//div[@class="title-box"]/a/text()')
print(article_titles)

Scrape Douban comments for the film Dying to Survive (我不是药神)

import csv
import time
import requests
from lxml import etree
from urllib.parse import quote

fw = open('douban_comments.csv', 'w', newline='', encoding='utf-8')
writer = csv.writer(fw)
writer.writerow(['comment_time', 'comment_content'])
for i in range(0, 20):
    # Percent-encode the target URL so its query string is not
    # confused with Splash's own timeout/wait parameters
    douban = 'https://movie.douban.com/subject/26752088/comments?start={}&limit=20&sort=new_score&status=P'.format(i * 20)
    url = 'http://localhost:8050/render.html?url={}&timeout=30&wait=0.5'.format(quote(douban, safe=''))
    # url = douban  # direct request, without rendering
    response = requests.get(url)
    tree = etree.HTML(response.text)
    comments = tree.xpath('//div[@class="comment"]')
    for item in comments:
        comment_time = item.xpath('./h3/span[2]/span[contains(@class,"comment-time")]/@title')[0]
        # Convert "YYYY-mm-dd HH:MM:SS" to a Unix timestamp
        comment_time = int(time.mktime(time.strptime(comment_time, '%Y-%m-%d %H:%M:%S')))
        comment_content = item.xpath('./p/span/text()')[0].strip()
        print(comment_time)
        print(comment_content)
        writer.writerow([comment_time, comment_content])
fw.close()
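The timestamp conversion used above can be isolated into a stdlib-only sketch: strptime parses the time string, mktime converts it to a Unix timestamp, and localtime/strftime invert the process:

```python
import time

# Parse a "YYYY-mm-dd HH:MM:SS" string into a Unix timestamp...
s = '2020-05-21 12:00:00'
ts = int(time.mktime(time.strptime(s, '%Y-%m-%d %H:%M:%S')))
# ...and convert it back, which reproduces the original string
back = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(ts))
print(ts, back)
```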

Run a Lua script from Python via the execute endpoint, and scrape JD product listings

import json
import requests
from lxml import etree
from urllib.parse import quote

lua = '''
function main(splash, args)
    local treat = require("treat")
    local response = splash:http_get("https://search.jd.com/Search?keyword=相机&enc=utf-8")
    return {
        html = treat.as_string(response.body),
        url = response.url,
        status = response.status
    }
end
'''
# For a deployed service, replace localhost with the server's public IP (not the intranet address)
url = 'http://localhost:8050/execute?lua_source=' + quote(lua)
response = requests.get(url)
html = json.loads(response.text)['html']
tree = etree.HTML(html)
# Single products
products_1 = tree.xpath('//div[@class="gl-i-wrap"]')
for item in products_1:
    try:
        name_1 = item.xpath('./div[@class="p-name p-name-type-2"]/a/em/text()')[0]
        price_1 = item.xpath('./div[@class="p-price"]/strong/@data-price | ./div[@class="p-price"]/strong/i/text()')[0]
        print(name_1)
        print(price_1)
    except IndexError:  # some cards lack a name or price node
        pass
# Bundles
products_2 = tree.xpath('//div[@class="tab-content-item tab-cnt-i-selected"]')
for item in products_2:
    name_2 = item.xpath('./div[@class="p-name p-name-type-2"]/a/em/text()')[0]
    price_2 = item.xpath('./div[@class="p-price"]/strong/@data-price | ./div[@class="p-price"]/strong/i/text()')[0]
    print(name_2)
    print(price_2)

scrapy-splash

A library that plugs Splash into Scrapy for scraping dynamic pages.
Documentation:
https://github.com/scrapy-plugins/scrapy-splash

# Install
pip install scrapy-splash
# Create a Scrapy project:
scrapy startproject scrapysplashtest
# Create a spider:
scrapy genspider taobao www.taobao.com
# Then edit the settings file.
# Add SPLASH_URL:
SPLASH_URL = 'http://localhost:8050'
# Add the downloader middlewares:
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
# Enable the deduplication spider middleware:
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
# Set the custom dupefilter class:
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
# Configure the cache storage backend:
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

2. AJAX

  • Asynchronous JavaScript and XML.
  • With this technique, JavaScript exchanges data with the back-end server without reloading the page, and displays the result on the front-end page.
  • Advantages: pages open much faster, and from a development standpoint the front end and back end can be built separately, which speeds up development.

How AJAX works:

  • Send the request: JavaScript sends an XMLHttpRequest (XHR) to the server through its API.
  • Parse the response: when JavaScript receives the response, the content may be HTML or JSON.
  • Render the page: JavaScript manipulates the DOM tree, changing node content to update the page.
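From the scraping side, the "parse the response" step usually means decoding a JSON payload. A minimal sketch with a made-up payload (the field names are illustrative, not any real API's):

```python
import json

# A made-up example of the JSON body an XHR endpoint might return
payload = '{"data": {"items": [{"title": "hello", "published_at": "2020-05-21"}]}}'
data = json.loads(payload)
for item in data['data']['items']:
    print(item['title'], item['published_at'])
```

The examples below follow exactly this pattern against real endpoints.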

Scrape book reviews from Dangdang

import json
import requests
from lxml import etree
for i in range(1,5):
    # url = 'http://product.dangdang.com/index.php?r=comment/list&productId=25340451&pageIndex=1'
    url = 'http://product.dangdang.com/index.php?r=comment/list&productId=25340451&categoryPath=01.07.07.04.00.00&mainProductId=25340451&mediumId=0&pageIndex={}'.format(i)
    header = {
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                              '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
            }
    response = requests.get(url,
                            headers=header,
                            timeout=5
                            )
    print(response.text)
    result = json.loads(response.text)
    comment_html = result['data']['list']['html']
    tree = etree.HTML(comment_html)
    comments = tree.xpath('//div[@class="items_right"]')
    for item in comments:
        comment_time = item.xpath('./div[contains(@class,"starline")]/span[1]/text()')[0]
        comment_content = item.xpath('./div[contains(@class,"describe_detail")]/span[1]//text()')[0]
        print(comment_time)
        print(comment_content)

Scrape the Jinse Finance newsflash API

import requests
import json
header = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
    }
url = 'https://api.jinse.com/v4/live/list?limit=20&reading=false&flag=up'
response = requests.get(url,
                        headers=header,
                        timeout=5
                        )
result = json.loads(response.text)
print(result)
# JSON inspection tool: http://www.bejson.com/
for item in result['list'][0]['lives']:
    # print(item)
    timestamp = item['created_at']
    content = item['content']
    print(timestamp)
    print(content)

Scrape 36Kr newsflashes

import requests
import json
header = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
    }
url = 'https://36kr.com/api/newsflash?&per_page=20'
response = requests.get(url,
                        headers=header,
                        timeout=5
                        )
# Parse the JSON body once, then drill down to the item list
data = json.loads(response.text)['data']
items = data['items']
for item in items:
    item_info = {}
    item_info['title'] = item['title']
    item_info['content'] = item['description']
    item_info['published_time'] = item['published_at']
    print(item_info)

Closing remarks

These examples give a rough working picture of Splash and AJAX.
