Scrapy at a glance

 

Scrapy is a fast, high-level screen scraping and web crawling framework written in Python, used to crawl web sites and extract structured data from their pages. It can be used for a wide range of useful applications, like data mining, information processing or historical archival.

 

Even though Scrapy was originally designed for screen scraping (more precisely, web scraping), it can also be used to extract data using APIs (such as Amazon Associates Web Services) or as a general purpose web crawler.

 

The purpose of this document is to introduce you to the concepts behind Scrapy so you can get an idea of how it works and decide if Scrapy is what you need.

 

When you’re ready to start a project, you can start with the tutorial.

 

Pick a website

So you need to extract some information from a website, but the website doesn’t provide any API or mechanism to access that info programmatically. Scrapy can help you extract that information.

Let’s say we want to extract the URL, name, description and size of all torrent files added today on the Mininova site.

The list of all torrents added today can be found on this page:

http://www.mininova.org/today

Define the data you want to scrape

The first thing is to define the data we want to scrape. In Scrapy, this is done through Scrapy Items (Torrent files, in this case).

This would be our Item:

from scrapy.item import Item, Field

class Torrent(Item):
    url = Field()
    name = Field()
    description = Field()
    size = Field()
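Scrapy isn’t required to see what an Item gives you: it behaves like a dictionary whose keys are restricted to the declared fields. A minimal stand-in sketch in plain Python (a rough approximation for illustration, not Scrapy’s actual implementation):

```python
# A dict whose keys are limited to the declared fields -- a hypothetical
# approximation of the guard that scrapy.item.Item provides.
class TorrentSketch(dict):
    fields = ("url", "name", "description", "size")

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError("%r is not a declared field" % key)
        super().__setitem__(key, value)

item = TorrentSketch()
item["name"] = "Home[2009][Eng]XviD-ovd"  # allowed: "name" is declared
```

Assigning to an undeclared key (say, item["seeds"]) raises a KeyError, which is essentially the protection a real Item gives you against typos in field names.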

 

 

Write a Spider to extract the data

The next thing is to write a Spider which defines the start URL (http://www.mininova.org/today), the rules for following links and the rules for extracting the data from pages.

If we take a look at that page content we’ll see that all torrent URLs are like http://www.mininova.org/tor/NUMBER where NUMBER is an integer. We’ll use that to construct the regular expression for the links to follow: /tor/\d+.
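The follow-link rule boils down to that one pattern. As a quick sanity check, using a couple of URLs taken from the examples in this document, plain Python’s re module behaves as expected:

```python
import re

# The same pattern the spider's Rule will use to decide which links to follow.
TOR_LINK = re.compile(r"/tor/\d+")

# A torrent detail page matches...
assert TOR_LINK.search("http://www.mininova.org/tor/13204203") is not None
# ...while the listing page itself does not.
assert TOR_LINK.search("http://www.mininova.org/today") is None
```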

We’ll use XPath for selecting the data to extract from the web page HTML source. Let’s take one of those torrent pages:

http://www.mininova.org/tor/13204203

And look at the page HTML source to construct the XPath to select the data we want, which is: torrent name, description and size.

By looking at the page HTML source we can see that the file name is contained inside a <h1> tag:

<h1>Home[2009][Eng]XviD-ovd</h1>

 

An XPath expression to extract the name could be:

//h1/text()

 

And the description is contained inside a <div> tag with id="description":

<h2>Description:</h2>

<div id="description">
"HOME" - a documentary film by Yann Arthus-Bertrand
<br/>
<br/>
***
<br/>
<br/>
"We are living in exceptional times. Scientists tell us that we have 10 years to change the way we live, avert the depletion of natural resources and the catastrophic evolution of the Earth's climate.

...

 

An XPath expression to select the description could be:

//div[@id='description']

 

Finally, the file size is contained in the second <p> tag inside the <div> tag with id="specifications":

<div id="specifications">

<p>
<strong>Category:</strong>
<a href="/cat/4">Movies</a> &gt; <a href="/sub/35">Documentary</a>
</p>

<p>
<strong>Total size:</strong>
699.79&nbsp;megabyte</p>

 

An XPath expression to select the file size could be:

//div[@id='specifications']/p[2]/text()[2]
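To see these three expressions in action without installing Scrapy, here is a sketch using the standard library’s xml.etree.ElementTree, which supports a limited XPath subset. The markup below is a trimmed, hypothetical reconstruction of the snippets above, and since ElementTree has no text()[2], the second text node of the size paragraph is reached through the <strong> element’s tail:

```python
import xml.etree.ElementTree as ET

# A minimal fragment mirroring the torrent page structure (hypothetical
# markup, trimmed from the snippets shown above).
html = """
<html><body>
<h1>Home[2009][Eng]XviD-ovd</h1>
<div id="description">"HOME" - a documentary film by Yann Arthus-Bertrand</div>
<div id="specifications">
  <p><strong>Category:</strong> Movies</p>
  <p><strong>Total size:</strong> 699.79 megabyte</p>
</div>
</body></html>
"""

root = ET.fromstring(html)

# //h1/text()
name = root.find(".//h1").text

# //div[@id='description'] (here we take its text content directly)
description = root.find(".//div[@id='description']").text

# //div[@id='specifications']/p[2]/text()[2] -- the second text node of
# that <p> is the tail that follows its <strong> child.
size = root.find(".//div[@id='specifications']/p[2]/strong").tail.strip()
```

Scrapy’s own selectors use a full XPath engine, so the real expressions above work there verbatim; this is only a stdlib approximation of the same idea.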

 

For more information about XPath see the XPath reference.

Finally, here’s the spider code:

class MininovaSpider(CrawlSpider):

    name = 'mininova.org'
    allowed_domains = ['mininova.org']
    start_urls = ['http://www.mininova.org/today']
    rules = [Rule(SgmlLinkExtractor(allow=[r'/tor/\d+']), 'parse_torrent')]

    def parse_torrent(self, response):
        x = HtmlXPathSelector(response)

        # Populate the Torrent item defined above with the three XPath
        # expressions constructed in the previous section.
        torrent = Torrent()
        torrent['url'] = response.url
        torrent['name'] = x.select("//h1/text()").extract()
        torrent['description'] = x.select("//div[@id='description']").extract()
        torrent['size'] = x.select("//div[@id='specifications']/p[2]/text()[2]").extract()
        return torrent

 

For brevity’s sake, we intentionally left out the import statements. The Torrent item is defined above.

Run the spider to extract the data

Finally, we’ll run the spider to crawl the site and output a file scraped_data.json with the scraped data in JSON format:

scrapy crawl mininova.org -o scraped_data.json -t json

 

This uses feed exports to generate the JSON file. You can easily change the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3, for example).

You can also write an item pipeline to store the items in a database very easily.

Review scraped data

If you check the scraped_data.json file after the process finishes, you’ll see the scraped items there:

[{"url": "http://www.mininova.org/tor/2657665", "name": ["Home[2009][Eng]XviD-ovd"], "description": ["HOME - a documentary film by ..."], "size": ["699.69 megabyte"]},
# ... other items ...
]
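A quick way to convince yourself of the shape of that output is to load it back with the standard json module (the record below mirrors the excerpt above):

```python
import json

# One record from scraped_data.json, copied from the excerpt above.
sample = '''[{"url": "http://www.mininova.org/tor/2657665",
              "name": ["Home[2009][Eng]XviD-ovd"],
              "description": ["HOME - a documentary film by ..."],
              "size": ["699.69 megabyte"]}]'''

items = json.loads(sample)
first = items[0]

# url was assigned directly, so it round-trips as a plain string...
assert isinstance(first["url"], str)
# ...while the selector-extracted fields come back as one-element lists.
assert isinstance(first["name"], list)
```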

 

You’ll notice that all field values (except for the url, which was assigned directly) are actually lists. This is because the selectors return lists. You may want to store single values, or perform some additional parsing/cleansing to the values. That’s what Item Loaders are for.
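In the spirit of what Item Loaders do (Scrapy ships processors such as TakeFirst for exactly this), a hand-rolled cleanup step might look like the following sketch; the helper name is made up for illustration:

```python
def take_first(values):
    """Collapse a selector result list to a single stripped string.

    A hypothetical stand-in for an Item Loader output processor,
    not Scrapy's actual implementation.
    """
    for v in values:
        v = v.strip()
        if v:
            return v
    return None

# e.g. the list-valued "size" field from the JSON output above:
size = take_first(["  699.79 megabyte ", ""])
```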

 

What else?

You’ve seen how to extract and store items from a website using Scrapy, but this is just the surface. Scrapy provides a lot of powerful features for making scraping easy and efficient, such as:

  • Built-in support for selecting and extracting data from HTML and XML sources
  • Built-in support for cleaning and sanitizing the scraped data using a collection of reusable filters (called Item Loaders) shared between all the spiders
  • Built-in support for generating feed exports in multiple formats (JSON, CSV, XML) and storing them in multiple backends (FTP, S3, local filesystem)
  • A media pipeline for automatically downloading images (or any other media) associated with the scraped items
  • Support for extending Scrapy by plugging in your own functionality using signals and a well-defined API (middlewares, extensions, and pipelines)
  • Wide range of built-in middlewares and extensions for:
    • cookies and session handling
    • HTTP compression
    • HTTP authentication
    • HTTP cache
    • user-agent spoofing
    • robots.txt
    • crawl depth restriction
    • and more
  • Robust encoding support and auto-detection, for dealing with foreign, non-standard and broken encoding declarations
  • Support for creating spiders based on pre-defined templates, to speed up spider creation and make their code more consistent on large projects. See the genspider command for more details
  • Extensible stats collection for multiple spider metrics, useful for monitoring the performance of your spiders and detecting when they get broken
  • An interactive shell console for trying XPaths, very useful for writing and debugging your spiders
  • A system service designed to ease the deployment and running of your spiders in production
  • A built-in Web service for monitoring and controlling your bot
  • A Telnet console for hooking into a Python console running inside your Scrapy process, to introspect and debug your crawler
  • Logging facility that you can hook on to for catching errors during the scraping process
  • Support for crawling based on URLs discovered through Sitemaps
  • A caching DNS resolver

What’s next?

The next obvious steps are for you to download Scrapy, read the tutorial and join the community. Thanks for your interest!

 

T:\mininova\mininova\items.py source code

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/topics/items.html

from scrapy.item import Item, Field

class MininovaItem(Item):
    # define the fields for your item here like:
    # name = Field()
    url = Field()
    name = Field()
    description = Field()
    size = Field()
        

T:\mininova\mininova\spiders\spider_mininova.py source code

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from mininova.items import MininovaItem

class MininovaSpider(CrawlSpider):

    name = 'mininova.org'
    allowed_domains = ['mininova.org']
    start_urls = ['http://www.mininova.org/today']
    #start_urls = ['http://www.mininova.org/yesterday']
    rules = [Rule(SgmlLinkExtractor(allow=['/tor/\d+']), 'parse_item')]

    # def parse_item(self, response):
        # filename = response.url.split("/")[-1] + ".html"
        # open(filename, 'wb').write(response.body)

    def parse_item(self, response):
        x = HtmlXPathSelector(response)
        item = MininovaItem()
        item['url'] = response.url
        #item['name'] = x.select('''//*[@id="content"]/h1''').extract()
        item['name'] = x.select("//h1/text()").extract()
        #item['description'] = x.select("//div[@id='description']").extract()
        item['description'] = x.select('''//*[@id="specifications"]/p[7]/text()''').extract() #download
        #item['size'] = x.select("//div[@id='info-left']/p[2]/text()[2]").extract()
        item['size'] = x.select('''//*[@id="specifications"]/p[3]/text()''').extract()
        return item

 

 

Posted: 2024-10-04 00:01:48
