Python Scrapy Framework: A Simple Example of the Generic CrawlSpider
This article walks through the usage of CrawlSpider, Scrapy's generic spider class, with a simple example. It is shared here for your reference; the details are as follows:
Step 01: Create the crawler project
scrapy startproject quotes
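For reference, the command above lays down a project skeleton roughly like the one below (the file names are standard, though the exact set may vary slightly with your Scrapy version):

quotes/
    scrapy.cfg            # deploy configuration
    quotes/               # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where spiders live
            __init__.py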
Step 02: Generate the spider from a template
scrapy genspider -t crawl quotes quotes.toscrape.com
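The -t crawl option tells genspider to use the CrawlSpider template rather than the basic one. The generated quotes.py starts out as roughly the following placeholder (the exact boilerplate depends on your Scrapy version), which the next step replaces with real rules and callbacks:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class QuotesSpider(CrawlSpider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    # Placeholder rule generated by the template
    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        return item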
Step 03: Edit the spider file quotes.py
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class Quotes(CrawlSpider):
    # Spider name
    name = 'get_quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    # Crawl rules
    rules = (
        # For quote listing pages, call parse_quotes to extract the data,
        # and keep following the links matched by this rule
        Rule(LinkExtractor(allow=r'/page/\d+'), callback='parse_quotes', follow=True),
        # For author detail pages, call parse_author to extract the data
        Rule(LinkExtractor(allow=r'/author/\w+'), callback='parse_author'),
    )

    # Extract data from quote listing pages
    def parse_quotes(self, response):
        for quote in response.css('.quote'):
            yield {
                'content': quote.css('.text::text').extract_first(),
                'author': quote.css('.author::text').extract_first(),
                'tags': quote.css('.tag::text').extract(),
            }

    # Extract data from author pages
    def parse_author(self, response):
        name = response.css('.author-title::text').extract_first()
        author_born_date = response.css('.author-born-date::text').extract_first()
        author_born_location = response.css('.author-born-location::text').extract_first()
        author_description = response.css('.author-description::text').extract_first()
        return {
            'name': name,
            'author_born_date': author_born_date,
            'author_born_location': author_born_location,
            'author_description': author_description,
        }
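A detail worth knowing about Rule: when follow is omitted, it defaults to True if the rule has no callback and to False otherwise, which is why the first rule sets follow=True explicitly while the second can leave it out. The CSS selectors can also be verified interactively in the Scrapy shell before running the full crawl, for example:

scrapy shell 'http://quotes.toscrape.com/'
>>> response.css('.quote .text::text').extract_first()   # text of the first quote
>>> response.css('.quote .author::text').extract()       # all author names on the page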
Step 04: Run the spider
scrapy crawl get_quotes
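Note that the crawl command takes the spider's name attribute (get_quotes above), not the project name. By default the scraped items only appear in the log output; to persist them, Scrapy's built-in feed exports can write the items to a file (the file name here is arbitrary, and the format is inferred from the extension):

scrapy crawl get_quotes -o quotes.json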
We hope this article is helpful to readers doing Python development with the Scrapy framework.