A simple spider (web-scraping) program based on Scrapy
2019/7/13 21:22:56
This article presents a simple spider (web-scraping) program implemented with Scrapy, shared for your reference. The details are as follows:
# Standard Python library imports

# 3rd party imports
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

# My imports
from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'

class PoetryParser(object):
    """
    Provides common parsing method for poems formatted this one specific way.
    """
    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        hxs = HtmlXPathSelector(response)
        item = PoetryAnalysisItem()
        # All poetry text is in pre tags
        text = hxs.select('//pre/text()').extract()
        item['text'] = ''.join(text)
        item['url'] = response.url
        # head/title contains "title - a poem by author"
        title_text = hxs.select('//head/title/text()').extract()[0]
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        # date_pattern is a class attribute, so it must be read through self
        item['date'] = hxs.select("//p[@class='small']/text()").re(self.date_pattern)
        return item

class PoetrySpider(CrawlSpider, PoetryParser):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    root_path = 'someuser/poetry/'
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    rules = [Rule(SgmlLinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                  callback='parse_poem'),
             Rule(SgmlLinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]
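The spider imports PoetryAnalysisItem from poetry_analysis.items, but the article never shows that file. Below is a minimal sketch of what it would need to contain, with the fields inferred from the assignments in parse_poem; the module and class names come from the import above, everything else is an assumption.

# poetry_analysis/items.py -- reconstructed from the fields parse_poem assigns
from scrapy.item import Item, Field

class PoetryAnalysisItem(Item):
    text = Field()    # full poem text, joined from the pre tags
    url = Field()     # URL of the scraped page
    title = Field()   # poem title, taken from head/title
    author = Field()  # author name, taken from head/title
    date = Field()    # date string(s) matched by date_pattern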
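Note that the code above targets the pre-1.0 Scrapy API: the scrapy.contrib package was later removed, SgmlLinkExtractor was replaced by scrapy.linkextractors.LinkExtractor, and HtmlXPathSelector gave way to selectors reached through response.xpath(). For readers on a current Scrapy release, here is a sketch of the same spider against the modern API; it keeps the original placeholder example.com URLs and is untested against real pages, so treat it as a starting point rather than a drop-in replacement.

# Same spider against modern Scrapy (1.x and later) -- a sketch, not verified
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'

class PoetrySpider(CrawlSpider):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    date_pattern = r'(\d{2} \w{3,9} \d{4})'
    rules = [Rule(LinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                  callback='parse_poem'),
             Rule(LinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]

    def parse_poem(self, response):
        item = PoetryAnalysisItem()
        # All poetry text is in pre tags
        item['text'] = ''.join(response.xpath('//pre/text()').getall())
        item['url'] = response.url
        # head/title contains "title - a poem by author"
        title_text = response.xpath('//head/title/text()').get()
        item['title'], item['author'] = title_text.split(' - ')
        item['title'] = item['title'].strip()
        item['author'] = item['author'].replace('a poem by', '').strip()
        item['date'] = response.xpath("//p[@class='small']/text()").re(self.date_pattern)
        return item

Either version is started from the Scrapy project directory with the command-line tool, e.g. scrapy crawl example.com_poetry -o poems.json, which writes the scraped items out as JSON.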
Hopefully what is described here is helpful to your Python programming.