Scraping a Novel with Python
2022/1/15 17:07:15
This article walks through scraping a novel with Python. It should be a useful reference for similar programming problems; if that is what you are after, follow along!
```python
from urllib import request
from bs4 import BeautifulSoup
import re
import codecs


def download(url, i=0):
    """Download a page and return its HTML as a string."""
    # Fetch the HTML with a browser-like User-Agent to avoid being blocked
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) '
                             'Gecko/20100101 Firefox/23.0'}
    req = request.Request(url, headers=headers)
    html = request.urlopen(req).read()
    # Save the raw HTML to disk, then read it back and parse it
    with open('{}.html'.format(i), 'wb') as f:
        f.write(html)
    with open('{}.html'.format(i), 'rb') as f:
        # Serialize the parsed document back into one big string
        soup = str(BeautifulSoup(f.read(), 'html.parser'))
    return soup


def save(name, txt):
    """Once the title and body are extracted, save them as a .txt file."""
    with codecs.open('{}.txt'.format(name), 'w', 'utf-8') as f:
        f.write(txt)


def tackle(url, i):
    soup = download(url, i)  # the entire chapter page as one string
    # Match the chapter title; .* matches any run of characters except newlines,
    # and the capture group keeps only the text between the two tags
    mp = re.search('<div class="title">(.*)</div>', soup)
    title = mp.group(1)
    # Mark where the chapter body starts and ends by its surrounding tags
    open_tag = '<div class="content" id="content">'
    start = soup.find(open_tag)
    end = soup.find('</p></div>')
    body = soup[start + len(open_tag):end]
    save(title, body)  # save() appends the .txt extension itself


if __name__ == '__main__':
    soup = download('path')  # URL of the novel's table of contents
    # Locate every occurrence of a chapter link inside the TOC page
    place = [m.start() for m in re.finditer('http://www.ncwxw.cc/0/298/8', soup)]
    place = place[13:]  # preprocessing: skip earlier matches that are not chapter links
    website = []  # one URL per chapter
    for chapter in place:
        website.append(soup[chapter:chapter + 36])  # each chapter URL is 36 characters long
    # The above suits the case where chapter URLs follow no pattern;
    # if they are regular, the URLs can simply be generated instead.
    for i in range(1, 1979):  # hard-coded chapter count for this novel
        tackle(website[i], i)
```
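The regex and fixed-offset slicing above are brittle: one extra attribute or whitespace change in the page breaks arithmetic like `start + len(open_tag)`. Since BeautifulSoup is already imported, the same extraction can be done on the parsed tree instead. Below is a minimal sketch, assuming the chapter pages really use `<div class="title">` and `<div id="content">` as above; `tackle_with_soup` is a hypothetical name and the URL argument is still whatever chapter link you feed it:

```python
from urllib import request
from bs4 import BeautifulSoup


def tackle_with_soup(url):
    """Sketch: extract the title and body from the parsed tree, not string offsets."""
    headers = {'User-Agent': 'Mozilla/5.0'}
    req = request.Request(url, headers=headers)
    html = request.urlopen(req).read()
    soup = BeautifulSoup(html, 'html.parser')
    # find() filters by tag name plus attributes, so no character offsets are needed
    title = soup.find('div', class_='title').get_text(strip=True)
    # get_text('\n') keeps one line per <p> inside the content div
    body = soup.find('div', id='content').get_text('\n', strip=True)
    with open('{}.txt'.format(title), 'w', encoding='utf-8') as f:
        f.write(body)
```

This version survives attribute reordering and tag-length changes, at the cost of parsing each page twice if combined with the string-based `download()` above.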
That wraps up this article on scraping a novel with Python. We hope it has been helpful, and we hope you will keep supporting 为之网!