Learn Web Scraping Gently - Clever Uses of the Scrapy Framework 8 - Monkey Steals the Peach (4)

2021/7/8 23:25:33



In this lesson we'll look at the commonly used methods in the bs4 library, again using the data below as an example:

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

from bs4 import BeautifulSoup

soup = BeautifulSoup(html_doc, 'html.parser')
# prettify() returns a string rather than printing anything,
# so print its return value to see the result
print(soup.prettify())


# This prints the structured HTML below:
"""<html>
 <head>
  <title>
   The Dormouse's story
  </title>
 </head>
 <body>
  <p class="title">
   <b>
    The Dormouse's story
   </b>
  </p>
  <p class="story">
   Once upon a time there were three little sisters; and their names were
   <a class="sister" href="http://example.com/elsie" id="link1">
    Elsie
   </a>
   ,
   <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
   and
   <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
   ;
and they lived at the bottom of a well.
  </p>
  <p class="story">
   ...
  </p>
 </body>
</html>"""

Filters

find_all() finds all matching tags and returns them as a list. The filter you pass in can take several forms.

A string

print(soup.find_all('b'))
# [<b>The Dormouse's story</b>]
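A string filter matches tag names exactly, and every match comes back in the returned list. A minimal sketch against a trimmed copy of the same document (assuming bs4 is installed):

```python
from bs4 import BeautifulSoup

# A trimmed copy of the example document from above.
html_doc = """
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

# A plain string matches the tag name exactly: 'a' finds only <a> tags.
links = soup.find_all('a')
print([link['id'] for link in links])
# ['link1', 'link2', 'link3']
```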

Regular expressions

Regular expressions deserve their own lesson, which we'll get to later. For now, just know that you can write them like this:

import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# body
# b

A list

If you pass in a list, Beautiful Soup returns everything that matches any element of the list:

print(soup.find_all(["a", "b"]))
# [<b>The Dormouse's story</b>, <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
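One detail worth noting (you can see it in the output above, where the <b> tag comes first): whatever order the names appear in the list, find_all returns matches in document order. A minimal sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<b>bold</b><i>italic</i><u>underline</u>", 'html.parser')

# The order of names in the list does not matter;
# results always come back in document order.
names = [tag.name for tag in soup.find_all(['u', 'b'])]
print(names)
# ['b', 'u']
```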

True

True matches any tag. The code below finds every tag in the document, but it does not return string nodes:

for tag in soup.find_all(True):
    print(tag.name)
    
# html
# head
# title
# body
# p
# b
# p
# a
# a
# a
# p
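If you do want the string nodes that find_all(True) skips, find_all also accepts a string=True argument (spelled text=True in older bs4 versions), which collects the text nodes instead of the tags. A small sketch contrasting the two:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p class='title'><b>The Dormouse's story</b></p>", 'html.parser')

# find_all(True) yields every tag, never the text between them.
tag_names = [tag.name for tag in soup.find_all(True)]

# string=True collects the NavigableString (text) nodes instead.
text_nodes = soup.find_all(string=True)
print(tag_names, text_nodes)
# ['p', 'b'] ["The Dormouse's story"]
```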

A method (function)

If none of the filters above is suitable, you can define a method instead. It takes a tag as its only argument and returns True when the tag matches:

def has_class_but_no_id(tag):
    return tag.has_attr('class') and not tag.has_attr('id')

print(soup.find_all(has_class_but_no_id))

# [<p class="title"><b>The Dormouse's story</b></p>, <p class="story">Once upon a time there were three
# little sisters; and their names were
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
# and they lived at the bottom of a well.</p>, <p class="story">...</p>]
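Functions aren't limited to the first positional argument: find_all also accepts a function as the value of an attribute keyword, in which case the function receives the attribute's value (or None when the tag lacks the attribute) instead of the whole tag. A sketch with a hypothetical helper:

```python
from bs4 import BeautifulSoup

html_doc = """
<a href="http://example.com/elsie" id="link1">Elsie</a>
<a href="http://example.org/lacie" id="link2">Lacie</a>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

# Hypothetical helper: the function gets the href value itself,
# or None for tags that have no href attribute.
def points_to_example_com(href):
    return href is not None and 'example.com' in href

matches = soup.find_all(href=points_to_example_com)
print([a['id'] for a in matches])
# ['link1']
```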

find_all() covers a lot of ground, so take some time to digest this part. We'll pick it up in the next installment.

Writing this up takes effort, so feel free to leave a comment, bookmark the post, or join the group chat to learn and improve together.


