Python Web Scraping: Requests Basics
2021/11/24 17:12:47
This article introduces basic usage of the Requests library for Python web scraping. It should serve as a useful reference for solving common problems; interested programmers, read on with us!
Requests and Responses
Request Methods
import requests

# GET
r = requests.get('url')
# POST
r = requests.post('url')
# PUT
r = requests.put('url')
# DELETE
r = requests.delete('url')
# HEAD
r = requests.head('url')
# OPTIONS
r = requests.options('url')
Sending Data
Form data (AJAX)
import requests

data = {'key': 'value'}
r = requests.post('url', data=data)
URL query parameters
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('url', params=payload)
# print the final URL
print(r.url)
# url?key2=value2&key1=value1
Downloading Files
import requests

r = requests.get('https://github.com/favicon.ico')
with open('favicon.ico', 'wb') as f:
    f.write(r.content)
Responses
import requests

r = requests.get('url')
# status code
print(r.status_code)
# response body as text
print(r.text)
# response body as bytes
print(r.content)
# response body parsed as JSON. Note that even a failed request may still
# return a JSON-formatted string, so use r.status_code or r.raise_for_status()
# to check whether the request actually succeeded.
print(r.json())
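To make the advice in the comments above concrete: raise_for_status() raises requests.HTTPError for any 4xx/5xx status, so wrapping it in try/except is a common pattern before calling r.json(). A minimal sketch, built on a bare Response object so no real request is sent:

```python
import requests

# Construct a Response by hand just to illustrate raise_for_status();
# in real code this object comes back from requests.get()/post().
resp = requests.models.Response()
resp.status_code = 404

try:
    resp.raise_for_status()  # raises requests.HTTPError for 4xx/5xx
    data = resp.json()
except requests.HTTPError as e:
    print('request failed:', e)
```

A 2xx status passes through raise_for_status() silently, so the happy path falls straight through to the JSON parsing.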
Common Parameters
headers
import requests

headers = {
    'content-encoding': 'gzip',
    'transfer-encoding': 'chunked',
    'connection': 'close',
    'server': 'nginx/1.0.4',
    'x-runtime': '148ms',
    'etag': '"e1ca502697e5c9317743dc078f67693f"',
    'content-type': 'application/json'
}
r = requests.post('url', headers=headers)
cookies
import requests

cookies = ""
jar = requests.cookies.RequestsCookieJar()
for cookie in cookies.split(';'):
    key, value = cookie.split('=', 1)
    jar.set(key, value)
r = requests.post('url', cookies=jar)
proxies
import requests

proxies = {
    "http": "",
    "https": ""
}
r = requests.post('url', proxies=proxies)
timeout
import requests

# time out after 1 second
r = requests.post('url', timeout=1)
SSL Certificate Verification
import requests

# disable certificate verification
r = requests.post('url', verify=False)
Session
A Session object lets you persist certain parameters across requests.
import requests

s = requests.Session()
s.auth = ('username', 'password')
s.headers.update({'x-test': 'true'})
r = s.get('url')
Prepared Request
from requests import Request, Session

data = {}
headers = {}
s = Session()
req = Request('POST', 'url', data=data, headers=headers)
# represent the request as a data structure
prepped = s.prepare_request(req)
r = s.send(prepped)
print(r.text)
Authentication
import requests
from requests.auth import HTTPBasicAuth

r = requests.post('url', auth=HTTPBasicAuth('username', 'password'))
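Under the hood, HTTPBasicAuth simply attaches a base64-encoded Authorization header to the request. A minimal sketch that prepares the request locally to show this ('user'/'pass' and example.com are placeholders; nothing is actually sent over the network):

```python
from requests import Request
from requests.auth import HTTPBasicAuth

# Prepare the request locally; no network traffic is generated.
req = Request('GET', 'http://example.com',
              auth=HTTPBasicAuth('user', 'pass')).prepare()

# HTTPBasicAuth sets the standard header: 'Basic ' + base64('user:pass')
print(req.headers['Authorization'])  # Basic dXNlcjpwYXNz
```

Requests also accepts the shorthand auth=('username', 'password'), which is equivalent to passing HTTPBasicAuth.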
That concludes this introduction to Requests basics for Python web scraping. We hope the article is helpful, and thank you for your continued support of the site!