When writing spiders with scrapy you often run into POST requests, and different headers call for different ways of submitting the parameters. These are my pitfall notes.
Query String Parameters
This is the simplest case: the parameters can be appended directly to the URL. Everything after the `?` in the URL is the request parameters, separated by `&`.
```python
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36",
    'Accept': 'application/json, text/javascript, */*; q=0.01',
}

def start_requests(self):
    url = "https://s.taobao.com/search?q=鸿星尔克男鞋"
    yield Request(url=url, method='get', headers=self.headers,
                  callback=self.parse_link)
```
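When the query string carries non-ASCII text like the keyword above, it is safer to build it with the standard library's `urlencode`, which percent-encodes each parameter. A minimal sketch, independent of scrapy:

```python
from urllib.parse import urlencode

# Percent-encode the query parameters so non-ASCII characters
# (here a Chinese keyword) travel safely in the URL.
params = {"q": "鸿星尔克男鞋"}
url = "https://s.taobao.com/search?" + urlencode(params)
# The Chinese keyword becomes a %-escaped UTF-8 sequence.
```

The resulting `url` can be passed to `Request` exactly as in the snippet above.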
Form Data
Pass the parameters with scrapy.FormRequest and its formdata argument; any numbers in formdata must be converted to strings.
```python
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36",
    'Accept': 'application/json, text/javascript, */*; q=0.01',
    'Content-Type': 'application/x-www-form-urlencoded',
}

def parse_CSRF(self, response):
    url = "http://asdasdasdasdasdasdsd.com"
    form_data = {
        "offset": '0',
        "limit": '20',
        "site_id": '11111',
    }
    yield FormRequest(url=url, method='post', headers=self.headers,
                      formdata=form_data, callback=self.parse_link)
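The reason the values must be strings is that FormRequest url-encodes formdata into the request body as text. A quick stdlib sketch of what that body looks like:

```python
from urllib.parse import urlencode

# FormRequest serializes formdata into an application/x-www-form-urlencoded
# body; the encoder works on text, hence string values like '0' and '20'.
form_data = {"offset": "0", "limit": "20", "site_id": "11111"}
body = urlencode(form_data)
# body is "offset=0&limit=20&site_id=11111"
```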
Payload
Pass the parameters with Request and its body argument; the body must first be serialized with json.dumps(payload).
```python
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36",
    'Accept': 'application/json, text/javascript, */*; q=0.01',
    'Content-Type': 'application/json',
}

def start_requests(self):
    url = "http://www.xxxxxx.gov.cn/xxxxx/mailList"
    payload = {
        "pageNum": '1',
        "pageSize": '20',
        "params": {
            "phone": "",
            "searchCode": "",
        }
    }
    yield Request(url=url, method='post', headers=self.headers,
                  body=json.dumps(payload), callback=self.parse_link)
```
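Unlike formdata, json.dumps handles nested dictionaries directly, which is why this style suits payloads with sub-objects like `params`. A minimal sketch of the serialization step on its own:

```python
import json

# A nested payload survives JSON serialization intact;
# the server receives one JSON document as the request body.
payload = {
    "pageNum": "1",
    "pageSize": "20",
    "params": {"phone": "", "searchCode": ""},
}
body = json.dumps(payload)
restored = json.loads(body)
```

Remember that the `Content-Type: application/json` header must accompany a JSON body, or many servers will reject the request.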
JSON
Also passed with Request and the body argument; the body must be serialized with json.dumps(post_data).
```python
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36",
    'Accept': 'application/json, text/javascript, */*; q=0.01',
    'Content-Type': 'application/json',
}

def start_requests(self):
    url = "http://xxx.xxx.com/xxx"
    post_data = {
        "pageNum": 1,
        "pageSize": 20,
    }
    yield Request(url=url, method='post', headers=self.headers,
                  body=json.dumps(post_data), callback=self.parse_link)
```
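Note that a JSON body keeps native numeric types, so here `pageNum` and `pageSize` stay as integers rather than the strings formdata would require. A small sketch of that difference (scrapy also ships a `JsonRequest` class that, as I understand it, serializes a `data` dict and sets the JSON Content-Type for you, if you prefer not to call json.dumps by hand):

```python
import json

# JSON preserves integer types in the body, unlike url-encoded form data
# where every value must first be stringified.
post_data = {"pageNum": 1, "pageSize": 20}
body = json.dumps(post_data)
# body is '{"pageNum": 1, "pageSize": 20}' -- the numbers are unquoted.
```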