Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


web scraping - How to generate the start_urls dynamically in crawling?

I am crawling a site that has many start URLs of the form:

http://www.a.com/list_1_2_3.htm

I want to populate start_urls with URLs matching list_\d+_\d+_\d+.htm, and extract items from URLs matching node_\d+.htm during the crawl.

Can I use CrawlSpider for this? And how can I generate the start_urls dynamically while crawling?



1 Answer


The best way to generate URLs dynamically is to override the spider's start_requests method:

from scrapy.http import Request

def start_requests(self):
    # Read one start URL per line; open in text mode so each line is a str,
    # and strip the trailing newline before building the request.
    with open('urls.txt') as url_file:
        for url in url_file:
            yield Request(url.strip(), callback=self.parse)
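If the start URLs follow the numeric list_\d+_\d+_\d+.htm pattern rather than living in a file, they can also be built in code and fed to start_requests the same way. A minimal sketch with plain Python (the base URL and the index ranges below are hypothetical placeholders, not taken from the site):

```python
from itertools import product


def build_start_urls(base="http://www.a.com",
                     a_range=range(1, 3),
                     b_range=range(1, 3),
                     c_range=range(1, 3)):
    """Generate list_<a>_<b>_<c>.htm URLs for every combination of indices."""
    return [f"{base}/list_{a}_{b}_{c}.htm"
            for a, b, c in product(a_range, b_range, c_range)]


# Example: two URLs from a narrow set of ranges
urls = build_start_urls(a_range=range(1, 2),
                        b_range=range(1, 2),
                        c_range=range(2, 4))
print(urls)
# → ['http://www.a.com/list_1_1_2.htm', 'http://www.a.com/list_1_1_3.htm']
```

Inside a spider's start_requests, each generated URL would then be yielded as a Request, just like the file-based version above.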


...