When it comes to learning web scraping, understanding the approach is often more useful than copying code. Still, the famous line in computing, "Talk is cheap, show me the code", makes a fair point: code is more convincing than words alone. Below we walk through the source of a Python crawler to see how to scrape data from Douban's Top 250 movie list and store what it collects in a database.
Before running the program, create a database named "pachong" in MySQL.
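If you prefer to create the database from Python rather than the MySQL client, a minimal sketch with pymysql might look like the following (a local server and the root account are assumptions; substitute your own password). The full crawler script follows.
import pymysql

# Connect to the MySQL server itself, without selecting a database yet
conn = pymysql.connect(
    host = '127.0.0.1',
    port = 3306,
    user = 'root',
    password = '******',   # replace with your own password
    charset = 'utf8'
)
cursor = conn.cursor()
# Create the database the crawler will write into
cursor.execute('CREATE DATABASE IF NOT EXISTS pachong DEFAULT CHARACTER SET utf8')
conn.close()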
import pymysql
import requests
import re
#Fetch each Top 250 page and store the scraped data
def resp(listURL):
    #Connect to the database
    conn = pymysql.connect(
        host = '127.0.0.1',
        port = 3306,
        user = 'root',
        password = '******', #enter your own database password here
        database = 'pachong',
        charset = 'utf8'
    )
    #Create a database cursor
    cursor = conn.cursor()
    #Create the table t_movieTOP250 (execute the SQL statement)
    cursor.execute('create table t_movieTOP250(id INT PRIMARY KEY auto_increment NOT NULL ,movieName VARCHAR(20) NOT NULL ,pictrue_address VARCHAR(100))')
    #Douban tends to reject requests that carry no browser User-Agent, so send a minimal one
    headers = {'User-Agent': 'Mozilla/5.0'}
    try:
        # Scrape the data
        for urlPath in listURL:
            # Fetch the page source
            response = requests.get(urlPath, headers = headers)
            html = response.text
            # Regular expressions: capture the movie name and the poster (image address);
            # the ranking is covered by the auto-incremented id column
            namePat = r'alt="(.*?)" src='
            imgPat = r'src="(.*?)" class='
            res2 = re.compile(namePat)
            res3 = re.compile(imgPat)
            textList2 = res2.findall(html)
            textList3 = res3.findall(html)
            # Walk the matches and insert them into the database (parameterised query avoids quoting issues)
            for i in range(len(textList3)):
                cursor.execute('insert into t_movieTOP250(movieName,pictrue_address) VALUES(%s,%s)', (textList2[i], textList3[i]))
        #Commit the results
        conn.commit()
        print("Results committed")
    except Exception as e:
        #Roll back on any error
        conn.rollback()
        print("Data rolled back:", e)
    #Close the database connection
    conn.close()
#Build the URLs of all ten Top 250 pages
def page(url):
    urlList = []
    for i in range(10):
        num = str(25*i)
        pagePat = r'?start=' + num + '&filter='
        urL = url + pagePat
        urlList.append(urL)
    return urlList
if __name__ == '__main__':
    url = r"https://movie.douban.com/top250"
    listURL = page(url)
    resp(listURL)
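To see what the two regular expressions capture, here is a small standalone sketch run against a hand-written HTML fragment; the fragment only imitates the shape of a Douban Top 250 entry and is an assumption, not the live page:
import re

# Hand-written fragment shaped like one Top 250 entry (an assumption, not the real page)
html = '<img width="100" alt="Movie Name" src="https://example.com/poster1.jpg" class="">'

namePat = r'alt="(.*?)" src='
imgPat = r'src="(.*?)" class='
print(re.findall(namePat, html))   # ['Movie Name']
print(re.findall(imgPat, html))    # ['https://example.com/poster1.jpg']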
Once the script finishes, the scraped results can be checked by querying the t_movieTOP250 table.
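As a minimal check, assuming the same local connection settings as in the script above, you could print the first few stored rows like this:
import pymysql

conn = pymysql.connect(host = '127.0.0.1', port = 3306, user = 'root',
                       password = '******', database = 'pachong', charset = 'utf8')
cursor = conn.cursor()
# The auto-incremented id doubles as the ranking, so order by it
cursor.execute('select id, movieName, pictrue_address from t_movieTOP250 order by id limit 10')
for row in cursor.fetchall():
    print(row)
conn.close()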
Summary
That covers how to scrape Douban data with Python. For more learning material on Python crawlers, keep an eye on the related articles on W3Cschool!