1. Scraping Baidu Images
Scraping Baidu Images presents two difficulties:
(1) There is no pagination; the page keeps fetching new img elements as you scroll down. I have not solved this yet. Reportedly it can be handled by simulating browser actions with the selenium module, which I have not tried, so for now only the images returned by the initial request can be scraped.
(2) Initial experiments showed that many of the image URLs download as only 100-odd bytes that cannot be opened as images. Opening these URLs in a browser shows only a small preview, and refreshing then produces a 403 Forbidden error. These are probably just Baidu's intermediate URLs. So where are the real URLs? Here is a snippet of Baidu Images' HTML:
<img class="main_img img-hover" data-imgurl="http://img1./it/u=1474139545,1219393896&fm=21&gp=0.jpg" src="http://img1./it/u=1474139545,1219393896&fm=21&gp=0.jpg" style="width: 206px; height: 206px; background-color: rgb(200, 188, 152);"><li class="imgitem" style="width: 192px; height: 200px; margin-right: 5px; margin-bottom: 5px;" data-objurl="/gjfs01/M00/89/F4/CgEHklWmXzvov6K-AABrKnXXM9U624_600-0_6-0.jpg" data-thumburl="http://img4./it/u=2997798573,880478713&fm=21&gp=0.jpg" data-fromurl="ippr_z2C$qAzdH3FAzdH3Fzw5zi7wg2_z&e3B2wg3t_z&e3Bv54AzdH3F257AzdH3F8mnnncc00mx_z&e3Bip4" data-fromurlhost="" data-ext="jpg" data-saved="0" data-pi="0" data-specialtype="0" data-cs="2997798573,880478713" data-width="504" data-height="541" data-title="<strong>宠物</strong>照片" data-personalized="0"><div class="imgbox"><a href="/search/detail?ct=503316480&z=undefined&tn=baiduimagedetail&ipn=d&word=%E5%AE%A0%E7%89%A9&step_word=&ie=utf-8&in=&cl=2&lm=-1&st=undefined&cs=2997798573,880478713&os=2697691740,1594419755&simid=0,0&pn=3&rn=1&di=17885681440&ln=1984&fr=&fmq=1477191469084_R&fm=&ic=undefined&s=undefined&se=&sme=&tab=0&width=&height=&face=undefined&is=0,0&istype=0&ist=&jit=&bdtype=0&adpicid=0&pi=0&gsm=0&objurl=http%3A%2F%%2Fgjfs01%2FM00%2F89%2FF4%2FCgEHklWmXzvov6K-AABrKnXXM9U624_600-0_6-0.jpg&rpstart=0&rpnum=0&adpicid=0" target="_blank" style="display: block; width: 191px; height: 206px;" name="pn3" class="div_2997798573,880478713"><img class="main_img img-hover" data-imgurl="http://img4./it/u=2997798573,880478713&fm=21&gp=0.jpg" src="http://img4./it/u=2997798573,880478713&fm=21&gp=0.jpg" style="width: 191px; height: 206px; background-color: rgb(189, 171, 169);"></a></div><div class="hover" title="图片来源: 图片描述:宠物照片"><div class="ct" style="left: 0px; top: 50px;"><div style="padding-top: 7px;"><a class="title" target="_blank" href="/gou/1633355776x.htm"><strong>宠物</strong>照片...</a><br><a class="size">504x541</a></div><a class="dutu" 
href="/n/pc_search?queryImageUrl=http%3A%2F%2Fimg4.%2Fit%2Fu%3D2997798573%2C880478713%26fm%3D21%26gp%3D0.jpg&word=%E5%AE%A0%E7%89%A9&fm=searchresult&uptype=button" target="_blank" title="按图片搜索"></a><a target="_self" class="down" οnclick="return p(null,390,{newp:42});" title="下载原图" href="/search/down?tn=download&ipn=dwnl&word=download&ie=utf8&fr=result&url=http%3A%2F%%2Fgjfs01%2FM00%2F89%2FF4%2FCgEHklWmXzvov6K-AABrKnXXM9U624_600-0_6-0.jpg&thumburl=http%3A%2F%2Fimg4.%2Fit%2Fu%3D2997798573%2C880478713%26fm%3D21%26gp%3D0.jpg"></a></div></div></li>
The markup provides roughly two kinds of URL:
data-imgurl: http://img1./it/u=1474139545,1219393896&fm=21&gp=0.jpg
data-objurl: /gjfs01/M00/89/F4/CgEHklWmXzvov6K-AABrKnXXM9U624_600-0_6-0.jpg
Opening both URLs shows the very same picture, and the first one is clearly just a preview. So when collecting image URLs we pick up redundant imgurl entries, and these need to be filtered out.
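As a minimal sketch of that filtering step, the two attribute types can be pulled apart with a regular expression. The HTML string below is a trimmed, hypothetical stand-in for Baidu's real markup:

```python
# Separate the real image URLs (data-objurl) from Baidu's intermediate
# preview URLs (data-imgurl). The html snippet is hypothetical.
import re

html = ('<li class="imgitem" data-objurl="http://example.com/real/photo1.jpg" '
        'data-thumburl="http://img4.example/it/u=2997798573,880478713&fm=21.jpg">'
        '<img data-imgurl="http://img4.example/it/u=2997798573,880478713&fm=21.jpg">')

# data-objurl points at the original image; data-imgurl is the preview to discard
real_urls = re.findall(r'data-objurl="(.*?)"', html)
preview_urls = re.findall(r'data-imgurl="(.*?)"', html)

print(real_urls)     # the URLs worth downloading
print(preview_urls)  # the redundant preview URLs
```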
Several ways to download an image in Python:
(1) Call urllib.urlretrieve directly
urllib.urlretrieve(url, filename)
(2) Write the data to a file object
conn = urllib2.urlopen(all_jpg[0])
f = open('123', 'wb')
f.write(conn.read())
f.close()
(3) Write the response content directly with requests
import requests
r = requests.get('http://www.solarspace.co.uk/PlanetPics/Neptune/NeptuneAlt1.jpg')
with open('##.jpg', 'wb') as fout:
    fout.write(r.content)
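The snippets above are Python 2 (urllib/urllib2). Below is a rough Python 3 sketch of the first two patterns, exercised against a local file:// URL so it runs without network access; all file names here are arbitrary:

```python
# Python 3 equivalents of download patterns (1) and (2), demonstrated
# with a local file:// URL instead of a live image (hypothetical paths).
import os
import tempfile
import urllib.request

# Prepare a local "remote" resource so the example needs no network.
src = os.path.join(tempfile.gettempdir(), 'fake_image.jpg')
with open(src, 'wb') as f:
    f.write(b'\xff\xd8\xff' + b'payload')  # fake JPEG bytes
url = 'file:///' + src.replace('\\', '/').lstrip('/')

# (1) urlretrieve in one call
dst1 = os.path.join(tempfile.gettempdir(), 'copy1.jpg')
urllib.request.urlretrieve(url, dst1)

# (2) urlopen + manual file write
dst2 = os.path.join(tempfile.gettempdir(), 'copy2.jpg')
with urllib.request.urlopen(url) as conn, open(dst2, 'wb') as f:
    f.write(conn.read())

print(open(dst1, 'rb').read() == open(dst2, 'rb').read())  # True
```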
Here is the code:
#-*-coding:utf-8-*-
'''created by zwg in -10-17'''
import urllib2, urllib
import os, re

url = '/search/index?tn=baiduimage&ct=26592&lm=' \
      '-1&cl=2&ie=gbk&word=%D3%DE%B4%C0&hs=0&fr=ala&ori_query=%E6%84%9A%E8' \
      '%A0%A2&ala=0&alatpl=sp&pos=0'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko)'
                         ' Chrome/45.0.2454.101 Safari/537.36'}
request = urllib2.Request(url=url, headers=headers)
html = urllib2.urlopen(request).read()
file1 = open('baidu.html', 'w+')
file1.write(html)
file1.close()

pattern = re.compile(r'http://.+?\.jpg')
all_image = pattern.findall(html)  # a large portion of these are Baidu's intermediate urls and are useless

path = 'D:\\Python\\web_crawler\\baidu_photo'
if not os.path.exists(path):
    os.mkdir(path)

import time
t1 = time.time()
# one way to download and save the images
p = re.compile('u=')
k = 0
for i in all_image:
    if len(p.findall(i)) == 0:  # skip Baidu's intermediate urls, which would fail to download
        try:
            k = k + 1
            urllib.urlretrieve(i, path + '\\' + str(k) + '.jpg')
            print i
        except:
            pass
t2 = time.time()
print 'time elapsed:', t2 - t1
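The script above targets Python 2. A Python 3 sketch of the same URL-filtering logic, run against a canned HTML string rather than a live request (all URLs here are hypothetical):

```python
# Python 3 sketch of the filtering step: collect .jpg URLs, then drop
# the ones containing 'u=', which marks Baidu's intermediate preview
# URLs. The html string is a hypothetical stand-in for the fetched page.
import re

html = ('<img src="http://img1.example/it/u=1474139545,1219393896&fm=21.jpg">'
        '<li data-objurl="http://example.com/photos/real_picture.jpg">')

all_image = re.findall(r'http://.+?\.jpg', html)
# keep only URLs without 'u=' (i.e. not intermediate preview links)
real = [u for u in all_image if 'u=' not in u]

print(all_image)
print(real)
```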
2. Scraping Baidu Tieba Images
This one is almost trivial: fetch the page, apply a regular expression, and you get the real URLs directly.
#-*-coding:utf-8 -*-
'''created by zwg in -10-17'''
# download the jpg images on a Baidu Tieba page
import re
import urllib2
import urllib
import os

path = 'C:\\Users\\zhangweiguo\\Desktop\\jpg'
if not os.path.exists(path):
    os.mkdir(path)

def gethtml(url):  # read the html into a string
    page = urllib2.urlopen(url)
    html = page.read()
    return html

def getimg(html):  # regular-expression search
    re_img1 = re.compile("http://imgsrc.baidu.*?jpg")
    img1 = re_img1.findall(html)
    re_img2 = re.compile("http://imgsrc.baidu.*?png")
    img2 = re_img2.findall(html)
    img = []
    img.extend(img1)
    img.extend(img2)
    return img

html = gethtml('/p/4530493052')
img = getimg(html)
# download and save directly
x = 1
for i in img:
    urllib.urlretrieve(i, path + '\\%s.jpg' % x)
    x = x + 1
    print x
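A Python 3 sketch of the same extraction, run on a canned HTML fragment (the image URLs below are hypothetical):

```python
# Python 3 sketch of getimg(): pull every imgsrc.baidu... jpg/png URL
# out of the page source. The html fragment is hypothetical.
import re

html = ('<img class="BDE_Image" src="http://imgsrc.baidu.example/forum/pic1.jpg">'
        '<img class="BDE_Image" src="http://imgsrc.baidu.example/forum/pic2.png">')

def getimg(html):
    # one pattern with an alternation replaces the two separate passes
    return re.findall(r'http://imgsrc\.baidu.*?(?:jpg|png)', html)

print(getimg(html))
```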