
DeepLearning | Zero shot learning: AWA2 image dataset preprocessing

Date: 2021-07-22 23:13:10


Since I plan to write a series of posts on zero-shot learning algorithms, I need the AWA2 dataset for the demos.

Originally I intended to show only the algorithm code, but that alone would make it hard for beginners to reproduce the results, so here I also describe my data preprocessing. A download link for the processed data is given at the end of this post, and later posts will use the preprocessing described here.

I will give a detailed introduction to the AWA2 dataset; a good understanding of the dataset itself also helps with learning and implementing the algorithms.

AWA2 image dataset download: http://cvml.ist.ac.at/AwA2/

The dataset is fairly large (about 13 GB), so the download may take some time.

Here are a few more posts that introduce and reproduce zero-shot learning algorithms:

DeepLearning | Semantic Autoencoder for Zero Shot Learning (paper, algorithm, dataset, code)

DeepLearning | Relational Knowledge Transfer for Zero Shot Learning (paper, algorithm, dataset, code)

Contents

1. AWA2 Dataset Overview
    1.1 classes.txt
    1.2 JPEGImages
    1.3 licenses
    1.4 predicate-matrix-binary.txt
    1.5 predicate-matrix-continuous.txt
    1.6 predict-matrix.png
    1.7 predicate.txt
    1.8 README-attributes.txt and README-images.txt
    1.9 testclasses.txt
    1.10 trainclasses.txt
2. Dataset Processing
    2.1 Reading the images
    2.2 Preparing attribute labels
    2.3 Extracting image features with a pretrained resnet101
3. The Processed Data
4. Resource Downloads

1. AWA2 Dataset Overview

This is an animal recognition dataset released by Y. Xian, C. H. Lampert, et al. together with the paper Zero-Shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly. It contains the following files.

Below I go through these files one by one.

1.1 classes.txt

This file lists the animal classes in the dataset, 50 in total. Note that I modified it slightly: the original file does not use the '+' sign in some class names (for example, class 6), and I added it so that the naming style is consistent. Some files in the original dataset use '+' in class names and some do not, so I unified them here.

1.2 JPEGImages

This folder contains all the images in the dataset; each subfolder holds the images of one animal class.

1.3 licenses

This folder contains the license information for each image; we do not need it during preprocessing.

1.4 predicate-matrix-binary.txt

This file records the 85 attributes of each of the 50 animal classes as a 50x85 matrix, where 1 means the class has the attribute and 0 means it does not.
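
To get a feel for the file, it can be loaded and checked like this (a minimal sketch; the space delimiter matches how the file is read in the feature-extraction script in section 2.3):

import pandas as pd

path = '/Users/zhuxiaoxiansheng/Desktop/Animals_with_Attributes2/'

# load the 50x85 binary class-attribute matrix (space-separated, no header)
attribute_bmatrix = pd.read_csv(path+'predicate-matrix-binary.txt', header=None, sep=' ')
print(attribute_bmatrix.shape)    # expected: (50, 85)
print(attribute_bmatrix.iloc[0])  # attribute vector of the first class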

1.5 predicate-matrix-continuous.txt

Like predicate-matrix-binary.txt, this file records the 85 attributes of each of the 50 animal classes, except that the attributes are described with continuous values instead of 0/1.

1.6 predict-matrix.png

A visualization of the file predicate-matrix-binary.txt.

1.7 predicate.txt

This file lists the names of the 85 attributes (predicates).

1.8 README-attributes.txt and README-images.txt

These two README files are also not needed for our purposes.

1.9 testclasses.txt

This file specifies which animals are the test classes; there are 10 test classes.

1.10 trainclasses.txt

This file specifies which animals are the training classes; there are 40 training classes.

That covers all the files in the dataset. In short, it contains 37,322 images of 50 animal classes: 30,337 images from the 40 training classes and 6,985 images from the 10 test classes.
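
If you want to verify these numbers yourself, the following sketch (assuming the dataset is extracted under the same path used in section 2) counts the files in each class folder:

import os
import pandas as pd

path = '/Users/zhuxiaoxiansheng/Desktop/Animals_with_Attributes2/'

def count_images(class_file):
    # sum the number of image files in each class's JPEGImages subfolder
    classes = pd.read_csv(path + class_file, header=None).iloc[:, 0].tolist()
    return sum(len(os.listdir(path + 'JPEGImages/' + c)) for c in classes)

print(count_images('trainclasses.txt'))  # expected: 30337
print(count_images('testclasses.txt'))   # expected: 6985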

2. Dataset Processing

2.1 Reading the images

In this step we resize every image to 224x224x3 and build the corresponding labels for the dataset. The code is as follows:

import pandas as pd
import os
import numpy as np
import cv2
from PIL import Image

image_size = 224  # target image size
path = '/Users/zhuxiaoxiansheng/Desktop/Animals_with_Attributes2/'  # dataset root path

classname = pd.read_csv(path+'classes.txt', header=None, sep='\t')
dic_class2name = {classname.index[i]: classname.loc[i][1] for i in range(classname.shape[0])}
dic_name2class = {classname.loc[i][1]: classname.index[i] for i in range(classname.shape[0])}
# two dictionaries holding the label information: numeric label -> class name, and class name -> numeric label

# read the images of one class from a directory; read_num sets how many images to read per class,
# and every image is resized to image_size x image_size
def load_Img(imgDir, read_num='max'):
    imgs = os.listdir(imgDir)
    imgs = np.ravel(pd.DataFrame(imgs).sort_values(by=0).values)
    if read_num == 'max':
        imgNum = len(imgs)
    else:
        imgNum = read_num
    data = np.empty((imgNum, image_size, image_size, 3), dtype="float32")
    print(imgNum)
    for i in range(imgNum):
        img = Image.open(imgDir+"/"+imgs[i])
        arr = np.asarray(img, dtype="float32")
        # when height and width differ, pad the shorter side so the image becomes square
        if arr.shape[1] > arr.shape[0]:
            arr = cv2.copyMakeBorder(arr, int((arr.shape[1]-arr.shape[0])/2), int((arr.shape[1]-arr.shape[0])/2), 0, 0, cv2.BORDER_CONSTANT, value=0)
        else:
            arr = cv2.copyMakeBorder(arr, 0, 0, int((arr.shape[0]-arr.shape[1])/2), int((arr.shape[0]-arr.shape[1])/2), cv2.BORDER_CONSTANT, value=0)
        arr = cv2.resize(arr, (image_size, image_size))
        if len(arr.shape) == 2:
            # grayscale image: replicate the single channel into 3 channels
            temp = np.empty((image_size, image_size, 3))
            temp[:,:,0] = arr
            temp[:,:,1] = arr
            temp[:,:,2] = arr
            arr = temp
        data[i,:,:,:] = arr
    return data, imgNum

# read the whole dataset
def load_data(train_classes, test_classes, num):
    read_num = num
    traindata_list = []
    trainlabel_list = []
    testdata_list = []
    testlabel_list = []
    for item in train_classes.iloc[:,0].values.tolist():
        tup = load_Img(path+'JPEGImages/'+item, read_num=read_num)
        traindata_list.append(tup[0])
        trainlabel_list += [dic_name2class[item]]*tup[1]
    for item in test_classes.iloc[:,0].values.tolist():
        tup = load_Img(path+'JPEGImages/'+item, read_num=read_num)
        testdata_list.append(tup[0])
        testlabel_list += [dic_name2class[item]]*tup[1]
    return np.row_stack(traindata_list), np.array(trainlabel_list), np.row_stack(testdata_list), np.array(testlabel_list)

train_classes = pd.read_csv(path+'trainclasses.txt', header=None)
test_classes = pd.read_csv(path+'testclasses.txt', header=None)
traindata, trainlabel, testdata, testlabel = load_data(train_classes, test_classes, num='max')
print(traindata.shape, trainlabel.shape, testdata.shape, testlabel.shape)

# save images and labels as numpy arrays so they can be loaded directly next time
np.save(path+'AWA2_224_traindata.npy', traindata)
np.save(path+'AWA2_224_testdata.npy', testdata)
np.save(path+'AWA2_trainlabel.npy', trainlabel)
np.save(path+'AWA2_testlabel.npy', testlabel)

2.2 Preparing attribute labels

We have just read the data and created numeric labels from 0 to 49, but numeric labels alone are not enough for zero-shot learning: we also need the attribute label associated with each image.

The code below builds labels from the continuous attributes. The same approach can be used to build labels from the binary (0/1) attributes, or to normalize the continuous attributes to the range 0-1 and use those as labels (see the sketch after the code below). I will not repeat that code here; all the processed labels are provided in the links at the end.

import pandas as pd
import numpy as np

path = '/Users/zhuxiaoxiansheng/Desktop/Animals_with_Attributes2/'

def make_attribute_label(trainlabel, testlabel):
    # class-attribute matrix with continuous values, one row per class
    attribut_bmatrix = pd.read_csv(path+'predicate-matrix-continuous.txt', header=None, sep=',')
    # use the numeric class label as the index, then join to pick up each image's attribute row
    trainlabel = pd.DataFrame(trainlabel).set_index(0)
    testlabel = pd.DataFrame(testlabel).set_index(0)
    return trainlabel.join(attribut_bmatrix), testlabel.join(attribut_bmatrix)

trainlabel = np.load(path+'AWA2_trainlabel.npy')
testlabel = np.load(path+'AWA2_testlabel.npy')
train_attributelabel, test_attributelabel = make_attribute_label(trainlabel, testlabel)

np.save(path+'AWA2_train_continuous_attributelabel.npy', train_attributelabel.values)
np.save(path+'AWA2_test_continuous_attributelabel.npy', test_attributelabel.values)
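
For the 0-1 normalized variant mentioned above, a minimal sketch could look like the following; column-wise min-max scaling is my own assumption, since the exact normalization is not shown in this post:

import pandas as pd
import numpy as np

path = '/Users/zhuxiaoxiansheng/Desktop/Animals_with_Attributes2/'

# min-max scale each attribute column of the continuous matrix to [0, 1]
attribut_cmatrix = pd.read_csv(path+'predicate-matrix-continuous.txt', header=None, sep=',')
attribut_01matrix = (attribut_cmatrix - attribut_cmatrix.min()) / (attribut_cmatrix.max() - attribut_cmatrix.min())

# join the scaled attribute rows onto the per-image numeric labels, as in make_attribute_label
trainlabel_df = pd.DataFrame(np.load(path+'AWA2_trainlabel.npy')).set_index(0)
testlabel_df = pd.DataFrame(np.load(path+'AWA2_testlabel.npy')).set_index(0)
np.save(path+'AWA2_train_continuous_01_attributelabel.npy', trainlabel_df.join(attribut_01matrix).values)
np.save(path+'AWA2_test_continuous_01_attributelabel.npy', testlabel_df.join(attribut_01matrix).values)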

2.3 Extracting image features with a pretrained resnet101

In zero-shot learning we usually do not work with the raw images directly; it is more convenient to use features extracted by a convolutional network.

import numpy as np
import pandas as pd
import torch
from torchvision import models, transforms
from torch.autograd import Variable
from torch.utils.data import DataLoader, Dataset
from tqdm import tqdm
from torch import nn
import warnings
warnings.filterwarnings("ignore")

path = '/Users/zhuxiaoxiansheng/Desktop/Animals_with_Attributes2/'

classname = pd.read_csv(path+'classes.txt', header=None, sep='\t')
dic_class2name = {classname.index[i]: classname.loc[i][1] for i in range(classname.shape[0])}
dic_name2class = {classname.loc[i][1]: classname.index[i] for i in range(classname.shape[0])}

# build the attribute table of the 10 test classes
def make_test_attributetable():
    attribut_bmatrix = pd.read_csv(path+'predicate-matrix-binary.txt', header=None, sep=' ')
    test_classes = pd.read_csv(path+'testclasses.txt', header=None)
    test_classes_flag = []
    for item in test_classes.iloc[:,0].values.tolist():
        test_classes_flag.append(dic_name2class[item])
    return attribut_bmatrix.iloc[test_classes_flag,:]

class dataset(Dataset):
    def __init__(self, data, label, transform):
        super().__init__()
        self.data = data
        self.label = label
        self.transform = transform
    def __getitem__(self, index):
        return self.transform(self.data[index]), self.label[index]
    def __len__(self):
        return self.data.shape[0]

class FeatureExtractor(nn.Module):
    def __init__(self, submodule, extracted_layers):
        super(FeatureExtractor, self).__init__()
        self.submodule = submodule
        self.extracted_layers = extracted_layers
    def forward(self, x):
        outputs = []
        for name, module in self.submodule._modules.items():
            if name == "fc":
                x = x.view(x.size(0), -1)
            x = module(x)
            if name in self.extracted_layers:
                outputs.append(x)
        return outputs

traindata = np.load(path+'AWA2_224_traindata.npy')
trainlabel = np.load(path+'AWA2_trainlabel.npy')
train_attributelabel = np.load(path+'AWA2_train_attributelabel.npy')
testdata = np.load(path+'AWA2_224_testdata.npy')
testlabel = np.load(path+'AWA2_testlabel.npy')
test_attributelabel = np.load(path+'AWA2_test_attributelabel.npy')
print(traindata.shape, trainlabel.shape, train_attributelabel.shape)
print(testdata.shape, testlabel.shape, test_attributelabel.shape)

data_tf = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

train_dataset = dataset(traindata, trainlabel, data_tf)
test_dataset = dataset(testdata, testlabel, data_tf)
train_loader = DataLoader(train_dataset, batch_size=1, shuffle=False)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)

model = models.resnet101(pretrained=True)  # use a pretrained resnet101
if torch.cuda.is_available():
    model = model.cuda()
model.eval()

exact_list = ['avgpool']  # take the output of the last pooling layer as the image feature
myexactor = FeatureExtractor(model, exact_list)

train_feature_list = []
for data in tqdm(train_loader):
    img, label = data
    if torch.cuda.is_available():
        with torch.no_grad():
            img = Variable(img).cuda()
            label = Variable(label).cuda()
    else:
        with torch.no_grad():
            img = Variable(img)
            label = Variable(label)
    feature = myexactor(img)[0]
    feature = feature.reshape(feature.shape[0], feature.shape[1])
    train_feature_list.append(feature.detach().cpu().numpy())
trainfeatures = np.row_stack(train_feature_list)

test_feature_list = []
for data in tqdm(test_loader):
    img, label = data
    if torch.cuda.is_available():
        with torch.no_grad():
            img = Variable(img).cuda()
            label = Variable(label).cuda()
    else:
        with torch.no_grad():
            img = Variable(img)
            label = Variable(label)
    feature = myexactor(img)[0]
    feature = feature.reshape(feature.shape[0], feature.shape[1])
    test_feature_list.append(feature.detach().cpu().numpy())
testfeatures = np.row_stack(test_feature_list)

print(trainfeatures.shape, testfeatures.shape)
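
The script above only prints the feature shapes. To reuse the 2048-dimensional features later (they correspond to the resnet101_trainfeatures / resnet101_testfeatures downloads in the next section), they can be saved in the same way as the earlier arrays; the file names below are my own choice, and the snippet continues directly from the script above:

# save the extracted resnet101 features as numpy arrays (file names are illustrative)
np.save(path+'AWA2_resnet101_trainfeatures.npy', trainfeatures)
np.save(path+'AWA2_resnet101_testfeatures.npy', testfeatures)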

3. The Processed Data

The basic preprocessing steps and data have been covered above. In later posts on the algorithms, this data will be used directly. Download links for the processed data:

AWA2_trainlabel /s/1d08IninWz7FATJrDL6DsDA

AWA2_testlabel /s/1j-GOTYMB2DfaLPH_FziRxQ

resnet101_trainfeatures /s/10OwVXFVDJMneNFNZlYygew

resnet101_testfeatures /s/1UT5roIJm9dGb3BMr1mVyQQ

AWA2_train_attributelabel.npy /s/1xgzJBwCRiOjOKSm13IY3kQ

AWA2_test_attributelabel.npy /s/1UwtQmDlFJTLvFc71xkFZ6A

AWA2_train_continuous_01_attributelabel.npy /s/1_31wEQZO81-8kJjANFwdeA

AWA2_test_continuous_01_attributelabel.npy /s/1at2El02-JCmD-1SrKhQMeA

4. Resource Downloads

Search for "老和山算法指南" on WeChat for more download links and the technical discussion group.

If you have questions, feel free to message me; I generally reply to readers who like and follow. Let's keep at it, and thanks for the support.
