
Python OpenCV gesture recognition based on contour detection (rock, paper, scissors)

Posted: 2019-01-26 06:35:37


Preface

Recently my teacher asked us to build a "rock, paper, scissors" computer-vision project. My first plan was to hastily grab a pre-trained hand-gesture recognition model and plug it in directly; I found this one:

git

But its results were terrible in practice, and since I don't know how to train a model myself, I decided to solve the problem with classical image processing instead. The final result is passable.

Result

You can play a simple game of rock-paper-scissors against it, so you no longer have to rely on luck.

Pipeline

Skin detection

Convert the image to the YCrCb color space. Remarkably, skin tones cluster inside an elliptical region of the Cr-Cb plane, so this ellipse can be used to segment the hand quite cleanly.

Reference:

REFERENCE

The double for loop in that reference wastes a lot of time, so I speed it up by sampling only every 3rd pixel and then dilating the resulting mask.
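For reference, the per-pixel ellipse lookup can also be vectorized with NumPy fancy indexing instead of looping at all; this is only a minimal sketch of that alternative (the function name is mine), not what the code below does:

import cv2
import numpy as np

def skin_mask_vectorized(bgr):
    # elliptical Cr-Cb skin model, same parameters as in the full listing below
    skinCrCbHist = np.zeros((256, 256), dtype=np.uint8)
    cv2.ellipse(skinCrCbHist, (113, 155), (23, 15), 43, 0, 360, (255, 255, 255), -1)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCR_CB)
    cr = ycrcb[:, :, 1]
    cb = ycrcb[:, :, 2]
    # look up every pixel's (Cr, Cb) pair in the ellipse image at once
    return skinCrCbHist[cr, cb]  # uint8 mask, 255 where the pixel is skin-colored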

Get the contours and find the largest one

cnts, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
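The two-value unpacking above matches OpenCV 2.x and 4.x; OpenCV 3.x returns three values (image, contours, hierarchy), so adjust it if needed. The largest contour by area is then taken as the hand, exactly as in the full listing below:

if len(cnts) == 0:
    segmented = None  # nothing skin-colored in this frame
else:
    segmented = max(cnts, key=cv2.contourArea)  # largest contour = hand region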

Get the convex hull. The contour is first simplified with approxPolyDP (epsilon is set to 0.1% of the arc length) to smooth out noise, and then its convex hull is computed:

epsilon = 0.001*cv2.arcLength(segmented,True)

segmented = cv2.approxPolyDP(segmented,epsilon,True)

convex_hull = cv2.convexHull(segmented)

Compute the area ratio

Using the variable names from the code below, s1 is the area of the convex hull and s2 is the area of the (simplified) contour.

a = s1 / s2

The gesture is determined from this area ratio.

Typically rock < scissors < paper: a fist is nearly convex, so its hull barely exceeds its contour, while spread fingers leave large gaps under the hull.
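Put together as a small self-contained helper (the function name is mine; the 1.2 and 1.4 thresholds are the ones used in the full listing below):

import cv2

def classify(segmented, convex_hull):
    # s1: convex hull area, s2: (simplified) contour area
    s1 = cv2.contourArea(convex_hull)
    s2 = cv2.contourArea(segmented)
    a = s1 / s2
    if a < 1.2:
        return "rock"       # nearly convex: a fist
    elif a < 1.4:
        return "scissors"   # one large gap between the two raised fingers
    else:
        return "paper"      # spread fingers leave several gaps under the hull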

Python code

import cv2
import numpy as np
import time


class gesture:
    def __init__(self):
        # pics to show to play the rock-paper-scissors game
        self.rock = cv2.imread("data/rock.jpg")
        self.paper = cv2.imread("data/paper.jpg")
        self.scissors = cv2.imread("data/scissors.jpg")
        self.clone = None
        self.this_time = time.time()
        self.fps = 0

    # print the BGR value of the clicked pixel (debugging helper)
    def mouse_event(self, event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:
            print('PIX:', x, y)
            print('BGR:', self.clone[y, x])
            global ax
            global ay
            ax = x
            ay = y
            cv2.circle(self.clone, (x, y), 3, (255, 255, 0), 1)  # draw circle

    def ellipse_detect(self, image):
        # skin detection: skin tones fall inside an ellipse in the Cr-Cb plane
        img = image
        skinCrCbHist = np.zeros((256, 256), dtype=np.uint8)
        cv2.ellipse(skinCrCbHist, (113, 155), (23, 15), 43, 0, 360, (255, 255, 255), -1)
        YCRCB = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
        (y, cr, cb) = cv2.split(YCRCB)
        skin = np.zeros(cr.shape, dtype=np.uint8)
        (x, y) = cr.shape
        # accelerate: only test every 3rd pixel, then dilate to fill the gaps
        for i in range(0, int(x / 3)):
            i = 3 * i
            for j in range(0, int(y / 3)):
                j = j * 3
                CR = YCRCB[i, j, 1]
                CB = YCRCB[i, j, 2]
                if skinCrCbHist[CR, CB] > 0:
                    skin[i, j] = 255
        kernel = np.ones((3, 3), np.uint8)
        skin = cv2.dilate(skin, kernel, iterations=1)
        # cv2.imshow("cutout", skin)
        cnts, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if len(cnts) == 0:
            return None
        else:
            segmented = max(cnts, key=cv2.contourArea)
            return (1, segmented)

    def loop(self):
        # for saving photos
        im_count = 0
        # get the reference to the webcam (index 3 on the author's machine; 0 is the usual default)
        camera = cv2.VideoCapture(3)
        x, y, r = 240, 320, 100
        # region of interest (ROI) coordinates
        top, right, bottom, left = x - r, y - r, x + r, y + r
        # for calculating fps
        num_frames = 0
        # keep looping, until interrupted
        while True:
            # get the current frame
            (grabbed, frame) = camera.read()
            while frame is None:
                (grabbed, frame) = camera.read()
                print("No camera\n")
            # flip the frame so that it is not the mirror view
            frame = cv2.flip(frame, 1)
            # clone the frame
            self.clone = frame.copy()
            # get the height and width of the frame
            (height, width) = frame.shape[:2]
            # get the ROI
            roi = frame[top:bottom, right:left]
            # get the hand region
            hand = self.ellipse_detect(roi)
            # check whether hand region is segmented
            if hand is not None:
                # segmented region
                (thresholded, segmented) = hand
                epsilon = 0.001 * cv2.arcLength(segmented, True)
                segmented = cv2.approxPolyDP(segmented, epsilon, True)
                # draw the segmented region and display the frame
                convex_hull = cv2.convexHull(segmented)
                cv2.rectangle(self.clone, (left, top), (right, bottom), (0, 0, 0), thickness=cv2.FILLED)
                cv2.drawContours(self.clone, [convex_hull + (right, top)], -1, (255, 0, 0), thickness=cv2.FILLED)
                cv2.drawContours(self.clone, [segmented + (right, top)], -1, (0, 255, 255), thickness=cv2.FILLED)
                s1 = cv2.contourArea(convex_hull)
                s2 = cv2.contourArea(segmented)
                # defects = cv2.convexityDefects(segmented, convex_hull)
                ans = 0
                # classify by hull/contour area ratio and show the counter-move that beats it
                if s1 / s2 < 1.2:
                    ans = 0                          # rock detected
                    cv2.imshow("ans", self.paper)
                elif s1 / s2 < 1.4:
                    ans = 1                          # scissors detected
                    cv2.imshow("ans", self.rock)
                else:
                    ans = 2                          # paper detected
                    cv2.imshow("ans", self.scissors)
                text = ["rock", "scissors", "paper"][ans] + " " + str(round(s1 / s2, 2))
                cv2.putText(self.clone, text, (30, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
                text2 = "fps:" + " " + str(self.fps)
                cv2.putText(self.clone, text2, (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            # draw the segmented hand
            cv2.rectangle(self.clone, (left, top), (right, bottom), (0, 255, 0), 2)
            # increment the number of frames
            num_frames += 1
            if time.time() - self.this_time > 1:
                self.fps = num_frames
                self.this_time = time.time()
                num_frames = 0
            # display the frame with segmented hand
            cv2.imshow("Video Feed", self.clone)
            cv2.setMouseCallback("Video Feed", self.mouse_event)
            # observe the keypress by the user
            keypress = cv2.waitKey(1) & 0xFF
            # if the user pressed "q", then stop looping
            if keypress == ord("q"):
                break
            # "r"/"p"/"s" save the current frame for later inspection
            path = None
            if keypress == ord("r"):
                path = "r" + str(im_count) + ".png"
            elif keypress == ord("p"):
                path = "p" + str(im_count) + ".png"
            elif keypress == ord("s"):
                path = "s" + str(im_count) + ".png"
            if path is not None:
                cv2.imwrite("data/" + path, self.clone)
                im_count += 1

        # free up memory
        camera.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    my = gesture()
    my.loop()
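To run it, the data/ folder must contain rock.jpg, paper.jpg and scissors.jpg, and the camera index in cv2.VideoCapture(3) will usually need to be changed to 0. While running, the "ans" window shows the move that beats the detected gesture, clicking the video window prints the BGR value of a pixel, the r/p/s keys save the current frame into data/, and q quits.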

GitHub:

Code + images
