1 Overview
1.1 OpenPose is an open-source library built on convolutional neural networks and supervised learning, written on the Caffe framework.
1.2 It can track a person's facial expression, torso, limbs, and even fingers; it handles multiple people and is fairly robust.
1.3 It was the world's first real-time, deep-learning-based multi-person 2D pose estimator, giving machines a high-quality dimension of information for understanding humans.
1.4 Code source: https://github.com/spmallick/learnopencv

2 Results
2.1 Image
2.2 Video (an excerpt of the video)

3 Preparation
3.1 The source code has been modified, commented, and test-run to improve readability and ease of use; it is beginner-friendly and quick to pick up.
3.2 Environment: Python 3.8, OpenCV 4.4.0, Deepin Linux.
3.3 Model download: the official site hosts the models but is too slow; a mirror kindly shared by another user (thanks!): https://blog.csdn.net/GLa/article/details/81661821
3.4 Directory layout: these notes live in their own file; output.avi is the generated video result.

4 Human skeleton from an image
4.1 Code: OpenPoseImage.py. Open a terminal on your machine and run:

python3.8 OpenPoseImage.py --device cpu --image_file single.jpeg

# Step 1: import modules
import cv2
import time
import numpy as np
import argparse

# Step 2: command-line argument setup
parser = argparse.ArgumentParser(description='Run keypoint detection')
# run on the CPU by default
parser.add_argument("--device", default="cpu", help="Device to inference on")
parser.add_argument("--image_file", default="single.jpeg", help="Input image")
args = parser.parse_args()

# Step 3: model setup
# model download (fast domestic mirror): https://blog.csdn.net/GLa/article/details/81661821
# choose the model
MODE = "COCO"

# note: the upstream source writes "if MODE is 'COCO'"; compare with == under Python 3
if MODE == "COCO":
    # folders under the current directory
    protoFile = "pose/coco/pose_deploy_linevec.prototxt"
    # or download directly (slow, the file is large):
    # http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/coco/pose_iter_440000.caffemodel
    weightsFile = "pose/coco/pose_iter_440000.caffemodel"
    # these two settings were missing and are added back here
    nPoints = 18
    POSE_PAIRS = [[1,0],[1,2],[1,5],[2,3],[3,4],[5,6],[6,7],[1,8],[8,9],[9,10],
                  [1,11],[11,12],[12,13],[0,14],[0,15],[14,16],[15,17]]
# additional branch: the MPI model
elif MODE == "MPI":
    protoFile = "pose/mpi/pose_deploy_linevec_faster_4_stages.prototxt"
    # or download directly (slow, the file is large):
    # http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/mpi/pose_iter_160000.caffemodel
    weightsFile = "pose/mpi/pose_iter_160000.caffemodel"
    nPoints = 15
    POSE_PAIRS = [[0,1],[1,2],[2,3],[3,4],[1,5],[5,6],[6,7],[1,14],[14,8],
                  [8,9],[9,10],[14,11],[11,12],[12,13]]

# Step 4: read the image with OpenCV
frame = cv2.imread(args.image_file)
frameCopy = np.copy(frame)
frameWidth = frame.shape[1]
frameHeight = frame.shape[0]
threshold = 0.1

# Step 5: load the model and run the network for the keypoints and skeleton
# load the model
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
# CPU by default (the upstream code passes DNN_TARGET_CPU to setPreferableBackend;
# both constants happen to be 0, but setPreferableTarget is the intended call)
if args.device == "cpu":
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    print("Using CPU device")
# extra setting: run on the GPU
elif args.device == "gpu":
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
    print("Using GPU device")

# timing
t = time.time()
# input image dimensions for the network
inWidth = 368
inHeight = 368
inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                                (0, 0, 0), swapRB=False, crop=False)
net.setInput(inpBlob)
output = net.forward()
print("time taken by network : {:.3f}".format(time.time() - t))

H = output.shape[2]
W = output.shape[3]

# keypoints
# empty list to store the detected keypoints
points = []

for i in range(nPoints):
    # confidence map of the corresponding body part
    probMap = output[0, i, :, :]

    # find the global maximum of the probMap
    minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)

    # scale the point to fit on the original image
    x = (frameWidth * point[0]) / W
    y = (frameHeight * point[1]) / H

    if prob > threshold:
        cv2.circle(frameCopy, (int(x), int(y)), 8, (0, 255, 255),
                   thickness=-1, lineType=cv2.FILLED)
        cv2.putText(frameCopy, "{}".format(i), (int(x), int(y)),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, lineType=cv2.LINE_AA)
        # add the point to the list if the probability is greater than the threshold
        points.append((int(x), int(y)))
    else:
        points.append(None)

# draw the skeleton
for pair in POSE_PAIRS:
    partA = pair[0]
    partB = pair[1]
    if points[partA] and points[partB]:
        cv2.line(frame, points[partA], points[partB], (0, 255, 255), 2)
        cv2.circle(frame, points[partA], 8, (0, 0, 255),
                   thickness=-1, lineType=cv2.FILLED)

# show the generated images
cv2.imshow('Output-Keypoints', frameCopy)
cv2.imshow('Output-Skeleton', frame)
# save the generated images
cv2.imwrite('Output-Keypoints.jpg', frameCopy)
cv2.imwrite('Output-Skeleton.jpg', frame)

print("Total time taken : {:.3f}".format(time.time() - t))
cv2.waitKey(0)

4.2 Running it and the result images
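Since the .caffemodel files from 3.3 are large downloads and easy to misplace, a quick sanity check before running the script in 4.1 can save a confusing cv2.dnn load error. This check is my addition, not part of the tutorial code; the paths assume the COCO model and the layout described in 3.4:

import os

model_files = [
    "pose/coco/pose_deploy_linevec.prototxt",
    "pose/coco/pose_iter_440000.caffemodel",
]
for f in model_files:
    if not os.path.isfile(f):
        raise FileNotFoundError(
            "Missing model file: {} (see 3.3 for download links)".format(f))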
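A note on the is-to-== change flagged in the code comments above: is tests object identity, not string content, and only appears to work on string literals because CPython happens to intern them (Python 3.8 even emits a SyntaxWarning when is is used with a literal). A minimal sketch of the difference:

a = "COCO"
b = "".join(["CO", "CO"])  # same content as a, but built at runtime
print(a == b)  # True  -- compares values; this is what the MODE check needs
print(a is b)  # False in CPython -- compares identity; runtime strings are not interned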
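The least obvious step in 4.1 is mapping the network output back to image coordinates: the confidence maps are much smaller than the input (for a 368x368 input they come out around 46x46, since the network downsamples by a factor of 8), so the peak found by cv2.minMaxLoc must be rescaled. A toy illustration with a synthetic confidence map; the sizes here are made up for the example:

import cv2
import numpy as np

frameWidth, frameHeight = 1280, 720    # pretend this is the input image size
probMap = np.zeros((46, 46), dtype=np.float32)
probMap[30, 20] = 1.0                  # a single hot pixel at row 30, column 20

# minMaxLoc returns the peak location as (x, y), i.e. (column, row)
minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)

# rescale from map coordinates to image coordinates, exactly as in 4.1
x = frameWidth * point[0] / probMap.shape[1]   # 1280 * 20 / 46 ~ 556
y = frameHeight * point[1] / probMap.shape[0]  # 720 * 30 / 46 ~ 469
print(prob, (int(x), int(y)))                  # 1.0 (556, 469)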
5 Skeleton test on video
5.1 Code: OpenPoseVideo.py. Open a terminal (processing takes a while):

python3.8 OpenPoseVideo.py --device cpu --video_file sample_video.mp4

import cv2
import time
import numpy as np
import argparse

parser = argparse.ArgumentParser(description='Run keypoint detection')
parser.add_argument("--device", default="cpu", help="Device to inference on")
parser.add_argument("--video_file", default="sample_video.mp4", help="Input Video")
args = parser.parse_args()

MODE = "MPI"

# note: the upstream source writes "if MODE is 'COCO'"; compare with == under Python 3
if MODE == "COCO":
    protoFile = "pose/coco/pose_deploy_linevec.prototxt"
    weightsFile = "pose/coco/pose_iter_440000.caffemodel"
    nPoints = 18
    POSE_PAIRS = [[1,0],[1,2],[1,5],[2,3],[3,4],[5,6],[6,7],[1,8],[8,9],[9,10],
                  [1,11],[11,12],[12,13],[0,14],[0,15],[14,16],[15,17]]
elif MODE == "MPI":
    protoFile = "pose/mpi/pose_deploy_linevec_faster_4_stages.prototxt"
    weightsFile = "pose/mpi/pose_iter_160000.caffemodel"
    nPoints = 15
    POSE_PAIRS = [[0,1],[1,2],[2,3],[3,4],[1,5],[5,6],[6,7],[1,14],[14,8],
                  [8,9],[9,10],[14,11],[11,12],[12,13]]

inWidth = 368
inHeight = 368
threshold = 0.1

input_source = args.video_file
cap = cv2.VideoCapture(input_source)
hasFrame, frame = cap.read()

# write the output video into the current directory
vid_writer = cv2.VideoWriter('output.avi',
                             cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10,
                             (frame.shape[1], frame.shape[0]))

net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
if args.device == "cpu":
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)  # same fix as in 4.1
    print("Using CPU device")
elif args.device == "gpu":
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
    print("Using GPU device")

while cv2.waitKey(1) < 0:
    t = time.time()
    hasFrame, frame = cap.read()
    frameCopy = np.copy(frame)
    if not hasFrame:
        cv2.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                                    (0, 0, 0), swapRB=False, crop=False)
    net.setInput(inpBlob)
    output = net.forward()

    H = output.shape[2]
    W = output.shape[3]

    # empty list to store the detected keypoints
    points = []

    for i in range(nPoints):
        # confidence map of the corresponding body part
        probMap = output[0, i, :, :]

        # find the global maximum of the probMap
        minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)

        # scale the point to fit on the original image
        x = (frameWidth * point[0]) / W
        y = (frameHeight * point[1]) / H

        if prob > threshold:
            cv2.circle(frameCopy, (int(x), int(y)), 8, (0, 255, 255),
                       thickness=-1, lineType=cv2.FILLED)
            cv2.putText(frameCopy, "{}".format(i), (int(x), int(y)),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2,
                        lineType=cv2.LINE_AA)
            # add the point to the list if the probability is greater than the threshold
            points.append((int(x), int(y)))
        else:
            points.append(None)

    # draw the skeleton
    for pair in POSE_PAIRS:
        partA = pair[0]
        partB = pair[1]
        if points[partA] and points[partB]:
            cv2.line(frame, points[partA], points[partB], (0, 255, 255), 3,
                     lineType=cv2.LINE_AA)
            cv2.circle(frame, points[partA], 8, (0, 0, 255),
                       thickness=-1, lineType=cv2.FILLED)
            cv2.circle(frame, points[partB], 8, (0, 0, 255),
                       thickness=-1, lineType=cv2.FILLED)

    cv2.putText(frame, "time taken = {:.2f} sec".format(time.time() - t),
                (50, 50), cv2.FONT_HERSHEY_COMPLEX, .8, (255, 50, 0), 2,
                lineType=cv2.LINE_AA)
    cv2.imshow('Output-Skeleton', frame)
    vid_writer.write(frame)

vid_writer.release()

5.2 CPU processing takes quite a while, so the intermediate steps are omitted; the result is shown at the beginning of this article.
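One possible refinement of the script in 5.1 (my suggestion, not part of the original code): output.avi is written at a hard-coded 10 fps, so it may play back faster or slower than the source clip. Reading the frame rate from the capture keeps the two in sync:

import cv2

cap = cv2.VideoCapture("sample_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 10   # fall back to 10 if the container reports no fps
hasFrame, frame = cap.read()
vid_writer = cv2.VideoWriter('output.avi',
                             cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'),
                             fps, (frame.shape[1], frame.shape[0]))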