1 Overview

1.1 Why repost: my earlier articles covered skeleton detection for single-person images and video only, and multi-person input raised errors. The earlier articles: "OpenPose: recreating the viral TikTok skeleton-dance video effect" and "Human skeleton dance: implementing OpenPose with Python + OpenCV".

1.2 This post ships the complete code for every case, explained step by step with clear comments — worth bookmarking. It covers:
- single-person image, video, and webcam detection
- multi-person image, video, and webcam detection

1.3 Along the way we review Python programming habits and the relevant OpenCV knowledge.

2 Images

2.1 Original image: from the Toutiao free licensed stock-photo library.

2.2 Result image.

3 Video

3.1 Video source: an MV excerpt downloaded with you-get from https://y.qq.com/n/yqq/mv/v/s0023dwf6xi.html

3.2 Multi-person video result.

4 Code walkthrough

4.1 Step 1: header comment.

```python
# Code name: person_openpose_all.py
# This code is modified from the original at:
#   https://github.com/spmallick/learnopencv (OpenPose-Multi-Person)
# It runs skeleton detection on images and video, single- or multi-person;
# video input can come from a webcam or an mp4 file.
```

4.2 Step 2: module imports.

```python
import cv2
import numpy as np
import os  # path management
```

4.3 Environment note: Python 3.8, OpenCV 4.4.0, Linux, VS Code (I personally like clicking the Run button).

Step 3: path management.

```python
cur_path = os.path.realpath(__file__)  # absolute path of this script
dir_path = os.path.dirname(cur_path)   # directory containing this script
```

4.4 Model loading and parameter setup.

Step 4: load the model.

```python
# Download these two files in advance (the first article explains how)
protoFile = os.path.join(dir_path, "pose/coco/pose_deploy_linevec.prototxt")
weightsFile = os.path.join(dir_path, "pose/coco/pose_iter_440000.caffemodel")

# parameters
nPoints = 18
# COCO output format: keypoint name list (optional, may be commented out)
keypointsMapping = ['Nose', 'Neck', 'R-Sho', 'R-Elb', 'R-Wr', 'L-Sho',
                    'L-Elb', 'L-Wr', 'R-Hip', 'R-Knee', 'R-Ank', 'L-Hip',
                    'L-Knee', 'L-Ank', 'R-Eye', 'L-Eye', 'R-Ear', 'L-Ear']

POSE_PAIRS = [[1,2], [1,5], [2,3], [3,4], [5,6], [6,7],
              [1,8], [8,9], [9,10], [1,11], [11,12], [12,13],
              [1,0], [0,14], [14,16], [0,15], [15,17],
              [2,17], [5,16]]

mapIdx = [[31,32], [39,40], [33,34], [35,36], [41,42], [43,44],
          [19,20], [21,22], [23,24], [25,26], [27,28], [29,30],
          [47,48], [49,50], [53,54], [51,52], [55,56],
          [37,38], [45,46]]

# colour list
colors = [[0,100,255], [0,100,255], [0,255,255], [0,100,255], [0,255,255],
          [0,100,255], [0,255,0], [255,200,100], [255,0,255], [0,255,0],
          [255,200,100], [255,0,255], [0,0,255], [255,0,0], [200,200,0],
          [255,0,0], [200,200,0], [0,0,0]]
```
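To make the index tables above concrete, here is a small standalone snippet that spells out which limb each POSE_PAIRS entry connects. The two lists are copied from the listing above; the `limb_name` helper is mine, added for illustration only:

```python
# Translate the numeric POSE_PAIRS entries into readable limb names
# using keypointsMapping (both lists copied from the listing above).
keypointsMapping = ['Nose', 'Neck', 'R-Sho', 'R-Elb', 'R-Wr', 'L-Sho',
                    'L-Elb', 'L-Wr', 'R-Hip', 'R-Knee', 'R-Ank', 'L-Hip',
                    'L-Knee', 'L-Ank', 'R-Eye', 'L-Eye', 'R-Ear', 'L-Ear']
POSE_PAIRS = [[1,2], [1,5], [2,3], [3,4], [5,6], [6,7],
              [1,8], [8,9], [9,10], [1,11], [11,12], [12,13],
              [1,0], [0,14], [14,16], [0,15], [15,17],
              [2,17], [5,16]]

def limb_name(pair):
    # a pair [a, b] means "draw a limb from keypoint a to keypoint b"
    a, b = pair
    return f"{keypointsMapping[a]} -> {keypointsMapping[b]}"

for pair in POSE_PAIRS[:4]:
    print(limb_name(pair))  # e.g. the first pair prints "Neck -> R-Sho"
```

Note that POSE_PAIRS has 19 entries (one per PAF channel pair in mapIdx), while only the first 17 are drawn later; the last two are auxiliary ear-to-shoulder connections.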
4.5 This part is the crux — master it and you are well on your way to being an expert. It normally needs no tuning.

Step 5: function definitions.

```python
# Get the joint keypoints from one confidence map
def getKeypoints(probMap, threshold=0.1):
    mapSmooth = cv2.GaussianBlur(probMap, (3, 3), 0, 0)
    mapMask = np.uint8(mapSmooth > threshold)
    keypoints = []
    # find the blobs
    contours, _ = cv2.findContours(mapMask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # for each blob find the maxima
    for cnt in contours:
        blobMask = np.zeros(mapMask.shape)
        blobMask = cv2.fillConvexPoly(blobMask, cnt, 1)
        maskedProbMap = mapSmooth * blobMask
        _, maxVal, _, maxLoc = cv2.minMaxLoc(maskedProbMap)
        keypoints.append(maxLoc + (probMap[maxLoc[1], maxLoc[0]],))
    return keypoints


# Find valid connections between the different joints of all persons present
def getValidPairs(output):
    valid_pairs = []
    invalid_pairs = []
    n_interp_samples = 10
    paf_score_th = 0.1
    conf_th = 0.7
    # loop for every POSE_PAIR
    for k in range(len(mapIdx)):
        # A->B constitute a limb
        pafA = output[0, mapIdx[k][0], :, :]
        pafB = output[0, mapIdx[k][1], :, :]
        pafA = cv2.resize(pafA, (frameWidth, frameHeight))
        pafB = cv2.resize(pafB, (frameWidth, frameHeight))

        # Find the keypoints for the first and second limb
        candA = detected_keypoints[POSE_PAIRS[k][0]]
        candB = detected_keypoints[POSE_PAIRS[k][1]]
        nA = len(candA)
        nB = len(candB)

        if nA != 0 and nB != 0:
            valid_pair = np.zeros((0, 3))
            for i in range(nA):
                max_j = -1
                maxScore = -1
                found = 0
                for j in range(nB):
                    # Find d_ij
                    d_ij = np.subtract(candB[j][:2], candA[i][:2])
                    norm = np.linalg.norm(d_ij)
                    if norm:
                        d_ij = d_ij / norm
                    else:
                        continue
                    # Find p(u)
                    interp_coord = list(zip(
                        np.linspace(candA[i][0], candB[j][0], num=n_interp_samples),
                        np.linspace(candA[i][1], candB[j][1], num=n_interp_samples)))
                    # Find L(p(u))
                    # (loop variable renamed to m so it does not shadow the outer k)
                    paf_interp = []
                    for m in range(len(interp_coord)):
                        paf_interp.append([
                            pafA[int(round(interp_coord[m][1])), int(round(interp_coord[m][0]))],
                            pafB[int(round(interp_coord[m][1])), int(round(interp_coord[m][0]))]])
                    # Find E
                    paf_scores = np.dot(paf_interp, d_ij)
                    avg_paf_score = sum(paf_scores) / len(paf_scores)

                    # Check if the connection is valid: the fraction of interpolated
                    # vectors aligned with the PAF must exceed the threshold
                    if (len(np.where(paf_scores > paf_score_th)[0]) / n_interp_samples) > conf_th:
                        if avg_paf_score > maxScore:
                            max_j = j
                            maxScore = avg_paf_score
                            found = 1
                # Append the connection to the list
                if found:
                    valid_pair = np.append(valid_pair,
                                           [[candA[i][3], candB[max_j][3], maxScore]],
                                           axis=0)
            # Append the detected connections to the global list
            valid_pairs.append(valid_pair)
        else:
            # If no keypoints are detected
            print("No Connection : k = {}".format(k))
            invalid_pairs.append(k)
            valid_pairs.append([])
    return valid_pairs, invalid_pairs


# Assign joints and limbs to individual persons.
# This function creates a list of keypoints belonging to each person:
# for each detected valid pair, it assigns the joint(s) to a person.
def getPersonwiseKeypoints(valid_pairs, invalid_pairs):
    # the last number in each row is the overall score
    personwiseKeypoints = -1 * np.ones((0, 19))

    for k in range(len(mapIdx)):
        if k not in invalid_pairs:
            partAs = valid_pairs[k][:, 0]
            partBs = valid_pairs[k][:, 1]
            indexA, indexB = np.array(POSE_PAIRS[k])

            for i in range(len(valid_pairs[k])):
                found = 0
                person_idx = -1
                for j in range(len(personwiseKeypoints)):
                    if personwiseKeypoints[j][indexA] == partAs[i]:
                        person_idx = j
                        found = 1
                        break

                if found:
                    personwiseKeypoints[person_idx][indexB] = partBs[i]
                    personwiseKeypoints[person_idx][-1] += \
                        keypoints_list[partBs[i].astype(int), 2] + valid_pairs[k][i][2]

                # if we find no partA in the subsets, create a new subset
                elif not found and k < 17:
                    row = -1 * np.ones(19)
                    row[indexA] = partAs[i]
                    row[indexB] = partBs[i]
                    # add the keypoint scores for the two keypoints and the paf score
                    row[-1] = sum(keypoints_list[valid_pairs[k][i, :2].astype(int), 2]) + \
                              valid_pairs[k][i][2]
                    personwiseKeypoints = np.vstack([personwiseKeypoints, row])
    return personwiseKeypoints
```

4.6 Step 6: input source. For an image the variable is usually named image1 or image; to stay compatible with the video code we use frame throughout.

```python
# --- image input ---
frame = cv2.imread(os.path.join(dir_path, "11.jpeg"))

# --- video input (comment this block out for image detection) ---
cap = cv2.VideoCapture(os.path.join(dir_path, "s.mp4"))  # mp4 file; a bit slow on CPU
# cap = cv2.VideoCapture(0)                              # webcam
hasFrame, frame = cap.read()
# write the result video next to this script
vid_writer = cv2.VideoWriter(os.path.join(dir_path, "output_s.avi"),
                             cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'),
                             10, (frame.shape[1], frame.shape[0]))
```

4.7 Step 7: load the network and run inference on the CPU.

```python
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
# select the CPU target (the original used setPreferableBackend here,
# but DNN_TARGET_CPU belongs with setPreferableTarget)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
print("Using CPU device")
```
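The heart of getValidPairs is the line-integral score E: sample n points along the candidate limb from joint A to joint B, read the PAF vector at each point, and project it onto the unit vector d_ij pointing from A to B. The following is a self-contained NumPy sketch of just that computation on a synthetic field; the field values and joint coordinates are made up for illustration and are not real network output:

```python
import numpy as np

# Synthetic PAF: a field that points straight along +x everywhere
H, W = 20, 40
pafA = np.ones((H, W))   # x-component of the PAF
pafB = np.zeros((H, W))  # y-component of the PAF

candA = (2, 10)   # hypothetical joint-A location (x, y)
candB = (30, 10)  # hypothetical joint-B location (x, y)
n_interp_samples = 10

# unit vector d_ij from A to B
d_ij = np.subtract(candB, candA).astype(float)
d_ij = d_ij / np.linalg.norm(d_ij)

# p(u): sample n points along the segment A -> B
interp_coord = list(zip(np.linspace(candA[0], candB[0], num=n_interp_samples),
                        np.linspace(candA[1], candB[1], num=n_interp_samples)))
# L(p(u)): read the PAF vector at each sampled point
paf_interp = [[pafA[int(round(y)), int(round(x))],
               pafB[int(round(y)), int(round(x))]] for x, y in interp_coord]

# E: projection of each sampled PAF vector onto d_ij, then the average
paf_scores = np.dot(paf_interp, d_ij)
avg_paf_score = paf_scores.mean()
print(avg_paf_score)  # 1.0: the field is perfectly aligned with A -> B
```

Because the synthetic field is exactly parallel to the A-to-B direction, every projection is 1.0, so all ten samples pass the paf_score_th test and the connection would be accepted; a field perpendicular to the limb would score near zero and be rejected.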
4.8 Step 8: the main loop.

```python
while cv2.waitKey(1) < 0:
    # extra black canvas for drawing only the skeleton lines and indices
    out = np.zeros(frame.shape, np.uint8)

    # video input only; comment out for image detection
    hasFrame, frame = cap.read()
    frameCopy = np.copy(frame)
    # exit when the stream ends
    if not hasFrame:
        cv2.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    # Fix the input height and get the width according to the aspect ratio
    inHeight = 368
    inWidth = int((inHeight / frameHeight) * frameWidth)

    inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                                    (0, 0, 0), swapRB=False, crop=False)
    net.setInput(inpBlob)
    output = net.forward()

    detected_keypoints = []
    keypoints_list = np.zeros((0, 3))
    keypoint_id = 0
    threshold = 0.1

    for part in range(nPoints):
        probMap = output[0, part, :, :]
        probMap = cv2.resize(probMap, (frame.shape[1], frame.shape[0]))
        keypoints = getKeypoints(probMap, threshold)
        keypoints_with_id = []
        for i in range(len(keypoints)):
            keypoints_with_id.append(keypoints[i] + (keypoint_id,))
            keypoints_list = np.vstack([keypoints_list, keypoints[i]])
            keypoint_id += 1
        detected_keypoints.append(keypoints_with_id)

    frameClone = frame.copy()
    for i in range(nPoints):
        for j in range(len(detected_keypoints[i])):
            cv2.circle(frameClone, detected_keypoints[i][j][0:2], 5,
                       colors[i], -1, cv2.LINE_AA)
    # display window 1: keypoints
    cv2.imshow("1 Keypoints", frameClone)

    valid_pairs, invalid_pairs = getValidPairs(output)
    personwiseKeypoints = getPersonwiseKeypoints(valid_pairs, invalid_pairs)

    for i in range(17):
        for n in range(len(personwiseKeypoints)):
            index = personwiseKeypoints[n][np.array(POSE_PAIRS[i])]
            if -1 in index:
                continue
            B = np.int32(keypoints_list[index.astype(int), 0])
            A = np.int32(keypoints_list[index.astype(int), 1])
            cv2.line(frameClone, (B[0], A[0]), (B[1], A[1]), colors[i], 3, cv2.LINE_AA)
            cv2.line(out, (B[0], A[0]), (B[1], A[1]), colors[i], 3, cv2.LINE_AA)

    # display window 2: skeleton lines drawn on the frame
    cv2.imshow("2 Detected Pose", frameClone)
    # display window 3: skeleton lines only
    cv2.imshow("3 Pure Detected Pose", out)

    # video input only; comment out for image detection
    vid_writer.write(frameClone)

vid_writer.release()  # video input only
```

Done. For real-time performance you will likely need a GPU or a high-performance machine.

That's the essentially complete code!
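The preprocessing in the loop above fixes the network input height at 368 and scales the width by the same factor, so the frame's aspect ratio is preserved when blobFromImage resizes it. A quick standalone check of that arithmetic (the helper name and the sample frame sizes are mine, chosen just for illustration):

```python
# Standalone check of the network-input sizing used in the main loop
# (frame dimensions here are arbitrary examples).
def network_input_size(frameWidth, frameHeight, inHeight=368):
    # width scales by the same factor as the height, keeping the aspect ratio
    inWidth = int((inHeight / frameHeight) * frameWidth)
    return inWidth, inHeight

print(network_input_size(1920, 1080))  # a 16:9 frame
print(network_input_size(640, 480))    # a 4:3 frame
```

Keeping the aspect ratio matters: squashing the frame to a fixed square would distort limb directions and degrade both the confidence maps and the PAF projections.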