We want to attribute a PNG of somebody’s face to an entity, so that when you give it a video feed of somebody else’s face, that person will not be flagged as the same entity as the first face.

We want to install the deepface library here.
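For reference, the install is just a pip command (assuming you are in the right environment; deepface pulls in its own dependencies, and opencv-python is listed here just in case):

```
pip install deepface opencv-python
```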

Once we’ve done that, import the following 3 libraries.

threading so the face check can run in the background without freezing the video loop, cv2 for computer vision, and deepface for facial recognition.
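In code, those three imports look like this:

```python
import threading                 # run the face check without blocking the video loop
import cv2                       # OpenCV, for the camera feed and drawing
from deepface import DeepFace    # facial recognition / verification
```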

Next we have our cap, the video capture object. The cap uses the DSHOW backend.

Set our height and width.
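A sketch of that setup, assuming camera index 0 and a 640x480 feed (use whatever resolution you like):

```python
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)   # DSHOW (DirectShow) backend on Windows

cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)     # width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)    # height
```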

Next we have 3 global variables.

We want a frame counter, since we only want an image checked every 60 (or 30) frames. Then we have a boolean for whether the face is matched, and a reference image to match against.
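Something like this, assuming the reference image sits next to the script as reference.jpg (the filename is a placeholder):

```python
counter = 0                                   # frame counter
face_match = False                            # True once the face matches the reference
reference_img = cv2.imread("reference.jpg")   # the face we compare against (relative path; fixed later)
```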

Next we have a while loop.

The while loop will check if there is a return value (we will make that block do something in a second). We also have a segment that detects user input of q; if it is q, we break out of the while loop and destroy all windows.
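A rough skeleton of that loop (the ret block is just a placeholder for now):

```python
while True:
    ret, frame = cap.read()

    if ret:
        pass  # the comparison logic goes here shortly

    key = cv2.waitKey(1)
    if key == ord("q"):   # press q to quit
        break

cv2.destroyAllWindows()
```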

NeuralNine promised that in the ret block he would show how to view our camera feed. Let’s hope he keeps his promise.

OK, so in the code inside the ret block, for now, we have to compare the frame to the face.

Check if we are on the 30th frame; we want one activation per 30 frames.

Then, we hand the checkface function off to a thread. We feed that function a tuple containing just one item: a copy of our current frame. There is a comma there because the args parameter must be given a tuple, and we need that comma to make it a tuple.

We also have an exception handler.

The exception handler will catch a ValueError. This is just what deepface does: if it can’t find a face, it raises a ValueError. We don’t care about what happens when this error occurs, we just pass.

Also, increment the counter, will you?
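Putting those pieces together, the body of the ret block looks roughly like this (checkface is the function defined further down):

```python
if counter % 30 == 0:   # only act once every 30 frames
    try:
        # hand a copy of the current frame to a background thread;
        # the trailing comma is what makes args a one-element tuple
        threading.Thread(target=checkface, args=(frame.copy(),)).start()
    except ValueError:
        pass   # deepface raises ValueError when it can't find a face

counter += 1
```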

Next (this is all optional), we have a caption near the bottom of the video feed which will say MATCH!

That is, if we get a match from the checkface function.

You may see I also added an else. The colors are in BGR order, so (0, 255, 0) is green and (0, 0, 255) is red.
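The caption code might look like this (the position and the NO MATCH! text in the else are my guess; only MATCH! comes from the tutorial):

```python
if face_match:
    cv2.putText(frame, "MATCH!", (20, 450),
                cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)   # green in BGR
else:
    cv2.putText(frame, "NO MATCH!", (20, 450),
                cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 3)   # red in BGR
```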

Finally, we show the camera feed. He kept his promise.

The title of the feed is “THE VIDEO”.
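In the sketch above, that is a single line inside the ret block, right after the caption:

```python
cv2.imshow("THE VIDEO", frame)
```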

The checkface function is defined like this:

We check if we can do this:

If the frame matches the reference image, then we change face_match to True; if not, it stays False.

Again, this one can fail for numerous reasons, so we except ValueError in that case.
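A sketch of that function (the except branch here resets the flag, though simply passing would also match the text above):

```python
def checkface(frame):
    global face_match
    try:
        # deepface compares the two faces and reports a "verified" boolean
        if DeepFace.verify(frame, reference_img.copy())["verified"]:
            face_match = True
        else:
            face_match = False
    except ValueError:
        face_match = False   # no face found in the frame
```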

Error fixing

So, the first error I fixed was the file location.

The reference image should have an absolute path, written as a raw string.
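For example (the path below is just a placeholder, not my real one):

```python
reference_img = cv2.imread(r"C:\Users\YourName\Pictures\reference.png")   # placeholder absolute path
```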

Then, what I needed to do was download this file from: https://github.com/serengil/deepface_models/releases/download/v1.0/vgg_face_weights.h5

That is the raw .h5 file; I put it in my C:\Users\Digit\.deepface\weights folder.

And then it works!

From this reference:

It checks the nose, eye, and mouth positions. If I open my mouth, it flags it as FRAUD!