Looking at OpenFace

OpenCV has good face detection using Haar cascades. Dlib offers an object detector based on histograms of oriented gradients (HOG), which tends to detect faces more reliably than Haar cascades at the cost of more computation. Both can point out where faces are in an image. That alone is useful: it can be used to get the animatronic creature to “look at” a person in front of it.

But what if we want it to identify the person? We can’t get the animatronic character to say “Don’t do this to me, Dave!” unless it can either identify Dave or it thinks everyone is named Dave.

OpenFace identifies faces. Bonus: it is easy to set up with Docker. It’s less than a year old at this point, yet already quite powerful and well documented. The gist: OpenFace first aligns each image, using OpenCV or dlib, so the eyes and bottom lip land in standard positions. A deep neural network then embeds the aligned face as a point on a 128-dimensional unit hypersphere. It also includes a demo that uses an SVM to classify those embedding vectors.
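Once faces live on the unit hypersphere, comparing identities reduces to measuring distance between 128-d unit vectors. A numpy-only sketch of that idea (the vectors below are random stand-ins for the network’s real output, not actual OpenFace embeddings):

```python
import numpy as np

def normalize(v):
    """Project a vector onto the unit hypersphere, as the embedding net does."""
    return v / np.linalg.norm(v)

def embedding_distance(a, b):
    """Squared Euclidean distance between two unit embeddings.
    On the unit sphere this equals 2 - 2*cos(angle), so it ranges 0..4."""
    return float(np.sum((a - b) ** 2))

rng = np.random.default_rng(0)
a = normalize(rng.normal(size=128))  # stand-in for one face's embedding
b = normalize(rng.normal(size=128))  # stand-in for a different face

print(embedding_distance(a, a))  # → 0.0 (same face, zero distance)
print(0.0 <= embedding_distance(a, b) <= 4.0)  # → True
```

In practice you would pick a distance threshold on real embeddings to decide “same person or not”; the useful property is that the net is trained so that images of the same person land close together on the sphere.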

This approach means it can be trained quickly with little data compared to training a DNN from scratch. Here, “little data” might mean 10 or 20 images of each subject rather than thousands.
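That classification step can be sketched with scikit-learn. The 128-d vectors below are synthetic clusters standing in for real OpenFace embeddings, and the subject names are made up; with real data you would feed the net’s output for each aligned face instead:

```python
# Sketch of the "SVM on embeddings" idea with ~15 samples per subject.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def fake_embeddings(center, n):
    """n noisy unit vectors clustered around a subject's 'true' embedding."""
    pts = center + 0.05 * rng.normal(size=(n, 128))
    return pts / np.linalg.norm(pts, axis=1, keepdims=True)

# Two hypothetical subjects, each a random point on the unit hypersphere.
centers = {name: rng.normal(size=128) for name in ("alice", "bob")}
centers = {k: v / np.linalg.norm(v) for k, v in centers.items()}

X = np.vstack([fake_embeddings(c, 15) for c in centers.values()])
y = ["alice"] * 15 + ["bob"] * 15   # ~15 "images" per subject

clf = SVC(kernel="linear").fit(X, y)
probe = fake_embeddings(centers["bob"], 1)
print(clf.predict(probe))  # → ['bob']
```

Because the hard work (mapping pixels to a discriminative 128-d space) was already done by the pretrained net, a simple classifier on top needs only a handful of examples per person.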


OpenFace on GitHub

Posted in Make, Robotics.
