iOS Face Detection in RubyMotion

Posted by Gant on August 9th, 2012


When the great Cyborg uprising happens in 2051, you'll need to know 2 things. Firstly, how to monkey patch face-recognition software in order to modify Cyborgs to identify and attack their own kind, and secondly, how to make wine in a toilet. These are mandatory skill sets and… well… face recognition software is just awesome, readily available, and fun. In this post, I'm going to walk you through the basics of iOS Face Detection with a full working example in RubyMotion.

This tutorial is the basics! You're likely to learn a bit more about the drawing API than facial recognition, but hey, you've got to start somewhere.

Source Code:

All the source code of this project has been made open source on Github: https://github.com/IconoclastLabs/RubyMotion-SimpleFace

Getting Your Footing:

The meat and bones of this project are pretty straightforward. For clarity and simplicity, all the logic is done in a basic root view controller. All the magic you need is built into Core Image on iOS 5 and greater. The object we're working with is CIDetector. In our viewDidLoad we're simply going to prep a CIDetector, which will do all the heavy lifting.

def viewDidLoad
  super

  view.backgroundColor = UIColor.lightGrayColor
  @me = UIImage.imageNamed("gantman.jpeg")
  cme = CIImage.alloc.initWithImage(@me)

  options = NSDictionary.dictionaryWithObject(CIDetectorAccuracyHigh, forKey:CIDetectorAccuracy)
  detector = CIDetector.detectorOfType(CIDetectorTypeFace, context:nil, options:options)

  features = detector.featuresInImage(cme)

  Dispatch::Queue.concurrent.async do
    print_features(features)
  end
end

As you can see, we're creating a CIImage for the CIDetector, along with an options dictionary, which here is limited to setting the accuracy of the CIDetector. Your two options are CIDetectorAccuracyHigh or CIDetectorAccuracyLow. Setting accuracy to high uses more accurate detection techniques but takes more time (I don't notice much of a difference for a single picture), while setting it to low is the inverse. Depending on your usage you'll choose one or the other. Since we're working with a static image, we'll leave the detector on high and save the speedy version for a project that would necessitate it.
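A quick aside: RubyMotion will happily convert a plain Ruby hash into the NSDictionary that Core Image expects, so if you did want the speedy, lower-accuracy detector, something like this should work (a minimal sketch, not code from the repo):

options  = { CIDetectorAccuracy => CIDetectorAccuracyLow }
detector = CIDetector.detectorOfType(CIDetectorTypeFace, context:nil, options:options)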

In the last few lines, we use RubyMotion's version of Grand Central Dispatch (GCD) to fire off our print method asynchronously. So all this code does is hand an image to CIDetector and send the results off to be handled asynchronously.
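One gotcha worth flagging: if the work you dispatch eventually needs to touch the UI, hop back onto the main queue first, since UIKit isn't thread-safe. Here's a minimal sketch of the pattern (heavy_work and update_ui are hypothetical placeholders, not methods from this project):

Dispatch::Queue.concurrent.async do
  results = heavy_work             # e.g. run the detector off the main thread
  Dispatch::Queue.main.async do
    update_ui(results)             # UIKit calls belong on the main queue
  end
end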

Reading Features

The iOS API really makes reading features almost too easy. The section above sent the results of CIDetector off to be printed; that's exactly what we do here. Peruse the following code:

def print_features(features)
  features.each do |feature|
    p "Found Feature!"

    if feature.hasLeftEyePosition
      p "Left Eye Coord: #{feature.leftEyePosition.x}x#{feature.leftEyePosition.y}"
    end
    if feature.hasRightEyePosition
      p "Right Eye Coord: #{feature.rightEyePosition.x}x#{feature.rightEyePosition.y}"
    end
    if feature.hasMouthPosition
      p "Mouth Coord: #{feature.mouthPosition.x}x#{feature.mouthPosition.y}"
    end
  end
end

The previous code block goes through all the found features (every face it detected) and prints their coordinates to the console. So each set of coordinates is a detected face! It's really quite simple. Yes, the code could be condensed even further, but in the next section we're going to use those blocks to mark each feature.
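Each CIFaceFeature also exposes a bounds rectangle covering the whole face (it's what I box in the Github repo). If you wanted to log it inside that same loop, a small addition like this should do it (a sketch, not part of the listing above):

# Inside the features.each block: feature.bounds is a CGRect for the whole face
b = feature.bounds
p "Face Bounds: #{b.origin.x}x#{b.origin.y}, #{b.size.width}x#{b.size.height}"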

Marking Features


Now that we have read the features and know where they are, let's make it known. We're going to draw boxes over the features we've detected.

One very important note is that Quartz 2D is planning on confusing us and defending the Cyborgs. Rather than the traditional top-left origin of 0,0, the origin is considered to be the bottom left! Fortunately, to keep things sane, everyone seems to acquiesce to this by translating and flipping the context of the drawing space. The following code lets you draw using the exact coordinates we already received from the detected features.

CGContextTranslateCTM(currentContext, 0, @me.size.height)
CGContextScaleCTM(currentContext, 1, -1)

With the CGContext translated and scaled, you can use a simple draw_feature function to place boxes over the detected features. See the code I use below.

def draw_feature(context, atPoint:feature_point)
  size = 6
  startx = feature_point.x - (size / 2)
  starty = feature_point.y - (size / 2)
  CGContextAddRect(context, [[startx, starty], [size, size]])
  CGContextDrawPath(context, KCGPathFillStroke)
end

TADAAAA!!!! You're able to draw using your coordinates!
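To show how these pieces might hang together, here's a minimal sketch of a helper that composites the photo and the markers into one UIImage. It reuses @me and the draw_feature method from above; the mark_features name and the red marker color are my own placeholders rather than the repo's exact drawing code:

def mark_features(features)
  # Draw the photo and the feature markers into an offscreen context,
  # then hand back the composited UIImage.
  UIGraphicsBeginImageContext(@me.size)
  context = UIGraphicsGetCurrentContext()

  # Draw the photo first, while the context still uses UIKit's top-left origin.
  @me.drawInRect([[0, 0], [@me.size.width, @me.size.height]])

  # Now flip the context so Core Image's bottom-left coordinates line up.
  CGContextTranslateCTM(context, 0, @me.size.height)
  CGContextScaleCTM(context, 1, -1)

  # Red, semi-transparent markers (the color choice is arbitrary).
  CGContextSetRGBFillColor(context, 1.0, 0, 0, 0.5)
  CGContextSetRGBStrokeColor(context, 1.0, 0, 0, 1.0)

  features.each do |feature|
    draw_feature(context, atPoint:feature.leftEyePosition)  if feature.hasLeftEyePosition
    draw_feature(context, atPoint:feature.rightEyePosition) if feature.hasRightEyePosition
    draw_feature(context, atPoint:feature.mouthPosition)    if feature.hasMouthPosition
  end

  marked = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
  marked
end

From there you could hand the result to a UIImageView (on the main queue) and admire your handiwork.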

Conclusion

That's all folks! You can detect and draw just like that! To see all this code in a cohesive format, check out the Github repo. In the repo, I draw the image and the graphics on the same context, which is perfect for export. I also toss a box around feature.bounds to identify the face boundaries. If you're interested in cleaning it up, adjusting it to handle scaled-down images, and so on, please send pull requests!

Definitely subscribe to our RSS feed, as we aspire to post a more advanced and fun Face Detection app in the near future :)

CIDetector Class Reference: http://developer.apple.com/library/ios/#documentation/CoreImage/Reference/CIDetector_Ref/Reference/Reference.html
Github Source: https://github.com/IconoclastLabs/RubyMotion-SimpleFace
Gist of all the above code: https://gist.github.com/3297986
RubyMotion Face recognition mustache app: https://github.com/HipByte/RubyMotionSamples/tree/master/Mustache

