Thursday, April 9, 2020

Microsoft Azure Vision API - Face Detection

This post is a continuation of Part 1: Microsoft Azure Vision API - Computer Vision

Let's play around with the Face API, which is part of Microsoft Azure Vision. As with Computer Vision, we need a Microsoft account, a valid subscription, and a Face resource, which provides the key and endpoint required to access the Face API services. Check out Part 1 for the creation steps.

The purpose of the Face API is to detect faces in an image and return attributes about them, such as the predicted age, the emotion, and other facial features.


First, we select a face image. It can be a local file or a URL.
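As a minimal sketch (the URL and filename below are placeholders, not values from the post), the image can be referenced either way:

```python
# The Face API accepts either a public image URL or raw image bytes.
# This URL is a placeholder -- substitute a real face image.
image_url = "https://example.com/face.jpg"

# Local-file alternative: read the raw bytes and send them as the request
# body, with Content-Type set to application/octet-stream instead of JSON.
# with open("face.jpg", "rb") as f:
#     image_bytes = f.read()
```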

Next, we copy the subscription secret key and endpoint URL from the Azure resource we created in Part 1.
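A sketch of that step, with placeholder values standing in for the key and endpoint copied from the Azure portal (the `/face/v1.0/detect` path is the Face API's detection endpoint):

```python
# Placeholders -- substitute the key and endpoint of your own Face resource.
subscription_key = "<your-face-api-key>"
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"

# Full URL of the face-detection operation.
face_api_url = endpoint + "/face/v1.0/detect"
```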

Then we build the REST API request, which consists of headers, parameters, and data. Each part is formed as a dictionary object.

What we want to know about the image goes in the 'returnFaceAttributes' field of the parameters object. In this example, I want to know the age, gender, head pose, and facial hair of the subject. I also want to check whether the person is wearing glasses or makeup.
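The three dictionaries described above can be sketched as follows. The key and image URL are placeholders; the attribute names match what the Face API's detect operation accepts:

```python
# Placeholders -- in practice these come from the earlier steps.
subscription_key = "<your-face-api-key>"
image_url = "https://example.com/face.jpg"

# Headers: the secret key plus the content type (JSON, since we send a URL).
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}

# Parameters: the face attributes we want the service to return.
params = {
    "returnFaceId": "true",
    "returnFaceAttributes": "age,gender,headPose,facialHair,glasses,emotion,makeup",
}

# Data: the JSON body pointing at the image to analyze.
data = {"url": image_url}
```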

That finishes the request part. Let's try to send it and hope no errors happen.


A return code of 200 means the request was processed successfully. Let's explore the attributes returned in the response.
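Sending the request might look like the sketch below, assuming the `requests` library and the `headers`/`params`/`data` dictionaries built earlier (the function wrapper is mine, added so the call site stays tidy):

```python
import requests

def detect_faces(face_api_url, headers, params, data):
    """POST the detect request and return the parsed JSON on success.

    The Face API replies with a JSON list, one entry per detected face.
    A non-200 status raises an HTTPError carrying the service's error body.
    """
    response = requests.post(face_api_url, headers=headers,
                             params=params, json=data)
    response.raise_for_status()  # anything other than 200 is an error here
    return response.json()
```

With a valid key and endpoint, `faces = detect_faces(face_api_url, headers, params, data)` returns the list explored below.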

The JSON object is pretty-printed with pprint. From the data extracted from the image, it is a man whose predicted age is 37. His emotion is predicted as neutral with 99% confidence, and he doesn't wear glasses either. Wanna guess who he is?
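A sketch of that shape, using illustrative values that match the attributes discussed in the post (the faceId is a dummy placeholder, and the confidence numbers are rounded examples, not the service's exact output):

```python
from pprint import pprint

# Illustrative sample of the detect response: a list, one dict per face.
faces = [
    {
        "faceId": "00000000-0000-0000-0000-000000000000",  # placeholder id
        "faceAttributes": {
            "age": 37.0,
            "gender": "male",
            "glasses": "NoGlasses",
            "emotion": {"neutral": 0.99, "happiness": 0.01},
        },
    }
]

pprint(faces)  # nested dicts print far more readably than with print()

# Pull out the fields discussed in the post.
attrs = faces[0]["faceAttributes"]
predicted_emotion = max(attrs["emotion"], key=attrs["emotion"].get)
```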

It is Emmanuel Macron, the French President. Well, in the picture he looks five years younger than he actually is.

