In this guide, we will show you how to integrate your application with the Realeyes Demographic Estimation Library.
The Native Demographic Estimation SDK is tested on the following operating systems:
The list of dependencies and licensing information for the Demographic Estimation Library is available here.
You will need a model file for this library to work.
The latest version of the Demographic Estimation Library is published on demand. To request the package with the library and the model file, please visit the Developers Portal SDK page (login required).
Usage
The first step to use the Demographic Estimation Library is to import the library into your project.
After that you can instantiate a DemographicEstimator object.
You should provide the model file name and, optionally, the maximum number of concurrent calculations in the background as parameters.
You can then call detectFaces() and estimate() multiple times.
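In the C++ binding both calls return futures (note the .get() calls in the example below), so several requests can be in flight at once, bounded by the concurrency limit passed to the constructor. A minimal sketch of that pattern using only Python's standard library; fake_estimate is a stand-in of our own, not part of the SDK:

```python
# Illustrates bounded background calculations; the real library manages its
# own pool internally, so this only mirrors the pattern, not the Realeyes API.
from concurrent.futures import ThreadPoolExecutor

def fake_estimate(face_id):
    # Stand-in for an estimate() call on one detected face.
    return "estimations for face %d" % face_id

with ThreadPoolExecutor(max_workers=2) as pool:   # at most 2 concurrent jobs
    futures = [pool.submit(fake_estimate, i) for i in range(4)]
    results = [f.result() for f in futures]       # like future.get() in C++

print(results)
```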
To analyze one image for faces and estimations, you can do the following:
The following example shows the basic usage of the library, using OpenCV to load an image from disk and feed it to the Demographic Estimation Library:
#include "demographicestimator.h"
#include "opencv2/opencv.hpp"

int main()
{
    cv::Mat img = cv::imread("1.png");
    del::ImageHeader imageHeader = {img.data, img.cols, img.rows, static_cast<int>(img.step), del::ImageFormat::BGR};
    del::DemographicEstimator de("model_de.realZ", 0);
    auto faces = de.detectFaces(imageHeader).get();
    for (auto& face : faces)
    {
        auto estimations = de.estimate(face).get();
    }
    return 0;
}
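One detail worth noting in the example above: the ImageHeader takes the row stride (img.step) explicitly rather than assuming width × channels, because OpenCV may pad rows for alignment. A quick stdlib-only sketch of the relationship; the helper is ours, not part of the SDK:

```python
def row_stride(width, bytes_per_pixel, alignment=1):
    """Bytes from the start of one row to the next, padded to `alignment`."""
    packed = width * bytes_per_pixel
    return (packed + alignment - 1) // alignment * alignment

# A tightly packed 640-pixel BGR row is 1920 bytes...
print(row_stride(640, 3))      # 1920
# ...and with 4-byte row alignment, a 639-pixel row pads 1917 up to 1920.
print(row_stride(639, 3, 4))   # 1920
```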
The latest version of the Demographic Estimation Library is published on pypi.org. You can install it with this command: 'pip install realeyes.demographic_estimation'.
You will need a model file for this library to work. To request the model file please visit the Developers Portal SDK page (login required).
Usage
The first step to use the Demographic Estimation Library is to import the realeyes.demographic_estimation module. After that you can instantiate a DemographicEstimator object.
You should provide the model file name and the maximum number of concurrent calculations in the background as parameters.
You can then call detect_faces() and estimate() multiple times.
To analyze one image for faces and estimations, you can do the following:
The following example shows the basic usage of the library, using OpenCV to load an image from disk and feed it to the Demographic Estimation Library:
import cv2
import realeyes.demographic_estimation as de
img = cv2.imread("1.png")
estimator = de.DemographicEstimator("model_de.realZ", 0)
faces = estimator.detect_faces(img)
estimations = []
for face in faces:
estimations.append(estimator.estimate(face))
The latest version of the Demographic Estimation Library is published on nuget.org. You can simply search for the NuGet package called Realeyes.DemographicEstimation and add it to your project.
You will need a model file for this library to work. To request the model file please visit the Developers Portal SDK page (login required).
Usage
The first step is to make sure you have imported the DemographicEstimation namespace in your source file.
Then you can instantiate a DemographicEstimator object.
You should provide the model file name as a parameter and, optionally, the maximum number of concurrent background calculations (default: 0, which means automatic).
To analyze an image, first call the DetectFaces() method. This method returns a Faces object, which has two methods: Count() and GetFace(). You can iterate through the detected Face objects with these two methods.
After you have detected the faces in the image, you can call Estimate() on each Face object. This method returns an Outputs object, which has two methods: Count() and GetEstimation(). You can iterate through the Output objects with these two methods.
The Output object has the following fields:
- name - the name of the estimation
- type - the type of the estimation (Age/Gender)
- gender - the estimated gender (only valid if type is Gender)
- age - the estimated age (only valid if type is Age)
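The fields above form a small tagged record: type selects which of age or gender is meaningful. A sketch of the same dispatch in plain Python; the dict shape is our illustration, not the SDK's own Python type:

```python
def describe(output):
    # Mirrors the type-based switch: only the field matching `type` is valid.
    if output["type"] == "Age":
        return "%s: age %d" % (output["name"], output["age"])
    if output["type"] == "Gender":
        return "%s: gender %s" % (output["name"], output["gender"])
    return output["name"]

print(describe({"name": "age_est", "type": "Age", "age": 27}))
# age_est: age 27
```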
The following example shows the basic usage of the library, using ImageSharp to load an image from disk and feed it to the Demographic Estimation Library:
using DemographicEstimation;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using System.Runtime.CompilerServices;
using System.Threading;
string png1_file = "1.png";
Image<Rgb24> img1 = SixLabors.ImageSharp.Image.Load<Rgb24>(png1_file);
byte[] bytes1 = new byte[img1.Width * img1.Height * Unsafe.SizeOf<Rgb24>()];
img1.CopyPixelDataTo(bytes1);
ImageHeader img1_hdr = new ImageHeader(bytes1, img1.Width, img1.Height,
img1.Width * Unsafe.SizeOf<Rgb24>(), ImageFormat.RGB);
DemographicEstimator estimator = new DemographicEstimator("model_de.realZ", 0);
Faces faces = (await estimator.DetectFaces(img1_hdr)).Results;
Dictionary<int, Outputs> all_outputs = new Dictionary<int, Outputs>();
for (int i = 0; i < faces.Count(); ++i)
{
Face face = faces.GetFace(i);
all_outputs[i] = (await estimator.Estimate(face)).Results;
}
foreach (var it in all_outputs)
{
Outputs outputs = it.Value;
int key = it.Key;
for (int i = 0; i < outputs.Count(); ++i)
{
Output output = await outputs.GetEstimation(i);
Console.WriteLine("{0}: {1} - {2} - {3}", key, output.name, output.type, output.type == OutputType.Age ? output.age : output.gender);
}
}
foreach (var it in all_outputs)
{
it.Value.Dispose();
}
faces.Dispose();
estimator.Dispose();
The latest version of the Demographic Estimation Plugin is published on the Unity Asset Store. You can simply search for the package called Realeyes.DemographicEstimation and add it to your project.
You will need a model file for this Plugin to work. To request the model file please visit the Developers Portal SDK page (login required).
Usage
The first step is to make sure you have imported the DemographicEstimation namespace in your source file.
Then you can instantiate a DemographicEstimator object.
You should provide the model file name as a parameter and, optionally, the maximum number of concurrent background calculations (default: 0, which means automatic).
To analyze an image, first call the DetectFaces() method. This method returns a Faces object, which has two methods: Count() and GetFace(). You can iterate through the detected Face objects with these two methods.
After you have detected the faces in the image, you can call Estimate() on each Face object. This method returns an Outputs object, which has two methods: Count() and GetEstimation(). You can iterate through the Output objects with these two methods.
The Output object has the following fields:
- name - the name of the estimation
- type - the type of the estimation (Age/Gender)
- gender - the estimated gender (only valid if type is Gender)
- age - the estimated age (only valid if type is Age)
The following example shows the basic usage of the Plugin, using a WebCamTexture to capture frames from the camera and feed them to the Demographic Estimation Library:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using DemographicEstimation;
public class Main : MonoBehaviour
{
public string deviceName;
WebCamTexture wct;
DemographicEstimator de;
// Start is called before the first frame update
void Start()
{
de = new DemographicEstimator("./delmodel.realZ", 0);
WebCamDevice[] devices = WebCamTexture.devices;
deviceName = devices[0].name;
wct = new WebCamTexture(deviceName, 640, 480, 12);
Renderer renderer = GetComponent<Renderer>();
renderer.material.mainTexture = wct;
renderer.enabled = true;
wct.Play();
}
// Update is called once per frame
void Update()
{
GetComponent<Renderer>().material.mainTexture = wct;
}
string labelString = "";
void OnGUI()
{
if (GUI.Button(new Rect(10.0f, 110.0f, 150.0f, 30.0f), "Check"))
TakeSnapshot();
GUI.Label(new Rect(10.0f, 70.0f, 300.0f, 30.0f), labelString);
}
void TakeSnapshot()
{
Texture2D snap = new Texture2D(wct.width, wct.height);
snap.SetPixels(wct.GetPixels());
snap.Apply();
ImageHeader img = new ImageHeader(snap.GetRawTextureData(), snap.width, snap.height, snap.width * 4, ImageFormat.RGBA);
var task = de.DetectFaces(img);
task.Wait();
Faces faces = task.Result.Results;
if (faces.Count() >= 1)
{
var task_embed = de.Estimate(faces.GetFace(0));
task_embed.Wait();
Outputs est = task_embed.Result.Results;
labelString = "Estimation:";
for (int i = 0; i < est.Count(); ++i)
{
Output o = est.GetEstimation(i);
switch (o.type)
{
case OutputType.Age: labelString += " age: " + o.age.ToString(); break;
case OutputType.Gender: labelString += " gender: " + (o.gender == Gender.Female ? "female" : "male"); break;
default: break;
}
}
}
}
}
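The ImageHeader built in TakeSnapshot() assumes a 4-bytes-per-pixel RGBA layout: the stride is width × 4 and GetRawTextureData() should return width × height × 4 bytes. The arithmetic for the requested 640×480 capture, as a quick sanity check:

```python
width, height, bytes_per_pixel = 640, 480, 4  # RGBA: one byte per channel
stride = width * bytes_per_pixel              # bytes per row
buffer_size = stride * height                 # expected raw buffer length
print(stride, buffer_size)                    # 2560 1228800
```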