Getting started
In this guide, we will show you how to integrate your application with the Realeyes Facial expression detection Library.
After completing this guide, you will know:
- the system requirements for the integration with the Facial expression detection Library,
- how to include the Facial expression detection Library in your application so that it can analyze images.
Minimum system requirements
The Native Facial expression detection SDK is tested on the following operating systems:
- Windows 10
- Ubuntu 22.04 LTS
The C++ SDK has the following minimum system requirements:
- C++17 compatible compiler
- At least 1 GB of RAM
The Python SDK has the following minimum system requirements:
- Python 3.8 - Python 3.11
- At least 1 GB of RAM
The dotnet SDK has the following minimum system requirements:
- .NET 6.0
- At least 1 GB of RAM
The Unity SDK has the following minimum system requirements:
- Unity 2022.3 or later
- At least 1 GB of RAM
Adding the Facial expression detection Library to Your App
The list of dependencies and licensing information for the Facial expression detection Library is available here.
C++ SDK
You will need a model file for this library to work.
The latest version of the Facial expression detection Library is published on demand. To request the package with the library and the model file, please visit the Developers Portal SDK page (login required).
Usage
The first step to use the Facial expression detection Library is to include it in your project.
After that you can instantiate a Tracker object.
In the constructor parameters, you should provide the model file name and, optionally, the maximum number of concurrent calculations to run in the background.
You can then call track() multiple times; a sketch with several queued calls follows the basic example below.
To analyze one image for faces and estimations, do the following:
- call track() to get the detected emotions for an image.
The following example shows the basic usage of the library:
include "tracker.h"
include "opencv2/opencv.hpp"
void main()
{
cv::Mat img = cv2::imread("1.png")
nel::ImageHeader imageHeaders = {img.data, img.cols, img.rows, static_cast<int>(img.step), del::ImageFormat::BGR};
nel::Tracker tracker("model_fe.realZ", 0)
auto emotions = tracker.track(img).get();
}
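The concurrency parameter is easiest to see with more than one image in flight. The sketch below is illustrative rather than official sample code: it reuses only the types and calls shown above (nel::Tracker, nel::ImageHeader, track() and get()), assumes that several track() calls can be queued before their results are collected, and uses a placeholder second image file.
#include "tracker.h"
#include "opencv2/opencv.hpp"

int main()
{
    nel::Tracker tracker("model_fe.realZ", 0);

    // Load two images; "2.png" is a placeholder file name.
    cv::Mat first = cv::imread("1.png");
    cv::Mat second = cv::imread("2.png");

    nel::ImageHeader firstHeader = {first.data, first.cols, first.rows,
                                    static_cast<int>(first.step), nel::ImageFormat::BGR};
    nel::ImageHeader secondHeader = {second.data, second.cols, second.rows,
                                     static_cast<int>(second.step), nel::ImageFormat::BGR};

    // Queue both images before requesting either result, so the tracker can
    // work on them in the background.
    auto firstPending = tracker.track(firstHeader);
    auto secondPending = tracker.track(secondHeader);

    auto firstEmotions = firstPending.get();
    auto secondEmotions = secondPending.get();
    return 0;
}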
Python SDK
The latest version of the Facial expression detection Library is published on pypi.org. You can install it with this command: 'pip install realeyes.emotion_detection'.
You will need a model file for this library to work. To request the model file, please visit the Developers Portal SDK page (login required).
Usage
The first step to use the Facial expression detection Library is to import the realeyes.emotion_detection module. After that you can instantiate a Tracker object.
In the constructor parameters, you should provide the model file name and the maximum number of concurrent calculations to run in the background.
You can then call track() multiple times.
To analyze one image for faces and estimations, do the following:
- call track() to get the detected emotions for an image.
The following example shows the basic usage of the library using OpenCV to load an image from disk and feed it to the tracker (a camera-capture variant is sketched after this example):
import cv2
import realeyes.emotion_detection as em
img = cv2.imread("1.png")
tr = em.Tracker("model_em.realZ", 0)
emotions = tr.track(img, 0)
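If you want to analyze live camera frames rather than a single file, the same track() call can be driven from an OpenCV capture loop. The following is a minimal, illustrative sketch rather than official sample code; it reuses only the Tracker constructor and the track() call shown above, passing the same second argument as in the single-image example.
import cv2
import realeyes.emotion_detection as em

tr = em.Tracker("model_em.realZ", 0)
cap = cv2.VideoCapture(0)  # default camera

for _ in range(100):  # analyze a fixed number of frames for this sketch
    ok, frame = cap.read()
    if not ok:
        break
    # Same call as in the single-image example above.
    emotions = tr.track(frame, 0)

cap.release()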
dotnet SDK
The latest version of the Facial expression detection Library is published on nuget.org. You can simply search for the NuGet package called Realeyes.EmotionDetection and add it to your project.
You will need a model file for this library to work. To request the model file, please visit the Developers Portal SDK page (login required).
Usage
The first step is to make sure you import the EmotionDetection namespace in your source file.
Then you can instantiate an EmotionsTracker object.
In the constructor parameters, you should provide the model file name and the maximum number of concurrent calculations to run in the background (default: 0, which means automatic).
To analyze an image you need to call the TrackEmotions() method. This method will return the emotions found in the image or, in case of an issue, an error string.
The following example shows the basic usage of the library using ImageSharp to load an image from disk and feed it to the tracker:
using EmotionDetection;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using System.Runtime.CompilerServices;

string png1_file = "1.png";

// Load the image and copy its pixels into a raw byte buffer.
Image<Rgb24> img1 = SixLabors.ImageSharp.Image.Load<Rgb24>(png1_file);
byte[] bytes1 = new byte[img1.Width * img1.Height * Unsafe.SizeOf<Rgb24>()];
img1.CopyPixelDataTo(bytes1);

// Describe the buffer layout for the tracker.
ImageHeader img1_hdr = new ImageHeader(bytes1, img1.Width, img1.Height,
    img1.Width * Unsafe.SizeOf<Rgb24>(), ImageFormat.RGB);

EmotionsTracker tracker = new EmotionsTracker("model_de.realZ", 0);
var results = await tracker.TrackEmotions(img1_hdr);
tracker.Dispose();
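Because TrackEmotions() reports either the detected emotions or an error string, it is worth checking the result before reading it. The sketch below is an assumption-heavy illustration rather than documented usage: it assumes the returned object exposes the nullable Results value seen in the Unity example further down, and it does not name the error property, since that is not shown in these examples.
using System;
using System.Runtime.CompilerServices;
using EmotionDetection;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

Image<Rgb24> img = SixLabors.ImageSharp.Image.Load<Rgb24>("1.png");
byte[] bytes = new byte[img.Width * img.Height * Unsafe.SizeOf<Rgb24>()];
img.CopyPixelDataTo(bytes);
ImageHeader hdr = new ImageHeader(bytes, img.Width, img.Height,
    img.Width * Unsafe.SizeOf<Rgb24>(), ImageFormat.RGB);

EmotionsTracker tracker = new EmotionsTracker("model_de.realZ", 0);
var result = await tracker.TrackEmotions(hdr);

if (result.Results.HasValue)
{
    // The detected emotions, using the same types as the Unity example below.
    EmotionResults emotions = result.Results.Value;
    foreach (EmotionData ed in emotions.emotions)
    {
        Console.WriteLine(ed.emotionID + " - " + ed.probability);
    }
}
else
{
    // In case of an issue the result carries an error string instead; consult the
    // package documentation for how to read it from the result object.
    Console.WriteLine("Tracking failed.");
}

tracker.Dispose();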
Unity SDK
The latest version of the Emotion Detection Plugin is published on the Unity Asset Store. You can simply search for the package called Realeyes.EmotionDetection and add it to your project.
You will need a model file for this Plugin to work. To request the model file, please visit the Developers Portal SDK page (login required).
Usage
The first step is to make sure you import the EmotionDetection namespace in your source file.
Then you can instantiate an EmotionsTracker object.
In the constructor parameters, you should provide the model file name and the maximum number of concurrent calculations to run in the background (default: 0, which means automatic).
To analyze an image you need to call the TrackEmotions() method. This method will return the emotions found in the image or, in case of an issue, an error string.
The following example shows the basic usage of the Plugin using Unity's WebCamTexture to capture frames from the camera and feed them to the tracker:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using EmotionDetection;

public class Main : MonoBehaviour
{
    public string deviceName;
    WebCamTexture wct;
    EmotionsTracker em;

    // Start is called before the first frame update
    void Start()
    {
        // Create the tracker; the second parameter is the maximum number of concurrent background calculations.
        em = new EmotionsTracker("./nelmodel.realZ", 0);

        // Show the first available camera on this object's material.
        WebCamDevice[] devices = WebCamTexture.devices;
        deviceName = devices[0].name;
        wct = new WebCamTexture(deviceName, 640, 480, 12);
        Renderer renderer = GetComponent<Renderer>();
        renderer.material.mainTexture = wct;
        renderer.enabled = true;
        wct.Play();
    }

    // Update is called once per frame
    void Update()
    {
        GetComponent<Renderer>().material.mainTexture = wct;
    }

    string labelString = "";

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 300, 90), labelString);
        if (GUI.Button(new Rect(10, 110, 150, 30), "Check Face"))
            TakeSnapshot();
        if (GUI.Button(new Rect(10, 200, 150, 30), "Exit"))
            Application.Quit();
    }

    void TakeSnapshot()
    {
        // Copy the current camera frame into a Texture2D (RGBA32 by default).
        Texture2D snap = new Texture2D(wct.width, wct.height);
        snap.SetPixels(wct.GetPixels());
        snap.Apply();

        // Describe the raw pixel buffer for the tracker, using the actual
        // snapshot size rather than the requested camera resolution.
        ImageHeader img = new ImageHeader(snap.GetRawTextureData(), snap.width, snap.height,
            snap.width * 4, ImageFormat.RGBA);

        var task = em.TrackEmotions(img);
        task.Wait();
        EmotionResults emotions = task.Result.Results.Value;

        labelString = "";
        foreach (EmotionData ed in emotions.emotions)
        {
            labelString += " " + ed.emotionID.ToString() + " - " + ed.probability.ToString();
        }
    }
}