C++ API Documentation

Tracker class

class Tracker

The Emotion Tracker class.

Public Types

typedef std::variant<ResultType, ErrorType> ResultOrError

Type representing the result or the error in the callback interface.

Public Functions

Tracker(const std::string &modelFile, int max_concurrency = 0)

Tracker constructor: loads the model file and sets up processing.

Parameters
  • modelFile: path to the model file to load

  • max_concurrency: maximum allowed concurrency; 0 means automatic (use all cores). Default: 0
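
Example (a minimal construction sketch; the include path nel/tracker.hpp and the model file name are placeholders, not part of the documented API):

    #include <nel/tracker.hpp>  // assumed include path; use the actual SDK header

    int main() {
        // 0 lets the tracker pick the concurrency automatically (all cores).
        nel::Tracker tracker("emotion_model.nel", /*max_concurrency=*/0);
        // ... submit frames with track() ...
        return 0;
    }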

~Tracker()

Destructor.

std::future<ResultType> track(const nel::ImageHeader &imageHeader, std::chrono::milliseconds timestamp)

Tracks the given frame asynchronously with the std::future API.

Note

The given ImageHeader doesn’t own the image data; the data is copied internally, so it is safe to delete it after the call returns. See nel::ImageHeader for details.

Note

This call is non-blocking, so it can be called again with the next frame without waiting for the previous result. Also see get_concurrent_calculations().

Note

This is the std::future based API, for callback API see nel::Tracker::track(const nel::ImageHeader&, std::chrono::milliseconds, std::function<void (ResultOrError)>).

Return

a std::future yielding the tracked landmarks and emotions

Parameters
  • imageHeader: image descriptor

  • timestamp: timestamp of the image
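
Example (a sketch of the std::future workflow; the include path is an assumption):

    #include <nel/tracker.hpp>  // assumed include path
    #include <chrono>

    // Submit one frame and block until its result is ready.
    auto trackOneFrame(nel::Tracker &tracker,
                       const nel::ImageHeader &header,
                       std::chrono::milliseconds timestamp) {
        auto result = tracker.track(header, timestamp);  // std::future<ResultType>
        // The pixel buffer behind `header` may be freed or reused as soon as
        // track() returns, because the data is copied internally.
        return result.get();  // blocks until tracking of this frame has finished
    }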

void track(const nel::ImageHeader &imageHeader, std::chrono::milliseconds timestamp, std::function<void(ResultOrError)> callback)

Tracks the given frame asynchronously with a callback API.

Note

The given ImageHeader doesn’t own the image data; the data is copied internally, so it is safe to delete it after the call returns. See nel::ImageHeader for details.

Note

This call is non-blocking, so it can be called again with the next frame without waiting for the previous result. Also see get_concurrent_calculations().

Note

This is the callback based API, for std::future API see nel::Tracker::track(const nel::ImageHeader&, std::chrono::milliseconds).

Parameters
  • imageHeader: image descriptor

  • timestamp: timestamp of the image

  • callback: callback to call with the result
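
Example (a sketch of the callback workflow; the include path is an assumption, and ResultType/ErrorType/ResultOrError are assumed here to be nested in Tracker as listed above; adjust the qualification if they live directly in the nel namespace):

    #include <nel/tracker.hpp>  // assumed include path
    #include <chrono>
    #include <iostream>
    #include <variant>

    // Qualification of the nested result types is an assumption.
    using ResultType = nel::Tracker::ResultType;
    using ErrorType = nel::Tracker::ErrorType;

    // Submit a frame and handle the outcome in the callback.
    void trackWithCallback(nel::Tracker &tracker,
                           const nel::ImageHeader &header,
                           std::chrono::milliseconds timestamp) {
        tracker.track(header, timestamp, [](nel::Tracker::ResultOrError outcome) {
            if (auto *result = std::get_if<ResultType>(&outcome)) {
                std::cout << result->landmarks.landmarks2d.size() << " landmarks, "
                          << result->emotions.size() << " emotion entries\n";
            } else if (auto *error = std::get_if<ErrorType>(&outcome)) {
                std::cerr << "tracking failed: " << error->errorString << "\n";
            }
        });
        // Non-blocking: the next frame can be submitted before this callback runs.
    }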

const std::vector<nel::EmotionID> &get_emotion_IDs() const

Returns the emotion IDs provided by the loaded model.

The order is the same as in the nel::EmotionResults.

See

nel::EmotionResults

Return

A vector of emotion IDs.

const std::vector<std::string> &get_emotion_names() const

Returns the emotion names provided by the loaded model.

The order is the same as in the nel::EmotionResults.

See

nel::EmotionResults

Return

A vector of emotion names.
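
Example (a sketch listing the emotions of the loaded model; the include path is an assumption):

    #include <nel/tracker.hpp>  // assumed include path
    #include <cstddef>
    #include <iostream>

    // get_emotion_IDs() and get_emotion_names() share the ordering used by
    // nel::EmotionResults, so index i refers to the same emotion everywhere.
    void listModelEmotions(const nel::Tracker &tracker) {
        const auto &ids = tracker.get_emotion_IDs();
        const auto &names = tracker.get_emotion_names();
        for (std::size_t i = 0; i < names.size(); ++i) {
            std::cout << names[i] << " (ID " << static_cast<int>(ids[i]) << ")\n";
        }
    }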

uint16_t get_concurrent_calculations() const

Returns the value of the atomic counter for the number of calculations currently running concurrently.

You can use this to limit the number of concurrent calculations.

Return

The (approximate) number of calculations currently in-flight.
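
Example (a sketch of best-effort throttling built on this counter; the include path and the limit of 4 in-flight frames are arbitrary assumptions, not SDK recommendations):

    #include <nel/tracker.hpp>  // assumed include path
    #include <chrono>
    #include <cstdint>
    #include <thread>

    void submitWithBackpressure(nel::Tracker &tracker,
                                const nel::ImageHeader &header,
                                std::chrono::milliseconds timestamp) {
        constexpr uint16_t kMaxInFlight = 4;  // arbitrary example limit
        while (tracker.get_concurrent_calculations() >= kMaxInFlight) {
            // The counter is approximate, so this is only best-effort throttling.
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
        tracker.track(header, timestamp, [](auto /*resultOrError*/) {
            // result handling omitted in this sketch
        });
    }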

bool is_emotion_enabled(nel::EmotionID emoID) const

Returns whether the specified emotion is enabled.

Return

true if the specified emotion is enabled, false otherwise

Parameters
  • emoID: emotion to query

void set_emotion_enabled(nel::EmotionID emoID, bool enable)

Sets the specified emotion to enabled or disabled.

Parameters
  • emoID: emotion to set

  • enable: boolean to set to
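
Example (a sketch that enables only a chosen subset of the model's emotions; the include path is an assumption):

    #include <nel/tracker.hpp>  // assumed include path

    // Enable HAPPY and SURPRISE, disable every other emotion the model provides.
    void enableOnlyHappyAndSurprise(nel::Tracker &tracker) {
        for (nel::EmotionID id : tracker.get_emotion_IDs()) {
            const bool wanted =
                (id == nel::EmotionID::HAPPY || id == nel::EmotionID::SURPRISE);
            if (tracker.is_emotion_enabled(id) != wanted) {
                tracker.set_emotion_enabled(id, wanted);
            }
        }
    }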

bool is_face_tracking_enabled() const

Returns whether the face tracker is enabled.

Return

true if face tracking is enabled, false otherwise

void set_face_tracking_enabled(bool enable)

Sets the face tracker to be enabled or disabled.

Parameters
  • enable: boolean to set to

std::string get_model_name() const

Returns the name (version, etc.) of the loaded model.

Return

name of the model

float get_minimum_face_ratio() const

Gets the current minimum face ratio.

See

set_minimum_face_ratio

Return

current minimum face size as a ratio of the smaller image dimension

void set_minimum_face_ratio(float minimumFaceRatio)

Sets the minimum face ratio.

The minimum face ratio defines the minimum face size the algorithm looks for. The actual size is the smaller image dimension multiplied by the set ratio. For example, with a value of 1/4.8 and VGA input (640x480), the minimum face size is 100x100 pixels.

Warning

Shape alignment and classifier performance can degrade at low resolutions; tracking faces smaller than 75x75 pixels is ill-advised.

Parameters
  • minimumFaceRatio: new minimum face size as a ratio of the smaller image dimension
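
Example (a sketch deriving the ratio from a desired minimum face size in pixels; for a 640x480 input and a 100-pixel minimum this yields 100/480 = 1/4.8, matching the example above; the helper itself is not part of the API and the include path is an assumption):

    #include <nel/tracker.hpp>  // assumed include path
    #include <algorithm>

    void setMinimumFaceSize(nel::Tracker &tracker, int imageWidth, int imageHeight,
                            int minFacePixels) {
        const int smallerDim = std::min(imageWidth, imageHeight);
        const int clamped = std::max(minFacePixels, 75);  // heed the 75x75 warning above
        tracker.set_minimum_face_ratio(static_cast<float>(clamped) / smallerDim);
    }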

Public Static Functions

static nel::Version get_sdk_version()

Returns the version of the SDK (and not the model).

Return

version of the SDK

static std::string get_sdk_version_string()

Returns the version string of the SDK (and not the model).

Return

version string of the SDK
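
Example (a sketch logging the SDK version; both functions are static, so no Tracker instance or loaded model is needed; the include path is an assumption):

    #include <nel/tracker.hpp>  // assumed include path
    #include <iostream>

    void logSdkVersion() {
        std::cout << "SDK version: " << nel::Tracker::get_sdk_version_string() << "\n";
    }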

struct ErrorType

The ErrorType struct.

Public Members

std::string errorString

human-readable description of the error that occurred

struct ResultType

The ResultType struct.

Public Members

nel::LandmarkData landmarks

Tracked landmarks.

nel::EmotionResults emotions

Detected emotions.

Image header class

struct ImageHeader

Descriptor class for image data (non-owning)

Public Members

const uint8_t *data

pointer to the byte array of the image

int width

width of the image in pixels

int height

height of the image in pixels

int stride

length of one row of pixels in bytes (e.g. 3*width + padding)

nel::ImageFormat format

image format

enum nel::ImageFormat

Values:

Grayscale = 0

8-bit grayscale

RGB = 1

24-bit RGB

RGBA = 2

32-bit RGBA or 32-bit RGB with an unused fourth byte

BGR = 3

24-bit BGR

BGRA = 4

32-bit BGRA or 32-bit BGR with an unused fourth byte
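
Example (a sketch describing a tightly packed RGB frame; the include path is an assumption, and ImageHeader is assumed to be default-constructible as a plain aggregate):

    #include <nel/tracker.hpp>  // assumed include path
    #include <cstdint>
    #include <vector>

    // The header does not own `pixels`; because track() copies the data internally,
    // the buffer only needs to stay alive until track() returns.
    nel::ImageHeader makeRgbHeader(const std::vector<uint8_t> &pixels,
                                   int width, int height) {
        nel::ImageHeader header;
        header.data = pixels.data();
        header.width = width;
        header.height = height;
        header.stride = 3 * width;  // 24-bit RGB, no row padding in this example
        header.format = nel::ImageFormat::RGB;
        return header;
    }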

Result classes

enum nel::EmotionID

IDs for the supported emotions/behaviours.

Values:

CONFUSION = 0
CONTEMPT = 1
DISGUST = 2
FEAR = 3
HAPPY = 4
EMPATHY = 5
SURPRISE = 6
ATTENTION = 100
PRESENCE = 101
EYES_ON_SCREEN = 102
FACE_DETECTION = 103

LandmarkData

struct LandmarkData

The LandmarkData struct.

Public Members

double scale

scale of the face

double roll

roll pose angle

double yaw

yaw pose angle

double pitch

pitch pose angle

nel::Point2d translate

position of the head center in image coordinates

std::vector<nel::Point2d> landmarks2d

position of the 49 landmarks, in image coordinates

std::vector<nel::Point3d> landmarks3d

position of the 49 landmarks, in an unscaled, face-centered 3D space

bool isGood

whether the tracking is good quality or not
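
Example (a sketch consuming a LandmarkData result; the include path is an assumption):

    #include <nel/tracker.hpp>  // assumed include path
    #include <iostream>

    void printPose(const nel::LandmarkData &lm) {
        if (!lm.isGood) {
            return;  // tracking quality too low for this frame
        }
        std::cout << "head at (" << lm.translate.x << ", " << lm.translate.y << "), "
                  << "yaw=" << lm.yaw << " pitch=" << lm.pitch << " roll=" << lm.roll
                  << ", " << lm.landmarks2d.size() << " landmarks\n";
    }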

Point2d

struct Point2d

Public Members

double x
double y

Point3d

struct Point3d

Public Members

double x
double y
double z

EmotionResults

typedef std::vector<nel::EmotionData> nel::EmotionResults

EmotionResults.

Vector of emotion data; the order of emotions is the same as in nel::Tracker::get_emotion_names().

See

nel::Tracker::get_emotion_names().

EmotionData

struct EmotionData

The EmotionData struct.

Public Members

double probability

probability of the emotion

bool isActive

whether the probability is higher than an internal threshold

bool isDetectionSuccessful

whether the tracking quality was good enough to reliably detect this emotion

EmotionID emotionID

ID of the emotion.
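
Example (a sketch pairing EmotionResults entries with the model's emotion names; the include path is an assumption):

    #include <nel/tracker.hpp>  // assumed include path
    #include <cstddef>
    #include <iostream>

    // EmotionResults uses the same ordering as get_emotion_names(), and each
    // entry also carries its own emotionID.
    void printEmotionProbabilities(const nel::Tracker &tracker,
                                   const nel::EmotionResults &emotions) {
        const auto &names = tracker.get_emotion_names();
        for (std::size_t i = 0; i < emotions.size(); ++i) {
            const nel::EmotionData &e = emotions[i];
            if (!e.isDetectionSuccessful) {
                continue;  // tracking quality too low to judge this emotion
            }
            std::cout << names[i] << ": " << e.probability
                      << (e.isActive ? " (active)" : "") << "\n";
        }
    }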