Python API Documentation (Experimental)

Tracker class

class native_emotions_library.Tracker(model_file, max_concurrency=0)

The Emotion Tracker class

__init__(self, model_file, max_concurrency=0)

Tracker constructor: loads the model file and sets up processing.

Parameters:
  • model_file (str) – path of the model file to load

  • max_concurrency (int) – maximum allowed concurrency; 0 means automatic (use all cores). Default: 0
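
A minimal construction sketch; the model path below is a placeholder, not a file shipped with the library:

    from native_emotions_library import Tracker

    # "emotions.model" is a placeholder; point this at your actual model file.
    # max_concurrency=0 lets the library pick a thread count automatically.
    tracker = Tracker("emotions.model", max_concurrency=0)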

track(image, timestamp_in_ms)

Tracks the given frame.

Parameters:
  • image (numpy.ndarray) – frame from the video

  • timestamp_in_ms (int) – timestamp of the frame, in milliseconds

Return type:

TrackingResult
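
A sketch of a tracking loop; reading frames with OpenCV is an assumption for illustration, and any numpy.ndarray image source works. It reuses the tracker constructed above:

    import cv2

    cap = cv2.VideoCapture("input.mp4")  # placeholder video path
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Millisecond position of the current frame in the video.
        ts_ms = int(cap.get(cv2.CAP_PROP_POS_MSEC))
        result = tracker.track(frame, ts_ms)
    cap.release()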

get_emotion_ids()

Returns the emotion IDs provided by the loaded model. The order is the same as in the TrackingResult.

Return type:

list[EmotionID]

get_emotion_names()

Returns the emotion names provided by the loaded model. The order is the same as in the TrackingResult.

Return type:

list[str]
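
Because both lists share the TrackingResult ordering, IDs and names can be paired directly; a short sketch:

    ids = tracker.get_emotion_ids()
    names = tracker.get_emotion_names()
    for emotion_id, name in zip(ids, names):
        print(emotion_id, name)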

get_model_name()

Returns the name (including version information) of the loaded model.

Return type:

str

minimum_face_ratio: float

Current minimum face size as a ratio of the smaller image dimension.
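
For example, assuming the attribute is writable, to skip faces smaller than 10% of the smaller image dimension (the value is illustrative):

    tracker.minimum_face_ratio = 0.1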

is_face_tracking_enabled()

Returns whether the face tracker is enabled.

Return type:

bool

set_face_tracking_enabled(enable: bool)

Enables or disables the face tracker.

Parameters:

enable (bool) – new value
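
A short sketch combining the getter and setter:

    if not tracker.is_face_tracking_enabled():
        tracker.set_face_tracking_enabled(True)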

is_emotion_enabled(emotion_id)

Returns whether the specified emotion is enabled.

Parameters:

emotion_id (EmotionID) – emotion to query

Return type:

bool

set_emotion_enabled(emotion_id: EmotionID, enable: bool)

Enables or disables the specified emotion.

Parameters:
  • emotion_id (EmotionID) – emotion to set

  • enable (bool) – new value
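
A sketch that disables one emotion, verifies the change, and re-enables it; CONTEMPT is chosen arbitrarily:

    from native_emotions_library import EmotionID

    tracker.set_emotion_enabled(EmotionID.CONTEMPT, False)
    assert not tracker.is_emotion_enabled(EmotionID.CONTEMPT)
    tracker.set_emotion_enabled(EmotionID.CONTEMPT, True)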

Result classes

EmotionID

class native_emotions_library.EmotionID
CONFUSION = 0
CONTEMPT = 1
DISGUST = 2
FEAR = 3
HAPPY = 4
EMPATHY = 5
SURPRISE = 6
ATTENTION = 100
PRESENCE = 101
EYES_ON_SCREEN = 102
FACE_DETECTION = 103

TrackingResult

class native_emotions_library.TrackingResult
emotions: list[EmotionData]

Tracked emotions. See EmotionData

landmarks: LandmarkData

Tracked landmarks. See LandmarkData

to_json()

Converts the data to JSON-compatible dicts and lists.

Return type:

dict
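
Since to_json() returns plain dicts and lists, its output can go straight to the standard json module; a sketch assuming result is a TrackingResult returned by track():

    import json

    serialized = json.dumps(result.to_json())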

LandmarkData

class native_emotions_library.LandmarkData
scale: float

Scale of the face.

roll: float

Roll pose angle.

yaw: float

Yaw pose angle.

pitch: float

Pitch pose angle.

translate: list[Point2d]

Position of the head center in image coordinates.

landmarks2d: list[Point2d]

Positions of the 49 landmarks, in image coordinates.

landmarks3d: list[Point3d]

Positions of the 49 landmarks, in an un-scaled face-centered 3D space.

is_good: bool

Whether the tracking is of good quality.

to_json()

Converts the data to JSON-compatible dicts and lists.

Return type:

dict
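
A sketch reading pose and landmark data, assuming result is a TrackingResult returned by track():

    lm = result.landmarks
    if lm.is_good:
        print(f"roll={lm.roll:.2f} yaw={lm.yaw:.2f} pitch={lm.pitch:.2f}")
        # Each entry is a Point2d in image coordinates.
        for point in lm.landmarks2d:
            print(point.x, point.y)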

Point2d

class native_emotions_library.Point2d
x: float
y: float
to_json()

Converts the data to JSON-compatible dicts and lists.

Return type:

dict

Point3d

class native_emotions_library.Point3d
x: float
y: float
z: float
to_json()

Converts the data to JSON-compatible dicts and lists.

Return type:

dict

EmotionData

class native_emotions_library.EmotionData
probability: float

Probability of the emotion.

is_active: bool

Whether the probability is higher than an internal threshold.

is_detection_successful: bool

Whether the tracking quality was good enough to reliably detect this emotion.

emotion_id: EmotionID

ID of the emotion. See EmotionID
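
A sketch reporting active emotions, assuming result is a TrackingResult returned by track() and that result.emotions is a list ordered as get_emotion_ids():

    for emotion in result.emotions:
        if emotion.is_detection_successful and emotion.is_active:
            print(emotion.emotion_id, f"{emotion.probability:.2f}")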