Getting Started with Android
In this guide, we will show you how to integrate your Android application with the Realeyes Experience Platform using our ExperienceSDK for Android.
After completing this guide, you will know
- the system requirements for the integration with the ExperienceSDK,
- how to include the ExperienceSDK in your application so that it can take and analyze images from the Android device's "selfie" camera,
- what steps are needed to see the first facial expression data arrive from the device to our Experience Dashboard.
The ExperienceSDK uses Kotlin suspend functions and Flows in its API. From this point on, we assume that you are familiar with Kotlin coroutines, suspend functions and Flows. You can read about these concepts in detail in the Kotlin coroutines documentation.
Tip
You can try out our Emotion Booth application showcasing the features of the ExperienceSDK. The application is available for download on the Developers Portal.
ExperienceSDK Requirements
The ExperienceSDK has the following minimum system requirements.
- Minimum Android SDK version: 23 (Android 6)
- Front facing camera with 640x480 minimum resolution
- At least 1 GB of RAM
You also need to generate an account hash on the Developers Portal to be able to initialize and use the ExperienceSDK.
Adding the ExperienceSDK to Your App
The latest version of the ExperienceSDK is published on Realeyes' own Maven repository. To include it in your app dependencies, first declare the Realeyes Maven repository in the repositories section of your build.gradle or your settings.gradle file (whichever contains the repository declarations in your case):
build.gradle or settings.gradle
repositories {
    // ...
    maven {
        url "https://maven.realeyesit.com"
    }
}
Next, add the ExperienceSDK as a dependency in your dependencies section:
build.gradle
dependencies {
    // ...
    implementation("com.realeyesit.experiencesdk:experiencesdk:0.14.0")
}
Now sync your Android Studio project with the Gradle files so that the ExperienceSDK classes become available in your code.
Working with the ExperienceSDK
When you embed the ExperienceSDK in a host application and start it, it detects certain events on the device, such as facial expression detection events, analytics events, and custom events for user segmentation. We refer to the detection or generation of these events as event collection.
The ExperienceSDK also emits (exposes) the events to the host application for client-side handling, and sends the events to our Experience Dashboard for further processing.
The main entry point to the ExperienceSDK is the ExperienceSdk singleton object. It uses Kotlin suspend functions for handling asynchronous operations.
To start the event collection, you first need to initialize the ExperienceSDK in a coroutine. We recommend doing this in your application initialization code, e.g. in Application.onCreate():
override fun onCreate() {
    super.onCreate()
    val scope = CoroutineScope(Dispatchers.Default)
    val app = this
    // Set this to the account hash you obtained from the Developers Portal
    val yourAccountHash = "YOUR_ACCOUNT_HASH"
    scope.launch {
        ExperienceSdk.init(app, yourAccountHash)
    }
}
By default, only a subset of the available facial expression classifiers is enabled. To fine-tune the facial expression detector component, call the init function with a configuration block:
ExperienceSdk.init(app, yourAccountHash) {
    facialExpressionsCollector(
        presence = true,
        eyesOnScreen = true,
        attention = true,
        happy = true,
        surprise = true,
        confusion = true,
        contempt = true,
        disgust = true,
        empathy = true
    )
}
For the ExperienceSDK to be able to generate Facial Expression Events, you first need to check for the camera permission and, if it is not granted, request it from the user. Please refer to the Android documentation on how to check for and request the camera permission.
If your app already has the camera permission or the user has granted it, you can start the event collection with the start() method. Typically, you will call this in the method where you checked for the permission and also in the successful permission request callback.
val scope = CoroutineScope(Dispatchers.Default)
// ...
scope.launch {
    ExperienceSdk.start()
}
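As an illustration, here is a minimal sketch of wiring the permission check and the permission request callback to start(), assuming an AndroidX ComponentActivity and the Activity Result API; the structure of your own permission handling may differ:
class MainActivity : ComponentActivity() {

    private val scope = CoroutineScope(Dispatchers.Default)

    // Request the camera permission and start the event collection once it is granted
    private val cameraPermissionLauncher =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                scope.launch { ExperienceSdk.start() }
            }
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
            PackageManager.PERMISSION_GRANTED
        ) {
            // Permission already granted, start the event collection right away
            scope.launch { ExperienceSdk.start() }
        } else {
            cameraPermissionLauncher.launch(Manifest.permission.CAMERA)
        }
    }
}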
When you start the event collection, the camera is turned on and the camera images are captured and processed by our Vision AI technology on the client side (within the ExperienceSDK), and finally a detection result event is created. Image data is never transferred outside of the ExperienceSDK. Only the final detection result is sent to our backend and exposed to the host application by the ExperienceSDK.
To stop the event collection, call stop():
scope.launch {
    ExperienceSdk.stop()
}
Working with Events
If you want to listen to the events collected by the ExperienceSDK, subscribe to the events Flow:
ExperienceSdk.events.collect { event: Event ->
    // Process the incoming Event object here
}
If you are only interested in facial expression detection events, subscribe to the ExperienceSdk.facialExpressionsDetectedEvents Flow:
ExperienceSdk.facialExpressionsDetectedEvents.collect { event: FacialExpressionsDetectedEvent ->
    // Process the incoming FacialExpressionsDetectedEvent object here
}
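Since collect is a suspend function, you need to call it from a coroutine. As a rough sketch, in an Activity you might collect the events with lifecycleScope and repeatOnLifecycle (assuming the androidx.lifecycle KTX artifacts are on your classpath); adjust the scope and lifecycle state to your app's needs:
lifecycleScope.launch {
    lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
        ExperienceSdk.facialExpressionsDetectedEvents.collect { event ->
            // Update the UI or forward the detection result to your own processing
            Log.d("ExperienceSDK", "Facial expressions detected: $event")
        }
    }
}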
The result of the detection is available in the results field. This field contains a FacialExpressionsDetectedEvent.Result object for each facial expression detection dimension, or null if the given dimension was not enabled in the detector component.
The FacialExpressionsDetectedEvent.Result object represents whether the given dimension was active (detected with a high probability) in the camera image, and also contains the probability of the given dimension on the camera image.
For example, to extract the probability (level) of attention on the camera image from the results field, call:
val attentionProb = facialExpressionsDetectedEvent.results.attention.probability
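Since a dimension's result is null when that dimension was not enabled, it is usually safer to access it with a null check. A minimal sketch, assuming the per-dimension fields are exposed as nullable types as described above:
// `attention` is null if the attention dimension was not enabled in the detector
val attentionProb = facialExpressionsDetectedEvent.results.attention?.probability
if (attentionProb != null) {
    Log.d("ExperienceSDK", "Attention probability: $attentionProb")
}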
Advanced Topics
Event Collection and Event Sending
Event collection and the sending of the collected data to the Experience Platform servers work according to the following rules.
Event Collection
When the SDK is initialized with the init() function, certain initialization-related special events are generated and stored in memory but not sent, e.g. events that hold information about the device and about the initialization result. No other events are generated until start() is called. If the host app process exits without invoking start(), these initialization-related events are discarded.
After init() the SDK goes into Idle state. If you call start() at some point, the SDK will go into Running state. If you call stop(), the SDK will return to Idle state.
When the SDK is Running, it will start to generate event objects (e.g. facial expression events, log events) and store them in the device storage using the third-party Amazon Kinesis SDK. Also, the initialization-related events that have been cached in memory will be stored in the device storage together with the generated events when start() is first called.
The SDK will keep generating and storing events whenever it is Running. When the SDK is no longer Running (i.e. you called stop() or the SDK went into Error state), event generation will stop. Note that a few events might still be generated until the stopping is fully completed. When you call start() again, event generation will resume.
Event Sending
The generated event objects are serialized and stored in a file. This file will use no more than 200 MB of storage. When the SDK is Running and there is a suitable network available for event sending, the SDK will send the stored events to the Experience Platform backend while simultaneously removing the successfully sent events from the storage. When the SDK is not Running or there is no suitable network, event sending will stop.
Network Type
You can configure whether metered networks count as suitable with the ExperienceSdk.eventSendingOnMeteredNetworkEnabled property.
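For example, to also allow event sending over metered connections (a sketch assuming the property is a writable Boolean, as its name suggests):
// Allow the SDK to send the collected events over metered networks as well
ExperienceSdk.eventSendingOnMeteredNetworkEnabled = true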
Special Camera Handling
If the host application requires any special camera handling, the built-in camera handling of the SDK can be overridden.
Custom Image Source
It is possible to override the SDK camera handling, allowing the host application to send MultiPlaneImage objects to the SDK for processing.
setImageSource(imageSource = object : ImageSource {
    override fun get(): Flow<MultiPlaneImage> {
        // Return a Flow of MultiPlaneImage objects built from the images received
        // from your own camera pipeline; the SDK will use them to extract the
        // corresponding facial expressions.
    }
})
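One way to satisfy this interface is to back it with a shared flow that your own camera pipeline feeds. A rough sketch; toMultiPlaneImage() is a hypothetical conversion you would implement for your camera's image format:
// Buffer the latest frame so emission from the camera callback never suspends
val images = MutableSharedFlow<MultiPlaneImage>(
    replay = 0,
    extraBufferCapacity = 1,
    onBufferOverflow = BufferOverflow.DROP_OLDEST
)

setImageSource(imageSource = object : ImageSource {
    override fun get(): Flow<MultiPlaneImage> = images
})

// In your own camera callback (e.g. a CameraX ImageAnalysis.Analyzer):
// images.tryEmit(frame.toMultiPlaneImage()) // hypothetical conversion to MultiPlaneImage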
Camera Lifecycle Delegate
If the application does not want to override the camera handling provided by the SDK but still wants to receive notifications about the camera lifecycle, it can register a delegate in the following way:
setCameraLifecycleDelegate(object : CameraLifecycleDelegate {
    override fun onBeforeCameraInitialized() {
        // Called right before the camera is initialized.
    }

    override fun onBeforeCameraStarted() {
        // Called after the camera has been told to start, but before capturing begins.
    }

    override fun onAfterCameraStopped() {
        // Called after the camera has been stopped.
    }
})
Next Steps
In this guide, you added the ExperienceSDK to your application, initialized it and started the event collection. You also learned how to listen to and handle facial expression detection events.
Now you can start to measure the facial expressions of your users and store or process the detection results in your application. You can also visit our Experience Dashboard and analyze the data collected by your application.