    Getting started

    In this guide, we will show you how to integrate your application with the Realeyes Face Verification Library.

    After completing this guide, you will know:

    • the system requirements for the integration with the Face Verification Library,
    • how to include the Face Verification Library in your application so that it can analyze images.

    Minimum system requirements

    The Face Verification SDK is tested on the following operating systems:

    • Windows 10
    • Ubuntu 22.04 LTS

    The SDK is available for the following platforms:

    • C++
    • Python
    • .NET
    • Unity

    The C++ SDK has the following minimum system requirements:

    • C++17 compatible compiler
    • At least 1 GB of RAM

    The Python SDK has the following minimum system requirements:

    • Python 3.8 - Python 3.11
    • At least 1 GB of RAM

    The .NET SDK has the following minimum system requirements:

    • .NET 6.0
    • At least 1 GB of RAM

    The Unity SDK has the following minimum system requirements:

    • Unity 2022.3 or later
    • At least 1 GB of RAM

    Dependencies and Licensing

    The list of dependencies and licensing information for the Face Verification Library is available here.

    Adding the Face Verification Library to Your App

    The instructions below cover each supported platform in turn: C++, Python, .NET, and Unity.

    C++

    You will need a model file for this library to work. The latest version of the Face Verification Library is published on demand. To request the package with the library and the model file, please visit the Developers Portal SDK page (login required).

    Usage

    The first step to use the Face Verification Library is to import the library into your project. After that you can instantiate a FaceVerifier object, providing the model file name and, optionally, the maximum number of concurrent background calculations as parameters.

    You can then make multiple detectFaces(), embedFace(), and compareFaces() calls.

    To analyze an image and compare the faces it contains, do the following:

    • call detectFaces() to get the faces found in the image
    • call embedFace() to get the embeddings for one face
    • call compareFaces() to compare the embeddings of two faces

    The following example shows the basic usage of the library:

    
    include "faceverifier.h"
    include "opencv2/opencv.hpp"
    
    void main()
    {
    	cv::Mat img1 = cv2::imread("1.png");
    	cv::Mat img2 = cv2::imread("2.png");
    	fvl::ImageHeader img_hdr1 = {img1.data, img1.cols, img1.rows, static_cast<int>(img1.step), fvl::ImageFormat::BGR};
    	fvl::ImageHeader img_hdr2 = {img1.data, img1.cols, img1.rows, static_cast<int>(img1.step), fvl::ImageFormat::BGR};
    	fvl::FaceVerifier fv("model_fv.realZ", 0)
    
    	auto faces1 = fv.detectFaces(img_hdr1).get();
    	auto faces2 = fv.detectFaces(img_hdr2).get();
    	
    	// let's say we have 1-1 faces in the images
    	auto emb1 = fv.embedFace(faces1[0]).get();
    	auto emb2 = fv.embedFace(faces2[0]).get();
    	
    	float similarity = fv.compareFaces(emb1, emb2);
    }
    
    

    Python

    The latest version of the Face Verification Library is published on pypi.org. You can install it with the command 'pip install realeyes.face_verification'. You will need a model file for this library to work. To request the model file, please visit the Developers Portal SDK page (login required).
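
    Once installed, you can quickly check that the package imports correctly. A minimal sanity check (the module name matches the import used in the usage example below):

    
    # Verify that the installed package can be imported
    import realeyes.face_verification
    print(realeyes.face_verification.__name__)
    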

    Usage

    The first step to use the Face Verification Library is to import the realeyes.face_verification module. After that you can instantiate a FaceVerifier object, providing the model file name and the maximum number of concurrent background calculations as parameters.

    You can then make multiple detect_faces(), embed_face(), and compare_faces() calls.

    To analyze an image and compare the faces it contains, do the following:

    • call detect_faces() to get the faces found in the image
    • call embed_face() to get the embeddings for one face
    • call compare_faces() to compare the embeddings of two faces

    The following example shows the basic usage of the library:

    
    import cv2
    import realeyes.face_verification as fv
    
    # Load the two images to compare with OpenCV
    img1 = cv2.imread("1.png")
    img2 = cv2.imread("2.png")
    
    # Model file name and maximum number of concurrent background calculations (0 = automatic)
    verifier = fv.FaceVerifier("model_fv.realZ", 0)
    
    faces1 = verifier.detect_faces(img1)
    faces2 = verifier.detect_faces(img2)
    
    # assuming one face was found in each image
    emb1 = verifier.embed_face(faces1[0])
    emb2 = verifier.embed_face(faces2[0])
    
    similarity = verifier.compare_faces(emb1, emb2)
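
    The similarity score must be thresholded to decide whether the two images show the same person. A minimal sketch continuing the example above, assuming a cutoff of 0.6 (the value used by the Unity sample later in this guide; tune it for your own false accept/reject trade-off):

    
    # Hypothetical cutoff: the right value depends on your accuracy requirements
    SAME_PERSON_THRESHOLD = 0.6
    is_same_person = similarity > SAME_PERSON_THRESHOLD
    print(f"similarity={similarity:.3f}, same person: {is_same_person}")
    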
    
    

    .NET

    The latest version of the Face Verification Library is published on nuget.org. You can simply search for the NuGet package called Realeyes.FaceVerification and add it to your project (for example, with the command 'dotnet add package Realeyes.FaceVerification'). You will need a model file for this library to work. To request the model file, please visit the Developers Portal SDK page (login required).

    Usage

    The first step is to make sure you have imported the FaceVerification namespace in your source file. Then you can instantiate a FaceVerifier object, providing the model file name and the maximum number of concurrent background calculations (default: 0, which means automatic) as parameters.

    To analyze an image, first call the DetectFaces() method. Once you have detected the faces in the image, you can call EmbedFace() on each Face object; this method returns the embeddings of the face. Finally, you can compare the embeddings of two faces with the CompareFaces() method.

    The following example shows the basic usage of the library:

    
    using System.Runtime.CompilerServices;
    using FaceVerification;
    using SixLabors.ImageSharp;
    using SixLabors.ImageSharp.PixelFormats;
    
    string png1_file = "1.png";
    string png2_file = "2.png";
    
    // Load the images and copy their pixel data into raw byte buffers
    Image<Rgb24> img1 = SixLabors.ImageSharp.Image.Load<Rgb24>(png1_file);
    byte[] bytes1 = new byte[img1.Width * img1.Height * Unsafe.SizeOf<Rgb24>()];
    img1.CopyPixelDataTo(bytes1);
    ImageHeader img1_hdr = new ImageHeader(bytes1, img1.Width, img1.Height,
    	img1.Width * Unsafe.SizeOf<Rgb24>(), ImageFormat.RGB);
    
    Image<Rgb24> img2 = SixLabors.ImageSharp.Image.Load<Rgb24>(png2_file);
    byte[] bytes2 = new byte[img2.Width * img2.Height * Unsafe.SizeOf<Rgb24>()];
    img2.CopyPixelDataTo(bytes2);
    ImageHeader img2_hdr = new ImageHeader(bytes2, img2.Width, img2.Height,
    	img2.Width * Unsafe.SizeOf<Rgb24>(), ImageFormat.RGB);
    
    // Model file name and maximum number of concurrent background calculations (0 = automatic)
    FaceVerifier verifier = new FaceVerifier("model_fv.realZ", 0);
    
    Faces faces1 = (await verifier.DetectFaces(img1_hdr)).Results;
    Faces faces2 = (await verifier.DetectFaces(img2_hdr)).Results;
    
    // assuming one face was found in each image
    Face face1 = faces1.GetFace(0);
    Face face2 = faces2.GetFace(0);
    
    float[] emb1 = (await verifier.EmbedFace(face1)).Results;
    float[] emb2 = (await verifier.EmbedFace(face2)).Results;
    
    float similarity = verifier.CompareFaces(emb1, emb2);
    
    // Faces and FaceVerifier wrap native resources, so dispose them when done
    faces1.Dispose();
    faces2.Dispose();
    
    verifier.Dispose();
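
    Note that DetectFaces() and EmbedFace() are awaitable and deliver their output through the Results property, while CompareFaces() returns the similarity score synchronously. If you prefer, the explicit Dispose() calls at the end can be replaced with C# using declarations so the objects are released automatically when they go out of scope.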
    
    

    Unity

    The latest version of the Face Verification Plugin is published in the Unity Asset Store. You can simply search for the package called Realeyes.FaceVerification and add it to your project. You will need a model file for this library to work. To request the model file, please visit the Developers Portal SDK page (login required).

    Usage

    The first step is to make sure you have imported the FaceVerification namespace in your source file. Then you can instantiate a FaceVerifier object, providing the model file name and the maximum number of concurrent background calculations (default: 0, which means automatic) as parameters.

    To analyze an image, first call the DetectFaces() method. Once you have detected the faces in the image, you can call EmbedFace() on each Face object; this method returns the embeddings of the face. Finally, you can compare the embeddings of two faces with the CompareFaces() method.

    The following example shows the basic usage of the library:

    
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    
    using FaceVerification;
    
    public class Main : MonoBehaviour
    {
        public string deviceName;
        WebCamTexture wct;
    
        FaceVerifier fv;
    
        public float[] mainEmbeddings;
    
        // Start is called before the first frame update
        void Start()
        {
            fv = new FaceVerifier("./model_fv.realZ", 0);
            WebCamDevice[] devices = WebCamTexture.devices;
            deviceName = devices[0].name;
            wct = new WebCamTexture(deviceName, 640, 480, 12);
            Renderer renderer = GetComponent<Renderer>();
            renderer.material.mainTexture = wct;
            renderer.enabled = true;
            wct.Play();
        }
    
        // Update is called once per frame
        void Update()
        {
            GetComponent<Renderer>().material.mainTexture = wct;
        }
    
        void OnGUI()
        {
            if (GUI.Button(new Rect(10, 70, 150, 30), "Set Main Face"))
                TakeSnapshot();
            if (GUI.Button(new Rect(10, 110, 150, 30), "Check Face"))
                CheckFace();
            if (GUI.Button(new Rect(10, 200, 150, 30), "Exit"))
                Application.Quit();
        }
    
        void TakeSnapshot()
        {
            Texture2D snap = new Texture2D(wct.width, wct.height);
            snap.SetPixels(wct.GetPixels());
            snap.Apply();
    
            ImageHeader img = new ImageHeader(snap.GetRawTextureData(), snap.width, snap.height, snap.width * 4, ImageFormat.RGBA);
    
            var task = fv.DetectFaces(img);
            task.Wait();
            Faces faces = task.Result.Results;
    
            if (faces.Count() >= 1)
            {
                var task_embed = fv.EmbedFace(faces.GetFace(0));
                task_embed.Wait();
                mainEmbeddings = task_embed.Result.Results;
                Camera.main.backgroundColor = new Color32(137, 85, 131, 0);
            }
        }
    
        public bool isSame = false;
        public float similarity = 0.0f;
    
        void CheckFace()
        {
            isSame = false;
    
            Texture2D snap = new Texture2D(wct.width, wct.height);
            snap.SetPixels(wct.GetPixels());
            snap.Apply();
    
            ImageHeader img = new ImageHeader(snap.GetRawTextureData(), snap.width, snap.height, snap.width * 4, ImageFormat.RGBA);
    
            var task = fv.DetectFaces(img);
            task.Wait();
            Faces faces = task.Result.Results;
    
            if (faces.Count() >= 1)
            {
                var task_embed = fv.EmbedFace(faces.GetFace(0));
                task_embed.Wait();
                float[] embeddings = task_embed.Result.Results;
    
                similarity = fv.CompareFaces(mainEmbeddings, embeddings);
    
                if (similarity > 0.6f)
                    isSame = true;
    
                if (isSame)
                    Camera.main.backgroundColor = new Color32(0, 143, 0, 0);
                else
                    Camera.main.backgroundColor = new Color32(143, 0, 0, 0);
            }
            else
                Camera.main.backgroundColor = new Color32(128, 128, 128, 0);
        }
    }
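
    In this sample, a similarity above 0.6 is treated as the same person and the camera background is tinted green on a match; the 0.6 cutoff is specific to this example, so tune it for your own accuracy requirements.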
    
    