Interface DeepARParams

Parameters for the initialization of DeepAR object.

Hierarchy

  • DeepARParams

Properties

licenseKey: string

License key created on DeepAR developer portal for your web app.

previewElement?: HTMLElement

HTML element where AR preview will be presented.

This is usually a div element which is used solely for AR preview. AR preview will fully occupy the previewElement's space. Therefore, position and size your previewElement according to your preferred AR preview dimensions.

NOTE: AR preview will adjust to the size of the previewElement if it changes.

The canvas element will be appended as a child to the previewElement. If you wish to work with the canvas element directly and position and size it yourself, use the canvas parameter instead.

⚠️ You must specify previewElement or canvas.
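
For illustration, a minimal initialization sketch using previewElement (a sketch only, assuming initialization goes through the SDK's deepar.initialize entry point; the element id and license key are placeholders):

import * as deepar from 'deepar'

// A div used solely for the AR preview; DeepAR appends its canvas to it.
const previewElement = document.getElementById('deepar-preview') as HTMLElement

const deepAR = await deepar.initialize({
  licenseKey: 'your_license_key',
  previewElement
})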

effect?: string

The URL of a DeepAR effect file. This effect will be applied as soon as DeepAR is initialized. This parameter is optional; you can always switch to a different effect later with switchEffect.
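
For example, an initial effect can be set at initialization and replaced later (a sketch; the effect URLs and license key are placeholders, and switchEffect is assumed to accept the effect URL):

import * as deepar from 'deepar'

const deepAR = await deepar.initialize({
  licenseKey: 'your_license_key',
  previewElement: document.getElementById('deepar-preview') as HTMLElement,
  effect: 'effects/initial-effect.deepar'   // applied right after initialization
})

// Later, replace the initial effect.
await deepAR.switchEffect('effects/another-effect.deepar')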

rootPath?: string

Path to the root directory of the DeepAR SDK. This path is used to locate and download all additional required files, such as ML models and wasm files. By default, this points to the JsDelivr CDN. For example "https://cdn.jsdelivr.net/npm/deepar@5.0.0/".

If you want to host the DeepAR SDK yourself, set the path to it here. Hosting the SDK yourself is recommended, since it lets you apply compression such as gzip and brotli to the files.

⚠️ Be sure that the version of DeepAR js file and the root SDK path match!

The DeepAR SDK can be hosted on any server, but be sure not to change its directory and file structure when hosting it.

To configure usage of non-default ML models, define them in the additionalOptions.
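
A sketch of a self-hosted setup (the rootPath value below is a placeholder; the directory layout under it must mirror the published SDK package, and its version must match the deepar js file used by the app):

import * as deepar from 'deepar'

const deepAR = await deepar.initialize({
  licenseKey: 'your_license_key',
  previewElement: document.getElementById('deepar-preview') as HTMLElement,
  rootPath: '/deepar-resources/'   // placeholder path on your own server
})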

canvas?: HTMLCanvasElement

Canvas element where DeepAR will render the AR preview.

⚠️ You must specify previewElement or canvas.

The preferred way is to use the previewElement parameter. In that case, DeepAR takes care of sizing the canvas correctly.

Example

const canvas = document.createElement('canvas')
canvas.style.display = 'block'

// We want to fill the whole screen.
const width = window.innerWidth
const height = window.innerHeight
const dpr = window.devicePixelRatio || 1
canvas.width = width * dpr
canvas.height = height * dpr
canvas.style.width = `${width}px`
canvas.style.height = `${height}px`
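
Continuing the example, a sketch of handing the sized canvas to DeepAR (assuming initialization via deepar.initialize; the license key is a placeholder):

import * as deepar from 'deepar'

// With the canvas parameter the application manages the canvas itself,
// so attach it to the DOM and pass it to DeepAR.
document.body.appendChild(canvas)

const deepAR = await deepar.initialize({
  licenseKey: 'your_license_key',
  canvas
})

As noted above, DeepAR does not size a canvas passed this way, so the sizing logic in the example is the application's responsibility.
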
additionalOptions?: {
    cameraConfig?: {
        disableDefaultCamera?: boolean;
        facingMode?: string;
        cameraPermissionAsked?: (() => void);
        cameraPermissionGranted?: (() => void);
    };
    hint?: string | string[];
    faceTrackingConfig?: {
        modelPath: string;
    };
    rigidFaceTrackingConfig?: {
        poseEstimationWasmPath?: string;
        detectorPath?: string;
        trackerPath?: string;
        objPath?: string;
        tfjsBackendWasmPath?: string;
        tfjsBackendWasmSimdPath?: string;
        tfjsBackendWasmThreadedSimdPath?: string;
    };
    segmentationConfig?: {
        modelPath?: string;
        mediaPipeConfig?: {
            wasmBasePath?: string;
            modelPath?: string;
        };
    };
    deeparWasmPath?: string;
    dyXzimgMagicFaceWasmPath?: string;
    dyArcoreScriptingWasmPath?: string;
    dyArcorePhysicsWasmPath?: string;
    dynamicModulesConfig?: {
        xzimgPath?: string;
        mediaPipePath?: string;
    };
    embeddedEffectsConfig?: {
        backgroundBlurPath?: string;
        backgroundReplacementPath?: string;
    };
    defaultEnvmapPath?: string;
    customEnvMapPath?: string;
    exposure?: number;
    envMapIntensity?: number;
    toneMapping?: ToneMapping;
}

Additional DeepAR options. A combined usage sketch is given after the type declaration below.

Type declaration

  • Optional cameraConfig?: {
        disableDefaultCamera?: boolean;
        facingMode?: string;
        cameraPermissionAsked?: (() => void);
        cameraPermissionGranted?: (() => void);
    }

    Camera options. DeepAR will use the camera by default.

    • Optional disableDefaultCamera?: boolean

      If true, DeepAR will not use the camera preview by default, and all other options here are ignored in that case. You can use your own camera/video by calling setVideoElement, or start the camera at any time by calling startCamera.

    • Optional facingMode?: string

      Can be "user" or "environment". User will be a front facing camera on mobile devices, while environment will be the back facing camera. Default is "user".

    • Optional cameraPermissionAsked?: (() => void)
        • (): void
        • Called when the camera permission is asked.

          Returns void

    • Optional cameraPermissionGranted?: (() => void)
        • (): void
        • Called when the camera permission is granted.

          Returns void

  • Optional hint?: string | string[]

    Provide a hint or hints to DeepAR to optimize the SDK in some scenarios.

    Available hints:

    • faceModelsPredownload - Will download the face models as soon as possible.
    • segmentationModelsPredownload - Will download the segmentation models as soon as possible. Note: this will not initialize segmentation; segmentation is still lazily loaded when needed.
    • segmentationInit - Will initialize segmentation as soon as possible. Without this hint, segmentation is lazily loaded when a segmentation effect is loaded.
    • faceInit - Will initialize face tracking as soon as possible. Without this hint, face tracking is lazily loaded when a face tracking effect is loaded.

  • Optional faceTrackingConfig?: {
        modelPath: string;
    }

    Face tracking module path and options.

    • modelPath: string

      Path to the face tracking model. Something like "path/to/deepar/models/face/models-68-extreme.bin".

  • Optional rigidFaceTrackingConfig?: {
        poseEstimationWasmPath?: string;
        detectorPath?: string;
        trackerPath?: string;
        objPath?: string;
        tfjsBackendWasmPath?: string;
        tfjsBackendWasmSimdPath?: string;
        tfjsBackendWasmThreadedSimdPath?: string;
    }

    Rigid face tracking module paths and options.

    • Optional poseEstimationWasmPath?: string

      Path to the pose libPoseEstimation.wasm file, e.g. "/path/to/deepar/wasm/libPoseEstimation.wasm".

    • Optional detectorPath?: string

      Path to the detector model, e.g. "/path/to/deepar/models/face-cnn/face-det.bin".

    • Optional trackerPath?: string

      Path to the tracker model, e.g. "/path/to/deepar/models/face-cnn/face-track-19-v2.bin".

    • Optional objPath?: string

      Path to the face model object file, e.g. "/path/to/deepar/models/face/face-rigid.obj".

    • Optional tfjsBackendWasmPath?: string

      Path to tfjs-backend-wasm.wasm file, e.g. "path/to/deepar/wasm/tfjs-backend-wasm.wasm"

    • Optional tfjsBackendWasmSimdPath?: string

      Path to tfjs-backend-wasm-simd.wasm file, e.g. "path/to/deepar/wasm/tfjs-backend-wasm-simd.wasm"

    • Optional tfjsBackendWasmThreadedSimdPath?: string

      Path to tfjs-backend-wasm-threaded-simd.wasm file, e.g. "path/to/deepar/wasm/tfjs-backend-wasm-threaded-simd.wasm"

  • Optional segmentationConfig?: {
        modelPath?: string;
        mediaPipeConfig?: {
            wasmBasePath?: string;
            modelPath?: string;
        };
    }

    Segmentation module path and options.

    • Optional modelPath?: string

      Path to the base segmentation model. Something like "path/to/deepar/models/segmentation/segmentation-192x192.bin". This is not used by default.

    • Optional mediaPipeConfig?: {
          wasmBasePath?: string;
          modelPath?: string;
      }

      MediaPipe segmentation config.

      • Optional wasmBasePath?: string

        Base path to wasm fileset. Something like "path/to/deepar/mediaPipe/segmentation/wasm".

      • Optional modelPath?: string

        MediaPipe selfie segmenter model to be used. Something like "path/to/deepar/mediaPipe/segmentation/model/selfie_segmenter.tflite".

  • Optional deeparWasmPath?: string

    Path to deepar.wasm file. Something like "/path/to/deepar/wasm/deepar.wasm".

  • Optional dyXzimgMagicFaceWasmPath?: string

    Path to dyXzimgMagicFace.wasm file. Something like "/path/to/deepar/wasm/dyXzimgMagicFace.wasm".

  • Optional dyArcoreScriptingWasmPath?: string

    Path to dyArcoreScripting.wasm file. Something like "/path/to/deepar/wasm/dyArcoreScripting.wasm".

  • Optional dyArcorePhysicsWasmPath?: string

    Path to dyArcorePhysics.wasm file. Something like "/path/to/deepar/wasm/dyArcorePhysics.wasm".

  • Optional dynamicModulesConfig?: {
        xzimgPath?: string;
        mediaPipePath?: string;
    }

    Dynamic JS modules paths.

    • Optional xzimgPath?: string

      Path to xzimg.js file. Something like "/path/to/deepar/js/dynamicModules/xzimg.js".

    • Optional mediaPipePath?: string

      Path to mediaPipe.js file. Something like "/path/to/deepar/js/dynamicModules/mediaPipe.js".

  • Optional embeddedEffectsConfig?: {
        backgroundBlurPath?: string;
        backgroundReplacementPath?: string;
    }

    Embedded DeepAR effect files paths.

    • Optional backgroundBlurPath?: string

      Path to background_blur.deepar file. Something like "/path/to/deepar/effects/background_blur.deepar".

    • Optional backgroundReplacementPath?: string

      Path to background_replacement.deepar file. Something like "/path/to/deepar/effects/background_replacement.deepar".

  • Optional defaultEnvmapPath?: string

    Path to the default_envmap.webp file.

  • Optional customEnvMapPath?: string

    Path to a custom environment map that the effect should be rendered with. Internally, DeepAR will fetch the map and call setEnvironmentMap.

  • Optional exposure?: number

    Tone mapping exposure.

  • Optional envMapIntensity?: number

    Environment map intensity.

  • Optional toneMapping?: ToneMapping

    Tone mapping.
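
A combined usage sketch for additionalOptions (assuming initialization via deepar.initialize; the license key, element id, file paths, and callback bodies are placeholders):

import * as deepar from 'deepar'

const deepAR = await deepar.initialize({
  licenseKey: 'your_license_key',
  previewElement: document.getElementById('deepar-preview') as HTMLElement,
  additionalOptions: {
    cameraConfig: {
      facingMode: 'environment',   // back-facing camera on mobile devices
      cameraPermissionAsked: () => console.log('Camera permission prompt shown.'),
      cameraPermissionGranted: () => console.log('Camera permission granted.')
    },
    // Warm up face tracking early instead of lazy loading it on effect switch.
    hint: ['faceModelsPredownload', 'faceInit'],
    // Placeholder paths for a self-hosted SDK layout.
    deeparWasmPath: '/deepar-resources/wasm/deepar.wasm',
    faceTrackingConfig: {
      modelPath: '/deepar-resources/models/face/models-68-extreme.bin'
    },
    segmentationConfig: {
      modelPath: '/deepar-resources/models/segmentation/segmentation-192x192.bin'
    }
  }
})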
