
Agora Extension for Web

This is the DeepAR Agora extension for Web. Read more about Agora extensions here.

The DeepAR Extension is a thin wrapper around DeepAR Web that simplifies integration with the Agora RTC platform. You can find more information about DeepAR here.


In order to use the DeepAR Web extension you need to set up a license key for your web app on the DeepAR developer portal:

  1. Create an account.

  2. Create a project.

  3. Add a web app to the project. Note that you need to specify the domain name which you plan to use for hosting the app.

Sample Demo

You can try our sample demo app to test out the DeepAR Agora Extension.

🔥 It is free! 🔥

See the official quickstart example here.


Once the extension is initialized with Agora, the DeepAR SDK is used very much the same as plain DeepAR Web.

Visit the official DeepAR docs for Web SDK here.

How to use the DeepAR Web Extension


You need both the DeepAR Web extension and the Agora RTC packages.

Using npm:

npm install deepar-agora-extension agora-rtc-sdk-ng

Using yarn:

yarn add deepar-agora-extension agora-rtc-sdk-ng


Import VideoExtension and AgoraRTC.

import { VideoExtension } from 'deepar-agora-extension';
import AgoraRTC from 'agora-rtc-sdk-ng';

You will also need to import the WebAssembly file that DeepAR runs and the machine learning models that DeepAR uses for features such as face tracking and background removal.

We recommend using a bundler to correctly include assets like models, effects and WebAssembly files.

For example, if using Webpack, you can use Asset Modules. Add this to your webpack.config.js:

module.exports = {
  // ...
  module: {
    rules: [
      {
        test: /\.(wasm)|(bin)|(obj)$/i,
        include: [
          path.resolve(__dirname, 'node_modules/deepar/'),
        ],
        type: 'asset/resource',
      },
      {
        include: [
          path.resolve(__dirname, 'effects/'),
        ],
        type: 'asset/resource',
      },
    ],
  },
  // ...
};

Then you can import paths for these files like this:

import deeparWasm from 'deepar/wasm/deepar.wasm';
import faceModel from 'deepar/models/face/models-68-extreme.bin';
import segmentationModel from 'deepar/models/segmentation/segmentation-160x160-opt.bin';

Extension setup

Create a div tag that will be used as a container for camera preview:

<div class="video-container"></div>

Be sure to set the width and height of the div!

.video-container {
  width: 640px;
  height: 480px;
}

Initialize the DeepAR extension.

const videoExtension = new VideoExtension({
  licenseKey: 'your_license_key_here', // create the license key here
  deeparWasmPath: deeparWasm,
  segmentationConfig: {
    // not needed if you don't use background removal effects
    modelPath: segmentationModel,
  },
});
The following code is the standard Agora extension setup.

// register the DeepAR extension
AgoraRTC.registerExtensions([videoExtension]);

// create the DeepAR extension processor
const processor = videoExtension.createProcessor();

// create a CameraVideoTrack
const videoTrack = await AgoraRTC.createCameraVideoTrack();

// pipe the camera track through the DeepAR processor
videoTrack.pipe(processor).pipe(videoTrack.processorDestination);

// play the processed video in the container div
videoTrack.play(document.querySelector('.video-container'), {
  mirror: false,
});
Once that is done, you can use the DeepAR object as in the standard DeepAR Web SDK. See the API reference here.

let deepAR = processor.deepAR;

Load the face tracking model:
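The code for this step is missing from the page. In DeepAR Web SDK versions that ship the face tracking model as a separate file, the model is typically handed to the SDK via a download call; a hedged sketch, assuming the `downloadFaceTrackingModel` method and the `faceModel` path imported earlier (verify against the API reference for your SDK version):

```javascript
// Sketch: pass DeepAR the face tracking model imported via the bundler.
deepAR.downloadFaceTrackingModel(faceModel);
```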


Load effects

All masks, filters, background removal, etc. are represented by effect files in DeepAR. You can load them to preview the effect. You can download a free filter pack here or visit the DeepAR effect store.

Load an effect using the switchEffect method:

deepAR.switchEffect(0, 'slot', './effects/alien');

Load different effects on different persons' faces:

deepAR.switchEffect(0, 'slot', './effects/alien');
deepAR.switchEffect(1, 'slot', './effects/lion');

Load a background removal effect:

deepAR.switchEffect(0, 'slot', './effects/background_segmentation');
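To remove a previously loaded effect, DeepAR Web provides a `clearEffect` method; a short sketch, assuming the same 'slot' name used above:

```javascript
// Remove whatever effect is currently loaded in the 'slot' slot.
deepAR.clearEffect('slot');
```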

Background removal or blur

To use background segmentation, DeepAR needs to initialize the segmentation model.

import segmentationModelPath from 'deepar/models/segmentation/segmentation-160x160-opt.bin';

// ...

const videoExtension = new VideoExtension({
  segmentationConfig: {
    modelPath: segmentationModelPath,
  },
  // other params ...
});
Shoe try-on

To use the shoe try-on feature, DeepAR needs to initialize foot tracking. All the footTrackingConfig parameters are required.

import poseEstimationWasmPath from 'deepar/wasm/libxzimgPoseEstimation.wasm';
import footDetectorPath from 'deepar/models/foot/foot-detector-android.bin'; // or ...-ios.bin
import footTrackerPath from 'deepar/models/foot/foot-tracker-android.bin'; // or ...-ios.bin
import footObjPath from 'deepar/models/foot/foot-model.obj';

// ...

const videoExtension = new VideoExtension({
  footTrackingConfig: {
    detectorPath: footDetectorPath,
    trackerPath: footTrackerPath,
    objPath: footObjPath,
  },
  // other params ...
});
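The import comments above note separate `-android.bin` and `-ios.bin` model variants. A hypothetical helper (not part of the DeepAR API) that picks a variant from the browser's user agent string:

```javascript
// Hypothetical helper (not part of the DeepAR API): choose the foot-tracking
// model variant based on the user agent, since DeepAR ships separate
// Android and iOS binaries.
function footModelVariant(userAgent) {
  // iPhone/iPad/iPod report iOS-style user agents; everything else
  // (including desktop browsers) falls back to the Android variant here.
  return /iPhone|iPad|iPod/i.test(userAgent) ? 'ios' : 'android';
}
```

At runtime you would call `footModelVariant(navigator.userAgent)` and import or resolve the matching model files accordingly.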


License

Please see the DeepAR Customer Agreement.

Supported browsers

Both desktop and mobile browsers are supported by DeepAR.


Desktop:

  • Google Chrome 66+
  • Safari 11.1+
  • Firefox 60+
  • Edge 42+

iOS:

  • Safari on iOS 11+

Android:

  • Google Chrome 66+