

Android SDK version changelog


  • Render an error message when an effect contains too many polygons.


  • Added a texture size uniform that is updated every frame for every texture.
    • In shaders it is accessible via the TEXTURE2D_SIZE(sampler_name) macro.
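As a hedged sketch of how the macro might be used in a fragment shader (the sampler and varying names below are illustrative, not part of the SDK):

```
// Illustrative fragment shader snippet; uniform/varying names are hypothetical.
uniform sampler2D Texture;   // any sampler bound to the material
varying vec2 v_uv;

void main() {
    // TEXTURE2D_SIZE expands to the per-frame size uniform for this sampler.
    vec2 texelSize = 1.0 / TEXTURE2D_SIZE(Texture);
    // Example use: sample one texel to the right for a simple horizontal offset.
    gl_FragColor = texture2D(Texture, v_uv + vec2(texelSize.x, 0.0));
}
```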


  • Updated the Scripting API:
    • Added the Face Position component class.
    • Added trigger events and functions.
    • Fixed Debug.renderText() default color parameters.



  • No changes to the Android SDK.


  • Added new Dynamic UVs options in SDK and Studio.
    • For dynamic UVs you can choose between face and real options.
    • The face option behaves the same as before: UVs are adjusted for sampling the person's face from the camera texture.
    • The real option adjusts the UVs so that the camera texture is sampled based on the real positions of the mesh vertices.
  • Fixed dynamic UVs not working for nodes that are not in face space.
    • Dynamic UVs now work for everything:
      • Nodes in orthographic layers.
      • Nodes in perspective layers.
      • Nodes in face space.
      • Nodes in face space with parent or child node transformations.
      • Nodes not in face space.
  • Fixed recording video/audio sync issue when resuming/pausing.
  • Fixed eye position component mirroring.


  • Fixed a landmark2D camera mirroring bug.


  • Greatly improved face tracking performance (up to 100% faster on some devices).
  • Introduced a single-threaded mode (see the API documentation for details).





  • Fixed a critical bug that prevented users from applying render filters to the face.


  • No changes to the Android SDK.


  • Fixed OpenCL crashes on some devices
  • Fixed a crash when the phone changes orientation.
  • Fixed scripting subsystem resource leaks
  • BIG PERFORMANCE IMPROVEMENT: Introduced a new way of feeding camera frames to DeepAR: external texture. In this workflow, camera frames are passed to DeepAR via an OpenGL texture, which means they stay in GPU memory the whole time and never have to be copied from a ByteBuffer, a much more CPU-intensive process. It also lets DeepAR do its image processing on the GPU, which results in higher FPS. Check out our GitHub example to see how to use this feature. You can set useExternalCameraTexture to true or false to switch between the external-texture workflow and the old standard workflow of passing camera frames via ByteBuffer.
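A minimal sketch of where the external texture comes from on Android (the surrounding camera pipeline and the exact way the texture is handed to DeepAR are version-specific; see the GitHub example and API docs):

```java
// Hedged sketch, Java/Android: creating an external OpenGL texture for camera
// frames. The wiring into DeepAR (via useExternalCameraTexture = true) follows
// the changelog above; exact method names may differ by SDK version.
import android.graphics.SurfaceTexture;
import android.opengl.GLES20;

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);

// Camera frames rendered into this SurfaceTexture stay in GPU memory,
// avoiding the CPU-side ByteBuffer copy of the standard workflow.
SurfaceTexture cameraTexture = new SurfaceTexture(tex[0]);

// ... attach cameraTexture as the camera preview target, then run with
// useExternalCameraTexture = true so DeepAR consumes the GPU texture
// instead of ByteBuffer frames (see the GitHub example for the full setup).
```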
  • General refactoring and bugfixes


  • Optimised our engine. Up to 10x performance boost for Android devices that support OpenCL and up to 50% boost for all other devices (OpenGL fallback). You can now safely use 1080p camera resolution and it will run smoothly ;)
  • Want more control over physics components at runtime? You can now enable or disable physics entirely with the enablePhysics method, and show or hide physics colliders with the showColliders method. You can also change any physics component parameter through our ChangeParameter API; see the documentation.
  • General refactoring and bugfixes
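A hedged sketch of the runtime physics controls named above, assuming a configured DeepAR instance (the game object, component, and parameter names in the ChangeParameter call are purely illustrative; check the documentation for the exact signatures in your SDK version):

```java
// Sketch only: requires the DeepAR Android SDK and an initialized DeepAR object.
deepAR.enablePhysics(false);   // pause all physics simulation
deepAR.enablePhysics(true);    // resume it
deepAR.showColliders(true);    // draw physics colliders for debugging

// ChangeParameter API: adjust a physics component parameter at runtime.
// "hair_strand", "Physics", and "stiffness" are hypothetical example names.
deepAR.changeParameterFloat("hair_strand", "Physics", "stiffness", 0.5f);
```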



  • Improved audio/video sync for video recording.


MAJOR CHANGE: DeepAR now works with the Camera, Camera2, and CameraX APIs. The receiveFrame method accepts two additional parameters, image format and pixel stride, and can now receive both NV21 and YUV_420_888 image formats. See our API documentation for more info.
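A hedged sketch of feeding a Camera2/CameraX frame through the extended receiveFrame call described above. The parameter order and format constant shown here are illustrative assumptions; confirm them against the API documentation for your SDK version:

```java
// Sketch only: requires the DeepAR Android SDK and an initialized DeepAR object.
import android.graphics.ImageFormat;
import java.nio.ByteBuffer;

int width = 1280, height = 720;
// In a real app this buffer comes from ImageReader / ImageAnalysis plane data.
ByteBuffer frame = ByteBuffer.allocateDirect(width * height * 3 / 2);
int orientation = 0;     // device rotation in degrees
boolean mirror = true;   // typically true for the front camera
int pixelStride = 1;     // from Image.Plane#getPixelStride() for YUV_420_888

// The two new parameters are the image format and the pixel stride.
deepAR.receiveFrame(frame, width, height, orientation, mirror,
                    ImageFormat.YUV_420_888, pixelStride);
```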

  • Added touchStart and touchEnd methods that work with the new Hide on touch component of DeepAR Studio.
  • Bugfix: fixed a crash when feeding frames with non-standard aspect ratios.


  • Significantly improved performance and quality of background segmentation in Android SDK
  • Added support for additional output formats in the offscreen processing workflow. Users can now select between the following formats: RGBA_8888, ARGB_8888, BGRA_8888, and ABGR_8888.
  • Other smaller bugfixes and improvements


  • Added moveGameObject API method
  • Rendering engine optimizations providing better performance on lower-end devices
  • Fixed video recording bug where the first couple of frames were black sometimes
  • Other various stability improvements and bugfixes


Major upgrade; check the API docs for a more detailed explanation of the features.

  • Frame-by-frame/Continuous rendering mode (live mode on/off) added
  • Off-screen/On-screen rendering mode added
  • Computer vision only on/off mode added
  • Seamless switching between any rendering mode

API changes

  • Changed the signature of the AREventListener error method.
  • Added the frameAvailable method to AREventListener.
  • Added the following methods to the DeepAR class: startCapture, stopCapture, setVisionOnly, and setOffscreenRendering.
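A hedged sketch of switching between the rendering modes listed above using the new DeepAR methods (argument lists are illustrative; check the API docs for the exact signatures):

```java
// Sketch only: requires the DeepAR Android SDK and an initialized DeepAR object.
deepAR.setOffscreenRendering(true);  // off-screen mode: render to a buffer, not the view
deepAR.setVisionOnly(true);          // computer-vision-only mode: tracking without rendering
deepAR.startCapture();               // begin processing camera frames
// ... later, e.g. when the app is paused or switching modes:
deepAR.stopCapture();                // stop processing; modes can be switched seamlessly
```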


  • Error and warning reporting from the DeepAR engine via the error method of ARViewListener is now more verbose, to help identify potential issues.
  • Various memory issues fixed
  • Video recording resolution issues fixed


  • Added support for 1920x1080 resolution
  • Smaller bugfixes and optimizations


  • Major upgrade
  • New segmentation model
  • Tracking improvements
  • Bugfixes and optimizations


  • Bugfixes
  • Added support for hair segmentation
  • Minimum API changed back to 19


  • Bugfixes
  • Game object transforms can now be changed at runtime
  • Textures can now be changed at runtime
  • Color sampling now works properly on the first frame
  • Added support for background segmentation
  • Minimum API level changed to 21


  • Bugfixes


  • Bugfixes


  • Bugfixes


  • Image tracking support
  • Improved handling of multiple faces.


  • License key support


  • Added shutdownFinished to AREventListener


  • Face rect output
  • Face detection sensitivity
  • Video recording bugfixes


  • Exposed the camera device; it can be used to initialize DeepAR with the back camera or to change focus/exposure settings.
  • Improved handling of camera orientation changes.
  • Bugfix: fixed changing orientation while using the back camera.
  • Other bugfixes


  • Bugfixes and performance improvements


  • Updated docs