Sequences of dynamic movements or transformations applied to 3D objects in a scene. 3D animations enable users to create and manipulate the motion of objects, characters, or elements in their 3D projects, defining how objects move, rotate, and change in size or shape over time within the 3D environment.
An animation controller in a 3D editor is a tool or feature that allows users to manage and control animations within the 3D editing environment. It provides functionalities for defining animation states, transitions between states, and the conditions or triggers that activate these transitions. Animation controllers assist users in orchestrating complex animations in their 3D projects.
Animation triggers are events or conditions that users can define to initiate the playback of specific animations. These triggers can be tied to user interactions, keyframes, scripting, or other actions within the editing environment. They empower users to add interactivity and responsiveness to their 3D animations, making them context-aware and dynamic within the editor.
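The controller-and-trigger model above can be sketched as a small state machine. This is an illustrative example, not the Studio's actual API; the state and trigger names ("idle", "wave", "on_smile") are hypothetical.

```python
# Minimal sketch of an animation controller: states, transitions,
# and triggers that fire those transitions (names are illustrative).
class AnimationController:
    def __init__(self, initial_state):
        self.state = initial_state
        self.transitions = {}  # (current state, trigger) -> next state

    def add_transition(self, state, trigger, next_state):
        self.transitions[(state, trigger)] = next_state

    def fire(self, trigger):
        # Change state only if a transition is defined for the
        # current state; unknown triggers leave the state unchanged.
        key = (self.state, trigger)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

controller = AnimationController("idle")
controller.add_transition("idle", "on_smile", "wave")
controller.add_transition("wave", "on_neutral", "idle")
print(controller.fire("on_smile"))  # prints "wave"
```

In a real editor the triggers would be wired to user interactions or facial expressions rather than fired manually.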
Any media or data that can be used in your project. An asset may come from a file created outside of the Studio, such as a 3D Model, an audio file, or an image. You can also create some asset types in the Studio, such as preset 3D models, Shaders, Render Textures, etc.
A collection of files, elements, and data from a Studio project, compressed and stored in a single file with the .deeparpack extension. Asset packs are a way of sharing and re-using Studio projects and assets.
An effect that isolates a person in the foreground and blurs the background, creating a visually appealing depth of field.
A neural network-based technique that distinguishes a user positioned in the foreground from the background in a sequence of video frames. It is primarily employed to eliminate or replace the background with another image. Background segmentation is utilized as a standalone feature in video conferencing, live streaming, and video communication applications. It can also be integrated into face filters alongside facial animation for entertainment applications.
Functional parts or modules that can be attached to nodes in a scene to give them specific properties or behaviors. Components can include things like rendering components, physics components, audio components, and more. They allow users to customize and enhance the functionality of objects in the scene.
User-created shader programs for advanced rendering and visual effects.
The technology used to identify and continuously monitor the presence of a human face within a video frame.
Autodesk’s proprietary format that the Studio uses to import Models, animation, and more.
The technology used to identify and continuously monitor the presence of human feet within a video frame.
A graphic overlay associated with a Node in a Scene, and displayed in the Editor Panel, for example, the move tool.
A neural network approach used to detect and segment images into hair, face, and background components. This enables real-time hair modifications such as changing hair color.
LUT Post-processing (Look-Up Table)
Real-time color correction that remaps each pixel's color through a pre-computed look-up table.
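As a rough sketch of the idea, a 1D LUT stores one output value per possible input value per channel, so correction becomes a table lookup instead of a per-pixel computation. The gamma curve used to fill the table here is purely illustrative.

```python
# Sketch of LUT-based color correction: a 256-entry table per
# 8-bit channel, built once, then applied per pixel by indexing.
def build_gamma_lut(gamma):
    # Precompute the correction curve for every 8-bit input value.
    return [round(255 * (i / 255) ** gamma) for i in range(256)]

def apply_lut(pixel, lut):
    # Per-pixel work is just three table lookups.
    r, g, b = pixel
    return (lut[r], lut[g], lut[b])

lut = build_gamma_lut(0.5)  # gamma < 1 brightens the image
print(apply_lut((64, 128, 255), lut))
```

Real LUT post-processing typically uses a 3D LUT sampled on the GPU, but the lookup principle is the same.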
Define the visual properties of objects, such as their color, reflectivity, and transparency.
A three-dimensional, interconnected network of vertices (points), edges (lines), and faces (polygons) that are used to represent and approximate the shape and structure of a 3D object in computer graphics, computer-aided design (CAD), and other fields.
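The vertex/edge/face structure can be sketched with a face-index representation: faces index into a shared vertex list, and edges are derived from the face loops. A minimal illustrative example (a quad split into two triangles), not any particular editor's mesh format:

```python
# Sketch of a polygon mesh: shared vertices plus faces that index
# them. Edges are derived by walking each face's boundary loop.
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing an edge

def edges_of(faces):
    edges = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            # Store each edge once, regardless of winding direction.
            edges.add((min(a, b), max(a, b)))
    return edges

# 5 unique edges; the diagonal (0, 2) is shared by both triangles.
print(sorted(edges_of(faces)))
```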
A 3D model representation of an object, such as a face, cube, quad, hat, etc.
The process of altering the size and proportions of a person's face, allowing for creative modifications such as slimming down cheeks or modifying facial features for fun masks.
Multiple Face Tracking
An algorithm that facilitates the application of masks to multiple individuals simultaneously, enhancing group Augmented Reality (AR) experiences.
Individual elements or objects within a 3D scene that can be manipulated and organized to create complex structures. Nodes are connected in a hierarchy to define the relationships between different elements in the scene.
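The node hierarchy described above can be sketched as a simple tree, with components attached per node. The node names and component list are hypothetical, not the Studio's actual scene API.

```python
# Sketch of a scene-graph node: a name, attached components, and
# children; hierarchy paths are derived by walking the tree.
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.components = []  # e.g. render or audio components (illustrative)

    def add(self, child):
        self.children.append(child)
        return child

    def paths(self, prefix=""):
        # Yield this node's path, then recurse into children.
        path = f"{prefix}/{self.name}"
        yield path
        for child in self.children:
            yield from child.paths(path)

root = Node("Root")
head = root.add(Node("Head"))
head.add(Node("Hat"))
print(list(root.paths()))  # ['/Root', '/Root/Head', '/Root/Head/Hat']
```

Parenting the hat under the head is what makes it follow the head's movement in a real editor.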
Disabling rendering of objects or parts of objects when they are not currently seen by the camera because they are obscured (occluded) by other objects.
The rendering of AR models to mimic the behavior of real-world objects in response to light and physics. This includes supporting gravity and adjusting lighting as the camera rotates or the user tilts their device.
Sequences of individual PNG image files commonly used in 3D editors to create animations, with each image representing one frame. Users can import and arrange these image sequences to create 2D sprite-based animations or texture animations that can be applied to 3D objects in the editor.
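Playback of such a sequence reduces to mapping elapsed time to a frame index. A small sketch under assumed conventions (the `frame_0000.png` naming and 24 fps rate are illustrative):

```python
# Sketch: pick the current frame of a PNG sequence from elapsed
# time and a frame rate, either looping or clamping at the end.
def frame_for(elapsed_s, fps, frame_count, loop=True):
    index = int(elapsed_s * fps)
    if loop:
        index %= frame_count            # wrap around for looping playback
    else:
        index = min(index, frame_count - 1)  # hold the last frame
    return f"frame_{index:04d}.png"

# 1.25 s at 24 fps is frame 30, which wraps to frame 0 in a 30-frame loop.
print(frame_for(1.25, 24, 30))  # prints "frame_0000.png"
```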
Graphical camera effects and animations applied to pre-recorded videos to enhance their visual appeal.
Ready-made 3D geometry that can be used as building blocks in a scene.
Pre-defined shader programs that control how objects are rendered in a specific way.
A digital environment or space where three-dimensional objects, assets, and elements are arranged and configured to create a cohesive and interactive 3D composition.
Code files that contain scripts or behaviors that can be attached to objects to define their functionality or interactivity.
Support for sound effects within FaceAR experiences, enabling the addition of music and audio enhancements to filters and effects.
Digital representations of the surface of AR objects, which enable sophisticated and lifelike rendering of these objects within the AR environment.
A Transform determines the Position, Rotation, and Scale of each object in the scene. Every Node has a Transform.
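A minimal sketch of what a Transform does to a point, reduced to 2D with a single rotation angle for brevity. The scale-then-rotate-then-translate order shown here is the common convention; this is not the Studio's actual Transform implementation.

```python
import math

# Sketch of a Transform: position, rotation (one angle, for brevity),
# and uniform scale, applied to a point as scale -> rotate -> translate.
class Transform:
    def __init__(self, position=(0.0, 0.0), rotation_deg=0.0, scale=1.0):
        self.position = position
        self.rotation_deg = rotation_deg
        self.scale = scale

    def apply(self, point):
        # Scale first...
        x, y = point[0] * self.scale, point[1] * self.scale
        # ...then rotate about the origin...
        a = math.radians(self.rotation_deg)
        rx = x * math.cos(a) - y * math.sin(a)
        ry = x * math.sin(a) + y * math.cos(a)
        # ...then translate to the node's position.
        return (rx + self.position[0], ry + self.position[1])

t = Transform(position=(5, 0), rotation_deg=90, scale=2.0)
# (1, 0) scales to (2, 0), rotates 90 deg to (0, 2), translates to (5, 2).
x, y = t.apply((1, 0))
```

In 3D, rotation would be a quaternion or 3×3 matrix and the whole operation a 4×4 matrix, but the composition order is the same.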
Small AR scenarios activated through user facial expressions, such as opening their mouth, smiling, raising eyebrows, or frowning. This feature is commonly used in FaceAR games and interactive face filters to enhance user engagement.
Video textures in a 3D editor involve mapping a video or a sequence of frames onto the surface of 3D objects within the editing environment. This technique allows users to integrate dynamic content, such as videos, directly onto 3D objects, enhancing the visual appeal and interactivity of the 3D scene.
The user’s visible area of an app on their screen.
The technology used to identify and continuously monitor the presence of a human face within a video frame.