Audio2Face simplifies animating a 3D character to match any voice-over track, whether you’re animating characters for a game, a film, a real-time digital assistant, or just for fun. You can use the app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out; it’s up to you.
HOW IT WORKS
Omniverse Audio2Face is based on an original NVIDIA Research paper. The app comes preloaded with “Digital Mark”, a 3D character model that can be animated with your audio track, so getting started is simple: just select an audio file and upload it into the app. The technology feeds the audio into a pre-trained deep neural network, and the network’s output drives the 3D vertices of your character mesh to create facial animation in real time. You can also adjust various post-processing parameters to fine-tune the character’s performance. The results you see on this page are mostly raw output from Audio2Face, with little to no post-processing applied.
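In essence, the pipeline splits the audio track into analysis frames, runs each frame through the pre-trained network, and applies the resulting per-vertex offsets to the character mesh. The sketch below illustrates that flow in plain Python; every name is hypothetical, and the “network” is a toy stand-in for the real deep neural network, not the Audio2Face API.

```python
# Illustrative sketch of an Audio2Face-style pipeline:
# audio frames -> pretrained network -> per-vertex offsets -> deformed mesh.
# All function names are hypothetical; the network is a toy placeholder.

def extract_audio_frames(samples, frame_size=4):
    """Split raw audio samples into fixed-size analysis frames."""
    return [samples[i:i + frame_size]
            for i in range(0, len(samples) - frame_size + 1, frame_size)]

def pretrained_network(frame, num_vertices):
    """Stand-in for the pre-trained deep neural network: maps one audio
    frame to (x, y, z) offsets per vertex. Here it simply scales the
    frame's mean amplitude, purely for illustration."""
    amplitude = sum(abs(s) for s in frame) / len(frame)
    return [(0.0, amplitude * 0.1, 0.0) for _ in range(num_vertices)]

def animate(rest_vertices, samples):
    """Drive the mesh vertices frame by frame from the audio track."""
    animation = []
    for frame in extract_audio_frames(samples):
        offsets = pretrained_network(frame, len(rest_vertices))
        animation.append([
            (x + dx, y + dy, z + dz)
            for (x, y, z), (dx, dy, dz) in zip(rest_vertices, offsets)
        ])
    return animation  # one list of deformed vertex positions per frame

# Tiny 3-vertex "mesh" and a short audio clip, just to exercise the flow.
rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frames = animate(rest, [0.0, 0.2, -0.2, 0.4, 0.1, -0.1, 0.3, 0.2])
```

Running live differs only in the source of `samples`: instead of a recorded track, frames stream in from a microphone and are processed one at a time.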
(Note: Direct blendshape support will be released at a later date)
USE A RECORDING, OR ANIMATE LIVE
Simply record a voice audio track, import it into the app, and watch your 3D face come alive. You can even generate facial animation live through a microphone.
Audio2Face processes any language with ease, and we’re continually adding support for more. Check out these tests in English, French, Italian, and Russian.
MAKE ANY FACE COME TO LIFE
Audio2Face lets you retarget to any 3D human or human-esque face, whether realistic or stylized. Watch this test as we retarget from Digital Mark to Rain.
SOLO ACT OR A CHOIR
It’s easy to run multiple instances of Audio2Face with as many characters in a scene as you like – all animated from the same or different audio tracks. Breathe life and sound into a dialogue between a duo, a sing-off between a trio, an in-sync quartet – and beyond.
BRING THE DRAMA
Audio2Face gives you the ability to choose and animate your character’s emotions – the network automatically manipulates the face, eyes, mouth, tongue, and head motion to match your selected emotional range.
Feature coming soon.