This is a WebGL implementation of a real-time renderer for Drivable Gaussian Splatting Avatar.
You can try it on the web; use a camera to drive the avatar with MediaPipe blendshapes.
Demo of the drivable avatar driven by MediaPipe blendshapes.
- Creates a drivable Gaussian avatar from monocular videos.
- Aims at real-time driving and rendering on mobile devices.
- The morphable model is saved as a .splat file and the driving parameters as a .json file.
- Blur, caused by the modeling and the captured data.
- Difficulty driving the mouth region, caused by occlusion.
Two types of videos can be recorded, each 2 to 3 minutes long:
- static camera and dynamic facial expressions.
- static facial expressions and dynamic camera.
We choose the static camera with dynamic facial expressions to collect the facial data. Inspired by Apple's Persona, we record a series of facial expressions and movements:
- Canonical expression while rotating the neck clockwise.
- Speaking a few sentences.
A drawback of rendering an animated avatar with Gaussian splatting is that the mouth region is difficult to model and recover. We surveyed research on face image restoration and super-resolution, and adopt the pretrained GFPGAN to restore the face rendered by Gaussian splatting. Given the work on real-time super-resolution in the Qualcomm® AI Hub Models, such as Real-ESRGAN, the inference time appears feasible if the model is migrated to mobile devices.
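The restoration step amounts to: render the frame, crop the face, enhance the crop, and paste it back without a visible seam. GFPGAN's `enhance(..., paste_back=True)` handles the paste-back internally; the standalone sketch below shows only that last blending step with a feathered mask, assuming the face box is already known. It is a minimal illustration, not the project's pipeline.

```python
import numpy as np

def feathered_mask(h: int, w: int, feather: int) -> np.ndarray:
    """Mask that is 1.0 in the interior and ramps to 0.0 over `feather` pixels at the border."""
    ramp_y = np.minimum(np.arange(h), np.arange(h)[::-1]) / feather
    ramp_x = np.minimum(np.arange(w), np.arange(w)[::-1]) / feather
    return np.clip(np.minimum.outer(ramp_y, ramp_x), 0.0, 1.0)

def paste_back(frame: np.ndarray, restored_crop: np.ndarray,
               box: tuple, feather: int = 8) -> np.ndarray:
    """Blend a restored face crop into `frame` at box=(y0, x0) with soft edges."""
    y0, x0 = box
    h, w = restored_crop.shape[:2]
    mask = feathered_mask(h, w, feather)[..., None]  # (h, w, 1) for broadcasting
    out = frame.astype(np.float32).copy()
    region = out[y0:y0 + h, x0:x0 + w]
    out[y0:y0 + h, x0:x0 + w] = mask * restored_crop + (1.0 - mask) * region
    return out.astype(frame.dtype)
```

On mobile, the same blend could run as a small fragment shader pass after the super-resolution network.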
Thanks to Kevin Kwok for the original 3D Gaussian Splatting WebGL code.