
HAHA! Very excited about this!! Nathan just put together a Python script to get Google's MediaPipe face-tracking data into Blender! 

[Quick edit: to help with troubleshooting, if you're posting about a problem, including your OS, whether you had trouble installing Python or the libraries, and a screenshot of the command terminal (both what you typed in, and what it said back) very much increases the odds we can help you :) ]

It's not useful for everything, but for some things, it's an absolute life saver.

Get the Face Tracker Python Script here at Eat The Future.

Here's a text version of what I cover in the video (along with some easy-to-copy-and-paste text):



FOR INSTALLATION:

Make sure Python is installed for your system
(you might have to uninstall any out-of-date versions)

Open a command prompt and install the libraries, one at a time, using:


pip install numpy

pip install opencv-python

pip install mediapipe
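
(If you'd rather do it in one go, pip also accepts multiple packages on a single line; this is equivalent to the three commands above:)

```shell
pip install numpy opencv-python mediapipe
```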


Create an environment variable named:
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION
with its value set to
python
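
(One way to set that from the command line, rather than digging through system settings -- `setx` on Windows makes it permanent for your user account:)

```shell
# Windows (Command Prompt) -- persists for your user account:
setx PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION python

# macOS / Linux -- add this line to your shell profile so it persists:
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
```

Note that on Windows you'll need to open a fresh command prompt after `setx` for the variable to take effect.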

And finally, download the script linked above (and as seen in the video) from the Eat The Future GitHub page.


TO USE THE TOOL (probably best explained in the video, honestly)

 
For Orthographic Mode:

Open a command prompt, and drag in the Python script

Make sure there's a space. 

Drag in the video to be tracked.

Another space. 

Set the destination for the .mdd file (I usually drag the video in again, and change the extension to ".mdd")

Mine looks like this:

Hit Enter, and off it goes!
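
Put together, the whole line looks roughly like this (the script name and paths here are made up -- yours will be whatever you dragged in; dragging a file into the prompt pastes its full quoted path):

```shell
"C:\tools\face_track.py" "C:\footage\interview.mp4" "C:\footage\interview.mdd"
```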



For Perspective Mode:

The same thing, but just one more bit of information. 

Open a command prompt, and drag in the Python script

Make sure there's a space.

Decide whether you want to use Field of View (FOV) or the camera settings (Sensor Size and Focal Length)

For FOV, put 

--fov ##
(replace "##" with the estimated field of view)

For Focal Length, put

--focal_len ##/##
(replace "##" with the sensor size and focal length, respectively)

Add a space. 

Drag in the video to be tracked.

Another Space.

Set the destination for the .mdd file (I usually drag the video in again, and change the extension to ".mdd")

Mine looks like this:

Hit Enter, and off it goes!
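
So the full perspective-mode line looks something like one of these (script name, paths, and the numbers are just placeholder examples -- use your own estimated FOV or camera settings):

```shell
# Using an estimated field of view:
"C:\tools\face_track.py" --fov 55 "C:\footage\interview.mp4" "C:\footage\interview.mdd"

# Or using sensor size / focal length instead:
"C:\tools\face_track.py" --focal_len 36/50 "C:\footage\interview.mp4" "C:\footage\interview.mdd"
```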


Anyways!! We hope this is useful to someone out there! At some point we'll be releasing it to a wider audience, but in the meantime just wanna make sure it works for all you here!

And unfortunately, as I say in the video, while we super want this to work on your machine, there are a lot of variables to account for, so no promises. That said, I've been able to successfully install it on every Windows machine I've tried, so I'm optimistic! 
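
For anyone hitting parse errors on the arguments: here's a hypothetical sketch, using Python's standard argparse module, of how a command line like this one typically gets read (the real script may differ -- this is just to illustrate why the spaces, the quoting, and the order of options-then-input-then-output matter):

```python
import argparse

# Hypothetical reconstruction of a CLI like this one: two positional
# arguments (input video, output .mdd) plus optional perspective flags.
parser = argparse.ArgumentParser(description="MediaPipe face tracking to .mdd")
parser.add_argument("video", help="input video file to track")
parser.add_argument("output", help="destination .mdd file")
parser.add_argument("--fov", type=float, help="estimated field of view in degrees")
parser.add_argument("--focal_len", help="sensor size/focal length, e.g. 36/50")

# Simulating:  script.py --fov 50 take1.mp4 take1.mdd
args = parser.parse_args(["--fov", "50", "take1.mp4", "take1.mdd"])
print(args.video, args.output, args.fov)  # -> take1.mp4 take1.mdd 50.0
```

A parser like this fails with "unrecognized arguments" or similar when a space is missing (two paths fuse into one argument) or when an unquoted path contains spaces, which is consistent with the errors people describe in the comments.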

Files

Automatic Face Tracking

Nathan put together a Python script to get Google's MediaPipe data into Blender! Check out the Patreon post for all the links and such. https://www.patreon.com/posts/68138545 0:00 Introduction 4:54 Installation 5:20 Installing Python 6:55 Installing Libraries 7:31 Creating Environment Variable 7:54 Download Script 8:15 How To Use 13:17 Perspective Mode 15:40 Closing Thoughts

Comments

Anonymous

Very late to this thread, but I wanted to give this a try on my Windows 10 machine, and the script isn't working at all. After successfully downloading everything as instructed, when I drag the face_track.py and footage files into the CMD line, I get a syntax error for an unexpected character. Screenshot here: https://drive.google.com/drive/folders/1l4PStABkET0HacvUjwHSA8Nezp43F4FU?usp=drive_link Anyone else find a workaround?

Anonymous

@Nathan maybe the new PyScript library would make install easier by turning this into a web app that would just run in anyone's browser locally. I think the external libraries would just have to be hosted as wheels to make installing the dependencies easier (if even needed; I'm not sure if Pyodide already has support for MediaPipe on the backend)

Anonymous

Hi, Ian. Thanks for the wonderful face tool!! I have a question: is there a way to export this moving face object? I exported the moving face as FBX and imported it, but it isn't moving; it's just a static object, not animated.

Noneya D Biznazz

Has anyone set up a place to pool info on this other than this thread?

Anonymous

Hi, Ian. When I ran cmd and put in everything you showed, cmd just opened the code; nothing happened.

Anonymous

Am I the only one getting a parse error on input when trying the script? :( "face_track.py" "virgieinput.mov" "virgie.mdd" from the current folder gives me a parsing error on video input and output (also tried absolute paths but no success)

Anonymous

My footage is always upside down when imported :/ anyone know how to fix this?

Anonymous

this is such a cool concept - thx!

Anonymous

Ooh BDG!

Anonymous

When this gets fully polished please post the final results! Fingers crossed for Mac owners

Anonymous

Anybody else having issues with their face mesh NOT receiving lights when in perspective mode? I'm trying to have an HDRI/light/hologram shader light up the face mesh somewhat accurately but it never seems to be affected by it.

IanHubert

Only in perspective mode?? But it IS affected by the light when orthogonal? Cause that's bizarre! I can't think of anything that would make that happen!

Anonymous

Actually, it isn't receiving lights in ortho either which is interesting! Hmm. So if I have a spot on there or a hologram in front of the tracked face mesh, the face mesh is not affected by either. I'm wondering if it's a limitation of the mesh cache modifier?

Anonymous

This is amazing! Thank you so much. Is there a way to project the video texture onto the face mesh and zero out the head movement and rotation? I would like to track one face onto another, deepfake style... Thank you for any tips in advance.

Anonymous

This is so great, Ian. One question though: do you have any idea how to get around the problem of "wrong vertex count" when trying to model the base face mesh? And do you guys have any idea how to apply the track to another face mesh? Thank you so much for your work!!

Anonymous

For reasons I don't understand, the MDD file created only starts tracking on frame 73 of a 103 frame shot.

Anonymous

If I make a fight scene with photoscan characters, and use this technique to put real footage faces on some scenes, might it work? Blend the photoscan character with the face footage

Jack_Wolfe

NATHAN! why are you so frellin awesome?