
Jack Skellington live facial capture

This was a passion project that escalated quickly. Becca and I decided to dress as Jack Skellington and Sally for Halloween this year.

But I quickly became convinced that I could make a much better mask than the store-bought one if I 3D printed a Jack Skellington model.
And if I'm going to print it, I might as well design it for rear projection and mount a mini projector inside so that I can animate Jack's face.
And if it's going to be an animated mask, it would HAVE to be animated in real time using live facial tracking. Right?

Give a mouse a cookie... or Kyle an idea...


Unfortunately, I didn't manage to build the mask in time. But the exercise gave me a great opportunity to play around with live facial capture using Apple's ARKit. I used Blender to produce the model and blendshapes, ported the model into Unreal, and used Live Link Face on an iPad to produce the video below. I tried a number of alternatives, including MeowFace, iFacialMocap, and a couple of options for capturing face motion directly into Blender, but the best combination overall was Live Link Face and Unreal. The pipeline:

  1. Model Jack Skellington in Blender with clean edge loops.
  2. Create the 52 blendshapes that ARKit facial tracking expects (I even included the classic Tim Burton sandworm as a tongue). There's a sketch of stubbing these out right after this list.
  3. Port the animated model to Unreal 4.26.
  4. Configure the Animation Blueprint to map the facial animation to the Live Link Face feed.
  5. Run Live Link Face on an iPad. I turned off the face mesh overlay because it seemed to be adding lag to the feed.
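For step 2, the tedious part is simply getting 52 correctly named shape keys onto the mesh before any sculpting happens. Below is a minimal Blender Python sketch of how that stubbing can go; it assumes the head mesh is the active object, lists only a subset of Apple's blendshape names for brevity, and every shape still has to be sculpted by hand afterward.

```python
# Minimal Blender (bpy) sketch: create empty, correctly named shape keys
# so Live Link Face / ARKit curves can drive them later. Run from
# Blender's Python console with the head mesh as the active object.
import bpy

# Subset of the 52 ARKit blendshape names; extend with the full list.
ARKIT_SHAPES = [
    "browInnerUp", "browDownLeft", "browDownRight",
    "eyeBlinkLeft", "eyeBlinkRight", "eyeWideLeft", "eyeWideRight",
    "jawOpen", "jawForward", "jawLeft", "jawRight",
    "mouthSmileLeft", "mouthSmileRight", "mouthFrownLeft", "mouthFrownRight",
    "mouthFunnel", "mouthPucker", "tongueOut",
]

obj = bpy.context.active_object

# A "Basis" key must exist before any other shape keys can be added.
if obj.data.shape_keys is None:
    obj.shape_key_add(name="Basis")

for name in ARKIT_SHAPES:
    if name not in obj.data.shape_keys.key_blocks:
        obj.shape_key_add(name=name, from_mix=False)
```

If the morph target names match the ARKit curve names exactly, the Unreal side needs little or no manual remapping.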

There is still quite a lot left to do before this idea works end to end, but I have most of the materials; it's really just a matter of finding the time.

  1. Refine the Jack Skellington head model for 3D printing.
  2. Modify the mask to mount the mini-projector inside.
  3. Modify the mask to mount an iPhone camera extender inside.
  4. Print the mask.
  5. Wire it up to a laptop in a backpack, or preferably a smaller playback device such as a Steam Deck or a Windows Surface.

There are some obvious challenges that I just won't be able to resolve until I confront them with the real hardware:

  • The face capture breaks down pretty quickly the closer the camera gets to the face. Even if I scale the mask up into an oversized head, it will be pretty tight quarters inside for mounting the camera. I'm considering a wide-angle lens attachment, or a convex mirror, to capture the full face, and then some kind of real-time processing to fix the distortion (see the first sketch after this list).
  • The projection onto the interior of the mask will be difficult to map properly. The right way to do this would be to create an inverted face model. Imagine taking a latex mask from a Spirit Halloween store and turning it inside out; that way, when the projection maps onto the surface, it will appear right side in. For this, I'm thinking I would take the 3D print model and use a Python script to project the vertices of the animation model to the opposite side of each polygon, in the direction of the surface normal (see the second sketch after this list).
  • This thing will probably get pretty hot, so I need to design some ventilation in there.
  • It would also be cool to include a microphone and do some voice distortion. Maybe find something that could make my voice sound more like Chris Sarandon.
  • Even though the black-and-white nature of Jack's bony face lends itself well to being projected onto a surface, the mask itself won't physically move, and that could kill the illusion. I'll just have to account for it in the performance design. But one thing that would be really cool is if I could animate the jaw for a special scare move. If the jaw extends down a few inches, I could do the Jack scare moment when he yells at the Boogie Boys.
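On the first challenge: here's a rough sketch of what the distortion correction could look like, prototyped on a desktop webcam with OpenCV's fisheye model. The camera matrix K and distortion coefficients D below are placeholders; real values would come from a one-time checkerboard calibration, and whether any of this can be wedged into the Live Link Face pipeline is still an open question.

```python
# Sketch: undistort a wide-angle/fisheye frame in real time with OpenCV.
# K and D are placeholder values, not real calibration data.
import cv2
import numpy as np

K = np.array([[600.0,   0.0, 640.0],
              [  0.0, 600.0, 360.0],
              [  0.0,   0.0,   1.0]])    # placeholder intrinsics
D = np.array([-0.05, 0.01, 0.0, 0.0])    # placeholder fisheye coefficients

size = (1280, 720)
# Precompute the remap tables once; cv2.remap per frame is then cheap.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, size, cv2.CV_16SC2)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    undistorted = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)
    cv2.imshow("undistorted", undistorted)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```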
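On the second challenge: a rough interpretation of that inversion script in Blender Python, pushing every vertex through the surface along its own normal and then flipping the faces. OFFSET is an assumed wall thickness; how well per-vertex normals behave in tight creases like the eye sockets is exactly the kind of thing I won't know until I try it.

```python
# Sketch in Blender (bpy/bmesh): turn the head model "inside out" by
# pushing every vertex through the surface along its own normal, so a
# projector hitting the interior shows the face right side in.
import bpy
import bmesh

OFFSET = 0.01  # assumed wall thickness of the printed mask, in meters

obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)
bm.verts.ensure_lookup_table()

# Move each vertex to the opposite side of the surface along its normal.
for v in bm.verts:
    v.co -= v.normal * (2.0 * OFFSET)

# Flip the faces so the former interior becomes the projection surface.
for f in bm.faces:
    f.normal_flip()

bm.to_mesh(obj.data)
bm.free()
obj.data.update()
```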