You can see the retroreflective dots (Scotchlite paint?) in action throughout the video - you'll notice they are bright in the frontal view but dull in the side shot. I'm not sure why they are using them, since markerless tracking has been viable for a while now; markers do make the system much more robust, but then you have to both apply and remove them - or leave them on, as in this case, since they don't detract from the aesthetic. I think the green lines are added in post, if not an outright red herring for the geeks out there (keen to learn, however, if I'm wrong in that assumption).
From a bit of web hunting and looking at the first few shots from the video I've found this:
As it says, plug-and-play ...
6DoF and IR or not as you choose.
A bit out of my price range for my project, but it's likely I'll opt for two of the sensors that are used in the product:
It's my bag to develop the backend and GUI for the application.
Lots of jargon - 'Haar features/Eigenfaces/AAM/POSIT/etc.' - but all doable, especially with libraries like OpenCV available. Reading the original papers from the '90s is interesting, with them talking about 1 fps rates; it looks like it's more around 100 fps now, although I imagine careful consideration of algorithmic complexity would still be required to avoid a brute-force approach.
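Part of why Haar features are so fast is the integral image trick: once the integral image is built, any rectangular pixel sum costs just four lookups, regardless of the rectangle's size. A minimal sketch in plain Python (the toy image and the two-rectangle feature geometry are my own illustration, not OpenCV's actual cascade format):

```python
def integral_image(img):
    # ii[y][x] = sum of all pixels above and left of (x, y), exclusive,
    # with an extra zero row/column so rect_sum needs no edge cases.
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    # Sum of the w*h rectangle with top-left corner (x, y) - four lookups.
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    # A two-rectangle Haar-like feature: right half minus left half,
    # responding to a vertical dark/bright edge.
    half = w // 2
    return rect_sum(ii, x + half, y, half, h) - rect_sum(ii, x, y, half, h)

# Toy 4x4 image: left half dark (0), right half bright (10).
img = [[0, 0, 10, 10]] * 4
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # → 80
```

A real detector evaluates thousands of such features per window, arranged in a cascade so most windows are rejected after only a few - which is how the modern frame rates become possible.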
I have control over the setting, and the system can arguably be less general and optimised for specific cases - i.e. certain faces - which might afford a kind of offline pre-processing and speed things up...
Developing on my webcam at the moment is crippling once I extend the features beyond face and eyes, e.g. adding 'pose' (pitch, roll and yaw) - let alone eye and mouth movement... Work in progress, however!
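For the pose part: once a tracker hands you a head rotation matrix (e.g. from a POSIT-style or solvePnP-style fit), pitch, roll and yaw are just Euler angles recovered from it. A quick stdlib-only sketch - the Z-Y-X (yaw-pitch-roll) convention here is my assumption; different trackers use different axis conventions:

```python
import math

def rotation_to_euler(R):
    """Recover (pitch, roll, yaw) in degrees from a 3x3 rotation matrix,
    assuming the Z-Y-X (yaw-pitch-roll) Euler convention."""
    # sy shrinks toward zero as pitch approaches +/-90 degrees (gimbal lock).
    sy = math.hypot(R[0][0], R[1][0])
    pitch = math.atan2(-R[2][0], sy)
    if sy > 1e-6:
        roll = math.atan2(R[2][1], R[2][2])
        yaw = math.atan2(R[1][0], R[0][0])
    else:  # gimbal lock: roll and yaw are no longer independent
        roll = math.atan2(-R[1][2], R[1][1])
        yaw = 0.0
    return tuple(math.degrees(a) for a in (pitch, roll, yaw))

# Sanity check: a pure 30-degree yaw (head turned about the vertical axis).
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
R_yaw = [[c,  -s,  0.0],
         [s,   c,  0.0],
         [0.0, 0.0, 1.0]]
print(rotation_to_euler(R_yaw))  # → roughly (0.0, 0.0, 30.0)
```

In an OpenCV pipeline the matrix would come from `cv2.solvePnP` (which returns a rotation vector you convert with `cv2.Rodrigues`), but the angle extraction itself is just this bit of trigonometry.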
Although the video is great, it was just one example of many, and I meant the question in a more general sense - so OK, Avatar was just standard video, but was it real-time at any stage, maybe for visualisation in the video village?