Samsung’s Project Neon has been the talk of the town for a while now. The Korean tech giant is planning to create realistic, human-like virtual beings that could be used as virtual receptionists, news reporters, guides, and more. These digital beings are based on data captured from actual humans, and they can generate their own expressions, movements, and even words.
The project’s lead, human-computer interaction researcher Pranav Mistry, has now posted a tweet revealing that Samsung is ready to demonstrate the technology at the ongoing Consumer Electronics Show.
Flying to CES tomorrow, and the code is finally working :) Ready to demo CORE R3. It can now autonomously create new expressions, new movements, new dialog (even in Hindi), completely different from the original captured data. pic.twitter.com/EPAJJrLyjd
— Pranav Mistry (@pranavmistry) January 5, 2020
He has also revealed that Neon can now “autonomously create new expressions, new movements, new dialog (even in Hindi), completely different from the original captured data”.
A still image doesn’t reveal much, but the video included below demonstrates exactly what the technology is capable of.
The video goes into detail about how well Neon captures data from actual humans and translates it into extremely lifelike projections. Chelsea Militano’s avatar, for example, is indistinguishable from the actual news reporter in both voice and appearance.
However, it’s still early days for these digital humans, and we will have to wait and see whether the avatars can live up to expectations. Several other questions should be answered soon, as Samsung is expected to showcase Neon today at CES 2020.