In a recent interview with Glixel, Dr. Richard Marks, head of Sony’s Magic Lab R&D team, talked about PSVR’s development history, social VR, and a possible holodeck-style future. He thinks voice input has unrealised potential and could become the way users launch into different VR experiences in the future, customising them in real time thanks to procedural generation.

Marks’ Magic Lab played a pivotal role in developing ‘Project Morpheus’, the prototype VR headset that would eventually become PlayStation VR.

Project Morpheus prototype | Photo courtesy Sony

Following a recent Christmas break during which he says he studied a robot vacuum cleaner, tested all the voice-input devices available for the home (such as the Amazon Echo and Google Home smart speakers), and watched every episode of Black Mirror, it was voice control that excited Sony’s head of Magic Lab the most. Marks thinks that a voice-enabled VR environment, perhaps in the form of a procedurally-generated sandbox where practically any element could be changed at the user’s command, “doesn’t seem very far away.”

Marks imagines a future where voice input technology is set free in VR, limited only by the user’s imagination. He describes a possible virtual environment that is partly procedural, but also contains finely-crafted areas created by development teams, where users would spend most of their time.

“That’s the kind of thing that will involve probably multiple groups and multiple companies even to get all the content that you would want to have happen, but that’s what I think the vision of VR is in the future. That’s why I see it as the holodeck. I just put it on and I can make my world anything I want right now”, he says.

With apps like Virtual Desktop and even Oculus Home, it is already possible to use voice commands to launch VR software from within a PC headset, and there are several interpretations of holodeck-like launch environments available or in development. But Marks is imagining a time when machine learning has taken significant steps beyond where it is today, allowing users to spawn anything from a vast library, or seamlessly interact with virtual characters, with nothing more than a voice command.
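To make the gap concrete, here is a minimal, purely illustrative sketch of the simplest version of that idea: a speech-to-text transcript is matched against a content library, with the match standing in for an asset being spawned into the scene. Every name in it (Asset, AssetLibrary, handle_voice_command) is hypothetical rather than any real PlayStation or VR API, and anything beyond this kind of rigid keyword matching is exactly where the machine learning advances Marks describes would have to come in.

```python
# Toy sketch only: route a speech-to-text transcript to a content library and
# "spawn" the matched asset. All names here are hypothetical, not a real VR API.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Asset:
    name: str
    category: str


class AssetLibrary:
    """Stand-in for the 'vast library' a voice command might draw from."""

    def __init__(self, assets: List[Asset]):
        self._assets = {a.name: a for a in assets}

    def find(self, keyword: str) -> List[Asset]:
        # Simple substring match against asset names.
        return [a for a in self._assets.values() if keyword in a.name]


def handle_voice_command(transcript: str, library: AssetLibrary) -> Optional[Asset]:
    """Naive keyword matching: 'spawn a <thing>' -> look <thing> up and place it."""
    words = transcript.lower().split()
    if "spawn" in words and words[-1] != "spawn":
        keyword = words[-1]  # crude assumption: the requested object is the last word
        matches = library.find(keyword)
        if matches:
            # A real engine would instantiate the asset in the scene here.
            print(f"Placing '{matches[0].name}' in the scene")
            return matches[0]
    print("Sorry, I didn't understand that")
    return None


if __name__ == "__main__":
    library = AssetLibrary([Asset("castle", "building"), Asset("dragon", "creature")])
    handle_voice_command("spawn a dragon", library)
```

A keyword matcher like this is roughly where today’s command-style assistants sit; open-ended requests (“make the castle bigger and set it at sunset”) need the kind of language understanding Marks is betting on.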

Google, which recently claimed to have the most accurate speech recognition, announced during last week’s I/O 2017 conference (an event heavily focused on machine learning) that its collective AI efforts now sit under Google.ai. Its natural language processing is at the cutting edge of voice technology, but developers are only beginning to explore the complexities and nuances of voice user interface design, as described in James Giangola’s presentation. There are many hurdles to overcome before we can have meaningful, frictionless conversations with our virtual assistants that go beyond a limited set of commands.

Asked why there aren’t VR versions of the most popular games, such as League of Legends or Overwatch, Marks offers a few reasons, suggesting that the number of available players and the budget determine the type of game that can be made, and that sometimes a VR version simply doesn’t make sense without effectively making two different games. He points to Resident Evil 7 (2017), whose VR mode is currently exclusive to PSVR, as a good example of a game that works on both screen and headset.

“When the game can do it I think it’s a great thing for them to do, because they can take advantage of the huge installed base of non-VR players too”, he says. “But I think once the installed base of VR gets big enough then obviously we won’t have that issue. You can just make an amazingly deep long game that’s super high production value… It just won’t be exactly the same game.”

Referring to Star Trek: Bridge Crew, which launches at the end of the month, Marks talks about the importance of social interaction in VR and, in particular, the feeling of ‘co-presence’, and how it will improve as the number of VR users grows, bringing greater incentive to share a virtual space with others. But artificial characters will always have a role to play, and expectations for believable interaction with NPCs are higher in VR games. To highlight co-presence using AI, Magic Lab has a ‘believable characters’ demo in which you interact with robots in a playroom using natural gestures and body language.
