Preview: ILM Uses ‘Star Wars’ Assets to Show Potential of Google’s ‘Seurat’ VR Rendering Technology


Google’s newly announced Seurat rendering tech purportedly makes use of ‘surface light-fields’ to turn high-quality CGI film assets into detailed virtual environments that can run on mobile VR hardware. The company gave Seurat to ILMxLab, the immersive entertainment division of Industrial Light and Magic, to see what they could do with it using assets directly from Star Wars.

Google just announced Seurat this week, a new rendering technology which could be a graphical breakthrough for mobile VR. Here’s what we know about how it works so far:

Google says Seurat makes use of something called surface light-fields, a process which involves taking the original ultra-high-quality assets, defining a viewing area for the player, then sampling possible perspectives within that area to determine everything that could possibly be viewed from inside it. The high-quality assets are then reduced to a significantly smaller number of polygons—few enough that the scene can run on mobile VR hardware—while maintaining the look of the original assets, including perspective-correct specular lighting.
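
To make that process more concrete, here is a minimal conceptual sketch in Python of the sample-then-simplify idea described above: sample candidate head positions inside an authored viewing volume, discard geometry that is never visible from any of them, and bake a per-view color so that view-dependent effects like specular highlights can be approximated at playback. The function names, the visibility test, and the shading stand-ins are illustrative assumptions on my part, not Google's actual Seurat pipeline or API.

```python
# Conceptual sketch of the "sample perspectives, then simplify" idea.
# Not Google's Seurat implementation; visibility and shading are stand-ins.
import itertools
import random

def sample_viewpoints(box_min, box_max, samples_per_axis=4):
    """Regularly sample possible head positions inside the viewing volume."""
    def axis(lo, hi):
        return [lo + (hi - lo) * i / (samples_per_axis - 1)
                for i in range(samples_per_axis)]
    return list(itertools.product(axis(box_min[0], box_max[0]),
                                  axis(box_min[1], box_max[1]),
                                  axis(box_min[2], box_max[2])))

def bake_scene(triangles, viewpoints, is_visible, shade):
    """Keep only geometry seen from some viewpoint; record its look per view.

    `triangles` is the full-quality source geometry; `is_visible(tri, eye)`
    and `shade(tri, eye)` stand in for the renderer's occlusion test and
    shading, which a real pipeline would supply.
    """
    baked = []
    for tri in triangles:
        seen_from = [eye for eye in viewpoints if is_visible(tri, eye)]
        if seen_from:  # geometry never visible from the volume is dropped entirely
            baked.append({
                "triangle": tri,
                "view_colors": {eye: shade(tri, eye) for eye in seen_from},
            })
    return baked

# Toy usage with placeholder visibility/shading functions.
eyes = sample_viewpoints((-1.0, 0.0, -1.0), (1.0, 2.0, 1.0))
scene = ["tri_%d" % i for i in range(10)]
baked = bake_scene(scene, eyes,
                   is_visible=lambda tri, eye: random.random() > 0.3,
                   shade=lambda tri, eye: (0.5, 0.5, 0.5))
print(len(baked), "triangles survive the bake")
```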

As a proof of concept, Google teamed with ILMxLab to show what Seurat could do. In the video above, xLab says they took their cinema-quality CGI renders—those which would normally take a long time to render for each individual frame of final movie output—and ran them through Seurat so they could play back in real time on Google’s mobile VR hardware. You can see the teaser video heading this article.

“When xLab was approached by Google, they said that they could take our ILM renders and make them run in real-time on the VR phone… turns out it’s true,” said Lewey Geselowitz, Senior UX Engineer at ILM.

Star Wars Seurat Preview

I got to see for myself the Star Wars Seurat-rendered experience teased in the video above, running on a prototype version of Google’s standalone Daydream headset.

When I put on the headset I was dropped into the same hangar scene as shown in the video. And while there’s no replacing the true high-quality ray-traced output that comes from the cinematic rendering process (which can take hours for each frame), this was certainly some of the best graphics I’ve ever seen running on mobile VR hardware. In addition to sharp, highly detailed models, the floor had dynamic specular reflections, evoking the same sort of lighting you would expect from some of the best real-time visuals running on high-end PC headsets.

What’s particularly magic about Seurat is that—unlike a simple 360 video render—the scene you’re looking at is truly volumetric, and properly stereoscopic no matter where you look. That means that when you move your head back and forth, you’ll get proper positional tracking and see parallax, just like you’d expect from high-end desktop VR content. And because Google’s standalone headset has inside-out tracking, I was literally able to walk around the scene in a room-scale sized area with a properly viewable area that extended all the way from the floor to above my head.

I’ve seen a number of other light-field approaches running on VR hardware, and typically the actual viewing area is much smaller, often just a small box around your head (and when you exit that area the scene is no longer rendered correctly). That’s mainly for two reasons: the first is that it can take a long time to render large areas, and the second is that large areas create huge file sizes that are difficult to manage and often impractical to distribute.
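
That limited viewing area also implies a simple responsibility for the app at runtime: notice when the tracked head position is leaving the baked volume and fade the scene out before the reconstruction visibly breaks down. The sketch below is an assumption about how an app might handle that boundary, not something Google has described:

```python
# Hypothetical boundary handling for a baked viewing volume (my assumption,
# not part of Seurat): fade toward black as the head nears the volume's edge.
def fade_factor(head_pos, box_min, box_max, margin=0.15):
    """Return 1.0 well inside the volume, falling to 0.0 at (or beyond) its edge."""
    factor = 1.0
    for p, lo, hi in zip(head_pos, box_min, box_max):
        dist_to_edge = min(p - lo, hi - p)  # negative once outside the box
        factor = min(factor, max(0.0, min(1.0, dist_to_edge / margin)))
    return factor

print(fade_factor((0.0, 1.5, 0.0), (-1, 0, -1), (1, 2, 1)))   # 1.0: centered
print(fade_factor((0.95, 1.5, 0.0), (-1, 0, -1), (1, 2, 1)))  # ~0.33: near the edge
```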

Google says that Seurat scenes, on the other hand, result in much smaller file sizes than other light-field techniques: so small that a mobile VR experience with many individual room-scale viewing areas could be distributed at a size similar to that of a typical mobile app.
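
As a rough illustration of what that could mean in practice (the area names, file paths, and sizes below are purely hypothetical, not figures from Google), an experience might ship several individually baked room-scale areas and load only the one the player has teleported into:

```python
# Hypothetical packaging of multiple baked viewing areas; all names and sizes
# are made up for illustration.
from dataclasses import dataclass

@dataclass
class BakedArea:
    name: str
    asset_path: str   # where the baked geometry/textures for this area live
    size_mb: float    # illustrative download size

AREAS = [
    BakedArea("hangar_floor", "areas/hangar_floor.bin", 22.0),
    BakedArea("hangar_catwalk", "areas/hangar_catwalk.bin", 18.5),
    BakedArea("corridor", "areas/corridor.bin", 15.0),
]

def teleport_to(area, loaded):
    """Unload the current area's baked assets and load the destination's."""
    if loaded is not None:
        print(f"unloading {loaded.name}")
    print(f"loading {area.asset_path} ({area.size_mb} MB)")
    return area

current = None
current = teleport_to(AREAS[0], current)
current = teleport_to(AREAS[1], current)
print(f"total package: {sum(a.size_mb for a in AREAS):.1f} MB")
```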

Continued on Page 2: Combining Real-time Elements »



  • Mermado 1936

    What I don’t understand is why this is not possible in a shooter game… does anyone know?

    • J.C.

      By the time a dev lowered the environment visuals enough to allow multiple targets, YOUR weaponry, and all interactions/effects, it probably wouldn’t look much better.

      Probably. I have no doubt someone WILL make a shooter with this tech, so I guess we’ll just have to wait and see.

      • Lucidfeuer

        This. Also, this is essentially the new 3D isometric graphics, like that of the first FPS games.

    • Buddydudeguy

      What isn’t possible? The question doesn’t make sense.

    • kool

      It only renders what the camera can see. You can’t see the other side of the objects or move too far off the axis.

  • Foreign Devil

    As a graphics snob… this is very hopeful for mobile VR. I thought mobile would be relegated to blocky, decade-old graphics.

    • Mei Ling

      There are really two ways: processing power, or funky new paradigm shifts in software engineering.

  • Lucidfeuer

    Still waiting for an actual explanation.

    How is this different from Otoy’s ORBX? If I understand correctly, rather than producing assets from every possible angle, it simply adjusts the angle of the assets and lets texture maps do the light/reflection work?

    I’ve been waiting for something like this mixed with real-time objects, and while I understand how it can work for flat surfaces, I don’t understand how it can work for round/convoluted objects, like the barrels and bridge in the dock scene, unless they’re 3D objects too.

    • yexi

      As I understand it, you need to define a camera path, and all the asset lighting will be precalculated for that particular angle and for other key angles near it.

      Then an algorithm is able to adapt a little (but not a lot), so you can surely walk a little off the path, but not more than a couple of steps… that’s why you can’t make an FPS level with this. Only cinematics/panoramas/…

      In short, it’s like a dynamic 3D 360 video in perfect quality.

      • Lucidfeuer

        Yes it’s like a lightfield stereocube, but they managed to do it in an optimised lightweight way which means you can further extend the zone(s) in which you can move.

        Technically you can indeed choose to work with a whole-room free-roam zone, although that would be useless for a narrative and cinematographic intent.

        The real question is what tools they plan on providing so it doesn’t end up as demoware (a more polite description of vaporware, where only a handful of companies with direct access get to use it for demonstration purposes before it falls into oblivion because nobody else can actually use it).

  • yexi

    In fact, you could still make an unconventional FPS using this.
    For example, you could make some hiding spots and only let the player teleport from one to another, so you can pre-render each spot using this technology.

    Add some enemy waves (Space Pirate, Serious Sam, Holopoint, …), and you have the most beautiful FPS in the world.

  • Um… sounds like they’re baking their lighting and reflections, which is something you can already do in Unreal Engine 4. Actually, it’s something you HAVE to do, as nice reflections won’t work any other way on mobile.

    • Joel Wilkinson

      I could be wrong, but it seems like they’re baking a lot more than just lighting and reflections. They’re baking the whole geometry. Their other article about this here: http://www.roadtovr.com/googles-seurat-surface-light-field-tech-graphical-breakthrough-mobile-vr/ has a gif where it takes a perspective and bakes only the geometry it needs, making the end result look like a facade. That makes it seem like this technique is probably only useful for backdrops.