#1118: Getting 4000 Users on a Single Virtual World Instance with RP1’s Network Architecture

I got a demo of RP1's single shard with 4,000 users in mid-July, and then chatted with three team members in August to unpack their journey towards creating a scalable network architecture for the Metaverse. Their demo was built on WebXR, but the underlying network architecture could be used with other game engines as well. There are real limitations on the client side and VR hardware as to how many of those 4,000 users can be rendered in close proximity, but having a single instance with hundreds or thousands of people could start to open up new use cases for cultivating virtual spaces that recreate some of the social dynamics of tech conferences, large-scale concerts, or perhaps persistent Burning Man playa communities that focus on serendipitous collisions and real-time hangouts. I unpack RP1's design process and potential implications of large-scale shards for the Metaverse with CEO Sean Mann, Chief Architect Dean Abramson, and Chief Client Architect Yin-Chien Yeap (aka Yinch).

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. On today's episode, we're going to be talking about RP1, which is from Metaversal. So this is a network architecture solution to be able to host 4,000 different users on the same shard. Usually, there's somewhere between 15 or 20 or maybe up to 80 people on the same instance or shard on a virtual platform. But in their solution, they're able to bring 4,000 people together. So I was a little bit skeptical, honestly, when I first heard about this, because it was so far beyond what anybody else is doing. And so I actually had a chance to do the demo, and they're able, on the back end, to do a mix of all the people together in terms of the audio. There were some limitations in terms of the hardware side. So even though there were 4,000 people on the server, you can't render every one of those 4,000, just because there's limitations on what my hardware can even render out. But they're able to create a single instance where, if you were to walk around, you could still interact with all these different people on the single instance. So I wanted to get a little bit more of a background and context as to their journey into creating RP1. Also, they've created some interesting WebXR integrations. This is actually a demo that's based on WebXR. And so apparently it's agnostic and can be used on other engines like Unreal and Unity. But the specific demo that they're giving was on WebXR. And so I was able to get a URL and go to this link and have this whole experience of kind of walking through a cityscape and hearing all the audio mixed together. It's really quite compelling. And I think it actually opens up the possibilities for what types of experiences are going to be possible when it comes to either tech conferences or events, or maybe kind of an ongoing Burning Man type of vibe where you have a persistent room where there's lots and lots of people that can go by, creating this city-like metaverse experience. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Sean, Dean, and Yinch happened on Tuesday, August 9th, 2022. So with that, let's go ahead and dive right in.

[00:02:10.488] Sean Mann: I'm Sean Mann. I'm the CEO and co-founder of RP1. I'm super excited to have an amazing team solving, I think, super hard problems and really creating the future iteration of the internet. Obviously, I'm in charge of really trying to bring this product and technology to the market to really share with the community, and I'm super excited to be here on your podcast. Thanks for having us, Kent.

[00:02:32.467] Dean Abramson: Hey Kent, I'm Dean Abramson. I'm the chief architect of Metaversal, and we are building RP1, which is a platform hopefully designed to solve the scalability problem of the Metaverse. I've been focused on scalability for the last 20 years, and it's my mission right now in life to fix this problem.

[00:02:55.170] Yin-Chien Yeap: Hi, I'm Yin-Chien Yeap. I'm the Chief Client Architect for RP1. I'm excited to be working with the RP1 technology because I think it solves a number of critical problems in developing shared spaces. I have been a game developer for 20-odd years, and I can see that many of the problems that need solving are not being undertaken by the bigger players, and to have the technology that RP1 has be the underpinning of a new way of doing things is really appealing to me as a developer.

[00:03:24.962] Kent Bye: And maybe you can each give a bit more context as to your background and your journey into this space.

[00:03:30.407] Dean Abramson: Yeah, I'll start with that, because I think our journey starts about two decades ago when I started working for a gaming company. It was actually casino gaming, not the shoot-em-up gaming that most people are familiar with; I was the chief architect of Full Tilt Poker. And back then we had to solve the issue of scalability, getting a little bit more than a hundred thousand people into one shardless environment. So that's where the journey in the software starts. I actually left that company in 2006 and continued to think about the problem of scalability, and poker went through a little bit of a rough point in the 2011, 2012 area. I started to pick up the problem again and created a new platform, and I was trying to build a platform that was 10 times more efficient than the one I'd created before. And I missed the mark by a little bit. I actually created a platform that ended up being about 200 to 500 times more efficient. And it wasn't until the pandemic that I actually put on my first VR headset. It was an eye-opening moment for me. I don't come from a gaming background, but I immediately saw the application of the technology that I developed in the casino gaming space for the gaming or the metaverse space that VR had to offer. And so I set about taking the technology that we had, which was more or less designed for a specific vertical, and ported it over into the realm of the metaverse, which requires significantly more real time and a lot more throughput. And so we're part of the way through the journey, and we've got some really wonderful technology that we're demoing right now, and we've got a little ways to go. So I'll leave it there for now. Well, the next phase is, as we were growing this technology, I met up with a longtime friend of mine, Sean Mann, and I'll have him tell you how the journey continues from here.

[00:05:32.874] Sean Mann: Yeah, you know, it's funny. Going back even before working, I've always been a gamer, right? I loved actually escaping the real world and being able to be, you know, a different character or meet new people in the virtual space. Everything from Ultima Online to... I was actually a Street Fighter champion, if you can imagine, playing against some of the best people in the world, which a lot of people don't know about me. And I just really appreciate gaming in many ways. I think it's the great equalizer for people on this planet. You can't really pick your socioeconomic status, you know, where you're born, the location on this planet, your family, but you can pick your character. You can pick who you want to be. And I think that's where you see so many passionate people in the gaming space, especially in VR and especially with the term metaverse, right? Because it's a new start potentially for lots of people on this planet. I've been in consumer electronics and technology for my entire career. I've had a keen eye for unique technologies and the people that create them. And I've been a part of a lot of really neat companies, from, you know, cell phone products I put into 80,000 storefronts around the world to medical companies that were kind of revolutionizing, you know, fall prevention and things within hospital spaces. And I actually had a detour into the exercise world, where we created a revolutionary technique that kind of reverses aging and injury extremely fast, working with some of the biggest athletes in the world. And when I tied in with Dean, you know, I got to learn a lot about the things that he built and his story. And I said, this needs to be out there. I mean, this is a technology that can revolutionize not only scalability, but the ability to deploy things and really solve the metaverse in general, right? I mean, I think there's a lot of missing pieces to getting us there. And when we really kind of had these long discussions, I said, we need to share this with the community. It's not about us building it ourselves, but about us shouting that we have a piece that can help, and really working with the community to build the future of the internet. And so we joined forces, and obviously to be here on your podcast and just share it with the community is a big milestone for us, right? It's neat to be able to come here together and kind of get into more detail on what we're building and how we can help the community going forward.

[00:07:32.791] Yin-Chien Yeap: Yinch, go ahead? Okay. Hi. I've been programming for some decades now. I started programming as a child, and I always loved the things that computers can make. A few months after I played my first video games, I said, I can do better than this. And I haven't looked back. I used to work in a multimedia and video games company, making games for the PlayStation and the Nintendo 64, so you can tell how long ago that was. After I was working there, I discovered that I had a real drive to see the technology used in video games being applied to non-video-game areas. I saw that the 3D graphics were so compelling, so engaging, so good at telling a story. And so I started my own company, about 20 years ago now, to bring video game technologies into non-game areas. And so I've been working in the software studio, solving various 3D graphics problems for large companies like Samsung and Ford Motor Company and Mazda, applying techniques that I'd learned in video games to solve enterprise and commercial type problems. And then more recently, I got interested in virtual reality when the Vive came out. When the Vive came out and I saw what the technology could do, I was utterly, utterly entranced. I was mesmerized by it. And once again, I saw it and I said, oh, I think I can do better. I'd like to make for this space. I'd like to create for this space. I could see there were a lot of games out there, but I also saw that there were use cases for this kind of immersive technology. And so I've done VR for medical companies. I've done VR for flight training companies. And as I worked in VR, I became more and more interested in accessibility. When the Quest came out, I saw that it had reached consumer pricing levels, and my commercial clients became interested. So we started developing for standalone headsets, moving away from tethered headsets. And we also started embracing the use of WebXR as the deployment technology, because I feel that it overcomes lots of the ecosystem problems: native apps have certain barriers to entry, and with WebXR, basically you're in charge of your own ship. And so I'm a huge, huge proponent of WebXR, and I'm a huge, huge proponent of trying to make technologies as accessible as possible to as many people as possible for as little as possible.

[00:09:50.124] Kent Bye: Nice. Yeah. And I remember running into you, I think, Sean, at a Clubhouse meeting that we had back at maybe near the beginning of the pandemic, or maybe the beginning of 2021, where you had said that you had this technology to be able to scale up to thousands of people in the same instance. I guess just to set some context: when you go into a virtual world, usually there's a shard that limits out anywhere between, you know... for Meta's Horizon Worlds, you get sometimes eight to 15, and then VRChat can have 20, 30, sometimes up to 50 or 80. You have VirBELA, which can get up to like 500 people, but usually they're doing stuff like turning people into 2D cutouts, so for people that are super close, you have a more limited amount. So the idea is that there's a limit as to how many people you can have in the same virtual space, where you could kind of run into people and chat. And one of the things that has happened, I think, in a lot of these virtual spaces is that they're capped off at that 30 to 50 to 80 people mark, which means that it's really difficult to have these larger-scale events, or to create a sense of vastness in a virtual space that allows you to run into people in a more serendipitous way. So I'd love to hear where you kind of picked up on this problem of the limits of how many people were able to fit into some of these virtual worlds, and then where you started to apply some of the technology architectures that Dean had created, to be able to translate that from something that is a game state, like a casino, where maybe, if I'm imagining it, there's cards on a table and it's pretty static. It's a lot different than, say, having a real-time environment with people and avatars that are needing to update, and voice chat as well. And so, yeah, maybe you could just kind of pick up the thread from there for how you started to address this issue of trying to get more people into the same virtual space.

[00:11:37.175] Sean Mann: Yeah, I think it's kind of an interesting problem, because in gaming in general, I don't think the industry was asking, how can we get a massive amount of people into a shardless architecture? Because at some point, you know, if a hundred people are playing a game, that was great for the experience, right? Solve for the experience, not just for show. So it's like, oh great, we have lots of people. And I think the first thing that people go to when you talk about, oh, we can put 4,000 people on a single server, is like, oh great, I can do a concert. But I think that's a very small percentage of what shardless scale can actually handle. I think that the best thought process is to look at, let's say, applications like Twitter or Instagram or Facebook. If you can imagine, I think when Fortnite did their concert, they had 10 million people, but they basically could only put 100 people in a single shard. And 100 people sounds good. Like, how many people do I really want to watch a concert with? But as you alluded to, the idea of presence is super powerful, right? When you go in real life to a concert, feeling that energy is a part of the environment, a part of the experience. But I think the idea of shardless is far bigger than that. There's the idea of even just the social graph, right? The idea that if you have an experience, you want to be able to build your status and your connections and be able to be connected to anyone on the planet. And so I like to take a second and imagine if we were to shard some of the world's biggest social applications, like Twitter. Imagine if you could only tweet out to, let's say, 100 people, and then you had to go in and out and tweet to the next 100 people, and so on and so on. It wouldn't be what Twitter is today, right? And I think in 3D spaces, when you're talking about the future iteration of the internet, it's not just having a silo of 50 to 100 people, or even 20 people. It's a matter of getting everyone connected in a system where I can join any experience, similar to the way I do today in the real world. And so I think when we learned about this technology and started looking at the industry, where everyone was kind of capped, I don't think people were trying to solve that problem until now, right? And that gives us a huge head start, where Dean's been thinking about this for 20-plus years, not necessarily in VR, but obviously in gaming architecture. And he quickly understood that this is a math problem, right? Pushing real-time events in a network server layer really has nothing to do with the game itself; it's actually about how you disseminate and replicate that information between multiple servers and clients. And in fact, you know, Dean, I'd be happy to let you jump in a little bit more on what your ideas were as you saw the technology in VR and how you translated what you were working on into the industry.

[00:14:04.912] Dean Abramson: Yeah, sure. When you think about a game (I tend to go back to poker because it's where I came from, but it's the same for any other experience, like the VRChats or the Horizons), the experience of the game that you're playing has very, very, very little to do with the overall compute time of the server, of the system. So in the poker world, I don't know exactly what the number is, but less than 0.01% of the compute time, if I had to estimate, is spent playing poker. The rest of the problem is all networking. So when you're looking at putting a metaverse together, or any type of system with scalability where a lot of people are being connected, your overall problem is going to be a networking problem. And so even in the poker world, it's not uncommon to see systems that have 100,000 users having solutions that require 1,000 computers or 2,000 computers. It's not because poker is complicated. It's because the networking is complicated, or I should say the networking has been over-complicated. Even in my initial attempts, we had a lot of computers, not thousands, but we had, I'd say, dozens or hundreds of computers in our systems, which are now being replaced by less than one computer. It's because we solved the networking problem. So when you are looking at a system like VRChat, with all the limitations... you mentioned some of the systems have limitations of 8 or 15 or 20 people; some of them can get to 50 or 100. Part of that could be the client side: the amount of avatars, the way they've chosen to present their avatars. There's limitations on the client-side presentation layer, but in many cases it has to do with the amount of fidelity that you're sending. So if you're trying to send hand movements, finger movements, eye movements, mouth movements, audio, depending on how you mix your audio, whether you do it server-side or try to do it using something like WebRTC, there's more bandwidth that's required. Really, if you can solve the networking problem, which we've done, then the application itself is sending a fairly small amount of data, and we're able to get a much larger number of users into an environment. So the demo that we have now has 4,000 users on a single computer. Inherently, our technology was designed to scale across many, many, many computers. We're talking thousands of computers. So we've limited ourselves to one computer for the first demo. For the second demo, we'll actually put a bunch of computers together and have them talking, because the system can naturally handle that. And we should be able to show that we can handle hundreds of thousands of users in a seamless environment.
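To make Dean's point concrete, that per-avatar updates can be tiny once the networking problem is solved, here is a back-of-the-envelope sketch in TypeScript. The packet layout, field sizes, and 20 Hz tick rate are illustrative assumptions, not RP1's actual wire format, which isn't public.

```typescript
// A hypothetical quantized avatar update, illustrative only (not RP1's protocol).
interface PoseUpdate {
  avatarId: number;                // uint16 on the wire: 2 bytes
  x: number; y: number; z: number; // 3 x uint16, centimeter resolution: 6 bytes
  rot: number;                     // "smallest three" packed quaternion: 4 bytes
  flags: number;                   // hand/eye/mouth state bits: 2 bytes
}                                  // roughly 14 bytes per avatar per tick

const BYTES_PER_UPDATE = 14;
const TICK_RATE = 20;        // assumed updates per second
const VISIBLE_AVATARS = 64;  // the demo's default avatar cap

// Downstream bits per second for one client:
const downstreamBps = BYTES_PER_UPDATE * TICK_RATE * VISIBLE_AVATARS * 8;
console.log(`${downstreamBps / 1000} kbit/s`); // ~143 kbit/s for 64 avatars
```

Even at a thousand visible avatars, this rough figure is only a couple of megabits per second downstream, which suggests the hard part is the server-side fan-out across thousands of clients rather than any single client's pipe.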

[00:16:30.074] Kent Bye: Yeah, I know I've been in experiences like Mozilla Hubs as an example of a web-based client with that. And also, anytime I'm using the Quest, there seems to be a limitation on how many audio streams can be rendered by the XR2 chip that's in the Quest 2. That seems to be a hardware limitation that I think is maybe driving some of the worlds... like, Horizon Worlds could probably push more people, but then you start to potentially have breakdowns in being able to actually render out all those things. And so I noticed in the demo that I saw, you were able to get 4,000 people in the same instance, but then I couldn't always see them all at the same time, because then the Quest hardware would kind of start to fall down. So you start to run into some hardware limitations as well. So maybe you could just speak about some of those trade-offs between what you can do on the backend and networking side versus if you're actually in an immersive space. And depending on what hardware you have, if you have better hardware, like if you're PC-based, then you have more capacity to render out more of those avatars and audio streams. But if you're on the Quest 2, then you start to run into some real hardware limitations. So I'd love to hear some of those different trade-offs that you're looking at there.

[00:17:37.301] Dean Abramson: Yeah, let me talk about the server side of that, and then I'm going to hand it off to Yinch, because he's much more in tune with what's going on on the client side and the limitations of the Quest. Our server is capable of keeping track of everyone within the space. You mentioned that you can only see a certain number of people. So depending on the options that you have set at the time, I think when you first come into the demo, by default we limit you to looking at 64 avatars. And our server is actually capable of sending you about a thousand avatars, which gets pretty crowded. Even when there's 400 or 500 avatars within view, it's a very crowded scene, but we can send you a thousand in full fidelity. You know, you get the spatial audio, you get all the hand and eye movements, the full body positioning. But just because the server can send it doesn't mean that the client wants to receive it. So if you're connected with an Oculus that is untethered, the Oculus only has so much processing power. If you're using a VR headset that is tethered, where the bulk of the graphics is being rendered from a computer with a high-end GPU, then we can send you more data. I've tethered my Oculus with Air Link, and I can handle 1,000 users without a problem for quite a while, and it doesn't glitch or hiccup, and I get a pretty good refresh rate. So the amount of bandwidth that we send to the device really depends on what the device can handle. We're capable of throttling that. And then there are limitations in terms of what we can display on the client side. And that boils down to the rendering pipeline. I don't even want to get into that, because that's not my expertise. I'm going to hand it over to Yinch, and he can talk about the types of problems that we have to solve to keep that looking good.
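As a rough illustration of the throttling Dean describes, sending each client only as many avatars as it reports it can handle, here is a minimal interest-management sketch in TypeScript. The names and the brute-force distance sort are assumptions made for clarity, not RP1's implementation; a production server would use spatial partitioning rather than sorting every avatar for every client.

```typescript
// Minimal interest-management sketch: each client receives only the nearest N
// avatars, where N is the budget the client reported it can render.

interface Avatar { id: number; x: number; y: number; z: number; }
interface Client { avatar: Avatar; maxAvatars: number; } // e.g. 64 standalone, 1000 tethered

function visibleSet(client: Client, world: Avatar[]): Avatar[] {
  const { x, y, z } = client.avatar;
  return world
    .filter(a => a.id !== client.avatar.id)
    .map(a => ({ a, d2: (a.x - x) ** 2 + (a.y - y) ** 2 + (a.z - z) ** 2 }))
    .sort((p, q) => p.d2 - q.d2)   // nearest first
    .slice(0, client.maxAvatars)   // throttle to the client's budget
    .map(p => p.a);
}
```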

[00:19:19.508] Yin-Chien Yeap: On the client side, we have a few challenges. Some of them are items of raw hardware power that Dean has mentioned, but there's also legacy thinking in the way that 3D scenes are commonly rendered. Lots of 3D engines at the moment are based on pre-built, pre-compiled packages where all the assets are known. They produce 100-gig games; you install it, and when the level starts, you sit there for three minutes every time you change levels. So lots of the ways in which 3D media is shared at the moment are based on long delays, or large asset packages that require a long time to load and cause delays. But in building the metaverse, we are faced with two challenges. One is trying to draw a lot of things, which is just a hardware challenge. And we are working towards optimizations using LODs and reduced parametric models, to allow us to draw as much or as little depending on how busy the scene is. But a separate problem that isn't talked about much is transport. So even though you can draw it, right, you still have to get it. If you're in a metaverse in WebXR and assets are being streamed to you, even if you could draw it, if you walk into a plaza with 500 or 600 avatars, and each of them is trying to download a 50-megabyte avatar, then transport becomes a problem. So part of the solution that RP1 is working on is to actually make changes in the paradigm of how models are stored, to massively reduce the data footprint of avatars and models, in such a way that you're not ending up having to transport 4K textures every time you encounter an avatar. And our solution isn't simply to say, oh, well, you can have tiny, pixely avatars. We are still trying to deliver a very high quality texture experience, but without the footprint of having to carry a lot of pixels. And the compression things that they're doing with meshes in Draco and with the Basis compressed textures, that helps a lot. You know, you get 4-to-1, 5-to-1. To stream mobile assets at this scale, we need 100-to-1 or 1,000-to-1 improvements in compression. And so we are addressing the problem at quite a fundamental level, and not just assuming that we have to use the assets that exist. We are designing around the problem to try to address what the Metaverse needs. And, I mean, I'm not going to pretend it's not challenging, but I think it means that we are coming up with a solution that addresses the whole chain, rather than saying, oh, here's a streamed metaverse, and every time you move from one city block to the next city block, you've got to wait five minutes to download. That's not the metaverse. You need something that is alive, and as mobile as you want to be. And that does require us to rethink some fundamentals. We are in the early days of doing that, but we are doing that.
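To put rough numbers on the transport problem Yinch describes, here is a quick back-of-the-envelope calculation using the figures quoted in the conversation; the numbers are illustrative, not measurements.

```typescript
// Back-of-the-envelope: why transport, not rendering, breaks first in a dense
// plaza. Figures are the ones quoted in the conversation, used illustratively.
const avatarsInPlaza = 600;
const mbPerAvatar = 50;                       // a typical "dead asset" avatar
const naiveMb = avatarsInPlaza * mbPerAvatar; // 30,000 MB, ~30 GB to fetch

const dracoBasisRatio = 5;  // roughly the 4-to-1 or 5-to-1 Draco/Basis win
const neededRatio = 100;    // the low end of the 100-to-1 to 1,000-to-1 target

console.log(`naive: ${naiveMb} MB`);                         // 30000 MB
console.log(`Draco/Basis: ${naiveMb / dracoBasisRatio} MB`); // 6000 MB, still far too much
console.log(`at 100-to-1: ${naiveMb / neededRatio} MB`);     // 300 MB, plausible to stream
```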

[00:22:02.579] Kent Bye: Yeah, I'd love to hear a little bit more elaboration on the audio limitations as well, because I know, just by looking at it, Mozilla Hubs uses WebRTC on the backend. And I don't know if that's more peer-to-peer, where it's having multiple connections that are coming in, versus sending all the data up to one server and then being delivered down. Because it does seem like I start to reach some hard limits in terms of having too many people in a Mozilla Hubs instance using WebRTC: the audio starts to break up after maybe 15 or 20, or certainly up to 30 people. It degrades if you have a bunch of people trying to talk at the same time. I think of a use case that people mentioned from the early days of VRChat that would crash VRChat, where when everyone starts to sing happy birthday, you have everybody trying to send all of the audio bits at the same time, and that starts to stress the system. And also, when you look at Horizon Worlds, I've noticed that they have a limit on how many people they're including in some of those worlds. And I don't know if that's because they're trying to tune it to some of the limitations of the Quest. So I know that in VRChat, a lot of the PC VR-based worlds have a lot higher capacity for how many people can be in there. And I have noticed that if I'm on a Quest in a high-capacity VRChat world, I start to have audio get dropped out or messed up as well. So I'd love to hear some of your analysis of what you see as some of the limitations of WebRTC, or of the system that you're using, in terms of how you're getting around some of those previous bottlenecks that I've experienced, in terms of not being able to have continuous audio streams all happening at the same time, on the Quest 2 especially.

[00:23:30.835] Dean Abramson: Our fundamental assumption about what we're building is that this is going to be an infinite type of an application, right? There's going to be no bound to the number of users. So 20,000, 10,000, a million people, there's no limit. And so immediately, when you approach a problem from that angle and you look at a technology like WebRTC, we just ruled it out. There's no way in the world that WebRTC was going to be able to handle thousands of people in a space. From what I've read, and Yinch actually has more experience with WebRTC than I do, but from my readings of it, it just wasn't going to work. And so when you look at the problem from an architecture point of view, the only thing that I could see that would work would be doing server-side audio. And so the challenge became getting individual streams of audio from the client to the server, having the server mix them, and then sending one stream back. So as long as the server can do the mixing, you're only looking at uploading one stream and downloading one: the stream going up is mono and the stream coming down is stereo, right? So there's about twice as much data coming back. And we did some initial... I don't want to give away too much of our secret sauce here, but we did some initial measurements of what it would take to encode and decode different types of audio streams. And so there was a little bit of math and a little bit of figuring out what would and would not work in terms of encoding streams. And we did settle on a certain type of compression, but I will say MP3 did not seem like it was going to work, because it was going to stress both the server side and the client side considerably. So we ruled out MP3 as a medium of transmission, just because the encoding and decoding was going to be impossible. So from the server side, right now we are mixing 4,000 audio streams on a single computer in the current demo. That limitation has to do with the fact that we've chosen to do everything on one computer; not just the audio mixing, but everything else in the demo is also being done on one computer. From an audio perspective, we believe we can mix somewhere around 10 times that on the machinery that we have. And we're not talking about expensive machinery. It's actually fairly cheap commodity equipment, and it's actually quite old. I don't think any components in our server are more than three years old, or less than three years old, I should say. So we think we can mix maybe up to 40,000 streams on a single computer. The mixing that we're doing is good. In fact, we get a lot of compliments on the audio. Many people even fixate on it. They come to the demo, we think we're showing them a fantastic visual metaverse experience, and a huge number of people walk away and they're just fixated on the audio. So the audio is good. I mean, we spent a month, two months working on the audio, but we know we've got a long way to go. We're not audio experts. We're hoping to bring in community audio experts, and we know that over time, like everything in the system, the audio will improve. And it's really good spatial audio now. We think it's quasi-realistic, and there's different modes that allow the user to customize how quickly it tapers off and how loud it is. When you're in crowds, it can get quite loud, so you have to kind of reduce the volume a little bit. And there's a lot of fun things that we're going to be able to do with audio in the future.
But it was obvious from the very, very beginning that the only thing that would work was server-side mixing. And I tell you, it wasn't easy. Our first attempt was kind of miserable. We were only able to do like 500 people when we first put the code in there, but I knew we could do better. We're around 4,000 people now, but it's not because the hardware can't mix more; it's because we can't get the networking in and out.
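For a sense of what server-side mixing means in practice, here is a toy sketch in TypeScript: every client uploads one mono frame per tick, and the server computes one personalized stereo frame per listener. The linear rolloff, the pan math, and the cull threshold are stand-ins; RP1's actual mixer, codec choice, and heuristics aren't public.

```typescript
// Toy server-side spatial mix: N mono uplinks in, one stereo downlink per client.
const FRAME = 480; // samples per 10 ms frame at 48 kHz

interface Speaker { x: number; z: number; mono: Float32Array; } // length FRAME

function mixForListener(lx: number, lz: number, speakers: Speaker[]): Float32Array {
  const stereo = new Float32Array(FRAME * 2); // interleaved left/right
  for (const s of speakers) {
    const dx = s.x - lx, dz = s.z - lz;
    const dist = Math.hypot(dx, dz);
    const gain = 1 / (1 + dist);       // crude distance rolloff
    if (gain < 0.001) continue;        // inaudible speakers cost nothing
    const pan = Math.max(-1, Math.min(1, dx / (dist + 1e-6))); // -1 left .. 1 right
    const gl = gain * (1 - pan) * 0.5, gr = gain * (1 + pan) * 0.5;
    for (let i = 0; i < FRAME; i++) {
      stereo[2 * i] += s.mono[i] * gl;
      stereo[2 * i + 1] += s.mono[i] * gr;
    }
  }
  return stereo; // one stereo frame, regardless of how many spoke
}
```

The key property is in the return value: no matter how many speakers contributed, each listener downloads a single stereo frame, which is why the downstream cost stays flat whether two people or a hundred thousand are talking.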

[00:27:13.107] Sean Mann: You know what's interesting also about the audio is we've had a lot of people, including yourself, in the demo, and it's funny, we almost have to tell people that the crowd noise is not fake crowd noise being pumped to you. It's actually the mixing of everyone around you, naturally, the same way as if you were in a bar or on a street in New York City or whatnot. And I think the audio is a huge piece of the level of presence that you alluded to earlier in this talk. It really makes it feel more alive, more real, and gives you the idea that you're amongst other people. And the cool thing about the metaverse is you're going to be able to control those settings. So you may say, I want to be kind of by myself or just hear my friends, or, I do want to be immersed with other people and meet others in different applications. And so I think the audio is super, super important, and we don't have the limitations of other technologies. So you can imagine a 100,000-person concert, and you can literally send 100,000 people's audio all in one stream to an individual, and they could hear that chanting. And I think that's something that's super special about the technology, especially when you're building unique experiences in many different genres.

[00:28:14.648] Dean Abramson: To actually tie together your last two questions: the number of avatars that we can show you is vastly limited by the pipeline of the bandwidth that we can send to the client, as well as the limitations of how much we can update on the client. But the audio is really cool, as Sean alluded to, in that we can actually fill a stadium with 100,000 or a million people. We haven't done it yet; you'll just have to take my assertion for it, and we will prove this out in the future, that we can mix 100,000 people or a million people in real time and get you the final result in audio, because all that information is present on the server side and only needs to be sent back to the client side in a single stereo channel. So whether we're mixing audio for one or two people, or for 100,000 or 200,000 people, you're still getting the same amount of data back. So that's going to be a really, really super neat experience. And I know that in a demo in the future, we're going to have a stadium full of people with someone in the middle who's directing people in the stadium to do a cheer or the wave, and you'll hear that noise go all the way around. That's going to be something spectacular.

[00:29:17.779] Yin-Chien Yeap: From an experience developer's point of view, I am just very captivated by the potential of the audio system that RP1 has, because being able to hear everybody around you gives you immersion, and having people be able to hear you speaking gives you presence. And I've seen lots of people mention in the VR space that everybody keeps concentrating on the graphics, but good audio does 50% of the heavy lifting. And the audio is so natural in RP1. Sometimes we hold our company meetings in the demo, because it's like Zoom or Skype in quality, but you can sense where people are around you. You get spatial presence, which comes free as part of the audio feedback. So from an experience point of view, I think the spatialized audio and the scalability of it is superb. It's like 50% of the battle won just on that side.

[00:30:03.969] Kent Bye: Yeah, one of my understandings, at least of the networking aspect, is that there's certain laws of physics, in terms of the speed of light, that you start to run into. Say you have someone in Australia, and then you're trying to have conversations with them from the States or Europe. You have this latency, the time that it takes to send the signal up to the server and then send it back. So you have certain delays that you have to live with, especially if you have a centralized system and you have people from around the world; then, depending on where the server is, you're going to have different distributions of latency. So I'd love to hear some of the different challenges of that network latency architecture that you have to deal with. And, you know, if there's anything that you can do to overcome that, or if there's just fundamental laws of physics that you have to deal with whenever you have people from around the world that are trying to be in the same shared virtual space.

[00:30:55.050] Dean Abramson: Yeah. So our company right now is a little too small to solve the problem of the speed of light. So we're always going to be facing that problem. If two people are talking from opposite sides of the planet, someone's voice is going to have to make a round trip. But, you know, people talk on the phone, people talk on Zoom. There is acceptable latency. And we've got our system right now to where you can actually have a good conversation. It's very comparable. We're a little bit slower than Verizon, say, for a phone call, or Zoom. Of course, they've had an extra 50 years over us, and they probably have a few hundred thousand more employees than we do. So we're a little bit behind the ball, by let's say 20 milliseconds. Honestly, I think the problem that we have right now is that we're a WebXR demo, which means we're talking through the web browser using Web Audio, and TCP is the only thing available. So all of our audio right now is actually being done with TCP instead of UDP, and it really should be a UDP problem. So a lot of our delays, in audio specifically, are caused by the fact that we're using TCP. That will get better when we spend a little bit more time on the buffers that we have to set up to keep the audio smooth, and when we switch over to UDP. But I think that your question isn't just about audio; as a whole, when you're interacting with people, there will be latencies at various levels. So when you're talking about high-end gaming, where 15 milliseconds or 30 milliseconds becomes a big deal, there will be some more problems that we're going to solve down the road. But in, let's say, the near term to medium term, when we're not dealing exclusively with high-end gaming, where we're dealing with just people communicating on a social level, or experiences where delays are completely unnoticeable if they all happen within, say, 50 milliseconds or 100 milliseconds, our current network does not have a problem globally. There's an interesting way in which our network is set up. We don't have a central server. We do have authoritative servers, but they're not guaranteed to be located in a specific spot. So depending on the size of the company or the application that's driving a particular experience within the metaverse, you could have a small company who only has one server that, let's say, is located in South Korea, and I might be forced to go through that server. And if I'm in Los Angeles, like I am right now, then yes, I'm going to have a round trip when I'm talking to that server, if it's a small experience. A bigger company... let's pretend a company like Epic ported Fortnite into RP1. Now you have the potential for lots of servers around the world. Then you could solve the latency problems for those high-end games by actually doing what they do now, which is making sure, what I believe they do now, I hope they do now, is making sure that people who are playing the games together are playing local, so that when I connect from Los Angeles, I'm going to be put on a server located around Los Angeles with 99 more people who are also located roughly close to Los Angeles. But some experiences are live, and you have people who are showing up to a comedy club: I'm listening from Los Angeles, the comedian is talking in, let's say, the UK, and another listener is listening from Australia. We have three people from around the world.
You can't just say, you know, oh, let's put these people on one server, because they're physically located where they're located, and we're here to share one experience. So you're always going to have to balance your electrons around the planet. You're limited by the speed of light. But again, those things aren't like shooting a gun, where you have to target something and you're limited, you know, where 30 milliseconds or 15 milliseconds will make a difference.
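Since the demo's audio currently rides over TCP, the buffering Dean mentions usually takes the form of a jitter buffer on the client: hold a few frames before playback so that TCP's bursty delivery doesn't cause gaps. Here is a minimal sketch of the idea in TypeScript; the depth constants and drop policy are assumptions for illustration, not RP1's implementation.

```typescript
// Minimal jitter-buffer sketch for audio frames arriving over TCP (e.g. a
// WebSocket): TCP delivers in order but with bursty timing, so the client
// holds a few frames before playback to smooth out delay spikes.
class JitterBuffer {
  private queue: Float32Array[] = [];
  constructor(private targetDepth = 3) {} // ~30 ms of slack at 10 ms frames

  push(frame: Float32Array): void {
    this.queue.push(frame);
    // If a TCP stall caused a burst of late frames, drop the oldest ones
    // rather than letting added latency grow without bound.
    while (this.queue.length > this.targetDepth * 2) this.queue.shift();
  }

  pull(frameLen: number): Float32Array {
    // Underrun: play silence rather than stalling the audio callback.
    return this.queue.shift() ?? new Float32Array(frameLen);
  }
}
```

The trade-off is explicit in the depth constant: a deeper buffer rides out TCP retransmission stalls but adds latency to every conversation, which is part of why the plan is to move to UDP.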

[00:34:32.727] Kent Bye: Yeah, and I guess when I saw the demo, what I was really struck by is that there's both this stuff on the back end with the networking, but there's also some front-end innovations that you've done in terms of the WebXR being able to dynamically load in objects, kind of like Google Maps, where when you get close to a tile, it dynamically pulls in those assets. And so I'd love to hear a little bit more about that architecture on the front end, in terms of how you're able, like we were alluding to earlier, to get away from those 100-gigabyte downloads, so that as you get close to an asset, you're more just-in-time, or in real time, dynamically loading in those objects. Sometimes, as you have the world construct around you, it could start to break certain aspects of the presence, similar to how Google Earth is trying to give a low-poly representation of that architecture and then, as you zoom in, giving higher resolutions of that space, so that you're not completely taken out of the experience, but as you get closer, you can actually see more of the details load in. And you're not wasting a lot of bandwidth and energy having a lot of stuff in the world that you can't see and making it take longer to load. So I'd love to hear a little bit about some of those aspects that you've been solving on the front end.

[00:35:40.507] Yin-Chien Yeap: Sure. Trying to get anything to run fast in WebXR is a challenge. I think it's important to choose the right engine. And I think that WebXR is more powerful than people think. JavaScript is pretty powerful. There's a GPU in the Quest 2 that does a pretty good job. And in terms of trying to make things accessible, we did want to make sure that we had something that would work on the most common headset in the world. You know, we could have tried to say, oh, we'll just work on desktop PCs and for desktop VR. But really, if the metaverse is for everyone, you've got to try to make sure you reach as much of everyone as you can. And so we kind of set ourselves a hard target of targeting a platform that is common. I was faced with a couple of trade-offs. It's helped a lot by Dean's server-side engine, in terms of the map system, where basically the scene culling is done on the server side. So it's not a matter of the client having to hold every single object. I'm just told which objects I need to load and unload, and the server can hold, you know, hundreds, millions, billions of objects, and I just need to know what's around me. I don't need to know all of that; the server takes care of it. So right from the starting point, we already have a much smaller job to handle than we would have done if we didn't have the server side looking after the global picture. Once we have been told what assets we need, then, I mean, we do the usual things that all the other runtime engines do, which is to draw distant things at a lower fidelity, you know, with simpler shaders, with less lighting, while the close-up objects have the maximum amount of frills, you know, effects put on them. The demo that you saw was our earliest best effort at doing that. The engine itself has undergone some upgrades since. One of the challenges that we definitely had was with the traditional way of packaging media. For instance, we talked about LODs, levels of detail, and how if you're close up, you draw LOD 0, which is the highest detail, and then as it's slightly further away, you draw LOD 1, LOD 2, LOD 3. In our system, you have LOD not just for graphics, but also for interaction. So the complexity of the code running an avatar is also reduced as you go further away, and that helps level the load out a little bit. But I was talking earlier about how scene file formats were designed for offline usage. So one of the challenges is that even if you had an avatar with LODs, the traditional way of holding LODs means that the model contains all the sizes. So LOD 0, LOD 1, LOD 2, they're all in the same 3D model. Imagine you're walking along the street, just walking past somebody who's like 50 meters away from you. In the traditional file format model, you'd be forced to load all the LODs just to draw them at LOD 3. So architecturally, lots of the file formats are really not suitable for purpose. So what we are working on, and this is still under development, is to make sure that you only download what you need when you need it. The idea of having things broken up into very granular parcels, so you only download as little as you need, is one of our main attacks for trying to address the scalability. And secondarily, as I was mentioning earlier, we try to reduce the size of the parcels themselves. So we try to reduce both the number of parcels you have to download and the size of those parcels.
And we are looking a lot at parametric models as a source of the meshes. So things are not stored as massive vertex lists; we are trying, as far as possible, to make the download package small and to realize, or render out, the meshes on the client side. Again, this is not a problem that is solved with general game assets, with finished meshes and finished textures. I call them dead assets, because they're frozen. All the information's frozen; you can't do that much with it. We are much more interested in live assets. When things arrive at the client side, they're still semantically rich. We know the purpose of this surface, that it's going to be a road, or it's going to be a head, or it's going to be hands. And rather than saying that this is dead, because we're deferring the rendering of the vertices, we can actually adjust the models to match the capability of the device. So you're not always sending, like, high-resolution assets. If you know that you're receiving on a low-power device, then you produce a different mesh entirely. So it's trying to make sure that the client only does as much work as it has to, because it has to do a few things a lot of times. And that is a special case for metaverse use. And so we've been trying to attack every part of the tool chain to try to get it to work. And we're still working on it, but we have got some strong leads.
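As a toy example of the "live asset" idea, here is a sketch of a parametric model realized client-side: a few semantic parameters travel over the wire, and each device tessellates a mesh at whatever resolution it can afford. The format and the names are hypothetical, purely to illustrate the approach Yinch describes, not RP1's actual asset pipeline.

```typescript
// Hypothetical "live" parametric asset: instead of shipping a frozen vertex
// list, the server sends a few semantic parameters and the client realizes
// the geometry locally.

interface RoadParams { length: number; width: number; } // a few bytes on the wire

function buildRoadMesh(p: RoadParams, segments: number): Float32Array {
  // segments is chosen per device: low on a standalone headset, high on PC VR.
  const verts = new Float32Array((segments + 1) * 2 * 3);
  for (let i = 0; i <= segments; i++) {
    const z = (i / segments) * p.length;
    verts.set([-p.width / 2, 0, z], i * 6);     // left edge vertex
    verts.set([ p.width / 2, 0, z], i * 6 + 3); // right edge vertex
  }
  return verts; // kilobytes of geometry realized from ~8 bytes of parameters
}
```

Under these assumptions, a standalone headset might call buildRoadMesh(params, 8) while a desktop GPU asks for 256 segments, both from the same few bytes of network payload.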

[00:40:17.716] Kent Bye: Yeah, well, I wanted to speak a little bit about my own experience of your demo, because, you know, there's a challenge of trying to stress test it, to see, okay, what's new, what's different. And so just to have 4,000 people in the same instance... I was never able to hear all of them; I mean, maybe when I was up high, I was able to see all of them and hear them. But the thing that was really interesting for me was to say, okay, I want to see if I can just walk up to someone and hear if they're saying something that I can discern, something that's unique coming from them. It's like walking down the street and overhearing different conversations. And I was able to test that enough to convince myself that there were each of these different entities walking around, and as I walked by them, I heard what they were saying. And so as I'm in there, it gives me this feeling of being in more of a cityscape, whereas most of the virtual spaces I've been in so far are more like being inside of a single building, like you're already inside of a private space, but you're not having that experience of walking down the streets and just having lots of people around. And that was what I was starting to feel: this kind of glue between lots of other spaces, where you may go into a private space, because you did have that implemented, where you could go into a space that was more, you know, high fidelity as a private space, but then you're going inside and then to the outside. So when I start to think about the evolution of this as a medium, I think about these four different phases where new technological capabilities are made available. In this case, you have the RP1 technology that has new network innovations that allow new capabilities of having 4,000 people in one shard. And then you have the artists who are saying, okay, now that this capability is possible, here's something that I want to create that wasn't possible before. And then you have the distribution challenge, which means that you have to have the general public or the audience in some fashion be able to see it, which means that you would have to either sell the technology to someone, or at least deploy some of your own instances so that you have the general public coming in and seeing this. At this point, you just have a demo that you're showing privately to different journalists like myself. And then eventually you have it so that the audience is able to see it and then give feedback into that whole loop, back to the artists, so that they have this iterative cycle where you have this loop. And so I guess I'd love to hear a little bit about what type of new experiences you think are going to be made possible, how you're going to start to distribute this out into the world and start to get that feedback from the people, so you can start iterating in terms of fine-tuning the next features that are needed by both the artists and the creators, but also the audience.

[00:42:38.779] Sean Mann: I could take a stab at that. It's a great question. And I think the important thing is maybe to step back a little bit, because when you look at the metaverse as a whole, or the definition of the metaverse, it's the future iteration of the internet, but it's real-time, it's persistent, and it's shardless or seamless, where you can go in and out of different applications, very similar to the browser. You go into a browser and you can access any website on the planet, right? And I think the challenge first is, a lot of people are saying we're limited on the scale of how many individuals can be in a space. So how do you make that huge leap to having millions or hundreds of millions or billions of people in one massive internet infrastructure, right, who are able to go in and out of different, quote unquote, websites, or, with the metaverse, 3D immersive kind of spaces? And I think if you think about the early days of the internet, they built an infrastructure, but a lot of people were like, well, what can we use it for? Right? And in fact, there's actually funny talks about that on the web, going back to like Bill Gates and David Letterman, where it's like, you know, why do we need the internet? Or, you know, what are these things called websites? It was kind of a joke back then, and now the World Wide Web has connected everyone on this planet. And so I think the first thing that we had to do as a company is demonstrate that what everyone's saying is 10 to potentially 20 years away and will be solved by hardware, we're solving through ultra-efficient software, and that we're able to connect a massive amount of people. And so the second phase that we're going into is really to start showing how creators, designers, and developers can actually deploy separate applications, still in a shardless system, right? So our next demo is going to be 100,000 shardless users with different applications that you can go in and out of, all seamlessly, without having to precompile or download a precompiled app. It's all streamed to you, so there's no wait times, if you will. And I think that's the most important thing about what we're demoing. I think the use cases are fairly endless. It's kind of like, what are the use cases for the internet? I think the use cases are all over the map, right? I think the same way that the World Wide Web connected everyone through more static interactions, the metaverse is going to allow us to actually be with each other, both in real life and also in the digital space. And I think a lot of people, when they talk about the metaverse, sometimes they say it's VR, sometimes it's AR. I do believe it's going to connect all of those things, right? So the idea, and I think I've shared this in a personal discussion at Clubhouse, but the idea of connecting... let's say we did a digital twin of an experience like the Louvre Museum. If you couldn't be there in real life, you're going to put your VR headset on and want to travel to that space, and be there either by yourself, with friends, or potentially with the public, no different than you would in real life, and you'd be able to traverse that area and enjoy that experience.
But let's say you're there in real life. You're going to wear your AR glasses, which will also access that digital twin, but now it's going to make your in-real-life experience better by helping you find different exhibits. But the most important thing that the metaverse is going to do is bring those together. So if someone is in VR and I'm there in real life with AR, what if they can just show up in my AR glasses and enjoy that experience with us? And so I think the idea of scale in these technologies, both from an audio and also a visual standpoint, is how can you do that not only in real time, but allow people not to have to wait for those experiences. So if you're playing a game and I'm at a t-shirt shop, you can easily join me and be in that experience with me. I don't think it's necessarily like the entire metaverse is going to be a cityscape, right? It's going to be different experiences, but you need a dynamic architecture that can handle those decisions. And it's up to the developers themselves to decide what type of experience they want to have. Do I want this lively, lots of people, lots of audio, because that's a part of the theater or the environment that I'm delivering? Or do I want to build an environment where you can do other things, where I'd want it to be just my friends, or just by myself, even a single-user use case? But even if I'm by myself, I still want to be connected if I choose to go to other experiences. And that's the idea of shardless scale, not just concert-going, right? And so I think the types of things that we can build are pretty endless, but I think we're trying to demonstrate what's possible and really help guide. At some point, RP1 does not want to build all that content. That's for the world to build. You know, the people that built the protocols for the internet didn't start building all the websites. They may have had to show examples of what those websites could look like, but what websites were in the early days is very different than what they are today. And I think that's kind of our thinking process, as far as how we're going to roadmap this product and the different things that we're able to enable for experiences.

[00:47:02.548] Kent Bye: So one of the things that I think about is the Dunbar number, which is like the 150 people that you can reasonably keep track of as a part of your close community. But there's experiences like the battle royale, where you have a hundred people, where you're not necessarily interested in having all of your friends there. It could just be a bunch of strangers, and it doesn't matter as much. But once you start to move beyond a hundred into, say, 200 or 300 or 400 or a thousand... you know, part of the limitation in something like Fortnite, or even like the Rec Room battle royale, is that if they required a thousand people to start something, then you may not have enough people able to do it. Or if you die, then you have to wait for that to end, which could take a really long time. So you have this sort of sweet spot, where the battle royale of 100 people actually ends up being a good number, so that you can get into something quickly. If you die, you can start another one, but it's not like you're waiting for people to come in. So when I think about something greater than 100 people, I'm thinking about events where you have a bunch of people, like either a conference or a concert. So I'm wondering, when you think about those scales of people, what are the other types of things that you can think of, whether it is going to be those events or concerts, or what are the use cases where you would want to have more than 100 people in the same virtual space at the same time? That's sort of the iterative process of where the market is at right now, where you feel like you're going to start to have people coming in to grow your company, and some of those problems and use cases that you see are going to start to iterate out for where the market is, because not everyone has headsets right now. And I'm just trying to see where the next step is, where you think the most viable markets are going to be, to be able to start to do these larger-scale virtual events.

[00:48:41.246] Dean Abramson: There's certainly a number of use cases that come to mind that would put a large number of people together. A concert or a sporting event could bring people together in the tens of thousands or hundreds of thousands. A festival like a Burning Man or a multi-day concert could bring people together in that large a number. You could have conventions. We recently attended AWE, which had a couple thousand people there. And just enabling a couple thousand people to walk around in one space, even though it's not about getting them all in one room to see one presentation at a time, it would be neat if you could walk around and meet those individual people one at a time at some point during the two or three days of the convention. But we see it very differently, in that part of the appeal of creating a metaverse isn't necessarily even about getting all those people together in one spot at one moment, but in many cases it's about discoverability, about being able to run into people. If they're sharded away from you and put on a different server, well, you'll never even be able to find them and interact with them. But if they're all in one space, and you happen to be in a certain area where people of a similar interest congregate, I might find, if my thing is photography, or someone else's thing is biology, or someone else is interested in the arts, that there are going to be certain pockets within the metaverse where you will be able to go and meet people, who all have the ability to be there at the same time. And then you can discover these new people, in the best way that we can describe it. And you even mentioned a game like Fortnite that has 100 people in it, where that game is very short, right? You have a game that might last for half an hour, or a different game that might last for an hour or two hours, or some games are really short, like five minutes, in which case it doesn't really make a difference whether you make deep friendships within a single game of Fortnite. You're not going to. You're not going to have that opportunity. Most everyone probably wants to kill you. You might meet your friends and try to get in the same game, but the length of the experience, something that only lasts for half an hour, does not require a deep social graph. When you get into longer experiences, if you're in an experience and you expect that experience to last for a day, a week, or a month, you're going to want to develop more relationships with more people, and so your social graph is going to need to expand. If you're going to get into a new experience like, say, a social media site like a Facebook or Instagram or TikTok, or a metaverse, where your expected length of the experience is measured in years, or potentially even your entire lifetime, then your social graph needs to expand. The social graph that works for a game of Fortnite will not work on a Facebook or a TikTok or in the metaverse. You need an infinite social graph. You need to be able to reach everyone. Not that you need to be able to send a tweet out and have every single person on the planet read it, but you won't be able to build up equity with followers, or amass friends or relationships, if you don't at least have the opportunity to be within the same seamless, shardless environment. That just requires the much larger social graph.

[00:51:54.328] Kent Bye: That makes sense. It comes back to, say, Metcalfe's law, which says that a network becomes more valuable as there are more nodes in that network. In this case the nodes could be people; in other cases they could be websites — both pieces of content that are connected to each other, and also people within that communication network who are able to communicate. The more people that are there, the more valuable that network becomes. And I think that's why you see things like WhatsApp and Meta, Facebook, Snap, Twitter — all these social media companies with social graphs that have the economies of scale, the network effects that happen at that scale, where you start to have really robust dynamics. I think the difference in the real-time space is that things don't go as viral as quickly when you're talking in real time, because when you have a media artifact — a photo, an image, a piece of text in a tweet — it can reach a million people in a day, but to reach a million people as an individual takes a lot longer. So I feel like the network effects are a little different in terms of things propagating and going viral in a real-time environment, but there's still that Metcalfe's law dynamic where the network becomes more valuable when more people are there. Which I think is why conferences are so valuable: people have dedicated their time and energy to make themselves available to those serendipitous collisions that could really advance someone's career, in a way that makes it worthwhile to dedicate that time because other people are also dedicating theirs. And when you have people that are really invested in being available in those spaces, that's where I think the real magic happens. And I haven't seen that as much in the virtual conference space — those hallway conversations are difficult to recreate. So I feel like it's an opportunity to start to create virtual spaces that are able to do that. The challenge that I still come back to, though, is the distribution aspect: how you get this technology into the hands of people. Whether it's going to be a strategy of licensing this out to existing networks — say, VRChat or Rec Room or Epic Games or Fortnite or Meta — or if it's an acquisition target at that point, or if you feel like you're going to be the event producer, hosting both the front end and the back end, more of the full package rather than something that you're licensing out to other companies. So I'd love to hear how you plan on actually distributing this out into the world — your strategies as a company for closing that gap into actually getting it into people's hands.
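
To make the Metcalfe's law point concrete, the usual statement is that a network's value scales with the number of possible pairwise connections — this is the standard formula, not anything RP1-specific:

```latex
% Metcalfe's law: value scales with the possible pairwise connections
% among n participants.
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}
% Example: a 100-person shard allows 4,950 possible pairings;
% a 4,000-person shard allows 7,998,000 -- roughly 1,600x more.
```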

[00:54:18.597] Sean Mann: Yeah, I think it's a great question. And I think, you know, you're talking about the commercialization of what we're doing. And I'll tell you right now, RP1's goal is not to build out all that content. That's up to the great event companies that have expertise in that area — they just have limitations in technology and in the costs associated with building those applications. I think what we're doing is building the infrastructure and the ability for companies like that to deploy on a new kind of architecture, right? To support the type of experience that you're alluding to. And so in the beginning, I think, especially with demo two, we're going to show some examples of what that looks like, but we hope that people are going to want to use our system. We're not trying to replicate, you know, Roblox or Fortnite. We hope that one day they may want to build on our platform — not only to get the cost savings that we can bring to the table (the idea that if you had 20,000 servers and now you only need, let's say, 200 servers, obviously your costs to operate that type of business are going to be significantly less), but more importantly, to be a part of the worldwide web, right? Where someone can move from a Fortnite experience and not necessarily take their assets. Because a lot of people talk about the idea that everything has to be completely interoperable, that I have to be able to take my assets. That should be up to the developer. The same way you have the freedom, in building your website, to pick your rules and your limitations on what you want to do — I think that's the freedom of every developer: what they want to build and how they want to share that experience. But if you can build the right standards, that means a browser can allow everyone to find those experiences. And I think everyone is trying to build these kinds of walled gardens. We're trying to do the exact opposite. Not only are they walled gardens, but they're also trying to build in inherent scarcity, right? To drive up costs so they can get a lot of money in the beginning. We're trying to do the opposite. We wanted to build a system that is for everyone — for any creator, developer, or designer — where they can deploy content as easily as a website and share those experiences with anyone on the planet without having to go through an app store or the different hurdles that are prevalent in today's kind of digital world, if you will. And so if we can do what we're doing, I'm hopeful that a lot of people will not only root for us, but be excited to not only get the cost savings, but be able to deploy something that they can share and control their own destiny. Because I don't think it's the job of a company — no different than there's no one saying you can't build a certain website, or dictating how you build your website, or whether you own your website, or whether you own the assets that you put in your website, or how you monetize your website. That's up to each of those developers. So for us, we want to work with the entire community and build a platform that they can use to deliver whatever they want. In fact, we're taking it even a step further. We actually don't even want to build all the tools. Imagine WordPress for the internet.
We believe there are going to be companies that build modules on our system and monetize them or open source them as they see fit, to allow other people to build things. Someone may come up and say, I have a better avatar system, or we have a new way of doing identity, or whole different blockchains and cryptos and payments — you name it. Those should all be modules that anyone on the planet can build, and any developer, creator, or designer can pick and choose the different assets or modules that they want to use to deliver those types of experiences. And so we're trying to create a completely open platform that is very counter to, I think, a lot of the things that we're seeing today. But I'm not knocking a lot of the platforms out there. I think they're all pioneers showing what's possible, right, as we're trying to figure out this metaverse. And as we're trying to join everyone on the planet, I think if we can start jumpstarting the tools and a new way of delivering those assets, we can actually help jumpstart a metaverse that can be for all. And that's kind of our value system within the company.
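
On the server-count comparison above, a back-of-the-envelope version may help — the per-server cost here is a purely hypothetical placeholder; the 100x ratio is the point, not the dollar figures:

```latex
% Hypothetical \$500 per server per month, purely for illustration:
20{,}000 \times \$500 = \$10{,}000{,}000/\text{month}
\qquad\text{vs.}\qquad
200 \times \$500 = \$100{,}000/\text{month}
% The 100x ratio is independent of the assumed per-server price.
```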

[00:57:47.570] Kent Bye: I know, Yinch, we're starting to talk about some of the WebXR components, and I'd love to hear a little bit of your take on the current state of WebXR. Especially in terms of not only the fact that Apple and Safari don't even have full implementations of WebXR, but you also have things like WebGPU on the horizon that are going to be coming out at some point and bringing a lot of efficiency optimizations relative to something like WebGL. So I expect to see maybe even more innovation once you have the ability to tie more directly into GPUs to render out stuff in the browser. So I'd love to hear your assessment of building on a WebXR platform, since you've been a part of building different experiences that have been nominated at Ben Erwin's WebXR Awards. And yeah, I'd love to hear any of your reflections on where WebXR is at now and what your hope is for where it's going in the future.

[00:58:39.646] Yin-Chien Yeap: That's a great and a very big question, Kent, obviously. I think that WebXR is an evolution of, or the convergence of, a number of things that have been in place. Looking back at the 2D web, there are a number of things that we now do with messaging and emailing and a certain amount of document sharing that used to be all offline and native-based, but it has all moved to the web. Things have gone from being native apps to being web apps. And that is a trend that pervades all sorts of digital tasks that people undertake. WebXR is an extension of the idea that something useful like VR should also be deliverable through the same medium of the web — that anything you want to try or do or share should only be one click or one link away. And so I think that the convenience and choice that WebXR delivers makes it an irresistible channel for deployment, in the way that the web is irresistible. I mean, I read somewhere that 90% of Netflix movies are still watched in browsers, not in apps — I was surprised. The browser as a channel for distributing experiences is unsurpassed. The people behind the app stores and the Steam stores would have you believe otherwise — that, oh no, you have to natively install it, you have to download this thing, sit there for hours, clear your hard drive, install it. They want you to believe that. And in fact, making the packages bigger and bigger makes it harder and harder to believe that the web could be the solution. But like Dean was saying about how server code wasn't being done efficiently, and there were so many gains made just by studying the problem from the ground up — I think WebXR is at a moment when it is starting to be able to deliver what users want across the spectrum. And as with all the other parts of the web — I'm not even particularly saying that users know what they want — WebXR lets creators try things out without having to get approved by anybody or pay a fee to anybody. With WebXR, all the benefits of the web are brought to bear in deploying this really, really compelling medium, the medium of immersive reality. I mean, I love it. Every day I find another thing on WebXR that startles me. And it isn't always the case that it's got the best graphics or the best sound or the best design, but the sense of being there, the sense of being immersed in space — it's just magic. It's impossible to convey in a podcast or even in a video, but when you are in a space, there's even the simple joy of picking up a cube and watching it drop. I used to work in Unity, and De-Panther's Unity WebXR export, which is based on the Mozilla default scene, has a desert scene where there's a table in front of you with three cubes and a sphere. I don't know whether you've tried it out. And the fact that you can pick up the cubes and stack them, or arrange them in a pattern — there is a visceral pleasure to manipulating an environment that is created. It's like a dream. It's like a living dream. And so I think WebXR answers the questions of choice, convenience, and content for immersive spaces, just as the web has answered them for 2D apps. And so I think it's inevitable. And I think that it has an awareness problem. A lot of people aren't aware of it. And I think that lots of things I hear said about WebXR are not completely true.
Like that people are only using plain WebGL or A-Frame, or that you can't use multi-threaded engines, or that you can't use WASM. I use an engine called Wonderland Engine, which I know you've come across, and they are about the only engine that has tried to clear all the roadblocks. And I don't know whether you saw it recently, but there was a technical prototype video put out for Red Matter 2. Did you see it? Running on the Quest 2, it had physically-based rendering and lights and shadows and visual effects. For the first time, PC-like graphics were starting to appear on the Quest 2 in standalone, which was really, really exciting. And I know that Wonderland Engine is also bringing out physically-based rendering and all sorts of really nice graphical effects, which closes the gap between native and WebXR. So it's not that the hardware can't do it, and it's not that WebXR can't do it — you have to choose the right engine, really. And Wonderland Engine, because it was built from the ground up to work in WebXR, is not carrying the legacy of trying to maintain backward compatibility with previous systems. And so I'm very bullish on WebXR. When Sean approached me with the idea of the metaverse, I said to him, if it wasn't for WebXR, I don't think I'd be able to do it. With the hurdles of trying to get deployed, the barriers of non-web deployment, I didn't think there was a way. It's still hard, I've got to say. When he first came to me, I didn't know the exact answers, but I thought that the solution would definitely include web-based deployment. And we are seeing improvements that I think will be compelling to users. And that's the most important thing.
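
As a concrete illustration of the "one link away" deployment Yinch describes, here is a minimal sketch using the standard WebXR Device API that ships in browsers today — this is generic web code, not RP1's. (TypeScript users may need WebXR type definitions, e.g. @types/webxr; the `enter-vr` button id is an assumption of the sketch.)

```typescript
// Entering VR from a plain web page: no install, no store, one user gesture.
// Uses only the standard WebXR Device API (navigator.xr).
async function enterVR(): Promise<void> {
  if (!navigator.xr) {
    console.warn('WebXR is not available in this browser');
    return;
  }
  const supported = await navigator.xr.isSessionSupported('immersive-vr');
  if (!supported) {
    console.warn('immersive-vr sessions are not supported on this device');
    return;
  }
  // requestSession must be called from a user gesture (e.g. a button click).
  const session = await navigator.xr.requestSession('immersive-vr', {
    optionalFeatures: ['local-floor', 'hand-tracking'],
  });
  session.addEventListener('end', () => console.log('XR session ended'));
  // From here, a renderer (Three.js, Wonderland, etc.) drives the frame loop
  // via session.requestAnimationFrame(...).
}

document.getElementById('enter-vr')?.addEventListener('click', () => enterVR());
```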

[01:03:23.068] Dean Abramson: Can I add one more thing there? Everything Yinch said was totally accurate — I'm not going to step on his toes on that. WebXR is fantastic, and I think it gets a bad rap, and it certainly has a bright future. I just want to add that our demo is on WebXR right now. We chose that for several very, very good reasons, but ultimately the platform is not specifically tied to WebXR. You would be able to attach to it from any type of device, conceivably using any language. And so we foresee native apps in the future that might be built with different graphics engines. When you program on RP1, just like when you build a website, you're not building a website for Chrome or for Firefox or for Safari. When you deploy an experience within RP1 in the future, you're not tied to a specific output display. So whether the app is ultimately going to be rendered on a mobile phone, an Xbox, a PlayStation, a VR headset, or a desktop computer — Mac, PC, whatever — we envision a wide variety of what we would call metaverse browsers, or RP1 browsers, that will allow you to tie in. You could even build your own standalone app that ties in specifically to your game. So if I wanted to launch my old poker company within the metaverse, inside of RP1, and take advantage of the scalability, you could still tie to it with a pre-compiled app that you could install — natively built in C++, let's say — that runs its own graphics. And so that's all possible in the future. Like Sean said, though, we don't want to build all those. We're going to reach out to the open source community. 100% of our client-side code will be open-sourced and available — not just ultimately, long-term, but even in the short term. And we're hoping to enlist the help of the open source community and independent developers to come in and build those types of browsers with different gaming engines underneath them. So we can have graphics from, say, Godot, or native graphics from a VR headset, or native graphics from something running on a PC client — a wide variety of experiences. And so that's definitely in the future for us.
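
To illustrate the architectural point Dean is making — that the simulation connection can be decoupled from the rendering layer — here is a hypothetical sketch. RP1's wire protocol is not public, so the endpoint, message shape, and every name here (`connectToShard`, `EntityUpdate`, the URL) are invented purely for illustration:

```typescript
// Hypothetical only: not RP1's actual API. The idea being illustrated is
// that a client speaks a renderer-agnostic protocol, and any front end
// (browser, native app, console) can render the same entity stream.
type EntityUpdate = { id: string; x: number; y: number; z: number };

function connectToShard(url: string, onUpdate: (u: EntityUpdate) => void): WebSocket {
  const ws = new WebSocket(url); // transport could equally be WebRTC or raw UDP in a native client
  ws.onmessage = (ev) => {
    // A real system would likely use a compact binary format, not JSON.
    const updates: EntityUpdate[] = JSON.parse(ev.data);
    updates.forEach(onUpdate);
  };
  return ws;
}

// The same updates could drive Three.js in a browser or a C++ engine
// in a precompiled app -- the connection code doesn't care.
const socket = connectToShard('wss://shard.example.com', (u) => {
  console.log(`entity ${u.id} at (${u.x}, ${u.y}, ${u.z})`);
});
```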

[01:05:37.516] Kent Bye: Yeah, that makes sense. I guess, when you look at most of the experiences over the past 10 years — 10 years ago, August of 2012, is when the Oculus Kickstarter came out, and then the DK1 dev kits started to ship in the spring of 2013, so we're nine years into people developing for these immersive media — by far the majority of those experiences have been built in Unity or Unreal Engine. And I feel like there's an element of WebXR that may start to bring in different types of experiences that are more web-native, in terms of pulling in other data from the web to create immersive experiences that weren't even possible before. So as we start to wrap up, I'd love to put the question to each of you: if you think of an experience that you want to have in immersive media or in the metaverse, what types of things would you want to do, in terms of imagining this kind of science-fiction, Snow Crash type of future? Or are there specific types of experiences that you want to have with this immersive media and the metaverse?

[01:06:38.200] Yin-Chien Yeap: So I'm excited about the scalability, because it is good that a seamless population lets you be connected to everybody. But experiences are better when they let you connect to anybody, and I think it's best when they let you connect to somebody. So I'm really excited about how the future metaverse will be one that is owned and created by the people who populate it. That's my real ambition. I want to see a metaverse where everybody who is a user or consumer is also a producer. The parallel I always make is with TikTok, which is huge. When people ask, why do you need so many people in a game? — well, you might not need it in a game, but in terms of social networks, every social network wants to be huge and seamless. And so I'm really excited to see a world where all consumers can be producers, and where the tools for them to envision, to let imagination and creativity become a shareable experience, are just provided. I don't know whether you remember the Apple II, but one of the things about computers in the ancient times was that they came with a compiler. When you bought an Apple II, you could program for it in BASIC straight away. And so I would really be interested in a metaverse where the tools for creation come alongside the tools for experiencing it.

[01:07:53.089] Dean Abramson: For me, I could answer your question two ways. Personally, I think when I put on the VR headset, I just want to be able to explore the things that I know, as a human, I would not have an opportunity to explore. So I want to effectively get off the planet — whether I'm just going to visit a planet that some other really talented person, much more talented than myself, has created, a planet experience closer to the planet in Avatar or something, where I could be out in space, in spaceships, and see other galaxies. There's a whole bunch of movies I could point to. I want to be able to explore, step on the moon, go to the planets, and things like that. That's me personally. But here on Earth, a different way I can answer it, from a humanity standpoint — one of the things I'd like to see from the metaverse is, I'll admit, I live in Los Angeles, which is like a bubble of a bubble of the planet. We have a lot of opportunity here that a lot of the planet doesn't. So I really see the metaverse as being a way in for the huge majority of people on the planet who don't have access to education, or who can't take part in an economy because there's not a good economy in the local environment they live in. I think the metaverse could open that up. There's a good opportunity for people to get access to education they wouldn't have at home, or to a global economy. And I'm hoping that really takes a foothold. Obviously, there are a lot of pitfalls and things that we're going to worry about in the future. I'm not exactly sure how we're going to solve those. As a community — not we as a company, but we as a community — we're going to have to worry about the bad things that are going to go on. But I think the good things are going to outweigh the troubles that we're going to have.

[01:09:33.948] Sean Mann: I think Yinch and Dean took two of my answers, but I like to look at things a little differently sometimes. I think everyone tends to frame the metaverse with the idea that we're going to be Ready Player One, right? We're going to put on this VR headset and escape our planet. I think the metaverse is going to do two things. I think VR is for when you can't be there in real life — you're still able to put on a VR headset and experience something when you didn't have the ability to do so, either financially or just because of where you're located, you can't travel to certain places. And I think the pandemic really showed us that, right? I mean, a lot of these technologies have in certain ways — not the scale, but a lot of these technologies — been around for 20, 30 years, right? And a lot of people ask, what's different? And I think sometimes it's timing, right? It's the pandemic showing that when you take away our ability to connect in real life, people are starving to find other ways to connect. And I think the World Wide Web is great with forums, but it's not necessarily real-time. A lot of audio companies came out, which gave you that real-time connection, if you will. And I think the metaverse takes it to the next level, right? It allows us to actually connect and feel presence in a virtual space. But we also can't forget, I think, the power of what AR is going to be. Where VR is for when you can't be there in real life, I think AR is going to make real life better, right? I don't know how many times — and I'm sure there are many people listening, or you guys — when you travel in real life, wouldn't it be great to be able to share that with people who can't be there with you? I think the ability to have that connection is exciting. The idea that I might be able to travel or be somewhere because of work, and I want my kids to be able to see what I see in real time, or share an experience with me — to me, that's incredibly powerful. And I believe if you look at companies like Magic Leap and other ones that are trying to pioneer AR headsets, the challenge that you have is how you connect millions of people in a single architecture. It doesn't matter if it's VR or AR — you're going to run into the same networking problem. The idea that we're walking around, let's say we have NFTs on our bodies, and in real life I want to see what you're wearing, Kent, from your digital spaces — how do we connect the same way, like a social site? And I think with the RP1 system, we're building something that connects all people no matter what medium wants to connect to it. So it doesn't matter if you're on an AR headset or a VR headset. If I want to walk around in real life and I go near a restaurant, it'll automatically send me information about that restaurant. Or if I go to an airport, imagine it telling me exactly where my gate is — it knows who I am, where I need to go, and it makes that process easier. Or connecting autonomous driving or IoT devices. I mean, it's pretty endless, what you're going to need scale for to really bring the planet together. And for me, that's what I believe the metaverse is. And I truly believe that we're going to have to rethink these technologies.
And that's what makes me excited about what Dean and Yinch are building, because at the end of the day, if you can't connect these people the same way the World Wide Web has connected people on websites, how do we get to the next stage? And I think the onboarding process is super important. When you brought up the web and the power of WebXR — instantly being able to go to an experience without having to download something, without having to do new logins. We don't want to repeat the mistakes of the web, right? We want to make it easy, where you can create an identity and go explore all these activities. And I think people have to say, look, we can't take old technologies and expect them to somehow just work, right? We have to really rethink what that process looks like and how we deliver it. And most importantly — and I don't want to get too long-winded — the sustainability of what we're doing is probably the most important thing. The amount of computing necessary to run real-time applications is absolutely huge. And obviously with global warming — I think the last statistic I saw attributed 3% to 5% of global warming to the data centers around this planet. So what if we can make better choices when we build applications, where we need far less compute? To me, that's probably the most powerful thing that we can deliver from an experiential standpoint.

[01:13:24.845] Kent Bye: Yeah, there's a book by David Chalmers called Reality Plus where he talks about how he sees virtual reality as a real reality. And so I've adopted that, and I actually resist the phrase "in real life" now, because I think as we move forward, the differences between the virtual and the real are going to blur. So I actually prefer the virtual as contrasted with the physical, because I think there are other aspects — like your active presence and your social presence and mental presence and emotional presence — where, from my perspective, there's actually very little difference between virtually mediated experiences and "in real life" or physical experiences. So I tend to prefer the phrasing of physical reality as a contrast to virtual reality, because I feel like it's the taste, the touch, and the smells — the embodied physicality of physical experiences — that are going to be different. So I guess with that, the last question I'd love to ask is what you think the ultimate potential of virtual reality, augmented reality, and XR with the metaverse might be, and what it might be able to enable.

[01:14:28.307] Yin-Chien Yeap: Somebody else start this time.

[01:14:30.989] Dean Abramson: What you just said, I think, is the ultimate of where we're going to go: when you can't distinguish, when you can smell and taste and touch in virtual reality. If you look at where we are now, it's kind of insane. If you were looking at where we are today from five or 10 years ago — oh my God, it's crazy. But looking forward five or 10 or 20 years from now, I think a lot of the things that we think are impossible will actually become reality. And I think there are people who maybe aren't born yet, or who were just born, who really won't know a world without virtual reality, and who will just take it as a natural part of life. Right now it's kind of like a game for us. But I think that's where we're headed.

[01:15:11.160] Yin-Chien Yeap: I think that the ultimate potential of the immersive technologies is a radical reshaping of society. I know everybody says everything is a radical reshaping of society, but I think one of the reasons why it has proven difficult to design for virtual reality is that people think of it as Web 3, Web 4, Web 5, or gaming 3, 4, 5, overlooking the fact that virtual reality is actually reality 2.0. The thing that we are making displaces reality. When it arises, things will be done in virtual reality that are currently done in physical reality, and it will give rise to new superpowers and cause the fall of nations. The only equivalent I can think of might be the printing press — we are at a printing press moment in human civilization. And the fact that it hasn't shaken society to its roots just means it hasn't arrived yet. Because when it arrives, it will become pervasive, like iPads are pervasive. You know, we live in an iPad universe now — everywhere you look, you see iPads. To think that you wouldn't be able to call up such a device is just unthinkable. And so I think virtual reality's ultimate arrival will come when it reshapes society, and we will not miss it, basically. You won't be looking for it. It'll be in your hand, it'll be on your head, the same way that everybody has their phones on them right now, right? And so it will be a change of that scale. It won't replace the smartphone, but it will do certain things that will alter the way we interact with reality and with each other at a distance.

[01:16:40.005] Sean Mann: You know, it's interesting to me, thinking about the metaverse and what technology and the gaming industry have done. I think there's a convergence happening where everyone's thinking gaming, gaming, gaming is the metaverse. And I think that's a very small percentage of the metaverse. I think the social interactions and the different ways we can interact, the same way we do in real life, are going to happen in digital spaces. I actually love the way you put it, that there is a blending of real life and digital spaces where it's going to be one paradigm, right? And you're just going to do what's natural based on the type of experience you're looking for, or what you want to do with friends or work or whatnot. I think the metaverse, in my opinion, is going to be probably the biggest equalizer on the planet. If you look at the World Wide Web today and the way it connected everyone on the planet, it enabled people from small villages with very small economic footprints or reach to take their education, their intelligence, what they want to give to mankind, and share that among many people across the globe. It allowed us to share information at a rate that we've never seen before. I think the power of the web has been ginormous, and the metaverse is going to take it to another level. It's going to allow us to not only connect and share — I believe it's going to allow us to be who we want to be, not just what God gave to us. And that's a pretty strong statement. I mean, I think what's powerful about gaming is that you can pick your character, you can pick who you want to be, and you're not judged. You're judged based on how you talk, how you interact, your gaming skills, the amount of time that you spend in there. And I think the metaverse is going to be very similar to that. It's going to allow everyone to really be who they want to be, without being judged. And don't get me wrong — anytime you put a mass amount of people together, there will still be judging, there will be jokes. I think that's a natural phenomenon of people just being together. But I think you're going to have way more choices than you do in real life, and that makes it exciting. And the last part of the metaverse — which actually gets me, at least, the most passionate — is that I think it gives a way for creators to finally get what's due to them. Because at the end of the day, without creators, without the content, without the experiences, all these things are meaningless, right? People are what make movies, make media, make all these things that we get to share. But there are companies that take that and make it their own, or take the big cut of those percentages. For me, this is a way for people to basically provide a living for their family, and I think we have to enable that process. The metaverse is going to allow that to happen at a level that I think most companies can't comprehend today, because it's going to make it easy to replicate what a company of potentially a thousand people can do — maybe one or two people can do it with the right technologies.
And I think that's what's exciting — not only what RP1 is doing from a scale standpoint, but when you can bring the cost of running these applications down closer to running a website, it makes things equal across the board, versus just the ones that can raise the most capital to deploy the right experience. And to be honest, I think the metaverse is definitely going to be a great equalizer for mankind. And I think that's what's truly exciting about what we're building.

[01:19:46.712] Dean Abramson: Hey Kent, if I may, let me turn the tables for just a moment. You've seen our demo and you said some nice things about it, which I appreciate. But having heard us talk about it for more than an hour now, can you tell us what you're hoping you might see out of this in the future? And also what you're worried you might see? What are the goods and the bads, if you had to give us some advice and say, well, I hope they don't do this — what would you say?

[01:20:13.426] Kent Bye: Well, I look to Simon Wardley's technology diffusion process, where you have a prototype idea, then the custom bespoke enterprise deployment of that idea, then it goes to mass consumer scale, and eventually it becomes a commodity. I think right now, the technology you're talking about is at the enterprise scale, meaning you're going to be getting one-off, small, custom bespoke implementations of this technology. But I hope in the long term that it eventually gets to more of a mass consumerization, and then eventually an open source commodity that anybody can use. Right now, you're still a company in that custom bespoke enterprise phase of evolution. And so, like Sean said, as we move forward we want to have this massive ubiquitous metaverse where everybody can have access. And I think in order to get there, you're going to have to have something like the Linux server architecture, where you don't have to pay for each of the different elements — though there's always going to be something at the bleeding edge of technology and innovation where companies are still providing those services. So what I would like to see is at least a way for you to take this technology, bootstrap it, and have enough ways to move through this enterprise, custom, handcrafted, bespoke phase into the mass consumer phase. Because in the long run, I would love to be able to deploy this as an open source project on my own server. But that's probably, you know, five or 10 or 20 years from now. Right now it's not at the phase where you could open source everything and give it away for free — because then what's your business model at that point? The Amazon server architecture is another instance: if you get acquired by something like Amazon, then I could deploy it, pay for an instance, and get all the benefits. So right now it seems like this type of technology is geared towards companies or enterprises that have revenue streams that could support not only you as a company, but also their business. But eventually I'd love to see more of what we have on the open web. There's the challenge of the economies of scale, of how difficult it is for people to run their own servers — so you end up with these platforms that basically own and control everything, because people don't want to have to deal with all the security and the money. You end up giving away all your content to these social network platforms, but then ceding control and ownership over it. And so you have this challenge of how you actually facilitate viable, greater economies when the economies of scale of these network effects mean it's a lot cheaper to just build on someone else's platform, without the full benefits of owning it, because of the trade-offs you have to go through as an independent creator. So that's at least what I see: it's really promising technology. But at this point, unless you're holding events, or big enough to sustain those different types of large-scale events — conferences or something that happens once a year — that's not something that's persistent over the course of the year. So it's more event-based, rather than the persistent,
citywide scale that you're doing with all this stuff. So that's at least my assessment of where things are right now: people are still trying to figure out what the emerging business models of the metaverse are going to be. The lack of clarity means you kind of have to go where things are now, which is the Fortnite gaming entertainment model, where you're creating value by having lots of people come together, and by buying and selling virtual skins and assets for people to control their virtual representation in those worlds. But to really get beyond that gaming metaphor means there have to be new business models and new industry verticals emerging. And it's so early that we don't exactly know what those new markets are going to be. You have a new technology that may enable new markets, but then breaking into the feedback loop — where companies get the technology, then get the users, then build that loop of actually sustaining a business — I think that's the biggest open question. So that's at least my assessment, my feedback as to where things are at right now.

[01:23:54.364] Dean Abramson: Appreciate that. If we do even half of what we want to do, I don't think you're going to have to wait five or 10 or 20 years for most of that. Hopefully it's a fraction of that — in one or two years, I think we'll be able to do a lot of what you just said. I know we're running out of time; we could spend another hour, so maybe invite us back in the future when we're further on, and we can talk all about that. But owning at least part of the service that you're running, and running it on your own computer — that should be available very, very soon. And if I could just put a little plug out there in the limited time we have left: we do want to involve the open source community. I know that your audience is very technical. If there's anyone listening who wants to be a part of the open source effort that we're going to spearhead with all of our client-side software, we could use help getting that out, just getting it started. Anyone who's got experience in that realm, please contact us, reach out, and we're happy to work with everyone who wants to participate.

[01:24:49.788] Sean Mann: Yeah, I want to second what Dean just mentioned. And I think this is important: we're coming out and sharing a technology that we've solved, right? Some things that are important for the bigger metaverse to actually be created. It's very easy for a company to say, oh, we're just going to go build it all, and like you said, it's going to be ours, and we're going to control the marketplace and control this. But as a team, with all the founders together, we're going to do literally the complete opposite. I think right now, this technology is not just for RP1. It should be for every company and every person and creator on this planet. For us, we want to first establish that these technical challenges are solvable, and then, as Dean alluded to, invite the community to work together. We would love to work with the different gaming engines out there to help decouple what they've done — because they're obviously pre-compiled gaming engines — and really solve what it looks like for non-pre-compiled delivery. How do we get Unity- or Unreal-level graphics in a browser? How do we build identity modules? How do we build ways to navigate these browsers? In fact, how do we even create this 3D browser and a new search engine for 3D content? That's not for us to build. Those are for the experts in those areas who understand what the problem is. And if we can come together and bring these building blocks together, I think we can see a metaverse in our lifetime. If it's all on RP1's shoulders, I think that's not the right way to go about it. We're looking to let everyone create their own marketplaces, their own ways they want to monetize, and create and own their own data. That data is not something RP1 needs to own. It's for the developer and the creator and the designer to own their data and decide how they want to deploy it and use it, and let the marketplace dictate those rules and what consumers demand. And hopefully, with this talk and other things we're doing in the industry, we're extending an olive branch, saying we've got one component, but we'd love to work with the community to really take this to the next level. So hopefully we can get it out there.

[01:26:49.652] Kent Bye: Awesome. Yinch, do you have anything else that's left unsaid that you'd like to say to the broader immersive community?

[01:26:54.496] Yin-Chien Yeap: Oh, I love you guys. One of my first experiences with the open source community was through the WebXR Discord, and I have never encountered a more supportive and positive community almost anywhere. So a shout-out to all the people that I know in WebXR and the broader VR world. I think we are cooperating on solving a very, very significant thing. Sometimes it feels very lonely in our corner of the universe, but I just want to say that everything we want to achieve, we can only do together. Part of what Dean and Sean were saying about sharing the code and sharing the effort — I always think of WebXR specifically, and VR more generally, as being like we're all together on a lifeboat. We're all together on the lifeboat. And if you ask me, why do you want to open source the code that you've spent time and hours and money making? Well, there's no point holding the oars on a lifeboat when everybody needs to be rowing, because if we don't all row, we'll never get to shore. And, you know, they could find our bodies all shriveled, clutching the oars, days later. There's no point in us hoarding stuff or keeping things away from people. We want to share everything that we can share, because there is a journey ahead of us, and we can only do this journey together.

[01:28:13.098] Kent Bye: Well, Dean, Sean, and Yinch, thanks so much for joining me today and unpacking a little bit of what you're building there at RP1. I'm really excited to see the future demos and the future phases. I know on your website you have the phases of evolution kind of mapped out to where you're going. So if folks are interested, for sure, check out the website and get a demo. Is the best way, if people want to see a demo, to reach out to you directly?

[01:28:35.657] Sean Mann: Yeah, we actually have a new website coming out very shortly, but yes, they can definitely reach out for a demo. Obviously I'm on LinkedIn and Twitter, and we have an open channel of communication, so yes, please feel free to reach out. We obviously would love to work with the community. So thanks, Kent. I really appreciate you putting this together.

[01:28:54.506] Kent Bye: Yeah. Thanks so much for joining me today on the podcast.

[01:28:56.447] Yin-Chien Yeap: Appreciate it, Kent. Take care. Thank you so much, Kent.

[01:29:00.112] Kent Bye: So that was Sean Mann, the CEO and co-founder of RP1, as well as Dean Abramson, the chief architect at Metaversal building RP1, and Yin-Chien Yeap, aka Yinch, the chief client architect for RP1. So I have a number of different takeaways from this interview. First of all, I think, overall, they're able to pull off this instance with 4,000 users, to the best of my ability to see. I don't know what exactly they're doing on the back end — I didn't check out all 4,000 individual users, and it's a little difficult for me to really stress test what's actually going on behind the scenes. But I trust that there were these different entities hooked up, and you could hear the audio coming out, and as you walk around, you hear what they're saying. So they have this spatial audio mix that is taking all those different sources and muxing them together into a spatial audio mix that each of the different users is getting. And there are different dials where you can set how many avatars you want to see — you start from 64 and kind of increment on up. When you fly up, you can see 2D cardboard cutouts of those avatars, but there are limitations on what my Quest could handle, and even what a PC, I think, could handle, in terms of actually rendering all of these out. So I expect something like a concert — I want to go to an event and just get a sense that there's a bunch of people there, and hear their laughter, hear their reactions. I think comedy would be a good example of something where the experience actually improves based on what the other people are doing — hearing their reactions makes your own reaction more visceral. Maybe a little less so for, say, a music concert, because you hear people cheering and stuff, but something where you get authentic reactions from the audience, or different cheers, or whatnot. Maybe a sports event eventually. Or tech conferences — I think that's another really compelling use case. Like they mentioned AWE: you go there and there are people that have really focused their time and energy to be present. And something like Burning Man — in the past there's been the virtual Burning Man in AltspaceVR, but if you just have one instance with people popping in and out, I think that would be another compelling use case. And so anybody that's able to cultivate a community for people to maintain their presence there can start to have what I really value at these events: those serendipitous collisions, where you just happen to run into someone you know — you may already be connected to them — and you just kind of catch up. That's the whole value of those different types of tech conferences. And so a technology like this, I think, would help facilitate that, because it's a lot more difficult to do in the context of a virtual world. Of course, then you have the problem of virtual representations and usernames identifying people — I think that's where a lot of the existing apps are really leveraging the building out of the social graph. So I'll be curious to see where this ends up going. They said that they actually have a lot of ambitions for an open source solution, and if that's their plan, that'd be great — if they can figure out how to get a number of different clients and see what the actual use cases are.
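
To make that muxing idea concrete, here is a minimal sketch of a distance-based mix-down. This is purely illustrative — a simple inverse-distance model I'm assuming for the example, not RP1's actual server code, and the cutoff distance is an arbitrary placeholder:

```typescript
// Illustrative only: mix many speakers into one per-listener mono stream,
// weighting each source by distance. A real spatializer would also do
// panning, HRTFs, codecs, etc. -- none of that is modeled here.
interface Source { x: number; y: number; z: number; samples: Float32Array; }

function mixForListener(
  listener: { x: number; y: number; z: number },
  sources: Source[],
  frameLen: number,   // samples per frame; each source must supply >= frameLen samples
  maxDist = 50,       // assumed cutoff: beyond this a source is inaudible
): Float32Array {
  const out = new Float32Array(frameLen);
  for (const s of sources) {
    const d = Math.hypot(s.x - listener.x, s.y - listener.y, s.z - listener.z);
    if (d >= maxDist) continue;     // cull inaudible sources early
    const gain = 1 / (1 + d);       // simple inverse-distance rolloff
    for (let i = 0; i < frameLen; i++) out[i] += s.samples[i] * gain;
  }
  return out;
}
```

The culling step is the interesting part at scale: with thousands of users, most sources fall outside any given listener's radius, so each mix only touches a small neighborhood.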
For me, the thing is taking where the market is at now, seeing what the most compelling use case is, and then having a bunch of people participate in a virtual world — whether in a VR headset or not, even if it's accessed through a 2D browser. I think using WebXR is an advantage in that sense, because you have the ability to have more people present and to engage with them. You don't necessarily need the full embodied interactions with the avatar representation or whatnot; just having the voice chat, I think, is compelling for that particular case of running into people and having a conversation with them. So what the actual use case for this technology is going to be, I think, is yet to be determined. But in the future, I definitely want to see the capacity for these different spaces to have a lot more people hanging out. And I think it's going to create new social dynamics, and potentially even new types of gatherings that haven't existed before, just with the distributed nature of these technologies and being able to connect people up. So, yeah, it'll be exciting to see where this ends up going in the future, peeking into the future of the metaverse and the philosophical implications of it all. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue bringing this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
