#1387: Landscape of XR Ethics: A Retrospective Presentation by Kent Bye

This is a 19-minute talk that I gave at Laval Virtual 2023 summarizing the work that I’ve done on XR Ethics over the past ten years. There are around 60 slides in this talk, so you may prefer watching the video version over on YouTube for the full multi-modal experience, while this audio-only podcast version includes the Q&A session at the end. You can also check out the show notes for this episode, which have each of the slides embedded within the full transcript along with all of the linked footnotes, in case you’d like to dig into the full context of any one of these topics.

There’s a lot of ground that I attempt to cover within my allotted 15-20 minute time slot at Laval Virtual 2023, but this should provide a high-level roadmap of how I see the landscape of XR ethics. XR ethical considerations are also an ever-expanding area, so this is far from a complete treatment, but it hopefully covers some of the major issues that I’ve been covering on the Voices of VR podcast over the past decade.

This talk at Laval Virtual was on April 13, 2023, a month after my Featured Session at SXSW on March 12, 2023 about The Ultimate Potential of VR: Promises & Perils. That talk explored both the exalted potentials of XR as well as the more troublesome perils, while this talk focuses on just the perils. They both use a contextual framework that I elaborate on within an upcoming paper titled “Privacy Pitfalls of Contextually-Aware AI: Sensemaking Frameworks for Context and XR Data Qualities,” which was written for the Existing Law and Extended Reality Symposium at the Stanford Cyber Policy Center in January 2023 and will hopefully be published later this year.

So with that, let’s go ahead and dive right in!

[1]

My name is Kent Bye, and I do the Voices of VR podcast. And today, I’m going to be doing a tour of the landscape of XR moral dilemmas and ethical considerations.

And I’m attempting to cover all of the XR ethical and moral dilemmas within the next 15 to 20 minutes [obviously not all of them, but a high-level sampling]. And so it’s pretty ambitious. I do have the slides available with lots of footnotes.

I’ve been doing the Voices of VR podcast since 2014. In that time I’ve recorded over 2,000 interviews and published over 1,200 of them, so roughly two-thirds of what I’ve recorded.

[2]

And in the process of talking to a lot of folks within the XR community, a lot of different ethical and moral dilemmas have naturally come up. So this is a broad overview of the landscape of XR ethical and moral dilemmas. I’ll be diving into each of these, but this is just to give you a bit of a sense of the landscape.

[3]

And it actually takes me back to Laval Virtual in 2019, where I was brought out to brainstorm with a group of folks…

[4]

some of the different ethical and moral dilemmas. We had lots of these post-it notes, and we were struggling with how to start to organize the whole landscape of all these different ethical and moral dilemmas.

[5], [6]

And so at SVVR in 2016, I had given a presentation trying to map out the ultimate potential of virtual reality across all the different domains and industry verticals. And so I asked people at the end of every podcast, “What’s the ultimate potential of VR?” And they’d say, “Well, it’s education. It’s entertainment. It’s being able to connect with friends and family. It’s empathy. It’s doing stuff for your career.” And so this was the start of a cartography of the different domains of human experience…

[7]

which ended up being very helpful for starting to map out these ethical and moral dilemmas into all these different domains or contexts.

[8]

And so at the end of 2019, I did a whole talk on the XR Ethics Manifesto. Throughout that year, I was doing a lot of talks about privacy…

[9]

trying to get a landscape of all these different aspects. That was a half-hour talk where I listed all these different ethical and moral dilemmas. This talk is more of a digested view of some of the big issues that come up from each of these domains.

[10], [11]

And that talk that I gave on the XR Ethics Manifesto was sort of a catalyst for the IEEE Global Initiative on Ethics of Extended Reality. We started with that as a baseline: here are some different contexts, so why not do a deep dive into these different domains, whether it’s privacy, the economic aspects, education, medical, or trolling?

Again, this is sort of a broad overview, and from this point we’ll be diving deeper into each of these sections.

So I wanted to start with the resources, money, and values.

[12]

So first of all, there’s access to the XR technology, where I think there’s already a digital divide. To what degree are the immersive technologies going to continue to expand this digital divide and make it worse? There’s potential to make it better. But whenever you have new emerging technologies, there’s a disproportionate distribution of who has access to those technologies and who’s creating the experiences within them.

[13]

So right now we have kind of an app store model within mobile apps with both Apple and Google. And Meta has taken a similar app store approach, where the platform takes a 30% cut of every app that you sell. That’s something that the Digital Services Act and Digital Markets Act may start to counter within the EU. There’s also Tim Sweeney at Epic doing the whole lawsuit to try to fight this.

[14], [15]

But Meta has actually adopted a hybrid approach, where they’re still taking this 30% cut, but they’re also moving into dimensions of surveillance capitalism, where all sorts of our data may start to be used to psychographically profile us. And as they profile us, to what degree are they going to have a full, robust sense of our likes, our dislikes, who we are, and our identity?

So that sort of leads into this next section of the self, identity, and biometric data.

[16], [17]

And so as we move forward, we’re going to be combining virtual reality technologies with all sorts of different types of neurotechnologies. This is a picture of me with Project Galea from OpenBCI, which has everything from EOG, EEG, EMG, and EDA to temperature sensors. Over the long arc of VR, we’re going to be integrating more and more of this sensor data, and as we do, that data is going to carry more intimate information about what’s happening in our bodies. “Who has access to that data?” is going to be a big question.

[18], [19], [20]

And so VR poses an existential threat to our privacy. Even though earlier today Juan from Meta was saying that there’s a privacy-first approach, there’s actually a very antiquated idea of what privacy is, defining it in terms of identity.

[21], [22]

But there’s this whole other aspect of biometric psychography, a term being defined by Brittan Heller, which covers all of our likes, our dislikes, our preferences. There’s a huge gap where all that biometric psychography data is not being covered at all by any existing law, whether the GDPR or certainly anything in the US. So there’s a huge gap that needs to be closed.

[23]

So you can get a sense from this research from Kröger, looking at eye gaze and eye tracking data, where you can start to make all sorts of different types of inferences, whether that’s your age or biometric identity, but also your cultural background, mental health, personality traits, skills and abilities, and your cognitive load. All sorts of very intimate information about what’s happening in our bodies from eye gaze alone.

[24]

Well, it’s not just going to be eye gaze; it’s going to be all sorts of other information that’s all going to be fused together. So part of what I’ve been doing is trying to understand: what is the landscape of different types of data, and what type of information can you start to extrapolate from it? You can start to look at, say, Active Presence: our behaviors, our intentions, our actions, our movements. Mental and Social Presence: our mental thoughts, cognitive processes, cognitive load, social presence. Emotional Presence: our affective states, emotional sentiment, our facial expressions, our micro-expressions. And then Embodied [and Environmental] Presence: everything from our stress, our arousal, physiological reactions, eye gaze, attention, and body language to muscle fatigue. So this is the complex of all the different types of data that are going to be fused together to psychographically profile us. And like I said, there are no existing laws that address this in any capacity.

[25], [26]

And so, Nita Farahany has written a book called The Battle for Your Brain, where she’s distilled these down to three main human rights: self-determination, freedom of thought, and mental privacy. And mental privacy for her includes both physiological reactions and affective reactions.

[27], [28], [29]

And this is an amazing book that just came out within the last couple of weeks. For the last decade, Nita Farahany has been tracking the evolution of neurotechnologies. It wasn’t until she saw a presentation from Thomas Reardon on CTRL-labs, and then saw CTRL-labs acquired by Meta, that she had the catalyst to write this book. She had been tracking what was happening in the B2B space for neurotechnologies, but it wasn’t until that acquisition by Meta that she saw a pathway for some of these non-invasive neurotechnologies to have a consumer play in the context of virtual and augmented reality. It’s an amazing book, and I highly recommend it. I also did an extended interview with her.

[30], [31]

But she’s proposing what she calls a new human right of “cognitive liberty,” which includes self-determination, freedom of thought, and mental privacy. We need to start to establish this new human right at an international legal level, and have it ripple down to influence and change things like the GDPR and shape future privacy legislation as we move forward.

[32], [33]

There’s also Rafael Yuste and the [Morningside] Group, who are looking at Neuro-Rights. They’re trying to define everything from the right to mental privacy, the right to identity, and the right to agency, to other aspects like the right to fair access to mental augmentation and the right to be protected from algorithmic bias. I think the first three are the real key here. If you protect your mental privacy, you prevent the mapping of your identity; and if someone knows your identity, they can start to nudge you and undermine your agency. This set of rights is very similar to what Nita Farahany is doing with cognitive liberty, but slightly different.

[34], [35]

But at the end of the day, we’re going to need some effort at the human rights level, defining new human rights, that can then filter down into changes to the GDPR and into redefining what biometric data means in the AI Act. And then maybe The Metaverse Initiative from the EU will be something that starts to close some of these gaps that exist right now.

[36]

So going back to identity, there’s both the avatar and the identity that you’re representing. If you’re going to create an application, you should have a diverse selection of avatars for people to represent themselves with.

[37]

Mike Seymour did a great job of covering some of the different threats when it comes to how AI is going to be able to spoof identity, both in the representation of what we look like and in AI-generated voices. So what’s the future of identity? And how do we verify our identity as we move forward in these virtual spaces?

[38]

There’s this whole idea of Snapchat dysmorphia from facial filters, where people see an augmented version of themselves and actually want to have plastic surgery to more closely match this virtual augmentation. So what kinds of body dysmorphia, like this Snapchat dysmorphia, are we going to start to see from these different types of technologies?

So moving on to early education, communication.

[39], [40]

There was some talk from Meta about education, but the problem is that none of their technologies are FERPA compliant, the privacy requirement that these offerings would need to be in alignment with before they can be used in educational contexts. But there are lots of other issues around education that we cover in the IEEE paper.

[41], [42], [43]

There’s also a minimum age of 13, partly because the eyes are still developing, but also because of COPPA compliance within the United States. And it’s a big issue in terms of what type of adult content may be available for folks under 13 in these virtual spaces. There’s no age verification anywhere in VR right now to verify what someone’s actual age is.

Going into home, family, and private property: this also includes the Earth. I don’t have the environmental aspect here, but the previous conversation was certainly digging into how we need to be in right relationship with the Earth.

[44], [45]

But when we look at the home, we look at issues like volumetric privacy. In the United States, there’s the Fourth Amendment, which protects US citizens from unreasonable search and seizure. But there’s also an interpretation of that called the third-party doctrine, which says that whenever you give data to a third party, there’s no reasonable expectation of it remaining private. That means there are certain threats to our Fourth Amendment rights over the privacy of our homes and our volumetric spaces. So what happens to that data? Is it going up to the cloud? And who has access to it?

[46]

Then you have Meta talking about this idea of Contextually-Aware AI, which I think is super problematic because there’s no privacy concept for how it’s going to be integrated or for what happens to that data.

[47], [48], [49]

Helen Nissenbaum has a Contextual Integrity theory of privacy, which says that each context has appropriate flows of information. If you’re talking to your doctor and sharing medical information, that’s appropriate; if you’re talking to your banker and giving financial information, that’s contextually relevant. But what happens when there’s an AI that’s ingesting all of that data and has no idea how to navigate the contextual integrity of this information? How do you prevent it from leaking out? So I think there are a lot of fundamental problems with where this contextually-aware AI is going to go.

[50]

There is an example that Meta has given of this episodic memory of AI, which is “Where did I put grandma’s watch?” If we think about what that means, it means that the AI is tracking all of your movements and knows where everything is in your house. Again, this is sort of a Big Brother, dystopian vision of where they want to take AI.

[51], [52]

Meta has this project called Ego4D, with a set of challenges that try to set out their future from an AI research perspective. You can kind of see where they may want to take this from a product point of view: episodic memory, what happened when, what will I do next, what am I doing and how, who said what and when, and how are we interacting? These are all questions for a contextually-aware AI, a Big Brother that’s watching and trying to answer them. But again, there’s no sense of how privacy is going to be incorporated into any of the answers to these questions.

Okay, so moving on to entertainment, hobbies, and sex.

[53], [54]

We have escapism and addiction. From folks that have been using the Bigscreen Beyond, I’ve heard a lot of people say, “I was just in VR for 10 or 12 hours and I thought it was just a few hours.” So we have this capacity for deep immersion, with people really getting lost in these experiences. What happens when you have the Candy Crush of VR that’s really trying to hack people into spending more and more time in VR because it’s profitable? What are the implications for making sure that people are able to maintain a balance and be in right relationship to the world around them, and aren’t just escaping into these virtual worlds?

[55], [56]

Then you have adult content in a context where there’s no age verification. So you have things like child predators in VRChat who may be trying to prey on minors, and the question of what happens when minors end up in these 18-plus worlds. Again, there’s no verification of how old anyone actually is, and that’s the problem with adult content in virtual spaces.

And moving on to medical health.

[57], [58], [59], [60], [61]

So, as the baseline, if you’re designing a VR experience, you need to make sure that you’re not making people sick. But there are all sorts of other things, in terms of epilepsy or derealization and depersonalization, where when you’re designing these experiences you may be triggering physiological or medical reactions in folks. So there needs to be an awareness of what those triggers are and how to disclose them, if there are flashing lights or whatnot. But also have lots of different locomotion options, so that people have a variety of ways to move through your space without being forced into something that’s going to make them motion sick.

[62], [63], [64]

Skip Rizzo’s doing a lot of amazing work with VR exposure therapy for PTSD. But if there are experiences that can help people to resolve and heal from their PTSD, immersive experiences are also capable of causing PTSD. So how do you do trauma warnings, and be aware of what the spectrum of trauma is, so that you’re not creating trauma in the folks who are seeing your experiences?

All right, other, partnerships.

[65], [66], [67], [68]

There’s been a lot of debate around VR as an empathy machine. I think to a certain degree there are a lot of legitimate uses for building empathy in any medium, whether film, radio, or VR. But there are also potentially problematic aspects: to what degree do the people being focused on have authorship or control over the stories being told about them? And to what degree can you actually empathize with someone who’s a Syrian refugee? Is that something more along the lines of sympathy rather than empathy?

[69], [70], [71]

Virtual harassment and bullying have certainly been a part of these virtual spaces. So there are efforts to create codes of conduct and moderation, along with ways to block people and personal space bubbles from a technological perspective.

[72]

But it’s important to understand that there are these intersectional axes of privilege, domination, and oppression. It’s the folks from marginalized communities, women, people of color, LGBTQIA+ folks, who are on the bottom half here, receiving a lot of the oppression. These are the people who need to be listened to: to what degree are these places safe? Are there proper ways of reporting things? It’s about building a larger infrastructure, not only technologically, but also coming up with a cultural code of conduct and enforcement to make sure that we create safe online spaces.

Okay, so there’s death, collective resources.

[73], [74]

There’s a lot of virtual violence happening in these virtual spaces. There have been long-running debates about how 2D violence within video games doesn’t show any correlation to violence in the real, physical world. But what about virtual spaces where you’re having embodied interactions, where you have these death simulators? To what degree is that impacting people in subtle ethical, moral, or physiological ways?

And then you have philosophy, higher education, law.

[75], [76], [77]

So again, this is human rights, laws, and regulations. To what degree are we going to have a global jurisdiction of the metaverse, where certain laws are beyond any regional jurisdiction? And how do you manage all of those different behaviors as we move forward in the metaverse? I think with the EU and The Metaverse Forum, maybe they’ll start to address that.

[78]

The Stanford Cyber Policy Center had an Existing Law and Extended Reality Symposium, where there was a whole exploration of this, and I did a series of interviews about it. As we think about this, how does the regulatory regime transfer over to what happens in these virtual spaces and in the metaverse?

Okay, so career, government, institutions.

[79]

So these are all the different ways in which you can use these technologies in the workplace. But what happens when your employer wants to start monitoring your focus and your cognitive load through all these new technologies? If your employer is asking to have access to that data to do this kind of neurological micromanaging of your attention from moment to moment, there’s a real transgression of boundaries. And there aren’t a lot of clear regulations for how to manage that as we move forward. This is something that Nita Farahany covers quite extensively in her book, The Battle for Your Brain, which again I recommend checking out to get a lot more detail on how this could go down a dark path.

[80]

You also have governmental mass surveillance, and not only from China and the CCP: ByteDance owns Pico, so what happens to all the data that comes through those headsets? But you also have Meta, where some of that information may end up in the hands of the US government. There was a data transfer agreement between the EU and the US that was disrupted because it couldn’t be ensured that the data wasn’t going to end up in the hands of US intelligence agencies. So there’s a real concern about the degree to which authoritarian governments are going to misuse different types of data to bring about more oppression of their populations.

Okay, so friends, community, collective culture.

[81], [82]

It’s really important to make sure that we have proper diversity, equity, and inclusion, not only in the content that we’re producing, but in who is producing that content and who’s involved in producing the platforms. We need to make sure that we’re not just creating technology for one subsection of cisgender white males, and that there are ways of including the concerns of a diverse range of people and interests as we build not only the technology but also the content.

[83], [84]

Algorithmic bias is certainly a huge issue that was brought up with the Neuro-Rights initiative. Philosophically, there’s a lot of utilitarian thinking sometimes when we create technology, where we think, oh well, this works for 95% of the people. But a lot of times the 5% it doesn’t work for are those same marginalized communities, so you’re amplifying different aspects of systemic racism, sexism, or bias at a systemic scale. To what degree are these immersive technologies going to propagate that type of algorithmic bias? Coded Bias is a great film for getting more information, specifically in the context of facial tracking.

And the last one is accessibility.

[85], [86]

On the one hand, there are going to be a lot of assistive technologies for XR. But there’s also the risk, as we move forward, of not really considering the true concerns of accessibility: how to make these technologies truly accessible for people who may not have the able-bodied capacities of sight or hearing. So we need to make sure, as we move forward, that we’re as accessible as we possibly can be. Again, there’s a whole IEEE paper written on this, but there’s still a lot of work to be done in this area.

So that was a whirlwind tour of all the ethical and moral dilemmas. I did a talk at South by Southwest on the ultimate potential of VR, covering more of the exalted potentials, but also the perils of each of these domains.

And so again, you can check out my slides [here] if you want to get more information. On the Voices of VR podcast, I feature lots of different series on these topics, and the footnotes in the slides link to podcast episodes as well as other resources where you can dig in further. So with that, thanks again for listening.


Rough Transcript

Note: See above for the full transcript. The Q&A and takeaways are listed below.

[00:20:54.493] Question 1: Thanks, Kent. We have a few minutes for questions, so it's perfect. You have interviewed more than 1,000 people on Voices of VR. I have a stupid question: are you still optimistic?

[00:21:12.418] Kent Bye: Well, I feel like this is a dual question about all the potentials and the perils. My episode 1,000 was a three-hour exploration of this, and my talk at South by Southwest was more in depth on all the potentials. But I think there are some perils where things can be solved technologically or culturally, and there's also some stuff that will only be solved through law and regulation, and that specifically is privacy. To think that we're going to self-regulate on privacy is naive. I think we need regulation that really addresses what Nita Farahany is calling for, which is this right to cognitive liberty, which is going to have all these human rights that then propagate out and impact the GDPR, impact the AI Act, and hopefully at some point lead to a US federal privacy law that comes up with some sort of approach that's similar to the GDPR, but in the US context. So I do think that we need regulation for that. We're not going to solve it by self-regulation, because history has shown that that's not really been effective. So that's the biggest dilemma. Until we have proper regulation, I'm not going to fully sleep well at night, because I feel like we're sleepwalking into dystopia until we have that.

[00:22:15.839] Moderator: So do you have any question? Oh, here.

[00:22:22.922] Question 2: Thank you for the great talk. I wanted to ask about this dream of an interoperable metaverse, where you can jump from Horizons to [VRChat] and have everything in one place. Once that dream is true, how do we achieve a regulation that would have to be global and not just bound to one country? What would be a good way to do that?

[00:22:54.821] Kent Bye: Yeah, this is part of the open questions that we were exploring at the Stanford Cyber Policy Center symposium on existing law and extended reality, trying to explore what the landscape is. The bottom line was that most of the existing intellectual property law and other laws apply, but there may be street crimes, as Eugene Volokh talked about: what is a street crime, and how is that prosecuted? He called it the Bangladesh problem, but I prefer calling it the global jurisdiction problem. How do you make sure that we have global jurisdiction when there are transgressions causing harm to the point that law enforcement needs to step in? What happens in these virtual spaces? In terms of interoperability, there are layers: being in right relationship with the earth, first of all, but then there's the context of the culture, and there are a lot of cultural dynamics, so the first layer is the social norms between these platforms. Then you have the economic layer, then the legal layer, and then the technological architecture. There are certain aspects you can really solve with the technological architecture, but some things will only be solved at the level of the law, like the GDPR and privacy. Even with those different aspects in place in this metaverse, you still need regulatory regimes making sure that, for each of these individual entities, even if they're decentralized, there's accountability for what happens to that data, and how is something like the GDPR going to enforce that? Or, like I said, the GDPR needs to be expanded to have a new definition of biometric data for privacy reasons. Most of the interoperable stuff, when we talk about the metaverse, is at the technological layer: what are the open source protocols that are going to intermediate between these platforms? But the whole economic layer, I think, is the big open question. What's going to be the business model driving it? If it's going to be surveillance capitalism, we're going to be in trouble. I'm optimistic about what Tim Sweeney is doing at Epic, which is to take the profits from Fortnite and give 40% of that to the creators. Maybe that's a pseudo first start of something like a universal basic income for content creators, which would get us away from the more pernicious business models that I think are at the root of most of the ethical and moral dilemmas: what happens to our data, and how are people trying to track us and model us and potentially manipulate and control us and undermine our rights to freedom of thought, self-determination, and mental privacy? So yeah, it's a complicated complex of things, but there are many different vectors that go from the cultural layer, with a code of conduct enforced at that layer, to the economic layer, the legal layer, and finally the technological architecture.

[00:25:23.257] Moderator: Thanks again, Kent.

[00:25:30.273] Kent Bye: So that was a talk that I gave at Laval Virtual back on April 13th, 2023. And like I said at the top of this episode, you can check out the show notes for a full transcript as well as each of the different slides, along with all of the different footnotes. You can go into all the previous coverage that I've done on each of these topics, because each of the slides usually links to at least one or two other podcast episodes, plus other references if you want to dig into more information. So I highly recommend checking out the show notes of this podcast to dig into much more detail. I've had a number of different people reaching out to me around XR ethics, and this is probably the best condensed summary that I've given so far, linking off to all the various work that I've done on this topic. You can also dig into much more around the XR Ethics Manifesto, or go back and look at the 14-hour series that I did on the IEEE Global Initiative on the Ethics of Extended Reality, as well as the entire series on accessibility that I did. And yeah, there's lots of other coverage that I've done on privacy in XR and all these other ethical considerations. Be sure to go check out this post to get the full links to other information. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
