#844 XR Ethics: An XR Ethics Manifesto

This is an XR Ethics Manifesto that I presented at Greenlight's XR Strategy Conference on October 18, 2019. It's the culmination of seven focused months of panel discussions, interviews, and talks exploring many of the nuances of ethics and privacy in virtual and augmented reality. It's a distillation of the talk I gave at AWE on the ethical & moral dilemmas of mixed reality, and I also started to draw up more of a prescriptive ethical framework that gives an ideal vision for a number of different contexts. It's impossible to implement a perfect solution, as there are often tradeoffs with other principles, which is what makes privacy engineering such a difficult discipline to work in, especially when the harms operate at a collective level with many other cultural, economic, and legal inputs. This talk will hopefully be the start of a conversation to further refine these ethical principles and expand them as the technologies continue to rapidly evolve and change.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So this is the final episode in my series looking at XR ethics and privacy. This talk that I gave at the XR Strategy Conference was really the culmination of seven months of deep dive panels, discussions, giving different talks at conferences, and lots of different interviews, and I really tried to synthesize it all down. What I did was take my talk from Augmented World Expo and distill it down into like 12 different slides, take another talk on the ultimate potential of XR and condense that down into a number of different slides, and then on top of that, sit down and actually write a manifesto of: here are some guidelines and principles for each of these different contexts that, in an ideal world, we would strive towards. So as I was putting together this presentation, I had put together the outline and the structure, and for six straight hours before this talk, I just sat down and wrote this whole manifesto. It's really a dense talk. It was over 160 slides in just 30 minutes, and those constraints really forced me to get in everything I possibly could into this half hour. So strap in and take a listen. We're going to be covering quite a lot of material; this is really the distillation and synthesis of a lot of these other conversations. At the same time, there are so many other perspectives and points of view that need to be included, and so at the end, I'm going to be unpacking the next steps that I see in terms of where to go from here. So this talk on the XR Ethics Manifesto was given on Friday, October 18th, 2019 at the XR Strategy Conference in San Francisco, California. So with that, let's go ahead and dive right in. Alright, so I'm going to give you a proper manifesto today. This is my XR Ethics Manifesto. This is something that I've been working on for really the last five and a half years, so I'm just going to dive in. I have a lot. My name is Kent Bye. I do the Voices of VR podcast, and for the past five and a half years, I've done well over 1,400 interviews at this point, documenting the evolution of virtual reality. So I'm gonna give you a brief chronology of the evolution of my thinking on an ethical framework, and then I'm gonna dive in and get really polemic about what I think about a vision for ethical immersive technology. So I've asked over a thousand people now: what is the ultimate potential of VR? And what I find is that they answer within one of the domains of human experience. They'll say, well, it'll be about entertainment, or medicine, or connecting to your romantic partners, dealing with death and grieving, higher education, travel, spirituality, your career, friends and community, dealing with people who are isolated or immobile, expressing your identity and embodiment, having new ways of exchanging value with virtual goods and resources, early education, communication, and home and family. So as people start to describe the ultimate potential, they're really talking about a domain of human experience, which I consider like a context. And this has been very helpful when thinking about ethics, because there are many different contexts.
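[Editor's note: as an illustrative aside, not from the talk, one way to make that context framing concrete is a contextual-integrity style check, where each category of data is bound to the contexts it was collected for, and any cross-context flow gets flagged, like essentially-medical heart-rate data flowing into advertising. A toy sketch; all category and context names here are hypothetical.]

```python
# Toy contextual-integrity check: data categories may only flow into
# the contexts they were collected for. All names are hypothetical.
ALLOWED_CONTEXTS = {
    "heart_rate": {"medical"},
    "gaze_target": {"rendering"},          # e.g. foveated rendering only
    "purchase_history": {"financial"},
    "avatar_pose": {"social", "rendering"},
}

def flow_is_permitted(data_category: str, destination_context: str) -> bool:
    """Return True only if this category was collected for this context."""
    return destination_context in ALLOWED_CONTEXTS.get(data_category, set())

# Heart-rate data flowing into an advertising context is exactly the
# kind of cross-context blurring this talk keeps returning to.
assert flow_is_permitted("heart_rate", "medical")
assert not flow_is_permitted("heart_rate", "advertising")
```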
So when I went to Laval Virtual this past year, we had a think tank where we did this open-ended brainstorm, trying to come up with as many of the blackmail scenarios as we could, and we basically just did all these post-it notes, and then we were faced with trying to present our results to the community. So I said, why don't we take this context approach and see how we can start to match that up. So this was an initial take, but really there were a lot of things that were left off. And so I was like, we need something that's a little more robust and more comprehensive. I presented this a number of times, including in Mozilla Hubs, talking to different people in the sensory design meetup, just having these conversations. I've been doing hundreds of interviews over the last five years. Diane Hosfelt wrote a paper called Making Ethical Decisions for the Immersive Web. And then at Augmented World Expo, I gave a keynote talking about the ethical and moral dilemmas of mixed reality, and I just kind of riffed for an hour based upon this structure that I had, but it was just like 70 or 80 different moral dilemmas. And again, that's very overwhelming. So what's the takeaway? This talk is going to try to synthesize a lot of that open brainstorming and give some specific direction. So this is kind of a rehash of the different domains of human experience, fleshing out different elements of that, and I'll be going through it throughout this talk. But I think an important point is that all the things at the bottom are the things I would consider to be much more intimate and connected to your privacy: yourself, your biometric data, your financial transactions, your private communications and who you are communicating with, your home and family, where you live, your entertainment interests, your hobbies, who you're having sex with, your medical health. All of those are issues that have existing ways of talking about personally identifiable information, or are protected in some way. But the thing about XR is that it's kind of mashing it all together. So that's an issue. How do we start to navigate this world where we're able to get what's essentially medical information, but in a financial context, or in ways where whoever has that information may not have a good fiduciary relationship of really thinking about your best interest, but is trying to use that information for their own profit? So that introduced a lot of those ethical dilemmas. Then I went to SIGGRAPH and did a whole panel discussion with a number of different people from the community. And then I went to Amsterdam to talk to different people from the W3C, where privacy and ethics is a huge topic; they recently came out with their ethical web principles. And back in 2016, Thomas Metzinger and Michael Madary did this whole code of ethical conduct, which for me was just kind of scratching the surface, starting to dig into some of the academic research. But all of this has influenced what I'm presenting to you today: an ethical framework for XR. So let's dive in. Alright, so let's start with the self, biometric data, and identity. So this is everything that is your body, it's your representation, so this is kind of like an archetypal image to represent that. It's also your expression of an identity through avatars. It's also consciousness hacking, so ways to modulate your consciousness.
It's sensory substitution, bringing information in through different senses, as well as sensory addition, adding completely new senses to your experience; and it's concepts like self-sovereign identity, which is just the concept that you own your own data and you're able to control what information you're giving out. What I did at AWE is I basically said, okay, now let's look at how everything went wrong, and I'll be trying to summarize the main highlights from that talk. So we have biometric data privacy, modulating human perception, diversity of avatar representation, biometrics as personally identifiable information, cybersickness, the correlation of your behavior to your genetics, and should we have companies harvesting our emotions? (The slide says we should; it should say we should not, so I'll have to fix that.) So: biometric data is ephemeral and context-dependent. Think about doing real-time processing when possible and not trying to hoard biometric data (there's a sketch of this after this section). Think about what could happen if ten years of your biometric data got leaked online and into the wrong hands. What could be discerned from that? It could be enough to kind of reverse-engineer your psyche. We need to offer a diversity of avatar representation that's going to increase the fidelity of identity expression; just think about how many different ways people want to express their identity. It's a powerful ability to modulate perception and consciousness, so do no harm, ensure that it's consensual, and use this wisely. You're basically modulating people's consciousness, so there's a lot of power there. We need more studies in order to really assess the long-term impacts of immersion. There are ways that you can nudge user behavior that could have unintended long-term consequences that we're not really fully aware of yet. We need to enhance the control and power of people in an experience while respecting how we're interdependent. So you don't want to completely maximize for control and power, but also recognize that you are an individual within the larger context of a community. How do you strike that balance? Okay, so moving on to resources, money, and values. Think about all the ways in which you are holding value, but also the safety and security and privacy of your information. We need balance between the yang and yin, ways of doing competition and then cooperation. Right now there's an economic system that's highly focused on competition, walled gardens, and curated content, and so we need to find a balance with open ecosystems and no gatekeepers, a dialectic between the closed and the open. But also, what does it mean to cultivate virtual gift economies? As I go through these, I'm going through the most exalted potentials and then diving into where it can go wrong. So, there's the ethics of surveillance capitalism. Corporate fiduciary responsibility is oriented more towards shareholder profit, which is in conflict with user autonomy and privacy. Are we going to expect people to pay for privacy? Can we still own and sell our own data? There's a bit of an asymmetry within the attention economy, where these companies have more information about us than we know, and what kind of influence does that have? Can we share or loan virtual objects that we have? What is the sense of ownership when we're buying these things? So what are the rights to our virtual ownership?
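[Editor's note: picking up the point above about treating biometric data as ephemeral and processing it in real time rather than hoarding it, here is a minimal sketch under assumed conditions; the class name, the 90 Hz window, and the "engagement score" are all hypothetical. The idea is to derive the coarse signal you actually need in memory and let raw samples fall out of a rolling window without ever touching disk.]

```python
from collections import deque
import statistics

class EphemeralGazeAggregator:
    """Keep only a short rolling window of samples in memory, emit a
    coarse aggregate, and never persist raw biometric data to disk."""

    def __init__(self, window_size: int = 90):   # ~1 second at 90 Hz
        self._window = deque(maxlen=window_size)

    def on_sample(self, pupil_diameter_mm: float) -> None:
        """Raw samples silently age out as the window slides."""
        self._window.append(pupil_diameter_mm)

    def coarse_engagement_score(self) -> float:
        """A single derived number is all that ever leaves this object."""
        if not self._window:
            return 0.0
        return statistics.fmean(self._window)
```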
Related are concepts like differential privacy or homomorphic encryption: if you need to do some sort of processing on data, are there ways to keep it occluded and hidden, and still get information out of it without revealing the raw data? So look at how to implement those types of concepts. Security and privacy are essential, so anybody in this space has to think about architecting for privacy. As we move forward, this is going to become more and more important, especially if there are regulatory changes; it seems like even just yesterday, Senator Ron Wyden introduced a bill to give new teeth to enforcing privacy law when there are FTC consent decree violations. So: you own your own data. You have a right to export, sell, or permanently delete your data. Don't aggregate financial transactions across multiple contexts. I mean, this is happening all the time; you have companies that are tracking what you're buying, your mortgage payments. There's a centralization of all that financial information, and no one should have the right to aggregate all of it. We have a right to exchange value without being tracked. Surveillance capitalism is fundamentally unethical, and it changes what is considered to be a reasonable expectation of privacy because of the third-party doctrine. Every time we give data to a third party, we're saying we no longer expect that data to be private, which weakens our Fourth Amendment protections for it. So as we give away more and more data, like our biometric data, we're essentially saying we're okay with the government tracking all of our emotions and what we're looking at and paying attention to; we don't think it's private, because we're giving it to a third party. So either the third-party doctrine needs to change, or we need deeper regulatory enforcement to protect different aspects of our own privacy. But there are implications to that, because it's an open loop: every time you record information, you are eroding the privacy of the collective, because you're helping lower the legal definition of a reasonable expectation of privacy. So assume that any de-identified data that is tracked can eventually be tied to someone's identity, whether it's correlated with other personally identifiable information or there are just good AI algorithms out there. Just assume it's going to be personally identifiable information and treat it like that. I think that's safer than hoarding it and finding out later that all of a sudden you have PII data that you don't know what to do with. Assume that any data that's tracked can get leaked onto the dark web and into the hands of a tyrannical government, an authoritarian adversary, or a malicious hacker. So the best strategy is often to simply not record it in the first place. We need new business models that don't create power asymmetries that put a fiduciary responsibility to profit over the best interest of the user. There's a digital divide right now that prevents people from having equal access to these technologies, and so it's a bit of a moral imperative to design business models as well as political systems that can serve underrepresented minorities across all classes. What we value changes over time, so any type of algorithmic inference is always going to be incomplete.
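[Editor's note: to give the differential-privacy idea above a concrete shape, here is a toy sketch of the textbook Laplace mechanism for a counting query; the query itself is a made-up example. Smaller epsilon means more noise and stronger privacy; the point is that the raw per-user signal never leaves the aggregation boundary.]

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a counting query with Laplace noise of scale
    sensitivity/epsilon (sensitivity is 1 for a count), the textbook
    epsilon-differentially-private mechanism."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical example: report how many players flinched at a jump
# scare as a noisy aggregate, without exposing any one person's
# physiological reaction.
print(dp_count(412))
```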
Be careful about how algorithmic inference models can create assumptions about us that are completely wrong, and offer a way to either provide feedback or to mitigate the recording of the information in the first place when possible. Alright, so early education and communication. We're into the third domain here, about a quarter of the way through. This is about any way that you're communicating with other people, but also, as you're growing up, the primary early education that we go through, the ways in which all of our principal belief systems are being laid down. So: early education, looking at immersive technologies to help accelerate different aspects of education. Memory palaces to improve our memory. Telepresence, to communicate with people remotely. Data visualization, finding new ways of using abstractions through data and a spatialized experience of it. The spatial language of communicating, looking at how we're able to communicate symbolically in ways that use the full spatial affordances of 360 immersion. And then finally, different aspects of metaphoric communication, sort of like aspects of dream logic; in other words, what is the universal language of VR going to look like? So: brain control interfaces are going to be able to read our thoughts. Where does that data go, who has access to it, and what can you do with it? Negative transference in education, so making sure that we're not harming people as we're educating them. How do you design systems to mitigate against hate speech? Not being able to mute interactions would be a problem, so try to design systems where you can mute interactions. Is our private communication being surveilled? Is it being listened to? There's also the issue of metadata, who we're talking to and when, which in some ways can be more informative; these companies may not be interested in what we're saying, but they're interested in who we're talking to and when. And there are implications of sharing our biometric data with other people, as well as the larger question of consent around accessibility transcripts. If you have people who can't hear, but you have automatic transcripts, and then you're overheard, what are the ways of navigating consent when it comes to automatic transcripts in immersive environments? Okay, so technology should fundamentally enable the freedom of expression, just as a principle. But there's a question of: do we have the right for who we're communicating with and what we say to be private? Does this need to be balanced with security? How does this impact freedom of speech and freedom of assembly? There's an issue of, if you have completely secure communications, does that enable terrorism? So there's this dialectic between freedom and security: you want to ensure that you're not being surveilled all the time, but you also don't want to create systems that are going to empower malicious actors to be even more powerful. This is something all these companies have to balance as they mitigate against those different things, even with Facebook trying to deploy end-to-end encryption. So: should we offer end-to-end encryption when possible, and should we avoid tracking who we're talking to and when, if possible? (A toy sketch of the end-to-end idea follows this section.) Just realize that every time there's communication, there's loss.
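[Editor's note: here is that end-to-end sketch, assuming the Python cryptography package. Two parties agree on a shared key with X25519 and encrypt with an AEAD cipher, so a relay server only ever sees ciphertext. This is a toy illustration of the principle, not a messaging protocol; real messengers like Signal layer double-ratchet key rotation and authentication on top of this kind of exchange.]

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a keypair and exchanges only public keys.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Both sides derive the same shared secret from the other's public key.
shared = alice_private.exchange(bob_private.public_key())
assert shared == bob_private.exchange(alice_private.public_key())

# Stretch the shared secret into a symmetric key.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"xr-chat-demo").derive(shared)

# Encrypt on the sender's device; only ciphertext crosses the wire.
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"meet me in the VR lobby", None)
assert aead.decrypt(nonce, ciphertext) == b"meet me in the VR lobby"
```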
Beyond encryption, we need to design systems that can account for different levels of communicative expression and reception. We're moving away from explicit data entry to implicit data entry, based upon our bodies, how they're moving, and how we're radiating information into the world. So we need standardized ways to train systems on our movement and our voice: as we move from system to system, how does each system adapt to our ability to communicate? And inferring the full context will always be limited. We need to design systems that can handle harassment and trolling, so we need to make prevention tools and education a part of the onboarding process, and also offer code of conduct orientations through interactive training experiences. So moving on to home, family, and private property. This is all the ways of where you're living, not only in physical reality, but also in the virtual worlds that you have. So: volumetric memories, being able to capture memories, but also the privacy implications of scanning your private areas. Who owns an AR space? Is it part of the commons, or do you have to have ownership of private property in order to have the ability to augment things in AR space? We need to be aware of spatial doxing threats, ways that we accidentally leak information out there; the ecological impact of technology; the ability to memory-hole history; how tracking movements can reveal where you or loved ones live; and how to have a balance of private and public spaces. Alright, so it's everybody's responsibility to ensure that technology is being produced and materials sourced in an ecologically sustainable manner. I think everybody's aware of the larger context of the environmental condition; we need to make sure that we're not harming the earth even more or producing things that have slavery involved in the supply chain in some way. We need to design systems that allow people to have private virtual spaces where they can exert their own control, autonomy, and identity. Where someone lives can be very sensitive in a doxing context, so we need to educate the culture about the risks of spatial doxing and how they may be leaking information that could be geolocated, especially information accumulated over time. Immersive media can alter and enrich our sense of the history and culture of a place, so be wary of how the official historical record can be erased or altered by malicious actors through censorship or forgery. Be careful of who can scan and interpret your private spaces, as it can reveal a lot of personal mental health information, and you may also be inadvertently violating the privacy of non-consenting friends and family when you start to share the spaces you're in, because they may be shared spaces. So, moving on to entertainment, hobbies, and sex. This is all the ways that we find creative expression and have fun, but also explore different ways of expressing ourselves through entertainment. So we have immersive storytelling, we have the holodeck vision of where we're going, achieving different flow states, expression of creativity in art. Then there are issues like addiction and the dopamine economy, conscious versus unconscious behavior, whether we are consenting to violent content, the unintended consequences of XR porn, systems that can limit creative expression, accidentally revealing sexual preferences through eye-tracking data, tracking our entertainment options, and mitigating the sharing of explicit sexual content.
So first, simply don't hijack our attention. Don't create systems that are going to pull us in and keep us in without any benefit other than keeping us addicted. Don't use addictive gameplay mechanics to create dependencies that don't empower or enrich someone's life. Use content warnings, but also be aware that some people may not know what they're really consenting to, especially if it's an immersive experience they've never had before; they have no way of taking something back. The web is transparent and remixable, so contribute to the architecture and infrastructure of WebXR and OpenXR to enable creative expression and innovation. Real-time biofeedback and implicit data is a whole new ballgame to navigate: how do we integrate it into interactive experiences without the user's conscious agency? Just recognize that when you start to take information that people are not consciously aware of giving, there are implications to that. XR porn exists and will always exist, so we need to ensure that it's produced in an ethical manner, while acknowledging that we have no idea what the larger societal or ethical implications may be. I think we should just talk about it more in open dialogue; it's one of those things that we don't address, and I'm just starting to think about what some of those ethical issues might be. Eye tracking and other biometric data can be used to determine your sexual preference, which can be a life-threatening situation for some people in the wrong country. So think about the ways in which leaked information could lead to somebody being put in jail or actually losing their life; we need to design for the most marginalized people in the most authoritarian scenarios. And how can people share sexually explicit content on your system? That's a question you should be asking, along with how you mitigate against it. Moving on to medical information and health. So, generally, all the health applications: we have aspects of neuroplasticity that VR taps into, we have the ability to own our own health now, we have the ability to heal trauma. Walter Greenleaf has mapped out an amazing 20 different sectors in which XR medical technology is being used. So I think some principles are: do no harm; but also, do you disclose to the user if you can detect specific medical conditions while they're in a VR/XR experience? Seizure disorder sensitivity (a crude screening sketch follows this section). Are people with specific mental health diagnoses more susceptible to harm? Privacy for telemedicine. Content trigger warnings and mapping out the cartography of PTSD and trauma. Rehabilitation of blocked users. And should HIPAA be regulating biometric data? We have a responsibility to do no harm and to mitigate as many risks as we can through the architecture of code, business models, market dynamics, cultural education, and legal and policy frameworks, all combining together to mitigate harm. Biometric data is medical information that should not blur over into other contexts where those inferences can be used against somebody. It's possible that the storage of biometric data should actually be regulated through something like HIPAA or some new regulatory framework that will need to be implemented; so just don't be evil. Technology should always be empowering people to heal themselves, and designed for people who are on a wide spectrum of physical condition and mental diversity.
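[Editor's note: on the seizure-sensitivity point, here is a crude screening sketch inspired by WCAG 2.3.1 ("Three Flashes or Below Threshold"). It is an illustrative approximation only: a flash is a pair of opposing luminance changes, so three flashes in a second is roughly six large transitions, and a real validator would also weigh flash area and saturated-red flashes.]

```python
def flags_photosensitivity_risk(frame_luminance: list[float],
                                fps: int = 60,
                                delta: float = 0.1) -> bool:
    """Flag content whose relative luminance (0.0-1.0 per frame) swings
    hard more than ~6 times in any one-second window, approximating
    the WCAG three-flashes-per-second limit."""
    transitions = [abs(b - a) >= delta
                   for a, b in zip(frame_luminance, frame_luminance[1:])]
    # Slide a one-second window over the transition sequence.
    for start in range(max(1, len(transitions) - fps + 1)):
        if sum(transitions[start:start + fps]) > 6:
            return True
    return False
```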
Alright, we're now into the second half, and there's a little bit less on these as well. So going into the other: partnerships and law. This is anything that's an other, either a romantic partner or anybody that's an adversary against you, so it's either good or bad, sort of like an opposition. We have virtual dates, we have empathy and truth and reconciliation. So we're going to have to define new boundaries for what intimate relationships are in these immersive technologies. We have to design for safety from harassment, think about informed consent and progressive permissions, and mitigate deepfakes and forged identities. How do we balance the approaches of retributive versus restorative justice? What virtual content is going to be illegal? What are the downsides of the empathy machine and trauma tourism as a kind of tokenized spectacle? The implications of the third-party doctrine, and the dangers of anthropomorphic AI influence as we're dealing with AI agents: if they're embodied and they feel like humans, how can those virtual humans start to manipulate us in different ways? So we need to provide tools for users to block and mute people who are harassing them, and implement personal space bubbles so that users can maintain the autonomy of their own personal space (a minimal sketch follows this section). The authenticity of expressions of identity is gonna get really weird with deepfakes and forged identities. I really don't know what to do with it; it could be that this is more of a cultural issue, but are there potentially going to be ways to allow people to validate their identity in digitally mediated environments? We need to think about how to cultivate entirely new systems of restorative justice within immersive technologies and to cultivate a culture of owning harm done, apologies, and redemption. Restorative justice isn't just something that happens; it has to be part of the culture, with people actually practicing those principles. And WebXR content should work across all the platforms and all the major devices. So, talking about death and collective resources: there are grief rituals on the benefit side, but there's also lots of stuff like who has the rights to your identity after you die, the implications of killing people in VR, whether you have the right to be forgotten, using VR for torture, aspects of experiential warfare, the long-term implications of exiling and permanently banning people, sexual assault in XR, human rights violations, and filtering out violent or terrorist content. So: we have the right to be forgotten. We need to architect systems that allow people to export or permanently delete all traces of their identity. Designers have a responsibility to become trauma-aware and to mitigate against creating or annotating immersive experiences that could trigger other people's trauma. We may discover that immersive violence has way more unintended consequences than on our 2D screens, so we need to consider the unknown ethical implications of the content that we're designing. Living and dying is a process that we all go through, so we need to think about how immersive experiences can help with the grieving process, as well as potentially create new rituals around death and loss. So, philosophy and higher education. There are a lot of philosophical implications of XR, but this also covers higher education, long-distance travel, ways in which we're expanding our minds.
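[Editor's note: here is that personal-space-bubble sketch, under assumed conventions: positions in meters, opacity 0.0-1.0, and a hypothetical 1.2 m default radius. Other avatars fade out as they intrude, so nobody can force close-range presence on you.]

```python
import math

PERSONAL_SPACE_RADIUS_M = 1.2  # hypothetical default; let users tune it

def other_avatar_opacity(my_pos: tuple[float, float, float],
                         other_pos: tuple[float, float, float]) -> float:
    """Fade intruding avatars: invisible at the bubble's center,
    fully visible at (and beyond) its edge."""
    distance = math.dist(my_pos, other_pos)
    return max(0.0, min(1.0, distance / PERSONAL_SPACE_RADIUS_M))

# Someone 0.3 m away renders at 25% opacity; at 1.2 m, fully visible.
assert other_avatar_opacity((0, 0, 0), (0.3, 0, 0)) == 0.25
assert other_avatar_opacity((0, 0, 0), (1.2, 0, 0)) == 1.0
```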
On the philosophy side, we need to break down our academic silos and create more interdisciplinary approaches. Embodied cognition is a huge aspect that's driving a lot of immersive training. We need to be able to comprehend complexity in new ways with VR. And there's the possibility of sci-fi world building that invokes this sense of wonder and awe. So, there are many philosophical implications of XR. What is truth? What is reality? I think we're all facing that in our culture today, and it's going to get even more weird with immersive technology and AI. What are the ethics of things like biohacking? We need a comprehensive ethics framework; this is sort of my first cut, but I think collectively we need to have the conversation and start to come up with some of these principles. How do we mitigate against filter bubbles of reality? We need philosophical concepts for what the new economic business models are going to be. There's a moral responsibility to future-dream both the protopias and the cautionary tales, and just generally embrace paradox, plurality, and process thinking. So we need everybody in the immersive industry to participate in a wider discussion about ethics and the ethical frameworks for XR; I think this conversation is really getting to the point where it needs to be formalized in some fashion. And XR, I think, is a perfect medium to prototype these protopia experiments and to cultivate a culture that's designing for a better world. But there are also dystopic narrative tropes that can either be cautionary tales, like Black Mirror, or just be a blueprint for self-fulfilling prophecies. So realize that there's an ethical implication to the narratives and stories that you're telling; you may actually be architecting the future that we don't want. Alright, so the last three here: we have career, government, and institutions. This is a lot about your own personal career, your reputation out in the world, but also these public governments and how they're interfacing with us. There are ways in which you're using XR in a professional context for spatial design, and virtual screens for productivity are going to be a huge aspect. But we need to mitigate governmental overreach, surveillance, and tyrannical control. There are going to be governments using XR for loyalty tests: are you loyal to the Communist Party? There are conflicts of interest between industry and academia in the ways that they're collaborating. Are we using biometric data for hiring, and what are the ethical implications of that? Augmenting public spaces for good. The implications of remote work. Algorithmic transparency for AI, and the collective right to augment public space. Alright, so you need to be hesitant to rely too heavily on biometric assessment for hiring. Whatever your model is, it's imperfect; it can't handle the complexity of every human being. You could use it as one input, but putting the whole onus of hiring on some sort of biometric assessment is, I think, a scary proposition for where things could go.
We need to read science fiction and cautionary tales like 1984 just to see how far authoritarian governments can misuse biometric data, and to consider the political implications of technology across different contexts. The public should have algorithmic transparency: we have to be able to see how the architecture of the code may be subtly influencing our lives. Public spaces should be augmented with even more open and vibrant public virtual spaces for freedom of expression. And WebXR content should be able to be delivered over the web around the world, independent of country boundaries. Alright: friends, community, and collective culture. These are the ways that we are communicating and gathering online in spaces. Hanging out with friends is obviously a big use case there. But there's the danger of explicit and implicit social scores: if you're starting to put scores on people in terms of their reputation, where does that go? How is it being used outside of that context? How do we support the cultivation of communities, codes of conduct, and the enforcement of those codes of conduct? What happens when you start to violate normative standards, for example, playing Pokemon Go at the Holocaust Museum? How do you handle those types of situations? The implications of the freedom to assemble, principles of diversity and inclusion, preventing algorithmic bias, and the biometric data that you as an individual are radiating out to the community. Alright, so simply don't cause harm to society. It seems simple, but there are a lot of ways in which harm can be contributed to society if you're not architecting against it. Cultures are cultivated, not engineered, and it takes many factors to drive these collective behaviors. Have a code of conduct. Behaviors are modeled both from the top down and from the bottom up, so I encourage everyone to cultivate the culture that they want. Algorithms have bias rooted in human judgments and decisions, so we need as much input and feedback as possible in order to mitigate algorithmic bias. Diversity and inclusion is just a foundational principle and should be included at all levels of what we're doing. And then finally, the hidden, the exiled, and accessibility. This covers people who either don't have access because of their body's ability or their age, or who feel isolated or exiled in some way. So combating isolation is one use case, and there's also accessibility for people who aren't fully abled and how these immersive technologies are going to be able to help with that. Alright, so: the principles of inclusive design and accessibility. Is it possible to be truly anonymous? The dark spatial web. Biometric polygraphs. Is it possible to re-identify de-identified, personally identifiable data? What are the bad inferences that come from an incomplete context? And the utility versus the downsides of exiling or banning people. So: we need to design for all humans regardless of their ability. WebXR content should be considered for how it can be made available to all people regardless of their physical ability or fidelity of input; actually, that should apply to all XR input. And consumers of WebXR content should be able to modify how that content is rendered, just as they are able to on the 2D web.
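[Editor's note: as an illustrative sketch of that last point, user-owned rendering preferences could be applied client-side, the way screen readers and user stylesheets reshape the 2D web. All field names and the scene-config shape here are hypothetical.]

```python
from dataclasses import dataclass

@dataclass
class AccessibilityPrefs:
    """User-owned rendering preferences, applied on the user's device
    rather than baked into the content. All fields are hypothetical."""
    caption_scale: float = 1.0     # enlarge subtitles and captions
    high_contrast: bool = False    # swap in a high-contrast palette
    reduce_motion: bool = False    # damp camera shake and vection cues
    monaural_audio: bool = False   # mix spatial audio down to mono

def apply_prefs(scene_config: dict, prefs: AccessibilityPrefs) -> dict:
    """Return a scene configuration adjusted to the user's needs,
    without the content author having to anticipate every case."""
    out = dict(scene_config)
    out["caption_scale"] = prefs.caption_scale
    if prefs.reduce_motion:
        out["camera_shake"] = 0.0
    if prefs.high_contrast:
        out["palette"] = "high_contrast"
    if prefs.monaural_audio:
        out["audio_channels"] = 1
    return out
```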
So think about how principles like screen readers are going to work in XR, enabling people to modulate the information coming in to suit whatever their sensory input fidelity is. And I guess the conclusion is: that was a lot. I needed to get it all out. This was really just a manifesto, just a rant for me. But I think the bottom line is that this is a conversation that we're all in here together. I gave a talk about this at AWE, and here I've tried to synthesize it in a new way. It's still dense, still very rich, but I think this spatial representation can start to help us navigate it. And if there are things I didn't cover, please come talk to me; I'd love to see how they fit in. I've tried to be as comprehensive as I can, but there's a lot to consider, and my goal is to have an ethical framework that's also an experiential design framework, so it's embedded as you're designing your experiences. I'll be posting the slides, once I correct all the typos, on my Twitter handle, @kentbye, as a pinned tweet, so you'll be able to look at these. And I encourage anybody, if you have any further comments or anything, please come up to me. I want to continue this conversation, and help promote this concept and idea that we need more comprehensive ethical frameworks. So thank you.
So that was a talk that I gave called the XR Ethics Manifesto, given on Friday, October 18, 2019 at the XR Strategy Conference. I have a number of takeaways about this talk and this whole process. The first is that this is certainly a lot of information. And the thing that I noticed in giving this talk is that for each of these individual contexts, it's really the ultimate ideal of what a perfect world would be. But I think it's actually impossible to implement every single aspect of this framework in any given specific instance, because this is the ideal, and we don't live in an ideal world. So there are these different trade-offs, and I feel like that's the challenging part of privacy engineering: you have to trade off different aspects. There's no perfect balance of all these different things; there's only the best-case scenario, ways to compromise on different aspects, and then noticing the different harms that could be caused by those compromises.
And I think that's a big part of trying to lay out all the different potential harms of how things could go wrong. That's where things like Black Mirror come in, fleshing out these dystopic scenarios into a story that gives you a sense of an embodied experience of what could go wrong. That's why I think science fiction like 1984, these cautionary tales, are telling us something: if you continue with this underlying technological architecture, if we choose complete safety and security with no privacy, then we're heading towards a dystopic story where somebody holds all that centralized power and starts to abuse it in different ways. And to a certain extent, we already have that with these huge consolidations of power within these companies, as well as with the government in certain respects, with the bulk collection of surveillance data and a lot of the information that came out of the Edward Snowden leaks, showing that they're using the third-party doctrine to do mass collection on everybody. So I think what we can do as an industry is start with a manifesto like this; it's a start, but it's not the end of the conversation, and there need to be lots of other perspectives. For me, one next step is to look at these existing ethical codes of conduct and start to synthesize them, to see what is unique to each of those different contexts. Another whole branch that could be looked at is virtue ethics: looking at the underlying virtues and principles you're designing for. There's something like the fair information practice principles, which Taylor Beck from Magic Leap talked about in our panel discussion at SIGGRAPH, which lay out eight different principles specifically around security and privacy that guide more specific implementations, alongside the higher-level virtues that you're trying to embody within your technology. It's also what one of our panelists talked about back at our South by Southwest panel on March 10th, where she's trying to implement different virtues of being human-centered, and principles of authenticity, accountability, empowerment, and accessibility, some of these different aspects that can be higher-level virtues. Also a big influence on this talk was Daniel Appelquist and the Technical Architecture Group from the W3C, the TAG group that was doing these ethical web principles. A lot of those web principles I think are really solid, and I think they translate over pretty well to immersive technologies in general. Some of them are very specific to WebXR, because they're looking at immersive technologies on the web, which I think has a little bit different flavor than all immersive technologies independent of being connected to the infrastructure of the open web. And there was a bit of a sneak preview of this specific series that I started with a representative from HTC, Daniel Robbins, who is in an R&D group doing a lot of rapid prototyping and trying to think about some of these ethical implications as he builds out some of these different technologies.
And so I think this type of manifesto could potentially be used to look at different contexts and to think about threat modeling, or how to implement it in specific architectural decisions and the different trade-offs that you're trying to navigate. Because I do think that each of these different contexts has trade-offs with the others, and it may be impossible to optimize completely for all of them perfectly. So that's the challenge: trying to figure out what balance works best for you. That's also just the challenge with ethics in general: there is no perfect answer, and each person has to cultivate their own sense of moral intuition. For me, part of my process for gathering up all this information is to talk to as many different people as I can, to get many different perspectives and to see what some of the common themes are. And I think this talk was trying to put forward something that I wish we had been able to produce out of the VR Privacy Summit and have the whole community rally around; it was just too big of an open-ended, challenging problem to distill things down to that point. But hopefully this will be another start of the conversation, to get more people involved and talking about these different issues and to formalize it in different ways. There's just a lot of work that's still yet to be done, but it felt good to distill it down to this point. I really packed a lot of information down into this half-hour talk that you can watch on YouTube and send around. So that's all that I have for today, and I just wanted to thank you for joining me here on this podcast. And if you've listened to all of the episodes in this series, then, oh my God, thank you for doing this deep, deep dive into a topic that I think is extremely, extremely important. If you're just getting started and want to dive in even more, there are lots of great conversations in this series. It's really been like seven months of work for me, really focusing on a topic and being very deliberate and intentional about scheduling different talks and panel discussions and doing lots of interviews on this topic, trying to gather up as much information as I can. So it feels good for me to get this whole series out there into the world. This really felt like an independent scholarship project that I took on this year. It feels like it's just something that needed to be done; somebody needs to start having these conversations and put this information out there. It's still an ongoing dialogue with the entire community; it's not like this is the end, it's just the beginning, I think. And I just want to encourage more people from different companies to start to have these different conversations. And yeah, if you enjoyed this series, then I encourage you to send it out to people, spread it around. Please send the video of this talk and inspire people to do more of a deep dive into this entire series. And then the final thing that I would say would just be a plea for more support on my Patreon. I'm supported by my Patreon listeners, and just a shout out to my supporters: I wouldn't be able to be doing any of this without the support that I'm getting from my Patreon. And I'd like to just encourage more people to support the work that I'm doing here on the Voices of VR podcast.
Realistically, I need probably about twice the amount of money that I'm making right now on Patreon to really make this sustainable. I'd really like to stay an independent journalist and to continue to do this type of coverage, and it requires a lot of travel, as well as the other expenses it costs just to be a human in this world today. So I could really use the support. If you enjoyed this series and you want to see more of it, or just want to support this kind of independent journalism and real-time oral history and documentation of what's happening in the realm of virtual reality, then please do consider becoming a supporting member of the Patreon. You can become a member and donate today at patreon.com/voicesofvr. Thanks for listening.
