#789: Human Rights in the Metaverse: Brittan Heller on Curtailing Harassment & Hate Speech in Virtual Spaces

Brittan Heller is a technology & human rights fellow at the Carr Center for Human Rights at Harvard University, where she is investigating the intersection of technology & policy when it comes to online harassment & hate speech within virtual worlds. Heller was one of the Jane Doe plaintiffs in the very first cyber harassment lawsuits in 2007, and she describes her experience and her reasons for settling the case before it could create potentially bad law that would make the problem of online harassment even worse than it already was.

Heller sees harassment and hate speech as a sociological and cultural issue, where trying to solve it purely through a technologically-mediated or legal framework would only address the symptoms of the problem and not the deeper causes. She talks about her experiences as an international human rights lawyer, and what types of technological interventions are justified in trying to curtail hate speech through automated, machine-learning detection of some of the linguistic hallmarks of dangerous speech, including how dehumanizing language can incite groups into violence. She also says that community members have a responsibility to establish social norms by actively engaging in direct dialogue to enforce code of conduct standards, and that there do need to be mechanisms to deal with the 3-10% of sociopathic trolls who are dedicated to social disruption, but that technological solutions shouldn't be designed around these edge cases.

We also talk about how the line between the virtual and the real is still not very well defined, and how experiences in virtual worlds are treated differently than experiences in face-to-face reality. There are behaviors that would be classified as either sexual harassment or sexual assault in real reality that regularly occur in virtual worlds, but because there is no physical transgression of boundaries in physical space, they don't carry the same legal implications, even if anecdotally survivors of sexual assault and harassment report that they still experience intrusive thoughts from these experiences of virtual harassment. Heller also mentions that some domestic abuse law currently focuses solely on threats of physical violence, so online interactions do not carry the same weight despite being a continuation of abuse and harassment.

Heller also talks about how the First Amendment right to free speech is not absolute, as there are limits for inciting violence. There's also a fighting words doctrine that's designed to prevent physical violence between people in the same physical location, but it doesn't address whether this translates to online spaces. Fighting words have been defined by a number of court cases under several formulations: words that "by their very utterance, inflict injury or tend to incite an immediate breach of the peace," that represent a clear and present danger in physical reality, that create an incitement to riot, or that result in "a direct personal insult or an invitation to exchange fisticuffs." But all of these definitions of fighting words are limited to the type of violence that happens in physical reality, and not what happens mediated through technology online or in virtual spaces.

Heller is taking a proactive approach to looking at where existing laws might overlap with the types of virtual spaces that will start to form, and she's actively collaborating with technology companies and policy makers to help define the balance needed to cultivate safe online spaces while also maximizing the amount of freedom of expression that's possible, within the limits of the types of dangerous speech that incite violence. Virtual harassment, hate speech, and online extremism are complicated issues, and she's on the front lines of trying to figure out how to integrate technological solutions, cultural practices, and a legal framework that balances freedom of expression with the terms of service and codes of conduct that can cultivate safe spaces online and in the metaverse.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So on today's episode, I'm going to be talking to Brittan Heller, who is a human rights lawyer who's looking at this intersection between technology and policy. She's currently a technology and human rights fellow at the Carr Center for Human Rights at Harvard University, and she's an associate in the business and human rights program for the Center for Strategic and International Studies. So Brittan was actually a Jane Doe in the very first cyber harassment lawsuit back in 2007. And in this interview, she shares her experience there and what resulted, which is essentially this experience of online harassment that she went through, and what are the ways in which you can look at the existing laws and how they apply to what happens to you online. And so that was over 12 years ago that that happened to her, and there's been a propagation of harassment, sexual harassment, hate speech, and online extremism since then. And so she headed up the Anti-Defamation League Center for Technology and Society. And now she's doing this research project of looking at the metaverse and virtual reality, looking at existing laws and how they're going to potentially overlap with what's happening online, and trying to figure out this difference between the digital and the real when it comes through this lens of human rights and how to address these various different issues of harassment, hate speech, and online extremism. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Brittan happened on Monday, July 22nd, 2019 in Portland, Oregon. So with that, let's go ahead and dive right in.

[00:01:52.781] Brittan Heller: Hi, my name is Brittan Heller. I am a technology and human rights fellow at the Carr Center for Human Rights at Harvard University. I also work as an associate in the business and human rights program at a think tank called the Center for Strategic and International Studies. I like to think about myself as a ninja turtle of the internet, where my expertise is really in the sewers and focusing on controversial content, hate speech, online extremism, online harassment, and sometimes going toe-to-toe with the people who create that stuff.

[00:02:31.510] Kent Bye: So yeah, maybe you could give me a bit more context as to your background and your journey into this space.

[00:02:37.701] Brittan Heller: Sure, it wasn't intentional. I started my career wanting to go into international criminal law. And eventually I did. I prosecuted the first cases at the International Criminal Court. I prosecuted genocide, war crimes, and other human rights abuses for the U.S. government and handled all the technology related to that. But while I was in law school, I ended up becoming the Jane Doe plaintiff in one of the first cyber harassment lawsuits. And for a while, I thought that was a distraction from the human rights work that I wanted to do. The suit started in 2007, so it's about 12 years later where I actually see that I was just on the unfortunate cusp of the wave. Everybody always wants to know what happened. So I'll take you back to 2005, when I was a student studying to go to law school. You take the LSAT test, and I was part of a study group. I ended up getting a better score on the test than the young man who ran the study group. And I got into Yale Law School, which is the top-ranked law school in the country and was his dream school, and he did not. He also kept asking me out on dates and I kept declining. So after I got into Yale, it seemed to add insult to injury. And the next day I started getting messages from people saying, good job fighting those creeps off, or, I didn't know you used this particular website, and I had no idea what they were talking about. Absolutely none. And it was basically a message board that was supposed to be about law school admissions, and when I went to it, it really wasn't. It was a place where people were being attacked for their race, their ethnicity, their gender identity, their sexual identity, and every way a person could be attacked online. And they'd started writing about me. So it started with a post that said, stupid bitch to attend Yale Law. And it contained all of these falsehoods about my score. It said that I was a Muslim terrorist, that I had bribed my way into school with sexual favors, that I pretended to be a minority, that I pretended to be black. Just really strange stuff. So I wrote to the company that ran the website and said, hey, this is bizarre and distressing. Would you mind taking it down? And they didn't respond. So I forgot about it for a while. And then when I was at law school, I started interviewing for jobs. When you're in law school, you become a summer associate, so like a trial period of working at a law firm. And these law firms would tell me things like, you're the demographic we want, you're the editor of the international law journal, you have perfect grades, and we'd love to hire you, but we just can't. And it was kind of a mystery until one of them told me explicitly: it's your online search history. Google yourself. We don't want that kind of stuff associated with our law firm, and we don't want our clients saying that. Even if it's all fake nonsense, it doesn't matter. Clean it up and we can hire you right away. So I went back to the people who ran the website and said, hi, it's me again. This is having an impact on my real life. Can you help a girl out? At the same time, the website had started this contest, and it was girls of the top 14 law schools. So they would encourage people to take pictures of their classmates and follow them around and then submit the pictures. And sometimes the pictures would be doctored. Mine eventually ended up on crime scene photos. Most women's pictures did not end up on crime scene photos.
And then they would put the pictures up and put the entreaties from the women underneath it, saying things like, that's not me. Why are you doing this? Who are you? If my church or my employer or my family sees this, it will be really bad. I'll lose my job. Why are you doing this to me? And they thought it was funny, so they put it all underneath. So I went back to the website and said, OK, here's one line of code, just one line of code. If you put it in there, all of the bad content stays up and it gets de-indexed from Google, and we all go our separate ways. What do you say? And then they finally wrote me back and said, you're going to have to sue us, because First Amendment. And I said, OK, because I had realized that most people who get harassed online, they can't prove the harm. And I could prove the harm because it was involving jobs. So I had financial loss, and in that way had standing in a court in a way that other people who were just terrorized online wouldn't. I recruited more Jane Does to the case. I got pro bono counsel from Stanford and Yale Law Schools. I got a PR firm to represent me for free. My volunteer cybersecurity researcher eventually became the chief security officer of Facebook when he grew up, and I went on the offense. So we announced the suit on the front page of the Washington Post, and I don't think they knew what hit them. People thought that the case was going to go to the Supreme Court because we moved the judges to do lots of very interesting things, like serve process to the people doing the harassment using the website, which had never been done before, and sue people under their screen names because they were harassing the women anonymously. So if you looked at the complaint, it was kind of funny, because it was Jane Doe versus cheese-eating surrender monkey and Hitler, Hitler, Hitler and the whole list like that. There were so many new issues that something was going to go. And it also started a new wave of the harassment, because the person who started it was no longer the only person interested in this issue, and things got a lot worse before they got better. So I was doxed. They put all of my information, not just my social security number, my name, my phone number, my email, but when I went to the gym, where I lived, what my class schedule was. There was a running commentary about the things that I would say in class because they wanted to prove that I was stupid and functionally illiterate and that's why I couldn't get a job. They also wrote a poison pen letter, and somebody sent it to the law firm that eventually did take a chance on me as a summer associate. They sent it to the managing partners, the hiring partners, every named partner they could find at the firm, and all of the professors at Yale Law School, detailing my crimes against men, which was kind of a strange diatribe to send to people. But at the same time, I was very worried that it would affect my ability to keep working there and to keep studying at school. I started getting threats of physical and sexual violence, enough so that the local police got involved and the FBI had to escort me to final exams. I ended up taking time off of law school because it was very obvious to me, and I had to prove it to the school, that the content was such that there were people in class with me who were doing the harassment as well. How else would they know what I was wearing and what I was saying in class?
And if you couple that with the threats of physical and sexual violence, I didn't feel like the school could keep me safe anymore. I ended up settling the lawsuit for a few reasons. One, I didn't want to create bad law, and with so many novel issues, I thought something is going to go high enough in the court to a judiciary that may not understand this, and it really has the potential to make this worse and not better. And second, I felt like I had already done what I set out to do, which was I wanted to see if an average person who was being harassed online could get redress under the current legal structure. And the answer immediately was no, because I was not an average person. Most average people don't have access to the kind of resources that I was able to get as someone who said, I go to Yale Law School and I'm being harassed. People took an interest in that. And also as someone who was getting a legal education, most people don't understand what to do when this happens. So that's also why the answer was no. I ended up working with the company to settle the suit. So we determined basically how to develop a content moderation scheme that would allow people to express themselves no matter how vile and repugnant and terrible, but keep that from impacting other people's lives in a negative way. So we had an off-topic forum, which was all of that, de-indexed from Google, and an on-topic forum, and the on-topic was about law school admissions. With that, the company could then go back to Google. Part of the driver for some of this activity, at least from the company's incentive standpoint, was that they were making money through more user engagement, through Google Ads and clicks, so the more salacious and outrageous the content, the better for them. This would allow them to keep getting revenue from good clicks, and then I could go back to the companies and say, look, they've actually done something good about this, so maybe you want to reinstate them, take a second look. And I got to talk to the people who had been harassing me, which was the most interesting part of this whole thing. They were men and women. They ranged from 17 to retirees. Some were classmates, some were kids. Some people were postdoc- or PhD-educated professionals, and other people had normal jobs. And most of them lived in states I'd never been to, in places I had no connection to. And they all said the same thing. They all said, I didn't realize that what I was doing was impacting a real life. It was just a game. It was just fun. I didn't mean to hurt you, and I am so, so sorry. So that is the experience, and what I took away from it was one, an interest in new technological systems, in new types of social connections online, not just social media, but also emerging tech like VR and AR, and how these social interactions are both made better and made worse through hyper-mediation, through these systems that can be designed to either reinforce our perception of shared humanity or can mask that from us and encourage negative social behaviors.

[00:13:56.315] Kent Bye: Wow. And so as you were sharing your story, we can look back those 12 to 14 years ago and see how so many of those very same issues have gone and propagated through Twitter, Facebook, and now in VR, where we certainly have a whole range of different harassment issues. And there seems to be like a cycle, a repetition, of how this kind of plays out in different ways, and in your case, kind of escalating to get more and more intense in terms of the degree of harassment that's happening. And so as we look at this right now, what are the vectors at which we could start to address this? Like, when you start to try to make sense of this, and as you're working on this research project with Harvard trying to figure out what are some strategies, what do you see as the most highly leveraged points to address this, and to address it in kind of a holistic way?

[00:14:48.528] Brittan Heller: I think there's a couple things that immersive companies can learn from the way social media has dealt with this and from social psychology that will help them build diverse, inclusive, and safe online environments. One is really simple. It's make sure that users understand the rules. Most of the time on these online platforms, the only people I know who read the terms of service are the ones who are intent on breaking them and want to get really, really close to the line. And I don't think it's reasonable to expect people to adhere to social norms if it's not clear what those norms are. In VR systems, a lot of times there aren't these heuristics yet that you have in real life. And we've seen VR companies do things like develop personal space bubbles to try to replicate that technologically. Because that's not something you really think about unless you're close to someone who's a real close talker and invades your personal space in a way that you're not comfortable with. But we do have these sort of unspoken social rules. What I think VR and AR companies should do is make it explicit. If you tell people what the rules are in your systems, they will be more likely to incorporate them into the way that they interact with others. The best example that I can show of this, there's two. One is research about modifying people's behavior based on social norms. There was a study that dealt with towels in hotel rooms. And sometimes you'll see a little plaque that says, save the environment, reuse your towel. Some researchers wanted to figure out what made people more likely to do that than not. So they went through several iterations of signs. Save the environment, reuse your towel. Not many people paid attention to that. It was like 25%. I don't remember the exact numbers, so don't quote me on the numbers. The second best was saying three out of five people chose to reuse their towel. So with that, about half of the people said, oh, well, you know, most of the people chose to reuse their towel. I want to be part of this group. And maybe it wasn't an intentional thought process, but they did it too. The best adherence, about 80 to 85%, was when there was a sign up that said three out of five people in your hotel room chose to reuse their towel. Because people want to belong to social groups. So if you can personalize, not just display the rules, but personalize the rules for people, you're going to see, based on the human tendency to be part of a herd, that you'll have increased positive behaviors or increased negative behaviors if the rules push that way. Twitter's been doing good work. And I know there's a lot of tech lash out there now, and it's not very popular to say Twitter is doing good work. But through their health of the internet initiatives, they've started to look at social science and apply that to how their systems are being promoted and enforced. So now they've just announced that they have a new initiative to publicize all the rules, to put them in plain language, and to try to make it accessible to the average user. I'm very, very interested to see if their results replicate the towels.
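The personal space bubble that Heller mentions above usually amounts to a simple distance check run every frame: if another avatar comes within a configurable radius of the local user, that avatar is faded out or hidden until it backs away. Here is a minimal sketch of that idea in TypeScript; the interfaces, radius values, and per-frame hook are illustrative assumptions rather than any particular VR platform's API.

```typescript
// Minimal sketch of a "personal space bubble": avatars that intrude
// within a configurable radius around the local user are faded out.
// All names and values here are illustrative, not a real VR SDK.

interface Vec3 { x: number; y: number; z: number; }

interface Avatar {
  id: string;
  position: Vec3;
  opacity: number; // 1 = fully visible, 0 = hidden
}

const BUBBLE_RADIUS_M = 1.2; // assumed comfort radius in meters
const FADE_BAND_M = 0.5;     // fade gradually over this extra distance

function distance(a: Vec3, b: Vec3): number {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Called once per frame: adjust every remote avatar's opacity based on
// how far inside the local user's bubble it is.
function applyPersonalSpaceBubble(localPosition: Vec3, others: Avatar[]): void {
  for (const other of others) {
    const d = distance(localPosition, other.position);
    if (d >= BUBBLE_RADIUS_M + FADE_BAND_M) {
      other.opacity = 1;                                    // well outside the bubble
    } else if (d <= BUBBLE_RADIUS_M) {
      other.opacity = 0;                                    // inside the bubble: hide entirely
    } else {
      other.opacity = (d - BUBBLE_RADIUS_M) / FADE_BAND_M;  // partial fade in between
    }
  }
}
```

The design point is that the mitigation happens automatically on the target's side of the interaction, without the target having to file a report first.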

[00:18:11.935] Kent Bye: Yeah, I just came from the Decentralized Web Summit and had a chance to talk to Danny O'Brien from the Electronic Frontier Foundation. And they're very concerned in terms of, like, trying to move too quickly, aiming their sights onto big companies like Facebook or Google, and in response, creating a law that then accidentally hits all the other people that are trying to also do work in this space as well. So it then becomes a situation where maybe the only people who can actually live up to these laws are the big major companies, and it becomes too much of an obligation for other people to try to implement that. So, for example, if it's required by law to have like AI moderation, everything's being recorded, you know, that sort of puts a bar technically that only a few companies could achieve and then sort of reinforces that consolidation of power. And so it's fascinating to hear from you that you didn't want to push through and take it to court and try to establish new precedents, because you didn't at the same time want it to only be a novel case and have law that impacts everybody else. And so there seems to be a gap in the information of the legislators and the judges, to not really be up to speed as to what's happening with technology. And so as a lawyer, that seems a little weird. I would expect you to want to have more laws, but do we need laws or what? How does the law play a role here? Is it a matter of human rights, civil rights, and how do you kind of balance all these things?

[00:19:39.588] Brittan Heller: I think when you're looking at harassment or user engagement issues, it's really a social problem, not a technological one. So I think some of the smarter solutions deal with the social aspects first and not as much with the online apparatus. When I look at the type of solutions that have been successful in stopping harassment, they are all oriented in interpersonal interactions. So this may be a social problem with a technological vector, and not the reverse. Another thing, going back to the previous question, that companies can do, and this one I like because it puts it in the hands of game and experience designers, is make sure you show people's eyes. There are studies that show, at least with online avatars, that even showing just a bar with people's eyes as they're having online communications humanizes it enough that negative behaviors start to decrease. And if you think about this in a gaming context, how many times have you seen a character portrayed as a villain with large eyes, a clear gaze, who makes eye contact with you? Normally they're monsters with little beady squinting eyes, and that's on purpose, because it's easier to dehumanize someone if you can't make eye contact with them. And you see that demonstrated in games. So why not flip it?

[00:21:14.667] Kent Bye: So in terms of content moderation, it seems like there's a number of different issues. Either it's direct harassment, or it could also be hate speech in different ways. And just curious if there's like a legal definition of hate speech, like what is hate speech?

[00:21:29.271] Brittan Heller: So there is no legal definition of hate speech, at least in the U.S. context. This is kind of a particular challenge with online activity, because most of these are American companies. A kind of shorthand definition of hate speech that I like to use is the targeting of people based off of their immutable characteristics, the things about them that they can't change, like their ethnicity or their perceived gender identity, their sexual orientation, their race, the things that make a person uniquely them. Targeting based on that, I think, is hateful. There are definitions of hate speech in international law and the European context, and most of them focus on those types of characteristics that you see protected in civil rights law. And the reason you see that in the European context and in international law is because many of these laws came into effect after World War II, stemming from the context of the Holocaust and trying to prevent the kind of hate speech and incitement to violence that characterized World War II.

[00:22:44.449] Kent Bye: And so having worked in this space for a while, what do you see as like the best ways to counter something like hate speech? Because if we're talking about real-time environments online, one solution is to kind of take people and ban them, and there's certainly mechanisms within these virtual worlds to be able to do that. I guess taking a step back, what is the best way to address hate speech, especially when you look at the larger context of what's happening in the world right now? What are the ways that it should be addressed from your perspective?

[00:23:16.195] Brittan Heller: There's a couple of things you can do, and these are also tips for content creators. One is to, I'll use an example. So Instagram just started doing this. They've started informing people that they're using language that is likely to be hateful, interrupting the transmission of the message, and asking people if they want to proceed. The reason that this works, and that most people will be like, um, no, no, is because it builds in time for you to cool off. A lot of times when people produce hate speech, it's not based off of personal animosity. It's based off of like a flash of anger. So if you build in time for people to take a pause, there have been studies that show that that is enough to make people think twice about the way they're interacting with others. There was a hackathon in the atrocities prevention context where they created something called the Hatebase API. And it's an international lexicon of hate speech terms. And that was built into a plugin where if kids were talking to each other and said like, oh, I hate you, you're a cockroach, it would ping the database. And they'd get a message saying, hey there, you used the term cockroach. Do you know that in the context of the Rwandan genocide, terms like that were used to incite violence and resulted in the deaths of 800,000 people? Do you wish to proceed? 80% of people would be like, oh no, no. And the 20% of people who do are, you know, they really mean it. So it's adding productive friction and taking away the thoughtless nature that many people use to promote hate speech. Other than slowing down, the second thing you can do, and this is interpersonal, is to speak up when you see this activity. And you don't have to be very aggressive about it, but it changes social norms. So if you see somebody engaging in hateful speech, you can say things like, I don't understand why you're using that term, or that makes me really uncomfortable, or we don't talk like that here. Take it offline. That alone is enough, because an interesting fact about harassment is that people think it's tied to anonymous speech, and research shows that that's not actually the case. Because when you're engaging in an online environment, be it a VR environment, even if you're using an avatar, it's still a reputational economy. And by that, I mean that your avatar or your screen name, your behavior, and your reputation within that online forum stay attached to it, even if it's not your actual name. So using pseudonyms or avatars actually doesn't increase the incidence of harassment. It's whether or not the environment is permissive to that type of behavior. By speaking up and reasserting those social norms, you're changing what is permissible behavior within the online space that you're in. And finally, the third one is to think about why people create hate speech and turn that dynamic on them. This is one of my favorite ones because it involves humor a lot of the time. I just read about this great, great counter-protest in Germany. There was a neo-Nazi festival going on in this town in Bavaria, and people were really, really, really pissed about it. So what they did is all of the citizens of the town bought up every case of beer available in the town, and then monks from the monastery took the beer, loaded it onto trucks in front of the convention center with all of the neo-Nazis watching, and they drove it away. And in Bavaria, beer is like water.
So it was a very public stance by the town saying, you're welcome to meet, and you're welcome to do your thing, but we are not going to support you, and we are not going to give you our beer.
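The productive friction Heller describes above reduces to two pieces: a lookup of the outgoing message against a lexicon of dehumanizing terms, and an interstitial prompt that explains the term and asks the sender to confirm. Below is a minimal sketch in TypeScript; the hard-coded lexicon entries and the send and confirm callbacks are placeholders standing in for a maintained lexicon such as Hatebase and for whatever messaging and prompt UI a platform actually uses.

```typescript
// Sketch of "productive friction": before a message is sent, check it
// against a lexicon of dehumanizing terms and, on a hit, interrupt with
// an explanation and ask the sender to confirm. The lexicon entries and
// the send/confirm callbacks are placeholders, not a real API.

interface LexiconEntry {
  term: string;
  context: string; // explanation shown to the sender when the term is detected
}

const LEXICON: LexiconEntry[] = [
  {
    term: "cockroach",
    context:
      "Terms like this were used to incite violence during the Rwandan genocide.",
  },
  // ...further entries would come from a maintained lexicon, not be hard-coded
];

// Find lexicon entries whose term appears in the outgoing message.
function findFlaggedTerms(message: string): LexiconEntry[] {
  const lower = message.toLowerCase();
  return LEXICON.filter((entry) => lower.includes(entry.term));
}

// send() delivers the message; confirm() is whatever UI the platform uses
// to pause and ask the sender. Returns true if the message was sent.
async function sendWithFriction(
  message: string,
  send: (msg: string) => Promise<void>,
  confirm: (warning: string) => Promise<boolean>
): Promise<boolean> {
  const hits = findFlaggedTerms(message);
  if (hits.length === 0) {
    await send(message);
    return true;
  }
  const warning = hits
    .map((h) => `You used the term "${h.term}". ${h.context} Do you wish to proceed?`)
    .join("\n");
  if (await confirm(warning)) {
    await send(message); // the sender insisted; the message still goes out
    return true;
  }
  return false; // the pause was enough and the message was dropped
}
```

In her telling, most senders decline at the prompt, so the value is in the pause itself rather than in blocking anything outright.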

[00:27:31.215] Kent Bye: That kind of memetic warfare feels like it's a way of trying to turn it into a bit of a joke. And I actually saw an article about that, so then it actually becomes more of a media event as well. But I'm curious about this line between the digital and the real, because there's some things like bullying or sexual harassment or sexual assault which would be a sexual assault in reality, but maybe it's still only sexual harassment online. I don't know if there's a difference for how to actually define it, but just curious if you could kind of look at the difference between the digital and the real and how you draw that line, but also from a legal perspective, what's already in the existing law, but also what you were saying in terms of how you make sense of that space.

[00:28:21.413] Brittan Heller: The legal space is far behind the technological space. That's not surprising to policy advocates; that's the reason they have a job. One of the areas that I think is most interesting to look at, and kind of a success story, is the non-consensual pornography space, where there were advocates who pushed forward state-based laws. And I think 49 states now have laws prohibiting non-consensual pornography, where a few years ago people didn't even know what that was. It's colloquially known as revenge porn, but the civil rights groups actually pushed to rephrase the term so that people understood that it's pornography and it's done without people's consent. So talking about the nature of the online threat in a very descriptive way. I think one of the avenues that hasn't been well explored yet, and could have implications for social VR as we're trying to move interpersonal relationships into a new space, is the domestic violence context. I'm a former prosecutor, and a lot of the harassment cases that dealt with online activity were put into domestic violence cases or reports, and the person who reported it was told, just turn off your screen. I mean, that makes it go away, right? You can very easily see, in an immersive context, being told, take off your headset, or just don't use that stuff. Local police departments were the interface for most of this. So as people are thinking about how to deal with these things proactively, I feel like VR, AR, and immersive companies need to be in a dialogue, especially with local law enforcement, when they start receiving reports that harassment activities are taking place. A lot of times the laws about offline harassment make it so you have to have physical contact between the target and the subject of abuse in a domestic violence situation. And you're gonna see the same problem in immersive environments as you see with social media, where there isn't that tangible interface. And so I can anticipate people saying that offline-based harassment protections don't transfer over.

[00:30:48.562] Kent Bye: And so what about sexual harassment? Like, what does the law say about sexual harassment, and then what's the difference, or how do you think about it, in terms of either sexual assault or sexual harassment in virtual spaces?

[00:31:02.920] Brittan Heller: To me, this is a really interesting area as well, based on some of the research that people have done that shows when you're in a VR environment, it feels real. So when someone comes up and touches you without your permission in a virtual space, it's just like they're walking up to you on the street and doing it. And I don't think that the law has contemplated what that means, or how your activity within a virtual environment will engage with laws that criminalize that conduct in an offline context. It's a brand new frontier, and there's a few legal scholars looking at it, but I think you're going to see more attention paid as these devices are in more and more homes and are more commercially available.

[00:31:52.515] Kent Bye: And there also seems to be differences in the First Amendment and free speech when it comes to fighting words, where if you are co-located with somebody and you use fighting words, it has a different meaning than if you're online, because there's, again, not that threat of physical violence. Maybe you could explain a little bit about the fighting words aspect of free speech and the difference between the physical and the virtual.

[00:32:17.873] Brittan Heller: Sure. So freedom of speech is not absolute. I think a lot of people in the U.S. forget that, and forget the fact that the First Amendment is intended to protect us from government censorship and government overreach. So a lot of times the type of activity that people do in an online context is regulated by private companies, and therefore the First Amendment doesn't necessarily apply to their online interactions in the same way. Because of this, I think a lot of attention should be paid to terms of service and community standards, because that is a lot more indicative of what your rights look like in an online space than the First Amendment. Incitement to violence is different. And if you look at the community standards and terms of service of many different tech companies, unless they are very far down the permissive track, that seems to be a common thread: any content that incites violence is against the rules. And this reflects First Amendment jurisprudence, where if something is an imminent threat of violence, it's not covered by freedom of speech. In the atrocities prevention context, this is actually very well studied, and so many tech companies are starting to base their policies off of the concept of dangerous speech. This was a concept developed by Susan Benesch. And it is speech that has the tendency and the propensity to propel people towards violent activity. And it's a very interesting category of hate speech. To me it's different than normal hate speech because of the tie to incitement to violence and because it follows patterns. And the research about it identifies patterns that transcend place, location, culture, and time. And they call them the hallmarks of dangerous speech. And some of those deal with dehumanization. Are you calling someone a cockroach, vermin? Are you othering people? And you can see those types of patterns emerge and recognize them as the psychological aspects that allow people to take their speech and transform it into violence, and do that in a way that feels personally justified. So for me, it's not just the First Amendment, it's freedom of expression, and looking at the fact that freedom of expression does allow you to curtail people's behavior that impinges on the safety of others.

[00:34:58.280] Kent Bye: As you're saying all that, I just think of the current president of the United States, Donald Trump, saying things like, oh, those people are animals, or he'll just say they're animals. He won't even call them people. Or saying different things that could fall under these rules. Anybody else in the world saying that, they would be banned from the services, but he's the leader of the country. And so it just feels like there's a bit of a collective incitement to violence that is happening from the highest levels, and I'm not sure if you have any thoughts on that. You know, it seems like there's people that in some ways have looked at various tweets that President Trump has sent out and said, you know, this would normally be against the terms of service, but he's sort of an exception because he's a public figure. But yeah, it seems like in the culture right now, there seems to be increasing amounts of incitement to violence with this type of phenomena that seems to be kind of spiraling out of control from my perspective.

[00:35:51.516] Brittan Heller: Yeah, I wrote something for the New York Times about the first time that I heard President Trump shift his language from talking about illegal immigration to illegals. And he described them in saying that you take them and you throw them in the bin. So he was comparing them to trash or garbage, which is another hallmark of dangerous speech. And... That was a year and a half or two years ago. So when I watch what he does and how people react to what he does, it gives me the goosebumps because I haven't seen that type of leadership since I was working at the genocide tribunal in Rwanda. So I think it's very, very important for people who care about these type of issues to take note of what he says because he's never been subtle and he's never hid it. This is the same sort of things he's been saying since he was running for office. And only now are people coming to the perhaps belated realization that he was serious the whole time. I think leadership matters, and I actually agree with what Twitter is doing by not taking it down, but now they're going to mark it as a violation of terms of service that they are keeping up for the public interest. Because he's never had a lot of guile about this, I would much rather know exactly what he says and exactly what he's thinking, but have them mark that this would be a violation so that it changes the behavior based on those social norms, hopefully.

[00:37:37.002] Kent Bye: Yeah, I mean, as we talk about that leadership and the social norms, the importance of setting social norms, we almost have these social norms being set at the highest level. And so for people who are running these social environments, if they're just mimicking exactly what the leader of the United States is saying, then even though he's using dangerous speech, how would individual creators of either these online communities or virtual worlds be able to change the social norm if that's the social norm that's being set at the collective level?

[00:38:08.268] Brittan Heller: I think it comes down to the norms that you set in individual environments. Every space we go into has different norms. If I were to take you into the library and start doing karaoke, that would definitely be a violation of the norms of that space, because even if there's not a written rule against singing Livin' on a Prayer, people just know that's not the space for it. It's the unwritten rules that matter. Whereas if we were to take that to a bar, that hopefully would be welcome, depending on how well you sing. So I think it's about, one, creating a sense of community in these individual spaces, and making sure that the community values are things that the companies want to support. I do a lot of work with not just social media companies, but gaming companies, VR companies, investors, and I think it all comes down to a discussion about values. What do you value? How do you imbue that into your online experiences or your immersive environments? And how do people know that's what you value?

[00:39:20.594] Kent Bye: Well, it seems like you said that it's actually easier to track some of the online harassment and bullying because there's a record of it, but if it's in real life, then there's all these things that actually make it harder to prove. It seems like we're actually kind of going more towards these real-time immersive environments that are a little bit more ephemeral in a lot of ways. And so it seems like unless we have everything being recorded, which then has all sorts of different surveillance and privacy implications, there's these various different trade-offs between the privacy of the individual versus the safety of the environment that you're creating. But just curious what your thoughts are on that, going back into these real-time environments, and whether or not we have to really lean upon that culture and community vector in order to really try to create these safe online spaces, or if there's other either legal or technological aspects that can somehow be brought into that process.

[00:40:16.467] Brittan Heller: The most interesting thing to me about the new spaces that are coming out is comparing them against the pre-existing social networks, even the avatar-based gaming systems that we saw evolve in online contexts, and noticing how in the pre-existing systems the onus was placed entirely on the target of harassment. So that person would be responsible for reporting it, for communicating back to the platform, for trying to argue for redress, and for being basically the person who was almost re-victimized in a way by having to explain the experience again and again and again. And some of the people that I've worked with and advocated for who were victims of extraordinary online harassment, they actually sounded like people who I'd interviewed who were victims of sexual or physical violence, because they had to relive it every time they tried to get people to do something about it. It's not a burden of proof, but it's more the burden of action. So I think that if you're building a new space, you should think about when things go really, really wrong, how do you deal with this? One way that I think creators of immersive environments could deal with this, that would put them ahead of the pre-existing stuff, is make sure that you have a human available. And I know that this is not scalable, to have a human available in all circumstances, but for your really, really bad stuff, the thing that people all bring up, because it just sticks with you, is the way they described being re-victimized when they had to deal with an automated system about something that is intensely personal, that may be threatening the aspects of your identity that are fundamental to you, especially if you're looking at online sexual-related abuse. So basically having a bat phone for someone to talk to someone from the company, even if they can't do something about it, makes people feel like they're being heard. And when you're looking at victimization, that is almost as important as addressing the content itself.

[00:42:38.760] Kent Bye: Yeah, there seems to be probably a lot more awareness of this issue post #MeToo, in the sense that there's certain situations where there may not be any evidence, but just to have an opportunity to be able to share your story and to be heard and to be trusted, and to not feel like you're put in the situation of, well, prove it in order for me to listen to your experience that you went through, because there's still a phenomenological truth of the experience, and being able to at least tell the story of that experience. And for me, when I look at these issues, I look to something like the Truth and Reconciliation Commission that happened in South Africa, post-apartheid, where they really had this opportunity for people to own the harm done, and then to potentially make an apology, and then to listen to the harm that was done. And it feels like there's something about that ability to be able to share your story and to be heard, and to have the people who were bringing that harm to the victim be able to listen and then maybe even own the harm that they did. And it sounds like through the process of the settlement, you were actually able to get that. So I'm just curious if you could talk a little bit more about that experience that you had, but also the role that you see for these types of truth and reconciliation tribunals that could happen in these virtual worlds.

[00:43:56.760] Brittan Heller: Yeah, I think that the ability to talk with some of the perpetrators of the harassment against me makes me feel like I won, because I speak about this publicly, I advise companies on how to deal with controversial content and harassment issues, and I'm very confident in the resolution that I got. But I didn't win the lawsuit as you would traditionally think of winning, because I decided that we shouldn't go on, because it was in everybody's best interest to resolve it one-on-one. When I look at the type of political situation we have in America right now or in other places around the world, I think that there has to be a role for restorative justice, especially if our online environments are going to be part of our new reality and reflective of offline communities and spaces. And that's actually what I learned from the lawsuit, that if you want to make things better, you have to have a lot of tough conversations. I remember that with one of the people, I got to read some of it. I had a sheet in front of me of all of the things that this person had written and had sent to me and had put on the forum. And so I got to say, hello, I'm Brittan. It's nice to meet you. I see here that you'd like to gouge out my eyes and skull fuck my corpse, but I think we should be acquainted first. There's a hard question that tech companies need to grapple with, because a lot of advocates think hate speech will go away if you just take it offline, and that's not actually addressing the problem, that's dealing with the symptom. There are very few mechanisms that I see that will deal with how do you bring people back into the fold. If you kick somebody off of an environment that's going to be essential to engagement and society and culture going forward, how do you restore them to this community once they've transgressed? And this is something I actually think about a lot, because if we can't figure out how to do this, we're actually going to be in a worse situation than having an internet full of hate speech and harassment.

[00:46:08.877] Kent Bye: Yeah, I think there's a bit of a convenience, actually, in that we have all these big companies, because we can just have them ban people or take them offline and think that the problem goes away. But it feels like there's a similar approach within our own criminal justice system, where we exile people and take them out of society, and then when they come back into society, they don't feel like they're full contributing members. They don't always have all the same rights that they once had, and they become these second-class citizens. And it feels like, as we move forward and we continue to have the blurring of the lines between the virtual and the real, there's just going to be access to certain dimensions of this metaverse that is going to continue to kind of blur the lines between the digital and the real. And if we continue to have that type of exiling, then they're still going to have a physical presence there. It's not like they're going to be away. So it seems to be an element there that we have to figure out how to integrate them. However, I do think there's still going to potentially be a role for exile, especially for people who refuse to own the harm that was done. Because if you try to enter into a restorative justice process with somebody who refuses to really, truly own the harm done, then you're just going to not only re-traumatize the situation, but potentially even make it worse once they start to gaslight all the people that are victims. And so it feels like there's ways in which indigenous communities have been able to deal with this within their own villages and communities, and maybe there's a need to look at some of those and see if we can take some insight. But just curious what happens when you get to the situation where you actually really do need to exile somebody, and including them in this situation is going to potentially make the situation worse.

[00:47:46.987] Brittan Heller: I think you're right that there's a role for looking at offline social psychology. 80% of people who violate the rules on Twitter are doing it for the first time. And a lot of them are acting out of passion or in a flash of anger or out of genuinely held political beliefs. They're doing it for a variety of reasons, and once you tell them, oh, that's not allowed here, they modify their behavior. That means that 20% of people are the persistent trolls who are doing this habitually or opportunistically. That mirrors the social psychology that talks about the prevalence of sociopathy in any population, where you'll find about ten-ish percent of people in a given social group will exhibit signs of sociopathy, and about two to three percent of them are the ones that you were talking about that are really anti-social and are trying to break social dynamics. So I think what the designers and creators of immersive experiences can do is basically make sure they're designing for the 97% of people, and not create rules that are so restrictive against this type of behavior that it impedes interactions and genuine connections between the people who really want to be there and respect the rules, and to understand that there may be two to three percent of people who aren't there for the reasons that the creators intend and maybe don't embody the values of that space and should find one that reflects their own values as well.

[00:49:31.721] Kent Bye: And as we're talking about this, there's a lot of theoretical things that we're kind of projecting out into the future. And, you know, it's striking to me to listen to your case and to see how so many different dimensions of your case then got replicated through these different iterations. And, you know, it's easy for me to sit here as a cis white male and say, we should always aim for this restorative justice process. But yet, on the other side, there were many, many years where there were women online who were being harassed, with either Gamergate or just general harassment that was happening on Twitter, and they would report it and then nothing would happen. So like, how do you think about what went wrong there? And then how do we prevent that from happening again within these virtual worlds?

[00:50:12.884] Brittan Heller: So in my international law practice, I've done a lot of work with failed states or with states that have a tenuous relationship with the rule of law. Sometimes, depending on the day, I put my work with internet companies in the same bucket. I see a lot of the characteristics of a failed state reflected in the way that governance systems work online. Most of the time, you have to know a guy to get something to happen, even if the rules say this content is clearly prohibited. And that's been actually a lot of the way that I've been able to engage with companies. I've been able to say, I represent a Muslim woman who was falsely accused of having set a Trump supporter's hair on fire at an inaugural march. I'm gonna have to prove to you that, you know, she wasn't there. She was visiting her sick father in a hospice. This is a true story, and she's gone public with it, so I can talk about it. And with that, I had to negotiate with the local police department. And I had been a prosecutor working with that jurisdiction before, so I had to say, you know, I know in most cases you would never make a statement, never ever make a statement, that somebody was not a suspect or a target or basically involved in an investigation. Because if later on it turns out they were, you can't take that back. I understand that. But the internet is a brand new game. It's a brand new paradigm, and if you don't give me that statement about this woman, based on the fact that she wasn't even there and we can prove it, if you don't give me that statement, it's going to perpetuate the harms against her. She's had to move out of her house. This is affecting her place of work and her job, and her family feels endangered. Because of the internet, this is a brand new game, and so I'm asking you to make an exception based on the nature of what's happening. And I understand everything that you're going to say in protest, because I have been you. It shouldn't be necessary to have somebody who has personal connections and prior experience in and with the exact law enforcement agents that you're dealing with, in order to get something that you need to make a company take action to stop harassment. So that's why I think about the internet as a failed state.

[00:52:34.836] Kent Bye: Great. And so you're doing this research project. So what are some of the early preliminary findings or questions that you find yourself asking as you're kind of doing this exploration here?

[00:52:43.718] Brittan Heller: I'm still in the very early phases of it. So the types of questions that I'm thinking about are: how do pre-existing laws capture spaces that may be part of the metaverse? Can a judge even spell metaverse? What sort of technological components are being built into the hardware that advocates and consumers should be worried about? What should the extent of that worry be? And what sort of protections can we build in so that we don't see fires later on that we didn't anticipate? I'm also thinking about how the concept of freedom of expression changes in an immersive environment, and whether expression isn't just how it's been conceived of before, by the words that you speak and the dress that you wear and the people you associate with and the type of activities that you engage in, but really things that are fundamentally more personal, like the emotions that your body expresses or the types of physical reactions that you don't have control over that indicate things about your state of mind. Because I feel like that is a whole new avenue for both tech policy and human rights law that nobody's thought about before.

[00:54:10.543] Kent Bye: Yeah, definitely the threats around biometric data are something that, from the privacy perspective, is certainly there as well. So I think generally, I see that there's this tension between creating safe online spaces and privacy. There's like this dialectic between freedom and responsibility that's there. And I wouldn't want to be on the extreme of either one of those, where I'm completely surveilled all the time, but I also want to have some expectation that there's at least some mechanisms so that, if things do happen, I have a way to either block people or have people not be in my space, and so giving those tools to people. But at the same time, there seems to be this spectrum between freedom and responsibility, and privacy and the sort of surveillance component.

[00:54:53.920] Brittan Heller: I used to say that my work centered on balancing freedom of expression and public safety, and now I don't. Now I say it's on integrating freedom of expression and public safety with an understanding that I think companies should take a hard look and do a risk-based assessment before they allocate their staff, their time, their attention, and their investment in these systems. Like I said before, are they designing for the 98% of people? Are they actually dealing with the risks that 98% of people will have? I've seen through the course of my career a lot of actions taken in the name of trying to prevent catastrophic outcomes like terrorist attacks and seeing that impetus being used to curtail individual civil rights. I feel like this is another chance to do it right, especially if we're thinking about how the average consumer and the average citizen are going to interact with these systems that we're building now, and what sort of things will unravel the fabric of their everyday lives that may not be a catastrophic national security outcome, but will be as devastating to an individual life as it would be to a national fabric.

[00:56:20.476] Kent Bye: Great, and finally, what do you think the ultimate potential of immersive technologies are and what they might be able to enable?

[00:56:30.820] Brittan Heller: Oh, I don't know, and I'm really excited about the possibilities. My hope is that it will amplify the things that bring people joy, that will allow people to make new connections, that it will engender and distribute new forms of art, that it will make people feel like they have a place where they can belong, and that all of this will be done in a way that actually encourages offline connections as well.

[00:57:07.567] Kent Bye: Great. Is there anything else that's left unsaid that you'd like to say to the immersive community?

[00:57:13.011] Brittan Heller: Feel free to contact me. I'm always up to talking about human rights and technology. And thank you for having me here today.

[00:57:21.237] Kent Bye: Awesome. Great. Yeah. Well, thank you so much. You're welcome. So that was Brittan Heller. She's a technology and human rights fellow at the Carr Center for Human Rights at Harvard University, and she's also an associate in the business and human rights program at the Center for Strategic and International Studies. So I have a number of different takeaways from this interview. First of all, I was actually kind of surprised to hear some of Brittan's perspectives. I would have figured that she was really interested in trying to come up with specific laws to address some of these issues. But I think her perspective was that some of these cases were so novel and unique that she didn't want the court case she brought against this website to generate bad law and potentially even make things worse for some people, you know, to have a judgment that created no boundaries at all for people to then push the limits of what harassment may look like. So I found it very interesting to hear her perspective, because she's taking the approach that when you deal with this type of hate and harassment, it's primarily a sociological issue, and that when you try to solve it purely through technological means, you're just addressing the symptom of the problem rather than the core of the problem. And it kind of aligns with my own perspective from listening to someone like Lawrence Lessig, who says there are these four major dials that you have to turn to be able to address these complicated issues. You have the legal vector of passing laws. You have the sociological and cultural vector, which is basically about the human dynamics that come out of social norms and cultural norms, and about producing education and cultural artifacts that are put out into the world to shift cultures in different ways; it seems like a lot of these issues are at the forefront of those sociological dimensions. Then there's the technological vector, and I think in some ways the technology companies are these centralized points where a lot of these human dynamics and social behaviors are not only being replicated, but are in some cases being amplified, made worse, and accelerated, so you have to look at what the role of these technology companies is in trying to slow things down. And then the final vector is an economic one, which is to provide some sort of economic incentive. Just listening to her case, it seems like there was an economic incentive for this message board to have all of this outrage happening on their site. And they were also taking a very specific First Amendment perspective, which is to say they very much valued an individual's right to say whatever they want, no matter how vile or egregious that may be.
But there's another dimension, which is the harm that is caused by incitement to violence, by dehumanizing people and by using the various signifiers of what she called dangerous speech: speech that creates a mentality with these different hallmarks, like dehumanization, claiming that a group is invading the purity of your environment, and claiming that it's justified to go take action. I think we're seeing that play out at different levels in American culture right now. It seems to me like a lot of these issues come down to the difference between the rights of the individual and the rights of the collective. Each individual has the right, in their own sovereignty, to free speech and to say what they want, but the freedom of speech isn't absolute. Once you start to use speech that actually incites violence, that encourages people to dehumanize other people or to do violence to other people, then that right starts to evaporate. And Brittan actually said that a lot of these rights are designed more to protect you as an individual from the government trying to censor you, and that we're mostly communicating on these private platforms. Within the context of those private platforms, each of them has its terms of service and code of conduct, and those actually define the rights to freedom of speech that we have within the context of those companies. And when I talked to the ACLU's Jennifer Granick, her perspective was that we actually need a huge plurality of many different companies with different policies, because if it all falls onto just one of these big major tech company platforms, and that is the only method we have to communicate and it's being controlled by these companies, then that becomes an issue and a problem. So from the ACLU's perspective, it's more about trying to encourage a whole plurality of many different places around the web where people can express themselves under different sets of rules. This seems to be an issue that is coming up more and more with a lot of different companies, but for people in the VR community who are building these real-time environments, how do you begin to navigate these various issues of hate speech, harassment, and different elements of online extremism? If it does start to happen, how do you cope with it and deal with it? It seems like what Brittan is saying is that each entity has the ability to set its own terms of service and to try to make community members as informed as possible as to what the rules are, and to try to establish different levels of social norms that are reinforced by the people participating within that community. And when it comes to different transgressions, it is up to the people who are on the front lines of hearing them to start to establish those social norms, either by saying things like "that makes me really uncomfortable" or "we don't talk about things like that here, take it offline," or by asking a question like "I don't understand why you're using terms like that," and turning it into a conversation and dialogue in that way.
And what Brittan is saying is that for a lot of the violations of harassment that happen on Twitter, for something like 80% of the people it's their first infraction, and they may not even know that what they did was against the rules. And Brittan said something that was very interesting, which is that the only people she knows who read the terms of service are the people who really want to take those terms right up to the limit, and to use whatever the rules allow to incite and bring about different levels of harassment. So she says there is going to be some percentage of any group, anywhere from three to ten percent, that has this more sociopathic orientation, who are really just repeat abusers and trolls trying to break the social dynamics, and they do start to see it as a game. And she suggests that you should be designing for the 97% of the people who want to be there and want to respect the rules, and not design everything around these edge cases. But you do need to have some sort of solution for when those edge cases do show up: how do you deal with creating these safe online spaces? And so it was fascinating for me to hear that in the course of this lawsuit that Brittan had brought, she did actually have the opportunity, in the terms of the settlement, to come face to face with some of the people who were her harassers and abusers. And she said that a lot of them just thought it was a game. They were so disconnected from the fact that this was an actual human being. They didn't realize that it was actually impacting someone's life; they just thought of it as a game, and they said they didn't mean to hurt her and that they were truly sorry. But just having that opportunity for Brittan to explain the harms that she had experienced, to share her story and be heard in that way, sounds like it was actually a pretty huge part of her own healing process. And for her, she really considered that a victory and what she really wanted to get out of this whole process. But she was also trying to prove that the system wasn't really set up for normal, ordinary people to be able to get any sort of redress when they're facing different levels of online harassment. And I think this speaks to the question of to what degree we can rely upon these technologically mediated solutions to solve a problem that may fundamentally be a human cultural problem. She did say that some of the smarter solutions that she's seen try to deal with the social dynamics first. They don't treat it as a technologically mediated problem; the solutions that actually stop the harassment are oriented around interpersonal interactions first. And she did say that one of the things you can do as a designer is to actually show the human eyes. Once eye tracking gets into virtual environments, it's going to be interesting to see if that helps mitigate any of these dynamics, and maybe it will give us a better sense of each other's humanity. Or, on the other hand, it could actually make things even worse, by making it easier to get a reaction out of people through the emotional expressions that are happening within the headset or within the eyes.
She had three main points that she was trying to get across in terms of how to curtail and address hate speech. Number one is to just add some sort of productive friction. So in the case of Instagram, or other sites like it, you could actually do some sort of natural language processing to see if there are any hate speech terms, words from a dictionary of terms that indicate you're trying to dehumanize people, and then actually pause and say, hey, do you realize that what you're about to post is something that, in the context of Rwanda, was part of an incitement to violence that resulted in the genocide of over 800,000 people? So that's one thing you can do. Now, in terms of virtual reality environments, you're talking about real-time processing with that type of artificial intelligence, and that's something where you would actually need to be monitoring everything that people are saying. I'm not necessarily sure that's the type of thing we want, though maybe in public environments we are somehow consenting to have that extra layer of protection and safety. Still, there are a lot of privacy risks and implications in having everything that you say be monitored by some sort of AI moderator that's trying to pick up on these things, along with the larger context and the sarcasm. There are a lot of ways that could go horribly wrong. But that's one solution for slowing things down, especially when you look at asynchronous posting of written text, where you can do that layer of automatic moderation. Another thing is to speak up and enforce these social norms. Like I mentioned earlier, that means confronting people face to face and trying to establish what is acceptable behavior and what is not. And there's a certain element where, even though people may be using pseudonyms and not their actual identity, whatever name they're using does have some connection to their avatar representation and who they are. So there could be some sort of lasting impact, some cohesion of identity that they want to maintain, and so there's some ability to track your reputation, your identity, and what you're actually saying. Now, that obviously has limits when you have people who are changing their names or don't have anything tying back to their physical identity, but from what Brittan was saying, there does seem to be at least some level of social norms being established and enforced by the people who are actually in that environment. So there still seem to be quite a lot of open questions: What is the line between the digital and the real? How do we start to deal with sexual harassment? And looking at issues of domestic violence, in a lot of those cases, if there's a threat of physical violence, then the law applies. But if there's online harassment, or if it's mediated through technology in some way, then it's treated as some sort of different ontological class within the legal system.
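To make that first point about productive friction a bit more concrete, here's a minimal sketch in Python of what a pre-post check against a lexicon of dehumanizing terms could look like. The lexicon, function names, and warning text are purely hypothetical illustrations on my part, not how Instagram or any other platform actually implements this; a real system would need context awareness, multiple languages, sarcasm handling, and human review to avoid false positives.

```python
# A minimal sketch of the "productive friction" idea: before a post is
# published, check it against a small lexicon of dehumanizing terms and ask
# the author to pause and reconsider. Everything here (the lexicon, the
# function names, the warning copy) is a hypothetical illustration.

import re

# Hypothetical, deliberately tiny lexicon. Dangerous-speech research flags
# language that compares a group of people to vermin, insects, or disease.
DEHUMANIZING_TERMS = {"cockroaches", "vermin", "infestation", "subhuman"}


def find_flagged_terms(text: str) -> list[str]:
    """Return any lexicon terms that appear as whole words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(set(words) & DEHUMANIZING_TERMS)


def productive_friction(text: str) -> bool:
    """Ask the author to confirm before posting flagged content.

    Returns True if the post should go through, False if the author cancels.
    """
    flagged = find_flagged_terms(text)
    if not flagged:
        return True  # nothing flagged, post immediately
    print(f"Heads up: your post contains {', '.join(flagged)}.")
    print("Language like this has historically been used to incite violence "
          "against groups of people. Do you still want to post it?")
    answer = input("Type 'yes' to post anyway, anything else to cancel: ")
    return answer.strip().lower() == "yes"


if __name__ == "__main__":
    draft = input("Draft post: ")
    if productive_friction(draft):
        print("Posted.")
    else:
        print("Post canceled.")
```

Note that the point of the friction isn't to block the post outright; it just interrupts the moment and asks the person to pause, which is why this sketch still lets the author post after confirming.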
And so she was saying there could be some lessons learned from what's happening in these domestic violence cases, to see how that starts to play out, and perhaps to start to erode that physical versus non-physical differentiation and establish that there's a more consistent experience when you look at the differences between what happens online and what happens face to face. And I think that's what Jessica Outlaw has said to me: that there are certain aspects of sexual harassment and sexual assault where the behaviors that happen online in a virtual space may start to trigger the same type of physical reactions that you would have if you were actually physically assaulted or experienced sexual harassment in person. Seeing how there could be levels of experience that are consistent across the virtual and the real could help actually define where that line should be, and then we can create the laws around that. And then finally, some of the open questions that Brittan is looking at are just how the pre-existing laws encapsulate what's happening in these virtual spaces and how they could be applied to the metaverse. So I really see what Brittan is doing as a lot of pioneering work in trying to help define what governance and policy look like within the metaverse and in these virtual and augmented reality worlds. What are the different tech components that are actually being built into the hardware that we should be worried about? For me, that's things like biometric data and other concerns for our privacy, and the trade-offs between having higher fidelities of expressing our embodiment within these situations and the potential risks to our biometric privacy and to aspects of ourselves that we don't actually want to be recorded and stored forever. And what are the different protections that we need to build into the technology so that we can see what types of fires are potentially down the line and start to prevent some of them? This is a lot of what I covered at the SIGGRAPH panel discussion that I had with Magic Leap, Mozilla, and 6D.ai, as well as with Venn Agency, looking at the different approaches these companies are taking toward privacy architectures, and thinking about how you can actually architect for privacy. I was super impressed with what Magic Leap is doing and the degree to which they're really seriously trying to do privacy-first design in everything that they're building, and I think they were able to talk about a lot of the specifics of that privacy-first architecture for the first time at this panel. So I'm excited to get that out and to be able to unpack it a little bit more. That's a little bit of what I think Brittan is talking about: what type of things can actually be built into these technologies that may have unintended consequences we can't see, or that could help prevent things we can see as we start to imagine how this may play out in the future. So the final thought is just how Brittan is really trying to find this balance between freedom and responsibility, and privacy, not seeing it as a polarity where you can only have one or the other, but trying to balance and integrate this freedom of expression with safety in these online spaces.
There is a right for people, in their own sovereignty, to be able to express themselves, but there are also limits to that, very functional limits that are informed by a lot of Susan Benesch's work on dangerous speech. It seems like a lot of those concepts are being adopted into these technology platforms' policies and community guidelines: dangerous speech has these hallmarks of inciting violence, and when you look at the different atrocities that have happened around the world, there are very consistent patterns that are independent of place, culture, and time, patterns that have been identified over time, and platforms are trying to embed that understanding within their larger policies to potentially start to curtail the behaviors that lead to atrocities. That's a lot of the work that Brittan is doing, both from the legal perspective and in talking to different technology companies and policymakers. And ultimately it's about trying to create a metaverse and technologies that amplify our joy, that allow us to make new connections and new art, and that give people a place where they feel like they actually belong, where they can connect to each other in these virtual spaces and potentially meet each other face to face in the real world as well. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. You can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
