
How Inworld AI Is Populating the Metaverse

We’ll need help to navigate the metaverse. To create that help, we need Inworld AI.

 

If you’ve played any video games at all – particularly open-world RPGs – you know that getting NPCs right can be hard. You might also have heard the theory that the metaverse will be too big to be entirely populated by human agents. So, if we’re going to have a metaverse for games, shopping, business, etc., we need something better. Inworld AI is working on it.

See Also:  Big Tech Drives Interoperability With the Metaverse Standards Forum

What is Inworld AI? Inworld AI is a company building a tool to create brains that experience creators can put into virtual bodies to populate the metaverse – whether that’s games, simulations for enterprise and education, embodied chatbots in immersive retail environments, or whatever else you can think of. ARPost talked with CEO Ilya Gelfenbeyn and CPO Kylan Gibbs to learn more.

A Colorful Background

Inworld AI only started as a company about a year ago, but the founding members are industry veterans. Gelfenbeyn was a co-founder of API.AI, which was purchased by Google and now exists as Dialogflow. Gibbs was formerly at DeepMind, Alphabet’s AI research lab. Other members of the executive team have backgrounds at companies including Magic Leap and Sony.

With such a storied team, the company has been able to gain backing from a number of funds and institutions including Microsoft’s Venture Fund, Meta, and the Venture Reality Fund, whose portfolio also includes Beat Games, Epic Games, Rec Room, Owlchemy Labs, 8th Wall, and Varjo.

“The idea is a continuation of what we were doing before – it’s a developer platform,” said Gelfenbeyn. “We’re focused on immersive experiences: speech, but also movements, gestures, and memory.”

A Look at Inworld AI

So, what is it, exactly? Inworld AI’s product consists of two main components: the engine, which does the work, and a creator studio that lets users shape the engine to fit their needs.

 

The creator studio, which I was shown in a remote demo, consists of text inputs, toggles, and sliders. There’s also a special panel of controls for experimental features including audible pauses and expressive movement.

Users adjust these both to input information about the “character” that they are creating and to set how the character will act and “feel” in interactions with real people. Creators can also keep a character text-only or choose from some 150 different voices.
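To make the idea concrete, here is a minimal sketch of what such a character definition could look like as data. Everything below – the class, field names, and slider keys – is a hypothetical illustration, not Inworld’s actual schema or API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch only: these names illustrate the concept,
# not Inworld's real configuration format.
@dataclass
class CharacterConfig:
    name: str
    backstory: str                        # free-text description of who the character is
    personality: dict = field(default_factory=dict)  # slider values in 0.0-1.0
    voice: Optional[str] = None           # one of ~150 voice presets, or None for text-only
    audible_pauses: bool = False          # experimental toggle: "um"/"hmm" pauses
    expressive_movement: bool = False     # experimental toggle: gesture hints

salesman = CharacterConfig(
    name="Vern",
    backstory="A friendly salesman who knows the dealership's inventory.",
    personality={"warmth": 0.8, "humor": 0.4, "assertiveness": 0.6},
    voice="voice-preset-12",
)
```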

“It’s meant to be highly accessible to everyone using it,” said Gibbs. “You are basically teaching an actor how it is supposed to behave.”

On the engine side, there’s a lot that an Inworld AI character can do without the user having to program it. For example, characters can recall past conversations. They can also access the internet to contextualize information and responses.
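The memory piece is easy to picture. Here is a toy sketch of per-player conversation memory – nothing here reflects Inworld’s actual internals, just the general concept of recalling earlier exchanges.

```python
from collections import defaultdict

class ConversationMemory:
    """Toy per-player memory: the concept only, not Inworld's implementation."""

    def __init__(self, max_turns: int = 50):
        self.max_turns = max_turns
        self.history = defaultdict(list)  # player_id -> list of (speaker, text)

    def record(self, player_id: str, speaker: str, text: str) -> None:
        turns = self.history[player_id]
        turns.append((speaker, text))
        # Keep only the most recent turns so context stays bounded.
        del turns[:-self.max_turns]

    def recall(self, player_id: str, keyword: str) -> list:
        # Naive keyword lookup; a production system would likely use embeddings.
        return [text for _, text in self.history[player_id]
                if keyword.lower() in text.lower()]

memory = ConversationMemory()
memory.record("player-1", "player", "My dog is named Biscuit.")
print(memory.recall("player-1", "biscuit"))  # ['My dog is named Biscuit.']
```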

But don’t be too afraid – the characters have “filters” that prevent them from using profanity or harmful speech. Inworld AI is also working on a way to “limit them to certain subsets of knowledge.”

(Image: Inworld AI product demo – Studio character creation)

This isn’t just a safety feature; it will also make the characters more believable and useful in practical interactions. (Imagine trying to get info on a car while the salesman monologues about Henry Ford.)
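Both controls can be pictured with a simple sketch: a blocklist that catches unwanted language, plus a topic allowlist that keeps the character within its subset of knowledge. The word lists and function names below are invented for illustration and say nothing about how Inworld actually implements this.

```python
BLOCKED_WORDS = {"dang", "heck"}          # stand-ins for a real profanity list
ALLOWED_TOPICS = {"inventory", "financing", "test drives"}  # the knowledge subset

def is_safe(reply: str) -> bool:
    """Reject replies containing blocked terms."""
    return set(reply.lower().split()).isdisjoint(BLOCKED_WORDS)

def on_topic(topic: str) -> bool:
    """Keep the salesman talking about cars, not Henry Ford."""
    return topic in ALLOWED_TOPICS

def respond(reply: str, topic: str,
            fallback: str = "Let's get back to the cars on the lot.") -> str:
    if is_safe(reply) and on_topic(topic):
        return reply
    return fallback

print(respond("We have three sedans in stock.", "inventory"))
print(respond("Henry Ford founded Ford in 1903...", "history"))  # falls back
```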

“At the end of the day, it will be consumers and end users who are interacting with these characters,” said Gibbs.

Talking to “Virtual Agents”

Right now, there’s no real way for people who aren’t using Inworld AI to interact with characters created on the platform before those characters show up in a completed project (although there are some cool videos on the company’s YouTube channel). The company is working on a companion app that would let non-creators talk to the characters.

This could serve a couple of ends. For one thing, it would help creators: more people interacting with an AI makes for a more nuanced, better-rounded AI. It could also help future end users get used to talking with AIs – though many of us have already done that more often than we might realize.

“In these virtual world environments, people are often more comfortable talking to virtual agents,” said Gelfenbeyn. “In many cases, they are acting in some service roles and they are preferable [to human agents].”

So, what makes it a metaverse project? Inworld AI makes the “brain,” but it doesn’t make the “body” – or even provide tools for it. Characters created in the platform can be inserted into avatars built in programs like Ready Player Me and MetaHumans.

See Also:  Geenee AR and Ready Player Me Partner to Bring Full-Body Avatars to WebAR Experiences

“Because we’re cross-platform, we want to be agnostic as to which avatar system you’re using,” said Gibbs. They call this model “Bring Your Own Avatar.”
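Architecturally, “Bring Your Own Avatar” implies a clean split: the brain emits platform-neutral output, and a thin adapter maps that output onto whichever avatar system the developer has chosen. The sketch below is a hypothetical illustration of that split, not Inworld’s actual SDK; all class and method names are invented.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class BrainOutput:
    text: str        # what the character says
    emotion: str     # e.g. "happy", "curious"
    gesture: str     # e.g. "wave", "nod"

class AvatarBody(ABC):
    """Adapter interface: one implementation per avatar system (names hypothetical)."""

    @abstractmethod
    def perform(self, output: BrainOutput) -> None: ...

class ReadyPlayerMeBody(AvatarBody):
    def perform(self, output: BrainOutput) -> None:
        print(f"[RPM avatar] says {output.text!r}, plays gesture {output.gesture!r}")

class MetaHumanBody(AvatarBody):
    def perform(self, output: BrainOutput) -> None:
        print(f"[MetaHuman] says {output.text!r}, emotes {output.emotion!r}")

# The same brain output drives either body -- the brain never knows which.
line = BrainOutput(text="Welcome to the showroom!", emotion="happy", gesture="wave")
for body in (ReadyPlayerMeBody(), MetaHumanBody()):
    body.perform(line)
```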

(Image: Inworld AI integrations with Unity and Unreal)

Once the character is complete in both mind and body, it can be integrated into virtual environments created using Unreal Engine or Unity. (The one exception is the testing environment in the developer’s app for Quest, where characters can be tried out directly.)

A Bigger, Better Metaverse

The cross-platform approach and the user-friendly nature of the tools are what make Inworld AI so special. After all, Inworld AI didn’t invent AI.

However, just as the metaverse’s sheer size creates the need for AI, that size is only possible through user-generated content. This might sound like a vicious circle – and it would be, if Inworld AI weren’t here to give users the power to create their own AIs.

Jon Jaehnig
Jon Jaehnig is a freelance journalist with special interest in emerging technologies. Jon has a degree in Scientific and Technical Communication from Michigan Technological University and lives in Michigan’s Upper Peninsula. If you have a story suggestion for Jon, you may contact him here.