Universal AR for Unity SDK with developer Jordan Campbell

9 min read
Last week we were excited to announce the release of our Universal AR for Unity SDK, offering Unity developers the ability to use Zappar’s leading image, face, and instant world tracking APIs within one of the world’s most popular game engines. To find out more, our Head of Commercial, Dave Mather, sat down with Platform Developer Jordan Campbell for a demo and to talk all things Unity and Universal AR.

 

Check out the full interview 


Dave Mather
So to kick us off, do you want to just tell us a little bit about your background, life before Zappar and life at Zappar?


Jordan Campbell
So going back a little bit, I was originally studying neuroscience at university, so that sort of kick-started my academic career. While I was doing that, I was interested in computer vision, maths and computer science, so towards the end of my degree I started doing more computer science type work and a little bit of computer vision research, and I spent a couple of years doing that. Then I wanted to transition into more creative, augmented reality type content and experiences, so I spent a little while building a photogrammetry app, looking at applications for architecture and things like that, which didn't quite pan out as well as we would have hoped. We got some cool tech, but the market wasn't quite there for it. Then I joined Zappar in London, I moved over here, and I've been at Zappar now for 18 months and loving it, it's a great place to work. At Zappar I do a little bit of computer vision work, and I also, as you mentioned, Dave, worked on the Unity plugin for Universal AR as part of a small team.


Dave Mather
Awesome. So, I guess thinking about Universal AR and the Unity SDK specifically, how would you, as a developer, describe Universal AR?


Jordan Campbell
So Universal AR is essentially a way of bringing Zappar’s computer vision technology, the algorithms that enable face tracking, instant tracking and image tracking, into one common platform. Really, the point of Universal AR is that if you want to build an experience for, say, Android and iOS, instead of having to build two separate applications that run on those two different devices, you can just use one single underlying SDK.

So you have one single place that you go to for the thing that powers that augmented reality experience, and then you can choose the tool on top of that that you want to use to create the content. I think developers are already choosing the tool, they're already choosing whether to use Unity or three.js, based on the content they're creating. They don't also want to then have to ask: what devices can I run this on? What underlying SDKs do I have to use? Universal AR takes away that compromise and gives them one single SDK that they can use across all those different platforms.


Dave Mather
That's incredible, that's surely a huge time saver for developers. 


Jordan Campbell
It's very helpful. And the good thing about Universal AR is that it's very quick to get up and running. The API itself is very straightforward: you have some options for controlling cameras, you have some options for creating tracking types, and then you have options for getting the poses out of those tracking types, so as your experience is running you know where the content should be placed. For developers, it's very quick to get up and running and to start creating experiences using Universal AR.
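
In practice, the shape Jordan describes, a tracking type that yields a pose you read each frame, looks something like this minimal sketch. The `GetLatestPose` helper is a stand-in for whatever the SDK actually exposes, not its real API; only the matrix handling is standard Unity.

```csharp
using UnityEngine;

// Minimal sketch of the pattern described above: each frame, read a pose
// out of a tracking type and apply it to your content.
public class FollowTrackedPose : MonoBehaviour
{
    public Transform content; // the object to anchor to the tracked target

    void Update()
    {
        Matrix4x4 pose = GetLatestPose();
        content.position = pose.GetColumn(3); // translation column
        content.rotation = pose.rotation;     // orientation
    }

    Matrix4x4 GetLatestPose()
    {
        // Placeholder: in a real project this pose would come from the
        // tracker (image, face, or instant world) each frame.
        return Matrix4x4.identity;
    }
}
```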

 


Dave Mather
Perfect. Awesome. So what's been your role in the development and implementation of the Unity SDK itself?


Jordan Campbell
So the underlying core of Universal AR was developed in-house, mostly by our CTO, Connell. That layer was mostly provided to me, and I then had to develop all the Unity integration for it. I have some underlying plugin libraries that I take, and I have to make Unity understand how to use them. I have to provide functions that do some conversions, for instance converting between the device's coordinate space and Unity's; I have to make sure the render loop runs really efficiently, so that putting camera frames on the screen doesn't degrade the performance of your application; and I had to worry about things like where we're loading models from, making sure that on all the different platforms everything we need is available at the right point in time. So really my job was to take that core underlying Universal AR runtime and expose it in a way that makes it available through Unity.
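
To make the coordinate-space conversion Jordan mentions concrete: many native computer-vision stacks report poses in a right-handed coordinate system, while Unity is left-handed (Y up, Z forward). One common technique is to conjugate the pose with a Z-axis flip. Whether Zappar's runtime uses exactly this convention is an assumption here; the sketch just illustrates the general idea.

```csharp
using UnityEngine;

// Convert a right-handed pose matrix into Unity's left-handed space by
// conjugating with a Z-axis mirror. Conjugating (rather than a single
// multiply) keeps the result a valid rigid transform.
public static class PoseConversion
{
    static readonly Matrix4x4 FlipZ = Matrix4x4.Scale(new Vector3(1f, 1f, -1f));

    public static Matrix4x4 RightHandedToUnity(Matrix4x4 rightHandedPose)
    {
        return FlipZ * rightHandedPose * FlipZ;
    }
}
```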


Dave Mather
Perfect. And so for Unity developers who are already using, or are used to, something like AR Foundation or, more recently, Unity Mars, how does Universal AR for Unity differ from those two? Could you give us a quick comparison?


Jordan Campbell
Yeah, that's a great question, and definitely something I think a lot of people will be asking themselves. The first thing to know is that AR Foundation and Mars are really two separate things, but they are quite interdependent in Unity. At the very bottom of the stack we have ARKit and ARCore, which are the core platforms that Apple and Google provide for doing AR. AR Foundation is a layer on top of those two that enables you to create AR experiences inside of Unity that utilize ARKit and ARCore.

Then, to actually build an AR experience, the team at Unity decided to create Mars. Mars is essentially a set of tools that augments the Unity editor itself, making it easier to create AR experiences inside of Unity. So Unity is leveraging technology provided by Apple and Google; those technologies are great, Unity has provided a great wrapper around them, and it works fantastically, but the problem is that they are relying on code from other people. Those two APIs don't quite match up all the time; there are some features provided by ARKit that aren't provided by ARCore, for instance. The problem with this for developers is that they have to make trade-offs and compromises: if you're using AR Foundation, you can't necessarily do everything you want, because a feature available in ARKit might not exist in ARCore. And in terms of Mars, Mars is a great development environment, but it still leverages AR Foundation, so you still have those trade-offs and compromises. Really, the other big problem is that you simply can't build these mobile AR experiences on the web.

That's really a big missing link in the puzzle. Most people, I think, would expect these experiences to be able to run on the web, as the web is a primary place where AR content can thrive. You really want to be able to leverage all the technologies that you have, and you want to be able to distribute your content the way that you want to. So, to summarize your question: the difference is that AR Foundation relies on a set of APIs that aren't necessarily consistent between themselves and which aren't owned and developed by Unity, which means Unity is relying on other people's software, whereas Zappar provides all of our own in-house computer vision algorithms. All of our algorithms therefore run across all platforms, iOS, Android and WebGL, and all have consistent APIs. So if, for instance, you're developing in Unity and you think, "oh, wait a minute, I actually want to use three.js", you can switch to three.js and create a whole new set of experiences using exactly the same API, knowing that your experience is going to run in all the same places on the web. That really is quite powerful. For developers evaluating whether to use Unity's built-in tools or Zappar’s tools, the question is really about the trade-offs they want to make, what experiences they're developing, where they want to distribute them, and what they want the longer-term management of the project to look like.


Dave Mather
Absolutely, yeah, that's a really good point. So does that mean that at the point of compile there are no code changes? You can just choose whether you want to publish to iOS, Android, or the web?


Jordan Campbell
Exactly. Universal AR is designed so that essentially the same code is running on all these different platforms. For the developer, that means they don't have to think about what happens on the web versus what happens on iOS or Android; they can just create an experience. A really simple one might be a face-tracked experience where you have a bunch of cubes floating around your head, and this type of experience is going to be really simple to create in Unity. Once you have built the experience inside Unity, you've seen it running in play mode, you've seen it live in the editor and you've done a few tests, you're ready to publish. Then all you need to do is click build: you build for iOS and it creates a little package for you, you build for Android, it creates a little package, you build for the web, it creates a little package, and you can then upload each of those. For instance, if it's for the web, you can upload through Zapworks; if it's for iOS or Android, you can upload through the App Store or the Play Store respectively. But you haven't made any changes to your code, you haven't made any configuration changes, you haven't made any settings updates; everything just works as it should.
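
This "no code changes" workflow maps directly onto Unity's standard build pipeline. Here's a small editor script that builds the same scene list for all three targets with no per-platform branching; the scene path and output paths are placeholders.

```csharp
using UnityEditor;

// Editor-only script: builds the same scenes for iOS, Android and WebGL
// with no per-platform branches, mirroring the flow described above.
public static class BuildAll
{
    static readonly string[] Scenes = { "Assets/Scenes/Main.unity" };

    [MenuItem("Build/All Platforms")]
    public static void Build()
    {
        BuildPipeline.BuildPlayer(Scenes, "Builds/iOS", BuildTarget.iOS, BuildOptions.None);
        BuildPipeline.BuildPlayer(Scenes, "Builds/Android/app.apk", BuildTarget.Android, BuildOptions.None);
        BuildPipeline.BuildPlayer(Scenes, "Builds/WebGL", BuildTarget.WebGL, BuildOptions.None);
    }
}
```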


Dave Mather
Fantastic. So, can developers expect image tracking, face tracking and instant world tracking?


Jordan Campbell
Yup, everything that you would expect. We offer image tracking, and we also offer instant world tracking, which is very new. It's not quite extended world tracking, where you can wander around a room, but you can load an experience, tap on the screen, and your content will exist in the real world with no markers or images or anything to actually track; you can just place it on a table or on the floor. Instant tracking is really useful for experiences where you might want to display, for instance, a chair in your living room. We also have face tracking and face meshes, which are a little bit different: face tracking just means content attached to your face that will move around, whereas the face mesh will deform to the contours of your face, so if you open your mouth, the mesh will expand. This type of content is great for face filters or face painting applications.
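
The tap-to-place interaction behind instant world tracking can be sketched in plain Unity terms: content follows the camera until the user taps, then the anchor stays put. The actual SDK call that freezes the tracked anchor is not shown here, since that is SDK-specific; this just illustrates the pattern.

```csharp
using UnityEngine;

// Tap-to-place sketch: the anchor hovers in front of the camera until the
// user taps, then it stops following so the content appears fixed in the
// world. In a real project the freeze would go through the tracker itself.
public class TapToPlace : MonoBehaviour
{
    public Transform anchor;  // parent of the content being placed
    public Camera arCamera;   // the camera rendering the AR feed

    bool placed;

    void Update()
    {
        if (placed) return;

        // Keep the content hovering about two metres ahead of the camera.
        anchor.position = arCamera.transform.position
                        + arCamera.transform.forward * 2f;

        // A tap (or click) locks the anchor in place.
        if (Input.GetMouseButtonDown(0))
            placed = true;
    }
}
```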

Start building now with our bootstrap projects >>


Dave Mather
Great. Before we jump into the demo, which I'm really excited about, what advice would you give to existing Unity developers wanting to give the SDK a try?


Jordan Campbell
Yeah, so it's easy: you just download the SDK, import it as a package, and then you're off. Whether you're already using Unity and have a project you think you might want to integrate the SDK into, or you're starting something new, it's really easy to get started. It doesn't add any dependencies outside of the actual example library itself, so you're not going to introduce anything into your project that you don't want, and you can take the Zappar library back out again quite easily if you want to. I think developers will be pleasantly surprised by how well it performs, how easy it is to get set up and started, and how easy it is to build all sorts of exciting experiences.

 

Get started now with our Unity SDK >>

 

Instant World Tracking & Face Tracking Demos

 

