This article features the latest episode of The AR Show. Based on a new collaboration, episode coverage joins AR Insider's editorial flow, including narrative insights and audio. See past and future episodes here or subscribe. Guests' opinions are their own.


One of the ongoing themes in our AR Show coverage is how executives exercise the muscles developed in early-career cross-training. That’s everything from Jeri Ellsworth’s early experiences building and racing cars to Derek Belch’s time as an athlete and coach in college football.

The same can be said of Avegant CEO Edward Tang. As he discusses on the latest AR Show episode, a familial affinity for science and engineering led to a gofer job in a silicon-wafer cleanroom during college. Then his big break came when he stepped up to the plate on a meaningful project.

The results launched him to the next level of recognition and, eventually, Defense Department work with MEMS (micro-electromechanical systems). These are everywhere, including the accelerometers and IMUs in our phones. But they're also deployed in AR optical systems, such as the one that powers the HoloLens 2.

With cross-training in these fields and some vision science, Tang was primed to tackle AR. Soon after, Avegant was born. Its first product, the 2016 Glyph, was a personal entertainment wearable that won CES' best new product award and remains a favorite of XR enthusiasts to this day.

But then came a crossroads. Though the Glyph was a hit, was the underlying technology the real long-term value driver? And if so, would that require killing the product? These are the types of hard decisions around prioritization that define early-stage entrepreneurial leadership.

“We were at this intersection point and were saying, ‘should we invest and build a Glyph 2.0 or should we focus on what we think is super valuable and push the state of the display technology forward.’ […] And we made this deliberate choice […] We took this next step to hyper-focus the company on what made us really valuable to the industry, which is building innovative display techniques, and not spending 80 percent of our time and resources on a factory floor in China shipping mobile movie theaters.”

So what’s the underlying technology? It’s based on a foveated display, which channels pixels to your center of vision — a moving target — using proprietary MEMS-based steering mirrors. This is purpose-built for near-eye information delivery and designed around the way the human eye works.

The benefits of this approach include efficiency: high-resolution imagery is delivered only where you need it. This is similar in concept to foveated rendering in VR — and likewise uses eye-tracking — though it’s much more complex, as it physically steers photons rather than just reallocating a rendering budget.

Another way to think about this is to contrast it with a common approach to AR glasses, which involves tiny screens known as microdisplays. These, conversely, utilize technologies initially purpose-built for other products — like your TV — and carry their physical limitations.

“Let’s say you wanted to create a 90-degree field of view display. This is a great target for AR. If you think about a 60-pixels-per-degree target of retinal resolution in the traditional method, what I would do is take a panel and cram as many pixels onto it until I get to 60-pixels-per-degree. And those types of pixel counts would be approximately 30 million pixels-per-eye […] A 4k display is something like 8-million pixels […] If I were to do that using foveation, instead of 30 million pixels-per-eye, I need something like 2 million pixels-per-eye. So instead of using something like 4x a 4k chip, I only need a single 1080p chip […] So as I keep increasing the field of view, I don’t really need a lot more pixels. All the pixels that are needed were mostly sitting in the center of my vision […] You no longer have this opposing constraint of performance and field of view.”
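Tang’s arithmetic is easy to sanity-check. Below is a minimal Python sketch, assuming a square field of view and uniform angular resolution in each region, with an illustrative foveal/peripheral split (a 20-degree high-acuity region at 60 pixels per degree and roughly 10 pixels per degree elsewhere — Avegant’s actual parameters aren’t public, so these numbers are assumptions for illustration only):

```python
# Pixel-budget comparison: full-resolution panel vs. foveated display.
# Simplifying assumptions: square field of view, uniform angular pixel
# density within each region. Illustrative only -- not Avegant's design.

def traditional_pixels(fov_deg: float, ppd: float) -> float:
    """Pixels needed to cover the entire FOV at retinal resolution."""
    return (fov_deg * ppd) ** 2

def foveated_pixels(fov_deg: float, fovea_deg: float,
                    fovea_ppd: float, periph_ppd: float) -> float:
    """High-acuity pixels for a small steered foveal region, plus a
    low-resolution fill for the rest of the field of view."""
    fovea = (fovea_deg * fovea_ppd) ** 2
    periphery = (fov_deg ** 2 - fovea_deg ** 2) * periph_ppd ** 2
    return fovea + periphery

FOV_DEG, RETINAL_PPD = 90, 60  # the targets Tang cites in the quote
print(f"Traditional: {traditional_pixels(FOV_DEG, RETINAL_PPD) / 1e6:.0f}M pixels/eye")

# Assumed split: 20-degree foveal region at 60 ppd, ~10 ppd periphery
print(f"Foveated:    {foveated_pixels(FOV_DEG, 20, RETINAL_PPD, 10) / 1e6:.1f}M pixels/eye")
```

With those assumed inputs, the traditional budget comes out to roughly 29 million pixels per eye and the foveated budget to roughly 2.2 million — in line with the ~30 million and ~2 million figures Tang cites, or about a single 1080p panel’s worth.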

If this is true, why are others taking microdisplay approaches? Tang posits that engineers are very good at incremental problem-solving — and that’s what’s happening as existing microdisplay technology is brought to AR glasses. But what’s needed, he argues, is a transformative, step-function change.

Tang conversely takes inspiration from the human eye for a radically different approach. Human optical systems are incredibly complex, but our eyes paradoxically have relatively low acuity outside the fovea. The way our brains process and render our perception is where the magic happens.

So to achieve an intended perception — the underlying goal of AR — could we sidestep technological challenges in display systems by instead doing what the brain does? In other words, could we create technology that uses less light input but “tricks” us into seeing more?

“Our eyes are actually terrible image sensors but they’re coupled with this amazing GPU that is our brain. We think we see the world in ways that are high quality. But that’s not really how our eyes are working. When I look around this room in front of me, I can see everything in great detail, I see all these colors around the room. But that’s actually not what my eyes are seeing. That’s what my brain is piecing together […] I think we oftentimes find a lot of innovation when we look at things that are inspired by nature. So when you think about these displays, we’re ultimately building for our eyes and for efficiency. We should really be innovative and be motivated by how nature has built our vision system in such an incredibly efficient way.”

Panning back, the same first-principles thinking behind Avegant’s technology also stands behind Tang’s view of the AR market and the products that will succeed there. There, too, it’s about building from the ground up and shedding the “feature creep” of current consumer tech.

“We shouldn’t just try to replicate what people are doing today on a phone or a watch or a computer. I think that we’ve learned many times over, every time there’s a new compute platform out that you shouldn’t just do what you were doing before. […] I remember the early Windows phones that still had a start button and a cursor […] And there are a lot of [AR] applications that people are trying to create, such as making a bigger computer desktop, or a new way to check the weather and see your text message. I think that’s probably not the right way. We need to be focused on completely new applications. And similarly, we couldn’t have predicted the rise of applications like Uber. Similarly, we can’t predict where AR is going to go.”

As for what AR’s killer apps will be, Tang doesn’t believe they’ll just be more immersive versions of what we’re already doing on today’s devices. But they could satisfy the same basic human needs — including social interaction and the utilitarian, ambient delivery of daily information.

“When I think about some of the new applications that can happen on [AR] devices, two general themes come to mind. One is related to being social. I think being connected to people, and new forms of communication are going to be really powerful. The second is how do we use these displays to unobtrusively provide the information that we need in a way that makes my life easier — akin to something like a smartwatch. I’m wearing my smartwatch 12 to 16 hours a day. But how often is that watch actually on and telling me something? I think that will be similar to what AR is going to be doing for you, where these devices have incredible contextual knowledge of what’s going on around you. I think ultimately if we can get a technology that is not only compelling, delivering new high-value experiences, but also doesn’t get in the way of being human… that’s going to be a successful implementation of this technology.”

Listen or subscribe to the full episode at The AR Show or below, and see our archive of past and future episode coverage here.


For deeper XR data and intelligence, join ARtillery PRO and subscribe to the free AR Insider Weekly newsletter.

Disclosure: AR Insider has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.