Until recently I hadn't had a chance to see Mojo Vision's latest smart contact lens for myself, and I'll admit I expected the company was still years away from a working lens with more than a simple notification light or a handful of static pixels. Looking through the company's latest prototype, I was impressed to find it far more capable than I had anticipated.

When I walked into Mojo Vision’s demo suite at AWE 2022 last month I was handed a hard contact lens that I assumed was a mockup of the tech the company hoped to eventually shrink and fit into the lens. But no… the company said this was a functional prototype, and everything inside the lens was real, working hardware.

Image courtesy Mojo Vision

The company tells me this latest prototype includes the “world’s smallest” MicroLED display—a minuscule 0.48mm across, with just 1.8 microns between pixels—an ARM processor, 5GHz radio, IMU (with accelerometer, gyro, and magnetometer), “medical-grade micro-batteries,” and a power management circuit with wireless recharging components.

And while the Mojo Vision smart contact lens is still much thicker than your typical contact lens, last week the company demonstrated that this prototype can work in an actual human eye, using Mojo Vision CEO Drew Perkins as the guinea pig.

Image courtesy Mojo Vision

And while this looks, well… fairly creepy when actually worn in the eye, the company tells me that, in addition to making it thinner, they’ll cover the electronics with cosmetic irises to make it look more natural in the future.

At AWE I wasn’t able to put the contact lens in my own eye (Covid be damned). Instead the company had the lens attached to a tethered stick which I held up to my eye to peer through.

Photo by Road to VR

When I did I was surprised to see more than just a handful of pixels, but a full-blown graphical user interface with readable text and interface elements. It’s all monochrome green for now (taking advantage of the human eye’s ability to see green better than any other color), but the demo clearly shows that Mojo Vision’s ambitions are more than just a pipe dream.

Despite the physical display in the lens itself being opaque and directly in the middle of your eye, you can’t actually see it because it’s simply too small and too close. But you can see the image that it projects.

Photo by Road to VR

Compared to every HMD that exists today, Mojo Vision’s smart contact lens is particularly interesting because it moves with your eye. That means the display itself—despite having a very small 15° field-of-view—moves with your vision as you look around. And it’s always sharp no matter where you look because it’s always over your fovea (the center part of the retina that sees the most detail). In essence, it’s like having ‘built-in’ foveated rendering. A limited FoV remains a bottleneck to many use-cases, but having the display actually move with your eye alleviates the limitation at least somewhat.

But what about input? Mojo Vision has also been steady at work on figuring out how users will interact with the device. As I wasn’t able to put the lens into my own eye, the company instead put me in a VR headset with eye-tracking to emulate what it would be like to use the smart contact lens itself. Inside the headset I saw roughly the same interface I had seen through the demo contact lens, but now I could interact with the device using my eyes.

The current implementation doesn’t constrain the entire interface to the small field-of-view. Instead, your gaze acts as a sort of ‘spotlight’ which reveals a larger interface as you move your eyes around. You can interact with parts of the interface by hovering your gaze on a button to do things like show the current weather or recent text messages.
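Mojo Vision hasn't detailed how its gaze selection works under the hood, but the "hover your gaze on a button" interaction described above is commonly implemented as dwell-based selection: a button activates once the gaze has rested inside its bounds for a continuous threshold period. A minimal sketch of that idea (all names and timing values here are illustrative assumptions, not Mojo Vision's implementation):

```python
# Illustrative dwell-based gaze selection: a button "presses" once the
# user's gaze has stayed inside its bounds continuously for DWELL_SECONDS.

DWELL_SECONDS = 0.8  # assumed threshold; real systems tune this carefully

class DwellButton:
    def __init__(self, x, y, w, h):
        self.bounds = (x, y, w, h)
        self.dwell_start = None  # timestamp when gaze first entered the bounds

    def contains(self, gx, gy):
        x, y, w, h = self.bounds
        return x <= gx <= x + w and y <= gy <= y + h

    def update(self, gx, gy, t):
        """Feed one gaze sample (position + timestamp); return True on activation."""
        if not self.contains(gx, gy):
            self.dwell_start = None  # gaze left the button: reset the timer
            return False
        if self.dwell_start is None:
            self.dwell_start = t
        return (t - self.dwell_start) >= DWELL_SECONDS

# ~0.9 seconds of steady gaze samples landing on a hypothetical weather button
weather = DwellButton(0, 0, 100, 40)
samples = [(50, 20, i * 0.1) for i in range(10)]
pressed = any(weather.update(gx, gy, t) for gx, gy, t in samples)
```

The key design choice is the reset on exit: any saccade away from the button restarts the timer, which is what keeps ordinary glances from triggering accidental "clicks" (the so-called Midas touch problem of gaze input).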


It’s an interesting and hands-free approach to an HMD interface, though in my experience the eyes themselves are not a great conscious input device because most of our eye-movements are subconsciously controlled. With enough practice it’s possible that manually controlling your gaze for input will become as simple and seamless as using your finger to control a touchscreen; ultimately another form of input might be better but that remains to be seen.

This interface and input approach is of course entirely dependent on high quality eye-tracking. Since I didn’t get to put the lens on for myself, I have no indication if Mojo Vision’s eye-tracking is up to the task, but the company claims its eye-tracking is an “order of magnitude more precise than today’s leading [XR] optical eye-tracking systems.”

In theory it should work as well as they claim—after all, what’s a better way to measure the movement of your eyes than with something that’s physically attached to them? In practice, the device’s IMU is presumably just as susceptible to drift as any other, which could be problematic. There’s also the matter of extrapolating and separating the movement of the user’s head from sensor data that’s coming from an eye-mounted device.

Image courtesy Mojo Vision

If the company’s eye-tracking is as precise (and accurate) as they claim, it would be a major win because it could enable the device to function as a genuine AR contact lens capable of immersive experiences, rather than just a smart contact lens for basic informational display. Mojo Vision does claim it expects its contact lens to be able to do immersive AR eventually, including stereoscopic rendering with one contact in each eye. In any case, AR won’t be properly viable on the device until a larger field-of-view is achieved, but it’s an exciting possibility.

So what’s the road map for actually getting this thing to market? Mojo Vision says it fully expects FDA approval will be necessary before they can sell it to anyone, which means even once everything is functional from a tech and feature standpoint, they’ll need to run clinical trials. As for when that might all be complete, the company told me “not in a year, but certainly [sooner than] five years.”




Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • kontis

    Eye tracking alone cannot be widely used as an input.
    But add 1-bit BCI (a virtual mouse click / tap done naturally with thought) and you have an input method more revolutionary than the mouse or the touchscreen, which would quickly turn XR into a multi-billion-user industry and it would be game over for smartphones.

    And without 1-bit BCI it’s quite possible XR will never be able to offer enough of new disruptive value to convince humankind to transition to HMDs.

    Improving laziness is an absolutely gigantic value, one of the biggest known to the progress of civilization.

    So either HMD will be necessary or a brain chip implant like Neuralink. Contact lenses alone probably will never read our brains, even a single bit. And brain chips will probably be more niche geeky thing in first decades than VR is today, unless it provides too much of advantage to ignore.

    tl;dr No matter how amazing these contact lenses are, wearing something on your head will still be crucial and necessary.

    • CIR

      I agree, but I think the combination of the two technologies really will provide us with the next generation of superhuman input (proves how awesome the mouse is as an input device and how intuitive touchscreens are).

      Maybe at some point BCI devices could be integrated into something like an earbud. Surely the wireless downlink for the video will need to be relatively near the contact.

      I’m glad to see some early research coming out. We don’t often see much of this since it’s so early, but it goes to show how much further we have to go.

    • donkeyhigh

      We’ll probably still need to hotspot/bluetooth the lenses to our phones to get internet and notifications and stuff.. I don’t see a Sim-slot in my eye..

    • XRC

      The SQUID device (Superconducting QUantum Interference Device) in Kathryn Bigelow’s “Strange Days” was very prescient considering the movie’s release date in 1995.

      Very interesting seeing the potential of full body presence recording and playback through head worn neural lace, and the opportunity for bad actors to abuse the technology.

      • psuedonymous

        Actual SQUIDs (the sensitive magnetometer, not the movie device that stole the name for their mcguffin because they thought it sounded cool) were invented in the mid 60s.

        • Malkmus

          And how dare those 1960’s engineers steal that name from the ocean-dwelling species because they thought it sounded cool!

          …Just because those screenwriters called something SQUID doesn’t mean it’s stealing, especially when there’s no similarity in function. At most, it could be considered homage.

    • AS

      There are a multitude of input methods simpler than bci devices.

      Accurate and fast gaze detection could easily be trained to act as input.

      For example, a user could ‘press’ a button by following a sequence of gaze points such as looking at two corners of a square button for a period of time to press it. Even a blink could be interpreted as a trigger if combined with another motion.

      Stephen Hawking used a single infrared sensor to select characters and words displayed.

      Add machine learning to the mix and it becomes incredibly powerful, detecting eye movements as unique as a fingerprint.

  • Sofian

    Can’t they use the friction between the surface of the lens and the eyelids to get eye tracking?

    • Sven Viking

      There might be some difficulties because even if you calibrated it for the individual’s eyelids, people’s eyelids are frequently moving and contorting in different circumstances.

  • Nepenthe

    The thought of putting something in my eyes skeeves me out. 20/15 vision but I’m nearing 48 so I need readers now for reading labels or playing Switch. But for the right utility or experience I might be willing to try…

    But this probably isn’t for people like me, it’s for people born in, say, 2030, who will first try a very good version of this in, say 2047.

    • donkeyhigh

      Well at least we get decent VR for when we end up at retirement homes..

      • XRC

        See Bruce Willis movie “Surrogates”

      • david vincent

        I doubt there will be VR or retirement-homes in the post-collapse world

    • Jistuce

      Honestly, I’m on the other end of the vision spectrum and contacts give me the heebie-jeebies too.

      Fortunately, smart glasses are easier to make than smart contacts!

      So where are they? I am due a new set of glasses and I wanna get my cyborg on!

  • Glad to read that you have tried the lens too, and are as excited as me!

  • psuedonymous

    Just to confirm (because there has been… chicanery… in the display-in-contact-lens field before): was the viewable ‘demo lens’ setup displaying a live updated image, or a static image? Because the live image at least would mean a display was present and operating inside the lens, whereas a static image can be accomplished with a piece of microfilm.

    • benz145

      I believe it was a live image; they specifically claimed the display was included in the demo lens that I saw.

    • Armchair Hydrogeologist

      I can confirm that it’s a live dynamic image. This thing really works. It’s been shown at many venues – even to celebrities.

  • Clownworld14

    would be good if they enhance vision as well, see everything in crystal 10k!

    • benz145

      They have talked about this as a possibility but it seems somewhat further down the road.

  • Kim from Texas

    Normally, a “hard” contact only covers the black part (pupil) of the eye. I have worn “hard” contact lenses that cover the colored part of the eye. They can really only be worn successfully for short periods of time (6-8 hours). This is due to the limited amount of oxygen allowed through the material, and the large amount of the eye covered. A “soft” contact version would be much better, but an engineering nightmare due to the flexibility needed. This is not a product for an everyday person.

    • Armchair Hydrogeologist

      Scleral lenses are actually very comfortable because they don’t contact the cornea. I’ve worn both and scleral are better for wearing comfort. But scleral lenses are much harder to put in and often require a “suction cup” to take out. Inexperienced contact users already squeamish about soft lenses would have a long learning curve. There are many ways to solve oxygenation issues in ways that are superior to soft lenses as well. So it’s still possible for the everyday person. The measurement issues you describe are real.

  • donkeyhigh

    If I could connect this to my phone and stream from my phone to my eyes, I’d connect my wireless earbuds and controller to my phone, start up SteamLink and just stream my PC games directly to my eyes through my phone as I lay back in my bed with my eyes closed. Hope they make some small glasses as well so I can watch movies in bed at night, I don’t wanna fall asleep with the contacts in..