NextMind, a Paris-based brain-computer interface (BCI) startup, debuted a $400 neural interface dev kit at CES this week, something the company intends to release to developers in the first half of 2020.

NextMind's device is a non-invasive electroencephalogram (EEG), a well-established method of measuring the voltage fluctuations of neurons from outside the skull. EEG has long been used in medicine, neurology, cognitive science, and a number of related fields, although compared to more invasive methods it's a bit like trying to figure out what's happening in a stadium by listening to the crowd's roar: you can infer some things, but you're not getting the whole picture.
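
To make that concrete, here's a minimal sketch (not NextMind's code; the 250 Hz sample rate and band choices are assumptions) of how raw EEG voltages are commonly reduced to something an application can act on, using synthetic data and a simple band-power estimate:

```python
import numpy as np

FS = 250  # sample rate in Hz (a common consumer-EEG figure; assumed, not NextMind's spec)

def band_power(samples, low_hz, high_hz, fs=FS):
    """Estimate signal power within a frequency band from an FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[mask].mean()

# One second of fake "EEG": a 10 Hz alpha rhythm buried in noise.
t = np.arange(FS) / FS
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(FS)

alpha = band_power(eeg, 8, 12)   # alpha band, prominent over the visual cortex at rest
beta = band_power(eeg, 13, 30)   # beta band, for comparison
print(f"alpha/beta power ratio: {alpha / beta:.1f}")
```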

The device attaches to the back of the head with a simple forehead strap, where eight prong-like electrodes pick up brain waves from the visual cortex. The company is pitching a number of use cases, one of which is its potential application in VR headsets.

Photo captured by Road to VR

The name of the game with NextMind's EEG dev kit is measuring user intent. NextMind advisor and investor Sune Alstrup told me this is intimately tied to the company's machine learning algorithms, which analyze and classify the resulting brain waves in real time to determine what a user is visually focusing on.

Alstrup is the founder and former CEO of eye-tracking company The Eye Tribe, which was acquired by Facebook in late 2016. To him, eye-tracking alone doesn't cut it when it comes to building a complete picture of what a user actually intends to manipulate when looking at any given object, something he referred to as "King Midas' Golden Gaze." Like the legendary King's ability to turn everything to gold with a single touch, a human's eye 'touches' all that it can see without revealing where the user's focus actually lies. To sidestep this, eye-tracking-based UI modalities typically rely on how long you hold your gaze on an object, which in VR usually means some form of dwell timer or countdown to confirm that the thing you're looking at is actually meant to be selected or manipulated (see the sketch below).
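
For illustration, a gaze-dwell selector of the kind described above might look something like this minimal sketch; it isn't tied to any particular headset SDK, and the one-second dwell time and "menu_button" target are placeholders:

```python
import time

DWELL_SECONDS = 1.0  # how long gaze must rest on a target before it counts as a selection

class DwellSelector:
    """Report a gaze target as selected only after it has been held for a dwell period."""

    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.target = None
        self.started = 0.0

    def update(self, gaze_target, now=None):
        now = time.monotonic() if now is None else now
        if gaze_target != self.target:      # gaze moved to a new target: restart the countdown
            self.target = gaze_target
            self.started = now
            return None
        if gaze_target is not None and now - self.started >= self.dwell:
            self.started = now              # reset so we don't re-trigger every frame
            return gaze_target              # this object is "selected"
        return None

# Simulate ~2 seconds of 60 Hz frames with the gaze parked on one (hypothetical) button.
selector = DwellSelector()
for frame in range(120):
    if selector.update("menu_button"):
        print(f"selected menu_button at frame {frame}")
    time.sleep(1 / 60)
```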


It's clear that the next generation of VR headsets is heading down the path of integrated eye-tracking though, and Alstrup sees EEG data working in concert with it.

I got a chance to try out NextMind's dev kit at CES this week, and while it's undoubtedly early days, the company is clearly confident enough in its appeal to productize the device (albeit only for devs at this point) and bring it to market at a relatively low cost. Priced at $400, the dev kit is going first to select partners, with a wider developer release to follow in Q2 2020.

The wireless device communicates via Bluetooth and does a portion of the processing on-device, while offloading the machine learning tasks to the same PC driving the VR experience. Exactly how much processing power is needed to run it, NextMind wouldn't say, however I was told it could be used with a more humble setup like a standalone VR headset, provided hardware manufacturers integrated the technology internally.
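
NextMind hasn't published its protocol or said exactly where the processing split falls, so the following is purely illustrative: a sketch of the device/host division described above, with a simulated wearable doing light feature extraction and a simulated PC process running the heavier classification step. All names, rates, and thresholds here are made up.

```python
import queue
import random
import threading
import time

packets = queue.Queue()  # stands in for the Bluetooth link between the puck and the PC

def device_loop(stop):
    """Simulated wearable: light on-device preprocessing, then ship small packets."""
    while not stop.is_set():
        raw = [random.gauss(0.0, 20e-6) for _ in range(64)]   # fake electrode samples (volts)
        feature = sum(abs(x) for x in raw) / len(raw)         # trivial stand-in feature
        packets.put({"t": time.time(), "feature": feature})
        time.sleep(0.05)                                      # ~20 packets per second

def host_loop(stop):
    """Simulated PC: runs the heavier 'classifier' alongside the VR app."""
    while not stop.is_set():
        try:
            pkt = packets.get(timeout=0.5)
        except queue.Empty:
            continue
        focused = pkt["feature"] > 15e-6                      # placeholder for a real model
        print(f"{pkt['t']:.2f}  focus={focused}")

stop = threading.Event()
threads = [threading.Thread(target=fn, args=(stop,)) for fn in (device_loop, host_loop)]
for th in threads:
    th.start()
time.sleep(1.0)
stop.set()
for th in threads:
    th.join()
```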

Photo captured by Road to VR

I was treated to two demos at NextMind's CES booth. In my first demo, which took place outside of a VR headset, I was told to focus on one of three small cubes that flashed sequentially with different colors: red, blue, and green. Focusing on a cube for long enough would turn an adjacent lamp the color I was focusing on at that moment. It worked fairly quickly and reliably for a while, however I was told that the busy CES show floor, polluted with Bluetooth cross-talk, would eventually cause a gradual desynchronization and a failure to respond accurately. I experienced this after a few minutes of reliably changing the lamp's color; while it lasted, I really felt like some sort of off-brand Jedi Padawan.

I was then ushered to an HTC Vive fitted with the TIE Fighter-shaped puck. After a few false starts and some fiddling to get the device properly seated on my noggin, I was able to get some sense of how NextMind is implementing its technology, at least in the context of this demo.


The VR demo was fairly similar to the cube demo: focus on an alien's flashing, pulsing brain until the system determines that you're actually looking at it and maintaining your gaze. Both demos made heavy use of sequentially flashing lights, which generate a synchronized pattern that NextMind can measure and then interpret as a Boolean 'yes, I see this signal' or 'no, I don't see this signal'.

This, the company told VentureBeat, will change in the future though; the blinking lights aren't important in and of themselves, so much as the change they create in the display.

“It could be a change in color, for instance,” NextMind CEO Sid Kouider told VentureBeat. “Your brain has to process new information. We need to generate a neural response.”
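
NextMind hasn't detailed its decoding pipeline, but the 'flashing pattern in, yes/no out' description resembles classic visual evoked potential tagging, where each on-screen target flickers with a known signature and the decoder checks which signature dominates the EEG. A toy sketch of that idea, with made-up flicker rates and a synthetic signal:

```python
import numpy as np

FS = 250  # sample rate in Hz (assumed)
TAG_RATES = {"cube_red": 7.0, "cube_blue": 9.0, "cube_green": 11.0}  # hypothetical flicker rates

def dominant_target(eeg, fs=FS, rates=TAG_RATES):
    """Return the target whose flicker frequency carries the most power in this EEG window."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = {}
    for name, hz in rates.items():
        band = (freqs > hz - 0.5) & (freqs < hz + 0.5)   # narrow band around the tag rate
        scores[name] = spectrum[band].mean()
    return max(scores, key=scores.get)

# Fake a user attending to the 9 Hz cube: its evoked response rides on top of noise.
t = np.arange(2 * FS) / FS
eeg = 10e-6 * np.sin(2 * np.pi * 9.0 * t) + 5e-6 * np.random.randn(len(t))
print(dominant_target(eeg))  # expected: "cube_blue"
```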

In the end, I'm fairly skeptical of the benefits of EEG over integrated eye-tracking at this point. NextMind says it's aiming for a point where its machine learning stack can decode the shape of an object itself, without relying on object color and a brightly flashing pattern. That said, I still don't see how going from the eyeball through the brain, the skull, and the scalp is any better than a smartly designed UI that compensates for eye-tracking's inherent shortcomings. Then again, I'm not a neuroscientist, so I'll just have to keep an open mind for now.




  • knuckles625

    Ooh that pun at the end… Couldn’t resist, could you?

  • MosBen

    I’m also not a neuroscientist, but in the things that I’ve heard about brain-machine interfaces we’re still in the very early days. This product seems like it might be a bit early.

  • Andres Velasco

    Lol, the CIA and the NSA would love to grab all the data provided by every human that uses it

    • Eric Draven

      they can grab this piece of data whenever they want

  • If all they are doing is making a blip on the visual cortex based on where you're facing, how is this any different from just finding the gaze of the camera, measuring time, and doing the action? From the video, it seems a bit fishy.

    • Because, supposing the BCI product works perfectly, it can tell whether you're looking at an object because you're interested and concentrating on it, or just because your eyes have to look somewhere :)

      • Guest

        Correct…
        Any UI and input device, no matter how well designed, will always have room for ambiguity about user intention.
        The ultimate dream of BCI in consumer electronics is really to ditch any other inputs or UI, so that the device understands the user's intention perfectly without having to navigate through layers of UI or accidentally triggering unintended behaviours.

        And also the reverse – inducing realistic haptics through brain stimulation, not just the rumble sensation of current haptics technologies.

        Yes, we are far, far from that. But we must start somewhere! Even if the BCI device merely reads users' moods, there would be opportunities for loads of interactive applications.

  • I'm fascinated by BCI, but as you say, we are in the very early days. And personally I don't believe that we'll be able to have a perfect BCI without putting something into the brain.

  • dota

    Remember The Lawnmower Man. BCI can increase the attention capacity of humans.

  • dracolytch

    I work with some very, very good BCI people, and they say it's still very early days for getting a good signal (especially from a dry sensor). That's a Vive Pro Eye in the demo, and it talks about how he can “focus his visual attention.” How much of this is eye tracking, and how much is BCI? Something's not sitting right here.

  • Bob

    Early adopters always pay the price!

    • asdf

      yeah, $400 isn't that bad just to fulfill curiosity. With low expectations maybe you'd be surprised by what it could do, and it might help some people start to theorize or get a better idea of BCI entertainment before it's here. I could see myself getting something around that price just to play with in Unity, kinda like the Leap Motion, but I'd never get something as expensive as the Magic Leap at $2k+ just for fun

      • Joe

        I'd say that's rather steep just to fulfill a curiosity. Yet it didn't stop me from paying Emotiv even more: $800 for the 14-channel EPOC+ and a simple program. The problem: I needed access to the raw signals, and that was locked behind a price tag of $100 per month. For a rookie programmer, that's way too much. The EPOC+ focused on the frontal cortex while the one mentioned above focuses on the visual cortex. Yet, to have true control of a virtual avatar, it needs to focus on the pre-motor and motor cortex.

  • homey kenobi

    the back of the brain is awkward; the front of the brain near the face is reasonable. You would think they would correlate the two ends to make a rudder to decide yay or nay. Also measure head tilts to add another test, and measure heart rate too. The brain works on an in-and-out metric, I think, like eating: in is refined, out is awkward. In at the front of the brain, out at the back of the brain.

  • kool

    An ideal situation would be cameras to pick up your arm motion, a BCI for locomotion, and some sort of haptic feedback.

  • Raymond H Perez

    Incredible tech.

  • Joe

    One piece of VR hardware being developed is the omnidirectional treadmill. One of the hurdles has been the delay between a change in the user's motion and the treadmill's response; the treadmill basically watches the user and reacts as quickly as possible. This is where a BCI could do wonders. Rather than the visual cortex mentioned above, monitor the pre-motor and motor cortex. The idea is for the treadmill to find out how it's supposed to move by detecting the signals in the brain rather than watching and reacting to the user's body. That should significantly reduce, if not remove, delays in the hardware.