Augmented Reality and Hearables: Where the Two Technologies Meet

Hearables with AR features are becoming ubiquitous thanks to the overall spread of wearables, the contribution of tech giants, and the growing demand for emerging technologies.

 

Sponsored content by Softeq

Extended reality in visual solutions is making the headlines. But AR/VR solutions are not limited to Google Glass, mobile apps for trying on shoes or accessories, and AR-based games. Hearables with augmented reality features are becoming ubiquitous thanks to the overall spread of wearables, the contribution of tech giants, and the growing demand for emerging technologies.

How Augmented Reality Works

While virtual reality introduces a computer-generated environment, augmented reality enhances the physical world with virtual elements. In particular, AR places 3D models into real-world scenes and adds contextual information to a camera view or to audio played through headphones.

The technical challenge here is to attach superimposed imagery to a physical object in the real world. Generally, there are two types of AR solutions — marker-based augmented reality and markerless (sensor-based) augmented reality — that help developers merge content into the camera feed of a smartphone or head-mounted display.


To create marker-based AR solutions, techies use physical objects, imagery, or QR codes as markers. Algorithms extract features from the marker, including its size and position, and use that information to align virtual objects with the real-world environment.
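
As a minimal sketch of the marker-based approach, the snippet below detects a printed ArUco marker in a camera frame and estimates the camera's pose relative to it. It assumes an opencv-contrib-python build that still exposes the classic aruco functions (the module's API changed around OpenCV 4.7); the camera intrinsics, marker size, and input image are placeholder values.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics -- in practice these come from calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)          # assume no lens distortion
marker_length = 0.05               # printed marker size in metres

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("frame.jpg")    # a captured camera frame for this example
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Find marker corners and IDs in the image
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)

if ids is not None:
    # Recover each marker's rotation and translation relative to the camera;
    # a renderer would use this pose to anchor virtual content to the marker.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    print("marker IDs:", ids.ravel(), "translation (m):", tvecs[0].ravel())
```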

Sensor-based apps detect objects with the help of localization technologies such as GPS, RFID, or Wi-Fi, aided by gyroscopes and accelerometers. It is crucial for a device to determine the correct position of its camera relative to a real-world object. To make proper calculations, developers often rely on the known dimensions of common objects such as the human body or a tennis court.
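
As a rough illustration of how a sensor-based app can use a GPS fix, the sketch below computes the compass bearing from the device to a known point of interest; comparing it with the heading reported by the compass or gyroscope tells the app where to place an overlay or an audio cue. The coordinates and heading are made-up example values.

```python
import math

def bearing_to_poi(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees) from the device to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# Example values only: the device's GPS fix, a point of interest, and the compass heading.
device_lat, device_lon = 52.5200, 13.4050
poi_lat, poi_lon = 52.5163, 13.3777
device_heading = 250.0   # degrees, from the magnetometer/gyroscope

bearing = bearing_to_poi(device_lat, device_lon, poi_lat, poi_lon)
# Angle at which the app should render (or "voice") the POI, relative to where the user faces
relative_angle = (bearing - device_heading + 360.0) % 360.0
print(f"POI is {relative_angle:.0f} degrees clockwise from straight ahead")
```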

Additionally, engineers train deep learning models to accurately detect markers in live video data.

Examples of Augmented Reality in Action

Visual augmented reality became mainstream with Pokemon Go (2016) and Snapchat filters, but its use cases are now moving beyond gaming and entertainment. Companies are beginning to incorporate the technology into solutions across various industries, including healthcare, sports, education, tourism, and the automotive industry.

The IKEA Place app virtually arranges furniture in a customer’s home. Toyota’s Hybrid AR app overlays graphics of a car’s inner workings onto the physical vehicle. L’Oréal’s YouCam Makeup app enables users to virtually try on cosmetics. Augmented reality technology also equips medical experts with 3D visualizations of organs and offers travelers 360-degree tours of hotel rooms and sightseeing attractions.

Opportunities for Hearables: Key Driving Factors

Overall, augmented reality brings robust capabilities to solutions across multiple modalities: visual, auditory, haptic, somatosensory, and olfactory. While visual forms of AR communicate through screens or glasses, smart hearables, a subset of wearable technology, introduce audio augmented reality. For example, an AR-enabled voice assistant in your ear can point you in the right direction and remind you about an upcoming business meeting.

AR-enabled hearables are spreading, and the opportunities around them keep expanding. According to a Markets and Markets report, the hearable devices market is estimated to reach $23.24 billion by 2023, growing at a CAGR of 9.98% from 2017 to 2023. Moreover, a recent report from Juniper Research says that the number of hearables in use will grow to more than 970 million units by 2024 (an increase of 270% compared to 2020).

Researchers from Markets and Markets claim that pivotal factors driving this market are:

  • the increasing role of smartphones as a source of entertainment;
  • the extensive use of wearable and portable devices;
  • the growing use of hearing aids and health monitoring solutions.

At the same time, personal voice assistants developed by tech giants (Alexa from Amazon, Siri from Apple, and Google Assistant) stand at the forefront of the audio augmented reality market.

While Google allows devices that integrate the Google Assistant SDK to launch only for experimental and non-commercial use, Amazon lets developers create custom Alexa capabilities and deploy them in the cloud across multiple solutions, taking more hearables to a new, smarter level.

For example, in 2019, Amazon extended Alexa features to pair with headphones and earbuds. Thanks to GPS-based location support, users can now receive directions or traffic updates from Google Maps. Other Alexa Mobile Accessory Kit highlights include calendar management, smart home device control, and music playback. This means that Sony’s wireless noise-canceling headphones not only let users stream music but also handle calls and get information on demand.

Top 3 Types of AR-Enabled Hearables

1. In-Ear Devices for Simultaneous Interpretation

In-ear devices for interpreting can translate foreign languages in real time. The device listens to a conversation partner and translates the speech into one of the available languages.

The device can provide simultaneous two-way translation in multiple languages and offer several modes—face-to-face translation, group translation, and broadcast translation. The solution usually pairs with a smartphone for settings and customization. Along with the main feature set, in-ear devices can record voices, handle calls, and stream music.

From a technical standpoint, simultaneous interpreting relies on multiple technologies. A voice activity detector (VAD) identifies human speech, and deep neural networks help remove background noise.

A language identification (LID) system based on Machine Learning algorithms identifies the foreign language, and automatic speech recognition (ASR) converts the phonetic information into words. When the system is familiar with the context and grammar rules, it can even correct mistakes. Natural language processing (NLP) algorithms make machine translation from one language to another possible. Finally, text-to-speech (TTS) software produces translated speech.
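
Conceptually, these stages chain into a single pipeline. The sketch below is a minimal, stubbed-out illustration of that flow; every function in it is a hypothetical placeholder rather than a specific library's API, and a real product would plug in its own VAD, LID, ASR, machine translation, and TTS engines.

```python
# Stubbed stages -- hypothetical placeholders, not real engine APIs.
def detect_speech(frame: bytes) -> bool:          # VAD: is anyone talking?
    return len(frame) > 0

def identify_language(frame: bytes) -> str:       # LID: which language is it?
    return "es"

def transcribe(frame: bytes, lang: str) -> str:   # ASR: audio -> text
    return "hola"

def translate(text: str, src: str, dst: str) -> str:   # MT: text -> text
    return "hello" if (src, dst) == ("es", "en") else text

def synthesize(text: str, lang: str) -> bytes:    # TTS: text -> audio
    return text.encode()

def interpret(frame: bytes, target_language: str = "en"):
    """Run one audio frame through the VAD -> LID -> ASR -> MT -> TTS chain."""
    if not detect_speech(frame):                  # skip silence and pure noise
        return None
    source_language = identify_language(frame)
    text = transcribe(frame, source_language)
    translated = translate(text, source_language, target_language)
    return synthesize(translated, target_language)

print(interpret(b"...audio frame...", target_language="en"))
```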

2. Headphones With AI-Based Noise Cancellation

Noise-canceling headphones use artificial intelligence to remove sounds produced by a barking dog, a coffee machine, or a fussy child from an audio recording. The devices can adjust the level of noise canceling to fit the real environment and mix external sound with the audio coming from the device. A successful example of this technology in practice is Bose’s headphones, which add a layer of dynamic audio that lets wearers hear their surroundings.

Noise-canceling headphones typically require a multi-microphone array that separately captures the target sound and background noise. Active noise-canceling (ANC) algorithms collect both types of signals — direct speech and unwanted sound — and then extract the desired audio from external noise.

To ensure the device produces the desired sound without low-frequency background noise, engineers develop the logic of a custom digital signal processing (DSP) module and configure the settings of the integrated module, as well as microphone modules and transport entities.
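
The sketch below is a minimal illustration of the adaptive-filtering idea behind this kind of processing, assuming a two-microphone setup (a primary mic capturing speech plus noise, a reference mic capturing mostly noise) and a basic least-mean-squares (LMS) filter; production DSP pipelines are far more elaborate.

```python
import numpy as np

def lms_noise_canceller(primary, reference, num_taps=32, mu=0.01):
    """Least-mean-squares adaptive noise canceller.

    primary   -- samples from the mic that hears speech plus noise
    reference -- samples from the mic that hears mostly the noise
    Returns the error signal, i.e. the noise-reduced output.
    """
    weights = np.zeros(num_taps)
    output = np.zeros(len(primary))
    for n in range(num_taps, len(primary)):
        # Most recent reference samples, newest first
        x = reference[n - num_taps:n][::-1]
        noise_estimate = np.dot(weights, x)
        # Whatever the filter cannot predict from the reference is kept as "speech"
        output[n] = primary[n] - noise_estimate
        # LMS update nudges the filter toward a better model of the noise path
        weights += 2.0 * mu * output[n] * x
    return output

# Toy usage: a tone standing in for speech, buried in shared noise
fs = 16000
t = np.arange(fs) / fs
noise = 0.5 * np.random.randn(fs)
speech = np.sin(2 * np.pi * 440 * t)
cleaned = lms_noise_canceller(speech + noise, noise)
```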

3. Smart Hearing Aids

Smart hearing aids are both medical devices for people with different levels of hearing loss and consumer electronics solutions. The devices can both enhance the desired sound and cancel background noise in real time. Hearing aid producers can add the capability to track physical activity and provide health data, as well as handle calls, stream music, and control smart home devices. ReSound, Audicus, and Starkey are among the top hearing aid brands.

Engineers equip smart hearing devices with directional microphones that capture the target sound and microphone arrays that collect background noise. Algorithms extract the desired audio from all types of sounds. The integrated DSP and audio amplifiers boost the loudness of the audio and reduce unwanted sound.

Techies can also improve speech recognition in noisy surroundings with a signal processing technique called beamforming. Beamformer arrays combine information across two to four directional microphones rather than collecting sound equally from all directions, helping the device home in on the right speaker.
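
As a simplified sketch of the idea, the delay-and-sum beamformer below aligns and averages the channels of a small microphone array so that sound arriving from the chosen look direction is reinforced; real hearing aids use far more sophisticated adaptive beamformers, and the geometry here is only an example.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_direction, fs, c=343.0):
    """Time-domain delay-and-sum beamformer for a small microphone array.

    mic_signals    -- (num_mics, num_samples) array of synchronized recordings
    mic_positions  -- (num_mics, 3) microphone coordinates in metres
    look_direction -- unit vector pointing from the array toward the speaker
    fs             -- sample rate in Hz; c is the speed of sound in m/s
    """
    num_mics, num_samples = mic_signals.shape
    # Mics closer to the speaker hear the wavefront earlier; delay them so
    # the target signal lines up across all channels before averaging.
    delays = mic_positions @ look_direction / c       # seconds per channel
    delays -= delays.min()                            # make delays non-negative
    shifts = np.round(delays * fs).astype(int)        # delays in samples
    out = np.zeros(num_samples)
    for m in range(num_mics):
        # np.roll wraps samples around the ends, which is acceptable for a sketch
        out += np.roll(mic_signals[m], shifts[m])
    # Averaging reinforces the aligned target and attenuates uncorrelated noise
    return out / num_mics
```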

Hearable Solution Architecture: Core Aspects

Typical hearable ecosystems rely on four common components to function: hardware, embedded software, back-end infrastructure, and mobile applications. Designing and manufacturing a new device requires engineering effort at all of these levels. This can be achieved either by hiring several vendors with niche expertise or by going with a wearable development company capable of creating a complex solution under one roof.

Hardware

A single hearable unit incorporates a number of components, including a PCB, data processor, sensors, battery, antenna, and microphone array. Fitting them into the limited space of a small and lightweight device can be challenging.

Hardware developers are responsible for PCB layout and design, as well as enclosure design. Specialists conduct PCB signal, power integrity, and thermal analysis. They may also embed accelerators into a solution and dedicate hardware components to speeding up specific software functions. The main task for a tech vendor at this level of development is to balance computing power, desired functionality, and the usability of headphones, earbuds, or hearing aids.

Embedded Software

At this level, engineers define how the device's software communicates with its hardware. This includes software (firmware and middleware) that allows hearables to collect, process, and analyze data, as well as transfer the information and integrate the solution with other devices.

For example, bare-metal programming, a low-level approach, runs code directly on the hardware (such as the DSP module) without an operating system. At the same time, the development of a suitable board support package (BSP) allows an operating system to run on the device.

Embedded systems engineers can also create middleware to interconnect the hardware and software components of smart headphones or interface the device with a voice assistant.

Back-end Infrastructure

Engineers define where and how to store, process, and relay data to endpoint devices and user apps, and they develop the application logic. Developers can add capabilities that are impossible to implement in hardware. In particular, specialists apply additional sound filtering, run big data analysis, and train machine learning algorithms on the collected data.

Tech vendors can offer multiple solutions for cloud-based data storage and processing — AWS, Microsoft Azure, or Google Cloud.

Mobile Application

This generally involves developing a mobile application and pairing it with the hearable to enable convenient user-to-device interactions. Wearable tech developers sync a device with a mobile app via APIs and connectivity protocols. Applications provide access to settings and customization, enabling users to access sensor data on their screen and conveniently manage the device.
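
As an illustration of the app-to-device link, the sketch below uses the Python bleak library to discover a hearable over Bluetooth Low Energy, connect to it, and read a characteristic, roughly what a companion app does natively on a phone. The device name is hypothetical; the characteristic shown is the standard GATT Battery Level, used here only as an example.

```python
import asyncio
from bleak import BleakScanner, BleakClient

# Hypothetical identifiers -- a real product defines its own GATT services.
DEVICE_NAME = "MyHearable"
BATTERY_LEVEL_UUID = "00002a19-0000-1000-8000-00805f9b34fb"  # standard Battery Level characteristic

async def main():
    # Discover nearby BLE devices and pick the hearable by its advertised name
    devices = await BleakScanner.discover(timeout=5.0)
    target = next((d for d in devices if d.name == DEVICE_NAME), None)
    if target is None:
        print("Hearable not found")
        return

    # Connect and read a characteristic, the way a companion app would
    async with BleakClient(target.address) as client:
        data = await client.read_gatt_char(BATTERY_LEVEL_UUID)
        print(f"Battery level: {data[0]}%")

asyncio.run(main())
```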

Bottom Line

The hearables market is young but revving into gear. Emerging technologies like augmented reality and artificial intelligence bring more opportunities to new products. Tech giants and market leaders such as Apple and Google keep introducing new capabilities and making wearable devices more appealing; Apple’s AirPods alone made hearable technology desirable.

Juniper Research named two distinct groups of vendors leveraging hearable technology to add value to premium audio products. While Sony, Bose, and Sennheiser are the key players in integrating hearable technology into headphones, Apple, Google, and Samsung are leading producers of earbuds with AI capabilities. It can be challenging for new players to compete with these tech giants, but for those willing to brave the market there could be massive rewards.


 
