3D technology is fundamentally changing how we shop online. According to ARtillery Intelligence, 3D content creation will be worth $4.25 billion by 2025, and it’s easy to understand why.

Compared to 2D images, 3D models give consumers the chance to experience a product: look at it from all angles, get a sense of its materials, and even try out different colors and sizes. It’s an investment many retailers believe is worth the time, energy, and financial cost. But the process isn’t perfect. Additional techniques have been explored to make 3D creation faster, yet extensive last-mile edits by highly skilled 3D artists are still needed before the assets are ready for commercial use.

As 3D modeling and AR technology become more accessible, retailers and companies need a new, scalable way to create 3D models quickly and affordably. Improvements in neural radiance field (NeRF) technology can democratize 3D and make 3D content creation scalable.

The problem with traditional 3D development

Many companies invest in photogrammetry to make the 3D content process scalable. Photogrammetry extracts three-dimensional information from two-dimensional data, in this case photos of a product. It’s a proven technique used in many industries beyond e-commerce, including engineering, architecture, cultural heritage, and geology.

The problem is that, by default, photogrammetry is complex and expensive. A high-quality photogrammetry rig can include hundreds of cameras costing up to $2,000 apiece (hundreds of thousands of dollars in cameras alone), complicated hardware and software, and the space necessary to house such a massive piece of equipment. I’ve seen many companies invest millions of dollars in this process to democratize 3D content, yet despite all that money, time, and effort, the results may still be subpar.

Physical limitations also appear when scanning objects with dark, shiny, or transparent surfaces. Gaps form in the reconstructed model, leaving scans that look incomplete. Ultimately, more time (and money) is needed for 3D artists to clean up these scans, fill the gaps photogrammetry leaves behind, and improve the materials. The irony is that an investment meant to save time and money ends up costing more of both.

How neural radiance fields make 3D scalable and accessible

An alternative approach to photogrammetry is neural radiance fields (NeRF), in which a neural network is trained to create an implicit render (an interactive 3D visualization) from 2D images. NeRF combines volumetric rendering with a neural network representation: the network predicts the density and RGB color at every point in the scene, and volumetric rendering composites those predictions along camera rays to reconstruct the product. Essentially, it builds a 3D version of the product from data, captured with nothing more than a mobile device with a camera.
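To make that rendering step concrete, here is a minimal sketch in Python, assuming the network has already predicted densities and colors at sample points along a single camera ray (the `render_ray` helper and its inputs are illustrative, not any particular product’s API):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite predicted samples along one camera ray (NeRF volume rendering).

    densities: (N,) volume density sigma_i predicted at each sample point
    colors:    (N, 3) RGB color c_i predicted at each sample point
    deltas:    (N,) spacing between adjacent samples along the ray
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity contributed by sample i
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i: transmittance, the probability the ray reaches sample i unoccluded
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas             # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights.sum()                    # rendered color + opacity

# Toy example: 64 samples along one ray, standing in for network output
rng = np.random.default_rng(0)
rgb, opacity = render_ray(
    densities=rng.uniform(0.0, 2.0, 64),
    colors=rng.uniform(0.0, 1.0, (64, 3)),
    deltas=np.full(64, 0.05),
)
print(rgb, opacity)
```

In a full NeRF pipeline this runs for millions of rays, and the network’s predictions are optimized until the rendered rays match the captured photos of the product.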

The power of NeRF lies in AI and machine learning. For example, making a handbag the traditional way could take one to two days of specialized 3D artist time, while with NeRF technology an implicit render of the same handbag could take 20 minutes, matching the image quality and perhaps even exceeding it.

Another big problem with photogrammetry is lighting: it demands a standardized, consistent lighting setup, and products can’t be moved once they’re placed in the rig. With NeRF, you can capture an object on a mobile device simply by moving around it; the product stays stationary while the camera moves. Whatever the lighting conditions, the AI algorithm corrects lighting issues as the images are processed, producing a consistent result. The image’s background (the walls, floor, and other items around the product) doesn’t matter either: through machine learning, it can be removed without extra work or last-mile editing.

Beyond the time savings and ease of use, NeRF has a further advantage: the AI is always learning, so the more images it captures, the smarter the algorithms get. The next time you scan a chair, the AI already understands the density difference between the chair’s wooden leg and its fabric seat, and the process speeds up.

The future of 3D content creation

NeRF combines 2D images to create an implicit render, but the technology can also produce 3D geometry through the same scanning process. With a small amount of last-mile editing and automation, materials and shaders can be applied to transform the scan into a 3D model, drastically reducing content creation time compared to traditional methods.
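As a rough illustration of how an implicit representation becomes explicit geometry (a common approach in the NeRF literature, not necessarily any specific vendor’s pipeline), the learned density field can be sampled on a grid and an iso-surface extracted with marching cubes; `query_density` below is a hypothetical stand-in for a trained network:

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def density_to_mesh(query_density, resolution=128, threshold=25.0):
    """Extract a triangle mesh from a learned density field.

    query_density: callable mapping an (M, 3) array of xyz points in the
                   scene bounding box [-1, 1]^3 to (M,) densities.
                   Hypothetical stand-in for a trained network.
    """
    # Sample the density field on a regular 3D grid
    axis = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    densities = query_density(grid.reshape(-1, 3)).reshape(
        resolution, resolution, resolution)
    # Marching cubes finds the surface where density crosses the threshold
    verts, faces, normals, _ = measure.marching_cubes(densities, level=threshold)
    # Map vertex positions from grid indices back to scene coordinates
    verts = verts / (resolution - 1) * 2.0 - 1.0
    return verts, faces, normals

# Toy stand-in for a trained network: a dense sphere of radius 0.5
sphere = lambda pts: 50.0 * (np.linalg.norm(pts, axis=-1) < 0.5)
verts, faces, _ = density_to_mesh(sphere)
print(verts.shape, faces.shape)
```

The extracted mesh is what materials and shaders are then applied to in the last-mile step described above.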

Having worked in 3D for nearly 30 years, I believe NeRF is the future of 3D content creation. This innovative technology can take the legwork and monotony out of 3D creation, leaving the artistic and creative aspects of the work to 3D artists.

2D vs. implicit render vs. mesh

If you look at where we have been with 2D and where we are with implicit rendering, photogrammetry, and 3D geometry, what works best is a question only you can answer. Perhaps a 2D photograph of a child’s t-shirt is enough to sell the product. Perhaps an implicit render is enough to show a consumer every side of a chair so they can make a decision. Or do you need last-mile editing to transform the scan into full 3D?

One current limitation of implicit renders is animation. Showcasing a refrigerator opening and closing, or a chair folding, requires 3D geometry and editing to create the 3D animation. It’s important to understand your needs and what information your customers require, not only to engage with a product online but to learn enough about it to make a confident, informed buying decision on whichever platform they choose.

Daniel Frith is Vice President of 3D at Avataar.

