
How to use Oculus Spatial Anchors in Unity to add persistent virtual objects to your physical world

Fasten your seatbelts, because today I’m publishing an amazing new tutorial for you, this time on Oculus Spatial Anchors, a new experimental feature of the Oculus SDK. With it, you will be able to add persistent virtual objects at exact places in your physical world. Let me tell you everything you need to know about them!

Oculus Spatial Anchors in Unity: video tutorial

I have made a huge video tutorial on Oculus Spatial Anchors, where you can see me create from scratch an application implementing them in Unity. I start with a blank Unity project, and in the end, I have an experience where I can add cubes and spheres that are consistent between different executions of the application on my device. The video is very long because the topic is not easy and I want to explain it to you really well. You can find the video here below:

Yes, it took me a long time to shoot this video. And yes, it is worth watching.

As usual, keep reading for the written version of the tutorial.

Oculus Spatial Anchors

Spatial Anchors are a common feature of augmented reality frameworks: an anchor is a point in the physical space that the AR system can reliably detect (and track) even in different sessions. An anchor is associated with its representation, which is basically a description of the physical space around it, so that the anchor can be recognized. If this description holds between different executions of the application, because the physical space has not changed, the anchor will be persistent and shareable. The anchor can thus survive between different executions of the program on the same device, and can also be shared between multiple devices, so that different devices can use it as a common reference point.

Let’s make an example to explain it better: suppose that I want to put an anchor on the corner of my desk. The system analyzes the surroundings of that point through the images that arrive from the cameras and finds a mathematical definition of that point. If it were a human, it would probably define it with a textual description like “a straight corner on a wooden desk, with a mouse and a laptop close to it”, but since it is a machine, it defines it with lots of numbers that identify the so-called “features” (simplifying a lot, the characteristics of the texture around the point).

I save the anchor of my desk corner during the execution of the program, and then, when I close the app and launch it again, the system can analyze the surroundings and find the point represented by the previously saved definition: the system will look for the “straight corner on a wooden desk, with a mouse and a laptop close to it”, and when it finds it, it can recover the previous status of the app. So if I put a dancing character on that anchor, that character will be placed in the same position every time the app gets executed: the virtual object becomes consistent across all the different runs of the app, as if it were part of the real world. And if an anchor gets shared, all the people around you can see the animated object in the same position, because their devices will use the definition of “the straight corner” that you have shared with them, find the exact same point, and put the object there.

When you see an image of people operating together on the same 3D model, it is because they are sharing the same spatial anchor, so the object is in the same position for everyone in the room (Image from Windows Store)

Spatial anchors are amazing because they enable many AR/MR applications: with them, for instance, you can create a room customization system for your Quest, attach some virtual pictures to your real walls, and see them in the exact same place every time you turn on your device. It would be as if the pictures were part of your real environment: these are the kinds of features we need to have a persistent mixed reality world to live in. Yes, the M-word.

Oculus has just added Spatial Anchors to its SDK. At the time of writing, they are still experimental and have limited functionality (e.g. you can’t share them with other people and you can’t save them to the cloud). This also means you can’t publish App Lab or Oculus Store apps that implement them. But they are already very interesting, and I suggest you play around with them. I think they are not easy to grasp and not documented in a great way, so I decided to write this long tutorial to help all of you fellow devs implement them.

A note on this tutorial

Before starting with the tutorial, let me warn you about one thing: since the topic is not easy, I assume you already have some knowledge of Unity and of developing an app for the Quest. This spares me from writing a step-by-step tutorial for dummies that would require 3 hours to be read. If you are just getting started with Quest development, I suggest you begin with these other two tutorials:

And then watch the video version of this guide, where I show the creation of a Quest app with spatial anchors from scratch. In this textual version, I will go more straight to the point about Spatial Anchors.

A final note: Spatial Anchors are a new feature, still flagged as experimental. This means that in the next months, Meta will probably change some things here and there about their implementation, so depending on when you read this tutorial, you may have to do things in a slightly different way, tweaking my scripts… but I expect that 90% of this tutorial will still hold true.

Spatial Anchors – Project Setup

Let’s start by creating a Unity project. For this tutorial, I used version 2020.3.24f1. Let’s call the project SpatialAnchorsTest.

Proceed to the usual setup of a Quest project:

  • Switch the build target to Android
  • Add the current scene, called SampleScene, to the scenes to build
  • In the Project Settings, activate the XR Plugin Management
    • When the XR Plugin Management has finished installing, select Oculus for both Android and PC
  • In the Project Settings, in the Player Tab
    • Change the name of the company to whatever you want
    • In “Other Settings” switch color space from Gamma to Linear. This is not fundamental for spatial anchors, but it is for passthrough
    • In “Other Settings” set Minimum Api Level to “Android 8.0 Oreo”
    • For now, we keep the Mono scripting backend because it builds faster (on my laptop, IL2CPP is roughly 7x slower). We’ll change it at the end for the passthrough version of the app
  • Install the Oculus Integration from the Asset Store. Be careful to install a version of the SDK that is v35 or higher. If you have already downloaded it for other projects, you may also find it in the Package Manager. Again, update it if the version is below v35
    • When asked what to import in the project, you just need to import the folder “VR” and the file OculusProjectConfig.asset
    • I also suggest importing the folder SampleFramework/Usage/SpatialAnchor and the file SampleFramework/Usage/SpatialAnchor.unity, because they are interesting samples to study
    • If some popups ask you to update/upgrade some plugins, say yes
    • If some popups ask you to restart, say yes
    • If a popup asks you to activate the OpenXR runtime, activate it. Spatial Anchors require OpenXR to work. If you are on Unity 2019, you may not see this popup, so you have to manually activate OpenXR using the Oculus menu at the top of Unity after the package has finished importing.
  • In the SampleScene scene, remove the Main Camera and add an OVRCameraRig prefab from the Oculus integration
    • In the OVRManager component attached to the OVRCameraRig, set Tracking Origin Type to Floor Level. This is needed so that we can keep the rig at the origin and have the main camera placed correctly at our real height.

Then, once the project is set up, let’s activate Spatial Anchors:

  • In the OVRManager component attached to the OVRCameraRig, look for the “Quest Features” section, click on the Experimental Tab
    • Check “Experimental Features Enabled”
    • Select “Enabled” for Spatial Anchors Support
This is how you activate spatial anchors in your project

Notice that we need this because, at the time of writing, Spatial Anchors are an experimental feature. If, when you read this tutorial, they are not experimental anymore, you may see some little differences in how they are integrated.

Spatial Anchors – Quest Setup

Since Spatial Anchors are an experimental feature at the time of writing, you have to enable Experimental Mode on your Quest to make them work. This must be done EVERY TIME YOU REBOOT YOUR DEVICE.

I strongly suggest you install SideQuest to perform the next operation, but you can also use a plain ADB command if you wish.

Connect the Quest to your PC via a USB cable. Open SideQuest, and click on the wrench button in the upper right corner of the window (its tooltip is “Device Settings & Tools”). Look for the Experimental Mode section and click the “ON” button. SideQuest should confirm with a green stripe saying that the mode has been activated. If the stripe is yellow and shows an error, keep hitting the ON button until it works (yeah, the good old brute-force way always works)

This is how you confirm that Experimental Mode has been activated

If you don’t want to use SideQuest and prefer to do this on the command line like a true nerd, the ADB command to activate this mode is "adb shell setprop debug.oculus.experimentalEnabled 1".

Spatial Anchors V1 – Object instantiation

We’ll now proceed by creating incremental versions of the application, adding new functionality at every iteration. I will provide you with the source code of the anchor manager for every version, and the code is very tidy and heavily commented, so that you can learn just by reading it. I will anyway also write a long explanation at every step of this tutorial, to teach you some lessons that will make you learn faster and better.

Let’s start with the first version of the app: the goal is to make a system that spawns little cubes with the left controller and little spheres with the right one. So cool, the killer app of XR.


Let’s initialize the project once and for all:

  • Add a cube to the scene, put it in position (0, 1, 4) and give it a scale of (2, 2, 2). This will be our reference for the “forward” direction in the world, so that we can see the difference between anchored objects and non-anchored objects. This “Megacube” won’t be anchored and will be a standard Unity gameobject
  • Add a cube to the scene
    • Give it a scale of (0.15, 0.15, 0.15)
    • Set its collider as a trigger by checking the “IsTrigger” property
    • Change the name of the gameobject to “Object1”
    • Drag this gameobject and drop it into your Resources folder in the project tab. It should now be a prefab in the resources. This will represent the cubes we will generate with the left controller
    • Take the Object1 still in the scene, and set it as a child of OVRCameraRig/TrackingSpace/LeftHandAnchor. This will be our “brush” for cubes
  • Add a sphere to the scene
    • Give it a scale of (0.15, 0.15, 0.15)
    • Set its collider as a trigger by checking the “IsTrigger” property
    • Change the name of the gameobject to “Object2”
    • Drag this gameobject and drop it into your Resources folder in the project tab. It should now be a prefab in the resources. This will represent the spheres we will generate with the right controller
    • Take the Object2 still in the scene, and set it as a child of OVRCameraRig/TrackingSpace/RightHandAnchor. This will be our “brush” for spheres
  • Create a new material in your project (if you want to keep things tidy, create a Materials folder for it)
    • Call it “SemiTransparent”
    • Select the Standard shader for it, in its Transparent variant
    • Set the Albedo color as (1, 1, 1, 0.5), so semi-transparent white
    • Assign the new material to the Object1 and Object2 in the scene, but NOT to the prefabs. The idea is that you can have a semitransparent preview of the object you are going to create, but the real object must be opaque
  • Add an empty object in your scene
    • Call it “SpatialAnchorsManager”
  • Create a new script in your project (if you want to keep things tidy, create a Scripts folder for it)
    • Call it “SpatialAnchorsManager”
    • Assign this script to the empty object in the scene with the same name.
  • If you are using Unity 2019, you also have to manually add a reference to Newtonsoft Json to your package manifest. Open the manifest.json file in the Packages folder of your project and add this line to the dependencies:
    "com.unity.nuget.newtonsoft-json": "2.0.0"

Cool! If you have survived until now, you have completed the project setup for all the subsequent versions that we are going to develop together. Let’s start writing some code!

Open the script SpatialAnchorsManager.cs you have just created, and substitute its code with this one:

As I’ve told you, the code is heavily commented, but I will anyway give you some hints to help you understand it even better.

Overall behaviour

The code makes the system wait for input from the user. When the user presses the trigger of a controller, if it was the left controller’s trigger, a cube is generated at the current controller position; if it was the right one’s, a sphere is created instead.
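To give you an idea of the structure, the input handling is just a poll in the Update method, something like the sketch below (CreateObject is a hypothetical name for the method that performs the instantiation, not necessarily the one used in the full script):

/// <summary>
/// Update
/// </summary>
private void Update()
{
    //if the user presses the left index trigger, create a cube; if he presses the right one, create a sphere
    if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger, OVRInput.Controller.LTouch))
        CreateObject(true);

    if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger, OVRInput.Controller.RTouch))
        CreateObject(false);
}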

Tracking system / World system

I think the only notable tidbit of the code is this one:

    //get the pose of the controller in local tracking coordinates
    OVRPose objectPose = new OVRPose()
    {
        position = OVRInput.GetLocalControllerPosition(isLeft ? OVRInput.Controller.LTouch : OVRInput.Controller.RTouch),
        orientation = OVRInput.GetLocalControllerRotation(isLeft ? OVRInput.Controller.LTouch : OVRInput.Controller.RTouch)
    };

    //Convert it to world coordinates
    OVRPose worldObjectPose = OVRExtensions.ToWorldSpacePose(objectPose);

There are some things here to notice:

  • The Oculus runtime loves working with OVRPose objects, which define both the position and the orientation of an object
  • Some methods of the Oculus runtime work in the “tracking reference system”. When we call OVRInput.GetLocalControllerPosition, we don’t get the position of the controller in Unity coordinates, but in a special Oculus reference system that is not the same as the Unity world (it may be centered on the headset, for instance)
  • There are conversion methods between the tracking reference system and the Unity world reference system. In our case, we use OVRExtensions.ToWorldSpacePose to get the pose of the controller in Unity world coordinates, so that we can create a gameobject at that position
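Putting the two snippets together, the instantiation step may look something like this sketch (Object1 and Object2 are the prefab names we put in the Resources folder during the setup):

    //instantiate the right prefab from the Resources folder and place it at the world pose of the controller
    GameObject newObject = GameObject.Instantiate(Resources.Load<GameObject>(isLeft ? "Object1" : "Object2"));
    newObject.transform.position = worldObjectPose.position;
    newObject.transform.rotation = worldObjectPose.orientation;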

Spatial Anchors

If, looking at the code, you are wondering “where are the spatial anchors?”, well, you are asking the right question. There are no spatial anchors in this first version: it is just a standard Unity app to use as a baseline.


Running the program

Build and run the experience on your Quest 2. You should be able to create cubes and spheres using the index triggers of your controllers. But these geometric figures have no special features. How lame!

Cubes and spheres, spheres and cubes…

Spatial Anchors V2 – Objects created with anchors

Keeping the same project as before, change the code of the script SpatialAnchorsManager.cs with this one:

Let’s comment it together.

Overall behaviour

In this version, the application creates spatial anchors and then instantiates gameobjects that move to their positions. When the user presses the trigger of, for instance, the left controller, the system doesn’t directly create a cube, but creates a spatial anchor at the current pose of the controller. When the anchor is successfully initialized, a cube is created and put in the same position as the anchor. It is the same as before, but different: we now create the cubes in an indirect way, passing through anchors.


Anchor/space handles

/// <summary>
/// Data about an anchor created or loaded in the current session
/// </summary>
[Serializable]
public class AnchorData
{
    /// <summary>
    /// The handle that represents the anchor in this runtime
    /// </summary>
    public ulong spaceHandle;

    /// <summary>
    /// The name of the prefab that should be instantiated for this anchor
    /// </summary>
    public string prefabName;

    /// <summary>
    /// Reference to the gameobject instantiated in scene for this anchor
    /// </summary>
    public GameObject instantiatedObject = null;
}

This AnchorData structure contains the data we want to associate with an anchor we create. An anchor is associated with the name of the gameobject we want to generate for it (the cube or the sphere) and with its actual instantiation in the scene. But it also has a reference to a “ulong spaceHandle”… what is that?

Anchors, which are also called “Spaces” in the documentation because they define a spatial reference system in the world, are created by the Oculus runtime, which assigns to them a unique number to identify them in the current session. If you have already worked with development for native systems (e.g. Win32), you may know that operating systems love working with “handles”, that is, numbers used to refer to a particular resource in their internal data structures. For instance, when you open a file, the path is used only to locate and open it; after that, the operating system refers to it with some kind of number, like “73445”, that identifies it internally while it is in memory. That is not a unique ID for the file, nor a persistent value, but it is useful in the current moment. Spatial anchors work the same way. You don’t know how an anchor is defined by the OS, you don’t know where it is saved, you know nothing about it: you just know that you can reference it by using its handle, which is something like a “session id” for the anchor you are interested in.

An anchor handle of -1 (or rather ulong.MaxValue, since ulong is unsigned and can’t hold negative numbers) represents an invalid anchor.
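For reference, in my script these two concepts translate into the fields below, which you will see used in all the following snippets: a constant for the invalid handle, and a dictionary that maps the session handle of every anchor to its AnchorData.

//handle value representing an invalid anchor: -1 interpreted as an unsigned number, i.e. ulong.MaxValue
private const ulong InvalidHandle = ulong.MaxValue;

//maps the session handle of every anchor to the application data associated with it
private Dictionary<ulong, AnchorData> m_createdAnchors = new Dictionary<ulong, AnchorData>();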

Anchor creation

    //get the pose of the controller in local tracking coordinates
    OVRPose controllerPose = new OVRPose()
    {
        position = OVRInput.GetLocalControllerPosition(isLeft ? OVRInput.Controller.LTouch : OVRInput.Controller.RTouch),
        orientation = OVRInput.GetLocalControllerRotation(isLeft ? OVRInput.Controller.LTouch : OVRInput.Controller.RTouch)
    };

    //create the information about the spatial anchor (time and position), that should have the same pose as the controller.
    //You can use the below template almost every time
    OVRPlugin.SpatialEntityAnchorCreateInfo createInfo = new OVRPlugin.SpatialEntityAnchorCreateInfo()
    {
        Time = OVRPlugin.GetTimeInSeconds(),
        BaseTracking = OVRPlugin.GetTrackingOriginType(),
        PoseInSpace = controllerPose.ToPosef() //notice that we take the pose in tracking coordinates and convert it from left handed to right handed reference system
    };

    //ask the runtime to create the spatial anchor.
    //The creation is instantaneous, and the identification handle is returned by the runtime inside the "ref" parameter
    ulong spaceHandle = InvalidHandle;

    if (OVRPlugin.SpatialEntityCreateSpatialAnchor(createInfo, ref spaceHandle))
    {

This is how we create an anchor when the trigger of one of the controllers is pressed. We create a data structure with the info about the time and the pose of the anchor, and we deliver it to the system. Notice that we are giving the system the anchor pose in the (Oculus) tracking reference system: in fact, we are not converting the controller pose to world coordinates anymore. We are just applying a left-handed-to-right-handed reference system conversion, because it is required. I guess that the OVR runtime works with a right-handed system (like everyone in the mathematical world), while Unity is left-handed, so a conversion is needed between the two (if you are confused about what I am saying, have a look here).

After we have defined the data about the anchor we want to create, we invoke OVRPlugin.SpatialEntityCreateSpatialAnchor, which asks the runtime to create a spatial anchor at the required pose and returns the handle of the newly created anchor.

Anchor components

        //We need to send a request to the runtime to enable the "Locatable" component of this anchor, if it is not enabled yet.
        //Until this component is assigned to the anchor, the anchor can't be tracked by the system, so we can't get its position (so it's basically useless).
        //From my experience, usually this is already enabled upon the creation of the anchor, so we first check if it is already enabled, and only if not
        //we send an activation request
        if (OVRPlugin.SpatialEntityGetComponentEnabled(ref spaceHandle, OVRPlugin.SpatialEntityComponentType.Locatable, out bool componentEnabled, out bool changePending))
        {
            //Activate the component. The operation returns immediately only an error code, but actually the request is asynchronous and gets satisfied by the runtime
            //later in the future. We will get notified about the operation completion with the OVRManager.SpatialEntitySetComponentEnabled event
            if (!componentEnabled)
            {
                if (!OVRPlugin.SpatialEntitySetComponentEnabled(ref spaceHandle, OVRPlugin.SpatialEntityComponentType.Locatable, true, 0, ref requestId))
                    Debug.LogError("Addition of Locatable component to spatial anchor failed");
            }
            //else if it was already enabled, just create the gameobject for this anchor
            else
            {
                GenerateOrUpdateGameObjectForAnchor(spaceHandle);
            }
        }
        else
            Debug.LogError("Get status of Locatable component to spatial anchor failed");

Anchors are an entity-component system. Considering that Unity also has an entity-component system, spatial anchors in Unity are an entity-component system inside an entity-component system. Yo dawg.

Yo XZibit

This means that every anchor can have some associated behaviours. The first behaviour that we’ll see now is the “Locatable” one. Locatable means that the anchor can be tracked in space. If an anchor is not Locatable, it can’t be tracked, so you can’t know where it is, so it is totally useless. That’s why we absolutely have to add the Locatable behaviour to it.

To add a component to an anchor, the first thing to do is check whether the anchor already has that component enabled, and we do that with the method SpatialEntityGetComponentEnabled, which returns an answer immediately. If the component is not enabled yet, we can call OVRPlugin.SpatialEntitySetComponentEnabled to enable it. Notice that the Set call is asynchronous: we call it, and it returns immediately. When the runtime has actually fulfilled the request, it will raise the event OVRManager.SpatialEntitySetComponentEnabled.

What we do in the code is:

  • We register to the event OVRManager.SpatialEntitySetComponentEnabled in the OnEnable
  • Every time we create an anchor, we check if it is Locatable
  • If it is already Locatable, we call the method GenerateOrUpdateGameObjectForAnchor, which stores the data of the anchor in our internal dictionary and creates the gameobject (sphere or cube) at the same pose as the anchor
  • If it is not Locatable, we call the SpatialEntitySetComponentEnabled method, which upon completion will invoke the event we registered to. If the Locatable component gets successfully enabled on the anchor, in the callback we can call the method GenerateOrUpdateGameObjectForAnchor and do the same as in the point above.

So in any case, as soon as the anchor is Locatable, we put a geometrical figure on it. Notice that, at the moment, the Locatable component happens to be always already enabled on the anchor upon creation, because without it the anchor would be useless.

Callbacks and Request Ids

Many of the methods related to spatial anchors work in an asynchronous way: you ask the runtime for something, and then you get the result of the requested operation at an indefinite time in the future, through an event. Usually, the related events are all in the OVRManager class and have a name that begins with “Spatial”. Only a few methods return their result immediately.

All event delegates have as a parameter the handle of the anchor, so you can use it to understand which anchor the event is related to. There is also the possibility of specifying a “Request ID” when you make the request to the operating system; the same id will then be passed to the event callback when the request completes. This may have various uses, but it is not interesting for this sample, so I’ve always set the Request ID to zero and ignored it.

This is an example of a callback signature:

private void OVRManager_SpatialEntitySetComponentEnabled(UInt64 requestId, bool result, OVRPlugin.SpatialEntityComponentType componentType, ulong spaceHandle)

When you let Visual Studio create a method to register for the events automatically, it may give the parameters useless names like “UInt64 u, bool b”, etc., which are hardly understandable. My suggestion is to copy the method signature from the documentation, from my code, or from the Oculus Integration source code, where the names of the parameters are self-explanatory.
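For reference, this is how the registration looks in my script. The sketch below shows only the component-enabled event; the other Spatial events used later in this tutorial are registered in exactly the same way:

private void OnEnable()
{
    //register to the asynchronous events of the Oculus runtime we are interested in
    OVRManager.SpatialEntitySetComponentEnabled += OVRManager_SpatialEntitySetComponentEnabled;
}

private void OnDisable()
{
    //unregister from the events, so that no callback reaches a disabled object
    OVRManager.SpatialEntitySetComponentEnabled -= OVRManager_SpatialEntitySetComponentEnabled;
}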

Locate Space

If an anchor is Locatable, you can get its pose in tracking coordinates using the method OVRPlugin.LocateSpace. Before using these pose values, remember to convert them to Unity world coordinates. This is, for instance, what we do to generate an object associated with a just-created anchor and move it to the same pose as the anchor:

    //create the gameobject associated with the anchor, if it didn't exist
    if (m_createdAnchors[spaceHandle].instantiatedObject == null)
        m_createdAnchors[spaceHandle].instantiatedObject = GameObject.Instantiate(Resources.Load<GameObject>(m_createdAnchors[spaceHandle].prefabName));

    //get its pose in world space: at first we get it into headset tracking space,
    //then we convert it to world coordinates
    var anchorPose = OVRPlugin.LocateSpace(ref spaceHandle, OVRPlugin.GetTrackingOriginType());
    var anchorPoseUnity = OVRExtensions.ToWorldSpacePose(anchorPose.ToOVRPose());

    //assign the pose to the object associated to this anchor.
    m_createdAnchors[spaceHandle].instantiatedObject.transform.position = anchorPoseUnity.position;
    m_createdAnchors[spaceHandle].instantiatedObject.transform.rotation = anchorPoseUnity.orientation;

Running the program

Build and run the experience on your Quest 2. You should be able to create cubes and spheres using the index triggers of your controllers, exactly as before. This is a great result, because it means that you successfully created the anchors, made them Locatable, and then used their pose to instantiate cubes and spheres. Not bad. But if the user recenters the world, the gameobjects move from their initial positions. They are not very “anchored”…

It’s all ok until I recenter the world with the Oculus button on my controller. If I do that, the anchored objects move, and this shouldn’t happen…

Spatial Anchors V3 – Objects fixed on the anchors

It’s finally time to start exploiting some functionalities of the spatial anchors. Change the code of the script SpatialAnchorsManager.cs with this new one:

Overall behaviour

In this version, the cubes and the spheres have their pose locked to the one of their respective anchors, whatever happens.

Object positions and anchors

Notice that there is no such thing as “a gameobject associated with an anchor” in the Oculus runtime. We create that association by hand, by moving every object we create to the same pose as the anchor it is associated with in our code. And we move the cubes and the spheres to the positions of their anchors when we call this part of the code, which is in the GenerateOrUpdateGameObjectForAnchor method:

    //get its pose in world space: at first we get it into headset tracking space,
    //then we convert it to world coordinates
    var anchorPose = OVRPlugin.LocateSpace(ref spaceHandle, OVRPlugin.GetTrackingOriginType());
    var anchorPoseUnity = OVRExtensions.ToWorldSpacePose(anchorPose.ToOVRPose());

    //assign the pose to the object associated to this anchor.
    m_createdAnchors[spaceHandle].instantiatedObject.transform.position = anchorPoseUnity.position;
    m_createdAnchors[spaceHandle].instantiatedObject.transform.rotation = anchorPoseUnity.orientation;

But GenerateOrUpdateGameObjectForAnchor gets called only when the Locatable component gets enabled, so only once, upon the anchor’s creation. If we want our objects in the scene to stay locked to their anchors, that is, to a particular physical position, we have to call this method every frame: this way, whatever happens in the Unity world, the anchored objects will just re-position themselves correctly at every frame. This is exactly what we do in the LateUpdate method: every frame, we loop through every anchor and move the associated gameobject to the pose of the anchor again.

/// <summary>
/// Late update
/// </summary>
private void LateUpdate()
{
    //At every frame we re-assign to the gameobjects the poses of the anchors.
    //We do this because this way we are sure that even if the user recenters or moves the camera rig,
    //the world position of the object remains locked.
    //If we don't execute this late update method, the anchors are correct upon creation, but if the user recenters the camera rig
    //they don't remain fixed in the corresponding world position, but they move as well.

    //for every anchor that has been created
    foreach (var createdAnchorPair in m_createdAnchors)
    {           
        //if it is a valid one
        if (createdAnchorPair.Key != InvalidHandle)
        {
            //update its pose
            GenerateOrUpdateGameObjectForAnchor(createdAnchorPair.Key);
        }
    }
}

Running the program

Build and run the experience on your Quest 2. You should be able to create cubes and spheres using the index triggers of your controllers, exactly as before. But now, even if you recenter your play space by keeping the Oculus button on your controller pressed, the cubes and the spheres stay fixed in the same physical position. You see all the other elements in the Unity scene moving (including the Megacube), but not the little spheres and cubes: they stay fixed in the position in the physical world where you put them. This is the first superpower of spatial anchors: attaching virtual objects to a physical location. With the next version, you will see another one, even more powerful than this.

I can recenter the Unity world, but cubes and spheres do not move anymore

Spatial Anchors V4 – Objects are fixed and persistent

Things are going to be definitely interesting with this version. Change the code of the script SpatialAnchorsManager.cs with this one:

Overall behaviour

The cubes and the spheres are not only fixed, but now they are also persistent. If you create them and then close the app, when you re-open it, they are still there, in the same pose you left them.

Anchor Uuid

We have already seen what the handles of the anchors (or spaces, whatever you want to call them) are. But now that we want to save the anchors, we also have to explore the concept of Uuids. A Uuid is a unique identifier of the anchor too, exactly like the handle, but with the difference that the Uuid is truly unique: it identifies only your anchor, probably in the whole world, and forever. Uuids are used in other programming contexts too: they are huge numbers or strings created randomly, from a pool so big that it is very improbable that two identical ones get generated. In the case of Spatial Anchors, the Uuid is composed of two Int64, meaning that the number of possible Uuids is 2^128, that is around 3.4 × 10^38… you can imagine that getting two equal ones is pretty difficult.

So the handle is a temporary id for the anchor, useful for the current session, while the Uuid is a persistent unique identifier for the anchor. Returning to the example of the opened file, the handle is like the reference to the currently opened file in the OS, while the Uuid is like the full path of the file, which is unique in the whole file system, and persistent. It is not an exact comparison, but it conveys the idea.

Managing an id that is defined by a struct with two ints is a big nuisance, so we exploit a helper method found in the Oculus samples to convert it to a string representation, which is much handier.

/// <summary>
/// Converts a <see cref="OVRPlugin.SpatialEntityUuid"/> object to a string representation
/// </summary>
/// <param name="uuid">Unique ID of an anchor</param>
/// <returns>String representation of the Uuid</returns>
private string GetUuidString(OVRPlugin.SpatialEntityUuid uuid)
{
    //Code taken from the Oculus samples
    byte[] uuidData = new byte[16];
    BitConverter.GetBytes(uuid.Value_0).CopyTo(uuidData, 0);
    BitConverter.GetBytes(uuid.Value_1).CopyTo(uuidData, 8);
    return AnchorHelpers.UuidToString(uuidData);
}

This method ends with a call to a method of a helper class that is included in the samples. If in the future you don’t want to import the samples anymore, just copy the code of that method into one of your files.
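If you prefer not to depend on the samples at all, a possible stand-in for that final call is the built-in Guid type. Be aware that this is my own sketch, not the Oculus code: the formatting of the resulting string may differ from the helper’s output, which is fine in this sample as long as you always use the same representation, because the string only serves as a key.

    //possible stand-in for AnchorHelpers.UuidToString (my own sketch, not the Oculus implementation):
    //build a System.Guid from the 16 bytes and let it format the string
    return new Guid(uuidData).ToString();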

Saving an anchor: anchor intrinsic data vs anchor application data

When saving an anchor, we have to distinguish between two different operations.

The first one is actually saving the spatial data of the anchor, that is, how the anchor can be detected in the real world. If you remember the initial part of this article, it means saving the geometrical info about the physical space around the anchor, the definition that in human language would be something like “straight corner on a wooden desk, with a mouse and a laptop close to it”. This operation is done by the Quest runtime, and you have no control over it. You just ask the runtime to do it, and it does it in a way completely opaque to you: you don’t even know where the anchor data is stored on the device.

The second one is saving your application’s data about the anchor. In the case of our application, when we load the anchors from a previous session, we need to know whether each anchor was associated with a cube or a sphere. Oculus doesn’t offer a way to store this data together with the anchor, so you, the developer, must take care of it. What you usually do is save some data structure (something akin to a dictionary) where, for every unique Uuid of an anchor that gets saved, you store the related information about it. This is exactly what we do with this data structure

/// <summary>
/// Data structure used to serialize a <see cref="AnchorData"/> object
/// </summary>
[Serializable]
public class AnchorDataDTO
{
    /// <summary>
    /// String representation of the unique ID of the anchor
    /// </summary>
    public string spaceUuid;

    /// <summary>
    /// Name of the prefab associated with this anchor
    /// </summary>
    public string prefabName;
}

That holds all the data about an anchor. We save this data in the PlayerPrefs, so that it survives different executions of the application. We use PlayerPrefs and not a JSON file just for the sake of simplicity of this sample.

Storable component

To save an anchor, we have to enable another type of component on it: Storable. The procedure is the same as described above for the Locatable one: we check if it is already enabled (usually it is not), and if not, we ask the runtime to enable it for us. If it is already enabled, or when we receive the callback from the operating system saying that it has been enabled, we call a method that does something with the Storable component.

Notice that, since we have to perform similar operations whenever we activate a new component, I have created a helper method that handles the activation of all the components: it checks if the component is enabled, and if not, it activates it.

/// <summary>
/// Enables a component on an anchor, taking care of all possible cases (e.g. also if the component is already enabled)
/// </summary>
/// <param name="spaceHandle">Handle of the anchor</param>
/// <param name="componentType">Type of the component to enable</param>
public void SetAnchorComponent(ref ulong spaceHandle, OVRPlugin.SpatialEntityComponentType componentType)
{
    //we don't care about the request Id for this sample. We so just keep it always at 0.
    //For more complicated applications, it could be useful to identify the callback relative to a particular request
    ulong requestId = 0;

    //We need to send a request to the runtime to enable the component of this anchor, if it is not enabled yet.        
    //So first of all check if it is already enabled. This method returns immediately
    if (OVRPlugin.SpatialEntityGetComponentEnabled(ref spaceHandle, componentType, out bool componentEnabled, out bool changePending))
    {
        //If the component was not enabled, activate it. The operation returns immediately only an error code, but actually the request is asynchronous and gets satisfied by the runtime
        //later in the future. We will get notified about the operation completion with the OVRManager.SpatialEntitySetComponentEnabled event
        if (!componentEnabled)
        {
            if (!OVRPlugin.SpatialEntitySetComponentEnabled(ref spaceHandle, componentType, true, 0, ref requestId))
                Debug.LogError($"Addition of {componentType.ToString()} component to spatial anchor failed");
        }
        //else if it was already enabled, just call the same callback that the runtime would have called
        //if the set component function succeeded
        else
        {
            OVRManager_SpatialEntitySetComponentEnabled(requestId, true, componentType, spaceHandle);
        }
    }
    else
        Debug.LogError($"Get status of {componentType.ToString()} component to spatial anchor failed");
}

The callback that gets called when a component is enabled (because it was already enabled, or because the runtime just enabled it) takes a different action depending on whether the enabled component is Locatable or Storable. If it is Locatable, it creates the cube or the sphere. If it is Storable, it saves the anchor to the persistent storage. Notice that making the anchor persistent as soon as it is Storable is my personal choice, it is not mandatory. Your application may have a menu with a Save button, and not save the anchors in a persistent way until the user chooses so.

        //The anchor has become Locatable, so we can actually spawn an object at its position.
        //Generate an object of the type specified in the dictionary about this anchor, and with the world pose of this anchor
        if(componentType == OVRPlugin.SpatialEntityComponentType.Locatable)
            GenerateOrUpdateGameObjectForAnchor(spaceHandle);
        //The anchor has become Storable, so we can save it to the persistent storage.
        if (componentType == OVRPlugin.SpatialEntityComponentType.Storable)
            SaveAnchor(spaceHandle);

Saving an anchor

Saving an anchor to the persistent storage just requires us to call the method OVRPlugin.SpatialEntitySaveSpatialEntity (Yeah, it’s a very stupid name, with SpatialEntity repeated twice… who chooses the names inside Meta??)

    //request the runtime to save the anchor in some location where it knows how to retrieve its info back. 
    //We have no control over this process and we will get an answer via the callback associated with OVRManager.SpatialEntityStorageSave
    OVRPlugin.SpatialEntitySaveSpatialEntity(ref spaceHandle, OVRPlugin.SpatialEntityStorageLocation.Local, OVRPlugin.SpatialEntityStoragePersistenceMode.IndefiniteHighPri, ref requestId);

The parameters let you define where to save the anchor and for how long: at the moment you can only save it locally and forever, so we have no real choice. This method is asynchronous, and upon completion it raises the event OVRManager.SpatialEntityStorageSave.

In my code, I register to that event with the following callback

/// <summary>
/// Callback called when the save operation of the spatial anchor on the persistent storage completes
/// </summary>
/// <param name="requestId">Request Id passed when the Component Request was issued</param>
/// <param name="spaceHandle">The handle of the space (of the anchor) affected by this request</param>
/// <param name="result">Result of the operation</param>
/// <param name="uuid">Unique id of the serialized anchor. It is unique of this anchor among all the possible generated anchors and the same in all executions of the program</param>    
private void OVRManager_SpatialEntityStorageSave(UInt64 requestId, ulong spaceHandle, bool result, OVRPlugin.SpatialEntityUuid uuid)
{
    //check that the operation succeeded
    if (result)
    {
        //we should have added the data about the created anchor in the dictionary. 
        //If it is not so, abort the operation
        if (!m_createdAnchors.ContainsKey(spaceHandle))
        {
            Debug.LogError("Asked to save an unknown anchor, aborting");
            return;
        }

        //We need to save the data about what object was associated with this anchor.
        //If we are here, the system has already saved the anchor (included its pose) for future usages in future sessions of this app,
        //but we need to save our custom data associated with the anchor, in this case the gameobject to instantiate on it.
        //We'll save everything in the PlayerPrefs for the sake of clarity of the sample, but we could have also used a file

        //create a serialization structure for the main data of this anchor and then convert it to a json string.            
        AnchorDataDTO spaceAnchorDto = new AnchorDataDTO() { spaceUuid = GetUuidString(uuid), prefabName = m_createdAnchors[spaceHandle].prefabName};
        string spaceAnchorDtoJsonString = JsonConvert.SerializeObject(spaceAnchorDto);

        //save the data about the anchor in the playerprefs, using its unique id as the key
        PlayerPrefs.SetString(spaceAnchorDto.spaceUuid, spaceAnchorDtoJsonString);            
    }
    else
        Debug.LogError($"Save operation of spatial anchor failed");
}

Notice that the runtime gives us the Uuid of the just-saved anchor as a parameter of the callback. When the callback is called, the runtime has already saved the anchor, so we just have to perform our own operations associated with it, that is, saving in the PlayerPrefs the application data about the anchor, using the Uuid as the key. After this, all the data about the anchor (both the intrinsic and the application data) is successfully saved.

Loading all the anchors

We’re done with saving, but what about loading? Well, in the Start method of the app, I load back all the anchors from the previous session.

To obtain the anchors from a previous session, we have to perform a query to the Oculus runtime:

    var queryInfo = new OVRPlugin.SpatialEntityQueryInfo()
    {
        QueryType = OVRPlugin.SpatialEntityQueryType.Action,
        MaxQuerySpaces = 50,
        Timeout = 0,
        Location = OVRPlugin.SpatialEntityStorageLocation.Local,
        ActionType = OVRPlugin.SpatialEntityQueryActionType.Load,
        FilterType = OVRPlugin.SpatialEntityQueryFilterType.None
    };

    //As usual, we don't care about the requestId
    ulong requestId = 0;

    //launch of the query that retrieves all the saved spatial anchors
    if (!OVRPlugin.SpatialEntityQuerySpatialEntity(queryInfo, ref requestId))
    {
        Debug.LogError("Unable to retrieve the spatial anchors saved on the device");
    }

Queries are performed by calling OVRPlugin.SpatialEntityQuerySpatialEntity, specifying the parameters of the query through an OVRPlugin.SpatialEntityQueryInfo structure. In the above example, some parameters are worth a comment:

  • “ActionType = OVRPlugin.SpatialEntityQueryActionType.Load” means that we are not only making a simple query, but we also want the runtime to load the anchors found by the query. The runtime will thus load them into memory and give them a handle
  • “FilterType = OVRPlugin.SpatialEntityQueryFilterType.None” means applying no filters to the query, so getting back all the anchors. It is also possible to load a single anchor by specifying an exact Uuid in the parameters and asking to filter by Uuid, as sketched below.
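For instance, loading a single anchor given its Uuid would use a filtered query along these lines. Take this as a hedged sketch based on the Oculus sample: the exact names of the Ids filter structures may differ in your SDK version (the API is experimental and changes often), so double-check them against OVRPlugin. Here, uuidToLoad is a hypothetical variable holding the Uuid of the anchor you want back:

    //sketch: query a single saved anchor given its Uuid (uuidToLoad is a hypothetical variable).
    //The names of the Ids filter structures may vary among SDK versions, so verify them in OVRPlugin
    var queryInfo = new OVRPlugin.SpatialEntityQueryInfo()
    {
        QueryType = OVRPlugin.SpatialEntityQueryType.Action,
        MaxQuerySpaces = 1,
        Timeout = 0,
        Location = OVRPlugin.SpatialEntityStorageLocation.Local,
        ActionType = OVRPlugin.SpatialEntityQueryActionType.Load,
        FilterType = OVRPlugin.SpatialEntityQueryFilterType.Ids,
        IdInfo = new OVRPlugin.SpatialEntityFilterInfoIds()
        {
            NumIds = 1,
            Ids = new OVRPlugin.SpatialEntityUuid[] { uuidToLoad }
        }
    };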

As you can imagine, OVRPlugin.SpatialEntityQuerySpatialEntity triggers a callback when it’s over. Here is part of my implementation of that callback:

/// <summary>
/// Callback called when a query operation on the anchors from the persistent storage completes
/// </summary>
/// <param name="requestId">Id of the requests</param>
/// <param name="numResults">How many anchors have been succesfully obtained from the query</param>
/// <param name="results">Array of anchors returned by the query. Only the first numResults entries are valid</param>
private void OVRManager_SpatialEntityQueryResults(ulong requestId, int numResults, OVRPlugin.SpatialEntityQueryResult[] results)
{
    //for each returned anchor
    for(int i = 0; i < numResults; i++)
    {
        //get its unique Uuid and its handle for the current execution
        //Notice the difference: handle is a number valid only for the current run of the program,
        //while Uuid is always the same among all the possible executions, forever and ever
        var spatialQueryResult = results[i];
        ulong spaceHandle = spatialQueryResult.space;
        string spaceUuid = GetUuidString(spatialQueryResult.uuid);

        //if the anchor is valid
        if (spaceHandle != 0 && spaceHandle != InvalidHandle)
        {
            //Let's obtain the name of the prefab to generate on this anchor from thejson data saved in the player prefs.
            //After we have it, we can initialize the anchor as usual.
            //We use the unique id as the key in the player prefs, since it is the only data unique about an anchor among
            //different executions of the same app
            if (PlayerPrefs.HasKey(spaceUuid))
            {
                string prefabName = JsonConvert.DeserializeObject<AnchorDataDTO>(PlayerPrefs.GetString(spaceUuid)).prefabName;

                //initialize the anchor as if the user just created it in this session with his controllers.
                InitializeSpatialAnchor(spaceHandle, prefabName);

Notice that the query returns an array of results, plus a number that says how many entries of the array are actually valid. I guess this weird signature exists because this method wraps a native method that works with an array of a preset size (128, from my tests). So we just loop over the loaded anchors, and we get their handles and Uuids. We use the Uuid to retrieve from the PlayerPrefs whether we have to generate a sphere or a cube. We use the handle to initialize the anchor again, making it Locatable and Storable. As soon as the anchor is Locatable, its related gameobject gets generated, and we can see it in the scene.

This means that the anchors created with the controllers and the ones loaded from storage, in the end, call the same initialization method, InitializeSpatialAnchor, and so behave in the same way. This keeps the application coherent.
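For completeness, this is the gist of what InitializeSpatialAnchor does, in a simplified sketch (the actual method in the full script may contain a few more checks):

/// <summary>
/// Initializes an anchor, whether it was just created with the controllers or loaded from the storage (simplified sketch)
/// </summary>
/// <param name="spaceHandle">Handle of the anchor for this session</param>
/// <param name="prefabName">Name of the prefab to instantiate on the anchor</param>
private void InitializeSpatialAnchor(ulong spaceHandle, string prefabName)
{
    //store the application data about this anchor in our internal dictionary
    m_createdAnchors[spaceHandle] = new AnchorData() { spaceHandle = spaceHandle, prefabName = prefabName };

    //make the anchor Locatable (so that we can put a gameobject on it) and Storable (so that it gets persisted)
    SetAnchorComponent(ref spaceHandle, OVRPlugin.SpatialEntityComponentType.Locatable);
    SetAnchorComponent(ref spaceHandle, OVRPlugin.SpatialEntityComponentType.Storable);
}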

If you are a pro developer, you have probably noticed that I immediately save back to storage the anchors that I have just loaded, because of how I manage the activation of the Storable component. This is not optimized, but it’s ok for this program, because it is just a sample app.

Running the program

Build and run the experience on your Quest 2. You should be able to create cubes and spheres using the index triggers of your controllers, and they are fixed in space. Then you can close the app, open it again, and after a second see the same cubes and spheres popping up in the exact same poses as before! Our geometrical objects are now persistent! Maybe too persistent, since there is no way to remove them…

Indestructible cubes and spheres!

Spatial Anchors V5 – Objects are persistent but can be removed

We are reaching the final form. Change the code of the script SpatialAnchorsManager.cs with this one:

Overall behaviour

The cubes and the spheres behave exactly as before, but now you can remove them all by pressing a thumbstick.
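Detecting the press is just one more check in the Update method, something like this sketch (DeleteAllAnchors is a hypothetical name for the cleanup method described below):

    //if the user presses the thumbstick of either controller, remove all the anchors and their objects
    if (OVRInput.GetDown(OVRInput.Button.PrimaryThumbstick, OVRInput.Controller.Touch))
        DeleteAllAnchors();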

Deleting an anchor

You have two different ways of erasing an anchor.

You can remove it from the persistent storage, but keep it alive in the current session, by asking the runtime to erase it from the storage:

        //ask the runtime to erase the anchor from the persistent storage
        OVRPlugin.SpatialEntityEraseSpatialEntity(ref spaceHandle, OVRPlugin.SpatialEntityStorageLocation.Local, ref requestId)

Or you can destroy the representation of the anchor for the current session, invalidating its handle, by calling this other method:

        //ask the runtime to destroy the anchor for the current session
        OVRPlugin.DestroySpace(ref spaceHandle)

Notice that the first method is asynchronous, while the second one returns immediately. In this sample, anyway, I don’t need to perform additional operations upon the destruction of an anchor, so I don’t register to the related event.

What I do in the sample is basically loop over all the anchors and destroy both their persistent and session representations; then I also destroy all the gameobjects, all the PlayerPrefs entries, and all the in-memory representations of the anchors. I destroy everything! I love destruction!
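This is a sketch of how that cleanup may look, reusing the methods we have just seen (the full script may differ in some details):

/// <summary>
/// Removes all the anchors: their persistent data, their session representation, and their associated gameobjects (simplified sketch)
/// </summary>
private void DeleteAllAnchors()
{
    //as usual, we don't care about the request id
    ulong requestId = 0;

    //for every anchor that has been created or loaded
    foreach (var createdAnchorPair in m_createdAnchors)
    {
        ulong spaceHandle = createdAnchorPair.Key;

        if (spaceHandle != InvalidHandle)
        {
            //erase the anchor from the persistent storage...
            OVRPlugin.SpatialEntityEraseSpatialEntity(ref spaceHandle, OVRPlugin.SpatialEntityStorageLocation.Local, ref requestId);

            //...and destroy its representation for the current session, invalidating its handle
            OVRPlugin.DestroySpace(ref spaceHandle);
        }

        //destroy the gameobject associated with the anchor, if any
        if (createdAnchorPair.Value.instantiatedObject != null)
            Destroy(createdAnchorPair.Value.instantiatedObject);
    }

    //wipe the saved application data (in this sample, everything in the PlayerPrefs is anchor data) and the in-memory dictionary
    PlayerPrefs.DeleteAll();
    m_createdAnchors.Clear();
}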


Running the program

Build and run the experience on your Quest 2. You should be able to create cubes and spheres using the index triggers of your controllers, and they are fixed in space. You can then delete them all by pressing the thumbstick of one of the controllers. Then you can close the app, open it again, and see that no anchor gets loaded. We killed them all!

At a certain point of the video, I press the thumbstick and everything disappears

Adding Passthrough

Spatial anchors give their best when paired with passthrough AR, because you truly see the virtual objects attached to exact points of the real world. So let’s add passthrough to our sample!

To activate the passthrough, you can follow the instructions that are contained in this other tutorial of mine. Long story short:

  • Activate Insight Passthrough on the OVRManager of the OVRCameraRig by activating “Passthrough Capability Enabled” in the General tab of the Quest features section
  • Enable passthrough on application start by checking “Enable Passthrough”, again on the OVRManager
  • Add an OVRPassthroughLayer behaviour to the OVRCameraRig
    • In this behaviour, set Placement to Underlay
    • You can change most of the other parameters as you wish to customize the appearance of the passthrough. For instance, I have enabled azure edges
  • Select the Camera component in the CenterEyeAnchor child object of the OVRCameraRig, and tell it to clear itself (Clear Flags) using a Solid Color, with (0, 0, 0, 0) as the Clear Color. Notice that it is a black TRANSPARENT color
These are the parameters for URP. For the standard pipeline, they should be called Clear Flags and Clear Color
  • Go to the Project Settings, Player tab, Other Settings, and set the scripting backend to IL2CPP and the target architecture to ARM64

At this point, build and run… you can now create persistent cubes and spheres in your real room! You are amazing, you have finished this tutorial!

I tried turning off the headset, changing rooms, moving fast… but the cubes stayed fixed in that exact position! It was impressive

Further references

To learn more about spatial anchors, you can:

  • Analyze the code of the sample that I had you import into the project. I took a lot of inspiration from it when writing my code: it is well written, and it also has some features that I have not implemented… for instance, it shows you how to query the runtime for info on a single anchor saved in the storage
  • Read the official documentation on Spatial Anchors at this link: https://developer.oculus.com/experimental/spatial-anchors-api-unity/

Before you go…

The tutorial is over, and I thank you for having read it. I hope it has been useful for you, and if so, please let me know what you have created with it! Plus, since it took me days to prepare, I would appreciate it if you could return the favor by doing one of these things:

Thank you and happy VR!

(The header image contains material from Unity and Meta)


