Manipulating objects with bare hands lets us leverage a lifetime of physical experience, minimizing the learning curve for users. But there are times when virtual objects will be farther away than arm’s reach, beyond the user’s range of direct manipulation. As part of its interactive design sprints, Leap Motion, creators of the hand-tracking peripheral of the same name, prototyped three ways of effectively interacting with distant objects in VR.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tool and workflow building, and a user-driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

Experiment #1: Animated Summoning

The first experiment looked at creating an efficient way to select a single static distant object and summon it directly into the user’s hand. After inspecting or interacting with it, the object can be dismissed, sending it back to its original position. The use case here would be something like selecting and summoning an object from a shelf and then having it return automatically—useful for gaming, data visualization, and educational simulations.

This approach involves four distinct stages of interaction: selection, summoning, holding/interacting, and returning.

1. Selection

One of the pitfalls that many VR developers fall into is thinking of hands as analogous to controllers, and designing interactions that way. Selecting an object at a distance is a pointing task and well suited to raycasting. However, holding a finger or even a whole hand steady in midair to point accurately at distant objects is quite difficult, especially if a trigger action needs to be introduced.

To increase accuracy, we used a head/headset position as a reference transform, added an offset to approximate a shoulder position, and then projected a ray from the shoulder through the palm position and out toward a target (veteran developers will recognize this as the experimental approach first tried with the UI Input Module). This allows for a much more stable projective raycast.
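In rough terms, the stabilized ray can be built from just the headset transform and the palm position. The sketch below is a minimal Python/numpy illustration of that idea; the fixed shoulder offset, function names, and data types are assumptions for the example, not Leap Motion’s actual implementation.

```python
import numpy as np

# Assumed fixed offset from the headset to an approximate right shoulder,
# expressed in the head's local frame (x = right, y = up, z = forward), in meters.
SHOULDER_OFFSET_LOCAL = np.array([0.15, -0.13, 0.0])

def shoulder_anchored_ray(head_pos, head_rotation, palm_pos):
    """Project a ray from an approximated shoulder position through the palm.

    head_pos:      (3,) world-space headset position
    head_rotation: (3, 3) world-space headset rotation matrix
    palm_pos:      (3,) world-space palm position
    Returns (origin, direction) of the stabilized selection ray.
    """
    # Approximate the shoulder by offsetting from the headset transform.
    shoulder = head_pos + head_rotation @ SHOULDER_OFFSET_LOCAL
    # Aim from the shoulder through the palm rather than along the fingertips,
    # so small finger jitter barely changes the ray direction at a distance.
    direction = palm_pos - shoulder
    direction = direction / np.linalg.norm(direction)
    return shoulder, direction
```

Because the lever arm from shoulder to palm is long, small hand tremors rotate the ray far less than they would if the ray were cast from the fingertip itself.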

In addition to the stabilization, larger proxy colliders were added to the distant objects, resulting in larger targets that are easier to hit. The team added some logic to the larger proxy colliders so that if the targeting raycast hits a distant object’s proxy collider, the line renderer is bent to end at that object’s center point. The result is a kind of snapping of the line renderer between zones around each target object, which again makes them much easier to select accurately.
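The snapping behavior can be sketched as a ray-versus-proxy-sphere test that returns the nearest hit object, whose center point the line renderer then bends to. The proxy radius and the dictionary layout for targets below are made-up placeholders:

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Does the ray pass within `radius` of `center`? (`direction` is unit length)"""
    to_center = center - origin
    t = max(np.dot(to_center, direction), 0.0)   # closest approach along the ray
    closest_point = origin + t * direction
    return np.linalg.norm(center - closest_point) <= radius

def pick_snap_target(origin, direction, targets, proxy_radius=0.5):
    """Return the nearest distant object whose enlarged proxy sphere the ray hits,
    so the line renderer can bend to end at that object's center point."""
    hits = [t for t in targets
            if ray_hits_sphere(origin, direction, t["center"], proxy_radius)]
    if not hits:
        return None
    return min(hits, key=lambda t: np.linalg.norm(t["center"] - origin))
```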

After deciding how selection would work, the next step was to determine when ‘selection mode’ should be active, since once the object was brought within reach, users would want to switch out of selection mode and go back to regular direct manipulation.

Since shooting a ray out of one’s hand to target something out of reach is quite an abstract interaction, the team thought about related physical metaphors or biases that could anchor this gesture. When a child wants something out of their immediate vicinity, their natural instinct is to reach out for it, extending their open hands with outstretched fingers.

Image courtesy Picture By Mom

This action was used as a basis for activating the selection mode: When the hand is outstretched beyond a certain distance from the head, and the fingers are extended, we begin raycasting for potential selection targets.
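As a sketch, the activation check reduces to two conditions: how far the hand is from the head, and whether the fingers are extended. The thresholds below are assumed values for illustration rather than the tuned numbers from the prototype.

```python
import numpy as np

REACH_DISTANCE = 0.45      # assumed: meters from the head before "reaching out" counts
EXTENSION_THRESHOLD = 0.8  # assumed: 0 = fully curled finger, 1 = fully extended

def selection_mode_active(head_pos, palm_pos, finger_extensions):
    """Enable distant-selection raycasting only while the user is reaching out
    with an open hand, mirroring the instinctive 'reaching for it' pose."""
    reaching = np.linalg.norm(palm_pos - head_pos) > REACH_DISTANCE
    hand_open = all(ext > EXTENSION_THRESHOLD for ext in finger_extensions)
    return reaching and hand_open
```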

To complete the selection interaction, a confirmation action was needed—something to mark that the hovered object is the one we want to select. Therefore, curling the fingers into a grab pose while hovering an object will select it. As the fingers curl, the hovered object and the highlight circle around it scale down slightly, mimicking a squeeze. Once fully curled, the object pops back to its original scale and the highlight circle changes color to signal a confirmed selection.
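A minimal sketch of that confirmation feedback might look like the following; the squeeze amount, the confirmation threshold, and the dictionary fields are assumptions, and the real prototype also recolors the highlight circle on confirmation.

```python
def update_hover_feedback(curl, hovered):
    """Map average finger curl (0 = open hand, 1 = closed fist) onto hover feedback.

    While the fingers curl, the hovered object shrinks slightly, as if being
    squeezed; at full curl it pops back to its original scale and is selected.
    """
    SQUEEZE = 0.15        # assumed: shrink by up to 15% while the fingers curl
    CONFIRM_CURL = 0.95   # assumed: curl level treated as a completed grab

    if curl >= CONFIRM_CURL:
        hovered["scale"] = 1.0        # pop back to the original scale
        hovered["selected"] = True    # highlight color change handled elsewhere
    else:
        hovered["scale"] = 1.0 - SQUEEZE * curl
        hovered["selected"] = False
    return hovered
```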

2. Summoning

To summon the selected object into direct manipulation range, we referred to real world gestures. A common action to bring something closer begins with a flat palm facing upwards followed by curling the fingers quickly.

At the end of the selection action, the arm is extended, palm facing away toward the distant object, with fingers curled into a grasp pose. We defined heuristics for the summon action as first checking that the palm is (within a range) facing upward. Once that’s happened, we check the curl of the fingers, using how far they’re curled to drive the animation of the object along a path toward the hand. When the fingers are fully curled the object will have animated all the way into the hand and becomes grasped.
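Expressed as code, the summoning heuristic might look like the sketch below. The palm-up tolerance and the straight-line path are assumptions; the prototype presumably animates the object along a nicer curve, but the structure is the same: gate on palm orientation, then let finger curl drive progress from anchor to hand.

```python
import numpy as np

PALM_UP_DOT = 0.6   # assumed tolerance for "palm facing upward (within a range)"

def summon_progress(palm_normal, finger_curl):
    """Return how far along its path the selected object should be (0..1).

    Only once the palm is roughly facing up does finger curl start driving the
    animation; at full curl the object has arrived and becomes grasped.
    """
    palm_up = np.dot(palm_normal, np.array([0.0, 1.0, 0.0])) > PALM_UP_DOT
    if not palm_up:
        return 0.0
    return float(np.clip(finger_curl, 0.0, 1.0))

def summoned_position(anchor_pos, hand_pos, progress):
    """Interpolate between the object's anchor and the hand (a straight line here;
    any curve parameterized by `progress` would work the same way)."""
    return (1.0 - progress) * anchor_pos + progress * hand_pos
```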

During the testing phase we found that after selecting an object—with arm extended, palm facing toward the distant object, and fingers curled into a grasp pose—many users simply flicked their wrists and turned their closed hand towards themselves, as if yanking the object towards themselves. Given our heuristics for summoning (palm facing up, then degree of finger curl driving animation), this action actually summoned the object all the way into the user’s hand immediately.

This single-motion action to select and summon was more efficient than two discrete motions, though the latter offered more control. Since our heuristics were flexible enough to allow both approaches, we left them unchanged and allowed users to choose how they wanted to interact.

3. Holding and Interacting

Once the object arrives in hand, all of the extra summoning-specific logic deactivates. It can be passed from hand to hand, placed in the world, and interacted with. As long as the object remains within arm’s reach of the user, it’s not selectable for summoning.
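That gating can be expressed as a single distance check; the arm’s-reach radius below is an assumed value.

```python
import numpy as np

ARMS_REACH = 0.65   # assumed radius (meters) around the head counted as "within reach"

def selectable_for_summoning(head_pos, object_pos):
    """Objects within arm's reach are handled by ordinary direct manipulation
    and are excluded from the distant-selection raycast."""
    return np.linalg.norm(object_pos - head_pos) > ARMS_REACH
```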

4. Returning

You’re done with this thing—now what? If the object is grabbed and held out at arm’s length (beyond a set radius from head position) a line renderer appears showing the path the object will take to return to its start position. If the object is released while this path is visible, the object automatically animates back to its anchor position.
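Per frame, the return behavior reduces to two checks, sketched below with assumed field names and an assumed radius: show the return path while a grasped object is held out beyond the radius, and send the object home if it is released while that path is visible.

```python
import numpy as np

RETURN_RADIUS = 0.6   # assumed distance from the head that arms the return

def update_return(head_pos, obj):
    """obj is assumed to carry 'grasped', 'position', 'show_return_path',
    and 'returning' fields, updated once per frame."""
    # Released while the return path was visible: animate back to the anchor.
    if not obj["grasped"] and obj["show_return_path"]:
        obj["returning"] = True
        obj["show_return_path"] = False
        return obj

    # While grasped and held out beyond the radius, display the return path.
    obj["show_return_path"] = (
        obj["grasped"]
        and np.linalg.norm(obj["position"] - head_pos) > RETURN_RADIUS
    )
    return obj
```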

Overall, this execution felt accurate and low effort. It easily enables the simplest version of summoning: selecting, summoning, and returning a single static object from an anchor position. However, it doesn’t feel very physical, since it relies heavily on gestures and the object animates along a predetermined path between two defined positions.

For this reason it might be best used for summoning non-physical objects like UI, or in applications where the user is seated with limited physical mobility and accurate point-to-point summoning is preferred.

Continued on Page 2: Telekinetic Powers »




  • Great interaction methods here. I also think that more needs to be done to reduce arm fatigue, so all of the above could be done with arms at rest and just finger/wrist flicks, with no forearm movement. In addition to that, using both hands simultaneously working together for a single interaction, or multi-tasking with separate but simultaneous actions.

  • SoftBody

    Leap Motion should go into software development instead. Their hand-tracking/interaction research is going to waste if nobody can use it because it’s tied to their hardware.

    Right now, hand-tracking is a novelty. But soon it will be an assumed feature. People will go with whatever is cheapest and most accessible. Their tech will simply be left behind.

    • They already are, in part. Their SDKs work even without Leap Motion hardware, they told me. You can use their input system even with your regular Vive or Rift.

  • This series is damn interesting

  • Lucidfeuer

    Are Leap Motion the only hardware company whose work makes sense? This is exactly the work that needs to be done in VR/AR, and a hundred times more of it.

    The whole industry’s set-up and future was compromised from the day Oculus vaporwared all their common-sense plans to integrate NimbleBit hand-tracking and 13thLab inside-out tracking from Gen1, when they were bought out by the mediocre, greedy blue-collars of Facebook, the cost and risk managers who decided that making a crap limited product was a good idea to make money…

  • Yoan Conet

    Glad someone finally did it! Thanks Leap Motion
    Imagine grabbing your enemies at a distance, releasing them in mid air, and then suddenly stopping them like puppets!
    To feel real when grabbing them, at the start everything but their center of gravity (legs, arms, head) would have to try to keep its original position, like in real life.
    And when suddenly stopping the flying enemy, everything but their center of gravity would keep its momentum, so the enemy moves like a puppet! Haha

  • No Spam

    How much do you want to bet that there’s an internal demo at Leap that uses extendo-hands to simulate Vader’s Force choke?

    “VR is a fad? I find your lack of faith…disturbing.”

  • M Stauffer

    This is very cool! Is the code available? I’d like to try it in an upcoming academic data visualization project – rather than spending the time coding it myself of course.