What We Learned from Demoing Google’s New Depth API

4 min read

Written by
Labs.Monks

Get ready for an upgrade: in early December, Google revealed its Depth API, a new ARCore capability that lets virtual objects and real-world environments interact more convincingly, making for more immersive mixed reality experiences. One demonstrable way the Depth API achieves this is occlusion: the illusion of virtual objects becoming obstructed behind real-world ones.

Convincing occlusion has historically been difficult to achieve, though Google has put together a video portraying demos of the new API that show off its features. One of those demos, which challenges the user to a virtual food fight against a levitating robot chef, was developed in collaboration with MediaMonks.

What’s exciting about the Depth API is its ability to understand the user’s surroundings with unprecedented speed and ease. “The API’s depth map is updated in real time, allowing AR apps to be aware of surfaces without complex scanning steps,” says Samuel Snider-Held, Creative Technologist at MediaMonks. This enables not only occlusion, as mentioned above, but also the mimicry of real-time physics. For our virtual food fight against the AR-rendered robot, missing is part of the fun; users can take delight in the digital splatters of food on the objects around them without worrying about cleanup.
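The occlusion idea itself is simple to sketch. The toy Python below is an illustrative model, not ARCore code: `real_depth` stands in for the API’s per-pixel depth map, and `virtual_depth` for the renderer’s distance to the virtual object at each pixel. A virtual pixel is drawn only when it sits in front of the real-world surface behind it.

```python
def occlusion_mask(real_depth, virtual_depth):
    """For each pixel, draw the virtual object only when it is
    closer to the camera than the real-world surface at that pixel."""
    return [
        [virt < real for virt, real in zip(v_row, r_row)]
        for v_row, r_row in zip(virtual_depth, real_depth)
    ]

# Hypothetical 2x2 scene: a wall 2.0 m away, except one pixel where
# a real chair sits 1.0 m away. The virtual object floats at 1.5 m,
# so the chair pixel hides it.
real = [[2.0, 1.0],
        [2.0, 2.0]]
virtual = [[1.5, 1.5],
           [1.5, 1.5]]
print(occlusion_mask(real, virtual))  # [[True, False], [True, True]]
```

Because the depth map updates in real time, this per-pixel comparison keeps working even as the user moves the phone or real objects shift in the scene.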

The Building Blocks to More Immersive AR

How does Depth API work, and what sets it apart from other methods of occlusion? “The Depth API uses an approach called ‘depth from motion,’ in which ARCore determines distances to objects by detecting variances between image frames while the camera is moving,” says Snider-Held. “The result is a high-resolution depth map that is updated in real time, allowing the device to better understand where objects are in relation to one another and how far away they are from the user.”
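The “depth from motion” approach is, at heart, stereo triangulation over time: as the camera translates, nearby points shift more between image frames than distant ones. A minimal sketch of that geometry, using hypothetical focal-length and camera-motion values rather than anything from the API, looks like this:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic triangulation: depth = focal * baseline / disparity.
    A point that shifts a lot between two camera positions is close;
    a point that barely moves is far away."""
    if disparity_px <= 0:
        raise ValueError("no parallax: point at infinity or invalid match")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 500 px focal length, 5 cm of camera motion.
print(depth_from_disparity(500, 0.05, 50))  # 0.5 m: big shift, close point
print(depth_from_disparity(500, 0.05, 5))   # 5.0 m: small shift, far point
```

Repeating this estimate for every pixel, across many frames, is what yields the kind of continuously updated depth map the API exposes.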

Depth API is software-based, requiring no new hardware for users with ARCore-enabled devices once it releases publicly. While sufficient occlusion significantly increases the verisimilitude of virtual objects, it follows a series of incremental updates that build on one another to allow for more realistic immersive experiences. Just last year—the same year ARCore debuted—Google released its Lighting Estimation API, which lights virtual objects to match the existing lighting conditions in the real-world setting, including light reflections, shadows, shading and more.

Since then, Google has introduced Cloud Anchors, a feature that allows multiple users to view the same virtual objects anchored in a specific environment. It’s the key feature powering the multiplayer mode of Pharos AR, an augmented reality experience we made in collaboration with Childish Gambino, Wolf + Rothstein, Google and Unity—which itself served as a de facto demo of what Cloud Anchors are capable of in activating entirely new mixed reality experiences.

“We have the creative and technical know-how to use these new technologies, understand why they’re important and why they’re awesome,” says Snider-Held. “We’re not scared to take on tech that’s still in its infancy, and we can do it with a quick turnaround with the backing of our creative team.”

A Streamlined Way to Map Depth

The Depth API wasn’t MediaMonks’ first experiment with occlusion or spatial awareness in augmented reality. We had previously worked with other contemporary solutions for occlusion, like 6D.ai, which creates an invisible 3D mesh of an environment. The result of that method is similar to what’s achieved with the Depth API, but the execution is different: translating an environment into a 3D mesh with 6D.ai is fastest with multiple cameras, whereas the Depth API simply measures depth in real time without the need to scan and reconstruct an entire environment.

Similarly, Tango—Google’s skunkworks project that was a sort of precursor to ARCore—enabled spatial awareness through point clouds. “When we had Tango from before, it used something similar to a Kinect depth sensor,” says Snider-Held. “You’d take the point clouds you’d get from that and reconstruct the depth, but the new Depth API uses just a single camera.”

In essence, achieving occlusion with a single camera scanning the environment in real time offers a leap in user-friendliness, and makes the capability widely available on the mobile devices users already own. “If we can occlude correctly, it makes it feel more cemented to the real world. The way that they’re doing it is interesting, with a single camera,” says Snider-Held.

Adding Depth to Creative Experiences

The Depth API is currently available to collaborators by invitation and isn’t yet ready for a public release, but it’s a great step toward rendering more believable scenes in real time. “It’s another stepping stone to reach the types of AR experiences that we’re imagining,” says Snider-Held. “We can make these projects without caveats.”

For example, a consistent challenge in rendering scenes in AR is that many users simply don’t have large enough living spaces to render large objects or expansive virtual spaces. Creative teams would get around this by rendering objects in miniature—perhaps just contained to a tabletop. “With Depth API, we can choose to only render objects within the available space,” says Snider-Held. “It lets us and our clients feel more comfortable in making these more immersive experiences.”
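That fit check can be sketched as a simple comparison, assuming a simplified reading of the free distance along the placement direction (the names and the safety margin below are hypothetical, not part of the API):

```python
def fits_in_space(free_depth_m, object_depth_m, margin_m=0.1):
    """Decide whether a virtual object of a given depth fits in the
    free space the depth map reports, with a small safety margin."""
    return free_depth_m >= object_depth_m + margin_m

# A 1.8 m virtual statue with 2.5 m of free floor in front of it: fits.
print(fits_in_space(2.5, 1.8))  # True
# The same statue when the wall is only 1.5 m away: fall back to
# a smaller render instead.
print(fits_in_space(1.5, 1.8))  # False
```

The point is that the depth map gives the experience a measured answer to “how much room is there?”, rather than forcing designers to assume a worst-case tabletop.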

As brands anticipate how they might use some of the newest features of fast-evolving mixed reality technology, they stand to benefit from a creative and production partner that can bring ideas to the table, quickly implementing them with awareness of the current opportunities and challenges. “We bring creative thinking to the technology, with what we can do given our technical expertise but also with things like concept art, animation and more,” says Snider-Held. “We don’t shy away from new tech, and not only do we understand it, but we can truly make something fun and inventive to demonstrate why people would want it.”
