Tango & ARCore: Reality Computing Technology by Google

Insight categories: Augmented & Virtual Reality

What are Tango and ARCore? In 2014, Google released its new Android smartphone operating system, dubbed Lollipop (or version 5.0). This marked the beginning of a long journey towards creating a fully integrated mobile computing experience, where smartphones would become more powerful and valuable tools.

In the same year, Google began developing a new computer vision platform called Tango. The software used a device's cameras to track objects in real time, enabling the device to map out its surroundings and recognize nearby items. This helped it build a 3D model of the environment.

After testing Tango’s limits, Google deprecated Tango to focus on ARCore. In this blog, you’ll learn more about Tango’s components, concepts, and use cases that were the foundation of ARCore development, then get to explore ARCore technology and its capabilities.

What was Project Tango?

Tango was an augmented reality computing technology platform developed by Google. It used computer vision to let smartphones detect their position relative to the world around them without relying on GPS or other external signals.

Recommended reading: Interacting with the Virtual — A Mix of Realities

Tango Components

All Tango-enabled Android devices had the following components:

Motion tracking camera: Tango used a wide-angle motion-tracking camera (sometimes referred to as the “fisheye” lens) to add visual information, which helped it estimate rotation and linear acceleration more accurately.

3D Depth Sensing: To implement depth perception, Tango devices used standard depth technologies, including structured light, time of flight, and stereo. Structured light and time of flight both require an infrared (IR) projector and an IR sensor.

Accelerometer, barometer, and gyroscope: The accelerometer measured linear acceleration, the barometer estimated altitude from air pressure, and the gyroscope measured rotation; together, their readings fed motion tracking.

Ambient light sensor (ALS): The ALS approximated the human eye's response to light intensity under various lighting conditions and through various attenuation materials.

Key Concepts of Tango

Motion Tracking

Motion tracking allowed a device to understand its motion as it moved through an area. The Tango APIs provided the position and orientation of the user's device in full six degrees of freedom (6DoF).

Tango implemented motion tracking using visual-inertial odometry, or VIO, to estimate where a device was relative to where it started.

Tango’s visual-inertial odometry supplemented visual odometry with inertial motion sensors capable of tracking a device’s rotation and acceleration. This allowed a Tango device to estimate its orientation and movement within a 3D space with even greater accuracy. Unlike GPS, motion tracking with VIO worked indoors.
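As a rough illustration of the idea (not Tango's actual implementation), the sketch below dead-reckons a simplified 2D pose from inertial readings between camera frames, then blends in a visual odometry estimate to correct accumulated drift. All function names, sample rates, and sensor values here are hypothetical:

```python
import math

def integrate_imu(state, gyro_z, accel_fwd, dt):
    """Dead-reckon a simplified 2D state (x, y, heading, speed)
    from a gyroscope rotation rate and forward acceleration."""
    x, y, heading, speed = state
    heading += gyro_z * dt               # gyroscope: rotation rate (rad/s)
    speed += accel_fwd * dt              # accelerometer: acceleration (m/s^2)
    x += speed * math.cos(heading) * dt  # advance position along heading
    y += speed * math.sin(heading) * dt
    return (x, y, heading, speed)

def fuse_visual(state, visual_xyh, weight=0.3):
    """Blend a visual odometry (x, y, heading) estimate into the
    inertial state to correct accumulated drift."""
    x, y, heading, speed = state
    vx, vy, vh = visual_xyh
    blend = lambda a, b: (1 - weight) * a + weight * b
    return (blend(x, vx), blend(y, vy), blend(heading, vh), speed)

state = (0.0, 0.0, 0.0, 0.0)
for _ in range(100):                     # 100 IMU samples between camera frames
    state = integrate_imu(state, gyro_z=0.01, accel_fwd=0.5, dt=0.005)
state = fuse_visual(state, visual_xyh=(0.05, 0.0, 0.005))
```

A real VIO system tracks full 3D rotation and fuses the two sources with a filter (for example an extended Kalman filter) rather than a fixed blend weight, but the division of labor is the same: inertial sensors fill the gaps between camera frames, and visual features keep the inertial drift in check.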

Area Learning

Area Learning allowed the device to see and remember the key visual features of a physical space (edges, corners, and other unique features) so that it could recognize that area again later.

To do this, it stored a mathematical description of the visual features it had identified inside a searchable index on the device. This allowed the device to quickly match what it currently saw against what it had seen before, without any cloud services.
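A toy sketch of that matching step (Tango's actual index format was never published; the descriptors and feature names below are made up): store compact binary descriptors of learned features on the device, then match what the camera currently sees by Hamming distance.

```python
def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

# "Learned" area: feature descriptors saved on-device in a prior session.
area_index = {
    "doorway_corner": 0b10110010,
    "window_edge":    0b01101100,
    "poster_corner":  0b11100001,
}

def recognize(descriptor, index, max_distance=2):
    """Return the best-matching stored feature, or None if nothing is close."""
    best = min(index, key=lambda name: hamming(descriptor, index[name]))
    return best if hamming(descriptor, index[best]) <= max_distance else None

print(recognize(0b10110011, area_index))  # prints "doorway_corner"
```

Production systems use much longer descriptors (e.g. 256-bit) and approximate nearest-neighbor search, but the principle is the same: recognition is a lookup in a local index, which is why it worked without a network connection.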

Depth Perception

Depth perception gave an application the ability to understand the distance to objects in the real world.

Devices were designed to work best indoors at moderate distances (0.5 to 4 meters). This configuration gave proper depth at a distance while balancing power requirements for IR illumination and depth processing.

First, an infrared projector cast a dot pattern onto the contours of the environment, and the 3D camera observed the reflected dots. As these dots of light traveled farther from their source (the phone), they grew larger.

An algorithm measured the size of each dot, and the varying sizes indicated the dots' relative distance from the user, which was then interpreted as a depth measurement. The resulting set of 3D points, known as a point cloud, allowed Tango to understand the 3D geometry of your space.

Tango APIs provided a function to get data from a point cloud. This format gave (x, y, z) coordinates for many points in the scene. Each dimension was a floating-point value recording the position of each point in meters in the coordinate frame of the depth-sensing camera.
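A small sketch of consuming data in the format described above (the buffer values are invented; this is not the Tango API itself): unpack the flat float buffer into (x, y, z) points in meters, discard points outside the 0.5 to 4 meter working range, and find the nearest obstacle.

```python
import math

# Hypothetical point-cloud buffer: three floats (x, y, z) per point,
# in meters, in the depth camera's coordinate frame.
buffer = [0.1, 0.0, 0.8,    # a point roughly 0.8 m ahead
          0.5, 0.2, 2.0,    # a point about 2.1 m away
          0.0, 0.0, 6.0]    # beyond the ~4 m reliable working range

# Group the flat buffer into (x, y, z) triples.
points = [tuple(buffer[i:i + 3]) for i in range(0, len(buffer), 3)]

def dist(p):
    """Distance from the camera origin to point p."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)

# Keep only points inside the sensor's reliable 0.5-4 m range.
usable = [p for p in points if 0.5 <= dist(p) <= 4.0]
nearest = min(dist(p) for p in usable)
print(f"{len(usable)} usable points, nearest at {nearest:.2f} m")
# prints "2 usable points, nearest at 0.81 m"
```

Filtering by range first matters in practice: readings outside the sensor's designed envelope are noisy, so an app that placed virtual objects on the nearest surface would otherwise anchor them to phantom points.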

Tango API Overview

As for Tango’s application development stack, Tango Service was an Android service running in a standalone process. It used standard Android inter-process communication to support Java, Unity, and C apps.

Tango Service provided the core technologies (motion tracking, area learning, and depth perception), and applications connected to Tango Service through the APIs.

Use Cases for Tango

Indoor Navigation

Tango devices could navigate a shopping mall or find a specific item in a store when that information was available.

Gaming

Using Tango’s motion tracking capabilities, game developers could experiment with 6DoF to create immersive 3D AR gaming experiences, transform the home into a game level, or make magic windows into virtual and augmented environments.

Physical space measurement and 3D mapping

Using their built-in sensors, Tango-enabled devices were engineered to sense and capture the 3D measurements of a room, supporting exciting new use cases such as real-time modeling of interior spaces and 3D visualization for shopping and interior design.

Marker detection with AR

A Tango device could search for a marker, usually a black-and-white barcode or a user-defined image. Once the marker was found, a 3D object was superimposed on it. Because the phone’s camera tracked the device’s position relative to the marker, the user could walk around the marker and view the 3D object from all angles.
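The placement step amounts to a rigid transform: each vertex of the virtual object is rotated and translated by the marker's estimated pose so it appears anchored to the marker. A minimal sketch, simplified to a yaw-only rotation (real AR poses use a full 3D rotation, typically a quaternion), with illustrative numbers:

```python
import math

def place_on_marker(vertex, yaw, translation):
    """Transform an object-space vertex into camera space using the
    marker's pose (yaw rotation about the y axis, plus translation)."""
    x, y, z = vertex
    c, s = math.cos(yaw), math.sin(yaw)
    rx = c * x - s * z            # rotate the vertex around the y axis
    rz = s * x + c * z
    tx, ty, tz = translation
    return (rx + tx, y + ty, rz + tz)

# One corner of a virtual cube, placed on a marker detected 1.5 m ahead
# and rotated 90 degrees relative to the camera.
cube_corner = (0.1, 0.1, 0.1)
placed = place_on_marker(cube_corner, yaw=math.pi / 2, translation=(0, 0, 1.5))
```

As the user walks around, the tracked pose (yaw and translation here) updates every frame while the object-space vertices stay fixed, which is what keeps the object visually glued to the marker.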

Now let’s discuss Google’s ARCore and its developments with AR technology.

What’s Google’s ARCore?

Google’s augmented reality platform, ARCore, lets developers create apps for Android devices that use the phone’s camera to overlay virtual objects onto real-world environments.

The company has been working on a new version of its ARCore app, allowing developers to build their own 3D models and place them in real-world locations. This is an important step forward because it means anyone can now create AR experiences without relying on prebuilt assets from third parties.

Recommended reading: Impact of Augmented and Virtual Reality on Retail and ECommerce Industry

Recent ARCore Updates

The latest update also improves the way ARCore works. For example, nearby users can now share the same AR scene, each viewing it through their own device’s camera. You can also add more than one object to a scene at once, making it easier to create complex interactions between multiple elements.

The new features come as part of a larger effort by Google to make ARCore more accessible to developers who want to create their own experiences. In addition to making it easier to create 3D models, Google is also adding iOS support so that shared AR experiences can span both platforms.

ARCore is still early in development, but we expect to see more capabilities with its AR technology soon.

Final Takeaways

While Google made numerous advances with Project Tango and then ARCore, there’s still much more to expect from ARCore technology.

Google's augmented reality technology has been around for years, but it wasn't until recently that developers truly started taking advantage of its capabilities.

With ARCore, you can use your phone to see virtual objects overlaid in real-world environments. This means you can view 3D models of buildings, landmarks, and furniture right from your home.

Best of all, it works across supported Android devices, including phones and tablets. There are numerous possibilities for integrating ARCore technology into the way we interact with and understand the world around us.

Want to explore the possibilities for AR in your own business? Contact info@globallogic.com and let’s talk.


Author



Shilpi Das
