Archives

The importance of usability cannot be overstated. Users expect websites and apps to be usable and intuitive. If they encounter difficulties using them, they'll likely abandon their attempts to complete a task.

To help solve this, World Usability Day takes place annually to raise awareness about usability issues in software design. The goal of this event is to encourage developers and designers to think about how users interact with websites and applications. This can include making sure buttons are big enough for easy clicking, using color contrast to help users read text, and avoiding distracting animations.

In this post, we’ll dig into usability a bit deeper and explore why it matters to consumers and the companies who design, develop, and maintain products for them.

What is Usability?

Usability is the degree to which a system or product can be quickly learned and operated by specified user groups under stated conditions. The goal is to create intuitive, efficient, effective, and valuable systems.

There are five quality components to usability, according to this definition from the Nielsen Norman Group: 

  • Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
  • Efficiency: Once users learn the design, how quickly can they perform tasks?
  • Memorability: When users return to the design after not using it, how easily can they reestablish proficiency?
  • Errors: How many errors do users make? How severe are these errors? How easily can they recover from them?
  • Satisfaction: How pleasant is it to use the design?

We measure usability using observable and quantifiable metrics:

  • Effectiveness: The accuracy and completeness with which users achieve specified goals
  • Efficiency: The resources expended in relation to the accuracy and completeness with which users achieve goals
  • Satisfaction: The comfort and acceptability of use

Usability enables developers to create better products based on users’ objective and subjective experiences.

Recommended reading: Top 10 UX Design Principles for Creating Successful Products and Experiences - Method

Why is Usability Important?

When we meet usability standards, the product’s interface is transparent, and the cognitive load caused by the interface is low. This allows the user to focus on the task, be less error-prone, make decisions quickly, and feel more satisfied.

Usability is important to end users and the companies who develop products for them as it impacts revenue, loyalty, brand reputation, and more.

 

A happy user will continue using the product and be more inclined to recommend it to their peers. This will increase the user base and user loyalty, positively affecting revenue. So from a business point of view, usability is not a cost — it’s an investment.

How Can We Improve Usability?

Usability is a process. It’s involved in each stage of the development lifecycle. We recommend that you start assessing and measuring usability as early as possible. This approach enables you to discover errors sooner, making more room to iterate and test the solutions and improvements.

While there are several ways to improve usability (depending on the process stage), user testing is the most basic and valuable approach. It’s not necessarily a costly or lengthy process. It can be quick and inexpensive, suitable for any company, product, or stage. There are four simple steps to improve usability:

  1. Acquire representative users.
  2. Ask the users to perform representative tasks with the design.
  3. Observe what the users do, where they succeed, and where they have difficulties.
  4. Analyze the data, then iterate until the results meet the predefined usability KPIs.

To create a valuable user experience, you must observe and interact with users, focusing on their needs, expectations, and skills.

Recommended reading: Is Kanzi Really Transforming UI Design?

Creating an Excellent User Experience 

At GlobalLogic, we strive to create highly usable products. With a user experience team of more than 140 experts across six countries, we can improve the usability of existing products and incorporate usability assessments and testing as part of our user-centered design approach to product development.

For example, when a major Latin American cable TV provider contacted us so we could assess the usability of its upcoming on-demand video service, the first thing we did was organize a series of user research and user testing activities. We asked current customers to test the client’s potential product to determine three main usability metrics: task success rate, user error rate, and satisfaction (using two common questionnaires, System Usability Scale and Net Promoter Score). 
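As a side note on the satisfaction measure, the System Usability Scale mentioned above has a simple, well-defined scoring rule: ten items rated 1 to 5, with odd items contributing (score - 1), even items contributing (5 - score), and the sum multiplied by 2.5 to yield a 0-100 score. A minimal sketch of that calculation, using made-up responses, might look like this:

    /**
     * Standard System Usability Scale (SUS) scoring: ten items rated 1-5,
     * odd items contribute (score - 1), even items contribute (5 - score),
     * and the sum is multiplied by 2.5 to yield a 0-100 score.
     */
    public final class SusScore {

        /** @param responses exactly ten answers, each between 1 and 5 */
        public static double compute(int[] responses) {
            if (responses.length != 10) {
                throw new IllegalArgumentException("SUS requires exactly 10 responses");
            }
            int sum = 0;
            for (int i = 0; i < 10; i++) {
                // Items are 1-indexed in the questionnaire, so an even array index is an odd item.
                sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
            }
            return sum * 2.5;
        }

        public static void main(String[] args) {
            // Hypothetical respondent data, for illustration only.
            System.out.println(compute(new int[] {4, 2, 5, 1, 4, 2, 5, 2, 4, 1})); // 85.0
        }
    }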

The results were not ideal: high error rates, low satisfaction, and a low Net Promoter Score. We recommended that the client not release the product to market before working on and testing new solutions.

The Result

Once the client accepted our recommendation, we invited their customers to discuss how they consume media. We also visited their homes and performed onsite interviews and observations. Based on what we learned through these exercises, we developed a first round of wireframes and prototypes that the same users then tested. Through these sessions, we were able to significantly improve the product’s usability KPIs.

When the client finally launched the new service, its users said they enjoyed its flexibility and ease of use. Not only did the service function efficiently, but it was intuitive and well-designed — proving that usability plays a huge role in successful products. 

Moreover, the client saved millions of dollars by developing the right product for a fast-paced market with strong competitors and newcomers.

We live in a world where technology has become ubiquitous, but many products still fail to meet users' expectations. This is why it's more important than ever to spend time researching how to perfect your user experience to keep up with technological advances and save time and money in the long run.

Learn more about World Usability Day here.

Enjoy these helpful resources:

According to Deloitte, there will be 470 million connected vehicles on highways worldwide by 2025. These connected vehicles provide opportunities and have a higher cybersecurity risk than any other connected devices; even the FBI had to make a statement about it. 

A typical new model car runs over 100 million lines of code and has up to 100 electronic control units (ECUs) and millions of endpoints. The stakes are high, too, considering the safety implications some of these security issues may cause. Supporting satellite, Bluetooth, telematics and other types of connectivity while protecting drivers and public safety is essential, and completely reliant on vehicle design and manufacturing.

Vehicle Cybersecurity Regulations for Manufacturers to Know

Considering this, the UNECE released new vehicle cybersecurity regulations in the middle of 2021 (UN R155 and UN R156), and ISO and SAE jointly published ISO/SAE 21434. These regulations and standards laid the foundation of cybersecurity in connected vehicles. While they are complex, the security considerations can be classified into three main categories:

  1. In-vehicle cybersecurity: Cybersecurity aspects within the vehicle, such as OBD-II hacking, key fob hacking, theft of personal data, remote takeover, malware, etc. 
  2. Network cybersecurity: Cybersecurity aspects of vehicle network connectivity. This covers most general network threats such as DoS, Syn-flood, etc.
  3. Backend cybersecurity: Cybersecurity aspects of backend systems, which are typically the same as any cloud security aspects. Connected vehicles exchange information and data with the backend systems generally hosted on the cloud. These backend systems perform various tasks such as vehicle software updates, navigation, alerts, etc.

Recommended reading: How Smart Cars Will Change Cityscapes

Examples of Cybersecurity for Automotives Across Threat Categories

Each threat category requires different solutions and skills from the vehicle manufacturer. Below are some of the solutions required for each of the categories above.

In-vehicle cybersecurity 

  • Hardware-based crypto-accelerators and secure key storage
  • JTAG memory and register access restriction
  • Firmware signing (see the sketch after this list)
  • Electronic Control Unit (ECU) authentication
  • Anti-tampering and side channel attack protections
  • SSH or secured access
  • Secure key storage
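As one concrete illustration of the firmware-signing item above, the sketch below shows the general idea: verify a detached signature on the update image against the OEM's public key before flashing. It uses the standard Java security APIs; the file handling and key distribution are simplified assumptions rather than any particular manufacturer's implementation.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.KeyFactory;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.security.spec.X509EncodedKeySpec;

    /** Verifies a firmware image against a detached RSA signature before flashing. */
    public final class FirmwareVerifier {

        public static boolean isTrusted(Path firmware, Path signatureFile, byte[] derEncodedPublicKey)
                throws Exception {
            PublicKey key = KeyFactory.getInstance("RSA")
                    .generatePublic(new X509EncodedKeySpec(derEncodedPublicKey));

            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(key);
            verifier.update(Files.readAllBytes(firmware));

            // Only accept the update if the signature matches the OEM's public key.
            return verifier.verify(Files.readAllBytes(signatureFile));
        }
    }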

Network cybersecurity 

  • Encrypted and secure communication
  • IDS/IPS to track potential packet floods
  • Network segmentation
  • Virtual private network (VPN)
  • Firewall

Backend cybersecurity

  • Data loss prevention and data integrity strategy
  • OTA package encryption and signature
  • Secure images
  • Activity and log monitoring

Our team works with leading connected vehicle manufacturers and OEMs to build secure connected vehicles across all three categories. We help our clients with the cross-industry best practices required to develop solutions such as in-vehicle infotainment systems, ECUs, and advanced driver assistance systems without compromise on security or reliability.

Learn more: 

Smart cars are becoming more common all the time. Today, there are over 31 million cars worldwide with at least some level of automation, offering drivers a safer driving experience, improved fuel efficiency, and better parking options. 

They also improve cityscapes through their ability to communicate with other vehicles and infrastructure. In this way, smart cars are changing the way cities function – and the experiences people have within them. 

But what exactly is a smart car? And how will smart cars change our cities? 

What is a Smart Car?

Smart cars are equipped with advanced technologies such as sensors, cameras, GPS, and wireless communication devices. These features allow them to interact with each other and with road infrastructure, enabling smart cars to act as a conduit for useful information that helps drivers respond to traffic conditions.

 

Smart cars are improving safety, reducing congestion, and increasing mobility. As a result, these vehicles are helping to transform urban landscapes.

The rise of autonomous driving means fewer drivers will be needed to operate public transportation systems. In addition, traffic congestion will decrease significantly due to fewer traffic accidents caused by distracted drivers.

Recommended reading: Introduction to Autonomous Driving [Whitepaper]

A Smart City is One Where People Know the Value of Data

Consulting company PwC coined the term “data-driven city” to describe a smart city. The instantaneous collection, transmission, and analysis of information circulating in an urban space allow municipalities to radically change their approach to transportation management. It also impacts urban resource management (e.g., water, energy, etc.), safety improvements, environmental impacts, medicine production, and management of education and the other city services available for residents.

How is this being put into practice? New York City has a unified data collection and analysis system that feeds several effective city solutions, including a fire prediction system, garbage removal, and recycling. It also includes a health information system that collects data from citizens’ wearable devices (such as fitness trackers) and transfers it to medical institutions.

Another example is in Barcelona, where hundreds of sensors collect information on traffic, noise, electricity, and lights through an integrated system called Sentilo, which is in the public domain. This means that city authorities can make effective management decisions, and third-party businesses can develop additional services for residents.

Technological Breakthroughs and Cities of the Future

The IEEE published research in 2017 that defines a whole range of technological trends that will influence the cities of the future, including:

Internet of Things

Smart sensors are enabling the gathering of more information from the environment. According to global forecasts, there will be 75.4 billion connected devices by 2025. IoT technology allows real-time monitoring of all city life aspects: traffic speed, outdoor security, resource consumption, etc.

Cloud Technologies

With the amount of generated data growing, there will be a need for rapid and qualitative processing. Cloud application systems will become the brain of a city, helping city managers make effective decisions (e.g., traffic regulation) based on the analysis of terabytes of data.

Recommended reading: Cloud-Driven Innovations: What Comes Next?

Open Data

By providing easier access to information, city authorities not only make communication with residents more transparent but also create the basis for new businesses, for example, developing mobile applications to monitor the environmental situation in the city.

At the same time, a smart city is a complex ecosystem that unites technological as well as human and institutional aspects. The digital transformation of cities can only happen with active involvement from municipal authorities, businesses, the local IT industry, and the citizens themselves.

Communication Between Cars in the City of the Future

The digital transformation of the automotive industry is yet another milestone in smart city development. The growing popularity of electric cars — as well as experiments with uncrewed vehicles by Tesla, Google, Mercedes, and other companies — is perhaps one of the most discussed technological topics in the media.

In the cities of the future, cars will not disappear. However, the volume of personal cars on city streets will decrease gradually as rideshare apps like Uber, car-sharing services, and autonomous vehicles for carpooling replace them. Car design will change radically, and the experience of being a passenger in an autonomous vehicle will become more comfortable. 

Unlike cars with internal combustion engines, electric cars will not pollute the city and create noise. Self-driving vehicles will save citizens from an excessive number of unsightly parking lots near the sidewalks, as there will be no need to leave cars near the office.

For several years, GlobalLogic has been developing technologies for smart cities in cooperation with automotive corporations and telecom operators. Based on our expertise, we imagine how a city might develop in 5-10 years and then experiment with related technologies. One of our predictions is that all cars will eventually be able to communicate with each other – and we already know how this will work in practice.

Communication between smart cars and smart road infrastructure will make the road a safer place to drive. In critical situations, each second matters, so the sooner drivers receive the information they need, the more likely they will avoid an accident. Communication technologies between cars will allow the driver to know about everything happening around them from a specific distance.

Recommended reading: User Experience as a Key Factor in the Automotive Industry

Incorporating Technology

So, how is this realized technologically? Cars will be able to communicate through the vehicle-to-everything (V2X) protocol, creating a powerful Wi-Fi network with instant data transfer within 1 km around themselves. 
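For a rough feel of what such an exchange could look like at the application layer, here is a deliberately simplified sketch of a vehicle broadcasting a hazard alert to nearby listeners. The message format, port, and coordinates are invented for illustration; real V2X stacks use standardized message sets (such as SAE J2735) rather than ad-hoc JSON over a local broadcast.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    /**
     * Toy broadcast of a hazard alert to nearby vehicles over UDP.
     * Production V2X (DSRC / C-V2X) uses standardized message sets,
     * not JSON over a local network; this only illustrates the idea
     * of pushing time-critical alerts to everything within range.
     */
    public final class HazardBroadcaster {

        public static void main(String[] args) throws Exception {
            String alert = "{\"type\":\"LANE_CHANGE_WARNING\",\"lane\":2,\"speedKmh\":118,"
                    + "\"lat\":50.4501,\"lon\":30.5234}";
            byte[] payload = alert.getBytes(StandardCharsets.UTF_8);

            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                // 255.255.255.255 reaches every listener on the local segment.
                DatagramPacket packet = new DatagramPacket(
                        payload, payload.length, InetAddress.getByName("255.255.255.255"), 4433);
                socket.send(packet);
            }
        }
    }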

How is this realized in practice? Using an interactive simulation environment that we developed, we tested a variety of application cases, such as:

  • The driver wants to change lanes but immediately receives an alert about a car speeding in that lane. This notification prevents drivers from making dangerous maneuvers.
  • A smart road infrastructure receives traffic data from cars and can create an alternate route for the driver. 
  • An ambulance sends a signal about driving in a certain lane. Then all drivers receive the notification to make room for the ambulance to pass. Afterward, a smart traffic light switches to green to let the ambulance pass safely through the intersection.
  • A car with a punctured tire can signal assistance to all passing cars. If the driver of a passing car cannot help, the car transmits the signal to the next vehicle.

Past Innovation

How fast will smart cars be able to communicate? And what will happen to cars that cannot? Let’s discuss the history of mobile phones. 

When they first appeared, it seemed expensive and rather pointless to purchase them since few people had a phone that you could call. But over time, more and more people became mobile users, and mobile phones became more affordable. Now, they are our main means of communication.

The same future is likely to follow for automotive communication technologies. 

First, city authorities will encourage residents to install the necessary equipment and software for the car. Then cars will come off the production line with already-integrated communicative capabilities. 

Future Innovation

At GlobalLogic, we’ve noticed numerous automotive trends and innovations that will change how we approach creating vehicles.

The widespread integration of AUTOSAR (AUTomotive Open System ARchitecture), the personalization of cars through subscription models, autonomous vehicles, and augmented reality are just a few examples of the factors and trends influencing how smart cars will change our cityscapes in the years to come.

These advances will soon make fully autonomous cars a reality, helping us to create smarter cities and safer roads for drivers, too.

Learn more:

Mobile apps play a crucial role in our lives, providing us access to information, entertainment, health tracking, financial services, and more. As such, they have become indispensable tools in our daily routines. Given that Android accounts for 71% of mobile OS market share worldwide (as of Q4 2022), it’s a must for app developers to tailor their apps to these users.

But creating these apps isn’t always straightforward. Developers face challenges ranging from technical issues to changes in consumer behaviors to complex UI design. They are constantly seeking automation solutions to streamline their workflows, creating efficiencies that enable them to focus on the more creative and complex aspects of app development.

This blog reviews several open-source frameworks that Android app developers can use to significantly accelerate their time-to-market. These testing frameworks do so by automating crucial but repetitive tasks, including functional (acceptance) and regression testing.

What’s an Application Framework?

An application framework is a set of tools used to build applications for mobile devices such as smartphones and tablets. The frameworks include libraries, application programming interfaces (APIs), and software development kits (SDKs). These frameworks allow developers to focus on building apps rather than writing everything from scratch.

An Android app framework can provide developers with tools for building apps faster and easier. These include support for Google Play Services, allowing users to access location, maps, and other helpful information inside the application. Developers also benefit from Android Studio, which makes it easy to build, test, debug, and deploy applications.

Recommended reading: Choosing the Right Cross-Platform Framework for Mobile Development

Android Studio

Android Studio is the official IDE for developing apps and games for Android devices. It allows developers to create applications using the Java and Kotlin programming languages. Its main features include:

  • Create, run, debug, build, package, test, deploy, and monitor your app on the device or emulator.
  • Use the debugger to step through code while it’s running in the IDE.
  • Support for multiple languages (Java, Kotlin, and C/C++ via the NDK).
  • A graphical layout editor that lets you design layouts for screens in XML files.
  • An integrated development environment (IDE) built on JetBrains’ IntelliJ IDEA platform.
  • A library manager that helps you manage third-party libraries.
  • A project management system that helps you organize projects into folders and subfolders.
  • A file explorer that helps you navigate between different parts of an Android project.
  • Source control integration for Git, Subversion (SVN), Mercurial, Perforce, and other version control systems.
  • A database inspector that helps you explore the SQLite databases created by your app.

Android Instrumentation

Now let’s talk about some Android frameworks developers can utilize. Below are three frameworks that belong to the Android instrumentation testing category, as shown in the family tree of test frameworks below.

Android frameworks

Robotium

The Robotium Android test framework offers full support for hybrid and mobile web applications, as well as native apps written using the Android SDK. It’s an instrumentation-based open test framework maintained by the open-source community. The Robotium JAR is integrated into the IDE, and developers write test scripts in Java with Android JUnit 4. Learn more on GitHub.
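A typical Robotium test might look like the sketch below. The LoginActivity class, on-screen labels, and credentials are hypothetical placeholders, and the androidx test rule used here is just one of several ways to obtain the activity under test.

    import static org.junit.Assert.assertTrue;

    import androidx.test.ext.junit.runners.AndroidJUnit4;
    import androidx.test.platform.app.InstrumentationRegistry;
    import androidx.test.rule.ActivityTestRule;
    import com.robotium.solo.Solo;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    /** Robotium test sketch; LoginActivity and the on-screen labels are hypothetical. */
    @RunWith(AndroidJUnit4.class)
    public class LoginTest {

        @Rule
        public ActivityTestRule<LoginActivity> rule = new ActivityTestRule<>(LoginActivity.class);

        private Solo solo;

        @Before
        public void setUp() {
            solo = new Solo(InstrumentationRegistry.getInstrumentation(), rule.getActivity());
        }

        @Test
        public void signsInWithValidCredentials() {
            solo.enterText(0, "demo@example.com");    // first EditText on screen
            solo.enterText(1, "secret");              // second EditText
            solo.clickOnButton("Sign in");
            assertTrue(solo.waitForText("Welcome"));  // next screen should greet the user
        }

        @After
        public void tearDown() {
            solo.finishOpenedActivities();
        }
    }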

Espresso

Espresso is an Android test automation framework used to test native applications. Google released activity-specific actions that can be tested using Espresso, which concentrates only on user interface testing from a unit-testing point of view (a short example follows the list below).

The working mechanism behind Espresso is as follows:

  • ViewMatchers – allow developers to find views in the current view hierarchy
  • ViewActions – allow developers to perform actions on views (click, swipe, etc.)
  • ViewAssertions – allow developers to assert the state of a view (true or false)
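Tying those three pieces together, a minimal Espresso test might look like the following sketch; MainActivity and the view IDs are hypothetical.

    import static androidx.test.espresso.Espresso.onView;
    import static androidx.test.espresso.action.ViewActions.click;
    import static androidx.test.espresso.action.ViewActions.typeText;
    import static androidx.test.espresso.assertion.ViewAssertions.matches;
    import static androidx.test.espresso.matcher.ViewMatchers.withId;
    import static androidx.test.espresso.matcher.ViewMatchers.withText;

    import androidx.test.ext.junit.rules.ActivityScenarioRule;
    import androidx.test.ext.junit.runners.AndroidJUnit4;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    /** Espresso test sketch; MainActivity and the R.id values are hypothetical. */
    @RunWith(AndroidJUnit4.class)
    public class GreetingTest {

        @Rule
        public ActivityScenarioRule<MainActivity> rule = new ActivityScenarioRule<>(MainActivity.class);

        @Test
        public void greetsUserAfterSubmit() {
            onView(withId(R.id.name_field)).perform(typeText("Ada"));             // ViewMatcher + ViewAction
            onView(withId(R.id.submit_button)).perform(click());
            onView(withId(R.id.greeting)).check(matches(withText("Hello, Ada"))); // ViewAssertion
        }
    }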

Calabash

Calabash, a behavior-driven development tool, is an open test framework that automates testing for Android mobile applications based on native, hybrid, and mobile web code. It works on Cucumber’s Gherkin syntax, integrated with the Calabash gem, to execute test scripts written as feature files.

It’s an open-source framework available on GitHub. You can run the test scripts on multiple emulators or real devices connected to a single machine, and test steps written in plain English trigger the corresponding actions in the mobile application when executed.

UI Automator

The UI Automator testing framework provides a set of APIs to build UI tests that perform interactions on user apps and system apps. The UI Automator APIs allow you to perform operations such as opening the Settings menu or the app launcher in a test device.

The UI Automator testing framework is well-suited for writing black box automated tests, where the test code does not rely on internal implementation details of the target app. It interacts directly with the UI elements of the mobile application, triggering user actions such as entering text in a text box, clicking, swiping, dragging, and multi-touch gestures.
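A short UI Automator sketch of this device-level, black-box style is shown below; the quick-settings tile label is an assumption, since it varies by device and locale.

    import static org.junit.Assert.assertTrue;

    import androidx.test.ext.junit.runners.AndroidJUnit4;
    import androidx.test.platform.app.InstrumentationRegistry;
    import androidx.test.uiautomator.UiDevice;
    import androidx.test.uiautomator.UiObject;
    import androidx.test.uiautomator.UiSelector;
    import org.junit.Before;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    /** Black-box UI Automator sketch: drives system UI, not one app's internals. */
    @RunWith(AndroidJUnit4.class)
    public class DeviceLevelTest {

        private UiDevice device;

        @Before
        public void setUp() {
            device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
        }

        @Test
        public void opensQuickSettings() throws Exception {
            device.pressHome();            // start from the launcher, outside any app under test
            device.openQuickSettings();    // pull down the system quick-settings shade
            // The tile label is device/locale dependent; "Wi-Fi" is an assumption here.
            UiObject wifiTile = device.findObject(new UiSelector().textContains("Wi-Fi"));
            assertTrue(wifiTile.exists());
        }
    }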

Appium

Appium is an open-source tool for automating native, mobile web, and hybrid applications on Android platforms.

As its SourceForge description explains, Appium aims to automate any mobile app from any language and test framework, with full access to back-end APIs and databases from test code. You can write tests with your favorite dev tools in any programming language that has a Selenium WebDriver client library.

Appium test scripts written in an IDE communicate with the Appium server (a Node.js server) over the configured IP address and port. The server then passes the requests to mobile devices or emulators via UI Automator in JSON format.

All the UI elements associated with the mobile application can be controlled using the Appium client, which is derived from Selenium. The diagram below shows the Appium workflow:

A diagram illustrating the open source tool Appium workflow.
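For reference, a bare-bones Appium client script written against the Java client might look like the sketch below; the server URL, device name, package, activity, and element ID are placeholder assumptions.

    import java.net.URL;

    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.By;
    import org.openqa.selenium.remote.DesiredCapabilities;

    /**
     * Appium client sketch: the script talks to a locally running Appium server,
     * which drives the device through UI Automator. The package, activity, and
     * element ID below are hypothetical placeholders.
     */
    public final class AppiumSmokeTest {

        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("automationName", "UiAutomator2");
            caps.setCapability("deviceName", "emulator-5554");
            caps.setCapability("appPackage", "com.example.app");
            caps.setCapability("appActivity", ".MainActivity");

            // Default local Appium 1.x endpoint; adjust the host/port for your setup.
            AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                driver.findElement(By.id("com.example.app:id/login_button")).click();
            } finally {
                driver.quit();
            }
        }
    }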

Comparison Matrix

A comparison matrix is a tool for comparing options side by side to determine which one is best suited for each scenario. This allows you to choose the right framework based on your specific situation.

Below is a helpful matrix for comparing the features available with the frameworks discussed in this article:

A comparison matrix illustrating the features of various Android test frameworks.

Final Takeaways

Android app automation frameworks allow developers to automate repetitive tasks with very little custom code. This means you can create automated processes for tasks like testing code, updating data, sending messages, etc.

Integrating automated processes and helpful frameworks can save developers and companies valuable development time, resources, and money. 

Learn more:

Microservices are a development methodology where services are independently developed and deployed. This type of architecture has become popular over recent years due to its ability to decouple systems and improve the speed of delivery. To test these applications effectively, they require specialized tools and processes.

Given the volume of independent services communicating with one another, test automation in a microservices architecture can be complex. Despite this, there are several compelling benefits to the microservices architecture, which we’ll discuss in this article.

What is a Microservice Architecture Style?

By definition, the microservice architecture style develops a single application as a suite of small services, each running in its own process. These small services communicate by accessing each other’s exposed application programming interfaces (APIs).

A typical example is Amazon’s online shopping. As shown in the diagram below, each lightweight service runs independently from the others. Even if there’s a failure at the payment gateway, users can still add items to their shopping carts and look at other modules. Using this setup, the loss of one module does not ruin the entire system.

The benefits of this approach include the following:

  • Each component has its own lifecycle. This means that it can be scaled up or down as needed.
  • It’s easy to test individual components because they don’t depend on any other system part.
  • You can use different deployment strategies, such as cloud-based hosting or self-hosted solutions.
  • You can deploy multiple software versions simultaneously without affecting the system’s overall performance.

Recommended reading: Strategies for Digital Transformation with Microservices [Whitepaper]

Why Use Microservices?

There are several reasons why organizations should adopt a microservices architecture. Some of the most common include:

Increased agility. By breaking large monolithic applications into smaller pieces, teams can quickly respond to changes and make improvements.

Improved scalability. It’s easier to scale out than to scale up. If you need more capacity, add additional servers instead of rewriting the code.

Faster time to market. You can release new features faster because you don’t have to wait for a team to complete an entire application before releasing it.

Reduced complexity. A microservices architecture reduces the number of dependencies between components. This makes testing much more straightforward.


Fig. 1:  Amazon microservice architecture

How Do Microservices Work?

When developing a microservices architecture, you break down a monolith application into small services. Each service exposes a set of APIs that allow other services to interact with it.

For example, let’s say you have a web app that allows customers to create accounts. You could build a service that handles user registration. Another service might handle authentication. And another manages customer data.

When a request comes in, the client sends it to the appropriate service. That service then performs its function and returns the results to the client.

This model works well when all the services run in the same environment. However, if you want to host these services in different environments, you must expose the APIs so that clients can access them.
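To make the registration example concrete, here is a minimal sketch of such a service using Spring Boot (which also appears later in this article's dashboard stack). The endpoint, payload, and in-memory store are illustrative assumptions; the authentication and customer-data services would be separate processes calling this API over HTTP.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    /** A tiny "user registration" microservice: one process, one exposed API. */
    @SpringBootApplication
    @RestController
    public class RegistrationService {

        private final Map<String, String> users = new ConcurrentHashMap<>();

        @PostMapping("/users")
        public Map<String, String> register(@RequestBody Map<String, String> request) {
            String id = UUID.randomUUID().toString();
            users.put(id, request.get("email")); // other services never touch this store directly

            Map<String, String> response = new HashMap<>();
            response.put("id", id);
            response.put("email", request.get("email"));
            return response;
        }

        public static void main(String[] args) {
            SpringApplication.run(RegistrationService.class, args);
        }
    }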

Issues with Microservice Architectures

Even though a microservice architecture approach to software development provides countless benefits, it has some drawbacks in reporting. For example, it can be a hassle to analyze test results, identify pass/fail ratios and trends, and understand the total execution time for a particular microservice regression suite. In addition, you must ensure that the communication between services is secure.

Let’s consider the sample microservice architecture for Netflix below, where any number of services may be running. To maintain a stable automation pipeline, you must obtain data that answers the following questions:

  • Which services have a maximum execution time?
  • Which services have more failures?
  • What are the trends in service execution times? Are they up or down?
  • Once I know which services have the most failures, how do I drill down into the individual failing scenarios?
  • Can I see a list of scenarios that have been failing for a long time, and how long they have been failing?
  • Can I get all the details of the service that has the latest build installed?


Fig. 2: Netflix microservice architecture

Effective Microservice Management

We’ve found that one way to manage the different requirements listed above successfully is to integrate all the services into a single platform. For example, we developed a custom dashboard for a client that can be used as a report generation tool and monitor more than 50 microservices (with the potential to be extended to 100+).

The main objective of this dashboard was to be a one-stop shop for all automation reporting, trends, and monitoring. To create this dashboard, we used the following technologies:

  • Spring Boot
  • Spring Thymeleaf
  • Maven
  • Java 1.8
  • Couchbase DB (can be any DB)
  • Jenkins client API
  • D3.js
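As a rough sketch of how these pieces might fit together, the controller below feeds per-service build summaries into a Thymeleaf view. The data model and hard-coded rows are hypothetical stand-ins for what the real dashboard pulls from Jenkins and the results database.

    import java.util.Arrays;
    import java.util.List;

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.GetMapping;

    /** Skeleton of the dashboard overview page; the data source is hypothetical. */
    @Controller
    public class DashboardController {

        public static class BuildSummary {
            public final String service;
            public final String build;
            public final int totalTests;
            public final int failedTests;
            public final long durationSeconds;

            public BuildSummary(String service, String build, int totalTests,
                                int failedTests, long durationSeconds) {
                this.service = service;
                this.build = build;
                this.totalTests = totalTests;
                this.failedTests = failedTests;
                this.durationSeconds = durationSeconds;
            }
        }

        @GetMapping("/overview")
        public String overview(Model model) {
            // Hard-coded sample rows stand in for data pulled from Jenkins and the DB.
            List<BuildSummary> rows = Arrays.asList(
                    new BuildSummary("payments-service", "#214", 320, 4, 540),
                    new BuildSummary("catalog-service", "#198", 210, 0, 380));
            model.addAttribute("summaries", rows);
            return "overview"; // rendered by a Thymeleaf template (overview.html)
        }
    }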

The dashboard was so successful that we now implement it in other projects. Below are the different reports we created to improve our automation health.

Overall Microservices Tab

This tab will answer most of the below data queries, including the historic (previous build) data.

  • Build data for all the microservices.
  • Duration of that microservice suite.
  • Total test case count, fail test case count, etc.


Fig. 3: Overall Microservices Tab

Recommended reading: Time Series - Data Analysis & Forecasting [Whitepaper]

Execution Time Analysis Tab

This tab is a graphical representation of the above data that displays your microservice automation health trends. We can filter down based on environment and type of run (i.e., smoke, regression, etc.).

Fig. 4: Execution Time Analysis Tab

Failure Analysis Tab

This is one of my favorite reports. It tells us two essential parameters (“age” and “failed since”) so we can easily dig down to the scenarios that are failing over a long period. This report ultimately helps us improve our smoke suite (if it’s an application issue) or the quality of the automation test case (if it’s an automation issue).

Fig. 5: Scenario Failure Analysis Tab

Summary Tab

This tab is helpful for managers to obtain the latest consolidated report for all microservices of their latest runs.

Repo-Analysis Tab

Larger, distributed teams where people work in different branches can find QA challenging. For example, while they might merge their code during intermediate runs to develop an interim branch, it’s easy to forget to merge their code into the main branch. This oversight can create issues during deployments, as there are always substantial differences between these individual developer branches and the main branch.

To resolve this issue, we developed a matrix that can tell the difference between the commits of these various branches and raise an alert when needed. An auto-scheduler triggers every hour and updates the latest data in the database.
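The underlying check can be as simple as counting the commits by which a developer branch and the main branch have diverged. The sketch below shows that idea using the git command line; the branch names and alert threshold are illustrative, not the actual implementation.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    /** Counts how far a developer branch has drifted from main using `git rev-list --count`. */
    public final class BranchDrift {

        static int countCommits(String repoDir, String range) throws Exception {
            Process p = new ProcessBuilder("git", "-C", repoDir, "rev-list", "--count", range)
                    .redirectErrorStream(true)
                    .start();
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
                String line = out.readLine();
                p.waitFor();
                return Integer.parseInt(line.trim());
            }
        }

        public static void main(String[] args) throws Exception {
            int behind = countCommits(".", "feature/login..main"); // commits on main missing from the branch
            int ahead  = countCommits(".", "main..feature/login"); // commits on the branch not yet merged
            if (behind > 20) { // hypothetical alert threshold
                System.out.println("ALERT: feature/login is " + behind
                        + " commits behind main (ahead by " + ahead + ")");
            }
        }
    }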

Fig. 6: Repo Commit-Diff

Conclusion

There are numerous use cases for microservices to increase the efficiency of internal processes. With the right tools and the information above, companies can seamlessly adopt a microservice architecture.

At GlobalLogic, consolidating requirement variations and system reports into a single dashboard has been highly effective in managing microservices. Although the specific Docker files for this dashboard are proprietary to GlobalLogic, I encourage you to use this information to create your own microservice dashboard.

More resources:

 

What are Tango and ARCore? In 2014, Google released its new Android smartphone operating system, dubbed Lollipop (or version 5.0). This marked the beginning of a long journey towards creating a fully integrated mobile computing experience, where smartphones would become more powerful and valuable tools.

In the same year, Google developed a new computer vision platform called Tango. The software used multiple cameras to track objects in real time, enabling a device to map out its surroundings and recognize nearby items. This helped to create a 3D model of the environment.

After testing Tango’s limits, Google deprecated Tango to focus on ARCore. In this blog, you’ll learn more about Tango’s components, concepts, and use cases that were the foundation of ARCore development, then get to explore ARCore technology and its capabilities.

What was Project Tango?

Tango was an augmented reality computing technology platform developed by Google. It used computer vision to enable smartphones to detect their position relative to the world around them without using GPS or other external signals.

Recommended reading: Interacting with the Virtual — A Mix of Realities

Tango Components

All Tango-enabled Android devices had the following components:

Motion tracking camera: Tango used a wide-angle motion tracking camera (sometimes referred to as the “fisheye” lens) to add visual information, which helps to estimate rotation and linear acceleration more accurately.

3D depth sensing: To implement depth perception, Tango devices used standard depth technologies, including Structured Light, Time of Flight, and Stereo. Structured Light and Time of Flight require an infrared (IR) projector and IR sensor.

Accelerometer, barometer, and gyroscope: The accelerometer measures movement, the barometer measures height, and the gyroscope measures rotation, which was used for motion tracking.

Ambient light sensor (ALS): The ALS approximates the human eye response to light intensity under various lighting conditions and through various attenuation materials.

Key Concepts of Tango

Motion Tracking

Motion tracking allows a device to understand its motion as it moves through an area. The Tango APIs provided the position and orientation of the user’s device in full six degrees of freedom (6DoF).

Tango implemented motion tracking using visual-inertial odometry, or VIO, to estimate where a device is relative to where it started.

Tango’s visual-inertial odometry supplemented visual odometry with inertial motion sensors capable of tracking a device’s rotation and acceleration. This allowed a Tango device to estimate its orientation and movement within a 3D space with even greater accuracy. Unlike GPS, motion tracking with VIO worked indoors.

Area Learning

Area Learning allowed the device to see and remember the key visual features of physical space: the edges, corners, and other unique features to recognize that area again later.

To do this, it stores a mathematical description of the visual features it has identified inside a searchable index on the device. This allows the device to quickly match what it currently sees against what it has seen before without any cloud services.

Depth Perception

Depth perception gives an application the ability to understand the distance to objects in the real world.

Devices were designed to work best indoors at moderate distances (0.5 to 4 meters). This configuration gave proper depth at a distance while balancing power requirements for IR illumination and depth processing.

First, the system used a 3D camera, which cast out an infrared dot pattern to illuminate the contours of the environment; the resulting set of points is known as a point cloud. As these dots of light traveled further from their original source (the phone), they appeared larger.

An algorithm measured the size of all the dots, and their varying sizes indicated their relative distance from the user, which was then interpreted as a depth measurement. This allowed Tango to understand all the 3D geometry in the space.

Tango APIs provided a function to get data from a point cloud. This format gave (x, y, z) coordinates for many points in the scene. Each dimension was a floating-point value recording the position of each point in meters in the coordinate frame of the depth-sensing camera.
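To illustrate what consuming such data looks like, the sketch below walks a flat array of (x, y, z) points, in meters, and computes the average distance from the camera. The sample values are fabricated rather than taken from the Tango API, which is now deprecated.

    /**
     * Illustration of consuming point-cloud data like the buffers Tango exposed:
     * a flat array of (x, y, z) triples in meters, in the depth camera's frame.
     */
    public final class PointCloudStats {

        /** Returns the average distance (in meters) of all points from the camera. */
        static double averageDepth(float[] xyz) {
            double sum = 0;
            int points = xyz.length / 3;
            for (int i = 0; i < points; i++) {
                double x = xyz[3 * i], y = xyz[3 * i + 1], z = xyz[3 * i + 2];
                sum += Math.sqrt(x * x + y * y + z * z);
            }
            return points == 0 ? 0 : sum / points;
        }

        public static void main(String[] args) {
            // Fabricated sample points, three floats per point.
            float[] sample = {0.1f, 0.0f, 1.2f,   -0.3f, 0.2f, 2.5f,   0.0f, -0.1f, 0.8f};
            System.out.printf("average depth: %.2f m%n", averageDepth(sample));
        }
    }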

Tango API Overview

As for Tango’s application development stack, Tango Service was an Android service running on a standalone process. It used standard Android Interprocess Communication to support Java, Unity, and C apps. 

Tango Service included many leading technologies, such as motion tracking, area learning, depth perception, and applications that could connect to Tango Service through the APIs.

Use Cases for Tango

Indoor Navigation

Tango devices could navigate a shopping mall or find a specific item at the store when that information is available.

Gaming

Using Tango’s motion tracking capabilities, game developers could experiment with 6DoF to create immersive 3D AR gaming experiences, transform the home into a game level, or make magic windows into virtual and augmented environments.

Physical space measurement and 3D mapping

Using their built-in sensors, Tango-enabled devices were engineered to sense and capture the 3D measurements of a room, which support exciting new use cases, like real-time modeling of interior spaces and 3D visualization for shopping and interior design.

Marker detection with AR

A Tango device could search for a marker, usually a black and white barcode or a user-defined marker. Once the marker was found, a 3D object was then superimposed on the marker. Using the phone’s camera to track the device's relative position and the marker, the user could walk around the marker and view the 3D object from all angles.

Now let’s discuss Google’s ARCore and its developments with AR technology.

What’s Google’s ARCore?

Google’s augmented reality platform, ARCore, lets developers create apps for Android devices that use the phone’s camera to overlay virtual objects onto real-world environments.

The company has been working on a new version of its ARCore app, allowing developers to build their own 3D models and place them in real-world locations. This is an important step forward because it means anyone can now create AR experiences without relying on prebuilt assets from third parties.
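At the code level, the core ARCore interaction is a hit test: take a screen tap, intersect it with detected planes, and create an anchor where the virtual object should sit. The sketch below shows only that step; obtaining the Frame from the session and rendering the model are omitted, and the method name is ours rather than part of the ARCore API.

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Frame;
    import com.google.ar.core.HitResult;
    import com.google.ar.core.Plane;

    /**
     * Sketch of tap-to-place with ARCore: hit-test a screen tap against detected
     * planes and anchor a virtual object there. tapX/tapY are pixel coordinates
     * from a touch listener; the Frame comes from Session.update() each frame.
     */
    public final class TapToPlace {

        static Anchor placeObject(Frame frame, float tapX, float tapY) {
            for (HitResult hit : frame.hitTest(tapX, tapY)) {
                // Only accept hits that land on a detected plane, inside its polygon.
                if (hit.getTrackable() instanceof Plane
                        && ((Plane) hit.getTrackable()).isPoseInPolygon(hit.getHitPose())) {
                    return hit.createAnchor(); // the renderer draws the 3D model at this anchor
                }
            }
            return null; // nothing trackable under the tap
        }
    }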

Recommended reading: Impact of Augmented and Virtual Reality on Retail and ECommerce Industry

Recent ARCore Updates

The latest update also brings some improvements to the way that ARCore works. For example, you can now see your surroundings through the lenses of other people’s cameras when they are nearby. You can also add more than one object to a scene at once, making it easier to create complex interactions between multiple elements.

The new features come as part of a larger effort by Google to make ARCore more accessible to developers who want to create their own experiences. In addition to making it easier to create 3D models, Google is also adding support for building iOS and Windows 10 Mobile apps.

ARCore is still early in development, but we expect to see more capabilities with its AR technology soon.

Final Takeaways

While Google made numerous advances with Project Tango and then ARCore, there’s still much more to expect from ARCore technology.

Google's augmented reality technology has been around for years, but it wasn't until recently that developers truly started taking advantage of its capabilities.

With ARCore, you can use your phone to see virtual objects overlaid in real-world environments. This means you can view 3D models of buildings, landmarks, and furniture right from your home.

Best of all, it works on Android devices, including phones, tablets, and wearables. There are numerous possibilities for the integration of ARCore technology in the way we interact and understand the world around us.

Want to explore the possibilities for AR in your own business? Contact info@globallogic.com and let’s talk.

Learn More:

We’ve heard a lot of buzz about Rightware’s Kanzi Studio software. It claims to help designers create beautiful automotive UIs while reducing costs, increasing productivity, and accelerating time-to-market. Its innovative framework could revolutionize the way we look at UI Design.

It’s no wonder we decided to check it out – and after reading this blog, you might want to consider Kanzi technology, too.

What is Rightware’s Kanzi Studio?

Kanzi is a software platform that provides developers with tools to develop high-quality applications for the automotive industry.

It provides a comprehensive set of tools for designing interactive cockpit displays and dashboards, including advanced graphics rendering capabilities. Kanzi also includes a powerful scripting engine that enables developers to create custom application logic without writing code.

But it's not just a framework for cutting-edge automotive HMI development. It is also a toolchain to create entirely seamless 2D/3D UIs for intelligent cockpit displays. Kanzi provides a full suite of components, including widgets, dialogs, animations, transitions, gestures, layouts, and visual effects.

Why Rightware’s Kanzi Studio?

As a design and engineering partner to some of the world’s leading automotive original equipment manufacturers, Tier 1 suppliers, and aftermarket service providers, I’ve spent time exploring new technologies that can help elevate our customers’ digital products.

At GlobalLogic, we are especially interested in tools that can enrich and accelerate how we develop in-vehicle HMIs, such as heads-up displays (HUDs), infotainment systems, and instrument clusters.

After all, people expect more from their cars these days. Now, vehicles have to provide a great user experience in addition to a great driving experience.

Since Kanzi has become increasingly popular in the automotive community, we decided to take it for a test drive (so to speak) by developing a new proof-of-concept.

Recommended reading: User Experience as a Key Factor in the Automotive Industry

The Proof-of-Concept

To explore all the features and potential benefits of Kanzi Studio, we decided to design, develop, and test a completely new dashboard and infotainment system for the upcoming BMW i8 series.

We put together a multidisciplinary team consisting of two software engineers, a UX designer, and a Scrum Master to build the PoC. We also simulated the driving experience to test and measure the PoC’s usability with real users.

Then we focused on how to use Kanzi to design unique clusters and dashboards across two different types of interactions: a control panel and a touch screen.

We applied animations, maps, theming, performance profiling, and embedded technology integration to highlight vehicle features and behaviors. We also exported the touchscreen project as an Android app to be used in a demo tablet, and the process was surprisingly simple.

The Experience

Learning and using Kanzi was an excellent experience for the team. Not only is Kanzi an intuitive tool that promotes close integration, but it can help create synergy between developers and designers.

For example, Kanzi allowed us to make on-demand changes and test the product in its early stages. This meant we could quickly address development and design issues as they occurred.

It became natural to see designers and developers working together on the same computer in a sort of “paired programming” way — making adjustments to the product’s behavior and look in just a few minutes.

There were also practically zero lines of code involved in implementing the UI designs. We used Kanzi’s built-in UI design tool, and the job was almost done.

We dragged and dropped images into Kanzi, which converted them into assets that we could use in a UI component and see it running instantly. It was also effortless to add transition effects, animations, and image blendings to achieve the user experience we envisioned.

Recommended reading: Empowering Teams with Agile Product-Oriented Delivery, Step By Step

Every time we were unsure how to implement some special requirement (like triangle-shaped indicators), we just guessed that Kanzi might already support it — and we were always delighted to see that it did!

Even if Kanzi didn’t support the requirement precisely as we envisioned, it still supported it in an equivalent way, like the alpha-channel-masked components that we ended up using.

Understanding how Kanzi executes tasks was very straightforward. It follows a natural interface while including some additional elements needed to work within a 3D environment and manage the specific demands of different target platforms. 

For example, working in a 3D environment is intuitive since it follows the usual standards and the familiar concepts in most 3D editors like 3D Studio or Blender. You can also find the same operations as in 3D visualization tools like Maya or even Unreal Engine.

Coming from an Android development background and using Android Studio, our team also found it easy to work with Kanzi’s interface to add code into behaviors.

We also found it easy to work with variables, include code for the transitions, and generate the project binaries. One remarkable feature is the drag-and-drop approach to map variables, which makes sharing values all the easier.

Final Takeaways 

Kanzi Studio is a great solution that combines a well-executed, built-in UI tool with powerful technology. We were incredibly impressed with how seamlessly it allows developers and designers to share their work with a singular focus.

In fact, many European automotive companies are already showing interest in working with GlobalLogic on future projects based on our Kanzi POC. Kanzi Studio is a must-have tool for any automotive development project that prioritizes engaging and intuitive user experiences.

Learn More:

Artificial Intelligence (AI) has become a major talking point in recent years, and many companies are investing heavily in developing AI solutions. The term was coined in 1956 by John McCarthy at Dartmouth College, and today, AI has moved from a theoretical concept to a reality.

AI technology can perform tasks typically associated with human intelligence, such as reasoning, learning, problem-solving, and perception. It’s already changing the way we live our lives; from self-driving cars to smart speakers, AI is everywhere. 

Media and entertainment brands are facing significant market and industry shifts driven by AI-powered innovations. What does this mean for your company, and how can you prepare for and adapt to these changes?

AI is Already Gaining Traction

Most media industry players already use AI-based solutions within their business workflows. One excellent example of this is Netflix. 

1. Netflix

Netflix is easily a pioneer of AI integration due to its intelligent cast compilation and viewer data analytics. They also apply a sophisticated deep learning and computer vision algorithm to their recommendation engine. 

In addition, its video encoding analyzes each shot in a video and compresses it without affecting the image quality, thus reducing the amount of data it uses. These examples of innovation were far beyond the industry standard when they were implemented and are still evolving today. 

Recommended reading: How to Elevate Your OTT Through Predictive Personalization [Whitepaper]

2. 20th Century Fox and IBM

Another example of AI in the media industry is 20th Century Fox and IBM, who used Watson APIs and machine learning techniques to analyze hundreds of horror and thriller movie trailers.

After learning what keeps audiences on the edge of their seats, the AI system suggested the top 10 moments from the movie Morgan for a trailer. Then an IBM filmmaker edited and arranged them together to create an enticing trailer.

Since then, 20th Century Fox has used AI and deep learning models to predict which audience will most likely see a film based on the movie trailer. They can accurately predict audience type and attendance for existing movies and soon-to-be-released movies.

3. Disney

The same AI progress is true for other media and entertainment giants like Disney. Disney works hard on mixed and augmented reality projects, robotics, human-computer interaction, computer vision, and more.

On the AI front, Disney and the University of California used a deep learning approach to denoise Monte Carlo-rendered images, which produced high-quality results suitable for production.

For the film “Finding Dory,” a convolutional neural network was trained to learn the complex relationship between noisy and reference data across a large set of frames with varying distributed effects and produce noise-free image quality.

One of Disney’s most recent evolutions of AI has been the project StoryPrint, which created “interactive visualizations of creative storytelling that facilitates individual and comparative structural analyses.” Disney is a great example of what you can do with AI.

4. Comcast

Finally, Comcast uses machine learning models (among other solutions) to predict customer issues right before they occur.

According to Adam Hertz, VP of Engineering at Comcast, their technology is 90% accurate in predicting if a technician needs to drive to a subscriber’s home to fix a connectivity problem.

Major tech vendors like Amazon, Microsoft, IBM, and Google also work hard to compete in this technology space. From pre-built AI to customizable ML and deep learning services and tools, you can find numerous solutions and services with cognitive capabilities, natural language processing, and more.

Recommended reading: Optimization Algorithms for Machine Learning Models [Whitepaper]

How AI is Disrupting the Media Industry

AI is transforming the way media and entertainment brands create viewer experiences, monetize content, and compete in crowded markets.

Even without thorough scientific research studies — and just by observing the industry itself — it would be fairly easy to spot the key areas where artificial intelligence is changing the media business.

Video Workflows

Tagging and video indexing used to be a slow and labor-intensive process, but recent advances in computer vision can save companies time and money. Another improvement is automated metadata extraction, which gives insights into the footage and enables niche experiences.

The way that your video is streamed is also essential. And again, AI already contributes to this situation since it ensures the best possible image quality while optimizing network usage.

It also utilizes intelligent fault diagnostics during video delivery instead of a manual alert configuration and makes your content accessible to international or hearing-impaired audiences using subtitling and captioning.

Content Creation

Bill Gates’ phrase, “Content is King,” is more relevant now than ever before. Industry incumbents and new-gen video services spend billions on original content. While AI can’t completely create content independently, it’s getting close with human assistance.

Intelligent technology solutions can also help directors produce pixel-perfect videos. AI can analyze your footage to select the best possible shots for specific needs, such as proper color scheme, the right actor emotion, and the best place to cut or merge scenes. 

These tasks are easier for AI-based software to accomplish than for humans. AI can put together the best scenes, create custom ads on the fly, produce more engaging movie trailers, and much more.

AI can help media creators by analyzing content and making smart suggestions for the best possible shots.

User Experience

If the content is king, I would call the user experience queen. Apart from original programming, how a brand interacts with customers will define whether they will stick with their service.

The most common way to connect with users is through targeted content and sophisticated recommendations. This is a great start, but more should be done.

Think custom pages or screen layouts, banners tailored to profile data, and payment workflows specific to habits and preferences. Or imagine an intelligent ad system capable of not only serving relevant advertising based on content but also gauging a user’s emotional state and the proper timing to insert that ad.

It’s Not as Complicated as We Might Think

Another factor businesses must consider is the cost of implementation. Depending on your goals, existing infrastructure, integrations with third-party tools, and the complexity of specific workflows, the cost of your project could, in theory, skyrocket.

But purpose-built AI can be more affordable. The market is full of tools, frameworks, libraries, and datasets ready to be leveraged.

For example, TensorFlow, Caffe, the Microsoft Cognitive Toolkit, Amazon Machine Learning, Torch, and scikit-learn are just some of the available frameworks and services for machine learning and deep learning.

Training datasets are no longer an obstacle to machine learning. Whether you need facial recognition, object detection and recognition, sound data (e.g., speech and music), or text data, there are countless options to explore.

In a nutshell, if you have specific, well-defined tasks that consist of repetitive, non-creative work, then it might be a good use case for AI integration.

Final Takeaways

AI has been around since the 1950s, but only recently have companies begun using it to create new media experiences.

From voice recognition software to chatbots, AI technology improves many aspects of our lives and workflows. And now, it’s poised to revolutionize how we consume content and media online.

We will soon see a shift from passive consumption to active engagement with AI, and the future of media will include far more AI technology.

Learn More:

 
