
Software development metrics enable developers to track and understand progress, identify problems and obstacles, fine-tune strategies, and set realistic goals. To make the most of them, it’s crucial to pick the right metrics for your team’s particular needs.

Selecting the appropriate metrics can be daunting. There are many different types of metrics, each promising different value-added opportunities for your specific project.

This article aims to identify qualities to look for in software metric approaches and provide examples of metrics you should consider for your next project. You’ll also find tips for improving your development strategy as you put those metrics to work.

What are software metrics, and why are they important?

Software metrics are measurements used to evaluate the effectiveness of a software development process and the software itself. For example, they can measure a software application's performance and quality by calculating system speed, scalability, usability, defects, code coverage, and maintainability.

Metrics can provide invaluable data that allow software developers to identify issues early on and make necessary corrections before too much damage is done. They also help them stay on track with project estimates and deadlines.

Additionally, software metrics offer insight into potential conflicts between developers and stakeholders. These metrics are essential for ensuring that a program meets the customer or client’s expectations and can help teams make decisions that will best serve the interests of all parties involved.

Recommended reading: Managing Complex Digital Transformation Programs

Software Metric Categories & Metric Examples

There are numerous metrics that developers can focus on when creating and maintaining a software program. To simplify things, here are four categories developers can use to group metrics.

The first category of software metrics that software developers should consider is performance. Performance metrics measure the speed, reliability, and scalability of a system. Examples include response time, throughput, resource utilization, and memory usage. These metrics are essential for understanding how well a system handles requests.
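As a rough illustration of what two of these performance metrics mean, the sketch below times a stand-in request handler to derive average response time and throughput. The handler and workload are invented for the example; real numbers would come from load-testing tools or production telemetry.

```python
import time

def measure(handler, requests):
    """Run `handler` over a batch of requests and report two basic
    performance metrics: average response time and throughput."""
    latencies = []
    start = time.perf_counter()
    for req in requests:
        t0 = time.perf_counter()
        handler(req)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "avg_response_time_s": sum(latencies) / len(latencies),
        "throughput_rps": len(requests) / elapsed,
    }

# A trivial stand-in for a real service endpoint.
stats = measure(lambda req: sum(range(1_000)), range(50))
```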

The second category developers could use is quality. Quality metrics measure the correctness and completeness of a system and can include code coverage, defect density, and test case pass rate. These metrics are crucial for understanding how well a system performs in terms of its ability to produce correct results and meet customer requirements.

Usability is another important metric category to consider. Usability metrics measure the ease of use of a system. Usability metrics include user satisfaction scores, task completion time, and error rate. These metrics are important for understanding how well a system performs in terms of its ability to be used by customers.

Finally, the fourth category is maintainability. Maintainability metrics measure the ease of maintenance and modification of a system. Examples include code complexity, technical debt, and refactoring rate. These metrics are vital for understanding how well a system performs regarding its ability to be maintained and modified over time.

Which metrics software developers look at will depend on the goals, requirements, and constraints of the stakeholders and development team. Now that we’ve looked at different metric categories developers can consider, it’s time to look at a few specific metrics.

Other Software Metrics to Consider

  1. True Test Coverage measures the amount of tested code. It’s the percentage of code lines, branches, and statements verified during unit tests. True test coverage shows which parts of an application are well-tested and which need further testing. By regularly measuring true test coverage, developers can improve the quality assurance process and ensure defects are detected before a software’s release.
  2. Team Velocity measures how much work a team completes in an iteration. It's expressed in story points per iteration and serves as a way to gauge how quickly the team is working through a project. Velocity helps keep teams motivated and focused on completing their goals within each iteration, and it provides valuable data for resource planning and for estimating future sprints.
  3. Escaped Defects are issues that emerge during the software development process and make it into the released version of the application despite having been missed during testing. These problems can happen when the development team lacks the strategies to thoroughly test all features before releasing a version. Escaped defects cause severe problems down the line and often result in costly rework, customer dissatisfaction, and lost time.
  4. Release Burndown is a project management tool used to track the progress of long-term projects. The goal is to accurately predict and manage changes in scope and timeline to meet successful delivery dates. It provides visibility over project features, tasks, goals, and performance metrics in a graphical burndown chart form and a tabular spreadsheet format. In addition, release burndown can help identify bottlenecks or delays.
  5. Lead Time is the period of time between the beginning of a project and its delivery. Lead times can vary greatly depending on the specific project but typically encompass several activities that must be completed before the software is ready.
  6. Customer Satisfaction measures customers’ happiness with a product or service. Customers are satisfied when the software’s performance has met or exceeded their expectations. Measuring customer satisfaction is important because it allows businesses to identify weak points in their service and address them quickly.
  7. The Open/Close Rate for software development is the number of tasks within a given period that are in process versus the number completed. It’s usually calculated as the percentage of total opened to closed tickets on a daily, monthly, or yearly basis. This metric helps organizations understand how quickly their development team can complete tasks and accommodate fluctuating demands from different stakeholders.
  8. The Defect Detection Percentage (DDP) is a metric used to measure the proportion of coding errors identified during software development and testing. It is an important metric used to evaluate the success rate of a project. A higher DDP indicates better quality assurance and reduces future maintenance costs.
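To make a few of these metrics concrete, here is a small sketch of how they might be computed. The formulas are common formulations rather than the only correct ones.

```python
def defect_detection_percentage(found_in_testing, escaped):
    """DDP: share of all defects that were caught before release."""
    total = found_in_testing + escaped
    return 100.0 * found_in_testing / total if total else 0.0

def open_close_rate(opened, closed):
    """Percentage of opened tickets that were closed in the period."""
    return 100.0 * closed / opened if opened else 0.0

def average_velocity(points_per_sprint):
    """Mean story points completed per iteration."""
    return sum(points_per_sprint) / len(points_per_sprint)

print(defect_detection_percentage(45, 5))   # 90.0
print(open_close_rate(120, 90))             # 75.0
print(average_velocity([21, 18, 24]))       # 21.0
```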

Recommended reading: Continuous Inspection: How to Define, Measure and Continuously Improve Code Quality

The Goal Question Metric Approach

Basili’s Goal Question Metric (GQM) is a metric evaluation approach developers often use for its clear structure and ease of use. The GQM is a software quality analysis technique that defines and measures software development, maintenance, and improvement objectives.

It enables project teams to analyze their achievements and problems regarding productivity, schedule, cost, or quality. The GQM is broken up into a three-step analysis process: defining the goals, the questions, and the metrics. Here’s an explanation of how to utilize the GQM approach by its founder:

“A GQM model is a hierarchical structure… starting with a goal (specifying purpose of measurement, object to be measured, issue to be measured, and viewpoint from which the measure is taken). The goal is refined into several questions, such as the one in the example, that usually break down the issue into its major components. Each question is then refined into metrics, some of them objective such as the one in the example, some of them subjective…

The same metric can be used to answer different questions under the same goal. Several GQM models can also have questions and metrics in common, ensuring that, when the measure is actually taken, the different viewpoints are taken into account correctly (i.e., the metric might have different values when taken from different viewpoints).”

The GQM approach is an excellent choice for software metric selection and analysis because it focuses on the project’s goals and provides a way to measure progress. Additionally, it allows developers to track progress over time and make adjustments as needed.
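As a sketch of the structure Basili describes, a GQM model can be represented as a goal refined into questions, each refined into metrics. The goal, questions, and metric names below are illustrative only.

```python
# A minimal sketch of a GQM model as a nested structure.
gqm_model = {
    "goal": "Improve the reliability of release X from the tester's viewpoint",
    "questions": {
        "Are defects found before release?": [
            "defect detection percentage",
        ],
        "Is the code adequately exercised?": [
            "true test coverage",
            "test case pass rate",
        ],
    },
}

def metrics_for(model):
    """Flatten the hierarchy: every metric needed to answer the goal's
    questions. The same metric may appear under several questions."""
    return sorted({m for ms in model["questions"].values() for m in ms})
```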

Final Takeaways

When choosing software metrics, it’s important to consider your project's specific needs and select metrics relevant to them.

Performance, quality, usability, and maintainability metrics should all be considered so you have a comprehensive understanding of how well your system is performing.

By selecting the right metrics for your software development project, your team can gain valuable insights into the progress of their development efforts and make informed decisions about how to improve them.

More helpful resources:

I have the feeling that sooner rather than later, we're all going to learn what it's like to have creative AIs like ChatGPT as part of our everyday lives. I can't resist thinking about what that will be like.

 As usual, when I speculate about the future, I like to look back at the past to pick up some clues. I'm old enough to have spent some of my adult life pre-Web. Having ubiquitous connectivity to many information sources is common today, but when I was a teen and young adult, this was still the subject of science fiction. If we wanted to know something then, we had to get a book or go to a library and look it up.

 My wife likes to say she's not technical, but it sometimes seems like she's always asking me questions that I end up using technology to answer. 

As a recent example, she and I both enjoy watching "cozy" mystery shows together—those with very little violence or bloodshed — when we're relaxing in the evening. Because we're avid watchers, we're constantly on the lookout for a series that's new to us, even when it's an old one. 

We recently started watching a 15+ year-old series that features the classic movie and television actor Dick van Dyke and his son. We both really liked it. At the end of the first installment, my wife asked me, "What other mystery series has Dick van Dyke been in? He must be very old — is he still alive?"

Recommended reading: Tips for Staying Relevant in the Age of Creative AIs Like ChatGPT

Before the Web, these questions would have been imponderables — not easily answered without a trip to the public library. In general, they would have been answered with a shrug and, "I don't know."

But in the internet age, I simply took my phone out of my pocket, went to Wikipedia, and instantly pulled up an article about actor and comedian Dick van Dyke. I found out that he is, in fact, still alive in 2023 — and 97 years old! He made the mystery series we're now enjoying when he was 81 years old (amazing), and in the past he made another long-running mystery series, also with his son, that my wife and I are looking forward to watching next.

 This is a trivial, everyday example, but typical of our expectations today for instant access to information. 

In this case, the technology (phone, Web, crowd-authored encyclopedia, etc.) also enriched my life and my wife's — at least a little bit — by providing us with interesting information and knowledge about an entertainment series I think we'll both enjoy. Pre-internet, it would have been such a hassle to get this information that neither of us would have bothered.

Another question my wife asked before a long road trip was, "What is the weather like along the way?" I've since learned that there is, in fact, at least one app in the App Store that answers that question. But suppose there wasn't. How would I answer her?

To do this reasonably well "by hand" on my phone, I would first determine the route to my destination. I'd then compute my drive-time to various intermediate points along it. Next, I'd look up the weather forecast for each location, at the time I was expected to be driving through it. I'd tabulate all this, and show her the answer. To be a stickler, I'd keep this up-to-date as we traveled along our route, accounting for the most current forecasts and projected arrival times.

Instead, my response was to shrug and say "I don't know". It was simply too much of a hassle to figure all this out. I could conceivably have written a short script that accessed the various information sources required and did the various computations for me, but that seemed like a hassle too, and definitely not worth the time it would take, at least in my mind. 

Suppose that we had access to all the weather information she requested, though. We could have perhaps planned a better route, to maneuver around a storm, for example. We could also have incorporated the weather into our projected drive times, and refined our route—and forecasts—that way. But again, more hassle than I thought it'd be worth for a single trip. We just drove.

In the future, though, with creative AIs, my wife or I will presumably be able to describe to the AI what it is that I want, and the AI will generate a one-off application to answer the question. At some point — maybe reasonably soon — the AI can probably answer my wife's original question directly: "What is the weather like along our route?" In the short term, though, it may need a more detailed description like, "Write a script that determines a route from point A to point B and looks up the weather forecast at 10 evenly spaced intermediate points along that route based on the current expected time of arrival at that intermediate point and displays the answers on an annotated route map," (or something even more detailed). 
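A one-off script for the detailed prompt above might look roughly like this sketch, where straight-line interpolation and a stubbed `forecast_at` function stand in for real routing and weather APIs; the coordinates and drive time are invented placeholders.

```python
from datetime import datetime, timedelta

def waypoints(start, end, n):
    """n evenly spaced (lat, lon) points between start and end.
    Straight-line interpolation stands in for a real routing API."""
    return [
        (start[0] + (end[0] - start[0]) * i / (n - 1),
         start[1] + (end[1] - start[1]) * i / (n - 1))
        for i in range(n)
    ]

def etas(depart, total_hours, n):
    """Estimated arrival time at each of n evenly spaced waypoints."""
    return [depart + timedelta(hours=total_hours * i / (n - 1)) for i in range(n)]

def forecast_at(point, when):
    """Placeholder for a real weather-API lookup."""
    return "unknown"

route = waypoints((40.7, -74.0), (41.9, -87.6), 5)
times = etas(datetime(2023, 6, 1, 8, 0), 12.0, 5)
report = [(p, t, forecast_at(p, t)) for p, t in zip(route, times)]
```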

I expect that, in general, AIs will evolve from needing a more detailed description as in my example to something higher-level (my wife's original question, for example) over time, though that evolution could be very quick.

Recommended reading: Software, the Last Handmade Thing

Making something that's currently a hassle easy to do may seem trivial, but it actually improves our lives. Things that are useful but a nuisance only tend to get done when they are necessary, or when they have the potential to bring a significant reward. But that means that many of life's possibilities are left on the table. 

Many of us — perhaps all of us at times — have the tendency to believe that happiness comes from the big things in life: family, health, career success. And indeed, those things are important. 

But within that larger context, it is the small, everyday rewards in life that make it richer; little things like avoiding bad weather along your route, or finding a new mystery series to enjoy with your wife. These small things really add to life's variety, enjoyment and safety. 

Among all the uncertainty about how creative AIs will impact our lives and work, I think we can also look forward to these tools increasing the things we can know and do.

Whether that's comparable in scope to what we've already seen from the Web remains to be seen, though I suspect it will be just as game-changing. I'm also excited that it may very well help me answer the next generation of questions that my wife comes up with!

Keep reading:

Today, the development of mobile applications can help solve almost any challenge. From banking and insurance to education, healthcare, retail, and beyond, mobile apps are everywhere, making life easier in immeasurable ways.

Companies use mobile-first strategies to get more coverage for their applications. However, the success or failure of the solution largely depends on the mobile application, which is why companies prioritize choosing the best platform for their software development. From the early days of mobile application development, developers have preferred native application development. While it is a great choice in many cases, there are other options for mobile application development with distinct advantages. 

With so many choices, businesses must concentrate on their needs and goals for the application. In this article, you can explore the numerous considerations that help companies choose between cross-platform application development environments.

Cross-Platform Development

Almost all smartphone applications target Android, iOS, or both. Unless a mobile application has specific requirements favoring one platform, companies concentrate on these operating systems because releasing applications on both platforms can significantly increase the user base.

Developers use cross-platform solutions to solve specific problems, such as an identical application released on Android and iOS with the same features. In this situation, cross-platform development can help with code reusability and save on development and maintenance costs. In summary, cross-platform development focuses on writing reusable shared code and generating platform-specific executables.

Figure 1. Generic Cross Platform Application Solution

Ideally, in cross-platform application development, developers can share code. The platform provides widgets, components, and third-party libraries to write shared code. These environments provide tools for development tasks, a compiler, and a debugger. If there is a need to use native platform-specific API, there is a provision to access and write native code alongside shared code.
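The shared-code-plus-native-bridge idea can be illustrated with a toy example. The class and method names here are invented for illustration, not part of any real framework.

```python
# Platform-specific adapters expose native capabilities behind a
# common interface; the shared code never touches native APIs directly.
class AndroidBridge:
    def vibrate(self):
        return "android: vibrate via native API"

class IOSBridge:
    def vibrate(self):
        return "ios: vibrate via native API"

class SharedApp:
    """Shared code: identical on every platform, delegating to an
    injected bridge only when a native feature is needed."""
    def __init__(self, bridge):
        self.bridge = bridge

    def notify_user(self):
        # Business logic lives here, written once for all platforms.
        return self.bridge.vibrate()
```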

Options for Cross-Platform Development

There are various options available for cross-platform development. However, the following are the most popular options among the developer community.

Considerations for Choosing a Mobile App Development Platform

Each cross-platform solution – such as Xamarin, ReactNative, and Flutter – has its pros and cons. Developers can choose specific development platforms over others due to the advantages in the considerations below.

Major Considerations

The following are the considerations that have significant implications on the choice of the platform:

Platform Future

In general, companies develop applications with a long-term vision. In rare cases, companies develop applications with a specific or short-term goal, such as a registration application for specific events. When developing applications for extended use, companies should find a platform with stable backing and investment from industry stakeholders.

Recommended reading: AI’s Impact on Software Development: Where We Are & What Comes Next

Community Support and Popularity

Platform creators and owners can provide quality infrastructure and an end-to-end environment for application development. But even with the best plan and tools, developers can still face challenges. When issues arise, one option is to research and analyze the problem, but this can become time-consuming; online reports and forums can be helpful here.

These community forums can provide tried-and-tested libraries, information on potential architectural patterns, and new ways to create efficient applications. The more popular the platform, the more online resources and community support will be available.

Pricing

While selecting a development platform, every enterprise or independent developer looks at licensing costs, the development environment, and development plugins. Therefore, it’s essential to periodically assess these external factors as they can change over time.

Once a business officially launches an application, there’s no turning back, even if the licensing cost increases. At this point, developers usually stay with the platform to simplify ongoing management and maintenance.

Learning Curve

Many organizations want to use their existing resource pool and avoid a steep learning curve when developing an app. This is why companies need to think about the time it takes to truly understand each platform and how that will impact development; for example, using the .NET and C# resource pool for Xamarin and the JavaScript (React) resource pool for ReactNative.

Component Library 

The inbuilt UI support and various business components are crucial considerations when selecting a cross-platform development environment. Though external third-party components may be available, inbuilt libraries tend to be more reliable.

Users want an application’s UI to align with native UI, which is why UI components play an important role. Businesses need to consider the platform’s UI components and their native-like appearance. Developers must also consider options to configure the appearance of the UI components.

Performance

Performance is a crucial parameter for all applications. In general, it's assumed that cross-platform applications may have lower performance than native applications, but companies should still evaluate the available cross-platform solutions. When analyzing performance, developers should focus on scenarios with heavy graphics and complex, animated UI. Another important consideration is the application’s responsiveness to UI events after extended idle times.

Other Important Considerations

The following considerations have minor differences across cross-platform development environments and will have minor implications when choosing a platform.

Reusability Across Applications

Mobile applications and web frontend applications tend to have similar features. In this scenario, companies should consider a cross-platform development environment that makes it possible to reuse or share code between mobile and web applications.

Recommended reading: A reusable accelerator for mobile application development 

Development and Debugging Tools

Development and debugging tools are crucial for smooth day-to-day application development. In general, a cross-platform environment supports quality development and debugging tools, but it’s still worth confirming that the environment provides the following:

  • A full-fledged IDE or Studio 
  • Inbuilt debugging support
  • Hot reloading of application
  • Profiling tools 

Automated Testing 

Almost every platform supports automated testing, including UI automation and unit testing. For UI automation, developers should assess how easily UI components buried deep inside the UI hierarchy can be extracted or accessed. They should also consider the facility to add metadata, such as content descriptions, to custom or framework UI components. Regarding unit testing, developers should look at how complex it is to add a dependency to the code and to verify the injected elements in unit tests.
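As a sketch of the dependency-injection concern mentioned above, the example below injects a clock so a unit test can substitute a fake and verify behavior. The `SessionTimer` class is invented for illustration.

```python
import unittest
from unittest import mock

class SessionTimer:
    """The clock is an injected dependency, so tests can control time."""
    def __init__(self, clock):
        self.clock = clock

    def is_expired(self, started_at, timeout_s=300):
        return self.clock() - started_at > timeout_s

class SessionTimerTest(unittest.TestCase):
    def test_expiry_uses_injected_clock(self):
        # A fake clock replaces the real time source in the test.
        fake_clock = mock.Mock(return_value=1000.0)
        timer = SessionTimer(clock=fake_clock)
        self.assertTrue(timer.is_expired(started_at=0.0))
        self.assertFalse(timer.is_expired(started_at=900.0))
        fake_clock.assert_called()
```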

Accessing native API

Cross-platform environments do provide access to native features like the camera, GPS, and Bluetooth, but developers may still need to access the native API directly, which usually means creating a bridge or channel. Therefore, companies need to consider the time it takes to access the native API, as well as the performance and stability of that access.

Cross-Platform Considerations: A Quick Checklist (as of Dec-2022)

The following tables are a checklist for the primary considerations mentioned above:

| Considerations | Xamarin | ReactNative | Flutter |
| --- | --- | --- | --- |
| Future | .NET MAUI from Microsoft is the future (Xamarin support ends 1 May 2023) | Mostly community-driven | Backed by Google as a next-gen development platform |
| Popularity and community support | Trails the others in popularity and community support | Large community of contributors | Popularity is rapidly catching up with ReactNative |
| Pricing | Free, open source, MIT License; annual fee for Visual Studio Enterprise | Free, open source, MIT License | Free, open source, new BSD License |
| Learning curve | Requires learning C# and .NET | React web developers can move to ReactNative easily | Requires learning the Dart language |
| Component library | Components are compiled into platform-specific UI components | Built-in components, many of which need styling | Built-in widgets that don’t require styling |
| Performance | Near-native, but drops when rendering heavy graphics | At times lags behind the native platforms | Dart is compiled ahead of time to native code, so performance is close to native |

Getting Started in Mobile App Development

The above considerations can act as a guide to help businesses analyze the cross-platform development environment for mobile application development. While each company has its own needs and objectives, this list can help prioritize the various features as well as the pros and cons of cross-platform development.

We help companies reduce the learning curve, easily apply mobile app development best practices, and get to market faster with GlobalLogic’s Mobile App Accelerator. This reusable accelerator is based on the most common architectures and proven guidelines for core modules including onboarding, custom error handling, and more. 

Want to learn more about how GlobalLogic helps companies save up to 25 person-months in mobile application development? Get in touch with our team and let’s see how we can help.

More useful resources:

Edge Computing – Everything you need to know [Whitepaper]

Moving from concept to market faster than your competitors is one of the hallmarks of a successful, sustainable product development strategy. Digital twins are proving to have an oversized impact on businesses using them to curate data from multiple sources and activate it to improve outcomes at every step through design, manufacturing, and support.

The IoT enables engineers to test and communicate with integrated sensors within a company’s operating products, delivering real-time insights about the system’s functionality and ensuring timely maintenance. Digital twins can also help businesses analyze data to identify underperforming parts of the plant, and even replicate that “golden batch.” They give manufacturers a tool to predict likely outcomes before investing in changes. They use real-world data and artificial intelligence (AI) to create scenarios and test product outcomes given various inputs.

While this technology has useful applications in many industries, it’s crucial for product manufacturers. Let’s look at the benefits of using a digital twin model, what you should consider before adopting one, and real-world examples of how companies deploy them to improve performance, accelerate production, and achieve faster time-to-value.

What is a Digital Twin?

A digital twin is a comprehensive digital model of an environment, product, or system used for testing, integration, and simulations without impacting its real-world counterpart.

Where a simulation typically replicates a single scenario or process, a twin can run multiple simulations simultaneously, studying various processes and outcomes at scale.

It’s no wonder the global digital twin industry was valued at $6.5 billion in 2021 and is projected to reach $125.7 billion by 2030, growing at a CAGR of 39.48% from 2022 to 2030. Growth in IoT and cloud — and the goal to cut down costs and reduce the time for product development — are key factors driving this growth.

Recommended reading: The Future of Cloud-Driven Manufacturing: Built to Scale

Digital Twins in Action: Real-World Use Cases to Inspire Your Strategy


The Value & Benefits of Digital Twins

Accelerated risk assessment and production time

This technology enables companies to test and validate a product before it even exists in the real world. By creating a replica of the planned production process, a digital twin enables engineers to identify any process failures before the product goes into production.

Engineers can disrupt the system to synthesize unexpected scenarios, examine the system’s reaction, and identify corresponding mitigation strategies. This new capability improves risk assessment, accelerates the development of new products, and enhances the production line’s reliability.

Predictive maintenance

Since the twin system’s IoT sensors generate big data in real time, businesses can proactively analyze their data to identify problems within the system. This ability enables businesses to more accurately schedule predictive maintenance, thus improving production line efficiency and lowering maintenance costs.
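One very simple form of such analysis is flagging sensor readings that jump well above their recent rolling average. This toy sketch stands in for real predictive-maintenance analytics; the window size and threshold factor are arbitrary illustrative choices.

```python
from collections import deque

def anomaly_flags(readings, window=5, factor=2.0):
    """Flag readings that exceed `factor` times the rolling average of
    the last `window` readings (once a full window is available)."""
    recent = deque(maxlen=window)
    flags = []
    for r in readings:
        baseline = sum(recent) / len(recent) if recent else r
        flags.append(len(recent) == window and r > factor * baseline)
        recent.append(r)
    return flags
```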

Real-time remote monitoring

It is often very difficult or even impossible to get a real-time, in-depth view of a large physical system. However, a twin can be accessed anywhere, enabling users to monitor and control the system performance remotely.

Improved team collaboration

Process automation and 24/7 access to system information allow technicians to focus more on inter-team collaboration, improving productivity and operational efficiency.

Data-backed financial decision-making

A virtual representation of a physical object can integrate financial data, such as the cost of materials and labor. The availability of a large amount of real-time data and advanced analytics enables businesses to make better and faster decisions about whether or not adjustments to a manufacturing value chain are financially sound.

What Types of Digital Twins Are There?

Component twins

A component twin is a representation or simulation of a single part of a product or process. It can be used to test the impact of weight, heat, or other stressors on an individual product part such as a screen or mechanical subassembly, for example.

Asset twins

This dynamic virtual model of an existing physical asset is kept up-to-date and accurate with ongoing, real-time data while being used to test how two or more components work together. An asset twin could provide a replica of assembly line machinery, for example, enabling the business to test multiple configurations to maximize production and reduce error.

System twins

The system twin is a level up from the asset twin because it is a digital representation of the larger system in which critical assets function – in this example, the entire factory floor. This twin not only tests multiple outcomes and analyzes data but may recommend performance improvements, as well.

Infrastructure twins

An infrastructure digital twin is a 3D digital representation of an object or system with engineering-grade accuracy. According to the Digital Twin Consortium, this subtype is unique in that it must have millimeter precision, geospatial alignment, and support for complex 3D engineering schemas.

How Do You Create a Digital Twin?

There are three essential factors to consider before implementation.

1. Update your data security protocols

According to Gartner’s estimation, 75% of the digital twins for IoT-connected OEM products will utilize at least five different kinds of integration endpoints by 2023. The amount of data collected from these numerous endpoints is huge, and each endpoint represents a potential area of security vulnerability. Therefore, companies should assess and update their security protocols before adopting digital twin technology. The areas of highest security importance include:

  • Data encryption
  • Access privileges, including a clear definition of user roles
  • Least privilege principles
  • Addressing known device vulnerabilities
  • Routine security audits
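The access-privilege and least-privilege items above can be sketched as a simple role-to-permission map, where each role grants only what it strictly needs. The role and permission names are illustrative.

```python
# Least privilege: each role holds only the permissions it requires.
ROLE_PERMISSIONS = {
    "viewer":   {"twin:read"},
    "engineer": {"twin:read", "twin:simulate"},
    "admin":    {"twin:read", "twin:simulate", "twin:configure"},
}

def is_allowed(role, permission):
    """Deny by default: unknown roles get an empty permission set."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```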

2. Manage your data quality

Digital twin models depend on the data from thousands of remote sensors communicating over unreliable networks. Companies that want to implement digital twin technology must be able to exclude bad data and manage gaps in the data streams.
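One simple policy for this data-quality step is to drop out-of-range readings and fill gaps with the last good value. This sketch assumes `None` marks a gap in the stream; real pipelines may interpolate or discard instead.

```python
def clean_stream(samples, lo, hi):
    """Exclude readings outside [lo, hi] and carry the last good value
    forward over gaps (None) and bad readings."""
    cleaned, last_good = [], None
    for s in samples:
        if s is not None and lo <= s <= hi:
            last_good = s
        cleaned.append(last_good)
    return cleaned
```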

3. Train your team

Users of digital twin technology must adopt new ways of working, which can potentially lead to problems in building new technical capabilities. Companies must ensure their staff has the skills and tools to work with digital twin models.

The Future of Digital Twins

Digital twins have proven an important enabler of data-driven change, particularly in product development, where they are helping designers and manufacturers reduce costs, scale testing, go to market faster, and improve customer experiences.

What does the future of digital twins look like in your organization?

You cannot just mimic every single process there is; rather, your existing pain points and goals must inform your digital twin strategy. What are you trying to achieve with your implementation, and what outcomes will provide the best ROI?

Do you have the actionable data required to mimic a product or process in a digital simulation?

And what is the anticipated business impact of your digital twin(s) implementation? Will you mimic just one island of data or your entire ecosystem in the digital environment?

Let’s explore how digital twins can help you accelerate production time, product quality, and time-to-value. Book a free consultation call.

More helpful resources:

Writing secure code is essential for any app developer. We’ve all heard horror stories about apps that have been hacked and exposed sensitive user data. So what are the best ways to ensure your mobile app is secure? Are you doing enough to protect your code?

Mobile device security should be a top priority for anyone developing software for mobile platforms. You must be aware of the potential risks and make sure your coding practices are up-to-date with industry best practices. By understanding how attackers target mobile applications, you can take steps to minimize those threats.

In this two-part series, you’ll learn more about the best practices for writing secure code for mobile apps and discover several methods to improve your own code.

Use HTTPS instead of HTTP

Hypertext Transfer Protocol Secure (HTTPS) is the standard secure way to access the web. It layers the Hypertext Transfer Protocol (HTTP) over the TLS protocol (the successor to SSL), providing a secure way of sending requests to a server from a client. The communication is entirely encrypted, which means eavesdroppers cannot see what you are requesting or sending online.

HTTPS encryption also defeats sniffing attacks by concealing the traffic’s meaning from everyone who lacks the key to decrypt it. The traffic remains visible to the sniffer, but it appears as streams of random bytes rather than plain-text JSON or XML bodies, HTML, links, cookies, or passwords.

For this reason, developers must refrain from using HTTP URLs in their apps, and the server must enforce the use of HTTPS to prevent insecure connections from being established.
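
As a simple client-side guard, code can refuse to issue requests over plain HTTP at all. Here is a minimal sketch in Python (the helper name and upgrade behavior are illustrative, not from any particular SDK; the server should still enforce HTTPS, e.g., via HSTS):

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Return the URL if it uses HTTPS; upgrade plain HTTP, reject anything else."""
    parts = urlparse(url)
    if parts.scheme == "https":
        return url
    if parts.scheme == "http":
        # Upgrade insecure URLs rather than silently sending plain-text traffic.
        return url.replace("http://", "https://", 1)
    raise ValueError(f"Refusing non-HTTP(S) scheme: {parts.scheme!r}")

print(require_https("http://api.example.com/login"))  # prints the https:// version
```

A guard like this is only a safety net for stray HTTP URLs; the real enforcement must happen on the server.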

Recommended reading: Confidential Computing: Third Pillar of Data Encryption

Prefer POST over GET for sending sensitive data

The HTTP POST method does not expose information via the URL. With GET, information is passed as part of the URL, exposing it in server logs, browser history, and caches.

In addition, sending sensitive information via GET parameters makes it easier for attackers to alter the data submitted to the server.

Yet another security threat with GET arises when a third party sends a link to the end user. As a rule, you cannot email a link that forces a POST request, but you can send a link containing a malicious GET request.
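
The difference is easy to see by building both request types by hand. In this Python sketch (the endpoint URL is hypothetical), the GET parameters become part of the URL itself — and therefore of logs and history — while the POST body does not:

```python
from urllib.parse import urlencode
from urllib.request import Request

credentials = {"user": "alice", "pin": "1234"}

# GET: sensitive values become part of the URL itself.
get_req = Request("https://api.example.com/login?" + urlencode(credentials))

# POST: the same values travel in the request body, not the URL.
post_req = Request("https://api.example.com/login",
                   data=urlencode(credentials).encode(), method="POST")

print("1234" in get_req.full_url)   # True  — the PIN is visible in the URL
print("1234" in post_req.full_url)  # False — the PIN is only in the body
```

Note that POST alone is not encryption; it only keeps values out of URLs. It must still be combined with HTTPS.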

Use separate channels of communication for sensitive data

Security should not rely on one channel of communication. The best practice for this situation is to use separate communication channels for sharing sensitive information such as a PIN or password. 

For example, you can use an HTTPS network connection to share encrypted data between the client and server, and then use APNS, GCM or SMS to share the PIN or token with the client app.

That way, even if one channel of communication is compromised, the security of the overall system remains intact.

Recommended reading: Security Training for the Development Team

Accept only valid SSL certificates

An SSL certificate from a trustworthy provider verifies that you are what you say you are. Otherwise, anyone can make their own certificate for google.com or thebank.com and pretend to be someone else.

For this reason, your HTTPS connection must reject all SSL certificates that are invalid for any reason.
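
In Python’s standard library, for instance, `ssl.create_default_context()` already enforces this policy: it loads trusted CAs and requires a valid certificate chain and hostname match. The sketch below simply makes those secure defaults explicit rather than weakening them:

```python
import ssl

# Default context: loads trusted CAs, requires a valid chain and hostname match.
context = ssl.create_default_context()

# Make the secure defaults explicit — never relax these in production code.
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

print(context.verify_mode == ssl.CERT_REQUIRED)  # True

# Anti-pattern to avoid: setting check_hostname = False or verify_mode =
# ssl.CERT_NONE, which would silently accept self-signed or mismatched certs.
```

The common mistake is the reverse operation: disabling verification “temporarily” to get past a development certificate, and shipping that setting to production.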

Follow secure coding practices from respective platforms

Every platform publishes and promotes the use of an extensive set of secure coding practices and guidelines. Mobile App developers must be aware of and follow these known secure coding practices. More importantly, they should be part of the code review checklist.

Secure coding practices include performing input validation, being careful with memory management, avoiding insecure C functions, and avoiding immutable containers when storing sensitive data. But remember that this is just a subset of the extensive lists provided by the respective platforms.

Detect jailbroken or rooted devices

Jailbroken and rooted devices are prone to a number of security threats, and can compromise a user’s personal or company data in many ways.

Some of these threats include: brute-force attacks on passcodes; malicious apps with privileged access that drain battery life or destabilize the operating system; and remote attackers who exploit a secure shell server on the device, gain access to your application’s sandbox, and copy sensitive data.

All apps should be able to detect a device’s jailbroken or rooted status and respond accordingly: limit functionality, erase data completely, or silently notify the appropriate authority.
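
Detection techniques vary by platform. On Android, one common (and easily bypassed) heuristic is to look for `su` binaries and root-management packages. The following is a simplified sketch of only the path-checking part, written in Python for illustration; the path list is illustrative, and real implementations combine many signals:

```python
import os

# Common locations of the `su` binary on rooted Android devices (illustrative list).
SU_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/data/local/bin/su",
]

def looks_rooted() -> bool:
    """Very rough heuristic: any su binary on disk suggests a rooted device."""
    return any(os.path.exists(p) for p in SU_PATHS)

# On an unrooted device this returns False; treat True as one signal among many.
print(looks_rooted())
```

No single check is reliable on its own; attackers routinely hide these artifacts, which is why production apps layer several detection signals.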

Verify data and file integrity with hashes

Verifying the integrity and authenticity of the data and files transferred between your app and server is important to your app’s security, and even a simple implementation of hash functions adds real value.

When using hash functions for integrity checks, make sure you deliver the hash over a different communication channel than the one used for the data and files themselves.
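
A minimal integrity check looks like this in Python: compute a SHA-256 digest of the received bytes and compare it, in constant time, to the digest delivered over the separate channel (the payload here is illustrative):

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of the received payload, as hex."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_hex: str) -> bool:
    # compare_digest avoids timing side-channels when comparing digests.
    return hmac.compare_digest(sha256_hex(data), expected_hex)

payload = b"patient-report.pdf contents"
digest = sha256_hex(payload)  # delivered to the client via a separate channel

print(verify_integrity(payload, digest))         # True
print(verify_integrity(payload + b"x", digest))  # False — tampered payload
```

Note that a plain hash proves integrity but not authenticity; if the attacker can replace both the file and the hash, use an HMAC with a shared key or a digital signature instead.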

Writing Secure Code

Following best practices for writing secure code is essential for any mobile app development project. By taking the time to design your app with security in mind, you can mitigate potential risks and help ensure that your data is adequately safeguarded. 

Keep reading our secure code writing best practices guide here in part two. 

Learn more:

The mobile device market is booming, and it seems every day new phones are released. While it gives consumers of mobile devices a range of options, it can also be a huge challenge for mobile application developers and providers. New mobile devices with custom hardware and operating systems (OS) that include original equipment manufacturer (OEM)-branded user interface (UI) changes, features, and ranges of resolutions have led to fragmentation in the mobile device market. 

Keeping pace with new device launches to ensure maximum coverage is a major challenge. How can you improve your new application release strategy? This post provides insight into the considerations and strategy for developing applications for a large user base with a range of new devices, plus actionable tips you can put to work in your business. 

Mobile Device Market Fragmentation 

Mobile device fragmentation is more prevalent on Android than iOS due to Google’s policy that allows OEMs to customize Android. Some of the causes of mobile device market fragmentation include: 

  • Market Coverage: OEMs compete to appeal to every user, from the least expensive model to premium phones with all the exclusive hardware. 
  • Options: To capture the market, OEMs are giving users more phone choices within the same price range.
  • Branded UI: Every OEM wants to customize their phone UI and applications to give the user strong branding.
  • OS Updates: Both Android and iOS mobile devices receive yearly OS updates, which are then followed by OEMs on their phones.
  • Technology Development: As technological advances happen, OEMs upgrade their hardware to incorporate new changes and innovations – and in the process, some devices become obsolete. 

Recommended reading: Essential Aspects to Consider While Designing Mobile Apps

Considerations for Device Selection

It would be virtually impossible for mobile application developers to test new applications on every device available in the market, particularly newer models. It can be challenging to provide good coverage on newly-launched devices. 

Developers can apply the following considerations to increase their coverage: 

Know Your User Base

Rather than trying to be all things to all consumers, developers can target applications to an identified user base, such as clinicians and hospital staff. By conducting market studies of potential end users, you can build user information to personalize and target applications. If, for example, your data predicts that clinicians are unlikely to use an Android OS 7 mobile device, you can drop support for that OS version.

Review Support Configurations Regularly

Given this fragmentation of the mobile device space, it’s important to review support configurations every six months. For example:

  • Resolution supported
  • Minimum and maximum OS support
  • Special hardware support requirement

Know When to Review Application Changes

OS updates can trigger application configuration changes; for example, the screen layout of a tablet may need to be reconfigured after changes to screen resolution. Application changes can also be triggered when new Android OS updates deprecate APIs that are no longer supported.

Test Across Configurations

In an ideal world, applications would be tested on every device. However, that isn’t practical or possible. Instead, use market studies and industry insights to identify devices popular with potential end-users, and test across the range of device types. If the application is bug-free on all test devices, then it should work well on the rest of the devices with a similarly supported configuration. 

Develop for Scalability 

It’s important for architects and developers to consider the non-functional requirements of scalability if applications are going to be released on a wide range of phones with different configurations. For example, applications should easily adapt to a wide range of phone resolutions. 

Test for Performance Issues

In order to reach a broad range of users, you may need to support low-configuration hardware that meets the minimum configuration required to run your application properly. To avoid performance issues for end users, develop and run targeted KPI tests. Where performance issues are identified, optimize the application for the lowest supported OS or, in a worst-case scenario, remove that device from the supported device list.

Recommended reading: Best Practices for Writing Secure Code for Mobile Apps

Build a Robust Test Automation Framework

As the number of supported devices increases, so does the verification effort required before each release. Therefore, in addition to manual testing, verification cycles should include automated test scripts that scale and allow the application to be tested on multiple devices simultaneously.

Take Advantage of Beta OS Updates

Almost every year, we see major OS updates in both Android and iOS. Before the public release, there is a beta version release of the OS that enables developers to test and modify applications. Testing new applications on the beta version of new OS updates can save time and frustration. 

Apply Play Store filters

When uploading an application to the Play Store, apply the appropriate filters and configurations per the identified supported configurations.

New Application Release Strategy 

In an environment where new devices are launched every few weeks, an application development team must provide recommendations to client/application providers. These recommendations can be used by providers to create a roadmap for application updates, development, and testing strategy.

To keep pace with market and user needs, OEMs and mobile platform owners will continue designing and launching new mobile devices and updates. Hence, in a fragmented mobile market, application development teams must create a robust strategy for device selection, development, and testing.

GlobalLogic’s Mobile App Accelerator is a game changer for businesses that need to design, test, and iterate quickly to keep pace with rapidly evolving consumer preferences and advancing technologies. Put the latest architectural best practices and proven guidelines for core modules to work so you can choose from multiple frameworks to find the best fit, save man-hours and resources on development, and go to market faster with a superior product.

Want to learn more? Get in touch with a member of the GlobalLogic team and let’s explore the possibilities for your business.

More helpful resources:

Cognitive automation is transforming healthcare claims processing by combining robotic process automation (RPA), artificial intelligence (AI), and intelligent document processing (IDP). This specific use of technologies is enabling faster turnaround times with straight-through processing, improved accuracy with minimal manual intervention, and increased digital and non-digital workforce productivity for insurers.

In this paper, you’ll learn:

  • How rising healthcare costs and manual processes across the claims lifecycle are wreaking havoc on organizations’ administrative costs.
  • Which challenges in the current claims system are standing between healthcare organizations and efficient, secure, cost-effective adjudication and processing.
  • Why RPA and AI together aren’t enough to digitize documents in healthcare claims processing, where files often contain signatures, handwriting, and images.
  • What intelligent document processing brings to the table that optical character recognition (OCR) lacks.
  • A proposed IDP system flow for processing unstructured or semi-structured documents with greater accuracy.

As the number of malicious cybersecurity attacks continues to skyrocket, mobile security is an increasingly important consideration for developers. Writing secure code for mobile apps includes implementing robust encryption protocols and user input validation.

We covered several best practices in Part 1, such as using HTTPS instead of HTTP while transferring data over any public Wi-Fi or broadband network. It’s also important to consider using POST instead of GET while sending sensitive information, as it prevents the disclosure of confidential values when they appear in URLs.

Additionally, developers should ensure that their application only accepts valid SSL certificates so they can confirm their authenticity and origin server correctly. While these are all crucial aspects of creating secure code, other key factors must be considered to ensure you're creating secure mobile apps.

Avoid data cloning

Backup and restore tools help users copy complete device data and transfer it to different devices, resulting in the cloning of your app’s data. This may result in a security threat to a user’s sensitive data.

To avoid cloning your app’s data, generate a unique device fingerprint and use it to encrypt your app’s data. This will render the data useless on other devices, even after backup and restore.
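
One way to sketch this is to derive the encryption key from a device-unique fingerprint, so data restored onto a different device decrypts with the wrong key. The fingerprint source, salt, and parameters below are illustrative; a real app would combine stable hardware/OS identifiers and use a vetted crypto library for the actual encryption:

```python
import hashlib

def derive_device_key(device_fingerprint: str,
                      salt: bytes = b"app-static-salt") -> bytes:
    """Derive a 32-byte key bound to this device via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac(
        "sha256", device_fingerprint.encode(), salt, iterations=200_000
    )

key_a = derive_device_key("device-A-serial+os-build")
key_b = derive_device_key("device-B-serial+os-build")

print(len(key_a))      # 32
print(key_a == key_b)  # False — data encrypted on device A is useless on device B
```

Because the key is recomputed from the device itself rather than stored, a backup restored to another device cannot reproduce it.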

Recommended reading: Complex Digital Transformation Programs

Always encrypt data at rest and data in transit

Data at rest is stored in persistent storage, either on the client or server side. Data in transit refers to the data traveling to and from the client and server over the network. Your application data spends most of its time either at rest or in transit, making it extremely vulnerable from a security standpoint.

This is why it’s crucial to keep data encrypted, whether it’s at rest or in transit using various types of symmetric and asymmetric algorithms. As mentioned in Part 1, developers can use HTTPS best practices to encrypt data in transit. However, developers can use various database encryption techniques to encrypt data at rest. 

Sanitize sensitive data

Developers should use sanitization techniques such as clearing sensitive data from memory immediately after use, and refrain from creating multiple or immutable copies of sensitive data. Where mutable copies are necessary, write zeros over their memory locations after use to avoid leaving stray references to sensitive data.
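
The zero-after-use pattern can be sketched as follows (shown in Python for brevity; on mobile platforms the same idea applies to mutable buffers such as char arrays or NSMutableData, since immutable strings can linger in memory until garbage collection):

```python
def use_secret(secret: bytearray) -> None:
    try:
        # ... use the secret here (e.g., derive a key, authenticate) ...
        pass
    finally:
        # Overwrite the buffer in place so the plaintext doesn't linger in memory.
        for i in range(len(secret)):
            secret[i] = 0

pin = bytearray(b"1234")
use_secret(pin)
print(pin)  # bytearray(b'\x00\x00\x00\x00')
```

The `finally` block matters: the buffer is wiped even if the operation using it raises an exception.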

Strip debug symbols and use obfuscation options

Stripping debug symbols and obfuscating code increases the work required for “bad actors” trying to reverse engineer or disassemble application code, by making it more difficult to identify and understand sensitive variables, structures, and critical logic or routines. Obfuscation can also encrypt or hide hardcoded strings in the code, securing hardcoded server URLs, usernames, passwords, and encryption keys.

Another advantage of obfuscation is size reduction. Obfuscators can shorten long descriptive identifiers to one-character identifiers, remove unused code, and apply many other code-shortening transformations.

Beware the background state

On most mobile platforms, backgrounded apps remain in a suspended, frozen, or live state. In any case, they still hold memory and, at times, display buffers containing a screenshot of the app UI from the moment the app went to the background.

When the app enters the background, developers should erase or encrypt any sensitive data the app still holds in memory, and clear the display buffer for sensitive UI views such as a password or PIN shown on screen.

Developers can use these techniques to avoid threats from attackers accessing an app’s sensitive data when it’s in the background or from its memory or display buffer.

Recommended reading: Security Training for the Development Team

Disable autocorrection and copy/paste

A device’s autocorrection cache holds the data for autocomplete suggestions. These caches are common on mobile devices and are shared across apps by the OS to learn user behavior and make suggestions smarter. Although this is a useful feature from a user perspective, it can create a security threat when enabled on views used to enter sensitive data.

Along similar lines, copy/paste also represents a security threat to sensitive data. Attackers can use the paste function to access sensitive data copied from your application’s UI views.

For example, allowing copy operations on a view displaying tokens, one-time passwords (OTPs), or passcodes may compromise them.

Since attackers use UI views to get sensitive data from users, it’s vital to keep autocorrection and copy/paste disabled in text fields, text input, and labels.

Disable logs in release builds

Debug logs reveal a lot of information about your app, such as its interaction with servers, its critical routines, storage and retrieval of app data, classes and their interactions, and sensitive information such as username, password, API key, and tokens. This information can be useful to attackers, enabling them to reverse engineer your code and thus compromise users’ sensitive data. This is why developers should disable logs in the release build—no exceptions.

On most platforms, you have to disable logs explicitly. On iOS, for example, NSLog output appears in both debug and release builds unless disabled using a macro.
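
As a language-neutral illustration of the same pattern (Python here; on mobile the flag would come from a build-configuration macro rather than a variable), all debug logging is gated on a release flag so release builds emit nothing:

```python
import logging

RELEASE = True  # illustrative — normally injected by the build system

logger = logging.getLogger("myapp")
logging.basicConfig(level=logging.DEBUG)

if RELEASE:
    # Silence everything below CRITICAL in release builds — no exceptions.
    logger.setLevel(logging.CRITICAL)

logger.debug("session token = %s", "abc123")  # emitted only in debug builds
print(logger.isEnabledFor(logging.DEBUG))     # False in a release build
```

Gating at the logger level (rather than deleting log statements) keeps the diagnostics available for debug builds while guaranteeing they never ship.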

Final Takeaways

This two-part series has provided developers with the necessary tools and roadmap to writing secure code for mobile apps. By focusing on authentication and authorization, mitigating errors, and keeping data safe, you’re setting yourself up for a more successful app development experience. 

Together, these practices will protect your users from malicious threats while continuing to build trust in your app. Thanks for reading along, and check out our other resources for mobile app development below. 

Learn more:

Artificial intelligence (AI) and machine learning (ML) have revolutionized how people interact with technology. AI and ML technology have driven innovation and transformed all kinds of activities; from getting stock recommendations to buying your next pair of socks, technology assists us in many ways each day. For example, whenever users unlock their phones there is at least one ML-driven service providing recommendations. 

AI and ML technology are dramatically changing the healthcare industry, as well. Eventually, companies will use ML technology to solve day-to-day and complex healthcare problems.

In the coming years, ML will be able to enhance the healthcare system by helping patients with long-term health conditions as well as improving the lives of everyone around us. Before this is possible, there are various hurdles to overcome. The reliability of ML and AI results, and patient data privacy are key concerns. 

Offline ML (or on-device ML) is a unique approach to processing data that can help resolve these issues. In this article, you’ll learn about offline ML, various use cases for offline ML in healthcare devices, conceptual solutions, and architecture considerations to keep in mind for your own innovative products.

What’s the Difference Between Online and Offline ML?

With offline ML, data processing happens locally on the device, using a trained model that has been downloaded to the device. With online ML, the device stays connected: data processing happens remotely, and the model receives a continuous flow of data, often updating as it does so.

Healthcare businesses can incorporate offline ML in various ways. Here are a few examples:

  • Microcontroller-based Internet of Medical Things (IoMT) devices with onboard processing power.
  • Mobile client applications; some applications can act as a gateway for IoMT devices.
  • Patient bedside medical devices.
  • Diagnostic devices such as X-ray machines.
  • Pathology lab equipment.
  • An ecosystem of therapy devices.

Recommended reading: Rise of the New Healthcare Paradigm

The requirements of offline ML are not unique to the healthcare industry and apply to other domains and related applications. However, in healthcare, offline ML is essential due to the following considerations:

Privacy & Data Security

Medical information is incredibly sensitive, and users aren’t usually willing to share their data with external services. This includes protected health information (PHI) and the output from IoMT sensors, medication, or therapy details. Offline ML guarantees that devices keep the user’s data local. 

Network Latency

Sometimes, therapy procedures need to change rapidly, e.g. changing neuro-stimulator therapy parameters or an immediate pause in therapy. Offline ML is important because any delay can cause harm to the patient. A network error could affect the patient, as well, which is why an offline solution can be highly beneficial. 

Connectivity

There are various locations, in both rural and urban areas, where internet connectivity can be intermittent. At such locations, the ability to support patients and work offline is a beneficial aspect of offline ML.

Cost

Cloud providers with managed ML services may charge based on the number of requests made to those services. Implementing offline ML can help healthcare brands save on these costs.


Offline ML Use Cases

Many offline ML healthcare use cases involve IoMT devices, mobile devices, and other healthcare equipment. Here are a few examples of how this technology can be used.

Health Monitoring & Predictive Analysis

Some applications use IoMT or collect user data such as caloric intake, calories burned, sleep, exercise, and idle time to help predict potential lifestyle accommodations or changes. This information involves PHI, which is why using offline ML can help safeguard user information.

Therapy Correction

Patients with specific diseases or health issues generally use special therapy devices. Some of these devices are implanted in the patient and could eventually become self-sufficient by automatically adjusting therapy parameters. Healthcare brands can use offline ML in these devices to improve their functionality.

Insurance Premiums

Generally, users aren’t comfortable submitting sensitive information such as personal habits or medical history to a server. Insurance companies can use offline ML solutions to help predict insurance premiums without retaining user data on servers.

Recommended reading: Real-time Premium Calculation Using IoMT in Health Insurance [Whitepaper]

Image Analytics

Diagnostic equipment like MRI, X-ray, and CT scanners can utilize offline ML to help assess image quality and accuracy, providing a first-pass opinion by analyzing images locally.

Text Reply Prediction

Healthcare brands now use communication platforms for intra-hospital communication. Offline ML can help predict text for doctors or nurses by analyzing recent communication history. Since this may contain PHI, brands can protect information with offline ML when processing information and predicting potential replies.

An Offline ML Solution Overview

Now, let’s explore a conceptual solution for offline ML in healthcare devices.

As illustrated above, the “Offline ML Module” will be part of a comprehensive solution responsible for processing incoming data and providing results using the ML model. This module will compile data from sources, process data using the ML model, and give results to the business logic, UI, or user action for further processing.

The offline ML module gives independent devices like bedside medical equipment, MRI, CT scan, or mobile application processing power.

Low-powered IoMT devices often cannot process data themselves, so mobile devices can host the Offline ML Module as part of the client application. To get an updated ML model, devices need to connect to a remote server that provides the updated model. Because medical devices may have limited processing power, companies should reevaluate their ML model update rules often.

Architectural Considerations for Offline ML in Healthcare

Developers must evaluate the specific architectural considerations while designing offline ML solutions for healthcare devices. Keep these in mind, in addition to the application or device-specific architectural considerations.

Trimmed-down Model

Offline ML targets devices with low processing power. This is why building a small, target-specific, trimmed-down trained ML model is crucial.

Remote Updating

Offline ML models may become outdated due to changes in data structure and may need updates. To solve this, developers should have a workflow that updates models remotely.
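
A minimal version-check sketch for such a workflow: the device compares its local model version against the latest advertised by the server and downloads only when a newer one is available. The endpoint URL and metadata shape are hypothetical:

```python
def needs_update(local_version: str, remote_version: str) -> bool:
    """Compare dotted version strings, e.g. '1.4.2' vs '1.5.0'."""
    def to_tuple(v: str):
        return tuple(int(part) for part in v.split("."))
    return to_tuple(remote_version) > to_tuple(local_version)

# Metadata the device would fetch from the model server (illustrative shape).
remote_meta = {"version": "1.5.0", "url": "https://models.example.com/m.tflite"}
local_version = "1.4.2"

if needs_update(local_version, remote_meta["version"]):
    print("download new model from", remote_meta["url"])
```

In practice the check would run on a schedule or on app launch, download over HTTPS, and verify the new model file’s hash or signature before swapping it in.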

Recommended reading: If You Build Products, You Should Be Using Digital Twins

Switching Between Offline and Online

Companies can use offline ML in scenarios where internet connectivity is lost and an offline model must serve as a backup. In such cases, developers should design for seamless switching from online to offline ML.

Analytics 

Analytics can help developers understand the results of offline ML. However, developers must collect their analytics remotely to analyze user actions and other business parameters.

Compliance

Companies should consider HIPAA compliance if the application handles patient PHI data.

Offline ML: Exploring Solutions

Significant research is underway to provide a framework and tools for running offline ML on mobile and low-power devices. In the meantime, you might like to check out the following available solutions.

Core ML 

Apple’s Core ML is a machine learning SDK for developing offline ML solutions in iOS-based applications. It provides support for image analytics, NLP, and sound analysis.

ML Kit

Google provides an ML Kit through an SDK for Android and iOS-based applications. It also supports Vision API (face recognition, object tracking, and pose detection) and NLP API (smart reply and entity extraction).

TensorFlow Lite

TensorFlow Lite is a library for deploying models on mobile (Android, iOS, Mobile Web) and low-powered devices like microcontrollers. It also provides a complete lifecycle to create custom offline ML solutions.

Recommended reading: Stop Your Machine Learning Quick Wins Becoming a Long-Term Drain on Resources

PyTorch

Like TensorFlow Lite, PyTorch provides libraries and components (PyTorch Mobile) to build offline ML solutions for Android and iOS. In addition, developers can use the Cainvas platform to create ML solutions for targeted microcontrollers.

Coral.ai

Coral.ai provides a complete toolkit to perform offline ML on microcontroller-based devices. It also includes support for image segmentation, NLP-based key phrase detection, and speech recognition.

Key Takeaway

Many organizations are focusing on offline ML, and open-source communities are making progress in providing the support needed for mass adoption. In healthcare, offline ML will eventually improve the lives of the patients it serves.

While medical applications can now run on offline ML mobile devices, it will take time before companies can use them on IoMT and therapy devices. Want to explore the possibilities? Get in touch with GlobalLogic’s healthcare technology team and we can reimagine what’s possible together.

Learn more:

References

Build beneficial and privacy preserving AI. Coral. Retrieved January 27, 2023, from https://coral.ai/ 

Cainvas. Retrieved January 27, 2023, from https://cainvas.ai-tech.systems/accounts/login/ 

Core ML: Integrate machine learning models into your app. Apple Developer Documentation. (n.d.). Retrieved January 27, 2023, from https://developer.apple.com/documentation/coreml 

Machine learning for mobile developers. Google. Retrieved January 27, 2023, from https://developers.google.com/ml-kit/ 

On-device machine learning. Google. Retrieved January 27, 2023, from https://developers.google.com/learn/topics/on-device-ml 

PyTorch Mobile: End-to-end workflow from Training to Deployment for iOS and Android mobile devices. PyTorch. Retrieved January 27, 2023, from https://pytorch.org/mobile/home/ 

TensorFlow Lite: ML for Mobile and edge devices. TensorFlow. Retrieved January 27, 2023, from https://www.tensorflow.org/lite

The explosive popularity of Open AI’s ChatGPT and impending launch of Bard, Google’s long-awaited conversational AI, have set the world ablaze with speculation of just how powerful artificially intelligent systems have become. It’s a surprising development for the general public; however, for many of us this was never a question of “if” but “when.” 

Back in 2009, I published a blog post entitled "Software, the Last Handmade Thing." We recently republished that blog, just as it was written more than 13 years ago. 

In that earlier blog, I made two predictions:

  1. "[I]n the future, ‘programming’ will be done at a higher level, with better tools, more reusable frameworks and even perhaps artificially intelligent assistance."
  2. "At some point, machines will probably become so smart, and the collection of reusable frameworks so deep, that artificially intelligent systems can assemble better software from vague requirements than people can."

I think we in the software community can manifestly agree that prediction #1 has come true, and continues to come true. 

Programming is indeed being done at a higher level with AI assistance.

I also think that we can safely predict that now that AI is 'real,' our ability to create software in partnership with them will greatly expand what an individual engineer can accomplish. 

We can imagine simple improvements such as better and more nuanced recommendations for choices on reusable components, as well as more profound ones, like complete system or subsystem generation. This would be along the lines of what we had once hoped Rational Rose could do, but starting with natural language specifications ("specs"), and using AI assistance. 

All of these activities would still need human developers at their core, however, to develop those natural language specs, and to ensure that the software being produced was actually solving the problem it was meant to. The major challenge, predictably, will be ambiguities — or AI-perceived ambiguities — in the specs.

So is AI a threat to career software developers?

While a ChatGPT-type code generation AI might look like a threat to software development as a career, previous productivity improvements (even significant ones) have not decreased the total number of engineers required or the salaries they command. Quite the contrary. The more productive the engineer, and the more complex the problems they can tackle, the more demand for engineering talent there has been and, in my opinion, will continue to be. 

There will certainly be a shakeout as lower-skilled engineers who currently perform more routine, repetitive coding tasks are replaced by better tools, including AIs that can generate entire simple or niche-specific software systems. We already see this happening as improved non-AI tooling, such as robotic process automation systems, IFTTT systems and others, eliminates the need for many previously routine, repetitive coding tasks. 

However, those who can master the new tools and AI-amplified technology will now be enabled to address bigger and tougher engineering challenges. Given the great need for high-quality software that exists in the world, I believe that even with AI assistance, the total number of human engineers will continue to grow for years to come — along with the salaries they command.

Will AI improve on our ability to write software from vague requirements?

I do think that, at some point, prediction #2 will come true: AIs will do a better job at writing software starting from vague requirements than humans can do. 

I think this is especially true given an iterative approach to such software development in which an AI will generate a system, then humans and other AIs evaluate it and refine the 'specs' accordingly. Another system will be generated from the new specs, and the cycle repeats until the desired system can be deployed. In fact, at a lower level, such self-annotation and regenerative learning is part of what makes ChatGPT so powerful, especially for text generation. 
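The iterative cycle described above — generate a system, have external reviewers evaluate it, fold their feedback back into the specs, and regenerate — can be sketched as a simple loop. This is a hypothetical illustration only: `generate_system`, `evaluate`, and `refine_specs` are stand-in stubs, not real AI APIs.

```python
# Minimal sketch of the generate → evaluate → refine cycle.
# All three helper functions are hypothetical stand-ins.

def generate_system(specs: str) -> str:
    """Stand-in for an AI code generator that 'builds' a system from specs."""
    return f"system built from: {specs}"

def evaluate(system: str, desired: str) -> list[str]:
    """Stand-in for external reviewers (humans or competing AIs).
    Returns a list of gaps between the generated system and what was wanted."""
    return [] if desired in system else [f"missing: {desired}"]

def refine_specs(specs: str, gaps: list[str]) -> str:
    """Fold reviewer feedback back into the specs for the next iteration."""
    return specs + "; " + "; ".join(gaps)

def build_iteratively(specs: str, desired: str, max_rounds: int = 5) -> str:
    """Repeat the cycle until external review accepts the system."""
    system = generate_system(specs)
    for _ in range(max_rounds):
        gaps = evaluate(system, desired)
        if not gaps:
            break  # the external reviewers, not the generator, made this call
        specs = refine_specs(specs, gaps)
        system = generate_system(specs)
    return system
```

Note that the acceptance decision lives in `evaluate`, outside the generator — a toy version of the point made below, that something external to the producer of the software has to judge whether it is finished.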

However, as I point out in my original 2009 blog, a key issue with software in particular is that determining when it's "finished" and works is an open loop task. That is, someone (or something) external to the developer of the software needs to make this call. 

This is because, except in the case of a typo, unexpected interaction, or other careless error (which AIs will presumably eliminate), the person or AI producing the software is already implementing the coder's understanding of the specs. Except for such accidental errors, he/she/it is therefore fundamentally incapable of determining when the finished system departs from the as-desired specs, because the coder already believes they understood the spec and did the right thing. 

It takes an external entity with a different perspective on the specs such as a product manager or an end user to find that the specs themselves (or the coder's understanding of them) were in error, and to fix the specs accordingly.

Recommended reading: Testing in Production: A New Paradigm for Shift-Right

The 'true' set of specs for any software system are unknown, and to large extent unknowable, until a system (or a portion of a system) has actually been built. This is a challenge that, I believe, can eventually be overcome by AIs. 

To take it out of the software world for a minute, could an AI write a novel that is as engaging as one from your favorite author? Would the depth of characters and believable (or suitably unbelievable) situations be present in that work? I would argue not yet, but that it's possible. By generating enough such novels, getting them critiqued by enough human readers, and trying again, I would argue that an AI system could, in time, give human authors some competition.

Similarly, in software, the key issue is closing the loop on the output. 

Is the generated system what I as a product manager or end user really wanted? This is not something that can be answered by the person or system generating the software; it needs an outside entity, whether human or competing AI, to determine. Creating such 'unambiguous' specs to be executed by machine, be it API or CPU, is fundamentally an engineering task. At a lower level, it's what engineers do today when they code.

For simple systems with clear and unambiguous specs, AIs will become powerful coders very soon, I believe. 

But where complex systems are concerned, human engineers, product managers and related disciplines are here to stay for some time to come. Unless, of course, the end user also happens to be an AI...
