Archives

The dilemma is new, the parameters are not.

To App or Not to App

As a business owner, it’s difficult to decide where to invest your money and heart when it comes to technology solutions. Do you build a fancy app for your business to replace your mobile site? Or, to make things more complicated, do you build one in addition to your mobile site to take it to the next level? To be honest, I don’t have the answer, but I can bring the trade-offs out into the open so you have a better understanding of your options.

Quite a few major enterprises are moving towards using apps today. Why?

Simply put, because it is cool and convenient -- to the customer, that is. It’s a boutique product done “just for you” and your device. People who are used to an iPhone or Android “feel” will be able to work with your app just as they would with any other feature of their phone. The swipes, the menu, the icons -- all are custom-made for their device.

This makes the app sticky. Because of the convenience the user finds in your app, they will keep tapping your icon rather than going to your competitor’s website.

In the eCommerce world, for example, some players started going “app only” so that they could reduce scenarios where users bargain hunt across multiple sites loaded on multiple tabs on their phone browsers -- similar to looking at multiple stalls on a street corner. By going app only and investing in your app being addictive through gamification and other engagement tactics, you make it more inconvenient for the user to go out and hop -- like driving to a standalone boutique store.

You have a better chance of converting the deal through an app, and you can also use many of the phone’s features (e.g., using the camera to photograph how an outfit looks on the user, using geolocation to provide relevant location-based offers, accessing a user’s Bluetooth and contacts to do more). This is similar to a boutique shop investing in enhancers like a cafe or gift corner to boost sales and margins rather than just “doing the thing.”

So why aren’t all businesses taking the same route?

The problems are many. You have multiple OSes (e.g., iOS, Android, Windows), form factors (e.g., mobiles, tablets, phablets), and variations within them (e.g., iPhone 5 is a different size than 6, which is different from 6+). Also, different OS and phone versions are not always backwards compatible. If you code for each of them, how many code versions must you maintain? And if you make compromises, you make your users unhappy and dilute the wow factor that was your basic premise.

Then there are other considerations. Apps take up space and crowd the phone’s real estate, so users tend to keep only a limited number of apps installed. Your app may not be lucky or compelling enough to stick around, which will compel you to maintain a mobile site in addition to the app, adding yet another layer of complexity. And even though you maintain so many different versions of code, you will be expected to keep a consistent look and feel (and feature set) across all of them.

There is always a third solution, isn’t there?

Some choose to build hybrid apps using platforms like PhoneGap, Appcelerator, or plain HTML5. These apps are actually mobile websites, but they mimic the look and feel (and idiosyncrasies) of a native app on a mobile device. You can customise the solution to a specific device to the extent you wish, or even start with a generic solution and progressively make it device-specific based on user response. Hybrid apps tend to bring out the best of both worlds -- albeit in a limited sense.

As I stated earlier, this article does not provide you with specific answers -- only some options and points to consider, based on my discussions with executives at some of the top eCommerce players in India, the US, and the UK. Your final solution will obviously depend on the money you have, the team you can afford, and your particular business’ unique needs. For example, if you run an eCommerce business, you should probably develop apps; if you run a media site, you may decide based on the depth of your pockets!

Piyush Jha is AVP of Product Engineering for GlobalLogic India. He heads the delivery of the Retail and ECommerce vertical for GlobalLogic globally. He specializes in eCommerce, mobility, IoT and experience design. His interests include reading and travelling the globe to imbibe different cultures. Follow him on LinkedIn.

Autonomous vehicles, or driverless cars, are perhaps the IoT technology best known to consumers. Self-driving cars rely on sensors, actuators, complex algorithms, machine learning systems, controllers, and powerful processors to steer clear of hazards and accidents on public roadways.

How are their data sets created, and what benefits might driverless cars offer consumers and various industry players? In this whitepaper, we explore the technology driving autonomous vehicles forward and how it is improving the safety, cost-effectiveness, and environmental impact of self-driving cars.

Accelerate Your Automotive Software Innovation

GlobalLogic's SDV Cloud Framework and Eclipse Automotive Integration


Download our whitepaper to discover how GlobalLogic's SDV Cloud Framework and Eclipse Leda integration can transform your automotive development processes.


Key Highlights

  • Understand the Shift to Software Defined Vehicles (SDVs): Learn about the profound changes SDVs bring to the automotive industry.
  • Overcome OEM Challenges: Explore the key obstacles faced by OEMs and how to address them effectively.
  • Leverage the SDV Cloud Framework: Discover the benefits of a scalable, flexible cloud framework tailored for automotive development.
  • Maximize Efficiency with Virtual Workbench: See how virtualization can enhance collaboration and reduce costs.
  • Streamline Management with Control Center: Centralize project management and infrastructure control for seamless operations.
  • Enhance Development with Eclipse Leda Integration: Benefit from a pre-configured environment that accelerates development and testing.


Want to learn more about the benefits of the SDV Cloud Framework and Eclipse Leda integration? Download our whitepaper and read what our experts have to say about these key factors that contribute to added business value.

Accelerated Time-to-Market

  • Standardized development processes reduce errors and rework.
  • Early problem identification and rapid response to market changes.

Enhanced Quality and Reduced Costs

  • Feature pipelines ensure timely product quality.
  • Virtualized testing reduces costs and simplifies change management.

Expanded Business Opportunities

  • Modular architecture enables tailored SDV solutions.
  • Scalability and adaptability to changing market conditions.

Increased Developer Agility and Productivity

  • Integration with IBM Doors or Codebeamer allows developers to work across multiple platforms efficiently and reduces manual data entry.
  • End-to-end transparency ensures developers can easily track and manage their work, identify issues, and collaborate.

Collaboration and Ecosystem Benefits

  • Collaboration with multiple stakeholders within the Eclipse SDV ecosystem, defining common standards and integrating tools seamlessly.
  • Provide plug-in flexibility for OEMs to integrate various tools from various partners.
  • Developing precise, specialized tools to address OEM challenges, ensuring consistency and acceptance within the ecosystem.


Are you ready to redefine your automotive development processes with Eclipse Leda integration opportunities?

At a recent Hitachi Energy conference, I saw a very interesting presentation by Hitachi partner NVIDIA, the fabless semiconductor company whose GPUs are key drivers of the GenAI revolution. The speaker described NVIDIA not as a GPU company but rather as a “simulation” company. He described a spectrum of simulation technologies NVIDIA supports, ranging from “physics-based” to “data-based.”

As a person who was educated as a physicist, several light bulbs clicked on for me in this description. What the speaker meant, of course, was that simulations or video games can either be based on ‘algorithms’—that is, a set of physical or un-physical laws (for fantasy worlds, for example)—or they can use extrapolations based on data.

When we as developers write code, we establish a set of ‘laws’ or rules for a computer to follow. Learned behavior, on the other hand, abstracts a set of patterns or probabilities from the data encountered. The latter is the nature of large language models—they are not programmed; rather they are trained based on a selection of natural language text, photographs, music, or other sources of information. 

The models essentially ‘draw their own conclusions’ in a learning process. (Or, more strictly speaking, the models are the artifacts embodying the learning that took place when an algorithm processed the training data.)

Recommended reading: Using AI to Maximize Business Potential: A Guide to Artificial Intelligence for Non-Technical Professionals

Again, this stuck with me very forcefully as an analogy of the human learning process and of the way physics and science work. 

There is a famous anecdote about the physicist Galileo, born in the 16th century, observing the swaying of a chandelier during a church service in the town of Pisa, Italy (of leaning tower fame). A breeze occasionally set the chandeliers in motion with larger or smaller oscillations.

Galileo observed that regardless of how high the chandelier was blown by the wind, once it started to fall, a given chandelier always took the same amount of time to complete an oscillation. In other words, the time the chandelier took to swing back and forth depended only on the length of the chain holding it, not on the height when it was released.

This is quite an extraordinary observation, and the fact that this phenomenon apparently was not noticed (or at least recorded and acted on) for the first 300,000 years or so of human history indicates the degree of insight and curiosity Galileo had. 

Note that Galileo did not have a watch he could use to record the time—they had not been invented yet, and could not have been until this ‘pendulum effect’ had been discovered. Galileo timed those initial oscillations using his pulse—though he later refined his observations using, I presume, the water clocks or sand glasses that were known in his time.

Why is this interesting? Because Galileo, like other discoverers, used observations or ‘data’ to infer patterns. From the data, he was able to make a prediction—namely, that the period of a pendulum depends only on the length of the pendulum, and not on its height of oscillation, or (as was later found) its weight.

Why is this important, and how does it relate to GenAI? There are two broad branches of Physics, called “experimental” and “theoretical”. The goal of experimental physics is to make observations and determine what happens. The goal of theoretical physics is to explain why something happens—specifically, to discover the underlying principles that manifest themselves in observations, or that predict what will be observed.

What is interesting to me in the context of GenAI is that there is a middle ground between these two areas of physics that is sometimes called phenomenology. The term phenomenology is used in different contexts, but back when I was a graduate student in high energy particle physics (theoretical physics, by the way) the word ‘phenomenology’ was used to describe predictions that we did not yet have the theory to explain. 

In other words, we knew that something happened or would happen, but we didn’t yet have a satisfactory explanation for “why.”

Galileo, in his pendulum observations in the church and subsequently in his ‘lab’, was doing what today we would call experimental physics. That is, he was making observations about what happened, and describing what he saw. 

In my limited historical research, I didn’t find a record that he did so, but we can imagine that Galileo could have taken his observations one step further and made quantitative predictions about the behavior of pendulums. That is, based on his experimental results, he could have discovered that for small oscillations, the period of a pendulum was proportional to the square root of the pendulum's length. 

However, even if he had produced such a quantitatively accurate predictive model, history does not record that Galileo ever really understood WHY the pendulum rule he discovered was true. A satisfying qualitative explanation had to wait roughly 100 years for Dutch scientist Christiaan Huygens’ work on harmonic motion in 1673. A full quantitative explanation required Sir Isaac Newton to first invent calculus and lay out his three laws of motion. (For the theoretical basis of simple harmonic motion, such as a pendulum, see, for example, Feynman’s lecture on the subject.)
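As an aside, the quantitative rule itself is compact: for small oscillations, the period is T = 2π√(L/g). A quick sketch (in Python, purely illustrative) shows the behavior Galileo could have predicted: the period grows with the square root of the length, and neither the bob’s weight nor the swing’s amplitude appears anywhere in the formula.

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Quadrupling the length doubles the period; the bob's weight and the
# (small) swing amplitude never enter the formula at all.
for length in (1.0, 2.0, 4.0):
    print(f"L = {length:.1f} m  ->  T = {pendulum_period(length):.3f} s")
```

Doubling the length multiplies the period by √2 ≈ 1.414, exactly the square-root law described above.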

So how does this history relate to GenAI? 

We can readily imagine our current-generation GenAI models acting like Galileo—observing what happens, identifying patterns, and making extrapolations and predictions based on those patterns. We can even imagine them doing the curve fitting and other math required to turn those fresh observations into mathematical models. 

It’s more difficult to imagine a current-generation GenAI model acting like a Huygens or a Newton and inferring from first principles WHY something happens unless the model already contains that information and simply retrieves it. 

I don’t believe reasoning from first principles is impossible for GenAI, and people are working hard on enabling it. Approaches such as “chain of thought” and “tree of thought” come close. But ‘theory’ is not the strong suit of current-generation (2024) GenAI technology. Current LLMs are “phenomenologists”, not “theorists”, which is in no way intended to underrate their value.

Why do we care about the theory? If we can predict “what” will happen, do we really care “why”?

This is a good question, and it rapidly gets metaphysical, hinging on the nature of consciousness. Moreover, what constitutes a “satisfying explanation” and “first principles” gets really philosophical fast. But in a practical sense, we can see that both theory and phenomenology have value, each in a different context.

Phenomenology has ‘rough and ready’ practical value. Astronomers and, earlier, astrologers could predict the phase of the moon and the progression of the seasons long before they understood that the Earth orbits the Sun, and the Moon orbits the Earth. These purely phenomenologically-based predictions had a profound impact on human history, including the invention of agriculture which, in turn, led to the creation of cities and civilization. 

But it is the nature of the human mind to try to discern the reasons behind what it observes. People developed theories, initially what we’d now term religious or mythological, to explain why the Sun and Moon behave as they do. They did this many centuries before the discovery of calculus and the law of gravity by Newton; the increasingly refined observations made by Kepler and, earlier, Galileo; and Copernicus’ hypothesis that the Earth orbits the Sun. It is in the nature of humans to keep asking “why” until a satisfying ‘theory’ is presented to explain the observations.

Watch: Getting GenAI Ready with GlobalLogic

Besides being intellectually satisfying to us humans, the value of theory is that, by reducing observed behavior to an outcome of basic principles, it lets us solve problems and see connections that phenomenology alone does not. 

For example, the theory of simple harmonic motion outlined in the Feynman lecture above not only explains the motion of pendulums (Galileo’s observations), but also the vibration of plucked strings on musical instruments and the movement of weights on springs. When we generalize this slightly, driven harmonic motion (a pendulum pushed by the wind or by the escapement mechanism of a clock) also leads to insights in the area of “resonance”. 

This, in turn, helps us understand diverse phenomena such as the structure of Saturn's rings and the behavior of physical structures like bridges under the influence of an external force, such as the wind. 

By uniting our understanding of multiple observations, a theory helps us discover the underlying connection between phenomena that initially appeared distinct. This process of forming a theory is not confined to physics but is something all of us do in everyday life. We have a theory of the motivations behind our spouse’s or friend’s behavior; as infants, we form the theory that an object continues to exist even when we don’t see it; as students or engineers we form a theory of what it takes to get a good grade or promotion. 

We also form ‘theories’ every day in the software space, when we develop an “architecture” or algorithm that produces a (hopefully) simple system that solves not just one but multiple problems. 

We also abstract out commonalities between diverse systems—for example, logging, observability, and security—and structure them as “cross-cutting concerns” rather than re-inventing them afresh for every system. In general, people consistently synthesize observations and try to discern the underlying cause behind them. It’s our nature.

The human brain functions using a combination of observation, phenomenologically-based prediction, and abstraction or “theory” to understand what it observes and expects. Currently (in 2024), GenAI is strongest in the first two aspects—observation and phenomenologically-based prediction. 

To deliver on the ‘holy’ (or ‘unholy’) grail of artificial general intelligence, AI-based systems need to not only predict but also be able to form abstractions and ‘theories’ based on their observations and predictions. They will need to combine a ‘Galileo brain’ with a ‘Sir Isaac Newton’ brain. 

I expect that we will indeed see such a ‘meeting of minds’ in GenAI, even though we’re not fully there today. We have ourselves as examples that these two modes of thought can co-exist in a single entity. We also know first-hand the power of intelligence that not only predicts “what,” but also understands “why.”

You might also enjoy:

Executives, decision-makers, technical experts, and Google Cloud partners converged at Google Cloud Next to explore cutting-edge innovations and industry trends. GlobalLogic was there, speaking about modernization strategy and delivering a Cube talk on Intelligently Engineering the Next GenAI Platform we are building for Hitachi.

Among the buzz at GCN 2024, GenAI for customer success and AI-driven process and platform modernization stole the spotlight. The innovative ways companies are evolving from proof of concept to proof of value were hot topics, too. However, challenges like data integrity and legacy point systems loom large as enterprises shift toward proof-of-value AI-driven solutions and efficient monetization strategies. Where should you focus now, and what comes next as you develop your innovation roadmap?

Here are five key trends and takeaways from the event that speak to the essential building blocks innovative companies need to lay the groundwork for successful enterprise-grade AI implementations.

1. Applying GenAI for Customer Success

Enterprise-Grade GenAI solutions for customer success are revolutionizing service quality and driving business outcomes. Imagine equipping your frontline staff with GenAI-driven agents, empowering them to ramp up productivity and provide every customer with a personalized, enhanced experience. Built-in multilingual customer support makes GenAI a versatile powerhouse for enterprise teams, catering seamlessly to a global customer base with diverse linguistic preferences. 

This transformative approach to customer success merges advanced technology with human expertise, paving the way for exceptional service delivery and business success in the digital age.

2. Modernizing the Tech Stack & Transforming the SDLC

GenAI is reshaping the software development landscape by empowering developers to drive efficiency and elevate code quality to new heights. This transformative approach extends beyond mere updates—it's about modernizing the entire stack, from infrastructure to user interface. 

Innovative approaches include automated code generation, building RAG-based applications, enhanced testing and QA, predictive maintenance, and continuous integration and deployment (CI/CD). Leveraging natural language processing (NLP) for documentation, behavioral analysis, automated performance optimization, and real-time monitoring and alerting, GenAI streamlines development processes, improves code quality, and enables proactive decision-making. GenAI empowers developers to drive efficiency, improve security, and elevate software quality to unprecedented heights throughout the SDLC by automating tasks, optimizing performance, and providing actionable insights. 

Through comprehensive refactoring of applications, GenAI is leading the charge towards a future-proofed ecosystem. However, this ambitious undertaking isn't without its challenges; it demands time, dedication, and a strategic roadmap for success. 

3. Building a Future-Forward Framework for Success

Enterprises face key challenges in unlocking the value of AI, such as ensuring data privacy and security, protecting intellectual property, and managing legal risks. Flexibility is essential to adapt to evolving models and platforms, while effective change management is crucial for successful integration. 

Embracing a 3-tier architecture with composable components over the core platform emerges as the future-forward approach, fostering flexibility and scalability. Having a robust infrastructure and data stack to underpin the GenAI layer is indispensable, forming the bedrock for successful implementation. We refer to this holistic framework as the "platform of platforms," which not only ensures alignment with business objectives but also facilitates the realization of optimal outcomes in the GenAI journey.

4. Monetizing Applications 

Monetization was a hot topic at Google Cloud Next, and enterprise organizations gravitate towards Google’s own Apigee for several reasons. Apigee’s robust API management platform offers versatile monetization models like pay-per-use and subscriptions, streamlined API productization, customizable developer portals, real-time revenue optimization analytics, seamless billing system integration, and robust security and compliance features. 

For example, we recently designed and built a solution for monetizing an application that uses APIs to access and leverage industry data stored in a cloud-based data lake. This allowed for scalable and serverless architecture, providing reliable and updated information for improved decision-making, identification of new opportunities, and early detection of potential problems. Apigee’s reputation as a trusted and reliable API management platform is backed by Google Cloud's expertise and infrastructure, further solidifying its appeal to enterprise customers.

5. Evolving the Intelligent Enterprise from POC to Proof of Value

Transitioning from Proof of Concept (POC) to Proof of Value (POV) marks a critical phase in adopting AI technologies, particularly in light of recent challenges. Many POCs implemented in the past year have faltered, and the pressure is on to demonstrate a return on AI investments.

Maturing your AI program from POCs to POV calls for a holistic approach that encompasses not only the capabilities of GenAI but also your foundational architecture, data integrity, and input sources. Maintaining data integrity throughout the AI lifecycle is paramount, as the quality and reliability of inputs significantly impact the efficacy of AI-driven solutions. Equally important is the evaluation and refinement of input sources, ensuring that they provide relevant and accurate data for training and inference purposes. 

Successful GenAI implementations are those that are reliable, responsible, and reusable, cultivating positive user experiences and deriving meaningful value for the enterprise. 

Responsibility means delivering accurate, lawful, and compliant responses that align with internal and external security and governance standards. Reliability shifts the focus to maintaining model integrity over time, combating drift, hallucinations, and emerging security threats with dynamic corrective measures. Finally, reusability emerges as a cornerstone, fostering the adoption of shared mechanisms for data ingestion, preparation, and model training. This comprehensive approach not only curtails costs but also mitigates risks by averting redundant efforts, laying a robust foundation for sustainable AI innovation.

How will you propel your AI strategy beyond ideas and concepts to enterprise-grade, production-ready AI and GenAI solutions? 

Let’s talk about it – get in touch for a 30-minute conversation with GlobalLogic’s Generative AI experts.

Transforming Telco: 5 GenAI Trends Reshaping Experiences & Driving New Revenue

In the fast-paced realm of telecommunications, where constantly connected customers demand increasingly personalized and seamless experiences, innovation is a necessity. Enter GenAI – the catalyst for a profound shift in how telcos interact with their customers and manage their networks. From the bustling discussions at industry events to the boardrooms of leading companies, the buzz surrounding GenAI use cases is palpable.

Join us in exploring the transformative potential of GenAI within the telecommunications landscape. From redefining customer experiences to revolutionizing network operations, GenAI offers a myriad of opportunities for telcos to thrive in an increasingly competitive market.

1. Reimagining Customer Experiences in Telco

We’ve been having many interesting and productive conversations with clients and at the recent Mobile World Congress about GenAI use cases in telecommunications. One area of focus that’s getting a lot of attention and mindshare is GenAI’s impact on customer experience.

As telcos attempt to reimagine customers' experiences across the telecom journey, it’s become clear that intelligent GenAI applications add a lot of value. Imagine you’re a consumer wanting to buy a new service – what’s that experience like today, and how can we make that seamless and engaging? Well, we can start with intelligent chatbots. 

Chatbots aren’t new; they’ve been around for a while. But they haven’t lent themselves well to seamless customer experiences; in fact, many customers found them quite frustrating until machine learning and GenAI made them more intuitive and accurate. All the way from discovery and search through order processing to completion, these technologies are making customer interactions with chatbots seamless and frictionless.

2. Autonomous Networks Powered by 5G Advanced & 6G

As 5G deployments have scaled, momentum has built behind self-organizing networks, which we now see increasingly under development. For telecommunications in particular, GenAI plays a pivotal role in these autonomous networks.

The combination of machine learning and AI can help us predict network outages and detect anomalies in the network. We can also leverage AI to mitigate cell-network interference patterns, providing seamless coverage and reducing operational costs.
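To make the anomaly-detection idea concrete, here is a deliberately simple statistical stand-in (hypothetical latency data, and a basic z-score test rather than the trained ML models a real network-operations pipeline would use) for flagging suspicious readings in network telemetry:

```python
import statistics

def detect_anomalies(samples, threshold=3.0):
    """Return the samples whose z-score exceeds the threshold.

    A toy stand-in for the statistical foundations that ML-based
    network anomaly detection builds on.
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Hypothetical per-minute latency readings (ms); one spike hints at trouble.
latencies_ms = [12, 11, 13, 12, 14, 11, 13, 12, 95, 12, 13]
print(detect_anomalies(latencies_ms))  # the 95 ms spike is flagged
```

Production systems replace the z-score with learned models that account for seasonality and traffic patterns, but the shape of the problem, separating signal from noise in a stream of measurements, is the same.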

3. Activating & Monetizing the Full Spectrum of Telco Data

GenAI adds a lot of value to service operations, too. For example, addressing Wi-Fi network glitches and outages presents significant challenges. Imagine being at home, confronting a network outage, and urgently seeking assistance by contacting a customer service call center or engaging with a chatbot. The frustration often lies in the prolonged wait for a customer agent to assist. 

Enter AI—a transformative force in this scenario. Envision a future where customer agents comprehensively understand each customer’s data, history, and concerns. With their vast data reservoirs, telcos hold immense potential for leveraging AI to enhance customer service. With AI's capabilities, this wealth of data translates into actionable insights. It enables customer agents to navigate service operations efficiently, guiding customers through technical challenges precisely and efficiently.

This vision represents the future of customer service—a harmonious integration of AI and data, where every interaction leads to greater satisfaction. The key lies not only in troubleshooting but also in the synergy of technology and empathy, paving the way for a more connected and fulfilling tomorrow.

4. Bridging Technical Gaps in the Telco Ecosystem

As we attempt to connect the dots, making sense of and monetizing our data wherever possible, we’ll see more use cases for using GenAI for new revenue-generating services. There are many technical gaps between where we believe these innovations can take us and what we need to wade through to get there. 

For instance, telcos can access location information and other data to indicate when consumers are traveling or planning a trip. They can use that to power data roaming sales or even offer travel insurance. How will they connect those dots and integrate with ad tech or insurance platforms for offerings like these? 

Here’s another example: what are the technical gaps between education platforms and telcos? Consider that a North American telco might have 100 million customers. There's a huge potential upside if you start offering new revenue-generating services in the education sector, but that requires both strategic partnership and technological integration. 

There are countless opportunities for new revenue-generating services in this market with machine learning and GenAI helping us uncover relevant data. Those revenue streams can be realized as we develop new ways to bridge the technology gaps.

5. Evolving from Prototypes to Proof of Value & MVPs

In the context of the AI loop, we are still probably in the early phases of this journey. There's a lot of hype, and the last year was all about working on prototypes, experimenting, failing fast, and discovering what could be relevant and contextual.

This year, we will see increasing MVPs, real products, and proof of value. As we mature in this journey, as with any other technological disruption we’ve seen before (whether it was the mobile revolution or the desktop revolution before that), there will be an inflection point. It may be a few years down the line, but it’s coming. Then we will see more AI-first products being developed.

From a telecommunications perspective, this will mean a shift from digital telco journeys to fully native AI telco journeys.

GlobalLogic is already putting two accelerators and our collaborative model for co-creating innovative use cases to work for our customers. With our GenAI "platform of platforms" integrating numerous publicly available LLMs, we're crafting GenAI solutions that precisely align with our customers' objectives and requirements.

Want to learn more? Explore our GenAI Strategy & Solutions and get in touch with GlobalLogic’s GenAI experts today.

Special thanks to Allyson Klein at TechArena for the conversation that inspired this article. You can listen to ‘The Future of AI and the Network with GlobalLogic SVP Sameer Tikoo’ with Allyson here.

Limiting “Work in Process” (WIP) items is one of the key ideas behind Kanban and Lean approaches to developing software. Too many items in progress may make it look like everyone is sufficiently busy, but busyness alone delivers no functional outcome to the end user.

In my experience, it is much more important to work towards completing the user story — in other words, to stop starting and to start finishing.
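The intuition behind limiting WIP can be made quantitative with Little’s Law, a Lean staple: average cycle time equals average WIP divided by average throughput. A toy sketch (hypothetical numbers) shows why piling up in-progress items makes every item slower to finish:

```python
def average_cycle_time(wip, throughput_per_day):
    """Little's Law: average cycle time = average WIP / average throughput."""
    return wip / throughput_per_day

# A hypothetical team that finishes 2 stories per day, at rising WIP levels:
for wip in (4, 8, 16):
    days = average_cycle_time(wip, 2)
    print(f"WIP = {wip:2d} stories -> ~{days:.0f} days per story")
```

With throughput fixed, doubling WIP doubles how long each story stays in flight; hence, stop starting and start finishing.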

It’s natural to assume that this “stop starting, start finishing” philosophy is limited to Lean and Kanban methodologies. After all, Scrum works so well that it doesn’t run into WIP issues, right? Wrong! Let’s look at a typical Scrum standup:

In this example scenario, the project has around 9-10 team members. At the beginning of the sprint, the team creates subtasks for each user story together. The idea behind this method is that any team member should be able to pick up any subtask at any point in time — thereby limiting roadblocks or delays.

During the Scrum standup, each team member shares what he/she did yesterday, what he/she will be doing today, and if there are any impediments. Although this approach provides a decent insight into individual tasks, it fails to provide a broader progress indicator on how close the team is to completing the individual user stories and thereby the sprint. Instead, a better idea is to let the team assess how everyone can collaborate and help each other to move the user stories to the DONE column.

Now you’re probably thinking, “That’s an interesting theory, but is it really necessary? After all, the end user will only see the finished features after the sprint is over.” While technically this is true, let’s look a little deeper at the internal Scrum mechanics.

First of all, since testers receive user stories at the very end of a sprint, they are typically the ones who are under the time crunch to finish the user story on-time and with production-ready quality. However, if the entire team focuses on finishing the user story early, the testers may have more time to test it.

The “stop starting, start finishing” principle encourages better teamwork among team members. For example, in a standard Scrum team, I may choose not to help my colleague because I want to focus on finishing my own task. But if we are all focused on the greater goal of finishing the user story, then it’s in my best interest to help my colleague with his/her tasks. In fact, the primary measure of progress in a Scrum project (as per the Scrum burndown/burnup chart) is how much work remains in a sprint or how much work has been completed — NOT how much work has been started.

So in reality, the Lean approach of “stop starting, start finishing” also aligns very well with Scrum methodology. Specifically, it’s important to look at the user story as a whole during a Scrum standup and to identify how the entire team can work together to close the user story as early as possible.

In my own experience, the best way to do this is to discuss outstanding tasks during the standup and to place WIP limits with each workflow, like in a Kanban project. This approach will result in better throughput, a more thoroughly tested user story and — most importantly — a happier end user.
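One lightweight way to make this visible is to count how many items sit in each workflow state and flag any column that breaches its limit. The sketch below is a minimal, hypothetical illustration in Python; the column names, limits, and task data are invented for the example and not taken from any particular tool.

```python
from collections import Counter

# Hypothetical per-column WIP limits for a simple board.
WIP_LIMITS = {"In Progress": 3, "In Review": 2, "In Test": 2}

def wip_violations(tasks):
    """Return the columns whose task count exceeds the configured WIP limit."""
    counts = Counter(status for _, status in tasks)
    return {col: counts[col] for col, limit in WIP_LIMITS.items()
            if counts[col] > limit}

# Illustrative board state: (subtask, column) pairs.
board = [
    ("US-101 subtask A", "In Progress"),
    ("US-101 subtask B", "In Progress"),
    ("US-102 subtask A", "In Progress"),
    ("US-102 subtask B", "In Progress"),  # fourth item breaches the limit of 3
    ("US-100 subtask C", "In Review"),
]

print(wip_violations(board))  # {'In Progress': 4}
```

In practice the same check is usually configured directly in the board tool, but the point stands: a breached limit is a prompt to finish something before starting anything new.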

Shrikant Vashishtha is the Director of Engineering for GlobalLogic’s CTO department. He has over 15 years of experience in the IT industry and is currently based in Noida, India.

The pandemic and the stay-at-home orders that came with it drove a massive shift from real-world shopping to online. As consumers sought to fulfill their needs, the pace of retail innovation accelerated to meet them in their decision-making moments with rich, compelling shopping experiences.

How is augmented reality being used in retail and ecommerce now? In this paper, you’ll learn how to enhance customer experiences with AR, explore real-world use cases featuring major retailers, and discover lesser-known benefits of augmented reality in ecommerce.

Evolution of Industrial Innovation: How Will IIoT Impact Manufacturing in the Future?

The Manufacturing Industry is entering a new era thanks to the Industrial Internet of Things, or IIoT. This revolutionary technology is dramatically reinventing manufacturing with the integration of digital technology into processes that enhance output quality, reduce costs, and increase productivity. IIoT is a shining example of innovation, pointing to a time when connected ecosystems and smart factories will propel industrial advancement.

Understanding IIoT

What Is IIoT and Why Does It Matter?

IIoT, or the Industrial Internet of Things, combines the physical and digital domains of industrial manufacturing and information technology to build a network that allows machines and devices to communicate, analyze, and use data to make intelligent decisions. More than just optimization, this connectivity is transforming industry operations by increasing process efficiency, predictability, and flexibility.

The Core Components of IIoT Systems

The fundamental elements of the IIoT are its sensors, which gather data, its data processing units, which analyze it, and its user interfaces, which facilitate communication and interaction. Together, these elements provide more operational efficiency and intelligent decision-making by transforming data into actionable insights.
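As a rough illustration of how these three building blocks fit together, the hypothetical Python sketch below wires up a sensing step, a processing step, and a simple operator-facing view. All machine names, readings, and thresholds are made up for the example.

```python
def sense():
    """Sensor layer: emit raw readings (here, fixed sample data)."""
    return [
        {"machine": "press-1", "temp_c": 71},
        {"machine": "press-2", "temp_c": 94},
    ]

def process(readings, limit_c=90):
    """Processing layer: turn raw data into actionable insights."""
    return [r["machine"] for r in readings if r["temp_c"] > limit_c]

def display(alerts):
    """Interface layer: present insights to an operator."""
    return [f"ALERT: {m} over temperature" for m in alerts]

print(display(process(sense())))  # ['ALERT: press-2 over temperature']
```

A real deployment replaces each stage with industrial-grade components (field sensors, a streaming analytics platform, an operations dashboard), but the data flow from sensing to insight to interface is the same.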


How Is IIoT Impacting the Manufacturing Industry?

Streamlining the Production Process

Using IIoT, manufacturers can easily gather data from different equipment and machines in the factory, which helps them identify areas for improvement. Production lines are changing as a result of the high levels of automation and efficiency brought about by IIoT. Smart sensors and devices make possible real-time monitoring and control, together with waste reduction and faster production times. This change not only improves output but also enables enterprises to respond quickly to market requirements and challenges.

Predictive Maintenance

IIoT-based predictive maintenance helps the manufacturing industry monitor equipment performance, anticipate potential breakdowns, and schedule maintenance and repairs, reducing time spent on reactive maintenance. This method represents a major improvement over conventional, reactive maintenance techniques since it decreases downtime, increases equipment life, and lowers maintenance expenses.
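A minimal sketch of the idea, with invented numbers: flag a machine for maintenance when a rolling average of its vibration readings drifts above a threshold. Real predictive-maintenance systems use far richer models (and often machine learning, as discussed later), but the basic shape is the same.

```python
from collections import deque

def needs_maintenance(readings, window=5, threshold=7.0):
    """True if the mean of the last `window` readings exceeds `threshold`.

    `readings` is a sequence of vibration measurements in arbitrary units;
    the window size and threshold here are illustrative, not calibrated.
    """
    recent = deque(maxlen=window)
    for r in readings:
        recent.append(r)
    if len(recent) < window:
        return False  # not enough data to judge
    return sum(recent) / window > threshold

vibration = [5.1, 5.3, 5.0, 6.8, 7.2, 7.9, 8.1, 8.4]  # trending upward
print(needs_maintenance(vibration))  # True
```

The payoff described above comes from acting on that signal: maintenance is scheduled when the trend appears, rather than after a failure or on a fixed calendar.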

Enhancing Safety and Quality Control

IIoT raises the bar for quality assurance and safety. Together, sensors and analytics track operational parameters and the environment to make sure manufacturing operations stay within safe bounds and that the quality of the final product doesn't change. By proactively monitoring, accidents and faults are avoided, protecting both workers and customers.

Key Technologies Behind IIoT

The Role of Big Data and Analytics

The IIoT is not possible without big data and analytics, which allow for the analysis of enormous volumes of data produced by sensors and devices. By identifying patterns and insights, this analysis can support better decisions, optimize workflows, and forecast trends, all of which improve operational effectiveness and strategic planning.

Connectivity Solutions: The Backbone of IIoT

In IIoT, connectivity is pivotal to tying systems and devices together throughout the manufacturing floor and beyond. Technologies such as Wi-Fi, Bluetooth, and 5G facilitate real-time data exchange and guarantee smooth connectivity. The synchronization of activities and the application of automation and advanced analytics depend on this interconnection.

AI and Machine Learning: The Brains Behind the Operation

Thanks to artificial intelligence (AI) and machine learning, IIoT systems are becoming intelligent entities with the ability to make decisions, forecast results, and learn from processes. These technologies automate complex decision-making, which increases productivity and sparks innovation. By studying data patterns, AI can foresee equipment breakdowns, optimize production schedules, and customize maintenance plans.

Challenges in Implementing IIoT

Integration Complexities

There are several obstacles to overcome when integrating IIoT into current production systems, from organizational reluctance to compatibility problems at the technological level. Manufacturers need to devise a strategic approach that encompasses gradual deployment, ongoing review, and stakeholder participation in order to effectively manage these challenges.

Cybersecurity: Protecting the Digital Frontier

New cybersecurity threats are introduced by the interconnectedness of IIoT. Ensuring the integrity of industrial processes and safeguarding confidential information are critical. To protect themselves from cyberattacks, manufacturers need to put strong security measures in place, such as encryption, access limits, and frequent security assessments.

Overcoming the Skills Gap

A workforce proficient in both digital technology and conventional manufacturing is necessary given the trend towards IIoT. It is imperative to close this skills gap in order to implement IIoT successfully. Manufacturers can overcome this obstacle by implementing focused training plans, forming alliances with academic institutions, and encouraging an environment that values lifelong learning.

IIoT in Action: Case Studies

Case Study 1: Predictive Maintenance in Brazil's Manufacturing Sector

Background:

A leading manufacturing firm in Brazil, specializing in automotive parts, faced challenges with equipment downtime and maintenance costs. Traditional maintenance strategies were reactive or scheduled at fixed intervals, leading to unnecessary maintenance or unexpected equipment failures.

Implementation:

The company embarked on an IIoT project to shift towards predictive maintenance. IoT sensors were installed on critical machinery to monitor various parameters such as temperature, vibration, and noise levels in real-time. This data was transmitted to a cloud-based analytics platform where machine learning algorithms analyzed the data to predict potential failures.

Challenges:

  • Integrating IoT sensors with legacy equipment.
  • Ensuring data accuracy and reliability.
  • Developing predictive models specific to their machinery and failure modes.

Outcomes:

  • Reduced unplanned downtime by 40%, as maintenance could be scheduled before failures occurred.
  • Maintenance costs decreased by 25% due to eliminating unnecessary scheduled maintenance.
  • Extended equipment lifespan and improved overall equipment effectiveness (OEE).

Case Study 2: Production Optimization in Germany's Automotive Industry

Background:

A German automotive manufacturer aimed to enhance its production efficiency and product quality. The traditional quality control process was reactive, with defects often identified only after production, leading to waste and rework.

Implementation:

The company implemented an IIoT system to collect data from sensors placed throughout the production line. This system provided a real-time view of the manufacturing process, enabling immediate adjustments to maintain quality standards. Additionally, the company developed digital twins for key components, allowing for virtual testing and optimization before physical production.

Challenges:

  • Achieving seamless integration of IoT data across different stages of production.
  • Ensuring data security and privacy.
  • Training staff to interpret IoT data and make informed decisions.

Results:

  • Product defects were reduced by 30%, significantly improving product quality.
  • Production efficiency increased by 20% through real-time adjustments and optimization.
  • Reduced costs associated with waste and rework.

How Will IIoT Affect Manufacturing in the Future?

Current Shifts and Forecasts

Innovations and constant improvement will characterize IIoT-driven production in the future. The adoption of 5G for improved connection, the creation of digital twins for sophisticated testing and simulation, and the use of AI and machine learning for more complex analytics are examples of emerging trends. These developments should improve manufacturing's flexibility, efficiency, and customizability even more.

Artificial Intelligence and Machine Learning's Next Wave

It is expected that machine learning (ML) and artificial intelligence (AI) will have a significant impact on the IIoT in the future. These technologies will propel improvements in industrial processes, increasing their autonomy, intelligence, and predictability. With their aid, manufacturers will be able to take full advantage of the IIoT, from production processes that optimize themselves without human intervention to real-time supply chain optimization.

Formulating a Sustainable IIoT Plan

Important Steps for a Successful Launch

An effective IIoT strategy should consider several important factors, such as clearly defining objectives, selecting appropriate technology, and ensuring a seamless interface with existing systems. Manufacturers must put cybersecurity, employee training, and stakeholder engagement first to enable the successful deployment of IIoT.

Measuring the Impact: ROI of IIoT Applications

Evaluating IIoT project outcomes is critical to justifying investments and guiding future efforts. Manufacturers should establish specific criteria, such as higher output, reduced downtime, and better product quality, to calculate return on investment. If manufacturers regularly monitor and evaluate these KPIs, they may maximize their IIoT strategy and achieve long-term benefits.
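As a back-of-the-envelope illustration of the ROI measurement described above, the KPI deltas can be translated into annual dollar figures and compared against the project cost. All numbers in this Python sketch are entirely hypothetical.

```python
def iiot_roi(annual_savings, annual_gains, investment):
    """Simple one-year ROI as a percentage of the initial investment.

    annual_savings: e.g. avoided downtime and maintenance costs
    annual_gains:   e.g. value of higher output and better quality
    investment:     total project cost (sensors, platform, training)
    """
    return 100 * (annual_savings + annual_gains - investment) / investment

# Hypothetical figures: $250k downtime savings, $150k extra output,
# against a $300k project cost.
print(round(iiot_roi(250_000, 150_000, 300_000), 1))  # 33.3
```

A fuller evaluation would discount multi-year cash flows and include ongoing operating costs, but even this simple ratio makes the KPI tracking actionable.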

Frequently Asked Questions (FAQs)

  • How does IIoT differ from traditional IoT?

While standard IoT covers a wider spectrum of consumer and corporate applications, IIoT concentrates on industrial applications, highlighting efficiency, dependability, and connectivity in production environments.

  • What immediate benefits does IIoT offer to manufacturers?

Immediate advantages include improved safety and quality control, decreased downtime due to predictive maintenance, and increased operational efficiency.

  • Can SMEs leverage IIoT? 

Yes, SMEs can gain from IIoT by beginning with scalable solutions made to match their unique requirements, which will increase their productivity and competitiveness.

  • How does IIoT contribute to sustainable manufacturing?

IIoT improves sustainability by using resources more efficiently, cutting waste, and using less energy during production thanks to more intelligent manufacturing techniques.

  • What are the best security practices for IIoT systems?

Strong encryption implementation, frequent security audits, access controls, and keeping up with the most recent cybersecurity threats and defenses are examples of best practices.

  • Starting with IIoT: Where do beginners begin? 

Before using IIoT technologies widely, novices should first conduct a thorough assessment of their needs and goals. This should be followed by pilot projects where users may test and learn from the technologies.

Manufacturers have a revolutionary opportunity to reimagine their operations and adopt an efficient, innovative, and sustainable future when they utilize IIoT. By understanding the potential, overcoming the challenges, and leveraging the technology driving IIoT, producers can achieve previously unattainable levels of productivity and competitiveness. Going forward, integrating IIoT into manufacturing processes will be not only possible but imperative for those who want to lead in the industrial landscape of the future.

This is probably a well-known fact in sociology or some other such discipline, but it struck me the other day that only the generation that knows how to do something can be the one to make that thing obsolete.

Take driving a car, for example. My generation and the ones preceding me in the U.S. eagerly learned how to drive a car as soon as we were legally allowed. Like most of my contemporaries, I started driver's education as soon as the law allowed, at age 15.5, and had my license in hand as soon as I turned 16 years old. But more recently, the percentage of Americans holding a driver's license at age 16 has declined from an already low 46.2% in 1983 to a mere 25.6% in 2018, according to statista.com [https://www.statista.com/chart/18682/percentage-of-the-us-population-holding-a-drivers-license-by-age-group/]. While the decrease was not as dramatic across all ages, fewer adults in the U.S. held driver's licenses in 2018 than in previous years as well.

It’s not inconceivable to me that in a few decades, between the proliferation of ride-sharing services (a technology-driven business model) and self-driving cars (a new technology), relatively few American adults will know how to drive. This is in the country, the U.S.A., that introduced ‘car culture’ to the world.

But today, in 2024, about 91% of American adults still have a driver’s license. And I think that’s a necessary condition for self-driving cars to evolve.

Any new technology will be imperfect. This means that people who know how to use the previous generation of technology are the ones who need to be the pioneers that introduce the next generation. Those are the people who can revert to the ‘old’ way when necessary, because the ‘new’ way isn’t quite up to some aspects of the task. While my Tesla does some things very well already, I will still override the self-driving features when I believe it’s not doing the right thing. But if I didn’t know how to drive, I would be at the mercy of the car—instead of seeing it as an ally and a tool. Except in controlled and limited (or remotely supervised) conditions, I don’t think a non-driver would feel completely safe in even the best of today’s self-driving cars in all circumstances. But for those of us who can already drive, self-driving functionality is a great thing; we can turn it on or off according to the situation and our needs.

There is no doubt in my mind that self-driving cars will be perfected, and will some day soon drive better and more safely in all circumstances than I do. In some specific areas, my ‘self-driving’ Tesla already does a better job than I would. Today’s children—or (if you’re a pessimist) their children—will truly have no need to learn to drive once they become adults. Except for recreation, I doubt if many will bother learning to drive. Driving ‘manually’ will become a forgotten skill.

But for the present, the only way to perfect self-driving cars is to put them in the hands of people who already know how to drive. Only those people with driving skills can “rescue” the algorithms when they don’t work quite right, or can act as trainers and perfecters of the new technology. In other words, only the people who are masters of the old technology can become the pioneers of the new.

I see the same thing happening in the software industry, as we adopt GenAI-driven development tools. There is no doubt in my mind that at some point in the future, GenAI will produce better code, tests, architectures and other software artifacts than we can create manually—and certainly faster. But as in the car-driving example, only those people with the skills to develop systems manually can be the ones to make the new technology successful.

GenAI-based development will predictably have gaps. While there are some areas where GenAI-driven development can add tremendous value already, it does not seamlessly cover the entire software development lifecycle, and won’t for some time. Humans with ‘traditional’ skillsets are very much required to realize the advantages of GenAI-based development.

For this new technology to succeed, the people who know how to develop software the ‘old fashioned way’ will need to make it successful. But why would we do that? We all believe—reasonably, I think—that this new technology will change our work fundamentally. Why would we risk working ourselves out of a job or, at the least, risk changing our current jobs beyond recognition?

My work experience has shown me, time and time again, that those people who try to make themselves indispensable by withholding knowledge are often the first to lose their jobs in any major transition. We can all probably think of a few examples of people who did manage to avoid being let go in a work transformation by hiding ‘secret knowledge’. But what a miserable existence they must have had! Hoarding knowledge and constantly worrying that someone else would displace them by learning what they think makes them valuable. Such behavior reminds me a bit of Gollum stroking the One Ring and repeating “My precious!”.

While our generation is the current flag bearer for the accumulated wisdom of software development know-how, its techniques and best practices are far from secret knowledge. Countless books, articles, blogs, training courses, examples and other artifacts exist and can be accessed over the web. When AIs get smart enough, they will have ample material from which to learn—as we’re already seeing. Also, there are fortunes to be made from teaching them, and new job opportunities to be created because of GenAI-native development. There’s no way any of us—or all of us—could hold back this tide, even if we wanted to. GenAI will transform the software industry: that is a given. We can argue about ‘when’ and ‘how’, but I don’t think the ‘what’ is in dispute.

Take heart, though. If you love to drive, I think that even in the upcoming era of truly self-driving cars you will have the opportunity. Manually driven cars will still be available, as an option on new or specialized models, for rental to hobbyists, or through the ‘vintage’ market. Similarly, if you love to program, I’m sure you still can. But our generation will indeed be the generation that makes GenAI-native software development a reality. The only question in my mind is: will it be because of some of us? Or all of us?
