Archives

Edge computing is transforming the way data is gathered, processed, and delivered from millions of devices around the world. With the explosion of IoT in every industry (29B devices by 2022) and the maturity of cloud infrastructure, the need for computing power and execution capabilities at every endpoint of a network has increased manifold. Gartner estimates that by 2025, 75% of data will be processed outside the traditional data center or cloud, up from 10% today.

Earlier, edge computing referred to the place where the various devices on a network would connect, deliver data, and receive instructions – in effect a centralized control center (like a data center or a cloud infrastructure). But that model soon reached its design limits with the exponential increase in IoT devices. Because these devices gather so much data, the sheer volume requires larger and more costly connections to a data center or a cloud. In addition, the nature of the computation performed at the endpoints has also changed, with newer use cases that need low latency, real-time or near-real-time processing, or a series of back-and-forth operations. This has driven the metamorphosis of edge computing into what it is today.

So, what really is edge computing? An easy-to-understand definition these days is: computing that is done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. As edge computing grows wider and deeper, here are seven trends we are witnessing.

1. Use cases inch closer to reality

With most of the technical kinks sorted out, edge computing use cases are now starting to take shape beyond the pilot or proof-of-concept. Examples include:

  • Predictive maintenance using Industrial IoT solutions in the manufacturing industry
  • Connected Ambulance, enabling live streaming of processed patient data
  • Haptic-enabled diagnostic tools for remote specialist diagnosis in the Healthcare industry
  • Optimizing production line performance in the Manufacturing industry
  • Connected driving experiences in the Automotive industry

2. Architecture starts to mimic cloud-native paradigms

Early edge computing applications were all traditional local applications: difficult to update, test, or upgrade. With greater maturity, this architecture is now mimicking cloud-native patterns, as opposed to being traditional monolithic local applications. This gives the newer breed of edge computing applications the benefits of the cloud combined with the speed of the edge. Edge applications are thus designed and built as an "edge-cloud" to manage and orchestrate workloads across scalable physical infrastructure to ensure business continuity.

3. Melding of 5G and Wi-Fi 6 with edge computing

5G needs mobile edge computing for two key reasons. The first is to meet the 5G standards specification (i.e., 1 ms network latency). The second relates to the implementation path operators are taking toward 5G: the current approach varies by operator and region, so adoption will be gradual until "full 5G" is achieved. Melding edge computing technology with 4G can help realize several 5G use cases. One example is edge computing-enabled "5G-like experiences" for maintenance field teams using AR headsets, which can be deployed in regions that do not yet have 5G coverage and then gradually scaled.

Wi-Fi 6 is also fast becoming a complementary technology to 5G. As a high-density local area service, Wi-Fi 6 will be the technology of choice for enterprises in those environments. A Wi-Fi 6 LAN will appear to the 5G network as just another node, thus abstracting the transition from the user.

4. Acceleration in actual IoT data usage and processing

The early hype around IoT did not last because the cost of transmitting and storing all the data in the cloud or a data center outweighed the touted benefits. But with edge computing becoming more mainstream and reliable, these costs can be cut several-fold, and machine learning and AI models can start identifying data patterns that have business impact.
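As a purely illustrative sketch of that cost argument, the Python snippet below shows an edge node that aggregates raw sensor readings locally and sends only a compact summary (plus any flagged anomalies) upstream. The threshold, sample sizes, and the send_to_cloud stub are hypothetical, not taken from any specific product.

```python
import json
import random
import statistics

ANOMALY_THRESHOLD = 90.0  # illustrative vibration limit; tune per use case


def read_sensor_batch(n: int = 600) -> list[float]:
    """Stand-in for one minute of raw samples read on the device itself."""
    return [random.gauss(70.0, 8.0) for _ in range(n)]


def summarize_at_edge(samples: list[float]) -> dict:
    """Reduce raw samples to a compact summary plus any flagged anomalies."""
    return {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 2),
        "max": round(max(samples), 2),
        "anomalies": [round(s, 2) for s in samples if s > ANOMALY_THRESHOLD],
    }


def send_to_cloud(payload: dict) -> None:
    """Placeholder for the uplink (MQTT, HTTPS, etc.); here we just report its size."""
    print(f"uplink payload: {len(json.dumps(payload))} bytes")


if __name__ == "__main__":
    raw = read_sensor_batch()
    print(f"raw batch size: {len(json.dumps(raw))} bytes")
    send_to_cloud(summarize_at_edge(raw))  # only the summary crosses the network
```

Only the summary leaves the edge, which is where the bandwidth and storage savings come from, while the cloud side can still train models on the summaries and the flagged anomalies.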

5. IT and OT convergence

Traditional enterprises in industries like manufacturing, transportation, and oil and gas have always had separate organizations managing software systems (IT) and industrial systems (OT). As these enterprises modernize their business – going digital and applying new technology to critical use cases like predictive maintenance and shop-floor optimization – these teams are becoming more integrated and collaborative, with edge computing as the common ground.

6. Impact of streaming media

Globally, Cisco forecasts that by 2022, video traffic will account for 82% of all business and consumer IP traffic. Simultaneously, global VR/AR traffic will grow twelve-fold between 2017 and 2022. This means the internet has to be re-architected to enhance the current CDNs and mitigate bottlenecks. Edge data centers will have a large role to play here.

7. Lesser-known rule of three: autonomy, compliance, and security

While bandwidth and latency are front and center in edge computing, the less-discussed aspects of autonomy, compliance, and security are also key considerations.

Autonomy in edge computing is about ensuring that the scale, variability, and rate of change of edge devices and environments are managed autonomously. Compliance in edge computing is about adhering to the regional laws and policies governing the transfer of data. Lastly, security in edge computing is about ensuring that a common minimum security implementation exists on every edge device, to prevent weakest-link syndrome.

Conclusion

To conclude, most enterprise leaders and decision-makers view the key factors for edge computing adoption to be:

  • Flexibility to meet present and future AI demands
  • Avoiding network latency
  • Promise of complex processing outside the cloud

It is clear that in an increasingly interconnected world, the impact of enterprise solutions and technology will become equally pervasive in the factory as well as in your home.

✓ Jira Service Desk is now Jira Service Management

✓ Clean up your Confluence site

Jira Service Desk is now Jira Service Management

For: Data Center and Server

Jira Service Management empowers Dev and Ops teams to collaborate at high-velocity, so they can respond to business changes and deliver great customer and employee service experiences fast.

Confluence has been updated to use the new name, but there have been no changes to any of the great features, including knowledge base integration.

Clean up your Confluence site

For: Data Center and Server

A side effect of a big, busy Confluence site is the huge amount of content created. Over time, this can really build up, and cleanup becomes essential.

Atlassian’s new guide gives you plenty of hints and tips for improving findability and reducing the size of Confluence’s footprint. From archiving spaces to identifying large attachments, there’s a range of strategies to help you keep Confluence lean and clutter-free.

They’re planning some more improvements in this area, and will share them with you in a future release.

✓ Bamboo logs – less noise, more control

✓ Bamboo Specs improvements

✓ Tag trigger

Bamboo logs – less noise, more control

We know that in order to understand what’s happening with your builds you need quick access to relevant information in your log files. To help you achieve that, Atlassian has decided to limit the level of noise that gets in your way and give you control over how much your Bamboo instance logs. Bamboo 7.2 brings you the following log improvements:

  • To make things quieter, Bamboo will log less data by default from now on. In return, Atlassian has introduced a verbose mode that lets you turn on logging of additional data, such as logs from various VCSs and environment variables. You can enable verbose mode when running a customized plan, or on the deployment screen.
  • Atlassian has also changed how job reruns are logged. Until now, the logs for every rerun were appended to the existing log for that job, producing a very long log file that was hard to navigate and use. Now the logs for each rerun are stored in a separate file, which makes troubleshooting much easier.

Bamboo Specs improvements

Bamboo 7.2 brings you a number of improvements to Bamboo Specs:

  • Atlassian is introducing the any-task command, which lets you use tasks from any Marketplace app in Bamboo YAML Specs.
  • They’ve added native YAML Specs support for SSH/SCP, Command, Maven, and Build Warnings tasks.
  • App vendors can now use the new YAML Specs API to manage plan and deployment triggers.
  • Trigger condition configuration is now available in Java Bamboo Specs; YAML Specs support is coming in future versions.
  • Third-party Java Specs builders can now be used for repository-stored Specs. Create your own Specs libraries to manage large and complex plan configurations.

Tag trigger

Bamboo 7.2 introduces tag triggers. Now you can fire up your builds automatically whenever a selected tag appears in your repository.

Enable and disable agents over REST API

Starting from version 7.2, Bamboo allows you to enable and disable agents through REST API endpoints.
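As a quick sketch of how this could be scripted, the Python snippet below calls enable/disable endpoints with the requests library. The exact paths (/rest/api/latest/agent/{id}/enable and .../disable), the HTTP method, and the credentials shown here are assumptions based on Bamboo’s usual REST conventions – confirm them against the REST API documentation for your Bamboo 7.2 instance.

```python
import requests

BAMBOO_URL = "https://bamboo.example.com"  # hypothetical instance URL
AUTH = ("bamboo-admin", "app-password")    # replace with real credentials or a token


def set_agent_enabled(agent_id: int, enabled: bool) -> None:
    """Enable or disable a Bamboo agent via REST.

    Assumes endpoints of the form /rest/api/latest/agent/{id}/enable|disable;
    verify the exact path and HTTP method in the docs for your Bamboo version.
    """
    action = "enable" if enabled else "disable"
    response = requests.put(
        f"{BAMBOO_URL}/rest/api/latest/agent/{agent_id}/{action}",
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()


if __name__ == "__main__":
    set_agent_enabled(123456, enabled=False)  # take agent 123456 offline for maintenance
    set_agent_enabled(123456, enabled=True)   # bring it back online
```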

In this brief white paper, we provide an overview of various automation frameworks and their approaches, including behavior-driven development (BDD) frameworks, the Spock framework, and customized frameworks.

The pandemic has impacted people's behaviors, both now and into the foreseeable future. Organizations need to refocus their digital strategies and accelerate their digital offerings to meet these behaviors and expectations. Customer and employee safety will be paramount in the pandemic recovery. During this GlobalLogic Cafe Session, we explore how organizations can build contactless experiences to meet customers' new expectations. Backed by core engineering, our team provides several real-life examples across business domains that you can take back and implement at your company.

Introduction of Jira Service Management

Atlassian doubles its resources for IT Service Management, a market that continues to grow rapidly.

Atlassian has set another milestone with the introduction of Jira Service Management. The latest advances in IT service management (ITSM) bring IT operations and development teams together to collaborate at high speed and strengthen digital enterprises.

Even before COVID, companies were increasingly becoming digital enterprises; now this change is accelerating, whether it is about supporting employees in different locations or shifting doctor’s visits online. IT teams are currently working to create extraordinary software-supported experiences that are always available for their customers as well as for their employees.

Over the years, Atlassian has helped thousands of software development teams adopt the principles of Agile and DevOps, enabling them to deliver faster and higher quality experiences. The move to the cloud has provided infrastructure teams with equally flexible production environments.

But what about the rest of IT? How often are service management teams – which rely on seamless workflows between development, IT operations and business teams – tied up with tools that enforce old ways of working?

With workflows sometimes stuck in the 1990s, traditional ITSM tools hardly seem ready for the challenges of high-speed IT.

That’s why Atlassian took another look at the core idea behind Agile and DevOps: above all, ensure that processes adapt to the needs of the teams. So how can you design an ITSM solution that addresses this and helps unify development, IT operations and business teams?

ITSM at high speed

The new ITSM approach builds on Jira and helps teams free themselves from the past. It puts development and IT operations on a unified platform to work together at high speed so they can respond to business changes and quickly deliver great customer and employee service experiences.

Jira Service Management represents the next generation of Jira Service Desk. In addition to all the rich features of Jira Service Desk, which more than 25,000 customers already know and love, Jira Service Management offers:

  • Advanced incident management, powered by Opsgenie:
    Atlassian has integrated on-call scheduling, alerting and more from the popular Opsgenie product into all Jira Service Management cloud plans. In addition, deeper integrations have been developed with Jira Software, Bitbucket and Confluence so you can seamlessly orchestrate incident resolution processes that span development and IT operations teams.

  • Change Management, designed for the DevOps era:
    Your teams can make smarter decisions with richer contextual information – both from your software development and infrastructure-related tools. Innovate faster with automated change risk assessments, advanced approval workflows and deep integrations with popular CI/CD tools such as Bitbucket Pipelines, Jenkins and CircleCI.
  • Intuitive service experiences:
    Atlassian has redesigned the agent experience to better categorize your service requests, incidents, problems and changes. Leverage new features such as bulk ticket actions and the power of machine learning to intelligently categorize similar tickets and act quickly.
  • The advantages of Jira Service Management:
    Rapid time to value. Atlassian rejects the one-size-fits-all, command-and-control workflow management common in many ITSM tools, which significantly increases the cost and complexity of each deployment. Instead, teams can use a low-code approach to define and refine their own workflows and record types, all while standardizing on Jira. Even teams that interact with IT – such as legal, human resources and finance – can use Jira Service Management to build their own service culture and service processes.

Making work visible. Being built on Jira means that Jira Service Management can provide teams and the broader organization with visibility into work across the organization. Together with tight integrations to other Atlassian products and the suite of more than 900 integrations and applications on the Atlassian Marketplace, teams really have all the contextual information they need to make informed decisions.

Dev + Ops. Your teams can work more effectively across the entire IT service lifecycle – planning, design, development, testing, deployment, change and optimization – all so you can provide the best possible service to your customers.

This announcement underscores Atlassian’s commitment to investing in ITSM, a market that remains dynamic and continues to grow rapidly. It builds on recent acquisitions such as Mindville Insight for asset and configuration management, Opsgenie for incident management, Automation for Jira for code-free automation and Halp for conversational ticketing.

Get started today

Atlassian also thanks its more than 25,000 Jira Service Desk customers for their trust.

Customers are automatically switched to Jira Service Management at the same cost and on the same plan level they currently have. So rest assured that everything you’ve learned to appreciate about Jira Service Desk works the same way in Jira Service Management.

✓ Cloud gives you instant access to the latest features, security upgrades, and bug fixes

✓ Cloud helps you prioritize creativity and strategic work

✓ Cloud empowers non-technical teams

✓ Cloud simplifies remote work and distributed teams

The cloud is no longer a differentiator – it is a strategic prerequisite for long-term success. That’s what Forrester’s Benchmark Your Enterprise Cloud Adoption report says and that’s what Atlassian customers are saying.

Ten years ago, moving to the cloud was a matter of priority – not anymore. Today it’s about keeping pace and providing customers and employees with the services they expect.

So how can your teams work securely in the cloud in the future? Let us count the ways:

1. Cloud gives you instant access to the latest features, security upgrades, and bug fixes

If you run your software on your own servers, a manual upgrade is required each time new features are released (usually two to four times a year). The obvious cost here falls on the IT team, who need both time and budget to make the changes. And they often have to plan for downtime that can affect the entire organization.

The less obvious cost to the business of upgrading only a few times a year is that each upgrade brings a lot of new functionality at once. This means each upgrade confronts teams with a learning curve of features they’ve never seen before. Because you’re introducing so many new things at once, there’s also a greater chance that some new bugs will be introduced – and the fixes for those bugs may not arrive until the next upgrade, three to six months later.

With the cloud, on the other hand, releases can be as small as a single bug fix or product enhancement and are rolled out to only a handful of customers at a time, which reduces the risk of introducing a new bug. In other words, if something goes wrong, the change can be easily reversed and its impact is limited. Instead of waiting three months for the next release to fix a system bug, teams have the fix as soon as it is ready.

And because new features are also introduced regularly and in small batches, it’s easier for your teams to keep up with changes rather than having to retrain several times a year. This not only keeps teams competitive by giving them instant access to the latest features, but also keeps them agile and connected to the systems they use every day.

2. Cloud helps you prioritize creativity and strategic work

Hosting your software and products on-site requires more and more time from your technical teams. Scaling to provide more storage, inventory or processing power to your users can take days, if not weeks or even months. Upgrades and security patches require a regular time commitment. And every major incident – and the scramble, sometimes lasting late into the night, to respond to a problem or security breach – rests entirely on the shoulders of your IT team.

With the cloud, all that extra work is outsourced: bug fixes, problem management and major incidents become the responsibility of your provider. This means the IT team can give up tedious, low-value tasks like installing new servers or troubleshooting in favor of focusing on the strategic and creative work that is essential and unique to your business.

Not to mention the fact that most IT teams are already overloaded. Internal support teams process an average of nearly 500 support tickets per month, and according to a Zendesk study, it takes more than 24 hours to respond to each one. Overload is the number one reason employees quit, according to Forbes Magazine.

By shifting support for servers, uptime, upgrades and security patches from the IT team to your cloud provider, they can respond more quickly to other requests – and it will probably help you retain your top talent.

3. Cloud empowers non-technical teams

In on-site operations, every change – whether it’s a security upgrade, a new feature or more processing power – must be handled by the IT department. This not only places a burden on the technical team, but also slows down the work of your non-technical teams and deprives them of the ability to quickly improve their workflows, systems and team dynamics.

With the cloud, teams can be fast and agile with features like automatic scaling and instant security and feature upgrades. They can make process changes and take advantage of new features and benefits that improve their workflows without the need for lengthy approvals, delays or IT overload.

4. Cloud simplifies remote work and distributed teams

For companies that are still completely on-premises, remote working is complicated. On-site installations can be accessed remotely, but maintaining security while granting access is a complex dance of passwords, firewalls, VPN barriers and architectural constraints.

In contrast, cloud solutions are already accessible from any location with an Internet connection. And cloud security is already built with remote work in mind (which is probably why 94% of companies surveyed say their security improved after moving to the cloud).

Even better, the same benefits that allow employees to work remotely – either full-time or at the touch of a button in an emergency – make it easy to support geographically dispersed teams.

The great advantage of such distributed teams and remote work (apart from crisis management) is that they provide access to a larger talent pool, both geographically and by opening up positions to those who need to work from home for reasons such as a disability or being a caregiver for an elderly parent or sick child.

With the world at our fingertips, we are just a click away from exploring new horizons, thanks to the internet! When it comes to retaining the maximum number of users on a page, website speed is what counts, and this is where a Content Delivery Network (CDN) comes into play. With availability spanning geographical boundaries, a CDN comes in handy for delivering web content to users at high speed.

Not only do a CDN’s local edge servers shield the origin (master) server, they also absorb a site’s incoming traffic while speeding up page load times. A CDN provides a rich OTT experience to end users by placing servers in various parts of the world for fast-paced content delivery, and it forms the backbone of OTT platforms in determining the latency of video streaming services.

This publication answers the how, why, and what of CDNs. Though a Content Delivery Network can serve any content on the internet, this publication focuses on the OTT video/audio streaming use case.
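To make the origin-shielding idea above concrete, here is a small, purely illustrative Python sketch of an edge cache: repeat requests are served from the edge node, and only cache misses travel back to the origin. The class, latency figure, and TTL are hypothetical and not drawn from any particular CDN product.

```python
import time

ORIGIN_LATENCY_S = 0.30   # illustrative round trip to a distant origin server
CACHE_TTL_S = 60          # how long a cached object stays fresh at the edge


class EdgeCache:
    """Toy model of a CDN edge node that shields the origin from repeat requests."""

    def __init__(self) -> None:
        self._store: dict[str, tuple[float, bytes]] = {}
        self.origin_hits = 0

    def _fetch_from_origin(self, path: str) -> bytes:
        self.origin_hits += 1
        time.sleep(ORIGIN_LATENCY_S)              # simulated long haul to the origin
        return f"<content of {path}>".encode()

    def get(self, path: str) -> bytes:
        cached = self._store.get(path)
        if cached and time.time() - cached[0] < CACHE_TTL_S:
            return cached[1]                      # edge hit: the origin never sees the request
        body = self._fetch_from_origin(path)      # edge miss: go back to the origin once
        self._store[path] = (time.time(), body)
        return body


if __name__ == "__main__":
    edge = EdgeCache()
    for _ in range(1000):
        edge.get("/video/segment-001.ts")
    print(f"1000 viewer requests, {edge.origin_hits} origin fetch(es)")
```

A real CDN adds geographic request routing, cache invalidation, and tiered caching on top of this, but the hit/miss pattern is what keeps load off the origin and latency low for viewers.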

Cloud computing offerings have disrupted the IT landscape, providing both opportunities for dramatic growth for the companies embracing them and, in cases of incorrect implementation, a plethora of pitfalls. GlobalLogic has developed a Cloud Adoption Framework based on top industry practices and infused with experience gleaned from implementing migrations firsthand, in order to help clients avoid these stumbling blocks. The purpose of this Cloud Adoption Framework (hereinafter referred to as CAF) is to:

  • Provide an overview of stages for cloud migration
  • Outline different modes of cloud migration
  • Provide a list of actionable items for GlobalLogic Consultants and Clients at every stage
  • Manage and assist with migration efforts based on inputs gathered
  • Provide a list of templates and best practices

Given that each cloud journey is different, the authors have tried to keep this framework actionable yet lightweight and flexible enough to drive conversations and actions tailored to each client. It is free of the exhaustive templates and linked documents that make for a more rigid approach.

All cloud vendors and most major consultancy companies have cloud migration strategies and adoption frameworks. Each revolves around similar concepts, differing in nomenclature and depth of analysis for each phase; see the Cloud Economics model by Deloitte vs Microsoft’s Cloud Business Cases model, for example. Juniper, Amazon, and TechMahindra are among the others using their own frameworks.

Each cloud adoption and migration effort is an iterative journey with an inevitable “crossing tax” to move between contexts. The realization by business and IT that the cloud can accelerate certain business transformation objectives drives the final step: actual cloud adoption.

GlobalLogic’s CAF helps clients align strategies for the business, corporate culture and technical changes needed to achieve adoption and successful business outcomes.

In recent history, AI has revolutionized how live sports data is collected, analyzed, and turned into intelligence that can drive decisions. In this GlobalLogic Cafe session, Mitchell Wasserman (COO of Sportlogiq, an AI-enabled sports analytics company), digs into the fascinating, game-changing world of AI & sports data.
