Edge computing is transforming the way data is gathered, processed, and delivered from millions of devices around the world. With the explosion of IoT across every industry (a projected 29 billion connected devices by 2022) and the maturity of cloud infrastructure, the need for computing power and execution capability at the end points of a network has grown manifold. Gartner estimates that by 2025, 75% of data will be processed outside the traditional data center or cloud, up from 10% today.
Edge computing originally referred to the place where the devices of a network would connect, deliver data, and receive instructions; in effect, a centralized control center such as a data center or cloud infrastructure. That model soon hit its design limits with the exponential increase in IoT devices: the sheer volume of data these devices gather requires ever larger and more costly connections back to a data center or cloud. In addition, the nature of the computation performed at the end points changed, with newer use cases demanding low latency, real-time or near-real-time processing, or a series of back-and-forth operations. The result is the edge computing we know today.
So what is edge computing, really? A simple working definition: computing done at or near the source of the data, instead of relying on the cloud in one of a dozen distant data centers to do all the work. As edge computing grows wide and deep, here are seven trends we are witnessing.
1. Use cases inch closer to reality
With most of the technical kinks sorted out, edge computing use cases are now taking shape beyond the pilot or proof-of-concept stage. Examples include the following (a minimal sketch of the first appears after the list):
- Predictive maintenance using industrial IoT solutions in the manufacturing industry
- Connected ambulances, enabling live streaming of processed patient data
- Haptic-enabled diagnostic tools for remote specialist diagnosis in the healthcare industry
- Optimization of production line performance in the manufacturing industry
- Connected driving experiences in the automotive industry
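To make the predictive-maintenance case concrete, here is a minimal sketch of what the edge-side logic can look like. It assumes a hypothetical 1 Hz vibration sensor and a simple rolling z-score check; real deployments would use a trained model, but the shape is the same: analyze locally, send only alerts upstream.

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical edge-side anomaly check for predictive maintenance:
# keep a rolling window of vibration readings on the device and
# raise a local alert when a reading drifts too far from the norm.
WINDOW = 120           # last 120 samples (assumed 1 Hz sensor)
THRESHOLD_SIGMA = 3.0  # alert when a reading is > 3 standard deviations out

window = deque(maxlen=WINDOW)

def check_reading(vibration_mm_s: float) -> bool:
    """Return True if this reading looks anomalous and should trigger
    a maintenance alert; only alerts (not raw data) go to the cloud."""
    anomalous = False
    if len(window) == WINDOW:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(vibration_mm_s - mu) > THRESHOLD_SIGMA * sigma:
            anomalous = True
    window.append(vibration_mm_s)
    return anomalous
```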
2. Architecture starts to mimic cloud-native paradigms
Early edge computing applications were traditional, monolithic local applications: difficult to update, test, or upgrade. With greater maturity, edge architecture now mimics cloud-native patterns, giving the newer breed of edge applications the benefits of the cloud combined with the speed of the edge. Edge applications are thus designed and built as an "edge-cloud" that manages and orchestrates workloads across scalable physical infrastructure to ensure business continuity.
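One way to picture the "edge-cloud" pattern is a reconciliation loop, familiar from cloud-native orchestrators: the edge node pulls its desired workload configuration from a cloud control plane when connectivity allows, and keeps running from a locally cached copy when it does not. The endpoint URL and cache path below are illustrative assumptions, not a specific product's API.

```python
import json
import urllib.request

# Sketch of an edge-cloud reconciliation step: fetch desired state from a
# (hypothetical) cloud control plane, caching it so the site keeps operating
# on the last known configuration through a network outage.
CONTROL_PLANE = "https://control.example.com/api/v1/desired-state"  # assumed endpoint
CACHE_PATH = "desired-state.json"                                   # assumed local cache

def fetch_desired_state() -> dict:
    try:
        with urllib.request.urlopen(CONTROL_PLANE, timeout=5) as resp:
            state = json.load(resp)
        with open(CACHE_PATH, "w") as f:  # cache for offline operation
            json.dump(state, f)
        return state
    except OSError:
        # Degrade gracefully: reuse the last configuration we saw.
        # (First-ever run with no connectivity would still fail here.)
        with open(CACHE_PATH) as f:
            return json.load(f)
```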
3. Melding of 5G and Wi-Fi 6 with edge computing
5G needs mobile edge computing for two key reasons. The first is to meet the 5G specification's latency target of 1 ms on the network. The second relates to the implementation path operators are taking: the approach varies by operator and region, so adoption will be gradual until "full 5G" is achieved. Melding edge computing with 4G can help realize several 5G use cases in the meantime. One example is edge computing-enabled "5G-like experiences" for maintenance field teams using AR headsets, deployed first in regions without 5G coverage and then scaled out. A quick latency calculation (below) shows why the 1 ms target forces computing toward the edge.
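As a rough illustration, the sketch below estimates round-trip propagation delay over fiber alone, ignoring radio, queuing, and processing time, and assuming signals travel through fiber at roughly 2×10⁸ m/s. Even this best case shows a distant cloud region cannot meet a 1 ms budget, while a metro edge site can.

```python
# Back-of-the-envelope check on why 1 ms targets push computing to the edge:
# round-trip propagation delay alone, assuming ~2e8 m/s signal speed in fiber.
SPEED_IN_FIBER_M_S = 2.0e8

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over the given distance."""
    return 2 * (distance_km * 1000) / SPEED_IN_FIBER_M_S * 1000

print(round_trip_ms(1000))  # distant cloud region: ~10 ms, already over budget
print(round_trip_ms(10))    # metro edge site: ~0.1 ms, leaves room to compute
```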
Wi-Fi 6 is also fast becoming a complementary technology to 5G. As a high-density local area service, Wi-Fi 6 will be the technology of choice for enterprises in dense indoor environments. A Wi-Fi 6 LAN will appear to the 5G network as just another node, abstracting the transition away from the user.
4. Acceleration in actual IoT data usage and processing
The early hype around IoT did not last because the cost of transmitting and storing all the data in the cloud or data centers outweighed the touted benefits. With edge computing becoming more mainstream and reliable, those costs can be cut many times over by processing and filtering data at the source, and machine learning and AI models can start identifying the data patterns that have business impact.
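A simple illustration of how the edge cuts those costs, assuming a hypothetical 1 Hz sensor: instead of streaming every raw sample to the cloud, the device summarizes each minute locally and transmits only the compact record.

```python
# Illustrative edge-side aggregation: one small summary record per minute
# instead of a continuous stream of raw samples.
def summarize(samples: list[float]) -> dict:
    """Collapse a batch of raw readings (e.g., 60 samples at an assumed
    1 Hz rate) into a compact record for upstream ML pipelines."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

raw = [20.1, 20.3, 19.8, 20.0, 20.2]  # stand-in for a minute of readings
print(summarize(raw))                 # one small record goes upstream
```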
5. IT and OT convergence
Traditional industries such as manufacturing, transportation, and oil and gas have long had separate organizations managing software systems (IT) and industrial systems (OT). As these enterprises modernize their business, going digital and applying new technology to critical use cases such as predictive maintenance and shop-floor optimization, the two teams are becoming more integrated and collaborative. This integration is happening with edge computing as the common ground.
6. Impact of streaming media
Globally, Cisco forecasts that by 2022, video traffic will account for 82% of all business and consumer IP traffic. Simultaneously, global VR/AR traffic will grow twelve-fold between 2017 and 2022. This means the internet has to be re-architected to enhance the current CDNs and mitigate bottlenecks. Edge data centers will have a large role to play here.
7. The lesser-known rule of three: autonomy, compliance, and security
While bandwidth and latency are front and center in edge computing, the less-discussed aspects of autonomy, compliance, and security are also key considerations.
Autonomy in edge computing means managing the scale, variability, and rate of change of edge devices and environments without constant human intervention. Compliance means adhering to the regional laws and policies that govern the transfer of data. Security means ensuring a common minimum security baseline on every edge device, so that no single device becomes the weakest link.
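As a hedged sketch of the compliance piece, an edge device can gate every outbound transfer against a data-residency policy before anything leaves the site. The region names, data classes, and policy table below are illustrative assumptions, not a real regulation's taxonomy.

```python
# Illustrative data-residency gate at the edge: a record may leave the device
# only if its destination region is whitelisted for that class of data.
RESIDENCY_POLICY = {
    "eu-patient-data": {"eu-west", "eu-central"},            # must stay in-region
    "telemetry": {"eu-west", "eu-central", "us-east"},       # may travel further
}

def may_transfer(data_class: str, destination_region: str) -> bool:
    """Allow a transfer only if the destination is whitelisted for this
    class of data; unknown classes are denied by default."""
    return destination_region in RESIDENCY_POLICY.get(data_class, set())

assert may_transfer("telemetry", "us-east")
assert not may_transfer("eu-patient-data", "us-east")  # blocked at the edge
```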
Conclusion
Most enterprise leaders and decision-makers view the key factors driving edge computing adoption as:
- The flexibility to meet present and future AI demands
- The avoidance of network latency
- The promise of complex processing outside the cloud
It is clear that in an increasingly interconnected world, the impact of enterprise solutions and technology will become equally pervasive in the factory and in the home.