Google’s paper on the Google File System, published in October 2003, and Jeffrey Dean and Sanjay Ghemawat’s MapReduce paper in 2004 kicked off the era of big data technologies. Shortly thereafter, in January 2006, Doug Cutting and Mike Cafarella, then working with Yahoo! on a search engine called Apache Nutch (based on Apache Lucene indexing), created the new Hadoop subproject for running large-scale computations on large clusters of commodity hardware.
Since these early efforts, the big data technology landscape has been enriched with numerous innovations and evolved in leaps and bounds. In part one of this blog series, we’ll look at the evolution of the data and analytics space across its core aspects.
Data Platforms Evolution
OSS-Based → Packaged Distributions → Cloud-Native Stack → Cloud-Agnostic Stack → Composable Data Analytics
Soon after Hadoop was released as an Apache open-source project, it spawned several frameworks built on top of it to perform different types of processing. Apache Pig, Apache Hive, Apache HBase, Apache Giraph, and Apache Mahout were a few of the diverse frameworks that offered different ways to process data stored in a Hadoop cluster.
In addition, there were parallel stacks that replaced one or more frameworks with Kafka, Cassandra, or Cascading. The initial deployments required teams to build and deploy software on commodity hardware based on open-source Hadoop ecosystem components.
Hadoop was followed by commercial distributions from Cloudera, Hortonworks, MapR, Hadapt, DataStax, and Pivotal Greenplum, which packaged the required software in a user-friendly fashion and provided premium support. Then Amazon EMR released the first cloud-based Hadoop distribution.
Now there are cloud-specific Hadoop- and Spark-based distributions, such as Azure Synapse Analytics and GCP Dataproc, that come pre-installed with the required software and computing power.
From there, cloud-agnostic stacks such as Snowflake and Databricks evolved to work efficiently across different clouds. These platforms keep adding innovative features that address key performance and cost metrics and, as a result, have become quite popular, with many enterprises now moving towards such cloud-agnostic stacks.
Enterprises are increasingly looking at a multi-cloud strategy to avoid lock-in to a particular cloud and to use the best technology for each purpose. The trend for the future is a move towards composable data analytics, where companies build data platforms using components from two to three different technologies and cloud providers.
Data Architecture Evolution
Data Warehouse → Data Lakes / Lakehouse → Data Mesh / Data Fabric
For decades, data warehouses such as Teradata and Oracle have been used as central repositories for storing integrated data from one or more disparate sources. These data warehouses store current and historical data in one place and are used to create analytical reports for workers throughout the enterprise.
With the advent of big data frameworks like Hadoop, the concept of a data lake became incredibly popular. Typically, a data lake is a single store of data that includes raw copies of source system data, sensor data, social data, and transformed data used for reporting, visualization, advanced analytics, and machine learning tasks.
Data mesh is an organizational and architectural paradigm for managing big data that began to gain popularity in 2019. It is a process and architectural approach that delegates responsibility for specific data sets to the business teams that have the subject matter expertise to use the data properly. In a data mesh architecture, data domains take center stage, supported by a base data platform that individual domain teams can use to run their own data pipelines.
In this situation, data resides within the foundational data platform storage layer (data lake or data warehouse) in its original form. Individual teams will choose how to process this data and then serve the datasets they own in a domain-specific manner.
Data fabric is a design concept defined by Gartner that serves as an integrated layer (fabric) of data and connecting processes. A data fabric utilizes continuous analytics over existing, discoverable, and inferenced metadata assets to support the design, deployment, and utilization of integrated and reusable data across all environments, including hybrid and multi-cloud platforms.
Data mesh and data fabric are concepts that provide a unified view of the data distributed across the underlying data lakes and data warehouses. They, however, differ in how users access them. While data mesh is about people and processes, a data fabric is an architectural approach to tackle the complexity of the underlying data. Experts believe these concepts can be used simultaneously and will work on top of the existing data lakes and warehouses.
Data Processing Evolution
Batch Processing → Stream / Real-Time Processing → Lambda → Kappa / Delta
Initially, big data solutions were typically long-running batch jobs to filter, aggregate, and otherwise prepare the data for analysis. Usually, these jobs involved reading source files from scalable storage like the Hadoop Distributed File System (HDFS), processing them, and writing the output to new files in scalable storage. The key requirement for these batch processing engines is the ability to scale out computations in order to handle large volumes of data.
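As an illustration of this batch pattern, here is a minimal PySpark sketch: it reads raw files from HDFS, filters and aggregates them, and writes the result back to scalable storage. The paths and column names (event_type, amount, customer_id) are assumptions for the example, not from any specific system.

```python
# Minimal batch-job sketch: read raw files from scalable storage (HDFS),
# filter and aggregate, then write the result back as new files.
# Paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-batch-aggregation").getOrCreate()

# Read the raw source files for one day from HDFS.
events = spark.read.parquet("hdfs:///data/raw/events/2024-01-01/")

# Filter and aggregate to prepare the data for analysis.
daily_totals = (
    events
    .filter(F.col("event_type") == "purchase")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_spent"))
)

# Write the output to new files in scalable storage.
daily_totals.write.mode("overwrite").parquet("hdfs:///data/curated/daily_totals/")
```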
Stream, or real-time, processing deals with data streams captured in real time and processed with minimal latency to generate real-time (or near-real-time) reports or automated responses. Frameworks such as Apache Kafka, Apache Storm, Apache Spark Streaming, and Amazon Kinesis help enable this capability.
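A hedged sketch of the same aggregation done in near real time with Spark Structured Streaming over a Kafka topic; the broker address, topic name, and JSON schema below are assumptions for illustration only.

```python
# Stream-processing sketch: consume a Kafka topic, parse JSON events, and
# aggregate over one-minute windows with minimal latency.
# Broker, topic, and schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("purchase-stream").getOrCreate()

schema = (StructType()
          .add("customer_id", StringType())
          .add("amount", DoubleType())
          .add("event_time", TimestampType()))

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "purchases")
       .load())

# Parse the Kafka payload and compute windowed totals per customer.
windowed = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
            .select("e.*")
            .withWatermark("event_time", "5 minutes")
            .groupBy(F.window("event_time", "1 minute"), "customer_id")
            .agg(F.sum("amount").alias("spend")))

# Emit updated results continuously (console sink used only for the demo).
query = windowed.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```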
The next evolution was the Lambda architecture, a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch and stream-processing methods. This architecture attempts to balance latency, throughput, and fault tolerance by using batch processing to provide comprehensive and accurate views of batch data while simultaneously using real-time stream processing to provide views of online data.
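The serving idea behind Lambda can be shown with a toy Python sketch: a query is answered by merging a precomputed batch view with an incrementally updated real-time view. The dictionaries and names below are hypothetical stand-ins for the batch and speed-layer stores.

```python
# Toy Lambda-style serving layer: combine a batch view (recomputed over all
# historical data) with a speed-layer view (updated since the last batch run).
batch_view = {"customer_42": 1200.0}    # comprehensive but hours old
realtime_view = {"customer_42": 35.5}   # low-latency increments since the batch

def total_spend(customer_id: str) -> float:
    """Answer a query by merging the batch and real-time views."""
    return batch_view.get(customer_id, 0.0) + realtime_view.get(customer_id, 0.0)

print(total_spend("customer_42"))  # 1235.5
```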
Jay Kreps then proposed the Kappa architecture as an alternative to the Lambda architecture. It has the same primary goals as the Lambda architecture but with an important distinction: all data flows through a single path, using a stream processing system.
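In a Kappa setup, reprocessing is done by replaying the same stream through the same code rather than maintaining a separate batch layer. A hedged sketch of that idea, reusing Spark’s Kafka source; the broker, topic, and output paths are assumptions.

```python
# Kappa-style reprocessing sketch: rebuild a view by replaying the Kafka topic
# from the earliest offset through the same streaming logic used for live data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kappa-replay").getOrCreate()

replayed = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "purchases")
            .option("startingOffsets", "earliest")  # replay the full log
            .load())

# Output goes to a new location so the rebuilt view can replace the old one.
(replayed.writeStream
 .format("parquet")
 .option("path", "/data/views/purchases_v2")
 .option("checkpointLocation", "/tmp/checkpoints/purchases_v2")
 .start())
```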
Another alternative is the Delta architecture, which introduces a table storage layer so that both streaming and batch data are handled through a single code base. Databricks proposed this architecture, with Delta Lake at its center. Delta Lake is an open-source atomicity, consistency, isolation, durability (ACID) table storage layer over cloud object stores.
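A minimal sketch of that single-code-path idea with Delta Lake, assuming the delta-spark package is available on the cluster; the paths and checkpoint locations are illustrative.

```python
# Delta-style sketch: a streaming job writes into a Delta table that batch jobs
# can read back with ACID guarantees, sharing one storage layer and code path.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("delta-sink")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Demo streaming source; in practice this would be Kafka, files, etc.
events = spark.readStream.format("rate").load()

(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/tmp/checkpoints/events")
 .start("/tmp/delta/events"))

# Once data has been committed, the same table is readable as a batch DataFrame.
snapshot = spark.read.format("delta").load("/tmp/delta/events")
```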
The Lambda architecture and its variants, the Kappa and Delta architecture, will continue to be valuable architectures in the near future.
This concludes the first part of the blog series. We’ll continue to explore the evolution of the data and analytics space in subsequent blog posts in this series in the coming months.
Resources:
- The Google File System by Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, 2003.
- MapReduce: Simplified Data Processing on Large Clusters by Jeffrey Dean and Sanjay Ghemawat, 2004.
- Top Strategic Technology Trends for 2022: Data Fabric, Gartner, 2021.
- Questioning the Lambda Architecture by Jay Kreps, 2014.