Evolution of Data & Analytics Technologies (Part 2)

Insight categories: Big Data & Analytics, Technology

In part 1 of this blog series, we looked at the data and analytics evolution across data platforms, data processing technologies, and data architecture. Here in part 2, we’ll take a look at the evolution of the data and analytics space across application development and storage aspects.

Data Application Development Evolution

Programming based → Scripting → SQL like → Low/No Code UI

Initially, data engineers developed most data applications in programming languages like Java, since early big data ecosystem projects such as Apache Hadoop exposed their interfaces for creating and deploying applications primarily through Java or Scala.
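As a flavor of what development looked like in that era, here is a minimal word-count job sketched for Hadoop Streaming, which lets the mapper and reducer be plain executables; the file names and invocation are illustrative, and the equivalent native Java MapReduce program required noticeably more boilerplate.

```python
#!/usr/bin/env python
# mapper.py -- emits (word, 1) pairs. Run via Hadoop Streaming, e.g.:
#   hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py \
#     -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python
# reducer.py -- sums counts per word. Hadoop sorts mapper output by key,
# so identical words arrive as consecutive lines.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```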

Soon after, custom scripting languages like Apache Pig for Hadoop and Scalding for Cascading let data engineers and analysts develop jobs in a more user-friendly way, without writing programs in the underlying language.

Due to the widespread use of SQL amongst the data analyst and data scientist communities, SQL and SQL-like frameworks such as Apache Hive for Hadoop, CQL for Cassandra, and Apache Phoenix for HBase became prominent and continue to be widely used by data engineers and data analysts alike. 
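As an illustration of this style, here is a minimal PySpark sketch that submits HiveQL against a Hive metastore; the `web_logs` table and its columns are hypothetical.

```python
from pyspark.sql import SparkSession

# Assumes a reachable Hive metastore; table and column names are hypothetical.
spark = (SparkSession.builder
         .appName("hive-sql-example")
         .enableHiveSupport()
         .getOrCreate())

# The same HiveQL an analyst would run interactively, submitted from code.
daily_visits = spark.sql("""
    SELECT visit_date, COUNT(*) AS visits
    FROM web_logs
    WHERE status_code = 200
    GROUP BY visit_date
    ORDER BY visit_date
""")
daily_visits.show()
```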

Currently, with a shortage of data engineers and analysts, enterprises are increasingly looking at user-interface-based development to reduce implementation complexity and improve productivity. The trend for the future is therefore a move toward low-code or no-code, user-interface-based tools like AWS Glue, Azure Data Factory, Prophecy.ai, and GlobalLogic Data Platform, which minimize the learning curve for data engineers and accelerate development for enterprises.

Data Formats Evolution

Text / Binary Formats → Custom Formats → Columnar Formats → In Memory Columnar & High Performance Formats

In the beginning, analysts stored most data in the Hadoop Distributed File System (HDFS) as text files or in binary formats like SequenceFile or RCFile. While text-based formats such as plain text and JSON are human-readable, they consume a lot of storage space and perform poorly at large data volumes.

Subsequently, engineers developed open-source data serialization formats like Apache Avro and Google Protobuf to serialize structured data. These formats provide rich data structures and a compact, fast binary encoding, and they continue to be used frequently for storing data.
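For example, here is a small sketch using the fastavro library to write and read Avro; the `UserEvent` schema and records are hypothetical.

```python
from fastavro import writer, reader

# A hypothetical schema for user events. Avro stores the schema alongside the
# data, which enables compact records and controlled schema evolution.
schema = {
    "type": "record",
    "name": "UserEvent",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "event", "type": "string"},
        {"name": "ts", "type": "long"},
    ],
}

records = [
    {"user_id": 42, "event": "login", "ts": 1700000000},
    {"user_id": 42, "event": "logout", "ts": 1700000600},
]

with open("events.avro", "wb") as out:
    writer(out, schema, records)

with open("events.avro", "rb") as src:
    for record in reader(src):
        print(record)
```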

Then engineers developed columnar formats like Apache ORC and Apache Parquet, along with table formats built on them such as Delta Lake and Apache Hudi, which support better data compression and schema evolution. Formats like ORC, Delta Lake, and Hudi can also support ACID transactions to handle data updates and change streams.
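The following pyarrow sketch shows the core benefit of a columnar file format like Parquet: each column is stored and compressed contiguously, so a reader can fetch only the columns a query needs. The table and column names are illustrative.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Build a small in-memory table; columns are hypothetical.
table = pa.table({
    "order_id": [1, 2, 3],
    "country": ["DE", "IN", "US"],
    "amount": [20.5, 13.0, 99.9],
})
pq.write_table(table, "orders.parquet", compression="snappy")

# Column pruning: read back just the column the query needs,
# without touching the rest of the file's data pages.
amounts = pq.read_table("orders.parquet", columns=["amount"])
print(amounts.to_pydict())
```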

Columnar data formats and storage systems are already the most widely used across enterprises. The trend for the future will be to use in-memory columnar formats like Apache Arrow, and high-performance formats like Apache Iceberg and Apache CarbonData, which provide efficient data compression and encoding schemes with enhanced performance for handling complex data in bulk. Internally, formats like Iceberg still store the data in ORC or Parquet files, which keeps them compatible with data already stored.
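As a taste of the in-memory side, here is a small pyarrow sketch: Arrow keeps tables in a columnar layout in memory, so vectorized kernels operate directly on the column buffers and tables can be exchanged between processes and languages without serialization overhead. The data and filter are illustrative.

```python
import pyarrow as pa
import pyarrow.compute as pc

# A hypothetical in-memory events table in Arrow's columnar layout.
events = pa.table({
    "user_id": [1, 1, 2, 3],
    "duration_ms": [120, 340, 80, 500],
})

# Vectorized compute kernels run directly over the columnar buffers.
long_sessions = events.filter(pc.greater(events["duration_ms"], 100))
print(long_sessions.num_rows, "rows over 100 ms")
print("total:", pc.sum(events["duration_ms"]).as_py(), "ms")
```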

Data Storage Evolution

HDFS → Hive → NoSQL / NewSQL → Cloud Data Warehouses + Blob Storage

HDFS was the initial distributed file-based storage system that allowed engineers to store large amounts of data on commodity hardware. For example, engineers ran MapReduce programs directly on files stored in HDFS.

The Apache Hive and Apache HBase frameworks followed, providing a table-like view of the underlying data and allowing developers to run SQL-like queries against it.

Soon after, several NoSQL databases were developed with different data models, such as wide-column, key-value, document, and graph stores, to support specific use cases. Popular NoSQL databases include Apache Cassandra, MongoDB, Apache CouchDB, Neo4j, and Memcached among open-source options, and Amazon DynamoDB, Azure Cosmos DB, and Google Cloud Bigtable among commercial offerings.
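As an example of one such model, here is a minimal sketch using the DataStax Python driver for Cassandra; the keyspace, table, and data are hypothetical and assumed to already exist.

```python
from cassandra.cluster import Cluster

# Assumes a local Cassandra node and a pre-created keyspace/table;
# all names here are hypothetical.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("shop")

# CQL looks like SQL, but the underlying model is a partitioned wide-column
# store: queries are efficient along the partition key (user_id here).
session.execute(
    "INSERT INTO orders_by_user (user_id, order_id, amount) VALUES (%s, %s, %s)",
    (42, 1001, 99.9),
)
rows = session.execute(
    "SELECT order_id, amount FROM orders_by_user WHERE user_id = %s", (42,)
)
for row in rows:
    print(row.order_id, row.amount)
```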

During this period, NewSQL databases also emerged, blending traditional RDBMS with NoSQL: they seek to provide the scalability of NoSQL systems for online transaction processing (OLTP) workloads while maintaining the ACID guarantees of a traditional relational database. NewSQL databases include Amazon Aurora, Google Cloud Spanner, CockroachDB, and YugabyteDB, among others.
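Because CockroachDB speaks the PostgreSQL wire protocol, a stock Postgres driver can demonstrate the NewSQL promise of distributed ACID transactions. A minimal sketch, assuming a running cluster and a hypothetical `accounts` table:

```python
import psycopg2

# CockroachDB is wire-compatible with PostgreSQL, so a standard Postgres
# driver works; the DSN and table below are hypothetical.
conn = psycopg2.connect("postgresql://app@localhost:26257/bank")

# A classic transfer: both updates commit atomically or not at all, even
# though the rows may live on different nodes of the cluster.
with conn:
    with conn.cursor() as cur:
        cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
        cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
conn.close()
```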

Most cloud blob storage is HDFS-compatible, and given its serverless nature, enterprises are increasingly adopting it as their primary storage layer. The trend for the near future will therefore be to use cloud blob storage like Amazon S3, Azure Blob Storage/ADLS, and Google Cloud Storage as the landing zone for ingesting data. The data will then be processed, and the aggregated data persisted in cloud data warehouses such as Amazon Redshift, Azure Synapse SQL Data Warehouse, Google Cloud BigQuery, Snowflake, or Databricks Delta Lake.
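A minimal sketch of that landing-zone pattern, using boto3 for the raw upload and a Redshift COPY statement for the warehouse load; the bucket, paths, and IAM role are hypothetical.

```python
import boto3

# Land a raw file in the S3 "landing zone"; bucket and key are hypothetical.
s3 = boto3.client("s3")
s3.upload_file("clickstream-2024-01-01.parquet",
               "my-landing-zone",
               "raw/clickstream/dt=2024-01-01/part-0.parquet")

# Downstream, the warehouse ingests the processed data in bulk, e.g. with a
# Redshift COPY statement (run via any SQL client; the IAM role is hypothetical).
copy_sql = """
COPY analytics.clickstream
FROM 's3://my-landing-zone/curated/clickstream/dt=2024-01-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
FORMAT AS PARQUET;
"""
```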

Engineers will continue to use the NoSQL databases for specific data use cases as applicable.

This concludes the second part of this blog series. We’ll continue to explore the evolution of the data and analytics space in subsequent blog posts in this series in the coming months. 

Author

Arun Viswanathan

Principal Architect
