In part 1 of this blog series, we looked at the data and analytics evolution across data platforms, data processing technologies, and data architecture. Here in part 2, we'll look at how the data and analytics space has evolved across application development, data formats, and data storage.
Data Application Development Evolution
Programming-based → Scripting → SQL-like → Low/No-Code UI
Initially, data engineers developed most data applications in programming languages such as Java, because early big data ecosystem projects like Apache Hadoop exposed interfaces for creating and deploying applications primarily in Java or Scala.
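To make this concrete, below is a minimal word-count sketch in the Hadoop Streaming style rather than the native Java Mapper/Reducer API the original jobs used; the file and function names are hypothetical, but the map-then-reduce structure mirrors what those Java programs implemented.

```python
#!/usr/bin/env python3
# Word-count sketch in the Hadoop Streaming style: the mapper reads raw lines
# from stdin and emits "word<TAB>1"; the reducer receives the mapper output
# sorted by key and sums the counts per word. (Illustrative only; the original
# Hadoop jobs described above implemented Mapper/Reducer classes in Java.)
import sys


def mapper() -> None:
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")


def reducer() -> None:
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")


if __name__ == "__main__":
    # Run as `python wordcount.py map` or `python wordcount.py reduce`,
    # wired into hadoop-streaming as the -mapper and -reducer commands.
    mapper() if sys.argv[1] == "map" else reducer()
```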
Soon after, custom scripting languages such as Apache Pig for Hadoop and Scalding for Cascading let data engineers and analysts develop jobs in a more user-friendly way, without writing programs in the underlying language.
Due to the widespread use of SQL amongst the data analyst and data scientist communities, SQL and SQL-like frameworks such as Apache Hive for Hadoop, CQL for Cassandra, and Apache Phoenix for HBase became prominent and continue to be widely used by data engineers and data analysts alike.
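The sketch below shows what this SQL-centric style of development looks like, assuming a PySpark environment with Hive support enabled; the database, table, and column names are hypothetical, and a very similar query could run on Apache Hive itself.

```python
# SQL-style data engineering sketch, assuming PySpark with Hive support.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("sql-style-etl")
    .enableHiveSupport()   # lets spark.sql() resolve tables in the Hive metastore
    .getOrCreate()
)

# Hypothetical tables: sales.orders as the source, sales.daily_revenue as the target.
daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM sales.orders
    WHERE order_date >= '2024-01-01'
    GROUP BY order_date
""")

daily_revenue.write.mode("overwrite").saveAsTable("sales.daily_revenue")
```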
Currently, with a shortage of data engineers and analysts, enterprises are increasingly looking at user interface-based development that can reduce implementation complexity and improve productivity. The trend for the future is therefore to move towards low-code or no-code, UI-based tools such as AWS Glue, Azure Data Factory, Prophecy.ai, and the GlobalLogic Data Platform, which minimize the learning curve for data engineers and accelerate development for enterprises.
Data Formats Evolution
Text / Binary Formats → Custom Formats → Columnar Formats → In-Memory Columnar & High-Performance Formats
In the beginning, analysts stored most of the data in the Hadoop Distributed File System (HDFS) as text files or in binary formats like SequenceFile or RCFile. While formats such as plain text and JSON are human-readable, they consume a lot of storage space and perform poorly at large data volumes.
Subsequently, engineers developed open-source data serialization formats like Apache Avro and Google Protocol Buffers (Protobuf) to serialize structured data. They provide rich data structures in a compact, fast binary encoding, and these formats continue to be used frequently for storing data.
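As a small illustration, here is a hedged sketch of writing and reading Avro data with the fastavro library; the schema and records are hypothetical. Avro embeds the schema alongside the data, which keeps the files compact and self-describing.

```python
# Minimal Avro serialization sketch using fastavro; schema and records are hypothetical.
from fastavro import writer, reader, parse_schema

schema = parse_schema({
    "type": "record",
    "name": "SensorReading",
    "fields": [
        {"name": "device_id", "type": "string"},
        {"name": "temperature", "type": "float"},
        {"name": "ts", "type": "long"},
    ],
})

records = [
    {"device_id": "dev-001", "temperature": 21.5, "ts": 1700000000},
    {"device_id": "dev-002", "temperature": 19.8, "ts": 1700000005},
]

with open("readings.avro", "wb") as out:
    writer(out, schema, records)        # compact binary encoding on disk

with open("readings.avro", "rb") as fo:
    for rec in reader(fo):              # schema is read back from the file itself
        print(rec["device_id"], rec["temperature"])
```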
Engineers then developed columnar formats like Apache ORC and Apache Parquet, along with formats built on them such as Delta and Apache Hudi, which support better data compression and schema evolution handling. Formats like ORC, Delta, and Hudi can also support ACID transactions to handle data updates and change streams.
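The sketch below writes and reads a Parquet file with PyArrow; the table and column names are hypothetical. Column-wise storage plus per-column compression and column pruning on read are what give these formats their space and scan-performance advantages.

```python
# Columnar storage sketch with PyArrow and Parquet; data is hypothetical.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "order_id": [1, 2, 3],
    "country": ["DE", "US", "IN"],
    "amount": [120.0, 75.5, 230.1],
})

# Each column is encoded and compressed independently.
pq.write_table(table, "orders.parquet", compression="snappy")

# Column pruning: only the requested columns are read from disk.
subset = pq.read_table("orders.parquet", columns=["country", "amount"])
print(subset.to_pydict())
```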
Columnar data formats and the storage systems built on them are already the most widely used across enterprises. The trend for the future will be to use in-memory columnar formats like Apache Arrow, or high-performance formats like Apache Iceberg and Apache CarbonData, which provide efficient data compression and encoding schemes with enhanced performance for handling complex data in bulk. Internally, formats such as Iceberg still use ORC or Parquet files to store the data, keeping them compatible with data that is already stored.
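For a sense of the in-memory columnar model, here is a hedged PyArrow sketch with hypothetical data: the filter and aggregation run as vectorized operations directly on the Arrow columnar representation rather than as row-by-row Python loops.

```python
# In-memory columnar processing sketch with Apache Arrow (PyArrow); data is hypothetical.
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({
    "country": ["DE", "US", "IN", "US", "DE"],
    "amount": [120.0, 75.5, 230.1, 99.9, 45.0],
})

# Vectorized filter: keep only rows where amount > 100.
large_orders = table.filter(pc.greater(table["amount"], 100.0))

# Vectorized aggregate over the filtered column.
total = pc.sum(large_orders["amount"]).as_py()
print(large_orders.num_rows, total)
```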
Data Storage Evolution
HDFS → Hive → NoSQL / NewSQL → Cloud Data Warehouses + Blob Storage
HDFS was the initial distributed, file-based storage system that allowed engineers to store large amounts of data on commodity hardware. For example, engineers ran MapReduce programs directly on files stored in HDFS.
The Apache Hive and HBase frameworks followed, providing a table-like view of the underlying data and allowing developers to run SQL-like queries against it.
Soon after, several NoSQL databases were developed with different data models, such as wide-column, key-value, document, and graph, to support specific use cases. Popular open-source NoSQL databases include Apache Cassandra, MongoDB, Apache CouchDB, Neo4j, and Memcached, while Amazon DynamoDB, Azure Cosmos DB, and Google Cloud Bigtable are among the commercial offerings.
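As one example of the document-store model, here is a hedged sketch using MongoDB's pymongo driver against an assumed local instance; the database, collection, and fields are hypothetical. The other NoSQL categories (wide-column, key-value, graph) expose similarly specialized APIs.

```python
# Document-store sketch with pymongo, assuming a local MongoDB; names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
catalog = client["retail"]["products"]

# Documents are schemaless, JSON-like records, so nested attributes are natural.
catalog.insert_one({
    "sku": "SKU-1001",
    "name": "Espresso Machine",
    "attributes": {"color": "black", "pressure_bar": 15},
    "tags": ["kitchen", "coffee"],
})

# Query by a nested attribute without any upfront schema definition.
match = catalog.find_one({"attributes.color": "black"})
print(match["name"] if match else "not found")
```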
During this period, NewSQL databases emerged, blending traditional RDBMS capabilities with NoSQL-style distribution: they seek to provide the scalability of NoSQL systems for online transaction processing (OLTP) workloads while maintaining ACID guarantees. NewSQL databases include Amazon Aurora, Google Cloud Spanner, CockroachDB, and YugabyteDB, among others.
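To illustrate the ACID side of that promise, here is a hedged sketch of a transaction run through a standard PostgreSQL driver; CockroachDB speaks the PostgreSQL wire protocol, so psycopg2 works against it, and the connection details and table are hypothetical.

```python
# ACID transaction sketch against a PostgreSQL-compatible NewSQL database
# (e.g., CockroachDB); connection details and table names are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=26257, dbname="bank", user="app", password="secret"
)

# Both updates commit together or not at all, even though the rows may live
# on different nodes of a distributed cluster.
with conn:
    with conn.cursor() as cur:
        cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
        cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))

conn.close()
```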
Most cloud object storage is HDFS-compatible, and given its serverless nature, enterprises are increasingly using these services as their blob storage systems. The trend for the near future will therefore be to use cloud blob storage such as Amazon S3, Azure Blob Storage/ADLS, and Google Cloud Storage as the landing zone for ingested data. The data is then processed, and the aggregated data is persisted in cloud data warehouses such as Amazon Redshift, Azure Synapse SQL Data Warehouse, Google Cloud BigQuery, Snowflake, or Databricks Delta Lake.
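Below is a hedged sketch of the landing-zone step using boto3 against Amazon S3; the bucket, prefixes, and file names are hypothetical, and the same pattern applies with the Azure Blob Storage/ADLS or Google Cloud Storage SDKs. Downstream jobs or a warehouse load (for example a COPY or external-table load) would then pick the files up by prefix.

```python
# Landing-zone sketch with boto3 and Amazon S3; bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

# A raw extract lands in a date-partitioned prefix of the landing zone.
s3.upload_file(
    Filename="orders_2024-05-01.parquet",
    Bucket="acme-data-landing",
    Key="landing/orders/dt=2024-05-01/orders.parquet",
)

# Downstream processing discovers newly landed files by prefix.
resp = s3.list_objects_v2(Bucket="acme-data-landing", Prefix="landing/orders/dt=2024-05-01/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```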
Engineers will continue to use NoSQL databases for specific use cases where applicable.
This concludes the second part of this blog series. We’ll continue to explore the evolution of the data and analytics space in subsequent blog posts in this series in the coming months.