
Apache Flink: The Next Distributed Data Processing Revolution?


Disclaimer: The results are valid only when network-attached storage is used in the computing cluster.

The amount of data has grown significantly over the past few years, and it is no longer feasible for a single machine to process it all. The need for distributed data processing frameworks is therefore growing. It all started back in 2011, when the first stable version of Apache Hadoop (1.0.0) was released. The Hadoop framework can store large amounts of data on a cluster through the Hadoop Distributed File System (HDFS), which is used at almost every company that has to store terabytes of data every day.

Then the next problem arose: how can companies process all that stored data? This is where distributed data processing frameworks come into play. Apache Spark was released in 2014 and now has a large community; almost every IT department has written at least a few lines of Apache Spark code. As companies gather more and more data, the demand for faster data processing frameworks keeps growing. Apache Flink (version 1.0 released in March 2016) is a new face in the field of distributed data processing and is one answer to the demand for …
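To give a concrete impression of what Flink code looks like, here is a minimal sketch of the canonical streaming word count written against Flink's DataStream API (Java, Flink 1.x style). The input strings and the job name are made-up sample values; in a real pipeline the stream would typically come from a source such as Kafka or a socket.

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class StreamingWordCount {

    public static void main(String[] args) throws Exception {
        // Entry point to the Flink runtime (local or cluster, depending on how the job is launched).
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Sample input; a production job would read from Kafka, a socket, HDFS, etc.
        DataStream<String> text = env.fromElements(
                "apache flink processes unbounded data streams",
                "flink treats batch processing as a special case of streaming");

        DataStream<Tuple2<String, Integer>> counts = text
                .flatMap(new Tokenizer()) // split lines into (word, 1) pairs
                .keyBy(0)                 // partition by the word field (positional style of Flink 1.x)
                .sum(1);                  // keep a running count per word

        counts.print();
        env.execute("Streaming WordCount");
    }

    // Splits each line into lowercase words and emits a count of 1 for each.
    public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    out.collect(new Tuple2<>(word, 1));
                }
            }
        }
    }
}

Because Flink's core abstraction is a stream rather than a finite dataset, this job keeps updating its counts as new records arrive, which is the main contrast with batch-first frameworks such as Hadoop MapReduce.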

Read More on Datafloq
