Streaming is the practice of sending and receiving data record by record, as a continuous flow, rather than accumulating it in storage first. This enables real-time processing of data as it is generated, instead of landing it somewhere and letting a separate process read it later.
Technologies such as Apache Kafka, Apache Flink, and cloud-based streaming services are widely used for stream processing.
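To make the record-by-record idea concrete, here is a minimal sketch in plain Python: a generator stands in for an event source (such as a Kafka topic), and the consumer handles each event the moment it arrives instead of collecting the full dataset first. The sensor readings and the alert threshold are hypothetical, chosen purely for illustration.

```python
import time
from typing import Iterator

def sensor_stream(n_events: int) -> Iterator[dict]:
    """Simulated unbounded source: yields one reading at a time,
    standing in for a Kafka topic or similar event stream."""
    for i in range(n_events):
        yield {"sensor_id": i % 3, "value": 20.0 + i, "ts": time.time()}

def process(stream: Iterator[dict]) -> list[str]:
    """Consume each event as it arrives (streaming), rather than
    loading everything into memory first (batch)."""
    alerts = []
    for event in stream:
        if event["value"] > 25.0:  # hypothetical alerting rule
            alerts.append(f"sensor {event['sensor_id']}: {event['value']}")
    return alerts

alerts = process(sensor_stream(10))
```

Because `process` pulls events lazily from the generator, each reading is examined as soon as it is produced; the same loop would work unchanged against a genuinely unbounded source.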
[Diagram: example of a streaming data architecture]
There has been a shift toward processing data as soon as it is generated rather than running batch jobs later to process it in bulk. This shift has been driven by the growing popularity and maturity of streaming technologies, which allow large datasets to flow through pipelines with complex logic, at low latency, on distributed architectures.
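One common piece of that pipeline logic is windowed aggregation: grouping an unbounded event flow into fixed time windows and computing per-window results incrementally, the kind of operation engines like Flink run in a distributed fashion. The sketch below is a simplified, single-process illustration of a tumbling (non-overlapping) window count; the event tuples and keys are made up for the example.

```python
from collections import defaultdict
from typing import Iterable

def tumbling_window_counts(events: Iterable[tuple[float, str]],
                           window_size: float) -> dict[int, dict[str, int]]:
    """Assign each (timestamp, key) event to a fixed, non-overlapping
    time window and count occurrences per key within each window."""
    windows: dict[int, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Integer division maps a timestamp to its window index.
        windows[int(ts // window_size)][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0.5, "click"), (1.2, "view"), (1.8, "click"), (2.1, "click")]
result = tumbling_window_counts(events, window_size=1.0)
```

Because each event updates only its own window's counters, results are available as soon as a window closes, in contrast to a batch job that would wait for the whole dataset.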
Read more about streaming data architecture here: https://www.upsolver.com/blog/streaming-data-architecture-key-components