Use Case
4:30 PM - 5:10 PM PST, October 29
Learnings from running high-volume data streaming with Kafka & Flink
The integration of Apache Flink with Apache Kafka has shaped a dynamic ecosystem for stream processing and data pipeline management. Together they deliver the high throughput and low latency essential for real-time analytics in demanding scenarios. Kafka's efficient data ingestion, combined with Flink's event-time processing and robust state management, forms an exceptional toolkit for contemporary data architectures. Their widespread adoption across industries such as finance, e-commerce, and IoT is a testament to their reliability and scalability in handling extensive data streams.
In this session, we will delve into lessons learned from running Apache Flink with Apache Kafka at scale. We will cover the robust monitoring and proactive alerting needed for smooth operation and fast issue resolution, and share best practices for managing large-scale data ingestion, including optimizing Kafka's partitioning strategies and tuning Flink's state management for better performance. Attendees will leave with a practical understanding of how to run these two leading technologies efficiently in a high-volume data environment.
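One partitioning concern the abstract alludes to can be sketched in a few lines: keyed records land on a partition determined by a hash of the key, so skewed key distributions concentrate load on a few partitions. The sketch below is a simplified illustration, not Kafka's actual implementation (Kafka's default partitioner uses murmur2 hashing); md5 stands in purely for demonstration, and the key names and partition count are made up.

```python
import hashlib
from collections import Counter

def partition_for(key: str, num_partitions: int) -> int:
    # Stand-in for Kafka's default key-based partitioner, which hashes
    # the key bytes (murmur2 in Kafka); md5 here is for illustration only.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Many distinct keys spread load roughly evenly across partitions...
even_keys = [f"user-{i}" for i in range(10_000)]
even_load = Counter(partition_for(k, 8) for k in even_keys)

# ...while a few hot keys concentrate all traffic on a few partitions,
# creating the hotspots that large-scale ingestion tuning must avoid.
hot_keys = ["tenant-A"] * 9_000 + ["tenant-B"] * 1_000
hot_load = Counter(partition_for(k, 8) for k in hot_keys)

print(sorted(even_load.values()))  # roughly balanced per-partition counts
print(sorted(hot_load.values()))   # at most two partitions carry all traffic
```

In practice, mitigations include choosing higher-cardinality keys, salting hot keys, or increasing the partition count before the topic accumulates skewed traffic.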
Speaker

Amrit Sarkar
Engineer, Salesforce