Jay Kreps – Apache Kafka and the Rise of The Stream Data Platform
What happens if you take everything that is happening in your company — every click, every database change, every application log — and make it all available as a real-time stream of well-structured data?
Jay will discuss the experience at LinkedIn and elsewhere of moving from batch-oriented ETL to real-time streams using Apache Kafka. He’ll talk about how the design and implementation of Kafka were driven by the goal of acting as a real-time platform for event data. Jay will cover some of the challenges of scaling Kafka to hundreds of billions of events per day at LinkedIn, supporting thousands of engineers, applications, and data systems in a self-service fashion.
He’ll describe how real-time streams can become the source of ETL into Hadoop or a relational data warehouse, and how real-time data can supplement the role of batch-oriented analytics in Hadoop or a traditional data warehouse.
Jay will also describe how applications and stream processing systems such as Storm, Spark, or Samza can make use of these feeds for sophisticated real-time data processing as events occur.
About Jay Kreps
Jay Kreps is the CEO of Confluent, a company focused on building a real-time stream platform around Apache Kafka. Previously, he was one of the primary architects at LinkedIn, where he focused on data infrastructure and data-driven products. He was among the original authors of a number of open source projects in the big data space, including Voldemort, Kafka, and Samza.
Can't join us in person? This event will be streamed at 7pm Eastern.
Monday, Sep 28, 2015
6:30-7:00 pm: Registration
7:00-8:00 pm: Presentation
8:00-9:00 pm: Social Hour
Code as Craft participants are expected to abide by our Code of Conduct.