Apache Kafka + Machine Learning => Confluent Blog Post and Github Project

I am happy that my first official Confluent blog post has been published and want to link to it from my blog:

How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka

The post explains in detail how you can leverage Apache Kafka and its Streams API to deploy analytic models in a lightweight but scalable, mission-critical streaming application.

Github Examples for Apache Kafka + Machine Learning

If you want to take a look directly at the source code, go to my Github project about Kafka + Machine Learning. It contains several examples of how to combine Kafka Streams with frameworks like TensorFlow, H2O, or DeepLearning4J.
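
To give a flavor of the approach, here is a minimal sketch of a Kafka Streams application that applies a pre-trained model to every incoming event. The class name, the topic names (input-events, predictions) and the score method are placeholders of my own choosing; the actual examples in the Github project load real TensorFlow, H2O, or DeepLearning4J models instead of the dummy inference shown here.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ModelScoringApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ml-model-scoring");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read raw events, score each one with the embedded model,
        // and write the predictions to an output topic.
        KStream<String, String> events = builder.stream("input-events");
        events
            .mapValues(ModelScoringApp::score)
            .to("predictions");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }

    // Placeholder for model inference: in the real examples this would call a
    // pre-trained TensorFlow, H2O, or DeepLearning4J model loaded once at startup.
    private static String score(String event) {
        return "prediction-for:" + event;
    }
}
```

Because the model is embedded directly in the Streams application, there is no remote model server to call; scaling the application simply means starting more instances of the same Java process.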
