Nothing static, all in motion.
You probably know by now: Spark is the most popular computing engine for big data, the most actively maintained, and the one with a proven track record of performance. It can run workloads up to 100 times faster than the old MapReduce paradigm, and it can easily be extended with machine learning, streaming capabilities, and much more.
In this Spark Streaming course, we'll take the natural next step: processing big data as it arrives.
What's in it for you:
- You'll learn how Spark Structured Streaming and “normal” Spark batch operations are similar, and how they differ (see the first sketch after this list)
- You'll work with a different set of streaming abstractions (DStreams) for low-level, high-control processing
- You’ll integrate Kafka, JDBC, Cassandra and Akka Streams (!) so that you can later integrate anything you like
- You'll work with powerful stateful APIs that few people know how to use properly (see the stateful sketch after this list)
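
To give you a taste of the first point, here is a minimal sketch of how a streaming read mirrors a batch read with the very same DataFrame API. The directory path and the "origin" column are illustrative assumptions, not data from the course:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object StreamingVsBatchSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-vs-batch-sketch")
      .master("local[2]")
      .getOrCreate()

    // Batch: read a static directory of JSON files once (path is illustrative)
    val batchDF = spark.read.json("src/main/resources/data/events")

    // Streaming: watch the same kind of directory and pick up files as they arrive.
    // The transformations below use the exact same DataFrame API as batch.
    val streamDF = spark.readStream
      .schema(batchDF.schema) // streaming file sources require an explicit schema
      .json("src/main/resources/data/events")

    streamDF
      .groupBy(col("origin")) // "origin" is an assumed column in the sample data
      .count()
      .writeStream
      .format("console")
      .outputMode("complete") // streaming aggregations need "complete" or "update"
      .start()
      .awaitTermination()
  }
}
```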
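And a hedged sketch of what "stateful" means in practice: keeping a running count per user across micro-batches with mapGroupsWithState. The socket source, the port, and the SessionEvent/UserCount shapes are placeholders made up for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

object StatefulSketch {
  case class SessionEvent(userId: String, action: String)
  case class UserCount(userId: String, totalEvents: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("stateful-sketch")
      .master("local[2]")
      .getOrCreate()

    import spark.implicits._

    // Read lines like "alice,click" from a local socket (e.g. started with `nc -lk 9999`)
    val events = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()
      .as[String]
      .map { line =>
        val parts = line.split(",")
        SessionEvent(parts(0), parts(1))
      }

    // Update the per-user count using the state carried over from previous micro-batches
    def updateCount(
        userId: String,
        newEvents: Iterator[SessionEvent],
        state: GroupState[Long]
    ): UserCount = {
      val previous = state.getOption.getOrElse(0L)
      val updated = previous + newEvents.size
      state.update(updated) // this state survives across micro-batches
      UserCount(userId, updated)
    }

    events
      .groupByKey(_.userId)
      .mapGroupsWithState(GroupStateTimeout.NoTimeout())(updateCount)
      .writeStream
      .format("console")
      .outputMode("update") // mapGroupsWithState supports the "update" output mode
      .start()
      .awaitTermination()
  }
}
```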
And some extra perks:
- You'll have access to the entire code I write on camera (2200+ LOC)
- You'll be invited to our private Slack room, where I'll share the latest updates, discounts, talks, conferences, and recruitment opportunities
- (soon) You'll have access to the takeaway slides
- (soon) You'll be able to download the videos for offline viewing