Introduction to Apache Kafka

Bibliographic Details
Other Authors: Shapira, Gwen (author)
Format: Video
Language: English
Published: O'Reilly Media, Inc., 2015.
Edition: 1st edition
Subjects:
View in Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009630045706719
Description
Summary: Currently one of the most active projects in the Hadoop ecosystem, Apache Kafka is a distributed, real-time data system that works much like a pub/sub messaging service, but with higher throughput and built-in partitioning, replication, and fault tolerance. In this video course, host Gwen Shapira from Cloudera shows developers and administrators how to integrate Kafka into a data processing pipeline. You'll start with Kafka basics, walk through code examples of Kafka producers and consumers, and then learn how to integrate Kafka with Hadoop. By the end of this course, you'll be ready to use Kafka for large-scale log collection and stream processing.

- Learn Kafka's use cases and the problems that it solves
- Understand the basics, including logs, partitions, replicas, consumers, and producers
- Set up a Kafka cluster, starting with a single node before adding more
- Write producers and consumers, using the old and new APIs (a minimal producer sketch appears after this summary)
- Use the Flume log aggregation framework to integrate Kafka with Hadoop
- Configure Kafka for availability and consistency, and learn how to troubleshoot common issues
- Become familiar with the wider Kafka ecosystem

Gwen Shapira is a software engineer at Cloudera with 15 years of experience working with customers to design scalable data architectures. Having worked as a data warehouse DBA, ETL developer, and senior consultant, she specializes in building scalable data processing pipelines and integrating existing data systems with Hadoop. She is a committer on Apache Sqoop and an active contributor to Apache Kafka.
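The producer and consumer walkthroughs in the course use Kafka's Java client APIs. As a rough illustration of the kind of code involved, here is a minimal producer sketch against the newer Java client; the broker address (localhost:9092), topic name ("logs"), and record contents are illustrative assumptions, not examples taken from the course.

    // Minimal Kafka producer sketch using the newer Java client API.
    // Broker address, topic name, and record contents are placeholder
    // assumptions for illustration only.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Address of at least one broker in the Kafka cluster.
            props.put("bootstrap.servers", "localhost:9092");
            // Keys and values are sent as plain strings in this sketch.
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The key ("host-1") influences which partition the record lands in;
                // the value is the message payload.
                producer.send(new ProducerRecord<>("logs", "host-1", "application started"));
            }
        }
    }

A consumer would read from the same topic by subscribing to it and polling for records in a loop; the course covers both sides, along with Kafka cluster setup and the Flume-based Hadoop integration listed above.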
Notes: Title from resource description page (viewed April 22, 2015).
Date of publication taken from resource description page.
Physical Description: 1 online resource (1 video file, approximately 2 hr., 56 min.)