
Content provided by Confluent, founded by the original creators of Apache Kafka®. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Confluent or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fr.player.fm/legal.

What is the Future of Streaming Data?

41:29

What’s the next big thing in the future of streaming data? In this episode, Greg DeMichillie (VP of Product and Solutions Marketing, Confluent) talks to Kris about the future of stream processing in environments where the value of data lies in an organization’s ability to intercept and interpret it.
Greg explains that organizations typically focus on the infrastructure containers themselves, not on the thousands of data connections that form within them. When they finally realize they have no way to manage the complexity of those connections, a new problem arises: how should they approach it? That’s where Confluent and Apache Kafka® come in: they offer a consistent way to organize this seemingly endless web of data, so organizations don’t face the daunting task of figuring out how to connect their shopping portals or jump through hoops trying different ETL tools on various systems.
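As a rough illustration of the pattern Greg describes, the sketch below uses the standard Apache Kafka Java producer to publish a single event to a named topic. The broker address, topic name, key, and payload are all hypothetical; the point is only that each system writes to (or reads from) a topic rather than being wired directly to every other system.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class InventoryEventPublisher {
    public static void main(String[] args) {
        // Illustrative connection settings; real deployments load these from config.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one event to a hypothetical "inventory-events" topic.
            // Any number of downstream systems can consume it independently,
            // which is what replaces the web of point-to-point integrations.
            producer.send(new ProducerRecord<>(
                    "inventory-events",
                    "sku-12345",
                    "{\"sku\":\"sku-12345\",\"quantity\":7}"));
            producer.flush();
        }
    }
}
```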
As more companies seek ways to manage this data, they are asking some basic questions:

  • How to do it?
  • Do best practices exist?
  • How can we get help?

The next question for companies that have already adopted Kafka is a bit more complex: “What about my partners?” For example, companies with inventory management systems use supply chain systems to track product creation and shipping. As a result, they need to decide which emails to update, whether they need to write custom REST APIs to sit in front of Kafka topics, and so on. Advanced use cases like this raise additional questions about data governance, security, data policy, and PII, forcing companies to think differently about data.
Greg predicts this is the next big frontier as more companies adopt Kafka internally. Because they will have to think less about where data is stored and more about how it moves, they will have to solve the problems that make managing all that data easier. If you’re a real-time data streaming enthusiast, Greg invites you to attend Kafka Summit (London) in May and Current (Austin, TX) for a deeper dive into Apache Kafka and related topics.
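To ground the “custom REST APIs to sit in front of Kafka topics” idea in code, here is a minimal, hypothetical sketch: a plain-JDK HTTP gateway that accepts partner POSTs and forwards each request body to an internal topic. The path, port, topic name, and broker address are assumptions for illustration only; a real partner-facing gateway would also need the authentication, validation, and schema governance that the data-policy and PII questions above point to.

```java
import com.sun.net.httpserver.HttpServer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class PartnerOrderGateway {
    public static void main(String[] args) throws Exception {
        // Illustrative producer configuration.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Partners POST JSON to /orders; the gateway relays each request body
        // to a hypothetical internal "partner-orders" topic.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            if (!"POST".equals(exchange.getRequestMethod())) {
                exchange.sendResponseHeaders(405, -1); // method not allowed, no body
                exchange.close();
                return;
            }
            String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            producer.send(new ProducerRecord<>("partner-orders", body));
            byte[] reply = "accepted\n".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(202, reply.length);
            exchange.getResponseBody().write(reply);
            exchange.close();
        });
        server.start();
    }
}
```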
EPISODE LINKS


Chapters

1. Intro (00:00:00)

2. How did Greg get started with event streaming? (00:07:11)

3. What is the value of data streaming in Apache Kafka? (00:13:22)

4. Event logs vs REST APIs (00:18:45)

5. What are the stages of Kafka adoption? (00:21:44)

6. What is the next big frontier in Kafka adoption? (00:25:41)

7. How do we get to the next stage of streaming data faster? (00:33:01)

8. It's a wrap! (00:39:56)

265 episodes
