Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
The Q3 Cloud Bundle Launch comes to you from Current 2024, where data streaming industry experts have come together to show you why data streaming is critical today, especially in the age of AI, and how it will become even more important in shaping tomorrow’s businesses...
If you are a developer looking for an easier way to test your apps on topics with schemas, this is for you! Now you can easily create a message with a topic schema directly from the Confluent Cloud Console, with built-in validation and error checking.
The beauty of Kafka as a technology is that it can do a lot with little effort on your part. In effect, it’s a black box. But what if you need to see into the black box to debug something? This post shows what the producer does behind the scenes to help prepare your raw event data for the broker.
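To make that concrete before you click through, here is a minimal Kotlin sketch (the broker address, topic name, and settings are illustrative, not taken from the post) of the work the producer quietly does on every send: serializing keys and values to bytes, picking a partition from the key, and batching records before they reach the broker.

```kotlin
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

fun main() {
    val props = Properties().apply {
        put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        // The producer serializes keys and values to bytes before anything
        // else happens; the broker only ever sees byte arrays.
        put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
        put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
        // Records accumulate into per-partition batches before being sent.
        put(ProducerConfig.LINGER_MS_CONFIG, "10")
        put(ProducerConfig.BATCH_SIZE_CONFIG, "16384")
    }

    KafkaProducer<String, String>(props).use { producer ->
        // With no explicit partition, the default partitioner hashes the key.
        val record = ProducerRecord("events", "user-42", """{"action":"click"}""")
        producer.send(record) { metadata, exception ->
            if (exception != null) exception.printStackTrace()
            else println("Wrote to ${metadata.topic()}-${metadata.partition()} @ offset ${metadata.offset()}")
        }
    }
}
```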
With AI model inference in Flink SQL, Confluent allows you to simplify the development and deployment of RAG-enabled GenAI applications by providing a unified platform for both data processing and AI tasks. Learn how you can use it to build a RAG-enabled Q&A chatbot using real-time airline data.
62% of Confluent Cloud clusters run on AWS. Meanwhile, hundreds of thousands of customers use DynamoDB. This blog explains how the DynamoDB connector helps customers integrate the two platforms.
Since launching our first cloud connector in 2019, Confluent’s fully managed connectors have handled hundreds of petabytes of data and expanded to include over 80 fully managed connectors, custom connectors, and private networking. Discover popular connectors, SMTs, and use cases on Confluent Cloud...
Been searching far and wide for an example of Spring Boot with Kotlin integrated with Apache Kafka®? You’ve found it. But not just an example with unstructured data or no schema management. Not here! We’re going all the way with Stream Governance in Confluent Cloud. Let’s get into it.
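For a flavor of the pattern before diving in, here is a hedged Kotlin sketch, not the post's actual code: it assumes Spring Kafka on the classpath, Confluent's JSON Schema serializer configured as the value serializer in application.yml, and a hypothetical `orders` topic.

```kotlin
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.stereotype.Service

// A plain Kotlin data class. With Confluent's KafkaJsonSchemaSerializer set as
// the value serializer, its JSON Schema is registered with Schema Registry on
// first send and every record is validated against it (Stream Governance).
data class OrderEvent(val orderId: String, val amountCents: Long)

@Service
class OrderEventPublisher(private val template: KafkaTemplate<String, OrderEvent>) {

    // The key drives partitioning; the value travels as schema-checked JSON.
    fun publish(event: OrderEvent) {
        template.send("orders", event.orderId, event)
    }
}
```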
Skai completely revamped its interactive ad-campaign dashboard by adding Apache Kafka and an in-memory database, eventually moving the solution to Confluent Cloud. Once on Confluent Cloud, they devised an ingenious architecture for reducing the number of topics they needed.
We are excited to announce the release of a new Confluent Cloud Homepage UI, inspired by many conversations and feature requests from our customer and field teams. In the past, many users bypassed the Homepage as just another click in the way of what they were trying to accomplish. This redesign...
Learn how Confluent Cloud and BigQuery Continuous Queries work together to enable real-time data processing, including the benefits of the integration and a step-by-step guide to getting data flowing between BigQuery Continuous Queries and Confluent Cloud...
The Apache Flink® community released Apache Flink 1.20 this week. In this blog post, we highlight some of the most interesting additions and improvements.
This blog announces the general availability of Confluent Platform 7.7 and its latest key features: enhanced security with OAuth support, Confluent Platform for Apache Flink® (limited availability), a new connector, and more.
We are proud to announce the release of Apache Kafka 3.8.0. This release contains many new features and improvements. This blog post highlights some of the more prominent features. For a full list of changes, be sure to check the release notes.
Confluent Cloud for Apache Flink® supports AI model inference and enables the use of models as resources in Flink SQL, just like tables and functions. You can use a SQL statement to create a model resource and invoke it for inference in streaming queries.
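As a rough sketch of the shape this takes (the model name, columns, source table, and WITH option keys below are illustrative assumptions, not the exact syntax from the post):

```sql
-- Register a remote model as a first-class Flink SQL resource
-- (the provider/connection option keys are illustrative).
CREATE MODEL `support_reply`
INPUT (prompt STRING)
OUTPUT (response STRING)
WITH (
  'provider' = 'openai',
  'task' = 'text_generation',
  'openai.connection' = 'my-openai-connection'
);

-- Invoke the model row by row inside a streaming query with ML_PREDICT,
-- assuming a hypothetical `questions` table with a `prompt` column.
SELECT q.prompt, p.response
FROM questions AS q, LATERAL TABLE(ML_PREDICT('support_reply', q.prompt)) AS p;
```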