Fabric RTI 101: CDC vs CES Comparison

This diagram highlights the difference between Change Data Capture (CDC) and Change Event Streams (CES) — two ways of turning databases into real-time event sources.

CDC vs CES

On the left side, we see CDC. Here the focus is on row-level changes. Every time a row is inserted, updated, or deleted in a table, that operation is captured and streamed out as an event. Those events typically feed into real-time replication or analytics pipelines. In other words, CDC makes sure your downstream systems always have fresh copies of your data. It’s like having a live feed of every transaction at the cash register, which is why the diagram uses the analogy of cash register receipts: you get a record of each sale exactly as it happens.
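
To make that concrete, here is a minimal sketch of what a single row-level CDC event might look like once it is emitted. The field names (op, before, after, source) are illustrative, loosely modeled on common change-event envelopes such as Debezium’s, not a specific product’s schema.

```python
import json

# Illustrative shape of one row-level CDC event (field names are hypothetical,
# loosely inspired by Debezium-style change envelopes).
cdc_event = {
    "op": "u",                       # "c" = insert, "u" = update, "d" = delete
    "source": {"db": "sales", "table": "orders"},
    "before": {"order_id": 1001, "status": "pending"},
    "after":  {"order_id": 1001, "status": "shipped"},
    "ts_ms": 1760000000000,          # when the change was committed
}

def apply_change(event: dict) -> None:
    """Route a single CDC event to the matching downstream action."""
    op = event["op"]
    if op == "c":
        print("insert row:", json.dumps(event["after"]))
    elif op == "u":
        print("update row:", json.dumps(event["after"]))
    elif op == "d":
        print("delete row:", json.dumps(event["before"]))

apply_change(cdc_event)
```

The before/after pair is what lets a downstream system replicate the change without ever re-querying the source table.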

2026-02-20

Fabric RTI 101: Database CES Sources

When we talk about Change Event Streams, or CES, we’re looking at a different layer of database activity than CDC.

CDC — Change Data Capture — focuses on row-level changes: inserts, updates, and deletes in your transactional tables. That’s very useful for analytics and replication. But there’s a lot more going on inside a database than just row changes.

Database CES Sources

CES captures the broader set of events that affect the structure and governance of the database itself. This includes things like schema changes — if someone adds a new column, drops a table, or creates an index. It also includes permission updates — for example, when access rights are granted or revoked. And it can even capture configuration changes or other metadata-level modifications.
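As a rough illustration, the events below show what a schema change and a permission grant might look like as CES messages. The event types and field names are hypothetical placeholders, not a documented Fabric schema.

```python
# Illustrative (hypothetical) shapes for two CES events: a schema change and a
# permission grant. CES messages describe the database itself, not row data.
schema_change_event = {
    "event_type": "schema.column_added",
    "database": "sales",
    "table": "orders",
    "detail": {"column": "discount_pct", "data_type": "decimal(5,2)"},
    "changed_by": "dbo",
    "ts": "2026-02-18T09:15:00Z",
}

permission_event = {
    "event_type": "security.grant",
    "database": "sales",
    "detail": {"principal": "analyst_role", "permission": "SELECT", "object": "orders"},
    "changed_by": "dbo",
    "ts": "2026-02-18T09:16:30Z",
}

# A downstream governance consumer might simply filter by event_type prefix.
for event in (schema_change_event, permission_event):
    area = event["event_type"].split(".")[0]   # "schema" or "security"
    print(f"[{area}] {event['event_type']} on {event['database']}: {event['detail']}")
```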

2026-02-18

Fabric RTI 101: Database CDC Sources

Many organizations already have huge amounts of business-critical data sitting in relational databases — SQL Server, PostgreSQL, MySQL, and so on. But by default, these databases aren’t real-time data sources. Traditionally, if you wanted to know what changed, you’d run queries at intervals — maybe once an hour, or once a day — and look for differences. That approach works, but it introduces delay.

Databases

This is where Change Data Capture, or CDC, comes in. CDC is a technique that turns a traditional database into a real-time data source. Instead of polling for differences, CDC actually streams the changes themselves as they occur. Anytime a row is inserted, updated, or deleted, that operation is captured and emitted as an event.
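
The sketch below contrasts the two approaches. Both functions are stand-ins: fetch_rows_since represents a scheduled query against the source table, and change_feed represents whatever CDC stream your database or connector exposes; neither is a specific driver’s API.

```python
import time
from typing import Callable, Iterable

def handle(change: dict) -> None:
    """Placeholder for whatever the downstream system does with a change."""
    print("downstream update:", change)

def poll_for_changes(fetch_rows_since: Callable[[float], list],
                     last_seen_ts: float, interval_s: float) -> None:
    """Traditional approach: re-query on a schedule and diff against a watermark."""
    while True:
        for row in fetch_rows_since(last_seen_ts):
            handle(row)
            last_seen_ts = max(last_seen_ts, row["modified_at"])
        time.sleep(interval_s)        # changes sit unnoticed until the next poll

def consume_cdc(change_feed: Iterable[dict]) -> None:
    """CDC approach: the database emits each insert/update/delete as it commits."""
    for event in change_feed:         # blocks until the next change arrives
        handle(event)                 # freshness is bounded by commit latency, not a schedule
```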

2026-02-16

Fabric RTI 101: Message Queue Comparisons

The following diagram shows how different messaging technologies — Kafka, Event Hubs, and RabbitMQ — fit into a real-time pipeline.

Message Queue Comparisons

At the top, we have our event producers: applications generating user activity, IoT sensors sending telemetry, and financial systems pushing transactions. These are the raw sources of events.

Now, those events need to get transported reliably to downstream consumers. That’s where our three options come in:

  • Kafka, on the left, is like a freight train. It’s open source, distributed, and designed to move massive volumes of events continuously. It’s ideal for streaming pipelines at very large scale.
  • Event Hubs, in the middle, is essentially the managed version of Kafka on Azure. You get the same freight train power, but as a managed service. You don’t worry about the tracks or the engines — Azure handles that for you. It integrates directly with Fabric, which makes it especially attractive if you’re already in the Microsoft ecosystem.
  • RabbitMQ, on the right, is like a courier van. It’s not designed to move massive volumes, but it’s incredibly reliable and flexible for routing and guaranteed delivery. If you need every message reliably delivered to the right place, even if the consumer is offline for a bit, RabbitMQ is your best option.

Finally, at the bottom, we have the consumers: dashboards, analytics systems, or applications that depend on this event data. The key is that each broker delivers data in a way that best matches its design philosophy: high throughput (Kafka), managed scale (Event Hubs), or guaranteed precision (RabbitMQ).
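
To ground the Kafka side of this picture, here is a minimal producer sketch using the kafka-python client. The broker address, topic name, and payload are placeholders for your own environment.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Minimal sketch: push one user-activity event onto a Kafka topic.
# "localhost:9092" and the topic name are placeholders for your own cluster.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"user_id": 42, "action": "page_view", "page": "/pricing"}
producer.send("user-activity", value=event)
producer.flush()  # make sure the event actually left the client
```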

2026-02-14

Fabric RTI 101: RabbitMQ

RabbitMQ is another widely used message broker, but it fills a slightly different role compared to Kafka or Event Hubs. It’s an open-source broker that primarily implements the AMQP protocol, though it supports other protocols as well.

RabbitMQ

Where RabbitMQ shines is in reliable delivery and flexible routing of messages. It’s extremely good at guaranteeing that each message gets to the right destination, even if the consumer isn’t available at the time. You can define queues, exchanges, and routing rules to handle very complex message delivery patterns.
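
Here is a small sketch of those building blocks using the pika client: declare a topic exchange, bind a durable queue to it with a routing key, and publish a persistent message. The host, exchange, queue, and routing-key names are placeholders.

```python
import pika  # pip install pika

# Minimal sketch of RabbitMQ-style routing with the pika client.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A topic exchange routes messages to queues based on a routing-key pattern.
channel.exchange_declare(exchange="orders", exchange_type="topic")
channel.queue_declare(queue="billing", durable=True)   # survives broker restarts
channel.queue_bind(queue="billing", exchange="orders", routing_key="order.created")

# Persistent messages are held until a consumer acknowledges them,
# even if that consumer is offline when the message is published.
channel.basic_publish(
    exchange="orders",
    routing_key="order.created",
    body=b'{"order_id": 1001, "total": 99.50}',
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent
)
connection.close()
```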

2026-02-12

Fabric RTI 101: Using AMQP vs HTTP

When we talk about protocols for sending and receiving streams of event data, two of the most common you’ll come across are HTTP and AMQP.

AMQP vs HTTP

HTTP

HTTP is the workhorse of the web. It’s everywhere, it’s simple, and it’s supported by almost every platform and device. The model is request/response: the client asks for something, the server replies, and then the connection is done. HTTP is stateless, meaning every request is independent. That simplicity makes it easy to use, but it also makes it less suitable for continuous, real-time data flows. If you want to stream updates constantly, you either have to keep making new HTTP requests or hold the connection open in ways that HTTP wasn’t originally designed for.
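
As a quick sketch of what “keep making new HTTP requests” looks like in practice, the polling loop below repeatedly asks an endpoint for anything newer than the last update it saw, then sleeps until the next poll. The URL and response shape are placeholders, not a real API.

```python
import time
import requests  # pip install requests

# Request/response applied to "streaming": the client has to keep asking,
# because each HTTP exchange is independent and stateless.
def poll_updates(url: str, interval_s: float = 5.0) -> None:
    last_id = 0
    while True:
        resp = requests.get(url, params={"since": last_id}, timeout=10)
        resp.raise_for_status()
        for update in resp.json():          # assumed: a JSON list of updates
            print("got update:", update)
            last_id = max(last_id, update["id"])
        time.sleep(interval_s)              # anything that happens in between waits here
```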

2026-02-08

Fabric RTI 101: Azure IoT Hubs

Azure IoT Hub is a specialized service designed specifically for connecting and managing IoT devices at scale. While Azure Event Hubs is a general-purpose streaming service, IoT Hub focuses on the unique challenges that come with millions of devices out in the real world.

First and foremost, IoT Hub provides secure, reliable connections for those devices. In an IoT deployment, you might have sensors, machines, or even entire fleets of vehicles sending telemetry data up to the cloud. IoT Hub can support millions of devices connecting simultaneously, each streaming their data in real time.
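
A minimal device-side sketch using the azure-iot-device SDK looks roughly like this; the connection string is a placeholder for a real device identity registered in your hub, and the telemetry payload is illustrative.

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device

# Placeholder connection string for a device identity registered in IoT Hub.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

telemetry = {"temperature_c": 21.7, "humidity_pct": 48}
msg = Message(json.dumps(telemetry))
msg.content_type = "application/json"
msg.content_encoding = "utf-8"

client.send_message(msg)   # arrives in IoT Hub as device-to-cloud telemetry
client.shutdown()
```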

2026-02-06

Fabric RTI 101: Azure Event Hubs

Azure Event Hubs is Microsoft’s fully managed event streaming service. If you’re familiar with Apache Kafka, you can think of Event Hubs as Microsoft’s cloud-native equivalent. It’s designed to handle extremely high volumes of events — we’re talking about millions of events per second — all without you having to stand up and manage complex clusters yourself.

Azure Event Hubs

One of the most important features of Event Hubs is that it offers a Kafka-compatible endpoint. That means if you already have applications, tools, or client libraries that were built to talk to Kafka, in many cases they can connect to Event Hubs with little or no modification. That’s a huge benefit because it reduces friction for teams migrating workloads into Azure or building hybrid architectures.
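
In practice that often means an existing Kafka producer can be repointed just by changing its connection settings. The sketch below uses the kafka-python client against the Event Hubs Kafka endpoint; the namespace, connection string, and event hub name are placeholders for your own resources.

```python
from kafka import KafkaProducer  # pip install kafka-python

# Point an existing Kafka client at the Event Hubs Kafka endpoint (port 9093).
producer = KafkaProducer(
    bootstrap_servers="<namespace>.servicebus.windows.net:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",            # literal value used by the Kafka endpoint
    sasl_plain_password="<event-hubs-connection-string>",
)

# The event hub itself plays the role of the Kafka topic.
producer.send("<event-hub-name>", b'{"sensor": "line-3", "temp_c": 72.4}')
producer.flush()
```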

2026-02-04

Fabric RTI 101: Message Brokers and Event Streams

Let’s talk about message brokers and event streams — these are the backbone technologies that make real-time systems work at scale.

At the simplest level, a message broker acts as a middleman between the systems producing events and the systems consuming them. Instead of producers and consumers being tightly coupled — where the producer has to know exactly where to send data and the consumer has to be available at the exact right moment — the broker sits in between and handles that communication.
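
The toy, in-process example below is not a real broker, but it illustrates the decoupling idea: the producer publishes to a named topic without knowing who (if anyone) is listening, and messages are buffered until they can be delivered to subscribers.

```python
from collections import defaultdict, deque
from typing import Callable

class ToyBroker:
    """Tiny illustration of broker-style decoupling (not production code)."""

    def __init__(self) -> None:
        self._queues: dict[str, deque] = defaultdict(deque)
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def publish(self, topic: str, message: dict) -> None:
        self._queues[topic].append(message)       # buffered even if nobody is ready yet

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def deliver(self) -> None:
        for topic, queue in self._queues.items():
            while queue:
                message = queue.popleft()
                for handler in self._subscribers[topic]:
                    handler(message)

broker = ToyBroker()
broker.subscribe("orders", lambda m: print("dashboard saw:", m))
broker.publish("orders", {"order_id": 1, "total": 25.0})   # producer never calls the dashboard directly
broker.deliver()
```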

2026-02-02

Fabric RTI 101: Event-Driven vs Request-Driven Systems

Most of the systems we’ve worked with historically are request-driven. In this model, a client asks for information and the server provides it. Think about browsing a website: you type in a URL, your browser requests the page, and the server responds with the content. That’s a pull model — the client decides when it wants data. It’s predictable, it’s synchronous, and it’s been the backbone of web applications for decades.
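
A tiny, purely illustrative sketch of the contrast the title points at: in the pull model the client asks and gets a one-off answer, while in the push model a handler is registered once and invoked whenever an event occurs. All names here are made up for illustration.

```python
# Request-driven (pull): the caller decides when to ask and gets a one-off answer.
def get_page(server: dict, url: str) -> str:
    return server[url]                      # client asks, server replies, done

# Event-driven (push): the consumer registers interest once and is called
# whenever something happens, without asking again.
def on_page_updated(event: dict) -> None:
    print("page changed:", event["url"])

handlers = [on_page_updated]                # the subscription
def emit(event: dict) -> None:              # something happened somewhere
    for handler in handlers:
        handler(event)

server = {"/home": "<html>...</html>"}
print(get_page(server, "/home"))            # pull: client-initiated
emit({"url": "/home"})                      # push: event-initiated
```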

2026-01-31