
Fabric RTI 101: KQL Database OneLake Availability


I mentioned in a previous post that the standard storage location for Microsoft Fabric is OneLake. In fact, you’ll find that pretty much everything you use in Fabric ends up being stored in OneLake.

The thinking is that data captured and used by one tool can then be examined with a different tool, and so on. While that vision makes sense, it isn’t 100% true yet, but the story is getting better.

2026-03-10

Fabric RTI 101: What are Eventhouses?


An Eventhouse in Microsoft Fabric is a container for specialized databases designed specifically for real-time event analytics (i.e., KQL Databases).

They aren’t just general-purpose stores. They are optimized for very specific scenarios where you’re dealing with high-velocity event data and you need insights almost instantly.

Eventhouse

Even though most Eventhouses have only a single database, they can have more than one. At their core, the databases are built on the Kusto engine, which is the same technology behind Azure Data Explorer. That means they are extremely good at handling time-series and log-style workloads. If you’ve ever worked with telemetry, application logs, or security data, you’ll know how important it is to query billions of small, timestamped records quickly. That’s exactly what Eventhouse databases are tuned for.
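To make the time-series idea concrete, here’s a minimal pure-Python sketch of the kind of aggregation the Kusto engine is tuned for — bucketing timestamped records into time bins, roughly what a KQL `summarize count() by bin(Timestamp, 1m)` does. The sample records are made up for illustration; the real engine does this over billions of rows.

```python
from collections import Counter
from datetime import datetime

# Sample telemetry records: (timestamp, level) pairs, as a KQL database might store them.
events = [
    (datetime(2026, 3, 8, 12, 0, 15), "INFO"),
    (datetime(2026, 3, 8, 12, 0, 45), "ERROR"),
    (datetime(2026, 3, 8, 12, 1, 5), "INFO"),
    (datetime(2026, 3, 8, 12, 1, 30), "ERROR"),
    (datetime(2026, 3, 8, 12, 1, 50), "ERROR"),
]

def bin_timestamp(ts: datetime) -> datetime:
    """Truncate a timestamp to its one-minute bucket (like KQL's bin())."""
    return ts.replace(second=0, microsecond=0)

# Count errors per one-minute bucket.
errors_per_minute = Counter(
    bin_timestamp(ts) for ts, level in events if level == "ERROR"
)
print(errors_per_minute)
```

The point isn’t the Python itself — it’s that this shape of query (filter, bucket by time, aggregate) is exactly what the Kusto engine optimizes for at massive scale.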

2026-03-08

Fabric RTI 101: Fabric Storage Options - Warehouses


Warehouse

Alongside lakehouses, Fabric also provides warehouses, and these are designed to feel very familiar if you’ve ever worked with a traditional relational database or data warehouse. The warehouse in Fabric is a fully managed, SQL-based analytics store, which means you don’t need to worry about provisioning servers, managing indexes, or tuning storage. It’s optimized under the hood to give you fast, consistent performance for structured data queries.

This is really valuable if your team is already comfortable with SQL. Business analysts, data professionals, and report builders can continue to use the language and tools they already know, without having to learn a whole new paradigm. Warehouses enforce relational schemas — tables with defined columns, keys, and constraints — so you get consistency and predictability in how your data is structured.

2026-03-04

Fabric RTI 101: Fabric Storage Options - Lakehouses


A lakehouse is a relatively new storage concept, and it’s designed to give you the best of both worlds. On one side, you have a traditional data lake, which is extremely flexible — you can throw files of any shape or size into it, including structured, semi-structured, and unstructured data. On the other side, you have a data warehouse, which adds the structure, schema enforcement, and query performance that analysts are used to.

2026-03-02

Fabric RTI 101: Fabric Storage Options - OneLake


OneLake is really the foundation of Fabric’s storage model. The idea is simple but powerful: instead of having separate storage systems for each analytics tool or service, Fabric provides a single, unified data lake. This gives you one logical place where all of your data lives, and all the workloads in Fabric can share it.

OneLake

Technically, OneLake is built on the open Delta Lake format. Underneath, it uses Parquet files for efficient columnar storage, but it adds transactional support on top. That means multiple processes can read and write to the same data in a consistent way, with guarantees around reliability and performance. It’s open, it’s proven, and it avoids the pitfalls of closed, proprietary formats.
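To see how a transaction log adds consistency on top of plain files, here’s a toy sketch of the idea. Real Delta tables keep numbered JSON commit files under a `_delta_log/` folder alongside the Parquet data; this simplified version only mimics that shape, not the actual protocol, and the file names are illustrative.

```python
import json
import tempfile
from pathlib import Path

# A toy version of Delta Lake's core idea: data files plus an ordered
# JSON commit log that readers replay to get a consistent view.
table = Path(tempfile.mkdtemp())
log_dir = table / "_delta_log"
log_dir.mkdir()

def commit(version: int, action: dict) -> None:
    """Write one commit file; the zero-padded name preserves replay order."""
    (log_dir / f"{version:020d}.json").write_text(json.dumps(action))

commit(0, {"add": "part-0000.parquet"})
commit(1, {"add": "part-0001.parquet"})
commit(2, {"remove": "part-0000.parquet"})

# Replaying the log in order yields the current set of live data files.
live = set()
for f in sorted(log_dir.glob("*.json")):
    action = json.loads(f.read_text())
    if "add" in action:
        live.add(action["add"])
    if "remove" in action:
        live.discard(action["remove"])
print(live)
```

Because every reader replays the same ordered log, concurrent readers and writers agree on which files make up the table — that’s the transactional guarantee layered over plain Parquet.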

2026-02-28

Fabric RTI 101: How Fabric Connects to External Sources


One of the most powerful aspects of Fabric’s Real-Time Intelligence is how it connects to external sources. The mechanism for doing this is through Eventstreams. Eventstreams are essentially the pipelines that define where your data is coming in — the inputs — and where it’s going out — the outputs. In a later post, we’ll explore Eventstreams.

Fabric comes with a range of native connectors. These include direct connections to industry-standard technologies like Kafka, Azure Event Hubs, Azure IoT Hub, and other AMQP-based systems. That means if you already have investments in streaming infrastructure — whether in the cloud or on-premises — Fabric can plug into them without a lot of custom development.
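Conceptually, an Eventstream is a pipe with configured inputs and outputs. The sketch below models that fan-out in plain Python — the class and destination names are purely illustrative, since in Fabric you configure connectors rather than write this code.

```python
from typing import Callable

class Eventstream:
    """Illustrative model of an event pipeline: inputs publish, outputs receive."""

    def __init__(self) -> None:
        self.outputs: list[Callable[[dict], None]] = []

    def add_output(self, sink: Callable[[dict], None]) -> None:
        self.outputs.append(sink)

    def publish(self, event: dict) -> None:
        # Fan each incoming event out to every configured destination.
        for sink in self.outputs:
            sink(event)

kql_db: list[dict] = []     # stand-in for a KQL Database destination
lakehouse: list[dict] = []  # stand-in for a Lakehouse destination

stream = Eventstream()
stream.add_output(kql_db.append)
stream.add_output(lakehouse.append)
stream.publish({"deviceId": "sensor-1", "temperature": 21.5})
```

The same one-to-many shape is what lets a single stream of events feed real-time dashboards and long-term storage simultaneously.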

2026-02-26

Fabric RTI 101: Azure and Fabric Events


We’ve looked at events generated by applications, databases, and storage systems — but it’s important to remember that Azure and Fabric themselves also generate events. These are sometimes called platform events, because they come from the infrastructure and services you’re running rather than from your business data directly.

Some examples are really practical. Azure might emit events when resources are scaled up or down — say, when a cluster automatically adds nodes to handle increased load. Fabric might generate events when a pipeline starts, completes, or fails. You’ll also see events related to service health, configuration changes, or security alerts.
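Platform events typically arrive as small JSON envelopes you route on. The sketch below parses one such payload — the field layout follows the general Azure Event Grid envelope shape (`id`, `eventType`, `subject`, `eventTime`, `data`), but the specific values are invented for illustration.

```python
import json

# A made-up platform event in an Event Grid-style envelope.
raw = json.dumps({
    "id": "abc-123",
    "eventType": "Microsoft.Resources.ResourceWriteSuccess",
    "subject": "/subscriptions/sub-1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1",
    "eventTime": "2026-02-24T10:00:00Z",
    "data": {"status": "Succeeded"},
})

event = json.loads(raw)

# Routing on eventType is how you separate infrastructure changes
# from business data events.
is_platform_change = event["eventType"].startswith("Microsoft.Resources.")
resource_name = event["subject"].rsplit("/", 1)[-1]
print(is_platform_change, resource_name)
```

Treating these envelopes as just another event source means platform health and configuration changes can flow through the same real-time pipelines as your business data.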

2026-02-24

Fabric RTI 101: Azure and AWS Storage Events


Another important category of real-time sources comes from cloud storage platforms like Azure Blob Storage and AWS S3. These aren’t just passive data stores — they can actually generate events whenever something happens to the data inside them.

Azure and AWS Storage

For example, when a new file is uploaded into a data lake, the storage service can immediately raise an event. That event might then trigger an ingestion pipeline, start a transformation process, or kick off machine learning model scoring. The key here is that you don’t have to wait for a scheduled scan or a batch job to check for new files. The storage system itself notifies you the instant a change occurs.
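Here’s a sketch of what reacting to such a notification looks like. The payload loosely mirrors the Event Grid `BlobCreated` shape (heavily simplified), and the handler function is hypothetical — in practice the event would trigger a configured pipeline rather than hand-written dispatch code.

```python
# A simplified blob-created notification, shaped like an Event Grid payload.
event = {
    "eventType": "Microsoft.Storage.BlobCreated",
    "data": {"url": "https://mystore.blob.core.windows.net/landing/sales/2026-02-22.csv"},
}

def start_ingestion(url: str) -> str:
    """Hypothetical handler: kick off an ingestion pipeline for the new file."""
    return f"ingesting {url.rsplit('/', 1)[-1]}"

# Dispatch the instant the event arrives -- no scheduled scan or batch
# job polling for new files.
result = ""
if event["eventType"] == "Microsoft.Storage.BlobCreated":
    result = start_ingestion(event["data"]["url"])
print(result)
```

The key design point is push over pull: the storage service tells you about the change, so latency drops from "whenever the next scan runs" to near-instant.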

2026-02-22

Fabric RTI 101: CDC vs CES Comparison


This diagram highlights the difference between Change Data Capture (CDC) and Change Event Streams (CES) — two ways of turning databases into real-time event sources.

CDC vs CES

On the left side, we see CDC. Here the focus is on row-level changes. Every time a row is inserted, updated, or deleted in a table, that operation is captured and streamed out as an event. Those events typically feed into real-time replication or analytics pipelines. In other words, CDC makes sure your downstream systems always have fresh copies of your data. It’s like having a live feed of every transaction at the cash register. That’s why we use the analogy here of cash register receipts — you get a record of each sale exactly as it happens.
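The receipt analogy maps neatly to what a CDC feed actually contains: one event per row-level operation. This minimal sketch (with invented table and field names) shows the shape of such a feed.

```python
# Minimal sketch of a CDC feed: one event per row-level operation,
# like a receipt for every transaction at the register.
change_feed: list[dict] = []

def capture(operation: str, table: str, row: dict) -> None:
    """Record an insert/update/delete as a change event for downstream consumers."""
    change_feed.append({"op": operation, "table": table, "row": row})

capture("insert", "orders", {"id": 1, "total": 40.0})
capture("update", "orders", {"id": 1, "total": 45.0})
capture("delete", "orders", {"id": 1})

# A downstream replica replays the feed in order to stay fresh.
ops = [e["op"] for e in change_feed]
print(ops)
```

Replaying that ordered feed is what keeps downstream analytics stores in sync without re-scanning the source tables.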

2026-02-20

Fabric RTI 101: Database CES Sources


When we talk about Change Event Streams, or CES, we’re looking at a different layer of database activity than CDC.

CDC — Change Data Capture — focuses on row-level changes: inserts, updates, and deletes in your transactional tables. That’s very useful for analytics and replication. But there’s a lot more going on inside a database than just row changes.

Database CES Sources

CES captures the broader set of events that affect the structure and governance of the database itself. This includes things like schema changes — if someone adds a new column, drops a table, or creates an index. It also includes permission updates — for example, when access rights are granted or revoked. And it can even capture configuration changes or other metadata-level modifications.
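To make the split concrete, here’s a small sketch that partitions a mixed stream of database events into the CES layer (structure and governance) versus the CDC layer (row changes). The event kinds and details are illustrative, not a real event schema.

```python
# Illustrative mixed stream of database events.
events = [
    {"kind": "row_insert", "table": "orders"},
    {"kind": "schema_change", "detail": "ADD COLUMN discount"},
    {"kind": "permission_update", "detail": "GRANT SELECT TO analyst"},
    {"kind": "row_update", "table": "orders"},
    {"kind": "config_change", "detail": "retention = 30d"},
]

# CES covers structure and governance; everything row-level is CDC territory.
CES_KINDS = {"schema_change", "permission_update", "config_change"}

ces_events = [e for e in events if e["kind"] in CES_KINDS]
cdc_events = [e for e in events if e["kind"] not in CES_KINDS]
print(len(ces_events), len(cdc_events))
```

Separating the two streams like this is useful because they usually have different consumers: replication and analytics pipelines want the CDC side, while auditing and governance tooling cares about the CES side.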

2026-02-18