
Fabric RTI 101: Routing Events

Routing is one of the features that makes Eventstreams especially powerful in Fabric. It gives you fine-grained control over where different types of events end up. Instead of building separate ingestion pipelines for every scenario, you can manage multiple flows in one stream and branch the data based on rules you define.

For example, imagine a company that handles both IoT telemetry and financial transactions. IoT data, like sensor readings, is most valuable in a KQL database, where you can run fast time-series analysis and detect anomalies. At the same time, financial transactions are better suited to a warehouse, where structured schemas and BI reporting tools can shine. With routing, both event types can enter the same Eventstream, but the system automatically directs them to the most appropriate destination.
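The branching described above is configured through the Eventstream interface rather than written as code, but the logic can be sketched in a few lines of Python. The event fields and destination names here are illustrative, not Fabric APIs:

```python
# Sketch of routing: each rule pairs a predicate with a destination.
# Field values and destination names are made up for illustration.
def route(event):
    rules = [
        (lambda e: e.get("type") == "iot_telemetry", "kql_database"),
        (lambda e: e.get("type") == "transaction", "warehouse"),
    ]
    for predicate, destination in rules:
        if predicate(event):
            return destination
    return "lakehouse"  # default sink for anything unmatched

print(route({"type": "iot_telemetry", "tempC": 42}))   # kql_database
print(route({"type": "transaction", "amount": 9.99}))  # warehouse
```

Both event types enter through the same `route` call; only the rule that matches decides where each one lands, which is exactly the one-stream, many-destinations pattern routing gives you.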

2026-03-22

SSRS and Fabric Paginated Reports: Be very careful with using "c" formatting for currency

While I was on site this week, another common problem that I see everywhere came up again.

When you need to format currency, you use the “c” format, right? It’s in nearly every set of course materials I’ve ever seen. And people do it in almost every demonstration.

But so often, that’s wrong!

When you do this, you’re telling the system to display the monetary value using the local currency.

Is that correct though?
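A quick Python sketch makes the distinction concrete. An explicit format string pins the currency symbol to the currency the data is actually stored in; a culture-sensitive format (which is what “c” does in SSRS and .NET) substitutes the viewer’s local symbol instead. The euro amount below is illustrative:

```python
amount = 1234.56

# Explicit format string: the symbol travels with the data, so every
# viewer sees the currency the value is actually denominated in.
explicit = f"€{amount:,.2f}"
print(explicit)  # €1,234.56

# A culture-sensitive "c" format would instead take the symbol from the
# viewer's regional settings: the same stored value renders as
# "$1,234.56" under en-US but "1 234,56 €" under fr-FR. Unless the data
# really is in the viewer's local currency, that relabels the number
# without converting it.
```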

2026-03-21

Fabric RTI 101: Mapping Events

Another key part of event processing in Fabric is mapping. Mapping is all about shaping the raw events into the structure you actually want to work with downstream.

When data first arrives, it often comes in the schema defined by the producer system. That might not match what your analytics tools, your warehouse, or your business users expect. For example, a device might send a field called ‘tempC’ when you really want it named ‘TemperatureCelsius’.
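The rename described above is a simple field mapping. A minimal Python sketch (the field names are illustrative, and in Fabric you would define this through the Eventstream’s manage-fields operation rather than code):

```python
# Sketch of a field mapping: rename producer fields to the names the
# downstream consumers expect; fields not in the map pass through as-is.
FIELD_MAP = {
    "tempC": "TemperatureCelsius",
    "devId": "DeviceId",
}

def map_event(event):
    return {FIELD_MAP.get(key, key): value for key, value in event.items()}

print(map_event({"devId": "sensor-7", "tempC": 21.5}))
# {'DeviceId': 'sensor-7', 'TemperatureCelsius': 21.5}
```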

2026-03-20

Fabric RTI 101: Filtering Events

When working with real-time data, one of the biggest challenges is signal versus noise. Not every event that arrives is valuable for analysis or action. For example, IoT devices may send thousands of telemetry points per second, but only a small fraction of those actually represent unusual or meaningful behavior.

That’s where filtering comes in. Filtering lets us apply simple conditions to events right at the ingestion or processing stage. For instance, imagine we have a stream of temperature readings coming from industrial sensors. Most readings might sit between 20 and 50 degrees Celsius — perfectly normal. But maybe we only care if the temperature goes above 80°C, because that indicates a possible overheating issue. With a filter, we can discard all the normal events and only pass through the ones that require attention.
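The overheating scenario above boils down to one threshold predicate. A minimal Python sketch, with illustrative field names (in Fabric itself the filter is a condition you configure on the stream):

```python
# Sketch of a threshold filter: keep only readings that signal a
# possible overheating issue, discard the normal ones.
THRESHOLD_C = 80.0

def overheating(events):
    return [e for e in events if e["tempC"] > THRESHOLD_C]

readings = [
    {"sensor": "a", "tempC": 35.2},   # normal -- discarded
    {"sensor": "b", "tempC": 91.7},   # above threshold -- kept
    {"sensor": "c", "tempC": 47.0},   # normal -- discarded
]
print(overheating(readings))  # [{'sensor': 'b', 'tempC': 91.7}]
```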

2026-03-18

Fabric RTI 101: Event Processing Outputs

Once events are flowing through an Eventstream, the next decision is: where should they go? This is where outputs come into play.

Fabric supports several key destinations. You can send events into a Lakehouse, which is ideal for combining real-time streams with historical data and keeping a permanent record for later analysis. You can push data into a Warehouse for structured reporting and BI queries. Or you can use a KQL database if your focus is on fast, interactive queries over logs, telemetry, or time-series data.

2026-03-16

Fabric RTI 101: Event Processing Inputs

When we look at event processing inputs, the first thing to know is that Fabric supports a broad range of streaming sources. The big four are Kafka, Azure Event Hubs, Azure IoT Hub, and any system that speaks AMQP. These cover most of the event-driven architectures you’ll see in the real world, from enterprise message brokers to IoT device fleets and large-scale cloud-native streaming pipelines.

Another important point is that inputs can come from both cloud and on-premises environments. Many organizations are in hybrid mode — perhaps you’ve got a Kafka cluster still running in your datacenter, while also using Event Hubs in Azure for new workloads. Fabric Eventstreams can connect to both, allowing you to bring all those events into a unified pipeline without needing to modernize everything at once.

2026-03-14

Fabric RTI 101: What are Eventstreams?

An Eventstream is Microsoft Fabric’s native service for managing the flow of real-time data. Think of it as a pipeline that sits in the middle: it connects your data sources on one side, applies any transformations you need, and then routes the events to their destinations.

The key thing is that Eventstreams are built into Fabric. You don’t have to spin up infrastructure, manage clusters, or write custom code. Instead, you configure inputs and outputs through the Fabric interface. For example, you might take input from an Azure Event Hub, filter or transform the events, and then send them into a Lakehouse, a KQL database, or straight into Power BI.

2026-03-12

Fabric RTI 101: KQL Database OneLake Availability

I mentioned in a previous post that the standard storage format for Microsoft Fabric is in OneLake. In fact, you’ll find that pretty much everything that you use in Fabric ends up being stored in OneLake.

The thinking is that you should be able to look at data that was captured by one tool, then work with that same data using a different tool, and so on. While that all makes sense, it’s not 100% true yet, but the story is getting better.

2026-03-10

Fabric RTI 101: What are Eventhouses?

An Eventhouse in Microsoft Fabric is a container for specialized databases designed specifically for real-time event analytics (i.e., KQL Databases).

They aren’t just general-purpose stores. They are optimized for very specific scenarios where you’re dealing with high-velocity event data and you need insights almost instantly.

Eventhouse

Even though most Eventhouses have only a single database, they can have more than one. At their core, the databases are built on the Kusto engine, which is the same technology behind Azure Data Explorer. That means they are extremely good at handling time-series and log-style workloads. If you’ve ever worked with telemetry, application logs, or security data, you’ll know how important it is to query billions of small, timestamped records quickly. That’s exactly what Eventhouse databases are tuned for.

2026-03-08

Fabric RTI 101: Fabric Storage Options - Warehouses

Warehouse

Alongside lakehouses, Fabric also provides warehouses, and these are designed to feel very familiar if you’ve ever worked with a traditional relational database or data warehouse. The warehouse in Fabric is a fully managed, SQL-based analytics store, which means you don’t need to worry about provisioning servers, managing indexes, or tuning storage. It’s optimized under the hood to give you fast, consistent performance for structured data queries.

This is really valuable if your team is already comfortable with SQL. Business analysts, data professionals, and report builders can continue to use the language and tools they already know, without having to learn a whole new paradigm. Warehouses enforce relational schemas — tables with defined columns, keys, and constraints — so you get consistency and predictability in how your data is structured.

2026-03-04