The Bit Bucket

Fabric RTI 101: Eventstream Destinations

Once your Eventstream is processing data, the next key step is deciding where those events should go — their destinations.

Eventstreams are designed to be flexible: you can route events to multiple destinations at once, both inside Fabric and to external systems. Let’s look at the main ones.

First is the Eventhouse, which provides a high-performance, KQL-based analytical store. It’s ideal when you need to query and visualize live data in real time — for example, detecting anomalies or monitoring live operations. Because it uses KQL, it integrates tightly with Real-Time Dashboards and KQL Querysets in Fabric.

2026-05-07

SQL: The need for user-defined index types

A few days ago, I wrote about SQL CLR: how I don’t normally use it now but, if I did, which types of objects would make sense for it. I briefly mentioned user-defined data types, but today I wanted to call out another limitation of these that I’d like to see addressed (if we keep on using SQL CLR).

Early versions of the user-defined data types in SQL CLR had a limitation on size, where they needed to be serializable within 8KB. That limit is long gone, so the ability to define new data types using SQL CLR integration is now almost at a usable level, apart from one key omission: indexes.

2026-05-06

Fabric RTI 101: Designing Eventstream Pipelines

When it comes to designing an Eventstream pipeline in Fabric, the process generally follows a clear, three-step flow. First, you start with the inputs — the data sources. This might include Kafka, Event Hubs, IoT Hub, or other streaming systems. At this stage, you define the connections and schemas so Fabric knows how to interpret incoming events.

The second step is where you apply transformations. These are the operations that make raw data more usable and more valuable. You might apply filtering to reduce noise and drop irrelevant events. You might use mapping to rename fields, adjust types, or flatten JSON into something cleaner. And you might use routing to branch different types of events to different destinations. Together, these transformations ensure that events are shaped, cleaned, and directed properly before they move downstream.
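The three transformation types described above can be sketched in plain Python. This is a conceptual illustration only; the function names, event fields, and destination names are invented for the example, not Fabric APIs:

```python
# Illustrative sketch (not Fabric APIs) of the three common
# Eventstream transformations: filtering, mapping, and routing.

def filter_events(events, predicate):
    """Filtering: drop irrelevant events to reduce noise."""
    return [e for e in events if predicate(e)]

def map_event(event):
    """Mapping: rename fields and adjust types into a cleaner shape."""
    return {
        "device": event["deviceId"],       # rename field
        "reading": float(event["value"]),  # adjust type
        "kind": event["type"],
    }

def route(event):
    """Routing: branch events to different destinations by rule."""
    return "kql_database" if event["kind"] == "telemetry" else "warehouse"

raw = [
    {"deviceId": "d1", "value": "21.5", "type": "telemetry"},
    {"deviceId": "d2", "value": "0",    "type": "heartbeat"},
    {"deviceId": "d3", "value": "99.0", "type": "transaction"},
]

kept = filter_events(raw, lambda e: e["type"] != "heartbeat")
shaped = [map_event(e) for e in kept]
destinations = {e["device"]: route(e) for e in shaped}
print(destinations)  # {'d1': 'kql_database', 'd3': 'warehouse'}
```

The order matters: filtering first keeps the mapping and routing steps from doing work on events that will be discarded anyway.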

2026-05-05

SQL Down Under show 95 with guest Jess Pomfret discussing Data API Builder for SQL Server

It was great to catch up with Jess Pomfret today and to have her on a SQL Down Under podcast.

Jess is a Data Platform Engineer and a dual Microsoft MVP. She started working with SQL Server in 2011, and she says she enjoys the problem-solving aspects of automating processes with PowerShell.

Jess also enjoys contributing to dbatools and dbachecks, two open source PowerShell modules that help DBAs automate the management of SQL Server instances.

2026-05-05

SQL: What types of objects are useful in SQL CLR?

I’ve recently been talking to clients about SQL CLR objects. When these were first introduced in SQL Server 2005, many of us had high hopes for them. SQL Server has never been great when it comes to extensibility, and SQL CLR provided some way to extend the product.

Nowadays, I avoid SQL CLR, and that’s a real pity. But it’s no longer supported in Azure SQL Database, apart from the system CLR types geometry, geography, and hierarchyid. (Note: I’m also not a fan of hierarchyid.) I need to use extensibility methods that are available in the different environments I work in, and Azure SQL Database is one of those. The same applies to Fabric SQL Database.

2026-05-04

Fabric RTI 101: Routing Events

Routing is one of the features that makes Eventstreams especially powerful in Fabric. It gives you fine-grained control over where different types of events end up. Instead of building separate ingestion pipelines for every scenario, you can manage multiple flows in one stream and branch the data based on rules you define.

For example, imagine a company that handles both IoT telemetry and financial transactions. IoT data, like sensor readings, is most valuable in a KQL database, where you can run fast time-series analysis and detect anomalies. At the same time, financial transactions are better suited to a warehouse, where structured schemas and BI reporting tools can shine. With routing, both event types can enter the same Eventstream, but the system automatically directs them to the most appropriate destination.
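That first-match routing behaviour can be sketched conceptually in plain Python. This is not Fabric’s actual routing configuration; the rule structure, event fields, and destination names here are assumptions made for illustration:

```python
# Conceptual sketch (not Fabric's routing syntax): one stream,
# multiple rules, each branching matching events to a destination.

ROUTING_RULES = [
    # (predicate, destination) - evaluated in order, first match wins
    (lambda e: e["source"] == "iot",     "kql_database"),
    (lambda e: e["source"] == "finance", "warehouse"),
]
DEFAULT_DESTINATION = "lakehouse"  # fallback for unmatched events

def route_event(event):
    for predicate, destination in ROUTING_RULES:
        if predicate(event):
            return destination
    return DEFAULT_DESTINATION

events = [
    {"source": "iot",     "sensor": "temp-04", "value": 21.7},
    {"source": "finance", "txn_id": "T1001",   "amount": 250.0},
    {"source": "crm",     "contact": "C42"},
]

for e in events:
    print(e["source"], "->", route_event(e))
```

A default destination is worth including in any design like this, so events that match no rule are still captured somewhere rather than silently dropped.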

2026-05-03

Opinion: How enforceable are EULAs today?

I was wondering today how often EULAs (end user license agreements) get tested in courts, and in particular, EULAs that appear in consumer-grade applications.

While they sound quite official, it’s hard to imagine most of them being very enforceable. Does anyone EVER read them?

Fair Cop!

I was amused a few years back when I was installing an application, clicked past the EULA, and the application said “how could you possibly have read that in 1.076 seconds?” Yep, got me there; that’s a fair cop.

2026-05-02

Fabric RTI 101: Fabric Storage Options - KQL Databases

KQL databases are a specialized storage option in Fabric designed specifically for high-volume event and telemetry data. KQL stands for Kusto Query Language, which comes from the Azure Data Explorer (ADX) engine.

These databases are optimized to handle workloads where you might have billions of small events — like application logs, IoT telemetry, or time-series data — and you need to query them at speed.

Where a warehouse is optimized for structured, relational data and a lakehouse is great for mixing structured and semi-structured data, a KQL database shines when you need to scan and aggregate across massive volumes of events very quickly. You can run queries that look back over millions of log entries or thousands of IoT readings and get sub-second responses. That kind of responsiveness is what makes it possible to power real-time dashboards, alerting systems, and anomaly detection workflows.
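In KQL itself, that kind of scan is typically expressed with `summarize ... by bin(Timestamp, 1m)`. As a rough plain-Python sketch of the same time-bucketing idea (illustrative only; a real KQL database does this across billions of rows, not a small list):

```python
# Plain-Python sketch of the time-bucketed aggregation a KQL database
# performs at scale (in KQL: `summarize count() by bin(Timestamp, 1m)`).
from collections import Counter
from datetime import datetime, timedelta

def bin_timestamp(ts, size=timedelta(minutes=1)):
    """Round a timestamp down to its bucket, like KQL's bin()."""
    epoch = datetime(1970, 1, 1)
    buckets = (ts - epoch) // size  # whole buckets since the epoch
    return epoch + buckets * size

events = [
    datetime(2026, 5, 1, 9, 0, 12),
    datetime(2026, 5, 1, 9, 0, 47),
    datetime(2026, 5, 1, 9, 1, 5),
]

per_minute = Counter(bin_timestamp(ts) for ts in events)
for bucket, count in sorted(per_minute.items()):
    print(bucket, count)
```

The engine’s advantage is that columnar storage and indexing let it run exactly this shape of query over millions of rows in well under a second.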

2026-05-01

SQL: Stored Procedures - Time for a real contract?

Increasingly, developers are using tools that try to automate code generation when dealing with databases. Stored procedures have been a thorn in the side of these tools. Mostly that’s because it’s difficult to obtain the metadata that is really needed.

You should not need to be able to read the code of a stored procedure to know how to use it. And that includes the common exceptions it might throw. None of the existing tools for dealing with this currently do what’s needed.

2026-04-30

Fabric RTI 101: Apache and Confluent Kafka

Apache Kafka is one of the most widely used event streaming systems in the world, and for good reason. At its core, Kafka is a distributed, open-source platform that makes it possible to capture, process, and deliver millions of events per second with high reliability.

Kafka organizes data into topics, which you can think of as named channels. A producer writes events into a topic — this could be an application logging user activity, or a payment system recording transactions.
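As a toy illustration of that model, here is an in-memory sketch. This is not the real Kafka client API; the class and method names are invented, and it only captures the core idea of a topic as a named, append-only log with offsets:

```python
# Toy in-memory model (not the Kafka client API) of the core idea:
# a topic is a named, append-only log; producers append events,
# and consumers read from a chosen offset onward.

class Topic:
    def __init__(self, name):
        self.name = name
        self.log = []  # append-only list of events

    def produce(self, event):
        self.log.append(event)
        return len(self.log) - 1  # offset of the new event

    def consume(self, offset):
        """Read all events from a given offset onward."""
        return self.log[offset:]

user_activity = Topic("user-activity")
payments = Topic("payments")

user_activity.produce({"user": "u1", "action": "login"})
payments.produce({"txn": "T1001", "amount": 250.0})
offset = user_activity.produce({"user": "u1", "action": "view-page"})

print(offset)                    # 1
print(user_activity.consume(0))  # both user-activity events, in order
```

Real Kafka adds partitioning, replication, and consumer groups on top of this, but the append-only log with per-event offsets is the foundation everything else builds on.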

2026-04-29