The Bit Bucket

SQL: Computed Columns: It's a matter of persistence

Most SQL Server developers are aware that they can create computed columns. We do that by defining a column AS some expression like this:

CREATE TABLE Sales.OrderLines
(
    ...
    UnitPrice decimal(18, 2) NOT NULL,
    PickedQuantity decimal(18, 3) NOT NULL,
    LineTotal AS ROUND(UnitPrice * PickedQuantity, 2)
    ...
)

Each time the LineTotal column is queried, the calculation is performed so the result can be returned. This makes sense when the value changes regularly but is queried infrequently.
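Given the post's title, the alternative worth noting is a PERSISTED computed column, where SQL Server stores the calculated value in the row and recalculates it only when the underlying columns change. A minimal sketch, reusing the columns from the example above (the OrderLineID key column is added here just to make the table self-contained):

```sql
-- Same computed column, but PERSISTED: the value is stored in the row
-- and recalculated only when UnitPrice or PickedQuantity changes.
CREATE TABLE Sales.OrderLines
(
    OrderLineID int NOT NULL,                               -- hypothetical key column
    UnitPrice decimal(18, 2) NOT NULL,
    PickedQuantity decimal(18, 3) NOT NULL,
    LineTotal AS ROUND(UnitPrice * PickedQuantity, 2) PERSISTED
);
```

The trade-off is reversed: reads become cheap, while writes pay the calculation cost, which suits values that are queried often but change rarely.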

2026-03-07

Tech Career for Life: Show 1 with Guest Drew Robbins is now published!

I’ve been planning a new podcast for quite a while. I’ve been fascinated by what makes a successful technical career that lasts a long time.

Over the years, I’ve seen far too many promising people either just stop advancing, or sometimes, completely burn out. As an example, I remember a very technical friend saying he’d had enough, and he bought a small farm in Tasmania.

The first of these shows is now recorded and published. And this time, they’re video podcasts, not just audio. I hope you enjoy it.

2026-03-06

SQL: Plan Cache Pollution - Avoiding it and Fixing it

While SQL Server’s plan cache generally is self-maintaining, poor application coding practices can cause the plan cache to become full of query plans that have only ever been used a single time and that are unlikely to ever be reused. We call this plan cache pollution.

Causes

The most common cause of these issues is programming libraries that send multiple variations of a single query. For example, imagine I have a query like:
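As a hypothetical illustration of the kind of variation involved (the table and column names here are invented), a data access library might declare the same string parameter with a different length on each call, sized to the value being passed:

```sql
-- Three logically identical queries. Because the declared parameter length
-- differs each time, each call produces its own entry in the plan cache.
EXEC sp_executesql N'SELECT * FROM dbo.Customers WHERE CustomerName = @name',
                   N'@name nvarchar(3)', N'Ann';
EXEC sp_executesql N'SELECT * FROM dbo.Customers WHERE CustomerName = @name',
                   N'@name nvarchar(7)', N'Bernard';
EXEC sp_executesql N'SELECT * FROM dbo.Customers WHERE CustomerName = @name',
                   N'@name nvarchar(9)', N'Charlotte';
```

With a fixed parameter declaration (say nvarchar(100) every time), all three calls would share a single cached plan.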

2026-03-05

Fabric RTI 101: Fabric Storage Options - Warehouses

Warehouse

Alongside lakehouses, Fabric also provides warehouses, and these are designed to feel very familiar if you’ve ever worked with a traditional relational database or data warehouse. The warehouse in Fabric is a fully managed, SQL-based analytics store, which means you don’t need to worry about provisioning servers, managing indexes, or tuning storage. It’s optimized under the hood to give you fast, consistent performance for structured data queries.

This is really valuable if your team is already comfortable with SQL. Business analysts, data professionals, and report builders can continue to use the language and tools they already know, without having to learn a whole new paradigm. Warehouses enforce relational schemas — tables with defined columns, keys, and constraints — so you get consistency and predictability in how your data is structured.
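As a sketch of what that relational structure looks like (the table and column names are hypothetical, and note that Fabric warehouses support a subset of T-SQL, with constraints like primary keys declared as informational only):

```sql
-- A hypothetical dimension table in a Fabric warehouse: defined columns,
-- explicit types, and NULL/NOT NULL rules, just like a traditional database.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  int          NOT NULL,
    CustomerName varchar(100) NOT NULL,
    Region       varchar(50)  NULL
);

-- In Fabric warehouses, primary keys are declared NOT ENFORCED: they document
-- the model and help the optimizer, but uniqueness is not checked on insert.
ALTER TABLE dbo.DimCustomer
ADD CONSTRAINT PK_DimCustomer PRIMARY KEY NONCLUSTERED (CustomerKey) NOT ENFORCED;
```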

2026-03-04

SQL: Understanding Cache Dependencies in ASP.NET

I’ve had a number of clients over the years who were wondering how the SQL dependencies in ASP.NET evolved, and what they were used for. I’ve updated a previous blog post about this below.

Traditional ASP Applications

In traditional ASP applications, every time a user requested a page, the application retrieved the data from the database to render it. Clearly this doesn't scale well: it works for a small number of users, but not for a large one. Often, though, this data access can be completely avoided for certain types of data.

2026-03-03

Fabric RTI 101: Fabric Storage Options - Lakehouses

A lakehouse is a relatively new storage concept, and it’s designed to give you the best of both worlds. On one side, you have a traditional data lake, which is extremely flexible — you can throw files of any shape or size into it, including structured, semi-structured, and unstructured data. On the other side, you have a data warehouse, which adds the structure, schema enforcement, and query performance that analysts are used to.

2026-03-02

Echoes from the field 10 - What's in a name?

I always say that one of the things that I love about consulting or mentoring work is that I see things (mostly code) that I would never have thought of.

Sometimes, it’s good information where I learn a new technique that I hadn’t considered. But most times, it’s just something weird that a client has done.

GUID Table Names

A good example of this was a site where every table had a GUID name. Yes, I’m talking about tables with names like:
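The actual names from that site aren't shown here, but as a purely hypothetical illustration of the pattern, imagine every CREATE TABLE statement looking like this:

```sql
-- Hypothetical example of the pattern: the table name is just a GUID,
-- and nothing about it tells you what the table holds.
CREATE TABLE [B8CA4F9E-0D2B-4C6A-9F3E-1A7D5E2B6C90]
(
    [C1] int NOT NULL,
    [C2] nvarchar(100) NULL
);
```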

2026-03-01

Fabric RTI 101: Fabric Storage Options - OneLake

OneLake is really the foundation of Fabric’s storage model. The idea is simple but powerful: instead of having separate storage systems for each analytics tool or service, Fabric provides a single, unified data lake. This gives you one logical place where all of your data lives, and all the workloads in Fabric can share it.

OneLake

Technically, OneLake is built on the open Delta Lake format. Underneath, it uses Parquet files for efficient columnar storage, but it adds transactional support on top. That means multiple processes can read and write to the same data in a consistent way, with guarantees around reliability and performance. It’s open, it’s proven, and it avoids the pitfalls of closed, proprietary formats.

2026-02-28

Mixing UNION and UNION ALL Operations

Recently, I saw a subtle coding issue related to the UNION operator. With SQL Server, the UNION operator combines two rowsets into a single rowset. If UNION ALL is used then all rows are returned. With just UNION without the ALL, only distinct rows are returned. All good so far.

One of the most common performance issues that I come across is where people have used UNION when they should have used UNION ALL. That extra distinct operation is often not needed, yet it can be very expensive.
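To make the difference concrete, here is a minimal pair of queries using literal values:

```sql
-- UNION ALL returns every row from both sides, duplicates included.
SELECT 1 AS Value UNION ALL SELECT 1;   -- returns two rows: 1, 1

-- UNION removes duplicates, which requires the extra (often expensive)
-- distinct operation before results are returned.
SELECT 1 AS Value UNION SELECT 1;       -- returns one row: 1
```

When the two sides are already known to be disjoint (for example, rows filtered by non-overlapping date ranges), UNION ALL gives the same result without paying for the distinct.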

2026-02-27

Fabric RTI 101: How Fabric Connects to External Sources

One of the most powerful aspects of Fabric’s Real-Time Intelligence is how it connects to external sources. The mechanism for doing this is through Eventstreams. Eventstreams are essentially the pipelines that define where your data is coming in — the inputs — and where it’s going out — the outputs. In a later post, we’ll explore Eventstreams.

Fabric comes with a range of native connectors. These include direct connections to industry-standard technologies like Kafka, Azure Event Hubs, Azure IoT Hub, and other AMQP-based systems. That means if you already have investments in streaming infrastructure — whether in the cloud or on-premises — Fabric can plug into them without a lot of custom development.

2026-02-26