<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>The Bit Bucket</title>
    <link>https://blog.greglow.com/</link>
    <description>Thoughts from Microsoft Data Platform MVP and RD – Dr Greg Low</description>
    <language>en</language>
    <generator>Hugo -- https://gohugo.io/</generator>

    
    <item>
      <title>Fabric RTI 101: Tumbling</title>
      <link>https://blog.greglow.com/2026/04/13/fabric-rti-101-tumbling/</link>
      <guid>https://blog.greglow.com/2026/04/13/fabric-rti-101-tumbling/</guid>
      <pubDate>Mon, 13 Apr 2026 00:00:00 AEST</pubDate>

      <description>Let’s take a look at the windowing options available. The first of these is the tumbling window, and it’s the simplest type of temporal window. It slices the event stream into fixed-length, adjacent blocks of time, with no overlap and no gaps. Think of it like dividing a timeline into perfectly equal buckets — each window starts as soon as the previous one ends.
For example, let’s say we’re monitoring sales transactions. If we set up a tumbling window of five minutes, then every five minutes the system calculates totals or averages. At the end of each window, you get a new result, and then the process resets for the next five-minute slice.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>Let’s take a look at the windowing options available. The first of these is the tumbling window, and it’s the simplest type of temporal window. It slices the event stream into fixed-length, adjacent blocks of time, with no overlap and no gaps. Think of it like dividing a timeline into perfectly equal buckets — each window starts as soon as the previous one ends.</p>
<p>For example, let’s say we’re monitoring sales transactions. If we set up a tumbling window of five minutes, then every five minutes the system calculates totals or averages. At the end of each window, you get a new result, and then the process resets for the next five-minute slice.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_09_01.png" alt="Tumbling"></p>
<p>This approach is ideal when you need consistent, regular reporting intervals. Dashboards and monitoring systems often use tumbling windows because they make it easy to compare one period to the next — for example, <em>sales in the last 5 minutes</em> or <em>number of logins in the last 10 minutes</em>.</p>
<p>The trade-off is that, because windows don’t overlap, a pattern that spans a window boundary gets split: if a spike occurs at the boundary between two windows, each one will only show part of it. That’s where other window types like sliding or hopping come in, but for simple, predictable metrics, tumbling windows are often the best choice.</p>
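<p>To make that bucketing concrete, here’s a minimal sketch in Python (purely illustrative; this isn’t how the Fabric engine implements it, and the sales figures are made up): each event’s timestamp is floored to the start of its five-minute window, so every event lands in exactly one bucket.</p>

```python
from collections import defaultdict

WINDOW_SECONDS = 300  # a five-minute tumbling window

def tumbling_totals(events, window=WINDOW_SECONDS):
    """Total the sales in each fixed, non-overlapping window.

    `events` is a list of (epoch_seconds, sale_amount) pairs.
    """
    totals = defaultdict(float)
    for ts, amount in events:
        window_start = (ts // window) * window  # floor to the window boundary
        totals[window_start] += amount
    return dict(totals)

# Hypothetical sales events: three land in the first window, two in the second
sales = [(0, 10.0), (120, 5.0), (299, 2.5), (300, 7.0), (450, 3.0)]
print(tumbling_totals(sales))  # {0: 17.5, 300: 10.0}
```

<p>Because each timestamp maps to exactly one window start, there is no overlap and no gap, which is the defining property of a tumbling window.</p>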
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you really want to learn about RTI, we have an online on-demand course that you can enrol in right now. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>SQL: Calling a Scalar UDF with EXEC</title>
      <link>https://blog.greglow.com/2026/04/12/sql-calling-a-scalar-udf-with-exec/</link>
      <guid>https://blog.greglow.com/2026/04/12/sql-calling-a-scalar-udf-with-exec/</guid>
      <pubDate>Sun, 12 Apr 2026 00:00:00 AEST</pubDate>

      <description>Most SQL Server developers are aware that the EXEC statement can be used to:
Execute a stored procedure (system, user-defined, extended) Execute some dynamic SQL And most understand that you can call a scalar user-defined function in a SELECT statement.
But an option that many people don’t seem to be aware of is that you can also use EXEC to call a scalar function.
I remember noticing this in the documentation for the EXEC command some years back. Prior to that, it had never dawned on me that you could use EXEC to call a scalar UDF. It’s also in the oldest documentation that I was able to check, so I’d say it’s worked for a long time.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/EXEC_Documentation.png" alt="cover image" /><br />
        
        <p>Most SQL Server developers are aware that the EXEC statement can be used to:</p>
<ul>
<li>Execute a stored procedure (system, user-defined, extended)</li>
<li>Execute some dynamic SQL</li>
</ul>
<p>And most understand that you can call a scalar user-defined function in a SELECT statement.</p>
<p>But an option that many people don’t seem to be aware of is that you can also use EXEC to call a scalar function.</p>
<p>I remember noticing this in the documentation for the EXEC command some years back. Prior to that, it had never dawned on me that you could use EXEC to call a scalar UDF. It’s also in the oldest documentation that I was able to check, so I’d say it’s worked for a long time.</p>
<p>Here’s a trivial example of a function and calling it as I would have in the past:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span>USE tempdb;  
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">GO</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">CREATE</span> <span style="color:#66d9ef">FUNCTION</span> dbo.SayHello(<span style="color:#f92672">@</span>WhoTo NVARCHAR(<span style="color:#ae81ff">100</span>))  
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">RETURNS</span> NVARCHAR(<span style="color:#ae81ff">120</span>)  
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">AS</span>  
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">BEGIN</span>  
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">RETURN</span> N<span style="color:#e6db74">'Hello '</span> <span style="color:#f92672">+</span> <span style="color:#f92672">@</span>WhoTo;  
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">END</span>;  
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">GO</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">SELECT</span> dbo.SayHello(N<span style="color:#e6db74">'Greg'</span>);  
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">GO</span>
</span></span></code></pre></div><p>Now clearly I’d never use a function like that. The performance impacts would be horrid. But I wanted to keep this very simple.</p>
<p>Now, notice that you can also call it like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span><span style="color:#66d9ef">DECLARE</span> <span style="color:#f92672">@</span>ReturnValue NVARCHAR(<span style="color:#ae81ff">120</span>);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">EXEC</span> <span style="color:#f92672">@</span>ReturnValue <span style="color:#f92672">=</span> dbo.SayHello N<span style="color:#e6db74">'Greg'</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">SELECT</span> <span style="color:#f92672">@</span>ReturnValue;
</span></span></code></pre></div><p>Before the day I first read that in the EXEC documentation, I would have guessed this wouldn’t work.</p>
<h2 id="learn-more-about-advanced-t-sql">Learn more about Advanced T-SQL</h2>
<p>If you really want to learn about SQL Server Advanced T-SQL, we have an online on-demand course that you can enrol in right now. You’ll find it at 





  <a href="https://sqldownunder.com/courses/ats">SQL Server Advanced T-SQL for Developers and DBAs</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>Fabric RTI 101: Applying Temporal Windows</title>
      <link>https://blog.greglow.com/2026/04/11/fabric-rti-101-applying-temporal-windows/</link>
      <guid>https://blog.greglow.com/2026/04/11/fabric-rti-101-applying-temporal-windows/</guid>
      <pubDate>Sat, 11 Apr 2026 00:00:00 AEST</pubDate>

      <description>One of the challenges with streaming data is that it never ends — it just keeps flowing. If we tried to calculate totals or averages across the entire stream, the numbers would just keep growing forever, and we’d never get a meaningful result. That’s why we use temporal windows.
A temporal window lets us break the continuous stream into slices of time, so we can apply aggregations within those boundaries. For example, instead of calculating total transactions forever, we might calculate total transactions every minute, or average sensor readings every five seconds. Each window produces a result that can be stored, visualized, or acted upon in real time.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>One of the challenges with streaming data is that it never ends — it just keeps flowing. If we tried to calculate totals or averages across the entire stream, the numbers would just keep growing forever, and we’d never get a meaningful result. That’s why we use temporal windows.</p>
<p>A temporal window lets us break the continuous stream into slices of time, so we can apply aggregations within those boundaries. For example, instead of calculating total transactions forever, we might calculate total transactions every minute, or average sensor readings every five seconds. Each window produces a result that can be stored, visualized, or acted upon in real time.</p>
<p>Currently, these temporal options are supported in the Group By and the Custom SQL operations.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_08_01.png" alt="Applying Temporal Windows"></p>
<p>There are several windowing strategies, and each one is designed for different business needs. Tumbling windows divide time into fixed, non-overlapping slices — great for reporting metrics per minute or per hour. Sliding windows move forward in small steps, overlapping slightly, which makes them useful for spotting trends or detecting spikes. Session windows group events into sessions of activity separated by idle gaps, which is very useful in customer behavior analysis. And hopping windows combine the advantages of overlapping and fixed intervals to detect patterns at multiple granularities.</p>
<p>Windows turn an infinite, continuous stream into manageable chunks that we can analyze. They give us control over how aggregations are applied, and they allow business users to see insights in a way that makes sense — whether that’s every few seconds for system monitoring, every few minutes for operational dashboards, or every hour for business reporting.</p>
<p>Without windows, streaming analytics would feel overwhelming — just a firehose of data. With windows, we can organize that firehose into buckets, making it possible to compute meaningful metrics, detect anomalies, and power real-time decisions.</p>
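<p>The difference between these strategies comes down to which windows a given event belongs to. Here’s a small Python sketch (an illustration of the concept only, not Fabric code): windows start at multiples of the hop, and setting the hop equal to the window size gives tumbling behaviour, where each event belongs to exactly one window.</p>

```python
def windows_containing(ts, size, hop):
    """Return the start times of every window covering timestamp `ts`.

    Windows start at multiples of `hop` and span [start, start + size).
    """
    starts = []
    start = (ts // hop) * hop  # latest window starting at or before ts
    while start > ts - size:
        if start >= 0:
            starts.append(start)
        start -= hop
    return sorted(starts)

# A 5-minute window hopping every minute: the event at t=130s is in 3 windows
print(windows_containing(130, size=300, hop=60))   # [0, 60, 120]
# With hop == size it degenerates to a tumbling window: exactly one match
print(windows_containing(130, size=300, hop=300))  # [0]
```

<p>Session windows are different again: instead of fixed boundaries, a window stays open while events keep arriving and closes after an idle gap.</p>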
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you really want to learn about RTI, we have an online on-demand course that you can enrol in right now. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>SQL: EXEC AS USER on EXEC Statements</title>
      <link>https://blog.greglow.com/2026/04/10/sql-exec-as-user-on-exec-statements/</link>
      <guid>https://blog.greglow.com/2026/04/10/sql-exec-as-user-on-exec-statements/</guid>
      <pubDate>Fri, 10 Apr 2026 00:00:00 AEST</pubDate>

<description>The WITH EXECUTE AS clause was a great addition for defining stored procedures and functions: it changes the execution context just for the duration of the stored procedure or function. For example:
CREATE PROC SomeSchema.SomeProc WITH EXECUTE AS USER = 'Fred' AS ... Mostly I use this with the OWNER option:
CREATE PROC SomeSchema.SomeProc WITH EXECUTE AS OWNER AS ... It’s also useful during testing, when I want to temporarily change my execution context. For example:
EXEC AS USER = 'Fred'; -- Try some code here while running as Fred REVERT; But an option that most people don’t realize is possible is that you can set the execution context for a single execution like this:
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/WhoAreYou_BrettJordan.png" alt="cover image" /><br />
        
        <p>The WITH EXECUTE AS clause was a great addition for defining stored procedures and functions: it changes the execution context just for the duration of the stored procedure or function. For example:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span><span style="color:#66d9ef">CREATE</span> PROC SomeSchema.SomeProc
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">WITH</span> <span style="color:#66d9ef">EXECUTE</span> <span style="color:#66d9ef">AS</span> <span style="color:#66d9ef">USER</span> <span style="color:#f92672">=</span> <span style="color:#e6db74">'Fred'</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">AS</span>
</span></span><span style="display:flex;"><span>... 
</span></span></code></pre></div><p>Mostly I use this with the OWNER option:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span><span style="color:#66d9ef">CREATE</span> PROC SomeSchema.SomeProc
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">WITH</span> <span style="color:#66d9ef">EXECUTE</span> <span style="color:#66d9ef">AS</span> <span style="color:#66d9ef">OWNER</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">AS</span>
</span></span><span style="display:flex;"><span>... 
</span></span></code></pre></div><p>It’s also useful during testing, where I can temporarily change my execution context during testing. For example:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span><span style="color:#66d9ef">EXEC</span> <span style="color:#66d9ef">AS</span> <span style="color:#66d9ef">USER</span> <span style="color:#f92672">=</span> <span style="color:#e6db74">'Fred'</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">-- Try some code here while running as Fred
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span>REVERT;
</span></span></code></pre></div><p>But the option that most people don’t realize is possible, is that you can set the execution context for a single execution like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span><span style="color:#66d9ef">EXEC</span> (<span style="color:#e6db74">'Some command to be executed as Fred'</span>) <span style="color:#66d9ef">AS</span> <span style="color:#66d9ef">USER</span> <span style="color:#f92672">=</span> <span style="color:#e6db74">'Fred'</span>;
</span></span></code></pre></div><p>This could be particularly useful when you need to call dynamic SQL code with a specific execution context.</p>
<p>I hope that helps to simplify your code.</p>
<h2 id="learn-more-about-sql-server-administration">Learn more about SQL Server Administration</h2>
<p>If you really want to learn about SQL Server administration, we have an online on-demand course that you can enrol in right now. You’ll find it at 





  <a href="https://sqldownunder.com/courses/adm">SQL Server Administration for Developers and DBAs</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>Fabric RTI 101: Grouping</title>
      <link>https://blog.greglow.com/2026/04/09/fabric-rti-101-grouping/</link>
      <guid>https://blog.greglow.com/2026/04/09/fabric-rti-101-grouping/</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 AEST</pubDate>

      <description>When we talk about grouping in real-time data processing, we’re talking about organizing events according to attributes that are meaningful for analysis. Instead of treating the event stream as one giant firehose, grouping lets us carve it up into categories that align with how the business thinks about its data.
For example, in an IoT scenario, we might group telemetry events by device ID. This way, instead of calculating one global average temperature across thousands of sensors, we can calculate average per device, giving us much more useful insights. In a retail scenario, we could group transactions by customer ID to analyze individual purchasing patterns, or by region to monitor performance across different locations. Grouping is what enables those per-customer, per-device, or per-region dashboards that managers and operators rely on.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>When we talk about grouping in real-time data processing, we’re talking about organizing events according to attributes that are meaningful for analysis. Instead of treating the event stream as one giant firehose, grouping lets us carve it up into categories that align with how the business thinks about its data.</p>
<p>For example, in an IoT scenario, we might group telemetry events by device ID. This way, instead of calculating one global average temperature across thousands of sensors, we can calculate average per device, giving us much more useful insights. In a retail scenario, we could group transactions by customer ID to analyze individual purchasing patterns, or by region to monitor performance across different locations. Grouping is what enables those per-customer, per-device, or per-region dashboards that managers and operators rely on.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_07_01.png" alt="Grouping"></p>
<p>Grouping also acts as the foundation for aggregation. Once we’ve organized events into groups, we can apply aggregation functions — like count, sum, or average — within each group. For instance, we might calculate ‘total sales per customer per hour’, or ‘maximum vibration reading per machine in the last 5 minutes’. Without grouping, those kinds of breakdowns wouldn’t be possible.</p>
<p>Another key aspect is scalability. When streams are grouped, they can also be partitioned. This means different processors can handle different groups in parallel — one node might handle customer IDs starting with A–M, while another handles N–Z. That parallelism is essential for handling very high event volumes in real time. If we tried to process everything in one monolithic stream, the system would quickly become overloaded.</p>
<p>So grouping really gives us two big advantages: it makes analytics more relevant and precise by breaking data down into meaningful categories, and it makes pipelines more scalable and efficient by enabling parallel processing.</p>
<p>When you design real-time pipelines in Fabric, think carefully about the grouping keys you choose. They need to line up with how the business wants to slice its data, but they also need to be practical for performance and scale.</p>
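<p>As a rough illustration of grouping feeding aggregation, here’s a tiny Python sketch (illustrative only; the device IDs and readings are made up): events keyed by device ID are rolled up into a per-device average rather than one global number.</p>

```python
from collections import defaultdict

def average_per_group(events):
    """Average the value of each (group_key, value) event per group key."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for key, value in events:
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical temperature readings keyed by device ID
readings = [("dev-1", 21.5), ("dev-2", 30.0), ("dev-1", 22.5), ("dev-2", 28.0)]
print(average_per_group(readings))  # {'dev-1': 22.0, 'dev-2': 29.0}
```

<p>Because each group is computed independently, the same grouping key can double as a partitioning key, letting different processors handle different subsets of keys in parallel.</p>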
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you really want to learn about RTI, we have an online on-demand course that you can enrol in right now. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>SDU Tools: List User Heap Tables in SQL Server</title>
      <link>https://blog.greglow.com/2026/04/08/sdu-tools-list-user-heap-tables-in-sql-server/</link>
      <guid>https://blog.greglow.com/2026/04/08/sdu-tools-list-user-heap-tables-in-sql-server/</guid>
      <pubDate>Wed, 08 Apr 2026 00:00:00 AEST</pubDate>

      <description>It’s common advice that most SQL Server tables should have a clustered index. There are some exceptions to this but it’s a pretty general rule, and if in doubt, you should follow it. (Note that this is not the same as having a primary key).
I regularly come across tables without clustered indexes for all the wrong reasons. So, in our free SDU Tools for developers and DBAs, we added a tool that can look for user tables that don’t have a clustered index. No surprise, it’s called ListUserHeapTables because a table without a clustered index is a heap.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/SDU_Tools_ListUserHeapTables.png" alt="cover image" /><br />
        
        <p>It’s common advice that most SQL Server tables should have a clustered index. There are some exceptions to this but it’s a pretty general rule, and if in doubt, you should follow it. (Note that this is not the same as having a primary key).</p>
<p>I regularly come across tables without clustered indexes <em>for all the wrong reasons</em>. So, in our free SDU Tools for developers and DBAs, we added a tool that can look for user tables that don’t have a clustered index. No surprise, it’s called <strong>ListUserHeapTables</strong> because a table without a clustered index is a heap.</p>
<p>It takes three parameters:</p>
<p><strong>@DatabaseName</strong> sysname - the database to look into<br>
<strong>@SchemasToList</strong> nvarchar(max) - a comma-delimited list of schemas to check (or ‘ALL’)<br>
<strong>@TablesToList</strong> nvarchar(max) - a comma-delimited list of tables to check (or ‘ALL’)</p>
<p>This procedure was added in an early version of SDU Tools, but in version 27 we’ve made it more useful. As well as the schema name and table name, it now provides a page count and a forwarded record count.</p>
<p>Forwarded records occur in a heap when a modified row no longer fits in its original location and is moved elsewhere, leaving a forwarding pointer behind. That pointer avoids the need to update all the related nonclustered indexes as well.</p>
<h3 id="find-out-more">Find out more</h3>
<p>You can see it in action in the main image above, and in the updated video here:</p>
<p>



  



<a href="https://www.youtube.com/watch?v=jt0aWk0JRsU" target="_blank" rel="noopener noreferrer"
   style="position: relative; display: block; width: 100%; max-width: 560px; aspect-ratio: 16 / 9; border-radius: 12px; overflow: hidden;">
  <img src="https://img.youtube.com/vi/jt0aWk0JRsU/hqdefault.jpg" alt="YouTube Video"
       style="width: 100%; height: 100%; object-fit: cover; display: block;">
  <span style="
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
    background: rgba(0,0,0,0.6);
    border-radius: 50%;
    width: 64px;
    height: 64px;
    display: flex;
    align-items: center;
    justify-content: center;
  ">
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 68 48" width="36" height="36">
      <path d="M66.52 7.09a8.27 8.27 0 00-5.82-5.84C56.73 0 34 0 34 0S11.27 0 7.3 1.25a8.27 8.27 0 00-5.82 5.84A85.19 85.19 0 000 24a85.19 85.19 0 001.48 16.91 8.27 8.27 0 005.82 5.84C11.27 48 34 48 34 48s22.73 0 26.7-1.25a8.27 8.27 0 005.82-5.84A85.19 85.19 0 0068 24a85.19 85.19 0 00-1.48-16.91zM27 34V14l18 10z" fill="#fff"/>
    </svg>
  </span>
</a>


</p>
<p>Access to SDU Tools is one of the benefits of being an SDU Insider, along with access to our other free tools and eBooks. Please just visit here for more info:</p>
<p>





  <a href="http://sdutools.sqldownunder.com">http://sdutools.sqldownunder.com</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>Fabric RTI 101: Joining with Other Data Sets</title>
      <link>https://blog.greglow.com/2026/04/07/fabric-rti-101-joining-with-other-data-sets/</link>
      <guid>https://blog.greglow.com/2026/04/07/fabric-rti-101-joining-with-other-data-sets/</guid>
      <pubDate>Tue, 07 Apr 2026 00:00:00 AEST</pubDate>

      <description>The Join operation allows you to bring together data from multiple sources in real time — much like performing a SQL join on continuously arriving data.
Imagine you have one stream of telemetry from IoT devices and another stream of configuration updates.
By joining them on a common key, such as a device ID, you can enrich your telemetry with configuration or location context in real time.
Eventstream supports different types of joins, including inner joins (matching only overlapping records) and outer joins (keeping all events from one stream, even when there’s no match).
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>The Join operation allows you to bring together data from multiple sources in real time — much like performing a SQL join on continuously arriving data.</p>
<p>Imagine you have one stream of telemetry from IoT devices and another stream of configuration updates.</p>
<p>By joining them on a common key, such as a device ID, you can enrich your telemetry with configuration or location context in real time.</p>
<p>Eventstream supports different types of joins, including inner joins (matching only overlapping records) and outer joins (keeping all events from one stream, even when there’s no match).</p>
<p>These joins can happen on fields like transaction IDs, timestamps, or other logical keys.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_06_01.png" alt="Joining with Other Data Sets"></p>
<p>In practice, joining is what enables correlation — connecting events from different systems to form a complete picture. For example, matching a purchase transaction with its fraud score stream, or aligning telemetry with an alert feed.</p>
<p>By carefully defining your join conditions, you can unlock much richer insights from your real-time pipelines — turning isolated event streams into integrated, contextualized data flows.</p>
<p>Note that early on, it appeared that eventstream joins would directly support reference datasets, but at present, joins can only combine events from input streams.</p>
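<p>To sketch the idea (in plain Python, not eventstream syntax; the device IDs and fields are hypothetical): telemetry events are matched against the latest configuration event seen for the same device ID, with an inner join dropping unmatched events and an outer join keeping them.</p>

```python
def enrich(telemetry, config_updates, inner=True):
    """Join telemetry to config events on device ID, inner or (left) outer."""
    latest = dict(config_updates)  # latest known location per device ID
    joined = []
    for device, reading in telemetry:
        location = latest.get(device)
        if location is None and inner:
            continue  # inner join: drop events with no matching config
        joined.append((device, reading, location))  # outer join keeps None
    return joined

config = [("dev-1", "building A"), ("dev-2", "building B")]
telemetry = [("dev-1", 21.5), ("dev-3", 19.0), ("dev-2", 30.0)]
print(enrich(telemetry, config))               # dev-3 is dropped
print(enrich(telemetry, config, inner=False))  # dev-3 kept, location None
```

<p>Real streaming joins also need a time bound (how long to wait for a match), which is where the temporal windows discussed earlier come into play.</p>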
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you really want to learn about RTI, we have an online on-demand course that you can enrol in right now. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>SDU Tools: Nepali Date Processing in SQL Server T-SQL</title>
      <link>https://blog.greglow.com/2026/04/06/sdu-tools-nepali-date-processing-in-sql-server-t-sql/</link>
      <guid>https://blog.greglow.com/2026/04/06/sdu-tools-nepali-date-processing-in-sql-server-t-sql/</guid>
      <pubDate>Mon, 06 Apr 2026 00:00:00 AEST</pubDate>

<description>Our free SDU Tools for developers and DBAs now includes a very large number of tools, with procedures, functions, and views. Version 27 adds the first set of views and functions for working with Nepali dates. These are useful in Nepal and in a number of Buddhist-related areas.
The first tool added is a view called NepaliMonths. It returns the Nepali names for months. You can see it in the main image above.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/SDU_Tools_NepaliDateProcessing.png" alt="cover image" /><br />
        
        <p>Our free SDU Tools for developers and DBAs now includes a very large number of tools, with procedures, functions, and views. Version 27 adds the first set of views and functions for working with Nepali dates. These are useful in Nepal and in a number of Buddhist-related areas.</p>
<p>The first tool added is a view called <strong>NepaliMonths</strong>. It returns the Nepali names for months. You can see it in the main image above.</p>
<p>The next tool added is a view called <strong>NepaliMonthDays</strong>. It maps year numbers and month numbers to the length of each month. Month lengths follow a complex pattern in Nepali date processing, so a limited range of dates is supported.</p>
<p>There are also functions to convert between English dates and Nepali dates (<strong>EnglishDateToNepaliDate</strong>) and vice-versa (<strong>NepaliDateToEnglishDate</strong>).</p>
<p>These functions are computationally intensive. If you have large volumes of data to convert, consider modifying them into table-valued functions.</p>
<h3 id="find-out-more">Find out more</h3>
<p>You can see it in action in the main image above, and in the video here:</p>
<p>



  



<a href="https://www.youtube.com/watch?v=zHuayQuDxR4" target="_blank" rel="noopener noreferrer"
   style="position: relative; display: block; width: 100%; max-width: 560px; aspect-ratio: 16 / 9; border-radius: 12px; overflow: hidden;">
  <img src="https://img.youtube.com/vi/zHuayQuDxR4/hqdefault.jpg" alt="YouTube Video"
       style="width: 100%; height: 100%; object-fit: cover; display: block;">
  <span style="
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
    background: rgba(0,0,0,0.6);
    border-radius: 50%;
    width: 64px;
    height: 64px;
    display: flex;
    align-items: center;
    justify-content: center;
  ">
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 68 48" width="36" height="36">
      <path d="M66.52 7.09a8.27 8.27 0 00-5.82-5.84C56.73 0 34 0 34 0S11.27 0 7.3 1.25a8.27 8.27 0 00-5.82 5.84A85.19 85.19 0 000 24a85.19 85.19 0 001.48 16.91 8.27 8.27 0 005.82 5.84C11.27 48 34 48 34 48s22.73 0 26.7-1.25a8.27 8.27 0 005.82-5.84A85.19 85.19 0 0068 24a85.19 85.19 0 00-1.48-16.91zM27 34V14l18 10z" fill="#fff"/>
    </svg>
  </span>
</a>


</p>
<p>You can use our tools as a set or as a great example of how to write functions like these.</p>
<p>Access to SDU Tools is one of the benefits of being an SDU Insider, along with access to our other free tools and eBooks. Please just visit here for more info:</p>
<p>





  <a href="http://sdutools.sqldownunder.com">http://sdutools.sqldownunder.com</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>Fabric RTI 101: Aggregating</title>
      <link>https://blog.greglow.com/2026/04/05/fabric-rti-101-aggregating/</link>
      <guid>https://blog.greglow.com/2026/04/05/fabric-rti-101-aggregating/</guid>
      <pubDate>Sun, 05 Apr 2026 00:00:00 AEST</pubDate>

      <description>When we talk about aggregation, we’re really talking about taking huge volumes of raw events and rolling them up into something that’s usable, measurable, and actionable. A raw stream might be thousands of individual events per second — each transaction, each sensor ping, each click. By themselves, they’re useful for tracing details, but they don’t tell the bigger story.
Aggregation lets us step back and say: Instead of looking at every single reading, let’s look at the total, the average, or the maximum over a period of time. Common aggregation functions are ones you already know — COUNT, SUM, AVG, MIN, MAX.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>When we talk about aggregation, we’re really talking about taking huge volumes of raw events and rolling them up into something that’s usable, measurable, and actionable. A raw stream might be thousands of individual events per second — each transaction, each sensor ping, each click. By themselves, they’re useful for tracing details, but they don’t tell the bigger story.</p>
<p>Aggregation lets us step back and say: <em>Instead of looking at every single reading, let’s look at the total, the average, or the maximum over a period of time.</em> Common aggregation functions are ones you already know — COUNT, SUM, AVG, MIN, MAX.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_05_01.png" alt="Aggregating"></p>
<p>These functions help us turn data into metrics and KPIs that can be consumed at scale.</p>
<p>For example, a retailer might aggregate all point-of-sale transactions into total revenue per minute. That’s much easier to track in a dashboard than trying to follow every single receipt. In an IoT scenario, we could calculate the average temperature across all devices in a building every 10 seconds, rather than monitoring each device individually. For anomaly detection, we might track rolling maximum values, like the highest CPU usage in the last five minutes, to quickly spot problems.</p>
<p>Aggregation also helps control data volume and cost. Imagine if every dashboard had to query billions of raw events in real time. It wouldn’t be feasible. Instead, we pre-aggregate and reduce that firehose into compact metrics, which keeps queries fast and infrastructure efficient.</p>
<p>In practice, aggregations are often the bridge between raw streams and the dashboards or alerts that business users interact with. They allow executives to see KPIs like <strong>orders per minute</strong> or <strong>average response time</strong>, while analysts and engineers still have the option of drilling into the raw data if needed.</p>
<p>Aggregation is about compression with meaning: taking floods of raw events and turning them into real-time signals that people and systems can actually act on.</p>
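<p>As a concrete sketch of that roll-up (plain Python, not Fabric code), here's the per-minute revenue idea from above: bucket raw (timestamp, cents) events by minute and sum them. Amounts are in integer cents to keep the totals exact.</p>

```python
from collections import defaultdict
from datetime import datetime

def revenue_per_minute(events):
    """Roll raw (timestamp, cents) events up into total revenue per minute."""
    totals = defaultdict(int)
    for ts, cents in events:
        # Truncate each timestamp to the start of its one-minute bucket.
        bucket = ts.replace(second=0, microsecond=0)
        totals[bucket] += cents
    return dict(totals)

events = [
    (datetime(2026, 4, 5, 9, 30, 12), 1999),
    (datetime(2026, 4, 5, 9, 30, 47), 500),
    (datetime(2026, 4, 5, 9, 31, 3), 4250),
]
# Two buckets: 09:30 totals 2499 cents, 09:31 totals 4250 cents.
print(revenue_per_minute(events))
```

<p>Three raw events collapse into two compact metrics, which is exactly the compression-with-meaning a dashboard needs.</p>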
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you want to learn about RTI right now, we have an online on-demand course that you can enrol in. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>SDU Tools: Token Set Similarity in SQL Server T-SQL</title>
      <link>https://blog.greglow.com/2026/04/04/sdu-tools-token-set-similarity-in-sql-server-t-sql/</link>
      <guid>https://blog.greglow.com/2026/04/04/sdu-tools-token-set-similarity-in-sql-server-t-sql/</guid>
      <pubDate>Sat, 04 Apr 2026 00:00:00 AEST</pubDate>

      <description>Our free SDU Tools for developers and DBAs, now includes a very large number of tools, with procedures, functions, and views. The TokenSetSimilarity function calculates token set similarity for two strings.
It answers the question: Do these two strings contain mostly the same words, even if the order, spacing, or repetition differs?
It is useful where word order varies, or extra or missing words are common. It can also help where character-level typos are less important than the presence of words.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/SDU_Tools_TokenSetSimilarity.png" alt="cover image" /><br />
        
        <p>Our free SDU Tools for developers and DBAs now includes a very large number of tools, with procedures, functions, and views. The <strong>TokenSetSimilarity</strong> function calculates token set similarity for two strings.</p>
<p>It answers the question: <strong>Do these two strings contain mostly the same words, even if the order, spacing, or repetition differs?</strong></p>
<p>It is useful where word order varies, or extra or missing words are common. It can also help where character-level typos are less important than the presence of words.</p>
<p>The function is order-insensitive (which is good for names and titles) and is intentionally designed to be set-based (so duplicates don’t increase similarity). For token-weighting (downplaying common words like pty, ltd, inc), you might consider adding a stop-word table and filtering tokens before insert. It returns NULL on NULL or empty string input.</p>
<p>This function is computationally intensive. If you have large volumes of data to compare, consider modifying it to a table-valued function.</p>
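<p>The post doesn't restate the exact formula the tool uses, but a Jaccard-style comparison of word sets captures the idea: split each string into words, deduplicate, and compare the overlap against the union. A minimal Python sketch:</p>

```python
def token_set_similarity(a, b):
    """Jaccard-style similarity on word sets: order and repetition are
    ignored, so duplicates don't inflate the score. Returns None on NULL
    or empty input, mirroring the T-SQL function. The real SDU Tools
    implementation may tokenize or weight tokens differently."""
    if not a or not b:
        return None
    tokens_a = set(a.lower().split())
    tokens_b = set(b.lower().split())
    if not tokens_a or not tokens_b:
        return None
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

# Same words in a different order score a perfect match.
print(token_set_similarity("Acme Pty Ltd", "Ltd Acme Pty"))  # 1.0
```

<p>Note how "Acme Pty Ltd" and "Ltd Acme Pty" score 1.0 despite the reordering, which is the order-insensitive behaviour described above.</p>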
<h3 id="find-out-more">Find out more</h3>
<p>You can see it in action in the main image above, and in the video here:</p>
<p>



  



<a href="https://www.youtube.com/watch?v=sz7jqNkrRLQ" target="_blank" rel="noopener noreferrer"
   style="position: relative; display: block; width: 100%; max-width: 560px; aspect-ratio: 16 / 9; border-radius: 12px; overflow: hidden;">
  <img src="https://img.youtube.com/vi/sz7jqNkrRLQ/hqdefault.jpg" alt="YouTube Video"
       style="width: 100%; height: 100%; object-fit: cover; display: block;">
  <span style="
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
    background: rgba(0,0,0,0.6);
    border-radius: 50%;
    width: 64px;
    height: 64px;
    display: flex;
    align-items: center;
    justify-content: center;
  ">
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 68 48" width="36" height="36">
      <path d="M66.52 7.09a8.27 8.27 0 00-5.82-5.84C56.73 0 34 0 34 0S11.27 0 7.3 1.25a8.27 8.27 0 00-5.82 5.84A85.19 85.19 0 000 24a85.19 85.19 0 001.48 16.91 8.27 8.27 0 005.82 5.84C11.27 48 34 48 34 48s22.73 0 26.7-1.25a8.27 8.27 0 005.82-5.84A85.19 85.19 0 0068 24a85.19 85.19 0 00-1.48-16.91zM27 34V14l18 10z" fill="#fff"/>
    </svg>
  </span>
</a>


</p>
<p>You can use our tools as a set or as a great example of how to write functions like these.</p>
<p>Access to SDU Tools is one of the benefits of being an SDU Insider, along with access to our other free tools and eBooks. Please just visit here for more info:</p>
<p>





  <a href="http://sdutools.sqldownunder.com">http://sdutools.sqldownunder.com</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>Fabric RTI 101: Managing Fields</title>
      <link>https://blog.greglow.com/2026/04/03/fabric-rti-101-managing-fields/</link>
      <guid>https://blog.greglow.com/2026/04/03/fabric-rti-101-managing-fields/</guid>
      <pubDate>Fri, 03 Apr 2026 00:00:00 AEST</pubDate>

      <description>When working with real-time data, it’s easy for your streams to become cluttered — especially as events come from multiple sources with different structures. That’s where the Manage Fields option becomes essential.
It gives you control over the shape of your data stream. You can choose which fields to keep and drop those you don’t need, which helps reduce noise and improves performance. For example, you might remove diagnostic fields or metadata columns that aren’t needed for your analytics.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>When working with real-time data, it’s easy for your streams to become cluttered — especially as events come from multiple sources with different structures. That’s where the Manage Fields option becomes essential.</p>
<p>It gives you control over the shape of your data stream. You can choose which fields to keep and drop those you don’t need, which helps reduce noise and improves performance. For example, you might remove diagnostic fields or metadata columns that aren’t needed for your analytics.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_04_01.png" alt="Managing Fields"></p>
<p>You can also rename fields to make your dataset easier to understand — for instance, changing a field like dev_id to DeviceID, or aligning naming conventions across multiple sources.</p>
<p>Another powerful capability is creating calculated or derived fields. Maybe you want to compute a total, a ratio, or convert a timestamp into a specific format — all of that can be done within the Manage Fields transformation step.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_04_02.png" alt="Managing Fields"></p>
<p>Finally, reordering fields lets you control how the output schema looks, which makes it easier for downstream processes to work with your stream.</p>
<p>Think of Manage Fields as your stream’s cleanup and preparation tool — ensuring that what flows into your analytics is clean, well-structured, and ready for insights.</p>
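<p>Outside the designer, the same idea is just a per-event transform. Here's a hypothetical Python sketch (the field names are made up for illustration) that keeps, renames, and derives fields in one step, while unwanted diagnostic fields simply aren't carried across:</p>

```python
from datetime import datetime, timezone

def manage_fields(event):
    """Sketch of a Manage Fields step: keep Temperature, rename dev_id
    to DeviceID, and derive an ISO-8601 reading time from a Unix
    timestamp. Anything not listed (e.g. debug fields) is dropped."""
    return {
        "DeviceID": event["dev_id"],            # renamed
        "Temperature": event["temperature"],    # kept as-is
        "ReadingTime": datetime.fromtimestamp(  # derived field
            event["ts"], tz=timezone.utc).isoformat(),
    }

raw = {"dev_id": "A17", "temperature": 21.5, "ts": 0, "debug_flags": "0x3F"}
print(manage_fields(raw))
```

<p>The output schema is exactly what downstream consumers should see: clean names, a readable timestamp, and no diagnostic noise.</p>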
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you want to learn about RTI right now, we have an online on-demand course that you can enrol in. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>SDU Tools: Normalize for Search in SQL Server T-SQL</title>
      <link>https://blog.greglow.com/2026/04/02/sdu-tools-normalize-for-search-in-sql-server-t-sql/</link>
      <guid>https://blog.greglow.com/2026/04/02/sdu-tools-normalize-for-search-in-sql-server-t-sql/</guid>
      <pubDate>Thu, 02 Apr 2026 00:00:00 AEST</pubDate>

      <description>Our free SDU Tools for developers and DBAs, now includes a very large number of tools, with procedures, functions, and views. The NormalizeForSearch function normalizes a string to make it ready for search operations.
It makes strings comparable by stripping away differences that are usually meaningless for search or matching.
It helps to answer the question: If two strings refer to the same thing, what differences should I ignore before I even start comparing?
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/SDU_Tools_NormalizeForSearch.png" alt="cover image" /><br />
        
        <p>Our free SDU Tools for developers and DBAs now includes a very large number of tools, with procedures, functions, and views. The <strong>NormalizeForSearch</strong> function normalizes a string to make it ready for search operations.</p>
<p>It makes strings comparable by stripping away differences that are usually meaningless for search or matching.</p>
<p>It helps to answer the question: <strong>If two strings refer to the same thing, what differences should I ignore before I even start comparing?</strong></p>
<p>It returns NULL on NULL input.</p>
<p>This function is computationally intensive. If you have large volumes of data to compare, consider modifying it to a table-valued function.</p>
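<p>The exact rules the tool applies aren't restated above, but a typical search normalisation folds accents, lowercases, strips punctuation, and collapses whitespace. A Python sketch of that idea (not the tool's actual T-SQL source):</p>

```python
import re
import unicodedata

def normalize_for_search(value):
    """Sketch of search normalisation: fold accents, lowercase, drop
    punctuation, and collapse whitespace. Returns None on NULL input,
    matching the T-SQL function. The SDU Tools rules may differ."""
    if value is None:
        return None
    # Decompose accented characters, then drop the combining marks.
    folded = unicodedata.normalize("NFKD", value)
    folded = "".join(ch for ch in folded if not unicodedata.combining(ch))
    # Lowercase and keep only letters, digits, and spaces.
    folded = re.sub(r"[^a-z0-9 ]", " ", folded.lower())
    # Collapse runs of whitespace.
    return " ".join(folded.split())

print(normalize_for_search("  Café, du   Monde! "))  # "cafe du monde"
```

<p>After normalisation, "Café, du Monde!" and "cafe du monde" compare as equal, which is the point: differences that don't matter for matching are gone before comparison even starts.</p>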
<h3 id="find-out-more">Find out more</h3>
<p>You can see it in action in the main image above, and in the video here:</p>
<p>



  



<a href="https://www.youtube.com/watch?v=1Vv7A8re-ys" target="_blank" rel="noopener noreferrer"
   style="position: relative; display: block; width: 100%; max-width: 560px; aspect-ratio: 16 / 9; border-radius: 12px; overflow: hidden;">
  <img src="https://img.youtube.com/vi/1Vv7A8re-ys/hqdefault.jpg" alt="YouTube Video"
       style="width: 100%; height: 100%; object-fit: cover; display: block;">
  <span style="
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
    background: rgba(0,0,0,0.6);
    border-radius: 50%;
    width: 64px;
    height: 64px;
    display: flex;
    align-items: center;
    justify-content: center;
  ">
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 68 48" width="36" height="36">
      <path d="M66.52 7.09a8.27 8.27 0 00-5.82-5.84C56.73 0 34 0 34 0S11.27 0 7.3 1.25a8.27 8.27 0 00-5.82 5.84A85.19 85.19 0 000 24a85.19 85.19 0 001.48 16.91 8.27 8.27 0 005.82 5.84C11.27 48 34 48 34 48s22.73 0 26.7-1.25a8.27 8.27 0 005.82-5.84A85.19 85.19 0 0068 24a85.19 85.19 0 00-1.48-16.91zM27 34V14l18 10z" fill="#fff"/>
    </svg>
  </span>
</a>


</p>
<p>You can use our tools as a set or as a great example of how to write functions like these.</p>
<p>Access to SDU Tools is one of the benefits of being an SDU Insider, along with access to our other free tools and eBooks. Please just visit here for more info:</p>
<p>





  <a href="http://sdutools.sqldownunder.com">http://sdutools.sqldownunder.com</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>Fabric RTI 101: Filtering</title>
      <link>https://blog.greglow.com/2026/04/01/fabric-rti-101-filtering/</link>
      <guid>https://blog.greglow.com/2026/04/01/fabric-rti-101-filtering/</guid>
      <pubDate>Wed, 01 Apr 2026 00:00:00 AEST</pubDate>

      <description>When we work with streaming data, one of the first transformations we often apply is filtering. The reality is that not every event is useful. In fact, in many scenarios, the vast majority of events are just background noise. Filtering gives us a way to narrow the stream down to only the events that actually matter for the business problem we’re solving.
Take IoT telemetry as an example. A device might send thousands of readings every hour, but if 99% of them show perfectly normal operating conditions, storing and analyzing them all just adds cost and complexity. By applying a filter, we could say: only keep the events where the temperature rises above 80 degrees Celsius, or only process readings where a vibration level exceeds a set threshold. This way, we’re focusing on signals of interest instead of wasting resources on noise.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>When we work with streaming data, one of the first transformations we often apply is filtering. The reality is that not every event is useful. In fact, in many scenarios, the vast majority of events are just background noise. Filtering gives us a way to narrow the stream down to only the events that actually matter for the business problem we’re solving.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_03_01.png" alt="Filtering1"></p>
<p>Take IoT telemetry as an example. A device might send thousands of readings every hour, but if 99% of them show perfectly normal operating conditions, storing and analyzing them all just adds cost and complexity. By applying a filter, we could say: only keep the events where the temperature rises above 80 degrees Celsius, or only process readings where a vibration level exceeds a set threshold. This way, we’re focusing on signals of interest instead of wasting resources on noise.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_03_02.png" alt="Filtering2"></p>
<p>The same idea applies in financial systems. Instead of passing every single transaction downstream, maybe we only flag those above a certain value, or those coming from specific regions. This doesn’t mean we ignore the rest entirely — we could still archive them if needed — but by filtering upfront, we make sure that real-time analytics and alerts concentrate on what’s important.</p>
<p>There’s also a very practical reason for filtering: cost and performance. Every event you let into the system consumes compute, storage, and network bandwidth. The more irrelevant events you allow through, the more expensive and sluggish your pipeline becomes. By reducing the event volume early, we not only save money but also make downstream processing faster and simpler.</p>
<p>The good news is that in Fabric, filtering is easy to configure. You don’t need to write custom code — you can define filtering conditions visually in the Eventstream designer. For example, you can set a simple condition like <em>temperature > 80</em> or <em>transactionAmount > 1000</em>. That means both technical teams and business analysts can work together to define the rules that keep the data stream focused on real value.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_03_03.png" alt="Filtering3"></p>
<p>So filtering is really about precision and efficiency: cutting through the noise so that what flows through your real-time system is the data that drives action.</p>
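<p>Although Fabric lets you define these conditions visually, the same filters are easy to express in code. A small Python sketch of the two conditions mentioned above (temperature &gt; 80 or transactionAmount &gt; 1000):</p>

```python
def keep_event(event):
    """Keep only events matching the filter conditions from the post:
    temperature above 80, or a transaction amount above 1000."""
    return (event.get("temperature", 0) > 80
            or event.get("transactionAmount", 0) > 1000)

stream = [
    {"temperature": 72},          # normal reading: dropped
    {"temperature": 85},          # over threshold: kept
    {"transactionAmount": 2500},  # high-value transaction: kept
    {"transactionAmount": 40},    # routine transaction: dropped
]
filtered = [event for event in stream if keep_event(event)]
print(filtered)  # only the 85-degree reading and the 2500 transaction remain
```

<p>Four raw events become two signals of interest; everything downstream now pays for half the volume.</p>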
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you want to learn about RTI right now, we have an online on-demand course that you can enrol in. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>SDU Tools: Levenshtein Distance in SQL Server T-SQL</title>
      <link>https://blog.greglow.com/2026/03/31/sdu-tools-levenshtein-distance-in-sql-server-t-sql/</link>
      <guid>https://blog.greglow.com/2026/03/31/sdu-tools-levenshtein-distance-in-sql-server-t-sql/</guid>
      <pubDate>Tue, 31 Mar 2026 00:00:00 AEST</pubDate>

      <description>Our free SDU Tools for developers and DBAs, now includes a very large number of tools, with procedures, functions, and views. The LevenshteinDistance function calculates the Levenshtein distance between two strings.
It essentially answers the question How far apart are these two strings in terms of character edits?, where edits are inserting, deleting, or substituting a character. In this calculation, each edit has a cost of 1.
It returns NULL on NULL or empty input.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/SDU_Tools_LevenshteinDistance.png" alt="cover image" /><br />
        
        <p>Our free SDU Tools for developers and DBAs now includes a very large number of tools, with procedures, functions, and views. The <strong>LevenshteinDistance</strong> function calculates the Levenshtein distance between two strings.</p>
<p>It essentially answers the question <strong>How far apart are these two strings in terms of character edits?</strong>, where edits are inserting, deleting, or substituting a character. In this calculation, each edit has a cost of 1.</p>
<p>It returns NULL on NULL or empty input.</p>
<p>This function is computationally intensive. If you have large volumes of data to compare, consider modifying it to a table-valued function.</p>
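<p>For reference, the classic dynamic-programming calculation behind the function looks like this in Python (a sketch of the algorithm, not the tool's T-SQL source):</p>

```python
def levenshtein_distance(a, b):
    """Classic row-by-row dynamic-programming edit distance: each
    insertion, deletion, or substitution costs 1. Returns None on NULL
    or empty input, matching the T-SQL function's behaviour."""
    if not a or not b:
        return None
    previous = list(range(len(b) + 1))
    for i, ch_a in enumerate(a, start=1):
        current = [i]
        for j, ch_b in enumerate(b, start=1):
            cost = 0 if ch_a == ch_b else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

print(levenshtein_distance("kitten", "sitting"))  # 3
```

<p>The well-known example: turning "kitten" into "sitting" takes three edits (two substitutions and one insertion).</p>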
<h3 id="find-out-more">Find out more</h3>
<p>You can see it in action in the main image above, and in the video here:</p>
<p>



  



<a href="https://www.youtube.com/watch?v=AQyKbbYi3ZE" target="_blank" rel="noopener noreferrer"
   style="position: relative; display: block; width: 100%; max-width: 560px; aspect-ratio: 16 / 9; border-radius: 12px; overflow: hidden;">
  <img src="https://img.youtube.com/vi/AQyKbbYi3ZE/hqdefault.jpg" alt="YouTube Video"
       style="width: 100%; height: 100%; object-fit: cover; display: block;">
  <span style="
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
    background: rgba(0,0,0,0.6);
    border-radius: 50%;
    width: 64px;
    height: 64px;
    display: flex;
    align-items: center;
    justify-content: center;
  ">
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 68 48" width="36" height="36">
      <path d="M66.52 7.09a8.27 8.27 0 00-5.82-5.84C56.73 0 34 0 34 0S11.27 0 7.3 1.25a8.27 8.27 0 00-5.82 5.84A85.19 85.19 0 000 24a85.19 85.19 0 001.48 16.91 8.27 8.27 0 005.82 5.84C11.27 48 34 48 34 48s22.73 0 26.7-1.25a8.27 8.27 0 005.82-5.84A85.19 85.19 0 0068 24a85.19 85.19 0 00-1.48-16.91zM27 34V14l18 10z" fill="#fff"/>
    </svg>
  </span>
</a>


</p>
<p>You can use our tools as a set or as a great example of how to write functions like these.</p>
<p>Access to SDU Tools is one of the benefits of being an SDU Insider, along with access to our other free tools and eBooks. Please just visit here for more info:</p>
<p>





  <a href="http://sdutools.sqldownunder.com">http://sdutools.sqldownunder.com</a>

</p>

      ]]></content:encoded>
    </item>
    
    <item>
      <title>Fabric RTI 101: Ingestion Modes</title>
      <link>https://blog.greglow.com/2026/03/30/fabric-rti-101-ingestion-modes/</link>
      <guid>https://blog.greglow.com/2026/03/30/fabric-rti-101-ingestion-modes/</guid>
      <pubDate>Mon, 30 Mar 2026 00:00:00 AEST</pubDate>

      <description>When we talk about ingestion modes in Fabric Real-Time Intelligence, we’re really thinking about how events move from their source into a destination, such as an Eventhouse table or another downstream system.
The pattern you choose affects latency, cost, data quality, and the flexibility of your real-time architecture. There are two broad models that you’ll encounter: direct ingestion and processing before ingestion.
Direct ingestion is the simplest path. Events arrive from a source such as IoT devices, applications, or an event broker, and they are immediately written into the target system without any intermediate shaping. This mode gives the lowest latency because nothing happens in between. It is most useful when you want to preserve raw events for later analysis, replay, or transformations that happen downstream. It is also the right choice when your first priority is freshness and the consumers are able to handle any necessary cleaning or shaping themselves.
</description>

      <content:encoded><![CDATA[
        
          <img src="https://blog.greglow.com/FabricRTI101.png" alt="cover image" /><br />
        
        <p>When we talk about ingestion modes in Fabric Real-Time Intelligence, we’re really thinking about how events move from their source into a destination, such as an Eventhouse table or another downstream system.</p>
<p>The pattern you choose affects latency, cost, data quality, and the flexibility of your real-time architecture. There are two broad models that you’ll encounter: direct ingestion and processing before ingestion.</p>
<p><img src="https://greglow.blob.core.windows.net/blog/images/FabricRTI101_05_02_01.png" alt="Ingestion Modes"></p>
<p>Direct ingestion is the simplest path. Events arrive from a source such as IoT devices, applications, or an event broker, and they are immediately written into the target system without any intermediate shaping. This mode gives the lowest latency because nothing happens in between. It is most useful when you want to preserve raw events for later analysis, replay, or transformations that happen downstream. It is also the right choice when your first priority is freshness and the consumers are able to handle any necessary cleaning or shaping themselves.</p>
<p>However, many real-time workloads require more than just landing raw events. That’s where processing before ingestion becomes relevant. In this mode, events flow through a processing step, often a Stream Processing job, before they land in a table or get forwarded to another service.</p>
<h2 id="processing">Processing</h2>
<p>Processing can include filtering out unwanted events, normalising formats, joining with reference data to enrich context, detecting anomalies, or creating aggregated views such as rolling counts or metrics. This does add some latency, because transformations take time to compute, but the trade-off is usually improved data quality and reduced complexity for downstream systems.</p>
<p>Another benefit of processing before ingestion is being able to reduce storage and compute costs. By filtering noisy or duplicate events early, you avoid ingesting unnecessary data. At the same time, curated streams can make life easier for data consumers who expect well-structured, high-quality information.</p>
<p>So, neither mode is <em>better</em>; they serve different purposes. Direct ingestion optimizes for speed and fidelity, while processing before ingestion optimizes for quality and readiness for analytics. In most real-time solutions, you’ll see a combination of both depending on the requirements for each downstream consumer.</p>
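<p>To make the second model concrete, here's a hypothetical Python sketch of a pre-ingestion processing step that filters out routine readings and enriches the rest from reference data before they land. The field names and threshold are illustrative only:</p>

```python
def process_before_ingestion(events, reference):
    """Sketch of processing before ingestion: drop events in the normal
    range, then join the survivors with reference data for context."""
    for event in events:
        # Filter early: routine readings never consume downstream storage.
        if event["temperature"] <= 80:
            continue
        # Enrich: add location context from a reference lookup.
        enriched = dict(event)
        enriched["location"] = reference.get(event["device_id"], "unknown")
        yield enriched

reference = {"dev-1": "Boiler room"}
events = [
    {"device_id": "dev-1", "temperature": 95},
    {"device_id": "dev-2", "temperature": 70},
]
landed = list(process_before_ingestion(events, reference))
print(landed)
```

<p>Only the hot reading lands, and it arrives already carrying its location, so the consumer never needs to do the join itself.</p>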
<h2 id="learn-more-about-fabric-rti">Learn more about Fabric RTI</h2>
<p>If you want to learn about RTI right now, we have an online on-demand course that you can enrol in. You’ll find it at 





  <a href="https://sqldownunder.com/courses/rti">Mastering Microsoft Fabric Real-Time Intelligence</a>

</p>

      ]]></content:encoded>
    </item>
    
  </channel>
</rss>
