Power BI Implementation Models for Enterprises Part 3: Cloud Friendly Clients

I've been writing a series on Power BI Implementation Models for Enterprises for https://tekkigurus.com.

Part 3, which covers what I consider Cloud Friendly Clients, is now published:

https://www.tekkigurus.com/power-bi-implementation-models-part-3-cloud-friendly-clients/

Enjoy!

SQL Down Under show 85 with guest Bob Duffy discussing building data warehouses is now published

And another new SQL Down Under show is published!

Once again, I had the great pleasure yesterday to record a podcast with one of my UK-based friends, Bob Duffy.

Bob is a Principal Data Architect at Prodata's SQL Centre of Excellence in Dublin, Ireland. He helps enterprise customers build large scale enterprise data warehouses in Azure, typically integrating ERP and financial systems.

Bob is a long-term Data Platform MVP, a Microsoft Certified Architect, Master, and Analysis Services Maestro. He holds a number of Azure role-based certifications. Bob also specialises in performance tuning, architecture, load testing, DevOps, software engineering, MQ architecture, semantic modelling, and metadata-driven design on the Microsoft platform.

Back in 2005, Bob worked for Microsoft Consulting Services as a Senior Consultant and subject matter expert for BI/data in Western Europe.

In the show, we discuss the approaches he takes to building data warehouses on the Microsoft BI platforms.

I hope you enjoy it. You'll find this show (and previous shows) here: https://podcast.sqldownunder.com/

SQL: Understanding Change Data Capture for Azure SQL Database – Part 4 – Azure SQL Database Service Level Objectives

This is part 4 of a series on working with change data capture (CDC) in Azure SQL Database. This part discusses the service level objectives required to use it.

When I was reading the documentation for CDC in Azure SQL Database, I kept coming across a mention that it required at least an S3 service level objective (SLO), if you were using a DTU-based database.

I really hoped that wasn't the case.

I was left saying "please say it ain't so!".

Testing

I thought it was time to find out what actually happens when you try it.

I started by creating a database with the lowest SLO (Basic). I know a Basic database doesn't hold enough data to be particularly meaningful, but I wanted to see what would happen.

I didn't get far.

I created the table from the scripts in part 3 of this series, and then tried to do the first enable of CDC.
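For reference, the enable step being attempted here is the standard database-level call (covered in part 3 of this series):

```sql
-- Enable CDC at the database level (fails on Basic, S0, S1, and S2)
EXEC sys.sp_cdc_enable_db;
```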

To say that's disappointing is an understatement. But it makes it clear that Basic, S0, S1, and S2 aren't going to let you enable it.

What about Scaling?

Then I wondered what would happen if I enabled it on an S3, and then tried to scale down.

So I scaled the DB to S3 without issue, and enabled CDC as described in part 3. All worked as expected.

I then scaled the DB back to S2 using the Azure Portal. And alas, that failed too:
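The same scale-down can be attempted from T-SQL rather than the portal, and presumably fails the same way. A sketch, using the CDCTest database name from part 3:

```sql
-- Attempt to scale the database down from S3 to S2
-- (fails while CDC is still enabled on the database)
ALTER DATABASE CDCTest MODIFY (SERVICE_OBJECTIVE = 'S2');
```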

I suppose I'm not surprised, but I had hoped it might at least have automatically disabled CDC if it really had to.

It's a Mistake

I think these licensing restrictions for CDC in Azure SQL DB are a mistake.


Back in SQL Server 2016 SP1, I was one of the main instigators to get the product team to make sure that the programming surface of the database worked across all the tiers (Enterprise, Standard, Developer, Express) wherever possible.

This restriction on CDC goes directly against that ethos. It harks back again to large single database thinking, not cloudy thinking.

I understand that it's probably needed for performance in production scenarios, but what I'm always trying to tell the team is that production scenarios aren't the only scenarios.

Developers also need to write code. We should let them write and test functionality. It can be clear to them that it won't work the same as on a production instance, but that's no different to many other aspects of the system. A Basic SLO database won't run a complex query over a lot of data either, but we don't expect it to.

But it's important to let them write their code and make sure it basically functions, no matter what level of SLO they're working with. If you require them all to have an expensive database, just to test code or concepts, you greatly limit how many developers will use the features. 


Having those types of restrictions also restricts the cloudiness of the solution, in that you couldn't scale up/down across a range of SLOs. We often take S7 or S8 databases and temporarily "park" them at S2 while not in use. This type of restriction kills those options as well, and makes the overall solution less cloudy.

And One More Part to This Series

That's service level objectives for the use of CDC in Azure SQL DB. We'll move to accessing CDC data from another Azure SQL DB next.

  1. Why use Change Data Capture for Azure SQL Database?
  2. How Change Data Capture works in Azure SQL Database
  3. Enabling and using Change Data Capture in Azure SQL Database
  4. Change Data Capture and Azure SQL Database Service Level Objectives
  5. Accessing Change Data Capture Data from Another Azure SQL Database

 

Azure Data Factory (ADF) – Issues with parsing strings in CSVs

It's scary how much of the world's data lives in CSVs. Yet, as a standard, it's problematic. So many implementations just don't work as expected. Today, I had a client asking about why Azure Data Factory wouldn't read a CSV that, while odd, was in a valid format.

The simplest equivalent of the file that wouldn't load would be one like this:

There are meant to be four columns. The source system wrapped strings in quotes only when the string contained a comma, as that was the delimiter for the file.
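A hypothetical sample matching that description (the column names and values are invented; what matters is that the fourth line contains a quoted value with a doubled double-quote inside it):

```csv
EmployeeID,EmployeeName,Notes,City
1,Anna,Started Monday,Sydney
2,Ben,"Transferred, pending paperwork",Melbourne
3,Chen,"Known as ""CJ"", on leave",Brisbane
```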

But the fourth line in that file would not load. ADF would split it across multiple columns. It wasn't interpreting the doubled double-quotes, which are the standard way to escape a quote inside a quoted value.

I was puzzled at first, as the file seemed ok. If I uploaded that file to any of the common online CSV validators or linters, they all said it was fine.

The client had configured the dataset as follows:

The options seemed ok to them.

Turns out the issue was with the escape character. ADF defaulted to using a backslash. For the majority of CSV files that I see, it would need to be a double-quote:
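In dataset JSON terms, the fix corresponds to something like the following. This is a sketch of just the relevant DelimitedText properties, not the client's full dataset definition:

```json
{
    "type": "DelimitedText",
    "typeProperties": {
        "columnDelimiter": ",",
        "quoteChar": "\"",
        "escapeChar": "\"",
        "firstRowAsHeader": true
    }
}
```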

And sure enough, with that change in place, it loaded files similar to the one above without issue.

I don't know why that's not the default for CSV files in ADF but I do hope it helps someone else.

Fabric Down Under show 3 with guest Mark Pryce-Maher discussing SQL Warehousing in Microsoft Fabric

I had the great pleasure to get to spend time today, talking with Mark Pryce-Maher. We were discussing the SQL warehousing experience in Microsoft Fabric.

Mark is a Senior Program Manager at Microsoft, where he's working closely with the Microsoft Fabric team. He has a strong background in many areas of data, most recently prior to Fabric, with helping customers migrate to Azure Synapse Analytics. He has worked extensively with the dedicated SQL pools in that service.

SQL is one of the core experiences for data engineers in Microsoft Fabric. In an upcoming show, I'll discuss the other core experience with Spark notebooks.

In the show, Mark provides a solid introduction to what is present, and we dive into many areas of how it works, and how you might use it.

You'll find the show here: https://fabricdownunder.com

I hope you find it useful.

SQL: Understanding Change Data Capture for Azure SQL Database – Part 3 – Enabling and Using CDC

This is part 3 of a series on working with change data capture (CDC) in Azure SQL Database. This part discusses how to enable it and how to use it.

To show how this works, I have created an Azure SQL Database called CDCTest. I created it as a DTU-based database with a service level objective (SLO) of S3. I'll discuss more about the licensing implications of CDC in part 4.

Connecting and Setup

I've connected to the database using SQL Server Management Studio (SSMS) and opened a query window to the new database. I've then executed the following to create the objects:
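The script itself isn't reproduced here, but a minimal sketch that matches the rest of the post would be the following. The NewEmployees table name comes from later in the post; the column names are my assumptions. Note that net changes support (used below) requires a primary key or unique index:

```sql
-- Minimal setup sketch: a table with a primary key
CREATE TABLE dbo.NewEmployees
(
    NewEmployeeID int NOT NULL
        CONSTRAINT PK_dbo_NewEmployees PRIMARY KEY,
    EmployeeName nvarchar(100) NOT NULL
);
```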

If I check the entry in sys.databases by executing this command:
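The check is a query along these lines:

```sql
-- is_cdc_enabled shows whether CDC is enabled at the database level
SELECT [name], is_cdc_enabled
FROM sys.databases
WHERE [name] = N'CDCTest';
```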

The output of the column is as shown:

CDC not enabled

Enabling CDC at the DB Level

So let's start by enabling CDC for the database, by executing:
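That's a single call, executed in the target database:

```sql
-- Enable CDC at the database level
EXEC sys.sp_cdc_enable_db;
```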

Checking sys.databases the same way again now shows it enabled:

CDC is enabled

That's how we can check if it's enabled at the DB level. But there are other changes that have occurred. First, there's a new schema that's been added:

New CDC schema visible

And it contains a number of objects:

CDC schema objects

Enabling CDC at the Table Level

Next, let's enable it at the table level and see what's changed.
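Based on the description that follows, the call would be along these lines (a sketch; the exact parameters used aren't shown in the post):

```sql
-- Enable CDC for dbo.NewEmployees, with net changes support
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'NewEmployees',
    @supports_net_changes = 1,
    @role_name = NULL;  -- no gating role provided
```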

We did several things here. We said that the source table is called NewEmployees and that it lives in the dbo schema. We then said that this capture instance will support net changes. That means that as well as providing each individual change, we can ask CDC for the net effect of a set of changes. We'll see that later. And finally, we told it that for now, only admins can access the data. Otherwise, we'd have had to provide the name of a role that contains the users who can query this.

We can then see that the table is enabled for CDC by querying sys.tables:
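A query like this shows it:

```sql
-- is_tracked_by_cdc shows whether CDC is enabled for each table
SELECT [name], is_tracked_by_cdc
FROM sys.tables;
```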

sys.tables

But wait a minute. We only created one table, right? What are these others? We can find them if we expand the tables in the database:

List of tables - more than expected

Note all these extra ones are system tables.

One other thing we did, even though we didn't specify it, is to create a capture instance called dbo_NewEmployees. That's the default name but we could have named it by using a parameter. Each table can have two capture instances to allow for handling schema changes and/or other requirements.

We can see the list of capture instances for a table by using this query:
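The query is a call to a system procedure:

```sql
-- List the capture instances configured for dbo.NewEmployees
EXEC sys.sp_cdc_help_change_data_capture
    @source_schema = N'dbo',
    @source_name = N'NewEmployees';
```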

capture instance details

This shows the configuration of the capture instance, the start and end points of the log that it relates to, and more, like the name of the index in the table, etc.

Using CDC for the Table

Now let's make use of CDC for this table. I'll start by inserting three rows, then updating one of them.
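A sketch of those changes, using invented values (the column names match the setup sketch earlier, which was itself an assumption):

```sql
-- Three inserts, then one update
INSERT dbo.NewEmployees (NewEmployeeID, EmployeeName)
VALUES (1, N'Anna'), (2, N'Ben'), (3, N'Chen');

UPDATE dbo.NewEmployees
SET EmployeeName = N'Benjamin'
WHERE NewEmployeeID = 2;
```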

Now let's check out what happened by calling the CDC functions:
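The calls look like this, using the function names created for the dbo_NewEmployees capture instance:

```sql
-- Get the LSN range available to the capture instance
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_NewEmployees');
DECLARE @to_lsn binary(10) = sys.fn_cdc_get_max_lsn();

-- Every individual change in the range
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_NewEmployees(@from_lsn, @to_lsn, N'all');

-- The net effect of the changes in the range
SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_NewEmployees(@from_lsn, @to_lsn, N'all');
```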

When calling these functions, I needed to specify a range of log sequence numbers (LSNs). I called sys.fn_cdc_get_min_lsn to get the minimum value available to the capture instance (note: not to the table), and used sys.fn_cdc_get_max_lsn to get the maximum value that's available from the log right now.

Normally, I'd want to record where I'm up to and get values past that point.

The output of the first SELECT using cdc.fn_cdc_get_all_changes_dbo_NewEmployees was:

Get all changes

Note that it shows every operation, i.e. the three inserts (with __$operation as 2) and the update (with __$operation as 4). The primary key value is also shown, along with the LSNs where it all occurred.

Also note that the name of the capture instance is part of the name of the function that was created when we enabled CDC for the table.

Often I do want every operation, but sometimes I don't care about all the individual operations. I just want the net effect. That's what's shown in the second SELECT using cdc.fn_cdc_get_net_changes_dbo_NewEmployees:

Net effect

This one is interesting. Inserting three rows and then updating one of them has the same net effect as inserting three rows, where one of them already contains the updated value. And that's what you get from this function.

And that's the basic enabling and use of CDC in Azure SQL DB. We'll move to licensing implications in the next part.

  1. Why use Change Data Capture for Azure SQL Database?
  2. How Change Data Capture works in Azure SQL Database
  3. Enabling and using Change Data Capture in Azure SQL Database
  4. Change Data Capture and Azure SQL Database Service Level Objectives
  5. Accessing Change Data Capture Data from Another Azure SQL Database

 

ADF: Replacing carriage returns in Data Factory expression language

This one had me stumped lately.

I had a variable in ADF where the value contained carriage returns. I needed to remove them.

I guessed that would be easy:

@replace(variables('fileContent'), '\r', '')

But no matter what variation of this I tried, it wouldn't remove them.

Thanks to the genius of fellow MVP Simon Sabin, the answer was:

@replace(variables('fileContent'), base64ToString('DQ=='), '')

Note: if you need a line feed (LF) it's base64ToString('Cg=='), and for a carriage return / line feed pair (CRLF) it's base64ToString('DQo=').

Thanks Simon, and I hope this helps someone else.


FIX: Data Factory ODBC linked service fails to Apply and returns Internal Server Error

I was working with a client who was having trouble debugging an ADF pipeline, related to an ODBC linked service not working as expected.

The user had configured the connection string property of an ODBC connection this way:

  •  Set a parameter to the linked service as ServiceDSN
  •  Configured the connection string as @concat('DSN=',linkedService().ServiceDSN)

The Test Connection for that worked fine, but clicking Apply failed with an Internal Server Error. Specifically, the error was:

Failed to encrypt linked service credentials on linked self-hosted IR 'XXX-XXXX01' through service bus, reason is: InternalServerError, error message is: Internal Server Error

Absolutely no idea why. Has to be a bug.

Other Symptoms

What it had done, though, was leave the linked service connection details in a weird state. Trying to access it via the Test Connection option at the dataset level showed Data source name too long.

What finally gave me a clue was that when I looked at the error message in the log on the shared integration runtime, it actually said Data source name too long not found in mapping in connector. Now apart from the lousy English on this one, it's interesting that in the UI, only the first part of the message surfaced. The additional not found part was a major hint. It wasn't finding the entry for the linked service, for the interactive mode used during debugging in the portal.

Solution

Anyway, the solution was to configure the connection string as DSN=@{linkedService().ServiceDSN} instead. That tests just the same, but doesn't throw the internal server error when you try to apply it. And it works as expected.

No idea why the way you construct the connection string matters, as they both return the same string, but it does. Both methods test fine, but one explodes when you try to apply it.

Another Related Error

One other thing I saw periodically during testing was an error that said:

Format of the initialization string does not conform to specification starting at index 0

This error occurs if the connection string happens to just contain the DSN and not the string with the DSN= prefix.

Hope any/all of these help someone else.

Cosmos Down Under podcast 7 with guest Rodrigo Souza is now published!

I was able to record another new Cosmos Down Under podcast today. My guest was Microsoft Senior Program Manager Rodrigo Souza.

In the show, we discussed the Change Data Capture feed for the Analytical store in Azure Cosmos DB. This is a powerful new capability and worth learning about.

I hope you enjoy the show.

https://podcast.cosmosdownunder.com/

 

SQL: Understanding Change Data Capture for Azure SQL Database – Part 2 – How does it work?

In part 1 of this series, I discussed the positioning of Change Data Capture. In part 2, I want to cover how it works.

Log Reading

There are many ways that you can output details of changes that occur in data within SQL Server. Many of those methods require actions to occur at the time the data change is made. This can be problematic.

The first problem with this is the performance impact on the application that's making the change. If I update a row in a table and part of the process writes details of that change to some type of audit or tracking log, I've now increased the work that needs to happen in the context of the application that's making the change. Generally, what this means is that I've slowed the application down by at least doubling the work that needs to be performed. That might not be well-received.

The second potential problem is even nastier. What if the change tracking part of the work fails even though the original update works? If I've done the change tracking work in the context of the original update (particularly if it's done as part of an atomic process), by adding tracking I might have broken the original application. That certainly wouldn't be well-received.

So what to do?

The best answer seems to be to work with the transaction log that's already built into SQL Server. By default, it does have details of the changes that have been occurring to the data. It can be read asynchronously so delays in reading it mostly won't affect the original data changes at all (there are only rare exceptions to this). If the reading of the logs failed, the problem can be corrected and the reading can be restarted, all again without affecting the main updates that are occurring.

And that's what Change Data Capture does. It uses the same log reader agent that has been part of SQL Server for a very long time. Previously though, it was used for Transactional Replication. In fact if you use both Transactional Replication and Change Data Capture on the same SQL Server system, they share the same instance of the log reader. The SQL Server Agent is used to make them both work.

SQL Server Agent – Isn't that missing?

When we're working with Azure SQL Database, things are a bit different. Currently, we don't have any concept of Transactional Replication. That could change but right now, it's not there. So sharing the log reader isn't an issue.

But I also mentioned that with SQL Server, it's the SQL Server Agent that kicks off the log reading agent. And with Azure SQL Database, we don't have SQL Server Agent either!

The Azure SQL Database team have instead provided a scheduler that runs the log reader (called the capture process), and also runs the required clean-up tasks (which SQL Server performs with a separate agent job). This is all automated and requires no maintenance from the user.

Change Data Capture (CDC) Data Flow

The data flow with CDC is basically like the following:

CDC Data Flow

  1. The original app sends a change (insert, update, delete) to a table in the Azure SQL Database.
  2. The change is recorded in the transaction log.
  3. Some time later (usually not long though), the change is read by the capture process and stored in a change table.
  4. The Data Warehouse (DW) or target system or ETL system makes a call to a set of CDC functions to retrieve the changes.

Everything in the dotted box above is part of, and contained within, the Azure SQL Database.
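Step 4, for example, ends up as a call to the table-valued functions that CDC generates. A sketch, using a hypothetical capture instance named dbo_MyTable:

```sql
-- A consumer asks CDC for all changes in an LSN range
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_MyTable');
DECLARE @to_lsn binary(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all');
```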

Upcoming

In the next section, I'll show you the code that's required, and show you the changes that occur to the Azure SQL Database when you enable CDC.

  1. Why use Change Data Capture for Azure SQL Database?
  2. How Change Data Capture works in Azure SQL Database
  3. Enabling and using Change Data Capture in Azure SQL Database
  4. Change Data Capture and Azure SQL Database Service Level Objectives
  5. Accessing Change Data Capture Data from Another Azure SQL Database