SQL: Fix: The parameters supplied for the procedure "sp_set_firewall_rule" are not valid.

I often use the procedure sp_set_firewall_rule to set server-level firewall rules for Azure SQL Database. (There's a similar call, sp_set_database_firewall_rule, to set database-level rules.) The other day though, I got an error that had me puzzled:
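
The parameters supplied for the procedure "sp_set_firewall_rule" are not valid.

The call I'd made looked something like this (the rule name and IP addresses here are placeholders, not the real values):

EXEC sp_set_firewall_rule 'TestRule', '10.0.0.1', '10.0.0.1';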

I also tried it with named parameters and got the same error.

When I looked at my previous scripts, I realised that I had used a Unicode string for the first parameter previously.

Solution

I changed 'TestRule' to N'TestRule' and it worked fine.
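
Here's a minimal sketch of the working call (again, the rule name and IP addresses are placeholders; only the name parameter needed the N prefix):

EXEC sp_set_firewall_rule N'TestRule', '10.0.0.1', '10.0.0.1';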

I've not seen a procedure before that wants a Unicode string and won't accept an ASCII string. For example, this works just fine:
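
-- sp_help is just an illustrative choice here: a regular system procedure whose
-- @objname parameter is declared as nvarchar, yet it happily accepts an ASCII literal.
EXEC sys.sp_help 'sys.objects';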

Apparently strict data type checking has always been a feature of extended stored procedures and this one checks specifically for a Unicode string. I really don't use extended stored procedures much anyway.

What threw me as well is that I couldn't find it in the list of system stored procedures. It's not there because it's actually an extended stored procedure. These used to mostly have xp prefixes, not sp prefixes, and are a good example of why I don't love using prefixes like that.

I really wish, though, that this procedure had a better error message. While the current one is strictly correct, it's not actually all that helpful.

 

SQL: Password complexity rules for Azure SQL

Azure SQL (both Azure SQL Database and Azure SQL Managed Instance) has different password complexity rules to SQL Server. I was reading an email discussion list and a poster asked where he could find the list of password complexity rules for Azure SQL. I said I'd never seen such a list.

Well it turns out that there is a list, but not where you might have thought to look. They're spelled out in this article:

Identify the right Azure SQL Database SKU for your on-premises database (Data Migration Assistant) – SQL Server | Microsoft Docs

To save you reading the whole article, at the time of writing, these were the rules for Azure SQL Database:

  • Your password must be at least 8 characters in length and no more than 128 characters in length.
  • Your password must contain characters from three of the following categories – English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, etc.).
  • Your password cannot contain all or part of the login name. (Part of a login name is defined as three or more consecutive alphanumeric characters.)
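
As a quick illustration, here's a hypothetical login (the name and password are placeholders only) whose password meets all three rules. It would be created in the master database:

CREATE LOGIN ReportingUser WITH PASSWORD = 'Tr1cky!Passphrase#2024';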

Slightly more confusingly, the article says it's talking about the server admin password, but the same list of restrictions also appears to apply to all other login/user passwords.

Azure SQL Managed Instance is exactly the same except it requires the password to be at least 16 characters in length.

I hope that helps someone, including myself when I next go looking for this.

 

SQL: UNMASK is now granular in Azure SQL

Dynamic Data Masking was added to SQL Server back in 2016. I've almost never used it since. Why? There were two reasons:

#1: The first reason was that it didn't work in conjunction with other security features. In particular, you couldn't use Dynamic Data Masking in conjunction with Always Encrypted. From the start, I've been telling the product group that this is a misstep. There's very little data that I'd want to hide from my own staff yet be happy to leave visible to a hosting provider.

This is still an issue today. Some users might just want to use Dynamic Data Masking though, without Always Encrypted, so there might be a use case there, particularly with on-premises systems.

#2: (And this was the big one) UNMASK wasn't granular enough. There was a single UNMASK permission for the whole database. That means it worked nothing like other permissions in the database. We can usually set permissions on schemas (my favourite), tables, and columns. And I didn't see UNMASK as any different. Again, a number of us made that very clear to the product team.

However, this has now changed!

I wanted to call attention to a recent blog post that mentioned, just in passing, that UNMASK is now granular. You can still apply it to the whole database as before, but you can also apply it at the schema, table, and/or column levels.
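
As a sketch (the schema, table, column, and user names below are placeholders), the grants now look like this:

GRANT UNMASK TO SupportManager;                            -- whole database, as before
GRANT UNMASK ON SCHEMA::Sales TO SupportLead;              -- every masked column in a schema
GRANT UNMASK ON Sales.Customers TO SupportAgent;           -- every masked column in a table
GRANT UNMASK ON Sales.Customers(Email) TO SupportTrainee;  -- a single masked column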

Well at least you can in Azure SQL Database (and Synapse Analytics). One of the things I love about working with Azure SQL Database is being at the front of the queue when T-SQL and DB enhancements are made.

This is great news. Thank you to the product team!

Azure: Changing Azure SQL Database Service Objective from T-SQL Commands

When I'm building Azure Data Factory pipelines, I often want to rescale (i.e. change the Service Level Objective or SLO of) Azure SQL Databases before and after processing. For example: I might have a database that sits at S0 or S1 all day long when not being processed (to allow for the odd ad hoc query), but I want it at S6 for overnight ingestion of data and loading of analytic data models. Then I want it to go back to what it was before.

This can make a huge difference to the cost of these databases, when they are used intensely but intermittently.

While it's possible to do this from PowerShell, etc. I've found that the easiest (by far) is to just execute the required commands from T-SQL.

Stored Procedure Approach?

What I really wanted to do was to put a stored procedure in each database that could be used to scale that same database. I could then have included that procedure in our SDU Tools.

But you can't do that. While the scaling would work, the stored procedure would always return an error because it would be terminated when the rescaling occurs.

The way to avoid being terminated is to query the master database instead. The annoying part is that with Azure SQL Database, you aren't allowed to put any user objects in the master database, even though it would be the perfect location for a stored procedure that changes the SLO of a database.

Just a SQL Batch

In the end, the easiest option is to just send the required SQL batch to the master database. In ADF, I can do that with a single Lookup activity. (That's the way to execute an arbitrary SQL statement in ADF.)

It's annoying that the Lookup activity only supports parameters when you're calling a stored procedure. It doesn't support parameters for SQL query batches. So I have to just put the values right into the code.

Regardless, if you want to do this, below is an example of T-SQL code that will do that. You'll need to execute it against the master database, and as a server administrator.

(Note: code might wrap)
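
Something like the following does the job (the database name and target SLO here are just placeholders; adjust them to suit):

DECLARE @DatabaseName sysname = N'Analytics';
DECLARE @TargetSLO nvarchar(40) = N'S6';

-- Only rescale if the database isn't already at the target service objective
IF NOT EXISTS
(
    SELECT 1
    FROM sys.database_service_objectives AS dso
    INNER JOIN sys.databases AS d
    ON d.database_id = dso.database_id
    WHERE d.[name] = @DatabaseName
    AND dso.service_objective = @TargetSLO
)
BEGIN
    DECLARE @SQL nvarchar(max)
        = N'ALTER DATABASE ' + QUOTENAME(@DatabaseName)
        + N' MODIFY (SERVICE_OBJECTIVE = ' + QUOTENAME(@TargetSLO, '''') + N');';
    EXEC (@SQL);
END;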

 

Making a cloud transformation, not just a migration – free course

In the dying days of PASS, one of the attempts at keeping the organization afloat was the introduction of the PASS Pro membership. One benefit of that was access to a series of Learning Experiences. Since the demise of PASS, our friends at Red-Gate have purchased the assets, and that includes those courses.

I was pleased to hear from the Red-Gate team that the courses have now been made available for free, and without any type of paywall/loginwall. That's just awesome.

The course that I built for them was on making a cloud transformation, instead of just a migration. So many times, I see CIOs/CTOs going on and on about the cloud transformation they are embarking on, yet I don't see them transforming anything. The cloud vendors love it, as it's an apparently quick way "to the cloud" for an organization. But for the organization, it often leads to lousy outcomes.

You can now see that course here.

The course is about 3 1/2 hours in length.

You'll find the other courses in that series here.

And did I mention free?

 

SQL: GREATEST and LEAST – awesome new additions to Azure SQL Database

I still get really excited when T-SQL has new language elements added. I'm not talking about new data definition or management statements. I'm talking about new functions and logic. So, as expected, I was really happy when I saw some posts saying that GREATEST and LEAST now seemed to be working in Azure SQL Database.

They've been in other database engines before but weren't part of T-SQL.

I tried them, and sure enough, they work as expected. NOTE: Intellisense hasn't caught up yet, so they're still flagged as syntax errors.

GREATEST and LEAST

GREATEST takes a list of values and returns the largest. LEAST takes a list of values and returns the smallest.
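
A quick example:

SELECT GREATEST(12, 6, 9) AS Largest,   -- returns 12
       LEAST(12, 6, 9) AS Smallest;     -- returns 6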

Note that both ignore NULL values, and unlike aggregates, don't produce any message about the NULL value being ignored either:
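
SELECT GREATEST(12, NULL, 9) AS Largest,  -- returns 12, with no warning about the NULL
       LEAST(12, NULL, 9) AS Smallest;    -- returns 9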

That's how you'd hope it would work.

This is really good news. It's so common to want to choose values on this basis. For example, I might want to offer a customer the lowest price, i.e. either their own special price, a sale price, or the normal price:
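
-- A sketch only: the table and column names here are hypothetical.
SELECT ProductID,
       LEAST(SpecialPrice, SalePrice, NormalPrice) AS OfferPrice
FROM dbo.ProductPrices;

If a customer has no special price (i.e. SpecialPrice is NULL), it's simply ignored.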

The Old Way

Before these functions existed, doing the above required either a convoluted bunch of CASE expressions or a potentially even less obvious OUTER APPLY:
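
-- The same hypothetical query written with CASE logic instead
-- (and note it still doesn't handle NULLs the way LEAST does):
SELECT ProductID,
       CASE
           WHEN SpecialPrice <= SalePrice AND SpecialPrice <= NormalPrice THEN SpecialPrice
           WHEN SalePrice <= NormalPrice THEN SalePrice
           ELSE NormalPrice
       END AS OfferPrice
FROM dbo.ProductPrices;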

And that would get messier and messier if you needed to do it for multiple columns.

This is really good news. Thank you SQL Server team. Hope it comes to an on-premises version soon.

SDU Podcast: Show 80 with guest Pedro Lopes is now available

I was really pleased to get to record a SQL Down Under podcast with an old friend Pedro Lopes recently. Pedro is a Principal Program Manager with Microsoft in the Azure Data SQL Server area.

A year or so back, I had Joe Sack on the show telling us where Intelligent Query Processing was heading, and Pedro now fills us in on where it's got to.

We recorded the show earlier this month, but I couldn't release it until after the PASS Summit, where some of the features were announced.

I hope you enjoy it.

You'll find that show, and all previous shows here.

 

SQL: PASS Learning Experience on Making a Cloud Transformation

I've been in so many companies lately where new CTOs and CIOs claim to have a cloud focus, but all they want to do is migrate all their existing systems from on-premises VMs to cloud-based VMs. They talk about making a cloud transformation but they're not transforming anything.

I was pleased to get a chance to create a short educational series for the people at PASS to cover some of my thoughts on how to make a real transformation, not just a migration.

You'll find info on it here: https://www.youtube.com/watch?v=Hq-yYQY5eLo

 

SQL Down Under Podcast 79 with Guest Mark Brown

Hi Folks,

Just a heads-up that we've released SQL Down Under podcast show 79 with Microsoft Principal Program Manager Mark Brown.

Mark is an old friend and has been around Microsoft a long time. He is a Principal Program Manager on the Cosmos DB team, where he is the feature PM for the replication and consistency features, including its multi-master capabilities and its management capabilities. He also leads a team focused on customers and community relations.

In the show, Mark provides a background on Cosmos DB for those less familiar with it, and discusses the latest licensing changes, including the new free tier. Along the way, he gets into detail on the multi-mode nature of Cosmos DB, and its consistency, availability, and indexing stories.

You'll find the show, and podcast subscription details if you want those, here: http://podcast.sqldownunder.com

 

ADF: Changing or removing your Git connection from an Azure Data Factory

I've been working a lot with ADF (Azure Data Factory) again lately. If you've started to work with ADF and aren't using Git in conjunction with it, you need to change that now.

ADF objects are represented as JSON and lend themselves nicely to being stored in Git.

Another key advantage is that if you don't have Git connected, when you're working with ADF, you don't have a Save button. If you're part way through making changes, you can either Publish them or Discard All. When you have Git in place, you get an option to Save.

NOTE: If I'm working at a site that doesn't have Git connected and I need to temporarily stop working on something, what I can do is get the JSON code for the pipeline (or whatever I'm working on), save it away manually, and then discard my changes. At least I can then put it back again later. But that's basically what the Git option does for you anyway, so just use it.

Changing or Removing the Git Connection

Today was the first time I'd needed to change the Git connection across to another repository. And it took me a while to work out how. You'll notice that in the main editing window, there's no option to change it:

I think it should be there. If not, it should at least be on the Properties page for the ADF instance. But it's not there either:

To my mind, it should be in one of those two locations.

So where is it?

In one of the least obvious bits of UI ever created, it's tucked up in the top right corner of the Dashboard. That's the last place I would have looked. That page is usually useless to me. It's great if you've just started, as it has lots of links to tutorials, etc. but otherwise, not so interesting. But that's precisely where the option is:

I hope that helps someone.