Shortcut: Code outlining in SQL Server Management Studio

For some years now, SQL Server Management Studio (SSMS) has had the ability to use code outlining, the same way that other Visual Studio applications can.

This can be very useful when you are trying to navigate around a large script file.

The simplest usage is to collapse or expand a region of code. Note that in the following script, code regions have been automatically added by SSMS:
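The original screenshots aren't needed to try this. A script along these lines (all names here are hypothetical) is enough to see the behavior; SSMS adds an outline region around each batch and each procedure body:

```sql
-- SSMS automatically adds collapsible outline regions around
-- multi-line statements and around each procedure definition
CREATE PROCEDURE dbo.GetCustomers
AS
BEGIN
    SELECT CustomerID, CustomerName
    FROM Sales.Customers;
END;
GO

CREATE PROCEDURE dbo.GetOrders
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM Sales.Orders;
END;
GO
```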

This allows us to click on the outline handles, and collapse the code:

Note that when a region of code is collapsed, the name shown for the region is the first line of the code within it, truncated.

If you hover over the ellipsis (the three dots) at the beginning of the collapsed region, you'll be shown what's contained within the region:

Now, what's missing?

I'd love to be able to just drag regions around.

I'd also love to be able to name the regions better. It's not too bad when the regions are procedures or functions, but for other chunks of code, there's really no good option. Note that if I add a comment immediately above the code, it's not part of the same region. It might be better if it were, or if a specific comment could be treated as a region heading:

In the Edit menu, the Outlining submenu doesn't show anything else useful at this point, apart from bulk operations:

SDU Tools: DatesBetween – all dates between two dates in T-SQL

In our free SDU Tools for developers and DBAs, we have added many tools that help to manipulate dates.

When creating a date dimension (as part of dimensional modeling), you need to be able to get a list of all the dates between a start date and an end date. There are many other reasons why you might need to do this as well.

So we've added a table-valued function called DatesBetween to do just this. It takes a start date and an end date as parameters and returns all dates between. As well as the date values, it also numbers each of the dates.
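A call looks like this (SDU_Tools is the schema the tools install into; the exact output column names may differ slightly from this sketch):

```sql
-- Returns one numbered row per date, from the start date
-- to the end date inclusive
SELECT *
FROM SDU_Tools.DatesBetween('20240101', '20240105');
```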

In the main image above, you can see an example of it in use.

You can also see it in action here:

To become an SDU Insider and to get our free tools and eBooks, please just visit here:

Opinion: DIY security is not security

I spend a lot of time working in software houses. One of the nastiest things that I see again and again and again, is developers attempting to roll their own security and authentication mechanisms.

Spend a moment and think about how many security incidents the big companies (Google, Apple, Microsoft, etc.) have had over the years. Now think about how much effort they've put into doing it right, yet they still have issues at times.

The scary part about trying to do this yourself is that you often don't even know how scary what you are doing is.

Apart from the ones who do a reasonable job of password hashing, etc., I also see a surprising number who still store plain-text passwords, or who think that applying some "special algorithm that they wrote" to "encrypt" passwords or other private information is acceptable.

It's not.

I cringe every time I see someone who's written an algorithm that obfuscates a value before storing it. Worse is when they refer to it as "encryption" within the organization.

So my post today is just a simple plea:

Please don't do this.

The minute you find yourself writing "encryption algorithms" or authentication code, just stop. Just because you think you've got away with it for years, don't tell yourself that you don't have an issue.

I've seen the outcome at sites where this all goes wrong, and it's not pretty. You do not want to be anywhere near it when the finger-pointing starts. It all ends in tears.

Image by Tom Pumford



SQL: More on finding rows that have changed using HASHBYTES and FOR JSON PATH

In a previous post, I wrote about how to determine if a set of incoming values for a row are different to all the existing values in the row, using T-SQL in SQL Server.

I later remembered that I'd seen a message by Adam Machanic a while back, talking about how FOR JSON PATH might be useful for this, so I did a little more playing around with it.

If you are using SQL Server 2016 or later, I suspect this is a really good option.

Here's an example of using it to hash all the rows from the Purchasing.PurchaseOrders table in WideWorldImporters:
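The pattern is along these lines (a sketch; depending on your version you may need to adjust how the FOR JSON subquery is nested):

```sql
SELECT po.PurchaseOrderID,
       HASHBYTES('SHA2_256', j.RowData) AS RowHash
FROM Purchasing.PurchaseOrders AS po
CROSS APPLY
(
    -- Serialize the whole row to JSON; INCLUDE_NULL_VALUES
    -- keeps columns that are NULL in the output
    SELECT (SELECT po.* FOR JSON PATH, INCLUDE_NULL_VALUES) AS RowData
) AS j;
```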

If you're not familiar with CROSS APPLY, it's basically used to call a table-valued function (TVF) for every row of a source table. In this case, the source table is Purchasing.PurchaseOrders, and for every row in that table, we're calling the code below. It's not really a TVF, but it acts like one: it can return a table of data for every incoming row from Purchasing.PurchaseOrders.

In this case, it's being used to get all the data in the row, and then push it all into JSON. The INCLUDE_NULL_VALUES option is a key trick, as it ensures that NULL values aren't ignored. Here's how it would look without the hashing:
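Without the hashing, the same sketch just returns the JSON text for each row:

```sql
SELECT po.PurchaseOrderID, j.RowData
FROM Purchasing.PurchaseOrders AS po
CROSS APPLY
(
    SELECT (SELECT po.* FOR JSON PATH, INCLUDE_NULL_VALUES) AS RowData
) AS j;
-- RowData holds the row as JSON, along the lines of
-- [{"PurchaseOrderID":1,"SupplierID":2,...}]
```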

Now, note that I've used po.* here to get all the columns in the table. Because the hash itself would also be stored in the table, and there's no syntax to say "all columns except one", we'd likely have to list all the columns here to avoid including the stored hash value in the hash calculation.

After using the hashing though, you can see the output in the main image above.

If you are working with SQL Server 2016 or later, this could be a good option. Thanks to Adam for suggesting it.



Shortcut: Manually prompting for and refreshing Intellisense in SSMS

Intellisense is one of the best things that's ever been added to Visual Studio or to SQL Server Management Studio (SSMS). It's hard to remember back to before it was added, or how we worked then.

I had a young friend from the United Kingdom who had just completed a Computer Science degree, and one of the things he was most proud of was that he knew so many HTML tags and which attributes went with which tags. When I showed him HTML Intellisense in Visual Studio, I think he was about to cry.

While Intellisense in SSMS pretty much works as expected, there are a few things that can go wrong to confuse it.

The first problem is that it caches the list of objects so that it can perform well. But the cache can get out of sync with reality. A common cause of that is executing T-SQL commands in one query window while using those same objects in another window.

I've seen people quite puzzled about this but it's easy to fix. On the Edit menu, you can see that the shortcut key to refresh the local cache is Ctrl-Shift-R:

So if you see a bunch of unexpected "red squigglies", first thing to do is to hit Ctrl-Shift-R.

Another thing that can happen is that you get into a situation where the prompted values won't appear. If I type the following code:
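Any partly-typed query will do; for instance, typing up to a period, like this sketch (Sales is just an illustrative schema name):

```sql
SELECT CustomerID, CustomerName
FROM Sales.   -- the member list appears as soon as the period is typed
```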

note that the Intellisense has appeared. But if I hit the Escape key, it will disappear again. So users wonder how to get it back. One option is to backspace over the period, then type the period again. The standard option, though, is to hit Alt-RightArrow.

An alternative is to hit Ctrl-Space, which is easier to reach anyway.

SDU Tools: Analyze SQL Server Table Columns

I spend a lot of time reviewing other people's databases. Apart from looking at table designs, data types, keys, etc., I'm often interested in what the typical data in the table looks like.

In our free SDU Tools for developers and DBAs, we have added a special tool called AnalyzeTableColumns that makes this type of review easy. You point it at a database, schema, and table, and it tells you about the table.
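A call looks something like this (the parameter names shown are indicative only; check the tool's documentation for the exact signature):

```sql
EXEC SDU_Tools.AnalyzeTableColumns
     @DatabaseName = N'WideWorldImporters',
     @SchemaName = N'Warehouse',
     @TableName = N'StockItems';
```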

In the main image, you can see what it outputs for the Warehouse.StockItems table in the WideWorldImporters database. That image is truncated at the right hand side. Here are the other columns:

Now that data is somewhat like what you get from sp_help. But I often want more: I want to see what the typical data in the table looks like. So, optionally, it'll show you sample values from each column too.


You can also see it in action here:

To become an SDU Insider and to get our free tools and eBooks, please just visit here:

Opinion: Singular vs Plural Table Names in SQL Server

There is a near-religious debate in the development and DBA communities about singular and plural table names. It's pointless rehashing all the arguments but I want to spell out what I do, and why.

What's driving this post is that a developer told me I was being inconsistent because, even though I generally use plural names, he found a table where I'd used a singular name. It's not inconsistent, and this is why.

OLTP Databases 

In traditional OLTP databases, I generally use plural table names ie: my table will be Sales.Customers, not Sales.Customer. This is to make it clear that the table is a set of data about a set of customers, not information about just one customer.

The main people who complain about this are those who think a table should map directly to an object. Well, it doesn't. Even if it did, it would be a Customers object that was a collection of Customer objects. The table would not map to one Customer.

However, all rules are meant to be broken. The one exception I make is for tables that will only ever hold a single row of data ie: where more than a single row is not logical or meaningful.

So I might have a table called Application.SourceDatabase if that table will always have one and only one row. And I won't consider that to be inconsistent.
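If you want, you can have SQL Server enforce the single-row rule as well. One common pattern (a sketch, with hypothetical names) is a key column constrained to a single value:

```sql
CREATE TABLE Application.SourceDatabase
(
    -- The CHECK constraint plus the primary key means
    -- only the row with SourceDatabaseID = 1 can ever exist
    SourceDatabaseID int NOT NULL
        CONSTRAINT PK_Application_SourceDatabase PRIMARY KEY
        CONSTRAINT CK_Application_SourceDatabase_SingleRow
            CHECK (SourceDatabaseID = 1),
    DatabaseName sysname NOT NULL,
    ConnectionString nvarchar(max) NOT NULL
);
```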

OLAP Databases

Dimensional warehouses are perhaps treated differently. For me, it depends upon whether there will be a view layer above the database that's used by any analytic system that accesses it.

I want analytic systems to see human-presentable data.

By human-presentable, I mean names that are ready to go directly into analytic or reporting models. This does mean they will have, for example, embedded spaces in names where sensible.

So I use one of two options for dimensional models:

  • (Less preferred) The database has singular table names, all names have embedded spaces where needed, except for key columns, lineage columns, etc. that are not ever part of the UI. The database can be consumed directly by those analytic applications.
  • (More preferred) The database is set up with the same naming as an OLTP database above, and has a view layer that provides the human-readable data. The analytic applications talk to the view layer, not to the database tables.
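As a sketch of the more preferred option (all the names here are hypothetical), the table keeps OLTP-style naming while the view exposes human-presentable names:

```sql
-- Analytic tools query the view, never the underlying table
CREATE VIEW Analytics.[Stock Items]
AS
SELECT si.StockItemID AS [Stock Item Key],
       si.StockItemName AS [Stock Item],
       si.Brand AS [Brand]
FROM Warehouse.StockItems AS si;
```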

SQL: Why can't I see my files in SSMS when I want to restore?

I had a curious discussion on a mailing list the other day where the questioner was complaining that SQL Server Management Studio (SSMS) wasn't working properly. He wanted to restore a backup, and his folders did not appear in the folder list to choose from. He could see them in Windows File Explorer but couldn't see them when trying to restore a backup.

What he wasn't understanding is that when you use SSMS to perform a restore, you are sending a command to SQL Server, telling it to do the restore. It's SQL Server that opens the backup file and reads its contents, not your local copy of SSMS.

So, the issue is which files SQL Server can see, not the files that you can see.

This perhaps becomes clearer if you think about SSMS as just a client application that sends commands to a server, and that server often won't be on the same machine ie: you'll have a client system running SSMS and a server somewhere that's running SQL Server.

When you run Windows File Explorer, it's your identity that's doing the looking, either on your local machine, or on network shares on other machines.

When you restore a backup, it's SQL Server that's looking for the files, either on the server that's running SQL Server or again on some network share.
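You can see this if you script the restore out instead of running it from the dialog. The path in the command (a hypothetical one here) is interpreted by the SQL Server service, on the server:

```sql
-- This path must be visible to the SQL Server service account,
-- not to the person running SSMS
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\Backups\WideWorldImporters.bak'
WITH RECOVERY;
```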

The issue then is the identity of SQL Server and the permissions granted to that identity. If you run SQL Server Configuration Manager (one of the tools that ships with SQL Server) on the server, and select the Services option in the left-hand pane as shown in the main image above, you can see who SQL Server is running as. That's the identity you need to be concerned about.


SDU Tools: Converting between Base64 and Varbinary in T-SQL

In our free SDU Tools for developers and DBAs, we have added many tools that help to convert data between different types and formats when working with T-SQL.

In some recent consulting work, I needed to convert data from BizTalk messages into other formats. To help make this easy, I added a function that converts from Base64 (used by BizTalk messages and other applications) into varbinary.

Not surprisingly, we called it Base64ToVarbinary. And for completeness, we also added the reverse function: VarbinaryToBase64.
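For example (SDU_Tools is the schema the tools install into; 'aGVsbG8=' is the Base64 form of the bytes of the word "hello"):

```sql
SELECT SDU_Tools.Base64ToVarbinary('aGVsbG8=');   -- 0x68656C6C6F
SELECT SDU_Tools.VarbinaryToBase64(0x68656C6C6F); -- aGVsbG8=
```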

In the main image above, you can see an example of them in use.

You can also see them in action here:

To become an SDU Insider and to get our free tools and eBooks, please just visit here:

Shortcut: Fixing or improving SQL Server Books Online

I mentioned in an earlier post that I think the online version of Books Online (BOL) is now superior to the version that you can install locally.

I particularly like the way that the online books are now cross-version ie: each page covers all supported versions, instead of having a separate page for each version.

But one of the really big bonuses is that you now have the opportunity to change the documentation if you think it's incorrect or you think it could be improved. Microsoft have placed all the documentation in a Git repository and you can change it. Doing so is easier than you might expect.

Let's look at an example.

I've searched for the LEN function in T-SQL and found the page:

Note the small list of faces appearing under the command name. These are people who have contributed to this page.

Now while I'm reading the page, I see this:

I'm puzzled why that example is specific to AdventureWorks2012. That code would work on all AdventureWorks versions. So let's try to change it. At the top of the page, there's an Edit link.

I'll click this and I'm taken to the page on GitHub:

Note in the top right-hand corner that I've already logged into GitHub. Also notice the Edit pencil. Now I can't just directly change this info, so what I do is click this to "fork" the project so that I have my own copy to edit.

Now I can make the change that I want to:

And at the bottom of the page, I explain why:

Then I click Propose file change, and I'm taken to a page that asks me to create a pull request.

GitHub runs some automated checks and makes sure that I'm not suggesting a change that can't easily be merged.

If I'm happy with the differences shown, I just click Create pull request, and then click it once more to confirm. The pull request is then sent off to the person who is responsible for maintaining the page:

If they agree, your change will be merged in. Either way, you'll receive emails telling you what's going on.

If you have added a lot of changes or code, you'll also receive another email asking you to agree to be a contributor.

This is a great new option and, once you're used to it, very easy to do.