SQL: SELECT 1 for testing connectivity is pointless

I spend quite a bit of time tracing queries sent from applications to SQL Server.

Having now done this for a long time, I can recognize many of the data frameworks that are used by various developers. I think that if any of the developers ever spent much time looking at traces, they’d be amazed at how the requests they have made to the framework are translated to commands that are sent to SQL Server.

I’ve also been posting lately about how the quickest way to scale SQL Server is to just stop talking to it unnecessarily.

One variant of the JDBC drivers that I’ve seen has a command that, while it’s really a “nothing” for SQL Server to execute, breaks this rule.

What I see is that the framework regularly executes this command:

SELECT 1

Yep, that’s it. And why does it do this? It appears that they are doing it to check if they have connectivity to the server.

I’m sure some framework developer thought that was a good idea. These are the people who don’t get things like race conditions and timing, and they’re the sort of people who create applications that only nearly always work.

This command is completely and utterly pointless and, worse, it incurs the latency of a full round trip to the server.

Even if the server responds to your SELECT statement, that tells you NOTHING about the state of your connection immediately after the command is completed.


The fact that the server responded to you doesn’t in any way guarantee that the connection will still be there for your next command batch.

Please don’t do this.

It wastes your time, your end-user’s time, and SQL Server’s time. Worse, every little extra command that SQL Server has to process takes it away from the real processing that you want it to do.

DevOps: Should migration-based deployments use idempotent scripts?

In my last DevOps-related post, I discussed some of the issues that arise when using migration-based deployments.

Migration-based deployments are based on a series of T-SQL scripts that apply changes to your database from its current state to a desired final state.

One of the questions that arises, though, is whether or not these T-SQL scripts should be idempotent, i.e.:

Should the script be written so that you get the same outcome if you run the scripts multiple times?

Idempotence isn't strictly required for migration-based deployments if all goes well, but it might make them more reliable, and more useful when errors occur during deployment. However, creating T-SQL scripts that you can reliably run more than once isn't easy.

In essence, creating scripts like this means that instead of writing code like this:
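
For example (a sketch only, using a hypothetical dbo.Customers table):

CREATE TABLE dbo.Customers
(
    CustomerID int IDENTITY(1,1) CONSTRAINT PK_dbo_Customers PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL
);
GO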

you write code like this:
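
Again as a sketch, with the same hypothetical table, but guarded by an existence check:

IF NOT EXISTS (SELECT 1 FROM sys.tables AS t
               INNER JOIN sys.schemas AS s
               ON s.schema_id = t.schema_id
               WHERE s.name = N'dbo'
               AND t.name = N'Customers')
BEGIN
    CREATE TABLE dbo.Customers
    (
        CustomerID int IDENTITY(1,1) CONSTRAINT PK_dbo_Customers PRIMARY KEY,
        CustomerName nvarchar(100) NOT NULL
    );
END;
GO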

But what would happen if the table is there but different?

Sometimes it's also messier than you'd hope. For example, CREATE SCHEMA has to be the first statement in a batch. You can't write:
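
Something like this (a sketch, with a hypothetical Reporting schema):

IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = N'Reporting')
BEGIN
    CREATE SCHEMA Reporting;
END;
GO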

Instead, you need to write something like:
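
Again just a sketch, wrapping the statement in dynamic SQL so that it runs in its own batch:

IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = N'Reporting')
BEGIN
    EXEC (N'CREATE SCHEMA Reporting;');
END;
GO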

And apart from now losing help from IntelliSense, etc. within the statements that end up wrapped in strings, this sort of thing just gets harder and harder.

My experience is that, apart from fairly trivial situations, creating truly idempotent scripts is much, much harder than it looks, and it's far too easy to create scripts that can still end up producing different end states.

I think you are far better off validating the schema before running the migration and, if a failure occurs, recovering and re-running whenever possible.

 

 

Shortcut: Import and Export Settings in SQL Server Management Studio

Whenever I need to work on a new laptop or server, or whenever I change versions of SQL Server Management Studio, I kick myself for not remembering to export my settings, so I can import them again.

I spend quite a bit of effort getting SSMS configured the way I want, so it only makes sense to save the settings. Saving them isn't perfect but it's far better than not having done it.

From the Tools menu, choose Import and Export Settings:

As an example, let's export the settings. I'll choose Next:

Notice that it's not an all or nothing export. I can choose details of which settings or groups of settings to export.

In this case, I wanted all of them, so I just need to pick a name and a location:

And next time I change to a different machine or new version, I can just import them and pat myself on the back for remembering.

SDU Tools for March 2018 – going out tonight

I'm really happy with the additions we made to our free SDU Tools for developers and DBAs that are going out tonight:

DatesBetween – table-valued function to return dates between two dates
DateDimensionColumns – table-valued function to return typical date dimension columns for a date (useful for data warehouses)

JulianDayNumberToDate – converts a Julian day number to a date
DateToJulianDayNumber – converts a date to a Julian day number

SystemConfigurationOptionDefaults – view that shows system configuration options and their default values (and many other values)
NonDefaultSystemConfigurationOptions – view that shows system configuration options that are not at their default values

SQLServerVersionForCompatibilityLevel – function that converts a database (or server) compatibility level to a SQL Server version

SystemWords – view that shows reserved words, future reserved words, ODBC reserved words, and system data type names and their display color in SSMS
ReservedWords – view that shows SQL Server reserved words (and their display color in SSMS)
FutureReservedWords – view that shows SQL Server future reserved words (and their display color in SSMS)
ODBCReservedWords – view that shows ODBC reserved words (and their display color in SSMS)
SystemDataTypeNames – view that shows all system data type names (and their display color in SSMS)

CreateLinkedServerToAzureSQLDatabase – as the name says – makes it easy to create a linked server to an Azure SQL DB

ListPrimaryKeyColumns – comma-delimited list of primary key columns for all primary keys
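
As a rough idea of the calling pattern (a sketch only, assuming the tools are installed in an SDU_Tools schema in your database), listing every date in January 2018 might look like:

SELECT *
FROM SDU_Tools.DatesBetween('20180101', '20180131');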

You'll find more on joining SDU Insiders and getting our free resources here:

http://sdutools.sqldownunder.com

 

SDU Tools: Separate T-SQL Strings By Case

If you've ever used SQL Server Reporting Services, you'll have noticed that when you drag a database column into a table, it auto-magically converts the name of the column into a more readable English name.

Image from MSDN

Notice how when the LineTotal column has been dragged into the table, the heading has been set to Line Total with a space. What it is doing is taking a Pascal-cased or camel-cased name and separating the words with spaces, based upon where the capital letters are.

In our free SDU Tools, we have a tool that does that as well.
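
As a rough sketch of the calling pattern (the schema, function name, and parameters here are assumptions about a typical install, not a definitive reference):

SELECT SDU_Tools.SeparateByCase(N'LineTotal', N' ');          -- Pascal-cased input
SELECT SDU_Tools.SeparateByCase(N'unitPriceDiscount', N' ');  -- camel-cased input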

In the sketch above, you can see both a Pascal-cased phrase and a camel-cased phrase being separated.

You can see it in action here:

For more information on our SDU Tools, and to join our SDU Insiders to get them and our other free resources, please just look here:

http://sdutools.sqldownunder.com

 

Opinion: To find good staff, invest in communications, not buildings

Many of my customers are software houses (ISVs). In almost all of them, I hear people complaining that they can't find enough good staff. I think they are trying to tackle the wrong problem.

Most of the staff members they are trying to find are developers, and there are so many great developers out there, but you have to accept that they might not be in the location that you hope they're in.

I've seen companies struggle to hire the last remaining developers for various technologies from the pool available in their city. This often even leads to crazy situations like hiring staff they had previously rejected. They should not be drilling further and further down their local talent pool.

Worse, the more that companies hire poorly qualified staff, the more their experienced staff will spend their time fixing issues caused by people who should never have been hired in the first place. This is a formula for losing your best staff.

Some temporary respite might come from convincing people to move to the same city, but only a limited group of people can do that or will be willing to.

So many jobs are now able to be performed from distributed locations, and development jobs are prime examples of these.

Why hire poorly qualified locals at ever-increasing costs when there are many outstanding people in other locations?

Instead of fighting for the last remaining staff in a city and paying ever-increasing salaries to ever-less-capable staff, companies should make a serious commitment to enabling remote work, investing in really high-quality communications infrastructure and probably some software. It's important to get out of the mindset of needing to have all your staff in your building.


It's time to look outside. It's a big world out there, full of amazing people who could be doing great work for you.

SQL: Real triggers, SET NOCOUNT ON, and Counting Correctly

I've done a lot of Microsoft exams over the years, mostly SQL Server ones but plenty of others too. And one thing that I really don't like is when the questions are:

  • Purely academic (ie: would never happen)
  • Memory based (ie: who cares what the DTU limit is for a P3 today?)
  • Clearly not written by someone who actually uses the product

Today, I want to mention an item in the last category.

In many exams, I've seen T-SQL code for triggers. One of the instant tell-tale signs of whether the author of the question really uses triggers, or has just read the documentation and thinks they know about them, is whether or not they've used SET NOCOUNT ON.

If an application updates two rows in a table, SQL Server tells it that two rows were updated.

If you then add a trigger that, for example, updates a row in another table, SQL Server will now tell the application that three rows were updated.

This is enough to break many applications.

The application that sent the update won't want to know about other rows inserted, updated, deleted, etc. by your trigger. That will only confuse or break things.

While there are odd exceptions where you do want the default behavior, the answer to this is to use SET NOCOUNT ON. When we are teaching students to write triggers, we tell them that nearly every DML trigger should start with SET NOCOUNT ON. Check out the documentation here:

https://docs.microsoft.com/en-us/sql/t-sql/statements/set-nocount-transact-sql

SET NOCOUNT ON stops the count of affected rows being returned for an operation. The beauty of using this in a trigger is that it stops the count of rows affected by the trigger from affecting the count that the application sees.
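
As a minimal sketch (the table, column, and trigger names here are purely hypothetical), a DML trigger would then start like this:

CREATE TRIGGER dbo.TR_Orders_Insert
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON; -- stop this trigger's own row counts from reaching the client

    INSERT dbo.OrderAudit (OrderID, AuditTime)
    SELECT i.OrderID, SYSDATETIME()
    FROM inserted AS i;
END;
GO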

So, when I see a whole lot of exam questions on triggers and none of them include this, that's really disappointing in so many ways. Learn to use this in your trigger code.

(Note that we try to avoid using triggers in the first place most of the time).

 

DevOps: Dealing with issues in migration-based deployment of databases

In an earlier post, I described the difference between state-based deployments and migration-based deployments. What I want to talk about today are two of the main issues that can arise in a migration-based deployment and what to do about one of them.

When you are using a migration-based deployment technique, you are storing all the scripts that take a database from its current state to the desired final state. While tools like ReadyRoll make this a relatively easy way to perform migrations, it suffers from a few key problems:

Performance

When you are applying a series of steps to make all the required changes to a database, you may end up taking much longer than a direct move from the current state to the final state would take. For example, you might add a new index, then later decide to add another column to it. Clearly it would be faster to just create the final index, rather than create the index and then modify it.

I often see deployments where indexes are applied, removed, reapplied, tables added, then removed again, etc. This isn't a big problem for small databases but can be a major issue for larger databases.

While this is a real issue, it's one that doesn't have a simple answer when you are using migration-based deployments. It's not one that I'm going to discuss further today. Let's talk about another issue that you can do something about.

Drift

If you have multiple production servers, plus servers in other environments (like UAT, Development, Test, etc.), then in a perfect world the schemas of the databases in each of these environments would be identical. Often though, that's not what happens.

If an urgent production fix is required, someone might make a change directly on a production server. Worse, they might apply the same change differently on each of the production servers. Then they might forget to apply the same change (or apply it differently) in other environments. No matter what the cause, the schemas have drifted.

Now if you create migration scripts in the development environment, and even test them in a test environment, those same scripts might break when applied to the production servers because the scripts are dependent on the schemas being the same.

To work around this issue, your deployment process needs to start with schema comparisons, using either Visual Studio's schema compare or a tool like Redgate's SQL Compare.

You might not be able to get tools like this into the production environment, but generally, you should be able to script the database(s) in the production environment (and other environments), load the scripts in a test environment, and run the comparisons there.
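
If you want a very rough first pass while you're there (a sketch only, and no substitute for a proper comparison tool), you can compare checksums of the programmable object definitions in each environment and diff the output:

SELECT s.name AS SchemaName,
       o.name AS ObjectName,
       o.type_desc AS ObjectType,
       CHECKSUM(m.definition) AS DefinitionChecksum
FROM sys.sql_modules AS m
INNER JOIN sys.objects AS o
ON o.object_id = m.object_id
INNER JOIN sys.schemas AS s
ON s.schema_id = o.schema_id
ORDER BY SchemaName, ObjectName;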

If they aren't in sync, you need to fix that first.

Once you know that your schemas are in sync, you can proceed with a migration-based deployment with more confidence.

 

Shortcut: Change the Status Bar in SQL Server Management Studio (SSMS)

The status bar at the bottom of a query window in SQL Server Management Studio contains a wealth of information in its default configuration.

The bottom left shows the connection state:

I'll talk about connection management more in a later post. The bottom right also shows quite a bit:

In this case, it's showing me that I'm connected to a server called SDUPROD; it's running v14 of SQL Server (ie: SQL Server 2017); I'm logged on as GREGP50\Greg; my spid (or session ID) is 68; and I'm connected to the master database.

If I had a query running, the time would be counting up, and if a query had completed, it would show me the number of rows.

While this is useful, it's also configurable. The options are in Tools > Options > Text Editor > Editor Tab and Status Bar:

You can choose whether the displayed time is the elapsed time or the end time. In an earlier post, I mentioned that you can also see the end time in the Properties pane.

You can choose to remove the display of the database, login, row count, and/or server name. You can set the color used for group and/or single server connections and choose where the status bar is located. The only extra option at present is that you can move it to the top.

Some of these become more important when using multi-server queries. I'll talk about them soon.

SDU Tools: Quote a T-SQL String

I'm not a great fan of dynamic SQL (where you create a T-SQL command in code, then execute the command) but there are times when you just need to do it.

When creating dynamic SQL, it can get very painful trying to work out which strings are quoted and how many quotes you need.

To make this a bit easier, in our free SDU Tools, we added a function for QuoteString.
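
As a sketch of how it's called (assuming a typical install with the function in the SDU_Tools schema, taking just the string to quote):

SELECT SDU_Tools.QuoteString(N'O''Brien');  -- wraps the string in quotes and doubles the embedded quote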

You can see its action in the example above.

It puts an opening and closing quote around a string, but also doubles up any quotes contained within the string.

You can see it in action here:

For more information on our free SDU Tools and to get a copy of these and our other free resources by becoming an SDU Insider, follow this link:

http://sdutools.sqldownunder.com