Shortcut: Using dark theme in SQL Server Management Studio (SSMS)

SQL Server Management Studio (SSMS) is based on the Visual Studio shell. What is sometimes puzzling is that features present in Visual Studio are not present in SSMS. One of these relates to themes.
"Out of the box", Visual Studio looks like this:

When you write code, the windows look like this:

Some people don't like the "whiteness" of the whole screen and find darker colors easier to work with for long periods. In Visual Studio, you can change this. In Tools, then Options, you can change the theme:

The default here was the Blue theme. There is also an extra contrast theme, a Light theme (can't say I love that one), and a Dark theme. Note that once I choose the Dark theme, I see this:

OK, so what about SSMS? When I go into the same menu items, I see this:

All there is to choose from is the Blue theme that I was using, and the Light theme that I really don't like. It's curious that even the higher contrast Blue theme isn't there.

So what's going on?

Well, the team has gone out of their way to remove the other options. A quick search around the configuration files in the folder:

leads you to two interesting files: ssms.pkgdef and ssms.pkgundef.

Opening ssms.pkgdef in Notepad++, I can see this:

It's basically a list of where things are located, but note that there's now also information about the background telemetry that SSMS performs.

In ssms.pkgundef, I can see this:

It's basically a list of all the things that would normally be there but that the team has decided to remove. Notice, sadly, that macro support is one of them. I've never understood why we don't have macros. I think they'd be useful.

Another day, I'll try to experiment with enabling macros.

Regardless, scrolling down we find the culprit for our missing theme:

So this is no accident. The team has deliberately decided to remove it. I can only imagine that's because they aren't sure if it will work properly (or know that it won't) and they don't want to have to test or fix it.

We can enable it by commenting out that line and saving the file. (We'll need to be an administrator to be able to save it.)
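For reference, the entry to comment out looks something like this. The GUID here is the standard Visual Studio Dark theme identifier; it could differ between SSMS versions, so check what's actually in your own copy of ssms.pkgundef:

[$RootKey$\Themes\{1ded0138-47ce-435e-84ef-9ec1f439b749}]

And once commented out with the // prefix, it becomes:

// [$RootKey$\Themes\{1ded0138-47ce-435e-84ef-9ec1f439b749}]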

And when we restart SSMS, it's now there:

And notice that it now works:

Well, at least, it almost works. It seems fine for editing code, but notice that Object Explorer doesn't play along with a dark theme, nor do many of the items that you launch from within it:

But if you really, really want to edit code with a dark theme all day long, this might be enough for you.

SDU Tools: ReservedWords, FutureReservedWords, ODBCReservedWords, SystemDataTypeNames, SystemWords

One of the things that I've always thought was missing in SQL Server and the T-SQL language was a built-in view that showed reserved words, and even better, future reserved words.

It's one thing to say that you can create a policy to avoid people using reserved words for table, column, or object names, but how exactly do you know that the name you've chosen is one of those?

We decided to tackle that with one of our free SDU Tools for database developers and DBAs.

We added four views to help with this. The first is ReservedWords:

Note that we also added the color in which these words are normally displayed in tools like SQL Server Management Studio.

Microsoft also publishes a list of words that are reserved for future use. So we also added FutureReservedWords:

A number of words are reserved within ODBC, so we added a view for those called ODBCReservedWords:

And finally, you don't want to accidentally use the name of a data type either, so we added SystemDataTypeNames:

To make all of these easier to work with in a single place, we added a view that has all this information. It's called SystemWords and you can see it in the main image above.
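If you just want to browse the combined list, a query like this works. I'm assuming here that the tools were installed into their default SDU_Tools schema, so adjust the schema name to match your install:

SELECT *
FROM SDU_Tools.SystemWords
ORDER BY 1;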

You can also see each of these in action here:

For more information on SDU Insiders, where you get access to our free tools, books, and more, please visit http://sdutools.sqldownunder.com.

 

Opinion: When building a SaaS application, you're only as good as your weakest SLA

The industry is clearly trending quite quickly towards Software as a Service (SaaS) applications. Rather than building monolithic chunks of code, new applications are often constructed by combining a variety of platform services, themselves usually delivered as Platform as a Service (PaaS) offerings.

Any application layers that you build on top of these services, though, are only as good as the underlying services. And that's where things can go very, very wrong quite quickly.

I was at a software house recently where the management that I talked to said they couldn't ever be offline for more than about four hours. In fact, anything more than two hours would be a problem. The IT people at the same place told me that backups of their primary database were taking over eight hours, and that restores would be longer. I'm often left wondering if these groups of people within the same company even talk to each other. Clearly there was an expectation gap.

Image by Paula May

I was also recently working at another ISV (Independent Software Vendor) that was looking to provide a SaaS offering, and was offering its end customers an SLA (service level agreement) that said they'd always be back up and running within four hours. But the data centers that they were depending upon had an SLA showing that loss of a region could involve an outage of one full week. (And the customer data could not go to another region.)

What makes this worse is the current trend for many of these services to be impossible to reach by phone, or to contact at all when something urgent happens.

One of our suppliers had a major outage last week because one of their own suppliers (NameCheap) had decided (incorrectly) to disable their DNS entry in response to spam reports. So our supplier was offline, and could do nothing except email NameCheap's support team and hope they would respond soon. They told us that there was no way for them to call NameCheap.

But even if that's the case, NameCheap isn't alone in this. It's a common trend. If you are building any sort of SaaS offering, though, you need to realize that you are only as good as your weakest link (or in this case, SLA).

SQL: The datepart minute is not supported by date function dateadd for data type date

I was asked about this error message by a client just last week. They had what they thought was straightforward code, and yet were getting the error above.

What they were trying to achieve was to calculate the time as 10 minutes past midnight on the same day. So, they'd used SYSDATETIME() to get the current time (it could have been GETDATE(), but SYSDATETIME() is the more recent option and returns a datetime2 data type).

They'd then CAST the value to date to remove the time portion. So far so good:
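Their code was roughly along these lines (the variable name is mine, just for illustration):

DECLARE @Today date = CAST(SYSDATETIME() AS date); -- CAST to date strips the time portion
SELECT @Today;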

But then they tried to pass that value into the DATEADD function, and while it will accept a date parameter, it won't let you add minutes to it. I actually think that isn't a sensible restriction, i.e. you should be able to pass a date to a function that requires a datetime. But alas, you can't do that here.

What they needed to do was to cast the value to a datetime (or datetime2) and then add the 10 minutes:
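Continuing the sketch from above:

-- DATEADD(minute, 10, @Today) raises the error above, because @Today is a date
SELECT DATEADD(minute, 10, CAST(@Today AS datetime2)); -- 00:10 on the same day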

And all works fine.

 

AI and ML: Why have machine learning in SQL Server at all?

In a post the other day, I described how to test if machine learning with R and/or Python was set up correctly within SQL Server 2017.

One of the comments on that post said that the info was useful, but that the commenter was yet to be convinced about why you'd want to have machine learning in the database in the first place.

Fair question.

I see several reasons for this. SQL Server Machine Learning Services is the result of embedding a predictive analytics and data science engine within SQL Server. Consider what happens in most data science groups today, where this type of approach isn't used.

I routinely see data scientists working with large amounts of data in generic data stores. This might mean that they have data in stores like Hadoop/HDInsight or Azure Data Lake Store but in many cases, I just see operating system files, often even just CSV files. Both the R and Python languages make it really easy to create data frames from these types of files. But where did this data come from? In some cases, it will have come from the generic data store, but in most cases that I see, it has come from within a database somewhere.

And that raises a number of questions:

  • What effort is required to extract that data (particularly for large volumes)?
  • How up to date is the data?
  • What is the security context for that data?

Often the answers to these questions aren't great. What I see is data science people extracting data from existing databases into CSV files, and then loading them up and processing them in tools like RStudio. And mostly, I see that data being processed single-threaded in those tools.

The outcome of this work, though, is either analytics or (more commonly) trained predictive models.

Having Machine Learning in SQL Server helps here in several ways. First, you can utilize the same security model that you're using for any other access to that same data. Second, as the data volumes grow, you don't need to move (and then refresh) the data. You can process it right where it is. Third, you can take advantage of the multi-threaded architecture of SQL Server.
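As a minimal sketch of what "processing it right where it is" looks like, here is a trivial sp_execute_external_script call that runs R inside the engine against a query result. It assumes Machine Learning Services is installed and the 'external scripts enabled' configuration option has been turned on:

EXEC sys.sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;', -- just pass the rows straight back
    @input_data_1 = N'SELECT TOP (10) name, object_id FROM sys.objects;'
    WITH RESULT SETS ((ObjectName sysname, ObjectID int));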

With Real-Time Operational Analytics in SQL Server 2016 and later (basically updatable nonclustered columnstore indexes, optionally with a compression delay, built over transactional data), you might even be able to have the outcomes really up to date.

While being able to train and retrain predictive models is really important, and is hard work, it's when you use those models to create predictions that the real value becomes apparent. Trained models are quite lightweight to execute. You can add predictions right into your queries along with your other returned data, and do so very efficiently. This is where having Machine Learning within the database engine truly shines.
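As a sketch of what that can look like with native scoring in SQL Server 2017 (the table, column, and model names here are purely illustrative, and the model needs to have been serialized in a supported format, for example with rxSerializeModel):

DECLARE @Model varbinary(max)
    = (SELECT ModelObject FROM dbo.Models WHERE ModelName = N'ChurnModel');

SELECT d.CustomerID, p.ChurnProbability
FROM PREDICT(MODEL = @Model, DATA = dbo.NewCustomers AS d)
WITH (ChurnProbability float) AS p;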

And you don't necessarily even need to create the predictive models yourself. The SQL Server team have provided a series of world-class pretrained models that you can load directly into, and bind to, an instance of SQL Server.

 

 

Shortcut: Using zoom features in SQL Server Management Studio (SSMS)

When working with SQL Server Management Studio, sometimes you need to show someone else what you're working on, and the fonts that you're using are just too small for someone looking over your shoulder or looking at a screen that you've shared with them.

What I often see someone do then is to go into Tools and Options and start changing the font and color settings. The pain with this is that you then need to set them back later.

There are several options to allow you to zoom in, without needing to change the settings.

First, like many Windows programs, you can hold down the Control key and use the mouse scroll wheel to increase or decrease the font size. Note that as you do that, the zoom percentage is also displayed in the lower left-hand corner of the query window.

You could also, of course, just use that drop-down to set the zoom level:

When you make this change, it applies to all current query windows, and all other query windows that you open until you change it back.

SDU Tools: Print message without delay in T-SQL

When you are trying to debug code in SQL Server T-SQL, or trying to monitor where code is up to, it's common to add PRINT statements into the code.

These are fine when they are ad-hoc statements, but if you include PRINT statements in stored procedure code, and the procedure runs for any length of time, you'll know that you normally don't get to see the PRINT output in the Messages tab until the procedure ends.

And by then, it's really too late.

In our free DBA and developer SDU Tools, we added a tool to make it easy to get messages immediately rather than waiting.

PrintMessage sends a message to the Messages tab in SQL Server Management Studio immediately.
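Calling it looks something like this (I'm assuming the default SDU_Tools schema here). Under the covers, tools like this typically use RAISERROR with severity 0 and the NOWAIT option, which is what flushes the message to the client straight away:

EXEC SDU_Tools.PrintMessage N'Starting step 2';

-- The underlying technique looks like this:
RAISERROR(N'Starting step 2', 0, 1) WITH NOWAIT;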

You can see it in use above in the main image and here:

For more information on our free SDU Tools, and to join SDU Insiders for updates on our tools, eBooks, webinars, and more, please visit:

http://sdutools.sqldownunder.com

Upcoming SQL Saturday: Brisbane 2 June – hope to see you there

Just a short note to let you know that I'm heading up to speak at SQL Saturday in Brisbane.

I'll be delivering a session on SQL Server Management Studio tips and tricks and hope to get a chance to show you a number of ways to use SSMS that you might not have seen.

I really hope to see many of you there. If you are coming, please come and say hi.

If you can't make it though, you can get a copy of our free SSMS Tips and Tricks ebook here: http://ssmsbook.sqldownunder.com.

SQL: No report servers were found: what's in a name?

If you've worked with earlier versions of SQL Server, it's likely that you've configured SQL Server Reporting Services. To do that, the tool you use is Reporting Services Configuration Manager.

Lately, though, I've seen a number of people installing SQL Server 2017 Reporting Services and being confused. The same applies to Power BI Report Server installation.

During installation they are provided with an option to configure the report server, but if they go back into Reporting Services Configuration Manager, they can't find the server. They get the "No report servers were found" error shown in the featured image above.

The reason for this is that there was a subtle name change in SQL Server 2017, as you can see here:

Note that there are now two options that could be on your system:

  • Reporting Services Configuration Manager
  • Report Server Configuration Manager

Nothing to be confused about there 🙂

The first option will still find your older Reporting Services installations and allow you to configure them.

The second option will let you find a SQL Server 2017 Report Server or a Power BI Report Server and configure it.

It's worth noting that in SQL Server 2017, Reporting Services is no longer multi-instance. You can have one SQL Server 2017 Report Server OR one Power BI Report Server on a system.

SDU Tools: Read CSV File in T-SQL

One of the most common tasks that "data people" perform is moving data around, and that can include exporting existing database data, and importing data from other places.

While there are many other standards for how data is stored in files, CSV (comma-separated-value) files are still (by far) the most common. Another common variation is the TSV (tab-separated-value) file, where tabs are used to separate values instead of commas. That's usually a good idea, as commas occur frequently within the data itself.

Given these file formats are so common, how easy is it to read these files using T-SQL? And the answer is "way more complicated than it should be".

So, we built one of our SDU Tools to come to the rescue. Our ReadCSV stored procedure takes the following parameters:

  • @Filepath nvarchar(max) – the full path to the CSV or TSV file
  • @Delimiter nvarchar(1) – the delimiter used (the default is a comma, but it can be another character such as a tab)
  • @TrimOutput bit – should all output values be trimmed before being returned (very handy)
  • @IsFileUnicode bit – 1 if the file is Unicode, 0 if it is ASCII
  • @RowsToSkip int – how many rows should be skipped (eg: 1 if a single header row with column names is present)

Here's an example of it being called:
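It would be something like this (the file path is just an example, and I'm assuming the default SDU_Tools schema):

EXEC SDU_Tools.ReadCSV
    @Filepath = N'C:\Temp\Customers.csv',
    @Delimiter = N',',
    @TrimOutput = 1,
    @IsFileUnicode = 0,
    @RowsToSkip = 0;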

The procedure always returns 50 columns, named Column01, Column02, and so on, and it returns NULL for any missing values. Note in this example, though, that the column names were in the first row. We can fix that by skipping the first row, as shown in the main image above:
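That is, the same call with the header row skipped:

EXEC SDU_Tools.ReadCSV
    @Filepath = N'C:\Temp\Customers.csv',
    @Delimiter = N',',
    @TrimOutput = 1,
    @IsFileUnicode = 0,
    @RowsToSkip = 1;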

You can see the tool in action here:

For more information on our free SDU Tools, and to join SDU Insiders for updates on our tools, eBooks, webinars, and more, please visit:

http://sdutools.sqldownunder.com