Backup TO DISK, TAPE and now URL – Backup On-Premises SQL Server Databases to Azure Storage

There has been a lot of discussion over the years about whether or not it's a good idea to introduce new functionality in service packs (for SQL Server, the OS, and others). The concern is that it can invalidate testing, documentation, books, courseware, and so on. However, it can also introduce much-needed features without having to wait for another version of the product. It's been even less likely that new functionality would be introduced in a cumulative update (or CU), which is basically a set of rolled-up hotfixes.

However, interesting new functionality has appeared in CU2 for SQL Server 2012 SP1.

The BACKUP command now supports TO URL when specifying a backup device. That allows you to specify the location of an Azure storage account as the backup destination. This avoids the need to create a backup locally, and to then have a process to copy it to Azure storage. Similarly, RESTORE now also supports a FROM URL option.
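
As a rough sketch of the shape of the new syntax (the database, storage account, container and credential names here are just placeholders, and creating the credential itself is covered below):

BACKUP DATABASE Sales
TO URL = 'https://myaccount.blob.core.windows.net/backups/Sales.bak'
WITH CREDENTIAL = 'MyAzureCredential';

RESTORE DATABASE Sales
FROM URL = 'https://myaccount.blob.core.windows.net/backups/Sales.bak'
WITH CREDENTIAL = 'MyAzureCredential';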

This is very interesting news for a number of reasons.

For customers using SQL Server within Azure Virtual Machines (VMs), it provides a way to specify the use of a storage account as the backup destination. While you could do that before by mapping a drive letter to a storage account (and that's what you do when setting up a VM in the first place), the number of drive letters available is limited and this avoids using them up.

For customers using SQL Server on-premises, this provides a direct cloud-based backup option with low-cost, large-scale storage available. The biggest limitation for these customers will no doubt be network bandwidth, but for some customers that isn't an issue. It's also an interesting new DR option for customers with relatively small databases that don't take too long to send over the network but which are nonetheless critical to their organisations.

Another group of customers that would benefit from this would be any customer that needs to distribute a single copy of a database to multiple locations. A backup can be done once, and each secondary location can restore directly from the cloud. I can imagine this being an interesting way to push out new versions of reference data, etc.

A notable group of customers that would benefit from this would be those using Windows Azure SQL Database (WASD). However, the syntax is not yet supported in WASD. I have no advance knowledge of this, but I'm guessing that adding it is pretty likely, as it would be so beneficial.

When you connect to a storage account, you need to provide both the URL and a storage key. There are two types of keys: one allows unlimited access, while the other can be time-bombed, handed out for short-term purposes, and easily revoked. Both types of keys should work with this new option. The BACKUP and RESTORE commands both support a WITH CREDENTIAL option for providing the link to this key.

Credentials were introduced in SQL Server 2005. They are basically a way of giving a name to a set of authentication details (a name and a secret/password). Previously we have used them with SQL Server Agent proxy accounts, to allow us to fine-tune the permissions granted to steps in SQL Server Agent jobs. Now they have an interesting new use.
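
As a simple sketch of how that fits together, a credential for the hypothetical storage account used above might be created like this (the identity is the storage account name and the secret is its access key, both placeholders here):

CREATE CREDENTIAL MyAzureCredential
WITH IDENTITY = 'myaccount',
     SECRET = '<storage account access key>';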

You'll find details of the changes, with samples of the syntax in this article: http://blogs.msdn.com/b/windowsazure/archive/2013/01/24/sql-server-backup-and-restore-to-cloud-simplified.aspx

SQL Down Under show 53 with Grant Paisley – PowerPivot, Power View, SSAS Tabular and Multidimensional

I promised there would be a bunch of shows this year. The second one is now published.

In this show, SQL Server MVP Grant Paisley describes PowerPivot, Power View, Analysis Services Tabular vs Multidimensional and shares the lessons he's learned when working with these tools.

The show is here: http://www.sqldownunder.com/Resources/Podcast.aspx

Enjoy!

Working with Aliases for Windows Azure SQL Databases in SQL Server Management Studio

One of the issues that is often raised with Windows Azure SQL Database is that you don't get to pick the name of your server, so you end up with a bizarre name such as:

yy2l95dk1k.database.windows.net.

I can understand why the team did this. Apparently when they first set it up, they allowed you to pick your own server name, so everyone started registering Coke, Pepsi, etc. Not wanting to have yet another place for people to argue about name ownership, they quickly removed that ability. I'm glad they did.

However, when working with these databases, I've been finding that I'm constantly looking at lists of Azure servers and having no idea which one is which. When I open SQL Server Management Studio (SSMS) and ask to connect, I'm greeted with a list of servers that looks like:

  • yy2l95dk1k.database.windows.net
  • ky4296dk1k.database.windows.net
  • xy9914dk2j.database.windows.net

and so on. Now I'm sure there are people who can remember which one is which but, as the number of servers increases (particularly when I'm dealing with client servers as well as my own), I'm not one of them.

Normally when I'm working with a bunch of servers and I only have IP addresses, I configure SQL Server Client Aliases for each address. However, when I first tried to configure an alias like AzureDemo for a server called yy2l95dk1k.database.windows.net, I found I couldn't connect to it. I received an error that said "Server name cannot be determined. It must appear as the first segment of the server's dns name (servername.database.windows.net)." That led me to believe that using an alias wouldn't work.

However, in later versions of this error message, more information is provided: "Some libraries do not send the server name, in which case the server name must be included as part of the user name (username@servername). In addition, if both formats are used, the server names must match." (Microsoft SQL Server, Error: 40531). Once again, when I first saw this, I presumed that it would still stop me from using an alias, but that is not the case.

To use a meaningful alias, what you need to do is:

  • Using SQL Server Configuration Manager, in the SQL Native Client 11.0 Configuration\Aliases node, create an alias. Pick a meaningful name (e.g. HRServer) for the Alias Name, leave the port at the default (1433), leave the protocol at the default (TCP/IP), and provide your real server name (e.g. yy2l95dk1k.database.windows.net) in the Server field.
  • Create the same alias in the SQL Native Client 11.0 Configuration (32bit)\Aliases node.
  • When connecting to the server in SSMS, enter your new alias (e.g. HRServer) for the Server name, choose SQL Server Authentication for the Authentication method and, for the login, specify your login name followed by an @ symbol, followed by the first segment of the real server name (e.g. mylogin@yy2l95dk1k), then enter your password.
  • If the user isn't an admin user with the ability to connect to the master database, you'll also need to set the database name manually using the Options button. (Note that you won't be able to browse for database names.)

Once you have done that with all your servers, finding and connecting to the right server should be easy.
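
As a quick check outside SSMS, a sqlcmd connection should resolve the alias in the same way, since it goes through the SQL Server Native Client settings configured above (the login, password and database names here are just placeholders):

sqlcmd -S HRServer -U mylogin@yy2l95dk1k -P <password> -d HRDatabase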

In a separate post, I'll describe how to push out aliases to other users in your domain.

Windows Azure SQL Database, GETDATE, SYSDATETIME, SYSUTCDATETIME and Rounding

When I've been putting data into Windows Azure SQL Database (WASD) in the past, I've normally provided the dates from my own system. This week was the first time that I wanted to create a column default that provided a date in WASD. It suddenly dawned on me that I wasn't sure which timezone the date would be based on. As I was using the Southeast Asia data centre (in Singapore), I was presuming the value would be based on Singapore's timezone. So it was time to find out.

Prior to SQL Server 2008, GETDATE() was the normal way that we'd retrieve the current time from the server. GETDATE() returns a datetime data type. SQL Server 2008 introduced the datetime2 data type (a higher-precision data type with a poorly-chosen name). SQL Server then provided SYSDATETIME() as a replacement for GETDATE(). SYSDATETIME() returns the datetime2 data type. To make it easier to work with UTC-based values, SQL Server also provides SYSUTCDATETIME().

When I connected to my database just now and executed this query:

SELECT GETDATE() AS GetDateValue,
       SYSDATETIME() AS SysDateTimeValue,
       SYSUTCDATETIME() AS SysUTCDateTimeValue;

The values returned were:

GetDateValue            SysDateTimeValue            SysUTCDateTimeValue
----------------------- --------------------------- ---------------------------
2013-01-19 22:23:21.830 2013-01-19 22:23:21.8400294 2013-01-19 22:23:21.8400294

(1 row(s) affected)

I was pleasantly surprised to see that the values were UTC based, rather than local to the data centre. That's really useful, as it means that no matter which Azure data centre you connect to, they all have the same concept of "current time".

What also surprised me is that while the SYSDATETIME() and SYSUTCDATETIME() values were identical, the GETDATE() value wasn't just a rounded version of the same time. It was an earlier time, so it must be resolved separately within the query. That's not an Azure-specific issue though. If I execute the same query against my laptop system, the following output is produced:

GetDateValue            SysDateTimeValue            SysUTCDateTimeValue
----------------------- --------------------------- ---------------------------
2013-01-20 09:27:41.503 2013-01-20 09:27:41.5051908 2013-01-19 22:27:41.5051908

(1 row(s) affected)

Note that my system is operating in the UTC+11 timezone, so my SYSDATETIME() value is 11 hours ahead of my SYSUTCDATETIME() value. But again, notice that it's exactly the same point in time once you allow for those 11 hours. However, the GETDATE() value is again slightly different.

So there are two messages from this:

  • Azure SQL Databases are always UTC timezone based (which is good news).
  • Don't depend upon GETDATE() and SYSDATETIME() returning exactly the same time within a single query, even after allowing for rounding.
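
Given that, if you do want a column default that records the current time in WASD, basing it on SYSUTCDATETIME() is straightforward. A minimal sketch (the table and column names here are hypothetical):

CREATE TABLE dbo.CustomerAudit
(
    CustomerAuditID int IDENTITY(1,1) PRIMARY KEY,
    AuditDetails nvarchar(200) NOT NULL,
    -- Defaults to the current UTC time on the server
    WhenOccurredUTC datetime2 NOT NULL
        CONSTRAINT DF_CustomerAudit_WhenOccurredUTC DEFAULT (SYSUTCDATETIME())
);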

New SQL Down Under Podcast: Allan Hirt – SQL HA & Server Core

Hi folks, I'm back into the swing with a bunch of new podcasts coming, featuring SQL Server 2012. First up is Clustering MVP Allan Hirt discussing SQL Server HA with AlwaysOn. Allan also discusses the deployment of SQL Server on Windows Server Core.

Thanks for all the emails from those of you who said you loved the shows and wanted them happening again soon. I think you'll like the ones that are lined up. You'll find the shows here: http://www.sqldownunder.com/Resources/Podcast.aspx

New Azure Mobile Services Samples

Hi Folks,

I have to say that I'm really impressed by the rate at which the Windows Azure Mobile Services team is rolling out updates and samples. In particular, it's good to see a new sample for using Geolocation, given how common a requirement that is in mobile apps today. It's worth checking out the following:

  • New Code Samples page on WindowsAzure.com
  • Updated WindowsAzure.com Tutorials and Resources page that includes new tutorials and related videos from the new Windows Azure Mobile Services Channel 9 series
  • A bunch of new Windows Store + Mobile Services scenario-based samples:
      ◦ Geolocation sample end to end using Windows Azure Mobile Services
      ◦ Enqueue and Dequeue messages with Windows Azure Mobile Services and Service Bus
      ◦ Capture, Store and Email app Feedback using Windows Azure Mobile Services
      ◦ Upload File to Windows Azure Blob Storage using Windows Azure Mobile Services
      ◦ Create a Game Leaderboard using Windows Azure Mobile Services

Nice!

How full is my Windows Azure SQL Database?

While the level of compatibility of Windows Azure SQL Database is high, there are a number of things that need to be dealt with differently, compared with how they are done with on-premises SQL Server.

As an example of this, today I needed to know how full one of my databases was. I wanted to know how much space I had used, but also what the limit was. My first attempt was to use the usual system views, such as:

SELECT * FROM sys.database_files;

But in the Azure environment, that returns:

Msg 208, Level 16, State 1, Line 1
Invalid object name 'sys.database_files'.

So that wasn't going to help. I had also tried the new sys.resource_stats and sys.resource_usage views, to no avail. There is a good article that provides details of which views do and don't work on which versions here. Conor Cunningham also posted early last year about how some of these are supposed to work. (For more info on Conor and Windows Azure SQL Database in general, see the podcast that I recorded with him recently).

Michael Wood pointed me to a post from Ryan Dunn that showed how to get the used size in total, and for each database object. (For more info on Ryan, see the SQL Down Under podcast that I recorded with him back when SQL Data Services first appeared). Tom LaRock also explained that in his post here. The view that I needed to use was the sys.dm_db_partition_stats view.

For an overall space usage total, the following query helps:

SELECT SUM(reserved_page_count) * 8.0/1024 AS DatabaseMB
FROM sys.dm_db_partition_stats;

For a breakdown by individual object, the following query (a tidied-up version of what I found in the posts) helps:

SELECT o.name AS ObjectName,
       SUM(ps.reserved_page_count) * 8.0 / 1024 AS SizeinMB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.sysobjects AS o
ON ps.object_id = o.id
GROUP BY o.name
ORDER BY SizeinMB DESC;

There is also a really good Azure article here that describes the use of these along with info on bandwidth monitoring, etc.

But the final thing I wanted to know was what the limit was for a given database. Sanjay Nagamangalam (from the SQL Server team) came to the rescue by pointing out a post from Walter Berry. It mentions that you can get the maximum size from an extended database property. The query below, which I've adapted from it, shows the current limit:

SELECT CAST(DATABASEPROPERTYEX('PopkornKraze_DW', 'MaxSizeInBytes') AS bigint)
       / 1024 / 1024 AS DatabaseLimitInMB;

(Note that PopkornKraze_DW was the name of my database in this test.) The final thing that I might need is a query that shows how full my database is as a percentage, so if we combine the two queries, it's just:

SELECT CAST((SELECT SUM(reserved_page_count) * 8.0 / 1024
             FROM sys.dm_db_partition_stats) * 100
            / (SELECT CAST(DATABASEPROPERTYEX('PopkornKraze_DW', 'MaxSizeInBytes') AS bigint)
                      / 1024 / 1024) AS decimal(10,2)) AS PercentageUsage;

If you do happen to exceed the size limit, SQL Exception 40544 is thrown.

You can modify the database size (and edition) by executing:

ALTER DATABASE PopkornKraze_DW MODIFY (EDITION='BUSINESS', MAXSIZE=10GB);

You appear to be able to change the size while others are connected but changing the edition terminates existing connections.

Hope that helps someone.