Stored Procedure Contracts and Temp Tables

Temp tables are visible not only within the scope where they are created, but also in sub-scopes. This means that you can create a temp table in one stored procedure and access it in another stored procedure that is executed from within the first one.
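
Here's a minimal sketch of that behaviour (the object names are just illustrative):

CREATE PROCEDURE dbo.ChildProc
AS
BEGIN
    -- Nothing here creates #OrderKeys or even hints that it must already exist;
    -- the reference is only resolved when the procedure executes.
    SELECT OrderKey FROM #OrderKeys;
END;
GO

CREATE PROCEDURE dbo.ParentProc
AS
BEGIN
    CREATE TABLE #OrderKeys (OrderKey int PRIMARY KEY);
    INSERT #OrderKeys (OrderKey) VALUES (1), (2), (3);
    EXEC dbo.ChildProc; -- the temp table created above is visible in this sub-scope
END;
GO

EXEC dbo.ParentProc;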

There are two reasons that people do this. One is basically sloppy coding, a bit like making all your variables global in a high-level programming language. The more defensible reason is to avoid the overhead of moving large amounts of data around, and because table-valued parameters are only available as READONLY.
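
As a quick sketch of that limitation (the type and procedure names are illustrative), a table-valued parameter must be declared READONLY, so the child procedure can read it but never modify it:

CREATE TYPE dbo.OrderKeyList AS TABLE (OrderKey int PRIMARY KEY);
GO

CREATE PROCEDURE dbo.ChildProcWithTvp
    @OrderKeys dbo.OrderKeyList READONLY -- READONLY is mandatory for TVPs
AS
BEGIN
    -- The child can read the rows...
    SELECT OrderKey FROM @OrderKeys;
    -- ...but any INSERT, UPDATE, or DELETE against @OrderKeys would fail,
    -- which is one reason a shared temp table is often used instead.
END;
GO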

But I've never liked the idea of this type of access. It breaks normal coding rules on encapsulation. Nothing in the child procedure gives any clue that it expects the temp table to already be present; the code that accesses the temporary object could just as easily be a typo. This really comes to light when you try to use tools like SQL Server Data Tools (a similar issue arose previously with DataDude): how can a utility that analyzes your code know whether that reference is valid or just an unresolved reference?

I've previously posted about the need for stored procedures to have contracts: http://www2.sqlblog.com/blogs/greg_low/archive/2010/01/20/stored-procedure-contracts-return-values.aspx

This seems to be another case where a contract might help. The fact that a procedure depends upon the pre-existence of a temporary object should be discoverable from the metadata of the procedure. Perhaps something like this:

CREATE PROCEDURE ChildProc
REQUIRES #SomeTempTable SomeTableType
AS

or, if the temp table isn't based on a table type, syntax similar to that of a table-valued function, where you could say:

CREATE PROCEDURE ChildProc
REQUIRES #SomeTempTable TABLE (Table definition here)
AS

I'd love to hear what you think.

PS: I've added a Connect item on this. Please vote if you agree: https://connect.microsoft.com/SQLServer/feedback/details/716565/sql-server-should-allow-declarative-specification-of-temp-tables-used-as-parameters

SQL Server Upgrade Advisor should check Database db_compat levels, not server version

We have a very common customer scenario where the customer decides to upgrade to a new version (in this case, let's say SQL Server 2012). They run Upgrade Advisor and note that there are a whole lot of problems. They think, "Hey, that's way too much to deal with right now. I don't have time."

So they don't deal with them. They upgrade the server to 2012 but leave their db_compat level at a lower level with the intention of dealing with the problems later.
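
For reference, the compatibility level of each database can be checked and changed with simple T-SQL (a sketch only; the database name is illustrative):

-- See the current compatibility level of each database
SELECT name, compatibility_level
FROM sys.databases;

-- Later, once the issues are resolved, raise the level for a database
ALTER DATABASE SalesDB
SET COMPATIBILITY_LEVEL = 110; -- 110 = SQL Server 2012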

But when they go to run Upgrade Advisor later, to find the issues that they need to resolve before changing the db_compat level to 2012, it tells them that they can no longer use it because their server is already at 2012. How can they now check what needs to be changed? They also cannot reattach their databases to an earlier-version server, as the databases have already been upgraded to the 2012 structure.

I believe that Upgrade Advisor should check db_compat levels, not server versions, before refusing to run. If you agree, you know what to do. Vote once, vote often:

https://connect.microsoft.com/SQLServer/feedback/details/715400/upgrade-advisor-should-check-database-db-compat-level-not-server-version

New SQL Down Under Podcast with Roger Doherty on SQL Server 2012

Well, it's been a while since I've posted a new podcast. (I know, I know.)

But I've just started a new series for SQL Server 2012. First out of the gate is Roger Doherty (Senior Program Manager in the SQL Server team) with an overview of all the key SQL Server 2012 pillars and enhancements.

You'll find it here:

http://www.sqldownunder.com/PreviousShows/tabid/98/Default.aspx

Enjoy!

SQL Azure DB size limit increased to 150GB

Nice to see the maximum database size on SQL Azure kicked up to 150 GB.

In most enterprises I go into, there are still a few databases that wouldn't fit, but the vast majority of databases would now fit in SQL Azure.

Also included in the November release are federations and an updated management portal.

More info here: http://msdn.microsoft.com/en-us/library/windowsazure/ff602419.aspx

Avoiding connection timeouts on first connection to LocalDB edition of SQL Server Express

When you first make a connection to the new LocalDB edition of SQL Server Express, the system files that are required for a new instance are spun up. (The system files, such as the master database files, end up in C:\Users\<username>\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\LocalDBApp1.) That can take a while on a slower machine, which means that the default connection timeout of 30 seconds (in most client libraries) could be exceeded.

To avoid this hit on the first connection, you can create the required instance of LocalDB beforehand using the SqlLocalDB.exe utility, like this:

<path to binaries>\SqlLocalDB.exe create InstanceName

You can also specify the required version of SQL Server and ask to start the instance like this:

<path to binaries>\SqlLocalDB.exe create InstanceName 11.0 -s

Your application should then connect quickly, even on the first connection.
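
If you want to check what was created, the same utility can report on the instance (a sketch, re-using the example instance name from above):

<path to binaries>\SqlLocalDB.exe info InstanceName

You can then connect using the server name (localdb)\InstanceName; for example, with the SQL Server 2012 client tools:

sqlcmd -S "(localdb)\InstanceName"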

SqlLocalDB documentation is starting to appear now. Documentation on the utility is here: http://msdn.microsoft.com/en-us/library/hh212961(v=sql.110).aspx.

FIX: A network error occurred during SQL Server Native Client installation

One of the things that I have been pestering the SQL team to do is to name their updates according to what is contained in them. For example, instead of just:

sqlncli.msi 

What I'd prefer is that the file was called something like:

SQLServerNativeClient2k8SP1CU3x64.msi

So I normally rename them as soon as I receive them, to avoid confusion in the future. However, today I found that doing so caused me a problem. After I renamed the file and tried to install it, the installation failed with the error:

"A network error occurred while reading from the file: C:\temp\sqlncli.msi"

A quick inspection of the error shows that the code in the msi is looking for the file by its original name. Renaming the file back to the original name allows it to install correctly. It's a pity that the person coding the installer didn't pick up the name of the file programmatically rather than hard-coding it.

Anyway, hope that helps someone that sees this error.

MVP Deep Dives volume 2 is now published!

It's great to see that volume 2 of MVP Deep Dives is now available and will be distributed at the PASS Summit next week. I'm really sad that I won't be at the book signing, but I'd encourage you all to get along, order a copy, and have it signed.

A huge thanks has to go to Kalen Delaney for her management of this project, and a big thanks to my fellow editors Louis Davidson, Brad McGehee, Paul Nielsen, Paul Randal, and Kimberly Tripp for their efforts. A special mention goes to Paul Nielsen, whose ideas and spirit from volume 1 have continued into this new volume.

And of course, a really big thank you to all the authors who gave their time to make this possible.

Please buy a copy and help us to help Operation Smile. You'll find the book's website here: http://www.manning.com/delaney/ 

While you're at it, why not send an extra donation to Operation Smile: https://secure.operationsmile.org/site/Donation2?df_id=10380&10380.donation=form1

Denali: Improved T-SQL Query Optimization

Part of the value in the ongoing evolution of the T-SQL language is that we are moving further and further towards being declarative rather than prescriptive, i.e. we are able to tell SQL Server what we want rather than how to do it. Over time, that opens up more and more possibilities for the optimizer to work with us to achieve a better outcome.

For example, note the following query against the AdventureWorksDW database:

SELECT rs.ProductKey, rs.OrderDateKey, rs.SalesOrderNumber,
       rs.OrderDateKey - (SELECT TOP(1) prev.OrderDateKey
                          FROM dbo.FactResellerSales AS prev
                          WHERE rs.ProductKey = prev.ProductKey
                          AND prev.OrderDateKey <= rs.OrderDateKey
                          AND prev.SalesOrderNumber < rs.SalesOrderNumber
                          ORDER BY prev.OrderDateKey DESC,
                                   prev.SalesOrderNumber DESC)
                 AS DaysSincePrevOrder
FROM dbo.FactResellerSales AS rs
ORDER BY rs.ProductKey, rs.OrderDateKey, rs.SalesOrderNumber;

In this query, I'm trying to include details of how long it has been since the previous order, beside the details of the current order. Note the option that the LAG window function now provides:

SELECT ProductKey, OrderDateKey, SalesOrderNumber,
       OrderDateKey - LAG(OrderDateKey)
                        OVER (PARTITION BY ProductKey
                              ORDER BY OrderDateKey, SalesOrderNumber)
                 AS DaysSincePrevOrder
FROM dbo.FactResellerSales AS rs
ORDER BY ProductKey, OrderDateKey, SalesOrderNumber;

Also note how much more elegant the code is but, more importantly, look at the difference in optimization:

[Execution plan comparison for the two queries]

This is great work from the T-SQL and engine teams. I encourage you to get out and try the new Windowing functions in Denali CTP3.