Since the removal of the 8KB limit on serialization, the ability to define new data types using SQL CLR integration is now almost at a usable level, apart from one key omission: indexes.
We have no ability to create our own types of index to support our data types. As a good example of this, consider that when Microsoft introduced the geometry and geography (spatial) data types, they did so as system CLR data types but also needed to introduce a spatial index as a new type of index. Those of us who need to work with the product as it's supplied can't just create our own new types of index objects.
It would have been far preferable for the ability to create user-defined indexes to be added to the product, with spatial indexes being just one instance of that capability.
Other database engines (such as Oracle) have this capability. That makes it impossible to migrate applications that use Oracle Data Cartridges to SQL Server in an effective way. It also makes the creation of data types in SQL Server that much more limited than it could be.
One alternative is to promote properties of CLR data types via persisted computed columns and then index those, but that's somewhat awkward and, more importantly, doesn't really do the same thing.
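As a sketch of that workaround (the type, table, and property names here are all hypothetical), a property of a CLR type can be exposed as a persisted computed column and a standard index built over it:

```sql
-- Assumes a CLR type dbo.ComplexNumber exposing a RealPart property (hypothetical)
CREATE TABLE dbo.Measurements
(
    MeasurementID int IDENTITY(1,1) PRIMARY KEY,
    Reading dbo.ComplexNumber,
    -- Promote one property of the CLR type so it can be indexed
    ReadingRealPart AS (Reading.RealPart) PERSISTED
);

CREATE INDEX IX_Measurements_ReadingRealPart
ON dbo.Measurements (ReadingRealPart);
```

The resulting index only understands that single promoted value, not the type itself, which is why this isn't equivalent to a true user-defined index.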
If you'd like to see user-defined index types be considered in the future, you know what to do. Vote once, vote often 🙂
Just posting this blog tonight hoping it might save someone else a bunch of time. For call recording on Skype, I use a program called Pamela. When I first installed it, it worked fine. Lately, however, it has been coming up and saying:
"Another application (Pamela.exe) is attempting to access Skype, but we are unable to respond".
You just have to love these sorts of messages that don't give you the slightest clue about what the problem is.
While I saw the problem with Pamela, it can happen with any Skype add-in. The problem is actually related to UAC. If Skype is running as non-admin (as a startup app in Windows) and you launch the add-in as admin, you'll get this error message. Amazingly, running the add-in without admin privileges makes it work. The irony is that I was running it as admin because I thought that would make sure it worked.
Hope that helps someone else as I certainly wasted endless hours trying to work out what was wrong. And a big raspberry to the Skype guys for the "useful" error message.
I've had a number of people over the years ask about whether or not a particular type of object is a good candidate for SQL CLR integration. The rules that I normally apply are as follows:
- Scalar UDFs: generally poor performance in Transact-SQL; a good option in managed code when there is limited or no data access
- Table-valued UDFs: a good option in Transact-SQL if data-related; a good option in managed code when there is limited or no data access
- Stored procedures: a good option in managed code when external access is required or data access is limited
- DML triggers: rarely a good option in managed code, as most perform substantial data access
- DDL triggers: an OK option in Transact-SQL if only limited processing of XML EVENTDATA is needed; a good option in managed code for extensive processing of XML EVENTDATA
- User-defined aggregates: managed code is the only option, as Transact-SQL has no equivalent
- User-defined data types: Transact-SQL offers only alias types; entirely new types require managed code
Scalar UDFs written in Transact-SQL are well known for causing performance problems in SQL Server environments. Managed code is often a good option (and generally a much faster one) for implementing scalar UDFs, as long as the functions do not depend on heavy data access.

Table-valued functions that are data-oriented are likely to be best implemented in Transact-SQL. A common use case for managed code in table-valued UDFs is functions that need to access external resources such as the file system, environment variables, or the registry.
Stored procedures have traditionally been written in T-SQL, and most should continue to be. There are very few good use cases for managed code in stored procedures; the exceptions are those that need to access external resources or perform complex calculations. Even then, it is worth considering whether code that performs these tasks should be implemented within SQL Server at all.
Almost all DML triggers are heavily-oriented towards data access and are written in T-SQL. There are very few valid use cases for implementing DML triggers in managed code.
DDL triggers are also often data-oriented. Some DDL triggers though need to do extensive XML processing, particularly based on the XML EVENTDATA structure passed to these triggers by SQL Server. The more that extensive XML processing is required, the more likely the DDL trigger would be best implemented in managed code. Managed code would also be a better option if the DDL trigger needed to access external resources but this is rarely a good idea within any form of trigger.
Transact-SQL offers no concept of user-defined aggregates. These need to be implemented in managed code. A common use case for user-defined aggregates is to create functionality that exists in other database engines while migrating to SQL Server. For example, if a database engine provides a RANGE function, a replacement could be written in managed code to avoid the need to rewrite the application code. As a further example, the SQL Server Migration Assistant uses managed code to ease Oracle migration. SQL Server 2005 limited user-defined aggregates to those that could be serialized in 8KB of data. SQL Server 2008 changed this limit to 2GB and also introduced the ability to create multi-column aggregates.
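As a sketch of the Transact-SQL side of this (the assembly, class, table, and column names are all hypothetical), once a managed aggregate has been compiled and its assembly cataloged, it is registered and then called like any built-in aggregate:

```sql
-- Assumes a cataloged assembly StringAggregates containing a Concatenate class
CREATE AGGREGATE dbo.Concatenate (@value nvarchar(4000))
RETURNS nvarchar(max)
EXTERNAL NAME StringAggregates.Concatenate;
GO

SELECT c.Region, dbo.Concatenate(c.CustomerName) AS CustomerNames
FROM dbo.Customers AS c
GROUP BY c.Region;
```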
Transact-SQL offers the ability to create alias data types but these are not really new data types. They are more like subsets (or subclasses) of existing built-in data types. Managed code offers the ability to create entirely new data types and determine not only what data needs to be stored but also the behavior of the data type. It is important to understand however that the intention of user-defined CLR data types is not to convert SQL Server into an object-oriented database. Data types should relate to storage of basic data, not to objects such as employees or customers.
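The difference shows up directly in the DDL (the names below are hypothetical): an alias type can only constrain an existing built-in type, whereas a CLR type binds its storage and behavior to a cataloged assembly:

```sql
-- An alias type: a constrained subset of an existing built-in type
CREATE TYPE dbo.OrderCode FROM char(10) NOT NULL;

-- A CLR user-defined type: an entirely new type backed by managed code
CREATE TYPE dbo.ComplexNumber
EXTERNAL NAME MathTypes.[ComplexNumber];
```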
It's good to see some airports that have queues for people that travel frequently and know what they're doing. But I'm left thinking that IT vendors need to have something similar.
Bigpond (part of Telstra) in Australia have recently introduced new 42Mbps modems on their 3G network. It's actually just a pair of 21Mbps modems linked together, but the idea is cute. Around most of the country, they work pretty well. In the middle of the CBD in Melbourne, however, at present they just don't work.
Having more patience than I do, my wife (who also has one) called their support line today. The symptoms were:
- The data connection works fine till around 8AM.
- It works fine again after about 5:30PM.
- At all other times during the day, the connection strength shows a full signal and connections happen instantaneously.
- During the day, a ping to the default gateway is returned after about 3 seconds about 1 in 100 times. All other pings are lost.
- No DNS names can be resolved at all.
- No IP traffic can move at all.
Now, let's see what you think about the suggestions from the "support" people:
- It must be a problem with your computer configuration. (Interesting that it works fine outside those times without change)
- It could be a problem with your browser cache. Please clear it. (Hmmm, well there goes the useful cache)
- It might be a problem with your username and password. (But how does it work without change the rest of the day?)
- Please manually connect to the Telstra 3G network instead of the Telstra network that we connect you to by default. (No change occurs)
And then they get scary:
- Please reset all the security zones for all sites in your browser. That might be the problem. (WTF???? What outstanding advice!)
At what point do companies ever begin to think that this sort of nonsense is acceptable?
Do you have different levels of support for your customers, depending upon their knowledge level? If so, how do you decide who can use the more advanced support? How do you control access to it?
Our MVP buddy Erland Sommarskog doesn't post articles that often but when he does, you should read them. His latest post is here:
It talks about why a query might be slow when sent from an application but fast when you execute it in SSMS. But it covers way more than that. There is a great deal of good info on how queries are executed and query plans generated.
One thing about being around the industry since the days when dinosaurs roamed the Earth, is that you can get to be a little philosophical about the industry at times. There are two things that I've been thinking about again today, where we seem to be in total denial. One is EULAs; the other is passwords.
Has anyone tested EULAs in court lately? It's hard to imagine most of them being very enforceable. More importantly, does anyone EVER read them? I was amused a few years back when I was installing an application, clicked over the EULA and the application said "how could you possibly have read that in 1.076 seconds?". That's a fair cop.
Today though, it's the EULAs at the Apple AppStore that just seem totally stupid. A while back, I installed the genuine Twitter app on my iPhone. Previously I was using Halo. The new Twitter app seemed pretty cool, right up till the day it had an update available. On the screen of my phone, it kept prompting me to update it. So I thought "hey this is cool, I'll update it". To update the FREE application, I had to log in to the AppStore. Doing that on the phone is a bit cumbersome but ok. But after I did so, it tells me that I now need to agree to the new AppStore terms and conditions. Once I had done that, it then told me that I should try my "purchase" again.
But the super annoying thing is that EVERY time I go to update the Twitter app, the AppStore terms and conditions seem to have changed. So this happens EVERY time. But what really impresses me about this process, is that the terms and conditions that you need to agree to occupy 58 PAGES. Has ANYONE EVER read all those 58 pages? And do I really need to read and agree to 58 pages of terms and conditions every time I want to update my free Twitter app? On a phone? You have to be kidding. I'd love to see the page view statistics from Apple that show how many people have EVER retrieved most of those pages.
The other mild form of insanity that I'm thinking about today is passwords. We always tell users to:
- Use different passwords for every system
- Make the passwords very complex
- Change the passwords regularly
- Never write down the passwords
Users get blamed when they don't follow the rules but I'm sorry, is it even humanly possible to follow the rules? We have to stop this sort of nonsense if we ever want our industry taken seriously by the general public. And I think I have to go back to using Halo instead of the free genuine Twitter app.
And as for the courts, I can well imagine many judges not being too convinced about enforcing rules that it's almost impossible for a human to comply with.
I've been working with SQLCMD mode again today and one thing about it always bites me. If I execute a script like:
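A reconstruction of the sort of script I mean (the server names are placeholders):

```sql
:CONNECT SERVER1
SELECT @@SERVERNAME;
:CONNECT SERVER2
SELECT @@SERVERNAME;
:CONNECT SERVER3
SELECT @@SERVERNAME;
GO
```

Because :CONNECT is not treated as an implicit batch separator, the entire script is a single batch, and the batch runs against the last connection established.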
I'm sure I'm not the only person who would be surprised to see all three SELECT commands executed against SERVER3 and none executed against SERVER1 or SERVER2. If you think that's odd behavior, here's where to vote: https://connect.microsoft.com/SQLServer/feedback/details/611144/sqlcmd-connect-to-a-different-server-should-be-an-implicit-batch-separator#details
There was a short discussion on the SQL Down Under mailing list this morning about screen resolutions for working with the SQL Server tools. In particular, the issue was about how unusable the tools are on the 1366×768 resolution notebooks that now seem to be the most common. While finding a notebook with an appropriate resolution is obviously the answer at this time, I started thinking that the product itself needs to address this.
The SQL Server tools currently target a traditional 4:3 shape for minimum window sizes. Increasingly, 4:3 format monitors are disappearing from sale.
While I loved the standard portrait orientation and prefer to work in it, the world has decided that watching DVDs on computer screens is more important than real work. So sadly, I think the SQL Server tools should now target wide screen formats by default. If you think, as I do, that the SQL Server tools should now be designed to be at least workable on 1366×768, vote here: https://connect.microsoft.com/SQLServer/feedback/details/636373/sql-server-tooling-should-target-wide-screen-form-factors#details
One of the discussion lists that I participate in had a brief discussion this morning about whether or not it's possible to perform log shipping between different versions of SQL Server. Specifically, can you do log shipping between SQL Server 2005 and SQL Server 2008?
SQL Server does support restoring earlier version databases on later versions of the product. The databases get upgraded along the way when you perform restores of databases. SQL Server also allows you to restore transaction logs from earlier versions of the product but (as Robert Davis points out in the comments below), the upgrade doesn't happen until recovery of the database occurs. And that's why you can't use STANDBY mode in this situation.
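A sketch of how this plays out on the secondary (the database and path names are hypothetical): restoring shipped logs WITH NORECOVERY works across versions because recovery, and therefore the upgrade, is deferred, whereas WITH STANDBY would fail because standby mode needs to partially recover the database:

```sql
-- On the SQL Server 2008 secondary, applying a log shipped from a 2005 primary
RESTORE LOG SalesDB
FROM DISK = N'\\backupserver\logs\SalesDB_log.trn'
WITH NORECOVERY;  -- works: the database is never recovered (or upgraded)

-- WITH STANDBY = N'C:\standby\SalesDB_undo.dat' would fail here, because
-- standby requires a partial recovery of the not-yet-upgraded database
```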
So you can set up log shipping between versions; however, things aren't that simple. Log shipping is often used to provide a warm standby. If you use it in this way and you need to fail over to the standby server, you now have no way to swap the log shipping roles, as you can't then log ship back from the 2008 server to the 2005 server.
If you are intentionally performing a one-way log ship, this might be quite acceptable to you. I often see log shipping used when servers are being upgraded from one version of SQL Server to another, even side-by-side. When it's time to swap to the new server, the final logs just need to be moved across, and this takes very little time. There are other reasons, too, why you might be happy with a one-way log shipping operation.
The main point is that you need to consider why you are performing log shipping before you do this. If it's with a view to swap roles from primary to secondary and back, then log shipping between versions isn't for you.
A year or two back, I was involved in a project with my colleagues (led by Ron Talmage) to construct an Upgrade Technical Reference for SQL Server 2008. It seemed to be well received.
We've updated it now to SQL Server 2008 R2 and it's just been published. You'll find it on this web site: http://www.microsoft.com/sqlserver/en/us/product-info/why-upgrade.aspx You'll need to click on the Upgrade Guide link towards the middle of the RHS under the "Why Upgrade" whitepaper.