I was really sad to hear that Arthur C. Clarke died today. For those who enjoyed his work, it's worth watching the "goodbye" video he made on the occasion of his birthday in December 2007. He knew at that time that the end was near. I particularly liked his choice of a Rudyard Kipling poem to read at the end. You'll find the video here: http://www.youtube.com/watch?v=eLXQ7rNgWwg
While on site this week, another common problem that I see everywhere arose again.
When you need to format currency, you use the "c" format string, right? It's in nearly every set of course materials I've ever seen.
In many cases, that's wrong!
Take a long, hard think about whether you really want the monetary values you display automatically formatted in the client system's currency. If you are reporting financial figures, for example, is it really appropriate that they should suddenly and mysteriously change from dollars to yen because the client is based in Zama City rather than in New York City? Not likely. A million dollars in sales might be good, but a million yen in sales is not quite so good 🙂
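To make the point concrete, here's a minimal C# sketch (the figure and culture names are just examples): the bare "c" format follows whatever culture the client system happens to be running, while passing an explicit CultureInfo pins the output to a known currency.

```csharp
using System;
using System.Globalization;

class CurrencyFormatDemo
{
    static void Main()
    {
        decimal sales = 1000000m;

        // "c" alone uses the current thread culture, so the currency symbol
        // changes with the client system's regional settings.
        Console.WriteLine(sales.ToString("c"));

        // Pinning the culture keeps the figure in the currency it was
        // actually recorded in, wherever the client happens to be.
        Console.WriteLine(sales.ToString("c",
            CultureInfo.GetCultureInfo("en-US"))); // $1,000,000.00
        Console.WriteLine(sales.ToString("c",
            CultureInfo.GetCultureInfo("ja-JP"))); // yen symbol, no decimal places
    }
}
```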
SQL Down Under show 35 with guest SQL Server DPE Evangelist Roger Doherty is now available for download from www.sqldownunder.com.
In the show, Roger discusses what developers are looking forward to in SQL Server 2008.
I spent time working with Reporting Services again today on a client site. The complexity of the reports we were working on reminded me that I really, really don't like seeing T-SQL scripts (or really any business logic) embedded in reports.
DBAs tend to be considered a conservative bunch. One thing they're usually conservative about is refactoring their databases. In many cases, this is because they have little idea what they will break when they make database changes. I've seen countless organisations that have allowed direct access to tables from client Access-based applications and have then lost control of the ability to ever change anything in the database because of the hundreds of little applications and reports that they might then break. I spend a lot of time talking to them about how to regain control of the situation.
Reporting Services is another area where this can happen. Fortunately, the reports are typically located in a single location. When a database change is needed, at least you don't then have to search the enterprise for reports that might break. However, life is much simpler if all the report does is call a stored procedure instead of having T-SQL embedded in it. I see the advantages of this approach as:
1. Refactoring the database is easier.
2. Unit testing of the code is much easier. You can easily build tests to just call the stored procedures. While possible via the web service interface, it's much harder to test the reports directly and requires a different skill set.
3. It allows the UI to be built by one person and the stored procedures to be built by another.
4. It's easy to deal with report permissions. Typically I create a Reports schema in the database and grant EXECUTE permission on that schema to the reporting user groups. That way, I don't have to manage individual permissions on the stored procedures and I don't have to grant permissions on the tables (or perhaps views).
5. I may well get benefits in procedure cache efficiency, through plan reuse.
6. I can use the same stored procedure on multiple reports. I quite often find I have to write each report twice: once to look nice, and once to just dump the data into Excel format. Typically, when clients ask for Excel output, they just want the data, not the pretty formatting. I can use the same procedure for both versions. (Fortunately, in SQL Server 2008, it looks like I'll have more flexible Excel rendering options.)
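The permissions approach in point 4 can be sketched in T-SQL like this (the ReportingUsers role name is just an example):

```sql
-- A separate schema for the report-facing stored procedures
CREATE SCHEMA Reports AUTHORIZATION dbo;
GO

-- One grant covers every current and future procedure in the schema,
-- with no permissions needed on the underlying tables or views
GRANT EXECUTE ON SCHEMA::Reports TO ReportingUsers;
```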
SQL Down Under show 34 with guest SQL Server MVP Geoff Hiten is now available for download from www.sqldownunder.com. In this show, Geoff discusses SQL Server clustering, SANs and magic SAN dust.
SQL Server Data Services was announced today at MIX. This new offering from Microsoft provides Internet-based data services using RESTful interfaces like those provided by ADO.NET Data Services.
This is going to enable some outstanding new applications.
Apply for the beta here now: http://www.microsoft.com/sql/dataservices/default.mspx
SQL Down Under show 33 with SQL Server MVP Allen White discussing the automation of administration using SMO and PowerShell is now available for download from www.sqldownunder.com.
A little while back I wrote an article on SQL Cache Dependencies. I had a colleague ask about these today so I think it's time to blog it. (For more information, I also did a podcast some time back with my old mates Wally McClure and Paul Glavich from the ASP.NET Podcast. It's at: http://weblogs.asp.net/pglavich/archive/2005/08/21/423160.aspx)
SQL Dependencies in ASP.NET v2
Traditional ASP Applications
In traditional ASP applications, every time the user would request a page, the application would retrieve the data from the database to render the page. Clearly this doesn't scale very well. It works for a small number of users but not a large number. Often though, this data access can be completely avoided for certain types of data.
Consider the Production.ProductCategory table in the AdventureWorks database. Realistically, how often would it change? If we are building an application that displays the list of product categories, it doesn't make sense to retrieve it from the database on every request. We could improve on this situation when the same user retrieves the list regularly by holding it in the user's Session state, but that doesn't help much with a large number of users, as many copies of the same data would be held.
A key advance in ASP.NET was the introduction of the Cache object. It provided a server-wide shared cache that could be used to hold relatively static data. We could read the list of product categories into a dataset and hold it in the Cache.
The issue with any cached data though is when to refresh the data in the cache. If we choose to refresh it once per day, we then could be showing an incorrect list for up to an entire day. If we base the refresh policy on elapsed time, we have a trade-off. The shorter the refresh time, the fresher the data but the less useful the cache is. The longer the refresh interval, the more efficient the cache becomes but the staler the data potentially is.
The ASP.NET Cache object in v1 gave us two options for cache invalidation. One was time based, the other was based on a file dependency. This was originally designed to allow for situations where the data was coming from an XML file stored in the filesystem. A filesystem watcher was set up to monitor the file. If the file had changed, the Cache would be invalidated and refreshed.
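That file-based option looked something like this in an ASP.NET page (the file name and loading helper are hypothetical):

```csharp
// Cache the categories until categories.xml changes on disk; the
// filesystem watcher invalidates the cache entry when the file changes.
DataSet categories = LoadCategoriesFromXml();   // hypothetical helper
Cache.Insert("categories", categories,
    new CacheDependency(Server.MapPath("categories.xml")));
```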
But the invalidation option that we really wanted was "when the data changes".
Cache Invalidation Prior To ASP.NET v2
Prior to ASP.NET v2, there were a number of attempts to provide cache invalidation when the data changed. The most popular version was:
1. Create an extended stored procedure in SQL Server that simply touched a nominated file in the filesystem.
2. Add a trigger (for INSERT, UPDATE and DELETE) on the table to be monitored that called the extended stored procedure.
3. Set the cache to have a dependency on that file.
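Steps 1 and 2 looked something like this on the database side (xp_TouchFile is a hypothetical name for the custom extended stored procedure, and dbo.Customers is just an example table):

```sql
CREATE TRIGGER trg_Customers_TouchCacheFile
ON dbo.Customers
FOR INSERT, UPDATE, DELETE
AS
    -- The extended stored procedure just updates the file's timestamp,
    -- which trips the file watcher behind the ASP.NET cache dependency
    EXEC master..xp_TouchFile 'C:\CacheFiles\Customers.txt';
```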
This option was miles ahead of where we started but it had a few downsides. First, writing extended stored procedures for SQL Server was not trivial. It couldn't be done in managed code but a C++ template was available to use as a starting point. Second, the trigger would fire for any changes to the table, even if our query was only for certain rows in the table. Third, the mechanism was table-based and didn't deal with queries that involved joins across multiple tables. Last, the mechanism was tied to a specific database engine and often to a specific version of a database engine.
ADO.NET v2 SqlDependency And SQL Server 2005
SQL Server 2005 added a wonderful new feature called Query Notifications. (NOTE: nothing to do with Notification Services – they often get confused). The idea here is that you could say to SQL Server "Here is a query I want you to execute. However, when you finish, I don't want you to forget the query, I want you to remember it and, if the underlying data that it was based on changes in such a way that *if* I was to re-execute the query I'd get a different response, I want you to tell me about it".
This is exactly what we need to support our ideal Cache invalidation scenario. We don't want to re-execute the query, unless the data has changed. You might be thinking "that sounds like a lot of hard work for the database engine". Well it is a bit of work but it's in fact utilising the same mechanism that was already in place for detecting data changes underneath Indexed Views in SQL Server 2000. What it does mean is that we have some limitations on the types of query that this will work with:
· Use two part names for tables (SQL Server needs to know which schema so it knows exactly which table, rather than having to resolve that at query time)
· Reasonable complexity (not all queries can be implemented)
· Specific column lists (i.e. no SELECT * FROM somewhere)
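For example (table and column names assumed):

```sql
-- Fine for query notifications: two-part table name, explicit column list
SELECT EmployeeID, EmployeeName FROM dbo.Employees;

-- Not valid for notifications: SELECT * and a one-part table name
SELECT * FROM Employees;
```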
Normally with SQL Server, we send a query and get a response. We don't expect unannounced data to arrive some time later. So the next issue is how SQL Server can send us the notification. To do this, it uses another new feature called Service Broker. Service Broker is a transacted queue that lives within the database server. Communication occurs between "services" attached to "endpoints". We then need to set up an endpoint ready to receive the notification.
The ADO.NET team did a lot of this for us. They created a new object called the SqlDependency. It is a simple programmable object that wraps most of this functionality for us. Basically, it's used by:
· Create a SqlCommand you wish to execute
· Wrap a new SqlDependency around the command (you pass the command to the constructor of the dependency)
· Handle the OnChange event on the SqlDependency object; it fires when the data changes.
It's that easy. The one remaining requirement is to set up the queue that listens for the notifications. You do this by calling the static SqlDependency.Start() method (passing your connection string) once before you get going.
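In an ASP.NET application, that's typically a one-off call when the application starts, in Global.asax (the AppDb connection string name is assumed):

```csharp
void Application_Start(object sender, EventArgs e)
{
    // Sets up the Service Broker queue and the listener that will
    // receive the query notifications for this application
    string cs = ConfigurationManager
        .ConnectionStrings["AppDb"].ConnectionString;
    SqlDependency.Start(cs);
}
```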
This new object can be used in any ADO.NET v2 application, including in WinForms applications. However, its primary use-case is in ASP.NET Cache invalidation. You can use it in code like this:
using (SqlConnection conn = new SqlConnection("some connection string"))
{
    SqlCommand cmd = new SqlCommand(
        "SELECT EmployeeID, EmployeeName FROM dbo.Employees", conn); // note the two-part table name
    SqlCacheDependency dependency = new SqlCacheDependency(cmd);
    SqlDataAdapter employeesDA = new SqlDataAdapter(cmd);
    DataSet employeesDS = new DataSet("EmployeesDS");
    employeesDA.Fill(employeesDS);
    Cache.Insert("emp", employeesDS, dependency);
}
ASP.NET v2 SqlCacheDependency
Even though the ADO.NET v2 team had done most of the work, the ASP.NET team decided to make it even easier to use with the Cache object. In v2, the Cache object can now have a SqlCacheDependency in addition to the previous options of file and time dependencies. This can be applied to the Cache object and also to other cached objects like user controls. You can specify a directive like:
<%@ OutputCache Duration="2400" VaryByParam="none" SqlDependency="databasename:tablename" %>
You can also directly use the SqlDependencies in code:
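A minimal sketch (the cache key, database entry name and table name are placeholders):

```csharp
// General form: the dependency is identified by a database entry name
// (configured in web.config) and a table name. For SQL Server 2005's
// query notifications, the SqlCommand-based constructor is used instead.
SqlCacheDependency dependency =
    new SqlCacheDependency("databasename", "tablename");
Cache.Insert("emp", employeesDS, dependency);
```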
The code shown is a general form of the SqlCacheDependency settings. In ASP.NET v2, when using SQL Server 2005's special mechanisms, we use the specific string "CommandNotification" rather than database and table names.
What If I Don't Have SQL Server 2005?
Rather than tying their implementation directly to SQL Server 2005, the ASP.NET team also decided to create a pluggable architecture so other forms of SqlCacheDependency could be provided. For example, they have provided an implementation for SQL Server 2000. As SQL Server 2000 doesn't offer query notifications or service broker, the implementation isn't as good as the one for 2005. It works like this:
· A table is created to hold details of change notifications. (A command has been provided to help with this: aspnet_regsql.exe -d databasename -ed -E)
· Triggers are set up on the required tables. (Again, a command has been provided to assist: aspnet_regsql.exe -d databasename -et -t tablename -E)
· ASP.NET polls the change notification table to look for changes.
Obviously, polling is not as efficient as being notified, but it's still a whole lot more efficient than having the application constantly looking up the data.
This mechanism can then be extended to other database engines.
In a previous post, I mentioned lessons I learned from mistakes I think Progress made. However, when looking at the implementation of LINQ in .NET, I think much could have been learned from these types of applications and tools.
Back when I used it, Progress applications implicitly had a database context. It's often said that over 90% of applications are data-related. This is the argument for why data libraries shouldn't be an add-on to the language but should be baked in.
I'd go further and suggest that over 90% of the data-related applications operate in the context of a single database. So why don't applications or middle-tier components have the option to specify that database context globally?
I really liked the detailed integration of database objects and language objects in Progress. It was quite seamless. If I wanted to iterate through customers, I could just say:
for each customer:
If I had the equivalent of a where clause, I could just say:
for each customer where somecolumn = something:
To locate a particular customer row, I could just say:
find customer where somecolumn = something.
If there were multiple possible rows, I could say:
find first customer where somecolumn = something.
Whereas LINQ still feels like an add-on to the language, this was very natural syntax and behaviour. I totally struggle with why there was any need to use SQL-like syntax with LINQ. And given that LINQ is a language extension for querying, unrelated to databases, why not use syntax that's more related to object querying rather than a messed-up variant of SQL?
Another aspect that the Progress language had was default display formats associated with each data type. An entire program (not a great one :-)) could be:
find customer where somecolumn = something.
update customer.
These two lines encapsulated the data access, the update logic, the transaction scoping (poor in this case), the entire screen layout and display/update format for the column, etc. I'm not suggesting I want that style of code any more, particularly given application layering and tiers but it gives an indication of the level of abstraction that you could work at. We're still a long way short of that. (People who use Access are probably chuckling at this point).
I suspect we still need a standardised way to store a variety of column and data type related metadata in the database. We can do a lot of that via extended properties but everyone does it in a different way.