SQL Reporting Services: be very careful with using "c" formatting for currency

While on site this week, another common problem that I see everywhere arose again.

When you need to format currency, you use the "c" format, right? It's in nearly every set of course materials I've ever seen.

In many cases, that's wrong!

Take a long, hard think about whether you really want the monetary values that you display automatically converting to the client system's currency. If you are reporting financial figures, for example, is it really appropriate that they should suddenly and mysteriously change from dollars to yen because the client is based in Zama City rather than in New York City? Not likely. A million dollars in sales might be good but a million yen in sales not quite so good 🙂
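In Reporting Services, format codes are .NET format strings, so "c" (or "C") resolves against whatever culture the report is rendered under. A minimal C# sketch of the difference follows; the exact rendered strings depend on the culture data installed, so treat the comments as indicative:

```csharp
using System;
using System.Globalization;

class CurrencyFormatDemo
{
    static void Main()
    {
        decimal sales = 1000000m;

        // "C" follows the culture: the same value renders in different currencies
        Console.WriteLine(sales.ToString("C", new CultureInfo("en-US")));  // $1,000,000.00
        Console.WriteLine(sales.ToString("C", new CultureInfo("ja-JP")));  // renders in yen, e.g. ￥1,000,000

        // An explicit format string pins the currency symbol
        // (the separators still follow the culture unless you pass one):
        Console.WriteLine(sales.ToString("'$'#,##0.00"));
    }
}
```

The same explicit format string can be used as the Format property of a textbox in a report, rather than "c".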

SQL Down Under show 35 – Roger Doherty – SQL Server 2008 for Database Developers

Hi Folks,


SQL Down Under show 35 with guest SQL Server DPE Evangelist Roger Doherty is now available for download from www.sqldownunder.com.


In the show, Roger discusses what developers are looking forward to in SQL Server 2008.

SQL Server Reporting Services: Avoid T-SQL in Reports

I spent time working with Reporting Services again today on a client site. The complexity of the reports we were working on reminded me that I really, really don't like seeing T-SQL scripts (or really any business logic) embedded in reports.

DBAs tend to be considered a conservative bunch. One thing they're usually conservative about is refactoring their databases. In many cases, this is because they have little idea what they will break when they make database changes. I've seen countless organisations that have allowed direct access to tables from client Access-based applications and have then lost control of the ability to ever change anything in the database because of the hundreds of little applications and reports that they might then break. I spend a lot of time talking to them about how to regain control of the situation.

Reporting Services is another area where this can happen. Fortunately, the reports are typically located in a single location. When a database change is needed, at least you don't then have to search the enterprise for reports that might break. However, life is much simpler if all the report does is call a stored procedure instead of having T-SQL embedded in it. I see the advantages of this approach as:

1. Refactoring the database is easier.

2. Unit testing of the code is much easier. You can easily build tests to just call the stored procedures. While possible via the web service interface, it's much harder to test the reports directly and requires a different skill set.

3. It allows the UI to be built by one person and the stored procedures to be built by another.

4. It's easy to deal with report permissions. Typically I create a Reports schema in the database and grant EXECUTE permission on that schema to the reporting users groups. That way, I don't have to manage individual permissions on the stored procedures and I don't have to grant permissions on the tables (or perhaps views).

5. I may well get benefits on procedure cache efficiency.

6. I can use the same stored procedure on multiple reports. I quite often find I have to write each report twice: once to look nice, the other to just dump the data into Excel format. Typically, when clients ask for Excel output, they just want the data, not the pretty formatting. I can use the same procedure for both versions. (Fortunately in SQL Server 2008, it looks like I'll have more flexible Excel rendering options).
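Point 4 can be sketched in T-SQL like this. The Reports schema name is from the text above; the ReportsReaders role name and the procedure name are just illustrations:

```sql
CREATE SCHEMA Reports AUTHORIZATION dbo;
GO

CREATE ROLE ReportsReaders;
GO

-- One grant covers every current and future procedure in the schema,
-- with no permissions needed on the underlying tables or views
GRANT EXECUTE ON SCHEMA::Reports TO ReportsReaders;
GO

-- Report datasets then just call procedures such as Reports.GetSalesSummary
```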

SQL Server Data Services in the Cloud for the WideOpen Web

SQL Server Data Services was announced today at MIX. This new offering from Microsoft provides Internet-based data services using RESTful interfaces like those provided by ADO.NET Data Services.

This is going to enable some outstanding new applications.

Apply for the beta here now: http://www.microsoft.com/sql/dataservices/default.mspx

SQL Cache Dependencies

A little while back I wrote an article on SQL Cache Dependencies. I had a colleague ask about these today so I think it's time to blog it. (For more information, I also did a podcast some time back with my old mates Wally McClure and Paul Glavich from the ASP.NET Podcast. It's at: http://weblogs.asp.net/pglavich/archive/2005/08/21/423160.aspx)

SQL Dependencies in ASP.NET v2


Traditional ASP Applications

In traditional ASP applications, every time the user would request a page, the application would retrieve the data from the database to render the page. Clearly this doesn’t scale very well. It works for a small number of users but not a large number. Often though, this data access can be completely avoided for certain types of data.

Consider the Production.ProductCategory table in the AdventureWorks database. Realistically, how often would it change? If we are building an application that displays the list of product categories, it doesn’t make sense to retrieve it from the database on every request. We could improve on this situation, if the same user was retrieving the list regularly, by holding it in the user’s Session state, but that doesn’t help much when there are a large number of users, as many copies of the same data would be held.

ASP.NET Cache Object

A key advance in ASP.NET was the introduction of the Cache object. It provided a server-wide shared cache that could be used to hold relatively static data. We could read the list of product categories into a dataset and hold it in the Cache.

The issue with any cached data though is when to refresh the data in the cache. If we choose to refresh it once per day, we then could be showing an incorrect list for up to an entire day. If we base the refresh policy on elapsed time, we have a trade-off. The shorter the refresh time, the fresher the data but the less useful the cache is. The longer the refresh interval, the more efficient the cache becomes but the staler the data potentially is.

The ASP.NET Cache object in v1 gave us two options for cache invalidation. One was time based, the other was based on a file dependency. This was originally designed to allow for situations where the data was coming from an XML file stored in the filesystem. A filesystem watcher was set up to monitor the file. If the file had changed, the Cache would be invalidated and refreshed.

But the invalidation option that we really wanted was “when the data changes”.

Cache Invalidation Prior To ASP.NET v2

Prior to ASP.NET v2, there were a number of attempts to provide cache invalidation when the data changed. The most popular version was:

1. Create an extended stored procedure in SQL Server that simply touched a nominated file in the filesystem.

2. Add a trigger (for INSERT, UPDATE and DELETE) on the table to be monitored that called the extended stored procedure.

3. Set the cache to have a dependency on that file.

This option was miles ahead of where we started but it had a few downsides. First, writing extended stored procedures for SQL Server was not trivial. It couldn’t be done in managed code but a C++ template was available to use as a starting point. Second, the trigger would fire for any changes to the table, even if our query was only for certain rows in the table. Third, the mechanism was table-based and didn’t deal with queries that involved joins across multiple tables. Last, the mechanism was tied to a specific database engine and often to a specific version of a database engine.
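Step 2 typically looked something like the following sketch. Note that xp_touchfile and the file path are hypothetical names standing in for the custom extended stored procedure described in step 1:

```sql
CREATE TRIGGER trg_Employees_Changed
ON dbo.Employees
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- Touch the nominated file; the ASP.NET file dependency
    -- watching that file then invalidates the cached data
    EXEC master..xp_touchfile 'C:\CacheFiles\Employees.dep';
END;
```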

ADO.NETv2 SqlDependency And SQL Server 2005

SQL Server 2005 added a wonderful new feature called Query Notifications. (NOTE: nothing to do with Notification Services – they often get confused). The idea here is that you could say to SQL Server “Here is a query I want you to execute. However, when you finish, I don’t want you to forget the query, I want you to remember it and, if the underlying data that it was based on changes in such a way that *if* I was to re-execute the query I’d get a different response, I want you to tell me about it”.

This is exactly what we need to support our ideal Cache invalidation scenario. We don’t want to re-execute the query, unless the data has changed. You might be thinking “that sounds like a lot of hard work for the database engine”. Well it is a bit of work but it’s in fact utilising the same mechanism that was already in place for detecting data changes underneath Indexed Views in SQL Server 2000. What it does mean is that we have some limitations on the types of query that this will work with:

- Use two-part names for tables (SQL Server needs to know which schema so it knows exactly which table, rather than having to resolve that at query time)

- Reasonable complexity (not all queries can be supported)

- Specific column lists (ie: no SELECT * FROM somewhere)

Normally with SQL Server, we send a query and get a response. We don’t expect unannounced data to arrive some time later. So the next issue is how SQL Server can send us the notification. To do this, it uses another new feature called Service Broker. Service Broker is a transacted queue that lives within the database server. Communication occurs between “services” attached to “endpoints”. We then need to set up an endpoint ready to receive the notification.
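Query notifications depend on Service Broker being enabled in the database, which you can check and, if necessary, turn on like this (the database name assumes the AdventureWorks example used earlier):

```sql
-- Check whether Service Broker is currently enabled
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = 'AdventureWorks';

-- Enable it if required (this needs exclusive access to the database)
ALTER DATABASE AdventureWorks SET ENABLE_BROKER;
```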

The ADO.NET team did a lot of this for us. They created a new object called the SqlDependency. It is a simple programmable object that wraps most of this functionality for us. Basically, it’s used by:

- Create the SqlCommand you wish to execute

- Wrap a new SqlDependency around the command (you pass the command to the constructor of the dependency)

- Handle the event on the SqlDependency object that fires when the data changes

It’s that easy. The one remaining requirement is to set up the queue that listens for the notifications. You do this by calling the static SqlDependency.Start() method (passing your connection string) once, before you get going.

This new object can be used in any ADO.NET v2 application, including in WinForms applications. However, its primary use-case is in ASP.NET Cache invalidation. You can use it in code like this:

using (SqlConnection conn = new SqlConnection("some connection string"))
{
    SqlCommand cmd = new SqlCommand(
        "SELECT EmployeeID, EmployeeName FROM dbo.Employees", conn);

    SqlCacheDependency dependency = new SqlCacheDependency(cmd);

    SqlDataAdapter employeesDA = new SqlDataAdapter(cmd);
    DataSet employeesDS = new DataSet("EmployeesDS");
    employeesDA.Fill(employeesDS);

    Cache.Insert("emp", employeesDS, dependency);
}

ASP.NET v2 SqlCacheDependency

Even though the ADO.NET v2 team had done most of the work, the ASP.NET team decided to make it even easier to use with the Cache object. In v2, the Cache object can now have a SqlCacheDependency in addition to the previous options of file and time dependencies. This can be applied to the Cache object and also to other cached objects like user controls. You can specify a directive like:

<%@ OutputCache Duration="2400" VaryByParam="none"
                SqlDependency="northwind:Employees" %>

You can also directly use a SqlCacheDependency in code:

      Cache.Insert("emp", employeesDS,
          new SqlCacheDependency("northwind", "Employees"));

The code shown is a general form of the SqlCacheDependency settings. In ASP.NET v2, when using SQL Server 2005’s special mechanisms, we use the specific string “CommandNotification” rather than database and table names.

What If I Don’t Have SQL Server 2005?

Rather than tying their implementation directly to SQL Server 2005, the ASP.NET team also decided to create a pluggable architecture so other forms of SqlCacheDependency could be provided. For example, they have provided an implementation for SQL Server 2000. As SQL Server 2000 doesn’t offer query notifications or service broker, the implementation isn’t as good as the one for 2005. It works like this:

- A table is created to hold details of change notifications. (A command has been provided to help with this: aspnet_regsql.exe -d databasename -ed -E)

- Triggers are set up on the required tables. (Again a command has been provided to assist: aspnet_regsql.exe -d databasename -et -t tablename -E)

- ASP.NET polls the change notification table to look for changes.

Obviously polling is not as efficient as being notified but it's still a whole lot more efficient than having the application constantly looking up the data.
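The polling behaviour is then configured in web.config. A sketch follows; the connection string name and the poll interval (in milliseconds) are illustrative:

```xml
<system.web>
  <caching>
    <sqlCacheDependency enabled="true" pollTime="5000">
      <databases>
        <!-- name is the value used in SqlDependency="northwind:Employees" -->
        <add name="northwind"
             connectionStringName="NorthwindConnectionString" />
      </databases>
    </sqlCacheDependency>
  </caching>
</system.web>
```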

This mechanism can then be extended to other database engines.

LINQ – Lessons that could have been learned from languages like Progress

In a previous post, I mentioned lessons I learned from areas where I think Progress made mistakes. However, when looking at the implementation of LINQ in .NET, I think much could have been learned from these types of applications and tools.

At the time I used it, Progress applications implicitly had a database context. It's often said that over 90% of applications are data-related. This is the argument for why data libraries shouldn't be an add-on to the language but should be baked in.

I'd go further and suggest that over 90% of the data-related applications operate in the context of a single database. So why don't applications or middle-tier components have the option to specify that database context globally?

I really liked the deep integration of database objects and language objects in Progress. It was quite seamless. If I wanted to iterate through customers, I could just say:

for each customer:

If I had the equivalent of a where clause, I could just say:

for each customer where somecolumn = something:

To locate a particular customer row, I could just say:

find customer where somecolumn = something.

If there were multiple possible rows, I could say:

find first customer where somecolumn = something.

Whereas LINQ still feels like an add-on to the language, this is very natural syntax and behaviour. I totally struggle with why there was any need to use SQL-like syntax with LINQ at all. And given that LINQ is a language extension for querying, unrelated to databases, why not use syntax that's more related to object querying rather than a messed-up variant of SQL?
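For comparison, the Progress "for each customer where ..." shape looks like this in LINQ. The Customer class and the sample data are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Customer
{
    public string Name;
    public string Region;
}

class LinqSyntaxDemo
{
    static void Main()
    {
        var customers = new List<Customer>
        {
            new Customer { Name = "Fred", Region = "AU" },
            new Customer { Name = "Sally", Region = "US" }
        };

        // The SQL-like query syntax:
        var fromAustralia = from c in customers
                            where c.Region == "AU"
                            select c;

        // The method-based syntax arguably reads more like object
        // querying than like a reordered SQL statement:
        var fromAustralia2 = customers.Where(c => c.Region == "AU");

        foreach (var c in fromAustralia)
            Console.WriteLine(c.Name);
    }
}
```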

Another aspect that the Progress language had was default display formats associated with each data type. An entire program (not a great one :-)) could be:

find customer where somecolumn = something.

update customer.name.

These two lines encapsulated the data access, the update logic, the transaction scoping (poor in this case), the entire screen layout and display/update format for the column, etc. I'm not suggesting I want that style of code any more, particularly given application layering and tiers but it gives an indication of the level of abstraction that you could work at. We're still a long way short of that. (People who use Access are probably chuckling at this point).

I suspect we still need a standardised way to store a variety of column and data type related metadata in the database. We can do a lot of that via extended properties but everyone does it in a different way.

Progress DBMS – three lessons for ISVs including collective deafness

Alphatross posted an interesting reply to my blog entry about LINQ and Entity Framework terminology. He asked if I'd worked with the Progress DBMS. I have. Here are my thoughts on it. Obviously others will have a different view of history but I mention Progress over and over again when I'm teaching classes as three examples related to them really hit home for me.

I worked with Progress for many years, starting in about 1983 through to some time in the 90's.

When they started, they wrote a product that just made sense. They didn't care about being compatible with any older code and made an application environment and language that enabled you to write good code quickly. The thing I loved most about the product was its stability. We migrated sites over to Progress and they just went quiet. When you are an ISV, quiet sites are a good thing. For character-based RDBMS systems, I still haven't seen anything I prefer for the types of applications we were building.

Then Windows appeared.

Lesson 1:

Companies that have had a good run and good track record tend to build up a collective deafness and eventually seem to have no mechanism to hear it when people say "we don't like your products". Version 6.2 of Progress was wonderful. Version 7 of Progress was horrid. I didn't like it. I told them that. Their marketing folk kept insisting that the lack of new version adoption was due to "you guys not wanting to move to Windows". We kept saying "we are moving to Windows, in fact we've already done so but not with your product". They just didn't *want* to hear the real message and were deafened to that message by their own previous successes.

Lesson 2:

Windows applications need to behave like Windows applications. This seems simple but it's very important. Progress didn't "get" it. They kept producing versions of their applications which were Windows-like but which didn't really behave like Windows applications. As one simple example, if they displayed items in a list box, there would always be a scroll bar, even if there was nothing to scroll. This is very confusing to a user who sees a full list box and a scroll bar and thinks there will be more found by scrolling. I asked why they did this and they told me reasons that had to do with keeping compatibility with their character-based (ie: non-Windows) versions. One key reason for the success of Windows has been the shared knowledge you get from the platform. It minimises the amount you need to learn about new applications as all applications (should!) behave in a similar way. 

This is the problem when you have existing legacy customers. If you endlessly try to keep backwards compatibility, you'll eventually be run over by a new competitor that writes a new application that just makes sense for the current time and couldn't care less about being compatible with your old code.

Lesson 3:

Another thing that drove me crazy with Progress was that I could never keep purchasing the same base license versions. As an ISV, if I did all my building and testing on say version 6.2K, that's the version I want to install at customer sites. I don't want to have my application supplied with 6.2L today or 6.2L+ tomorrow then 6.2M a month later.

For customers using my application, no new feature added to the system is going to change *anything* in a positive way as the application won't have been built to use it. The best I could hope for from a new version is that the application would still work the same way it used to. All other possible outcomes are worse than where I am now, not better, except for the hopefully rare situation where there's an underlying serious problem that I haven't stumbled across in testing.

Software Engineering process matters much more than new versions. And the first rule is that you don't keep randomly changing the underlying platform.

Are certification exams useful for experienced people?

There has been an interesting discussion on an internal Microsoft mailing list regarding the certification process and exams. I've seen posts from many people who are very experienced with products saying they can't see any point in the certification exams, and that competencies in the Microsoft Partner program shouldn't be based on exams. They feel experienced people should somehow just be recognised for their other contributions instead.


Regarding the certification process, I don’t agree that anyone should be just “grandfathered” in. Anyone who has a great deal of knowledge and experience should be able to just take the exams and be done with it. I find it really easy to take them at events like TechEd. I typically book in for one each morning at 8am and usually it’s only $50.


Alternatively, I do them in beta where they are free. The biggest hassle I have with doing the beta exams is that they require me to know where I’ll be on a particular day a few months’ time ahead. I usually haven’t got a clue even what city I’ll be in that far out, unless there is an event on.


I also question the idea that none of them would get any learning benefit from the process. I often see people who have worked with a product for ten or more years who can’t pass an exam on it. Mostly, that’s because they use 40% of a product every day and the exam covers aspects of the product they never look at.


One of the reasons I do take the exams is it makes me study the whole product. Using SQL Server as an example, it helps avoid the problem I see where people use SQL Server 2005 or 2008 the same way they used SQL Server 6.5 or 7. I like getting the prep guides and making sure I’ve covered off all the topics on it. Given I spend so much of my time evangelising topics at the prerelease or early adopter stage, there usually isn’t much in the exams that’s a surprise by the time the product comes out. After I’ve covered the material, doing the exam is then almost an afterthought.


However, I don’t consider that having done the exam means much at all. You can buy a complete word-for-word copy of the exams for about $30 out of China if all you want to do is just pass the exams. Until this nonsense stops, the fact that someone has done them means almost nothing on its own, unless they took them in the beta phase where no such cheating options are available.


For those that take the exams seriously though, there definitely is value in the preparation process, rather than in actually taking the exam. Once you've done the prep work, you might as well do the exam. An additional benefit is that many people are timeline driven and having an exam booked tends to create a sense of urgency in preparation.