I'm really happy with the Reporting Services team. I've had a few items on Connect where I've asked for features and they've come back and said "Done!".
One of these is RenderFormat. I wanted the ability to change my formatting based upon where I was rendering to. For me, this came from a need to have quite different output when rendered to Excel than when rendered to HTML or PDF.
Now, there is a built-in global called RenderFormat that allows me to use RenderFormat.Name in expressions. Woohoo.
But the good part is that they went further. The team realized that some decisions simply depend on whether or not the target is interactive ie: perhaps I want a very different page length for HTML than for PDF. They added RenderFormat.IsInteractive to allow us to test for this.
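For example (the format name "EXCEL" and the values here are purely illustrative), you can hide a report item when rendering to Excel with an expression such as:

=IIF(Globals!RenderFormat.Name = "EXCEL", True, False)

or vary a setting such as a page-break row count based on whether the target is interactive:

=IIF(Globals!RenderFormat.IsInteractive, 25, 9999)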
Thank you Reporting Services team.
Continuing on the theme of describing StreamInsight, the next major concept is the Event Model. Events in StreamInsight are made up of two sets of data. One set is the basic information required by the StreamInsight engine such as when an event occurred. The other set is the user data contained within the events, called the "payload".
You can define what is contained in the payload. It is effectively a .NET class or struct that exposes a set of public members. We'll talk more about the payload in another post.
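As a trivial example (the class and member names are just made up for illustration), a payload for toll-booth readings could be as simple as:

public class TollReading
{
    public string TollBoothId { get; set; }
    public string LicencePlate { get; set; }
    public decimal Amount { get; set; }
}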
I mentioned before that StreamInsight has details of when an event occurs. It's actually more flexible than just storing a date and time. The temporal information that is stored is determined by the EventShape. Events have three potential shapes.
EventShape.Interval provides a start and stop time for an event and is used for events that have a duration. The times are stored as UTC.
EventShape.Point deals with events that occur at a single point in time. The start time is all that matters. An end time is available but it is defined as the start time plus one chronon (the smallest granularity of time storage). We mostly tend to ignore the end time.
EventShape.Edge deals with events that occur over an interval but at the time an event is recorded, we only have the start time. Later the event is updated when we find out the end time.
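As a rough sketch of how the three shapes differ when events are created (this assumes the typed-event factory helpers from the IObservable-style input model such as PointEvent.CreateInsert; the exact names and overloads vary between StreamInsight builds, and TollReading is just the illustrative payload type from above):

using System;
using Microsoft.ComplexEventProcessing;

public static class ShapeExamples
{
    public static void Create(TollReading reading)
    {
        DateTimeOffset arrived = DateTimeOffset.UtcNow;

        // Point: a single instant; the end time is implicitly one chronon after the start.
        var point = PointEvent.CreateInsert(arrived, reading);

        // Interval: both the start and the end are known when the event is created.
        var interval = IntervalEvent.CreateInsert(arrived, arrived.AddSeconds(30), reading);

        // Edge: only the start is known now; the matching end edge is supplied later.
        var edgeStart = EdgeEvent.CreateStart(arrived, reading);
        var edgeEnd = EdgeEvent.CreateEnd(arrived, arrived.AddSeconds(30), reading);
    }
}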
Since I posted some StreamInsight info the other day, I've had a bunch of people asking me what StreamInsight is used for.
StreamInsight is Microsoft's implementation of Complex Event Processing. This is not a new market but it is new territory for Microsoft.
Complex Event Processing (CEP) is all about querying data while it's still in flight. Traditionally, we obtain data from a source, put it into a database and then query the database. When using CEP, we query the data *before* it hits a database and derive information that helps us make rapid business decisions, potentially also including automated business decisions.
I liked the way that one of our new colleagues (Sharon Bjeletich) put it to me: "It's about throwing the data at the query, rather than throwing the query at the data".
There are lots of places that this makes sense but they all involve relatively high data rates. Good examples of these are automated trading in capital markets, fraud detection in networks or in casino operations, battlefield control systems for military use, outbreak management for public health, etc.
While StreamInsight may combine the data with reference data stored in SQL Server, the primary development skills needed for working with it are .NET development skills.
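To give a feel for what that looks like in code (a sketch only: SensorReading is a made-up payload type and the creation and binding of the input stream from an adapter or observable source is omitted), a standing StreamInsight query is just LINQ over a stream:

using Microsoft.ComplexEventProcessing;
using Microsoft.ComplexEventProcessing.Linq;

public class SensorReading
{
    public string SensorId { get; set; }
    public double Temperature { get; set; }
}

public static class Queries
{
    // Registered once, then evaluated continuously as events flow through:
    // the data is thrown at the query rather than the query at stored data.
    public static CepStream<SensorReading> HighTemperatures(CepStream<SensorReading> input)
    {
        return from e in input
               where e.Temperature > 40.0
               select e;
    }
}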
Our Microsoft RD lead Kevin Schuler has asked us to post predictions for 2010 that will appear in a special edition of TheRegion. (Check out www.theregion.com for an interesting blog if you haven't already.) Here's mine:
Against all perceived wisdom, I suspect that the interest in developing general applications for the iPhone store will peak this year, unless Apple comes out with a more innovative platform. At present, Apple have completely won the mindshare in relation to phone applications, not just the hardware game. All major websites I deal with are starting to create iPhone friendly versions. Early on, we heard amazing stories of how developers had made a fortune through the appstore. I see a few problems becoming more apparent this year:
1. The price of applications. Even super-sophisticated applications are considered over-priced now at $8. While there's some truth that it's "just a numbers game", it's getting much harder to justify the effort required to build the next generation of apps as the price drops lower and lower.
2. Political control of the appstore. Having a developer story that says you can spend six months building an app, making it beautiful and functional, and then Apple could decide on a whim not to let you sell it (and you have no other way to sell it) isn't a good story. That's particularly the case when the reasons might seem unreasonable to you eg: not competing with built-in functionality or not providing a service that their "partners" already provide.
3. Most serious applications being built now seem to be front-ends for standard business sites. There's nothing wrong with that but it's the interest in building general purpose applications that I'm suggesting will peak.
4. You can't find things in the appstore any more. The beauty of the appstore has become its ugly side too. How do you efficiently find apps that are worthwhile amongst the load of rubbish that's in there? And the volume is increasing daily.
What do you think will happen in the software industry this year?
OK, I'm sure many will have already seen this but if you haven't:
Create a folder in Windows 7 and rename it to: GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}
Then check out its contents. That's seriously geeky and cool.
While building content for the upcoming Metro training for SQL Server 2008 R2, Bill Chesnut and I were puzzled about the Adjust option for AdvanceTimePolicy in the AdvanceTimeSettings for a stream. It was described in Books Online as causing the timestamp for the event to be adjusted forward to the time of the latest CTI (current time increment). No matter what we tried though, we couldn't seem to get it to do anything.
After discussing it with Roman from the product group, we worked out what our issue was.
We were using EventShape.Point. That means that the start time and end time are one chronon apart (the smallest unit of time for the system). Our events had times that were prior to the latest CTI timestamp but we weren't seeing them be adjusted.
Turns out that the adjustment only occurs when your event interval (ie: from start time to end time) overlaps the CTI. Then, the start time of the event is adjusted to match the CTI. This means the event starts at the CTI timestamp but still ends at the time it was originally recorded as ending.
Because we were using EventShape.Point, no adjustment was occurring as our event didn't overlap the CTI. Had we been using EventShape.Interval, with a start time before the CTI and an end time at or after the CTI, we would have seen the adjustment working.
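To make the rule concrete, here's the behaviour we observed expressed as plain C# (just an illustration of the logic, not the StreamInsight API):

// Returns the adjusted start time for an event under AdvanceTimePolicy.Adjust,
// or null when no adjustment happens (as with our point events).
static DateTime? AdjustedStartTime(DateTime eventStart, DateTime eventEnd, DateTime cti)
{
    if (eventStart >= cti)
        return eventStart;   // the event doesn't violate the CTI, so it's left alone

    if (eventEnd > cti)
        return cti;          // the interval overlaps the CTI, so the start is moved forward to the CTI

    return null;             // the event ends at or before the CTI (as a point event here does),
                             // so it never overlaps the CTI and no adjustment occurs
}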
One of the new items coming with SQL Server 2008 R2 and Visual Studio 2010 is the Data-Tier Application. It is designed for (what are described as) departmental applications.
What a "deparmental" application is deserves some thought. Mostly it relates to the size of the application. What percentage of your databases (count of databases not their volume) would be under say 2GB? What about 10GB? The argument is that for most sites, it's a surprisingly high percentage. Even most sites I see at the Enterprise level have one or two very large databases and the rest are fairly small. Does that apply to your sites?
I've been doing a lot of work lately with StreamInsight, coming in SQL Server 2008 R2.
There are three development models you can use with StreamInsight: Implicit Server, Explicit Server and IObservable/IObserver.
When I was working through material on the IObservable/IObserver pattern, it wasn't immediately apparent to me where it had come from. It's based on the Rx Framework for .NET (Reactive Extensions). I finally got to watch the PDC Online session from Erik Meijer on the Rx Framework a few days ago and so many things suddenly fell into place for me.
If you have an interest in working with StreamInsight, I'd recommend watching Erik Meijer's session on the Reactive Extensions here: http://www.microsoftpdc.com.
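If the pattern is new to you, here's a bare-bones illustration using nothing more than the .NET 4 IObservable<T> and IObserver<T> interfaces (all of the names are mine; Rx and StreamInsight layer far richer machinery on top of this):

using System;

// A console observer that simply reports whatever the source pushes to it.
class ConsoleObserver : IObserver<double>
{
    public void OnNext(double value) { Console.WriteLine("Received: {0}", value); }
    public void OnError(Exception error) { Console.WriteLine("Failed: {0}", error.Message); }
    public void OnCompleted() { Console.WriteLine("Stream finished."); }
}

// A toy observable source that pushes a couple of values and completes.
class SimpleSource : IObservable<double>
{
    public IDisposable Subscribe(IObserver<double> observer)
    {
        observer.OnNext(21.5);
        observer.OnNext(22.1);
        observer.OnCompleted();
        return new Unsubscriber();
    }

    private class Unsubscriber : IDisposable
    {
        public void Dispose() { /* nothing to tear down in this toy example */ }
    }
}

class Program
{
    static void Main()
    {
        new SimpleSource().Subscribe(new ConsoleObserver());
    }
}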
I was recently interviewed by Andrew and Michael for their Frankly Speaking podcast. It was great fun. The show is here: http://www.noisetosignal.com.au/franklyspeaking/?p=71
I was doing some varied reading this morning and stumbled across this article by Paul Graham. I want to highlight this passage:
"We now have several examples to prove that amateurs can surpass professionals, when they have the right kind of system to channel their efforts. Wikipedia may be the most famous. Experts have given Wikipedia middling reviews, but they miss the critical point: it's good enough. And it's free, which means people actually read it. On the web, articles you have to pay for might as well not exist. Even if you were willing to pay to read them yourself, you can't link to them. They're not part of the conversation."
It pretty much sums up what I've been thinking for some time about sites with paid-for articles. Do they have any future at all? I was interested to see Rupert Murdoch placing his hopes on a paid-for future. He's arguing that free news sites are dead. Can't say I agree with that. I'm sure they'll be different to what we've been used to in the past.
When I'm searching for technical topics, I have to say that every time I see a link to a site I know is paid, I don't think "I must join that site some time"; I simply skip over their content. A good indication on Google is page caching: Google will happily turn off page caching for paid-for sites. I wish they had an option to simply leave them out of my results set. When I'm searching for results, any page that doesn't have a cached version available is probably no longer of interest to me.
I think Paul's last sentence is the most telling: "They're not part of the conversation". You can't build a buzz or discussion around something that people have to pay to see.
What this does raise is the question of how technical content will be generated in future. Is our future one that's full of "good enough" technical articles too? Or is advertising the only way forward, much as we might wish it wasn't?