Improvements in the Red Gate source code control suite

One aspect of SQL Server-related development that I see done quite badly when visiting client sites is source code control. So many sites have almost nothing in place for this important aspect of development. By comparison, it’s rare now that I visit an application development organisation that doesn’t have at least reasonable source code control in place. Many DBAs seem to have grown up in environments where source code control systems weren’t in use.

I’ve been pleased to see the Red Gate folk continuing to work in this direction and to expand their toolsets. A while back, they released Source Control for SQL Server: http://www.red-gate.com/products/sql-development/sql-source-control/ and I was also interested to note their upcoming release of Source Control for Oracle: www.red-gate.com/source-control-for-oracle. While I focus predominantly on SQL Server, many sites I work with use more than one type of database engine, and the other engine is often Oracle, so it’s useful to be able to use a single system for both.

These tools support TFS (which I tend to use mostly now) and SVN, which I often see at client sites, usually in conjunction with a Tortoise front end.

If you aren’t using source code control at your own site, you should start looking at these types of tools.

Hardware Sizing Guide for SSAS Tabular

Over the last few months, I’ve been a tech reviewer for John Sirmon and Heidi Steen’s new whitepaper.

It’s published today on MSDN: Hardware Sizing a Tabular Solution (SQL Server Analysis Services)

Its goal is to help you determine the amount of memory and CPU resources needed to accommodate query and processing workloads in a production environment.

Recommended!

Has the term “big data” completely lost meaning yet?

There are some terms in IT that make their way straight into the hype stratosphere. Unfortunately, “big data” is one of these. I see very few systems that I’d contend are actually “big data”. However, I endlessly see the term applied to data stores that are trivial by today’s standards. This might help the marketing teams but it’s sad nonetheless. There are some technological challenges that really do start to bite as the data volume really does become large, and as the proportion of unstructured and semi-structured data increases. There are also some very interesting new tools that allow us to process larger volumes of data faster, particularly in relation to analytics, and a large market is building around Hadoop and its derivatives.

I also see entire teams that claim to focus on big data, yet whenever I discuss their projects with them, none of them are working with databases that are even vaguely in the ballpark of what anyone would have considered big data ten years ago, let alone today. None of the people involved have ever dealt with even really large tables by today’s standards. It’s interesting that so many data-related jobs have suddenly become “big data” jobs. I’d love to know what these teams think they mean when they say they “focus” in these areas. It simply isn’t possible for so many of them to do so.

I had a chuckle when I read this blog post from Warwick Leitch: Call it big data if you like… but you will look foolish. In that example, Warwick was referring to survey data that was held in a spreadsheet…

For a more serious take on this subject though, there is some interesting material in Stephen Few’s recent blog post: Big Data, Big Deal. Stephen argues that big data is simply a marketing campaign. As always, the comments associated with the blog post make for reading that’s as interesting as the post itself. I don’t totally agree with Stephen on this, as there really has been quite a shift in the available tooling in recent years, but much of his discussion is right on target.

Ironically, yesterday I was working with a team on a project that I would qualify as “big data”, yet they had never thought to call it that.

I suspect we as an industry need to start quantifying what the term “big data” really means at any given point in time; it’s clearly a relative term that changes over time. Otherwise, we should either define it further or abandon it entirely, as there is currently a great deal of confusion around it.

The whole discussion reminded me of this wonderful xkcd cartoon that compared production levels in the energy industry: http://xkcd.com/1162/

One of the more amusing calls I had last year was with a US-based fast food chain. They told me that they were OK using SQL Server for their analytic work, but they’d decided they needed to use Oracle for the main database engine, based on the volume of data that they needed to handle efficiently. The Oracle sales guy had done a good job. I was intrigued about what volume of data they thought would justify this. Later, it became apparent that it was about 30GB…

Without triggering a “that’s not a knife, that’s a knife” moment, I’d love to hear what others currently consider “big data”. I don’t consider “using Hadoop (or HDInsight)” to be a synonym for “working with big data”.

Update of Ola Hallengren’s Maintenance Tools (Still Free Too)

SQL Server 2008 R2 SP2 and SQL Server 2012 SP1 introduced sys.dm_db_stats_properties. This dynamic management function provides a new way to check the number of rows that have been modified since the last time a set of statistics was updated. It is described here: http://msdn.microsoft.com/en-us/library/jj553546.aspx

In the latest version of Ola’s tools, he’s made use of this new function for customers that are on one of these SQL Server versions. For customers on earlier versions, he continues to use sys.sysindexes (and its rowmodctr column) in the same way he did previously.
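If you haven’t looked at the function before, a query along these lines shows what it exposes (a minimal sketch; dbo.SomeTable is just a placeholder for one of your own tables):

-- For each statistics object on the table, show when it was last updated,
-- the row count at that time, and the number of modifications since then
SELECT s.name AS StatisticsName,
       sp.last_updated AS LastUpdated,
       sp.rows AS TableRows,
       sp.modification_counter AS ModificationsSinceLastUpdate
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.SomeTable');
GO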

Another notable change is that the new version works better when combined with log shipping on the same databases (it avoids options that would break log shipping chains).

Ola’s tools have been gaining a well-deserved reputation, particularly given the price (i.e., free).
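As an example of where the statistics behaviour surfaces, it’s driven by calls like the following (a sketch only; the parameter values here are purely illustrative):

-- Update statistics across user databases, but only where rows have
-- actually been modified since the last statistics update
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';
GO

With @OnlyModifiedStatistics = 'Y', statistics are only updated when rows have changed, which is exactly where the newer modification counts come into play.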

You can read more about the most recent version of the solution at http://ola.hallengren.com/versions.html or download it at http://ola.hallengren.com/scripts/MaintenanceSolution.sql

Sysprep improvements in SQL Server 2012 SP1 CU2

Sysprep has been a useful tool for preconfiguring SQL Server environments, along with the rest of the operating system. This can be very useful for templated Azure VMs, as well as for on-premises systems and private clouds.

One of the limitations has been that not all SQL Server components were “sysprep-able”.

An important change in CU2 for SQL Server 2012 SP1 is that more SQL Server components are now able to be sysprepped. In particular, SQL Server Analysis Services (SSAS) and SQL Server Integration Services (SSIS) can now be part of this.

This is good news and you’ll find more info in the VM Engineering team’s blog post by Robert Hutchison here: http://blogs.technet.com/b/scvmm/archive/2013/01/25/expanded-sysprep-support-in-sql-server-2012-sp1-cu2.aspx

Can a table have no columns?

One of the things I always tell people I love about consulting/mentoring work is that you see things you just couldn’t make up. They provide interesting material for training classes.

This week I came across something that I wasn’t expecting. I was migrating data from DB2 and, in my scripts, I had presumed that a table would have at least one column. It turns out that in DB2 you can have a table with no columns.

I can only imagine that this comes from creating a table and then dropping all of its columns. I wondered if SQL Server would allow the same, so it was time to find out:

USE tempdb;
GO

-- Create a two-column table so that we can try to drop both columns
CREATE TABLE dbo.TableWithNoColumns
( FirstColumn int,
  SecondColumn int
);
GO

-- Dropping one of the two columns works as expected
ALTER TABLE dbo.TableWithNoColumns DROP COLUMN SecondColumn;
GO

-- Attempting to drop the last remaining column fails
ALTER TABLE dbo.TableWithNoColumns DROP COLUMN FirstColumn;
GO

SQL Server doesn’t allow this. It won’t let you drop the last column. The following message is returned:

Msg 4923, Level 16, State 1, Line 2
ALTER TABLE DROP COLUMN failed because 'FirstColumn' is the only data column in table 'TableWithNoColumns'. A table must have at least one data column.
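For tidiness, you can then drop the demo table:

-- Clean up the demo table
DROP TABLE dbo.TableWithNoColumns;
GO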

Whenever I get asked why I like SQL Server so much, these are the sorts of things that don’t immediately spring to mind, but they’re exactly the reasons why I love working with SQL Server and get frustrated when working in other environments.

SQL Down Under Show 54 – now available – Grant Fritchey – Query tuning, execution plans and kilts

SQL Down Under show 54 is now available for download.

In this show, I spoke with Grant Fritchey about his experiences with query tuning and reading SQL Server execution plans. We also discussed SQL kilts.

You’ll find the show at our podcasts page: http://www.sqldownunder.com/Resources/Podcast.aspx

Enjoy!