Power BI Designer March Update includes Google Analytics Connector

The Power BI team has released details of their March update to the standalone Power BI Designer.

You’ll find details of the update here: http://blogs.msdn.com/b/powerbi/archive/2015/03/26/7-new-updates-to-the-power-bi-designer-preview-march-2015.aspx

The first thing I noticed is just how much faster the tool operates. The blog post mentioned performance enhancements but I really, really noticed them.

One particular enhancement that I wanted to call out was the addition of a connector for Google Analytics. I’ve been trying it this morning and have found it really easy to use. All the standard categories appear as available sets of data:


I’ve picked those that I’m interested in:


Then added them to a report:



Very easy. And it’s clear which types of device and operating system we need to keep providing the best experience for at SQL Down Under.

PowerShell Invoke-Sqlcmd -Hostname Parameter sets the Application Name instead of the Host Name

Two of the parameters in SQL Server connections are the Application Name and the Host Name. You can see these in SQL Server if you execute the following command:
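A query along these lines (a sketch over sys.dm_exec_sessions; the exact command shown in the original screenshot may have differed) returns both values for each connection:

```sql
-- Show the application name and host name reported by each user session
SELECT session_id, program_name, host_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```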


I’ve always been a fan of having applications identify themselves in their connection strings. It makes tasks like tracing much easier. The tools supplied with SQL Server do a reasonable job of that as you can see above. But many other tools don’t do such a good job.

I was working at a site today where they are using PowerShell to execute commands for monitoring. I noticed that when using Invoke-Sqlcmd, the PowerShell commands did not set the Application Name in the connection string. Note the following example:
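Something along these lines reproduces the behaviour (the server name is a placeholder; the exact commands in the original screenshot weren't captured):

```powershell
# No application name is specified, and Invoke-Sqlcmd doesn't set one itself.
# The session's program_name typically comes back as something generic,
# such as ".Net SqlClient Data Provider", rather than identifying the script.
Invoke-Sqlcmd -ServerInstance "localhost" -Database "master" `
    -Query "SELECT program_name, host_name FROM sys.dm_exec_sessions WHERE session_id = @@SPID;"
```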


I then tried to work out how to set the Application Name. The documentation for Invoke-Sqlcmd shows that the Host Name (the equivalent of the sqlcmd -H option) is set by using the -Hostname parameter.


However, note that if you use the -Hostname parameter, it actually sets the Application Name and does not set the Host Name:
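A sketch of the same test (server name and "MyMonitoringJob" are hypothetical):

```powershell
# -Hostname is documented as setting the host name (sqlcmd's -H option),
# but the supplied value shows up in program_name (the Application Name),
# while host_name is left unchanged.
Invoke-Sqlcmd -ServerInstance "localhost" -Database "master" -Hostname "MyMonitoringJob" `
    -Query "SELECT program_name, host_name FROM sys.dm_exec_sessions WHERE session_id = @@SPID;"
```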


Suggest Compression Strategies for Tables and Indexes

I make no secret of the fact that I think that table compression was the best thing added to SQL Server 2008.

It seems like I’m talking to clients about it every week. They have Enterprise Edition but aren’t using it, because they’re familiar with O/S-level disk compression and assume that compression means smaller but slower. That often couldn’t be further from the truth, particularly for customers whose databases are currently I/O bound or who don’t have enough memory in their buffer cache for their workload.

It’s important, though, to make sure the correct form of compression is used for each table and index. In fact, if a table is partitioned, the decision should be made for each partition of each table and index. We often recommend different compression strategies for different partitions of the same table.
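To illustrate (with a hypothetical table name and partition numbers), applying different compression settings to different partitions of one table looks like this:

```sql
-- Hypothetical example: PAGE compression for an older, mostly-read partition,
-- ROW compression for the current, heavily-updated partition
ALTER TABLE dbo.Transactions
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = PAGE);

ALTER TABLE dbo.Transactions
REBUILD PARTITION = 4 WITH (DATA_COMPRESSION = ROW);
```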

To make it easier to decide which form of compression might suit, I’ve created another version of the script that I use as a first pass in determining compression strategies. The earlier version suggested ROW and/or PAGE compression. While this version can do the same, it also considers the clustered columnstore indexes that are part of the Enterprise Edition of SQL Server 2014. (I’ve included a switch to turn that function on or off in case you don’t have 2014 yet).

The cutoff points are configurable but my default values are as shown.

  • CCI (Clustered Columnstore Index) will be recommended where the partition is scanned more than 95% of the time, is updated less than 10% of the time, seeks and lookups account for less than 5% of operations, and there are at least 800,000 rows. It will also only be recommended if clustered columnstore is flagged as supported.
  • PAGE will be recommended where the partition is scanned more than 75% of the time and updated less than 20% of the time.
  • ROW will be recommended in all other cases. We believe that ROW should be the default in SQL Server across the board, instead of NONE.

It is important that this script only be run after the system has been in use for long enough to have experienced typical usage patterns: it relies on sys.dm_db_index_usage_stats, which is reset whenever the server restarts.

I hope this helps you to get into using compression where it makes sense for you.


-- Suggest data compression changes for tables and indexes
-- Dr Greg Low
-- March 2015

DECLARE @ClusteredColumnstoreScansCutoff int = 95;
DECLARE @ClusteredColumnstoreUpdatesCutoff int = 10;
DECLARE @ClusteredColumnstoreSeeksLookupsCutoff int = 5;
DECLARE @ClusteredColumnstoreTotalRowsCutoff bigint = 800000;

DECLARE @PageCompressionScansCutoff int = 75;
DECLARE @PageCompressionUpdatesCutoff int = 20;

DECLARE @IsClusteredColumnstoreSupported bit = 1;

WITH IndexUsageStats
AS
( SELECT object_id AS ObjectID,
         index_id AS IndexID,
         COALESCE(user_seeks, 0) + COALESCE(system_seeks, 0) AS Seeks,
         COALESCE(user_scans, 0) + COALESCE(system_scans, 0) AS Scans,
         COALESCE(user_lookups, 0) + COALESCE(system_lookups, 0) AS Lookups,
         COALESCE(user_updates, 0) + COALESCE(system_updates, 0) AS Updates,
         COALESCE(user_seeks, 0) + COALESCE(system_seeks, 0)
           + COALESCE(user_scans, 0) + COALESCE(system_scans, 0)
           + COALESCE(user_lookups, 0) + COALESCE(system_lookups, 0)
           + COALESCE(user_updates, 0) + COALESCE(system_updates, 0) AS Operations
  FROM sys.dm_db_index_usage_stats
  WHERE database_id = DB_ID()
),
PartitionUsageDetails
AS
( SELECT SCHEMA_NAME(t.schema_id) AS SchemaName,
         t.name AS TableName,
         i.name AS IndexName,
         i.index_id AS IndexID,
         i.type_desc AS IndexType,
         CASE WHEN COALESCE(Operations, 0) <> 0
              THEN CAST((COALESCE(Seeks, 0) + COALESCE(Lookups, 0)) * 100.0
                        / COALESCE(Operations, 0) AS int)
              ELSE 0
         END AS SeekLookupPercentage,
         CASE WHEN COALESCE(Operations, 0) <> 0
              THEN CAST(COALESCE(Scans, 0) * 100.0 / COALESCE(Operations, 0) AS int)
              ELSE 0
         END AS ScanPercentage,
         CASE WHEN COALESCE(Operations, 0) <> 0
              THEN CAST(COALESCE(Updates, 0) * 100.0 / COALESCE(Operations, 0) AS int)
              ELSE 0
         END AS UpdatePercentage,
         p.partition_number AS PartitionNumber,
         p.data_compression_desc AS CurrentCompression,
         p.rows AS TotalRows
  FROM sys.tables AS t
  INNER JOIN sys.indexes AS i
  ON t.object_id = i.object_id
  INNER JOIN sys.partitions AS p
  ON i.object_id = p.object_id
  AND i.index_id = p.index_id
  LEFT OUTER JOIN IndexUsageStats AS ius
  ON i.object_id = ius.ObjectID
  AND i.index_id = ius.IndexID
  WHERE i.index_id > 0
  AND t.is_ms_shipped = 0
  AND t.type = N'U'
),
SuggestedPartitionCompressionTypes
AS
( SELECT pud.*,
         CASE WHEN pud.ScanPercentage >= @ClusteredColumnstoreScansCutoff
              AND pud.UpdatePercentage <= @ClusteredColumnstoreUpdatesCutoff
              AND pud.SeekLookupPercentage <= @ClusteredColumnstoreSeeksLookupsCutoff
              AND pud.TotalRows >= @ClusteredColumnstoreTotalRowsCutoff
              AND @IsClusteredColumnstoreSupported <> 0
              THEN N'CCI'
              WHEN pud.ScanPercentage >= @PageCompressionScansCutoff
              AND pud.UpdatePercentage <= @PageCompressionUpdatesCutoff
              THEN N'PAGE'
              ELSE N'ROW'
         END AS SuggestedCompression
  FROM PartitionUsageDetails AS pud
)
SELECT spct.SchemaName,
       spct.TableName,
       spct.IndexName,
       spct.IndexType,
       spct.PartitionNumber,
       spct.CurrentCompression,
       spct.SuggestedCompression
FROM SuggestedPartitionCompressionTypes AS spct
WHERE spct.SuggestedCompression <> spct.CurrentCompression
ORDER BY spct.SchemaName,
         spct.TableName,
         CASE WHEN spct.IndexID = 1 THEN 0 ELSE 1 END,
         spct.IndexName;

Rebuild Partitioned Clustered Columnstore Indexes

A short time back, I provided an updated script that can be used to rebuild clustered columnstore indexes when necessary.

One thing that wasn’t covered by that script was the ability to rebuild only individual index partitions. Below, I’ve provided an updated version of the script that will rebuild only the affected partitions if the clustered columnstore index is partitioned.


-- Rebuild clustered columnstore indexes when necessary
-- Dr Greg Low v2.2

-- rebuild when more than supplied % of rows have been deleted
-- rebuild when any segments contain more than supplied % deleted rows
-- rebuild if more than supplied number of segments are empty
-- rebuild when segments are on average less than the supplied % of
--     maximum capacity

DECLARE @DeletedTotalRowPercentage int = 10;
DECLARE @DeletedSegmentsRowPercentage int = 20;
DECLARE @EmptySegmentsAllowed int = 0;
DECLARE @SegmentPotentialFullnessPercentage int = 90;

DECLARE @IndexesToRebuild TABLE (SchemaName sysname,
                                 TableName sysname,
                                 IndexName sysname,
                                 PartitionNumber int,
                                 IsPartitioned bit);

WITH ClusteredColumnstoreIndexes
AS
( SELECT t.object_id AS ObjectID,
         SCHEMA_NAME(t.schema_id) AS SchemaName,
         t.name AS TableName,
         i.name AS IndexName
  FROM sys.indexes AS i
  INNER JOIN sys.tables AS t
  ON i.object_id = t.object_id
  WHERE i.type = 5
),
RowGroups
AS
( SELECT csrg.object_id AS ObjectID,
         csrg.partition_number AS PartitionNumber,
         csrg.total_rows AS TotalRows,
         csrg.deleted_rows AS DeletedRows,
         csrg.deleted_rows * 100.0 / csrg.total_rows AS DeletedPercentage,
         CASE WHEN csrg.total_rows = csrg.deleted_rows
              THEN 1 ELSE 0
         END AS IsEmptySegment
  FROM sys.column_store_row_groups AS csrg
  WHERE csrg.state = 3 -- Compressed (Ignoring: 0 - Hidden, 1 - Open, 2 - Closed, 4 - Tombstone)
),
IndexPartitionStats
AS
( SELECT cci.ObjectID,
         cci.SchemaName,
         cci.TableName,
         cci.IndexName,
         rg.PartitionNumber,
         SUM(CAST(rg.TotalRows AS decimal(18,0))) AS TotalRows,
         SUM(CAST(rg.DeletedRows AS decimal(18,0))) AS DeletedRows,
         SUM(CAST(rg.DeletedRows AS decimal(18,0))) * 100.0
           / SUM(CAST(rg.TotalRows AS decimal(18,0)))
           AS DeletedPercentage,
         SUM(rg.IsEmptySegment) AS EmptySegments,
         COUNT(rg.TotalRows) AS TotalSegments
  FROM ClusteredColumnstoreIndexes AS cci
  INNER JOIN RowGroups AS rg
  ON cci.ObjectID = rg.ObjectID
  GROUP BY cci.ObjectID, cci.SchemaName, cci.TableName, cci.IndexName, rg.PartitionNumber
)
INSERT @IndexesToRebuild (SchemaName, TableName, IndexName, PartitionNumber, IsPartitioned)
SELECT s.SchemaName,
       s.TableName,
       s.IndexName,
       s.PartitionNumber,
       CASE WHEN EXISTS (SELECT 1
                         FROM IndexPartitionStats AS ps
                         WHERE s.SchemaName = ps.SchemaName
                         AND s.TableName = ps.TableName
                         AND s.IndexName = ps.IndexName
                         AND ps.PartitionNumber > 1)
            THEN 1
            ELSE 0
       END
FROM IndexPartitionStats AS s
WHERE s.DeletedPercentage > @DeletedTotalRowPercentage
OR s.EmptySegments > @EmptySegmentsAllowed
OR EXISTS (SELECT 1
           FROM RowGroups AS rg
           WHERE rg.ObjectID = s.ObjectID
           AND rg.DeletedPercentage > @DeletedSegmentsRowPercentage)
OR (s.TotalRows * 100 / (s.TotalSegments * 1048576.0))
            < @SegmentPotentialFullnessPercentage;

DECLARE @SchemaName sysname;
DECLARE @TableName sysname;
DECLARE @IndexName sysname;
DECLARE @PartitionNumber int;
DECLARE @IsPartitioned bit;
DECLARE @SqlCommand nvarchar(max);

DECLARE IndexList CURSOR FAST_FORWARD
FOR
  SELECT SchemaName, TableName, IndexName, PartitionNumber, IsPartitioned
  FROM @IndexesToRebuild
  ORDER BY SchemaName, TableName, IndexName, PartitionNumber;

OPEN IndexList;

FETCH NEXT FROM IndexList INTO @SchemaName, @TableName, @IndexName, @PartitionNumber, @IsPartitioned;

WHILE @@FETCH_STATUS = 0
BEGIN
  SET @SqlCommand = N'ALTER INDEX ' + QUOTENAME(@IndexName)
                  + N' ON ' + QUOTENAME(@SchemaName)
                  + N'.' + QUOTENAME(@TableName)
                  + N' REBUILD'
                  + CASE WHEN @IsPartitioned <> 0
                         THEN N' PARTITION = ' + CAST(@PartitionNumber AS nvarchar(10))
                         ELSE N''
                    END
                  + ';';
  PRINT @SqlCommand;
  EXEC (@SqlCommand);

  FETCH NEXT FROM IndexList INTO @SchemaName, @TableName, @IndexName, @PartitionNumber, @IsPartitioned;
END;

CLOSE IndexList;
DEALLOCATE IndexList;