Opinion: RIP Microsoft Professional Program

Three years back, with much fanfare at a partner conference, Microsoft announced the Microsoft Professional Degree program. It was to be a set of courses you could take that would lead to one of their professional degrees.

Now here in Australia, you can't just call something a degree, and I'm guessing that's the same in the USA, so I wasn't surprised when I noticed, soon after I started, that the name had changed to the Microsoft Professional Program (MPP) and the word "degree" had been dropped.

The first program available was for Data Science. It required you to complete 11 courses. Each course restarted every three months, and had an exam and certificate at the end. Importantly, to complete a program, you also had to complete a practical course, called the capstone project.

I loved these programs, and I completed four of them: Data Science, AI, Big Data, and DevOps.

It's not all roses

The program retirement was announced the other day. You can't enrol after next month, and you must complete everything before December.

Many people are part way through the program, have paid to certify previous exams, and are now unable to complete before the deadline. That's really not fair to them. A nice touch would have been to at least refund their exam certification costs if they're part way through a program.

And more importantly, what does it really mean for those who have invested time, money, and effort in the programs? I don't know, but I'd almost bet that within a year, it'll be hard to find any trace of the program ever having existed.

What I don't love is the constant churn Microsoft has in these types of things. For certifications that require a substantial commitment to pursue, that churn is just not appropriate.

I wish it were the first time that many of us had been down this path, but sadly, it's not. (#MCM4Life)

Microsoft's offerings around learning have been really messy and jumbled for a very long time. The refocusing of learning around Microsoft Learn is a good move. I just wish they'd learn how to make these types of changes and consolidations without leaving their biggest supporters feeling abandoned (again).

Why I liked the MPP

I really liked the format of the MPP for a number of reasons:

  • You could take any of the courses for free (this meets the goal of the product groups who just want to get the information out there widely and without the friction of cost). Initially, that also included the exams.
  • You could pay for a certified exam. The courses were run in conjunction with edX, who would verify your identity (e.g. government-issued photo ID). If you wanted the certification, you needed to pay to certify all the relevant exams.
  • The content was not just Microsoft content. For example, the initial statistics course was actually a course from Columbia University. Some of the content was taught by DataCamp (who I'm not happy with after their data breach), and some by a professor from Norway. This gave the material a wider context.
  • There was often a choice in the content. For Data Science, you could use either R or Python in each required course. For AI, there was a choice of areas of AI to work in: Speech, Vision, etc.
  • The work could be done in your own time, fitted in amongst other activities whenever you were free.

Tracks were expanding

Eventually, there were many tracks:

  • Data Science
  • AI
  • Big Data
  • DevOps
  • IoT
  • Data Analysis
  • Cybersecurity
  • Entry Level Software Development
  • IT Support

Thanks are due

Naturally, as with most things, the quality varied across the courses. But overall, I liked the effort that had been put into the classes.

A hearty thank you to anyone who was involved in creating these courses and their associated materials!


For Posterity

As a friend of mine and fellow MVP, Thomas LaRock, said in a recent post, I have no idea what really happens to the certifications that were achieved in the program. As I mentioned, I suspect they have suddenly been massively devalued. And as Thomas did, I include my course certificates here for posterity.

 

DevOps: Is AIOps just yet another almost meaningless acronym?

DevOps has quickly become a core part of how many organizations deliver IT, and in particular, how they deliver applications. But just as quickly as it has become popular, a whole series of XXXOps names have appeared. One of the latest is AIOps. So is it just yet another almost meaningless acronym?

Well, as Betteridge's Law of Headlines suggests, the answer is no.

When I first saw the term, I presumed it would be about how to deploy AI-based systems, and I wondered why on earth that would need a special name. But that's not what it is.

So what is AIOps?

AIOps is the use of artificial intelligence (AI) and machine learning (ML) techniques to analyze the IT problems that occur, so that we can respond to them fast enough for the response to be useful.

The core problem is that we're now generating an enormous volume of metric and log data about every step of all our processes, and about the health of all our systems and applications, yet so much of that data is never processed, or at least not processed fast enough to be useful.

Only machines can do this.

The term AIOps seems to have been coined by Will Cappelli (a former analyst with Gartner). In the end, humans won't be scouring logs and responding to what they find. Instead, they'll be teaching the machines what to look for, and how to correlate information from a variety of sources to find what is really going on.

Cappelli is now at Moogsoft and sums up AIOps quite succinctly:

AIOps is the application of artificial intelligence for IT operations. It is the future of ITOps, combining algorithmic and human intelligence to provide full visibility into the state and performance of the IT systems that businesses rely on.

People are already doing this, but it's likely that in the future this will become a well-known job role. It will be important to guide the machines' learning, teaching them to recognize the appropriate patterns.

If you are working in related IT roles, it might be time to start to add some data science, AI, and/or ML into your learning plans.

SQL in the City – Brisbane, Christchurch, Melbourne – Hope to see you there

At Red-Gate's SQL in the City events in Brisbane (May 31), Christchurch (June 7), and Melbourne (June 14), I'm presenting a session on Azure DevOps for SQL Server DBAs. It's designed as an intro for data people who haven't really worked with Azure DevOps before.

Looks like a fun lineup for the day, and it'd be great to catch up with you at one of those events. You can find more info here:

https://www.red-gate.com/hub/events/redgate-events/sqlinthecity-summit/

The day after each of those events, I'll likely also be presenting at SQL Saturday in the same city. I'll let you know more when the speaking lineup for those events is released, but either way, I'll be at those events too and would love to catch up with you.

 

Business Intelligence: Success is about small starts leading to bigger things

I spend a lot of time on client sites, and time and again, one of the mistakes I see people making is trying to start with large projects. I think one of my all-time favorite quotes about IT is:

Any successful large IT system used to be a successful small IT system.

The next time you're thinking about creating a project that's going to have a big bang outcome, please remember this. The history of the industry shows that it really is likely to be a big bang, just not the big success you'd hoped for.

A staggeringly high percentage of large IT projects fail.

This is even more important in business intelligence projects. The industry is littered with companies that have spent a fortune on BI projects where almost no-one in the company can, or does, use the outcomes. It's sad but true.

Asking users what they want, gathering a large amount of detail, then building it, testing it, and delivering it to them sounds good, but it almost never works. Unfortunately, it's how many larger organizations think projects need to be managed. They assume that creating an IT project is like building a bridge.

It's not.

The first problem is that the users don't know what they want. Don't blame them later because you think they didn't give you the right instructions. That's your fault for assuming they can describe exactly what they want. In many cases, until they see some of the outcomes, they won't start to understand what the system will do, and until then, they won't realize what they really need.

Second, no matter how clearly you document their requirements, it won't be what they need. One of my Kiwi buddies, Dave Dustin, was having a bad day once, and I remember he mentioned that he was going to just spend the day delivering exactly what people asked for. That was wonderful, and beautifully insightful, because we all know it would lead to disaster. It's little surprise: they might have said "sales" but they really meant "profit", and so on.

Finally, the larger the project, the longer it will take to deliver, and by then, even if you'd done it perfectly, the users' needs will have changed, so it still won't be what they want.

When you're starting a BI project, I'd implore you to find a project that has these characteristics:

  • Small enough to be done in a couple of weeks at most
  • Based on mocked-up data in front of the target users
  • Large enough to make a significant difference to someone's life or work
  • Targeted at someone who's important enough to be a champion for the rest of what you want to achieve
  • (Ideally) targeted at someone "who pays the bills"

Start by doing a great job, then build on that once you have support.

Opinion: Developers, silently swallowing errors is not OK

I don't know if it's considered some sort of modern trend, but what is it with applications now that just swallow errors instead of dealing with them? Is there an edict within these companies that errors should never be shown, so they can argue their app doesn't have errors?

I'm working with a SaaS editing app right now. Sometimes when I save, it just doesn't save. No error, just nothing saved. And every now and then, I find the order of what I've entered just gets changed. Again, no error, but the order was changed.

Worse, sometimes when I then try to correct the order, it shows it as done, but next time I go back to that screen, the order is back the way it was in the first place.

On many occasions, if I close my browser, open it again, and log in, it all works OK again for a while.

But it's not just these types of applications. I've lost count of the number of sites I've visited where supposedly serious applications are being developed, yet the code is full of try/catch blocks where the catch blocks are empty, i.e. silently ignoring any errors that occur.
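
The same anti-pattern is just as easy to write in T-SQL as in application code. Here's a minimal sketch (the table name is hypothetical):

-- The anti-pattern: an empty CATCH block makes the failure invisible.
BEGIN TRY
    INSERT dbo.Orders (OrderDate) VALUES ('20190230');  -- invalid date, so this fails
END TRY
BEGIN CATCH
    -- nothing here: the error vanishes and the caller assumes all is well
END CATCH;

-- Better: log whatever you need, then rethrow so the caller knows something broke.
BEGIN TRY
    INSERT dbo.Orders (OrderDate) VALUES ('20190230');
END TRY
BEGIN CATCH
    -- write ERROR_NUMBER(), ERROR_MESSAGE(), etc. to a log table here, then:
    THROW;  -- rethrows the original error (SQL Server 2012 and later)
END CATCH;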

How did we get to the point that this is what passes for application development now? Apps that mostly work and fail silently?

Sorry, but this is not OK.

DevOps: Are you centralizing your log files?

System configurations are becoming more complex all the time. Every server, container, and key application and service today has log files that tell you a wealth of detail about what's going on under the covers. But how accessible are those log files in your organization?

If you aren't using a log management tool, you probably should be.

Here are a couple of easy ones to get started with:

Azure Monitor

One thing that I do find frustrating with Microsoft tooling at present is the constant churn of product names. A while back, we had Application Insights that could collect details of what was happening within an app. The data for that was stored in a tool called Log Analytics, and it could also collect operating system logs and more. Agents were provided for on-premises systems.

Originally, these tools had different query languages, but eventually the Log Analytics query language became the one used across them. It's awesome to be able to write a query to simply find and filter log details.

For my SQL Server buddies, there was SQL Insights, which has now morphed into SQL Server Intelligent Insights, along with Azure SQL Database Intelligent Insights. These let you capture a wealth of info about your SQL Server instances and databases very simply.

I constantly visit client sites where they have purchased tools for this, and those tools aren't even close to being as useful as these Azure ones. And the Azure tools don't just work with Microsoft servers and services.

Anyway, these have now all been bundled up again under the name Azure Monitor.

Azure Monitor also offers built-in integration with popular DevOps, issue management, ITSM and SIEM tools. You can use packaged solutions for monitoring specialised workloads, or build your own custom integration using Azure Monitor REST APIs and webhooks.

Papertrail

Another interesting offering, from our friends at SolarWinds, is Papertrail. Their claim is "Frustration-free log management. Get started in seconds. Instantly manage logs from 2 servers… or 2,000". Papertrail seems to be gaining a foothold in the Linux, MySQL, Ruby, Apache, and Tomcat areas, along with many others.

In the end, if you aren't using one of these types of tools, you probably should be.

 

Opinion: Get used to reading traces and logs before you need them

I used to do a lot of work at the operating system and network level. I was always fascinated watching people use network trace tools when they were trying to debug a problem. The challenge was that they had no idea what was normal activity on the network, and what wasn't.

The end result was that they'd then spend huge amounts of time chasing down what were really just red herrings.

When you don't know what normal activity looks like, everything looks odd.

Today, I see the same thing with traces of SQL Server activity, whether using SQL Profiler (and/or SQL Trace) or the Extended Events Profiler. I also see it with insights data sent to Log Analytics, and with the outcomes of many expensive SQL Server monitoring tools.

For example, suppose you are looking at a SQL Server trace and you see a large number of sp_reset_connection commands. Is that an issue? When would it be an issue, and when is it just normal?

If I see an sp_reset_connection executed on a connection, followed by a number of other commands, I know that the application is using connection pooling. If, however, I see a bunch of those on the same connection without any commands executed in between, I know that the application code is opening connections when it doesn't need to. Perhaps it should be opening the connection closer to where it decides whether it needs it.
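
If you want to watch for these yourself, here's a minimal sketch of an Extended Events session that should capture just those calls. (The session name is mine, and I'm assuming a ring buffer target is enough for a quick look; rpc_completed exposes the procedure name in its object_name field.)

-- Sketch: capture sp_reset_connection calls so you can see how often they occur,
-- and which sessions and applications they come from.
CREATE EVENT SESSION WatchConnectionResets ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.client_app_name, sqlserver.session_id)
    WHERE ([object_name] = N'sp_reset_connection')
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION WatchConnectionResets ON SERVER STATE = START;
GO

Let it run while the application is working, then check whether the resets are interleaved with real commands (pooling working as intended) or arriving back-to-back (connections being opened unnecessarily).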

The key point is that it's really important that you learn to use these tools before you have a problem. You need to be able to recognize what's normal, and what isn't.

 

DevOps: Without PaaS, a Cloud Transformation is just Expensive Hosting 2.0

Damon Edwards gave a session recently where he claimed that Without Self-Service Operations, the Cloud is Just Expensive Hosting 2.0. There is much in his session that I completely agree with, and have been concerned about for quite a while. I see it more in terms of the adoption of Platform as a Service (PaaS) offerings.

I spend most of my consulting/mentoring time in larger organizations, many of them large financial organizations. In every one now, there is a person heading up a "Cloud Transformation" project, but none of these companies mean the same thing when they talk about these projects.

In so many companies, all they are doing is taking their virtual machines and networks that are hosted in some existing hosting provider, and moving them into a public cloud.

Let's be clear: those companies are not making a cloud transformation.

They might be replacing existing infrastructure that's difficult to work with, with dynamic and configurable cloud infrastructure. They might also be outsourcing many of the functions of their difficult-to-work-with IT support teams to the public cloud providers. This can particularly apply to many offshore or offsite IT management providers.

But there's no real transformation going on. In so many cases, they are missing out on the real beauty that can be offered by cloud-based services.

Companies like Microsoft pushed PaaS services heavily at first, but then realized that many companies just aren't agile enough to start using them. They now seem to focus on getting customers into the cloud as the first step (aka lift and shift), then later on getting them to use it properly.

If you aren't putting PaaS services in place, I don't think you're really making a cloud transformation. You may just be upgrading to the Expensive Hosting 2.0 that Damon mentioned. 

That may be better than where you were but you shouldn't confuse this with a real transformation.

 

SDU Tools: Get SQL Server Table Schema Comparison

In a recent post, I mentioned that I often need to do a quick check to see if the schemas of two SQL Server databases are the same, and how our GetDBSchemaCoreComparison procedure can make that easy. In a similar vein, I often need a detailed comparison of two tables.

In our free SDU Tools for developers and DBAs, there is another stored procedure called GetTableSchemaComparison to make that easy as well. It takes the following parameters and also returns a rowset that's easy to consume programmatically (or by just looking at it):

@Table1DatabaseName sysname -> name of the database containing the first table
@Table1SchemaName sysname -> schema name for the first table
@Table1TableName sysname -> table name for the first table
@Table2DatabaseName sysname -> name of the database containing the second table
@Table2SchemaName sysname -> schema name for the second table
@Table2TableName sysname -> table name for the second table
@IgnoreColumnID bit -> set to 1 if tables with the same columns but in different order are considered equivalent, otherwise set to 0
@IgnoreFillFactor bit -> set to 1 if index fillfactors are to be ignored, otherwise set to 0

Note that the two tables don't need to have the same name, and they can be in the same database or in two different databases.
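
As a quick sketch of how you might call it (the database and table names here are hypothetical, and I'm assuming you've installed the tools so the procedures live in the SDU_Tools schema):

EXEC SDU_Tools.GetTableSchemaComparison
     @Table1DatabaseName = N'ProductionDB',  -- hypothetical names
     @Table1SchemaName   = N'dbo',
     @Table1TableName    = N'Customers',
     @Table2DatabaseName = N'TestDB',
     @Table2SchemaName   = N'dbo',
     @Table2TableName    = N'Customers',
     @IgnoreColumnID     = 1,   -- treat different column order as equivalent
     @IgnoreFillFactor   = 1;   -- ignore index fillfactors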

You can see the outcome in the main image above.

You can see it in action here:

To become an SDU Insider and to get our free tools and eBooks, please just visit here:

http://sdutools.sqldownunder.com

SDU Tools: Get SQL Server Database Schema Core Comparison

I often need to do a quick check to see if the schemas of two SQL Server databases are the same.

In our free SDU Tools for developers and DBAs, there is a stored procedure called GetDBSchemaCoreComparison to make that easy. It takes the following parameters and returns a rowset that's easy to consume programmatically (or by just looking at it):

@Database1 sysname -> name of the first database to check
@Database2 sysname -> name of the second database to compare
@IgnoreColumnID bit -> set to 1 if tables with the same columns but in different order are considered equivalent, otherwise set to 0
@IgnoreFillFactor bit -> set to 1 if index fillfactors are to be ignored, otherwise set to 0
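
Again, as a sketch of how you might call it (the database names are hypothetical, and I'm assuming the procedures live in the SDU_Tools schema):

EXEC SDU_Tools.GetDBSchemaCoreComparison
     @Database1 = N'ProductionDB',  -- hypothetical names
     @Database2 = N'TestDB',
     @IgnoreColumnID = 1,           -- treat different column order as equivalent
     @IgnoreFillFactor = 1;         -- ignore index fillfactors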

You can see the outcome in the main image above.

You can see it in action here:

To become an SDU Insider and to get our free tools and eBooks, please just visit here:

http://sdutools.sqldownunder.com