Opinion: ORMs: Are they a good idea when developing for SQL Server?

Many people know that I have an issue with ORMs (object-relational mappers). It’s worth spending a blog post describing why.
Unfortunately, I spend much of my life on the back end of these decisions, dealing with the messes involved. The following are the key issues that I see:
Potentially horrid performance
I’ve been on the back end of this all the time. There are several reasons: the frameworks often generate horrid code to start with; they are typically quite resistant to tuning; and they tend to encourage processing patterns that involve far too much data movement.
I keep hearing that “the databases are getting so fast now that this loss of performance doesn’t matter”. I can tell you that I don’t live in a world where that’s true.
In the world that I live in, I regularly end up in software houses with major issues that they don’t know how to solve.
I was in a large financial organisation in Sydney a while back. They were in the middle of removing the ORM they’d chosen from their app because, try as they might, they couldn’t get anywhere near the required performance numbers.
Why had they called me in?
Because before they wrote off 8 months’ work for 240 developers, the management wanted another opinion. They were making the right decision but they wanted someone else to agree with them.
As another example, I was at a start-up software house last year. They’d had a team of 10 developers building an application for the previous four years. The business owner said that if it could support 1000 concurrent users, they’d have a viable business; 5000 would make it a good business; at 500 they might survive. They had their first serious performance test two weeks before they had to show the investors. It fell over with 9 concurrent users. The management (and in this case the devs too) were panic-stricken.
For yet another recent example, I was in a software house that had to deliver an app to a government department. They were already 4 weeks overdue and couldn’t get it out of UAT. They wanted a silver bullet. That’s not the time to be revisiting architectural decisions, yet the architectural decisions they’d made were precisely the issue.
I’ve seen the background processing jobs in a large UK-based financial services organisation. On a system with 48 processors, 1.2 TB of memory, and seven 20 TB flash drive arrays costing around a million UK pounds each, one task issued 550 million SQL batches to be processed, and while it ran, almost nothing else would work well on the machine. The replacement job that we wrote in T-SQL issued 40,000 SQL batches and ran in just over 2 minutes. I think I can get that down to about a tenth of that with further work. Guess which version is the right answer? What had caused this? Re-use of entity-level code through a custom ORM.
Minimal savings yet long-term pain
Many of the ORMs give you an initial boost towards “getting something done”. But at what cost? At best, on most projects that I see, they might save 10% of the original development time, on the first project. But as David Connor pointed out in his excellent TechEd talk with Joel Pobar (and as I’ve seen in so many other places), the initial development cost of a project is usually only around 10% of its overall lifetime cost.
So what are we really talking about? Ten percent of ten percent: perhaps 1% of the whole project. Putting yourself into a restrictive long-term straitjacket for the sake of a 1% saving is a big, big call. The problem is that the decision is being made by someone who isn’t looking at the lifetime cost, and often 90% of that lifetime cost comes out of someone else’s bucket.
Getting stuck in how it works
For years, the code generated by tools like LINQ to SQL was very poor, and that was a tool designed purely for SQL Server. Now imagine what the code generated by a tool that doesn’t even know which backend database it’s targeting looks like.
That’s where Entity Framework (EF) started. Very poor choices are often made in the design of these tools. The whole reason the “optimize for ad hoc workloads” server option was added to SQL Server was to deal with the plan cache pollution caused by these tools.
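For reference, that setting is enabled at the server level like this (a minimal sketch; it only stores a small plan stub until a batch is seen a second time, so it eases the symptom rather than fixing the generated code):

-- "optimize for ad hoc workloads" is an advanced option, so expose those first.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Cache only a small stub the first time an ad hoc batch is seen,
-- reducing the memory wasted on single-use plans.
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;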
A simple example: on the SqlCommand object, they called AddWithValue() to add parameters to the parameters collection. When I used to teach ADO.NET, I’d explain why using that method was a really bad idea. It provides the name of the parameter and the value, but no data type, so the framework tries to derive the data type from the data. SQL Server quickly ends up with a separate query plan for every combination of string lengths across the parameters. This is plan cache pollution.
And what could the developers do to fix it?
Nothing. Nada. Nil. NULL.
Because it was baked into how the framework worked. The framework was eventually changed to use more generic parameter sizes, but that still never addressed the underlying issue.
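To make the problem concrete, here’s a sketch of the kind of batches that pattern ends up sending (the Customers table and values here are hypothetical), plus a way to spot the duplicate plans in the cache:

-- Each call declares the string parameter with a length derived from the value,
-- so every distinct length gets its own plan cache entry.
EXEC sys.sp_executesql
    N'SELECT CustomerID FROM dbo.Customers WHERE CustomerName = @p0',
    N'@p0 nvarchar(4)', @p0 = N'Ajax';

EXEC sys.sp_executesql
    N'SELECT CustomerID FROM dbo.Customers WHERE CustomerName = @p0',
    N'@p0 nvarchar(11)', @p0 = N'Contoso Ltd';

-- Both plans (and one more per additional string length) now sit in the cache.
SELECT st.text, cp.usecounts, cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%dbo.Customers%'
  AND st.text NOT LIKE N'%dm_exec_cached_plans%';

The fix is to declare the parameter with a fixed type and length that matches the column (say nvarchar(100)), so every call reuses a single plan. But when the framework builds the parameters for you, that choice isn’t yours to make.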
Getting stuck with old code
SQL Server 2008 added a bunch of new data types. Spatial (the geometry and geography types) was one of my favourites. When did that get added to EF? Many, many, many years later. What could the developers do about it in the meantime? Almost nothing except very hacky workarounds.
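For context, this is the sort of value those types hold (the coordinates here are just an illustration):

-- A geography point (latitude, longitude, SRID 4326): the kind of value
-- that had no EF mapping for years.
DECLARE @Sydney geography = geography::Point(-33.8688, 151.2093, 4326);
SELECT @Sydney.STAsText() AS LocationWKT;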
When you use a framework like this, you are totally at the mercy of what its developers feel is important, and that’s assuming they haven’t already lost interest in it.
When you code with a tool that targets multiple database engines, you usually end up with a very poor choice of data types. You either get the lowest common denominator of everything, or something that doesn’t fit well with the framework.
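As a hypothetical illustration, compare the sort of table a cross-database tool often emits with one where the types were chosen for the actual data:

-- What a lowest-common-denominator mapping tends to produce (hypothetical example):
CREATE TABLE dbo.Orders_Generated
(
    OrderID bigint NOT NULL,
    CustomerName nvarchar(max) NULL,
    OrderDate datetime NULL,
    Amount float NULL
);

-- Types chosen to match the data:
CREATE TABLE dbo.Orders
(
    OrderID int NOT NULL CONSTRAINT PK_Orders PRIMARY KEY,
    CustomerName nvarchar(50) NOT NULL,
    OrderDate date NOT NULL,
    Amount decimal(18, 2) NOT NULL
);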
Many of the authors of the frameworks quickly lose interest. Do you want to be stuck with it?
Boilerplate code
“Oh, I don’t want to write all that boilerplate code,” I hear people say.
Sure, I get that, but are you incapable of code generation? At least give yourself control over how the generated code works. Writing a generator is mostly a one-time cost: you get to use it again, project after project. And if you don’t like what it produces, you can fix it.
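As a minimal sketch of the idea (it assumes SQL Server 2017 or later for STRING_AGG, that everything lives in the dbo schema, and it deliberately omits parameter lengths and precision), the catalog views give you everything you need to generate that boilerplate yourself:

-- Generate a basic INSERT wrapper procedure per table,
-- skipping identity and computed columns.
SELECT
    N'CREATE PROCEDURE dbo.' + QUOTENAME(N'Insert' + t.name) + N' ' +
    STRING_AGG(N'@' + c.name + N' ' + TYPE_NAME(c.user_type_id), N', ')
        WITHIN GROUP (ORDER BY c.column_id) +
    N' AS INSERT dbo.' + QUOTENAME(t.name) + N' (' +
    STRING_AGG(QUOTENAME(c.name), N', ') WITHIN GROUP (ORDER BY c.column_id) +
    N') VALUES (' +
    STRING_AGG(N'@' + c.name, N', ') WITHIN GROUP (ORDER BY c.column_id) +
    N');' AS GeneratedProcedure
FROM sys.tables AS t
JOIN sys.columns AS c
    ON c.object_id = t.object_id
WHERE c.is_identity = 0
  AND c.is_computed = 0
GROUP BY t.object_id, t.name;

The point is that if the output isn’t what you want, you change the generator, not your schema.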
Summary
Look, I could go on for days about this stuff and the astonishing messes that I’ve seen.
You need to decide what you’re building. But at least consider where you are taking your advice from.
If you’re taking architectural advice from someone who hasn’t built real systems at scale, at least get further opinions.
If you’re building toy apps (i.e., a replacement for CardFile), none of this matters. But I live in a world where it really, really matters, and so do the vast majority of software houses that I spend time in.
Future
It’s also worth noting that if you want real performance from upcoming DB engines, you might want to rethink the idea of just using the database as an object repository. Even NoSQL options like Cosmos DB (formerly DocumentDB) have the concept of a stored procedure within the engine. You write them in JavaScript, but there’s a reason they are there.
In SQL Server 2016, this is even more stark. I have simple examples on SQL Server 2016 where a batch of code sent from an app executes in 26 seconds. If I move the table to a memory-optimized (in-memory) table, it runs in 25.5 seconds. If I also move the code into a natively-compiled stored procedure, it runs in 60 milliseconds.
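The shape of that last option looks like this (a sketch with hypothetical table and procedure names; it assumes the database already has a memory-optimized filegroup):

-- A memory-optimized table.
CREATE TABLE dbo.AccountBalances
(
    AccountID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Balance decimal(18, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively-compiled procedure: the T-SQL is compiled to machine code,
-- which is where the big drop in execution time comes from.
CREATE PROCEDURE dbo.ApplyInterest
    @Rate decimal(9, 6)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    UPDATE dbo.AccountBalances
    SET Balance = Balance * (1 + @Rate);
END;
GO

None of that is something a generic ORM is going to generate for you.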
2017-11-07