Data Tales 13: The case of the invisible brake
This is the thirteenth tale in a series of stories about data. I hope you enjoy the series.
Today’s tale relates to a SQL Server system that had wonderful hardware, yet was unbelievably slow. Let’s discuss what happened.
Some background information
I had a call from the Microsoft office in Brisbane, Queensland, asking if I had time to look at a problem at a customer site. Both they and the customer had thrown considerable resources at resolving a performance problem, but frustratingly, it persisted.
It sounded like a challenge, so I headed onsite to a local government department.
Performance pain
When I saw the issue onsite, it was really interesting.
The client had a single Microsoft Access-based front-end application connected to a SQL Server Enterprise Edition server. The volume of data involved was fairly small; I recall it being perhaps 10 GB.
A user from the site showed me the application in use, and the performance was truly glacial.
What was fascinating was the amount of hardware they had already thrown at trying to fix the issue. It should have been stunningly fast, but it was almost unusable. Every action you performed on the screen took ages.
DNS
There is an old saying in the industry that it's always DNS. And I have to admit, I've seen more than my fair share of issues with DNS. The most common issue is when name resolution isn't configured correctly, and each connection attempt waits for around 40 seconds, then does a broadcast to try to resolve the name.
When this happens, you get a long connection time for every connection attempt. Why is that an issue for Microsoft Access applications? Because they often bind every UI element directly to its own connection, rather than sharing connections. So I could imagine a situation where every time you clicked on something else on the screen, you could wait 40-odd seconds before it really tried to respond.
I checked DNS thoroughly, and it was configured fine. Connections were instantaneous.
Windows System Logs
The next thing that’s usually worth checking quickly is the Windows System logs. Because SQL Server is a service, it can’t pop details of issues up on the screen, but it can throw details into the System log, particularly anything very serious. That mostly relates to startup/shutdown issues.
There was nothing notable there either.
SQL Server Logs
This is where things got really interesting. What I quickly noticed in the SQL Server logs was a strange pattern of the database being endlessly opened and closed.
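If you want to spot this pattern yourself, you can search the current error log for the "Starting up database" messages that SQL Server writes each time a database is opened and recovery runs. A minimal sketch, using the built-in `sp_readerrorlog` procedure:

```sql
-- Search the current error log (log 0; type 1 = SQL Server log, as opposed
-- to the SQL Agent log) for database startup messages. A database that is
-- being endlessly opened and closed shows this message over and over,
-- with recent timestamps, for the same database name.
EXEC sp_readerrorlog 0, 1, N'Starting up database';
```

On a healthy instance you would expect one such entry per database, clustered around the time the instance last started.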
And that of course was the issue.
AutoClose
The database had originally been created on a personal system where AutoClose was enabled, then restored onto this main server. So it still had AutoClose enabled.
The problem in this situation was that every UI element had a separate connection back to the database, and closed it after use. So every time they did an action on the screen, the database files were opened, and database recovery was run, just to perform a query or two, and then they were all closed again.
AutoClose works on the basis of keeping the files shut when the database is not in use. When a connection is first made, the database files are opened, and database recovery is run. When the last connection closes, the database files are also closed.
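You can check whether any database on an instance has this setting enabled by querying the `sys.databases` catalog view (a sketch, assuming SQL Server 2005 or later, where this view exists):

```sql
-- List any databases where AUTO_CLOSE is enabled.
-- is_auto_close_on is 1 when the option is set.
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;
```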
The irony is that if they had had this application in use with a large number of users, they would probably never have seen the issue, or at least only rarely. But a single user of the application, trying things one at a time, would see the worst possible impact, and it had stopped this application from ever getting out of testing.
Regardless, a quick change of database configuration and all was wonderful. Happy faces all around.
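The fix itself is a one-line configuration change. Assuming a database named `AppDB` (a placeholder name for illustration, not the customer's actual database):

```sql
-- Turn off AUTO_CLOSE so the database files stay open and recovery runs
-- only when the instance starts, not on every first connection.
ALTER DATABASE AppDB SET AUTO_CLOSE OFF;
```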
Is AutoClose Evil?
I have database friends who think the AutoClose option should just be removed from SQL Server. While I think it should rarely be used, I wouldn't like to see it removed. I wrote my thoughts about whether or not it is evil in another blog post.
2026-01-24