I'm involved in a lot of performance tuning work for SQL Server-based applications. These are mostly your typical OLTP accounting, financial, or record-keeping applications. One thing that constantly surprises me is that many developers and DBAs really aren't sure where their tuning efforts need to be focused.
In particular, what really surprises me is how much focus most people have on write/update behavior and how little on read behavior. I think there's a perception problem.
If I ask a developer what percentage of the time they think the database will be writing (versus reading), I'm invariably told numbers like 20 or 25 percent. In fact, I remember a discussion with a SQL Server product group member who was writing an MSDN whitepaper that I was reviewing. He told me he thought 40 to 60 percent was common.
But every time I measure this for these types of applications, over a lengthy period, guess what write percentages I normally see?
Less than one percent is common, and one or two percent is quite typical.
Even if you need to update a row, you have to find it first. I'm convinced that applications that write 40 to 60 percent of the time are as commonplace as unicorns.
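If you want to sanity-check this on your own systems, one rough way is to compare the read and write counters that SQL Server keeps per index. This is only a sketch: sys.dm_db_index_usage_stats resets when the instance restarts, it counts operations rather than rows or time, and the database name below is just a placeholder.

-- Approximate read vs write split for one database, since the last restart
SELECT
    SUM(user_seeks + user_scans + user_lookups) AS total_reads,
    SUM(user_updates)                           AS total_writes,
    CAST(100.0 * SUM(user_updates)
         / NULLIF(SUM(user_seeks + user_scans + user_lookups + user_updates), 0)
         AS decimal(5, 2))                      AS write_percentage
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('YourApplicationDB');

Run that against a typical OLTP database that has been up for a while and you'll usually see exactly the sort of lopsided split I'm describing.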

Now I'm not suggesting that there aren't applications where write performance is critical, but I can assure you that the vast majority of my tuning work is going to focus on making reads as fast as possible, because that's what these systems are going to spend their time doing.
I'm going to focus on what the system is doing 99 percent of the time, not so much on the 1 percent, unless the writes are particularly time-critical for some reason.
And the wonderful thing is that if you reduce the impact of the reads, the system has more time to work on the writes anyway, so tuning reads will generally improve the performance of your writes as well.
Generally speaking, do you find most OLTP customers are using the default isolation level of read committed (thus readers blocking writers)? And is it because of the "1 percent" writes scenario that they just leave it like that? Why not switch to one of the variants of snapshot isolation (assuming the resource overhead is acceptable) to avoid the waiting/blocking and focus all effort on improving read performance?
I generally try to assess every application to see if we can get away with read committed snapshot isolation (RCSI) being turned on. If I'm building a new application, I start with it on.
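For anyone who hasn't tried it, this is roughly what turning it on looks like. The database name is a placeholder, and WITH ROLLBACK IMMEDIATE will kick out existing connections, since the ALTER needs to be the only connection in the database, so on a busy system you'd pick a quiet moment.

-- Is RCSI already on?
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourApplicationDB';

-- Turn it on (requires no other open connections to the database)
ALTER DATABASE YourApplicationDB
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

Once it's on, plain read committed queries read the last committed version from the version store instead of waiting on writers' locks, which is why it pairs so well with the read-heavy workloads above.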
I think it's a far preferable option to the default use of read committed. Unfortunately, not all applications can deal with it.
Generally, though, when I see that, it's poor application design that's the problem. For example, at a large software house, it took years to make the change because the existing system had race conditions, and implementing RCSI would have exacerbated them. That's unfortunately a pretty sad reason. The right answer, of course, is to fix the race conditions, but that's often easier said than done.
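To make that concrete, here's a made-up sketch of the kind of check-then-act code that causes the problem (the dbo.Accounts table, its columns, and the values are all invented for illustration). Under locking read committed, the SELECT would wait for a concurrent debit to commit; under RCSI it happily reads the older committed balance, so two sessions can both pass the check and both withdraw.

DECLARE @AccountId int = 42,                 -- illustrative values only
        @Amount    decimal(18, 2) = 100.00,
        @Balance   decimal(18, 2);

BEGIN TRANSACTION;

-- Under RCSI this read never blocks, even if another session is mid-debit
SELECT @Balance = Balance
FROM dbo.Accounts
WHERE AccountId = @AccountId;

IF @Balance >= @Amount
    UPDATE dbo.Accounts
    SET Balance = Balance - @Amount
    WHERE AccountId = @AccountId;

COMMIT;

The usual repairs are to take an update lock on the initial read (FROM dbo.Accounts WITH (UPDLOCK)) or to fold the check into the UPDATE's WHERE clause so the decision and the change happen atomically. Either way, that's fixing the race condition rather than avoiding RCSI.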