What has ChatGPT got to do with Baseball Umpiring?

What many people who follow me don’t know is that I spent a lot of years playing baseball. Early on I played third base, later I pitched, and my last position was usually catcher or first base.

I love the game. What is less known is that I also spent many years umpiring baseball, through to state level. I umpired in the Pan-Pacific games in Queensland.

Umpiring requires a very detailed knowledge of the rules of the game. Umpires must pass annual rules examinations, and to umpire at higher levels of competition, you must pass the exams at a higher standard.

So, what’s that got to do with ChatGPT?

I haven’t umpired for years but recently, I’ve considered doing it again. (There is a chance that I’m a glutton for punishment - see also the years I spent refereeing soccer).

That led me to join the local umpiring association. Once I’d done that, I started receiving rules quizzes, designed to keep umpires up to date.

After I did one of those quizzes, I thought, “How would ChatGPT go with these questions?” I started by telling it that it was a Triple-A baseball umpire, and to answer based on that. Then I asked it the same questions. I had scored 83%, and I was keen to see if it could beat me.
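I did this through the ChatGPT interface itself, but if you wanted to script the same experiment, a minimal sketch with the OpenAI Python library might look like this (the model name and sample question are placeholders, not what I actually used):

```python
# Minimal sketch of the same experiment, scripted against the OpenAI API.
# The model name and sample question are placeholders; I ran mine through
# the ChatGPT interface, with the same "Triple-A umpire" framing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a Triple-A baseball umpire with a detailed knowledge of the "
    "official rules of baseball. Answer each rules question in that role."
)

questions = [
    # Placeholder: a typical rules-quiz style question.
    "With runners on first and second and one out, the batter hits a high "
    "fly ball that an infielder can catch with ordinary effort. What's the call?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    print(question)
    print(response.choices[0].message.content)
```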

How did ChatGPT go?

In short, not well. It would have scored just 40%.

So I tried to understand where it went wrong.

What was immediately clear was that it mistook softball rules for baseball rules. But the thing that puzzled me most is that the questions it got wrong were the easier ones. It was truly impressive on some of the harder questions.

Why did ChatGPT get the easy questions wrong?

What eventually occurred to me is that the problem comes from the data that ChatGPT learns from. Harder questions are only ever likely to be discussed by people who know the rules. Easier questions are endlessly discussed by people who simply don’t know the rules.

As an umpire, you see these people all the time. Something happens in a game; they think the umpire didn’t see it; and they go on and on and on about it from behind the net. The reality often is that the umpire did see it, but the spectator doesn’t understand how the rules are applied. I can’t tell you how often I wished I could call time and go over to explain the rules to someone we’ll (graciously) describe as “lacking knowledge of the rules”.

Similarly, I endlessly ran into people who’d played the game themselves for over 30 years, were coaching at a high level, and still didn’t know the rules of the game.

It struck me that if ChatGPT learns from whatever content is available, it’s far more likely to get the easy issues wrong. The complex issues simply aren’t discussed by most people.
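To make that concrete, here’s a toy sketch with entirely made-up numbers. It’s not how ChatGPT is actually trained, of course, but it shows how any learner that absorbs the majority view from available discussion inherits the crowd’s mistakes on precisely the topics the crowd discusses most:

```python
# Toy model (hypothetical numbers): a learner that just absorbs the
# majority answer for each question from whatever discussion is available.
from collections import Counter

# Simulated "training data": (question, answer) pairs scraped from discussion.
# Easy rules are discussed mostly by casual fans, who often get them wrong;
# hard rules are discussed almost exclusively by umpires, who get them right.
training_data = (
    [("easy rule", "wrong answer")] * 900   # loud, numerous, mistaken fans
    + [("easy rule", "right answer")] * 100  # the few umpires who weigh in
    + [("hard rule", "right answer")] * 40   # only umpires discuss this one
    + [("hard rule", "wrong answer")] * 5
)

# The learner's "knowledge" is just the most common answer per question.
by_question = {}
for question, answer in training_data:
    by_question.setdefault(question, Counter())[answer] += 1

for question, counts in by_question.items():
    learned = counts.most_common(1)[0][0]
    print(f"{question}: learns {learned!r} from {dict(counts)}")
```

The easy rule comes out wrong and the hard rule comes out right, purely because of who was doing the talking. That’s exactly the pattern I saw in the quiz.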

This is a real issue for any of these types of systems. They have no idea who to learn from. And that’s more than a little scary.

2024-08-02