Cognitive Arbitrage

There’s a guy I used to work with. He was well-liked, articulate, and friendly, but he had this one weird little habit, almost like a vocal tic. If you asked him a question about anything, he’d say, “I have no idea,” then confidently state an answer. “I have no idea. Average monthly growth last year was 8.6%.” “I have no idea. ARR for the new feature is currently $267k.” “I have no idea. The A/B test delivered a 1.36% improvement with a p-value of 0.02.” He never told you where he got the information – whether it had some basis or was pulled out of thin air – he would just declare that he didn’t know, then confidently state an answer.

The weirdest thing was that we believed him. We believed him when he told us that he didn’t know, and we believed him when he gave us an answer. Maybe we didn’t think we did, but we remembered what he’d said and forgot where it came from. Of course, we never checked his answers. His statements took the place of answers we might have found for ourselves. They were like mental cuckoo’s eggs – answer-shaped things that took up space in our minds and in the conversation, exploiting our laziness and replacing the need for accuracy. When his statements were correct, we took it as proof that he knew what he was talking about. And when they were wrong, we remembered that he’d told us ahead of time that he didn’t know. Most of the time it was fine, but we made some pretty bad decisions based on the random things he said. And how could we be angry? Hadn’t he told us that he didn’t know? Why had we believed him? He’d warned us. If we believed him, that was on us.

This might sound ridiculous, but I’ve recently noticed that a lot of people are starting to do exactly the same thing, and just like before, we believe them. “I got this answer from ChatGPT.” “This documentation was generated by Claude.” “This is what Gemini said.” We understand that this means they don’t know, but we shrug and pretend it’s true to avoid derailing the conversation. It’s probably true, right? The discussion moves on, and somewhere along the line we forget that we’re basing the entire conversation on statements generated by stochastic autocomplete joined to a search engine.

There’s a kind of linguistic judo going on here. Normally, when someone states a fact, they’re accepting responsibility for its accuracy. When they attribute it to AI, they turn that around: “I found this statement lying around. I have no idea if it’s true or not, but you can use it if you want.” By saying this, they disavow responsibility and put the onus on you to verify the information. You can choose to use it or not – they get credit either way, and now you’re the Human In The Loop!

It gets worse. Multiple studies suggest that critical thinking skills decline as we use AI. We aren’t just believing AI more; we’re also getting worse at evaluating whether we should believe it.

I feel like we need a linguistic reset. Instead of caveating a sentence with “this is what AI told me,” we should make it absolutely clear what we mean. As a public service, I propose the following as preferable ways of flagging fake information.

“I have absolutely no idea if this is true, but this thing that the AI told me confirms my biases.”

“This documentation was generated by AI. Based on random sampling, we estimate that 15% of it is factually incorrect. As such, every time you trust the documentation, you have roughly a 1 in 7 chance of causing a service outage.”

“These cost estimates were generated by Amazon Q. There’s no way of knowing whether they’re accurate, other than checking.”

“I was too lazy to generate the data, and besides, I’ve forgotten how. This is what the AI told me.”

“I couldn’t be bothered to spend time answering this question, so I let the AI come up with something that sounded credible. The good news (for me) is that I did it quickly and with zero effort – if having an actual answer matters to you, spend your own damn time on it.”
