I recognize I’m becoming That Guy when it comes to AI, but I just had an interaction with AI that really demonstrates why I think it’s a serious problem.
I was talking with some folks and the question of “is it a good idea to use salt to put out a kitchen fire?” came up. There was disagreement among the group (my opinion: yes, dumping a pile of salt onto a grease fire will put it out, whereas water will make it worse, and, unlike a fire extinguisher, salt might not make the food completely inedible), so I searched the question on my phone.
The suggested post that DuckDuckGo gave me (the one that comes up right under the browser’s address bar when you type the question) seemed to confirm my opinion. “Salt can put out a fire, but it’s not a magic bullet. It’s true that salt is an effective fire retardant, but it won’t put out a fire as effectively as a sprinkler system or water from your hose. Salt is just a last resort; if you have other options, you should use them first.” So far so good.
But as I kept reading I found that the page was wordy, repetitive, and somewhat self-contradictory. I knew it was a badly written clickbait page, but I began to suspect worse. Then I hit this gem: “Salt is used for putting out fires because it has a lot of water in it, which means that when it comes into contact with the flames of a fire, it will cause those flames to extinguish themselves by evaporating water from its own substance (the salt).” That statement is plausible, articulate, and 100% wrong: the hallmarks of AI.
This is, to my mind, a particularly egregious example of AI-generated misinformation. For one thing, it’s information about fire safety (the URL of the garbage site on which I found it includes the words “fire safety”) and misinformation about fire safety has a chance of getting someone killed. But I also noticed something going on in my own mind.
Here’s the thing: that egregiously wrong sentence means that everything else on the page, including the very reasonable statement that “salt is an okay way of putting out a fire but it should not be your first choice,” is suspect. But I had read up to that point with an open and accepting mind, so everything on the page above that statement was now in my head. And it’s extremely difficult to go back through your own brain’s “recent items” history and delete information which you now realize might not be accurate.
So now everything I thought I knew about putting out fires with salt — which includes an unknown amount of new information that might or might not be true — is suspect. My brain has been poisoned by AI-generated crap. And I’m a pretty skeptical guy, and I was deliberately using DuckDuckGo rather than Google (a search engine provided by a company which makes its money from advertising and is now heavily investing in AI), so I had already done one thing to shield myself from misinformation. And still I got bit by AI. I’m mad at myself for falling for it, and even madder at the assholes who put up that page full of misinformation for the sake of maybe getting a few fractional pennies from someone clicking on a sponsored link within it.
I hate that in this f’d-up modern world I now need to treat EVERYTHING I read, not just the political news, with deep skepticism. AI is imposing a cognitive burden on everyone, and it isn’t benefitting anyone except the advertisers and those who wish to promulgate misinformation.
Feh and double feh.