Recently, I had an interaction that finally got me to put into words something I've been trying to say for months. I didn't say everything I have to say on the subject (not even close, honestly), but I said enough of it that I felt it was worth a blog post. It started with this comment, which I left on Lobste.rs.
After leaving this comment, I got a reply.
Can I ask why? I’m genuinely curious. [...] I have to say, I am surprised by the increasing anti-AI sentiment in tech communities.
I said this in a reply to them, but I want to say it again here:
ChatGPT is always going to generate descriptions that are at least slightly incorrect when they’re that long, and either the author doesn’t know better, or is willfully jettisoning misinformation out into the world. If I, as the reader, don’t know what branch prediction is and I care enough to learn about it, I’m just going to open a new tab and go learn about it for a minute.
Instead of spending the time going to ChatGPT and generating some unverifiable stream-of-un-consciousness, they could have just found an actual knowledgeable description written by a human, and they could’ve linked to it, or quoted some small descriptive section of it with a citation.
These AI insertions are not neutral or harmless. They deprive us of actual research.