Honestly, to me this feels like things have reached the point of moral panic. The query that they actually used is basically using ChatGPT as a glorified search engine, which is the thing it's actually good at, provided somebody actually checks the references.
(That is, LLMs often hallucinate, but links are links. If a human being follows and evaluates those links, this is essentially the same as using Google for the process, just faster and easier, and likely to reduce the number of scandals showing up after the fact because somebody turns out to have a poisonous background. In their shoes, I might well have done exactly the same thing.)
Tempest in a bloody teapot, IMO -- there are plenty of problems with LLMs, but this sort of witch-hunt is counter-productive. Really, I think the only thing they're guilty of is failing to read the room...
Thanks for that specific pointer!