Date: 2025-05-07 07:25 pm (UTC)
From: [personal profile] jducoeur

Thanks for that specific pointer!

Honestly, to me this feels like things have reached the point of moral panic. The query they used basically treats ChatGPT as a glorified search engine, which is the one thing it's genuinely good at -- provided somebody actually checks the references.

(That is, LLMs often hallucinate, but links are links. If a human being follows and evaluates those links, the process is essentially the same as using Google, just faster and easier, and likely to reduce the number of scandals that surface after the fact because somebody turns out to have a poisonous background. In their shoes, I might well have done exactly the same thing.)
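
To make "checks the references" concrete, here's a rough Python sketch of the sanity pass I have in mind. The URLs are placeholders I made up, and nothing here replaces a human actually reading what's behind each link:

```python
# Rough sketch only: given the URLs an LLM cites, confirm each one actually
# resolves before a person sits down to read it. The links are placeholders.
import urllib.request
import urllib.error

cited_links = [
    "https://example.com/panelist-interview",  # placeholder, not a real citation
    "https://example.com/convention-writeup",  # placeholder, not a real citation
]

for url in cited_links:
    try:
        # HEAD request: we only care whether the link exists, not what it says.
        # (Some servers reject HEAD; a plain GET works too for a quick check.)
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{url} -> HTTP {resp.getcode()}; now a human goes and reads it")
    except urllib.error.URLError as err:  # HTTPError is a subclass, so 404s land here too
        print(f"{url} -> unreachable ({err}); treat as hallucinated until shown otherwise")
```

Anything that 404s or times out goes straight into the "probably hallucinated" bucket; anything that resolves still has to be read and judged by a person.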

Tempest in a bloody teapot, IMO -- there are plenty of problems with LLMs, but this sort of witch-hunt is counter-productive. Really, I think the only thing they're guilty of is failing to read the room...

Date: 2025-05-07 08:00 pm (UTC)
From: [personal profile] jducoeur

> It's not great at being a search engine, because it will imagine links, miss links, and return links to things that don't say what it claims they do. The first and third of those you can work around by checking its work. But in that case, you'd have been as well off using an actual search engine. The second you can't do anything about.

Not sure I agree. I use a variety of LLMs a moderate amount these days (via the Kagi Assistant front end), precisely because it is sometimes Much Less Effort to find my answers that way than with a normal search engine. The accuracy and relevance I'm seeing are generally on par with Kagi (much better than Google), and the summarization typically makes it much easier to decide which links are worth investigating.

As for hallucinations, it's been a while since I've seen one actually hallucinate a link. The summaries are sometimes wrong, but that's why you check their work. And the missed-links problem is equally true of search engines.

So yes, you can do the same thing with conventional search, but it would probably take much more labor -- which is precisely the reason they gave for using it. "People points" are the most precious resource when running any convention, and usually in very limited supply.

The business is colossally evil, sure -- but so is Google at this point, and most folks use that without blinking an eye. And the energy-usage equation is more complicated than most folks understand, although it's probably true that ChatGPT per se is still excessive. (One of the reasons I like Kagi Assistant is that I can do much of my work in more-optimized engines, and only use the high-powered energy-chewing ones when I really need them.)

So again: I get that they failed to read the room and realize that writers are specifically het up about LLMs in general, and on the warpath, which made the whole thing politically unwise. But aside from that, the only thing I would probably have done differently is use a more efficient LLM for the process.

Date: 2025-05-07 08:06 pm (UTC)
From: [personal profile] greenwoodside
> But in that case, you'd have been as well off using an actual search engine.

Not really. A search engine will return millions of results. A carefully worded query to an LLM will return a short summary of different points to investigate.
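
To give a sense of what I mean by "carefully worded", here's a rough sketch using the OpenAI Python SDK purely as a stand-in; the model name, prompt wording, and panellist name are my own assumptions, not anything any convention has published:

```python
# Sketch of the kind of query meant above: ask for a short, sourced list of
# points to investigate, rather than wading through millions of search results.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

name = "Jane Example"  # hypothetical panellist, for illustration only
prompt = (
    f"In five bullet points or fewer, list publicly documented statements or "
    f"incidents involving {name} that a convention programme team might want "
    f"to review. Give a source URL for each point, and say plainly if you "
    f"find nothing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any current chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# The answer is a handful of leads with links for a human to follow, not a
# verdict: every URL still has to be opened, read, and judged by a person.
print(response.choices[0].message.content)
```

The point is that the output is a short list of leads with links, not millions of results; the judgement still sits with whoever reads them.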

If it's included as part of a well-designed process with different stages and methods, and with people at the helm who trust their own judgement over that of random humans on the internet, random bots, or generative AI, then an LLM could help produce better results.

Many writers are horrified by AI, others (not necessarily the loudest) are intrigued by it.

I hate AI because of the energy costs, and because it's currently so expensive to run that a tiny handful of companies with deep pockets have cornered the market, putting yet more power into the hands of the tech giants. At least DeepSeek's existence suggests glimmers of hope in that direction, though DeepSeek of course has its own issues.

And after all that, I'm still not sure what the panellists were being vetted for. It would be funny if using AI turned out to be one of the red flags.
