Date: 2025-05-07 12:54 pm (UTC)
greenwoodside: (Default)
From: [personal profile] greenwoodside
5. Oh dear. Another year, another Hugo kerfuffle. I'm down on LLMs more for environmental reasons than copyright reasons. Don't have the time to read the article in detail, so I'm still uninformed, but I feel a bit sorry for the admins who resigned.

Date: 2025-05-07 02:06 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
And it's particularly tactless given that the people most likely to care about the copyright issues of LLMs are creators of written text – the people whose copyrights are actually being violated by feeding everything that can't run away fast enough into LLM training, and, not coincidentally, the people whose livelihoods are directly threatened by people believing (whether rightly or not) that LLMs can generate content just as good on demand. And those are also precisely the class of people being considered as Worldcon participants.

Even if I personally believed the copyright questions about LLMs were a non-issue (spoiler, I don't), it shouldn't have been hard to predict that inflicting anything LLM-related on the people most likely to not only believe it was a big issue but to take it personally … would cause them to take it personally!

Date: 2025-05-07 02:45 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
Wow, yes, that does seem quite optimistic. Has that person ever tried looking for an answer on the Internet themself?!

Date: 2025-05-07 03:08 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
There's also some chance on Stack Overflow that you might find not just the answer, but some hints about the thought process or deeper understanding that led the responder to the answer they gave, or the debugging process someone used to find out the answer. Separately from getting the immediate problem fixed, those things can improve your ability to answer similar questions yourself in future. Not every SO post manages this, but the best ones do.

Date: 2025-05-07 05:00 pm (UTC)
hilarita: stoat hiding under a log (Default)
From: [personal profile] hilarita
Also, if I adopt some clever StoatOverflow thing into my code, I can link back (and I do) to the SO post where I got it, so it's easy to trace what I was thinking. I don't think there's a similar way to point people back to an LLM-generated answer, unless you C&P the whole thing (plus prompts) somewhere.
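For instance, a minimal sketch of what that kind of attribution comment looks like -- the function and the SO link here are made-up placeholders, not a real post:

    # Adapted from a Stack Overflow answer (placeholder URL, not a real post).
    # Keeping the link means future readers can trace the discussion and the
    # reasoning behind the trick, not just the trick itself:
    # https://stackoverflow.com/a/XXXXXXX
    def chunked(seq, size):
        """Split a sequence into fixed-size chunks."""
        return [seq[i:i + size] for i in range(0, len(seq), size)]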

Search results also produce the effect [personal profile] andrewducker alluded to: you can skim different results and get a feeling for some level of disagreement. This is rather flattened by LLMs, and of course they render it impossible to examine the source material yourself. It breaks traceability.

Date: 2025-05-07 04:20 pm (UTC)
movingfinger: (Default)
From: [personal profile] movingfinger
The committee using the LLMs to vet panelists and the Hugo Committee are two different groups of people. If you are at all interested in the matter, Nicholas Whyte et al. posted a resignation letter that is eloquent both in what it says and in what it carefully does not say.

AFAIK the panelist-vetting people are doubling down on their "everything is fine" position, but WorldCons burn down so quickly that this could have changed since I last looked.

Date: 2025-05-07 06:59 pm (UTC)
errolwi: (Default)
From: [personal profile] errolwi
Have you read the detailed description of what they asked and what they did with the information?
This is a different issue from the morality/legality of current LLMs, and from the convention participants' reaction to their use.
https://file770.com/seattle-worldcon-2025-tells-how-chatgpt-was-used/

Date: 2025-05-07 07:25 pm (UTC)
jducoeur: (Default)
From: [personal profile] jducoeur

Thanks for that specific pointer!

Honestly, to me this feels like things have reached the point of moral panic. The query they actually used basically treats ChatGPT as a glorified search engine, which is the thing it's actually good at, provided somebody actually checks the references.

(That is, LLMs often hallucinate, but links are links. If a human being follows and evaluates those links, this is more or less precisely the same as using Google for the process, just faster and easier, and likely to reduce the number of scandals showing up after the fact because somebody turns out to have a poisonous background. In their shoes, I might well have done exactly the same thing.)
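(A rough sketch of what "checking the references" amounts to, assuming the LLM hands back a list of URLs -- the example URL below is a placeholder, and real vetting would of course mean reading each page, not just confirming that it resolves:)

    import urllib.error
    import urllib.request

    def check_links(urls, timeout=10):
        """Return (url, status) pairs: an HTTP status code, or an error string."""
        results = []
        for url in urls:
            # Some sites reject requests that lack a User-Agent header.
            req = urllib.request.Request(
                url, headers={"User-Agent": "link-checker/0.1"}
            )
            try:
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    results.append((url, resp.status))
            except (urllib.error.URLError, ValueError) as err:
                results.append((url, f"failed: {err}"))
        return results

    # Placeholder input; in practice these would be the links the LLM cited.
    for url, status in check_links(["https://example.com/cited-source"]):
        print(url, status)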

Tempest in a bloody teapot, IMO -- there are plenty of problems with LLMs, but this sort of witch-hunt is counter-productive. Really, I think the only thing they're guilty of is failing to read the room...

Date: 2025-05-07 08:00 pm (UTC)
jducoeur: (Default)
From: [personal profile] jducoeur

It's not great at being a search engine, because it will imagine links, miss links, and return links to things that don't say what it claims they do. The first and third of those you can work around by checking its work. But in that case, you'd have been as well off using an actual search engine. The second you can't do anything about.

Not sure I agree. I use a variety of LLMs a moderate amount these days (via the Kagi Assistant front end), precisely because it is sometimes Much Less Effort to find my answers that way than with a normal search engine. The accuracy/relevance I'm seeing is generally on par with Kagi (much better than Google), and the summarization typically makes it much easier for me to decide which links I think are worth investigating.

As for hallucinations, it's been a while since I've seen one actually hallucinate a link. The summaries are sometimes wrong, but that's why you check their work. And the missed-links problem is equally true of search engines.

So yes, you can do the same thing with conventional search, but it would probably take much more labor -- which is precisely the reason they gave for using it. "People points" are the most precious resource when running any convention, and usually in very limited supply.

The business is colossally evil, sure -- but so is Google at this point, and most folks use that without blinking an eye. And the energy-usage equation is more complicated than most folks understand, although it's probably true that ChatGPT per se is still excessive. (One of the reasons I like Kagi Assistant is that I can do much of my work in more-optimized engines, and only use the high-powered energy-chewing ones when I really need them.)

So again: I get that they failed to read the room and understand that writers are specifically het up about LLMs in general, and on the warpath, which made the whole thing unwise politically. But aside from that, the only thing I would probably have done differently is use a more-efficient LLM for the process.

Date: 2025-05-07 08:06 pm (UTC)
greenwoodside: (Default)
From: [personal profile] greenwoodside
But in that case, you'd have been as well off using an actual search engine.

Not really. A search engine will return millions of results. A carefully worded query to an LLM will return a short summary of different points to investigate.

If it's included as part of a well-designed process with different stages and methods, with people at the helm who trust their own judgement over that of random humans on the internet, random bots, or generative AI, then an LLM could help produce better results.

Many writers are horrified by AI, others (not necessarily the loudest) are intrigued by it.

I hate AI because of the energy costs, and because it's currently so expensive to run that a tiny handful of companies with deep pockets have cornered the market, putting yet more power into the hands of the tech giants. At least DeepSeek's existence may suggest glimmers of hope in that direction. Though DeepSeek of course has its own issues.

And after all that, I'm still not sure what the panellists were being vetted for. It would be funny if one of the red flags was their own use of AI.

Date: 2025-05-07 09:44 pm (UTC)
bens_dad: (Default)
From: [personal profile] bens_dad
4. So context-sensitive pricing is good for electricity bills (item 10) but not for train journeys?

Date: 2025-05-08 11:38 am (UTC)
bens_dad: (Default)
From: [personal profile] bens_dad
I'm not absolutely certain, but I think that the Scottish government will be subsidising peak fares. The article definitely suggests that they did for the pilot.

I note that the article mentions that road travel doesn't cost more at peak times! Cue discussion of congestion charging?
