Interesting Links for 07-05-2025
May. 7th, 2025 12:00 pm
- 1. How to reduce the spread of crypto
- (tags:cryptography virus funny newzealand )
- 2. Dozens of clubs fight back against FA ban on trans women
- (tags:football transgender women uk )
- 3. Fear and intimidation at Newark airport, as a Palestinian-American re-enters the USA
- (tags:usa palestine OhForFucksSake )
- 4. Peak rail fares to be scrapped by Scottish government
- (tags:trains GoodNews )
- 5. Hugo Administrators Resign in Wake of ChatGPT Controversy
- (tags:Hugo awards conventions ai OhForFucksSake )
- 6. Scientists issue new warning linking microplastics with strokes
- (tags:plastic stroke )
- 7. The county that shows how benefits cuts and bills drove voters from Labour to Reform
- (tags:politics economics uk poverty )
- 8. Orsted pulls plug on Hornsea 4 windfarm (one of the world's largest)
- (tags:windpower doom )
- 9. Scottish 'Hollow Mountain' hydro power plant expansion put on hold
- (tags:hydroelectric scotland doom )
no subject
Date: 2025-05-07 12:54 pm (UTC)
no subject
Date: 2025-05-07 12:59 pm (UTC)
But they were checking whether someone was an acceptable person to have on a panel by using LLMs. Which strikes me as likely to be wrong a lot. And is a terrible use of them.
no subject
Date: 2025-05-07 02:06 pm (UTC)
Even if I personally believed the copyright questions about LLMs were a non-issue (spoiler, I don't), it shouldn't have been hard to predict that inflicting anything LLM-related on the people most likely to not only believe it was a big issue but to take it personally … would cause them to take it personally!
no subject
Date: 2025-05-07 02:40 pm (UTC)
But then I saw someone this morning who was confused about why anyone wouldn't expect AIs to be accurate, because "They scour the internet for answers".
And I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a reaction.
no subject
Date: 2025-05-07 02:45 pm (UTC)
no subject
Date: 2025-05-07 02:50 pm (UTC)
On a small tangent, I was discussing with someone at work the use of ChatGPT vs "Looking things up on Stack Overflow" - and a big bonus of Stack Overflow for me is that I get multiple answers, plus discussion in the comments on each of them about what works well or badly. I'd much rather have that, work out what I want from the various options, and make something work. Getting The One True Way, even if correct, would be less helpful to me!
no subject
Date: 2025-05-07 03:08 pm (UTC)
no subject
Date: 2025-05-07 03:19 pm (UTC)
no subject
Date: 2025-05-07 05:00 pm (UTC)
Search results also produce the effect
no subject
Date: 2025-05-07 04:20 pm (UTC)
AFAIK the panelist-vetting people are doubling down on their "everything is fine" position, but WorldCons burn down so quickly this could have changed since I last looked.
no subject
Date: 2025-05-07 05:00 pm (UTC)
no subject
Date: 2025-05-07 06:59 pm (UTC)
This is a different issue from the morality/legality of current LLMs, and from the convention participants' reaction to their use.
https://file770.com/seattle-worldcon-2025-tells-how-chatgpt-was-used/
no subject
Date: 2025-05-07 07:25 pm (UTC)
Thanks for that specific pointer!
Honestly, to me this feels like things have reached the point of moral panic. The query that they actually used is basically using ChatGPT as a glorified search engine, which is the thing it's actually good at, provided somebody actually checks the references.
(That is, LLMs often hallucinate, but links are links. If a human being follows and evaluates those links, this is more or less precisely the same as using Google for the process, just faster and easier, and likely to reduce the number of scandals showing up after the fact because somebody turns out to have a poisonous background. In their shoes, I might well have done exactly the same thing.)
Tempest in a bloody teapot, IMO -- there are plenty of problems with LLMs, but this sort of witch-hunt is counter-productive. Really, I think the only thing they're guilty of is failing to read the room...
no subject
Date: 2025-05-07 07:35 pm (UTC)
And all while supporting a business that many writers are horrified by, and using massive amounts of energy.
no subject
Date: 2025-05-07 08:00 pm (UTC)
Not sure I agree. I use a variety of LLMs a moderate amount these days (using the Kagi Assistant front end), precisely because it is sometimes Much Less Effort to find my answers that way than it is with a normal search engine. The accuracy / relevance I'm seeing is generally on par with Kagi (much better than Google), and the summarization typically makes it much easier for me to decide which links I think are worth investigating.
As for hallucinations, it's been a while since I've seen one actually hallucinate a link. The summaries are sometimes wrong, but that's why you check their work. And the missed-links problem is equally true of search engines.
So yes, you can do the same thing with conventional search, but it would probably take much more labor -- which is precisely the reason they gave for using it. "People points" are the most precious resource when running any convention, and usually in very limited supply.
The business is colossally evil, sure -- but so is Google at this point, and most folks use that without blinking an eye. And the energy-usage equation is more complicated than most folks understand, although it's probably true that ChatGPT per se is still excessive. (One of the reasons I like Kagi Assistant is that I can do much of my work in more-optimized engines, and only use the high-powered energy-chewing ones when I really need them.)
So again: I get that they failed to read the room and understand that writers are specifically het up about LLMs in general, and on the warpath, which made the whole thing unwise politically. But aside from that, the only thing I would probably have done differently is use a more-efficient LLM for the process.
no subject
Date: 2025-05-07 08:06 pm (UTC)
Not really. A search engine will return millions of results. A carefully worded query to an LLM will return a short summary of different points to investigate.
If it's included as part of a well designed process containing different stages and methods, with people at the helm who trust their own judgement over that of either random humans on the internet, random bots, or generative AI, then an LLM could help produce better results.
Many writers are horrified by AI, others (not necessarily the loudest) are intrigued by it.
I hate AI because of the energy costs, and because it's currently so expensive to run that a tiny handful of companies with deep pockets have cornered the market, putting yet more power into the hands of the tech giants. At least DeepSeek's existence may suggest glimmers of hope in that direction. Though DeepSeek of course has its own issues.
And after all that, I'm still not sure what the panellists were being vetted for. It would be funny if a red flag was them using AI.
no subject
Date: 2025-05-07 09:44 pm (UTC)
no subject
Date: 2025-05-08 02:50 am (UTC)
no subject
Date: 2025-05-08 11:38 am (UTC)
I note that the article mentions that road travel doesn't cost more at peak times! Cue discussion of congestion charging?
no subject
Date: 2025-05-08 11:50 am (UTC)
(It's quite possibly not revenue-neutral, but hopefully making it permanent will encourage people to use the train more, and make it so in the long run.)