Interesting Links for 07-05-2025
May. 7th, 2025 12:00 pm
- 1. How to reduce the spread of crypto
- (tags:cryptography virus funny newzealand )
- 2. Dozens of clubs fight back against FA ban on trans women
- (tags:football transgender women uk )
- 3. Fear and intimidation at Newark airport, as a Palestinian-American re-enters the USA
- (tags:usa palestine OhForFucksSake )
- 4. Peak rail fares to be scrapped by Scottish government
- (tags:trains GoodNews )
- 5. Hugo Administrators Resign in Wake of ChatGPT Controversy
- (tags:Hugo awards conventions ai OhForFucksSake )
- 6. Scientists issue new warning linking microplastics with strokes
- (tags:plastic stroke )
- 7. The county that shows how benefits cuts and bills drove voters from Labour to Reform
- (tags:politics economics uk poverty )
- 8. Orsted pulls plug on Hornsea 4 windfarm (one of the world's largest)
- (tags:windpower doom )
- 9. Scottish 'Hollow Mountain' hydro power plant expansion put on hold
- (tags:hydroelectric scotland doom )
no subject
Date: 2025-05-07 08:00 pm (UTC)
Not sure I agree. I use a variety of the LLMs a moderate amount these days (using the Kagi Assistant front end), precisely because it is sometimes Much Less Effort to find my answers that way than it is with a normal search engine. The accuracy / relevance I'm seeing is generally on par with Kagi (much better than Google), and the summarization typically makes it much easier for me to decide which links I think are worth investigating.
As for hallucinations, it's been a while since I've seen one actually hallucinate a link. The summaries are sometimes wrong, but that's why you check their work. And the missed-links problem is equally true of search engines.
So yes, you can do the same thing with conventional search, but it would probably take much more labor -- which is precisely the reason they gave for using it. "People points" are the most precious resource when running any convention, and usually in very limited supply.
The business is colossally evil, sure -- but so is Google at this point, and most folks use that without blinking an eye. And the energy-usage equation is more complicated than most folks understand, although it's probably true that ChatGPT per se is still excessive. (One of the reasons I like Kagi Assistant is that I can do much of my work in more-optimized engines, and only use the high-powered energy-chewing ones when I really need them.)
So again: I get that they failed to read the room and understand that writers are specifically het up about LLMs in general, and on the warpath, which made the whole thing unwise politically. But aside from that, the only thing I would probably have done differently is use a more-efficient LLM for the process.