Interesting Links for 23-05-2025
May. 23rd, 2025 12:00 pm
- 1. Why are Scotland's councils so short of cash when tax is going up?
- (tags:scotland tax )
- 2. Greenland Signs Lucrative Minerals Deal with Europe in Blow to Trump
- (tags:Europe USA materials Greenland trade )
- 3. How I Beat NES Mario in 0.000005 Seconds (the nerdiest video I have ever watched. If you've seen something nerdier then do let me know)
- (tags:video games programming technology mario )
- 4. Trump's image of dead 'white farmers' came from Reuters footage in Congo, not South Africa
- (tags:USA politics Africa southafrica )
- 5. Programmers spend 5% of their time editing code; the rest is mostly spent understanding it and the issue they're trying to solve.
- (tags:programming research )
- 6. Which word made you old? (Mine was GOAT)
- (tags:language age change comic )
- 7. Explaining "what a species is" turns out to be very very tricky
- (tags:video ontology life )
no subject
Date: 2025-05-25 05:30 pm (UTC)
On #5: THIS. The exact numbers aside, this is why I consider AI-based coding tools problematic not just in terms of their quality -- more importantly, they're solving the wrong problem.
Writing code shouldn't be the thing that needs automation; frankly, if it is, I generally think the codebase isn't well-enough factored.
What I find LLMs useful for is simply as a glorified search engine. That's not a small thing: my experience over the past six months is that phrasing my question as a question in Kagi (which turns on the Quick Answer feature), and then double-checking the citations, takes me about half as much time as a conventional search does. Given how much of my time is spent simply looking stuff up, that adds up.
But code automation? Until and unless it produces truly high-quality code, without dumb bugs, based specifically on my codebase, it takes the minor problem and makes it worse, which is a foolish waste of my time.