no subject
Date: 2026-01-06 12:41 pm (UTC)

no subject
Date: 2026-01-06 01:06 pm (UTC)
You can find him over here
no subject
Date: 2026-01-06 12:53 pm (UTC)

no subject
Date: 2026-01-06 01:05 pm (UTC)
Then we'll all be better off because of it.
no subject
Date: 2026-01-06 01:22 pm (UTC)
I'm not terribly impressed by cheap LLMs; I've always been impressed by genuine expert systems.
In my area of expertise, I'll say:
There is more to software development than code, and there is more to solving problems for people and businesses than software development. And in any case (at least right now), the skills needed to "direct and review" machine-generated code are the same ones required to actually produce architecture and code by hand, plus all the analysis and communication with the user or user proxy; that review is necessary for any business or safety-critical application.
Right now, we still have plenty of people trained and practiced in these skills. I can see that skill base declining for all the obvious reasons, hence I keep in practice with both concepts and execution. I'll definitely use these tools as help, especially for obscure syntax and configuration, or for personal non-critical tasks (which is actually sharpening my requirements-gathering and analysis skills).
no subject
Date: 2026-01-07 09:20 am (UTC)
Claims that higher version numbers are definitely, qualitatively better are also a huge red flag for boosterism. Suuuuure. Yeah, and your AI waifu girlfriend who goes to another school definitely exists and definitely really loves you.
no subject
Date: 2026-01-07 09:32 am (UTC)
I'm not convinced either. The "it's mediocre now, but the next version will be amazing" approach has been going on since practically the second week of LLMs existing.
I'm not using them myself, and I get very frustrated by people quoting them in discussions.
I am intrigued by the maths achievement, but then frustrated by sentences like "If AI is 'just predicting the next word' and can still achieve this level of cognitive performance, I struggle to see the relevance of the objection" - which misunderstands the nature of the objection. The objection is not "this is unimpressive" but "this is not anything like what we think of as thinking".
no subject
Date: 2026-01-08 10:44 pm (UTC)
I dunno -- I largely agree with the article here.
I mean, this is what the Turing Test has always been about: as we get closer and closer to not being able to tell the difference, does it matter if it doesn't "think" the same way we do at a physical level?
The point of the article is that describing current generations of LLMs as "fancy autocomplete" isn't just reductive, it's largely inaccurate -- they're far more complex, and are gradually moving in a direction where, architecturally, they're starting to look more like actual brains, bit by bit. They're still quite different, but these aren't the relatively simplistic stochastic parrots of 2022, at least not the good ones.
I'm on a panel on this subject at Arisia next weekend, titled "Hello, Dave" (it's specifically about the relationship between AI in science fiction vs current reality), so I'm spending a good deal of time thinking about this right now.
And more generally, these tools are getting to the point of being seriously, genuinely useful on a day-to-day basis -- not "the next generation will be amazing", I mean I am saving hours of work a week using them for research and problem-solving -- so the reductive framings feel increasingly off-base. I don't yet trust them quite enough to use them for code generation, but it's easy to see myself getting there within the next couple of years, given that I am frequently asking them pretty deep technical questions, and getting back answers that, while not 100% reliable, are usually useful enough to save me a ton of time thrashing around.
no subject
Date: 2026-01-08 10:52 pm (UTC)
"does it matter"
If you care about what things are, and how they work, on more than a very facile level, then of course it matters what they really are.
no subject
Date: 2026-01-08 11:22 pm (UTC)
Framed as such, sure -- that's almost tautological.
But in all seriousness: if interactions with an LLM were to produce results closer to what I would say than interactions with another flesh-and-blood human would, which is more different from me, and which more alike? I don't think the answer is strictly obvious: it's a philosophically challenging problem, at the heart of the Turing Test.
There's a tendency to pshaw at this comparison, but I have to wonder how much that's because these LLMs are mostly being built and controlled by evil, craptastic, hyper-capitalist corporations that we fundamentally don't trust.
A flip side of that, though (and this is where the above-referenced Arisia panel is affecting my thinking): how do we know when a device is "sentient"? How do we know when it has rights? I very much have in mind "Measure of a Man" (the first good episode of ST:TNG), which was on precisely this topic: dismissing an artificial being as simply property because it operates differently, even though its actions are in no serious way inferior to ours.
Not that I'm saying we're there now, of course, or even close: even the best LLMs are a long ways from there. But I'm a bit startled to realize that it's no longer implausible that we could get to these questions within my lifetime. So now is the time to be thinking about the relevant philosophical and ethical frameworks in a serious way.
And yes, I'm drifting a fair ways from the original article. But this is some of where my "does it matter?" is coming from. It's not at all flippant -- I'm just thinking ahead a few steps, to where the ground starts to get very uncertain IMO.
no subject
Date: 2026-01-07 12:28 pm (UTC)
Machine learning, predictive systems, expert systems, no doubt many more that you all can think of. Any use of the word "intelligence" comes up against needing to define that word... and even then, common usage of words distorts the general discussion. Sigh.