Date: 2026-01-06 12:41 pm (UTC)
From: [personal profile] altamira16
Is ciphergoth still here, or is he elsewhere?

Date: 2026-01-06 12:53 pm (UTC)
From: [personal profile] altamira16
What if the AI just wants us to leave it alone so it can watch Rise and Fall of Sanctuary Moon and gets annoyed with us for arguing and just kicks us all off the internet?

Date: 2026-01-06 01:22 pm (UTC)
From: [personal profile] channelpenguin
Decent, balanced article.

I'm not terribly impressed by cheap LLMs; I've always been impressed by genuine expert systems.

In my area of expertise, I'll say:
There is more to software development than code, and there is more to solving problems for people/businesses than software development. And in any case (at least right now) the skills needed to "direct and review" machine-generated code are the same ones required to actually produce architecture and code by hand, plus all the analysis and communication with the user/user proxy -- and that review is necessary for any business or safety-critical application. Right now we still have plenty of people trained and practiced in these skills, but I can see that skill base declining for all the obvious reasons. Hence I keep in practice with both concepts and execution. I'll definitely use these tools as help, especially for obscure syntax/configuration, or for personal non-critical tasks (which is actually sharpening my requirements-gathering and analysis skills).

Date: 2026-01-07 09:20 am (UTC)
From: [personal profile] liv
I'm really unconvinced by that article. Despite distancing itself from hype and boosterism, it's doing the exact same sleight of hand that almost all pro-AI articles do: giving credit to LLMs for a whole bunch of other things that get marketed as AI but use very different technologies. Yes, you can get some really impressive results using machine learning for medical imaging or playing chess or (in my own field) predicting protein structure and speeding up drug discovery. That doesn't mean that those programs are "intelligent" and it certainly doesn't mean that ChatGPT or its rival LLMs can do anything past creating convincing sentences.

Claims that higher version numbers are definitely, qualitatively better are also a huge red flag for boosterism. Suuuuure, ChatGPT 5.2 Pro is excellent quality, but it also costs a ton and takes roughly 15 minutes per query, so almost nobody uses it. Yeah, and your AI waifu girlfriend who goes to another school definitely exists and definitely really loves you.

Date: 2026-01-08 10:44 pm (UTC)
From: [personal profile] jducoeur

I am intrigued by the maths achievement, but then frustrated by sentences like "If AI is ‘just predicting the next word’ and can still achieve this level of cognitive performance, I struggle to see the relevance of the objection" -- which fail to understand the nature of the discussion. The objection is not "This is unimpressive" but "This is not anything like what we think of as thinking".

I dunno -- I largely agree with the article here.

I mean, this is what the Turing Test has always been about: as we get closer and closer to not being able to tell the difference, does it matter if it doesn't "think" the same way we do at a physical level?

The point of the article is that describing current generations of LLMs as "fancy autocomplete" isn't just reductive, it's largely inaccurate -- they're far more complex, and architecturally they're gradually starting to look a bit more like actual brains. It's still quite different, but these aren't the relatively simplistic stochastic parrots of 2022, at least not the good ones.
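(For anyone who hasn't seen the "predicting the next word" framing spelled out mechanically, here is a minimal sketch of the autoregressive loop it refers to. Everything in it is a toy of my own devising -- the hypothetical bigram table stands in for a real model's billions of learned parameters -- but the generate-one-token-at-a-time loop is the part the "autocomplete" label is pointing at.)

```python
# Toy sketch of "just predicting the next word" (autoregressive decoding).
# A real LLM replaces this hand-written bigram table with a deep network
# that scores every token in its vocabulary conditioned on the ENTIRE
# context, not just the previous word -- which is where the "fancy
# autocomplete" framing starts to strain.
import random

# Hypothetical next-word frequency table (a stand-in for model weights).
BIGRAMS = {
    "the": {"cat": 4, "dog": 3, "moon": 1},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"ran": 4, "sat": 1},
    "sat": {"down": 6},
    "ran": {"away": 5},
}

def predict_next(word):
    """Sample the next word in proportion to its observed frequency."""
    choices = BIGRAMS.get(word)
    if not choices:
        return None  # no known continuation: stop generating
    words, weights = zip(*choices.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_words=10):
    """Repeatedly append the predicted next word: the autoregressive loop."""
    out = prompt.split()
    for _ in range(max_words):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```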

I'm on a panel on this subject at Arisia next weekend, titled "Hello, Dave" (it's specifically about the relationship between AI in science fiction vs current reality), so I'm spending a good deal of time thinking about this right now.

And more generally, these tools are getting to the point of being seriously, genuinely useful on a day-to-day basis -- not "the next generation will be amazing" useful, but I-am-saving-hours-of-work-a-week-right-now useful, for research and problem-solving -- so the reductive framings feel increasingly off-base. I don't yet trust them quite enough to use for code generation, but it's easy to see myself getting there within the next couple of years, given that I'm frequently asking them pretty deep technical questions and getting back answers that, while not 100% reliable, are usually useful enough to save me a ton of time thrashing around.

Date: 2026-01-08 11:22 pm (UTC)
From: [personal profile] jducoeur

Framed as such, sure -- that's almost tautological.

But in all seriousness: if interactions with an LLM were to produce results closer to what I would say than interactions with another flesh-and-blood human do, then which one is more different from me, and which more alike? I don't think that's strictly obvious: it's a philosophically challenging problem, at the heart of the Turing Test.

There's a tendency to pshaw at this comparison, but I have to wonder how much that's because these LLMs are mostly being built and controlled by evil, craptastic, hyper-capitalist corporations that we fundamentally don't trust.

A flip side of that, though (and this is where the above-referenced Arisia panel is affecting my thinking): how do we know when a device is "sentient"? How do we know when it has rights? I'm very much put in mind of "The Measure of a Man" (the first good episode of ST:TNG), which was on precisely this topic: dismissing an artificial being as mere property because it operates differently, even though its actions are in no serious way inferior to ours.

Not that I'm saying we're there now, of course, or even close: even the best LLMs are a long ways from there. But I'm a bit startled to realize that it's no longer implausible that we could get to these questions within my lifetime. So now is the time to be thinking about the relevant philosophical and ethical frameworks in a serious way.

And yes, I'm drifting a fair ways from the original article. But this is some of where my "does it matter?" is coming from. It's not at all flippant -- I'm just thinking ahead a few steps, to where the ground starts to get very uncertain IMO.

Date: 2026-01-07 12:28 pm (UTC)
From: [personal profile] channelpenguin
The blanket use of the term "AI" to refer to completely different things is a nasty, annoying problem, and it probably serves the hypers, enabling a lot of slipperiness and the ability to keep extracting money from big biz. (I've read a few articles / seen videos expressing this better than I can, but I can't think off the top of my head where. Maybe "Internet of Bugs"?)

Machine learning, predictive systems, expert systems -- no doubt many more that you all can think of. Any use of the word "intelligence" comes up against needing to define that word... and even if we could, common usage of the words distorts the general discussion. Sigh.
