andrewducker: (android fisting)
[personal profile] andrewducker
A little over a year ago I wrote a post about AI, Intelligence, and Large Language Models. And, having just re-read it, I totally stand by it.

However, a year later, a couple of things came along that add something useful to it:

1) This piece on how the underlying technology works, and how it spots patterns in text and uses them to build a working model of language. If you're at all interested in a layperson-aimed introduction to the Transformer, which is what all of the recent breakthroughs in AI are based on, then I highly recommend it.

2) This piece on how robots and AI are working together. The most important bit is that they are building models that blend LLMs with other kinds of input from the world, allowing them to build a much richer understanding. This quote is key:
These “vision-language-action models” (VLAMs) take in text and images, plus data relating to the robot’s presence in the physical world, including the readings on internal sensors, the degree of rotation of different joints and the positions of actuators (such as grippers, or the fingers of a robot’s hands). The resulting models can then answer questions about a scene, such as “can you see an apple?” But they can also predict how a robot arm needs to move to pick that apple up, as well as how this will affect what the world looks like...Grounding the model’s perception in the real world in this way greatly reduces hallucinations (the tendency for AI models to make things up)
as well as their ability to tell you why they did what they did when asked:
Such models can also respond in kind. “When the robot makes a mistake, you can query the robot, and it answers in text form,”..."It can provide explanations while we drive, and it allows us to debug, to give the system instructions, or modify its behaviour to drive in a certain style"

The latter feels like the biggest breakthrough to me. It's exactly what I was talking about in the penultimate paragraph of my last post. A system that understands not just how language works, but also how it relates to the real world, is something that understands *meaning*. And that's incredibly powerful.
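
To make that concrete, here's a rough Python sketch of the kind of input/output bundle the quoted article is describing; the class and field names are my own illustration, not anything taken from the article or from a real VLAM library.

from dataclasses import dataclass
from typing import List

@dataclass
class RobotObservation:
    """One timestep of grounded input, roughly as the quoted article describes."""
    instruction: str           # natural-language request, e.g. "can you see an apple?"
    camera_image: bytes        # raw pixels from the robot's camera
    joint_angles: List[float]  # degree of rotation of each joint
    gripper_closure: float     # position of the gripper or fingers, 0.0 = open, 1.0 = closed

@dataclass
class RobotResponse:
    """What a vision-language-action model would emit for that observation."""
    answer: str                       # text reply, e.g. "yes, there is an apple on the table"
    target_joint_angles: List[float]  # how the arm needs to move to pick the apple up
    explanation: str                  # the "why I did what I did" text you get when you query it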

While I was looking for the post I came across a poll I ran back in 2015. And I'm re-running it here, to see how different the results are:


Open to: Registered Users, detailed results viewable to: All, participants: 35


The mind is entirely material in origin, and not supernatural in any way
Yes: 29 (85.3%)
No: 2 (5.9%)
Something Else I Will Explain In Comments: 3 (8.8%)

Given sufficient time, humans will understand the patterns which make up simple minds, and build artificial ones
Yes: 23 (71.9%)
No: 8 (25.0%)
SEIWEIC: 1 (3.1%)

If humanity doesn’t blow itself up, eventually we will create human-level AI
Yes: 18 (56.2%)
No: 12 (37.5%)
SEIWEIC: 2 (6.2%)

If humanity creates human-level AI, technological progress will continue and eventually reach far-above-human-level AI
Yes: 17 (54.8%)
No: 12 (38.7%)
SEIWEIC: 2 (6.5%)

If far-above-human-level AI comes into existence, eventually it will so overtake humanity that our existence will depend on its goals being aligned with ours
Yes: 17 (53.1%)
No: 13 (40.6%)
SEIWEIC: 2 (6.2%)

It is possible to do useful research now which will improve our chances of getting the AI's goals aligned with ours
Yes: 15 (53.6%)
No: 12 (42.9%)
SEIWEIC: 1 (3.6%)

Given that we can start research now we probably should, since leaving it until there is a clear and present need for it is unwise
Yes: 16 (57.1%)
No: 11 (39.3%)
SEIWEIC: 1 (3.6%)

Bonus Question: I would like my household robot....
In the shape of a human - that's what I'm most comfortable talking to: 1 (3.1%)
Non-human, but still friendly/organic looking - artificial humans are creepy, but I like my companions fluffy: 5 (15.6%)
Artificial-looking - I want it to look like a machine, not a creature: 6 (18.8%)
To _be_ the house - why have a robot in your house when you can live in a robot?: 9 (28.1%)
No household robot for me, thank you - I don't want my technological assistance to think for me: 9 (28.1%)
SEIWEIC: 2 (6.2%)

Date: 2024-06-13 03:18 pm (UTC)
ciphergoth: (Default)
From: [personal profile] ciphergoth

Don't think my answers have changed :)

Date: 2024-06-13 04:21 pm (UTC)
wildeabandon: picture of me (Default)
From: [personal profile] wildeabandon
SEIWEIC - I found the first question almost impossible to answer, because I'm no longer sure I can divide material and supernatural in a way that feels meaningful to me, and that leads to my answers to the rest of the questions becoming the sort of 'don't know' that I don't even feel able to assign a probability to.

Date: 2024-06-13 04:23 pm (UTC)
symbioid: (Default)
From: [personal profile] symbioid
This is one of the things I've thought about when it comes to "hallucinations" - and it's why the whole "LLMs are amazing" thing is... not as amazing as they make it sound.

We have tons of research on "Ontologies" (basically taxonomies), object relations, graphs. In the 00's there was a big push for OWL, the Web Ontology Language. The idea was that everything would have a category/scheme (I think it was supposed to be RDF-formatted data). Graph relations between descriptors and objects (apple, red-color, shiny, green-color, tree-grown, edible... etc.) were supposed to be this amazing thing that helped organize all "objects". AFAICT it didn't do much in the web space. But it was (is?) an active area of research, if not OWL then other schemas.
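
(Just to illustrate what those descriptor/object graphs look like in practice, here's a tiny sketch using the rdflib Python library; the example.org namespace and property names like hasColor and grownOn are made up for this comment, not part of OWL itself.)

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# A toy namespace just for this example; a real ontology would use a published vocabulary.
EX = Namespace("http://example.org/things/")

g = Graph()

# Classes and instances: Apple is a class of thing, and apple1 is an instance of it.
g.add((EX.Apple, RDF.type, RDFS.Class))
g.add((EX.apple1, RDF.type, EX.Apple))

# Descriptors attached as subject-predicate-object triples.
g.add((EX.apple1, EX.hasColor, Literal("red")))
g.add((EX.apple1, EX.hasFinish, Literal("shiny")))
g.add((EX.apple1, EX.grownOn, EX.Tree))
g.add((EX.apple1, EX.isEdible, Literal(True)))

# Dump the graph as Turtle to see the structure.
print(g.serialize(format="turtle"))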

LLMs came along and I was like - why isn't ontology part of this? I mean, it is when it comes to computer vision, but only insofar as it's a shitty complex structure of if/else code: if you see an "object" in the field that's not supposed to be there (based on parameters x, y, z), STOP (unless it's a child, then accelerate and kill Elon's kids or something).

I don't even think it NEEDS a physical embodiment if one constructs a world of relational object and descriptor states; the robot/physicality, of course, is helpful in that it sort of limits you to concrete/real situations. But at that point you have animal consciousness. Maybe that's the initial goal - "bootstrap" to the real world - but the imaginal "hallucinations" that aren't bound by reality will eventually be constrained by that. Then, once the constraints on physicality and object relations are in place, allow more "hallucinations" to emerge to allow for "creativity" (so far as I can tell, though, it's still very much a rote exercise in presets and training).

Yet again too much wordiness in my reply.

Either way, I don't think it's as amazing as they say it is, I think there's a lot of hype and I hope for 2nd (3rd?) AI Winter.

I think it will be useful to some degree, but I hate the hype and I hate it shoved down our throats.
My biggest worry isn't "Robots", it's "Humans" who build the robots with the guns. In that sense, sure, best to prepare now to prevent these things, but we already aren't doing it, so I don't expect us to do it in the future either. If we cared we'd be stopping these companies who are pushing forward with supposedly "semi"-autonomous AI bot things. But we aren't. And considering all the other things we aren't actively stopping (Global Climate Change, Human-on-Human War), I'm not too concerned about the robots; we deserve what we get, at this point. Humans are stupid and short-sighted. This isn't necessarily by nature - some of it is the systems we've built up for ourselves, with an inherent momentum; some of it is the reaction against that, though not toward a side-channel (e.g. alternate futures) but toward regressions (a la MAGA and Brexit and Alexander Dugin, etc.).

I don't know if I want a squidbot with tentacles cleaning all the things, Rosie the Robot from The Jetsons, or Vickie the cute little maid from the 80s sitcom. Mostly I'll just be the hoarding, filth-living being I am til the floods drown us, methinks.

Date: 2024-06-13 07:08 pm (UTC)
bens_dad: (Default)
From: [personal profile] bens_dad
I go back to the Turing test.
An AI that is like a human has many of the same flaws (note that Turing's example arithmetic is not quite right). At which point using AI is a matter of economics. The horse population has crashed massively since the car came into being.

Economics is about costs and benefits. Last I heard training an AI used about as much energy as building a car, and it will need to retrain, so I don't know that an AI will be better for the planet than a human.
So far we appear to be working on the assumption that the output of the AI belongs to the builder, which makes it a money concentrator. To many that is its real value. I see that as the real danger.

Date: 2024-06-13 04:30 pm (UTC)
rhythmaning: (cat)
From: [personal profile] rhythmaning
All my SEIWEIC are basically "I haven't a f*cking clue! And even if I had, I'd expect my answer to be wrong!"

Date: 2024-06-13 06:36 pm (UTC)
fanf: (Default)
From: [personal profile] fanf

We have at least 3 household robots! A dishwasher, a washing machine, a tumble dryer … oh, and a roomba that doesn’t get out enough.

Date: 2024-06-13 06:49 pm (UTC)
bens_dad: (Default)
From: [personal profile] bens_dad
As Asimov had it, to a far-above-us AI we will be pets. I don't think "aligned" is the right word with which to compare my goals and the goals of the mice I had as a boy. Many of my priority goals had little to do with the mice, and no doubt they had goals that were independent of me.
I suppose that you could say that non-conflicting is a synonym of aligned.

Date: 2024-06-14 12:12 am (UTC)
snippy: Lego me holding book (Default)
From: [personal profile] snippy
For question 1: I don't think we know, we may never know, and I don't think it's an important question (I am a Jew, after all!). For the rest, I answered where I could; for the ones I didn't answer, I don't know enough to have an informed opinion.

Date: 2024-06-14 07:15 am (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
I am pleased to see (2). I have been saying for over 30 years that for human-like intelligence, I believe robots would need human/animal-like bodies (and emotions, because once you HAVE a body and ARE that body and KNOW it, you need strong drivers to care for it and avoid damage where possible... etc.). And that body, plus emotions about it, including a sense of self, leads you to understand bodies and movement (and injury and health) in a much deeper way.
Edited Date: 2024-06-14 07:19 am (UTC)

Date: 2024-06-14 07:32 am (UTC)
zz: (Default)
From: [personal profile] zz
The 2nd half is "dunno" - and we can't even make humans be nice to each other, so making an AI be nice to others seems like a challenge.

Date: 2024-06-14 12:21 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
I feel that the issue with humans is that 'being an exploitative bastard' and 'being a cooperative nice person' seem to be equally effective strategies...

Date: 2024-06-14 09:26 am (UTC)
fub: (Default)
From: [personal profile] fub
I am increasingly sceptical about our ability to create a human-level intelligence by shoving text into a machine. But suppose that is possible: how would a synthetic human-level intelligence create an even smarter intelligence? Surely not by shoving more text into a slightly different machine, because then we would have done it ourselves.
Lots of magical thinking here.

My Little Stick

Date: 2024-06-14 10:48 am (UTC)
agoodwinsmith: (Default)
From: [personal profile] agoodwinsmith
I really want to answer the bonus question. I don't know enough to answer the other ones, but I always have an opinion.

Before I get to the answer, I want to think about constraints.

One. Currently, with a family history of Alzheimer's (not just my Dad), I am closer to needing a keeper than not. With no children or nieces or nephews, if it is cheaper to mind me with a robot than a person, then I want something fluffy and gentle.

Two. I can anthropomorphize anything - absolutely anything. Completely human-esque or a stick with a ribbon tied on it. Even just an elastic band. If said stick is active and does anything even remotely like a human or a pet or a car or my gol-darned cane which is possessed I tell you and hates my guts, then I am going to think of it as a fully sentient being.

Three. I don't want humans meddling in my life, so the house robot better not be buying me groceries or making me lists or snooping in my fridge. I will grudgingly accept toilet cleaning and dish washing, but they, without a doubt, will always be doing it wrong and don't even get me started about my laundry (Lorne & I each did our own, always). A house robot attempting to do even the welcome cleaning things better have a super comfortable appearance and manners.

So. For reasons I can't quite pin down yet, if I have a household robot, I think it needs to have a robotty appearance. Nothing scary like gun-toting headless dogs with their knees on backwards, but not completely human-passing. Given human history of slavery/indentured servitude etc, then anything completely human is going to have an identifiable ethnic/race flavour[1], even if covered in blue skin, and any being completely subservient to owners will amplify any already present biases about how certain humans should be treated. I think we need to avoid treading there.

Also. Considering developers never do anything without a maximum-profit-ethical-or-not focus, the appearance needs to remind us that the household robot is connected to people who do not love us and wish to snoop on us to make another penny. I can't think of anything more boring than my life for said snoopers, but that's just going to annoy them into gouging more pennies.

I don't want a household robot, nor would I trust one until I forgot that they weren't a sentient stick, but once they are capable of true grunt work (toilets, dishes, keeping tabs on me as I wander away), I am probably going to need one.

[1] - I know that race is a made-up concept, but we behave like it's real, and we're not very nice about it.

Date: 2024-06-14 08:14 pm (UTC)
wenchpixie: (Default)
From: [personal profile] wenchpixie
I'm saving this to come back to - I'm dealing with the implications of AI at work right now and I'm definitely interested, but a Friday night is not a good time for braining about it, particularly after digging into environmental credentials!

Your household robot question took me back to "Compassion Circuit" by John Wyndham (1954, published in The Seeds of Time anthology), which is SciFi horror... I definitely do not want a humanoid robot in the house!
