However, a year later, a couple of things came along that add something useful to it:
1) This piece on how the underlying technology works, and how it spots patterns in text and uses them to build a working model of language. If you're at all interested in a layperson-aimed introduction to the Transformer, which is what all of the recent breakthroughs in AI are based on, then I highly recommend it.
2) This piece on how robots and AI are working together. The most important bit: they are building models that blend LLMs with other kinds of input from the world, allowing them to build a much richer understanding. This quote is key:
These “vision-language-action models” (VLAMs) take in text and images, plus data relating to the robot’s presence in the physical world, including the readings on internal sensors, the degree of rotation of different joints and the positions of actuators (such as grippers, or the fingers of a robot’s hands). The resulting models can then answer questions about a scene, such as “can you see an apple?” But they can also predict how a robot arm needs to move to pick that apple up, as well as how this will affect what the world looks like... Grounding the model’s perception in the real world in this way greatly reduces hallucinations (the tendency for AI models to make things up), as well as improving their ability to tell you why they did what they did when asked.
Such models can also respond in kind. “When the robot makes a mistake, you can query the robot, and it answers in text form.” ... “It can provide explanations while we drive, and it allows us to debug, to give the system instructions, or modify its behaviour to drive in a certain style.”
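To make the quote above concrete, here's a toy sketch of the fusion idea it describes. Everything here is hypothetical (the function names, sizes, and random "embeddings" stand in for learned encoders, and this is not any real robotics API): text tokens, image patches, and proprioceptive sensor readings all get projected into one shared token sequence that a single model consumes.

```python
import numpy as np

D = 8  # shared embedding width (toy size)
rng = np.random.default_rng(0)

def embed_text(tokens):
    # Stand-in for a learned text embedding table
    return rng.normal(size=(len(tokens), D))

def embed_image_patches(n_patches):
    # Stand-in for a vision encoder's patch embeddings
    return rng.normal(size=(n_patches, D))

def embed_sensors(joint_angles, gripper):
    # Project raw body-state readings (joint rotations, gripper position)
    # into the same embedding space as the text and image tokens
    raw = np.array(joint_angles + [gripper])
    proj = rng.normal(size=(len(raw), D))  # stand-in for a learned projection
    return raw[:, None] * proj

# One fused sequence: the model sees language, vision, and body state together
sequence = np.concatenate([
    embed_text(["can", "you", "see", "an", "apple", "?"]),
    embed_image_patches(4),
    embed_sensors([0.1, 1.2, -0.4], gripper=0.8),
])
print(sequence.shape)  # (14, 8)
```

The point is only that once every modality is a row in the same matrix, one model can attend across all of them, which is what lets answers about the scene stay grounded in what the body is actually sensing.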
The latter feels like the biggest breakthrough to me. It's exactly what I was talking about in the penultimate paragraph of my last post. A system that understands not just how language works but also how it relates to the real world is a system that understands *meaning*. And that's incredibly powerful.
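As for the pattern-spotting in the first piece: the core of a Transformer is attention, in which each token is re-expressed as a weighted blend of the other tokens it's most related to. A minimal numpy sketch (toy embeddings, no learned weights; real models stack many such layers):

```python
import numpy as np

def attention(X):
    """Each row of X is a token embedding; returns attention-weighted mixes."""
    # Score every token against every other token (scaled dot product)
    scores = X @ X.T / np.sqrt(X.shape[1])
    # Softmax: each row becomes a probability distribution over tokens
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    # Each token becomes a blend of the tokens it attends to
    return weights @ X

# Three toy 4-dimensional "token" embeddings; the first two are similar,
# so they attend strongly to each other
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.9, 0.1, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
out = attention(X)
print(out.shape)  # (3, 4)
```

That's the whole trick, repeated at scale: similar tokens pull information from each other, and stacking layers of this is what builds up the "working model of language".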
While I was looking for that post, I came across a poll I ran back in 2015. I'm re-running it here to see how different the results are:
The mind is entirely material in origin, and not supernatural in any way
Given sufficient time, humans will understand the patterns which make up simple minds, and build artificial ones
If humanity doesn’t blow itself up, eventually we will create human-level AI
If humanity creates human-level AI, technological progress will continue and eventually reach far-above-human-level AI
If far-above-human-level AI comes into existence, eventually it will so overtake humanity that our existence will depend on its goals being aligned with ours
It is possible to do useful research now which will improve our chances of getting the AI's goals aligned with ours
Given that we can start research now we probably should, since leaving it until there is a clear and present need for it is unwise
Bonus Question: I would like my household robot...
In the shape of a human - that's what I'm most comfortable talking to
1 (3.1%)
Non-human, but still friendly/organic looking - artificial humans are creepy, but I like my companions fluffy
5 (15.6%)
Artificial-looking - I want it to look like a machine, not a creature.
6 (18.8%)
To _be_ the house - why have a robot in your house when you can live in a robot?
9 (28.1%)
No household robot for me, thank you - I don't want my technological assistance to think for me
9 (28.1%)
SEIWEIC
2 (6.2%)
no subject
Date: 2024-06-13 03:18 pm (UTC)
Don't think my answers have changed :)
no subject
Date: 2024-06-13 04:25 pm (UTC)
I'd be interested to know more if you wrote a post about it.
no subject
Date: 2024-06-13 04:23 pm (UTC)
We have tons of research on "Ontologies" (basically taxonomies), object relations, graphs. In the 00s there was a big push for OWL, the Web Ontology Language. The idea was that everything would have a category/scheme (I think it was supposed to be RDF-formatted data). Graph relations between descriptors and objects (apple, red-color, shiny, green-color, tree-grown, edible... etc.) were supposed to be this amazing thing that helped organize all "objects". AFAICT it didn't do much in the web space. But it was (is?) an active area of research; if not OWL, there were other schemas.
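The triple idea behind RDF/OWL can be sketched in a few lines of plain Python (a toy stand-in, purely for illustration; real systems use SPARQL queries, reasoners, and formal schemas):

```python
# Toy subject-predicate-object triple store in the spirit of RDF/OWL
triples = {
    ("apple", "has-color", "red"),
    ("apple", "has-color", "green"),
    ("apple", "grows-on", "tree"),
    ("apple", "is", "edible"),
    ("fire-truck", "has-color", "red"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "What is red?" -> everything linked to red by has-color
red_things = {s for s, p, o in query(predicate="has-color", obj="red")}
print(sorted(red_things))  # ['apple', 'fire-truck']
```

That pattern-matching over explicit relations is exactly the organizing scheme described above, and also why it stayed brittle: every fact had to be hand-entered, whereas LLMs soak up those relations implicitly from text.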
LLMs came along and I was like: why isn't ontology part of this? I mean, it is when it comes to computer vision, but only so far as it is a shitty complex structure of if/else code. If it sees an "object" in its field that's not supposed to be there (based on parameters x, y, z), STOP (unless it's a child, then accelerate and kill Elon's kids or something).
I don't even think it NEEDS a physical embodiment if one constructs a world of relational object and descriptor states; the robot/physicality, of course, is helpful in that it sort of limits you to concrete/real situations. But at that point you have animal consciousness. Maybe that's the initial goal: "bootstrap" to the real world, so that the imaginal "hallucinations" that aren't bound by reality are eventually constrained by it. Then, once the constraints on physicality and object relations are in place, allow more "hallucinations" to emerge to allow for "creativity" (so far as I can tell, though, it's still very much a rote exercise in presets and training).
Yet again too much wordiness in my reply.
Either way, I don't think it's as amazing as they say it is; I think there's a lot of hype, and I hope for a 2nd (3rd?) AI Winter.
I think it will be useful to some degree, but I hate the hype and I hate it shoved down our throats.
My biggest worry isn't "Robots", it's "Humans" who build the robots with the guns. In that sense, sure, best to prepare now to prevent these things, but we already aren't doing it now, so I don't expect us to do it then. If we cared we'd be stopping the companies who are pushing forward with supposedly "semi"-autonomous AI bot things. But we aren't. And considering all the other things we aren't actively stopping (global climate change, human-on-human war), I'm not too concerned about the robots; we deserve what we get at this point. Humans are stupid and short-sighted. This isn't necessarily by nature: some of it is the systems we've built up for ourselves, with an inherent momentum; some of it is the reaction against that, not in a side-channel (e.g. alternate futures) but in regressions (a la MAGA and Brexit and Alexander Dugin, etc.).
I don't know if I want a squidbot with tentacles cleaning all the things, Rosie the Robot from The Jetsons, or the cute little maid Vicki from the 80s sitcom. Mostly I'll just be the hoarding, filth-living being I am 'til the floods drown us, methinks.
no subject
Date: 2024-06-13 04:27 pm (UTC)
It's how you get the "object" bit of that that's tricky. Text is all descriptor. So you need to have input from the world, and if you can interact with the world then your quality of input is way higher.
no subject
Date: 2024-06-13 07:08 pm (UTC)
An AI that is like a human has many of the same flaws (note that Turing's example arithmetic is not quite right). At which point using AI is a matter of economics. The horse population has crashed massively since the car came into being.
Economics is about costs and benefits. Last I heard, training an AI used about as much energy as building a car, and it will need to retrain, so I don't know that an AI will be better for the planet than a human.
So far we appear to be working on the assumption that the output of the AI belongs to the builder, which makes it a money concentrator. To many that is its real value. I see that as the real danger.
no subject
Date: 2024-06-13 06:36 pm (UTC)
We have at least 3 household robots! A dishwasher, a washing machine, a tumble dryer … oh, and a roomba that doesn’t get out enough.
no subject
Date: 2024-06-13 08:19 pm (UTC)
(Although it is nice that the washing machine sends me a notification when it's done)
The Roomba didn't cope with things on the floor. Or stairs. So we gave it away.
no subject
Date: 2024-06-13 06:49 pm (UTC)
I suppose that you could say that non-conflicting is a synonym of aligned.
no subject
Date: 2024-06-13 08:19 pm (UTC)
Something like the relationship between Culture ships and their humans.
no subject
Date: 2024-06-14 09:26 am (UTC)
Lots of magical thinking here.
no subject
Date: 2024-06-14 09:55 am (UTC)
But that's the advantage of the system in point 2, above.
My Little Stick
Date: 2024-06-14 10:48 am (UTC)
Before I get to the answer, I want to think about constraints.
One. Currently, with a family history of Alzheimer's (not just my Dad), I am closer to needing a keeper than not. With no children or nieces or nephews, if it is cheaper to mind me with a robot than a person, then I want something fluffy and gentle.
Two. I can anthropomorphize anything - absolutely anything. Completely human-esque or a stick with a ribbon tied on it. Even just an elastic band. If said stick is active and does anything even remotely like a human or a pet or a car or my gol-darned cane which is possessed I tell you and hates my guts, then I am going to think of it as a fully sentient being.
Three. I don't want humans meddling in my life, so the house robot better not be buying me groceries or making me lists or snooping in my fridge. I will grudgingly accept toilet cleaning and dish washing, but they, without a doubt, will always be doing it wrong and don't even get me started about my laundry (Lorne & I each did our own, always). A house robot attempting to do even the welcome cleaning things better have a super comfortable appearance and manners.
So. For reasons I can't quite pin down yet, if I have a household robot, I think it needs to have a robotty appearance. Nothing scary like gun-toting headless dogs with their knees on backwards, but not completely human-passing. Given human history of slavery/indentured servitude etc, then anything completely human is going to have an identifiable ethnic/race flavour[1], even if covered in blue skin, and any being completely subservient to owners will amplify any already present biases about how certain humans should be treated. I think we need to avoid treading there.
Also. Considering developers never do anything without a maximum-profit-ethical-or-not focus, the appearance needs to remind us that the household robot is connected to people who do not love us and wish to snoop on us to make another penny. I can't think of anything more boring than my life for said snoopers, but that's just going to annoy them into gouging more pennies.
I don't want a household robot, nor would I trust one until I forgot that they weren't a sentient stick, but once they are capable of true grunt work (toilets, dishes, keeping tabs on me as I wander away), I am probably going to need one.
[1] - I know that race is a made-up concept, but we behave like it's real, and we're not very nice about it.
no subject
Date: 2024-06-14 08:14 pm (UTC)
Your household robot question took me back to "Compassion Circuit" by John Wyndham (1954, published in the Seeds of Time anthology), which is SciFi horror... I definitely do not want a humanoid robot in the house!