However, a year later, a couple of things came along that add something useful to it:
1) This piece on how the underlying technology works, and how it spots patterns in text and uses them to build a working model of language. If you're at all interested in a layperson-aimed introduction to the Transformer, which is what all of the recent breakthroughs in AI are based on, then I highly recommend it.
2) This piece on how robots and AI are working together. The most important bit is that they are building models that blend LLMs with other kinds of input from the world, allowing them to build a much richer understanding. This quote is key:
These “vision-language-action models” (VLAMs) take in text and images, plus data relating to the robot’s presence in the physical world, including the readings on internal sensors, the degree of rotation of different joints and the positions of actuators (such as grippers, or the fingers of a robot’s hands). The resulting models can then answer questions about a scene, such as “can you see an apple?” But they can also predict how a robot arm needs to move to pick that apple up, as well as how this will affect what the world looks like... Grounding the model’s perception in the real world in this way greatly reduces hallucinations (the tendency for AI models to make things up), as well as their ability to tell you why they did what they did when asked.
Such models can also respond in kind. “When the robot makes a mistake, you can query the robot, and it answers in text form”... “It can provide explanations while we drive, and it allows us to debug, to give the system instructions, or modify its behaviour to drive in a certain style.”
The latter feels like the biggest breakthrough to me. It's exactly what I was talking about in the penultimate paragraph of my last post. A system that understands not just how language works, but also how language relates to the real world, is a system that understands *meaning*. And that's incredibly powerful.
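To make that a bit more concrete, here's a minimal sketch of the kind of interface such a vision-language-action model might expose. It's purely my own illustration, not from either article: the class and field names (Observation, Action, ToyVLAM and so on) are hypothetical, and the point is simply that language, pixels, and the robot's own sensor readings go in together, and a motor command plus a textual explanation come out.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch only: the names here are illustrative,
# not from either article or any real robotics library.

@dataclass
class Observation:
    instruction: str               # e.g. "can you see an apple?" or "pick up the apple"
    camera_image: List[List[int]]  # stand-in for raw camera pixels
    joint_angles: List[float]      # degree of rotation of each joint
    gripper_opening: float         # position of the gripper actuator

@dataclass
class Action:
    joint_deltas: List[float]      # how much to move each joint this step
    explanation: str               # text the model can return when queried

class ToyVLAM:
    """Stand-in for a real vision-language-action model. A real one is a large
    neural network trained on paired language, vision and sensor data; this toy
    just returns a fixed action so the interface is runnable."""

    def act(self, obs: Observation) -> Action:
        return Action(
            joint_deltas=[0.0] * len(obs.joint_angles),
            explanation=f"Holding still: I could not ground '{obs.instruction}' in the current image.",
        )

obs = Observation("pick up the apple", [[0]], [0.0, 45.0, 90.0], 0.8)
print(ToyVLAM().act(obs).explanation)
```

The detail that matters is the return type: because the action and the explanation come out of the same grounded model, you can ask the robot why it did what it did, which is what makes the debugging-by-conversation described in the quote possible.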
While I was looking for that post, I came across a poll I ran back in 2015. I'm re-running it here to see how different the results are:
1) The mind is entirely material in origin, and not supernatural in any way
2) Given sufficient time, humans will understand the patterns which make up simple minds, and build artificial ones
3) If humanity doesn’t blow itself up, eventually we will create human-level AI
4) If humanity creates human-level AI, technological progress will continue and eventually reach far-above-human-level AI
5) If far-above-human-level AI comes into existence, eventually it will so overtake humanity that our existence will depend on its goals being aligned with ours
6) It is possible to do useful research now which will improve our chances of getting the AI's goals aligned with ours
7) Given that we can start research now, we probably should, since leaving it until there is a clear and present need for it is unwise
Bonus Question: I would like my household robot....
In the shape of a human - that's what I'm most comfortable talking to: 1 (3.1%)
Non-human, but still friendly/organic looking - artificial humans are creepy, but I like my companions fluffy: 5 (15.6%)
Artificial-looking - I want it to look like a machine, not a creature: 6 (18.8%)
To _be_ the house - why have a robot in your house when you can live in a robot?: 9 (28.1%)
No household robot for me, thank you - I don't want my technological assistance to think for me: 9 (28.1%)
SEIWEIC: 2 (6.2%)