andrewducker: (Default)
[personal profile] andrewducker
There's a school of thought that says that "People set a bar for AI, and whenever computers achieve that bar it's moved. And this is because people are protective of their intelligence, so they treat anything a computer can do as clearly not *real* intelligence."

And I can understand objections to that - it feels unfair to keep moving the bar. The problem is that lots of people have no working definition of "intelligence" at all. They have an inductive "feeling" for what intelligence is, and most of the time that works just fine. And it certainly used to feel like, for instance, "being able to beat a human at chess" would require intelligence from a human, so presumably if a computer could do it then it would be an artificial intelligence. So they set the bar wherever it feels like "If the computer can solve this task then surely it must be intelligent".

And actually that's kinda true, if your definition of intelligence is "Can consider lots of possibilities about a chess board, and find the one that's the most effective." The problem is that they then got an "AI" that could only apply its "intelligence" to chess. And it didn't really understand chess, it just had a set of steps to follow that allowed it to do well at chess. If you gave the set of steps to a person who had never played chess, and got them to follow the steps, then they'd be just as likely to win a game. But they'd have no mental model of chess, because that's not how (most) chess engines work.
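For anyone who hasn't seen what such a "set of steps" looks like, here's a rough sketch in Python of the classic minimax idea - not how any particular engine is actually implemented, and with the game rules (legal_moves, apply_move) and the scoring heuristic (evaluate) left as hypothetical placeholders:

```python
def minimax(position, depth, maximising, legal_moves, apply_move, evaluate):
    """Look `depth` moves ahead and return the best score reachable from `position`.

    `legal_moves`, `apply_move` and `evaluate` stand in for the rules of chess and a
    hand-written heuristic (e.g. counting material); any concrete chess library could
    supply them. Nothing here "understands" chess - it just compares numbers.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    child_scores = [
        minimax(apply_move(position, move), depth - 1, not maximising,
                legal_moves, apply_move, evaluate)
        for move in moves
    ]
    return max(child_scores) if maximising else min(child_scores)
```

A person who had never played chess could carry out exactly this procedure with pencil and paper (very slowly) and make reasonable moves, without ever forming anything you'd call a model of chess.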

And that idea of a mental model is where my definition of intelligence comes from - "The ability to form models from observations and extrapolate from them."

If something is able to form those models and then use them to make predictions, or to analyse new situations, or to extend the models and test them, then it's intelligent. They might be models of how car engines work, or how French works, or how numbers work, or how humans (or their societies) work. Or, indeed, of how to catch updrafts while hunting fieldmice, or where the best grazing is that's safe for your herd, or how to get the humans to deliver the best treats. These are all things that one can create mental models of, and then use those models to understand them and predict how they interact with the world.

I mention this because of the recent excitement about Large Language Models. The kind of thing which GPT is an example of, and which exploded onto the scene with extremely impressive demonstrations of conversational ability. These models are, to put it mildly, incredibly impressive. They were trained on huge amounts of text, and they can do an awesome job of taking a prompt and generating some text which looks (mostly) like a human wrote it. It is, frankly, amazing how well they can do this.

And, as you'd expect, some people have come out and said "If a computer can solve this task then surely it must be intelligent." Particularly because we are very used to judging people's intelligence based on how they write (particularly on the internet, where that's frequently all we have to go on). But "This looks like a person wrote it" is exactly what GPT is designed to produce. To quote François Chollet: "Saying 'ChatGPT feels intelligent to me so it must be' is an utterly invalid take -- ChatGPT is literally an adversarial attack on your theory-of-mind abilities."

To be fair, though, LLMs _do_ have models. They make models of what well-written answers to questions look like. Impressively good ones. But that shouldn't be confused with understanding those questions, or having any kind of model of the world. It's great that, when asked "What does a parrot look like?", it can say "A parrot is a colorful bird with a distinctive curved beak and zygodactyl feet, which means that they have two toes pointing forward and two pointing backward." - because it knows which words are associated with describing what things look like, which words are associated with the word "parrot", how to structure a sentence, etc. But that doesn't mean it has any idea what a curve actually looks like. The word "curved" means something to you because when you were very young people showed you curves and said the word "curve" enough times that you made the connection between experience and language. LLMs have no experience, they only have language. And no matter how much language you pile onto a model, and how many words you link to each other, if none of them ever link to a real experience of a real thing then there's no "there" there - it's all language games. And that's why these systems will regularly say things with no connection to reality - they don't understand what they're saying, they aren't connected to reality, they're just making sentences that look plausible.

Simply put, an LLM is amazing, but what it's amazing at is understanding language patterns and working out what piece of language should come next so that the result looks like a person wrote it. And language is only meaningful if it's connected to concepts, and you connect those by starting with, for instance, experiencing dogs and *then* learning the word "dog". Or experiencing dogs, then learning the word "cat" by being told that cats are like dogs except for certain differences.
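To make that "what comes next" idea concrete, here's a deliberately tiny toy of my own - a word-level bigram counter, nothing like the neural networks inside a real LLM, with made-up training text - that generates plausible-looking word sequences purely from which words it has seen follow which:

```python
import random
from collections import defaultdict

# Toy "language model": record which word has followed which in the training text,
# then generate by repeatedly picking a word that has followed the previous one.
training_text = (
    "a parrot is a colorful bird . a parrot has a curved beak . "
    "a dog is a loyal animal . a dog has a wet nose ."
)

follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start, length=6):
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))  # pick something that has come next before
    return " ".join(output)

print(generate("a"))  # might print: "a parrot has a wet nose ." - fluent-looking, grounded in nothing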

However! This doesn't mean that there couldn't be a "there", that a system not unlike an LLM couldn't learn how to interact with reality, to associate words with physical things, and develop an intelligence that was rooted in an understanding of the world. I suspect it will need to be significantly bigger than existing models, and to be able to work with huge amounts of memory in order to store the context it needs for various situations. But the idea of building models based on huge amounts of input, and then extrapolating from them, is clearly one that's not going anywhere.

In the meantime, I can't give you a better idea of what large language models are, and why they produce the things they do, than this rather wonderful description.

Date: 2023-05-02 02:04 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
My opinion hasn't changed for 30+years (you may even recall, from our dim and distant youth). I think that for machines to be intelligent, they must also have 1) a "body" - physical equipment to interact with the physical world 2) "emotions" - an aversion to harm to their "body" and most likely some drive towards a goal or goals. (else ... "I think you ought to know, I'm feeling very depressed")

Date: 2023-05-02 02:50 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
I think we'll see if I'm right if and when it happens.

If intelligence without emotion could exist, I don't think we'd recognise, understand or be able to relate to it. Every current instance of intelligence is in an animal - in all cases directed by drives we call emotions. Why would an intelligent machine do one thing instead of another? I suppose we're not considering it's doing things purely at random...

Do you see "intelligent" machines only following direct orders? No self-direction or self-determination? Would that be intelligent?

If an intelligent machine is NOT purely following external orders, how does it choose what to do? Why do one thing instead of another, pursue one goal instead of another? In my mind, the mechanisms for those choices in anything we would recognise as intelligent would be a mix of logic and emotion, just as for animals (including us). And I have long thought that the most base emotion is "I have a body that enables my mind to interact with the world - and I want it to keep working - I don't want it harmed". (Well, maybe I mean easiest to implement... Obviously it gets more complicated quickly when you have other aims that might be worth or require a little damage...)

Date: 2023-05-02 03:07 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
Well, understanding comes from doing things. Interacting with the world. I thought that was part of what you were saying in your post. Maybe I misread, or read something different into it.

But I'd definitely argue that it would be extremely hard for us humans to recognise disembodied intelligence. Since every single example of intelligence we've ever encountered IS embodied.

You're not some kind of Cartesian dualist are you? (Reveal: I'm about as far from that as you can get)

Date: 2023-05-02 03:45 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
Hmmmm , do I believe that intelligence requires (at least some degree of) self-awareness? Possibly. That could be where I'm coming from. Combined with a definition of "self" that means having a physical presence in the material world (cue Madonna :-) ). And that intelligence is performed, not a fixed quality that "just is". Hmmm.

Of course everyone else's mileage may vary.

Every example we have of intelligence is embodied. But hey, how long did we think all solar systems were like ours, and how quickly did that change when we actually saw a few that were different from our own? So maybe if I meet a possibly-intelligent disembodied machine, I'll change my mind :-).

I wonder how much of our own bias stems from how comfortable we are with being a physical creature (with all the joys and limitations). Maybe those who don't like it so much have a bit more invested in seeing other ways that intelligence could exist.

Date: 2023-05-11 09:41 pm (UTC)
azurelunatic: Vivid pink Alaskan wild rose. (Default)
From: [personal profile] azurelunatic
One of the classic tests for self-awareness is the mirror test, where an entity is shown a dynamic and current image of themself and then something changes. I don't recall what the classic change is, but with my housemate's late cat, I positioned my finger within immediate touching distance of her ear, coming up from behind so she would not be able to see it without use of the cellphone selfie camera. She flicked her ear. I repeated the motion with the camera off, with it pointed elsewhere, and with other variations. If she didn't see it in the cellphone mirror, she didn't flick her ear. If she did see my finger proximal to her ear in the phone, she did flick it. I would say that by that test, she had a sense of self.

My partner's cat - who has studied all the automatic feeders we have used for her, figured out how to defeat at least one model and how to re-lock another, screams at us when she has medical issues that she'd like fixed, and knocks on doors when she wants out - does not have a mirror-based sense of self.

Date: 2023-05-02 06:10 pm (UTC)
toothycat: (Default)
From: [personal profile] toothycat
Watching the development of self-driving cars is interesting. These are embodied machines with goals. Moreover, the builders of these devices desperately need them to form a correct model of the world around them, one with sufficient fidelity to be useful for making predictions.

What we actually end up with is software which is usable in some situations but cannot be relied on in the general case; and there seems to be no immediate prospect of this changing without some fundamental shift in approach.

Date: 2023-05-02 06:18 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
Good example. Externally set goals, but goals nonetheless.

And, as you say, woeful performance, whereas most humans can be trained to the task in a short time.

Date: 2023-05-02 02:24 pm (UTC)
doug: (Default)
From: [personal profile] doug
You might like to know that the first chess engine was an algorithm devised by Turing in 1950, following on from some theoretical work on how it could be done by Shannon. Unfortunately, it was beyond the capabilities of any computer available at the time. So what was (arguably) the first ever human-vs-computer chess match was Shannon playing Turing's algorithm as manually-implemented by Turing, which took him half an hour per move. The algorithm lost.

Date: 2023-05-02 02:35 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
ChatGPT is literally an adversarial attack on your theory-of-mind abilities

That's an interesting way of looking at it. And it connects directly to the Turing test, of course: is ChatGPT (or maybe its next-but-3 successor, whatever) also an adversarial attack on the Turing test? Is it about to demonstrate that Turing picked the wrong definition?

I think that at present, ChatGPT's most interesting way to fail the Turing test – in the sense of behaving very unlike a human answering the same question – is where it lies (or perhaps, per Harry Frankfurt's distinction, bullshits) with no inhibitions whatsoever and also with no discernible purpose.

Of course humans lie and bullshit in many situations, but generally they'll have some purpose in doing so: you lie to protect a secret that has some consequences to you if it's found out, you bullshit for your own aggrandizement or profit. In both cases, you won't depart from the truth in the first place unless you have a reason to bother – because it requires work, in that you have to take reasonable care to ensure that the untruths you're spouting are not trivially falsifiable. The used-car vendor who goes on about how well-cared-for and reliable the car is doesn't expect that the customer will never find out the truth – but does at least care that the customer doesn't find it out until it's too late. ChatGPT really couldn't give a monkey's.

But then ... wait a minute. Humans do sometimes answer questions in a manner that's immediately obviously an untruth, or extremely unhelpful. If they don't care a hoot about helping the questioner, and can't see any downside to themself in being caught in a lie, why not just say any old thing they think is funny, and enjoy the confusion and/or annoyance they cause in their questioners?

ChatGPT is nowhere near passing the Turing test if the test judge is trying to distinguish it from a sensible human trying to behave reasonably. But it might be pretty close if the judge is trying to distinguish it from an Internet troll.

Date: 2023-05-02 06:07 pm (UTC)
bens_dad: (Default)
From: [personal profile] bens_dad
Turing's example includes a long addition (twelve digits, IIRC) with one digit wrong, which takes several seconds.

Once I read that, I decided that an AI gives up most of the computer's principled/theoretical advantages over humans - all that is left is economics.

If it takes as much energy to train an AI as to make a car (which may be more than the car uses over its lifetime) and some fraction of that to keep its knowledge up to date, we may find that an AI is not much, if any, cheaper than a person.
I think the more important discussion is who profits from an AI. If its training set is the internet, then it seems to me that the world owns half of the IP and therefore half of the revenue it generates.

(If AI does make much human work unnecessary, one solution might be a Universal Income, though I see economic problems with that, such as the fact that AIs cannot yet grow food or construct buildings or water systems.)

Date: 2023-05-02 06:25 pm (UTC)
bens_dad: (Default)
From: [personal profile] bens_dad
Turing's example includes a long addition (twelve digits, IIRC) with one digit wrong, which takes several seconds.

Thanks to ciphergoth's link I can correct that recollection: a 5-digit plus 5-digit addition with a 6-digit answer, one digit off by one, given after a thirty-second delay.
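(For the record, if my memory of the paper is right, the exchange is: "Add 34957 to 70764", answered after about thirty seconds with 105621 - whereas 34957 + 70764 = 105721, so it's the hundreds digit that's off by one.)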

Date: 2023-05-02 06:30 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
We see this logic of people being cheaper all the time - at least, anyone who's worked in a factory or a "low skill" job has (hehe, many of those take all sorts of skills).

30 years ago, I worked in a factory where (as a comp sci student) I was fascinated to discover that a lot of the clever machinery was still mechanical - cams and pneumatics and thermocouples - not programmed, but rather built so that action A causes (or restricts) actions B, C, D, E and F.

And even more interesting, we had one more highly automated line that produced the same output as a manual line. It was much more costly to run, because it needed regular attention from the more expensive technicians, unlike the manual line, which 4-6 cheap humans (uneducated women) could easily keep going most of the week without needing tech input.

Date: 2023-05-03 08:03 am (UTC)
drplokta: (Default)
From: [personal profile] drplokta
It’s already the case that jobs that people like doing are paid less well than jobs that people don’t like doing. That’s part of why, for example, doctors and teachers are paid less than bankers.

Date: 2023-05-02 02:50 pm (UTC)
ciphergoth: (Default)
From: [personal profile] ciphergoth
What dialogues with GPT-4 do you think most strongly illustrate the way in which it's an "adversarial attack on your theory-of-mind abilities" rather than "real intelligence"?

Date: 2023-05-02 03:55 pm (UTC)
ciphergoth: (Default)
From: [personal profile] ciphergoth
If this difference between "real intelligence" and what GPT-4 has is not manifested in a difference in capabilities, I don't see what point you're trying to make.

Date: 2023-05-02 04:05 pm (UTC)
ciphergoth: (Default)
From: [personal profile] ciphergoth

I am reminded of the introductory paragraph to Computing Machinery and Intelligence:

I PROPOSE to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

Date: 2023-05-02 04:21 pm (UTC)
ciphergoth: (Default)
From: [personal profile] ciphergoth

I would think that any real difference in modelling ability as you describe would be detectable as a difference in capabilities. If you really can't detect it from the outside it feels like "angels on the head of a pin" stuff.

I would actually guess the other way - that the modelling stuff is a real difference in capabilities between people and GPT-4 that is detectable using queries! As a result, I'm always on the lookout for queries that starkly illustrate that difference.

Date: 2023-05-02 03:47 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
Was that question AI generated? :-)

Date: 2023-05-02 03:53 pm (UTC)
ciphergoth: (Default)
From: [personal profile] ciphergoth
I appreciate your well-articulated perspective on the definition of intelligence and how it applies to AI, especially large language models like GPT. However, I see a couple of points that could be reconsidered.

Firstly, it is important to differentiate between different types of intelligence. The human understanding of intelligence is rather multifaceted; it includes not only problem-solving and pattern recognition abilities but also emotional, spatial, and interpersonal intelligence, among others. When we discuss AI, we often lean towards a specific aspect of intelligence: problem-solving. This is where AI excels, as with the chess example you mentioned. But it's also true that AI, including GPT, lacks the ability to have a rich understanding of the world in the way that humans do, especially in terms of emotional and experiential intelligence. Therefore, the assertion that AI isn't intelligent because it lacks a comprehensive model of the world might be overlooking the type of intelligence that these systems do possess.

Secondly, the idea that large language models don't understand language or reality is arguably more of a philosophical question than a practical one. While it's true that AI models like GPT don't understand language or reality in the human sense, they do understand patterns in data. They "understand" in the sense that they can process and generate language based on the patterns they've been trained on. This form of understanding, while vastly different from our own, is nevertheless a form of understanding.

Your point on LLMs lacking experiences and therefore cannot link words to reality is valid, but it's important to remember that human intelligence and AI are fundamentally different. AI doesn't require experiences in the same way humans do to understand and generate language. This doesn't negate their intelligence but rather demonstrates a different form of intelligence.

Finally, it's true that AI models are not yet capable of forming comprehensive models of the world based on their [ChatGPT-4 stopped generating here]

Date: 2023-05-02 05:54 pm (UTC)
calimac: (Default)
From: [personal profile] calimac
But if, as the person referred to in your final link says, the AI's ability is only to come up with what sounds like a plausible response to a statement, what does it say when - as happened to me - the AI came up with a plausible answer to a question but a whole bunch of humans trying to do so failed?

Sometimes I wonder if all humans are capable of forming and extrapolating models from observations, your definition of intelligence. I'm reminded of PKD's theory that many supposed humans are actually androids.

Date: 2023-05-02 06:40 pm (UTC)
channelpenguin: (Default)
From: [personal profile] channelpenguin
[personal profile] calimac: By observation and deliberation over a few decades - I think maybe a substantial proportion of people aren't aware of forming models and/or cannot articulate them - if they even do so at all... Most people aren't trained in logic, and it's a bit random then how well they can use it systematically.

Date: 2023-05-02 09:11 pm (UTC)
agoodwinsmith: (Default)
From: [personal profile] agoodwinsmith
"The ability to form models from observations and extrapolate from them."

I think this needs modification, because as it stands, ChatGPT counts as intelligent. The fact that the extrapolations are not correct is not part of the brief.

For it to know that some of its extrapolations are not correct, it needs judgement, or the ability to evaluate and retry. It also needs to know that correctness is valued. Apparently some success/improvement is happening when a querier asks the ChatGPT to reflect, but it doesn't have the initiative/programming to do so on its own.

So, I think your definition needs an addition such as: "The ability to form models from observations and extrapolate from them; and the ability to evaluate the extrapolations against its models and then modify them towards a closer fit with its models."
Edited (too many "not"s) Date: 2023-05-02 09:13 pm (UTC)

Date: 2023-05-03 02:15 pm (UTC)
From: [personal profile] anna_wing
So is there a necessary connection between intelligence and a consciousness of some sort? Self-awareness?

I like mechanical things. They are so much more robust. I still remember seeing a room full of connected metal pieces, all the same size, which were collectively the levers that moved the Gates-of-Mordor-sized original locks of the Panama Canal, all of them driven originally by a motor that didn't look a lot bigger than a large outboard motor. Plus, anyone wanting to interfere with it would have had to actually break in and go at it physically. And even then, any workshop could have produced replacement pieces at once.

When I worked in New York city more than 20 years ago, some of the old pneumatic tubes downtown designed at the beginning of the 20th century were still operational. I remember seeing them being used in at least one shop that I visited, which must presumably have originally been a post office. I don't know if they survived the 11 Sept 2001 incidents.
