ChatGPT is literally an adversarial attack on your theory-of-mind abilities
That's an interesting way of looking at it. And it connects directly to the Turing test, of course: is ChatGPT (or maybe its next-but-3 successor, whatever) also an adversarial attack on the Turing test? Is it about to demonstrate that Turing picked the wrong definition?
I think that at present, ChatGPT's most interesting way to fail the Turing test – in the sense of behaving very unlike a human answering the same question – is where it lies (or perhaps, per Harry Frankfurt's distinction, bullshits) with no inhibitions whatsoever and also with no discernible purpose.
Of course humans lie and bullshit in many situations, but generally they'll have some purpose in doing so: you lie to protect a secret that has some consequences to you if it's found out, you bullshit for your own aggrandizement or profit. In both cases, you won't depart from the truth in the first place unless you have a reason to bother – because it requires work, in that you have to take reasonable care to ensure that the untruths you're spouting are not trivially falsifiable. The used-car vendor who goes on about how well-cared-for and reliable the car is doesn't expect that the customer will never find out the truth – but does at least care that the customer doesn't find it out until it's too late. ChatGPT really couldn't give a monkey's.
But then ... wait a minute. Humans do sometimes answer questions with something that's immediately, obviously untrue, or extremely unhelpful. If they don't care a hoot about helping the questioner, and can't see any downside to themselves in being caught in a lie, why not just say any old thing they think is funny, and enjoy the confusion and/or annoyance they cause in their questioners?
ChatGPT is nowhere near passing the Turing test if the test judge is trying to distinguish it from a sensible human trying to behave reasonably. But it might be pretty close if the judge is trying to distinguish it from an Internet troll.