I think having views about AI alignment is like having views about how many angels can dance on the head of a pin. It's not relevant to anything.
The core AI problems of today are:
- plagiarism
- human inability to control the quality of something that looks right if you don't pay attention (see also Tesla Autopilot)
We are not on a path to anything resembling AGI if we think guess-the-next-word generators are "AI".
With respect to AI, we are still too reliant upon humans to do things for any electronic system to threaten or disempower humanity. Yet.
AI can threaten or disempower human societies, yes. AI could theoretically hack all our computer systems, destroy or corrupt all our telecommunications, and help fanatic despots achieve power over political systems. AI could fake evidence, and send orders to police to arrest electricians planning to disconnect it. But we still need a bag of meat to be the politician to be sworn in as the despot, a team of bags of meat to assemble, install, and run the silicon chip fabrication equipment. We still need bags of meat to install and maintain power transmission lines, and to whack power generation equipment with a hammer when it acts up.
(bias note: one thing my factory makes is equipment used when etching wafers to make computer chips. Lots of humans with hand tools are still involved).
2. I am too misaligned with the question to appreciate the design of the questionnaire.
I have several concerns about AI that overwhelm the possibility of the machines going rogue.
I don't trust the organisations who are currently in charge of "AI".
If AI is owned and run for profit, will the accumulation of wealth kill the entire economic system? I fear that it could make the mass of the population too poor to be consumers!
Is the energy consumption going to be a problem? I've read that training one system takes about the same energy as making a car (which is possibly more than the energy the car consumes over its lifetime).
If an AI is as good as, say, an average lawyer, or even a mediocre one, how will anyone get the experience to become a good or great one? Ditto any other skill/trade.
The questions on AI are nonsensical to me. Is it possible to create an AI that will overthrow humanity? Yes, in theory -- but it would not look anything like whatever is marketed as "AI" right now. ChatGPT and its ilk are like random tables to generate a response to a prompt, not STEM researchers.
It makes me very, very sad that so many people now use the term "AI" to mean JUST these LLMs.
That survey might make a LOT more sense if they mean "AI" as I think of the term, which is probably roughly the meaning used from the mid-80s until about five years ago.
Yes. That was half my point. It's obvious to me, and clearly to you also, that the survey is very much NOT using the term "AI" as it's currently colloquially used.
Other people's mileage is clearly varying here, though! (Which was my other half.)
I wonder how many people have ONLY ever heard the term "AI" refer to modern-day chatbot tools... It makes me sad, and I wonder how the term can be reclaimed (because what else do we use?).