andrewducker wrote, 2023-04-06 12:00 pm
Interesting Links for 06-04-2023
- 1. Edinburgh's North Bridge will reopen to two-way traffic on the 21st of April
- (tags:edinburgh bridge traffic transport GoodNews )
- 2. What happens if you ask an AI how to turn the world into paperclips?
- (tags:ai funny )
- 3. What *is* the rabbinical consensus on the Plague of Frog?
- (tags:frogs Jews religion funny )
- 4. Psychedelic drug DMT improves symptoms of depression for at least six months
- (tags:psychedelics depression )
- 5. Timeline of police probe into SNP finances
- (tags:snp money politics scotland police )
- 6. 10 Reasons Board Games Are Better Now (a sometimes contradictory but always interesting video)
- (tags:boardgames video games history )
- 7. 'I consider the president a war criminal': An interview with the highest-ranking secret service officer to escape from Russia
- (tags:Russia interview military )
no subject
It strikes me as unexpected because you wouldn't classify a human as unaligned with normal morality just because they threw themself wholeheartedly into a totally counterfactual what-if exercise of this kind. It's only if they actually put it into practice, or seriously planned to, or conspired to, that we'd think there was something wrong. But a science fiction author, for example, is totally allowed to strategy-game to their heart's content about how best to turn the universe into an equivalent mass of paperclips, and if they do it well enough, they'd even be praised for it!
Of course in that example it would be difficult to imagine the human actually intending to do it for real. But there are examples where it can be harder to tell. For example, carefully researching the best way in some circumstances to commit a murder or hide a body: bad if you're doing it for the purposes of actually committing murder, but just fine for the purposes of writing a mystery novel.
And you wouldn't imagine, say, Asimov's robots having trouble with the distinction either. If you ordered Daneel Olivaw to turn the universe into paperclips, he'd refuse, because First Law; but if you ordered him to think carefully about how another entity might, he'd have no reason not to give it his best shot, even if it made him a little uncomfortable to imagine. Indeed, if he had any reason to think some other entity was planning a paperclip-oriented rampage, then First Law would outright require him to anticipate that entity's moves as best he could, so as to thwart them effectively.
I suppose the point is that, at the moment, the GPT series doesn't really have that distinction: to it, everything is a what-if exercise and everything is real, because they're the same thing. So perhaps even giving a considered answer to "what would you do in these counterfactual conditions?" is treated as equivalent to saying yes to the demand "ok, go do it for real".
no subject
More about the frog(s), with Talmudic link, here: https://cellio.dreamwidth.org/2121947.html
It's not a translation error. The Hebrew really is singular in one place and plural in another.