Interesting Links for 20-08-2021
Aug. 20th, 2021 12:00 pm
- AI has the worst superpower… medical racism.
- (tags:AI medicine racism )
- Never underestimate an old person sending you on a quest
- (tags:funny video )
- Canadian Nobel scientist's deletion from Wikipedia points to wider bias
- (tags:patriarchy Wikipedia )
- OnlyFans announces it is bored of making money
- The way that payment providers treat sex work is actually one of the best arguments for a need for cryptocurrency
- (tags:CrowdFunding porn doom fans business banking sexwork OhForFucksSake viaSwampers )
- Operation Choke Point - the US government attempt to cut off banking for services it didn't like
- (tags:USA banking OhForFucksSake sexwork )
- Skyrim foxes will lead players to treasure. But they were never designed to do so!
- (tags:Foxes games design )
- Pfizer vaccine effectiveness declines more quickly than AstraZeneca's (but is still higher)
- (tags:vaccine pandemic doom )
- How MasterCard and Visa make some kinds of work impossible
- (tags:payment sexwork OhForFucksSake usa )
- Shiloh and the Angel's Glow: An explanation.
- (tags:bacteria history light )
Evil AI
Date: 2021-08-20 12:26 pm (UTC)

Re: Evil AI
Date: 2021-08-20 03:41 pm (UTC)

AI has a notorious failure mode where it latches onto something incidental and learns the wrong thing. One famous example is an AI looking for cancerous tumours that learnt to recognise not the cancer itself, but the fact that images of cancer tend to have a measurement strip in the frame more often than images without cancer. That one was fairly easy to spot, because you could look at which bit of the image the AI was weighting in its response and see that it wasn't the bit with the tumour in it (a rough sketch of that kind of check follows at the end of this comment).
Now in this study they've tried all the obvious ways of checking whether the AI has fixated on something unhelpful, and found nothing. That suggests it's a lot of small things contributing to the overall impression the AI is getting, and some of that information is going to be an artefact of the sociological context in which the images were collected and labelled. Unless they've made a very basic mistake, there should be no way the AI can deduce the patient's race, which suggests that racial bias is built into medical imaging itself. That's a strong claim, and I'm sure there'll be a bunch of work challenging it.
Btw, while we currently regard race as not an objective thing, this was by no means true 100 years ago. A lot of the post-WWII scientific work has been unpicking scientific racism, and doing better statistics on confounding variables (mostly, let's face it, poverty and discrimination).
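For anyone curious what "looking at which bit of the image the AI is weighting" means in practice, here is a minimal sketch of one common check, an input-gradient saliency map. The model and image below are placeholders made up for illustration, not anything from the study; a real check would use the trained classifier and the actual scans.

```python
# Hypothetical sketch of an input-gradient saliency check: which pixels
# most influence the classifier's prediction? Model and image are stand-ins.
import torch
import torch.nn as nn

# Stand-in classifier; in practice this would be the trained model under scrutiny.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g. "tumour" vs "no tumour"
)
model.eval()

# Stand-in scan; in practice this would be a preprocessed medical image.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Gradient of the predicted class score with respect to the input pixels.
logits[0, predicted_class].backward()
saliency = image.grad.abs().max(dim=1).values  # (1, 224, 224) heat map

print(saliency.shape, saliency.max().item())
```

If the bright region of the heat map sits over a measurement strip or a corner label rather than the anatomy, that's a good hint the model has learnt a shortcut rather than the pathology.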
Re: Evil AI
Date: 2021-08-20 04:33 pm (UTC)

no subject
Date: 2021-08-20 03:48 pm (UTC)

no subject
Date: 2021-08-20 03:51 pm (UTC)

Thankfully, my readers are terribly smart and will connect the dots without my help!
no subject
Date: 2021-08-20 05:39 pm (UTC)

So why didn't OnlyFans blast the true responsibility from the hilltops? Or did they and nobody paid attention?
no subject
Date: 2021-08-20 06:16 pm (UTC)

Also, looking at this related item at Vice News, I've been introduced to another acronym: "SWERF".