Interesting Links for 13-09-2018
Sep. 13th, 2018 12:00 pm
- Comicsgate is the latest front in the ongoing culture wars
- (tags: comics fascism OhForFucksSake )
- Middle Earth Map Style
- (tags: maps lotr )
- Labour's Tom Watson 'reversed' type-2 diabetes through diet and exercise
- (tags: diabetes diet health )
- Microsoft intercepting Firefox and Chrome installation on Windows 10
- (tags: microsoft windows browsers OhForFucksSake )
- EU Parliament triggers Article 7 against Hungary
- (tags: Hungary Europe rights )
- Harassing the children of people you don't like is counterproductive to your cause. And a terrible thing to do.
- (tags: children politics OhForFucksSake )
- Language fluency in speech and print
- (tags: language speech accents writing )
- Your reminder that Apple will delete movies you "bought" from your library. And keep your money.
- (tags: Apple movies copyright OhForFucksSake )
- Oregon novelist who wrote 'How to Murder Your Husband' charged with murdering her husband
- (tags: murder writing novel headline )
- This is the kind of behaviour from the Tories that pushes people towards independence
- (tags: Scotland Conservatives OhForFucksSake )
- MEPs vote to ban killer robots on battlefield
- (tags: robots war europe )
- Did the DC movie universe creators set out to make a mess? Because that's what they've produced
- (tags: dc comics movies )
- Ipswich town centre haunted by eerie nursery rhyme (creepiest story I've heard in some time)
- (tags: uk WTF scary )
- A Christmas Carol, a vanishing Italian, and a single lunch. A real life mystery.
- (tags: writing CharlesDickens italy weird )
- "No joke: TV editors cut me out of a panel show – and didn’t even notice"
- (tags: women tv comedy )
- Scientist Publishes A List Of Known Harassers in Academia
- (tags: academia abuse )
- Men Wish Women Would Stop Expecting These Things From Romcoms In Real Life
- (tags: movies men women behaviour relationships )
- EU Trials Tracker — Who's not sharing clinical trial results?
- (tags: Europe research fail )
- Horrible Science Stories #1, a.k.a. How To Kill Several Thousand Women
- (tags: cancer research women breasts OhForFucksSake fraud )
- Juvenile or adult? Leap-year suspect poses conundrum for Australian court
- (tags: time age australia law )
- UK mass surveillance programme violates human rights, European court rules
- (tags: privacy rights uk europe )
Juvenile or adult? Leap-year suspect poses conundrum for Australian court
Date: 2018-09-13 11:33 am (UTC)
I also don't understand how people came to different conclusions. Like, I can definitely see that if you want to designate a day as your birthday, it's ambiguous whether that should be Feb 28th or Mar 1st. But if you're asking "which midnight should you be legally 18 after and 17 before?", I don't see how you can come to any other conclusion than the midnight at the end of the 28th.
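For concreteness, here's a minimal Python sketch (using an illustrative birth date, not anything from the actual case) of the two candidate dates a 29 Feb birthday could map to in a non-leap year:

```python
from datetime import date

# Illustrative only: someone born on 29 Feb 2000, turning 18 in a non-leap year.
born = date(2000, 2, 29)

feb_28 = date(born.year + 18, 2, 28)   # the "end of the 28th" reading
mar_1 = date(born.year + 18, 3, 1)     # the "anniversary rolls forward" reading

print(feb_28, (feb_28 - born).days)  # 2018-02-28, 6574 days after the birth date
print(mar_1, (mar_1 - born).days)    # 2018-03-01, 6575 days after the birth date
```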
Did the DC movie universe creators set out to make a mess? Because that's what they've produced
Date: 2018-09-13 11:40 am (UTC)
It's like they were operating on a lag. Nolan's Batman, especially the middle film, had a vision of the character (basically 'gritty') that fit the character well. The DCEU was like, "OK, that was gritty, let's make a gritty Superman film". Which (I hear?) was not great, partly because gritty is a bad fit, partly because the protagonist had no agency.
Then the MCU did well with lots of witty banter, and they put some of that in Wonder Woman and Justice League, but it was a bit late.
They have a good model -- the DC *animated* universe is really great. Can't they basically do that with the actors they have (all the actors were pretty good, actually)? But apparently not.
RomComs
Date: 2018-09-13 11:52 am (UTC)

no subject
Date: 2018-09-13 03:42 pm (UTC)

no subject
Date: 2018-09-13 03:44 pm (UTC)

Autonomous Killer Robots
Date: 2018-09-14 09:57 am (UTC)
The wisdom: other countries are going to use autonomous systems in the kill chain. This includes the actual tactical decisions to kill a human being. The reason they are researching those systems is that they offer both tactical and strategic advantages. Autonomous systems are often better, faster and cheaper than human-in-the-loop systems. This offers a tactical advantage, and that tactical advantage, multiplied up by the ability to mass-produce systems, gives a strategic advantage.
I worry that by unilaterally preventing ourselves from using autonomous systems to kill people we put ourselves at a significant disadvantage. I fear we may be doing so for no practical improvement in the quality of decision making, the practical levels of human oversight, or the ethical position.
It is the case that we can use autonomous systems to disrupt and defend ourselves against autonomous systems using lethal force against us. I also appreciate that the argument that other countries will do this so we should too is pretty similar to the MAD doctrine used to justify nuclear deterrence.
The ethics of banning autonomous systems seem suspect to me. I think a decision about whether to kill a human being might be better made by an algorithm designed and tested a long way from the battlefield in time and space rather than made by tired, stressed, emotionally distraught soldiers. We have plenty of examples of soldiers killing prisoners, killing non-combatants in error, killing non-combatants negligently and killing non-combatants deliberately.
I think we also need to be aware of where the effective decision is being made. If you decide to send combat troops into contact with a potential enemy based on autonomously gathered and analysed intelligence, who, or what, has actually made the decision?
A third ethical consideration is that placing a human in the loop places us at a tactical disadvantage. This has costs for our own troops and our own civilians.
In terms of practicality: it's not clear to me where in the kill chain there is effective human intervention. Mines at sea are a clear example of what I mean. If you put a sophisticated mine in a sea lane, one that is tuned to detonate only when it detects the signature of an aircraft carrier, has the autonomous system made the decision to sink the ship? What about the situation where autonomous systems are used to identify targets, such as guerrillas or irregular infantry, for a strike by an unmanned aerial drone? One automated system identifies a target. A human concurs. A second automated system flies a weapons system that kills the target. How much actual oversight and human intervention is the supposed decision maker actually performing?
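To make the mine example concrete, here's an entirely hypothetical sketch of the sort of trigger logic involved (all names and threshold values are invented). The point is that the effective decision is encoded at design and deployment time, with nobody in the loop at detonation time:

```python
# Hypothetical illustration only: the decision belongs to whoever chose these
# thresholds and laid the mine, not to anyone present when it detonates.

CARRIER_SIGNATURE = {
    "min_acoustic_level_db": 140,      # invented threshold values
    "min_magnetic_anomaly_nt": 500,
    "min_displacement_tonnes": 60_000,
}

def should_detonate(contact):
    """Return True if a detected contact matches the stored carrier signature."""
    return (
        contact["acoustic_level_db"] >= CARRIER_SIGNATURE["min_acoustic_level_db"]
        and contact["magnetic_anomaly_nt"] >= CARRIER_SIGNATURE["min_magnetic_anomaly_nt"]
        and contact["displacement_tonnes"] >= CARRIER_SIGNATURE["min_displacement_tonnes"]
    )

# A small trawler falls below the thresholds; a carrier-sized contact does not.
print(should_detonate({"acoustic_level_db": 120, "magnetic_anomaly_nt": 80,
                       "displacement_tonnes": 2_000}))    # False
print(should_detonate({"acoustic_level_db": 155, "magnetic_anomaly_nt": 700,
                       "displacement_tonnes": 100_000}))  # True
```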
If we know that the police in Maricopa County are racist and trigger-happy, what are we doing when we provide them with a system that autonomously detects potential criminals in a crowd based on subtle behavioural cues?
Practically, in a world where autonomous systems of all types are common, how are we going to distinguish between a vehicle that has a human inside and one that doesn't? Do I use a different decision making tool and system if I think that the fast boat approaching is an unmanned recon boat or a manned special forces raider?
There's a bit to think about here.
Re: Autonomous Killer Robots
Date: 2018-09-15 05:57 pm (UTC)
Our systems perform excellently at detecting perfect product, and at detecting definitely-bad product. We keep humans in the loop to classify the "false calls", the many cases where product does not fall cleanly into one of the two categories.
Autonomous systems will be excellent at detecting which heat-emitting blobs are wearing friendly or enemy uniforms, or which are carrying metal objects shaped like friendly or enemy weapons. However, just like current industrial automated inspection systems, they will overreact and scream "ENEMY" when they see a heat-emitting blob of flesh wearing a green plaid blouse not matching friendly camouflage patterns, or a heat-emitting blob carrying metal gardening tools that do not match friendly weapons templates.
At my workplace, when we have high levels of anxiety about defects, our human inspectors tend to overcall and click "DEFECT" for all marginal issues. In a combat situation, soldiers may also tape down the "FIRE" button to automatically shoot anything not identified as clearly friendly.
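The routing pattern is easy to sketch; here's a minimal illustration (the scores and thresholds are made up, not real production values) of why the marginal middle band is where the humans end up:

```python
def route_inspection(defect_score, reject_above=0.9, pass_below=0.1):
    """Three-way routing: confident calls are automated, marginal ones go to a person.

    defect_score is an assumed 0..1 output from an automated inspection model;
    the thresholds are illustrative only.
    """
    if defect_score >= reject_above:
        return "REJECT"        # definitely-bad product: automation handles it
    if defect_score <= pass_below:
        return "PASS"          # clearly-good product: automation handles it
    return "HUMAN_REVIEW"      # the "false call" band: a person decides

# The green plaid blouse and the gardening tools both land in the marginal band;
# under pressure, reviewers (or a taped-down button) default that band to "DEFECT".
for score in (0.05, 0.5, 0.95):
    print(score, route_inspection(score))
```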
Re: Autonomous Killer Robots
Date: 2018-09-18 10:22 am (UTC)
It's the "FIRE button taped down" aspect of humans that makes me think about where the effective decision is being made. If you have an autonomous target-finding system that identifies likely enemy combatants and then hands the final decision over to human commanders, but we know that the human commanders will not really challenge the identification made by the autonomous system, then I think we have, de facto if not de jure, made the autonomous system responsible for the decision to kill a person. Worse, because we've kidded ourselves that humans are in the loop, we might well not have designed the autonomous system to the quality standards one might want for a system that was making fatal decisions about humans.
There are some situations where the identification of something as an enemy vehicle can be made very accurately. A 15-metre object travelling at Mach 1.5 without an IFF device is either going to be an enemy plane or an enemy missile. What is uncertain is whether it is a manned aircraft or an unmanned vehicle.