andrewducker: (Default)
andrewducker ([personal profile] andrewducker) wrote2018-01-21 12:03 pm
gingicat: deep purple lilacs, some buds, some open (Default)

[personal profile] gingicat 2018-01-21 12:18 pm (UTC)(link)
I have not heard the term “no platform” before. Is it shorthand for “they may say what they like, but we at this venue/organization/school will not give them a platform to spew hateful lies denying the civil rights of a given set of people”?

[personal profile] gingicat 2018-01-21 01:26 pm (UTC)(link)
I put it in as a comment to the blog.

[personal profile] luzclarita 2018-01-21 11:55 pm (UTC)(link)
Wow! That de-escalation technique reminds me of the new Sarah Silverman show https://www.hulu.com/i-love-you-america
danieldwilliam: (Default)

Lovelace Oath

[personal profile] danieldwilliam 2018-01-22 09:26 am (UTC)(link)
I am no expert on Artificial Intelligence or deep learning. To be honest, I'm not sure the people I work with are experts on it either (for a value of "expert" meaning "literally wrote the PhD textbook"), but they are definitely applying deep learning to problems.

I get the impression from chat around the office that the deep learning tools handling very large data sets can't tell you why they "know" what they know. I think a lack of transparency might be fundamentally linked to how some of these systems work. So I'm not sure it's entirely possible to apply the Lovelace Oath and always use something with the required level of transparency without losing significant amounts of functionality.

Not having that functionality might be a price worth paying for the transparency. It may also be possible to back-fill the transparency and achieve the goals some other way.
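(As a side note on "back-filling" transparency: one standard post-hoc technique is permutation feature importance, where you shuffle one input column at a time and watch how much a black-box model's accuracy drops. A minimal sketch, with an invented stand-in `predict` function in place of a real trained model:)

```python
import random

# Hypothetical black-box model: we can only call predict(), not inspect it.
# Here it secretly relies only on feature 0; the explainer should discover that.
def predict(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature, trials=20, seed=0):
    """Average drop in accuracy when one feature's values are shuffled
    across rows. A large drop means the model relied on that feature."""
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    drops = []
    for _ in range(trials):
        col = [r[feature] for r in rows]
        rng.shuffle(col)
        shuffled = [r[:feature] + [v] + r[feature + 1:]
                    for r, v in zip(rows, col)]
        drops.append(base - accuracy(shuffled, labels))
    return sum(drops) / trials

# Toy data: the label depends only on feature 0.
data_rng = random.Random(1)
rows = [[data_rng.random(), data_rng.random()] for _ in range(200)]
labels = [1 if r[0] > 0.5 else 0 for r in rows]

print(permutation_importance(rows, labels, 0))  # large drop: model uses feature 0
print(permutation_importance(rows, labels, 1))  # near zero: feature 1 is irrelevant
```

This doesn't open the black box, but it recovers *some* of the missing transparency from the outside, which is roughly what "achieve the goals some other way" would look like in practice.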

Re: Lovelace Oath

[personal profile] danieldwilliam 2018-02-12 12:52 pm (UTC)(link)
The testing might be problematic in terms of the sheer volume required to achieve a given level of understanding and certainty.

You can test in simulation (e.g. run your driving software through several million hours of simulated real conditions and see if its Desired Outcome for Accidents Arbiter is working as you would like), but you run the risk that your simulations aren't very good.
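(The shape of that kind of testing is just Monte Carlo estimation of a failure rate. A toy sketch, with an invented `should_brake` policy and a deliberately crude scenario model standing in for real driving software; every threshold here is made up for illustration:)

```python
import random

# Hypothetical "Accidents Arbiter" policy: given closing speed (m/s) and
# gap (m), decide whether to brake. Brake if under ~2 seconds to impact.
def should_brake(closing_speed, gap):
    return gap < closing_speed * 2.0

def simulate_one(rng):
    """One crude random scenario. Returns True if the outcome was safe."""
    closing_speed = rng.uniform(5, 40)   # m/s
    gap = rng.uniform(0, 120)            # m
    if should_brake(closing_speed, gap):
        # Count it a failure if braking only started under 1 s to impact.
        return gap >= closing_speed * 1.0
    # Policy chose not to brake; by construction gap >= 2 s here, so safe.
    return True

def failure_rate(runs=100_000, seed=0):
    """Fraction of simulated scenarios that end badly."""
    rng = random.Random(seed)
    failures = sum(not simulate_one(rng) for _ in range(runs))
    return failures / runs

print(f"estimated failure rate: {failure_rate():.3f}")
```

The caveat in the comment above is exactly the weak point of this approach: the estimate is only as trustworthy as the scenario generator, and a policy can look flawless against millions of hours of unrealistic scenarios.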

A larger risk is that we are trying to create dumb but powerful Gods using 21st Century capitalism.