andrewducker ([personal profile] andrewducker) wrote 2018-01-21 12:03 pm

Lovelace Oath

[personal profile] danieldwilliam 2018-01-22 09:26 am (UTC)(link)
I am no expert on Artificial Intelligence or deep learning. To be honest, I'm not sure the people I work with are experts on it either (for a value of "expert" that means having literally written the PhD textbook), but they are definitely applying deep learning to problems.

I get the impression from chat around the office that the deep learning tools handling very large data sets can't tell you why they "know" what they know. I think a lack of transparency might be fundamentally linked to how some of these systems work. So I'm not sure it's entirely possible to apply the Lovelace Oath and always use something with the required level of transparency without losing significant amounts of functionality.

Giving up that functionality might be worth it for the transparency. Or it may be possible to back-fill the transparency and achieve the same goals some other way.
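
Something like this toy sketch is roughly what I mean by back-filling: you can't see inside the model, but you can poke at it from outside and measure which inputs it is sensitive to. Everything here is invented for illustration (the model, the function names); it's not any real library's API, just the general perturbation idea.

import numpy as np

rng = np.random.default_rng(0)

def black_box_model(x):
    # Hypothetical opaque predictor: we can query it but not inspect it.
    return np.tanh(3.0 * x[..., 0] - 0.5 * x[..., 1] + 0.1 * x[..., 2])

def perturbation_importance(model, x, n_samples=1000, scale=0.1):
    # Estimate per-feature sensitivity: jiggle one input at a time and
    # see how much the output moves on average.
    base = model(x)
    importances = np.zeros(x.shape[-1])
    for i in range(x.shape[-1]):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, i] += rng.normal(0.0, scale, size=n_samples)
        importances[i] = np.mean(np.abs(model(perturbed) - base))
    return importances

x = np.array([0.2, -1.0, 0.5])
print(perturbation_importance(black_box_model, x))

It doesn't tell you why the model knows what it knows, but it gives you something audit-shaped after the fact.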

Re: Lovelace Oath

[personal profile] danieldwilliam 2018-02-12 12:52 pm (UTC)(link)
The testing might be problematic in terms of the sheer volume required to achieve a given level of understanding and certainty.
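
For a sense of scale, there's a standard statistical rule of thumb (the "rule of three", nothing specific to these systems): after n failure-free trials, the 95% upper confidence bound on the failure rate is about 3/n. The target rate below is made up for illustration.

# Hypothetical target: fewer than one accident per billion hours.
target_rate = 1e-9
hours_needed = 3 / target_rate
print(f"failure-free test hours needed: {hours_needed:.0e}")  # ~3e+09

So demonstrating a very low accident rate by real-world testing alone needs billions of failure-free hours, which is part of why simulation gets so much attention.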

You can test in simulation (e.g. run your driving software through several million hours of simulated real conditions and see if its Desired Outcome for Accidents Arbiter is working as you would like), but you run the risk that your simulations aren't very good.
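
A toy version of that simulation loop, just to make the shape of the argument concrete. The policy, the scenario model, and the numbers are all invented; the caveat above is the point: the measured rate is only as trustworthy as the scenario model generating the hazards.

import random

random.seed(42)

def imperfect_policy(hazard):
    # Stand-in policy: perceives the hazard with noise, then brakes to
    # match it plus a small safety margin.
    perceived = hazard + 0.1 + random.gauss(0.0, 0.1)
    return max(0.0, min(1.0, perceived))

def simulate_scenario(policy):
    # Toy scenario: hazard severity in [0, 1); an "accident" is any case
    # where the braking response falls short of the hazard.
    hazard = random.random()
    return policy(hazard) < hazard

trials = 100_000
accidents = sum(simulate_scenario(imperfect_policy) for _ in range(trials))
print(f"simulated accident rate: {accidents / trials:.4f}")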

A larger risk is that we are trying to create dumb but powerful Gods using 21st Century capitalism.