Discussion about this post

You know, Cannot Name It

You write about trust in opaque systems as if it were a matter of convenience. But what you’re really pointing to is something else: the willingness to live in a world where decisions are made without explanation, and where the institution is no longer human but the machine itself. That’s what you left unsaid.

My GloB

Explainability and fairness are interrelated, post-event normative (safety) mechanisms that we attempt to impose on the construction of AI and other systems to avoid accidents (harm and lawsuits).

As you also point out, the main import of the machine is that it does the work first and foremost. As such, any preemptive restrictions, however well intended, must perforce limit the optimal development of the machine and therefore be counterproductive.

AI, like all machines, is a replication of what the human can and will do. It is a beefed-up re-creation of what the human already does and would like to do at greater scale, to achieve satisfaction at various levels (both mental and material).

In reality, trust only comes into the equation when the machine delivers as expected or better, and only in areas where its use delivers satisfaction.

Truly trusting the technology is, to a large degree, an ideal for which we have no basis, measurement, or answer, especially under the identified conditions of Sacharidis' implicit programming.
