14 Comments
You know, Cannot Name It:

You write about trust in opaque systems as if it were a matter of convenience. But what you’re really pointing to is something else: the willingness to live in a world where decisions are made without explanation, and where the institution is no longer human but the machine itself. That’s what you left unsaid.

Romaric Jannel:

Thank you for your comment and for your time! Yes, you're right. That topic was outside the scope of my explanation. However, there is another, more obvious reason for my silence on it: I hope people don't really want to live in such a world. I expect research on trust and AI to continue, because people want not only reliable AI systems but also responsible ones. This may require clear design and designer accountability...

You know, Cannot Name It:

Thanks for your comment. Yes, developer responsibility is important; nothing works without it. But surely you understand that the hope that "trust research" will resolve the issue is unfounded. The reason lies not in people or in the masses, but in the very direction of AI development. Its goal is not trust but efficiency and scalability. Where there is no risk and no vulnerability, there can be no trust. You can build accountability and set standards, but that will be control, not trust. And that difference is fundamental.

Romaric Jannel:

I would love to read your reply, but unfortunately, I don't read Russian.

You know, Cannot Name It:

You're writing about trust in artificial intelligence — and yet you couldn't be bothered to copy a comment into a translator?

That’s not a language barrier. That’s a trust failure.

If you expect trust from readers, maybe start by respecting what they offer — even if it comes in a language you don’t speak. Machines can handle the translation. You just chose not to.

And that says a lot more about your attitude toward dialogue than it does about my comment.

Romaric Jannel:

I respect your language and culture far more than you seem to think. It is surely far richer than anything I can read in English or French. That is precisely why I asked for an answer in a language I can understand. After all, this is a philosophical issue: even the slightest terminological variation can alter the direction of the conversation, and neither I nor an AI translator might have grasped your argument with exact accuracy.

I discussed trust in AI in this post because we cannot take it or the goodwill of companies for granted.

I'm sorry if you were hurt by my request.

You know, Cannot Name It:

Maybe I was rude, or maybe you simply didn't understand my post; that can always happen with a translator. I hope I didn't inconvenience you with this. Thank you.

You know, Cannot Name It:

No. I wasn't talking about the language or the culture itself. I was talking about the trust between the author and the reader. That is a different register entirely; it has nothing to do with the language barrier.

My GloB:

Explainability and fairness are interrelated, post-event normative (safety) mechanisms that we attempt to impose on the construction of AI and other systems to avoid accidents (harm and lawsuits).

As you also point out, the main import of the machine is that it does the work first and foremost. As such, any preemptive restrictions, however well intended, must perforce limit the optimal development of the machine and therefore be counterproductive.

AI, like all machines, is the replication of what the human can and will do. It is a beefed-up re-creation of what the human already does and would like to do at greater scale to achieve satisfaction at various levels (both mental and material).

In reality, trust only comes into the equation when the machine delivers as expected or better, and only in areas where its use delivers satisfaction.

Truly trusting the technology is, to a large degree, an ideal for which we have no basis, measurement, or answer, especially under the identified conditions of Sacharidis' implicit programming.

Romaric Jannel:

Thank you for your comment. I agree with you about the relationship between trust and technology in general. However, I think it's more complicated when it comes to AI, because AI processes large amounts of information across various layers. The boundaries may shift in the future depending on how AI develops. One significant difference that may persist between AI and living beings is that AI lacks affectivity...

My GloB:

Thanks. Here are a couple of short pieces I've written on the topic of AI and 'affectivity', as you call it:

https://eme1998.substack.com/p/the-poetry-of-ai

https://eme1998.substack.com/p/the-making-our-the-tech-god

Let me know what you think if/when you get a chance.

You know, Cannot Name It:

You reduced trust to satisfaction and utility. But trust is not what a machine gives us when it works — it’s what we hand over to it even when we don’t know how it works.

My GloB:

I probably did not express myself well enough. I reduced trust to utility and satisfaction in/with the machine, not in humans. The machine may only be trusted that far. Humans are a different (more complex) matter.

My position on (human) trust is explained in my comment here: https://substack.com/@myglob/note/c-138437011?r=nhio9
