@gurupanguji<p><strong>Trust in the world of AI</strong></p><blockquote><p>We use these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.</p><p><a href="https://www.schneier.com/academic/archives/2025/06/ai-and-trust.html" rel="nofollow noopener" target="_blank">AI and Trust – Schneier on Security</a></p></blockquote><p>While I am still forming my thoughts about what Bruce Schneier is positing here, it’s certainly thought-provoking. I particularly resonated with <em>trust-overloading</em>. Btw, I’ve pulled out some resonant pieces from the article below, but I highly recommend <a href="https://www.schneier.com/academic/archives/2025/06/ai-and-trust.html" rel="nofollow noopener" target="_blank">reading the whole essay</a> before proceeding.</p><blockquote><p>There’s personal and intimate trust. When we say we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. Let’s call this “interpersonal trust.”</p><p>There’s also a less intimate, less personal type of trust. We might not know someone personally or know their motivations, but we can still trust their behavior. This type of trust is more about reliability and predictability. We’ll call this “social trust.” It’s the ability to trust strangers.<br>…<br>What I didn’t appreciate is how different the first two and the last two are. Morals and reputation are person to person, based on human connection. They underpin interpersonal trust. Laws and security technologies are systems that compel us to act trustworthy. They’re the basis for social trust.<br>…<br>But because we use the same word for both, we regularly confuse them. When we do that, we are making a category error. We do it all the time, with governments, with organizations, with systems of all kinds—and especially with corporations.</p></blockquote><p>This buildup was needed for the following to resonate as well as it did:</p><blockquote><p>We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as they can get away with.</p></blockquote><p>Why?</p><p>Schneier posits that our AI future is a move back to older forms of information dissemination:</p><blockquote><p>I actually think that websites will largely disappear in our AI future. Static websites, where organizations make information generally available, are a recent invention—and an anomaly. Before the Internet, if you wanted to know when a restaurant opened, you would call and ask. Now you check the website. In the future, you—or your AI agent—will once again ask the restaurant, the restaurant’s AI, or some intermediary AI. It’ll be conversational: the way it used to be.</p></blockquote><p>After establishing how the environment is already set up for a near-infinite amount of trust in our AI <em>agents</em>, Bruce argues that this calls for a mindful evaluation of how AI can be attacked.</p><blockquote><p>When we think of AI hacks, there are three different levels. First, an adversary is going to want to manipulate the AI’s output (an integrity attack). Failing that, they will want to eavesdrop on it (a confidentiality attack). If that doesn’t work, they will want to disrupt it (an availability attack). 
Note that integrity attacks are the most critical.<br>…<br>Everything we know about cybersecurity applies to AI systems, along with all the additional AI-specific vulnerabilities, such as prompt injection and training-data manipulation.</p></blockquote><blockquote><p>So that’s three work streams to facilitate trust in AI. One: AI security, as we know it traditionally. Two: AI integrity, more broadly defined. And three: AI regulations, to align incentives. We need them all, and we need them all soon. That’s how we can create the social trust that society needs in this new AI era.</p></blockquote><p><a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/ai/" target="_blank">#ai</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/development/" target="_blank">#development</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/economics/" target="_blank">#economics</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/llm/" target="_blank">#llm</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/model/" target="_blank">#model</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/programming/" target="_blank">#programming</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/society/" target="_blank">#society</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/systems/" target="_blank">#systems</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://gurupanguji.com/tag/trust/" target="_blank">#trust</a></p>