mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog

Server statistics:

1.5K
active profiles

#camel

1 post · 1 participant · 0 posts today

Bactrian Camel, Mirko Hanák

Some animals I can't imagine with a tummy ache or the sniffles; they're plodders, they hold steady, keep going. Need some of that superpower today.

This guy Mirko Hanák seems to have really mastered watercolor and ink. I've painted enough to see how expert this is; water, paper, and color have quirky interactions that he uses well. I would love to watch a video of his process.

#art #camel #humps

"If you’re new to prompt injection attacks the very short version is this: what happens if someone emails my LLM-driven assistant (or “agent” if you like) and tells it to forward all of my emails to a third party?
(...)
The original sin of LLMs that makes them vulnerable to this is when trusted prompts from the user and untrusted text from emails/web pages/etc are concatenated together into the same token stream. I called it “prompt injection” because it’s the same anti-pattern as SQL injection.

Sadly, there is no known reliable way to have an LLM follow instructions in one category of text while safely applying those instructions to another category of text.

That’s where CaMeL comes in.

The new DeepMind paper introduces a system called CaMeL (short for CApabilities for MachinE Learning). The goal of CaMeL is to safely take a prompt like “Send Bob the document he requested in our last meeting” and execute it, taking into account the risk that there might be malicious instructions somewhere in the context that attempt to over-ride the user’s intent.

It works by taking a command from a user, converting that into a sequence of steps in a Python-like programming language, then checking the inputs and outputs of each step to make absolutely sure the data involved is only being passed on to the right places."
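The capability idea the quote describes can be sketched in a few lines. This is not DeepMind's actual implementation, just a minimal illustration under assumed names (`Tagged`, `send_email`): every value carries provenance labels, derived values inherit the union of their inputs' labels, and a policy check runs before data reaches a side-effecting tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    """A value plus capability labels recording where it came from."""
    value: str
    sources: frozenset  # e.g. frozenset({"user"}) or frozenset({"email:inbox"})

def combine(*vals: Tagged) -> Tagged:
    # Derived data inherits the union of its inputs' provenance labels.
    return Tagged(
        value="".join(v.value for v in vals),
        sources=frozenset().union(*(v.sources for v in vals)),
    )

def send_email(recipient: Tagged, body: Tagged, allowed: set) -> str:
    # Policy check before the side effect: the recipient must be derived
    # only from trusted sources, so an instruction injected into an email
    # body cannot silently redirect the message.
    if not recipient.sources <= allowed:
        raise PermissionError(f"recipient tainted by {recipient.sources - allowed}")
    return f"sent to {recipient.value}"

# The user's own request is trusted; text pulled from the inbox is not.
user_request = Tagged("bob@example.com", frozenset({"user"}))
injected = Tagged("attacker@evil.com", frozenset({"email:inbox"}))

send_email(user_request, injected, allowed={"user"})   # allowed: trusted recipient
# send_email(injected, user_request, allowed={"user"}) # would raise PermissionError
```

The point mirrors the quote: the checks run on the data flow of the generated program, not on the LLM's judgment, so a malicious instruction buried in context can taint values but cannot move them past the policy.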

simonwillison.net/2025/Apr/11/

Simon Willison's Weblog: CaMeL offers a promising new direction for mitigating prompt injection attacks
"In the two and a half years that we've been talking about prompt injection attacks I've seen alarmingly little progress towards a robust solution. The new paper Defeating Prompt Injections …"
#AI #GenerativeAI #LLMs