#llm

311 posts · 245 participants · 1 post today

Yes… This is the ‘type’ of AI that will help us toward a more effective workload, a smarter way to process things… Pattern recognition software will do (is doing) all of these tasks with more efficient and accurate processes. Please note, this ‘kind’ of AI is not your ordinary #LLM #GenAI #trash that is getting so much marketing impetus from the #TechBros who are trying to recoup their investments in #chatbot technology sold as ‘intelligent’, which they are not!

“…knowledge workers — such as lawyers, consultants, or anybody who relies on experience or pattern recognition in their roles — would become more productive with AI than they were without it. This also applies across sectors of the economy that rely, to a large extent, on pattern recognition. So medical sciences, healthcare more broadly…” (Source: ABC News)

#PatternDetection

Read more: abc.net.au/news/2025-09-23/aus

ABC News · Artificial intelligence to dominate Australia's future economy, but who will reap the benefits? · By David Taylor

CHINESE AI: LITTLE MONEY, BIG YIELD

#Deepseek AI, the Chinese take on #intelligenzaartificiale, is trained at a fraction of the costs borne by the Western giants.
In a study published in #Nature, the research team revealed that it fostered the reasoning capabilities of #LLM models with pure reinforcement learning (RL), i.e. a scoring system: a high score for correct answers and a low one for wrong ones.

nature.com/articles/s41586-025
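The scoring idea from the post (high reward for correct answers, low for wrong ones, with the policy nudged toward higher-scoring outputs) can be sketched as a toy REINFORCE loop over a two-answer "bandit". This is an illustration of pure reward-based training, not DeepSeek's actual setup; all names and numbers here are made up for the sketch.

```python
import math
import random

# Toy sketch of reward-only ("pure RL") training: a softmax policy
# over two candidate answers is updated with the REINFORCE rule.
ANSWERS = ["correct", "wrong"]
logits = [0.0, 0.0]      # one policy parameter per candidate answer
LEARNING_RATE = 0.5

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def reward(answer):
    # The scoring system: high for a correct answer, low otherwise.
    return 1.0 if answer == "correct" else -1.0

random.seed(0)
for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(ANSWERS)), weights=probs)[0]
    r = reward(ANSWERS[i])
    # REINFORCE: scale the log-probability gradient of the sampled
    # answer by its reward, so rewarded answers become more likely.
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += LEARNING_RATE * r * grad

final = softmax(logits)  # probability mass after training
```

After a couple hundred updates the policy concentrates almost all probability on the rewarded answer, which is the whole mechanism in miniature: no labelled reasoning traces, only a score.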

@aitech

“The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.”

This bias explains why people believe in #LLM accuracy even though they have seen it fail utterly before.

en.wikipedia.org/wiki/Gell-Man

en.wikipedia.org · Gell-Mann amnesia effect - Wikipedia

Just a thought to put to people who know about this: if an LLM is trained on existing texts, then the answer such a model generates to a question can only ever be based on existing knowledge, or at most on an unverified extrapolation of existing knowledge, right?

Isn't that exactly why you shouldn't use them as a knowledge oracle, but only for textual tasks? (Translating, editing, adjustments of style and reading level.)

I just published an article on how to integrate Ollama and Semantic Kernel with .NET Aspire. Enjoy!

Also, I will be posting more useful AI engineering content on my Substack, so feel free to subscribe if this is something you are interested in.

#dotnet #aspire #ollama #LLM

fiodar.substack.com/p/integrat

Fiodar’s Tech Insights · Integrating Ollama container and Semantic Kernel with .NET Aspire · By Fiodar Sazanavets
Schneier on Security · Time-of-Check Time-of-Use Attacks Against LLMs

This is a nice piece of research: “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents”.

Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e.g., prompt injection) and data-oriented threats (e.g., data exfiltration), time-of-check to time-of-use (TOCTOU) remains largely unexplored in this context. TOCTOU arises when an agent validates external state (e.g., a file or API response) that is later modified before use, enabling practical attacks such as malicious configuration swaps or payload injection.

In this work, we present the first study of TOCTOU vulnerabilities in LLM-enabled agents. We introduce TOCTOU-Bench, a benchmark with 66 realistic user tasks designed to evaluate this class of vulnerabilities. As countermeasures, we adapt detection and mitigation techniques from systems security to this setting and propose prompt rewriting, state integrity monitoring, and tool-fusing. Our study highlights challenges unique to agentic workflows, where we achieve up to 25% detection accuracy using automated detection methods, a 3% decrease in vulnerable plan generation, and a 95% reduction in the attack window. When combining all three approaches, we reduce the TOCTOU vulnerabilities from an executed trajectory from 12% to 8%. Our findings open a new research direction at the intersection of AI safety and systems security...
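The gap the abstract describes — an agent validates a file, other plan steps run, and an attacker swaps the file before it is actually used — fits in a few lines, along with the flavor of the state-integrity-monitoring countermeasure. This is a hedged sketch: the file name, its contents, and the hashing scheme are illustrative, not taken from the paper.

```python
import hashlib
import os
import tempfile

def write(path, text):
    with open(path, "w") as f:
        f.write(text)

def read(path):
    with open(path) as f:
        return f.read()

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

tmp = tempfile.mkdtemp()
config = os.path.join(tmp, "agent_config.txt")

# --- vulnerable pattern: a gap between check and use ---
write(config, "allow: read-only")
checked_ok = "read-only" in read(config)        # time of check

# ...the agent executes other plan steps; in that window an
# attacker swaps in a malicious configuration...
write(config, "allow: delete-everything")

used = read(config)                             # time of use
vulnerable = checked_ok and "delete" in used    # checked state != used state

# --- mitigation sketch: state integrity monitoring ---
write(config, "allow: read-only")
digest_at_check = sha256(config)                # fingerprint at check time

write(config, "allow: delete-everything")       # tampering in the gap

tamper_detected = sha256(config) != digest_at_check  # re-verify before use
```

Re-hashing the state immediately before use does not shrink the attack window to zero, but it turns a silent swap into a detectable integrity failure, which is the systems-security intuition the paper carries over to agent workflows.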