mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog

Server stats:

1.5K
active profiles

#context

Continued thread

Solved! 🥳

This was a pretty "interesting" bug. Remember when I invented a way to implement #async / #await in #C for jobs running on a thread pool? Back then I said it only works when completion of the task resumes execution on the *same* pool thread.

Trying to improve overall performance, I found the complex logic for matching a queued job to a specific pool thread to be a real deal-breaker. Having one single MPMC queue with a single semaphore that all pool threads wait on is a lot more efficient. But then, a job continued after an awaited task will resume on a "random" thread.

In theory it works by making sure to restore the CORRECT context (the pool thread's original one) every time after executing a job, whether it ran partially (up to the next await) or to completion.

Only it didn't, at least here on #FreeBSD, and I finally understood the reason: I was using #TLS (thread-local storage) to find the context to restore.

Well, most architectures store a pointer to the current thread's metadata in a register. #POSIX user #context #switching saves and restores registers. I found a source claiming that the #Linux (#glibc) implementation explicitly does NOT include the register holding the thread pointer. #FreeBSD's implementation evidently DOES include it. POSIX doesn't have to say anything about that.

In short, avoiding TLS accesses when running with a custom context solved the crash. 🤯

Update on // foss.events:

registration has opened and description updated for

ConTeXt meeting by context group, Ryszard Kubiak, on 22-29 August 2025 at the holiday site KREFTA in #Chmielno, #Poland

Find out more on foss.events/2025/08-22-context

Official account: @context
Official hashtag(s): #ConTeXt

#foss #floss #freesoftware

"It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." — Niels Bohr

and

"Progress is measured not by convergence but by proliferation — by the expanding collection of contexts in which we can say something coherent, predictive, and actionable."

Very good article on perspectival realism, without using that term. Highly recommended.

freedium.cfd/https://csferrie.

I had an eye-opening experience working with Gemini yesterday on my book.

I experienced Gemini being able to manage a complex, novel chain of thought, _until_ it had to switch context to the conventional view, which, due to its established popularity, had overwhelmingly more _breadth_ of context in the LLM's training data. At that point its attention mechanism would lose focus on the novel concept and be dominated by the many available associations of the overwhelmingly popular thinking. Repeatedly. When I pointed this out to it, it confirmed my assessment and apologized, saying yes, that is currently its nature as a large language model.

I was testing my chain of thought on perspectival realism (epistemological, not ontological) and functionalism as a more coherent and extensible foundation for metaethics. I did this by engaging Gemini in argument against the classic definition of knowledge as "justified true belief" (JTB), focusing on the weakness of its self-referential use of "true" knowledge to define true knowledge. We went around and around, and as I explained the perspectival/functional viewpoint, it was able to explain it back to me and even compose a comprehensive and compelling argument for its coherence, extensibility, and application to a reality that we always only know not for what it _is_, but for what we perceive it _does_ at the expanding boundary of our environment of interaction. But after establishing that it "understood" the new concept, when I then asked the AI to look for weaknesses in my thinking in contrast with JTB, it would effectively forget the levels of reasoning supporting the novel thinking.

#ai #llm #epistemology

> Top Stories to Watch/See - in the #MSM / #media / #FreeSpeech realm

CBS & PBS
1/2

#Context -
CBS has had its famed *60 Minutes* trophy 'news' show, shifted & shamed:
alternet.org/60-minutes-host-l

As told by long-time reporter Scott Pelley, the executive producer has left, out of conscience and concerns about the apparent bending of principles to help corporate Paramount be seen favorably by a tyrannical, #corrupt and #truth -averse Administration. Kissing the Ministry of #Truth & #Disinformation.

Alternet.org · Watch: 60 Minutes host issues on-air attack on bosses for bending to Trump, by Krystina Alarcon Carroll, Raw Story

Maybe it's not #AI scrapers but AI #agents that are contributing to all the load. I've just asked #Gemini's #deepresearch to do a price-impact analysis of what balanced trade between the US and the rest of the world would look like, and it's going to hit up 40 websites before thinking about the answer, followed by another 40 sites for follow-up research. That is an impressive amount of #context for an #llm to chew on.

Sora News 24: Smash Bros. creator learns he can’t tweet carelessly, fans learn they can’t trust AI translations. “Anxious to know what the Japanese text of [Masahiro] Sakurai’s tweet, ほうほう, means, many took to using automatic online translation tools, which in many cases gave them a translation that raised as many questions as it answered when they spat back ‘method’ as the […]

https://rbfirehose.com/2025/03/29/sora-news-24-smash-bros-creator-learns-he-cant-tweet-carelessly-fans-learn-they-cant-trust-ai-translations/

#ai #aiassisted #context