#queue

This redesign of #poser (for #swad) to offer a "multi-reactor" (with multiple #threads, each running its own event loop) is starting to give me severe headaches.

There is *still* a very rare data #race in the #lockfree #queue. I *think* I can spot it in the pseudocode from the paper I used[1], see screenshot (transcribed below). Have a look at lines E7 and E8. Suppose the thread executing this is suspended after E7 for a "very long time". Now, some dequeue operation in some other thread will eventually dequeue whatever Q->Tail was pointing to, and then free it after consumption. Our poor thread resumes, successfully checks the pointer it already read in E6 against NULL, and then attempts a CAS on tail->next in E9, which unfortunately lives inside an object that doesn't exist any more. If the CAS succeeds because the memory at that location happens to contain zero bytes, we corrupt some random other object that might now reside there. 🤯
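For reference, here is the relevant enqueue pseudocode from [1] as I remember it (so double-check against the paper), with the suspected hazard annotated:

```
enqueue(Q: pointer to queue_t, value: data type)
E1:  node = new_node()           # allocate node for the new value
E2:  node->value = value
E3:  node->next.ptr = NULL
E4:  loop
E5:      tail = Q->Tail          # read Tail.ptr and Tail.count together
E6:      next = tail.ptr->next   # read next ptr and count of last node
E7:      if tail == Q->Tail      # are tail and next still consistent?
         # <-- suspension here: tail.ptr may meanwhile be dequeued and freed
E8:          if next.ptr == NULL # was Tail pointing to the last node?
E9:              if CAS(&tail.ptr->next, next, <node, next.count+1>)
                 # ^-- dereferences tail.ptr, which may no longer exist
E10:                 break
E11:             endif
E12:         else
E13:             CAS(&Q->Tail, tail, <next.ptr, tail.count+1>)
E14:         endif
E15:     endif
E16: endloop
E17: CAS(&Q->Tail, tail, <node, tail.count+1>)
```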

Please tell me whether I have an error in my thinking here. Can it be...? 🤔

Meanwhile, after fixing and improving lots of things, I checked the alternative implementation using #mutexes again, and surprise: although it's still a bit slower, the difference is now very, very small. And it has the clear advantage that it never crashes. 🙈 I'm seriously considering dropping all the lock-free #atomics stuff again and just going with mutexes.
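For comparison, here's a minimal sketch of what the mutex-based variant boils down to (not poser's actual code, just the general shape of such a queue):

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct Node {
    struct Node *next;
    void *data;
} Node;

typedef struct Queue {
    pthread_mutex_t lock;
    Node *head;   /* oldest node, dequeued first */
    Node *tail;   /* newest node */
} Queue;

int queue_push(Queue *q, void *data)
{
    Node *n = malloc(sizeof *n);
    if (!n) return -1;
    n->data = data;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n;
    else q->head = n;
    q->tail = n;
    pthread_mutex_unlock(&q->lock);
    return 0;
}

void *queue_pop(Queue *q)
{
    pthread_mutex_lock(&q->lock);
    Node *n = q->head;
    if (n)
    {
        q->head = n->next;
        if (!q->head) q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    void *data = n ? n->data : NULL;
    free(n);   /* free(NULL) is a no-op */
    return data;
}
```

A node is only ever freed after it has been unlinked under the lock, so no other thread can still reach it; that's exactly the guarantee the lock-free version can't give without some safe memory reclamation scheme (hazard pointers, epochs, or the paper's free list).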

[1] M. M. Michael and M. L. Scott, "Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms", PODC '96: dl.acm.org/doi/10.1145/248052.

Phew! I need to figure out a way to get data from a Raspberry Pi into a database on a server over an unreliable network.
The data is not a stream, but a new data point is generated every 5 minutes.
I'm thinking I need something like a local queue on the Pi; once the connection is back up, the elements from the queue get transferred one by one and inserted into the database.
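A first sketch of such a queue, assuming each data point is written to its own timestamp-named file in a spool directory (the SPOOL path and send_to_db() are placeholders for whatever storage and upload mechanism I end up with):

```c
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SPOOL "/var/spool/datapoints"   /* placeholder spool directory */

/* Placeholder for the actual upload (HTTP POST, DB client, ...);
 * returns 0 on success, -1 while the network is down. */
extern int send_to_db(const char *path);

/* Drain the spool oldest-first: timestamp-named files make alphasort
 * yield FIFO order. Stop at the first failure so nothing is lost and
 * ordering is preserved; retry on the next run. */
void drain_spool(void)
{
    struct dirent **entries;
    int n = scandir(SPOOL, &entries, NULL, alphasort);
    if (n < 0) return;
    int offline = 0;
    for (int i = 0; i < n; ++i)
    {
        const char *name = entries[i]->d_name;
        if (!offline && name[0] != '.')
        {
            char path[PATH_MAX];
            snprintf(path, sizeof path, SPOOL "/%s", name);
            if (send_to_db(path) == 0) unlink(path);  /* delivered */
            else offline = 1;                         /* keep the rest */
        }
        free(entries[i]);
    }
    free(entries);
}
```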
Rabbit hole, here I come!
#BuildInPublic #Internet #database #queue

Got an interesting question today about #Fedify's outgoing #queue design!

Some users noticed we create separate queue messages for each recipient inbox rather than queuing a single message and handling the splitting later. There's a good reason for this approach.

In the #fediverse, server response times vary dramatically—some respond quickly, others slowly, and some might be temporarily down. If we processed deliveries in a single task, the entire batch would be held up by the slowest server in the group.

By creating individual queue items for each recipient:

  • Fast servers get messages delivered promptly
  • Slow servers don't delay delivery to others
  • Failed deliveries can be retried independently
  • Your UI remains responsive while deliveries happen in the background

It's a classic trade-off: we generate more queue messages, but gain better resilience and user experience in return.

This is particularly important in federated networks where server behavior is unpredictable and outside our control. We'd rather optimize for making sure your posts reach their destinations as quickly as possible!
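A hypothetical sketch of that fan-out (names invented for illustration; Fedify itself is TypeScript, but the pattern is language-agnostic):

```c
#include <stdlib.h>
#include <string.h>

/* One queue task per recipient inbox. */
typedef struct DeliveryTask {
    char *inbox;    /* recipient inbox URL */
    int attempts;   /* retries are counted per recipient */
} DeliveryTask;

/* Placeholder for the backing job queue. */
extern void queue_enqueue(DeliveryTask *task);

void fan_out(char *const inboxes[], size_t n)
{
    for (size_t i = 0; i < n; ++i)
    {
        DeliveryTask *t = calloc(1, sizeof *t);
        if (!t) continue;
        t->inbox = strdup(inboxes[i]);
        /* a slow or dead server only stalls its own task;
         * every other delivery proceeds and retries independently */
        queue_enqueue(t);
    }
}
```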

What other aspects of Fedify's design would you like to hear about? Let us know!

@eikedrescher Hi Eike,
I stumbled across your podcast app Queue by chance.

I have a long podcast history on Apple:
- started with Apple Podcasts & an iPod nano
- iOS and Overcast
- Pocketcasts
- then Castro for a long time
- back to Overcast for the past year

I enjoy experimenting with podcast apps, and your new app already looks very sleek and minimal. A good start.