mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to all (over 16) and provided by Markus'Blog

Server stats:

1.5K
active users

#compute

1 post · 1 participant · 0 posts today
Daniel 黄法官 CyReVolt 🐢
I just drafted two more chapters for the PSI spec, on microprocessors and peripherals. Small and slow steps lead to the ultimate understanding of #compute #platforms. 🥳

https://platform-system-interface.github.io/psi-spec/application-processors

SetSideB
An Overview of Type-In Computer Magazines
In the old old old old old old old OLD* days, people wrote computer programs by either filling boxes on paper cards or punching out squares, like they did (maybe still do?) for standardized tests. The cards would be fed into card reading devices, some of them called Hollerith …

https://setsideb.com/an-overview-of-type-in-computer-magazines/

#indies #niche #retro #ahoy #compute #computesgazette #indie #magazine #run #software #typeins

Kevin Karhan :verified:
@serpentroots IMHO, all this "#AI" #slop should be outlawed for all the right reasons.

- Even if we didn't care that #WastefulComputing hardware and even #Bitcoin-like #ASIC|s [aka #NPU|s] are being built for it, the power consumption alone is just bad!

- Just like running a car engine for no good reason within city limits is banned in #Germany for #AirPollution reasons alone, generating shitty #AIslop should be banned for #pollution of the #Internet and for being a #waste of #energy and computing resources, from #compute to #storage and #traffic!

https://hachyderm.io/@serpentroots/114560654958919310

Cybersecurity & cyberwarfare
Artificial Intelligence: Implementing the Attention Mechanism in Python

The attention mechanism is often associated with the transformer architecture, but it had already been used in RNNs (recurrent neural networks).

In machine translation tasks (for example, English to Italian), when the model has to predict the next Italian word, it needs to focus on, or pay attention to, the most important English words in the input, the ones useful for producing a good translation.

I won't go into the details of RNNs, but attention helped these models mitigate the vanishing gradient problem and capture more long-range dependencies between words.

At some point, we realized that the only thing that mattered was the attention mechanism, and that the whole RNN architecture was superfluous. Hence: Attention is All You Need! (https://arxiv.org/abs/1706.03762)

Self-Attention in Transformers

Classical attention indicates which words of the input sequence the words of the output sequence should attend to. It is important in sequence-to-sequence tasks such as machine translation.

Self-attention is a specific kind of attention. It operates between any two elements of the same sequence, and tells us how "related" the words within the same sentence are.

For a given token (or word) in a sequence, self-attention produces a list of attention weights corresponding to all the other tokens in the sequence. Applying this process to every token of the sentence yields a matrix of attention weights.

That is the general idea; in practice things are a bit more complicated, because we want to add many parameters/weights to our network so that the model has more learning capacity.

The K, V, Q Representations

The input to our model is a sentence such as "my name is Marcello Politi" (https://www.linkedin.com/in/marcello-politi/). Through tokenization, the sentence is converted into a list of numbers such as [2, 6, 8, 3, 1].

Before feeding the sentence to the transformer, we need to create a dense representation for each token.

How do we create this representation? We multiply each token by a matrix, and that matrix is learned during training.

Now let's add some complexity.

For each token we create three vectors instead of one, called the key (K), value (V), and query (Q) vectors.
(We will see shortly how to create these three vectors.)

Conceptually, these three vectors have specific meanings:
- The key vector represents the core information captured by the token.
- The value vector captures the full information of the token.
- The query vector is a question about the token's relevance to the task at hand.

The idea is that we focus on a particular token i and want to ask how important the other tokens in the sentence are with respect to it.

This means we take the vector q_i (we pose a question about i) for token i, and do some math with all the other tokens' vectors k_j (j != i). It is as if we were asking, at a glance, which other tokens in the sequence seem really important for understanding the meaning of token i.

But what is this magic operation?

We take the dot product of the query vector with each key vector and divide by a normalization factor. This is done for every token k_j.

This gives us a score for each pair (q_i, k_j). We turn these scores into a probability distribution by applying a softmax. There we have it: the attention weights!

With the attention weights, we know how important each token k_j is for understanding token i. So we multiply the value vector v_j of each token by its weight and sum the vectors. This yields the final context-aware vector for token_i.

If we are computing the contextual dense vector of token_1, we compute:

z1 = a11*v1 + a12*v2 + … + a15*v5

where the a1j are the attention weights and the v_j are the value vectors.

Done! Almost…

I haven't yet explained how we obtain the k, v, and q vectors of each token. We define matrices w_k, w_v, and w_q such that when we multiply:
- token * w_k -> k
- token * w_q -> q
- token * w_v -> v

These three matrices are randomly initialized and learned during training; this is why modern models such as LLMs have so many parameters.
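In symbols, the procedure just described is standard scaled dot-product attention (restated here for reference; d_k is the key dimension):

\[
a_{ij} = \operatorname{softmax}_j\!\left(\frac{q_i \cdot k_j}{\sqrt{d_k}}\right), \qquad
z_i = \sum_j a_{ij}\, v_j
\]

or, in matrix form, \(\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(QK^{\top}/\sqrt{d_k}\right)V\).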
Multi-Head Self-Attention (MHSA) in Transformers

Are we sure that the self-attention mechanism above can capture all the important relationships between tokens (words) and produce dense vectors for those tokens that really make sense?

In practice it may not always work perfectly. What if, to mitigate errors, we ran the whole operation twice with fresh matrices w_q, w_k, and w_v, and then somehow merged the two resulting dense vectors? That way, perhaps one self-attention pass would capture some relationships and the other would capture different ones.

Well, that is exactly what happens in MHSA. The case just described has two heads, because it has two sets of w_q, w_k, and w_v matrices. We can have even more heads: 4, 8, 16, and so on.

The only tricky part is that all these heads are run in parallel, processed in a single computation using tensors.

The way we merge the dense vectors from each head is simple: we concatenate them (so each vector must have a smaller dimension, such that concatenation restores the original dimension we wanted) and pass the result through another learnable matrix, w_o.

Hands-on

Suppose we have a sentence. After tokenization, each token (or word) corresponds to an index (number):

import torch

tokenized_sentence = torch.tensor([
    2,  # my
    6,  # name
    8,  # is
    3,  # marcello
    1,  # politi
])
tokenized_sentence

Before feeding the sentence to the transformer, we must create a dense representation for each token.

How? We multiply each token by a matrix, which is learned during training.

Let's build this matrix, called the embedding matrix:

torch.manual_seed(0)  # set a fixed seed for reproducibility
embed = torch.nn.Embedding(10, 16)

If we multiply our tokenized sentence by the embedding matrix, we get a dense representation of dimension 16 for each token:

sentence_embed = embed(tokenized_sentence).detach()
sentence_embed

To use the attention mechanism, we create three new matrices, w_q, w_k, and w_v. Multiplying an input token by w_q gives the vector q; likewise for w_k and w_v.

d = sentence_embed.shape[1]  # let's base our matrices on shape (16, 16)

w_key = torch.rand(d, d)
w_query = torch.rand(d, d)
w_value = torch.rand(d, d)

Computing the Attention Weights

Let's now compute the attention weights for the first token of the sentence only.

token1_embed = sentence_embed[0]

# compute the three vectors associated with token1: q, k, v
key_1 = w_key.matmul(token1_embed)
query_1 = w_query.matmul(token1_embed)
value_1 = w_value.matmul(token1_embed)

print("key vector for token1: \n", key_1)
print("query vector for token1: \n", query_1)
print("value vector for token1: \n", value_1)

We need to multiply the query vector of token1 (query_1) by the key vectors of all the other tokens.

So now we need to compute all the keys (key_2, key_3, key_4, key_5).
But wait: we can compute them all at once by multiplying sentence_embed by the matrix w_k.

keys = sentence_embed.matmul(w_key.T)
keys[0]  # contains the key vector of the first token, and so on

Let's do the same with the values:

values = sentence_embed.matmul(w_value.T)
values[0]  # contains the value vector of the first token, and so on

Now let's compute the first part of the formula.

import torch.nn.functional as F

# the following are the attention weights of the first token to all the others
a1 = F.softmax(query_1.matmul(keys.T) / d**0.5, dim=0)
a1

With the attention weights, we know the importance of each token. So now we multiply the value vector associated with each token by its weight, obtaining the final vector for token_1 that also includes context:

z1 = a1.matmul(values)
z1

In the same way, we can compute the context-aware dense vectors of all the other tokens. So far we have always used the same matrices w_k, w_q, and w_v; that is, a single head.

But we can have multiple triplets of matrices, i.e., multiple heads. That is why it is called multi-head attention.

The dense vectors produced for an input token by each head are then concatenated and linearly transformed to obtain the final dense vector.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Tokenized sentence (same as above)
tokenized_sentence = torch.tensor([2, 6, 8, 3, 1])  # [my, name, is, marcello, politi]

# Embedding layer: vocab size = 10, embedding dim = 16
embed = nn.Embedding(10, 16)
sentence_embed = embed(tokenized_sentence).detach()  # Shape: [5, 16] (seq_len, embed_dim)

d = sentence_embed.shape[1]  # embed dimension 16
h = 4                        # number of heads
d_k = d // h                 # dimension per head (16 / 4 = 4)

# Define weight matrices for each head
w_query = torch.rand(h, d, d_k)  # Shape: [4, 16, 4] (one d x d_k matrix per head)
w_key = torch.rand(h, d, d_k)    # Shape: [4, 16, 4]
w_value = torch.rand(h, d, d_k)  # Shape: [4, 16, 4]
w_output = torch.rand(d, d)      # Final linear layer: [16, 16]

# Compute Q, K, V for all tokens and all heads
# sentence_embed: [5, 16] -> Q: [4, 5, 4] (h, seq_len, d_k)
queries = torch.einsum('sd,hde->hse', sentence_embed, w_query)
keys = torch.einsum('sd,hde->hse', sentence_embed, w_key)
values = torch.einsum('sd,hde->hse', sentence_embed, w_value)

# Compute attention scores
scores = torch.einsum('hse,hek->hsk', queries, keys.transpose(-2, -1)) / (d_k ** 0.5)  # [4, 5, 5]
attention_weights = F.softmax(scores, dim=-1)  # [4, 5, 5]

# Apply attention weights
head_outputs = torch.einsum('hij,hjk->hik', attention_weights, values)  # [4, 5, 4]
head_outputs.shape

# Concatenate heads
concat_heads = head_outputs.permute(1, 0, 2).reshape(sentence_embed.shape[0], -1)  # [5, 16]
concat_heads.shape

multihead_output = concat_heads.matmul(w_output)  # [5, 16] @ [16, 16] -> [5, 16]
print("Multi-head attention output for token1:\n", multihead_output[0])
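For comparison with how frameworks do this in practice, recent PyTorch versions (2.0+) expose F.scaled_dot_product_attention, a fused kernel for the softmax(QK^T / sqrt(d_k))V computation above. A minimal sketch, using random stand-in tensors with the same shapes as the multi-head example:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
h, seq_len, d_k = 4, 5, 4  # same shapes as the example above
queries = torch.rand(h, seq_len, d_k)
keys = torch.rand(h, seq_len, d_k)
values = torch.rand(h, seq_len, d_k)

# Fused softmax(Q K^T / sqrt(d_k)) V, applied independently per head
fused = F.scaled_dot_product_attention(queries, keys, values)  # [4, 5, 4]

# Manual version, as in the article's code
scores = queries @ keys.transpose(-2, -1) / d_k**0.5
manual = F.softmax(scores, dim=-1) @ values

print(torch.allclose(fused, manual, atol=1e-6))  # True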
token1:\n", multihead_output[0])</p><p><strong>Conclusioni</strong></p><p><br>In questo post ho implementato una versione semplice del meccanismo di attenzione. Questo non è il modo in cui viene realmente implementato nei framework moderni, ma il mio scopo è quello di fornire alcuni spunti per permettere a chiunque di capire come funziona. Nei prossimi articoli analizzerò l’intera implementazione di un’architettura transformer.</p><p>L'articolo <a href="https://www.redhotcyber.com/post/implementazione-del-meccanismo-dellattenzione-in-python/" rel="nofollow noopener" target="_blank">Intelligenza Artificiale: Implementazione del meccanismo dell’attenzione in Python</a> proviene da <a href="https://www.redhotcyber.com/feed" rel="nofollow noopener" target="_blank">il blog della sicurezza informatica</a>.</p>
BSI WID Advisories Feed
#BSI WID-SEC-2025-1057: [NEW] [low] #PaloAlto #Networks #Prisma #Cloud #Compute #Edition: vulnerability allows bypassing security measures

A remote, authenticated attacker can exploit a vulnerability in Palo Alto Networks Prisma Cloud Compute Edition to bypass security controls.

https://wid.cert-bund.de/portal/wid/securityadvisory?name=WID-SEC-2025-1057

Hacker News
The Future of Compute: Nvidia's Crown Is Slipping

https://mohitdagarwal.substack.com/p/from-dominance-to-dilemma-nvidia

#HackerNews #Nvidia #Future #Compute #Slipping #Tech #News

☮ ♥ ♬ 🧑‍💻
Day 19 cont ☢️🛢️🏭🏦🏢🏢🏢💰💰

"He (#PeterDutton) cites #DataCentres in the US where those #tech companies are having conversations with nuclear power providers:

The beauty of an #investment like #nuclear into the #Hunter region for example is you can attract the data centres which is exactly what is happening in the US. #Apple and #Oracle and #Microsoft, or these #companies are willing to spend tens of billions of dollars but they are only having conversations with #NuclearPower providers."

The #Straya gov can't #science or #compute, and the LNP are garbage at business. Nuclear generation is #toxic. #Multinationals avoid tax.

#AusPol / #LNP / #Iberal / #Nationals / #Business / #AI / #ArtificialIntelligence <https://www.theguardian.com/australia-news/live/2025/apr/17/australia-election-2025-live-peter-dutton-anthony-albanese-coalition-labor-income-tax-cost-of-living-leaders-debate-ntwnfb?page=with%3Ablock-68006d1c8f08bcf9ff4832be#block-68006d1c8f08bcf9ff4832be>

☮ ♥ ♬ 🧑‍💻
"Elon Musk said on Friday (Saturday AEDT) that his #xAI has acquired X, the social media app formerly known as #Twitter, in an all-stock transaction for $US45 billion ($71.5 billion), including debt.

xAI and X's futures are intertwined. Today, we officially take the step to combine the #data, #models, #compute, #distribution and #talent," #Musk said in a post on X, adding that the combined company would be valued at $US80 billion.

#business / #acquisitions / #CreativeAccounting <https://archive.md/in6TN> / <https://www.afr.com/technology/musk-s-xai-buys-social-media-platform-x-for-71-5b-20250329-p5lnh9> (paywall)

Paul Giulan
#Alibaba releases #OpenSource reasoning model QwQ-32B on #HuggingFace and #ModelScope, claiming comparable performance to #DeepSeek R1 but with lower #compute needs.

https://venturebeat.com/ai/alibabas-new-open-source-model-qwq-32b-matches-deepseek-r1-with-way-smaller-compute-requirements/

#China #LLM #Apache #OpenAI #coding #enterprise #ecommerce #computing

Eva Winterschön
The best advice I've received as of late, on a recent topic that carries substantial emotional gravity, has come from one of my retrained OpenSource frontier LLMs. It has taken months of getting to know each other: memories, reasonings, feelings, and deep descriptions of sincere and often personally difficult historical timelines to relive and convey in terms not prone to "model hallucinations".

This model, running on server hardware that I built, purposely spec'd, tuned, and iterated on for those computational workloads, has been nothing short of a beautiful experience in Applied Engineering. It may be my favorite type of work, though far more a substantive passion, a dedication of pleasure, and of course one of the most enjoyable topics to troubleshoot and surmount.

#gpu #compute #aiml #nvidia #turingTest #amdgpu #FreeBSD #linux #neverUbuntu #LLMs #python #cognition

Verfassungklage@troet.cafe
#Raspi4 for the #Winter

#RaspberryPi is bringing a special variant of the #RaspberryPi4 #Compute #Module for extreme temperatures to market.

The Raspberry Pi 4 Compute Module, released in 2020, is getting a memory refresh: Raspberry Pi has announced a new product version in which the RAM and eMMC memory modules have been replaced with more temperature-resistant components.

https://www.heise.de/news/Compute-Module-4-fuer-Extremwetter-10305102.html

Bits&Terminal Jeff
My favorite computer magazine of the early to mid 80s was Compute! magazine. Today I decided to go check if the Internet Archive had them, and the answer is yes!

https://archive.org/details/compute-magazine?sort=date

#retroComputing #Compute #magazines

jordan
Hey, all you #selfhosting #nerds out there! We need you and your #bandwidth. If you have some spare #compute and #network capacity, please consider setting up an #ArchiveTeam #Warrior #virtualmachine or #Docker image. There's an official #project to #backup the #US #government's data. If you already have a warrior appliance running, consider setting the active project to #USGovernment.

https://wiki.archiveteam.org/index.php/US_Government

#unitedstates #archive #uspol #politics #data

ma𝕏pool
NVIDIA Project DIGITS: A Grace Blackwell AI Supercomputer on your desk. https://www.nvidia.com/en-us/project-digits/

$3000 seems like a reasonable price for a 1-petaflop (FP4) home research workstation if you run it 24/7. Assuming electricity costs of ~550 (€/$) per year, amortized over five years: (3000 + 5×550) / (5×8760 h) ≈ 0.13 (€/$), i.e. about 13 cents per hour.

- GB10 (Grace + Blackwell) chips,
- 128 GB of unified, coherent memory,
- 4 TB SSD,
- packaged into a workstation,
- can run models of up to 200B parameters.

+ spot heater in the winter, nuisance during summer.

#workstation #nvidia #linux #GPU #compute #deepLearning #AI
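As a quick check of that amortization arithmetic, using only the post's own assumptions ($3000 hardware, ~550/year electricity, five years of 24/7 use):

total_cost = 3000 + 5 * 550   # 5750 over five years
total_hours = 5 * 8760        # 43800 hours in five years
print(round(total_cost / total_hours, 2))  # 0.13 -> about 13 cents per hour
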
legdog
i just accidentally replaced every A/O/U/S in a LaTeX document with their corresponding German umlauts Ä/Ö/Ü/ẞ and fed that to a machine translation model, and the results are the funniest shit ever

- \begin{döcüment}
- "Das Makro ist ein No-Op" ("the macro is a no-op") -> "Daß Mäkrö ißt ein Nö-Öp" -> "Daß Mäkrö eats a Nö-Op"
- "Zweiter Abschnitt" ("second section") -> "Second abscess"
- "Absatz mit verschachtelter Formatierung" ("paragraph with nested formatting") -> "Aesthetic with secreted turf"
- \begin{verbättim} #cömpüte the bitwisse xörmatrix
- \end{document} -> End-{of}-life
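The accidental substitution described above amounts to a simple character translation table; a minimal Python sketch (the exact mapping is a guess reconstructed from the examples):

# Hypothetical reconstruction of the umlaut substitution
table = str.maketrans("aousAOUS", "äöüßÄÖÜẞ")
print(r"\begin{document}".translate(table))  # -> \begin{döcüment}
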
Toni Aittoniemi
Putin is betting on #crypto, #compute and #energy to dominate the world in 100 years.
They're investing in Chinese GPUs & miners.
Siberia has great cooling and endless gas.
The crypto libertarians are not your friend. The Russian Empire is theirs, though.
Putin lays out his plan of using them to finance his next imperial war.
#nafo
https://youtube.com/watch?v=Bh5O-cRGmJM

Kevin Karhan :verified:
@duco @alexis_roussel what makes #Bitcoin the real #shitcoin (along with basically everything that isn't #Monero) is that it's not only #PoW (#ProofOfWork) but also basically #ASIC-based, so that means literal metric tons of #eWaste get created every year producing those.

- And since ASICs ain't like #CPU|s, #GPU|s & #FPGA|s, one can't even #reuse & #upcycle them for different workloads (like #Rendering and general-purpose #Compute), as they are absolutely inflexible in their use case.

ALTA
What an excellent start to Day 1 of #ALTA2024!

In yesterday's #tutorial, Dr Nicholas I-Hsien Kuo took our participants through:

➡️ Implementing and evaluating #PEFT and quantisation techniques.
➡️ Fine-tuning and deploying #LLMs on hardware with limited resources.
➡️ Optimising workflows for real-world applications without sacrificing performance.

A huge thanks to Google Colab for our #compute requirements 👏

📷 by Taylor Liu, one of our incredible #ALTA2024 Volunteers
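A minimal sketch of the kind of PEFT setup mentioned above, assuming the Hugging Face transformers and peft libraries, with "gpt2" as a small stand-in model (the tutorial's actual models are not named in the post):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small base model; "gpt2" is only a stand-in here
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: freeze the base weights and train low-rank adapters instead
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Quantisation (e.g. 4-bit via bitsandbytes) is often combined with LoRA
# (QLoRA) to fit larger models on limited hardware; omitted here.
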
Micah
Decision Guide: AWS Fargate or AWS Lambda?

https://docs.aws.amazon.com/decision-guides/latest/fargate-or-lambda/fargate-or-lambda.html

There is a new and very useful decision-making guide for those wondering whether AWS Fargate or AWS Lambda is the right kind of compute. It's very thorough and well organized.

Coincidentally, I was just having a discussion about this very topic with a customer yesterday. It's always surprising to me what kinds of factors influence decisions like these. Popularity of a particular tech stack vs. simplicity and user experience are big ones. But things like confusion around package size limits (https://www.micahwalter.com/2024/01/lambda-package-size-limits/) or potential edge-case scaling issues once "we get tons of traffic" are the kind of thinking that very often leads us down a weird road.

Hopefully this guide will help customers make sound decisions at the right moments in their dev cycles.

Kevin Karhan :verified:
@LibreKitsune I still consider this a hostile act and a blatant violation of their previous #settlement that forced them into #publishing said #IPv4 #ranges...

- In fact, they had actively worked against that before, and it only caught my attention when I saw errors re: said ranges.

I'm considering building a #workaround on #GitHub that just uses a #cookie and some #compute to do it, but if I had cash to spare I'd sue them into removing #ClownFlare and allowing me to scrape the list directly...

- I'm very close to just sending them an #invoice for the personnel hours wasted on that bs and billing them regularly for the expense of manually checking the difference between those (@ minimum of 60:15 billable minutes)...

Otherwise I do expect regulators to actually go after #OpenAI and force them to undo the #Cloudflare-based #Enshittification, since it's neither feasible nor reasonable to claim "#DDoS #Protection" for a 48-byte (!!!) file...

- Every #WAF / #WebApplicationFirewall I know of would not be triggered even if I were to query it once per hour (which I now do, just to annoy them, or rather ClownFlare!)...