#tinyml


My old introduction was very outdated, so it's time to reintroduce myself:
#introduction

Hi 👋, I’m Laura.

I am a transfeminine person, somewhat in the middle of my transition. 🏳️‍⚧️ #trans #transbubble

I spend a major part of my time as a Postdoc in computer science, working on embedded AI and low-power IoT communication. #cs #TinyML #IoT #academia #science

Outside of work, I am active in the local #queer center (board member, GER: Vorstand), I enjoy playing board games, and I listen to too many #podcasts.

Super excited to share that I’ll be speaking at PyCon+Web 2025 on January 25th in Berlin! 🎤✨ My talk, "Edge AI with MicroPython and TensorFlow for Microcontroller", is going to dive into the fascinating world where AI meets tiny tech.

It's like fitting an elephant or mammoth into a shoebox! 🐘📦

Can’t wait to meet all the amazing Python and web enthusiasts—let’s make it awesome! 🎉

Details here: pyconweb.com/activity/edge-ai-

www.pyconweb.com · Event: Edge AI with MicroPython and TensorFlow for Microcontroller | TechConf
Imagine having devices that can understand voice commands without relying on the internet! I’ll show how to run machine learning models on a tiny microcontroller, enabling AI-powered features like wake words and handwritten number recognition. Join me and learn more about Edge AI and TinyML.
#PyConWeb #PythonFun #EdgeAI
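As a rough picture of the inference flow the talk describes, here is a minimal TensorFlow Lite sketch using the standard tflite_runtime Python API; the MicroPython/TFLite-Micro ports expose a similar load-and-invoke pattern, but the model file name and the digit-recognition framing below are illustrative assumptions, not the talk's actual code.

```python
# Minimal TensorFlow Lite inference loop (standard Python API).
# "digits.tflite" and the handwritten-digit framing are placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="digits.tflite")  # hypothetical model file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the model's expected shape and dtype (e.g. a 28x28 digit image).
image = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("predicted digit:", int(np.argmax(scores)))
```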

I wrote a #rust crate for Modern Hopfield Networks. I used it to build a neural network that can be trained on the edge. See the demo linked in the README.md.

Modern Hopfield Networks have much (much) larger capacity than classical Hopfield Networks. They are also called Dense Associative Memories.

#hopfield-network #tinyml

github.com/dilawar/moden-hopfi

GitHub · dilawar/moden-hopfield-network: Modern Hopfield Network (aka Dense Associative Memory) in Rust
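For readers unfamiliar with the retrieval rule behind dense associative memories, here is a small NumPy sketch of one update step (the X · softmax(β Xᵀξ) rule from Ramsauer et al.); it is an independent illustration, not code from the linked crate, and the sizes are arbitrary.

```python
# Illustrative modern Hopfield (dense associative memory) retrieval step.
# Stored patterns are the columns of X; a noisy query is pulled toward
# the closest stored pattern in a single update.
import numpy as np

rng = np.random.default_rng(0)
d, n, beta = 64, 10, 8.0          # pattern size, number of patterns, inverse temperature

X = rng.choice([-1.0, 1.0], size=(d, n))        # stored +/-1 patterns, one per column
query = X[:, 3] + 0.5 * rng.standard_normal(d)  # corrupted version of pattern 3

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

# One update: xi_new = X @ softmax(beta * X.T @ xi)
retrieved = X @ softmax(beta * X.T @ query)

print("closest stored pattern:", int(np.argmax(X.T @ retrieved)))   # expected: 3
print("signs match pattern 3:", bool(np.all(np.sign(retrieved) == X[:, 3])))
```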

By me for @hackster_io, "Benchmarking TensorFlow and TensorFlow Lite on Raspberry Pi 5." The big takeaway from these new benchmarks is that, when using TensorFlow Lite, the Raspberry Pi 5 performs on par with the Coral TPU, delivering essentially the same inferencing speed as Google's accelerator hardware. #ML #TinyML #AI #TensorFlow #RaspberryPi #CoralTPU hackster.io/news/benchmarking-

Hackster.io · Benchmarking TensorFlow and TensorFlow Lite on Raspberry Pi 5 · By Alasdair Allan
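A rough sketch of how such a TensorFlow Lite timing run can be set up in Python; the model file, thread count, and iteration count below are placeholders, not the article's actual benchmark harness.

```python
# Rough TensorFlow Lite timing loop (e.g. on a Raspberry Pi).
# "mobilenet_v2.tflite", num_threads, and the iteration count are placeholders.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2.tflite", num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)

interpreter.invoke()                      # warm-up run, excluded from timing

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
elapsed = time.perf_counter() - start

print(f"mean inference time: {1000 * elapsed / runs:.2f} ms")
```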

Here we go! Whether you regard this as #TinyML or not depends on how much of a purist you are. But you have to admit, it's an #LLM on a comparatively tight resource budget. arxiv.org/abs/2312.11514

arXiv.org · LLM in a flash: Efficient Large Language Model Inference with Limited Memory
Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, "windowing" strategically reduces data transfer by reusing previously activated neurons, and second, "row-column bundling", tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.
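As one toy reading of the abstract's "row-column bundling" idea: store each FFN up-projection row next to its matching down-projection column so a single contiguous read fetches both for an active neuron. The NumPy sketch below uses made-up shapes and a made-up file layout; it only illustrates the access pattern, not the paper's implementation.

```python
# Toy illustration of row-column bundling for a sparse FFN layer.
# For neuron i, store W_up[i, :] and W_down[:, i] contiguously, so loading
# one active neuron from "flash" is a single contiguous read.
# Shapes and file layout are assumptions for illustration only.
import numpy as np

d_model, d_ff = 8, 32                                  # tiny toy dimensions

W_up = np.random.randn(d_ff, d_model).astype(np.float32)
W_down = np.random.randn(d_model, d_ff).astype(np.float32)

# Bundle: one record per neuron = [up row | down column], written to a file.
bundles = np.concatenate([W_up, W_down.T], axis=1)     # shape (d_ff, 2 * d_model)
bundles.tofile("ffn_bundles.bin")

# At inference time, memory-map the file and read only the active neurons.
flash = np.memmap("ffn_bundles.bin", dtype=np.float32,
                  shape=(d_ff, 2 * d_model), mode="r")

x = np.random.randn(d_model).astype(np.float32)
active = [2, 5, 17]                                    # neurons predicted to be non-zero

y = np.zeros(d_model, dtype=np.float32)
for i in active:
    record = np.asarray(flash[i])                      # one contiguous read per neuron
    up_row, down_col = record[:d_model], record[d_model:]
    y += max(float(up_row @ x), 0.0) * down_col        # ReLU(up) times down-column

print("partial FFN output from active neurons:", y)
```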