mastodontech.de is one of many independent Mastodon servers you can use to participate in the fediverse.
Open to everyone (over 16) and provided by Markus'Blog

Server statistics:

1.5K
active profiles

#kubernetes

19 posts · 16 participants · 0 posts today

I'm building a #Bluesky custom feed that filters the firehose using #MachineLearning, running inside a Minikube #kubernetes VM on my #Linux #Ubuntu home server.

I'm in awe at how many stages (three!) I need to deploy just to consume the posts and allow access to the filtered results.

First, a reverse proxy at the internet border. Then a minikube tunnel to allow access into the VM. Then an ingress controller to manage the traffic flows within the VM itself.
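For illustration, a rough sketch of stages two and three (the Ingress name, hostname, and backend Service below are made up; the reverse proxy at the border is whatever already runs there):

# Stage 2: run `minikube tunnel` on the host so the ingress controller's
# LoadBalancer address becomes reachable from outside the VM.
# Stage 3: an Ingress routes the traffic inside the cluster itself.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: feed-api                  # hypothetical name
spec:
  ingressClassName: nginx         # assumes the minikube ingress addon is enabled
  rules:
    - host: feed.example.home     # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: feed-filter # hypothetical Service exposing the filtered feed
                port:
                  number: 8080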

Mind-boggling complexity!

I've been disappointed about this for at least the last decade, but if you feel that the polling-based designs of Kubernetes and Prometheus are "wrong", here's some science:
arxiv.org/abs/2507.02158

arXiv.org — Signalling Health for Improved Kubernetes Microservice Availability
Microservices are often deployed and managed by a container orchestrator that can detect and fix failures to maintain the service availability critical in many applications. In Poll-based Container Monitoring (PCM), the orchestrator periodically checks container health. While a common approach, PCM requires careful tuning, may degrade service availability, and can be slow to detect container health changes. An alternative is Signal-based Container Monitoring (SCM), where the container signals the orchestrator when its status changes. We present the design, implementation, and evaluation of an SCM approach for Kubernetes and empirically show that it has benefits over PCM, as predicted by a new mathematical model. We compare the service availability of SCM and PCM over six experiments using the SockShop benchmark. SCM does not require that polling intervals are tuned, and yet detects container failure 86% faster than PCM and container readiness in a comparable time with limited resource overheads. We find PCM can erroneously detect failures, and this reduces service availability by 4%. We propose that orchestrators offer SCM features for faster failure detection than PCM without erroneous detections or careful tuning.
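For context, this "poll-based" monitoring is what ordinary Kubernetes probes do: the kubelet checks the container on a timer, and the intervals below are exactly the knobs the paper says need careful tuning (the pod, image, and endpoints are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: polled-app                # hypothetical pod
spec:
  containers:
    - name: app
      image: registry.example/app:latest   # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10         # poll interval: how often the kubelet checks
        failureThreshold: 3       # worst case roughly 30s before a failure is acted on
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5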
Replied in thread

And why did I choose Talos Linux instead of k3s, minikube, or so many other ways to deploy Kubernetes? Very simple answer: immutable deployment + GitOps. I have a number of hosts that need to run apt/dnf update on a regular basis. As much as this can be automated, it is still tiresome to manage. I don't have to worry as much about an immutable host running a Kubernetes cluster, mostly because the bulk of the attack surface is in the pods, which can be easily upgraded by Renovate/GitOps (which is also something I miss on the hosts running Docker Compose).

Now the research starts. I know Kubernetes, but I don't know Talos Linux, so there's a lot to read because each Kubernetes deployment has its own quirks. Besides, I need to figure out how to fit this new player into my current environment (CA, DNS, storage, backups, etc.).

Will my experience become a series of blog posts? Honestly: most likely not. In a previous poll, the majority of people who read my blog posts said they're more interested in Docker/Podman. Besides, the Fediverse is already full of brilliant people talking extensively about Kubernetes, so I will not be "yet another one".

You will, however, hear me ranting. A lot.

3/3

#HomeLab #TalosLinux #k3s
Continued thread

The main reason for replacing my Proxmox host with a Kubernetes deployment is that most of what I have deployed on it is LXC containers running Docker containers. This is very cumbersome, sounds really silly, and is not even recommended by the Proxmox developers.

The biggest feature I would miss with that move is the possibility of running VMs. However, so far I've only needed a single one, for a very specific test that lasted exactly one hour, so it's not a hard requirement. And that problem can be easily solved by running KubeVirt. I've done that before at work, and have tested it in my home lab, so I know it is feasible. Is it going to be horrible to manage VMs that way? Probably. But like I said, they're an exception. Worst case, I can run them on my personal laptop with KVM/libvirt.
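For reference, a minimal KubeVirt VirtualMachine sketch of the kind such a one-off test could use (assumes the KubeVirt operator is installed; the name and container disk are placeholders):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: throwaway-test            # hypothetical one-off VM
spec:
  running: false                  # start it only when the test actually needs it
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example container disk image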

2/3

#HomeLab #TalosLinux #Proxmox

Quick talk about the future of my home lab. (Broken out into a thread for readability.)

After lots of thinking, a huge amount of frustration, and a couple of hours of testing, I am seriously considering replacing my Proxmox host with a Kubernetes deployment using Talos Linux.

This is not set in stone yet. I still need to investigate further how to deploy this properly, in a way that will be easy to manage. But that's the move that makes sense for me in the current context.

I'm not fully replacing my bunch of Raspberry Pis running Docker Compose. But I do have a couple of extra Intel-based (amd64/x86_64) mini-PCs where I run some bulkier workloads that require lots of memory (more than 8GB). So I am still keeping my promise to continue writing about "the basics", while also probably adding a bit of "the advanced". Besides, I want to play around with multi-architecture deployments (mixing amd64 and arm64 nodes in the same k8s cluster).
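To give an idea of what that mixing looks like, a sketch of pinning a memory-hungry workload to the amd64 mini-PCs via the standard kubernetes.io/arch node label (the names and image are made up):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bulky-workload            # hypothetical memory-hungry app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bulky-workload
  template:
    metadata:
      labels:
        app: bulky-workload
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64 # keep it off the arm64 Raspberry Pis
      containers:
        - name: app
          image: ghcr.io/example/bulky:latest   # placeholder multi-arch image
          resources:
            requests:
              memory: 8Gi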

1/3

#HomeLab #TalosLinux #Proxmox

🐧 Looking to build a Kubernetes administrator career? Start with these 5 steps:

1️⃣ Complete our FREE Introduction to Cloud Infrastructure Tech (LFS151) course
2️⃣ Next, take our FREE Introduction to Kubernetes (LFS158) course
3️⃣ Save 40% on your Certified Kubernetes Administrator (CKA) exam when you bundle it with a THRIVE-Annual subscription
4️⃣ Enroll in Kubernetes Fundamentals (LFS258)
5️⃣ Earn your Certified Kubernetes Administrator (CKA) certification

training.linuxfoundation.org/c