… and then, once the load balancer was running again, #kured moved the Mastodon database to a different node. Sometimes one thing just leads to another.
This morning I vibecoded a multi-platform system tray app for displaying a little status dot for a #k8s cluster based on pod statuses.
It's a decent start. Works well on macOS and Windows. Haven't tried Linux yet.
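For context, a minimal sketch of the kind of pod-status polling such a tray app might do, using the official Kubernetes Python client; the colour thresholds here are my own guess, not the app's actual logic.

```python
# Rough sketch: map cluster pod phases to a traffic-light colour for a tray icon.
# Assumes a working kubeconfig for the current context; thresholds are illustrative.
from kubernetes import client, config


def cluster_status_colour() -> str:
    config.load_kube_config()  # use the current kubectl context
    pods = client.CoreV1Api().list_pod_for_all_namespaces().items
    phases = [p.status.phase for p in pods]
    if any(ph in ("Failed", "Unknown") for ph in phases):
        return "red"
    if any(ph == "Pending" for ph in phases):
        return "yellow"
    return "green"


if __name__ == "__main__":
    print(cluster_status_colour())
```

A real tray app would poll this on a timer and also look at container restart counts, but the basic signal is just an aggregation like this.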
New Kubernetes Alpha Release
Kubernetes v1.34.0-alpha.3
https://github.com/kubernetes/kubernetes/releases/tag/v1.34.0-alpha.3
Anybody worked out if it's possible to access AWS Certificate Manager certs in EKS Kubernetes as a TLS Secret? (I need to terminate TLS in the pod, not at the LoadBalancer, so I can access SNI.)
It feels like it should be possible with the Secrets Store CSI driver with the AWS plugin, but it looks like it only has access to AWS Secrets Manager. I don't really want to have to export and import every time the certs need renewing.
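For reference, a rough sketch of the export-and-import round trip the post wants to avoid, assuming the ACM certificate is actually exportable (e.g. issued by ACM Private CA); the ARN, passphrase, namespace, and secret name are placeholders.

```python
# Sketch: export a cert from ACM and load it into a kubernetes.io/tls Secret.
# Only works for exportable certificates; all names/ARNs below are placeholders.
import boto3
from cryptography.hazmat.primitives import serialization
from kubernetes import client, config

CERT_ARN = "arn:aws:acm:eu-west-1:123456789012:certificate/..."  # placeholder
PASSPHRASE = b"temporary-export-passphrase"                      # placeholder

exported = boto3.client("acm").export_certificate(
    CertificateArn=CERT_ARN, Passphrase=PASSPHRASE
)

# ACM returns the private key encrypted with the passphrase; strip the
# encryption before handing it to Kubernetes.
key = serialization.load_pem_private_key(
    exported["PrivateKey"].encode(), password=PASSPHRASE
)
key_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
).decode()

config.load_kube_config()
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="my-tls-cert"),  # placeholder
    type="kubernetes.io/tls",
    string_data={
        "tls.crt": exported["Certificate"] + "\n" + exported["CertificateChain"],
        "tls.key": key_pem,
    },
)
client.CoreV1Api().create_namespaced_secret("default", secret)
```

This is exactly the manual renewal chore the post is trying to avoid, which is why a CSI-style integration would be nicer.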
Wanted: Linux system administrator (m/f/d), cloud infrastructure
Where: Geisenhausen near Landshut, Lower Bavaria (hybrid)
https://www.adito.de/karriere/linux-systemadministrator.html
「 Docker is essentially a sandwich of disk images where you can shove absolutely anything, and then these images get executed by running whatever legacy software you’ve crammed in there, regardless of how horrific or inconsistent it might be, with zero behavioral controls 」
https://andreafortuna.org/2025/06/20/unpopular-opinion-kubernetes-is-a-symptom-not-a-solution
@samir Every single day, a team of 25 people is kept busy running a #K8s cluster with 1,200 nodes that could be replaced by fewer than ten 1U machines, using a system design that actually solves the 10K problem instead of one that struggles to handle even 10 req/s.
This is the vicious cycle of technical debt.
This week's problem: cluster-autoscaler has a bug that causes machines that start up to get stuck in a zombie state without successfully registering with the control plane. This causes all kinds of cluster scale up issues, especially with multi-AZ workloads.
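For illustration, a rough sketch (my own, not cluster-autoscaler's logic) of how one might spot those zombies: compare the instances an ASG reports as InService against the nodes actually registered with the API server. The ASG name is a placeholder.

```python
# Sketch: find ASG instances that are InService but never registered as nodes.
# Assumes AWS credentials and a kubeconfig for the cluster; MY_ASG is a placeholder.
import boto3
from kubernetes import client, config

MY_ASG = "my-cluster-workers"  # placeholder ASG name

config.load_kube_config()
nodes = client.CoreV1Api().list_node().items
# AWS provider IDs look like "aws:///eu-west-1a/i-0123456789abcdef0"
registered = {
    n.spec.provider_id.rsplit("/", 1)[-1]
    for n in nodes
    if n.spec.provider_id
}

asg = boto3.client("autoscaling")
resp = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[MY_ASG])
for group in resp["AutoScalingGroups"]:
    for inst in group["Instances"]:
        if inst["LifecycleState"] == "InService" and inst["InstanceId"] not in registered:
            print(f"zombie? {inst['InstanceId']} is InService but not a k8s node")
```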
Every week it's a new bug, a new edge case, a new issue with dependencies (K8s, Helm, Rancher, Istio, etcd, ...), a new issue with AWS; it just goes on and on.
I yearn for the simpler days of just running servers in racks, when it was like, "oh, had another hard drive failure in rack 04, have to go swap out an HDD cartridge and rebuild the RAID".
And why did I choose Talos Linux instead of k3s, minikube, or so many other ways to deploy Kubernetes? Very simple answer: immutable deployment + GitOps. I have a number of hosts that need to run apt/dnf update on a regular basis. As much as this can be automated, it is still tiresome to manage. I don't have to worry as much about an immutable host running a Kubernetes cluster, mostly because the bulk of the attack surface is in the pods, which can be easily upgraded by Renovate/GitOps (which is also something I miss on the hosts running Docker Compose).
Now the research starts. I know Kubernetes, but I don't know Talos Linux, so there's a lot to read, because each Kubernetes deployment has its own quirks. Besides, I need to figure out how to fit this new player into my current environment (CA, DNS, storage, backups, etc.).
Will my experience become a series of blog posts? Honestly: most likely not. In a previous poll, the majority of people who read my blog posts said they're more interested in Docker/Podman. Besides, the Fediverse is already full of brilliant people talking extensively about Kubernetes, so I will not be "yet another one".
You will, however, hear me ranting. A lot.
3/3
The main reason for replacing my Proxmox host with a Kubernetes deployment is that most of what I have deployed on it is LXC containers running Docker containers. This is very cumbersome, sounds really silly, and is not even recommended by the Proxmox developers.
The biggest feature I would miss with that move is the ability to run VMs. However, so far I've only needed a single one, for a very specific test that lasted exactly one hour, so it's not a hard requirement. And that problem can be easily solved by running KubeVirt. I've done that before at work and have tested it in my home lab, so I know it is feasible. Is it going to be horrible to manage VMs that way? Probably. But like I said, they're an exception. Worst case, I can run them on my personal laptop with KVM/libvirt.
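For illustration, a minimal sketch of what such a KubeVirt VM could look like, created through the Kubernetes custom-objects API; the VM name and container-disk image are the usual demo values, not anything from this setup, and KubeVirt is assumed to be installed.

```python
# Sketch: create a minimal KubeVirt VirtualMachine via the custom objects API.
# Names and image are placeholders; requires a cluster with KubeVirt installed.
from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "throwaway-test-vm"},  # placeholder name
    "spec": {
        "running": False,  # start it later when actually needed
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "containerdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "containerdisk",
                        "containerDisk": {
                            "image": "quay.io/kubevirt/cirros-container-disk-demo"
                        },
                    }
                ],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm,
)
```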
2/3
Quick talk about the future of my home lab. (broken out in a thread for readability)
After lots of thinking, a huge amount of frustration, and a couple of hours of testing, I am seriously considering replacing my Proxmox host with a Kubernetes deployment based on Talos Linux.
This is not set in stone yet. I still need to investigate how to deploy this in a way that will be easy to manage. But that's the move that makes sense for me in the current context.
I'm not fully replacing my bunch of Raspberry Pis running Docker Compose. But I do have a couple of extra Intel-based (amd64/x86_64) mini-PCs where I run some bulkier workloads that require lots of memory (more than 8 GB). So I am still keeping my promise to continue writing about "the basics", while probably also adding a bit of "the advanced". Besides, I want to play around with multi-architecture deployments (mixing amd64 and arm64 nodes in the same k8s cluster).
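As a taste of what those multi-architecture experiments might involve, a small sketch that pins a Deployment to arm64 nodes via the standard kubernetes.io/arch node label; the image and names are placeholders, not part of the actual plan.

```python
# Sketch: schedule a Deployment only onto arm64 nodes in a mixed-arch cluster.
# Uses the well-known kubernetes.io/arch label; names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="arm64-only-demo"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "arm64-only-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "arm64-only-demo"}),
            spec=client.V1PodSpec(
                node_selector={"kubernetes.io/arch": "arm64"},
                containers=[client.V1Container(name="web", image="nginx:alpine")],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment("default", deployment)
```

Dropping the node_selector (and using multi-arch images) is what lets the same workload float freely between amd64 and arm64 nodes.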
1/3
At the new workplace. They use #OpenShift. Thinking of creating an #OKD cluster at home to play around with.
My main #K8S cluster will still be "normal" Kubernetes, though. The OKD cluster would just be for learning the differences between plain K8S and OKD/OCP.
That whole #opensearch (on #k8s) setup feels so clumsy. I'm really starting to hate it.
Is there any useful IaC tooling to manage its lifecycle? No, #opentofu isn't the right tool.
#rant
New (minor) release for #Kustomize, v5.7.0:
Main change: we can now use a replacement with a static value!
Another good way to replace the domain in an `ingress`, instead of the ugly ${HOSTNAME} managed by another tool.
Full changelog: https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize/v5.7.0