#k8s


Has anybody worked out whether it's possible to access AWS Certificate Manager certs in EKS as a Kubernetes TLS Secret? (I need to terminate TLS in the pod, not at the LoadBalancer, to get access to SNI.)

It feels like it should be possible with the Secrets Store CSI driver and the AWS plugin, but it looks like it only has access to AWS Secrets Manager. I don't really want to have to export and re-import the certs every time they need renewing.
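If export/import does turn out to be the only route, it can at least be scripted and run on a schedule (e.g. as a CronJob). A rough sketch, assuming the certificate is actually exportable from ACM (historically that means issued via AWS Private CA) and using placeholder ARN, passphrase, namespace, and Secret names:

# Sketch only: pull an exportable ACM certificate and mirror it into a
# kubernetes.io/tls Secret. The ARN, passphrase, namespace and Secret name
# below are placeholders.
import boto3
from cryptography.hazmat.primitives import serialization
from kubernetes import client, config

CERT_ARN = "arn:aws:acm:eu-central-1:123456789012:certificate/example"
PASSPHRASE = b"temporary-export-passphrase"

acm = boto3.client("acm")
exported = acm.export_certificate(CertificateArn=CERT_ARN, Passphrase=PASSPHRASE)

# ACM returns the private key encrypted with the passphrase; re-serialize it
# unencrypted so it fits the tls.key field of the Secret.
key = serialization.load_pem_private_key(
    exported["PrivateKey"].encode(), password=PASSPHRASE
)
key_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.TraditionalOpenSSL,
    serialization.NoEncryption(),
).decode()

config.load_kube_config()  # or load_incluster_config() when run inside the cluster
client.CoreV1Api().create_namespaced_secret(
    "default",
    client.V1Secret(
        metadata=client.V1ObjectMeta(name="acm-cert-tls"),
        type="kubernetes.io/tls",
        string_data={
            "tls.crt": exported["Certificate"] + exported["CertificateChain"],
            "tls.key": key_pem,
        },
    ),
)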

#TLS #AWS #EKS
Replied in thread

@samir Every single day, a team of 25 people is kept busy running a #K8s cluster with 1200 nodes that could be replaced by fewer than ten 1U machines, using a system design that actually solves the 10K problem instead of one that struggles to handle even 10 req/s.
This is the vicious cycle of technical debt.

This week's problem: cluster-autoscaler has a bug that causes newly started machines to get stuck in a zombie state without ever registering with the control plane. This causes all kinds of cluster scale-up issues, especially with multi-AZ workloads.

Every week brings a new bug, a new edge case, a new issue with dependencies (K8s, Helm, Rancher, Istio, etcd, ...), a new issue with AWS; it just goes on and on.

I yearn for the simpler days of just running servers in racks, when it was "oh, another hard drive failure in rack 04, time to go swap out an HDD cartridge and rebuild the RAID".

Replied in thread

And why did I choose Talos Linux instead of k3s, minikube, or the many other ways to deploy Kubernetes? Very simple answer: immutable deployment + GitOps. I have a number of hosts that need to run apt/dnf update on a regular basis. As much as this can be automated, it is still tiresome to manage. I don't have to worry as much about an immutable host running a Kubernetes cluster, mostly because the bulk of the attack surface is in the pods, which can easily be upgraded via Renovate/GitOps (which is also something I miss on the hosts running Docker Compose).

Now the research starts. I know Kubernetes, but I don't know Talos Linux, so there's a lot to read, because each Kubernetes deployment has its own quirks. Besides, I need to figure out how to fit this new player into my current environment (CA, DNS, storage, backups, etc.).

Will my experience become a series of blog posts? Honestly: most likely not. In a previous poll, the majority of people who read my blog posts said they're more interested in Docker/Podman. Besides, the Fediverse is already full of brilliant people talking extensively about Kubernetes, so I won't be "yet another one".

You will, however, hear me ranting. A lot.

3/3

#HomeLab #TalosLinux #k3s
Continued thread

The main reason for replacing my Proxmox host with a Kubernetes deployment is that most of what I have deployed on it is LXC containers running Docker containers. This is very cumbersome, sounds really silly, and isn't even recommended by the Proxmox developers.

The biggest thing I would miss with that move is the ability to run VMs. However, so far I've only needed a single one, for a very specific test that lasted exactly one hour, so it's not a hard requirement. And that problem can easily be solved by running KubeVirt. I've done that before at work, and have tested it in my home lab, so I know it's feasible. Is it going to be horrible to manage VMs that way? Probably. But like I said, they're an exception. Worst case, I can run them on my personal laptop with KVM/libvirt.
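For reference, from the Kubernetes side a KubeVirt VM is just another custom resource. A minimal sketch via the Python client's custom-objects API, assuming the KubeVirt operator is already installed and using a placeholder VM name and demo disk image:

# Sketch: a minimal KubeVirt VirtualMachine created through the
# custom-objects API. Assumes KubeVirt is installed in the cluster;
# the VM name and containerDisk image are placeholders.
from kubernetes import client, config

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "testvm"},
    "spec": {
        "running": False,  # start it later with `virtctl start testvm`
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        "containerDisk": {
                            "image": "quay.io/kubevirt/cirros-container-disk-demo"
                        },
                    }
                ],
            }
        },
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm,
)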

2/3

#HomeLab #TalosLinux #Proxmox

A quick word about the future of my home lab (broken out into a thread for readability).

After lots of thinking, a huge amount of frustration, and a couple of hours of testing, I am seriously considering replacing my Proxmox host with a Kubernetes deployment using Talos Linux.

This is not set in stone yet. I still need to investigate further how to deploy this properly, in a way that will be easy to manage. But that's the move that makes sense for me in the current context.

I'm not fully replacing my bunch of Raspberry Pis running Docker Compose. But I do have a couple of extra Intel-based (amd64/x86_64) mini-PCs where I run some bulkier workloads that need lots of memory (more than 8 GB). So I am still keeping my promise to continue writing about "the basics", while probably also adding a bit of "the advanced". Besides, I want to play around with multi-architecture deployments (mixing amd64 and arm64 nodes in the same k8s cluster).
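The multi-architecture part is mostly a scheduling concern: as long as the images are published as multi-arch manifests, pods can be allowed (or pinned) to an architecture via the kubernetes.io/arch node label. A rough sketch with placeholder names:

# Sketch: let a Deployment schedule on either amd64 or arm64 nodes via
# node affinity on the kubernetes.io/arch label. Assumes the image is a
# multi-arch manifest; all names here are placeholders.
from kubernetes import client, config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo", image="nginx:stable")],
                affinity=client.V1Affinity(
                    node_affinity=client.V1NodeAffinity(
                        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                            node_selector_terms=[
                                client.V1NodeSelectorTerm(
                                    match_expressions=[
                                        client.V1NodeSelectorRequirement(
                                            key="kubernetes.io/arch",
                                            operator="In",
                                            values=["amd64", "arm64"],
                                        )
                                    ]
                                )
                            ]
                        )
                    )
                ),
            ),
        ),
    ),
)

config.load_kube_config()
client.AppsV1Api().create_namespaced_deployment("default", deployment)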

1/3

#HomeLab #TalosLinux #Proxmox