Have you tried to automate it?
Probably because of the Circle A in the thumbnail
“you will have a much easier time setting up database and networking, running backups, porting your infrastructure to other providers, and maintaining everything, than with legacy control panels or docker compose.”
I really don’t see this. Database? Same, but it needs a Service. Networking? Services and namespaces instead of Docker networks. Backups? Basically the same as Docker, but k8s has CronJobs, so you can keep backups in the same place as the rest of your stuff, which is a fair point. Porting infrastructure? Copying the compose file, env files and volumes vs. copying all resources and PVs.
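For illustration, a rough sketch of such a backup CronJob (the database host, PVC name and credentials handling are all assumptions, not a drop-in config):

```yaml
# Sketch only: assumes a Postgres service called "mydb" in the same namespace and a
# PVC called "backups"; credentials (PGPASSWORD / a Secret) are left out for brevity.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 3 * * *"            # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              command: ["/bin/sh", "-c",
                        "pg_dump -h mydb -U postgres mydb > /backups/mydb-$(date +%F).sql"]
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: backups
```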
I am absolutely not against self-hosting in k8s, and if OP already had k8s running, I’d recommend it too. But I don’t see the benefits for the scenario OP described.
You might be right that the better/more accessible Docker docs everywhere are the main reason it’s so popular, but it’s also usually just one file that describes everything, AND compose is usually the officially supported install method for many projects, whereas Helm charts are often third-party and lack configurability.
CNPG is cool, but then OP also needs to learn about operators and custom resources :) More efficient? Yes. More complex? Also yes.
The biggest challenge for Kubernetes is probably that the smaller applications don’t come with example configs for Kubernetes. Mastodon is the only one I see with an official one. Still, I’ve provided my config for Lemmy, and there are Docker containers available for Friendica and mbin (though Docker isn’t officially supported for those two). I’m happy to provide YAML examples for installing the applications.
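To give an idea of the shape of it, a stripped-down sketch (image tag and port are assumptions; a real install also needs a database, config and an Ingress):

```yaml
# Minimal sketch of a Deployment plus Service for a self-hosted app like Lemmy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lemmy
  template:
    metadata:
      labels:
        app: lemmy
    spec:
      containers:
        - name: lemmy
          image: dessalines/lemmy:latest   # assumed tag, check the project's docs
          ports:
            - containerPort: 8536          # Lemmy's default HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: lemmy
spec:
  selector:
    app: lemmy
  ports:
    - port: 80
      targetPort: 8536
```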
As said above, I agree it’s one challenge, but the added complexity shouldn’t be underestimated either.
Completely off topic: your post did make me think about running my own cluster again, though. I also work with k8s at my DevOps day job, but with a cloud provider it’s not the same as running your own, of course. I’ve also been thinking about tinkering with old smartphones in that potential cluster…
Don’t you think recommending k8s to someone who just wants to run some services on the same machine, some of which don’t even have k8s support or Helm charts, is a bit too much? Compared to docker compose or whatever OP is using, it’s way more complex if you’re not already familiar with Kubernetes resources.
I admittedly don’t know much about k3s in particular, but I wouldn’t recommend k8s for this unless OP just wants to use it as a lab.
You need different subdomains, as you suggested in your first paragraph. Then add a reverse proxy like nginx or Caddy to the machine, which proxies the different subdomains to the respective services (e.g. lemmy.your.site to localhost:2222, mbin.your.site to localhost:3333).
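As a rough sketch, one such server block could look something like this (hostname and port taken from my example above; TLS via certbot/Let’s Encrypt left out for brevity):

```nginx
server {
    listen 80;
    server_name lemmy.your.site;

    location / {
        proxy_pass http://localhost:2222;   # the service's local port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

One block like that per subdomain, and point all the DNS records at the same machine.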
Theoretically, you could put a landing page behind some SSO/IAM like Authentik and then link to the subdomains from the landing page, but eventually users will need to be on the subdomain to use a specific site.
Is your current setup up to date?
Yes, you get an email containing a link to your download when the requested build is done.
This guy actually built an automated builder so people can easily request tailored images for their robots, which is super cool.
Yeah, I feel like exposing ports 80 and 443 towards an up-to-date nginx/whatever is treated as a super dangerous thing in this community and in the selfhosted subreddit. Recommending Cloudflare is almost the default, which I find a bit sad given that many people self-host to escape the reliance on big monopolist companies.
One can add different layers of security of course, but having nginx with monitoring in its own VM, without keys to jump to another VM, is enough risk mitigation for me.
I think the best thing about Reddit is that it has so many genuinely active niche subreddits. Many people saying Lemmy doesn’t need to grow don’t seem to care much about that, which surprises me a bit.
OP mentioned Pixelfed for several people though; is it possible to reverse proxy through Tailscale from a VPS or similar? A service for several people probably isn’t suited to sitting behind a VPN.
Yes! Mostly having a plan for how to make your service reachable from the internet while keeping the rest of your local stuff shut off.
Many people recommend Cloudflare, but I don’t think it’s necessary. If you get a public IP from your ISP, it’s relatively easy with DynDNS. Personally, I have a virtual machine running nginx as a reverse proxy and have configured the router to forward ports 80 and 443 to that machine.
You got quite good answers already, here and in the other thread.
My suggestion is to not start with Pixelfed but with something else (something simple like DokuWiki, which you can use to document your setup while you’re at it) to get an understanding of the whole process (running the service itself, making it available to the internet after hardening your infrastructure a bit, etc.).
Also, if you’re not settled on how to do it exactly, give Docker a try. There’s a reason it’s popular among selfhosters!
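As a taste, a minimal compose file for the DokuWiki example could look roughly like this (image, port and path are assumptions, adjust to your setup):

```yaml
services:
  dokuwiki:
    image: lscr.io/linuxserver/dokuwiki:latest   # assumed image, pick whichever you prefer
    ports:
      - "8080:80"                                # host:container, reachable at http://<host>:8080
    volumes:
      - ./dokuwiki/config:/config                # persistent wiki data and config
    restart: unless-stopped
```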
Most important: replace the raspi SD card with an SSD
General hardware: see if I can find a better solution than my current Proxmox box (a repurposed desktop that idles at 60 W but is capped at 16 GB RAM)
Incoming traffic: currently have a VM that runs nothing but nginx and certbot. Considering switching to another reverse proxy and, more importantly, getting proper monitoring of the logs (e.g. IP detection, 403s, etc.)
Maybe add some IAM like Authentik
Finding a solution for a self-hosted podcast client with sync on Android and Linux… gPodder never really seemed to work; considering Audiobookshelf.
Probably setting up calibre web and gethomepage
Keeping what I have and maybe optimize a bit:
On VPS:
If you’re using Prometheus, the Blackbox exporter checks cert expiration as well.
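For example, an alerting rule on the metric it exposes could look roughly like this (threshold and labels are just placeholders; assumes your HTTPS endpoints are already probed via the Blackbox exporter’s http prober):

```yaml
groups:
  - name: tls
    rules:
      - alert: CertExpiresSoon
        # probe_ssl_earliest_cert_expiry is set by the Blackbox exporter's http prober
        expr: probe_ssl_earliest_cert_expiry - time() < 14 * 86400
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "TLS cert on {{ $labels.instance }} expires in under 14 days"
```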