With the recent discussions around replacing Spotify with selfhosted services and the possibilities for obtaining the music itself, I’ve finally been setting up Navidrome. I had to do quite a bit of reorganizing of my existing collection (beets helping a ton), but now it’s in a neatly organized structure and I’m enjoying it everywhere. I get most of my stuff from Bandcamp, but I have a big catalog from when I still had a large physical collection.

I’m also still working on my Docker quasi-GitOps stack. I’ve cleaned up my compose files, moved the secrets into env files where I hadn’t already, checked everything into my new Forgejo instance and (mostly) configured Renovate. Komodo is about to get productive, but I haven’t found the time yet. I also need to figure out how to check in secrets in a secure way; I know a few approaches but haven’t tried them with Komodo yet. This close to my fully automated update-on-merge compose stacks!
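As a minimal sketch of the env-file pattern (service and file names here are just placeholders): the compose file only references the env file, and the env file itself stays out of git or gets encrypted before commit.

  # docker-compose.yml
  services:
    app:
      image: ghcr.io/example/app:latest   # placeholder image
      env_file:
        - ./app.env                       # the actual secrets live here

  # app.env (gitignored, or encrypted before it's checked in)
  DB_PASSWORD=change-me
  API_TOKEN=change-me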

I’ve also been doing these threads for quite a while and decided to sometimes post them in !selfhosting@slrpnk.net as well, to possibly help move a bit of activity away from the biggest Lemmy instance, even though this community seems perfectly fine as it is.

What’s going on on your servers? Anything you are trying to pursue at the moment?

  • Jason2357@lemmy.ca · 6 hours ago

    For privacy reasons, I have finally fully disabled dynamic dns updates and closed the last holes in the home firewall, moving to 100% proxying via a VPS for publicly available stuff, and a tailnet (headscale) for everything private. The only real cross-over is Nextcloud - mountains of private data, but I want it publicly available for file shares. Fortunately, Nextcloud has a setting to whitelist IP addresses that allow log-in, so I can restrict that to just the non-VPS tailnet addresses. From the public internet, only public shares are accessible.

    I set up an L4 proxy so that the encryption for Nextcloud happens at home and the VPS just passes encrypted packets along. Then it occurred to me that a compromised VPS could easily grab an SSL cert for my Nextcloud subdomain via a regular old http-challenge and MITM access to all my files, defeating the point.
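    To sketch the passthrough idea (I’m illustrating with nginx’s stream module here; the upstream is a made-up tailnet address):

      # /etc/nginx/nginx.conf on the VPS, outside the http {} block
      stream {
          server {
              listen 443;
              # forward the raw TLS bytes to the home server over the tailnet,
              # so certificates and decryption stay at home
              proxy_pass 100.64.0.10:443;
          }
      }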

    Then I found a neat hack that effectively disables http-challenge certs for subdomains by requiring a wildcard certificate - which can only be created with a dns-challenge. I was able to also disable all other certificate authorities. Obviously, I have /some/ trust in the VPS I administer - it’s on my tailnet network - but no longer have the concern that it could easily MITM Nextcloud. https://www.naut.ca/blog/2019/10/19/mitigating-http-mitm-possibilities-with-lets-encrypt/
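    If I understand the mechanics right, it boils down to CAA records along these lines (domain is a placeholder): forbid all non-wildcard issuance, and only allow Let’s Encrypt to issue wildcards, which in turn forces a dns-challenge.

      ; no CA may issue ordinary (non-wildcard) certs for this domain
      example.com.  IN  CAA  0 issue ";"
      ; only Let's Encrypt may issue wildcard certs, and wildcards require dns-01
      example.com.  IN  CAA  0 issuewild "letsencrypt.org"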

  • antimongo@lemmy.world · 7 hours ago

    Just got my new NAS drives, so about to make the transition.

    It’s actually new drives and a new host. I’ve been running my old Synology NAS for years, but decided I ought to switch to a “real” NAS running under Proxmox.

    Just set up a simple Samba container with Cockpit as a web manager; so far it’s working really well. But I want to validate backups before I start moving all the irreplaceable data.

    Something I’m excited about is using my old Synology NAS as an automatic, off-site backup once I transition. Heard about Duplicati from a friend; it sounds like a great backup solution for that.

    Other than that I’ve been looking into using Apple HomeKit features with my Home Assistant devices. And also planning to move my hardware from the cheap Amazon floor shelf to a real 19” rack.

  • Fedegenerate@lemmynsfw.com · 9 hours ago

    I’ve been in full maintenance mode for spring/summer; those are the times to be going places and doing things. In autumn I’m going to write up my winter goals for the server.

    I have another N100 box that I’m going to dedicate to Immich. I have 7 users now, so when they all upload in one night, my current N100 has a little bit of a cry.

    Security is always a big one. I’m currently relying on Tailscale (limited to the necessary LXCs), reverse proxies, HTTPS, and app ‘sign-ins’. Not bad (it’s bad), but not good either.

    For new projects, I want to integrate Audiobookshelf with Hardcover. I’ve got a project installed, but it didn’t work on my first attempt, so I’ve shelved it until winter.

    I’d like to set up a virtual DOSBox, accessible from a browser, for my 1000s of DOS games. Again, I’ve found a few projects, but none worked out of the box, so they’ve been shelved until winter.

    Other than that, all my front-end services are working well. The *arrs are becoming a pain because of all the malware named to look like good files, which confuses Radarr/Sonarr. qBittorrent knows not to download .exes and the like, but Sonarr doesn’t know to delete them and search again. LazyLibrarian accepts no shit though: if things aren’t going as expected, LL very quickly deletes and tries again. I might try to vibe-code a script for that.
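    Something as crude as a scheduled find over the completed-downloads folder would at least clear the junk, though it still wouldn’t tell Sonarr to blocklist and search again (the path is a placeholder):

      #!/bin/sh
      # delete obvious junk that slipped past the download client's filters
      find /srv/downloads/complete -type f \( -name '*.exe' -o -name '*.lnk' -o -name '*.zipx' \) -print -delete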

    I’d like to break out my storage into a dedicated box. Probably get some e-waste to fill with drives. Currently I have one N100 running networking, storage and virtualization; it’s a little cramped.

    It’s probably smarter to break out networking first and build a little router/firewall box (the above N100 mini would be perfect). But I don’t get along with networking; I find it challenging in an unsatisfying way. When I’m done banging my head against the wall and things work, I’m just relieved I don’t have to do it again, instead of feeling accomplished. New projects are fun, and with storage I get the feeling of accomplishment from doing the thing. Networking is a dark art full of black boxes I don’t understand that sometimes play nice together and mostly fuck my shit up.

    I want to move over to IPv6, for no other reason than it’s probably a good idea to progress to the 2000s. If I can move everything over to hostnames, however, that’d be the dream.

    Moving from Docker to Podman is probably smart.

    Lots to do over winter… I’m probably gonna build a fish tank instead

    • LiveLM@lemmy.zip · 5 hours ago

      I’d like to set up a virtual DOSBox, accessible from a browser, for my 1000s of DOS games.

      Maybe something with the new Webtop images from LinuxServerIO? The new desktop streaming protocol they have is seriously speedy; you can totally game on it!

        • Fedegenerate@lemmynsfw.com · 3 hours ago

        Thanks, I’ll check it out. Speedy could be interesting for games that tied their speed to the CPU clock.

        The LinuxServerIO peeps make fantastic images. When my server was Docker-only, they pretty much built my homelab. I’m sure my Docker hosts still have a bunch of their stuff; the *arrs are probably all theirs.

    • mic_check_one_two@lemmy.dbzer0.com · 6 hours ago

      The *arrs are becoming a pain because of all the malware named to look like good files, which confuses Radarr/Sonarr. qBittorrent knows not to download .exes and the like, but Sonarr doesn’t know to delete them and search again.

      And this is exactly why I run Cleanuparr alongside my *arrs. It integrates extension blocking, blocked/failed/stalled retries, and even has crowdsourced blocklists for malware. Between that and Huntarr (which automates background searches, because Sonarr/Radarr don’t continuously search for missing media), my *arr stack is running better than ever.

  • imetators@lemmy.dbzer0.com · 11 hours ago

    I’m a newbie to the whole selfhosting thing. Been doing NAS + mini PC for the past 6 months with a few services running. 2 days ago I embarrassed myself.

    So, I’ve been running 5 services on Nginx Proxy Manager. But I heard that NPMplus is slightly better and can renew certs automatically. I had transferred the settings from NPM to NPMplus by hand, off a photo, and for some reason NPMplus couldn’t work with the services running on the NAS. I went back to NPM and didn’t touch the issue until last Sunday.

    During troubleshooting I found out that my dumb ass didn’t pay attention and put ‘:’ instead of ‘.’. So 192.168.xxx.xx became 192:168:xxx:xx, and that was the reason I spent a whole day troubleshooting the issue.

    Next goal: go back to my homeland and set up a Pi 3 at my parents’ place to be my VPN, so I can set up an *arr stack and automate media downloads in a way that the government of my current country of residence can’t put a deep hole in my wallet.

    • tofu@lemmy.nocturnal.garden (OP) · 9 hours ago

      Do your parents live in a place where that won’t happen?

      For auto-renewing certs: you can do that easily with regular nginx by using certbot alongside it. Just tell certbot to handle the domain once and it will renew the cert forever.
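      Roughly like this (the domain is a placeholder, and this assumes the nginx plugin for certbot is installed):

        sudo certbot --nginx -d cloud.example.com   # obtains the cert and wires it into the nginx config
        sudo certbot renew --dry-run                # renewals then run automatically via the packaged timer/cron job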

      • imetators@lemmy.dbzer0.com · 9 hours ago
        • Yes. Kind of. It’s in the EU, but AFAIK piracy either isn’t illegal there yet or isn’t enforced as much as where I live now.

        • Too late. NPMplus is up and running. Also, I like dark mode.

  • This2ShallPass@lemmy.world · 18 hours ago

    Just discovered TinyAuth and it is fantastic. I am replacing Authentik with it because it has what I want but is much faster, smaller, and simpler. Also, the license is FOSS.

  • AtariDump@lemmy.world · 1 day ago

    Trying to figure out how to drop my energy requirements and still keep ~100TB running.

    Right now it’s 12x 10TB drives in RAID 6 with ~8TB still available; it might be time to bite the bullet and upgrade to 20TB drives. Problem is, if my calculations are correct, I’d still need 7 drives: 5 x 20TB = 100TB, plus two more drives for “parity”.

    The server I have lined up already has a PERC in it.

    • Jason2357@lemmy.ca · 7 hours ago

      Do you actually need 100TB instantly available? Could a portion of that be cold storage that boots quickly off a WOL packet sent by the always-on machine when needed? With some tweaking, you could probably set up an Alpine-based NAS to boot in <10 seconds, especially if you picked something that supports coreboot and could avoid that long BIOS POST time.
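      Waking it up is the easy part; from the always-on box it’s just a magic packet (the MAC address is a placeholder):

        # either tool works, depending on what your distro packages
        wakeonlan aa:bb:cc:dd:ee:ff
        etherwake -i eth0 aa:bb:cc:dd:ee:ff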

  • csm10495@sh.itjust.works · 1 day ago

    I have a couple of Pis that run Docker containers, including Pi-hole. The containers have their storage on a centralized share drive.

    I had a power outage and realized they can’t start if they happen to come up before the share drive PC is back up.

    How do people normally do their Docker binds? Optimally I guess they would be local, but sync/back up to the share drive regularly.

    A sort-of-related question: in Docker Compose I have restart: always, and yet if a container exits successfully or seemingly early in its process (like Pi-hole), it doesn’t restart. Is there an easy way to still have them restart?

    • MangoPenguin@lemmy.blahaj.zone · 23 hours ago

      You should be able to modify the docker service to wait until a mount is ready before starting. That would be the standard way to deal with that kind of thing.
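      For example, a drop-in for docker.service, assuming the bind sources live under /mnt/share (the path is a placeholder):

        # /etc/systemd/system/docker.service.d/wait-for-share.conf
        [Unit]
        RequiresMountsFor=/mnt/share
        After=remote-fs.target

        # then: sudo systemctl daemon-reload && sudo systemctl restart docker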

      • csm10495@sh.itjust.works · 19 hours ago

        What if it’s a network mount inside the container? Doesn’t the mount not happen till the container starts?

        • MangoPenguin@lemmy.blahaj.zone · 2 hours ago

          Correct, yeah. You’d still need a way on the host to check whether the mount is ready before starting the service, though. Or you could just use a fixed delay.

      • hobbsc@lemmy.sdf.org · 1 day ago

        I don’t believe there’s cause for concern. I just assumed based on the prompts while setting up the backups that it would actually restart the VMs. I was wrong.

        • Jason2357@lemmy.ca · 7 hours ago

          I understand that COW filesystems can do snapshots at “instantaneous” points in time and that KVM snapshots RAM state as well, but I still worry that a database could be backed up at just the wrong time and be in an inconsistent state at restore. I’d rather do less frequent backups of a stopped VM and be more confident it will restore and boot correctly. Maybe I’m a curmudgeon?

          • hobbsc@lemmy.sdf.org · 6 hours ago

            I suppose it boils down to a threat model. I wouldn’t lose sleep if any of my VMs imploded and I had to rebuild them from scratch.

  • greybeard@feddit.online · 2 days ago

    I spent some time last week learning both Ansible and Podman Quadlets. They are a powerful duo, especially for self hosting.

    Ansible is a desired-state system for Linux. It lets you define a list of servers and what their configuration should be, like “have podman installed” and “have this file at this location with this content”.

    Podman Quadlets are a system for defining Podman containers as services. You define the container, volumes, and networks in what are essentially systemd unit files.
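    A minimal .container unit looks something like this (image, port and path are placeholders; for rootful quadlets the file lives in /etc/containers/systemd/):

      # /etc/containers/systemd/web.container
      [Unit]
      Description=Example web container

      [Container]
      Image=docker.io/library/nginx:alpine
      PublishPort=8080:80
      Volume=/srv/web:/usr/share/nginx/html:Z

      [Install]
      WantedBy=multi-user.target

      # after a daemon-reload this shows up as web.service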

    Mixing the two together, I can have my entire podman setup in a format that can be pushed to any server in seconds.
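    The Ansible side is then only a handful of tasks, roughly along these lines (host group and paths are made up):

      # site.yml
      - hosts: podman_hosts
        become: true
        tasks:
          - name: Have podman installed
            ansible.builtin.package:
              name: podman
              state: present

          - name: Have the quadlet file at this location with this content
            ansible.builtin.copy:
              src: files/web.container
              dest: /etc/containers/systemd/web.container
              mode: "0644"

          - name: Reload systemd so the generated service exists
            ansible.builtin.systemd:
              daemon_reload: true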

    And of course everything is text files that git well.

    • shadowtofu@discuss.tchncs.de · 1 day ago

      I did the same last week (and am still in the process of setting up more services for my new server). I have a few VMs (running Fedora CoreOS, with podman preinstalled), and I use ansible to push my quadlets, podman secrets, and static configuration files. Persistent data volumes get mounted using virtiofs from the host system, and the VMs are not supposed to contain any state themselves. The VMs are also provisioned using ansible.

      Do you use ansible to automatically restart changed containers after pushing your changes? So far, I just trigger a systemctl daemon-reload, but trigger restarts manually (which I guess is fine for development).

      • greybeard@feddit.online · 6 hours ago

        I haven’t gotten too far, but right now I’ve got persistent volumes served over NFS from my NAS. I’m using Rocky Linux VMs as my target, but for this use case Fedora CoreOS should be the same.

        I haven’t yet tried using Ansible to create the VMs, but that would be cool. I know Terraform is designed for that sort of thing, but if Ansible can do it, all the better. I’d love to get to a point where my entire stack is in Ansible.

        I don’t yet have Ansible restarting the service, but that should be as simple as adding a few new tasks after the daemon-reload task. What I don’t know how to do is tell it to only restart if there is a change to any of the uploaded config files. That would be nice to minimize service restarts.
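        Handlers might be the answer there (I haven’t tried this yet): a copy task can notify one, and the handler only fires when the file actually changed. Something like this, with placeholder names:

          tasks:
            - name: Push quadlet definition
              ansible.builtin.copy:
                src: files/web.container
                dest: /etc/containers/systemd/web.container
              notify: Restart web

          handlers:
            - name: Restart web
              ansible.builtin.systemd:
                name: web.service
                state: restarted
                daemon_reload: true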

    • theorangeninja@sopuli.xyz · 2 days ago

      I’ve been thinking about this for some time now. Can you link me to some good tutorials about Quadlets in particular? Ansible will have to wait for now.

    • powerofm@lemmy.ca · 2 days ago

      Oh, that’s smart! I just got started with Podman and Quadlets. Loving how simple it is to set up a systemd service and even organize multi-pod apps.

  • d13@programming.dev · 2 days ago

    I finally got around to setting up my internal services with TLS. It was surprisingly easy with a Caddy Docker image that supports the Cloudflare DNS challenge.
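    The relevant Caddyfile bit is tiny; something like this (hostname and upstream are placeholders, and it assumes a Caddy build that includes the caddy-dns/cloudflare module plus a CLOUDFLARE_API_TOKEN environment variable):

      internal-service.example.com {
          reverse_proxy 192.168.1.20:8096
          tls {
              dns cloudflare {env.CLOUDFLARE_API_TOKEN}
          }
      }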

    I did this because various services I use are starting to require https.

    Now everything is on a custom domain, https, and I can access it through Tailscale as usual.

  • async_amuro@lemmy.zip · 2 days ago

    Just ordered a used HP EliteDesk 800 G3 SFF (3.6GHz Intel Core i7-7700, 8GB DDR4 RAM, 256GB SSD) off eBay to replace my Apple Mac mini “Core i7” 2.3 (Late 2012/Server). Hoping to put 32GB of RAM in it, a 1TB NVMe boot drive, and maybe a 3.5” HDD for media instead of using an external drive. I might move to NixOS (I’d like to learn how to administer Nix, even though it’s very complicated sometimes) and Podman, instead of using Proxmox with Debian Docker VMs and LXC containers.

    Any advice and guidance appreciated!

    • tofu@lemmy.nocturnal.garden (OP) · 2 days ago

      It’s a tool that checks and corrects the metadata for your music collection. You can also use it to import music into your collection (it will put everything in the right folders, etc.).

      It does require some manual intervention now and then, though (“do you really want to apply this despite some discrepancies?”, “choose which of these albums it really is”, etc.).
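      With beets, the day-to-day usage is basically one command plus a small config; roughly like this, with placeholder paths:

        # ~/.config/beets/config.yaml
        directory: /srv/music      # the organized library
        import:
          move: yes                # move files into the library instead of copying
          write: yes               # write the corrected tags back to the files

        # then: beet import ~/downloads/new-album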

  • BingBong@sh.itjust.works · 2 days ago

    I’m trying to find a reasonably priced used rack-mount computer to move all my containers to. I have a rack in my house, but measuring the depth between posts only gets me around 17.5". I recently deployed paperless-ngx and decided it would be too much to add onto my poor little NAS, which hosts everything else, so it’s deployed on my main computer for now, and I want to avoid that strategy.

    The challenge is that, being new to rack servers and all of this (the NAS was a great intro box), I’ve got a large learning curve ahead of me.

    • gray@pawb.social · 2 days ago

      I have a Dell R220 and an R240 which I’m looking to offload, free. They’re both specifically for short racks, if you happen to be near central NC.

      • BingBong@sh.itjust.works · 19 hours ago

        Out of curiosity, what has been your experience on noise and power consumption with your r220 and r240 servers?

        • gray@pawb.social · 6 hours ago

          They’re pretty decent. Really depends on the temperature.

          If the room is cool, you hear the hard drives over the chassis fans. Power is totally dependent on the CPU. I put a 4790K in the R220 and it would sit around 45-55W average with two 3.5” drives.

      • BingBong@sh.itjust.works · 2 days ago

        Funny you should say that! If I were still over in that part of the country, I’d absolutely take you up on it. There’s a local used-IT shop near me and I was really excited to purchase some of those for a good deal. Then I found out that they only update their online inventory when you click “add to cart”. Long story short, the prices were too good to be true and they didn’t have any left in stock.

    • tofu@lemmy.nocturnal.garden (OP) · 2 days ago

      Are you using the rack already? Many people are opting for 10" racks for their homelabs these days. There are 3D-printable enclosures for many thin clients and mini PCs; “minilab” is the go-to term. This is mine, if you’re interested.

      • BingBong@sh.itjust.works · 2 days ago

        I already have the rack and it hosts most of my items (router, switch, Raspberry Pi rack, NAS, PS3, Apple TV). Honestly, other than the Raspberry Pi rack and the router, the rest are just sitting on shelves anyway. I need to find a good, non-bulky way to take the enclosure fan and switch it on when temps get above a set point. What I’ve done in the past with an Arduino is way too bulky.

        I love your idea, and it’s funny you mention the mini PCs and thin clients. As I look at prices, I’m more and more leaning towards just another shelf and using mini PCs.

        • AtariDump@lemmy.world · 1 day ago

          Just be sure to get a decent KVMoIP (KVM over IP); it’s worth it for the first time you’re not home and a system blue screens.

  • Kaldo@fedia.io · 2 days ago

    Just got a domain and started exposing my local Jellyfin through Cloudflare, mostly because I want to listen to my music on my phone when I’m outside too.

    I followed some guides that should make it fine with Cloudflare’s policy. Video didn’t work when I tried it, but otherwise it’s been fun, despite me feeling like I’m walking on eggshells all the time. I guess time will tell if it holds up.

    • Batman@lemmy.world · 2 days ago

      Some things which have caused issues for me:

      File permissions

      Video/audio format (H.264/AAC stereo is best for compatibility)
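      A quick way to check what a file actually contains (the filename is a placeholder):

        ffprobe -v error -show_entries stream=codec_name,codec_type -of default=noprint_wrappers=1 movie.mkv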

      • Kaldo@fedia.io · 2 days ago

        Oh, file permissions are a nightmare to me. I thought I’d managed to get it sorted, but after I installed Lidarr, it alone suddenly can’t move files out of the download location anymore. I even tried to chmod 777 the data folders, and nothing. I don’t think I quite have a grasp on how those work with Docker on Linux yet; it seems like those *arr services also have some internal users too, which I don’t get why they would.

        What do you mean about the formats, is this referring to transcoding? I kept those on the defaults, AFAIK.

        • h0rnman@lemmy.dbzer0.com · 2 days ago

          Could be that Lidarr is setting its own permissions for downloaded stuff (look for something like dmask or fmask in the Docker config). You might also need chmod -R so it hits all the subfolders. If you have a file or directory mask option, remember that they’re inverse, so instead of 777 you’d use 000 for rwxrwxrwx.

          • Kaldo@fedia.io · 1 day ago

            You might be onto something; Lidarr does have a UMASK=002 setting in the .env file. I think the issue is that SABnzbd puts the files there and then Lidarr can’t read them, so what exactly is the expected permission setting in this case? If I set it to 000 for Lidarr, won’t the other services then be unable to add files there?

            I always feel so dumb when it comes to these things, since in my head it’s something that should be pretty straightforward and simple. Why can’t they all just use the same user and share the same permissions within this folder hierarchy…

            • h0rnman@lemmy.dbzer0.com · 1 day ago

              SAB might have its own mask settings - it would be worth looking at. The same thing applies here - subtract each mask digit from 7 to get the real permissions. In this case, a mask of 002 translates into 775 for directories. That gives the UID and GID the container is running under (probably defined in a variable somewhere) read/write/execute, but anyone else only read/execute. The “anyone else” is just any account on the system (regardless of access method) that doesn’t match the actual UID or GID value.
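              You can see it in action in a scratch directory:

                (umask 002 && mkdir demo_dir && touch demo_file && ls -ld demo_dir demo_file)
                # demo_dir  -> drwxrwxr-x  (775)
                # demo_file -> -rw-rw-r--  (664; the execute bits are never set on plain files)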

        • raldone01@lemmy.world · 2 days ago

          In Linux, user and group names don’t matter; only the GID and UID matter. Think of user and group names as human-friendly labels, like domain names are for IPs.

          In Docker, when you use mounts, all your containers that want to share data must agree on the GIDs and UIDs.

          In rootless Docker and Podman, subuids and subgids make things a little more complicated, since IDs get mapped between the host and the container, but it’s still the IDs that matter.
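          As a sketch, with linuxserver.io-style images that honour PUID/PGID, “agreeing” looks like this (paths are just examples):

            services:
              sabnzbd:
                image: lscr.io/linuxserver/sabnzbd:latest
                environment:
                  - PUID=1000
                  - PGID=1000
                volumes:
                  - /srv/media:/data
              lidarr:
                image: lscr.io/linuxserver/lidarr:latest
                environment:
                  - PUID=1000
                  - PGID=1000
                volumes:
                  - /srv/media:/data
            # both containers read and write /srv/media as UID 1000 / GID 1000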

          • Kaldo@fedia.io · 1 day ago

            I have one .env file with UID/GID 1000 set for all Docker services in the docker-compose, so in theory that should be enough, but it seems it rarely is…

  • gaiety@lemmy.blahaj.zone · 2 days ago

    Considering switching my Forgejo to a Tangled.sh knot. Their easy self-hosted CI option is appealing, but mostly it’d be easier to collaborate than opening sign-ups on my Forgejo instance.