Could always save some cash and go the Lack Rack route.
100% when. I’ve learned that the hard way too many times to count at this point…
My NAS is built into an (I think) Thermaltake mid-sized tower running consumer hardware (ASRock X570 Phantom Gaming 4, a Ryzen 5 G-series processor, G.Skill non-ECC RAM), with the exception of one hard drive. Both that and my Proxmox host are repurposed or custom-built towers.
I do still use the QNAP NAS too, though only as SMB for my desktop/NFS for my server.
Nothing wrong with Overseerr once you have the *arrs up and running, tbh. Though if it’s just you, there isn’t much point, since everything can be done directly through the *arr web interfaces. If you’re hosting your media server for other friends, then a request system like Overseerr or Ombi makes way more sense.
Nothing wrong with using what you’ve got and upgrading. And the beautiful thing about Docker is you can just spin up the container elsewhere, point the mount points to their new locations, make sure your perms are good, and continue like nothing changed.
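To make that concrete, here’s roughly what a move looks like with the Docker SDK for Python. The image, container name, paths, and IDs below are just placeholders for the sake of the example, not anyone’s actual setup:

```python
import docker

client = docker.from_env()

# Tear down the old container. The config and media live in the bind mounts,
# not inside the container, so nothing of value is lost here.
old = client.containers.get("sonarr")
old.stop()
old.remove()

# Re-create it pointing at the new mount locations.
client.containers.run(
    "lscr.io/linuxserver/sonarr:latest",           # placeholder image
    name="sonarr",
    detach=True,
    environment={"PUID": "1000", "PGID": "1000"},  # match the owner of the new paths
    volumes={
        "/mnt/newpool/appdata/sonarr": {"bind": "/config", "mode": "rw"},
        "/mnt/newpool/media/tv": {"bind": "/tv", "mode": "rw"},
    },
    ports={"8989/tcp": 8989},
    restart_policy={"Name": "unless-stopped"},
)
```

The only other gotcha is chowning the new paths so the container’s user can actually write to them; that’s the “make sure your perms are good” part.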
It really is so much easier now. And with Unraid acting as my container host, it saves everything I spin up (permanent or not) in its last state as a template, so if I ever need to destroy my Docker image disk (which I recently had to do), all I need to do is find the template I was using in the dropdown they give you and click Create. It’s not a backup solution (you should have one of those too), but it’s such a time saver if and when something goes horribly wrong, or if you want to spin back up a container you used to use but have since destroyed.
I’m still rolling Binhex’s (now deprecated) rTorrent/ruTorrent container, and I’m glad I got it before it stopped being maintained. Tbh the scheduling capability built into that far exceeds anything else I’ve used (three tiers of scheduling on top of “off” and “unlimited”).
I make use of reverse proxying through Nginx Proxy Manager so nzb360 can reach my apps from outside my home, though if I can get it working properly I might drop that and go through Tailscale with local routing. I just haven’t had a chance to futz with that yet.
I used to run the applications on bare metal back when I ran a Windows server (because that’s all I knew at the time). Eventually I graduated to a QNAP NAS; when that wasn’t enough, I moved on again to Unraid, where many of these apps are available as templates in the Community Apps section. It really lowers the barrier to entry for using Docker and makes it stupid easy to assign a container an IP address on your host network, so it can be its own “device” on your LAN (which helps in my case since I’ve got all of that segmented off into its own VLAN).
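For what it’s worth, that “own device on your LAN” trick is just a macvlan network under the hood. Unraid sets it up for you as br0, but a hand-rolled sketch with the Docker SDK for Python looks something like this (the interface, subnet, and addresses are made up for the example):

```python
import docker

client = docker.from_env()

# A macvlan network bridged onto the physical NIC that carries the media VLAN.
lan = client.networks.create(
    "br0",
    driver="macvlan",
    options={"parent": "eth0"},  # host interface for that VLAN
    ipam=docker.types.IPAMConfig(
        pool_configs=[
            docker.types.IPAMPool(subnet="192.168.20.0/24", gateway="192.168.20.1")
        ]
    ),
)

# Create the container without starting it, attach it with a fixed LAN address,
# then bring it up -- it now answers on its own IP like any other device.
container = client.containers.create(
    "lscr.io/linuxserver/radarr:latest", name="radarr"
)
lan.connect(container, ipv4_address="192.168.20.50")
container.start()
```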
It’s not too deep a rabbit hole to jump down, but it’ll take time to get things dialed in so that you rarely have to interact with the apps or manually pick what you want to grab.
Docker, if you can run it on your hardware (either your everyday system or something dedicated), is a Swiss Army knife that can help level up your acquisitions, and it gives you an isolated application environment if you don’t want to install the applications directly on your device. For media specifically, there’s a suite of applications under the same *arr naming scheme that lets you index, monitor for releases of, and acquire television shows, movies, music, and books.
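If you want a feel for what that looks like in practice, here’s a rough sketch using the Docker SDK for Python that spins up a few of the *arrs on a shared network so they can talk to each other. The images, paths, and ports are just the common defaults, so adjust to taste:

```python
import docker

client = docker.from_env()

# One user-defined network so the *arrs (and later the download client)
# can reach each other by container name.
client.networks.create("media", driver="bridge")

common_env = {"PUID": "1000", "PGID": "1000", "TZ": "America/New_York"}

apps = [
    # (name, image, web UI port, library mount inside the container)
    ("sonarr", "lscr.io/linuxserver/sonarr:latest", 8989, "/tv"),
    ("radarr", "lscr.io/linuxserver/radarr:latest", 7878, "/movies"),
    ("prowlarr", "lscr.io/linuxserver/prowlarr:latest", 9696, None),
]

for name, image, port, library in apps:
    volumes = {f"/mnt/user/appdata/{name}": {"bind": "/config", "mode": "rw"}}
    if library:
        volumes[f"/mnt/user/media{library}"] = {"bind": library, "mode": "rw"}
    client.containers.run(
        image,
        name=name,
        detach=True,
        environment=common_env,
        volumes=volumes,
        ports={f"{port}/tcp": port},
        network="media",
        restart_policy={"Name": "unless-stopped"},
    )
```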
Some maintainers build extra capabilities into their torrent client containers. Binhex’s qBittorrent and Deluge images, for example, have VPN connectivity built in, so any network traffic running through the container automatically uses your VPN provider’s WireGuard or OpenVPN support, depending on which you use. Once you have that running and your tags tuned in the *arr apps, you have a headless, mostly independent machine constantly working on acquiring and upgrading your media.
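For reference, a VPN-wrapped client ends up looking roughly like this. The Docker calls are standard, but the environment variable names are from memory and vary by image, so treat them as assumptions and check the image’s docs before copying anything:

```python
import docker

client = docker.from_env()

# VPN-wrapped qBittorrent: the container needs NET_ADMIN to build the tunnel,
# and all of its traffic goes out through WireGuard.
client.containers.run(
    "binhex/arch-qbittorrentvpn:latest",
    name="qbittorrentvpn",
    detach=True,
    cap_add=["NET_ADMIN"],
    environment={
        # Variable names below are assumptions -- verify against the image docs.
        "VPN_ENABLED": "yes",
        "VPN_CLIENT": "wireguard",
        "VPN_PROV": "custom",              # or your provider's keyword
        "LAN_NETWORK": "192.168.20.0/24",  # keeps the web UI reachable from the LAN
        "PUID": "1000",
        "PGID": "1000",
    },
    volumes={
        "/mnt/user/appdata/qbittorrentvpn": {"bind": "/config", "mode": "rw"},
        "/mnt/user/media/downloads": {"bind": "/data", "mode": "rw"},
    },
    ports={"8080/tcp": 8080},
    network="media",  # same network as the *arrs so they can hand off downloads
    restart_policy={"Name": "unless-stopped"},
)
```

From there the *arrs just point at that container as their download client, and your tags decide which releases get routed through it.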
Sidenote: the *arr apps can be controlled by mobile apps like LunaSea on iOS, and nzb360 on Android. The latter can also integrate with your torrent clients.
Average usage for me hovers around 180-200W. I’m running the following:
Given all it does for me, I’m ok with the tradeoff.
Is it not? I’m not on lemmy.dbzer0.com and can see this.
Hey, thanks for replying! Good to know about the subscription ownership. I only ran the discovery portion of the script just to dip my toe in the water, because I feared what would happen if I subscribed. That said, if I use the discovery flag, that’s just exposing the different communities to my instance, right? It’s not going to retrieve any remote content?
Good to see that storage management is roadmapped. I probably won’t invoke the Subscribe side of this for my bot account until then.
Understandable about 2FA. I’ll just be smart about my password usage and watch for any updates on that.
Thanks again!
For anyone who might know:
So if I’m understanding this right, the bot account you create for this is the one subscribing to every community, so it’s known to the local system, right? As long as I’m not mixing up my main account and my bot account, there should be no observable change on my own account?
How is storage affected on this? If the bot account is subscribing to a number of communities across the fediverse, all that remote content is going to take up quite a bit of space, no?
And will 2FA be supported at any point?
Good thing I play any%.