That’s a very valid point, and certainly a reason to obfuscate the calendar event. I would argue that in general, if the concern is the government forcing you to decrypt the data, there’s really no good solution. If they have a warrant, they will get the encrypted data; the only barrier is how willing you are to refuse to give up the encryption key. I think some jurisdictions prevent compelled disclosure on 5th Amendment grounds, but I’m not a lawyer.
I have a full-height server rack with large, loud, power-hungry servers, so I can’t offer much of a suggestion there. I did want to say that you might want to seriously reconsider using a single 10TB hard drive.
Hard drives fail, and with a single drive, a failure means your data is gone. Using several smaller drives in an array provides redundancy: if a drive fails, parity information on the remaining drives can reconstruct its contents. As long as you replace the failed drive before anything else fails, you don’t lose any data. There are several ways to do this, but I’ll use RAID as an example. RAID5 spreads one drive’s worth of parity across the array. If any one drive fails, the array keeps running (albeit slower); you just replace the failed drive and let your controller rebuild the array. In a RAID5 configuration, you lose the capacity of one drive to parity, so four 4TB drives give you 12TB of usable space. RAID6 tolerates two simultaneous drive failures, but you lose two drives’ worth of capacity to parity, meaning your array would be more fault-tolerant, but you’d only get 8TB out of the same four drives.
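To make that concrete, here’s a rough sketch of building and repairing a 4-drive RAID5 array with Linux’s mdadm; the device names are hypothetical placeholders, so check yours before running anything destructive:

```
# Create a RAID5 array from four drives; one drive's worth of
# capacity goes to parity (device names are placeholders).
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# After a failure: mark the dead drive, pull it, add a replacement,
# and the array rebuilds from parity. Watch progress in /proc/mdstat.
sudo mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
sudo mdadm /dev/md0 --add /dev/sde
cat /proc/mdstat
```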
There are many different RAID configurations; far too many for me to go into them all here. There’s also ZFS, a file system whose RAIDZ levels work much like traditional RAID (with a LOT of extra features… like snapshots). As an example, I have 12 10TB hard drives in my NAS. Two groups of 6 drives are configured as RAIDZ2 (similar to RAID6), for 40TB of usable space in each vdev. Those two vdevs are then striped (like RAID0, so data is written across both with no redundancy at the stripe level). In total, that means I have 80TB of usable space, and in a worst-case scenario, I could have 4 drives (two in each vdev) fail without losing data.
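For reference, a pool laid out like mine is just two RAIDZ2 vdevs in a single zpool; ZFS stripes across vdevs automatically. A sketch with hypothetical device names:

```
# Two 6-drive RAIDZ2 vdevs in one pool. ZFS stripes writes across
# the vdevs; each vdev tolerates two drive failures.
sudo zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

# Verify the layout and health of the pool.
sudo zpool status tank
```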
I’m not suggesting you need a setup like mine, but you could probably fit 3 4TB drives in a small case, use RAID5 or ZFS RAIDZ1, and still have some redundancy. To be clear, RAID is not a substitute for a backup, but it can go a long way toward ensuring you never need to use your backup.
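If you go the RAIDZ1 route, pool creation is a one-liner; with three 4TB drives you’d end up with roughly 8TB usable (again, device names are placeholders):

```
# Single RAIDZ1 vdev: one drive's capacity to parity, one failure tolerated.
sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
```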
As someone who uses Nextcloud, why do you suggest obfuscating the name of the calendar event? My Nextcloud instance is only accessible from outside my LAN via HTTPS, so there’s no concern about someone running a packet sniffer on public WiFi or anything of that sort. The server is located on my property, so physical security isn’t a real concern unless someone breaks in with a USB drive or physically pulls the server out of the rack and steals it. If someone were to gain access to my network remotely, they’d still need login credentials for Nextcloud or for Proxmox in order to clone the VM’s drive.
To be clear, I’m not disagreeing with you; I’m wondering whether I’m overestimating the security of my home network. Considering you’re posting from infosec.pub, I’m assuming you know more about this than I do.
Also, I feel like I need to say that the fact that OP even needs to consider data security for something like this really makes me wonder how parts of our society have gone so wrong.
I would strongly disagree. In terms of setting up OPNsense (I use pfSense, but same concept), it’s easier to just do PCI passthrough. The alternative is to create a virtual network adapter on your hypervisor, bridge it to a physical NIC, and bind the virtual adapter to the VM. The only advantage that buys you is the ability to switch between physical NICs without reconfiguring the OPNsense installation. For someone with a homelab, when would you ever need to do that?
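For anyone comparing the two approaches, here’s roughly how they look on the Proxmox CLI; the VM ID, PCI address, and bridge name are hypothetical, so substitute your own:

```
# Passthrough: hand the physical NIC to the firewall VM directly
# (requires IOMMU/VT-d enabled in BIOS and on the kernel command line).
qm set 100 -hostpci0 0000:03:00.0

# Bridged alternative: attach a virtio adapter on a host bridge instead.
qm set 100 -net0 virtio,bridge=vmbr1
```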
My Proxmox server uses a 10Gb PCIe adapter for its primary network interface. The onboard NICs are all passed through to pfSense; I’ve never had any need to change that, and it’s been that way for years.
I don’t mean this to sound overly critical, and I’m happy to be proven wrong. I just don’t see a “plethora of reasons” why doing PCI passthrough on a NIC is a bad idea.
What kind of issues do you have with your ISP? I live in a rural area, so my options for ISP are limited; I have a VDSL connection supplemented by Starlink. Starlink uses CGNAT, so I can’t really host anything there unless I use something like ZeroTier or Tailscale, but my VDSL connection works well enough as long as I make sure to drop the bitrate to something that fits in my 4Mbit/s upload. I have anything that accepts incoming connections behind an Nginx reverse proxy, and my routing policy is set up so that Nginx is forced onto the DSL connection.
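On pfSense that’s just a firewall rule with a gateway override, but if anyone wants the same on a plain Linux router, a minimal policy-routing sketch looks something like this (addresses, interface, and table number are all hypothetical placeholders):

```
# Send traffic sourced from the Nginx host out the DSL gateway
# instead of the default (Starlink) route.
ip route add default via 192.0.2.1 dev dsl0 table 100
ip rule add from 192.0.2.10/32 table 100
```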
Not really related to my original post, but I’ve spent way too much time tinkering with my network, so I was curious.
Unfortunately, my CPU doesn’t support Quick Sync; I’m using dual E5-2650 v2s in the server that hosts Jellyfin. Quick Sync has actually been around since Sandy Bridge, but it lives on the CPU’s integrated GPU, and the Xeon E5 line doesn’t have one, so my Ivy Bridge chips are out of luck either way. I’ve been wanting to upgrade for a while, but it really comes down to the fact that it runs all of my VMs and containers just fine, and there’s always somewhere else I find to spend my money.
Regardless, the Jellyfin docs say that tone mapping is only supported via CUDA, which would mean I couldn’t use Quick Sync anyway.
I thought the same with Android TV, but at least for me, it doesn’t work at all; I’ve tried two different Android TV boxes, too, and they both have the same problem.
I wish I could find everything in SDR or at least HDR10, but that’s not always an option. I’ve found several “Linux ISOs” that are only available in DV, and some where the only options are 4K HDR or 1080p, and I really prefer to avoid anything below 4K unless absolutely necessary. 4K SDR is always my preferred format, though.
Do you transcode 4K with tone mapping? My P400 does a great job as long as tone mapping is turned off, but that doesn’t do much to help me play HDR content. A GTX 1070 would be a great solution, and cheaper than some of the other cards I’m looking at, assuming it can do what I need it to.
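For context on why tone mapping is the expensive part: done in software, it’s the commonly cited zscale/tonemap filter chain below (needs an ffmpeg built with zimg; this is the generic recipe, not anything Jellyfin-specific):

```
# HDR10 -> SDR in software: linearize, convert primaries to BT.709,
# apply the Hable tone-mapping curve, then re-encode as SDR.
ffmpeg -i input_hdr.mkv -vf "zscale=t=linear:npl=100,format=gbrpf32le,\
zscale=p=bt709,tonemap=tonemap=hable:desat=0,\
zscale=t=bt709:m=bt709:r=tv,format=yuv420p" output_sdr.mkv
```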
I usually only ever have 1 concurrent stream, too. It’d be nice to have a GPU that could support 2 just in case both of us in my household want to use Jellyfin at the same time, but it’s certainly not essential.
I haven’t tried MPV, but everything I’ve tried so far results in the typical green and purple image unless I do hardware tone mapping on the server. I also don’t much like the idea of having to load an external player on every device, especially my Android TV box.
Upgrading my GPU seems like the best solution; I just want to make sure I get something that will do what I need it to.
I think you’re right about preview_max_x and preview_max_y, but preview_max_filesize_image defaults to 50MB, and preview_max_memory to 256MB.
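If you’d rather pin those explicitly than rely on the defaults, they can be set through occ; shown here with the snap’s nextcloud.occ wrapper, and the values are just the documented defaults:

```
# Pin preview resource limits (50MB max source image, 256MB max
# memory per preview render -- the documented defaults).
sudo nextcloud.occ config:system:set preview_max_filesize_image --value=50 --type=integer
sudo nextcloud.occ config:system:set preview_max_memory --value=256 --type=integer
```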
Thanks! Adapting the command you gave to work with snap will be fairly easy. Regarding a backup of config.php, I’ve tried to do that, but with a snap install, I get a permission denied error when I try to enter the config directory, and you can’t “sudo cd.” I’ll try logging in as root or changing permissions.
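In case it saves you a step: the “sudo cd” problem goes away with a root shell, and assuming the snap keeps config.php at its usual path (worth verifying on your install), a backup is one copy command:

```
# cd is a shell builtin, so 'sudo cd' can't work; get a root shell instead.
sudo -s
cd /var/snap/nextcloud/current/nextcloud/config/

# Or skip the directory entirely and copy the file as root.
sudo cp /var/snap/nextcloud/current/nextcloud/config/config.php ~/config.php.bak
```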
I’ve said this many times before, but it seems relevant here, too. Using a reverse proxy is a good step for security, but you will still want to block certain incoming connections on your firewall. I block everything except for our cell phone provider, my partner’s employer, and my employer. We will never be accessing my network from any other source. At the very least, block everything and whitelist your own country; this will prevent a lot of illegitimate connections. If you’re using pfSense, the pfBlockerNG plugin makes this very easy to do.
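pfBlockerNG makes it point-and-click, but for anyone running a plain Linux firewall instead, the same whitelist idea in nftables looks roughly like this (the CIDR ranges are documentation placeholders; substitute your own carrier/employer ranges or a country list):

```
# Drop inbound HTTP/HTTPS unless the source is in the whitelist set.
sudo nft add table inet filter
sudo nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
sudo nft add set inet filter allowed '{ type ipv4_addr; flags interval; }'
sudo nft add element inet filter allowed '{ 203.0.113.0/24, 198.51.100.0/24 }'
sudo nft add rule inet filter input tcp dport '{ 80, 443 }' ip saddr != @allowed drop
```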
I was a bit confused about this as well. Once Threads implements ActivityPub, what would federation with lemmy.world actually look like in practice? I understand how federation works between Lemmy instances, but how would a microblogging platform fit in? Would Threads users just be able to post to Lemmy, or would it somehow show up in a Lemmy community when a Threads user makes a post on Threads?
I’m not really understanding how two services as different as Lemmy and Threads can interoperate.
You’ve gotten some good advice regarding VPNs, so I won’t go into that, but if you do decide to open SSH or any other port, I would encourage you to spend some time setting up a firewall to block incoming connections. I have several services on HTTPS open to the world, but my firewall only allows incoming connections from whitelisted IP ranges (basically just from my cell phone and my computer at work). The number of blocked incoming connections is staggering, and even if they’re not malicious, there is absolutely no legitimate reason for anyone other than me or the members of my household to be trying to access my network remotely.
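As a concrete sketch of that with ufw (the ranges are placeholders for whatever networks you actually connect from):

```
# Default-deny inbound, then whitelist known source ranges per port.
sudo ufw default deny incoming
sudo ufw allow from 203.0.113.0/24 to any port 443 proto tcp
sudo ufw allow from 198.51.100.0/24 to any port 22 proto tcp
sudo ufw enable
```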
Oops. Yeah, I meant ZFS. Fixed.