• 3 Posts
  • 17 Comments
Joined 2Y ago
Cake day: Jun 21, 2023


That’s a very valid point, and certainly a reason to obfuscate the calendar event. I would argue that in general, if the concern is the government forcing you to decrypt the data, there’s really no good solution. If they have a warrant, they will get the encrypted data; the only barrier is how willing you are to refuse to give up the encryption key. I think some jurisdictions prevent compelled disclosure on 5th Amendment grounds, but I’m not a lawyer.


I have a full-height server rack with large, loud, power-inefficient servers, so I can’t provide much of a useful suggestion there. I did want to say that you might want to seriously reconsider using a single 10TB hard drive.

Hard drives fail, and with a single drive, a failure means your data is gone. Using several smaller drives in an array provides redundancy: if a drive fails, the data can be rebuilt from the parity information on the remaining drives. As long as you replace the failed drive before anything else fails, you don’t lose any data. There are multiple ways to do this, but I’ll use RAID as an example. In RAID5, one drive’s worth of capacity is used for parity (the parity itself is distributed across all the drives). If any one drive fails, the array keeps running (albeit slower); you just replace the failed drive and let your controller rebuild the array. So if you have 4 4TB drives in a RAID5 configuration, you have a total of 12TB of usable space. RAID6 lets you lose two drives, but you also lose two drives’ worth of space to parity, meaning your array would be more fault-tolerant, but you’d only have 8TB of usable space.
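
If you go the Linux software RAID route, the whole lifecycle is just a few mdadm commands. This is only a rough sketch with made-up device names (/dev/sdb through /dev/sdf):

# create a RAID5 array from four drives; one drive's worth of capacity goes to parity
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# watch the initial sync (and any later rebuilds)
cat /proc/mdstat
# after physically replacing a failed drive, add the new one and the array rebuilds itself
sudo mdadm --manage /dev/md0 --add /dev/sdf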

There are many different RAID configurations; far too many for me to go into them all here. There’s also ZFS, a file system with many similarities to RAID (and a LOT of extra features… like snapshots). As an example, I have 12 10TB hard drives in my NAS. Two groups of 6 drives are configured as RAIDZ2 (similar to RAID6), for a total of 40TB of usable space in each array. Those two arrays are then striped (like RAID0, so data is written across both arrays with no redundancy at the striped level). In total, that means I have 80TB of usable space, and in a worst-case scenario I could have 4 drives fail (two in each array) without losing data.
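
For the ZFS route, the layout I described is a single pool made of two RAIDZ2 vdevs; ZFS stripes across the vdevs automatically. A rough sketch, with a made-up pool name and device names:

# one pool, two RAIDZ2 vdevs of six drives each; ZFS stripes writes across the vdevs
sudo zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
# check layout, health, and any resilvering progress
zpool status tank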

I’m not suggesting you need a setup like mine, but you could probably fit 3 4TB drives in a small case, use RAID5 or ZFS RAIDZ1, and still have some redundancy. To be clear, RAID is not a substitute for a backup, but it can go a long way toward ensuring you never need to use your backup.


As someone who uses Nextcloud, why do you suggest obfuscating the name of the calendar event? My Nextcloud instance is only accessible from outside my LAN via HTTPS, so there’s no concern about someone using a packet sniffer on public WiFi or something of that sort. The server is located on my property, so physical security isn’t a real concern unless someone breaks in with a USB drive or physically pulls the server out of the rack and steals it. If someone were to gain access to my network remotely, they’d still need login credentials for Nextcloud or for Proxmox in order to clone the VM drive.

To be clear, I’m not disagreeing with you; I’m wondering if I may be over-estimating data security on my home network. Considering you’re posting from infosec.pub, I’m assuming you know more about this than I do.

Also, I feel like I need to say that the fact that OP even needs to consider data security for something like this really makes me wonder how parts of our society have gone so wrong.


Linux ISOs are copies of installers for various Linux distributions. They’re totally free and legal to distribute, and a very above-board and legitimate thing to store on a server with more space than a normal person could reasonably need. They are very much not copyright-infringing content.


I would strongly disagree. In terms of setting up OPNsense (I use pfSense, but same concept), it’s easier to just do PCI passthrough. The alternative is to create a virtual network adapter on your hypervisor, bridge it to a physical NIC, and bind the virtual adapter to the VM. The only advantage to be gained from that is being able to switch between physical NICs without reconfiguring the OPNsense installation. For someone with a homelab, when would you ever need to do that?
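
For reference, on Proxmox the two approaches come down to roughly this (the VM ID and PCI address are made up; the bridge itself would be defined in /etc/network/interfaces):

# PCI passthrough: hand the physical NIC directly to the firewall VM
qm set 101 --hostpci0 0000:03:00.0
# bridged alternative: give the VM a virtio NIC attached to a Linux bridge on the host
qm set 101 --net0 virtio,bridge=vmbr1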

My Proxmox server uses a 10Gb PCIe adapter for its primary network interface. The onboard NICs are all passed through to pfSense; I’ve never had any need to change that, and it’s been that way for years.

I don’t mean this to sound overly critical, and I’m happy to be proven wrong. I just don’t see a “plethora of reasons” why doing PCI passthrough on a NIC is a bad idea.


What kind of issues do you have with your ISP? I live in a rural area, so my options for ISPs are limited; I have a VDSL connection supplemented by Starlink. Starlink uses CGNAT, so I can’t really host anything over it unless I use something like ZeroTier or Tailscale, but my VDSL connection works pretty well as long as I make sure to drop the bitrate to something that fits in my 4Mbit upload. Anything that accepts incoming connections sits behind an Nginx reverse proxy, and my routing policy forces Nginx onto the DSL connection.
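
On pfSense that policy routing is just a firewall rule with the DSL gateway set, but the same idea on a plain Linux router would look roughly like this (addresses and interface names are made up):

# put the DSL gateway in its own routing table
ip route add default via 192.168.1.1 dev eth0 table 100
# send traffic originating from the Nginx reverse proxy host out via that table
ip rule add from 192.168.10.5 table 100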

Not really related to my original post, but I’ve spent way too much time tinkering with my network, so I was curious.


Unfortunately, my CPUs don’t support Quick Sync; I’m using dual E5-2650 v2s in the server that hosts Jellyfin. Quick Sync relies on the integrated GPU, and the E5 Xeons don’t have one, so it’s off the table regardless of generation. I’ve been wanting to upgrade for a while, but it really comes down to the fact that the server runs all of my VMs and containers just fine, and there’s always somewhere else I find to spend my money.

Regardless, the Jellyfin docs say that tone mapping is only supported via CUDA, which would mean I couldn’t use Quick Sync anyway.


I thought the same with Android TV, but at least for me, it doesn’t work at all; I’ve tried two different Android TV boxes, too, and they both have the same problem.

I wish I could find everything in SDR or at least HDR10, but that’s not always an option. I’ve found several “Linux ISOs” that are only available in Dolby Vision, and some where the only options are 4k HDR or 1080p, and I really prefer to avoid anything below 4k unless absolutely necessary. 4k SDR is always my preferred format, though.


I’d guess that Plex uses ffmpeg internally, which would be the same as Jellyfin. I’ve been looking at both the P2000 and P4000, but I’m leaning a bit toward the T1000 because of the newer architecture. Good to hear that the P2000 is working for you.
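
For what it’s worth, the heavy part of the job is the HDR-to-SDR tone mapping filter chain. A rough software-only illustration with ffmpeg (needs a build with the zscale/zimg filter; Jellyfin’s hardware pipeline is different, this just shows the kind of work involved):

# decode HDR10, linearize, tone map to BT.709 SDR, re-encode; filenames are placeholders
ffmpeg -i input_hdr10.mkv \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -c:a copy output_sdr.mkv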


Do you transcode 4k with tonemapping? My P400 does a great job as long as tonemapping is turned off, but that doesn’t do much to help me play HDR content. A GTX 1070 would be a great solution, and cheaper than some of the other cards I’m looking at, assuming it can do what I need it to.

I usually only ever have 1 concurrent stream, too. It’d be nice to have a GPU that could support 2 just in case both of us in my household want to use Jellyfin at the same time, but it’s certainly not essential.


I haven’t tried MPV, but everything I’ve tried so far results in the typical green and purple image unless I do hardware tone mapping on the server. I also don’t much like the idea of having to load an external player on every device, especially my Android TV box.

Upgrading my GPU seems like the best solution; I just want to make sure I get something that will do what I need it to.


GPU for 4k Transcoding in Jellyfin
I'm starting to get more and more HDR content, and I'm noticing an issue with my Jellyfin server. In nearly all cases, it's required to transcode and tone map the HDR content. All of it is in 4k. My little Quadro P400 just can't keep up. Encoder and decoder usage hovers around 15-17%, but the GPU core usage is pinned at 100% the entire time, and my framerate doesn't exceed 19fps, which makes the video skip so badly it's unwatchable. What's a reasonable upgrade? I'm thinking about the P4000, but that might be excessive. Also, it needs to fit in a low-profile slot.

Edit: I'm shocked at how much good feedback I received on this post. Hopefully someone else will stumble on it in the future and be able to learn something. Ultimately, I decided to purchase a used RTX A2000 for just about $250. It's massively overkill for transcoding/tone mapping 4k, but once I'm brave enough to risk breaking my Proxmox install by setting up vGPU, I'm hoping to take advantage of the Tensor cores for AI object detection in my Blue Iris VM. Also, the A2000 supports AV1, and while I don't need that at the moment, it will be nice to have in the future, I think.

Final Edit: I replaced the Quadro P400 with an RTX A2000 today. With the P400, transcoding 4k HEVC HDR to 4k HEVC (or h264) SDR with tone mapping resulted in a transcode rate of about 19fps with 100% GPU usage. With the A2000, I'm getting a transcode rate of about 120fps with around 30% GPU usage; plenty of room for growth if I add 1 or 2 users to the server. For $250, it was well worth the upgrade.

I think you’re right about preview_max_x and preview_max_y, but preview_max_filesize_image defaults to 50MB, and preview_max_memory to 256MB.

https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/config_sample_php_parameters.html
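
For anyone who finds this later, the snap wraps occ as nextcloud.occ, so setting these would look something like the commands below (just an example based on the docs above; the --type flag is what keeps the value from being stored as a string):

# set the preview limits as integers rather than strings
sudo nextcloud.occ config:system:set preview_max_filesize_image --value="-1" --type=integer
sudo nextcloud.occ config:system:set preview_max_memory --value="-1" --type=integer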


Thanks! Adapting the command you gave to work with snap will be fairly easy. Regarding a backup of config.php, I’ve tried to do that, but with a snap install, I get a permission denied error when I try to enter the config directory, and you can’t “sudo cd.” I’ll try logging in as root or changing permissions.
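
If it helps, you don’t need to cd into the directory at all; you can run the read or copy itself as root. Assuming the usual snap layout (this path is from memory, so double-check it):

# view the config without entering the directory
sudo cat /var/snap/nextcloud/current/nextcloud/config/config.php
# keep a backup copy in your home directory
sudo cp /var/snap/nextcloud/current/nextcloud/config/config.php ~/config.php.bak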


I’ve said this many times before, but it seems relevant here, too. Using a reverse proxy is a good step for security, but you will still want to block certain incoming connections on your firewall. I block everything except for our cell phone provider, my partner’s employer, and my employer. We will never be accessing my network from any other source. At the very least, block everything and whitelist your own country; this will prevent a lot of illegitimate connections. If you’re using pfSense, the pfBlockerNG plugin makes this very easy to do.


Nextcloud - Preview Settings as Snap in Ubuntu
I recently set up Nextcloud, and so far I'm really enjoying it. With the exception of Gmail and Backblaze, I'm no longer using any online services that aren't self-hosted on my own hardware; Nextcloud has allowed me to get rid of the last few Google services I was using.

One issue I'm having is that images I have uploaded to Nextcloud do not have thumbnails when the image size is large. My phone takes photos at 200MP, so this constitutes a significant number of my photos. I've been researching the problem, and I think I need to set the following:

'preview_max_x' => null
'preview_max_y' => null
'preview_max_filesize_image' => -1
'preview_max_memory' => -1

I'm running Nextcloud on a Proxmox hypervisor with 32 cores and 128GB of memory, so I'm not concerned about using system resources; I can always allocate more. The issue I'm having is that I installed Nextcloud as a snap in Ubuntu Server. The last time I tried to use nextcloud.occ to change a configuration option, it set a string as an array, and triggered a bunch of php errors.

As far as I can tell, I need to do something like this:

sudo nextcloud.occ config:[something, maybe system]:set preview_max_x [some data goes here]

How do I format this so that nextcloud.occ inserts the variable into my php config properly? Any examples of a nextcloud.occ command would be very much appreciated.

I was a bit confused about this as well. Once Threads implements ActivityPub, what would federation with lemmy.world actually look like in practice? I understand how federation works between Lemmy instances, but how would a microblogging platform fit in? Would Threads users just be able to post to Lemmy, or would it somehow show up in a Lemmy community when a Threads user makes a post on Threads?

I’m not really understanding how two different kinds of services like Lemmy and Threads can interoperate.


You’ve gotten some good advice regarding VPNs, so I won’t go into that, but if you do decide to open SSH or any other port, I would encourage you to spend some time setting up a firewall to block incoming connections. I have several services on HTTPS open to the world, but my firewall only allows incoming connections from whitelisted IP ranges (basically just from my cell phone and my computer at work). The number of blocked incoming connections is staggering, and even if they’re not malicious, there is absolutely no legitimate reason for anyone other than myself or the members of my household to be trying to access my network remotely.
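
On pfSense this is just an alias plus a couple of WAN rules, but the same whitelist-then-drop idea expressed as plain Linux firewall rules would look roughly like this (the address range is a documentation-only example, not a real carrier prefix):

# keep established sessions working
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# allow HTTPS only from whitelisted ranges (example prefix)
iptables -A INPUT -p tcp --dport 443 -s 203.0.113.0/24 -j ACCEPT
# drop everything else aimed at HTTPS
iptables -A INPUT -p tcp --dport 443 -j DROP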


Lemmy bandwidth requirements
I've been considering the idea of hosting my own instance of Lemmy, solely for my own use, maybe with 1 or 2 family members using it as well. I've seen several discussions regarding the requirements for system resources, but not much regarding bandwidth.

I have an abundance of processing power, memory, and storage space in my homelab, but my internet connection is terrible. Not much available where I live. I have a 40/3 VDSL connection and a Starlink connection, but neither is particularly good in terms of upload. Seems like a VPS would be a good solution, but to me, that kind of defeats the purpose of self-hosting. I want to use my own hardware.

So, for a personal-use Lemmy instance, what kind of bandwidth is recommended? I know my connection would be fine for 1 or 2 users, but I'll admit I'm not entirely sure how servers sync with each other in a federated network, and I could see that using a ton of bandwidth.