I’m the administrator of kbin.life, a general-purpose/tech-oriented kbin instance.
Half right. I can speak to kbin->kbin/Lemmy federation, since that is what I’m doing here.
Yep, when you search for a remote community your instance will connect to the remote one to look for that community/group/user. If it finds it, it will show all the info about it. Kbin will also automatically subscribe you to it.
Once the remote instance has your subscription, it will send all interactions with that community to your instance from that moment on. This leads to some confusing behaviour: you’ll see a lot of new content start to appear, and sometimes old content, but never all of the old content. The reason is that if someone on another instance likes or comments on old content, your local instance can’t action that like (or at least show it) or comment without first fetching the chain of comments above it, all the way up to the post. That’s why you’ll see a mix of old and new content, but never all of it.
This is where things get even more confusing. Kbin DOES have boosts, plus upvotes and downvotes. So Mastodon boosts become boosts, Lemmy upvotes become upvotes, and Lemmy downvotes become... well, nothing right now. A lot of things are still work in progress, and even though these programs all speak the same protocol, there are concepts that are interpreted differently.
There’s also the standard, and then what’s generally used. I recently fixed an issue in kbin that surfaced on the rare occasions I had an interaction with “lotide” (another ActivityPub program). Lemmy and kbin (and probably Mastodon, I don’t know) always encode “to” and “cc” as a JSON array even if there’s just one element. But lotide doesn’t when there is only one item (often the case in “to”). That’s fine as far as I can tell from ActivityPub’s structure, but kbin expected an array. These kinds of things are going to be ironed out now that there’s far more content moving around: the errors will appear in our logs and will be gradually resolved.
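A defensive fix for that kind of mismatch is to normalize the field before processing it. A minimal Python sketch (the function name and the example payload are illustrative, not kbin’s actual code):

```python
def as_list(value):
    # ActivityPub allows a single value where an array is also valid,
    # so normalize "to"/"cc" to a list before iterating over recipients.
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

# lotide-style activity with a bare string in "to":
activity = {"to": "https://www.w3.org/ns/activitystreams#Public", "cc": []}
recipients = as_list(activity.get("to")) + as_list(activity.get("cc"))
print(recipients)  # ['https://www.w3.org/ns/activitystreams#Public']
```

Normalizing at the edge like this means the rest of the delivery code only ever sees a list, regardless of which implementation sent the activity.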
Possibly. I think Mastodon has been around a bit longer, though? Not sure why the old domain must stay up, unless they don’t store public keys of known instances and instead rely on DNS for the security.
e.g. Instance A signs a request; Instance B resolves Instance A via DNS lookup (as is normal), fetches the public key, confirms the signature, and allows the request.
Yes, although you might need to fudge keys if they’re properly enforced. Looking at kbin, I can see requests are at least signed with the private key. I’m not sure whether the public key is stored somewhere in the database, or pulled from the instance (using DNS as a security guarantor, I guess) every time.
I don’t have any subscriptions to them, but I have those 1000+ errors just from posts their users were involved in.
Re-federation is probably possible. BUT! You’re always going to have problems with older content. Case in point: my federation error count is at 2,300, and about half are failed requests to fmhy.ml.
So for re-federation what’s needed:
1: Remote instances should unsubscribe all users from any fmhy groups; they’re dead now. The fmhy admins can only announce that and hope remote admins do it. I reckon when remote instances’ errors start ramping up (as I saw yesterday) they will look into why, which should help them de-federate from the old URL.
2: The fmhy instance should unsubscribe all users from all remote groups while still identifying as fmhy.ml, but keep a note of the groups. Then, once configured for the new domain, re-subscribe to each one. The first step should hopefully stop remote instances trying (and failing) to federate new events to the old URL; the second should trigger federation with the new one.
3: They may be able to keep the DB. But I’m not sure where the old domain might be stored in the DB and what would need fixing there. I’m also not sure whether they’d need to regenerate keys, or whether remote instances would see the key was attached to the old domain and refuse to talk to the instance.
Now what’s going to be a problem? Well, ALL the existing content out there has references to users on the old domain. That’s VERY hard to fix: every instance would need to fix its database. Not worth it. But whenever someone likes/unlikes or comments on a post made from fmhy.ml, there’s a good chance a remote instance will queue up a retrieval of:
1: User info about the poster/commenter/liker
2: Missing comments/posts for a like/comment event
And those will fail and error log. I don’t think there’s a way around that aside from editing the whole database on every instance. Again, IMO not worth it.
Would be a nice federation feature if, provided you could identify with the correct private key, announce a domain change which would automatically trigger the above in federated instances, or at the very least some kind of internal redirect for outgoing messages.
I think this is right, but you’d need to do one of two things to pull it off. First off, if you’re doing it just for web traffic, having the nginx proxy put the original IP in a header and unpacking it on the other side is the smart move. Otherwise:
1: Route all your traffic on your side via the VPN, and have the routing on the VPN side forward the packets to the intranet IP on your side rather than doing DNAT on them.
2: If you want to route normal traffic over your normal link, you could do it with source routing on the router. You would need two subnets: one for your normal Internet and one for the VPN traffic. Set up source routing so that packets from the VPN IP addresses go via the VPN and the rest are NATed the normal way; then, same as before, the VPN on the cloud side forwards (not NATs) to your side of the VPN.
In both cases, SNAT should be done on the cloud side.
It’s a fiddly setup just to get the IP addresses, though.
I think the problem is people say Lemmy when they mean either. My cheat sheet:
Fediverse: a family of applications that are able to communicate with each other and provide various facilities.
Threadiverse: subset of the fediverse, any federated program providing content aggregation / forum facilities.
Lemmy: the main established application in the genre, providing a primarily content-aggregation and forum, Reddit-esque experience.
Kbin: a newer project providing the news aggregation and forum features, but also microblogging that works with Mastodon.
There are others I don’t know about, but they do exist.
Every instance is doing this, right now. When you post on one instance, every instance with a single subscriber to the community gets sent a copy.
On kbin, even the media is stored on the instance. It helps distribute the load: instances share posts between themselves, and each instance can then support many users.
In terms of “taking over” a community. Not so easy.
See, I could take [email protected] from my instance, do some SQL hacking and turn it into a local community. But, that would only work on my instance. Everyone else would still be following the original and the original would still exist.
For it to work it needs to be a co-ordinated community move.
Mods pick an instance with as much of the original data already federated as possible. They communicate the new home. People start subscribing, the old group is made read only with a message linking the new one.
To keep existing posts though other instances would also need to SQL hack. So adding some features to communicate and automate the SQL effort would be a nice thing.
You can see the hierarchy above, not below.
You might get a like. It looks like:
{
  "id": "https://lemmy.one/activities/like/7d0ef24f-755f-48dd-9b37-ea42041cb34e",
  "actor": "https://lemmy.one/u/Matt",
  "object": "https://lemmy.procrastinati.org/comment/146844",
  "type": "Like",
  "audience": "https://lemdro.id/c/android"
}
In the “object” property you see the comment. If you fetch that URL you’ll get, among the rest of the JSON for the comment:
"inReplyTo": "https://lemmy.world/comment/1269475",
And again
"inReplyTo": "https://lemdro.id/post/77457",
Now you have the hierarchy from the liked comment up to the post. Not the rest of the tree, but enough to render the comment in context.
The public links with post and comment in them are used specifically for federation, and can only really be used to build a hierarchy upwards. Walking through the use case will probably make this clearer.
Say I subscribe to a remote community from my instance. I won’t receive anything until an activity happens. Suppose the first activity I get is a Like for a comment I don’t have. Luckily, the Like message has a link to the comment URL, so I can fetch that. But how can I show a comment out of context? I can’t, but the comment JSON tells me what it is in reply to: maybe a post, maybe another comment. I can keep fetching up the tree until I reach the post. Once I have all that, I have something that can be rendered: the whole hierarchy from the liked comment to the post.
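That fetch-up-the-tree loop can be sketched in a few lines of Python. The fetcher is injected and the URLs are made up for illustration; a real implementation would do signed HTTP fetches with the ActivityPub Accept header:

```python
def fetch_ancestors(comment_url, fetch):
    # fetch(url) -> dict; injected so the walk can run against any backend.
    # Follow inReplyTo links upward until we reach an object with no
    # parent, i.e. the post itself.
    chain = []
    url = comment_url
    while url:
        obj = fetch(url)
        chain.append(obj)
        url = obj.get("inReplyTo")  # None once we hit the post
    return chain

# Tiny fake federation graph standing in for the remote instances:
objects = {
    "https://example.org/comment/2": {"id": "https://example.org/comment/2",
                                      "inReplyTo": "https://example.org/comment/1"},
    "https://example.org/comment/1": {"id": "https://example.org/comment/1",
                                      "inReplyTo": "https://example.org/post/1"},
    "https://example.org/post/1":    {"id": "https://example.org/post/1"},
}
chain = fetch_ancestors("https://example.org/comment/2", objects.__getitem__)
print([o["id"] for o in chain])
# ['https://example.org/comment/2', 'https://example.org/comment/1', 'https://example.org/post/1']
```

The resulting list is exactly the branch needed to render the liked comment in context, post at the bottom.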
That’s why it works that way. It’s probably also to save on DB queries: why query the DB for all the comments when, in the normal federation use case, the instance already has the comment(s) it wants by the time it reaches the post?
I don’t think I’d call it close to an exodus. But, really that doesn’t matter. It doesn’t matter to us if people are leaving reddit. What matters is that there’s enough people here to create a feed with interesting subjects that we can reply to, or we can create content and people will likely reply to it.
We’re at that critical mass now where the content isn’t really a problem. There’s plenty.
While we have that happening, over time, as Reddit does more corporately motivated rubbish to its users, they will be looking for alternatives, and the threadiverse should be a tempting one.
You don’t need to use NAT on IPv6. Most routers are based on Linux, and there you have conntrack.
With that you can configure, by default, outgoing-only connections just like NAT, and poke holes in the firewall for the specific ports you want.
Also, Windows and I think Linux use IPv6 privacy extensions by default. That means that while you can assign a fixed address and run services, the OS will use random IP addresses within your (usually) /64 allocation for outgoing connections. So people can’t identify you and try to connect back to your IP with a port scanner etc.
All the benefits of NAT with none of the drawbacks.
OK then I can see what’s happening and there needs to be filtering added (and not just by magazine as it allows now).
I think it’s pulling in random microblog posts by anyone that any user followed. In fact, you can confirm it easily by scrolling down on mine while not logged in. I mentioned I only followed elonjet, and sure enough there are some posts by elonjet a bit further down, but otherwise it’s just Mastodon users tagging groups or kbin users making a mistake.
You could make a report here suggesting an enhancement.
But what comes into the feed is anything posted on kbin as a microblog (potentially by mistake) or, from Mastodon, anything that tagged a community the instance is subscribed to.
It isn’t normal threads to groups that end up there.
Just looking at the current feed on my instance (you can see for yourself, you don’t need to log in: https://kbin.life/microblog), the first item is from a Mastodon user that tagged the [email protected] group, which is why it shows up: the instance takes content from there.
The next 2 items are clearly people on kbin.social that mistakenly (well, maybe deliberately, but I suspect not) created a post, not a thread, in the [email protected] group.
You can tell which is which: the Mastodon users are deliberately using hashtags and tagging groups with @, while the others are making a very Reddit-style post that turns up in the wrong place because they chose post, not thread.
But I think it’d be a nice option to show only followed people and not groups. Maybe a three-way toggle: People, Groups, All.
It kinda does. But the problem is the microblogs people post to the groups on kbin drown it out.
I followed elonjet on my instance. If I click Microblog on the main kbin page (NOT from within a group, since that filters to only the group’s microblogs) and scroll, I will find the elonjet updates along with all the other stuff people probably posted by mistake, by clicking + and then “add post” instead of “add thread” (probably because Reddit called them posts).
So it does work correctly. If you filter out all the microblogs that were probably posted by mistake.
Oh, and you can get directly to only that specific content by clicking your name in the top right, then Profile, then Followed, and choosing the followed user. You’ll see all their posts there.
The issue with salt is that it is stored with the password hash, so you’d pretty much get that information along with the password. It’s only designed to make sure the hash won’t be the same for the same password across users, not to make breaking the hash any harder on its own.
You could store it (and/or a pepper) wherever the password is actually checked, and splitting it like that would help. But I can’t imagine they’re doing that. It’s far more likely they’re encrypting the password and keeping the key off the database server, meaning an attacker needs both to get passwords.
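To illustrate the point about salt: it lives next to the hash and only stops identical passwords producing identical hashes. A quick Python sketch using the stdlib’s PBKDF2 (the salt size and iteration count here are illustrative choices, not anyone’s actual configuration):

```python
import hashlib
import os

def hash_password(password, salt=None):
    # The salt is random per user and stored alongside the digest;
    # it does not make cracking a single known hash harder by itself.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt1, hash1 = hash_password("password123")
salt2, hash2 = hash_password("password123")
print(hash1 == hash2)  # False: same password, different salts, different hashes

# Verification recomputes the digest with the stored salt:
print(hash_password("password123", salt1)[1] == hash1)  # True
```

An attacker who dumps the table gets both salt and digest together, which is exactly why salt alone doesn’t protect a weak password.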
I’m finding it hard to see how it would be more secure. If I understand what the other comment meant, they would have something like:
password123 = ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f
We’ll assume they pick 4 random pairs
3rd + 5th (sw) = 7865b7e6b9d241d744d330eec3b3a0fe4f9d36af75d96291638504680f805bfd
9th + 11th (13) = 3fdba35f04dc8c462986c992bcf875546257113072a909c162f7e470e581e278
2nd + 5th (aw) = f5fe88ee08735ae259265495a93c8de2b0eacfecf0cd90b5821479be3199fa8c
6th + 8th (od) = 32f30ea0e83c41a331c4330213005568675f7542c25f354ebb634db78cc30d12
Assuming all 128 7-bit character options are used, and ignoring dictionary or optimized attacks, the complexity of the full 11-character password is 7×11 = 77 bits (151,115,727,451,828,646,838,272 combinations). With just the password hash, that’s how many tries you need to exhaust every possible option, again without optimizing the process.
But for each of the pairs the complexity is only 14 bits, or 16,384 combinations, so it would take microseconds to crack all 4 pairs. With that information you’d get a password of ?as?wo?d1?3??? (the trailing ??? because we don’t know the length), and if they have used a common word or series of words you might have enough information to guess the rest. Even when brute forcing, you’ve removed a decent amount of complexity.
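To make that concrete, here is a toy Python brute force of one two-character pair. The alphabet (printable ASCII, 95 characters, slightly smaller than the 128 above) is an illustrative choice:

```python
import hashlib
from itertools import product

def crack_pair(target_hash, alphabet):
    # Exhaustively try every two-character combination; with a
    # two-character target the search space is only len(alphabet)**2.
    for a, b in product(alphabet, repeat=2):
        pair = a + b
        if hashlib.sha256(pair.encode()).hexdigest() == target_hash:
            return pair
    return None

alphabet = [chr(c) for c in range(32, 127)]  # 95 printable ASCII characters
target = hashlib.sha256(b"sw").hexdigest()   # the "3rd + 5th" pair above
print(crack_pair(target, alphabet))          # recovered in at most 95**2 = 9,025 tries
```

Each pair falls essentially instantly, which is the whole problem with hashing fragments of a password separately.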
Note: this is SHA256, and we’re ignoring salt here. Salt only increases complexity because you need to crack each user’s password individually and can’t really use rainbow tables etc.
Unless I misunderstood the idea. In which case, sorry about that.
In all likelihood it is encrypted in a database, and the interface for the phone operator only lets them enter what is said and confirm it (although I wouldn’t be surprised if some show the whole password).
Well, actually, if the popular communities weren’t concentrated on the larger instances, and were instead spread out, it would be less of a problem I think. But yes, at the peak of things I was averaging around 5 hits a second from lemmy.world alone on incoming federation messages.
I don’t think a separately run relay is the answer. I think perhaps the larger instances should run a separate server for outgoing federation messages, and perhaps redirect incoming federation messages to it too, so as to separate federation from the UI. If they don’t already, of course. That could go a long way towards making them harder to overwhelm.
Not wanting to talk to you because your text bubbles are the wrong colour? Sounds a bit cult-like in behaviour. I daresay I don’t know anyone like that. But I also don’t know too many people with iPhones any more, and I don’t really use text messages; for text I’m using WhatsApp or Signal (or, for techie types, maybe Matrix).
Not quite. Instances subscribed to a remote community are sent the information about new posts, comments etc., and they store it locally. So the existing content would survive on those instances; but since the home instance is the controller for all incoming content and distributes it back out, losing it would break the connection for new stuff.
There are manual steps an instance admin could take, to take it over. Probably it would need some agreement as to who takes it on.
Not sure how it is on Lemmy, but looking at the structure on kbin, I reckon you could (with a little SQL magic) convert the existing community to a local magazine without cloning, and then people could subscribe to the new version, or existing subscribers could also hack their SQL to change the ID to match the new instance and toggle the subscriptions.
On Lemmy, though, I think images are not cached locally, so you might lose those. Kbin by default will also download images/media locally.
Not sure this would happen enough to add formal functionality for it though.
Well, more specifically it is protecting against a specific form of data loss, which is hardware failure. A good practice if you’re able is to have RAID and an offsite/cloud backup solution.
But if you don’t, don’t feel terrible. When the OVH datacentre had a fire, I lost my server there. But so did a lot of businesses. You’d be amazed at how many had no backup and were demanding that OVH somehow pry their smouldering drives from what remained of the datacentre wing and salvage all the data.
If you care about your data, you want a backup that is off-site. Cloud backup is quite inexpensive these days.
I think this is probably the only way to do it. But they need to be curated by someone. The reason it can’t happen automatically is based on how federation works on lemmy and kbin.
That is, an instance doesn’t know about the communities another instance has available (it doesn’t even know about any other instances). Only when a user specifically searches for a remote community does it contact that instance and learn about it.
But this change could work in that someone on the instance can search out the various communities and create the merged group.
Of course when you reply you’d only reply to the community that post was from but actually that’s fine because anyone in the combined group would still see it.
Yeah, which is why I think storing remote user and instance public keys might be better. They could then be used to authenticate the migration request (it’d probably need to be an extension to the ActivityPub standard).
The biggest problem I see is that an instance doesn’t know about all the instances that hold data pointing to it. So how does it communicate the changes to everyone? The Mastodon way is probably the sensible way to do it, despite not supporting the loss-of-control-of-domain scenario.