I’m not gonna lie, I’m surprised it took this long for some dipshit to try something like this. Lemmy’s security has more holes in it than a piece of Swiss cheese and we’re fools if we think it’s viable enough for it to serve as a long-term home for new social media.
We really, really need a better social structure than federation.
Lemmy’s security has more holes in it than a piece of Swiss cheese
This has very little to do with security. There's nothing inherently "insecure" about posting CSAM, since the accounts and images were likely created and posted just like any others.
What really needs to happen is some sort of detection of that kind of content (which would likely require a large code change) or additional moderation tools.
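For illustration, here's a minimal sketch of what hash-based detection could look like. This is not Lemmy's actual code: the `is_blocked` function and the blocklist source are hypothetical, and a real deployment would use a perceptual hash (PhotoDNA, PDQ, etc.) so that re-encoded copies still match, whereas plain SHA-256 only catches byte-identical files.

```rust
// Hypothetical sketch, not Lemmy's implementation.
// Requires the `sha2` crate in Cargo.toml.
use sha2::{Digest, Sha256};
use std::collections::HashSet;

/// Reject an upload whose hash appears in a known-bad list.
/// `blocklist` holds lowercase hex SHA-256 digests; where that list
/// comes from is left entirely to the instance operator.
fn is_blocked(image_bytes: &[u8], blocklist: &HashSet<String>) -> bool {
    let digest = Sha256::digest(image_bytes);
    // Render the 32-byte digest as a lowercase hex string.
    let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
    blocklist.contains(&hex)
}
```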
The lack of those tools is what I was talking about
Ah okay, those aren't generally considered security, but I can understand why you went that route, I suppose.
Does anyone know why they were never put in?
Software development is a balancing act. You need to pick and choose not only what features to add, but when to add them. Sometimes mistakes are made in the planning and you get a situation like this.
What likely happened is that these kinds of features were deemed less likely to be needed, since the majority of Lemmy users will never run into them, and there is technically a way to handle the situation (nuking your instance's image cache). But you'll likely see a reshuffling of priorities if these kinds of attacks become more prevalent.
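For what it's worth, the "nuke the cache" workaround boils down to deleting every cached media file. A rough sketch, assuming cached images sit as flat files in one directory at a known path; the path handling here is hypothetical, since Lemmy actually stores media through pict-rs and the real cleanup depends on your deployment:

```rust
// Hypothetical sketch: wipe every cached image file under `cache_dir`.
use std::fs;
use std::io;

fn nuke_image_cache(cache_dir: &str) -> io::Result<()> {
    for entry in fs::read_dir(cache_dir)? {
        let entry = entry?;
        // Only remove regular files; leave subdirectories alone.
        if entry.file_type()?.is_file() {
            fs::remove_file(entry.path())?;
        }
    }
    Ok(())
}
```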
I think you misspelled "moderation tools". A nice quick fix would have been to block posts from new users on instance X and pin a post briefly covering why - they'll eventually run out of instances that don't have open signups, IMO, or just give up. A sketch of that idea is below.
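Roughly like this, assuming you can see each poster's home instance and account creation time. All names here are made up for illustration, not Lemmy's API:

```rust
// Hypothetical sketch of the "hold back new users from instance X" fix.
use std::collections::HashSet;
use std::time::{Duration, SystemTime};

struct NewUserFilter {
    /// Instances under attack (e.g. ones with open signups).
    restricted_instances: HashSet<String>,
    /// How old an account must be before it may post.
    min_account_age: Duration,
}

impl NewUserFilter {
    fn allow_post(&self, home_instance: &str, account_created: SystemTime) -> bool {
        if !self.restricted_instances.contains(home_instance) {
            return true; // not a restricted instance: no extra checks
        }
        match account_created.elapsed() {
            Ok(age) => age >= self.min_account_age,
            Err(_) => false, // clock skew: err on the side of blocking
        }
    }
}
```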
Another mod-tools option would be rate limiting posts, i.e. users can only make a new shitpost every 10-15 minutes, rather than unlimited times per minute - see the sketch below.
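The core of that check is tiny: a per-user cooldown over an in-memory map from user to last-post time. Names are hypothetical; a real version would persist the state and account for federated posts:

```rust
// Hypothetical sketch of per-user post rate limiting.
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct PostRateLimiter {
    cooldown: Duration,                  // e.g. 10-15 minutes
    last_post: HashMap<String, Instant>, // user id -> time of last post
}

impl PostRateLimiter {
    /// Returns true and records the post if the user is off cooldown.
    fn try_post(&mut self, user: &str) -> bool {
        let now = Instant::now();
        let allowed = match self.last_post.get(user) {
            Some(&last) => now.duration_since(last) >= self.cooldown,
            None => true, // first post from this user
        };
        if allowed {
            self.last_post.insert(user.to_string(), now);
        }
        allowed
    }
}
```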
Those are all fundamental aspects of Lemmy’s security that should be there but are not