No, but I know a bunch of passionate geeks who are doing it.
Open source software is safe because so few people use it that it’s not worth a hacker’s time to break into it (joking, but of course that doesn’t apply to server software).
Honestly, for some software this is the answer. The other thing with hackers is that it’s usually easier to trick an employee into giving you the master password than to find an obscure exploit in the codebase, though that does still happen.
IDK why, but this had me imagining someone adding malicious code to a project, but then also being highly proactive with commenting his additions for future developers.
“Here we steal the user’s identity and sell it on the black market for a tidy sum. Using these arguments…”
That’s the neat part, you don’t!
Lol, that’s literally me. They get me every time. I have to learn how to audit.
I don’t know how to audit code, but I can generally get by. For example, I use Aegis for 2FA OTP. How do we know it’s secure? Because I can see very clearly that it doesn’t have network access on Android and that it has never tried to request it.
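For what it’s worth, that kind of check doesn’t require reading any source: Android’s PackageManager will list the permissions an installed app requests. Here is a minimal Kotlin sketch of the idea, assuming it runs inside some Android app with a Context, and assuming Aegis’s package name is com.beemdevelopment.aegis (verify that yourself):

// Sketch: list the permissions an installed app declares in its manifest.
// Assumes we are inside an Android app, so a PackageManager is available.
// Throws PackageManager.NameNotFoundException if the package is not installed.
import android.content.pm.PackageManager

fun requestedPermissions(pm: PackageManager, pkg: String): List<String> {
    val info = pm.getPackageInfo(pkg, PackageManager.GET_PERMISSIONS)
    return info.requestedPermissions?.toList() ?: emptyList()
}

// Usage, e.g. from an Activity (package name assumed, not verified here):
// val perms = requestedPermissions(packageManager, "com.beemdevelopment.aegis")
// println("android.permission.INTERNET" in perms)
// If INTERNET is absent, Android will not let the app open network sockets at all.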
Ahh the old motte and bailey doctrine.
FOSS is superior even for an end user like me. It only fails when corporations are allowed to “embrace, extend, and extinguish” them.
You can get a good look at a T-bone by sticking your head up a cow’s ass but I’d rather take the butcher’s word for it.
There are people that do audit open source shit quite often. That is openly documented. I’ll take their fully documented word for it. Proprietary shit does not have that benefit.
Thanks Callahan!
And even when problems are found, like the Heartbleed bug in OpenSSL, they’re way more likely to just be fixed and an update released rather than, oh I dunno, ignored, compromising everybody’s security, because fixing it would cost more and nobody knows about it anyway. Bodo Möller and Adam Langley fixed the Heartbleed bug for free.
Wasn’t Heartbleed in the wild for 2 years though?
Yeah, but that just happens sometimes. With proprietary software you don’t even have the benefit of being able to audit it to see if the programmers missed something critical, you kinda just have to trust that they’re smarter than a would-be hacker.
I get that, I just caution that FOSS doesn’t automatically mean secure.
Nothing is 100% secure. FOSS is definitely more secure, all else equal.
Completely missing the point. Collective action is what makes open source software accessible to everybody.
You don’t NEED to be able to audit it yourself. It’s still safer than proprietary software every way you look at it.
While I generally agree, the project needs to be big enough that somebody looks through the code. I would argue Microsoft Word is safer than some small abandoned open source software from some Russian developer.
Ehmm, if nobody uses it, it kinda doesn’t matter if it’s safe. And for this example: I bet more people have had a look at the code of LibreOffice than MS Office. And I don’t think it sends telemetry home in its default settings.
I think they’re talking about OnlyOffice.
No, proprietary software is always potential malware and you have no weapon against it. Being able to audit is always better.
That’s true, but I’m not a programmer, and on a GitHub project with 3 stars I can’t count on someone else doing it. (Of course this argument doesn’t apply to big projects like LibreOffice.) With Microsoft I can at least trust that they will be in trouble, or at least get bad press, when doing something malicious.
I mean, if a GitHub project has only 3 stars, it means no one is using it. Why does safety matter here? Early adopting anything has risks.
This is kind of a false comparison. If it has 3 stars then it doesn’t even qualify for this conversation as literally no one is using it.
> With Microsoft I can at least trust that they will be in trouble
lol yeah if anybody finds out… something something NSA
As a packager, I totally relate to this: we generally don’t have the resources to follow the upstream development of the projects we rely on, let alone audit all the changes they make between releases. Open source software still has security advantages — we can communicate directly with the maintainers, backport security fixes and immediately release them to users, fix bugs that affect the distribution, etc. — but I agree that it’s not a silver bullet.
I don’t use the term “open source”. I say free software, because giving someone else control over your computing is unjust. The proprietor of the program has absolute control over how the program works, and you cannot change it or use alternative versions of it.
But someone does
Sure, someone knows how to audit code.
Whether that someone is inclined to do it for whatever random FOSS package / library / application / service / whatever is a different question.
There is a much higher chance that someone out of 7 billion people will audit open source code than that a corporation will do it, let alone make the findings public and fix them.
“given enough eyeballs, all bugs are shallow” …but sometimes there is a profound lack of eyeballs.
That’s exactly the problem with many open source projects.
I recently experienced this first-hand when submitting some pull requests to Jerboa and following the devs: as long as there is no money funding the project, the devs are supporting it in their free time, which means little to no time for quality control. Mistakes happen… most of them are non-critical, but as long as there’s little to no time and expertise to audit code meaningfully and systematically, there will be bugs, and some of those bugs may be critical and security-relevant.
Measured in the human-hours of work that go into it, it’s very expensive. I put in translations, highlighted bugs, and put up a Jerboa fork to help mitigate issues with the 0.18 Lemmy upgrade… if I were to do this kind of thing for work I’d bill 25 CAD per hour at the very minimum.
Even when you do have time. There have been “researchers” submitting malicious PRs and, when caught, acting like it’s no big deal. An entire institution even got banned from submitting PRs to the Linux kernel.
https://www.bleepingcomputer.com/news/security/linux-bans-university-of-minnesota-for-committing-malicious-code/
Well, I think in most of those big incidents people got caught. That means the concept kinda works well?
Regarding the earlier comment: I think companies have just started to figure that out. They/you can’t just take free libraries, databases, etc… If you’re a big tech company, you’d better pay a few developers or fund an audit to make those libraries safe. That is your way of contributing. Otherwise your big platform will get hacked because you just took some 15-year-old’s open source code.
Selection bias though. We don’t know how many have not yet been caught.
Agree. Hell, I wouldn’t be shocked if some corporations or even nation-state actors (e.g. the NSA) do this, in a much better/more professional manner, to ensure things like… backdoor access.
No hypothesis needed: https://en.wikipedia.org/wiki/EternalBlue can’t have been a one-off either.
Yeah, that was my thought. But more a dedicated program to do something similar with large FOSS projects.
They also have hardware/supply-chain intercept programs to install backdoors in closed-source appliances (e.g. Cisco firewalls).
So something similar but dedicated to open source PRs.
No, but someone knows how and does. If there’s something bad, there’ll be a big stink.
I don’t really think auditing is a compelling argument for FOSS. You can hire accredited companies to audit and statically analyse closed source code, and one could argue that marketable software legally has to meet different (and stricter) criteria that FOSS does not have to meet, since MIT, GPL, and BSD are AS IS licenses.
The most compelling argument for FOSS (for me) is that innovation is done in the open. When innovation is done in the open, more people can be compelled to learn to code, and redundant projects can be minimised (i.e. just contribute to an existing implementation rather than inventing a new one). It simply is the most efficient way to author software.
I’m probably wearing rose-tinted glasses, but the garage and bedroom coders of the past, who developed on completely open systems, moved the whole industry forward at a completely different pace than today.
LOL, only if by that weasel word “marketable” you mean “sold for business use along with a support contract and/or SLA.” Otherwise, proprietary software targeting consumers has just as many disclaimers as Free Software does.
(Also, I’m not even going to bother addressing the silly biased framing attempting to disparage Free Software as not marketable.)
Did you fabricate that CPU? Did you write that compiler? You gotta trust someone at some point. You can either trust someone because you give them money and it’s theoretically not in their interest to screw you (lol) or because they make an effort to be transparent and others (maybe you, maybe not) can verify their claims about what the software is.
It usually boils down to this: something can be strictly better but not perfect.
The ability to audit the code usually makes it strictly better than closed source. Though I’m sure an argument could be made about exposing the code base to bad actors, I generally think it’s a worthy trade-off.
This all or nothing attitude is boring.
No. The fact that they don’t fabricate the CPU doesn’t mean they should hand out their data to some corporation…
Trust has no place in computing.
“Trust has no place in computing” is a concept that we are still quite distant from, in practical terms.
But yeah, definitely don’t hand your personal information over to a corporation, even if they’re offering to take a lot of your money, too!