Daniel Stenberg says the scores are “security misinformation”.

    • OptimusPrimeDownfall@discuss.tchncs.de · 10 up · 1 day ago

      The scores do fail, though: they don’t encompass enough information. They can’t, because something that is critical in one context (e.g., making shit up here, Java listening to the internet) might not be in another (e.g., Java running on specific scientific data in an airgapped environment). Security is always situation- and risk-appetite-dependent. No number can encompass all that.

      • 𝕸𝖔𝖘𝖘@infosec.pub · 6 up · 1 day ago

        No number can encompass all that.

        Maybe a combo number would get us closer. Still, the governing body must be completely impartial and logical in its rating. But we also have to reality-check the priority of the rating in our own environments. Using your example, a 10 rating might be a 1 for that airgapped machine: a judgement call.

  • treadful@lemmy.zip · 16 up · 1 day ago

    Well, CISA will probably be gone next month so no more need to worry about this.

  • tal@lemmy.today · 8 up, 1 down · edited · 1 day ago

    So, I think that there are at least two issues raised here.

    First, that CVSS scores may not do a great job of capturing the severity of a bug, and that this may cause the end-user or their insurer to mis-assess the severity of the bug in terms of how they handle the issue on the system.

    I am not too worried about this, because what matters here is how good their approach is relative to the alternatives. It doesn’t need to be perfect, just the best option available, and the alternative is probably having no information at all. The goal is not to perfectly harden all systems, but a best effort to help IT allocate resources. An end-user for whom this is insufficient could always do their own per-user, per-vulnerability assessment, but frankly, I’d guess that for almost all users, if they had to do that, they probably wouldn’t. An insurer can take into account an error rate on a security scoring tool – they are in the business of assessing and dealing with uncertainties. Insurers work with all kinds of data, some of which is only vaguely correlated with the actual risk.

    In the curl security team we have discussed setting “fixed” (fake) scores on our CVE entries just in order to prevent CISA or anyone else to ruin them, but we have decided not to since that would be close to lying about them and we actually work fiercely to make sure we have everything correct and meticulously described.

    Every user or distributor of the project should set scores for their different use cases. Maybe even different ones for different cases. Then it could perhaps work.

    The thing is that for the vast bulk of users, that per-user assessment is not going to happen. So the alternative is that their scanner has no severity information. I doubt that there’s anything specific to curl that forces that one number to be less accurate than for other software packages. I don’t think that other projects that use this expect it to be perfect, but surely it’s possible to beat no information. If an organization is worried enough about the accuracy of such a score, they can always do a full review of all identified vulnerabilities – if you’re the NSA or whoever and have the capability and need, then you probably also don’t need to worry about being misled by the score. Hence:

    The reality is that users seem to want the scores so bad that CISA will add CVSS nonetheless, mandatory or not.

    I mean, that’s because most of them are not reasonably going to be able to review and understand every vulnerability themselves and its implications for them. They want some kind of guidance as to how to prioritize their resources.

    If the author is concerned philosophically about the limitations of the system, to the point that they feel it damages their credibility to provide such a score, I’d think they could put up an advisory noting that the CVSS score is only an approximation and could be misleading for some users’ specific use cases.

    If someone wanted to come up with a more-sophisticated system – like, say, a multiple score system, something that has a “minimum impact” and “maximum impact” severity score per vulnerability, or something that has a score for several scenarios (local attacker able to invoke software, remote attacker, attacker on same system but different user), maybe something like that could work, but I don’t think that that’s what the author is arguing for – he’s arguing that each end-user do an impact assessment to get a score tailored to them.
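    A multi-scenario scheme like that could be sketched as a small record per vulnerability. This is a minimal illustration only; the scenario names and fields are hypothetical, not part of any real scoring standard:

```python
from dataclasses import dataclass

# Hypothetical per-scenario severity record for one vulnerability.
# Scenario names are illustrative, drawn from the examples above.
@dataclass
class ScenarioScores:
    local_invoker: float          # attacker able to invoke the software locally
    remote_attacker: float        # remote attacker over the network
    same_host_other_user: float   # attacker on the same system, different user

    def minimum_impact(self) -> float:
        """Best case across scenarios: the 'minimum impact' score."""
        return min(self.local_invoker, self.remote_attacker,
                   self.same_host_other_user)

    def maximum_impact(self) -> float:
        """Worst case across scenarios: the 'maximum impact' score."""
        return max(self.local_invoker, self.remote_attacker,
                   self.same_host_other_user)

# A bug that is bad for network-facing deployments but mild locally:
scores = ScenarioScores(local_invoker=2.1, remote_attacker=8.6,
                        same_host_other_user=3.4)
print(scores.minimum_impact(), scores.maximum_impact())  # 2.1 8.6
```

    Tooling could then report the range rather than one number, letting each deployment pick the scenario that actually matches it.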

    Second, that an excessive CVSS score assigned by someone else may result in the curl team getting hassled by worried end users and spending time on it. I think that the best approach is just to mechanically assign something approximate based on the curl severity assessment. But even if you don’t – I mean, if you’re hassling an open-source project in the first place about a known, open vulnerability, the right response is to say “submit a patch or wait until it gets fixed”. Even if the bug actually were serious, going to the dev team for support isn’t going to accomplish anything; they will already know about the vulnerability and will have prioritized their resources.

    Finally, looking at the bug bounty page referenced in the article, it seems like the bug bounty currently uses a CVSS score to size the reward. If curl doesn’t assign CVSS scores, I’m a little puzzled as to how this works. Maybe scores are only assigned to vulnerabilities reported through the bug bounty program?

    https://curl.se/docs/bugbounty.html

    The grading of each reported vulnerability that makes a reward claim is performed by the curl security team. The grading is based on the CVSS (Common Vulnerability Scoring System) 3.0.
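    For reference, CVSS v3.0 itself defines a fixed qualitative banding of the numeric base score, which is presumably what a bounty grading like this keys off. A minimal sketch of that mapping (the function name is mine):

```python
def cvss3_severity(score: float) -> str:
    """Map a CVSS v3.0 base score to its qualitative severity rating,
    per the ratings table in the CVSS v3.0 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss3_severity(8.6))  # High
print(cvss3_severity(9.1))  # Critical
```

    The bands come straight from the ratings table in the CVSS v3.0 specification: 0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical.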

  • corsicanguppy@lemmy.ca · 2 up, 6 down · 1 day ago

    “security misinformation”

    Or, actually, significant and consistent values that also happen to make you look bad today, so they must suck and be ditched.

    Did I get that right? SOUNDS right…

    • BestBouclettes@jlai.lu · 3 up · edited · 1 day ago

      Nah, the last few high-scoring CVEs curl got were really niche buffer overflows or potential security issues. He’s been very vocal about this. Yeah, it’s a bug, and usually an easy fix, but they scored like 8 or 9 on CVSS, which is disproportionate compared to a lot of other 8s and 9s.
      I can understand the frustration there.