LLVM Security group public sync-ups

Meeting minutes for the 16th of April 2024 meeting

  • A proposal to make LLVM a CNA (CVE Numbering Authority).
    @tstellar has proposed steps to make LLVM a CVE Numbering Authority. The process seems relatively simple: filling in this form.
    • A main motivation for this is to be able to dispute CVEs raised by others against LLVM that are clearly outside of LLVM’s threat model.
    • Would becoming a CNA set the expectation that the security group would need to start raising CVEs for the issues it accepts as security issues? It was observed that, for quite a few toolchain-related security issues, there is no general consensus on whether a CVE should be created.
  • Do we need to learn something from the xz supply chain attack and take some action?
    • Is it OK to have so many binary files in LLVM’s source repository?
      • Having these binary files results in LLVM getting a lower OpenSSF score. See the OpenSSF scorecard report, under Binary-Artifacts, for a list of all binary files in the repository.
      • It seems most, if not all, binary files are test inputs. We could probably have fewer binary files (e.g. by making more use of yaml2obj), but we likely cannot remove all of them.
      • Would it be helpful, from a supply chain security point of view, to allow only a set of trusted users to touch these files?
        It is currently not possible to implement this restriction.
      • Another possibly useful restriction is to allow binary files only in test directories. If our build and package workflows then excluded test directories, the risk of a supply chain attack making use of binary files might be reduced.
        As a first easy step, it would probably help to state explicitly in the developer policy that binary files are only allowed in test directories.
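Such a policy could eventually be backed by an automated check. As a rough illustration (the allowed directory names and file contents below are made up, and the NUL-byte test is only a heuristic), a pre-merge script could flag binary files that live outside test directories:

```python
# Illustrative pre-merge check: flag binary files outside test directories.
from pathlib import PurePosixPath

# Assumption: directory names allowed to contain binary test inputs.
ALLOWED_DIRS = {"test", "tests", "Inputs"}

def is_binary(data: bytes) -> bool:
    """Rough binary detection: a NUL byte in the first 8000 bytes
    (the same heuristic git itself uses)."""
    return b"\x00" in data[:8000]

def in_test_dir(path: str) -> bool:
    """True if any path component is one of the allowed test directories."""
    return any(part in ALLOWED_DIRS for part in PurePosixPath(path).parts)

def violations(files: dict[str, bytes]) -> list[str]:
    """Paths of binary files that live outside any test directory."""
    return sorted(p for p, data in files.items()
                  if is_binary(data) and not in_test_dir(p))

# Made-up example contents:
files = {
    "llvm/test/Object/Inputs/sample.o": b"\x7fELF\x00\x01",  # binary, in a test dir: OK
    "llvm/lib/Support/Path.cpp": b"// plain source\n",       # not binary: OK
    "llvm/tools/blob.bin": b"\x00\x01\x02",                  # binary outside test dirs
}
print(violations(files))  # -> ['llvm/tools/blob.bin']
```

A real check would walk the files changed in a pull request rather than a hard-coded dictionary, but the classification logic would be the same.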
    • We currently cannot create an audit trail of who pushed code into the repository, because we allow direct push access without going through a pull request.
      For direct pushes, the only information on who pushed a particular commit is the committer email address in the git commit, which an attacker can very easily forge. No information is stored on which GitHub account pushed the commit.
      If we required pull requests for all commits, GitHub would record which account merged each pull request.
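      The point about forgeable committer identity is easy to demonstrate. The sketch below (assuming git is available on PATH; the name and email are invented) creates a throwaway repository and commits under an arbitrary identity:

```python
# Demonstration: the committer identity recorded in a git commit is
# self-reported, so a direct push proves nothing about who pushed it.
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given directory and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

with tempfile.TemporaryDirectory() as repo:
    git("init", "-q", cwd=repo)
    pathlib.Path(repo, "f.txt").write_text("hello\n")
    git("add", "f.txt", cwd=repo)
    # Nothing stops a pusher from claiming any identity they like:
    git("-c", "user.name=Trusted Maintainer",
        "-c", "user.email=maintainer@llvm.example",
        "commit", "-q", "-m", "innocuous-looking change", cwd=repo)
    committer = git("log", "-1", "--format=%cn <%ce>", cwd=repo).strip()

print(committer)  # -> Trusted Maintainer <maintainer@llvm.example>
```

      A pull-request-based flow avoids this because GitHub records which authenticated account performed the merge, independently of what the commit object claims.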
      We discussed why people like doing direct pushes. There seem to be two reasons:
      1. being able to quickly revert patches when they break bots.
        • We thought it wouldn’t be too hard to create a script that creates a pull request and merges it for you using the GitHub CLI.
        • Furthermore, if all commits are guaranteed to go through pull requests, the Revert button on the GitHub web interface can be used to create a revert easily. In other words, the main reason for using direct pushes for reverts would largely go away.
      2. The real or perceived overhead of just directly pushing in small changes.
        • As with reverts, a script that creates and merges a pull request for you using the GitHub CLI would remove most of this overhead.
        • We thought these direct pushes are also problematic because there is no record of who (if anyone) reviewed the changes, whether pre- or post-commit.
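      Such a helper script could be quite small. As a sketch (the branch name and flags are illustrative, and it assumes the GitHub CLI `gh` is installed and authenticated), it only needs to push a branch, open a pull request, and merge it:

```python
# Sketch of a "create and merge a pull request for me" helper built on the
# GitHub CLI. It only *plans* the commands; feed each list to
# subprocess.run(cmd, check=True) to actually execute them.
def pr_merge_commands(branch: str) -> list[list[str]]:
    """Commands to turn the current local HEAD into a merged pull request."""
    return [
        # Push the local HEAD to a new remote branch:
        ["git", "push", "origin", f"HEAD:refs/heads/{branch}"],
        # Open a PR, taking title/body from the commit message:
        ["gh", "pr", "create", "--fill", "--head", branch],
        # Merge it immediately (squash) and clean up the branch:
        ["gh", "pr", "merge", branch, "--squash", "--delete-branch"],
    ]

for cmd in pr_merge_commands("quick-revert"):
    print(" ".join(cmd))
```

      The dry-run structure is deliberate: printing the planned commands makes the sketch easy to inspect before anyone wires it up to really push and merge.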
  • Moving away from the Chromium issue tracker to a GitHub-based workflow.
    • @kbeyls pointed to a recent blog post highlighting that the GitHub-based mechanism for reporting security issues privately does not allow pulling people from outside the security group into the ongoing discussion on an issue. The LLVM security group often needs to do so.
    • Overall, we seem to have two plausible options: an email-based workflow (which is what most other toolchain-like projects, such as Rust, Swift, GCC, Go, etc., use), or the GitHub-based flow.
    • A drawback of the email-based flow is that it does not automatically create a unique ID for each reported issue. It would probably also require a bit more work to create transparency reports.
  • We checked the status of the open security issues and took a few actions to progress them.