LLVM Security group public sync-ups

We’ve been running monthly sync-up calls on the topic of the LLVM security group for a while. With this post, I’m moving the meeting minutes from a Google doc to Discourse, for the same reasons we did so for the Pointer Authentication sync-up calls.

Please find a copy of the meeting minutes from past sync-ups below.

When: Tue May 17th 2022. 8pm CET/11am PDT; length: 1 hour.

Agenda topics:

  • Discuss no longer canceling meetings when no agenda has been recorded beforehand.
  • Moving minutes doc to Discourse?

Minutes:

  • We’re dropping the rule that there must be a recorded agenda for the meeting to take place.
  • Most people attending agree that moving this document to Discourse is a good idea.
  • A discussion on potential supply chain attacks on LLVM.
    • Rust requires code review for all commits, partly for this reason; LLVM does not.
    • Other companies also add code review requirements.
    • It’s probably easier to attack small and less active projects than LLVM.
    • The biggest risk may lie in an attacker achieving remote code execution on a developer machine. A developer could take precautions against this risk by running LLVM builds and tests in a sandbox.
    • Is it possible to push commits to the LLVM repository with a different author and committer than your GitHub account?
    • Moving the LLVM project to use pull requests rather than allowing direct push commits could make some forms of supply chain attack harder.
  • Rust has been working on improving their process to disclose security vulnerabilities. The work in progress can be found at https://github.com/rust-lang/rfcs/pull/3259.

When: Tue April 19th 2022. 8pm CET/11am PDT; length: 1 hour.

Minutes:

  • Are there any security group people going to EuroLLVM? (Only Kristof on the call is planning to go.)
  • With only security group people on the call, we discussed the current reported security issue.

When: Tue March 15th 2022. 7pm CET/11am PDT; length: 1 hour.

Minutes:

  • In the first half hour only LLVM security group members joined the call. We used the opportunity to discuss an issue under embargo.
  • A person not part of the LLVM security group joined later. We obviously stopped discussing the issue under embargo. We answered questions the person had on how to join the group.

When: Tue January 18th 2022. 8pm CET/11am PDT; length: 1 hour.

Agenda topics:

Minutes:

When: Tue December 21st 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics:

  • Bidi/issue 11 retrospective (Pietro)
  • Security group transparency report (Kristof)

Minutes:

  • Bidi/issue 11 retrospective
    • How did things go; are there things we can improve?
    • Perhaps too few people on the security group are full-time focused on security items?
    • There are a lot of people in the group, so getting consensus takes time and is harder. Does everyone look at all reported issues? Can everyone participate on issues?
    • We probably should have communicated better with the rest of the community when issue 11 was no longer under embargo. We did communicate on llvm-dev though that a clang-tidy check was created.
    • (Taking inspiration from the Rust community:) Having a coordinator for each issue is valuable. We didn’t explicitly assign someone as a coordinator. Can we improve the group’s process to decide on a coordinator for each issue?
    • Using a secure chat room could help with more quickly communicating between security group members.
    • Should we set an expectation of responsiveness from members?
    • Possible responses to a security bug: not a security bug; a security bug that must be fixed under embargo; or a security bug that doesn’t necessarily need a fix under embargo.
    • Be cautious about coming too quickly to a conclusion on whether a reported issue is a security issue or not. It’s important to have all voices heard, and it’s OK to not have an opinion. Should we set a clear bar on how to decide whether something is considered a security issue or not?
    • Potential actions:
      • We should refine the process to set a bar of a minimum number of security group people agreeing before we lift embargo. To be discussed further within the security group.
      • Should we explore a chat group for the security group?
      • Do we need a way to announce to end users (not just developers) of LLVM when a security embargo is lifted?
  • Kristof will write a short transparency report and share it for review initially with the security group. The plan is to publish it on the LLVM documentation site, referring to it from LLVM Security Group — LLVM 18.0.0git documentation

When: Tue November 16th 2021 - if not clashing with the LLVM dev meeting. 8pm CET/11am PDT; length: 1 hour.

Cancelled due to clash with LLVM dev meeting.

When: Tue October 19th 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics:

Minutes:

When: Tue September 21st 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics:

Minutes:

When: Tue August 17th 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics:

Minutes:

Present: Pietro Albini, Nikhil G, Paul Robinson, Kristof Beyls, (joining later: Shayne Hiet-Block)

  • For voting in new members, maybe it makes sense to require fewer approvals than a majority when the candidate is a representative of a well-known company in the field?
  • Are there reasons why existing members aren’t voting quickly or asking questions quickly?
  • Nikhil G: Has there already been a threat model defined so that it’s clear what is considered a security issue and what is not a security issue?
  • Nikhil G: static analysis on top-of-tree. Is this coming?
    • The security group is a place to report issues to, not a place to pro-actively find new potential security issues.
    • Some groups run e.g. Coverity on LLVM downstream and find they need to spend a lot of time fixing Coverity-reported issues.
  • We should document what a new member of the group needs to do to get started, such as:
    • Subscribe to the “private discussion medium”, i.e. the Google group.
    • Be added to the Chromium bug tracker.
  • Should Kristof explicitly add all security group members to the meeting invite?
    • DONE: Kristof to ask security group members if they want to be explicitly on the meeting invite.
  • Should we add more or all security group members to the admin role on the google group and bug tracker?
  • Every 6 months a report should be published by the group summarizing security reports - so far we have not produced such reports on time.

When: Tue July 20th 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics: None

Minutes:

  • No agenda topics were raised; the meeting was cancelled.

When: Tue June 15th 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics: none

Minutes:

  • No agenda topics were raised; the meeting was cancelled.

When: Tue May 18th 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics: none

Minutes:

  • No agenda topics were raised; the meeting was cancelled.

When: Tue April 20th 2021. 8pm CET/11am PDT; length: 1 hour.

Agenda topics:

Minutes:

  • review of 5 - llvm - LLVM special issue tracker - Monorail
    • Folks did get the email; it was mentioned in the last call. These calls are public, so it wasn’t appropriate to talk about the embargoed bug on that call.
    • Someone had signed up to respond, but Matt Riley happened to get there first.
    • Kristof felt reservations about chiming in because there wasn’t an obvious framework for how to engage; there was no example of how to do this.
    • Matt Riley to write a short outline on response. Outcomes to avoid, broad timeline, statement of trust/responsibility, where to go for help.
    • Peter Smith points out we should acknowledge receipt. Then start the discussion among the team to say whether someone can or can’t pick it up.
  • Matt Riley comments
    • “14 weeks” disclosure?
      • Intent to match GPZ and industry, but want to avoid ambiguity around calendar vs. business days
      • It’s always calendar days.
      • GPZ is moving to a period divisible by 7 to make it more obvious
    • Making sure a fix is in a release before publishing?
      • The best we can do is start a conversation with the release manager (for LLVM proper) to spin a bugfix release.
      • There are also questions of severity and impact: spin an entire new release for a small bug? Perhaps not.
      • Many users of LLVM build their own releases snapshotted from upstream but not necessarily at released versions.
      • Our opportunity here is to think about things we would be sad not to have in the moment during response. But proactively building out a response process without having a lot of data points runs the risk of us building out the wrong thing.
      • One communication channel we do provide is allowing vendors to sign up for the LLVM security group.
  • ⚙ D100873 [docs] Describe reporting security issues on the chromium tracker.
    • some questions came up while writing this
    • We’re not using GitHub security advisories because of ACL issues. But should we move/copy resolved issues over from Chromium to GitHub?
      • This could be release notes or a separate security fixes doc.
      • Good to have one place with resolved issues?
      • LLVM bug tracking currently in flux – moving to GitHub.
      • Maybe this isn’t ripe for now? We can reconsider after LLVM moves their bugs.
      • We can wait for vendors to ask.
    • We can have an ad-hoc way of people requesting an LLVM bug or GitHub security advisory.
    • Opportunity in this doc to highlight that the security group exists.
    • Decision here: we are going to publicize the llvm-security@googlegroups.com address and document it as responsible for LLVM security more broadly. TODO: check with folks in the group that they’re okay with this.

When: Tue Mar 16th 2021. 7pm CET/11am PDT; length: 1 hour.

Agenda topics: none

Minutes:

  • No agenda topics were raised; the meeting was cancelled.

When: Tue Feb 16th 2021. 8pm CET/11am PST; length: 1 hour.

Agenda:

  • Call for extra agenda topics
  • Looking at https://llvm.org/docs/Security.html, it seems that the following areas need decisions or actions to get the group’s processes fully defined:
  • AOB

Minutes:

  • Short-term reasons why we can’t use github.
  • No-one on the call opposed to using Chromium.
    • Ahmed happy to write a proposal
    • Can the foundation reach out to GitHub about the missing security reporting features?
  • “Keeping membership current” seems to be a non-issue: no problem so far. Maybe just update the documented process.
    • What we’ve done: people who want to get off the list to say so.
    • Is it clear who to contact when someone wants to join?
    • We might get rid of prescriptive statement of “review every 6 months” with long list of criteria; replace with aspirational statement “aim to review every 6 months”?
    • Shayne happy to write a proposal for adapting the process document.
  • Aaron: is there a need for contact info for security group membership?
    • Maybe not having this is only a problem if members are not well known?
  • Transparency report:
    • Ahmed happy to prepare the transparency report.
    • Ed happy to help with that.
  • Security issues become public:
    • Why 14 weeks? 90 days seems a reasonable number for most vulnerabilities, but there are exceptions; there should be an ability to adapt.
    • Should we make sure the fix is in a release? In a point release? A 6-monthly release cycle doesn’t seem to align with 90-day disclosure.
    • In the short term, we could update the process to say “we aim for 14 weeks, but can make judgement calls”.
  • Github security policy: could someone investigate this and make a proposal? Ahmed willing to make a proposal.
  • Suggestions agenda items for next time:
    • To think about: do we as the LLVM security group want to create CVEs when appropriate?
    • The scope of the group is left wide open. But can we exclude incubator projects that don’t want to opt in to the security group/process?
  • Yes, let’s have future meetings. Regular meeting; do a call for topics a week before and cancel if no agenda? Kristof to set that up.

When: Thu Oct 8, 2020 11:35 AM - 12:10 PM - round table

Where: Whova Community Web App

Intro:

Let’s discuss the current status and next steps for the LLVM security group, which has recently been formed. See LLVM Security Group — LLVM 18.0.0git documentation.

There are quite a few “FUTURE” markers in that document that need to be defined, so we could talk about how to get those defined. But other topics related to the security group are also in-scope for this round table.

Minutes:

An issue was reported in the wild (stack clash protector related). Serge (the author) implemented a fix after the issue was published publicly. Should that issue have been discussed privately in the security group?

  • Once an issue is publicly visible, it typically is worse to try and keep it private - but it always needs judgement (e.g. years-old bugs that are ignored and later discovered to have a security impact should be considered as “not publicly disclosed yet”).
  • How do (or should) we actually report a security issue?
    • GitHub tooling might work, but the tooling is not there yet, it seems (Matthew’s experience). E.g. the Chromium bug tracker has more of the needed features.
    • Matt proposes straightforward email: a low barrier to entry, unlikely to break, and with reasonable operational security.
    • The board was not interested in being involved in running the security group. The security group should be competent in managing its own security infrastructure needs.
    • In the Rust community: email with a shared GPG key is used for reporting. For the LLVM group (whose membership changes more often), this may be harder. Sometimes reports are encrypted, sometimes not.
    • Experience with GitHub issues: access control is tricky (e.g. admins are always in the thread). As soon as an issue is made public, it is locked, so it is not possible to continue having private comments. For mail servers, the Rust mail infrastructure is used.
    • Matt: Google groups exist and he would trust that. Administrators need a Google account, others don’t. Would this be too much of a barrier for the LLVM security group?
    • Matt, on demanding GPG: the actual increase in security is minimal and it adds barriers. Just email is probably better; or an https web form (running on llvm.org).
    • This needs to be discussed further by the LLVM security group (let’s leave some time for other topics in this round table).
    • We should also reach out to GitHub, reporting the shortcomings we found in GitHub’s security tooling. Google is already reaching out.
  • What is in scope? What counts as a security issue? Should we provide some guidance?
    • Right now it’s wide open. That is on purpose, because the security surface will presumably evolve over time; e.g. the level of security maintenance the LLVM community wants to invest in specific things may evolve.
    • As an example, on the Rust project there are no guidelines. The security group triages reported issues and encourages reporting anything that could plausibly be a security issue.
    • Guidelines we could post for self-triaging: a list of things we do not consider a security issue (an example could be a bug that merely causes clang to crash). So try to define what is not a security issue rather than what definitely is. Then we can discuss what would be needed for specific things to be considered a security component.
    • Example of exclusion: arbitrary attacker-controlled IR is not supported?
    • What is and is not considered a security issue should be discussed publicly and refined over time.
    • That said, fully reasoning on a public mailing list about the boundaries of what is security sensitive is not ideal; it is best to make it possible to have some of that discussion in a smaller group (the discussion itself may be sensitive).
    • We could improve the process by testing it with a “fake” security issue: one of the security group members reports a fake issue and goes through the process as if it were real, to pipe-clean the process.
  • Do we need some form of rotation to ensure a level of SLO? It needs to be obvious who will take action, i.e. who will be the champion.
    • Either we need a rotation, or we need to remove commitments (e.g. time scales on which actions will be taken).
    • The champion here is the person responsible for chasing up progress, not necessarily making progress themselves.
    • There is no precedent for this kind of SLA/SLO in the LLVM community.
    • E.g. in Rust, expect a response within 24-48 hours. If you don’t get a response, there are guidelines on how to escalate in other ways.
    • Pietro is going to propose an update to the security process along these lines (see https://reviews.llvm.org/D89068).
  • Kristof to send an email to the security group with this report.
  • We need to be able to have a regular kind of sync-up.
  • Quickly being able to do a video call to discuss incoming security issues seems useful.

Identified actions:

  • Define how to report security issues (includes setting up a communication channel for the security group). Matthew Riley already started taking the lead on this.
  • Define SLO and escalation path for first reaction on a security report. Pietro Albini already taking action, see ⚙ D89068 Add expected response time and escalation path to the security docs.
  • Kristof to share this report of the round table more widely.
  • Someone to set up a regular kind of sync-up to continue making progress (Kristof willing to organize this).
  • The security group should be able to have a way to quickly organize a call between themselves when needed (Kristof willing to look into what the options are for this).

We’ve just finished the June instance of this regular sync-up. Here are the meeting minutes:

  • We checked the status of the open security issues.
  • The question was raised whether we should update https://llvm.org/docs/Security.html#what-is-considered-a-security-issue, as from time to time an issue is reported for which the security group concludes it’s not considered a security issue.
    • Currently there is not a lot of overhead in dealing with reported issues for which the security group concludes they do not consider it a security issue.
    • If we were to update the text, considerable effort would probably need to go into getting the wording right.
    • At the moment, it seems there isn’t a strong need to improve the wording.

Meeting minutes for 16th of August meeting

  • We checked and updated the status of the open security issues.
  • It might be a good idea to meet up in person at the upcoming US dev meeting. We’ll consider organizing a round-table on the security group.

Meeting minutes for the 20th of September meeting

  • We checked and updated the status of the open security issues.
  • Kristof submitted a proposal for a round table at the upcoming US dev meeting.
    • Kristof will try to make it possible to attend the round table remotely.

Meeting minutes for the 18th of October meeting

  • We checked and updated the status of the open security issues.

Meeting minutes for the 15th of November meeting

Meeting minutes for the 20th of December meeting

  • We checked the status of the open security issues and took a few actions to progress them.

Meeting minutes for the 17th of January 2023 meeting

  • We talked about the need to write and publish a 2022 transparency report. This has since been done, see LLVM Security Group Transparency Reports — LLVM 16.0.0git documentation.
  • We had a security researcher join the call to ask a few questions about what the focus of the LLVM security group is.
  • We checked and updated the status of the open security issues. We explored what actions were needed by who to make sure necessary progress is made.

Meeting minutes for the 21st of February 2023 meeting

  • We talked about the implementation of a stack clash mitigation targeting AArch64. While the security group neither owns nor necessarily coordinates all security hardening implementations, a few people on the call could give some background on the state of things. A number of patches adding stack clash mitigation for AArch64-linux have been in review for a long time. Similar mitigations have already been implemented in the Xcode compiler. An engineer from Arm will soon pick up the patches in review and complete them.
  • 30 - llvm - LLVM special issue tracker - Monorail is now public - there was just a short mention that there is now one extra patch to further improve the mitigation that has been implemented, which needs reviewers, see ⚙ D143637 StackProtector: add unwind cleanup paths for instrumentation.. For more details: see the issue tracker.
  • There was a question from an engineer working on a product that embeds LLVM. They asked if it was better to follow top-of-trunk or the LLVM releases, from a point of view of picking up security issue fixes. The security group members on the call clarified that there is no defined process on whether security fixes - if implemented - go on top-of-tree, on release branches or both. Typically, when security fixes do get implemented, they do get implemented on at least top-of-tree. We also pointed to @pogo59 and @sqlbyme’s dev meeting talk with advice on pros and cons of following upstream top-of-trunk versus upstream releases, see 2015 LLVM Developers’ Meeting: P. Robinson & M. Edwards "Living Downstream Without Drowning" - YouTube.
  • One of the security group members announced that they will leave the group and propose for another person from their organization to become a member of the security group. We continue to follow our documented process to do so, see LLVM Security Group — LLVM 17.0.0git documentation.

Meeting minutes for the 21st of March meeting

  • There was some confusion as to whether the meeting time had shifted or not with daylight savings time. The time on the Discourse thread and the meeting invite in the LLVM community calendar were inconsistent. That has hopefully been fixed since.
  • We checked the status of the open security issues and took a few actions to progress them.

Meeting minutes for the 18th of April 2023 meeting

  • We had one person joining who was interested in what the LLVM project did to help with supply chain security, given how much software depends on or is built with LLVM. The people on the call didn’t think there was a comprehensive answer to this question. Nonetheless, a few points were raised related to supply chain security:

  • Peter shared that there is a discussion on the binutils mailing list about adding a SECURITY.md file to the project, describing how that project defines which issues are considered security issues and which are not. The discussion roughly suggests a similar scope for security issues in binutils as what the LLVM security group has defined so far at LLVM Security Group — LLVM 17.0.0git documentation.

    • One of the points in the binutils thread suggests that many CVEs filed against the project don’t seem to carry more security risk than regular bug reports. Because corporate processes demand treating CVEs with high priority, they can consume a lot of effort. When many of the CVEs are not actually security critical, that’s not a good use of a project’s limited resources.
  • There was a report that some of the fixes for issue 30 - llvm - LLVM special issue tracker - Monorail caused a significant code size increase. ⚙ D147975 [StackProtector] don't check stack protector before calling nounwind functions tweaks the heuristics, resulting in less code size overhead.

  • We checked the status of the open security issues and took a few actions to progress them.

Meeting minutes for the 20th of June 2023 meeting

  • We discussed potentially migrating the security issue reporting mechanism from the Chromium issue tracker to the mechanism GitHub recently rolled out. Overall, there seems to be a preference to move to the GitHub-based mechanism, but we’d first want to explore whether it has any significant downsides.
  • We started exploring whether we need to do anything about keeping track of new releases of the LLVM project’s dependencies, and whether those new releases fix any security vulnerabilities. @kbeyls took an action to explore what GitHub-integrated mechanisms exist to automatically track new releases of dependencies with fixed security vulnerabilities.
    • At first sight, after the meeting, it seems that GitHub does offer such a service. It may even be possible to have pull requests created automatically to update the versions of dependencies.
  • We checked the status of the open security issues and took a few actions to progress them.

Meeting minutes for the 15th of August 2023 meeting

  • The CVE database shows a number of recent CVE entries against LLVM, see Llvm : Security vulnerabilities, CVEs. These have not been reported to the LLVM security group, but rather directly as public bug reports.
    It seems that if these had been reported to the LLVM Security Group, they would probably all have been categorized as “not a security vulnerability”.
  • There have been a few questions about whether the LLVM project should use GitHub Dependabot to monitor which of its dependencies have known security vulnerabilities. Currently, Dependabot reports a number of known vulnerabilities in the (older) versions of some Python packages that parts of LNT, zorg, test-suite and MLIR depend on. For some of these there is activity to update the dependencies; for others there isn’t any visible activity. One example of visible activity is 49 - llvm - LLVM special issue tracker - Monorail
    • After moving to GitHub pull requests, we could also let Dependabot automatically create pull requests that update dependency versions, to avoid security vulnerabilities in older dependencies.
    • This could be a useful topic to discuss more at a “security group” round table at the upcoming LLVM dev meeting in San Jose.
  • The above two items raise the related question of whether the LLVM security group should get involved at all with raising, maintaining or monitoring CVEs against LLVM. CVEs could be a valuable mechanism to communicate to users of LLVM which versions have which known security vulnerabilities.
    • This may also be a good topic to discuss further in depth at a round table at the upcoming LLVM dev meeting.
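The Dependabot version-update monitoring mentioned above is driven by a checked-in configuration file. A minimal sketch follows; the directory path and schedule here are assumptions for illustration, not the project's actual configuration (security-only update alerts are toggled separately in repository settings):

```yaml
# .github/dependabot.yml -- hypothetical sketch, not LLVM's actual config
version: 2
updates:
  # Watch Python requirements (e.g. for LNT/zorg-style tooling) weekly
  - package-ecosystem: "pip"
    directory: "/"            # assumed location of a requirements file
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5   # cap the number of simultaneous update PRs
```

With such a file in place, Dependabot opens pull requests bumping outdated dependency versions, which addresses the "automatically create pull requests" idea raised in the meeting.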

Meeting minutes for the 19th of September 2023 meeting


  • @kbeyls submitted a proposal for a “security group” round table at the upcoming dev meeting. Everyone present at the dev meeting and interested in topics related to the LLVM security group, please come to the round table.
  • We checked the status of the open security issues and took a few actions to progress them.

Meeting minutes for the 17th of October 2023 meeting

  • Present: @smithp35 @ahmedbougacha @tstellar Shayne Hiet-Block
  • Could and should LLVM become a CVE numbering authority?
    • A number of CVEs have been raised against LLVM and MLIR recently that the LLVM security group would probably categorize as “not a security issue”. These CVEs do cause extra work for some organizations, e.g. when they have contracts that require them to fix every reported CVE irrespective of severity. Would being a CVE numbering authority enable us to dispute LLVM CVEs that, according to LLVM’s definition of what is considered a security issue, are not actually security issues?
    • At first sight, becoming a CVE numbering authority looks like a good idea. It will require people from the companies that would benefit to volunteer for the extra work on the LLVM security group that this will generate.
    • Some of the CVEs mentioned are: NVD - CVE-2023-29932, NVD - CVE-2023-29933, NVD - CVE-2023-29934, NVD - CVE-2023-29935, NVD - CVE-2023-29942.
  • Moving reporting LLVM security issues from Chromium’s bug tracker to Github
    • GitHub admins can see all details of every reported security issue. Before we can move to GitHub’s mechanism for reporting security issues, we’d need a defined process for who can become an admin of the LLVM organization and how, so that it remains a small group of highly trusted people.
  • @kbeyls will shortly post a PR to improve the wording of LLVM’s definition of what are considered security issues.
    • This PR has been created since the meeting, see PR69436
  • @kbeyls has posted a pull request to document github user names of people on the LLVM security group. Some people are not code contributors to the LLVM project. This is not a problem at all - they bring expertise to the LLVM security group that many code contributors may not have.

Meeting minutes for the 21st of November 2023 meeting

  • We talked a bit about potential improvements to the definition of “what is considered a security issue?” for the LLVM project. There are a few ideas on how to improve it, but we’re seeing little to no questions/demand for improving this section further.
  • We decided to cancel next planned meeting on the 19th of December. The next online sync-up will be on January 16th.

Meeting minutes for the 16th of January 2024 meeting

  • @tstellar shares that he has been going through the OpenSSF best practices checklist for security in open source projects, and came with a few questions:
    • Does LLVM use cryptographic technology?
      • We know LLVM uses MD5, but that may well not count as cryptographic technology. We couldn’t come up with an example of cryptographic technology used in LLVM, but then again, the people on the call don’t know the whole code base.
    • For known/published security issues, do we track which release each was introduced in and which version fixed it?
      • The answer is that we don’t.
      • @kbeyls states he was planning to raise this as an issue in this meeting too. Experience with toolchain-related security disclosure processes highlights that there are security issues that are not universally agreed to be worth a CVE, yet there are good reasons to communicate widely to users of LLVM that they exist and/or what actions to potentially take. A typical example is when a limitation or shortcoming is found in the implementation of a security hardening feature (e.g. stack canaries).
        • On the call, there seemed to be consensus that it would indeed be good to have a mechanism to easily share with consumers of LLVM the full list of known security issues, which versions are affected and which versions fix them.
        • Keeping such a list up to date will require support from general members of the LLVM community, similar to what is needed for keeping release notes up to date.
        • One way this could be implemented is an agreed label on GitHub issues (e.g. “security” or “known-security-issue”) to enable automatic generation of the list of known issues.
      • Tom shares that he is going through the check list of openssf recommendations to help make sure that we do CI, administration, etc well from a supply chain security point-of-view.
  • The LLVM security group has been notified that the Chromium bug tracker will be shut down this quarter. We have used the Chromium bug tracker to report security issues so far, as it was the quickest and easiest way to set up confidential reporting when the security group was formed.
    Now we need to find another system to migrate to. The obvious “other system” would be something on GitHub. The main requirements probably are:
    • If we use github’s security issue reporting mechanism, it seems like all admins and owners of the github organization and repo(s) will see all reported security issues. That shouldn’t be a huge issue because admins and owners already need to be highly trusted individuals for other reasons.
    • We’d like to make sure that everyone on the LLVM security group is clearly notified of every new security issue and every update to such an issue. It is unclear whether it’s possible to easily filter out just the security notifications.
    • We’d like to make sure that security issues for the few different repositories in the llvm github organization all end up in one space.
    • If the above requirements cannot (easily) be met by using the “default” mechanism github provides to report security issues to a specific repository, then we could consider setting up a separate github organization (e.g. with name “llvm-security”) with the sole purpose of enabling reporting security issues.
  • As we just started a new year, we’ll need to write a transparency report for 2023. @smithp35 kindly volunteered to create this.
  • @kbeyls shares that he will be giving a presentation about the history, achievements and known remaining challenges for the LLVM security group at FOSDEM’s LLVM dev room, see FOSDEM 2024 - The LLVM Security Group: History, progress, remaining challenges.
  • We checked the status of the open security issues and took a few actions to progress them.
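The label-based list of known security issues discussed above could, for example, be generated automatically from the GitHub issue tracker. A minimal sketch using the GitHub REST API, assuming a hypothetical “known-security-issue” label on llvm/llvm-project (the label name was only floated in the meeting, not agreed):

```python
# Sketch: build a query against the GitHub REST API for issues carrying
# a security label. The label name "known-security-issue" is an assumption
# from the meeting discussion, not an established LLVM convention.
import json
import urllib.parse
import urllib.request

API = "https://api.github.com/repos/llvm/llvm-project/issues"


def security_issue_url(label="known-security-issue", state="all"):
    """Build the REST query URL for issues with the given label."""
    params = urllib.parse.urlencode({"labels": label, "state": state})
    return f"{API}?{params}"


def fetch_security_issues(url):
    """Fetch and decode the issue list (requires network access)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


print(security_issue_url())
# → https://api.github.com/repos/llvm/llvm-project/issues?labels=known-security-issue&state=all
```

A periodic job running a script like this could render the results into a published page listing known issues alongside the affected and fixed releases.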

The summary notes here were done on a best-effort basis. If you see any inaccuracies or misrepresentation of what was discussed, please do share a correction.

Meeting minutes for the 21st of February 2024 meeting

  • @tstellar raises the question on whether LLVM should become a CVE numbering authority.
    • It seems the advantages of being a CNA is that we’d be able to create CVEs ourselves, and that we’d be able to dispute CVEs against LLVM (e.g. when according to our threat model, they are not considered a security vulnerability).
    • It seems that we would not quickly make use of these advantages (at least, we haven’t felt the need so far to create CVEs ourselves, or dispute CVEs). OTOH, it seems it’s a pretty easy process to become a CNA, so maybe it would be good to become a CNA in case we need it on short notice at some point in the future.
    • As part of the CNA application process, we’d need to name a primary and secondary contact person who would have to share their email address and phone number to be contactable.
    • @tstellar plans to investigate the pros/cons of becoming a CNA a bit further and continue to work with the LLVM security group on this.
  • Migration away from the chromium bug tracker to a github-based bug tracker
    • After discussion within the LLVM security group, it seems the best option is to create a separate github repository within the llvm organization to enable reporting security issues in confidence.
    • We would need to test and experiment with a test repo first to be sure the github-based mechanism works in line with what we need.
    • @tstellar will set up a new test repo in the llvm github organization; @kbeyls will be the initial admin/owner for that repo and get other interested people added to experiment before finalizing our decision on which mechanism to use in the future to report security issues in confidence.
  • Thanks to @smithp35 for publishing the 2023 transparency report.
  • We checked the status of the open security issues and took a few actions to progress them.


Meeting minutes for the 19th of March 2024 meeting

  • There was a question on what could be done to reduce the risk of malicious commits to the code base going unnoticed. In the call, we didn’t come up with one conclusive answer, but concluded it would be hard to remove all risk completely…
  • @kbeyls shared first impressions from experiments with using GitHub’s mechanism to privately report security issues as a potential replacement for the current mechanism, which is based on Chromium’s issue tracker. More experimentation is needed before drawing conclusions.
  • We thought it would be good to have another “Security Group” round table at the upcoming EuroLLVM. [AFAIK, no-one registered such a round table to appear on the official agenda. We’ll do an impromptu one if there is interest at the conference]
  • We checked the status of the open security issues and took a few actions to progress them.