We’ve been running monthly sync-up calls on the topic of the LLVM security group for a while. With this post, I’m moving the meeting minutes from a Google doc to Discourse, for the same reasons we did so for the Pointer Authentication sync-up calls.
Please find a copy of the meeting minutes from past sync-ups below.
When: Tue May 17th 2022. 8pm CET/11am PDT; length: 1 hour.
Agenda topics:
- Discuss whether meetings should still take place even when no agenda has been recorded beforehand.
- Moving minutes doc to Discourse?
Minutes:
- We’re dropping the rule that there must be a recorded agenda for the meeting to take place.
- Most people attending agree that moving this document to Discourse is a good idea.
- A discussion on potential supply chain attacks on LLVM.
- Rust requires code review for all commits, partly for this reason. In comparison, LLVM doesn’t.
- Other companies also add code review requirements.
- It’s probably easier to attack smaller, less active projects than LLVM.
- The biggest risk may lie in attackers achieving remote code execution on a developer’s machine. Developers could take precautions against this risk by running LLVM builds and tests in a sandbox.
- Is it possible to push commits to the LLVM repository with a different author and committer than your GitHub account? (See the sketch at the end of these minutes.)
- Moving the LLVM project to use pull requests rather than allowing direct push commits could make some forms of supply chain attack harder.
- Rust has been working on improving their process to disclose security vulnerabilities. The work in progress can be found at https://github.com/rust-lang/rfcs/pull/3259.
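As a side note on the author/committer question above: git records the author and committer as free-form metadata, independent of the account that pushes, so unless server-side checks enforce otherwise they can differ from the pushing account. A minimal sketch illustrating this (the names and emails are hypothetical):

```python
import os
import subprocess

# git takes author and committer identity from free-form metadata rather than
# from the account that later pushes the commit. Run inside any git checkout;
# the names and emails below are hypothetical.
env = dict(
    os.environ,
    GIT_COMMITTER_NAME="Someone Else",
    GIT_COMMITTER_EMAIL="someone.else@example.com",
)
subprocess.run(
    [
        "git", "commit", "--allow-empty",
        "--author", "Another Person <another.person@example.com>",
        "-m", "demo: author/committer differ from the pushing account",
    ],
    check=True,
    env=env,
)
```

Possible mitigations include the pull-request workflow mentioned above and requiring signed commits.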
When: Tue April 19th 2022. 8pm CET/11am PDT; length: 1 hour.
Minutes:
- Are any security group members going to EuroLLVM? (Of those on the call, only Kristof is planning to go.)
- With only security group members on the call, we discussed the currently reported security issue.
When: Tue March 15th 2022. 7pm CET/11am PDT; length: 1 hour.
Minutes:
- In the first half hour only LLVM security group members joined the call. We used the opportunity to discuss an issue under embargo.
- A person not part of the LLVM security group joined later. We obviously stopped discussing the issue under embargo. We answered questions the person had on how to join the group.
When: Tue January 18th 2022. 8pm CET/11am PDT; length: 1 hour.
Agenda topics: none recorded.
Minutes: none recorded.
When: Tue December 21st 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics:
- Bidi/issue 11 retrospective (Pietro)
- Security group transparency report (Kristof)
Minutes:
- Bidi/issue 11 retrospective
- How did things go; are there things we can improve?
- Maybe there are too few people in the security group who are focused full-time on security work?
- There are a lot of people in the group, so reaching consensus takes time and is harder. Does everyone look at all reported issues? Can everyone participate on all issues?
- We probably should have communicated better with the rest of the community when issue 11 was no longer under embargo. We did communicate on llvm-dev that a clang-tidy check had been created.
- (Taking inspiration from the Rust community:) Having a coordinator for each issue is valuable. We didn’t explicitly assign someone as coordinator. Can we improve the group’s process to decide on a coordinator for each issue?
- There is a flow chart on the security page (see the “Process” section of the LLVM Security Group documentation). That includes picking a coordinator.
- Using a secure chat room could help with more quickly communicating between security group members.
- Should we set an expectation of responsiveness from members?
- A possible set of responses to a reported security bug: not a security bug; a security bug that must be fixed under embargo; or a security bug that doesn’t necessarily need a fix under embargo.
- Be cautious about concluding too quickly whether a reported issue is a security issue or not. It’s important to have all voices heard. It’s OK to not have an opinion. Do we need to set a clear bar for deciding whether something is considered a security issue?
- Potential actions:
- We should refine the process to require a minimum number of security group members to agree before we lift an embargo. To be discussed further within the security group.
- Should we explore a chat group for the security group?
- Do we need a way to announce to end users (not just developers) of LLVM when a security embargo is lifted?
- An example of publicizing a security advisory to the Linux distros is the distros@openwall mailing list (see the oss-security “distros” list).
- Kristof will write a short transparency report and share it for review, initially with the security group. The plan is to publish it on the LLVM documentation site and link to it from the LLVM Security Group page.
When: Tue November 16th 2021 - if not clashing with the LLVM dev meeting. 8pm CET/11am PDT; length: 1 hour.
Cancelled due to clash with LLVM dev meeting.
When: Tue October 19th 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics: none recorded.
Minutes: none recorded.
When: Tue September 21st 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics: none recorded.
Minutes: none recorded.
When: Tue August 17th 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics:
- Adding a new member to the group: a few more votes needed.
Minutes:
Present: Pietro Albini, Nikhil G, Paul Robinson, Kristof Beyls, (joining later: Shayne Hiet-Block)
- For voting in new members, maybe it makes sense to require fewer approvals than a majority when the candidate represents a well-known company in the field?
- Are there reasons why existing members aren’t voting or asking questions quickly?
- Nikhil G: Has there already been a threat model defined so that it’s clear what is considered a security issue and what is not a security issue?
- Nikhil G: static analysis on top-of-tree. Is this coming?
- The security group is a place to report issues to, not a place to pro-actively find new potential security issues.
- Some groups run e.g. Coverity on LLVM downstream and find they need to spend a lot of time fixing Coverity-reported issues.
- We should document what a new member of the group needs to do to get started, such as:
- Subscribe to the “private discussion medium”, i.e. the Google group.
- Be added to the Chromium bug tracker.
- Should Kristof explicitly add all security group members to the meeting invite?
- DONE: Kristof to ask security group members if they want to be explicitly on the meeting invite.
- Should we add more or all security group members to the admin role on the Google group and bug tracker?
- Every six months the group should publish a report summarizing security reports; so far we have not produced such reports on time.
When: Tue July 20th 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics: None
Minutes:
- No agenda topics were raised; the meeting was cancelled.
When: Tue June 15th 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics: none
Minutes:
- No agenda topics were raised; the meeting was cancelled.
When: Tue May 18th 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics: none
Minutes:
- No agenda topics were raised; the meeting was cancelled.
When: Tue April 20th 2021. 8pm CET/11am PDT; length: 1 hour.
Agenda topics:
- Discussion / postmortem for issue 5 on the LLVM special issue tracker (Monorail)
- Deal with Matt Riley’s two comments below
- Docs update around reporting medium: D100873, “[docs] Describe reporting security issues on the chromium tracker” (https://reviews.llvm.org/D100873)
Minutes:
- Review of issue 5 on the LLVM special issue tracker (Monorail).
- Folks did get the email; it was mentioned in the last call. These calls are public, so it wasn’t appropriate to talk about the embargoed bug on that call.
- Someone had signed up to respond, but Matt Riley happened to get there first.
- Kristof felt reservations about chiming in because there wasn’t an obvious framework for how to engage: no example of how to do this.
- Matt Riley to write a short outline on response: outcomes to avoid, a broad timeline, a statement of trust/responsibility, and where to go for help.
- Peter Smith points out we should acknowledge receipt, then start the discussion among the team about whether someone can or can’t pick the issue up.
- Matt Riley comments
- “14 weeks” disclosure?
- Intent to match GPZ and industry, but want to avoid ambiguity around calendar vs. business days
- It’s always calendar days.
- GPZ is moving to a period divisible by 7 to make it more obvious (14 weeks is 98 days).
- Making sure a fix is in a release before publishing?
- The best we can do is start a conversation with the release manager (for LLVM proper) to spin a bugfix release.
- There are also questions of severity and the impact of a release: spin an entire new release for a small bug? Perhaps not.
- Many users of LLVM build their own releases snapshotted from upstream but not necessarily at released versions.
- Our opportunity here is to think about things we would be sad not to have in the moment during response. But proactively building out a response process without having a lot of data points runs the risk of us building out the wrong thing.
- One communication channel we do provide is allowing vendors to sign up for the LLVM security group.
- D100873, “[docs] Describe reporting security issues on the chromium tracker”
- Some questions came up while writing this:
- We’re not using GitHub security advisories because of ACL issues. But should we move/copy resolved issues over from Chromium to GitHub?
- This could be release notes or a separate security fixes doc.
- Good to have one place with resolved issues?
- LLVM bug tracking currently in flux – moving to GitHub.
- Maybe this isn’t ripe yet? We can reconsider after LLVM moves its bugs.
- We can wait for vendors to ask.
- We can have an ad-hoc way of people requesting an LLVM bug or GitHub security advisory.
- Opportunity in this doc to highlight that the security group exists.
- Decision here: we are going to publicize the llvm-security@googlegroups.com address and document it as responsible for LLVM security more broadly. TODO: check with folks in the group that they’re okay with this.
When: Tue Mar 16th 2021. 7pm CET/11am PDT; length: 1 hour.
Agenda topics: none
Minutes:
- No agenda topics were raised; the meeting was cancelled.
When: Tue Feb 16th 2021. 8pm CET/11am PST; length: 1 hour.
Agenda:
- Call for extra agenda topics
- Looking at https://llvm.org/docs/Security.html, it seems that the following areas need decisions or actions to get the group’s processes fully defined:
- “Discussion Medium” is not documented. How can people report an issue?
- Shall we just update the documentation and provide a pointer to https://bugs.chromium.org/p/llvm? Do we need to add any further documentation or adapt the template?
- My personal guess is that our default should be to use GitHub infrastructure where possible. That said, https://docs.github.com/en/github/managing-security-vulnerabilities/creating-a-security-advisory seems to document that GitHub currently does not support a vulnerability reporting mechanism. So even after general bug tracking moves from Bugzilla to GitHub for the LLVM project, it looks like we’ll stick with https://bugs.chromium.org/p/llvm.
- “Keeping Membership Current” states “At least every six months, the LLVM Security Group applies the above criteria. The membership list is pruned accordingly.”
- If I take the 11th of July 2020 as the starting point of the LLVM security group (the commit date of https://llvm.org/docs/Security.html, see https://github.com/llvm/llvm-project/commit/7bf73bcf6d9335938bd072b11809d305173c7c1e#diff-f7ce6871b193fab3b0a19993a70b690e3ce736361c76b244d664e8a5523a86ae), then we should have done this part of the group’s process by the 11th of January 2021, if I calculated correctly…
- Do we need an owner to ensure this happens every 6 months?
- “Transparency report”: “Every year, the LLVM Security Group must publish a transparency report.”
- Similarly, this defines we’ll need to publish a transparency report by the 11th of July 2021.
- Is there a volunteer to own preparing this?
- Process: “Security issues become public within 14 weeks of a fix landing in the LLVM project”.
- I can’t recall right now whether we’ve actually done so for the few reports we’ve received.
- Where/how should an issue become public? Does this depend on the medium, i.e. could this be just flipping a switch on https://bugs.chromium.org/p/llvm? Or should we aim to also create github security advisories?
- Should we add a security policy to llvm’s github project, following github’s standard practice as documented at https://docs.github.com/en/github/managing-security-vulnerabilities/adding-a-security-policy-to-your-repository?
- AOB
Minutes:
- Short-term reasons why we can’t use GitHub.
- No one on the call opposed using Chromium.
- Ahmed happy to write a proposal
- Can the foundation reach out to GitHub about the missing security reporting features?
- Non-issue for “Keeping Membership Current”? No problems so far. Maybe just update the documented process.
- What we’ve done so far: people who want to get off the list say so.
- Is it clear who to contact when someone wants to join?
- We might replace the prescriptive statement “review every 6 months”, with its long list of criteria, with an aspirational “aim to review every 6 months”?
- Shayne happy to write a proposal for adapting the process document.
- Aaron: is there a need for contact info for security group membership?
- Maybe not having this is only a problem if members are not well known?
- Transparency report:
- Ahmed happy to prepare the transparency report.
- Ed happy to help with that.
- Security issues become public:
- Why 14 weeks? 90 days seems a reasonable number for most vulnerabilities, but there are exceptions; there should be an ability to adapt.
- Should we make sure the fix is in a release? In a point release? A six-monthly release cycle doesn’t seem to align with 90-day disclosure.
- In the short term, we could update the process to say “we aim for 14 weeks, but can make judgement calls”.
- GitHub security policy: could someone investigate this and make a proposal? Ahmed is willing to.
- Suggested agenda items for next time:
- To think about: do we as the LLVM security group want to create CVEs when appropriate?
- Scope: the scope of the group is left wide open. But can we exclude incubator projects that don’t want to opt in to the security group/process?
- Yes, let’s have future meetings. A regular meeting; do a call for topics a week before and cancel if there is no agenda? Kristof to set that up.
When: Thu Oct 8, 2020 11:35 AM - 12:10 PM - round table
Where: Whova Community Web App
Intro:
Let’s discuss the current status and next steps for the LLVM security group, which has recently been formed. See the LLVM Security Group documentation.
There are quite a few “FUTURE” markers in that document that need to be defined, so we could talk about how to get those defined. But other topics related to the security group are also in-scope for this round table.
Minutes:
An issue was reported in the wild (stack clash protector related). Serge (the author) implemented a fix after the issue was publicly disclosed. Should that issue have been discussed privately in the security group?
- Once an issue is publicly visible, it typically is worse to try and keep it private - but it always needs judgement (e.g. years-old bugs that are ignored and later discovered to have a security impact should be considered as “not publicly disclosed yet”).
- How do (or should) we actually report a security issue?
GitHub tooling might work, but the tooling doesn’t seem to be there yet (Matthew’s experience); e.g., the Chromium bug tracker has more of the needed features.
Matt proposes straightforward email: it has a low barrier to entry and is unlikely to break. With reasonable operational security?
The board was not interested in being involved in running the security group. The security group should be competent in managing the security infrastructure needs it has.
In the Rust community: email is used, with a shared GPG key for reporting. For the LLVM group (whose membership changes more often), this may be harder. Sometimes reports are encrypted, sometimes not.
Experience with GitHub issues: access control is tricky (e.g. admins are always on the thread); as soon as the issue is made public, it is locked, so it’s not possible to continue having private comments. For mail servers, the Rust mail infrastructure is used.
Matt: Google Groups exists and he would trust it.
Administrators need a Google account, others don’t. Would this be too much of a barrier for the LLVM security group?
Matt: on demanding GPG: the actual security increase is minimal and it adds barriers.
Just email is probably better; or an HTTPS web form (running on llvm.org).
Needs to be discussed further by the llvm security group to progress this (let’s leave some time for other topics too in this round table).
We should also reach out to GitHub, reporting the shortcomings we found in GitHub’s security tooling. Google is already reaching out.
- What is in scope for a security issue? Should we provide some guidance?
Right now it’s wide open. That is currently on purpose, because the security surface will presumably evolve over time. E.g. what level of security maintenance the LLVM community wants to invest for specific things may evolve over time.
As an example, on the Rust project: there are no guidelines. The security group triages issues that are reported and encourages reporting anything that could plausibly be a security issue.
Guidelines we should post for self-triaging: we can post a list of things we don’t consider a security issue (an example could be a bug that causes clang to crash?). So: try to define what is not a security issue, rather than what definitely is? Then we can discuss what would be needed for specific things to be considered a security component.
Example of exclusion: arbitrary attacker-controlled IR not supported?
This discussion (what is and is not considered a security issue) should be discussed publicly and refined over time.
But also a remark: reasoning in full on a public mailing list about the boundaries of what is considered security sensitive is not ideal; it’s best to make it possible to have some of that discussion in a smaller group (the discussion itself may be sensitive).
We could test the process with a “fake” security issue, e.g. one of the security group members reporting a fake issue and going through the process as if it were real, to pipe-clean the process.
- Do we need some form of rotation to ensure a level of SLO? It needs to be obvious who will take action. Who will be the champion?
Either we need a rotation, or we need to remove commitments (e.g. time scales on which actions will be taken).
Champion here is the person responsible for chasing up progress, not necessarily making progress themselves.
No precedent for this kind of SLA/SLO in the LLVM community.
E.g. in Rust, expect a response within 24 to 48 hours. If you don’t get a response, there are guidelines on how to escalate in other ways.
Pietro is going to propose an update to the security process along these lines (see https://reviews.llvm.org/D89068).
- Kristof to send an email to the security group with this report.
- We need to be able to have a regular kind of sync-up.
- Quickly being able to do a video call to discuss incoming security issues seems useful.
Identified actions:
- Define how to report security issues (includes setting up a communication channel for the security group). Matthew Riley already started taking the lead on this.
- Define an SLO and escalation path for the first response to a security report. Pietro Albini is already taking action; see D89068, “Add expected response time and escalation path to the security docs”.
- Kristof to share this report of the round table more widely.
- Someone to set up a regular kind of sync-up to continue making progress (Kristof willing to organize this).
- The security group should be able to have a way to quickly organize a call between themselves when needed (Kristof willing to look into what the options are for this).