The GitHub Blog
Allow us to reintroduce ourselves: GitHub Copilot is getting a whole lot more agentic with increased context of your tools and services, powered by the world's leading models, starting today.
We are excited to roll out agent mode in Visual Studio Code to all users, now complete with MCP support that unlocks access to any context or capabilities you want. What's more, we are thrilled to release a new open source and local GitHub MCP server, giving you the ability to add GitHub functionality to any LLM tool that supports MCP.
In keeping with our commitment to offer multi-model choice, we're making Anthropic Claude 3.5 Sonnet, 3.7 Sonnet, and 3.7 Sonnet Thinking, Google Gemini 2.0 Flash, and OpenAI o3-mini generally available via premium requests, included in all paid Copilot tiers. These premium requests are in addition to unlimited requests for agent mode, context-driven chat, and code completions that all paid plans have when using our base model (more below). With the new Pro+ tier, individual developers get the most out of the latest models with Copilot.
The agent awakening doesn't stop there. We are also announcing the general availability of the Copilot code review agent, which in just over a month in preview has been used by over 1 million developers on GitHub. Plus, the general availability of next edit suggestions so you can tab tab tab your way to coding glory.
Agent mode in VS Code
Agent mode is progressively rolling out to VS Code users in stable, as we aim for full availability to all users in the coming weeks. You can also manually enable it now. Compared to chat or multi-file edits, which allow you to propose code changes across multiple files in your workspace, agent mode is fundamentally capable of taking action to translate your ideas into code. With simple prompts, agent mode takes Copilot beyond answering a question, instead completing all necessary subtasks across automatically identified or generated files to ensure your primary goal is achieved. Agent mode can suggest terminal commands or tool calls and ask you to execute them. It also analyzes run-time errors with self-healing capabilities.
Since the launch to VS Code Insiders in February, developers have been using agent mode for a variety of tasks: from autofixing code gen errors, to building webapps, to yeeting commits (whatever that means).

Agent mode is powered by your choice of Claude 3.5 and 3.7 Sonnet, Google Gemini 2.0 Flash, and OpenAI GPT-4o. Currently, agent mode achieves a pass rate of 56.0% on SWE-bench Verified with Claude 3.7 Sonnet. We anticipate agent mode to grow more capable as chain of thought reasoning models continue to advance.
Model Context Protocol (MCP) is now available in public preview
Developers spend their days conducting a wide array of tasks to get the job done, from research, to navigating telemetry, to infrastructure management, to coding and debugging. And they use many tools for this, the so-called engineering stack. MCP allows you to equip agent mode with the context and capabilities it needs to help you, like a USB port for intelligence. When you enter a chat prompt in agent mode within VS Code, the model can use different tools to handle tasks like understanding database schema or querying the web. This setup allows for more interactive and context-sensitive coding support.
For example, with a prompt to "Update my GitHub profile to include the title of the PR that was assigned to me yesterday," agent mode would take that request, combined with the list of all available MCP tools, and ask an LLM what to do next. Over time, the agent would continue calling tools iteratively, until the task is complete.
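Conceptually, that loop looks something like the sketch below. Everything here is a hypothetical stand-in for illustration; it is not GitHub's implementation or the actual MCP API.

def run_agent(llm, tools, user_prompt):
    # llm and tools are hypothetical objects: a chat model client and a dict of callable MCP tools.
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        # Ask the model what to do next, given the conversation so far and the tool catalog.
        action = llm.next_action(messages, tool_catalog=list(tools))
        if action.kind == "finish":
            return action.answer
        # Otherwise call the requested tool and feed its result back to the model.
        result = tools[action.tool_name](**action.arguments)
        messages.append({"role": "tool", "name": action.tool_name, "content": str(result)})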
Already, GitHub is home to a massive and growing MCP ecosystem that you can discover and use today. Here is a great repository that acts as a community inventory with some of the best MCP servers to use. The GitHub local MCP server equips agent mode with compelling capabilities such as searching across repositories and code, managing issues, and creating PRs, turning agent mode into a powerful user of the GitHub platform.
Get started by setting up local and remote MCP servers and using tools with agent mode in Visual Studio Code. To get started with the GitHub local MCP server, visit the repository, now supported natively in VS Code.
Premium model requests
Since GitHub Universe, we introduced a number of new models for chat, multi-file edits, and now agent mode. With the general availability of these models, we are introducing a new premium request type. Premium requests are in addition to the unlimited requests for agent mode, context-driven chat, and code completions in all paid plans for our base model (currently: OpenAI GPT-4o).
Customers with Copilot Pro will receive 300 monthly premium requests, beginning on May 5, 2025. Customers with Copilot Business and Copilot Enterprise will receive 300 and 1000 monthly premium requests respectively, starting between May 12 and May 19, 2025. Until then, use of these premium models is unlimited.
We are also introducing a new Pro+ plan for individuals with 1500 monthly premium requests and access to the best models, like GPT-4.5, for $39 per month.
Copilot paid users[1] will also have the ability to pay as you go for additional premium request usage. Individuals and organizations can choose to opt in to additional requests beyond their included amount, in addition to setting spending limits on requests to control costs with ease. GitHub Copilot Business and Enterprise administrators can manage requests via their Copilot Admin Billing Settings. Additional premium requests start at $0.04 per request.
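As a rough worked example at that starting rate: an organization on a plan with 300 included premium requests that opts in and uses 500 in a month would be billed only for the 200-request overage, or 200 × $0.04 = $8.00.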
Each premium model will consume a specific number of premium requests, allowing you to use a more powerful or efficient model when you need it, all while you have continued, unlimited access to Copilot's base model.
Happy 50th birthday, Microsoft!
Today is also a hallmark moment in the history of technology: our mothership turns 50! From the creation of BASIC and MS-DOS, to the .NET Framework and VS Code, to the acquisition of GitHub, Microsoft has always been a developer company at heart. Half a century of dev love is no small feat.
Now, with GitHub Copilot, what started out as a developer platform company is a platform where anyone can be a developer. Together, GitHub and Microsoft fully intend on enabling a world with 1 billion developers.
Here is Satya using agent mode to re-create MSFT's first BASIC in a single shot. Let the vibes take you away, Mr. Nadella.
1. Option to purchase additional premium requests is not available to users who subscribe or have subscribed to Pro or Pro+ through GitHub Mobile on iOS or Android.
The post Vibe coding with GitHub Copilot: Agent mode and MCP support rolling out to all VS Code users appeared first on The GitHub Blog.
At GitHub Security Lab, one of the most common vulnerability types we find relates to the cross-origin resource sharing (CORS) mechanism. CORS allows a server to instruct a browser to permit loading resources from specified origins other than its own, such as a different domain or port.
Many developers change their CORS rules because users want to connect to third-party sites, such as payment or social media sites. However, developers often don't fully understand the dangers of changing the same-origin policy, and they use unnecessarily broad rules or faulty logic to prevent users from filing further issues.
In this blog post, we'll examine some case studies of how a broad or faulty CORS policy led to dangerous vulnerabilities in open source software. We'll also discuss DNS rebinding, an attack with similar effects to a CORS misconfiguration that's not as well known among developers.
What is CORS and how does it work?
CORS is a way to allow websites to communicate with each other directly by bypassing the same-origin policy, a security measure that restricts websites from making requests to a different domain than the one that served the web page. Understanding the Access-Control-Allow-Origin and Access-Control-Allow-Credentials response headers is crucial for correct and secure CORS implementation.
Access-Control-Allow-Origin is the list of origins that are allowed to make cross-site requests and read the response from the webserver. If the Access-Control-Allow-Credentials header is set, the browser is also allowed to send credentials (cookies, HTTP authentication) if the origin requests it. Some requests are considered simple requests and do not need a CORS header in order to be sent cross-site. This includes the GET, POST, and HEAD requests with content types restricted to application/x-www-form-urlencoded, multipart/form-data, and text/plain. When a third-party website needs access to account data from your website, adding a concise CORS policy is often one of the best ways to facilitate such communication.
To implement CORS, developers can either manually set the Access-Control-Allow-Origin header, or they can utilize a CORS framework, such as RSCors, that will do it for them. If you choose to use a framework, make sure to read the documentation: don't assume the framework is safe by default. For example, if you tell the CORS library you chose to reflect all origins, does it send back the response with a blanket pattern-matching star (*) or a response with the actual domain name (e.g., stripe.com)?
Alternatively, you can create a custom function or middleware that checks the origin to decide whether or not to send the Access-Control-Allow-Origin header. The problem is, you can make some security mistakes when rolling your own code that well-known libraries usually mitigate.
Common mistakes when implementing CORS
For example, when comparing the origin header with the allowed list of domains, developers may use the string comparison equivalents of the startsWith, exactMatch, and endsWith functions for their language of choice. The safest function is exactMatch, where the domain must match the allow list exactly. However, what if payment.stripe.com wants to make a request to our backend instead of stripe.com? To get around this, we'd have to add every subdomain to the allow list. This would inevitably cause users frustration when third-party websites change their APIs.
Alternatively, we can use the endsWith function. If we want connections from Stripe, let's just add stripe.com to the allowlist, use endsWith to validate, and call it a day. Not so fast, since the domain attackerstripe.com is now also valid. We can tell the user to only add full URLs to the allowlist, such as https://stripe.com, but then we have the same problem as exactMatch.
We occasionally see developers using the startsWith function in order to validate domains. This also doesn't work. If the allowlist includes https://stripe.com, then we can just do https://stripe.com.attacker.com.
For any origin with subdomains, we must use .stripe.com (notice the extra period) in order to ensure that we are looking at a subdomain. If we combine exactMatch for second-level domains with endsWith for subdomains, we can make a secure validator for cross-site requests.
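To make that concrete, here is a minimal sketch of such a validator in Python. The allowlist contents are hypothetical, and a production version would also need to normalize ports and handle multiple apex domains.

ALLOWED_ORIGINS = {"https://stripe.com"}      # hypothetical exact-match allowlist
ALLOWED_SUFFIXES = (".stripe.com",)           # leading dot marks an allowed subdomain suffix

def is_allowed_origin(origin: str) -> bool:
    # exactMatch for second-level domains
    if origin in ALLOWED_ORIGINS:
        return True
    scheme, sep, host = origin.partition("://")
    if not sep or scheme != "https":
        return False
    # endsWith against ".stripe.com" accepts payment.stripe.com but rejects
    # attackerstripe.com and stripe.com.attacker.com
    return any(host.endswith(suffix) for suffix in ALLOWED_SUFFIXES)

print(is_allowed_origin("https://payment.stripe.com"))   # True
print(is_allowed_origin("https://attackerstripe.com"))   # False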
Lastly, there's one edge case found in CORS: the null origin should never be added to allowed domains. The null origin can be hardcoded into the code or added by the user to the allowlist, and it's used when requests come from a file or from a privacy-sensitive context, such as a redirect. However, it can also come from a sandboxed iframe, which an attacker can include in their website. For more practice attacking a website with a null origin, check out the CORS vulnerability with trusted null origin exercise in the PortSwigger Web Security Academy.
How can attackers exploit a CORS misconfiguration?
CORS issues allow an attacker to perform actions on behalf of the user when a web application uses cookies (with SameSite None) or HTTP basic authentication, since the browser must send those requests with the required authentication.
Fortunately for users, Chrome now defaults cookies without a SameSite attribute to SameSite Lax, which has made CORS misconfigurations useless in most scenarios. However, Firefox and Safari are still vulnerable to these issues via bypass techniques found by PTSecurity, whose research we highly recommend reading to understand how someone can exploit CORS issues.
What impact can a CORS misconfiguration have?
CORS issues can give a user the power of an administrator of a web application, so the usefulness depends on the application. In many cases, administrators have the ability to execute scripts or binaries on the server's host. These relaxed security restrictions allow attackers to get remote code execution (RCE) capabilities on the server host by convincing administrators to visit an attacker-owned website.
CORS issues can also be chained with other vulnerabilities to increase their impact. Since an attacker now has the permissions of an administrator, they are able to access a broader range of services and activities, making it more likely they'll find something vulnerable. Attackers often focus on vulnerabilities that affect the host system, such as arbitrary file write or RCE.
Real-world examples
A CORS misconfiguration allows for RCE
Cognita is a Python project that allows users to test the retrieval-augmented generation (RAG) ability of LLM models. If we look at how it used to call the FastAPI CORS middleware, we can see it used an unsafe default setting, with allow_origins set to all and allow_credentials set to true. Usually, if the browser receives Access-Control-Allow-Origin: * and Access-Control-Allow-Credentials: true, the browser knows not to send credentials with the origin, since the application did not reflect the actual domain, just a wildcard.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
However, FastAPI CORS middleware is unsafe by default and setting these two headers like this resulted in the origin being reflected along with credentials.
Currently, Cognita does not have authentication, but if its developers implemented authentication without fixing the CORS policy, their authentication could be bypassed. As it stands, any website can send arbitrary requests to any endpoint in Cognita, as long as they know how to access it. Due to its lack of authentication, Cognita appears intended to be hosted on intranets or locally. An attacking website can try guessing the local IP of a Cognita instance by sending requests to local addresses such as localhost, or it can enumerate the internal IP address space by continually making requests until it finds the Cognita instance. With this bug alone, our access is limited to just using the RAG endpoints and possibly deleting data. We want to get a foothold in the network. Let's look for a real primitive.
We found a simple arbitrary file write primitive: the developers added an endpoint for Docker without considering file sanitization, and now we can write to any file we want. The file.filename is controlled by the request, and os.path.join passes the ".." segments through unchanged, allowing file_path to be fully controlled (a short illustration follows the code below).
@router.post("/upload-to-local-directory")
async def upload_to_docker_directory(
    upload_name: str = Form(
        default_factory=lambda: str(uuid.uuid4()), regex=r"^[a-z][a-z0-9-]*$"
    ),
    files: List[UploadFile] = File(...),
):
    ...
    for file in files:
        logger.info(f"Copying file: {file.filename}, to folder: {folder_path}")
        file_path = os.path.join(folder_path, file.filename)
        with open(file_path, "wb") as f:
            f.write(file.file.read())
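Here is a small illustration of that traversal; the folder and filename are hypothetical stand-ins for folder_path and an attacker-supplied file.filename.

import os

folder_path = "/app/user-data"                    # hypothetical upload directory
filename = "../../backend/server/__init__.py"     # attacker-controlled file.filename
file_path = os.path.join(folder_path, filename)

print(file_path)  # /app/user-data/../../backend/server/__init__.py
# open(file_path, "wb") follows the ".." segments, so the write lands outside folder_path.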
Now that we have an arbitrary file write primitive, what should we target to get RCE? This endpoint is for Docker users, and the Cognita documentation only shows how to install via Docker. Let's take a look at that Dockerfile.
command: -c "set -e; prisma db push --schema ./backend/database/schema.prisma && uvicorn --host 0.0.0.0 --port 8000 backend.server.app:app --reload"
Looking carefully, there's the --reload flag when starting up the backend server. So we can overwrite any file on the server, and uvicorn will automatically restart the server to apply the changes. Thanks, uvicorn! Let's target the __init__.py files that are run on start, and now we have RCE on the Cognita instance. We can use this to read data from Cognita, or use it as a starting point on the network and attempt to connect to other vulnerable devices from there.
Logic issues lead to credit card charges and backdoor access
Next, let's look at some additional real-life examples of faulty CORS logic.
We found the following code on the website https://tamagui.dev. Since the source code is available on GitHub, we decided to take a quick look. (Note: The vulnerability has since been reported by our team and fixed by the developer.)
export function setupCors(req: NextApiRequest, res: NextApiResponse) {
  const origin = req.headers.origin
  if (
    typeof origin === 'string' &&
    (origin.endsWith('tamagui.dev') ||
      origin.endsWith('localhost:1421') ||
      origin.endsWith('stripe.com'))
  ) {
    res.setHeader('Access-Control-Allow-Origin', origin)
    res.setHeader('Access-Control-Allow-Credentials', 'true')
  }
}
As you can see, the developer added hardcoded endpoints. Taking a guess, the developer most likely used Stripe for payment, localhost for local development and tamagui.dev for subdomain access or to deal with https issues. In short, it looks like the developer added allowed domains as they became needed.
As we know, using endsWith is insufficient, and an attacker may be able to create a domain that fulfills those qualities. Depending on the tamagui.dev account's permissions, an attacker could perform a range of actions on behalf of the user, such as potentially buying products on the website by charging their credit card.
Lastly, some projects don't prioritize security, and developers are simply writing the code to work. For example, the following project used the HasPrefix and Contains functions to check the origin, which is easily exploitable (see the short demonstration after the code below). Using this vulnerability, we can trick an administrator into clicking on a specific link (let's say https://localhost.attacker.com) and use the user-add endpoint to install a backdoor account in the application.
func CorsFilter(ctx *context.Context) {
    origin := ctx.Input.Header(headerOrigin)
    originConf := conf.GetConfigString("origin")
    originHostname := getHostname(origin)
    host := removePort(ctx.Request.Host)

    if strings.HasPrefix(origin, "http://localhost") || strings.HasPrefix(origin, "https://localhost") || strings.HasPrefix(origin, "http://127.0.0.1") || strings.HasPrefix(origin, "http://casdoor-app") || strings.Contains(origin, ".chromiumapp.org") {
        setCorsHeaders(ctx, origin)
        return
    }
    // ... (remaining origin checks elided)
}

func setCorsHeaders(ctx *context.Context, origin string) {
    ctx.Output.Header(headerAllowOrigin, origin)
    ctx.Output.Header(headerAllowMethods, "POST, GET, OPTIONS, DELETE")
    ctx.Output.Header(headerAllowHeaders, "Content-Type, Authorization")
    ctx.Output.Header(headerAllowCredentials, "true")

    if ctx.Input.Method() == "OPTIONS" {
        ctx.ResponseWriter.WriteHeader(http.StatusOK)
    }
}
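To see why those checks fail, here is a quick demonstration using Python string methods equivalent to Go's strings.HasPrefix and strings.Contains; the attacker-controlled origins are hypothetical.

print("https://localhost.attacker.com".startswith("https://localhost"))        # True: the prefix check is fooled
print(".chromiumapp.org" in "https://foo.chromiumapp.org.attacker.com")        # True: the contains check is fooled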
DNS rebinding
DNS rebinding has the same mechanism as a CORS misconfiguration, but its capabilities are more limited. DNS rebinding does not require a misconfiguration or bug on the part of the developer or user. Rather, it's an attack on how the DNS system works.
Both CORS and DNS rebinding vulnerabilities facilitate requests to API endpoints from unintended origins. First, an attacker lures the victim's browser to a domain that serves malicious JavaScript. The malicious JavaScript makes requests to a host the attacker controls, and the attacker sets the DNS records to redirect the browser to a local address. With control over the resolving DNS server, the attacker can change the IP address of the domain and its subdomains in order to get the browser to connect to various IP addresses. The malicious JavaScript then scans for open connections and sends its malicious payload requests to them.
This attack is very easy to set up using NCC Group's Singularity tool. Under the payloads folder, you can view the scripts that interact with Singularity, and even add your own script to tell Singularity how to send requests and respond.
Fortunately, DNS rebinding is very easy to mitigate because the rebound requests cannot carry the application's cookies, so adding simple authentication for all sensitive and critical endpoints will prevent this attack. Since the browser thinks it is contacting the attacker's domain, it would send any cookies set for the attacker's domain, not those from the actual web application, and authorization would fail.
If you don't want to add authentication for a simple application, then you should check that the Host header matches an approved host name or a local name. Unfortunately, many of the newly created AI projects currently proliferating do not have any of these security protections built in, making any data on those web applications potentially retrievable and any vulnerability remotely exploitable. The following host validation code shows what such a check can look like:
public boolean isValidHost(String host) {
    // Allow loopback IPv4 and IPv6 addresses, as well as localhost
    if (LOOPBACK_PATTERN.matcher(host).find()) {
        return true;
    }
    // Strip port from hostname - for IPv6 addresses, if
    // they end with a bracket, then there is no port
    int index = host.lastIndexOf(':');
    if (index > 0 && !host.endsWith("]")) {
        host = host.substring(0, index);
    }
    // Strip brackets from IPv6 addresses
    if (host.startsWith("[") && host.endsWith("]")) {
        host = host.substring(1, host.length() - 1);
    }
    // Allow only if stripped hostname matches expected hostname
    return expectedHost.equalsIgnoreCase(host);
}
Because DNS rebinding requires certain parameters to be effective, security scanners generally do not flag it, for fear of raising many false positives. At GitHub, our DNS rebinding reports to maintainers commonly go unfixed due to the unusual nature of this attack, and we see that only the most popular repos have checks in place.
When publishing software that holds security critical information or takes privileged actions, we strongly encourage developers to write code that checks that the origin header matches the host or an allowlist.
Conclusion
Using CORS to bypass the same-origin policy has always led to common mistakes. Finding and fixing these issues is relatively simple once you understand CORS mechanics. New and improving browser protections have mitigated some of the risk and may eliminate this bug class altogether in the future. Oftentimes, finding CORS issues is as simple as searching for "CORS" or Access-Control-Allow-Origin in the code to see if any insecure presets or logic are used.
Check out the Mozilla Developer Network CORS page if you wish to become better acquainted with how CORS works and the config you choose when using a CORS framework.
If you're building an application without authentication that utilizes critical functionality, remember to check the Host header as an extra security measure.
Finally, GitHub Code Security can help you secure your project by detecting and suggesting a fix for bugs such as CORS misconfiguration!
The post Localhost dangers: CORS and DNS rebinding appeared first on The GitHub Blog.
If you know where to look, exposed secrets are easy to find. Secrets are supposed to prevent unauthorized access, but in the wrong hands, they can be, and typically are, exploited in seconds.
To give you an idea of the scope of the problem, more than 39 million secrets were leaked across GitHub in 2024 alone.[1] Every minute GitHub blocks several secrets with push protection.[2] Still, secret leaks remain one of the most common, and preventable, causes of security incidents. As we develop code faster than ever previously imaginable, we're leaking secrets faster than ever, too.
That's why, at GitHub, we're working to prevent breaches caused by leaked tokens, credentials, and other secrets, ensuring protection against secret exposures is built in and accessible to every developer.
Today, we're launching the next evolution of GitHub Advanced Security, aligning with our ongoing mission to keep your secrets... secret.
- Secret Protection and Code Security, now available as standalone products
- Advanced Security for GitHub Team organizations
- A free, organization-wide secret scan to help teams identify and reduce exposure.[3]
Here's how secrets leak, what we're doing to stop it, and what you can do to protect your code. Let's jump in.
How do secret leaks happen?
Most software today depends on secrets (credentials, API keys, tokens) that developers handle dozens of times a day. These secrets are often accidentally exposed. Less intuitively, a large number of breaches come from well-meaning developers who purposely expose a secret. Developers also often underestimate the risk of private exposures, committing, sharing, or storing these secrets in ways that feel convenient in the moment, but which introduce risk over time.
Unfortunately, these seemingly innocuous secret exposures are small threads to pull for an attacker looking to unravel a whole system. Bad actors are extremely skilled at using a foothold provided by "low risk" secrets for lateral movement to higher-value assets. Even without the risk of insider threats, persisting any secret in git history (or elsewhere) makes us vulnerable to future mistakes. Research shows that accidental mistakes (like inadvertently making a repository public) were higher in 2024 than ever before.
If you're interested in learning more about secret leaks and how to protect yourself, check out this great video from my colleague Chris Reddington:
What is GitHub doing about it?
We care deeply about protecting the developer community from the risk of exposed secrets. A few years ago, we formally launched our industry partnership program, which has now grown to hundreds of token issuers like AWS, Google Cloud Platform, Meta, and OpenAI, all fully committed to protecting the developer community from leaked secrets.
Last year, we rolled out push protection by default for public repositories, which has since blocked millions of secrets for the open source community.
And finally, as of today, we're rolling out additional changes to our feature availability, aligning with our ongoing goal to help organizations of all sizes protect themselves from the risk of exposed secrets: a new point-in-time scan, free for organizations; a new pricing plan, to make our paid security tooling more affordable; and the release of Secret Protection and Code Security to GitHub Team plans.
What you can do to protect yourself from exposed secrets
The easiest way to protect yourself from leaked secrets is not to have any in the first place. Push protection, our built-in solution, is the simplest way to block secrets from accidental exposure. It leverages the same detectors that we created through our partnership program with cloud providers, ensuring secrets are caught quickly and accurately with the lowest rate of false positives possible.
Studies have shown that GitHub Secret Protection is the only secret scanning tool, proprietary or open source, that can claim an over one-in-two true positive rate across all findings.[4] GitHub received a precision score of 75% (compared to the next best, 46% precision). Compared to alternatives like open source scanning solutions, it's not that GitHub is finding fewer secrets... it's that we're finding real ones. That way, you're able to spend your time worrying less about false positives, and more about what matters: shipping.
Long-lived credentials are some of the most common and dangerous types of secrets to leak, as they often persist unnoticed for months, or years, and give bad actors extended access. That's why managing secrets through their full lifecycle is critical.
Beyond push protection, you can protect yourself from leaks by following security best practices to ensure secrets are securely managed from creation to revocation:
- Creation: follow the principle of least privilege and make sure secrets are securely generated.
- Rotation: outside of user credentials, secrets should be regularly rotated.
- Revocation: restrict access when no longer needed, or when compromised.
Throughout the lifecycle of a secret, you should eliminate human interaction and automate secret management whenever possible.
In addition, you should adopt a continuous monitoring solution for detecting exposures, so you can react quickly. Like push protection, GitHub's built-in solution for secret scanning is the simplest way to triage previously leaked secrets.
Starting today, investing in GitHub's built-in security tooling is more affordable and in reach for many teams with the release of GitHub Secret Protection (free for public repositories), in addition to a new point-in-time scan (free for all organization repositories), which can be run periodically to check for exposed secrets.
Learn more about deploying and managing secret protection at scale:
GitHub Secret Protection and GitHub Code Security
As of today, our security products are available to purchase as standalone products for enterprises, enabling development teams to scale security quickly. Previously, investing in secret scanning and push protection required purchasing a larger suite of security tools, which made fully investing unaffordable for many organizations. This change ensures scalable security with Secret Protection and Code Security is no longer out of reach for many organizations.
In addition, as of today, our standalone security products are also available as add-ons for GitHub Team organizations. Previously, smaller development teams were unable to purchase our security features without upgrading to GitHub Enterprise. This change ensures our security products remain affordable, accessible, and easy to deploy for organizations of all sizes.
Have your secrets been exposed? Try our new public preview
Understanding whether you have existing exposed secrets is a critical step. Starting today, you can run a secret risk assessment for your organization.
The secret risk assessment is a point-in-time scan leveraging our scanning engine for organizations, covering all repositories (public, private, internal, and even archived), and can be run without purchase. The point-in-time scan provides clear insights into the exposure of your secrets across your organization, along with actionable steps to strengthen your security and protect your code. In order to lower barriers for organizations to use and benefit from the feature, no specific secrets are stored or shared.
The public preview is releasing today for organizations across GitHub Team and Enterprise plans to try. It's still quite early, so we'd love to hear your feedback, like whether additional guidance on next steps would be helpful, or whether this is something you'd leverage outside of Team and Enterprise plans.
If you have feedback or questions, please do join the discussion in GitHub Community; we're listening.
Notes
1. State of the Octoverse, 2024
2. Push protection helps prevent secret leaks, without compromising the developer experience, by scanning for secrets before they are pushed. Learn more about push protection.
3. The secret risk assessment is a free tool which will provide clear insights into secret exposure across your organization, along with actionable steps to strengthen your security and protect your code. Learn more about the secret risk assessment.
4. A Comparative Study of Software Secrets Reporting by Secret Detection Tools, Setu Kumar Basak et al., North Carolina State University, 2023
The post GitHub found 39M secret leaks in 2024. Here's what we're doing to help appeared first on The GitHub Blog.
Welcome back to season two of GitHub for Beginners, a series designed to help you navigate GitHub more confidently! So far, we've explored how to use GitHub Copilot and some of its essential features. Today, we will be learning all about large language models (LLMs) and the basics of prompt engineering.
LLMs are powerful, and the way we interact with them via prompts matters. For example, have you ever tried asking an LLM a question, but it can't really figure out what you're trying to ask? Understanding the power of prompts (and the limitations that come with them) can help you become even more productive.
In this post, we'll explore:
- How LLMs work and how prompts are processed.
- How to engineer the most effective prompts.
- How to troubleshoot prompts when we don't get the outcomes we want.
Let's get started!
What's an LLM?
Large language models are a type of AI that are trained on a large (hence the name) amount of text data to understand and generate human-like language.
By predicting the next word in a sentence based on the context of the words that came before it, LLMs respond to humans in a way that is relevant and coherent. Sort of like an ultra-smart autocomplete!
When it comes to using LLMs, there are three important things to understand:
- Context: This is the surrounding information that helps an LLM understand what you're talking about. Just like when you have a conversation with a friend, the more context you offer, the more likely the conversation will make sense.
- Tokens: For LLMs, text is broken down into units of tokens. This could be a word, part of a word, or even just one single letter. AI models process tokens to generate responses, so the number of tokens you use with an LLM can impact its response. Too few tokens can lead to a lack of context, but too many could overwhelm the AI model or run into its built-in token limits. (See the short sketch after this list for a way to count them yourself.)
- Limitations: LLMs are powerful, but not all-powerful. Instead of understanding language like humans, LLMs rely on patterns and probabilities from training data. Taking a deeper dive into training data is beyond the scope of this post, but as a general rule, the ideal data set is diverse and broad. Models are never perfect; sometimes they can hallucinate, provide incorrect answers, or give nonsensical responses.
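To get a feel for how text maps to tokens, you can count them yourself. This sketch assumes the open source tiktoken library and a common encoding; it is not necessarily the tokenizer behind any particular Copilot model.

import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # a widely used encoding, chosen here for illustration

prompt = "Write a Python function that squares each number in a list."
tokens = encoding.encode(prompt)

print(len(tokens))   # how many tokens this prompt consumes
print(tokens[:5])    # the first few token IDs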
What is a prompt?
A prompt is a natural language request that asks an LLM to perform a specific task or action. A prompt gives the model context via tokens, and works around the model's potential limitations, so that the model can give you a response. For example, if you prompt an LLM with "Write a JavaScript function to calculate the factorial of a number," it will use its training data to give you a function that accomplishes that task.
Depending on how a specific model was trained, it might process your prompt differently, and present different code. Even the same model can produce different outputs. These models are nondeterministic, which means you can prompt one the same way three times and get three different results. This is why you may receive different outputs from various models out in the world, like OpenAI's GPT, Anthropic's Claude, and Google's Gemini.
Now that we know what a prompt is, how do we use prompts to get the outputs we want?
What is prompt engineering?
Imagine that a friend is helping you complete a task. It's important to give them clear and concise instructions if there's a specific way the task needs to be done. The same is true for LLMs: a well-crafted prompt can help the model understand and deliver exactly what you're looking for. The act of crafting these prompts is prompt engineering.
That's why crafting the right prompt is so important: when this is done well, prompt engineering can drastically improve the quality and relevance of the outputs you get from an LLM.
Here are a few key components of effective prompting:
- An effective prompt is clear and precise, because ambiguity can confuse the model.
- It's also important to provide enough context, but not too much detail, since this can overwhelm the LLM.
- If you don't get the answer you're expecting, don't forget to iterate and refine your prompts!
Let's try it out!
Example: How to refine prompts to be more effective
Imagine you're using GitHub Copilot and say: "Write a function that will square numbers in a list" in a new file with no prior code to offer Copilot context. At first, this seems like a straightforward and effective prompt. But there are a lot of factors that aren't clear:
- What language should the function be written in?
- Do you want to include negative numbers?
- Will the input ever have non-numbers?
- Should it affect the given list or return a new list?
How could we refine this prompt to be more effective? Let's change it to: "Write a Python function that takes a list of integers and returns a new list where each number is squared, excluding any negative numbers."
This new prompt is clear and specific about what language we want to use, what the function should do, what constraints there are, and the expected input type. When we give GitHub Copilot more context, the output will be better aligned with what we want from it!
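For reference, one plausible implementation of what that refined prompt is asking for looks like this; Copilot's actual suggestion may differ.

def square_non_negatives(numbers: list[int]) -> list[int]:
    # Return a new list with each non-negative number squared; negative numbers are excluded.
    return [n * n for n in numbers if n >= 0]

print(square_non_negatives([3, -1, 4, -5, 0]))   # [9, 16, 0]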
Just like coding, prompt engineering is about effective communication. By crafting your prompts thoughtfully, you can more effectively use tools like GitHub Copilot to make your workflows smoother and more efficient. That being said, working with LLMs means there will still be some instances that call for a bit of troubleshooting.
How to improve results when prompting LLMs
As you continue working with GitHub Copilot and other LLM tools, you may occasionally not get the output you want. Oftentimes, it's because your initial prompt wasn't specific enough. Here are a few scenarios you might run into when prompting LLMs.
Prompt confusion
It's easy to mix multiple requests or be unclear when writing prompts, which can confuse the model you're using. Say you highlight something in Visual Studio Code and tell Copilot: "fix the errors in this code and optimize it."
Is the AI supposed to fix the errors or optimize it first? For that matter, what is it supposed to optimize for? Speed, memory, or readability?
To solve this, you need to break your prompt down into concrete steps with context. We can adjust this prompt by separating our asks: "First, fix the errors in the code snippet. Then, optimize the fixed code for better performance."
Building a prompt iteratively makes it more likely that you'll get the result you want because the specific steps the model needs to take are more clear.
Token limitations
Remember, tokens are units of words or partial words that a model can handle. But there's a limit to how many tokens a given model can handle at once (this varies by model, too, and there are different models available with GitHub Copilot). If your prompt is too long or the expected output is very extensive, the LLM may hallucinate, give a partial response, or just fail entirely.
That means you want to keep your prompts concise. Again, it's important to iterate on smaller sections of your prompt, but it's also crucial to only provide necessary context. Does the LLM actually need an entire code file to return your desired output, or would just a few lines of code in a certain function do the trick? Instead of asking it to generate an entire application, can you ask it to make each component step-by-step?
Assumption errors
It's easy to assume that the LLM knows more than it actually does. If you say "add authentication to my app," does the model know what your app does? Does it know which technologies you may want to use for authentication?
When crafting a prompt like this, you'll need to explicitly state your requirements. This can be done by outlining specific needs, mentioning best practices if you have any, and once again, iterating with edge cases and constraints. By stating your requirements, you'll help ensure the LLM doesn't overlook critical aspects of your request when it generates the output.
Prompt engineering best practices
Prompt engineering can be tricky to get the hang of, but you'll get better the more you do it. Here are some best practices to remember when working with GitHub Copilot or any other LLM:
- Give the model enough context while considering any limitations it might have.
- Prompts should be clear, concise, and precise for the best results.
- If you need multiple tasks completed, break down your prompts into smaller chunks and iterate from there.
- Be specific about your requirements and needs, so that the model accurately understands the constraints surrounding your prompt.
Your next steps
We covered quite a bit when it comes to prompt engineering. We went over what LLMs are and why context is important, defined prompt engineering and crafting effective prompts, and learned how to avoid common pitfalls when working with large language models.
- If you want to watch this demo in action, we've created a YouTube tutorial that accompanies this blog.
- If you have any questions, pop them in the GitHub Community thread and we'll be sure to respond.
- Remember to sign up for GitHub Copilot (if you haven't already) to get started for free.
- Join us for the next part of the series, where we'll walk through security best practices.
Happy coding!
Looking to learn more about GitHub Copilot?
Try GitHub Copilot for free or read more about Copilot.
The post GitHub for Beginners: How to get LLMs to do what you want appeared first on The GitHub Blog.
Maintaining and contributing to open source projects can be rewarding, but it comes with a lot of small, repetitive tasks. The good news? GitHub Actions can automate the more tedious and error-prone parts of maintainership, freeing you up to focus on what matters: building and growing your community. Whether you've just launched your project or you're looking to scale, here are a few of the most helpful actions to help you along your way.
Pro tip: It's best practice to audit the source code of any action you use, and pin actions to a full-length commit SHA so that you always know what version of the code you're using.
Now, let's get started.
1. Clean up your backlog with stale
Managing issues or pull requests can be challenging, especially when users open issues that require additional information to resolve. If they don't respond with what you need, these issues can pile up and make your backlog look daunting. Stale closes any issues or pull requests that lack activity after a set number of days, keeping your open issues list nice and tidy.
Who uses it: DeepSeek-R1, opentelemetry-go, and more.
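If you want to try it, a minimal workflow along these lines is all it takes; the schedule, day counts, and messages below are illustrative placeholders rather than recommendations.

name: Close stale issues and pull requests

on:
  schedule:
    - cron: "0 1 * * *"   # run once a day

permissions:
  issues: write
  pull-requests: write

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9   # pin to a full-length commit SHA in real use, per the pro tip above
        with:
          stale-issue-message: "This issue is stale because it has had no activity for 60 days."
          days-before-stale: 60
          days-before-close: 14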
2. Let super-linter sweat the small stuff for you
It's awesome when someone takes the time to submit a pull request to your project. It's not so awesome when you have to manually reject that pull request because of a small mistake. A linter is a tool that helps you enforce best practices and consistent formatting. Super-linter is a collection of linters for a variety of languages that can automate many of the chores associated with code reviews, including enforcing style guidelines, detecting syntax errors, identifying security vulnerabilities, and ensuring code consistency across multiple languages.
Who uses it: Barman, frankenphp, and more.
3. Stop repeating yourself with create-or-update-comment
Repetitive comments for common scenarios can become tedious. Create-or-update-comment offers a reprieve, enabling you to automate tasks, like sending welcome messages to new contributors or providing standardized feedback when linters and other automated processes detect problems.
Who uses it: woocommerce, lucide, and more.
4. Create release notes with ease with Release Drafter
After all the merging, testing, and other work that goes into preparing a release, writing up the release notes is often the last thing you want to do. The good news: Release Drafter automates the process for you. Each time you merge a pull request, it updates a draft text of your release notes, so they'll be ready when it's time to publish.
Who uses it: LightGBM, Mealie, and more.
5. Stay organized with pull request labeler
Overwhelmed with PRs? Pull request labeler automatically labels pull requests based on the files or branch modified, helping you triage work and maintain a consistent labeling system.
Who uses it: Apache Lucene, Marvin, and more.
Maintaining an open source project is a labor of love, but with the right tools, it doesn't have to feel overwhelming. These actions are just a few examples of how automation can save time, reduce frustration, and help you focus on writing great code and growing your community.
Why not give them a try and see how they can transform your open source journey? Your future self (and your contributors) will thank you!
Find more actions on GitHub Marketplace.
The post 5 GitHub Actions every maintainer needs to know appeared first on The GitHub Blog.
One-on-one meetings with your manager are one of the most valuable tools you have for career growth, problem-solving, and unlocking new opportunities. So if you're only using them to provide status updates, you're leaving a lot on the table.
I didn't fully realize this potential until I mentioned in a one-on-one that I was interested in mentorship and growing my leadership skills. Not long after, I was asked to co-lead a project with an intern to build an internal tool that helped surface enterprise configuration details. This gave me the opportunity to take technical ownership on a project while mentoring someone in a real-world context, both of which pushed me outside my comfort zone in the best way. That experience made it clear: When used intentionally, one-on-ones can open doors you didn't even know were there.
Many engineers treat one-on-ones as a low-stakes standup: reporting work, mentioning blockers, and getting general feedback. While that can be useful, it barely scratches the surface of what these meetings can accomplish. Instead, think of them as a system design review for your role: a time to debug challenges, optimize your workflow, and align on long-term career goals.
Reframing your perception of what a one-on-one can accomplish
A well-structured one-on-one meeting with your manager isn't just a check-in; it's an opportunity to shape your work environment and career trajectory. You wouldn't build a system without evaluating its constraints, dependencies, and long-term maintainability. Why approach your career any differently?
Start by shifting your mindset: These meetings are not status updates. Your manager already sees your pull requests, sprint velocity, and planning docs. Instead, use this time to highlight what matters: what you've shipped, the value it's delivered, and where the friction is.
You can also use this space to validate decisions and gather context. If you're weighing different paths forward, don't just ask for approval; frame the conversation in terms of trade-offs:
"Here are the pros and cons of refactoring this service now versus later. How does this align with our broader business goals?"
Treat your manager like a decision-making API: Feed in the relevant signals, surface what's unclear, and work together on an informed response.
Use one-on-ones for career versioning (even before you're "ready")
One-on-one meetings are a great time to discuss your long-term career growth, even if you're not actively seeking a promotion. Instead of waiting until promotion season, start having these conversations early to build clarity, direction, and momentum over time.
- If you're more than a year away from seeking a promotion, start talking to your manager about:
  - Where am I already meeting expectations?
  - Where should I focus on strengthening my skills?
- If you're approaching the next level or considering going up for promotion soon, try focusing the conversation on:
  - What kind of work would demonstrate readiness for the next level?
  - Are there specific opportunities I can take on to grow my scope or visibility?
By treating growth as an iterative process rather than an all-or-nothing milestone, you can continuously improve and course-correct based on early feedback.
A useful framework for structuring these discussions is the Three Circles of Impact:
- Individual Contributions: The direct value of your work.
- Collaboration: How you work with and support others across the team.
- Enabling Others: Mentorship, knowledge sharing, or improving systems and tooling for your peers.
If you're not sure how to show impact across all three, your one-on-one is a great place to explore it. The key is surfacing your goals early so your manager can help guide you toward the kinds of work that will stretch your skills and broaden your influence.
The more you shape your contributions around these areas, the clearer your readiness for growth becomes, and the easier it is for your manager to advocate on your behalf.
Your manager can't debug what they don't see
Managers don't have full visibility into your day-to-day experience, so one-on-ones are the right time to highlight persistent blockers and unclear expectations.
For instance, I once brought up a latency issue I was chasing down. The endpoint's performance was slightly above our service level objective (SLO) target, and I had already spent a good chunk of time optimizing it. But in that conversation, my manager offered a different lens:
"Are we optimizing for the right thing? We control the SLO. If the extra latency is due to how the system is designed (and if users aren't impacted), maybe the right move is to revisit the threshold instead of squeezing more performance out of it."
That single conversation saved me hours and helped me reframe the problem entirely. Sometimes, the fix isn't in your code; it's in how you're measuring success.
Make your one-on-ones work for you
Your one-on-ones will become far more effective, and lead to real growth, when you treat them as time to think strategically, not just check in. Reframing these meetings around your goals, your environment, and your long-term development puts you in a much stronger position to advocate for yourself and your work.
Start thinking about your career progression earlier than feels natural. Come prepared. Bring in what's going well, what's stuck, and where you want to grow. And remember: your manager can't fix what they don't know about, and they can't support your goals if you never share them.
If this shift feels unfamiliar, you're not alone. The Engineer's Survival Guide helped me reframe my thinking around one-on-ones.
Here are a few ideas that stuck with me:
- Your manager isn't a mind reader.
- You can't expect guidance if you don't come with a direction.
- Your growth is a shared effort, but it starts with you.
The earlier you see one-on-ones as a tool for impact and growth, the more value you'll get from them.
The post How engineers can use one-on-ones with their manager to accelerate career growth appeared first on The GitHub Blog.
Ever find yourself staring at an AI coding assistant, wondering why it's not quite nailing what you need? Maybe it's spitting out code that's close but not quite right, or you're stuck wrestling with a problem that spans multiple files, wishing it could just get the bigger picture. Often, when developers hit these snags, it's less about the tool and more about knowing how to use it.
So here's the key question you should ask yourself: "Do I need a quick answer or a thoughtful conversation?" That's the secret to unlocking AI coding tools like GitHub Copilot.
Because different aspects of Copilot serve different needs, we're here to help you discern when it's the best time for agent mode and when you should be using Copilot Edits instead. Both are powerhouse Copilot features, built to supercharge your coding, but they shine in different scenarios.
Copilot Edits is your go-to for fast, precise tweaks: think refactoring a function, squashing a bug, or applying consistent changes across files without losing your flow.
Agent mode, on the other hand, steps up as your AI collaborator for thornier, multi-file challenges: analyzing your codebase, proposing architectural fixes, and even running terminal commands while you steer the ship and approve each move.
Figuring out which one fits your needs doesn't just save time; it turns your AI assistant into a seamless extension of your own coding instincts.
Copilot command center: Your chat window
Before learning more about agent mode and Copilot Edits, it's essential to understand the Copilot chat window in VS Code, your central hub for AI interactions.
This is where you can:
- Ask coding questions like "How do I implement JWT authentication in Node.js?"
- Use /explain to understand complex code blocks
- Debug issues with /fix
- Generate tests with /tests
- Access both Edits and agent mode features
With the chat window giving you a solid grasp of your code's context, you're in the driver's seat to pick the perfect Copilot tool, Edits or agent mode, for whatever's next.
Let me take you through a real-world example of how I've been using GitHub Copilot to evolve my personal website. It started with a simple idea: create an interactive terminal-style easter egg that showcases my skills in a developer-friendly way.
I began with a basic command processing function that handled a few simple commands:
function processCommand(command) {
  try {
    if (!command?.trim()) {
      return 'Please enter a command. Type "more" for available commands.';
    }
    const sanitizedCommand = command.toLowerCase().trim();
    switch (sanitizedCommand) {
      case 'more':
        return commands.more;
      case 'about':
        return commands.about;
      case 'skills':
        return commands.skills;
      case 'projects':
        return commands.projects;
      case 'contact':
        return commands.contact;
      default:
        return `Command not found: "${command}". Type 'more' for available commands.`;
    }
  } catch (error) {
    console.error('[Terminal] Error processing command:', error);
    return 'An error occurred while processing the command.';
  }
}
Initially, I used Copilot Edits to quickly add new features. For instance, I wanted to add a "github" command to showcase my repositories:
Copilot Edits made targeted changes to the switch statement in the processCommand function and added the new command to the commands object, a perfect example of quick, focused modifications to existing code.
As my terminal interface matured, I wanted to implement several interactive features that would require more comprehensive improvements. To get animation effects, keyboard navigation controls, and an extensible theming system that spanned multiple files to work together seamlessly, I turned to agent mode.
The agent mode advantage
When we designed agent mode, we didn't want to create a tool that's a simple AI feature: We wanted to create an AI that could pair with you! Rather than working on specific, limited changes where you might be reinventing the wheel, you're now providing higher-level direction while the AI tackles the implementation details across multiple files or systems.
That being said, with agent mode, you're still in control of the process. The AI has more latitude to explore your codebase and suggest comprehensive solutions, but you always review and approve the changes before they're applied. It's not about surrendering control; it's about effectively delegating implementation details while you focus on the bigger picture.
What makes agent mode special:
- Codebase search: It searches your codebase to find relevant files without you explicitly specifying them.
- Self-iteration: It can iterate on its own output to complete your entire request in one go.
- Error identification and repair: It automatically recognizes and fixes errors in suggested code.
- Terminal command execution: It suggests and runs terminal commands with your permission.
- Build and run capabilities: It can build and run your application to check if changes work correctly.
I reach for agent mode when:
- Building complete features: "Add analytics tracking throughout the app"
- Navigating unfamiliar codebases: "Help me understand how authentication works in this project"
- Writing and verifying tests: "Write tests for the UserService and make sure they pass"
- Implementing terminal-heavy tasks: "Set up a new React project with TypeScript, Redux, and styled-components"
- Doing complex refactoring: "Refactor our API calls to use the new error handling pattern"
Continuing with my terminal easter egg, let's say I want to implement a much more extensive upgrade with multiple features, like adding a typing animation effect for responses, command history navigation with up/down arrows, and tab completion for commands. At the same time, I want to create a new TerminalThemes.css file with different color scheme options that users can switch between with a "theme" command.
This is where agent mode truly shines. The task spans multiple files, requires an understanding of the existing codebase, and involves creating entirely new functionality. Here, agent mode would:
- Search through the codebase to understand the terminal implementation.
- Create the new CSS file for themes.
- Add typing animation functionality to terminal responses.
- Implement keyboard handlers for history navigation.
- Build the tab completion system.
- Add the theme command to the switch statement.
- Test the implementation to ensure everything works correctly.
This is the beauty of agent mode: it has a rich understanding of patterns and relationships across different parts of the codebase! Agent mode can ensure that the typing animation doesn't interfere with existing functionality, check that the history navigation is compatible with the current command processing, and confirm that the theme switching properly modifies the UI. It is precisely this kind of multi-file feature work with interlinked components where agent mode shines compared to the more laser-focused Copilot Edits.
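As a rough illustration of the kind of code agent mode would touch here, below is a minimal sketch of two of those pieces: a typing animation helper and a theme switcher. The function names, theme names, and the data-attribute hook into TerminalThemes.css are hypothetical, not the generated code itself; agent mode would also wire these into the existing command processing and keyboard handling.

const THEMES = ['classic', 'matrix', 'solarized']; // styled via selectors in TerminalThemes.css

function typeResponse(text, outputElement, delayMs = 25) {
  // Append one character at a time to simulate a typing effect.
  let i = 0;
  const timer = setInterval(() => {
    outputElement.textContent += text[i];
    i += 1;
    if (i >= text.length) clearInterval(timer);
  }, delayMs);
}

function setTheme(name) {
  if (!THEMES.includes(name)) {
    return `Unknown theme: "${name}". Available themes: ${THEMES.join(', ')}`;
  }
  document.body.dataset.theme = name; // e.g. [data-theme="matrix"] rules in TerminalThemes.css
  return `Theme switched to ${name}.`;
}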
In the below episode of GitHub Checkout, you can see agent mode implementing a complete water consumption tracker feature in an iOS app. It not only writes the code but also builds, runs, and tests the app, all from a single prompt, without the developer needing to touch the editor directly.
The secret: use both together!
After many hours with both tools, here's my takeaway: It's not about picking one, or one being better than the other. It's about wielding them together for different purposes. Copilot Edits and agent mode work well together to augment our coding abilities, whether that's a more prompt-centric style of development with agent mode, or maintaining granular file edit control via Copilot Edits.
Regardless of which tool you are using, you are still the pilot, steering the AI to supercharge your own productivity. The more context you provide in your prompts, the better results you'll get. And remember, you always have the final say on any suggested changes.
If you're just jumping into the world of AI, check out our new Getting started with GitHub Copilot tutorial series, where we walk you through everything from initial setup to advanced techniques. It's perfect for developers at any skill level who want to quickly become productive with AI-assisted coding.
Happy coding, everyone!
How to get started
Want to try these features? Here's my quick setup guide:
For Copilot Edits:
- Open VS Code and look for the Copilot Chat window.
- Click the "Edit with Copilot" button to open the Copilot Edits view.
- Add your relevant files to the working set. Copilot will only make changes to files in this set (unless it's creating a new file).
- Type your prompt describing what changes you need.
- Review the "diff" of all proposed changes before accepting them.
For agent mode:
- Make sure you're using VS Code version 1.99 or higher (as of original publish date, available in VS Code Insiders).
- Switch from "Edit" to "Agent" mode.
- Type your prompt, describing the complex task or feature you want to implement.
- Agent mode will work autonomously, but it still requires your approval before changes are applied.
- For best results, consider using custom instructions to personalize how agent mode works with your codebase; a minimal example follows below.
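If you haven't used custom instructions before, they are plain Markdown guidance that Copilot picks up automatically; repository-level instructions commonly live in a .github/copilot-instructions.md file. The contents below are placeholders to show the idea, not a recommended configuration:

# Copilot instructions (example)

- This project is a React + TypeScript single-page app; prefer functional components and hooks.
- Reuse the existing error handling helpers rather than adding bare try/catch blocks.
- Every new feature needs accompanying unit tests.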
Try GitHub Copilot free
Ready to bring AI agents into your workflow? Learn more about GitHub Copilot for Business or start your free trial of GitHub Enterprise today.
The post Mastering GitHub Copilot: When to use AI agent mode appeared first on The GitHub Blog.
Imagine this: You're sipping your morning coffee and scrolling through your emails when you spot it, a vulnerability report for your open source project. It's your first one. Panic sets in. What does this mean? Where do you even start?
Many maintainers face this moment without a clear roadmap, but the good news is that handling vulnerability reports doesn't have to be stressful. Below, we'll show you that with the right tools and a step-by-step approach, you can tackle security issues efficiently and confidently.
Let's dig in.
What is vulnerability disclosure?
If you discovered that the lock on your front door was faulty, would you attach a note announcing it to everyone passing by? Of course not! Instead, you'd quietly tell the people who need to know (your family or housemates) so you can fix it before it becomes a real safety risk.
That's exactly how vulnerability disclosure should be handled. Security issues aren't just another bug. They can be a blueprint for attackers if exposed too soon. Instead of discussing them in the open, maintainers should work with security researchers behind the scenes to fix problems before they become public.
This approach, known as Coordinated Vulnerability Disclosure (CVD), keeps your users safe while giving you time to resolve the issue properly.
To support maintainers in this process, GitHub provides tools like Private Vulnerability Reporting (PVR), draft security advisories, and Dependabot alerts. These tools are free to use for open source projects, and are designed to make managing vulnerabilities straightforward and effective.
Let's walk through how to handle vulnerability reports, so that the next time one lands in your inbox, you'll know exactly what to do!
The vulnerability disclosure process, at a glance
Here's a quick overview of what you should do if you receive a vulnerability report:
- Enable Private Vulnerability Reporting (PVR) to handle submissions securely.
- Collaborate on a fix: Use draft advisories to plan and test resolutions privately.
- Request a Common Vulnerabilities and Exposures (CVE) identifier: Learn how to assign a CVE to your advisory for broader visibility.
- Publish the advisory: Notify your community about the issue and the fix.
- Notify and protect users: Utilize tools like Dependabot for automated updates.
Now, let's break down each step.
1. Start securely with PVR
Here's the thing: There are security researchers out there actively looking for vulnerabilities in open source projects and trying to help. But if they don't know who to report the problem to, it's hard to resolve it. They could post the issue publicly, but this could expose users to attacks before there's a fix. They could send it to the wrong person and delay the response. Or they could give up and move on.
The best way to ensure these researchers can reach you easily and safely is to turn on GitHub's Private Vulnerability Reporting (PVR).
Think of PVR as a private inbox for security issues. It provides a built-in, confidential way for security researchers to report vulnerabilities directly in your repository.
How to enable PVR for a repository or an organization.
Heads up! By default, maintainers don't receive notifications for new PVR reports, so be sure to update your notification settings so nothing slips through the cracks.
Enhance PVR with a SECURITY.md file
PVR solves the "where" and the "how" of reporting security issues. But what if you want to set clear expectations from the start? That's where a SECURITY.md file comes in handy.
PVR is your front door, and SECURITY.md is your welcome guide telling visitors what to do when they arrive. Without it, researchers might not know what's in scope, what details you need, or whether their report will be reviewed.
Maintainers are constantly bombarded with requests, making triage difficult, especially if reports are vague or missing key details. A well-crafted SECURITY.md helps cut through the noise by defining expectations early. It reassures researchers that their contributions are valued while giving them a clear framework to follow.
A good SECURITY.md file includes:
- How to report vulnerabilities (e.g., "Please submit reports through PVR.")
- What information should be included in a report (e.g., steps to reproduce, affected versions, etc.)
Pairing PVR with a clear SECURITY.md file helps you streamline incoming reports more effectively, making it easier for researchers to submit useful details and for you to act on them efficiently.
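For illustration only, a minimal SECURITY.md covering those points might look something like the sketch below; the wording, scope, and response window are placeholders you'd adapt to your project.

# Security Policy

## Reporting a vulnerability

Please report suspected vulnerabilities privately through GitHub's "Report a
vulnerability" button on this repository's Security tab (PVR). Please do not
open public issues for security problems.

When reporting, include:
- Steps to reproduce or a proof of concept
- Affected versions or commits
- Any known workarounds

You can expect an acknowledgment within a few business days.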
2. Collaborate on a fix: Draft security advisories
Once you confirm the issue is a valid vulnerability, the next step is fixing it without tipping off the wrong people.
But where do you discuss the details? You can't just drop a fix in a public pull request and hope no one notices. If attackers spot the change before the fix is officially released, they can exploit it before users can update.
What you'll need is a private space where you and your collaborators can investigate the issue, work on and test a fix, and then coordinate its release.
GitHub provides that space with draft security advisories. Think of them like a private fork, but specifically for security fixes.
Why use draft security advisories?
- They keep your discussion private, so that you can work privately with your team or trusted contributors without alerting bad actors.
- They centralize everything, so your discussions, patches, and plans are kept in a secure workspace.
- They're ready for publishing when you are: You can convert your draft advisory into a public advisory whenever you're ready.
How to create a draft advisory.
By using draft security advisories, you take control of the disclosure timeline, ensuring security issues are fixed before they become public knowledge.
3. Request a CVE with GitHub
Some vulnerabilities are minor contained issues that can be patched quietly. Others have a broader impact and need to be tracked across the industry.
When a vulnerability needs broader visibility, a Common Vulnerabilities and Exposures (CVE) identifier provides a standardized way to document and reference it. GitHub allows maintainers to request a CVE directly from their draft security advisory, making the process seamless.
What is a CVE, and why does it matter?
A CVE is like a serial number for a security vulnerability. It provides an industry-recognized reference so that developers, security teams, and automated tools can consistently track and respond to vulnerabilities.
Why would you request a CVE?
- For maintainers, it helps ensure a vulnerability is adequately documented and recognized in security databases.
- For security researchers, it provides validation that their findings have been acknowledged and recorded.
CVEs are used in security reports, alerts, feeds, and automated security tools. This helps standardize communication between projects, security teams, and end users.
Requesting a CVE doesn't make a vulnerability more or less critical, but it does help ensure that those affected can track and mitigate risks effectively.
How to request a CVE.
Once assigned, the CVE is linked to your advisory but will remain private until you publish it.
By requesting a CVE when appropriate, you're helping improve visibility and coordination across the industry.
4. Publish the advisory
Good job! You've fixed the vulnerability. Now, it's time to let your users know about it. A security advisory does more than just announce an issue. It guides your users on what to do next.
What is a security advisory, and why does it matter?
A security advisory is like a press release for an important update. It's not just about disclosing a problem; it's about ensuring your users know exactly what's happening, why it matters, and what they need to do.
A clear and well-written advisory helps to:
- Inform users: Clearly explain the issue and provide instructions for fixing it.
- Build trust: Demonstrate accountability and transparency by addressing vulnerabilities proactively.
- Trigger automated notifications: Tools like GitHub Dependabot use advisories to alert developers whose projects depend on the affected versions.
How to publish a security advisory.
Once published, the advisory becomes public in your repository and includes details about the vulnerability and how to fix it.
Best practices for writing an advisory
- Use plain language: Write in a way that's easy to understand for both developers and non-technical users
- Include essential details:
  - A description of the vulnerability and its impact
  - Versions affected by the issue
  - Steps to update, patch, or mitigate the risk
- Provide helpful resources:
  - Links to patched versions or updated dependencies
  - Workarounds for users who can't immediately apply the fix
  - Additional documentation or best practices
Check out this advisory for a well-structured reference.
A well-crafted security advisory is not just a formality. It's a roadmap that helps your users stay secure. Just as a company would carefully craft a press release for a significant change, your advisory should be clear, reassuring, and actionable. By making security easier to understand, you empower your users to protect themselves and keep their projects safe.
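If you're unsure how to structure one, the outline below is a rough sketch modeled on the default sections GitHub suggests when you draft an advisory; adapt the headings and level of detail to your project.

Impact: What kind of vulnerability is it, and who is affected?
Patches: Has the problem been patched? Which versions should users upgrade to?
Workarounds: Is there a way to mitigate the issue without upgrading?
References: Links to the fix, release notes, or related advisories.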
5. After publication: Notify and protect users
Publishing your security advisory isn't the finish line. It's the start of helping your users stay protected. Even the best advisory is only effective if the right people see it and take action.
Beyond publishing the advisory, consider:
- Announcing it through your usual channels: Blog posts, mailing lists, release notes, and community forums help reach users who may not rely on automated alerts.
- Documenting it for future users: Someone might adopt your project later without realizing a past version had a security issue. Keep advisories accessible and well-documented.
You should also take advantage of GitHub tools, including:
- Dependabot alerts:
  - Automatically informs developers using affected dependencies
  - Encourages updates by suggesting patched versions
- Proactive prevention:
  - Use scanning tools to find similar problems in different parts of your project. If you find a problem in one area, it might also exist elsewhere
  - Regularly review and update your project's dependencies to avoid known issues
- CVE publication and advisory database:
  - If you requested a CVE, GitHub will publish the CVE record to CVE.org for industry-wide tracking
  - If eligible, your advisory will also be added to the GitHub Advisory Database, improving visibility for security researchers and developers
Whether through automated alerts or direct communication, making your advisory visible is key to keeping your project and its users secure.
Next report? You're ready!
With the right tools and a clear approach, handling vulnerabilities isn't just manageable; it's part of running a strong, secure project. So next time a report comes in, take a deep breath. You've got this!
FAQ: Common questions from maintainers
You've got questions? We've got answers! Whether you're handling your first vulnerability report or just want to sharpen your response process, here is what you need to know.
1. Why is Private Vulnerability Reporting (PVR) better than emails or public issues for vulnerability reports?
Great question! At first glance, email or public issue tracking might seem like simple ways to handle vulnerability reports. But PVR is a better choice because it:
- Keeps things private and secure: PVR ensures that sensitive details stay confidential. No risk of accidental leaks, and no need to juggle security concerns over email.
- Keeps everything in one place: No more scattered emails or external tools. Everything (discussions, reports, and updates) is neatly stored right in your repository.
- Makes it easier for researchers: PVR gives researchers a dedicated, structured way to report issues without jumping through hoops.
Bottom line? PVR makes life easier for both maintainers and researchers while keeping security under control.
2. What steps should I take if I receive a vulnerability report that I believe is a false positive?
Not every report is a real security issue, but it's always worth taking a careful look before dismissing it.
- Double-check details: Sometimes, what seems like a false alarm might be misunderstood. Review the details thoroughly.
- Ask for more information: Ask clarifying questions or request additional details through GitHub's PVR. Many researchers are happy to provide further context.
- Check with others: If you're unsure, bring in a team member or a security-savvy friend to help validate the report.
- Close the loop: If it is a false positive, document your reasoning in the PVR thread. Transparency keeps things professional and builds trust with the researcher.
3. How fast do I need to respond?
- Acknowledge ASAP: Even if you don't have a fix yet, let the researcher know you got their report. A simple "Thanks, we're looking into it" goes a long way.
- Follow the 90-day best practice: While there's no hard rule, most security pros aim to address verified vulnerabilities within 90 days.
- Prioritize by severity: Use the Common Vulnerability Scoring System (CVSS) to gauge urgency and decide what to tackle first.
Think of it this way: No one likes being left in the dark. A quick update keeps researchers engaged and makes collaboration smoother.
4. How do I figure out the severity of a reported vulnerability?
Severity can be tricky, but don't stress! There are tools and approaches that make it easier.
- Use the CVSS calculator: It gives you a structured way to evaluate the impact and exploitability of a vulnerability (see the worked example after this list).
- Consider real-world impact: A vulnerability that requires special conditions to exploit might be lower risk, while one that can be triggered easily by any user could be more severe.
- Collaborate with the reporter: They might have insights on how the issue could be exploited in real-world scenarios.
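For a concrete sense of scale: a flaw that is exploitable over the network with no privileges or user interaction, and that fully compromises confidentiality, integrity, and availability, maps to the CVSS 3.1 vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H and scores 9.8 (Critical), while the same impact requiring local access and high privileges scores considerably lower.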
Take it step by step; it's better to get it right than to rush.
5. Should I request a CVE before or after publishing an advisory?
There's no one-size-fits-all answer, but here's a simple way to decide:
- If it's urgent: Publish the advisory first, then request a CVE. CVE assignments can take 1 to 3 days, and you don't want to delay the fix.
- For less urgent cases: Request a CVE beforehand to ensure it's included in Dependabot alerts from the start.
Either way, your advisory gets published, and your users stay informed.
6. Where can I learn more about managing vulnerabilities and security practices?
There's no need to figure everything out on your own. These resources can help:
- GitHub security documentation: Get detailed guidance on repository security advisories.
- Open Source Security Foundation (OpenSSF): Join the open source security community to learn from others.
Security is an ongoing journey, and every step you take makes your projects stronger. Keep learning, stay proactive, and you'll be in great shape.
Next steps
- Turn on private vulnerability reporting today and make it easy for researchers to report security issues.
- Learn more about GitHub Security Advisories and how they can streamline your process.
- Explore coordinated vulnerability disclosure and build a security-first approach for your project.
By taking these steps, you're protecting your project and contributing to a safer and more secure open source ecosystem.
The post A maintainerâs guide to vulnerability disclosure: GitHub tools to make it simple appeared first on The GitHub Blog.
March 22 is World Water Day, a day intended to educate, inspire, and promote action around the importance of clean and safe water. For organizations like charity: water, this is their mission year-round. They're working to help the more than 700 million people worldwide who don't have access to clean water, and they're finding innovative ways of using technology to do it.
We've seen entire communities transformed with access to clean water and the impact that brings, like the ability to grow your own food and start a small community business. These things really start to accelerate once people are not spending their time retrieving clean water.
How tech is changing the tide of the water crisis
What else makes charity: water's approach unique? They have an in-house engineering team, making them an innovator in the nonprofit space. This enables them to manage their own systems, ensuring they function correctly and evolve to have the greatest impact on their mission. Their developers are dedicated to changing the tide of the water crisis, and they know how impactful their work can be.
And they have used this innovative mindset to help them do what they call "reinventing charity." They want to create a space based on trust, proof, and integrity that makes donors feel connected to something that's happening half a world away. They've done this by putting every single project onto a map with GPS coordinates and photos. Now, everyone who gives can see exactly how they're making a difference.
The impact of GitHub Copilot on their mission
GitHub is an integral part of how their team collaborates and scales, but the adoption of GitHub Copilot has completely changed their workflow. Now, engineers can spend more time focused on solving business problems and doing things that require creative human thought.
Copilot is like a magical autocomplete, it lets us focus more on what is the actual challenge we're trying to solve, rather than worrying about the syntax.
Here are their top three reasons why GitHub Copilot has been a gamechanger:
- Inline version of chat: they love the ease of telling it exactly what they want it to do
- Testing: every person on the team uses it to test, helping them keep 99% spec coverage
- Suggestions and corrections: they're able to spend less time looking up syntax and correcting trivial errors
GitHub Copilot helps us code about 10% faster, which, across a team of 8, is extremely effective.
Another thing that helps this small but mighty team to get the job done is relying on the power of open source. They know that developers are rarely solving a truly unique problem, which is where open source libraries come into their workflow.
Collaborating with the open source community is almost the same as collaborating with our own teammates.
From simplifying the workflow of a developer to having an impact on the global water crisis, technology and AI are reshaping the way we work. Dive into a better way of working by trying GitHub Copilot for free today, or if you are a nonprofit organization, check out GitHub for Nonprofits for exclusive discounts.
GitHub and Microsoft are committed to charity: water's work, which is why we have supported them, along with many other organizations, to help further their missions. We are dedicated to achieving our goal of replenishing more water than we consume by 2030, and invest in projects focused on land conservation, aquatic habitat restoration, water supply reliability, and water quality. Learn more about Microsoft's Sustainability goals.
The post World Water Day: how GitHub Copilot is helping bring clean water to communities appeared first on The GitHub Blog.
Have you ever wondered how the cool cats of the internet make Markdown checklists in their GitHub repositories, issues, and pull requests? Me too. And I'm here to say: you too can be a cool cat.
Markdown checklists are handy for visualizing tasks that need to be done. Beyond that, GitHub can turn these checklists into task lists in your issues and pull requests to make your project tracking easier and better. Let's learn how to make 'em!
How to create a checklist in Markdown
Making a checklist is exactly like making a regular list in Markdown (a "-" starting each line), but you add "[ ]" or "[x]" after the "-", like so:
A regular list:
- This is a list item
- This is another list item
A checklist:
- [ ] This is an unchecked item
- [x] This is a checked item
When the checklist is rendered, it looks like this:
Feel free to use the short Markdown checklist template above to get a list going for yourself.
Markdown checklists are a great way to keep track of tasks in your README files. For example, you could mark down which features might be missing or which aspects of your project need contributors. You can also use these checklists for documentation (like for noting docs coverage), notes, issues, or pull requests.
The best part: GitHub adds extra functionality to your Markdown checklists, and I'll get to that right...now!
From checklist to task list
A task list on GitHub is a glorified Markdown checklist, allowing you to do even more project planning and tracking when it's in an issue or pull request.
For example, let's say you make an issue to add some features to a project of yours, like so:
- [ ] Update dependencies
- [ ] Add archiving feature
- [ ] Research design system
It will render like this in the issue:
It might look like a normal checklist, but if you hover over one of the bullet points, you'll be able to drag and drop each task to re-order them:
Or, if you hit the "..." menu on the right side, you can turn each of the individual checklist items into its own issue or sub-issue, or re-order them from there!
Turning those checklist items into issues and sub-issues means that you can turn that planned work into tracked tasks. Breaking down your existing issues into smaller pieces might seem daunting, but doing it in this way connects all the pieces together nicely as you break things up into more specific chunks.
And let's just say you did the exact same thing, but in a pull request. You still have the ability to re-order the tasks, and it will render the same way as it does in the issue:
But now, because you've built this checklist in the pull request, you get more visibility into how many tasks have been completed and how many remain in the list of pull requests for your repository. Look just below the title of your pull request: You can see the status of your project right there!
Where can I learn more?
Love that you asked that question. Good job. Head over to the GitHub Docs to learn more about task lists on GitHub.
Happy coding!
The post Video: How to create checklists in Markdown for easier task tracking appeared first on The GitHub Blog.