In March, we experienced one incident that resulted in degraded performance across GitHub services.
March 29 7:00 UTC (lasting 58 hours)
Between March 29 7:00 UTC and March 31 17:00 UTC, GitHub experienced service degradation due to two separate, but related incidents. On March 29, users were unable to unsubscribe from GitHub marketing email subscriptions due to a service outage. Additionally, on March 31, 2025 from 7:00 UTC to 16:40 UTC users were unable to submit ebook and event registration forms on resources.github.com, also due to a service outage.
The March 29 incident occurred due to expired credentials used for an internal service, preventing customers from being able to unsubscribe directly from marketing/sales topics through github.com/settings/emails UI and from performing the double opt-in step required by some countries. A similar credential expiry on March 31 resulted in users experiencing degradation accessing resources.github.com.
The cause was traced to a bug in the automated alerting that monitors upcoming credential expirations, which meant the credentials were only discovered after they had already expired. This resulted in two incidents before we could deploy a durable fix. We mitigated both by renewing the credentials and redeploying the affected services.
To improve future response times and prevent similar issues, we have enhanced our credential expiry detection, alerting, and rotation processes, and are working on improving on-call observability.
Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog.
The post GitHub Availability Report: March 2025 appeared first on The GitHub Blog.
Whether it’s building, testing, or deploying code, automating manual processes is key to improving developer experience and achieving a successful DevOps strategy.
On GitHub, you can use GitHub Actions to not only implement your CI/CD pipeline, but also automate other processes both on and off GitHub. When you are adopting GitHub Actions on GitHub Enterprise Cloud, you can choose between GitHub-hosted runners and self-hosted runners to run your workloads, and each has its pros and cons.
In this post, we’ll compare GitHub-hosted runners with self-hosted runners across five areas to help you determine which type best fits your GitHub Actions adoption strategy.
What are GitHub-hosted runners and self-hosted runners?
GitHub-hosted runners and self-hosted runners are based on the same open-source software and both support macOS, Windows, and Linux. But they have many differences.
GitHub-hosted runners are fully managed on GitHub’s infrastructure using pre-configured Windows, Linux, and macOS virtual machines. In addition to offering standard runners for typical workloads, hosted runners offer larger runners with more resources (memory, CPU, and storage), custom images, static IP ranges, and Azure Virtual Network integration for enhanced security control.
Self-hosted runners operate on your own infrastructure, whether on-premises or in the cloud. You manage all aspects—configuration, security, and scaling. They also allow you to operate runners in places you couldn’t otherwise—for example, on GitHub Enterprise Server or on custom hardware. They can also be the only way to implement certain compliance requirements, especially when working with highly secured systems.
Both options offer distinct advantages depending on your specific needs and resources. Let’s explore when GitHub-hosted runners may be the right choice for your projects, and when it may be better to use self-hosted runners.
Fully managed or self-managed?
A key distinction between these two options is where they’re hosted, as we’ve pointed out. But that choice comes with several implications.
GitHub-hosted runners provide managed infrastructure with pools of on-demand virtual machines (VMs) that are automatically secured and updated. The environments are ephemeral, with the disks reimaged after each job, preventing files from previous jobs from affecting subsequent runs. The VMs are optimized for GitHub Actions, with pre-installed software and tools, including the latest versions of GitHub CLI, Docker, and common development platforms to ensure fast start times and avoid rate limits.
With GitHub-hosted runners, you can jump right in and start building workflows. There’s nothing to configure or secure before you start, making them ideal when you want to get started quickly. And we all prefer to spend more time on code than infrastructure, right?
Self-hosted runners offer you complete flexibility in defining your solution, but that also means you are responsible for managing the infrastructure, images, caches, and security, and for monitoring availability and usage against GitHub’s rate limits. This requires expertise in GitHub Actions architecture, VM and container image building, and network and infrastructure management. If your core business offering is scalable infrastructure solutions or Kubernetes, self-hosted runners may make sense.
Let’s take a closer look.
Scalability
To remain productive, it’s important to have highly available resources on demand, especially for CI/CD workloads, where waiting for a job to run may mean you’re blocked from working on other tasks. In fact, a single wasted hour each week can cost a company over $4,000 a year per developer!
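As a rough illustration of where a figure like that comes from: at a fully loaded cost of about $80 per hour, one lost hour a week across roughly 50 working weeks adds up to about $4,000 per developer per year.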
But scaling highly available, on-demand resources is hard. Even with a well-designed cloud infrastructure, it takes time to provision new virtual machines. You need systems in multiple regions to maintain uptime, with 20–25% spare capacity to scale quickly and handle unexpected system failures.
GitHub-hosted runners take advantage of Microsoft’s deep data center and cloud expertise and have dedicated teams to meet our service level agreement (SLA) of 99.9% availability. And that’s without requiring any infrastructure expertise on your part. In fact, many teams consider self-hosted runners in hopes of beating this availability, but it turns out that’s not even technically possible, as all runners depend on the same services and control plane. That said, there are conditions where self-hosted runners may work for you.
Self-hosted runners may meet your needs if you need a fixed number of servers, are primarily focused on deployment to non-cloud resources, and don’t need to scale on demand. Just remember that the instances are not natively ephemeral, so you’ll need to have a strategy to keep the instances free from artifacts created by earlier runs. Self-hosted runners also lack automatic scaling capabilities; they require a scaling solution to be able to support large teams or create new instances dynamically.
GitHub’s Actions Runner Controller (ARC) offers a solution, but it has limitations as it requires Kubernetes expertise and only supports Linux runners. Kubernetes relies on containers instead of VMs, which can require you to troubleshoot resource contention and scaling issues. ARC can also offer high availability by having multiple clusters. As we noted before, if your primary business is hosting and managing Kubernetes clusters, then ARC may be the right approach.
ARC does not support macOS or Windows workloads, and both environments present a number of limitations. For example, on macOS, you are required to use Apple hardware, you are limited to two VMs per machine, and containerizing the Apple runtime is not supported. For Windows, virtual machines are supported, but you need a custom orchestrator for scaling the instances. While you can create Windows containers and manage them with Kubernetes, the containers have slow startup times and may not support some of the necessary development and testing tools.
In short, we recommend GitHub-hosted runners for both macOS and Windows workloads.
Security
Security is critical for CI/CD processes, since they may require access to internal or production resources, and builds often use third-party libraries, runtimes, and tools, which can create a large attack surface if not properly secured.
GitHub-hosted runners provide built-in security through a defense-in-depth, zero-trust approach. VMs provide network isolation, preventing exposure to other runners and corporate resources. In fact, access to corporate or cloud resources requires elevating privileges (we recommend OIDC). Their ephemeral nature eliminates code persistence and prevents application execution after job completion, reducing unauthorized access risks.
Storage disks for hosted runners are encrypted at rest, ensuring the code is protected on the disk. All communications are encrypted to GitHub, and deployments to Microsoft Azure are routed through the Azure backbone, minimizing transits through the public internet. We provide regular security updates to both operating systems and runner software. The minimized attack surface and reduced risk of security breaches are key factors in the Department of Defense DevSecOps Reference Design’s recommendation to prefer GitHub-hosted runners for workloads up to Impact Level 5.
Self-hosted runners shift security responsibility entirely to you, requiring management of network, infrastructure, images, containers, and caches—that’s a lot of work. You also need to keep everything up to date, as runners connected to GitHub Enterprise Cloud will not be able to connect if they are more than 30 days behind the current release.
Not to mention, if you operate runners within your network environment with access to corporate resources and production environments, you’ll want to implement a zero-trust, defense-in-depth strategy with time-limited resource access, which demands a high level of network security expertise.
Finally, you’ll need to implement and keep updated both a tool cache and an Actions archive cache. Otherwise, you’re likely to encounter our rate limits as you scale up.
Troubleshooting
Keeping you productive means that problems with workflows or jobs—lack of resources, network issues, outages—need to be solved quickly. As a result, it’s important to have a support strategy.
GitHub-hosted runners come with 24/7 support across all time zones, with premium plans offering dedicated reliability engineers and rapid 30-minute response times for critical issues. This eliminates the need for infrastructure troubleshooting on your part. GitHub handles all runner environment issues, from performance problems to queue times, letting you focus on development while we roll up our sleeves, figure out the problems, and get them fixed.
Self-hosted runners, however, shift first-level support responsibility to you, which means someone will have to troubleshoot performance, network, or queueing issues when they happen, leaving less time for the fun coding stuff. 🙁
Not only that, but GitHub can only assist with the Actions service itself; we cannot assist with your infrastructure, Kubernetes clusters, or custom orchestration solutions. So if we determine the issue is with your system, you’ll be on your own to solve it. Without sufficient planning, you can spend a lot of time waiting for a solution that lets you get back to writing and deploying code. That can be a big price to pay for self-hosted runners.
Cost management
Finally, there’s the issue of cost. If you are offering Kubernetes or infrastructure management solutions, self-hosted runners may have some advantages. If not, then GitHub-hosted runners are likely the answer here too.
GitHub-hosted runners operate on a pay-as-you-go model with no upfront costs or commitments. Teams optimize expenses through workflow improvements and appropriate runner selection. In addition, there are built-in cost savings. For example, GitHub doesn’t charge network egress fees—a significant advantage when working with large container images on cloud platforms. GitHub also has a partnership with Docker that allows unlimited image pulls from Docker Hub by GitHub-hosted runners, which often eliminates the need to create a pass-through registry or purchase business licenses for your CI/CD processes. Maintaining, supporting, and securing the environment is handled by GitHub, avoiding additional staff and service expenses. Finally, Enterprise accounts benefit from 50,000 free monthly minutes for standard runners.
With self-hosted runners, as in other areas, organizations assume responsibility for all infrastructure, network, storage, security, and support costs. This gives you a lot of flexibility in defining the environment, right-sizing your resources, and customizing the networking. While per-minute virtual machine expenses might initially seem lower, the total cost of ownership can (and often does) exceed GitHub-hosted solutions when accounting for these additional support costs.
Which runner is best for you?
Choosing the right runner depends on your specific needs. Self-hosted runners are most suitable when using GitHub Enterprise Server (which lacks hosted runners), if your core business involves managing infrastructure or Kubernetes, or when you have compliance requirements not met by GitHub Enterprise Cloud with data residency. Scaling and ephemerality challenges make self-hosting less ideal for Windows and macOS workloads. If self-hosting is necessary, consider a hybrid approach and use self-hosted runners just for the specific workloads where they are needed.
For most developers and the vast majority of scenarios, unless you have very unique requirements or are willing to deeply invest in infrastructure to keep your CI/CD system humming, GitHub-hosted runners are likely your best option. They’re especially beneficial for those new to GitHub Actions and they let you spend your time focused on business value, new ideas, and writing code—instead of managing runners.
The post When to choose GitHub-Hosted runners or self-hosted runners with GitHub Actions appeared first on The GitHub Blog.
Welcome to the next episode in our GitHub for Beginners series, where we are diving into the world of GitHub Copilot. This is our fourth episode, and we’ve already talked about Copilot in general, some of its essential features, and how to write good prompts to get the most out of Copilot. We have all the previous episodes on our blog and available as videos.
Today we’re going to be talking about security. Everyone knows that you should make your code secure and no one wants to be responsible for any potential hacks. But did you know that GitHub Copilot can help you with some security best practices? We’ll show you how you can use it and other tools to help make your projects more secure from day one.
Let’s talk security
Most security teams, through no fault of their own, don’t have sufficient time or resources to properly secure their companies’ code. That makes developers the first line of defense, which is tough because many developers don’t have formal security training and are often forced to learn these skills on the job.
Fortunately, GitHub is here to help—both with GitHub Copilot and a bunch of security tools you can use for free if you’re building open source code.
Copilot to the rescue
GitHub Copilot doesn’t just know how to write code; it also understands how to make code more secure. But it isn’t perfect, so you have to know what to ask. You also shouldn’t rely on it as your only security tool.
Let’s start with a simple example.
Let’s say you’re working with a SQL database, using an INSERT statement to add data. Now, you may have heard of SQL injection. If you haven’t, or if you need a refresher, that’s where someone sneaks in a malicious command through a regular text field, like a comment box or name input. This could be a command like DROP TABLE, which could delete your entire database.
One way to protect your code is to ask GitHub Copilot to regenerate it. Delete the code that uses the INSERT command and write a clear comment explaining what you want to do. For example, you could insert the following comment:
/*
insert from cart using a parameterized query:
mail, product_name, user_name, product_id, address, phone, ship_date, price
*/
Copilot will suggest a safer, parameterized query—because security shouldn’t be guesswork.
Once you’ve added that comment, if GitHub Copilot is enabled in your IDE, it’ll suggest some code for you. Before accepting the code, be sure to review what Copilot wrote—it’s a helpful assistant, but it’s your job to verify and validate. Once you’ve verified the output, hit Tab to accept the suggestion.
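To make that concrete, here is roughly the shape of suggestion you’re hoping to see. This is a hand-written sketch using Python’s sqlite3 module, with the table and column names taken from the comment above purely for illustration; it is not Copilot’s literal output.

```python
import sqlite3

def insert_from_cart(conn: sqlite3.Connection, cart: dict) -> None:
    # A parameterized query: the "?" placeholders keep user-supplied values
    # as data, so input like "'); DROP TABLE orders; --" can't run as SQL.
    conn.execute(
        """
        INSERT INTO orders
            (mail, product_name, user_name, product_id,
             address, phone, ship_date, price)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (
            cart["mail"], cart["product_name"], cart["user_name"],
            cart["product_id"], cart["address"], cart["phone"],
            cart["ship_date"], cart["price"],
        ),
    )
    conn.commit()
```

The key point is that the values are passed separately from the SQL text instead of being concatenated into it, so they can never be interpreted as commands.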
Now, what if you don’t want to delete your existing code? You may have a chunk of code that you want to check. In this case, you can ask Copilot to look through your code and fix it. To do this, highlight the code you want to review, open Copilot Chat and ask “are there any vulnerabilities in this function?”
If you want to expand the scope, you could select the whole file or use @workspace in the Copilot Chat window. Using @workspace tells Copilot to look at all the files in your workspace, not just the ones that you currently have open. A great prompt to try is: “@workspace what’s the attack surface?” That’s a fancy way of asking how someone might try to attack this project.
Copilot will scan your code and offer whatever suggestions it can find. These might include individual changes to the code or certain packages you might want to consider adding to the project to make it more secure.
You can take this a step further by using the slash command /fix to get suggestions for improving overall code quality and efficiency.
Don’t forget that you can always follow up with more detailed questions after Copilot gives you a response or makes a suggestion. You can ask Copilot questions like:
- “What does this vulnerability mean?”
- “Can you suggest a safer way to do this?”
For more sample prompts, check out our Copilot Chat Cookbook. The section on Finding vulnerabilities is particularly appropriate for this topic.
Dedicated security tools
While GitHub Copilot is powerful, it’s not meant to replace all your security tools. This is partly because Copilot is a generalist, not a specialist. It can’t always see the full context of your production environment. Purpose-built security tools can take other factors into account, such as your compiler, your environment variables, and your deployment method.
Fortunately, GitHub has a number of free security offerings for open source maintainers. This means you can use them at no cost on your public repositories. Let’s take a look at some of them.
Dependabot
If you’ve been working on code in a public GitHub repository, you may have noticed a pull request from someone called dependabot. That’s GitHub’s tool that checks to make sure your dependencies are vulnerability-free and up-to-date.
To enable or disable Dependabot, click the Settings tab for your repository. Scroll down and click Code security in the left-hand menu. In this menu, there is an entire section for Dependabot. You can enable or disable alerts as well as automatic updates.
Code scanning & CodeQL
The next section in a public repository is Code scanning. If you don’t see this section, it likely means you’re working in a private repository. Code scanning will automatically detect common vulnerabilities and coding errors, such as the SQL injection vulnerability we talked about at the beginning of this post. We highly recommend you enable CodeQL analysis.
To do so, click the Set up button, and select Default from the menu that appears. Then click the green Enable CodeQL button at the bottom of the pop up window. This enables GitHub to look for pattern-based vulnerabilities in your code. This happens automatically when you commit new code to your repository, when you make pull requests, and as a periodic scan. If GitHub finds a new problem, it will send you a notification so you can fix it.
Copilot Autofix
If you want to have Copilot automatically suggest fixes to these vulnerabilities, enable Copilot Autofix. It can be found on the Code security page, under the button to Enable CodeQL. With this enabled, GitHub will run Copilot when it discovers a problem, and it will create a pull request with a suggested solution provided by Copilot. Once you review the solution, you’ll have the option to accept the pull request, eliminating the need for you to manually run Copilot to generate a possible solution.
Secret scanning
The next security tool we want to point out is secret scanning. You can find the option for secret scanning at the bottom of the Code security page. By enabling secret scanning, GitHub scans your code to look for possible exposed secrets, such as passwords or security tokens. You never want these directly embedded in your code because of the security risk. With Push protection enabled, GitHub will proactively block any new secrets from being committed to your GitHub-hosted code.
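As a rule of thumb, keep secrets in your environment (or a secrets manager) rather than in source. Here’s a minimal sketch of that pattern in Python; the variable name MY_SERVICE_API_KEY is just a placeholder, not a specific service’s key.

```python
import os

# Read the secret from the environment at runtime instead of hard-coding it.
# If it's missing, fail loudly rather than falling back to a literal value.
api_key = os.environ.get("MY_SERVICE_API_KEY")
if api_key is None:
    raise RuntimeError("MY_SERVICE_API_KEY is not set")
```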
There are several other features that we don’t have time to explore here. If you’d like to read more on your own, check out our documentation about GitHub security features and our quickstart for securing your repository.
Your next steps
Security is an incredibly deep topic, and we don’t expect you to be an expert. For now, remember these three things:
- Ask Copilot to find and fix vulnerabilities. This might be as simple as running /fix in Copilot Chat, or it could be more detailed, such as selecting some code and asking Copilot to check it for vulnerabilities.
- Ask Copilot for more details when problems are found. It’s important to understand the solution, both to make sure you can verify its work and also as a learning opportunity to help you in the future.
- Enable Dependabot, code scanning, and secret scanning. These tools in the Code security settings will be useful for sending you alerts when there are potential problems you want to be aware of. And remember that Copilot can help you fix them!
Don’t forget that you can use GitHub Copilot for free! If you have any questions, pop them in the GitHub Community thread, and we’ll be sure to respond. Join us for the next part in this series, where we’ll learn how to build a REST API with Copilot.
Happy coding!
The post GitHub for Beginners: Security best practices with GitHub Copilot appeared first on The GitHub Blog.
Recently we launched sub-issues, a feature designed to tackle complex issue management scenarios. This blog post delves into the journey of building sub-issues, what we learned along the way, how we implemented sub-issues, and the benefits of being able to use sub-issues to build itself.
What are sub-issues?
Sub-issues are a way to break a larger issue into smaller, more manageable tasks. With this feature, you can now create hierarchical lists within a single issue, making it easier to track progress and dependencies. By providing a clear structure, sub-issues help teams stay organized and focused on their goals.
For example, I often realize that a batch of work requires multiple steps, like implementing code in different repositories. Breaking this task into discrete sub-issues makes it easier to track progress and more clearly define the work I need to do. In practice we’ve noticed this helps keep linked PRs more concise and easier to review.
A brief history
Issues have long been at the heart of project management on GitHub. From tracking bugs to planning feature development, issues provide a flexible and collaborative way for teams to organize their work. Over time, we’ve enriched this foundation with tools like labels, milestones, and task lists, all to make project management even more intuitive and powerful.
One of the key challenges we set out to solve was how to better represent and manage hierarchical tasks within issues. As projects grow in complexity, breaking down work into smaller, actionable steps becomes essential. We want to empower users to seamlessly manage these nested relationships while maintaining the simplicity and clarity GitHub is known for.
Our journey toward sub-issues began with a fundamental goal: to create a system that integrates deeply into the GitHub Issues experience, enabling users to visually and functionally organize their work without adding unnecessary complexity. Achieving this required careful design and technical innovation.
Building sub-issues
To build sub-issues, rather than modifying the existing task list functionality, we designed a new hierarchical structure that lets you nest tasks within tasks. This required updates to our data models and rendering logic to support nested sub-issues.
From a data modeling perspective, the sub-issues table stores the relationships between parent and child issues. For example, if Issue X is a parent of Issue Y, the sub-issues table would store this link, ensuring the hierarchical relationship is maintained.
In addition, we roll up sub-issue completion information into a sub-issue list table. This allows us to performantly get progress without having to traverse through a list of sub-issues. For instance, when Issue Y is completed, the system automatically updates the progress of Issue X, eliminating the need to manually check the status of all sub-issues.
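The roll-up idea is easy to picture with a toy model. The sketch below is not GitHub’s actual schema or code; it only illustrates keeping a counter up to date on writes so that reading a parent’s progress never requires walking its children.

```python
# Toy model: parent_links maps child issue -> parent issue,
# and progress keeps a running [completed, total] pair per parent.
parent_links: dict[int, int] = {}
progress: dict[int, list[int]] = {}

def add_sub_issue(parent: int, child: int) -> None:
    parent_links[child] = parent
    progress.setdefault(parent, [0, 0])[1] += 1   # one more sub-issue to track

def complete_issue(child: int) -> None:
    parent = parent_links.get(child)
    if parent is not None:
        progress[parent][0] += 1                  # bump the completed count only

def parent_progress(parent: int) -> tuple[int, int]:
    completed, total = progress.get(parent, [0, 0])
    return completed, total                       # O(1): no traversal needed
```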
We wanted a straightforward representation of sub-issues as relationships in MySQL. This approach provided several benefits, including easier support for sub-issues in environments like GitHub Enterprise Server and GitHub Enterprise Cloud with data residency.
We exposed sub-issues through GraphQL endpoints, which let us build upon the new Issues experience and leverage newly crafted list-view components. This approach provided some benefits, including more efficient data fetching and enhanced flexibility in how issue data is queried and displayed. Overall, we could move faster because we reused existing components and leveraged new components that would be used in multiple features. This was all made possible by building sub-issues in the React ecosystem.
We also focused on providing intuitive controls for creating, editing, and managing sub-issues. To this end, we worked closely with accessibility designers and GitHub’s shared components team that built the list view that powers sub-issues.
Our goal was to make it as easy as possible for users to break down their tasks without disrupting their workflow.
Using sub-issues in practice
Dogfooding is a best practice at GitHub and it’s how we build GitHub! We used sub-issues extensively within our own teams throughout the company to manage complex projects and track progress. Having a discrete area to manage our issue hierarchy resulted in a simpler, more performant experience. Through this hands-on experience, we identified areas for improvement and ensured that the feature met our high standards.
Our teams found that sub-issues significantly improved their ability to manage large projects. By breaking down tasks into smaller, actionable items, they maintained better visibility and control over their work. The hierarchical structure also made it easier to identify dependencies and ensure nothing fell through the cracks.
Gathering early feedback
Building sub-issues was a team effort. Feedback from our beta testers was instrumental in shaping the final product and ensuring it met the needs of our community. For example, understanding how much metadata to display in the sub-issue list was crucial. We initially started with only issue titles, but eventually added the issue number and repository name, if the issue was from another repository.
Building features at GitHub makes it really easy to improve our own features as we go. It was really cool to start breaking down the sub-issues work using sub-issues. This allowed us to experience the feature firsthand and identify any pain points or areas for improvement. For example, the has:sub-issues-progress and has:parent-issue filters evolved from early discussions around filtering syntax. This hands-on approach ensured that we delivered a polished and user-friendly product.
These lessons have been invaluable in not only improving sub-issues, but also in shaping our approach to future feature development. By involving users early and actively using our own features, we can continue to build products that truly meet the needs of our community. These practices will be important to our development process going forward, ensuring that we deliver high-quality, user-centric solutions.
Call to action
Sub-issues are designed to help you break down complex tasks into manageable pieces, providing clarity and structure to your workflows. Whether you’re tracking dependencies, managing progress, or organizing cross-repository work, sub-issues offer a powerful way to stay on top of your projects.
We’d love for you to try sub-issues and see how they can improve your workflow. Your feedback is invaluable in helping us refine and enhance this feature. Join the conversation in our community discussion to share your thoughts, experiences, and suggestions.
Thank you for being an integral part of the GitHub community. Together, we’re shaping the future of collaborative development!
The post Introducing sub-issues: Enhancing issue management on GitHub appeared first on The GitHub Blog.
It feels like everyone’s talking about MCP (Model Context Protocol) these days when it comes to large language models (LLMs), but hardly anyone is actually defining it.
TL;DR: It’s an open standard for connecting LLMs to data and tools.
Let’s dive in deeper!
The context problem for LLMs
LLMs often struggle when they are asked for information outside of their training data. They’ll sometimes either hallucinate and say something incorrect, or simply say, “I don’t know.”
Giving them the right amount of context when you prompt them (whether it’s your codebase, your repository data, your documentation, etc.) is necessary for AI agents built on top of LLMs to be useful.
Usually, you have to really refine your prompting to give LLMs that context, or use some sort of external tool. For example, GitHub Copilot has tools like @workspace to give relevant information from your codebase to your prompts. This type of “extra tooling” is cool, but can get fairly complex fairly quickly as you implement things across different APIs and services.
A solution: Model Context Protocol, or MCP
In November, Anthropic open sourced the Model Context Protocol as a standard for connecting LLMs and AI assistants to data and tools!
MCP grew the way you fall asleep: slowly, and then all at once. As tools and organizations have adopted the MCP standard, it has only become more and more valuable. And because MCP is model agnostic, anyone can use and create MCP integrations. As with all open standards, a rising tide lifts all boats: the more people that use it, the better it becomes.
I think that MCP has “won” the hearts of so many AI developers and tools because of this openness, and also because it’s a very “AI-first” version of existing ideas.
This isn’t the first time we’ve seen a protocol like this become a standard, either. Back in 2016, Microsoft released the Language Server Protocol (LSP), which provided standards for code editors to support programming languages. Fast forward to today: because of LSP, programming language support across editors is better than ever, to the point where developers don’t even need to think about it anymore!
MCP takes a lot of its inspiration from LSP, and could be absolutely transformative for AI tooling. It allows for everyone, from the largest tech giants to the smallest indie developer shops, to enable robust AI solutions in any AI client with minimal setup.
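If you want a feel for what “minimal setup” can look like on the server side, here’s a tiny tool server sketched with the FastMCP helper from the official MCP Python SDK. Treat it as an illustration rather than a definitive implementation, and check the SDK docs for the current API.

```python
from mcp.server.fastmcp import FastMCP

# A toy MCP server exposing a single tool that an MCP-capable client can call.
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```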
That’s why this is a huge deal! An open standard that is backed more and more by the tech community means better tools, better developer experiences, and better user experiences for everyone.
GitHub and MCP
We’re not just talking about MCP: we’re contributing, too!
We’re SO excited to have recently released our new open source, official, local GitHub MCP Server! It provides seamless integration with GitHub APIs, allowing for advanced automation and integration capabilities for developers to build with!
You can chat more with us about it in the GitHub Community or you can check out the official announcement.
How do I contribute and learn more?
Hoorah, I thought you’d never ask! Here are some resources to get you on your way:
- MCP documentation
- Repository of reference implementations for MCP
- MCP specification for protocol requirements
- More documentation on LSP
Also, if you don’t mind the shameless plug, you can use the GitHub MCP server with agent mode now. Go forth and code!
The post What the heck is MCP and why is everyone talking about it? appeared first on The GitHub Blog.
Let’s be honest—most security tools can be pretty painful to use.
These tools usually aren’t designed with you, the developer, in mind—even if it’s you, not the security team, who is often responsible for remediating issues. The worst part? You frequently need to switch back and forth between your tool and your dev environment, or add a clunky integration.
And oftentimes the alerts aren’t very actionable. You may need to spend time researching on your own. Or worse, false positives can pull you away from building the next thing. Alert fatigue creeps in, and you find yourself paying less and less attention as the vulnerabilities stack up.
We’re trying to make this better at GitHub by building security into your workflows so you can commit better code. From Secret Protection to Code Security to Dependabot and Copilot Autofix, we’re working to go beyond detection to help you prioritize and remediate problems—with a little help from AI.
We’re going to show you how to write more secure code on GitHub, all in less than 10 minutes.
At commit and before the pull request: Secret Protection
You’ve done some work and you’re ready to commit your code to GitHub. But there’s a problem: You’ve accidentally left an API key in your code.
Even if you’ve never left a secret in your code before, there’s a good chance you will someday. Leaked secrets are one of the most common, and most damaging, forms of software vulnerability. In 2024 alone, Secret Protection detected more than 39 million leaked secrets across GitHub.
Let’s start with some context. Traditionally, it could take months to uncover the forgotten API key because security reviews would take place only after a new feature is finished. It might not even be discovered until someone exploited it in the wild. In that case, you’d have to return to the code, long after you’d moved on to working on other features, and rewrite it.
But GitHub Secret Protection, formerly known as Secret Scanning, can catch many types of secrets before they can cause you real pain. Secret Protection runs when you push code to your repository and will warn you if it finds something suspicious. You will know right away that something is wrong and can fix it while the code is fresh in your mind. Push protection—which blocks contributors from pushing secrets to a repository and generates an alert whenever a contributor bypasses the block—shows you exactly where the secret is so you can fix it before there’s any chance of it falling into the wrong hands. If the secret is part of a test environment or the alert is a false positive, you can easily bypass the alert, so it will never slow you down unnecessarily.
What you don’t have to do is jump to another application or three to read about a vulnerability alert or issue assignment.
After commit: Dependabot
OK, so now you’ve committed some code. Chances are it contains one or more open source dependencies.
Open source is crucial for your day-to-day development work, but a single vulnerability in a transitive dependency—that is to say, your dependencies’ dependencies—could put your organization at risk (which isn’t something you want coming up in a performance review).
Dependabot, our free tool for automated software supply chain security, helps surface vulnerabilities in your dependencies in code you’ve committed. And once again, it finds problems right away—not when the security team has a chance to review a completed feature. If a fix already exists, Dependabot will create a pull request for you, enabling you to fix issues without interrupting your workflow.
Dependabot now features data to help you prioritize fixes. Specifically, alerts now include Exploit Prediction Scoring System (EPSS) data from the global Forum of Incident Response and Security Teams to help you prioritize alerts based on exploit likelihood. Only 10% of vulnerability alerts have an EPSS score above 0.95%, so you can focus on fixing this smaller subset of more urgent vulnerabilities. It can really make your backlog easier to manage and keep you from spending time on low-risk issues.
At the pull request: Code Security
You’ve committed some code, you’re confident you haven’t leaked any secrets, and you’re not relying on dependencies with known vulnerabilities. So, naturally, you create a pull request. Traditionally, you might be expected to run some linters and security scanning tools yourself, probably switching between a number of disparate tools. Thanks to our automation platform GitHub Actions, all of this happens as soon as you file your pull request.
You can run a variety of different security tools using Actions or our security scanning service GitHub Code Security (formerly known as Code Scanning). Our semantic static analysis engine CodeQL transforms your code into a database that you can query to surface known vulnerabilities and their unknown variations, potentially unsafe coding practices, and other code quality issues.
You can write your own CodeQL queries, but GitHub provides thousands of queries that cover the most critical types of vulnerabilities. These queries have been selected for their high level of accuracy, ensuring a low false positive rate for the user.
But we don’t just flag problems. We now recommend solutions for 90% of alert types in JavaScript, TypeScript, Java, and Python thanks to GitHub Copilot Autofix, a new feature available for free on public repositories or as part of GitHub Code Security for private repositories.
Let’s say you’ve got a pesky SQL injection vulnerability (it happens all the time). Copilot Autofix will create a pull request for you with a suggested fix, so you can quickly patch a vulnerability. You no longer need to be a security expert to find a fix. We’ve found that teams using Autofix remediate vulnerabilities up to 60% faster, significantly reducing Mean Time to Remediation (MTTR).
This is what we mean when we say “found means fixed.” We don’t want to put more work on your already overloaded kanban board. Our security tools are designed for remediation, not just detection.
Take this with you
Keep in mind you probably won’t need to touch all those security tools every time you commit or file a pull request. They’ll only pop up when they’re needed and will otherwise stay out of your way, quietly scanning for trouble in the background.
When they do show up, they show up with good reason. It’s much less work to handle vulnerabilities at the point of commit or pull request than it is to wait until months or years later. And with actionable solutions right at your fingertips, you won’t need to spend as much time going back and forth with your security team.
Writing secure code takes effort. But by integrating security protections and automatic suggestions natively into your development workflow, we’re making shifting left easier and less time consuming than the status quo.
The post How we’re making security easier for the average developer appeared first on The GitHub Blog.
Ever come across a Common Vulnerabilities and Exposures (CVE) ID affecting software you use or maintain and thought the information could be better?
CVE IDs are a widely-used system for tracking software vulnerabilities. When a vulnerable dependency affects your software, you can create a repository security advisory to alert others. But if you want your insight to reach the most upstream data source possible, you’ll need to contact the CVE Numbering Authority (CNA) that issued the vulnerability’s CVE ID.
GitHub, as part of a community of over 400 CNAs, can help in cases when GitHub issued the CVE (such as with this community contribution). And with just a few key details, you can identify the right CNA and reach out with the necessary context. This guide shows you how.
Step 1: Find the CNA that issued the CVE
Every CVE record contains an entry that includes the name of the CNA that issued the CVE ID. The CNA is responsible for updating the CVE record after its initial publication, so any requests should be directed to them.
On cve.org, the CNA is listed as the first piece of information under the “Required CVE Record Information” header. The information is also available on the right side of the page.
On nvd.nist.gov, information about the issuing CNA is available in the “QUICK INFO” box. The issuing CNA is called “Source”.
Step 2: Find the contact information for the CNA
After identifying the CNA from the CVE record, locate their official contact information to request updates or changes. That information is available on the CNA partners website at https://www.cve.org/PartnerInformation/ListofPartners.
Search for the CNA’s name in the search bar. Some organizations may have more than one CNA, so make sure that the CVE you want corresponds to the correct CNA.
The left column, under “Partner,” has the name of the CNA that links to a profile page with its scope and contact information.
Step 3: Contact the CNA
Most CNAs have an email address for CVE-related communications. Click the link under “Step 2: Contact” that says Email to find the CNA’s email address.
The most notable exception to the general preference for email communication among CNAs is the MITRE Corporation, the world’s most prolific CVE Numbering Authority. MITRE uses a webform at https://cveform.mitre.org/ for submitting requests to create, update, dispute, or reject CVEs.
What to include in your communication to the CNA
- The CVE ID you want to discuss
- The information you want to add, remove, or change within the CVE record
- Why you want to change the information
- Supporting evidence, usually in the form of a reference link
Including publicly available reference links is important, as they justify the changes. Examples of reference links include:
- A publicly available vulnerability report, advisory, or proof-of-concept
- A fix commit or release notes that describe a patch
- An issue in the affected repository in which the maintainer discusses the vulnerability in their software with the community
- A community contribution pull request that suggests a change to the CVE’s corresponding GitHub Security Advisory
When submitting changes, keep in mind that the CNA isn’t your only audience. Clear context around disclosure decisions and vulnerability details helps the broader developer and security community understand the risks and make informed decisions about mitigation.
The time it takes for a CNA to respond may vary. Rules 3.2.4.1 and 3.2.4.2 of the CVE CNA rules state:
“3.2.4.1 Subject to their respective CNA Scope Definitions, CNAs MUST respond in a timely manner to CVE ID assignment requests submitted through the CNA’s public POC.
3.2.4.2 CNAs SHOULD document their expected response times, including those for the public POC.”
The CNA rules establish firm timelines for assignment of CVE IDs to vulnerabilities that are already public knowledge. For CVE ID assignment or record publication in particular, section 4.2 and section 4.5 of the CVE CNA rules establish 72 hours as the time limit in which CNAs should issue CVE IDs or publish CVE records for publicly-known vulnerabilities. However, no such guidance exists for changing a CVE record.
What if the CNA doesn’t respond or disagrees with me?
If the CNA doesn’t respond or you cannot reach an agreement about the content of the CVE record, the next step is to engage in the dispute process.
The CVE Program Policy and Procedure for Disputing a CVE Record provides details on how you may go about disputing a CVE record and escalating a dispute. The details of that process are beyond the scope of this post. However, if you end up disputing a CVE record, it’s good to know who the root or top-level root of the CNA is that reviews the dispute.
When viewing a CNA’s partner page linked from https://www.cve.org/PartnerInformation/ListofPartners, you can find the CNA’s root under the column “Top-Level Root.” For most CNAs, their root is the Top-Level Root, MITRE.
Want to improve a CVE record and a CVE record’s corresponding security advisory? Learn more about editing security advisories in the GitHub Advisory Database.
The post How to request a change to a CVE record appeared first on The GitHub Blog.
We get it: you’d rather spend your time shipping features than chasing security alerts. That’s why we’ve built tools like Copilot Autofix directly into pull requests, enabling teams to remediate security issues up to 60% faster, significantly reducing Mean Time to Remediation (MTTR) compared to manual fixes. Autofix helps you catch vulnerabilities before they ever make it into production, so you spend less time fixing bugs and more time coding.
But what about the vulnerabilities already lurking in your existing code? Every unresolved security finding adds to your security debt—a growing risk you can’t afford to ignore. In fact, our data shows that teams typically address only 10% of their security debt, leaving 90% of vulnerabilities unprioritized and unresolved.
Security debt is the biggest unaddressed risk that customers face. Now, our data shows that 55% of the security debt included in security campaigns gets fixed.
Security campaigns bridge this gap by bringing security experts and developers together, streamlining the vulnerability remediation process right within your workflow, and at scale. Using Copilot Autofix to generate code suggestions for up to 1,000 code scanning alerts at a time, security campaigns help security teams take care of triage and prioritization, while you can quickly resolve issues using Autofix—without breaking your development momentum.
Security campaigns in action
Since security campaigns were launched in public preview at GitHub Universe last year, we have seen organizations at all different stages of their security journey try them out. Whether they’ve been used to reduce security debt across an entire organization or to target alerts in critical repositories, security campaigns have delivered value for both developers and security teams in their efforts to tackle security debt.
Security campaigns simplify life for our developers. They can easily group alerts from multiple repositories, reducing time spent on triage and prioritization while quickly remediating the most critical issues with the help of Copilot Autofix.
GitHub security campaigns is a game-changer for our development teams. It’s educated us about existing vulnerabilities, brought our engineers together to collaboratively tackle fixes, and significantly improved our remediation time.
In a sample of early customers, we found that 55% of alerts included in security campaigns were fixed, compared to around only 10% of security debt outside security campaigns, a 5.5x improvement. This shows that when alerts are included in a campaign, you can spend more time fixing the security debt, since the prioritization of which alerts to work on has already been taken care of by your security team. In fact, our data shows that alerts in campaigns get roughly twice as much developer engagement than those outside of campaigns.
Security campaigns: how they work
Triaging and prioritizing security problems already present in a codebase has to happen as part of the normal software development lifecycle. Unfortunately, when product teams are under pressure to ship faster, they often don’t have enough time to dig through their security alerts to decide which ones to address first. Luckily, in most software organizations, there is already a group of people who are experts in understanding these risks: the security team. With security campaigns, we play to the different strengths of developers and security teams in a new collaborative approach to addressing security debt.
- Security teams prioritize which risks need to be addressed across their repositories in a security campaign. Security campaigns come with predefined templates based on commonly used themes (such as the MITRE top 10 known exploited vulnerabilities) to help scope the campaign. GitHub’s security overview also provides statistics and metrics summarizing the overall risk landscape.
- Once the campaign alerts are selected and a timeline is specified, the campaign is communicated to any developers who are impacted by the campaign. The work defined in a campaign is brought to developers where they work on GitHub, so that it can be planned and managed just like any other feature work.
- Copilot Autofix immediately starts suggesting automatic remediations for all alerts in a campaign, as well as custom help text to explain the problems. Fixing an alert becomes as easy as reviewing a diff and creating a pull request.
Crucially, security campaigns are not just lists of alerts. Alongside the alerts, campaigns are complemented with notifications to ensure that developers are aware of which alert they (or their team) are responsible for. To foster stronger collaboration between developers and the security team, campaigns also have an appointed manager to oversee the campaign progress and be on hand to assist developers. And of course: security managers have an organization-level view on GitHub to track progress and collaborate with developers as needed.
Starting today, you can also access several new features to plan and manage campaign-related work more effectively:
- Draft security campaigns: security managers can now iterate on the scope of campaigns and save them as draft campaigns before making them available to developers. With draft campaigns, security managers can ensure that the highest priority alerts are included before the work goes live.
- Automated GitHub Issues: security managers can optionally create GitHub Issues in repositories that have alerts included in the campaign. These issues are created and updated automatically as the campaign progresses and can be used by teams to track, manage and discuss campaign-related work.
- Organization-level security campaign statistics: security managers can now view aggregated statistics showing the progress across all currently-active and past campaigns.
For more information about using security campaigns, see About security campaigns in the GitHub documentation.
The post Found means fixed: Reduce security debt at scale with GitHub security campaigns appeared first on The GitHub Blog.
Exactly twenty years ago, on April 7, 2005, Linus Torvalds made the very first commit to a new version control system called Git. Torvalds famously wrote Git in just 10 days after Linux kernel developers lost access to their proprietary tool, BitKeeper, due to licensing disagreements. In fact, in that first commit, he’d written enough of Git to use Git to make the commit!
Git’s unconventional and decentralized design—nowadays ubiquitous and seemingly obvious—was revolutionary at the time, and reshaped how software teams collaborate and develop. (To wit, GitHub!)
To celebrate two decades of Git, we sat down with Linus himself to revisit those early days, explore the key design decisions behind Git’s lasting success, and discuss how it forever changed software development.
Check out the transcript of our interview below, and watch the full video above.
The following transcript has been lightly edited for clarity.
Taylor Blau: It’s been 20 years, almost to the hour, since Git was self-hosted enough to write its initial commit. Did you expect to be sitting here 20 years later, still using it and talking about it?
Linus Torvalds: Still using it, yes. Maybe not talking about it. I mean, that has been one of the big surprises—basically how much it took over the whole SCM world. I saw it as a solution to my problems, and I obviously thought it was superior. Even literally 20 years ago to the day, I thought that first version, which was pretty raw—to be honest, even that version was superior to CVS.
But at the same time, I’d seen CVS just hold on to the market—I mean, SVN came around, but it’s just CVS in another guise, right?—for many, many decades. So I was like, okay, this market is very sticky. I can’t use CVS because I hate it with a passion, so I’ll do my own thing. I couldn’t use BitKeeper, obviously, anymore. So I was like, okay, I’ll do something that works for me, and I won’t care about anybody else. And really that showed in the first few months and years—people were complaining that it was kind of hard to use, not intuitive enough. And then something happened, like there was a switch that was thrown.
“I’ll do something that works for me, and I won’t care about anybody else.”
Well, you mentioned BitKeeper. Maybe we can talk about that.
Sure.
Pretty famously, you wrote the initial version of Git in around 10 or so days as a replacement for the kernel.
Yes and no. It was actually fewer than—well, it was about 10 days until I could use it for the kernel, yes. But to be fair, the whole process started like December or November the year before, so 2004.
What happened was BitKeeper had always worked fairly well for me. It wasn’t perfect, but it was light years ahead of anything else I’ve tried. But BitKeeper in the kernel community was always very, like, not entirely welcomed by the community because it was commercial. It was free for open source use because Larry McVoy, who I knew, really liked open source. I mean, at the same time, he was making a business around it and he wanted to sell BitKeeper to big companies. [It] not being open source and being used for one of the biggest open source projects around was kind of a sticking point for a lot of people. And it was for me, too.
I mean, to some degree I really wanted to use open source, but at the same time I’m very pragmatic and there was nothing open source that was even remotely good enough. So I was kind of hoping that something would come up that would be better. But what did come up was that Tridge in Australia basically reverse engineered BitKeeper, which wasn’t that hard because BitKeeper internally was basically a good wrapper around SCCS, which goes back to the 60s. SCCS is almost worse than CVS.
But that was explicitly against the license rules for BitKeeper. BitKeeper was like, you can use this for open source, but you can’t reverse engineer it. And you can’t try to clone BitKeeper. And that made for huge issues. And this was all in private, so I was talking to Larry and I was emailing with Tridge and we were trying to come up with a solution, but Tridge and Larry were really on completely opposite ends of the spectrum and there was no solution coming up.
So by the time I started writing Git, I had actually been thinking about the issue for four months and thinking about what worked for me and thinking about “How do I do something that does even better than BitKeeper does but doesn’t do it the way BitKeeper does it?” I did not want to be in the situation where Larry would say, “Hey, you did the one thing you were not supposed to do.”
“…how do I do something that does even better than BitKeeper does, but doesn’t do it the way BitKeeper does it.”
So yes, the writing part was maybe 10 days until I started using Git for the kernel, but there was a lot of mental going over what the ideas should be.
I want to talk about maybe both of those things. We can start with that 10-day period. So as I understand it, you had taken that period as a time away from the kernel and had mostly focused on Git in isolation. What was that transition like for you to just be working on Git and not thinking about the kernel?
Well, since it was only two weeks, it ended up being that way. It wasn’t actually a huge deal. I’d done things like that just for—I’ve been on, like in the last 35 years, I’ve been on vacation a couple of times, right, not very many times. But I have been away from the kernel for two weeks at a time before.
And it was kind of interesting because it was—one of my reactions was how much easier it is to do programming in userspace. There’s so much less you need to care about. You don’t need to worry about memory allocations. You don’t need to worry about a lot of things. And debugging is so much easier when you have all this infrastructure that you’d otherwise be writing yourself when you’re doing a kernel.
So it was actually somewhat—I mean, I wouldn’t say relaxing, but it was fun to do something userspace-y where I had a fairly clear goal of what I wanted. I mean, a clear goal in the sense I knew the direction. I didn’t know the details.
One of the things I find so interesting about Git, especially 20 years on, is it’s so… the development model that it encourages, to me, seems so simple that it’s almost obvious at this point. But I don’t mean that reductively. I think there must have been quite a lot of thought put into distilling the universe of source control ideas down into something that became Git. Tell me, what were the non-obvious choices you made at the time?
The fact that you say it’s obvious now, I think it wasn’t obvious at the time. I think one of the reasons people found Git to be very hard to use was that most people who started using Git were coming from a background of something CVS-like. And the Git mindset, I came at it from a file system person’s standpoint, where I had this disdain and almost hatred of most source control management projects, so I was not at all interested in maintaining the status quo.
And like the biggest issue for me—well, there were two huge issues. One was performance—back then I still applied a lot of patches, which I mean, Git has made almost go away because now I just merge other people’s code.
But for me, one of the goals was that I could apply a patch series in basically half a minute, even when it was like 50, 100 patches.
You shouldn’t need a coffee to…
Exactly. And that was important to me because it’s actually a quality-of-life thing. It’s one of those things where if things are just instant, some mistake happens, you see the result immediately and you just go on and you fix it. And some of the other projects I had been looking at took like half a minute per patch, which was not acceptable to me. And that was because the kernel is a very large project and a lot of these SCMs were not designed to be scalable.
“And that was important to me because it’s actually a quality-of-life thing.”
So that was one of the issues. But one of the issues really was, I knew I needed it to be distributed, but it needed to be really, really stable. And people kind of think that using the SHA-1 hashes was a huge mistake. But to me, SHA-1 hashes were never about the security. It was about finding corruption.
Because we’d actually had some of that during the BitKeeper things, where BitKeeper used CRCs and MD5s, right, but didn’t use it for everything. So one of the early designs for me was absolutely everything was protected by a really good hash.
And that kind of drove the whole project—having two or three really fundamental design ideas. Which is why, at a low level, it is actually fairly simple, right? And then the complexities are in the details and the user interfaces and in all the things it has to be able to do—because everybody wants it to do crazy things. But having a low-level design that has a few core concepts made it easier to write, much easier to think about, and also to some degree to explain to people what the ideas are.
And I kind of compare it to Unix. Unix has like a core philosophy of everything is a process, everything is a file, you pipe things between things. And then the reality is it’s not actually simple. I mean, there’s the simple concepts that underlie the philosophy, but then all the details are very complicated.
I think that’s what made me appreciate Unix in the first place. And I think Git has some of the same kind of, there’s a fundamental core simplicity to the design and then there’s the complexity of implementation.
There’s a through line from Unix into the way that Git was designed.
Yes.
You mentioned SHA-1. One of the things that I think about in this week or two where you were developing the first version of Git is you made a lot of decisions that have stuck with us.
Yeah.
Were there any, including SHA-1 or not, that you regretted or wish you had done differently?
Well, I mean, SHA-1 I regret in the sense that I think it caused a lot of pointless churn with the whole “trying to support SHA-256 as well as SHA-1.” And I understand why it happened, but I do think it was mostly pointless.
I don’t think there was a huge, real need for it, but people were worried, so it happened. So I think there’s a lot of wasted effort there. There are a number of other small issues. I think I made a mistake in how the index file entries are sorted. There are these stupid details that made things harder than they should be.
But at the same time, many of those things could be fixed, but they’re small enough. It doesn’t really matter. All the complexities are elsewhere in the end.
So it sounds like you have few regrets. I think that’s good. Were there any moments where you weren’t sure that what you were trying to achieve was going to work or come together or be usable? Or did you already have a pretty clear idea?
I had a clear idea of the initial stages but I wasn’t sure how it would work in the long run. So honestly, after the first week, I had something that was good for applying patches, but not so much for everything else. I had the basics for doing merges, and the data structures were in place for that, but it actually took, I think it took an additional week before I did my first merge.
There were a number of things where I had kind of the big picture and result in mind, but I wasn’t sure if I’d get there. Yeah, the first steps, I mean the first week or two, I mean, you can go and look at the code—and people have—and it is not complicated code.
No.
I think the first version was 10,000 lines or something.
You can more or less read it in a single sitting.
Yeah, and it’s fairly straightforward and doesn’t do a lot of error checking and stuff like that. It’s really a, “Let’s get this working because I have another project that I consider to be more important, that I need to get back to.” It really was. It happened that I would hit issues that required me to do some changes.
“There were a number of things where I had kind of the big picture and result in mind, but I wasn’t sure if I’d get there.”
The first version—I think we ended up doing a backwards incompatible object store transfer at one point. At least fsck complains about some of the old objects we had because I changed the data format.
I didn’t know where that came from.
Yeah, no. The first version just was not doing everything it needed to do.
And I forget if I actually did a conversion or not. I may not have ever needed to convert. And we just have a few warnings for a few objects in the kernel where fsck will say, “Hey, this is an old, no longer supported format.” That kind of thing. But on the other, on the whole, it really worked, I mean, surprisingly well.
The big issue was always people’s acceptance of it.
Right.
And that took a long time.
“But on the other, on the whole, it really worked, I mean, surprisingly well.”
Well, we talked a little bit about how merging was put in place but not functional until maybe week two or week three. What were the other features that you left out of the initial version that you later realized were actually quite essential to the project?
Well, it wasn’t so much “later realized,” it was stuff that I didn’t care about, but I knew that if this is going to go anywhere, somebody else will. I mean, the first week when I was using it for the kernel, I was literally using the raw, what are now called “plumbing commands” by hand.
Of course.
Because there was no so-called porcelain. There was nothing above that to make it usable. So to make a commit, you’d do these very arcane things.
Set your index, commit-tree.
Yeah, commit-tree, write, and that just returns an SHA that you write by hand into the head file and that was it.
Did hash-object exist in the first version?
I think that was one of the first binaries that I had where I could just check that I could hash everything by hand and it would return the hash to standard out, then you could do whatever you wanted to it. But it was like the early porcelain was me scripting shell scripts around these very hard-to-use things.
And honestly, it wasn’t easy to use even with my shell scripts.
But to be fair, the initial target audience for this was pretty hardcore kernel people who had been using BitKeeper. They at least knew a lot of the concepts I was aiming for. People picked it up.
It didn’t take that long before some other kernel developers started actually using it. I was actually surprised by how quickly some source control people started coming in. And I started getting patches from the outside within days of making the first Git version public.
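For readers who have only ever driven Git through the porcelain, the workflow described above roughly corresponds to the sketch below. It uses today’s command names and a made-up file, so treat it as an illustration of a plumbing-only commit rather than a reproduction of the 2005 binaries, which were named differently.

```bash
# A plumbing-only commit, sketched with modern command names (illustrative;
# the original 2005 tooling had different binaries and no porcelain on top).

git init plumbing-demo && cd plumbing-demo
echo "hello" > hello.txt

# Hash the file into the object database and capture its SHA.
blob=$(git hash-object -w hello.txt)

# Stage that blob in the index by hand.
git update-index --add --cacheinfo 100644 "$blob" hello.txt

# Write the index out as a tree object.
tree=$(git write-tree)

# Create a commit object pointing at that tree; commit-tree prints the new SHA.
commit=$(git commit-tree "$tree" -m "first commit")

# Point the current branch at the commit (the step described above as writing
# the SHA by hand into the head file).
git update-ref HEAD "$commit"
```

Running git log afterwards shows the commit exactly as if it had been made with git commit; the porcelain is layered on top of these same primitives.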
I want to move forward a bit. You made the decision to hand off maintainership to Junio pretty early on in the project. I wonder if you could tell me a little bit about what it’s been like to watch him run the project and really watch the community interact with it at a little bit of a distance after all these years?
I mean, to be honest, I maintained Git for like three or four months. I think I handed it off in August [of 2005] or something like that.
And when I handed it off, I truly just handed it off. I was like, “I’m still around.” I was still reading the Git mailing list, which I don’t do anymore. Junio wanted to make sure that if he asked me anything, I’d be okay.
But at the same time, I was like, this is not what I want to do. I mean, this is… I still feel silly. My oldest daughter went off to college, and two months later, she sends this text to me and says that I’m more well-known at the computer science lab for Git than for Linux because they actually use Git for everything there. And I was like, Git was never a big thing for me. Git was an “I need to get this done to do the kernel.” And it’s kind of ridiculous that, yes, I used four months of my life maintaining it, but now, 20 years later…
Yes, you should definitely talk to Junio, not to me because he’s been doing a great job and I’m very happy it worked out so well. But to be honest I’ll take credit for having worked with people on the internet for long enough that I was like—during the four months I was maintaining Git, I was pretty good at picking up who has got the good taste to be a good maintainer.
My oldest daughter went off to college, and two months later, she sends this text to me and says that I’m more well known at the computer science lab for Git than for Linux because they actually use Git for everything there.
That’s what it’s about—taste—for you.
For me, it’s hard to describe. You can see it in patches, you can see it in how they react to other people’s code, “how they think” kind of things. Junio was not the first person in the project, but he was one of the early ones that was around from pretty much week one after I had made it public.
So he was one of the early people—but it wasn’t like, “You’re the first one, tag, you’re it.” It was more like, okay, I have now seen this person work for three months and I don’t want to maintain this project. I will ask him if he wants to be the maintainer. I think he was a bit nervous at first, but it really has been working out.
Yeah he’s certainly run the project very admirably in the…
Yeah, I mean, so taste is to me very important, but practically speaking, the fact that you stick around with a project for 20 years, that’s the even more important part, right? And he has.
I think he’s knowledgeable about almost every area of the tree to a surprising degree.
Okay, so we’ve talked a lot about early Git. I want to talk a little bit about the middle period of Git maybe, or maybe even the period we’re in now.
One of the things that I find so interesting about the tool, given how ubiquitous it’s become, it’s clearly been effective at aiding the kernel’s development, but it’s also been really effective for university students writing little class projects on their laptops. What do you think was unique about Git that made it effective at both extremes of the software engineering spectrum?
So the distributed nature really ends up making so many things so easy and that was one big part that set Git apart from pretty much all SCMs before, was… I mean there had been distributed SCMs, but there had, as far as I know, never been something where it was like the number one design goal—I mean, along with the other number one design goals—where it means that you can work with Git purely locally and then later if you want to make it available in any other place it’s so easy.
And that’s very different from, say, CVS where you have to set up this kind of repository and if you ever want to move it anywhere else it’s just very very painful and you can’t share it with somebody else without losing track of it.
Or there’s always going to be one special repository when you’re using a traditional SCM and the fact that Git didn’t do that, and very much by design didn’t do that, I mean that’s what made services like GitHub trivial. I mean I’m trivializing GitHub because I realized there’s a lot of work in making all the infrastructure around Git, but at the same time the basic Git hosting site is basically nothing because the whole design of Git is designed around making it easy to copy, and every repository is the same and equal.
And I think that ended up being what made it so easy to then use as an individual developer. When you make a new Git repository, it’s not a big deal. It’s like you do git init and you’re done. And you don’t need to set up any infrastructure and you don’t need to do any of the stuff that you traditionally needed to do with an SCM. And then if that project ever grows to be something where you decide, “Oh, maybe I want other people to work with it,” that works too. And again, you don’t have to do anything about it. You just push it to GitHub and again, you’re done.
That was something I very much wanted. I didn’t realize how many other people wanted it, too. I thought people were happy with CVS and SVN. Well, I didn’t really think that, but I thought they were sufficient for most people, let’s put it that way.
I’ve lived my whole life with version control as part of software development, and one of the things I’m curious about is how you see Git’s role in shaping how software development gets done today.
That’s too big of a question for me. I don’t know. It wasn’t why I wrote Git. I wrote it for my own issues.
I think GitHub and the other hosting services have made it clear how easy it is now to make all these random small projects in ways that it didn’t used to be. And that has resulted in a lot of dead projects too. You find these one-off things where somebody did something and left it behind and it’s still there.
But does that really change how software development is done in the big picture? I don’t know. I mean, it changes the details. It makes collaboration easier to some degree. It makes it easier to do these throwaway projects. And if they don’t work, they don’t work. And if they do work, now you can work together with other people. But I’m not sure it changed anything fundamentally in software development.
“It makes collaboration easier to some degree.”
Moving ahead a little bit, modern software development has never been changing faster than it is today…
Are you going to say the AI word?
I’m not going to say the AI word, unless you want me to.
No, no, no.
…what are some of the areas of the tool that you think have evolved or maybe still need to evolve to continue to support the new and demanding workflows that people are using it for?
I’d love to see more bug tracking stuff. I mean, everybody is doing that. I mean, there are, whether you call it bug tracking or issues or whatever you want to call it, I’d love to see that be more unified. Because right now it’s very fragmented where every single hosting site does their own version of it.
And I understand why they do it. A, there is no kind of standard good base. And B, it’s also a way to do the value add and keep people in that ecosystem even when Git itself means that it’s really easy to move the code.
But I do wish there was a more unified thing where bug tracking and issues in general would be something that would be more shared among the hosting sites.
You mentioned earlier that it’s at least been a while since you regularly followed the mailing list.
Yeah.
In fact, it’s been a little bit of time since you even committed to the project. I think by my count, August of 2022 was the last time…
Yeah, I have a few experimental patches in my tree that I just keep around. So these days I do a pull of the Git sources and I have, I think, four or five patches that I use myself. And I think I’ve posted a couple of them to the Git mailing list, but they’re not very important. They’re like details that tend to be very specific to my workflow.
But honestly, I mean, this is true of the Linux kernel, too. I’ve been doing Linux for 35 years, and it did everything I needed in the first year—right? And the thing that keeps me going on the kernel side is, A, hardware keeps evolving, and a kernel needs to evolve with that, of course. But B, it’s all the needs of other people. Never in my life would I need all of the features that the kernel does. But I’m interested in kernels, and I’m still doing that 35 years later.
When it came to Git, it was like Git did what I needed within the first year. In fact, mostly within the first few months. And when it did what I needed, I lost interest. Because when it comes to kernels, I’m really interested in how they work, and this is what I do. But when it comes to SCMs, it’s like—yeah, I’m not at all interested.
“When it came to Git, it was like Git did what I needed within the first year. In fact, mostly within the first few months.”
Have there been any features that you’ve followed in the past handful of years from the project that you found interesting?
I liked how the merge strategies got slightly smarter. I liked how some of the scripts were finally rewritten in C just to make them faster, because even though I don’t apply, like, 100-patch series anymore, I do end up doing things like rebasing for test trees and stuff like that, so having some of those performance improvements helps.
But then, I mean, those are fairly small implementation details in the end. They’re not the kind of big changes that, I mean—I think the biggest change that I was still tracking a few years ago was all the multiple hashes thing, which really looks very painful to me.
Have there been any tools in the ecosystem that you’ve used alongside? I mean, I’m a huge tig user myself. I don’t know if you’ve ever used this.
I never—no. Even early on, when Git was really hard to use and there were these add-on UIs, the only wrapper around Git I ever used was gitk. And that was obviously integrated into Git fairly quickly, right? But I still use the entire command language. I don’t use any of the editor integration stuff. I don’t do any of that because my editor is too stupid to integrate with anything, much less Git.
I mean, I occasionally do statistics on my Git history usage just because I’m like, “What commands do I use?” And it turns out I use five Git commands. And git merge and git blame and git log are three of them, pretty much. So, I’m a very casual user of Git in that sense.
I have to ask about what the other two are.
I mean obviously git commit and git pull. I did this top five thing at some point and it may have changed, but there’s not a lot of—I do have a few scripts that then do use git rev-list and go really low and do statistics for the project…
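Those scripts aren’t reproduced here, but the kind of low-level statistics being described can be pulled straight out of rev-list. A minimal, hypothetical sketch:

```bash
# Hypothetical examples of rev-list-driven repository statistics.

# Total commits reachable from the current branch.
git rev-list --count HEAD

# Non-merge commits from the last year only.
git rev-list --count --no-merges --since="1 year ago" HEAD

# Per-author commit counts over the same window (a porcelain shortcut
# over the same underlying history walk).
git shortlog -sn --since="1 year ago"
```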
In terms of your interaction with the project, what do you feel like have been some of the features in the project either from early on or in the time since that maybe haven’t gotten the appreciation they deserve?
I mean Git has gotten so much more appreciation than it deserves. But that’s the reverse of the question I would ask myself. A big thing for me was when people actually started appreciating what Git could do instead of complaining about how different it was.
And that, I mean, that was several years after the initial Git. I think it was these strange web developers who started using Git in a big way. It’s like Ruby on Rails, I think. Which I had no idea about; I still don’t know what Ruby even is. But the Ruby on Rails people started using Git sometime in 2008, something like that.
It was strange because it brought in a completely new kind of Git user—at least one that I hadn’t seen before. It must have existed in the background, it just made it very obvious that suddenly you had all these young people who had never used SCM in their life before and Git was the first thing they ever used and it was what the project they were using was using, so it was kind of the default thing.
And I think it changed the dynamics. When you didn’t have these old timers who had used a very different SCM their whole life, and suddenly you had young people who had never seen anything else and appreciated it, and instead of saying, “Git is so hard,” I started seeing these people who were complaining about “How do I do this when this old project is in CVS?” So, that was funny.
But yeah, no. The fact that people are appreciating Git, I mean, way more than I ever thought. Especially considering the first few years when I got a lot of hate for it.
Really?
Oh, the complaints kept coming.
Tell me about it.
Oh, I mean, it’s more like I can’t point to details. You’d have to Google it. But the number of people who sent me, “Why does it do this?” And the flame wars over my choice of names. For example, I didn’t have git status, which actually is one of the commands I use fairly regularly now.
It’s in the top five?
It’s probably not in the top five, but it’s still something fairly common. I don’t think I’d ever used it with CVS because it was so slow.
And people had all these expectations. So I just remember the first few years, the complaints about why the names of the subcommands are different for no good reason. And the main reason was I just didn’t like CVS very much, so I did things differently on purpose sometimes.
And the shift literally between 2007 and 2010—those years, when people went from complaining about how hard Git was to use to really appreciating some of the power of Git, was interesting to me.
I want to spend maybe just a moment thinking about the future of the project. In your mind, what are the biggest challenges that Git either is facing or will face?
I don’t even know. I mean, it has just been so much more successful than I ever… I mean, the statistics are insane. It went from use for the kernel and a couple of other projects to being fairly popular to now being like 98% of the SCMs used. I mean, that’s a number I saw in some report from last year.
So, I mean, it’s—I don’t know how true that is, but it’s like big. And in that sense, I wouldn’t worry about challenges because I think SCMs, there is a very strong network effect. And that’s probably why, once it took off, it took off in a big way. Just when every other project is using Git, by default, all the new projects will use Git, too. Because the pain of having two different SCMs for two different projects to work on is just not worth it.
So I would not see that as a challenge for Git as much as I would see it as a challenge for anybody else who thinks they have something better. And honestly, because Git does everything that I need, the challenges would likely come from new users.
I mean, we saw some of that. We saw some of that with people who used Git in ways that explicitly were things I consider to be the wrong approach. Like Microsoft, the monorepo for everything, which showed scalability issues. I’m not saying Microsoft was wrong to do that. I’m saying this is literally what Git was not designed to do.
I assume most of those problems have been solved because I’m not seeing any complaints, but at the same time I’m not following the Git mailing list as much as I used to.
I don’t even know if the large file issue is considered to be solved. If you want to put a DVD image in Git, that was like, why would you ever want to do that?
But, I mean, that’s the challenge. When Git is everywhere, you find all these people who do strange things that you would never imagine—that I didn’t imagine and that I consider to be actively wrong.
But hey, I mean, that’s a personal opinion. Clearly other people have very different personal opinions. So that’s always a challenge. I mean, that’s something I see in the kernel, too, where I go, why the hell are you doing that? I mean, that shouldn’t work, but you’re clearly doing it.
“When Git is everywhere, you find all these people who do strange things that you would never imagine—that I didn’t imagine and that I consider to be actively wrong.”
We talked about how Git is obviously a huge dominant component in software development. At the same time, there are new version control upstarts that seem to pop up. Pijul comes to mind, Jujutsu, Piper, and things like that. I’m curious if you’ve ever tried any of them.
No, I don’t. I mean, literally, since I came from this, from being completely uninterested in source control, why would I look at alternatives now that I have something that works for me?
I really came into Git not liking source control, and now I don’t hate it anymore. And I think that databases are my particular—like, that’s the most boring-thing-in-life thing. But SCMs still haven’t been something I’m really interested in.
“I really came into Git not liking source control, and now I don’t hate it anymore.”
You’ve given me a bit of a lead-in to my last question for you. So, right on schedule: Linux came about 34 years ago, Git 20…
Oh, that question.
And so we’re maybe five or so years overdue for the next big thing.
No, no, I see it the other way around. All the projects that I’ve had to make, I had to make because I couldn’t find anything better that somebody else did.
But I much prefer other people solving my problems for me. So me having to come up with a project is actually a failure of the world—and the world just hasn’t failed in the last 20 years for me.
I started doing Linux because I needed an operating system and there was nothing that suited my needs. I started doing Git for the same reason. And there hasn’t been any… I started Subsurface, which is my divelog, well, no longer my divelog software, but that was so specialized that it never took off in a big way. And that solved one particular problem, but my computer use is actually so limited that I think I’ve solved all the problems.
Part of it is probably, I’ve been doing it so long that I can only do things in certain ways. I’m still using the same editor that I used when I was in college because my fingers have learned one thing and there’s no going back. And I know the editor is crap and I maintain it because it’s a dead project that nobody else uses.
“But I much prefer other people solving my problems for me. So me having to come up with a project is actually a failure of the world—and the world just hasn’t failed in the last 20 years for me.”
So, I have a source tree and I compile my own version every time I install a new machine and I would suggest nobody ever use that editor but I can’t. I’ve tried multiple times finding an editor that is more modern and does fancy things like colorize my source code and do things like that. And every time I try it, I’m like, “Yeah, these hands are too old for this.” So I really hope there’s no project that comes along that makes me go, “I have to do this.”
Well, on that note.
On that note.
Thank you for 20 years of Git.
Well, hey, I did it for my own very selfish reasons. And really—I mean, this is the point to say again that yes, out of the 20 years, I spent four months on it. So really, all the credit goes to Junio and all the other people who are involved in Git that have by now done so much more than I ever did.
In any event, thank you.
The post Git turns 20: A Q&A with Linus Torvalds appeared first on The GitHub Blog.
Allow us to reintroduce ourselves: GitHub Copilot is getting a whole lot more agentic with increased context of your tools and services, powered by the world’s leading models, starting today. 👏
We are excited to roll out agent mode in Visual Studio Code to all users, now complete with MCP support that unlocks access to any context or capabilities you want. What’s more, we are thrilled to release a new open source and local GitHub MCP server, giving you the ability to add GitHub functionality to any LLM tool that supports MCP. 🤖
In keeping with our commitment to offer multi-model choice, we’re making Anthropic Claude 3.5 Sonnet, Claude 3.7 Sonnet, and Claude 3.7 Sonnet Thinking, Google Gemini 2.0 Flash, and OpenAI o3-mini generally available via premium requests, included in all paid Copilot tiers. These premium requests are in addition to unlimited requests for agent mode, context-driven chat, and code completions that all paid plans have when using our base model (👀 more below). With the new Pro+ tier, individual developers get the most out of the latest models with Copilot.
The agent awakening doesn’t stop there. We are also announcing the general availability of the Copilot code review agent. Plus, the general availability of next edit suggestions so you can tab tab tab your way to coding glory. 🏆
Agent mode in VS Code
Agent mode is progressively rolling out to VS Code users in stable, as we aim for full availability to all users in the coming weeks. You can also manually enable it now. Compared to chat or multi-file edits, which allow you to propose code changes across multiple files in your workspace, agent mode is fundamentally capable of taking action to translate your ideas into code. With simple prompts, agent mode takes Copilot beyond answering a question, instead completing all necessary subtasks across automatically identified or generated files to ensure your primary goal is achieved. Agent mode can suggest terminal commands or tool calls and ask you to execute them. It also analyzes run-time errors with self-healing capabilities.
Since the launch to VS Code Insiders in February, developers have been using agent mode for a variety of tasks: from autofixing code gen errors, to building webapps, to yeeting commits – whatever that means. 🙂

Agent mode is powered by your choice of Claude 3.5 and 3.7 Sonnet, Google Gemini 2.0 Flash, and OpenAI GPT-4o. Currently, agent mode achieves a pass rate of 56.0% on SWE-bench Verified with Claude 3.7 Sonnet. We anticipate agent mode to grow more capable as chain of thought reasoning models continue to advance.
Model Context Protocol (MCP) is now available in public preview
Developers spend their days conducting a wide array of tasks to get the job done, from research, to navigating telemetry, to infrastructure management, to coding and debugging. And they use many tools for this, the so-called engineering stack. MCP allows you to equip agent mode with the context and capabilities it needs to help you, like a USB port for intelligence. When you enter a chat prompt in agent mode within VS Code, the model can use different tools to handle tasks like understanding database schema or querying the web. This setup allows for more interactive and context-sensitive coding support.
For example, with a prompt to “Update my GitHub profile to include the title of the PR that was assigned to me yesterday,” agent mode would take that request, combined with the list of all available MCP tools, and ask an LLM what to do next. Over time, the agent would continue calling tools iteratively, until the task is complete.
Already, GitHub is home to a massive and growing MCP ecosystem that you can discover and use today. Here is a great repository that acts as a community inventory with some of the best MCP servers to use. The GitHub local MCP server equips agent mode with compelling capabilities such as searching across repositories and code, managing issues and creating PRs – turning agent mode into a powerful user of the GitHub platform.
Get started by setting up local and remote MCP servers and using tools with agent mode in Visual Studio Code. To get started with the GitHub local MCP server, visit the repository, now supported natively in VS Code.
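As a concrete illustration, one common way to run the GitHub local MCP server is as a container that an MCP-capable client, such as agent mode in VS Code, talks to over standard input/output. The image name and token variable below are assumptions based on the server’s public repository; check the repository README for the current setup instructions.

```bash
# Sketch: run the GitHub MCP server locally in Docker so an MCP client can
# connect to it over stdio. Replace <your-token> with a GitHub personal access
# token scoped to the repositories and permissions you want the agent to have.
docker run -i --rm \
  -e GITHUB_PERSONAL_ACCESS_TOKEN=<your-token> \
  ghcr.io/github/github-mcp-server
```

In practice, VS Code typically starts the server for you once it is listed in your MCP server configuration, so agent mode can launch it on demand rather than you running the container by hand.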
Premium model requests
Since GitHub Universe, we introduced a number of new models for chat, multi-file edits, and now agent mode. With the general availability of these models, we are introducing a new premium request type. Premium requests are in addition to the unlimited requests for agent mode, context-driven chat, and code completions in all paid plans for our base model (currently: OpenAI GPT-4o).
Customers with Copilot Pro will receive 300 monthly premium requests, beginning on May 5, 2025. Customers with Copilot Business and Copilot Enterprise will receive 300 and 1000 monthly premium requests respectively, starting between May 12 and May 19, 2025. Until then, use of these premium models is unlimited.
We are also introducing a new Pro+ plan for individuals with 1500 monthly premium requests and access to the best models, like GPT-4.5, for $39 per month.
Copilot paid users¹ will also have the ability to pay-as-you-go for additional premium request usage. Individuals and organizations can choose to opt in to use additional requests beyond their included amount, in addition to setting spending limits on requests to control costs with ease. GitHub Copilot Business and Enterprise administrators can manage requests via their Copilot Admin Billing Settings. Additional premium requests start at $0.04 per request.
Each premium model will consume a specific number of premium requests, allowing you to use a more powerful or efficient model when you need it, all while you have continued, unlimited access to Copilot’s base model.
Happy 50th birthday, Microsoft!
Today is also a hallmark moment in the history of technology: our mothership turns 50! From the creation of BASIC or MS-DOS, to the .NET Framework and VS Code, to the acquisition of GitHub – Microsoft has always been a developer company at heart. Half a century of dev love is no small feat. ❤️
Now, with GitHub Copilot – what started out as a developer platform company is a platform where anyone can be a developer. Together, GitHub and Microsoft fully intend on enabling a world with 1 billion developers. 🗺️
Here is Satya using agent mode to re-create MSFT’s first BASIC in a single shot. Let the vibes take you away, Mr. Nadella. 🕺
¹ Option to purchase additional premium requests is not available to users who subscribe or have subscribed to Pro or Pro+ through GitHub Mobile on iOS or Android.
The post Vibe coding with GitHub Copilot: Agent mode and MCP support rolling out to all VS Code users appeared first on The GitHub Blog.