The GitHub Blog
Stay inspired with updates, ideas, and insights from GitHub to aid developers in software design and development.
To ensure that you have access to the best technology available, we're continuously adding support for new models to GitHub Copilot. That being said, we know it can be hard to keep up with so many new models being released all the time.
All of this raises an obvious question: Which model should you use?
You can read our recent blog post for an overview of the models currently available in Copilot and their strengths, or check out our documentation for a deep dive comparing different models and tasks. But the AI landscape moves quickly. In this article we'll explore a framework, including a few strategies, for evaluating whether any given AI model is a good fit for your use, even as new models continue to appear at a rapid pace.
It's hard to go wrong with our base model, which has been fine-tuned specifically for programming-related tasks. But depending on what you're working on, you likely have varying needs and preferences. There's no single "best" model. Some may favor a more verbose model for chat, while others prefer a terse one, for example.
We spoke with several developers about their model selection process. Keep reading to discover how to apply their strategies to your own needs.
💡 Watch the video below for tips on prompt engineering to get the best results.
Why use multiple models?
There's no reason you have to pick one model and stick with it. Since you can easily switch between models for both chat and code completion with GitHub Copilot, you can use different models for different use cases.
Chat vs. code completion
Using one model for chat and another for autocomplete is one of the most common patterns we see among developers. Generally, developers prefer autocompletion models because they're fast and responsive, which they need if they're looking for suggestions as they think and type. Developers are more tolerant of latency in chat, when they're in more of an exploratory state of mind (like considering a complex refactoring job, for instance).
Reasoning models for certain programming tasks
Reasoning models like OpenAI o1 often respond slower than traditional LLMs such as GPT-4o or Claude 3.5 Sonnet. That's in large part because these models break a prompt down into parts and consider multiple approaches to a problem. That introduces latency in their response times, but makes them more effective at completing complex tasks. Many developers prefer these more deliberative models for particular tasks.
For instance, Fatih Kadir Akın, a developer relations manager, uses o1 when starting new projects from scratch. "Reasoning models better 'understand' my vision and create more structured projects than non-reasoning models," he explains.
FirstQuadrant CTO and co-founder Anand Chowdhary favors reasoning models for large-scale code refactoring jobs. "A model that rewrites complex backend code without careful reasoning is rarely accurate the first time," he says. "Seeing the thought process also helps me understand the changes."
When creating technical interview questions for her newsletter, GitHub Senior Director of Developer Advocacy Cassidy Williams mixes models for certain tasks. When she writes a question, she uses GPT-4o to refine the prose, and then Claude 3.7 Sonnet Thinking to verify code accuracy. "Reasoning models help ensure technical correctness because of their multi-step process," she says. "If they initially get something wrong, they often correct themselves in later steps so the final answer is more accurate."
What to look for in a new AI model
Let's say a new model just dropped and you're ready to try it out. Here are a few things to consider before making it your new go-to.
Recency
Different models use different training data. That means one model might have more recent data than another, and therefore might be trained on new versions of the programming languages, frameworks, and libraries you use.
"When I'm trying out a new model, one of the first things I do is check how up to date it is," says Xavier Portilla Edo, a cloud infrastructure engineering lead. He typically does this by creating a manifest file for the project to see what version numbers Copilot autocomplete suggests. "If the versions are quite old, I'll move on," he says.
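If you want to run a similar check yourself, here's a minimal sketch; the package names are just examples, and the point is to compare Copilot's suggested versions against the latest published releases:

```bash
# Create a bare project manifest, then let Copilot autocomplete dependencies
mkdir recency-check && cd recency-check
cat > package.json <<'EOF'
{
  "name": "recency-check",
  "version": "0.1.0",
  "dependencies": {
  }
}
EOF

# As you add dependencies in your editor, compare the versions Copilot
# suggests against the latest published releases, for example:
npm view react version
npm view astro version
```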
Speed and responsiveness
As mentioned, developers tend to tolerate more latency in a chat than in autocomplete. But responsiveness is still important in chat. "I enjoy bouncing ideas off a model and getting feedback," says Rishab Kumar, a staff developer evangelist at Twilio. "For that type of interaction, I need fast responses so I can stay in the flow."
Accuracy
Naturally, you need to evaluate which models produce the best code. "There's some subjectivity, but I compare model output based on the code structure, patterns, comments, and adherence to best practices," Portilla Edo says. "I also look at how readable and maintainable the code is: does it follow naming conventions? Is it modular? Are the comments helpful or just restating what the code does? These are all signals of quality that go beyond whether the code simply runs."
How to test an AI model in your workflow
OK, so now you know what to look for in a model. But how do you actually evaluate it for responsiveness and correctness? You use it, of course.
Start with a simple app
Akın will generally start with a simple todo app written in vanilla JavaScript. "I just check the code, and how well it's structured," he says. Similarly, Kumar will start with a websocket server in Python. The idea is to start with something that you understand well enough to evaluate, and then layer on more complexity. "Eventually I'll see if it can build something in 3D using Three.js," Akın says.
Portilla Edo starts by prompting a new model he wants to evaluate in Copilot Chat. "I usually ask it for simple things, like a function in Go, or a simple HTML file," he says. Then he moves on to autocompletion to see how the model performs there.
Use it as a "daily driver" for a while
Chowdhary prefers to just jump in and start using a model. "When a new model drops, I swap it into my workflow as my daily driver and just live with it for a bit," he says. "Available benchmarks and tests only tell you part of the story. I think the real test is seeing if it actually improves your day to day."
For example, he checks to see if it actually speeds up his debugging jobs or produces cleaner refactors. "It's kind of like dogfooding your own stack: You won't know if it really fits your workflow until you've shipped some real code with it," he says. "After evaluating it for a bit, I decide whether to stick with the new model or revert to my previous choice."
Take this with you
What just about everyone agrees on is that the best way to evaluate a model is to use it.
The important thing is to keep learning. "You don't need to be switching models all the time, but it's important to know what's going on," Chowdhary says. "The state of the art is moving quickly. It's easy to get left behind."
Additional resources
- Choosing the right AI model for your task
- Examples for AI model comparison
- Which AI models should I use with GitHub Copilot?
The post A guide to deciding what AI model to use in GitHub Copilot appeared first on The GitHub Blog.
GitHub Copilot has quickly become an integral part of how I build. Whether I'm exploring new ideas or scaffolding full pages, using Copilot's agent mode in my IDE helps me move faster, and more confidently, through each step of the development process.
GitHub Copilot agent mode is an interactive chat experience built right into your IDE that turns Copilot into an active participant in your development workflow. After you give it a prompt, agent mode streamlines complex coding tasks by autonomously iterating on its own code, identifying and fixing errors, suggesting and executing terminal commands, and resolving runtime issues with self-healing capabilities.
And here's the best part: You can attach images, reference files, and give natural language instructions, and Copilot will generate and modify code directly in your project!
In this post, I'll walk you through how I built a developer-focused landing page, from product requirements to code, using GitHub Copilot agent mode and the Claude 3.5 Sonnet model. This kind of build could easily take a few hours if I did it all by myself. But with Copilot, I had a working prototype in under 30 minutes! You'll see how I used design artifacts, inline chat, and Copilot's awareness of context to go from idea → design → code, with minimal friction.
You can also watch the full build in the video above!
Designing with AI: From PRD to UI
Before I wrote a single line of code, I needed a basic product vision. I started by using GitHub Copilot on GitHub.com to generate a lightweight product requirements document (PRD) using GPT-4o. Here was my prompt:
> "Describe a landing page for developers in simple terms."
Copilot returned a structured but simple outline of a PRD for a developer-focused landing page. I then passed this PRD into Claude 3.5 Sonnet and asked it to generate a design based on that prompt.
Claude gave me a clean, organized layout with common landing page sections: a hero, feature list, API examples, a dashboard preview, and more. This was more than enough for me to get started.
You can explore the full design that Claude built here; it's pretty cool.
Setting up the project
For the tech stack, I chose Astro because of its performance and flexibility. I paired it with Tailwind CSS and React for styling and component architecture. I started in a blank directory and ran the following commands:
```bash
npm create astro@latest
npx astro add react
npx astro add tailwind
```
I initialized the project, configured Tailwind, and opened it in VS Code with GitHub Copilot agent mode enabled (learn how to enable it with our docs!). Once the server was running, I was ready to start building.
Building section by section with Copilot agent mode
Copilot agent mode really shines when translating visual designs into production-ready code because it understands both image and code context in your project. By attaching a screenshot and specifying which file to edit, I could prompt it to scaffold new components, update layout structure, and even apply Tailwind styles, all without switching tabs or writing boilerplate manually.
For our project here, this meant I could take screenshots of each section from Claude's design and drop them directly into Copilot's context window.
💡 Pro tip: When building from a visual design like this, I recommend working on one section at a time. This not only keeps the context manageable for the model, but also makes it easier to debug if something goes off track. You'll know exactly where to look!
Creating the hero and navigation section
I opened `index.astro`, attached the design screenshot, and typed the following prompt:
> "Update index.astro to reflect the attached design. Add a new navbar and hero section to start the landing page."
Copilot agent mode then returned the following:
- Created `Navbar.astro` and `Hero.astro`
- Updated `index.astro` to render them
- Applied Tailwind styling based on the visual layout
And here's what I got:
Now, this is beautiful! Though it doesn't have the image on the right per the design, it did a very good job of getting the initial design down. We'll go back in later to update the section to be exactly what we want.
Commit early and often
💡 Pro tip: When building with AI tools, commit early and often. I've seen too many folks lose progress when a prompt goes sideways.
And in case you didn't know, GitHub Copilot can help here too. After staging your changes in the Source Control panel, click the ✨ sparkles icon to automatically generate a commit message. It's a small step that can save you a lot of time (and heartache).
Improve accuracy with Copilot custom instructions
One of the best ways to improve the quality of GitHub Copilot's suggestions, especially in multi-file projects, is by providing it with custom instructions. These are short, structured notes that describe your tech stack, project structure, and any conventions or tools you're using.
Instead of repeatedly adding this contextual detail to your chat questions, you can create a file in your repository that automatically adds this information for you. The additional information won't be displayed in the chat, but is available to Copilot, allowing it to generate higher-quality responses.
To give Copilot better context, I created a `CopilotInstructions.md` file describing my tech stack:
- Astro v5
- Tailwind CSS v4
- React
- TypeScript
When Copilot agent mode referenced this file while making suggestions, I noticed the results became more accurate and aligned with my setup.
Here's what some of the file looked like:
# GitHub Copilot Project Instructions
## Project Overview
This is an Astro project that uses React components and Tailwind CSS for styling. When making suggestions, please consider the following framework-specific details and conventions.
## Tech Stack
- Astro v5.x
- React as UI library
- Tailwind CSS for styling (v4.x)
- TypeScript for type safety
## Project Structure
```
├── src/
│   ├── components/   # React and Astro components
│   ├── layouts/      # Astro layout components
│   ├── pages/        # Astro pages and routes
│   ├── styles/       # Global styles
│   └── utils/        # Utility functions
├── public/           # Static assets
└── astro.config.mjs  # Astro configuration
```
## Component Conventions
### Astro Components
- Use `.astro` extension
- Follow kebab-case for filenames
- Example structure:
```astro
---
// Imports and props
interface Props {
title: string;
}
const { title } = Astro.props;
---
<div class="component-wrapper">
<h1>{title}</h1>
<slot />
</div>
<style>
/* Scoped styles if needed */
</style>
```
You can explore the full instructions file in my repo, along with the full code, setup instructions, and a link to the deployed landing page.
Iterating on your designs by prompting Copilot
I then repeated the same process to build each new section. Here's what this looked like in practice:
"Built by Developers" section
> "Add a new section to the landing page called 'By Developers' and follow the attached design."
Copilot generated a reusable component with feature cards structured in a Tailwind-styled grid.
"API development" section
> "Add the API development section based on the design."
This section featured interactive code samples in tabs. Copilot interpreted that from the screenshot and added UI logic to switch between examples, without me asking.
"Dashboard preview" section
> "Now add the dashboard management section on the landing page based on the design."
I uploaded a screenshot of my editor as a placeholder image, and Copilot added it seamlessly to the new component.
It's so amazing how fast we're building this landing page. Look at the progress we've already made!
Smart suggestions, fast results
Even with sections like "Trusted by Developers" and "Try it Yourself," Copilot created placeholder images, added semantic HTML, and applied Tailwind styling, all based on a single image and prompt. 🤯
When I updated the final hero section to match the layout more closely, Copilot flagged and fixed TypeScript issues without being prompted.
That might sound small, but it's a big deal. It means Copilot agent mode wasn't just taking instructions; it was actively understanding my codebase, looking at my terminal, identifying problems, and resolving them in real time. This reduced my need to context switch, so I could focus on shipping!
This wasn't just a series of generated components. It was a fully structured landing page with modern best practices baked in. And I didn't have to build it alone!
Wrapping up
With GitHub Copilot agent mode and Claude working together, I was able to:
- Generate a usable PRD and design mockup with a single prompt
- Build a responsive Astro-based landing page in less than thirty minutes
- Scaffold, test, and iterate on each section with minimal manual coding
- Use natural language to stay in the flow as I developed
What's next?
To complete this project, I updated the README with a clear project structure, added instructions for getting started, and staged it for deployment. From here, you can:
- Deploy it with GitHub Pages, Netlify, or your host of choice
- Set up GitHub Actions for CI/CD
- Add unit tests or accessibility checks
- Replace placeholder content with real data (like logos, dashboard, and profile images)
- Add new pages based on the Navbar
Want to explore it yourself?
Take this with you
AI tools like GitHub Copilot agent mode are transforming how we build, but like any tool, their power depends on how well we use them. Adding context, being explicit, and committing often made building this web page smooth and successful.
If you're thinking about building with GitHub Copilot, give this workflow a try:
- Start with a PRD using Copilot on GitHub.com
- Generate a design from your PRD with Claude
- Use Copilot agent mode in your IDE to code it, step by step.
Until next time, happy coding!
The post From prompt to production: Building a landing page with Copilot agent mode appeared first on The GitHub Blog.
You might have heard of the GitHub CLI and all of the awesome things you can do with it. However, one of its hidden superpowers is the ability to execute complex queries and mutations through GitHub's GraphQL API. This post will walk you through what GitHub's GraphQL API endpoint is and how to query it with the GitHub CLI.
What is GraphQL?
Let's start with the basics: GraphQL is a query language for APIs and a runtime for executing those queries against your data. Unlike traditional REST APIs that provide fixed data structures from predefined endpoints, GraphQL allows clients to request exactly the data they need in a single request. This single-request approach reduces network overhead, speeds up application performance, and simplifies client-side logic by eliminating the need to reconcile multiple API responses, a capability that has been openly available since the specification was open sourced in 2015.
GraphQL operations come in two primary types: queries and mutations. Queries are read-only operations that retrieve data without making any changes, similar to GET requests in REST. Mutations, on the other hand, are used to modify server-side data (create, update, or delete), comparable to POST, PATCH, PUT, and DELETE in REST APIs. This clear separation between reading and writing operations makes GraphQL interactions predictable while maintaining the flexibility to precisely specify what data should be returned after a change is made.
How is GraphQL used at GitHub?
GitHub implemented GraphQL in 2016 to address limitations of RESTful APIs. This adoption has significantly enhanced the developer experience when working with GitHub data. With the GraphQL endpoint, you can retrieve a repository's issues, its labels, assignees, and comments with a single GraphQL query. Using our REST APIs, this would have otherwise taken several sets of nested calls.
Some GitHub data and operations are only accessible through the GraphQL API (such as discussions, projects, and some enterprise settings), others exclusively through REST APIs (such as querying actions workflows, runners, or logs), and some using either endpoint (such as repositories, issues, pull requests, and user information). GitHub's GraphQL endpoint is accessible at `api.github.com/graphql`, and you can explore the full schema in our GraphQL documentation or through the interactive GraphQL Explorer.
A key consideration when choosing between the REST API and the GraphQL API is how the rate limits are calculated. As a quick summary for how this is implemented:
- REST API: Limited by number of requests (typically 5,000 requests per hour for authenticated users and up to 15,000 for GitHub Apps installed in an Enterprise)
- GraphQL API: Limited by "points" (typically 5,000 points per hour for authenticated users but can go up to 10,000-12,500 points per hour for GitHub Apps)
Each GraphQL query costs at least one point, but the cost increases based on the complexity of your query (number of nodes requested, connections traversed, etc.). The GraphQL API provides a `rateLimit` field you can include in your queries to check your current limit status.
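For example, you can check your budget (and what a given query costs) by adding the `rateLimit` object to any query:

```bash
gh api graphql -f query='
query {
  rateLimit {
    limit      # points allowed per hour
    cost       # points this query cost
    remaining  # points left in the current window
    resetAt    # when the window resets
  }
}
'
```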
For scenarios where you need to fetch related data that would otherwise require multiple REST calls, GraphQL is often more rate limit friendly because:
- One complex GraphQL query might cost 5-10 points but replace 5-10 separate REST API calls.
- You avoid "over-fetching" data you don't need, which indirectly helps with rate limits.
- The GraphQL API allows for more granular field selection, potentially reducing the complexity and point cost.
However, poorly optimized GraphQL queries that request large amounts of nested data could potentially use up your rate limit faster than equivalent REST requests, and quickly run into secondary rate limit issues.
A quick rule of thumb on deciding between which to use:
- For querying relational objects, such as GitHub Projects and their issues, GraphQL is often more effective, especially if it's a discrete number of items.
- For bulk data of one type or single data points, such as pulling in a list of repository names in an organization, the REST API is often preferred.
Sometimes there isn't a right or wrong answer, so as long as the object exists, try one out!
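To make the second case concrete, the REST side of `gh api` turns a bulk pull into a one-liner; here's one that lists repository names in an organization:

```bash
# List repository names in an organization via the REST API
gh api orgs/github/repos --paginate --jq '.[].name'
```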
Why use GitHub CLI for GraphQL?
While many developers start with GitHub's GraphQL Explorer on the web, `curl`, or other API querying tools, there's a more streamlined approach: using the built-in GraphQL support in the GitHub CLI. Before diving into the how-to, let's understand why GitHub CLI is often my go-to tool for GraphQL queries and mutations:
- Authentication is handled automatically: No need to manage personal access tokens manually.
- Streamlined syntax: Simpler than crafting `curl` commands.
- Local development friendly: Run queries and mutations right from your terminal.
- JSON processing: Built-in options for filtering and formatting results.
- Pagination support: Ability to work with cursor-based pagination in GraphQL responses.
- Consistent experience: Same tool you're likely using for other GitHub tasks.
How to get started with `gh api graphql`
First, ensure you have GitHub CLI installed and authenticated with `gh auth login`. The basic syntax for making a GraphQL query with `gh api graphql` is:
gh api graphql -H X-Github-Next-Global-ID:1 -f query='
query {
viewer {
login
name
bio
}
}
'
This simple query returns your GitHub username, the name you have defined in your profile, and your bio. The `-f` flag defines form variables, with `query=` being the GraphQL query itself.
Here's our example output:
{
"data": {
"viewer": {
"login": "joshjohanning",
"name": "Josh Johanning",
"bio": "DevOps Architect | GitHub"
}
}
}
Running queries and mutations
Basic query example
Let's try something more practical: fetching information about a repository. To get started, we'll use the following query:
gh api graphql -H X-Github-Next-Global-ID:1 -f query='
query($owner:String!, $repo:String!) {
repository(owner:$owner, name:$repo) {
name
description
id
stargazerCount
forkCount
issues(states:OPEN) {
totalCount
}
}
}
' -F owner=octocat -F repo=Hello-World
The `-F` flag sets variable values that are referenced in the query with `$variable`.
Here's our example output:
{
"data": {
"repository": {
"name": "Hello-World",
"description": "My first repository on GitHub!",
"id": "R_kgDOABPHjQ",
"stargazerCount": 2894,
"forkCount": 2843,
"issues": {
"totalCount": 1055
}
}
}
}
💡 Tip: The `-H X-Github-Next-Global-ID:1` parameter sets an HTTP header that instructs GitHub's GraphQL API to use the new global node ID format rather than the legacy format. While your query will function without this header, including it prevents deprecation warnings when referencing node IDs (such as when passing the repository `id` in subsequent operations). GitHub recommends adopting this format for all new integrations to ensure long-term compatibility.
Running mutations
Mutations work similarly. Here's how to create a new issue:
gh api graphql -H X-Github-Next-Global-ID:1 -f query='
mutation($repositoryId:ID!, $title:String!, $body:String) {
createIssue(input:{repositoryId:$repositoryId, title:$title, body:$body}) {
issue {
url
number
title
body
state
}
}
}
' -F repositoryId="R_kgDOABPHjQ" -F title="Creating issue with GraphQL" -F body="Issue body created via GraphQL\!"
Make sure to update the `repositoryId` parameter with the actual repository's GraphQL ID (an example of returning a repository's ID is shown in the basic query above!).
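If you need to grab that ID programmatically, you can reuse the basic query with the `--jq` flag (covered in the next section) to extract just the ID:

```bash
# Fetch a repository's GraphQL node ID for use in mutations
gh api graphql -H X-Github-Next-Global-ID:1 -f query='
query($owner:String!, $repo:String!) {
  repository(owner:$owner, name:$repo) { id }
}' -F owner=octocat -F repo=Hello-World --jq '.data.repository.id'
```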
Here's our example output:
{
"data": {
"createIssue": {
"issue": {
"url": "https://github.com/octocat/Hello-World/issues/3706",
"number": 3706,
"title": "Creating issue with GraphQL",
"body": "Issue body created via GraphQL!",
"state": "OPEN"
}
}
}
}
Filtering GraphQL results
GitHub CLI supports JQ-style filtering for extracting specific parts of the response, which is invaluable when you need to parse just the repository names or URLs from a query for use in automation scripts. Here is an example of using the `--jq` flag:
gh api graphql -H X-Github-Next-Global-ID:1 -f query='
query($owner:String!, $repo:String!) {
repository(owner:$owner, name:$repo) {
issues(first:3, states:OPEN) {
nodes {
number
title
url
}
}
}
}
' -F owner=octocat -F repo=Hello-World --jq '.data.repository.issues.nodes[]'
The `--jq` flag accepts JQ expressions to process JSON output. This query returns just the array of issues, without the surrounding GraphQL response structure.
Here's our example output:
{
"number": 26,
"title": "test issue",
"url": "https://github.com/octocat/Hello-World/issues/26"
}
{
"number": 27,
"title": "just for test",
"url": "https://github.com/octocat/Hello-World/issues/27"
}
{
"number": 28,
"title": "Test",
"url": "https://github.com/octocat/Hello-World/issues/28"
}
We could have modified the `--jq` flag to just return the issue URLs, like so:
gh api graphql -H X-Github-Next-Global-ID:1 -f query='
query($owner:String!, $repo:String!) {
repository(owner:$owner, name:$repo) {
issues(first:3, states:OPEN) {
nodes {
number
title
url
}
}
}
}
' -F owner=octocat -F repo=Hello-World --jq '.data.repository.issues.nodes[].url'
Here's our example output:
https://github.com/octocat/Hello-World/issues/26
https://github.com/octocat/Hello-World/issues/27
https://github.com/octocat/Hello-World/issues/28
Handling pagination
GitHub's GraphQL API limits results to a maximum of 100 items per page, which means you'll need pagination to retrieve larger datasets.
Pagination in GraphQL works by returning a "cursor" with each page of results, which acts as a pointer to where the next set of results should begin. When you request the next page, you provide this cursor to indicate where to start.
The easiest way to handle this pagination in the GitHub CLI is with the `--paginate` flag, which automatically collects all pages of results for you by managing these cursors behind the scenes. Here's what that looks like in a query:
gh api graphql --paginate -H X-Github-Next-Global-ID:1 -f query='
query($owner:String!, $repo:String!, $endCursor:String) {
repository(owner:$owner, name:$repo) {
issues(first:100, after:$endCursor, states:OPEN, orderBy:{field:CREATED_AT, direction:DESC}) {
pageInfo {
hasNextPage
endCursor
}
nodes {
number
title
createdAt
}
}
}
}
' -F owner=octocat -F repo=Hello-World
The `pageInfo` object with its `hasNextPage` and `endCursor` fields is essential for pagination. When you use the `--paginate` flag, GitHub CLI automatically uses these fields to fetch all available pages for your query, combining the results into a single response.
Here's our example output:
{
"data": {
"repository": {
"issues": {
"pageInfo": {
"hasNextPage": true,
"endCursor": "Y3Vyc29yOnYyOpK5MjAyNC0xMi0zMFQxNDo0ODo0NC0wNjowMM6kunD3"
},
"nodes": [
{
"number": 3708,
"title": "Creating issue with GraphQL once more",
"createdAt": "2025-04-02T18:15:11Z",
"author": {
"login": "joshjohanning"
}
},
{
"number": 3707,
"title": "Creating issue with GraphQL again",
"createdAt": "2025-04-02T18:15:02Z",
"author": {
"login": "joshjohanning"
}
},
{
"number": 3706,
"title": "Creating issue with GraphQL",
"createdAt": "2025-04-02T18:14:37Z",
"author": {
"login": "joshjohanning"
}
},
… and so on
]
}
}
}
}
This approach works great for moderate amounts of data, but keep in mind that GitHub's GraphQL API has rate limits, so extremely large queries might need to implement delays between requests.
💡 Important limitation: The `--paginate` flag can only handle pagination for a single connection at a time. For example, when listing repository issues as shown above, it can paginate through all issues, but cannot simultaneously paginate through each issue's comments. For nested pagination, you'll need to implement custom logic.
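As a sketch of what that custom logic can look like, here's one way to page through a single issue's comments by tracking the cursor yourself in a shell loop (this assumes `jq` is installed; the owner, repository, and issue number are placeholders):

```bash
# Sketch: manually paginate one issue's comments by tracking endCursor ourselves
CURSOR=""
while true; do
  ARGS=(-F owner=octocat -F repo=Hello-World -F issue=26)
  # Only pass a cursor once we have one; the first request starts at the beginning
  [ -n "$CURSOR" ] && ARGS+=(-F after="$CURSOR")

  PAGE=$(gh api graphql "${ARGS[@]}" -f query='
  query($owner:String!, $repo:String!, $issue:Int!, $after:String) {
    repository(owner:$owner, name:$repo) {
      issue(number:$issue) {
        comments(first:100, after:$after) {
          pageInfo { hasNextPage endCursor }
          nodes { body }
        }
      }
    }
  }')

  echo "$PAGE" | jq -r '.data.repository.issue.comments.nodes[].body'

  [ "$(echo "$PAGE" | jq -r '.data.repository.issue.comments.pageInfo.hasNextPage')" = "true" ] || break
  CURSOR=$(echo "$PAGE" | jq -r '.data.repository.issue.comments.pageInfo.endCursor')
done
```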
Building complex scripts: Chaining GraphQL queries together
When working with GitHub's GraphQL API, you often need to connect multiple queries to accomplish a complex task. Let's look at how to chain GraphQL calls together using the GitHub CLI:
ISSUE_ID=$(gh api graphql -H X-Github-Next-Global-ID:1 -f query='
query($owner: String!, $repo: String!, $issue_number: Int!) {
repository(owner: $owner, name: $repo) {
issue(number: $issue_number) {
id
}
}
}
' -F owner=joshjohanning -F repo=graphql-fun -F issue_number=1 --jq '.data.repository.issue.id')
gh api graphql -H GraphQL-Features:sub_issues -H X-Github-Next-Global-ID:1 -f query='
query($issueId: ID!) {
node(id: $issueId) {
... on Issue {
subIssuesSummary {
total
completed
percentCompleted
}
}
}
}' -F issueId="$ISSUE_ID"
Here's what this shell script is doing:
- The first query captures an issue's ID using the repository name and issue number
- The `--jq` flag extracts just the ID value and stores it in a variable
- The second query passes this ID to retrieve a summary of sub-issues
Here's our example output:
{
"data": {
"node": {
"subIssuesSummary": {
"total": 3,
"completed": 1,
"percentCompleted": 33
}
}
}
}
Take this with you
The `gh api graphql` command provides a convenient way to interact with GitHub's GraphQL API directly from your terminal. It eliminates the need for token management, simplifies query syntax and formatting, and handles basic pagination that would otherwise be complex to implement. Whether you're running complex queries or simple mutations, this approach offers a streamlined developer experience.
Next time you need to interact with GitHub's GraphQL API, skip the GraphQL Explorer on the web and try the GitHub CLI approach. It might just become your preferred method for working with GitHub's powerful GraphQL API capabilities.
The post Exploring GitHub CLI: How to interact with GitHub's GraphQL API endpoint appeared first on The GitHub Blog.
We launched the GitHub Innovation Graph to give developers, researchers, and policymakers an easy way to analyze trends in public software collaboration activity around the world. With today's quarterly¹ release, updated through December 2024, we now have five full years of data.
To help us celebrate, we've created some animated bar charts showcasing the growth in developers and pushes of some of the top economies around the world over time. Enjoy!
Animated bar charts
What a photo finish! The European Union surpassing the United States in cumulative git pushes was certainly a highlight, but we'd also note the significant movements of Brazil and Korea in climbing up the rankings.
Another close race, this time showing India outpacing the European Union in repositories between Q2 and Q3 2024.
Zooming into economies in APAC, we can appreciate the speed of developer growth in India, more than quadrupling in just 5 years.
Flying over to EMEA, we saw very impressive growth from Nigeria, which rose up from rank 20 in Q1 2020 to rank 11 in Q4 2024.
Finally, in LATAM, it was exciting to see how close most of the economies are in developer counts (with the exception of Brazil), with frequent back-and-forth swaps in rankings between economies like Argentina and Colombia, or Guatemala and Bolivia.
Want to explore more? Dive into the datasets yourself. We can't wait to check out what you build.
Global line charts
We've also made a feature update that will enable you to quickly understand the global scale of some of the metrics we publish, including the numbers of public git pushes, repositories, developers, and organizations on GitHub worldwide.
Simply follow the installation steps for our newly released GitHub MCP Server, and you'll be able to prompt GitHub Copilot in agent mode within VS Code to retrieve the CSVs from the data repo using the `get_file_contents` tool. Then, you can have the agent sum up the latest values for you.
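If you'd rather script the retrieval than prompt an agent, the same files are reachable with a plain `gh api` call; note that the CSV path below is illustrative, so browse the data repository for the actual layout:

```bash
# Download a CSV from the Innovation Graph data repository via the contents API
gh api repos/github/innovationgraph/contents/data/git_pushes.csv \
  -H "Accept: application/vnd.github.raw" | head
```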
Afterward, you can double-check its results with these handy charts that we've added to their respective global metrics pages for git pushes, repositories, developers, and organizations. Check them out below.
¹ The GitHub Innovation Graph reports metrics according to calendar year quarters, which correspond to the following: Q1: January 1 to March 31; Q2: April 1 to June 30; Q3: July 1 to September 30; and Q4: October 1 to December 31.
The post Racing into 2025 with new GitHub Innovation Graph data appeared first on The GitHub Blog.
Climate change is one of the most pressing issues of this century. We are working with developers to leverage technology to create a greener world. So, this Earth Day, we're excited to launch the Climate Action Plan for Developers.
We've curated tools and projects to help you kick-start your climate action journey and contribute to achieving net zero carbon emissions. Explore over 60,000 green software and climate-focused repositories on GitHub.
Not sure where to start? Take a look below at a few highlights that can help you start to green your code today.
🌍 Speed & Scale
Speed & Scale is a global initiative to move leaders to act on the climate crisis. Their team has developed a net zero action plan, with 10 objectives and 49 key results that track yearly progress.
⚡️ Electricity Maps
Electricity Maps is the leading electricity grid API, offering a single source for accessing carbon intensity and energy mix globally. As a developer you can go beyond just viewing the maps to pull data from their API, download data files, and even contribute to their open source project.
Access the Electricity Maps API
🖥️ CodeCarbon
CodeCarbon is a lightweight software package that allows for integration into any Python project to track and reduce CO2 emissions from your computing. Get started with using the software package and check out the opportunities to help support this open source project.
Get started with the software package
🌳 ClimateTriage, by OpenSustain.Tech
ClimateTriage helps developers discover a meaningful way to contribute to open source projects focused on climate technology and sustainability. Harness the power of open source collaboration to tackle environmental challenges such as climate change, clean energy, biodiversity, and natural resource conservation. Whether you're an experienced developer, a scientist, or a newcomer looking to contribute, ClimateTriage connects you with opportunities to use your skills to create a sustainable future.
Get started with a Good First Issue
💪 Use GitHub Copilot and CodeCarbon for greener code
Computational tasks, especially in AI, have a growing carbon footprint. Learn how CodeCarbon, an open-source Python library, helps measure CO2 emissions from your code. Together with GitHub Copilot, integrate CodeCarbon into your projects, allowing you to track energy use and optimize for sustainability.
Get started with GitHub Copilot for free today
Learn more about how you can take climate action today.
The post How to take climate action with your code appeared first on The GitHub Blog.
GitHub supports dark mode and light mode, and as developers, we can make our README images look great in both themes. Here's a quick guide to using the `<picture>` element in your GitHub Markdown files to dynamically switch images based on the user's color scheme.
When developers switch to GitHub's dark mode (or vice versa), standard images can look out of place, with bright backgrounds or clashing colors.
Instead of forcing a one-size-fits-all image, you can tailor your visuals to blend seamlessly with the theme. It's a small change, but it can make your project look much more polished.
One snippet, two themes!
Here's the magic snippet you can copy into your README (or any Markdown file):
```html
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="dark-mode-image.png">
  <source media="(prefers-color-scheme: light)" srcset="light-mode-image.png">
  <img alt="Fallback image description" src="default-image.png">
</picture>
```
Now, we say it's magic, but let's take a peek behind the curtain to show how it works:
- The `<picture>` tag lets you define multiple image sources for different scenarios.
- The `<source media="...">` attribute matches the user's color scheme.
  - When `media="(prefers-color-scheme: dark)"`, the browser loads the `srcset` image when GitHub is in dark mode.
  - Similarly, when `media="(prefers-color-scheme: light)"`, the browser loads the `srcset` image when GitHub is in light mode.
- If the browser doesn't support the `<picture>` element, or the user's system doesn't match any defined media queries, the fallback `<img>` tag will be used.
You can use this approach in your repo README files, documentation hosted on GitHub, and any other Markdown files rendered on GitHub.com!
Demo
What's better than a demo to help you get started? Here's what this looks like in practice:
The post How to make your images in Markdown on GitHub adjust for dark mode and light mode appeared first on The GitHub Blog.
GitHub Universe returns to San Francisco on October 28 and 29, bringing together the builders, dreamers, and changemakers shaping the future of software. From first-time speakers with big ideas to DevRel pros with demos to share and business leaders rethinking workflows with AI, we believe that a diverse range of voices belong on our stage.
But writing a compelling conference session submission can feel like decoding a complex algorithm. What makes your idea stand out? How do you grab the content committee's attention? And what if you've never done this before?
Good news: we've cracked the code, and we're sharing it with you.
Here are four proven tips to help you put together a proposal that's clear, compelling, and uniquely you.
Apply to speak or nominate a speaker to take the stage at GitHub Universe by Friday, May 2 at 11:59 pm PT to be considered.
1. Find something you're truly passionate about 💡
Here's the truth: passion is magnetic. If you're excited about your topic, it shows. It pulses through your proposal, powers your delivery onstage, and pulls in your audience, content committee included.
Instead of chasing the latest trends, talk about something that lights you up. Maybe it's a story from building an open source project in your off-hours. Maybe it's how your team shipped something new using GitHub Copilot. Or maybe it's the unexpected way you quickly scaled developer experience across a global org. Your unique perspective is your superpower.
Content committees can sense authenticity. They're not just looking for polished buzzwords. They're looking for people who care deeply and can teach others something meaningful.
🤔 Pro tip: If it's a topic you'd talk about over lunch with a teammate or geek out about on a podcast, it's probably a great fit.
2. Write a title they can't ignore ✍️
Think of your session title like an email subject line: it's your chance to make a strong first impression, and it needs to do the heavy lifting for you. A strong title shouldn't just sound good. It should clearly communicate what your talk is about and why it matters.
Let's take our title as an example:
- ✅ Engaging: "Cracking the Code" suggests there's an inside strategy, and it sparks curiosity.
- ✅ Clear: "How to wow the acceptance committee at your next tech event" leaves no doubt about the topic.
- ✅ Action-oriented: It promises practical takeaways, not just theory.
- ✅ Balanced: It walks the line between fun and professional.
Avoid vague titles ("A new approach to software") or clickbait ("This one trick will fix your codebase"). Instead, aim for clarity with flair. Give the content committee a reason to want to learn more along with the confidence that your talk can deliver.
🤔 Pro tip: After you write your title, ask yourself: would I attend this session? Would I understand what I'm getting from it in five seconds?
3. Make it easy for the content committee to say yes ✅
The content committee is rooting for you, but you've got to help them out. The best submissions remove all ambiguity and make a strong case for why this session matters.
Here's how:
- Be specific about your audience: Who is this for? Senior engineers? OSS maintainers? Platform teams? Product leads?
- Spell out the takeaways: What will people learn? Tools, frameworks, fresh mindsets?
- Tie it to the event: Why does this belong at GitHub Universe? How does it support the event's themes?
Also, show that your content has a life beyond the stage:
- Can your session be turned into a blog, case study, or video?
- Is your abstract compelling enough to be featured in a marketing email or keynote recap?
- Will attendees be able to apply what they learned the next day?
🤔 Hot tip: Think beyond the talk itself. That's pure gold for event organizers.
4. Seal the deal with your online presence 🌐
Yes, your session submission is the star, but reviewers on the content committee can also look you up. Your online presence helps us understand:
- Your credibility and expertise
- Your speaking experience (or potential!)
- How easy it will be to promote you as a speaker
You don't need a massive following. But you do want a strong, relevant footprint. Here are a few tips to consider:
On LinkedIn:
- Use a headline that highlights your expertise, not just your title.
- Make your "About" section shine with links to talks, blogs, and projects.
- Add speaking experience under "Experience" or "Featured."
On GitHub:
- Update your profile README with your focus areas and links.
- Pin key repos or projects you've contributed to.
- Be active in discussions, even if most of your code is private.
🤔 Hot tip: Post about your submission journey! Sharing your process helps you engage with the community and might even inspire someone else to apply.
Ready to take the stage?
You've got the ideas. Now you've got the blueprint. If you've made it this far, we hope you feel ready, and excited, to throw your hat in the ring. Let's recap:
- Lead with passion to find a topic you care deeply about.
- Craft a clear, compelling title that grabs attention and gives the content committee an immediate idea of your session topic and takeaways.
- Make your submission a no-brainer by showing how it aligns with the event and adds value.
- Polish your online presence; it might just tip the scale in your favor.
Whether you're a seasoned speaker or stepping into the spotlight for the first time, we can't wait to hear from you. And if you don't have a session idea this year, you can also nominate a speaker who deserves to take the stage. Submit a session proposal or a speaker nomination from now until Friday, May 2 at 11:59 pm PT to be considered!
Let's build the future together, one session at a time. 💫
The post Cracking the code: How to wow the acceptance committee at your next tech event appeared first on The GitHub Blog.
This was originally published on our developer newsletter, GitHub Insider, which offers tips and tricks for devs at every level. If you're not subscribed, go do that now; you won't regret it (we promise).
If you've ever wondered which AI model is the best fit for your GitHub Copilot project, you're not alone. Since each model has its own strengths, picking the right one can feel somewhat mysterious.
With models that prioritize speed, depth, or a balance of both, it helps to know what each one brings to the table. Let's break it down together. 👇
The TL;DR
- 💳 Balance between cost and performance: Go with GPT-4.1, GPT-4o, or Claude 3.5 Sonnet.
- 💪 Fast, lightweight tasks: o4-mini or Claude 3.5 Sonnet are your buddies.
- 🔍 Deep reasoning or complex debugging: Think Claude 3.7 Sonnet, o3, or GPT-4.5.
- 🖼️ Multimodal inputs (like images): Check out Gemini 2.0 Flash or GPT-4o.
Your mileage may vary and it's always good to try things yourself before taking someone else's word for it, but this is how these models were designed to be used. All that being said…
Let's talk models.
🏎️ Putting coding speed first
o4-mini and o3-mini: The speed demons 💨
Fast, efficient, and cost-effective, o4-mini and o3-mini are ideal for simple coding questions and quick iterations. If you're looking for a no-frills model, use these.
✅ Use them for:
- Quick prototyping.
- Explaining code snippets.
- Learning new programming concepts.
- Generating boilerplate code.
👉 You may prefer another model: If your task spans multiple files or calls for deep reasoning, a higher-capacity model such as GPT-4.5 or o3 can keep more context in mind. Looking for extra expressive flair? Try GPT-4o.
⚖️ AI models designed for balance
Claude 3.5 Sonnet: The budget-friendly helper 💸
Need solid performance but watching your costs? Claude 3.5 Sonnet is like a dependable sidekick. It's great for everyday coding tasks without burning through your monthly usage.
✅ Use it for:
- Writing documentation.
- Answering language-specific questions.
- Generating code snippets.
👉 You may prefer another model: For elaborate multi-step reasoning or big-picture planning, consider stepping up to Claude 3.7 Sonnet or GPT-4.5.
GPT-4o and GPT-4.1: The all-rounders 🛠️
These are your go-to models for general tasks. Need fast responses? Check. Want to work with text *and* images? Double check. GPT-4o and GPT-4.1 are like the Swiss Army knives of AI models: flexible, dependable, and cost-efficient.
✅ Use them for:
- Explaining code blocks.
- Writing comments or docs.
- Generating small, reusable snippets.
- Multilingual prompts.
👉 You may prefer another model: Complex architectural reasoning or multi-step debugging may land more naturally with GPT-4.5 or Claude 3.7 Sonnet.
🧠 Models for deep thinking and big projects
Claude 3.7 Sonnet: The architect 🏗️
This one's the power tool for large, complex projects. From multi-file refactoring to feature development across front end and back end, Claude 3.7 Sonnet shines when context and depth matter most.
✅ Use it for:
- Refactoring large codebases.
- Planning complex architectures.
- Designing algorithms.
- Combining high-level summaries with deep analysis.
👉 You may prefer another model: For quick iterations or straightforward tasks, Claude 3.5 Sonnet or GPT-4o may deliver results with less overhead.
Gemini 2.5 Pro: The researcher 🔬
Gemini 2.5 Pro is the powerhouse for advanced reasoning and coding. It's built for complex tasks (think: deep debugging, algorithm design, and even scientific research). With its long-context capabilities, it can handle extensive datasets or documents with ease.
✅ Use it for:
- Writing full functions, classes, or multi-file logic.
- Debugging complex systems.
- Analyzing scientific data and generating insights.
- Processing long documents, datasets, or codebases.
👉 You may prefer another model: For cost-sensitive tasks, o4-mini or Gemini 2.0 Flash are more budget-friendly options.
GPT-4.5: The thinker 🤔
Got a tricky problem? Whether you're debugging multi-step issues or crafting full-on systems architectures, GPT-4.5 thrives on nuance and complexity.
✅ Use it for:
- Writing detailed README files.
- Generating full functions or multi-file solutions.
- Debugging complex errors.
- Making architectural decisions.
👉 You may prefer another model: When you just need a quick iteration on something small, or you're watching tokens, GPT-4o can finish faster and cheaper.
o3 and o1: The deep diver 🥽
These models are perfect for tasks that need precision and logic. Whether you're optimizing performance-critical code or refactoring a messy codebase, o3 and o1 excel in breaking down problems step by step.
✅ Use them for:
- Code optimization.
- Debugging complex systems.
- Writing structured, reusable code.
- Summarizing logs or benchmarks.
👉 You may prefer another model: During early prototyping or lightweight tasks, a nimble model such as o4-mini or GPT-4o may feel snappier.
🖼️ Multimodal, or designed to handle it all
Gemini 2.0 Flash: The visual thinker 🤖
Got visual inputs like UI mockups or diagrams? Gemini 2.0 Flash lets you bring images into the mix, making it a great choice for front-end prototyping or layout debugging.
✅ Use it for:
- Analyzing diagrams or screenshots.
- Debugging UI layouts.
- Generating code snippets.
- Getting design feedback.
👉 You may prefer another model: If the job demands step-by-step algorithmic reasoning, GPT-4.5 or Claude 3.7 Sonnet will keep more moving parts in scope.
So… which model do I choose?
Here's the rule of thumb: Match the model to the task. Practice really does make perfect, and as you work with different models, it'll become clearer which ones work best for different tasks. The more I've personally used certain models, the more I've learned, "oh, I should switch for this particular task," and "this one will get me there."
And because I enjoy staying employed, I would love to cheekily mention that you can (and should!) use these models with…
Good luck, go forth, and happy coding!
The post Which AI model should I use with GitHub Copilot? appeared first on The GitHub Blog.
In March, we experienced one incident that resulted in degraded performance across GitHub services.
March 29 7:00 UTC (lasting 58 hours)
Between March 29 7:00 UTC and March 31 17:00 UTC, GitHub experienced service degradation due to two separate, but related incidents. On March 29, users were unable to unsubscribe from GitHub marketing email subscriptions due to a service outage. Additionally, on March 31, 2025 from 7:00 UTC to 16:40 UTC users were unable to submit ebook and event registration forms on resources.github.com, also due to a service outage.
The March 29 incident occurred due to expired credentials used for an internal service, preventing customers from being able to unsubscribe directly from marketing/sales topics through the github.com/settings/emails UI and from performing the double opt-in step required by some countries. A similar credential expiry on March 31 resulted in users experiencing degradation accessing resources.github.com.
The cause of the incident was traced to an issue in the automated alerting for monitoring upcoming credential expirations. The bug in alerting resulted in the invalid credentials being discovered after they had expired. This resulted in two incidents before we could deploy a durable fix. We mitigated it by renewing the credentials and redeploying the affected services.
To improve future response times and prevent similar issues, we have enhanced our credential expiry detection, alerting, and rotation processes, and are working on improving on-call observability.
Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we're working on, check out the GitHub Engineering Blog.
The post GitHub Availability Report: March 2025 appeared first on The GitHub Blog.
Whether it's building, testing, or deploying code, automating manual processes is key to improving developer experience and achieving a successful DevOps strategy.
On GitHub, you can use GitHub Actions to not only implement your CI/CD pipeline, but also automate other processes both on and off GitHub. When you are adopting GitHub Actions on GitHub Enterprise Cloud, you can choose between GitHub-hosted runners and self-hosted runners to run your workloads, and each has its pros and cons.
In this post, we'll compare GitHub-hosted runners with self-hosted runners across five areas to help you determine which type best fits your GitHub Actions adoption strategy.
What are GitHub-hosted runners and self-hosted runners?
GitHub-hosted runners and self-hosted runners are based on the same open-source software and both support macOS, Windows, and Linux. But they have many differences.
GitHub-hosted runners are fully managed on GitHub's infrastructure using pre-configured Windows, Linux, and macOS virtual machines. In addition to offering standard runners for typical workloads, hosted runners offer larger runners with more resources (memory, CPU, and storage), custom images, static IP ranges, and Azure Virtual Network integration for enhanced security control.
Self-hosted runners operate on your own infrastructure, whether on-premises or in the cloud. You manage all aspects: configuration, security, and scaling. They also allow you to operate runners in places you couldn't otherwise, for example, on GitHub Enterprise Server or on custom hardware. They can also be the only way to implement certain compliance requirements, especially when working with highly secured systems.
Both options offer distinct advantages depending on your specific needs and resources. Let's explore when GitHub-hosted runners may be the right choice for your projects, and when it may be better to use self-hosted runners.
Fully managed or self-managed?
A key distinction between these two options is where they're hosted, as we've pointed out. But that choice comes with several implications.
GitHub-hosted runners provide managed infrastructure with pools of on-demand virtual machines (VMs) that are automatically secured and updated. The environments are ephemeral, with the disks reimaged after each job, preventing files from previous jobs from affecting subsequent runs. The VMs are optimized for GitHub Actions, with pre-installed software and tools, including the latest versions of GitHub CLI, Docker, and common development platforms to ensure fast start times and avoid rate limits.
With GitHub-hosted runners, you can jump right in and start building workflows. There's nothing to configure or secure before you start, making them ideal when you want to get started quickly. And we all prefer to spend more time on code than infrastructure, right?
Self-hosted runners offer you complete flexibility in defining your solution, but they also mean you are responsible for managing the infrastructure, images, caches, and security, and monitoring availability and usage against GitHub's rate limits. This requires expertise in GitHub Actions architecture, VM and container image building, and network and infrastructure management. If your core business offering is scalable infrastructure solutions or Kubernetes, self-hosted runners may make sense.
Let's take a closer look.
Scalability
To remain productive, it's important to have highly available resources on demand, especially for CI/CD workloads, where waiting for a job to run may mean you're blocked from working on other tasks. In fact, a single wasted hour each week can cost a company over $4,000 a year per developer!
But scaling highly available, on-demand resources is hard. Even with a well-designed cloud infrastructure, it takes time to provision new virtual machines. You need systems in multiple regions to maintain uptime, with 20-25% spare capacity to scale quickly and handle unexpected system failures.
GitHub-hosted runners take advantage of Microsoft's deep data center and cloud expertise and have dedicated teams to meet our service level agreement (SLA) of 99.9% availability. And that's without any expertise on your part. In fact, many teams consider self-hosted runners in hopes of beating this availability, but it turns out that's not even technically possible, as all runners depend on the same services and control plane. That said, there are conditions where self-hosted runners may work for you.
Self-hosted runners may meet your needs if you need a fixed number of servers, are primarily focused on deployment to non-cloud resources, and don't need to scale on demand. Just remember that the instances are not natively ephemeral, so you'll need to have a strategy to keep the instances free from artifacts created by earlier runs. Self-hosted runners also lack automatic scaling capabilities; they require a scaling solution to be able to support large teams or create new instances dynamically.
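One building block for that cleanup strategy is the runner's `--ephemeral` flag, which registers a runner for a single job so your tooling can hand out a fresh instance each time. A minimal sketch, with the repository URL and registration token as placeholders:

```bash
# Register a runner that accepts exactly one job and then deregisters,
# so orchestration can replace it with a clean VM or container afterward
./config.sh --url https://github.com/OWNER/REPO \
  --token "$RUNNER_REGISTRATION_TOKEN" \
  --ephemeral
./run.sh
```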
GitHub's Actions Runner Controller (ARC) offers a scaling solution, but it has limitations: it requires Kubernetes expertise and only supports Linux runners. Kubernetes relies on containers instead of VMs, which can require you to troubleshoot resource contention and scaling issues. ARC can also offer high availability by running multiple clusters. As we noted before, if your primary business is hosting and managing Kubernetes clusters, then ARC may be the right approach.
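If you do go the ARC route, runner scale sets are configured through Helm chart values. As a rough sketch (the organization URL and secret name below are placeholders, and exact values may vary by chart version):

```yaml
# values.yaml sketch for ARC's gha-runner-scale-set Helm chart;
# the org URL and secret name are placeholders
githubConfigUrl: "https://github.com/my-org"
githubConfigSecret: arc-github-credentials # Kubernetes secret holding GitHub App or PAT credentials
minRunners: 1    # keep a warm runner to cut queue times
maxRunners: 20   # cap scale-out to protect the cluster
containerMode:
  type: "dind"   # Docker-in-Docker for jobs that build images
```

Workflows then target the scale set by its Helm release name, for example `runs-on: arc-runner-set`.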
ARC does not support macOS or Windows workloads, and both environments present a number of limitations. For example, on macOS, you are required to use Apple hardware, you are limited to two VMs per machine, and containerizing the Apple runtime is not supported. For Windows, virtual machines are supported, but you need a custom orchestrator for scaling the instances. While you can create Windows containers and manage them with Kubernetes, the containers have slow startup times and may not support some of the necessary development and testing tools.
In short, we recommend GitHub-hosted runners for both macOS and Windows workloads.
Security
Security is critical for CI/CD processes, since they may require access to internal or production resources, and builds often use third-party libraries, runtimes, and tools, which can create a large attack surface if not properly secured.
GitHub-hosted runners provide built-in security through a defense-in-depth, zero-trust approach. VMs provide network isolation, preventing exposure to other runners and corporate resources. In fact, access to corporate or cloud resources requires elevating privileges (we recommend OIDC). Their ephemeral nature eliminates code persistence and prevents application execution after job completion, reducing unauthorized access risks.
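As a sketch of that OIDC pattern, a deployment job can request a short-lived federated token instead of holding long-lived cloud credentials. The secret names below are placeholders, and the example assumes the `azure/login` action with a federated credential already configured on the Azure side:

```yaml
permissions:
  id-token: write # allow the job to request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }} # placeholder secret names
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      # Credentials are short-lived and federated; nothing persists after the job
      - run: az account show
```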
Storage disks for hosted runners are encrypted at rest, ensuring the code is protected on the disk. All communications are encrypted to GitHub, and deployments to Microsoft Azure are routed through the Azure backbone, minimizing transits through the public internet. We provide regular security updates to both operating systems and runner software. The minimized attack surface and reduced risk of security breaches are key factors in the Department of Defense DevSecOps Reference Design's recommendation to prefer GitHub-hosted runners for workloads up to Impact Level 5.
Self-hosted runners shift security responsibility entirely to you, requiring management of network, infrastructure, images, containers, and caches; that's a lot of work. You also need to keep everything up to date, as runners connected to GitHub Enterprise Cloud will not be able to connect if they are more than 30 days behind the current release.
Not to mention, if you operate runners within your network environment with access to corporate resources and production environments, you'll want to implement a zero-trust, defense-in-depth strategy with time-limited resource access, which demands a high level of network security expertise.
Finally, you'll need to implement and keep updated both a tool cache and an Actions archive cache. Otherwise, you're likely to encounter our rate limits as you scale up.
Troubleshooting
Keeping you productive means that problems with workflows or jobs (lack of resources, network issues, outages) need to be solved quickly. As a result, it's important to have a support strategy.
GitHub-hosted runners come with 24/7 support across all time zones, with premium plans offering dedicated reliability engineers and rapid 30-minute response times for critical issues. This eliminates the need for infrastructure troubleshooting on your part. GitHub handles all runner environment issues, from performance problems to queue times, letting you focus on development while we roll up our sleeves, figure out the problems, and get them fixed.
Self-hosted runners, however, shift first-level support responsibility to you, which means someone will have to troubleshoot performance, network, or queueing issues when they happen, leaving less time for the fun coding stuff.
Not only that, but GitHub can only assist with the Actions service itself; we cannot assist with your infrastructure, Kubernetes clusters, or custom orchestration solutions. So if support determines the issue lies in your systems, you'll be on your own to solve it. Without sufficient planning, you can spend a lot of time waiting for a solution that lets you get back to writing and deploying code. That can be a big price to pay for self-hosted runners.
Cost management
Finally, there's the issue of cost. If you are offering Kubernetes or infrastructure management solutions, self-hosted runners may have some advantages. If not, then GitHub-hosted runners are likely the answer here too.
GitHub-hosted runners operate on a pay-as-you-go model with no upfront costs or commitments. Teams optimize expenses through workflow improvements and appropriate runner selection. In addition, there are built-in cost savings. For example, GitHub doesn't charge network egress fees, a significant advantage when working with large container images on cloud platforms. GitHub also has a partnership with Docker that allows unlimited image pulls from Docker Hub by GitHub-hosted runners, which often eliminates the need to create a pass-through registry or purchase business licenses for your CI/CD processes. Maintaining, supporting, and securing the environment is handled by GitHub, avoiding additional staff and service expenses. Finally, Enterprise accounts benefit from 50,000 free monthly minutes for standard runners.
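Two of the lowest-effort workflow improvements are canceling superseded runs and capping job duration. A sketch (the test step is a placeholder):

```yaml
# Cancel in-flight runs made obsolete by a newer push to the same branch
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest # choose the smallest runner that fits the job
    timeout-minutes: 15    # stop runaway jobs from burning paid minutes
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test # placeholder test step
```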
With self-hosted runners, as in other areas, organizations assume responsibility for all infrastructure, network, storage, security, and support costs. This gives you a lot of flexibility in defining the environment, right-sizing your resources, and customizing the networking. While per-minute virtual machine expenses might initially seem lower, the total cost of ownership can (and often does) exceed GitHub-hosted solutions once these additional support costs are accounted for.
Which runner is best for you?
Choosing the right runner depends on your specific needs. Self-hosted runners are most suitable when using GitHub Enterprise Server (which lacks hosted runners), if your core business involves managing infrastructure or Kubernetes, or when you have compliance requirements not met by GitHub Enterprise Cloud with data residency. Scaling and ephemerality challenges make self-hosting less ideal for Windows and macOS workloads. If self-hosting is necessary, consider a hybrid approach and use self-hosted runners just for the specific workloads where they are needed.
For most developers and the vast majority of scenarios, unless you have very unique requirements or are willing to invest deeply in infrastructure to keep your CI/CD system humming, GitHub-hosted runners are likely your best option. They're especially beneficial for those new to GitHub Actions, and they let you spend your time focused on business value, new ideas, and writing code, instead of managing runners.