The GitHub Blog

Stay inspired with updates, ideas, and insights from GitHub to help developers design and build software.

April 24, 2025  16:00:51

To ensure that you have access to the best technology available, we’re continuously adding support for new models to GitHub Copilot. That being said, we know it can be hard to keep up with so many new models being released all the time.

All of this raises an obvious question: Which model should you use?

You can read our recent blog post for an overview of the models currently available in Copilot and their strengths, or check out our documentation for a deep dive comparing different models and tasks. But the AI landscape moves quickly. In this article, we’ll explore a framework—including a few strategies—for evaluating whether any given AI model is a good fit for your needs, even as new models continue to appear at a rapid pace.

It’s hard to go wrong with our base model, which has been fine-tuned specifically for programming-related tasks. But depending on what you’re working on, you likely have varying needs and preferences. There’s no single “best” model. Some may favor a more verbose model for chat, while others prefer a terse one, for example.

We spoke with several developers about their model selection process. Keep reading to discover how to apply their strategies to your own needs.

Why use multiple models?

There’s no reason you have to pick one model and stick with it. Since you can easily switch between models for both chat and code completion with GitHub Copilot, you can use different models for different use cases.

Chat vs. code completion

Using one model for chat and another for autocomplete is one of the most common patterns we see among developers. Generally, developers prefer autocompletion models because they’re fast and responsive, which they need if they’re looking for suggestions as they think and type. Developers are more tolerant of latency in chat, when they’re in more of an exploratory state of mind (like considering a complex refactoring job, for instance).

Reasoning models for certain programming tasks

Reasoning models like OpenAI o1 often respond more slowly than traditional LLMs such as GPT-4o or Claude 3.5 Sonnet. That’s in large part because these models break a prompt down into parts and consider multiple approaches to a problem, which introduces latency in their response times but makes them more effective at completing complex tasks. Many developers prefer these more deliberative models for particular tasks.

For instance, Fatih Kadir Akın, a developer relations manager, uses o1 when starting new projects from scratch. “Reasoning models better ‘understand’ my vision and create more structured projects than non-reasoning models,” he explains.

FirstQuadrant CTO and co-founder Anand Chowdhary favors reasoning models for large-scale code refactoring jobs. “A model that rewrites complex backend code without careful reasoning is rarely accurate the first time,” he says. “Seeing the thought process also helps me understand the changes.”

When creating technical interview questions for her newsletter, Cassidy Williams, GitHub’s Senior Director of Developer Advocacy, mixes models for certain tasks. When she writes a question, she uses GPT-4o to refine the prose, and then Claude 3.7 Sonnet Thinking to verify code accuracy. “Reasoning models help ensure technical correctness because of their multi-step process,” she says. “If they initially get something wrong, they often correct themselves in later steps so the final answer is more accurate.”

What to look for in a new AI model

Let’s say a new model just dropped and you’re ready to try it out. Here are a few things to consider before making it your new go-to.

Recency

Different models use different training data. That means one model might have more recent data than another, and therefore might be trained on new versions of the programming languages, frameworks, and libraries you use.

“When I’m trying out a new model, one of the first things I do is check how up to date it is,” says Xavier Portilla Edo, a cloud infrastructure engineering lead. He typically does this by creating a project manifest file to see what version numbers Copilot autocomplete suggests. “If the versions are quite old, I’ll move on,” he says.
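
One lightweight way to run a similar check from the terminal is to compare the registry’s latest release against whatever version the model suggests in your manifest. Here’s a minimal sketch using npm (the package names are just examples):

npm view astro version        # latest version published to the npm registry
npm view tailwindcss version

# Compare these against the versions the model autocompletes in package.json.
# If its suggestions lag several major versions behind, the model's training
# data is likely stale.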

Speed and responsiveness

As mentioned, developers tend to tolerate more latency in a chat than in autocomplete. But responsiveness is still important in chat. “I enjoy bouncing ideas off a model and getting feedback,” says Rishab Kumar, a staff developer evangelist at Twilio. “For that type of interaction, I need fast responses so I can stay in the flow.”

Accuracy

Naturally, you need to evaluate which models produce the best code. “There’s some subjectivity, but I compare model output based on the code structure, patterns, comments, and adherence to best practices,” Portilla Edo says. “I also look at how readable and maintainable the code is—does it follow naming conventions? Is it modular? Are the comments helpful or just restating what the code does? These are all signals of quality that go beyond whether the code simply runs.”

How to test an AI model in your workflow

OK, so now you know what to look for in a model. But how do you actually evaluate it for responsiveness and correctness? You use it, of course.

Start with a simple app

Akın will generally start with a simple todo app written in vanilla JavaScript. “I just check the code, and how well it’s structured,” he says. Similarly, Kumar will start with a websocket server in Python. The idea is to start with something that you understand well enough to evaluate, and then layer on more complexity. “Eventually I’ll see if it can build something in 3D using three.js,” Akın says.

Portilla Edo starts by prompting a new model he wants to evaluate in Copilot Chat. “I usually ask it for simple things, like a function in Go, or a simple HTML file,” he says. Then he moves on to autocompletion to see how the model performs there.

Use it as a “daily driver” for a while

Chowdhary prefers to just jump in and start using a model. “When a new model drops, I swap it into my workflow as my daily driver and just live with it for a bit,” he says. “Available benchmarks and tests only tell you part of the story. I think the real test is seeing if it actually improves your day to day.”

For example, he checks to see if it actually speeds up his debugging jobs or produces cleaner refactors. “It’s kind of like dogfooding your own stack: You won’t know if it really fits your workflow until you’ve shipped some real code with it,” he says. “After evaluating it for a bit, I decide whether to stick with the new model or revert to my previous choice.”

Take this with you

What just about everyone agrees on is that the best way to evaluate a model is to use it.

The important thing is to keep learning. “You don’t need to be switching models all the time, but it’s important to know what’s going on,” Chowdhary says. “The state of the art is moving quickly. It’s easy to get left behind.”

Additional resources

Learn more about AI models.

The post A guide to deciding what AI model to use in GitHub Copilot appeared first on The GitHub Blog.

April 23, 2025  16:06:05

GitHub Copilot has quickly become an integral part of how I build. Whether I’m exploring new ideas or scaffolding full pages, using Copilot’s agent mode in my IDE helps me move faster—and more confidently—through each step of the development process.

GitHub Copilot agent mode is an interactive chat experience built right into your IDE that turns Copilot into an active participant in your development workflow. After you give it a prompt, agent mode streamlines complex coding tasks by autonomously iterating on its own code, identifying and fixing errors, suggesting and executing terminal commands, and resolving runtime issues with self-healing capabilities.

And here’s the best part: You can attach images, reference files, and give natural language instructions, and Copilot will generate and modify code directly in your project!

In this post, I’ll walk you through how I built a developer-focused landing page—from product requirements to code—using GitHub Copilot agent mode and the Claude 3.5 Sonnet model. This kind of build could easily take a few hours if I did it all by myself. But with Copilot, I had a working prototype in under 30 minutes! You’ll see how I used design artifacts, inline chat, and Copilot’s awareness of context to go from idea → design → code, with minimal friction.

Designing with AI: From PRD to UI

Before I wrote a single line of code, I needed a basic product vision. I started by using GitHub Copilot on GitHub.com to generate a lightweight product requirements document (PRD) using GPT-4o. Here was my prompt:

> “Describe a landing page for developers in simple terms.”

Copilot returned a structured but simple outline of a PRD for a developer-focused landing page. I then passed this PRD into Claude 3.5 Sonnet and asked it to generate a design based on that prompt.

An image showing the interface for GitHub Copilot responding to the prompt "Describe a landing page for developers in simple terms" with a structured but simple outline of a PRD for a developer-focused landing page.

Claude gave me a clean, organized layout with common landing page sections: a hero, feature list, API examples, a dashboard preview, and more. This was more than enough for me to get started.

You can explore the full design that Claude built here; it’s pretty cool.

Setting up the project

For the tech stack, I chose Astro because of its performance and flexibility. I paired it with Tailwind CSS and React for styling and component architecture. I started in a blank directory and ran the following commands:

npm create astro@latest
npx astro add react
npx astro add tailwind

I initialized the project, configured Tailwind, and opened it in VS Code with GitHub Copilot agent mode enabled (learn how to enable it with our docs!). Once the server was running, I was ready to start building.

Building section by section with Copilot agent mode

Copilot agent mode really shines when translating visual designs into production-ready code because it understands both image and code context in your project. By attaching a screenshot and specifying which file to edit, I could prompt it to scaffold new components, update layout structure, and even apply Tailwind styles—all without switching tabs or writing boilerplate manually.

For our project here, this meant I could take screenshots of each section from Claude’s design and drop them directly into Copilot’s context window.

💡 Pro tip: When building from a visual design like this, I recommend working on one section at a time. This not only keeps the context manageable for the model, but also makes it easier to debug if something goes off track. You’ll know exactly where to look!

Creating the hero and navigation section

I opened index.astro, attached the design screenshot, and typed the following prompt:

> “Update index.astro to reflect the attached design. Add a new navbar and hero section to start the landing page.”

Copilot agent mode then returned the following:

  • Created Navbar.astro and Hero.astro
  • Updated index.astro to render them
  • Applied Tailwind styling based on the visual layout

And here’s what I got:

Now, this is beautiful! Though it doesn’t have the image on the right per the design, it did a very good job of getting the initial design down. We’ll go back in later to update the section to be exactly what we want.

Commit early and often

💡 Pro tip: When building with AI tools, commit early and often. I’ve seen too many folks lose progress when a prompt goes sideways.

And in case you didn’t know, GitHub Copilot can help here too. After staging your changes in the Source Control panel, click the ✨ sparkles icon to automatically generate a commit message. It’s a small step that can save you a lot of time (and heartache).

Improve accuracy with Copilot custom instructions

One of the best ways to improve the quality of GitHub Copilot’s suggestions—especially in multi-file projects—is by providing it with custom instructions. These are short, structured notes that describe your tech stack, project structure, and any conventions or tools you’re using.

Instead of repeatedly adding this contextual detail to your chat questions, you can create a file in your repository that automatically adds this information for you. The additional information won’t be displayed in the chat, but is available to Copilot—allowing it to generate higher-quality responses.

To give Copilot better context, I created a .github/copilot-instructions.md file describing my tech stack:

  • Astro v5
  • Tailwind CSS v4
  • React
  • TypeScript

Because Copilot agent mode referenced this file when making suggestions, I noticed the results became more accurate and aligned with my setup.

Here’s what some of the file looked like:

# GitHub Copilot Project Instructions

## Project Overview
This is an Astro project that uses React components and Tailwind CSS for styling. When making suggestions, please consider the following framework-specific details and conventions.

## Tech Stack
- Astro v5.x
- React as UI library
- Tailwind CSS for styling (v4.x)
- TypeScript for type safety

## Project Structure
```
├── src/
│   ├── components/     # React and Astro components
│   ├── layouts/        # Astro layout components
│   ├── pages/          # Astro pages and routes
│   ├── styles/         # Global styles
│   └── utils/          # Utility functions
├── public/             # Static assets
└── astro.config.mjs    # Astro configuration
```

## Component Conventions

### Astro Components
- Use `.astro` extension
- Follow kebab-case for filenames
- Example structure:

```astro
---
// Imports and props
interface Props {
  title: string;
}

const { title } = Astro.props;
---

<div class="component-wrapper">
  <h1>{title}</h1>
  <slot />
</div>

<style>
  /* Scoped styles if needed */
</style>
```

You can explore the full instructions file in my repo, along with the full code, setup instructions, and a link to the deployed landing page.

Iterating on your designs by prompting Copilot

I then repeated the same process to build each new section. Here’s what this looked like in practice:

“Built by Developers” section

> “Add a new section to the landing page called ‘By Developers’ and follow the attached design.”

Copilot generated a reusable component with feature cards structured in a Tailwind-styled grid.

An image showing a reusable component with feature cards structured in a Tailwind-styled grid.

“API development” section

> “Add the API development section based on the design.”

This section featured interactive code samples in tabs. Copilot interpreted that from the screenshot and added UI logic to switch between examples—without me asking.

“Dashboard preview” section

> “Now add the dashboard management section on the landing page based on the design.”

I uploaded a screenshot of my editor as a placeholder image, and Copilot added it seamlessly to the new component.

A screenshot of the dashboard management section.

It’s so amazing how fast we’re building this landing page. Look at the progress we’ve already made!

Smart suggestions, fast results

Even with sections like “Trusted by Developers” and “Try it Yourself,” Copilot created placeholder images, added semantic HTML, and applied Tailwind styling—all based on a single image and prompt. 🤯

A screenshot of the "Trusted by developers worldwide" section of the landing page.

When I updated the final hero section to match the layout more closely, Copilot flagged and fixed TypeScript issues without being prompted.

That might sound small, but it’s a big deal. It means Copilot agent mode wasn’t just taking instructions—it was actively understanding my codebase, looking at my terminal, identifying problems, and resolving them in real time. This reduced my need to context switch, so I could focus on shipping!

This wasn’t just a series of generated components. It was a fully structured landing page built with modern best practices baked in. And I didn’t have to build it alone!

Wrapping up

With GitHub Copilot agent mode and Claude working together, I was able to:

  • Generate a usable PRD and design mockup with a single prompt
  • Build a responsive Astro-based landing page in less than thirty minutes
  • Scaffold, test, and iterate on each section with minimal manual coding
  • Use natural language to stay in the flow as I developed

What’s next?

To complete this project, I updated the README with a clear project structure, added instructions for getting started, and staged it for deployment. From here, you can:

  • Deploy it with GitHub Pages, Netlify, or your host of choice
  • Set up GitHub Actions for CI/CD
  • Add unit tests or accessibility checks
  • Replace placeholder content with real data (like logos, dashboard, and profile images)
  • Add new pages based on the Navbar

Want to explore it yourself?

Take this with you

AI tools like GitHub Copilot agent mode are transforming how we build, but like any tool, their power depends on how well we use them. Adding context, being explicit, and committing often made building this web page smooth and successful.

If you’re thinking about building with GitHub Copilot, give this workflow a try:

  1. Start with a PRD using Copilot on GitHub.com.
  2. Generate a design from your PRD with Claude.
  3. Use Copilot agent mode in your IDE to code it, step by step.

Until next time, happy coding!

The post From prompt to production: Building a landing page with Copilot agent mode appeared first on The GitHub Blog.

April 22, 2025  16:00:01

You might have heard of the GitHub CLI and all of the awesome things you can do with it. However, one of its hidden superpowers is the ability to execute complex queries and mutations through GitHub’s GraphQL API. This post will walk you through what GitHub’s GraphQL API endpoint is and how to query it with the GitHub CLI.

What is GraphQL?

Let’s start with the basics: GraphQL is a query language for APIs and a runtime for executing those queries against your data. Unlike traditional REST APIs that provide fixed data structures from predefined endpoints, GraphQL allows clients to request exactly the data they need in a single request. This single-request approach reduces network overhead, speeds up application performance, and simplifies client-side logic by eliminating the need to reconcile multiple API responses. The specification has been openly available since it was open sourced in 2015.

GraphQL operations come in two primary types: queries and mutations. Queries are read-only operations that retrieve data without making any changes—similar to GET requests in REST. Mutations, on the other hand, are used to modify server-side data (create, update, or delete)—comparable to POST, PATCH, PUT, and DELETE in REST APIs. This clear separation between reading and writing operations makes GraphQL interactions predictable while maintaining the flexibility to precisely specify what data should be returned after a change is made.

How is GraphQL used at GitHub?

GitHub implemented GraphQL in 2016 to address limitations of RESTful APIs. This adoption has significantly enhanced the developer experience when working with GitHub data. With the GraphQL endpoint, you can retrieve a repository’s issues, their labels, assignees, and comments with a single GraphQL query. Using our REST APIs, this would otherwise take several sets of nested calls.
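
Here’s a sketch of what that kind of single query could look like, expressed as a GitHub CLI call (the gh api graphql command itself is covered later in this post, and the repo is just an example):

gh api graphql -f query='
  query($owner:String!, $repo:String!) {
    repository(owner:$owner, name:$repo) {
      issues(first:10, states:OPEN) {
        nodes {
          title
          labels(first:5) { nodes { name } }
          assignees(first:5) { nodes { login } }
          comments(first:5) { nodes { body } }
        }
      }
    }
  }
' -F owner=octocat -F repo=Hello-World

Each nested connection here would otherwise be its own round of REST calls.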

Some GitHub data and operations are only accessible through the GraphQL API (such as discussions, projects, and some enterprise settings), others exclusively through REST APIs (such as querying actions workflows, runners, or logs), and some using either endpoint (such as repositories, issues, pull requests, and user information). GitHub’s GraphQL endpoint is accessible at api.github.com/graphql and you can explore the full schema in our GraphQL documentation or through the interactive GraphQL Explorer.

A key consideration when choosing between the REST API and the GraphQL API is how the rate limits are calculated. As a quick summary for how this is implemented:

  • REST API: Limited by number of requests (typically 5,000 requests per hour for authenticated users and up to 15,000 for GitHub Apps installed in an Enterprise)
  • GraphQL API: Limited by “points” (typically 5,000 points per hour for authenticated users but can go up to 10,000-12,500 points per hour for GitHub Apps)

Each GraphQL query costs at least one point, but the cost increases based on the complexity of your query (number of nodes requested, connections traversed, etc.). The GraphQL API provides a rateLimit field you can include in your queries to check your current limit status.
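
For example, here’s a minimal status check along these lines (using the gh api graphql syntax covered later in this post; the # comments are valid GraphQL):

gh api graphql -f query='
  query {
    viewer { login }
    rateLimit {
      limit      # your hourly point budget
      cost       # points charged for this particular query
      remaining  # points left in the current window
      resetAt    # when the window resets
    }
  }
'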

For scenarios where you need to fetch related data that would otherwise require multiple REST calls, GraphQL is often more rate limit friendly because:

  • One complex GraphQL query might cost 5-10 points but replace 5-10 separate REST API calls.
  • You avoid “over-fetching” data you don’t need, which indirectly helps with rate limits.
  • The GraphQL API allows for more granular field selection, potentially reducing the complexity and point cost.

However, poorly optimized GraphQL queries that request large amounts of nested data could potentially use up your rate limit faster than equivalent REST requests—and quickly run into secondary rate limit issues.

A quick rule of thumb on deciding between which to use:

  • For querying relational objects, such as GitHub Projects and their issues, GraphQL is often more effective, especially if it’s a discrete number of items.
  • For bulk data of one type or single data points, such as pulling in a list of repository names in an organization, the REST API is often preferred.

Sometimes there isn’t a right or wrong answer. As long as the object exists, try one out!

Why use GitHub CLI for GraphQL?

While many developers start with GitHub’s GraphQL Explorer on the web, curl, or other API querying tools, there’s a more streamlined approach: using built-in GraphQL support in the GitHub CLI. Before diving into the how-to, let’s understand why GitHub CLI is often my go-to tool for GraphQL queries and mutations:

  1. Authentication is handled automatically: No need to manage personal access tokens manually.
  2. Streamlined syntax: Simpler than crafting curl commands.
  3. Local development friendly: Run queries and mutations right from your terminal.
  4. JSON processing: Built-in options for filtering and formatting results.
  5. Pagination support: Ability to work with cursor-based pagination in GraphQL responses.
  6. Consistent experience: Same tool you’re likely using for other GitHub tasks.

How to get started with gh api graphql

First, ensure you have GitHub CLI installed and authenticated with gh auth login. The basic syntax for making a GraphQL query with gh api graphql is:

gh api graphql -H X-Github-Next-Global-ID:1 -f query='
  query {
    viewer {
      login
      name
      bio
    }
  }
'

This simple query returns your GitHub username, the name you have defined in your profile, and your bio. The -f flag defines form variables, with query= being the GraphQL query itself.

Here’s our example output:

{
  "data": {
    "viewer": {
      "login": "joshjohanning",
      "name": "Josh Johanning",
      "bio": "DevOps Architect | GitHub"
    }
  }
}

Running queries and mutations

Basic query example

Let’s try something more practical—fetching information about a repository. To get started, we’ll use the following query:

gh api graphql -H X-Github-Next-Global-ID:1 -f query='
  query($owner:String!, $repo:String!) {
    repository(owner:$owner, name:$repo) {
      name
      description
      id
      stargazerCount
      forkCount
      issues(states:OPEN) {
        totalCount
      }
    }
  }
' -F owner=octocat -F repo=Hello-World

The -F flag sets variable values that are referenced in the query with $variable.

Here’s our example output:

{
  "data": {
    "repository": {
      "name": "Hello-World",
      "description": "My first repository on GitHub!",
      "id": "R_kgDOABPHjQ",
      "stargazerCount": 2894,
      "forkCount": 2843,
      "issues": {
        "totalCount": 1055
      }
    }
  }
}

💡 Tip: The -H X-Github-Next-Global-ID:1 parameter sets an HTTP header that instructs GitHub’s GraphQL API to use the new global node ID format rather than the legacy format. While your query will function without this header, including it prevents deprecation warnings when referencing node IDs (such as when passing a repository’s ID in subsequent operations). GitHub recommends adopting this format for all new integrations to ensure long-term compatibility.

Running mutations

Mutations work similarly. Here’s how to create a new issue:

gh api graphql -H X-Github-Next-Global-ID:1 -f query='
  mutation($repositoryId:ID!, $title:String!, $body:String) {
    createIssue(input:{repositoryId:$repositoryId, title:$title, body:$body}) {
      issue {
        url
        number
        title
        body
        state
      }
    }
  }
' -F repositoryId="R_kgDOABPHjQ" -F title="Creating issue with GraphQL" -F body="Issue body created via GraphQL\!"

Make sure to update the repositoryId parameter with the actual repository’s GraphQL ID (an example of returning a repository’s ID is shown in the basic query above!).

Here’s our example output:

{
  "data": {
    "createIssue": {
      "issue": {
        "url": "https://github.com/octocat/Hello-World/issues/3706",
        "number": 3706,
        "title": "Creating issue with GraphQL",
        "body": "Issue body created via GraphQL!",
        "state": "OPEN"
      }
    }
  }
}
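
If you don’t have a repository’s ID handy, one convenient pattern is to capture it into a shell variable first, reusing the basic query from earlier together with the --jq flag (covered in more detail below):

REPO_ID=$(gh api graphql -H X-Github-Next-Global-ID:1 -f query='
  query($owner:String!, $repo:String!) {
    repository(owner:$owner, name:$repo) {
      id
    }
  }
' -F owner=octocat -F repo=Hello-World --jq '.data.repository.id')

echo "$REPO_ID"   # e.g., R_kgDOABPHjQ

You can then pass -F repositoryId="$REPO_ID" to the mutation above. The section on chaining queries later in this post builds on this same pattern.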

Filtering GraphQL results

GitHub CLI supports JQ-style filtering for extracting specific parts of the response, which is invaluable when you need to parse just the repository names or URLs from a query for use in automation scripts. Here is an example of using the --jq flag:

gh api graphql -H X-Github-Next-Global-ID:1 -f query='
  query($owner:String!, $repo:String!) {
    repository(owner:$owner, name:$repo) {
      issues(first:3, states:OPEN) {
        nodes {
          number
          title
          url
        }
      }
    }
  }
' -F owner=octocat -F repo=Hello-World --jq '.data.repository.issues.nodes[]'

The --jq flag accepts JQ expressions to process JSON output. This query returns just the array of issues, without the surrounding GraphQL response structure.

Here’s our example output:

{
  "number": 26,
  "title": "test issue",
  "url": "https://github.com/octocat/Hello-World/issues/26"
}
{
  "number": 27,
  "title": "just for test",
  "url": "https://github.com/octocat/Hello-World/issues/27"
}
{
  "number": 28,
  "title": "Test",
  "url": "https://github.com/octocat/Hello-World/issues/28"
}

We could have modified the --jq flag to just return the issue URLs, like so:

gh api graphql -H X-Github-Next-Global-ID:1 -f query='
  query($owner:String!, $repo:String!) {
    repository(owner:$owner, name:$repo) {
      issues(first:3, states:OPEN) {
        nodes {
          number
          title
          url
        }
      }
    }
  }
' -F owner=octocat -F repo=Hello-World --jq '.data.repository.issues.nodes[].url'

Here’s our example output:

https://github.com/octocat/Hello-World/issues/26
https://github.com/octocat/Hello-World/issues/27
https://github.com/octocat/Hello-World/issues/28

Handling pagination

GitHub’s GraphQL API limits results to a maximum of 100 items per page, which means you’ll need pagination to retrieve larger datasets.

Pagination in GraphQL works by returning a “cursor” with each page of results, which acts as a pointer to where the next set of results should begin. When you request the next page, you provide this cursor to indicate where to start.

The easiest way to handle this pagination in the GitHub CLI is with the --paginate flag, which automatically collects all pages of results for you by managing these cursors behind the scenes. Here’s what that looks like in a query:

gh api graphql --paginate -H X-Github-Next-Global-ID:1 -f query='
  query($owner:String!, $repo:String!, $endCursor:String) {
    repository(owner:$owner, name:$repo) {
      issues(first:100, after:$endCursor, states:OPEN, orderBy:{field:CREATED_AT, direction:DESC}) {
        pageInfo {
          hasNextPage
          endCursor
        }
        nodes {
          number
          title
          createdAt
          author {
            login
          }
        }
      }
    }
  }
' -F owner=octocat -F repo=Hello-World

The pageInfo object with its hasNextPage and endCursor fields is essential for pagination. When you use the --paginate flag, GitHub CLI automatically uses these fields to fetch all available pages for your query, combining the results into a single response.

Here’s our example output:

{
  "data": {
    "repository": {
      "issues": {
        "pageInfo": {
          "hasNextPage": true,
          "endCursor": "Y3Vyc29yOnYyOpK5MjAyNC0xMi0zMFQxNDo0ODo0NC0wNjowMM6kunD3"
        },
        "nodes": [
          {
            "number": 3708,
            "title": "Creating issue with GraphQL once more",
            "createdAt": "2025-04-02T18:15:11Z",
            "author": {
              "login": "joshjohanning"
            }
          },
          {
            "number": 3707,
            "title": "Creating issue with GraphQL again",
            "createdAt": "2025-04-02T18:15:02Z",
            "author": {
              "login": "joshjohanning"
            }
          },
          {
            "number": 3706,
            "title": "Creating issue with GraphQL",
            "createdAt": "2025-04-02T18:14:37Z",
            "author": {
              "login": "joshjohanning"
            }
          },
          … and so on
        ]
      }
    }
  }
}

This approach works great for moderate amounts of data, but keep in mind that GitHub’s GraphQL API has rate limits, so extremely large queries might need to implement delays between requests.

💡 Important limitation: The --paginate flag can only handle pagination for a single connection at a time. For example, when listing repository issues as shown above, it can paginate through all issues, but cannot simultaneously paginate through each issue’s comments. For nested pagination, you’ll need to implement custom logic.
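
As a rough sketch of what that custom logic can look like, you can paginate the outer connection with --paginate, then loop over the results in your shell and query each inner connection separately. Here it’s shown for issue comments; if an issue had more than 100 comments, the inner query would need its own cursor loop:

# Outer pagination: collect every open issue number
for NUMBER in $(gh api graphql --paginate -f query='
  query($owner:String!, $repo:String!, $endCursor:String) {
    repository(owner:$owner, name:$repo) {
      issues(first:100, after:$endCursor, states:OPEN) {
        pageInfo { hasNextPage endCursor }
        nodes { number }
      }
    }
  }
' -F owner=octocat -F repo=Hello-World --jq '.data.repository.issues.nodes[].number'); do
  # Inner query: fetch each issue's comments, one issue at a time
  gh api graphql -f query='
    query($owner:String!, $repo:String!, $number:Int!) {
      repository(owner:$owner, name:$repo) {
        issue(number:$number) {
          comments(first:100) {
            nodes { author { login } body }
          }
        }
      }
    }
  ' -F owner=octocat -F repo=Hello-World -F number="$NUMBER"
done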

Building complex scripts: Chaining GraphQL queries together

When working with GitHub’s GraphQL API, you often need to connect multiple queries to accomplish a complex task. Let’s look at how to chain GraphQL calls together using the GitHub CLI:

ISSUE_ID=$(gh api graphql -H X-Github-Next-Global-ID:1 -f query='
  query($owner: String!, $repo: String!, $issue_number: Int!) {
    repository(owner: $owner, name: $repo) {
      issue(number: $issue_number) {
        id
      }
    }
  }
' -F owner=joshjohanning -F repo=graphql-fun -F issue_number=1 --jq '.data.repository.issue.id') 
gh api graphql -H GraphQL-Features:sub_issues -H X-Github-Next-Global-ID:1 -f query='
query($issueId: ID!) {
  node(id: $issueId) {
    ... on Issue {
      subIssuesSummary {
        total
        completed
        percentCompleted
      }
    }
  }
}' -F issueId="$ISSUE_ID"

Here’s what this shell script is doing:

  1. The first query captures an issue’s ID using the repository name and issue number
  2. The --jq flag extracts just the ID value and stores it in a variable
  3. The second query passes this ID to retrieve a summary of sub-issues

Here’s our example output:

{
  "data": {
    "node": {
      "subIssuesSummary": {
        "total": 3,
        "completed": 1,
        "percentCompleted": 33
      }
    }
  }
}

Take this with you

The gh api graphql command provides a convenient way to interact with GitHub’s GraphQL API directly from your terminal. It eliminates the need for token management, simplifies query syntax and formatting, and handles basic pagination that would otherwise be complex to implement. Whether you’re running complex queries or simple mutations, this approach offers a streamlined developer experience.

Next time you need to interact with GitHub’s GraphQL API, skip the GraphQL Explorer on the web and try the GitHub CLI approach. It might just become your preferred method for working with GitHub’s powerful GraphQL API capabilities.

The post Exploring GitHub CLI: How to interact with GitHub’s GraphQL API endpoint appeared first on The GitHub Blog.

April 21, 2025  17:00:13

We launched the GitHub Innovation Graph to give developers, researchers, and policymakers an easy way to analyze trends in public software collaboration activity around the world. With today’s quarterly¹ release, updated through December 2024, we now have five full years of data.

To help us celebrate, we’ve created some animated bar charts showcasing the growth in developers and pushes of some of the top economies around the world over time. Enjoy!

Animated bar charts

What a photo finish! The European Union surpassing the United States in cumulative git pushes was certainly a highlight, but we’d also note the significant movements of Brazil and Korea in climbing up the rankings.

Another close race, this time showing India outpacing the European Union in repositories between Q2 and Q3 2024.

Zooming into economies in APAC, we can appreciate the speed of developer growth in India, more than quadrupling in just 5 years.

Flying over to EMEA, we saw very impressive growth from Nigeria, which rose up from rank 20 in Q1 2020 to rank 11 in Q4 2024.

Finally, in LATAM, it was exciting to see how close most of the economies are in developer counts (with the exception of Brazil), with frequent back-and-forth swaps in rankings between economies like Argentina and Colombia, or Guatemala and Bolivia.

Want to explore more? Dive into the datasets yourself. We can’t wait to check out what you build.

Global line charts

We’ve also made a feature update that will enable you to quickly understand the global scale of some of the metrics we publish, including the numbers of public git pushes, repositories, developers, and organizations on GitHub worldwide.

Simply follow the installation steps for our newly released GitHub MCP Server, and you’ll be able to prompt GitHub Copilot in agent mode within VS Code to retrieve the CSVs from the data repo using the get_file_contents tool. Then, you can have the agent sum up the latest values for you.

A portion of the README.md file for the github/github-mcp-server repo which contains the easy installation buttons for installing the GitHub MCP server onto VS Code.

Afterward, you can double-check its results with these handy charts that we’ve added to their respective global metrics pages for git pushes, repositories, developers, and organizations. Check them out below.
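
If you’d rather double-check from the command line, the same CSVs can be pulled directly from the data repository with the GitHub CLI. A sketch, assuming the data lives in the public github/innovationgraph repo (the CSV path here is an assumption, so check that repo’s README for the actual layout):

# Fetch a CSV from the Innovation Graph data repo and preview it
gh api -H "Accept: application/vnd.github.raw" \
  repos/github/innovationgraph/contents/data/developers.csv | head -5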

  1. The GitHub Innovation Graph reports metrics according to calendar year quarters, which correspond to the following: Q1: January 1 to March 31; Q2: April 1 to June 30; Q3: July 1 to September 30; and Q4: October 1 to December 31.

The post Racing into 2025 with new GitHub Innovation Graph data appeared first on The GitHub Blog.

April 21, 2025  13:00:35

Climate change is one of the most pressing issues of this century. We are working with developers to leverage technology to create a greener world. So, this Earth Day, we’re excited to launch the Climate Action Plan for Developers.

We’ve curated tools and projects to help you kick-start your climate action journey and contribute to achieving net zero carbon emissions. Explore over 60,000 green software and climate-focused repositories on GitHub.

Not sure where to start? Take a look below at a few highlights that can help you start to green your code today.

🚀 Speed & Scale

Speed & Scale is a global initiative to move leaders to act on the climate crisis. Their team has developed a net zero action plan, with 10 objectives and 49 key results that track yearly progress.

Learn about their action plan

⚡️ Electricity Maps

Electricity Maps is the leading electricity grid API, offering a single source for accessing carbon intensity and energy mix globally. As a developer you can go beyond just viewing the maps to pull data from their API, download data files, and even contribute to their open source project.

Access the Electricity Maps API

🖥️ CodeCarbon

CodeCarbon is a lightweight software package that allows for integration into any Python project to track and reduce CO2 emissions from your computing. Get started with using the software package and check out the opportunities to help support this open source project.

Get started with the software package

🌳 ClimateTriage, by OpenSustain.Tech

ClimateTriage helps developers discover a meaningful way to contribute to open source projects focused on climate technology and sustainability. Harness the power of open source collaboration to tackle environmental challenges such as climate change, clean energy, biodiversity, and natural resource conservation. Whether you’re an experienced developer, a scientist, or a newcomer looking to contribute, ClimateTriage connects you with opportunities to use your skills to create a sustainable future.

Get started with a Good First Issue

💪 Use GitHub Copilot and CodeCarbon for greener code

Computational tasks, especially in AI, have a growing carbon footprint. Learn how CodeCarbon, an open source Python library, helps measure CO2 emissions from your code. With GitHub Copilot’s help, you can integrate CodeCarbon into your projects, allowing you to track energy use and optimize for sustainability.

Get started with GitHub Copilot for free today

Learn more about how you can take climate action today.

The post How to take climate action with your code appeared first on The GitHub Blog.

April 18, 2025  19:30:46

GitHub supports dark mode and light mode, and as developers, we can make our README images look great in both themes. Here’s a quick guide to using the <picture> element in your GitHub Markdown files to dynamically switch images based on the user’s color scheme.

When developers switch to GitHub’s dark mode (or vice versa), standard images can look out of place, with bright backgrounds or clashing colors.

Instead of forcing a one-size-fits-all image, you can tailor your visuals to blend seamlessly with the theme. It’s a small change, but it can make your project look much more polished.

One snippet, two themes!

Here’s the magic snippet you can copy into your README (or any Markdown file):

<picture>
  <source media="(prefers-color-scheme: dark)" srcset="dark-mode-image.png">
  <source media="(prefers-color-scheme: light)" srcset="light-mode-image.png">
  <img alt="Fallback image description" src="default-image.png">
</picture>

Now, we say it’s magic, but let’s take a peek behind the curtain to show how it works:

  • The <picture> tag lets you define multiple image sources for different scenarios.
  • The <source media="..."> attribute matches the user’s color scheme.
    • When media="(prefers-color-scheme: dark)", the browser loads the srcset image when GitHub is in dark mode.
    • Similarly, when media="(prefers-color-scheme: light)", the browser loads the srcset image when GitHub is in light mode.
  • If the browser doesn’t support the <picture> element, or the user’s system doesn’t match any defined media queries, the fallback <img> tag will be used.

You can use this approach in your repo README files, documentation hosted on GitHub, and any other Markdown files rendered on GitHub.com!

Demo

What’s better than a demo to help you get started? Here’s what this looks like in practice:

The post How to make your images in Markdown on GitHub adjust for dark mode and light mode appeared first on The GitHub Blog.

April 18, 2025  16:48:20

GitHub Universe returns to San Francisco on October 28 and 29—bringing together the builders, dreamers, and changemakers shaping the future of software. From first-time speakers with big ideas to DevRel pros with demos to share and business leaders rethinking workflows with AI, we believe that a diverse range of voices belong on our stage.

But writing a compelling conference session submission can feel like decoding a complex algorithm. What makes your idea stand out? How do you grab the content committee’s attention? And what if you’ve never done this before?

Good news: we’ve cracked the code, and we’re sharing it with you.

Here are four proven tips to help you put together a proposal that’s clear, compelling, and uniquely you.

Apply to speak or nominate a speaker to take the stage at GitHub Universe by Friday, May 2 at 11:59 pm PT to be considered.

1. Find something you’re truly passionate about 💡

A Venn diagram titled 'Signature talk formula' showing the intersection of three circles labeled 'What you know', 'What you are passionate about', and 'What the audience cares about', illustrating how effective talks combine knowledge, passion, and audience relevance.

Here’s the truth: passion is magnetic. If you’re excited about your topic, it shows. It pulses through your proposal, powers your delivery onstage, and pulls in your audience—content committee included.

Instead of chasing the latest trends, talk about something that lights you up. Maybe it’s a story from building an open source project in your off-hours. Maybe it’s how your team shipped something new using GitHub Copilot. Or maybe it’s the unexpected way you quickly scaled developer experience across a global org. Your unique perspective is your superpower.

Content committees can sense authenticity. They’re not just looking for polished buzzwords. They’re looking for people who care deeply and can teach others something meaningful.

🎤 Pro tip: If it’s a topic you’d talk about over lunch with a teammate or geek out about on a podcast, it’s probably a great fit.

2. Write a title they can’t ignore ✍️

Think of your session title like an email subject line—it’s your chance to make a strong first impression, and it needs to do the heavy lifting for you. A strong title shouldn’t just sound good. It should clearly communicate what your talk is about and why it matters.

Let’s take our title as an example:

  • ✅ Engaging: “Cracking the Code” suggests there’s an inside strategy, and it sparks curiosity.
  • ✅ Clear: “How to wow the acceptance committee at your next tech event” leaves no doubt about the topic.
  • ✅ Action-oriented: It promises practical takeaways, not just theory.
  • ✅ Balanced: It walks the line between fun and professional.
Avoid vague titles (“A new approach to software”) or clickbait (“This one trick will fix your codebase”). Instead, aim for clarity with flair. Give the content committee a reason to want to learn more along with the confidence that your talk can deliver.

🎤 Pro tip: After you write your title, ask yourself—would I attend this session? Would I understand what I’m getting from it in five seconds?

3. Make it easy for the content committee to say yes ✅

The content committee is rooting for you, but you’ve got to help them out. The best submissions remove all ambiguity and make a strong case for why this session matters.

Here’s how:

  • Be specific about your audience: Who is this for? Senior engineers? OSS maintainers? Platform teams? Product leads?
  • Spell out the takeaways: What will people learn? Tools, frameworks, fresh mindsets?
  • Tie it to the event: Why does this belong at GitHub Universe? How does it support the event’s themes?

Also, show that your content has a life beyond the stage:

  • Can your session be turned into a blog, case study, or video?
  • Is your abstract compelling enough to be featured in a marketing email or keynote recap?
  • Will attendees be able to apply what they learned the next day?

🎤 Hot tip: Think beyond the talk itself. That’s pure gold for event organizers.

4. Seal the deal with your online presence 🌐

Yes, your session submission is the star, but reviewers on the content committee can also look you up. Your online presence helps us understand:

  • Your credibility and expertise
  • Your speaking experience (or potential!)
  • How easy it will be to promote you as a speaker

You don’t need a massive following. But you do want a strong, relevant footprint. Here are a few tips to consider:


On LinkedIn:

Example LinkedIn screenshots: Cassidy Williams’ profile, with a headline listing her roles (developer advocate, educator, advisor, software engineer, and memer); conference and guest-lecture posts from tech speaker Joseph Katsioloudes; and Jeffrey Berthiaume’s profile and a post about an app he built with his kids.
  • Use a headline that highlights your expertise, not just your title.
  • Make your “About” section shine with links to talks, blogs, and projects.
  • Add speaking experience under “Experience” or “Featured.”


On GitHub:

Example GitHub screenshots: Kedasha Kerr’s profile (@LadyKerr), with a README intro, bio, and pinned mealmetrics-copilot repo; a contribution graph showing 2,593 contributions in the last year; and pinned projects such as octolamp, DasDeployer, and PumpkinPi.
  • Update your profile README with your focus areas and links.
  • Pin key repos or projects you’ve contributed to.
  • Be active in discussions, even if most of your code is private.

🎤 Hot tip: Post about your submission journey! Sharing your process helps you engage with the community and might even inspire someone else to apply.

Ready to take the stage?

You’ve got the ideas. Now you’ve got the blueprint. If you’ve made it this far, we hope you feel ready—and excited—to throw your hat in the ring. Let’s recap:

  1. Lead with passion to find a topic you care deeply about.
  2. Craft a clear, compelling title that grabs attention and gives the content committee an immediate idea of your session topic and takeaways.
  3. Make your submission a no-brainer by showing how it aligns with the event and adds value.
  4. Polish your online presence—it might just tip the scale in your favor.

Whether you’re a seasoned speaker or stepping into the spotlight for the first time, we can’t wait to hear from you. And if you don’t have a session idea this year, you can also nominate a speaker who deserves to take the stage. Submit a session proposal or a speaker nomination from now until Friday, May 2 at 11:59 pm PT to be considered!

Let’s build the future together—one session at a time. 💫

The post Cracking the code: How to wow the acceptance committee at your next tech event appeared first on The GitHub Blog.

April 18, 2025  12:47:07

This was originally published on our developer newsletter, GitHub Insider, which offers tips and tricks for devs at every level. If you’re not subscribed, go do that now—you won’t regret it (we promise).

If you’ve ever wondered which AI model is the best fit for your GitHub Copilot project, you’re not alone. Since each model has its own strengths, picking the right one can feel somewhat mysterious.

With models that prioritize speed, depth, or a balance of both, it helps to know what each one brings to the table. Let’s break it down together. 👇

The TL;DR

  • 💳 Balance between cost and performance: Go with GPT-4.1, GPT-4o, or Claude 3.5 Sonnet.
  • 🪙 Fast, lightweight tasks: o4-mini or Claude 3.5 Sonnet are your buddies.
  • 💎 Deep reasoning or complex debugging: Think Claude 3.7 Sonnet, o3, or GPT-4.5.
  • 🖼️ Multimodal inputs (like images): Check out Gemini 2.0 Flash or GPT-4o.

Your mileage may vary, and it’s always good to try things yourself before taking someone else’s word for it. But this is how these models were designed to be used. All that being said…

Let’s talk models.

🏎️ Putting coding speed first

o4-mini and o3-mini: The speed demons 😈

Fast, efficient, and cost-effective, o4-mini and o3-mini are ideal for simple coding questions and quick iterations. If you’re looking for a no-frills model, use these.

✅ Use them for:

  • Quick prototyping.
  • Explaining code snippets.
  • Learning new programming concepts.
  • Generating boilerplate code.

👀 You may prefer another model: If your task spans multiple files or calls for deep reasoning, a higher‑capacity model such as GPT‑4.5 or o3 can keep more context in mind. Looking for extra expressive flair? Try GPT‑4o.


⚖️ AI models designed for balance

Claude 3.5 Sonnet: The budget-friendly helper 😊

Need solid performance but watching your costs? Claude 3.5 Sonnet is like a dependable sidekick. It’s great for everyday coding tasks without burning through your monthly usage.

✅ Use it for:

  • Writing documentation.
  • Answering language-specific questions.
  • Generating code snippets.

👀 You may prefer another model: For elaborate multi‑step reasoning or big‑picture planning, consider stepping up to Claude 3.7 Sonnet or GPT‑4.5.

GPT-4o and GPT-4.1: The all-rounders 🌎

These are your go-to models for general tasks. Need fast responses? Check. Want to work with text *and* images? Double check. GPT-4o and GPT-4.1 are like the Swiss Army knives of AI models: flexible, dependable, and cost-efficient.

✅ Use them for:

  • Explaining code blocks.
  • Writing comments or docs.
  • Generating small, reusable snippets.
  • Multilingual prompts.

👀 You may prefer another model: Complex architectural reasoning or multi‑step debugging may land more naturally with GPT‑4.5 or Claude 3.7 Sonnet.


🧠 Models for deep thinking and big projects

Claude 3.7 Sonnet: The architect 🏠

This one’s the power tool for large, complex projects. From multi-file refactoring to feature development across front end and back end, Claude 3.7 Sonnet shines when context and depth matter most.

✅ Use it for:

  • Refactoring large codebases.
  • Planning complex architectures.
  • Designing algorithms.
  • Combining high-level summaries with deep analysis.

👀 You may prefer another model: For quick iterations or straightforward tasks, Claude 3.5 Sonnet or GPT‑4o may deliver results with less overhead.

Gemini 2.5 Pro: The researcher 🔎

Gemini 2.5 Pro is the powerhouse for advanced reasoning and coding. It’s built for complex tasks (think: deep debugging, algorithm design, and even scientific research). With its long-context capabilities, it can handle extensive datasets or documents with ease.

✅ Use it for:

  • Writing full functions, classes, or multi-file logic.
  • Debugging complex systems.
  • Analyzing scientific data and generating insights.
  • Processing long documents, datasets, or codebases.

👀 You may prefer another model: For cost-sensitive tasks, o4-mini or Gemini 2.0 Flash are more budget-friendly options.

GPT-4.5: The thinker 💭

Got a tricky problem? Whether you’re debugging multi-step issues or crafting full-on systems architectures, GPT-4.5 thrives on nuance and complexity.

✅ Use it for:

  • Writing detailed README files.
  • Generating full functions or multi-file solutions.
  • Debugging complex errors.
  • Making architectural decisions.

👀 You may prefer another model: When you just need a quick iteration on something small—or you’re watching tokens—GPT‑4o can finish faster and cheaper.

o3 and o1: The deep divers 🥽

These models are perfect for tasks that need precision and logic. Whether you’re optimizing performance-critical code or refactoring a messy codebase, o3 and o1 excel in breaking down problems step by step.

✅ Use them for:

  • Code optimization.
  • Debugging complex systems.
  • Writing structured, reusable code.
  • Summarizing logs or benchmarks.

👀 You may prefer another model: During early prototyping or lightweight tasks, a nimble model such as o4‑mini or GPT‑4o may feel snappier.


🖼️ Multimodal, or designed to handle it all

Gemini 2.0 Flash: The visual thinker 🤔

Got visual inputs like UI mockups or diagrams? Gemini 2.0 Flash lets you bring images into the mix, making it a great choice for front-end prototyping or layout debugging.

✅ Use it for:

  • Analyzing diagrams or screenshots.
  • Debugging UI layouts.
  • Generating code snippets.
  • Getting design feedback.

👀 You may prefer another model: If the job demands step‑by‑step algorithmic reasoning, GPT‑4.5 or Claude 3.7 Sonnet will keep more moving parts in scope.


So… which model do I choose?

Here’s the rule of thumb: Match the model to the task. Practice really does make perfect, and as you work with different models, it’ll become clearer which ones work best for different tasks. The more I’ve personally used certain models, the more I’ve learned, “oh, I should switch for this particular task,” and “this one will get me there.”

And because I enjoy staying employed, I would love to cheekily mention that you can (and should!) use these models with…

Good luck, go forth, and happy coding!

Learn more about AI models.

The post Which AI model should I use with GitHub Copilot? appeared first on The GitHub Blog.

April 16, 2025  21:02:57

In March, we experienced one incident that resulted in degraded performance across GitHub services.

March 29 7:00 UTC (lasting 58 hours)

Between March 29 7:00 UTC and March 31 17:00 UTC, GitHub experienced service degradation due to two separate but related incidents. On March 29, users were unable to unsubscribe from GitHub marketing email subscriptions due to a service outage. Additionally, on March 31, 2025, from 7:00 UTC to 16:40 UTC, users were unable to submit ebook and event registration forms on resources.github.com, also due to a service outage.

The March 29 incident occurred due to expired credentials used by an internal service, preventing customers from unsubscribing directly from marketing/sales topics through the github.com/settings/emails UI and from performing the double opt-in step required in some countries. A similar credential expiry on March 31 resulted in degraded access to resources.github.com.

Both incidents were traced to a bug in the automated alerting that monitors upcoming credential expirations: the expiring credentials were not discovered until after they had already expired, which led to two incidents before we could deploy a durable fix. We mitigated both by renewing the credentials and redeploying the affected services.

To improve future response times and prevent similar issues, we have enhanced our credential expiry detection, alerting, and rotation processes, and are working on improving on-call observability.
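As an illustration of the class of guardrail involved (this is not GitHub's internal tooling), a scheduled workflow can flag a TLS credential before it expires rather than after; the hostname and 30-day window below are assumptions for the example:

```yaml
# Illustrative only — not GitHub's internal monitoring.
name: credential-expiry-check
on:
  schedule:
    - cron: "0 6 * * *"   # check once a day at 06:00 UTC

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Fail if the certificate expires within 30 days
        run: |
          # `openssl x509 -checkend` exits non-zero when the certificate
          # will expire within the given number of seconds.
          echo | openssl s_client -connect resources.github.com:443 \
            -servername resources.github.com 2>/dev/null \
            | openssl x509 -noout -checkend $((30 * 24 * 3600))
```

A failing run surfaces the problem while there is still time to rotate the credential.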


Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog.

The post GitHub Availability Report: March 2025 appeared first on The GitHub Blog.

April 15, 2025  16:00:22

Whether it’s building, testing, or deploying code, automating manual processes is key to improving developer experience and achieving a successful DevOps strategy.

On GitHub, you can use GitHub Actions to not only implement your CI/CD pipeline, but also automate other processes both on and off GitHub. When you are adopting GitHub Actions on GitHub Enterprise Cloud, you can choose between GitHub-hosted runners and self-hosted runners to run your workloads, and each has its pros and cons.
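For instance, here's a lightweight automation that has nothing to do with builds: a workflow that welcomes newly opened issues. This is a minimal sketch, and the trigger and comment text are just illustrative:

```yaml
# A sketch of a non-CI/CD automation: greet newly opened issues.
name: greet-new-issues
on:
  issues:
    types: [opened]

jobs:
  greet:
    runs-on: ubuntu-latest
    permissions:
      issues: write   # allow the job's token to comment on issues
    steps:
      - run: gh issue comment "$NUMBER" --repo "$GITHUB_REPOSITORY" --body "Thanks for opening an issue! A maintainer will take a look soon."
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NUMBER: ${{ github.event.issue.number }}
```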

In this post, we’ll compare GitHub-hosted runners with self-hosted runners across five areas to help you determine which type best fits your GitHub Actions adoption strategy.

What are GitHub-hosted runners and self-hosted runners?

GitHub-hosted runners and self-hosted runners are based on the same open-source software and both support macOS, Windows, and Linux. But they have many differences.

GitHub-hosted runners are fully managed on GitHub’s infrastructure using pre-configured Windows, Linux, and macOS virtual machines. In addition to offering standard runners for typical workloads, hosted runners offer larger runners with more resources (memory, CPU, and storage), custom images, static IP ranges, and Azure Virtual Network integration for enhanced security control.

Self-hosted runners operate on your own infrastructure, whether on-premises or in the cloud. You manage all aspects—configuration, security, and scaling. They also allow you to operate runners in places you couldn’t otherwise—for example, on GitHub Enterprise Server or on custom hardware. They can also be the only way to implement certain compliance requirements, especially when working with highly secured systems.
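Either way, the choice surfaces in your workflow file through the `runs-on` key. Here's a minimal sketch contrasting the two (the build commands are placeholders; `self-hosted` and `linux` are the default labels applied when you register a Linux runner):

```yaml
name: ci
on: [push]

jobs:
  build-hosted:
    runs-on: ubuntu-latest          # GitHub-hosted: managed, ephemeral VM
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh             # placeholder build step

  build-self-hosted:
    runs-on: [self-hosted, linux]   # routed to a runner you register and operate
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh             # placeholder build step
```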

Both options offer distinct advantages depending on your specific needs and resources. Let’s explore when GitHub-hosted runners may be the right choice for your projects, and when it may be better to use self-hosted runners.

Fully managed or self-managed?

A key distinction between these two options is where they’re hosted, as we’ve pointed out. But that choice comes with several implications.

GitHub-hosted runners provide managed infrastructure with pools of on-demand virtual machines (VMs) that are automatically secured and updated. The environments are ephemeral, with the disks reimaged after each job, preventing files from previous jobs from affecting subsequent runs. The VMs are optimized for GitHub Actions, with pre-installed software and tools, including the latest versions of GitHub CLI, Docker, and common development platforms to ensure fast start times and avoid rate limits.

With GitHub-hosted runners, you can jump right in and start building workflows. There’s nothing to configure or secure before you start, making them ideal when you want to get started quickly. And we all prefer to spend more time on code than infrastructure, right?

Self-hosted runners offer you complete flexibility in defining your solution, but also means you are responsible for managing the infrastructure, images, caches, and security, and monitoring availability and usage against GitHub’s rate limits. This requires expertise in GitHub Actions architecture, VM and container image building, and network and infrastructure management. If your core business offering is scalable infrastructure solutions or Kubernetes, self-hosted runners may make sense.

Let’s take a closer look.

Scalability

To remain productive, it’s important to have highly available resources on demand, especially for CI/CD workloads, where waiting for a job to run can block you from working on other tasks. In fact, a single wasted hour each week can cost a company over $4,000 a year per developer!

But scaling highly available, on-demand resources is hard. Even with a well-designed cloud infrastructure, it takes time to provision new virtual machines. You need systems in multiple regions to maintain uptime, with 20-25% spare capacity to scale quickly and handle unexpected system failures.

GitHub-hosted runners take advantage of Microsoft’s deep data center and cloud expertise and have dedicated teams to meet our service level agreement (SLA) of 99.9% availability. And that’s without any expertise on your part. In fact, many teams consider self-hosted runners in hopes of beating this availability, but it turns out that’s not even technically possible, as all runners depend on the same services and control plane. That said, there are conditions where self-hosted runners may work for you.

Self-hosted runners may meet your needs if you need a fixed number of servers, are primarily focused on deployment to non-cloud resources, and don’t need to scale on demand. Just remember that the instances are not natively ephemeral, so you’ll need a strategy to keep them free from artifacts left by earlier runs. Self-hosted runners also lack automatic scaling capabilities; they require a separate scaling solution to support large teams or create new instances dynamically.

GitHub’s Actions Runner Controller (ARC) offers a solution, but it has limitations as it requires Kubernetes expertise and only supports Linux runners. Kubernetes relies on containers instead of VMs, which can require you to troubleshoot resource contention and scaling issues. ARC can also offer high availability by having multiple clusters. As we noted before, if your primary business is hosting and managing Kubernetes clusters, then ARC may be the right approach.
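For reference, a runner scale set deployment with ARC typically boils down to a Helm install plus a small values file. Here's a sketch, assuming the `gha-runner-scale-set` chart; the organization URL, secret name, and runner counts are placeholders:

```yaml
# values.yaml for an ARC runner scale set (illustrative values only)
githubConfigUrl: "https://github.com/your-org"   # placeholder organization URL
githubConfigSecret: "arc-github-app-secret"      # pre-created Kubernetes secret
minRunners: 1                                    # keep one runner warm
maxRunners: 10                                   # cap autoscaling
```

Workflows then target the scale set by its installation name (for example, `runs-on: arc-runner-set`), and ARC scales Linux runner pods up and down to match demand.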

ARC does not support macOS or Windows workloads, and both environments present a number of limitations. For example, on macOS, you are required to use Apple hardware, you are limited to two VMs per machine, and containerizing the Apple runtime is not supported. For Windows, virtual machines are supported, but you need a custom orchestrator for scaling the instances. While you can create Windows containers and manage them with Kubernetes, the containers have slow startup times and may not support some of the necessary development and testing tools.

In short, we recommend GitHub-hosted runners for both macOS and Windows workloads.

Security

Security is critical for CI/CD processes, since they may require access to internal or production resources, and builds often use third-party libraries, runtimes, and tools, which can create a large attack surface if not properly secured.

GitHub-hosted runners provide built-in security through a defense-in-depth, zero-trust approach. VMs provide network isolation, preventing exposure to other runners and corporate resources. In fact, access to corporate or cloud resources requires elevating privileges (we recommend OIDC). Their ephemeral nature eliminates code persistence and prevents application execution after job completion, reducing unauthorized access risks.
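As a concrete example of that privilege elevation, a job can mint a short-lived OIDC token and exchange it for cloud credentials instead of storing long-lived secrets. A minimal sketch for AWS (the role ARN and region are placeholders):

```yaml
name: deploy
on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # let the job request an OIDC token
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-role   # placeholder
          aws-region: us-east-1
      - run: aws sts get-caller-identity   # confirm the assumed role
```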

Storage disks for hosted runners are encrypted at rest, ensuring the code is protected on the disk. All communications are encrypted to GitHub, and deployments to Microsoft Azure are routed through the Azure backbone, minimizing transits through the public internet. We provide regular security updates to both operating systems and runner software. The minimized attack surface and reduced risk of security breaches are key factors in the Department of Defense DevSecOps Reference Design’s recommendation to prefer GitHub-hosted runners for workloads up to Impact Level 5.

Self-hosted runners shift security responsibility entirely to you, requiring management of network, infrastructure, images, containers, and caches—that’s a lot of work. You also need to keep everything up to date, as runners connected to GitHub Enterprise Cloud will not be able to connect if they are more than 30 days behind the current release.

Not to mention, if you operate runners within your network environment with access to corporate resources and production environments, you’ll want to implement a zero-trust, defense-in-depth strategy with time-limited resource access, which demands a high level of network security expertise.

Finally, you’ll need to implement and keep updated both a tool cache and an Actions archive cache. Otherwise, you’re likely to encounter our rate limits as you scale up.

Troubleshooting

Keeping you productive means that problems with workflows or jobs—lack of resources, network issues, outages—need to be solved quickly. As a result, it’s important to have a support strategy.

GitHub-hosted runners come with 24/7 support across all time zones, with premium plans offering dedicated reliability engineers and rapid 30-minute response times for critical issues. This eliminates the need for infrastructure troubleshooting on your part. GitHub handles all runner environment issues, from performance problems to queue times, letting you focus on development while we roll up our sleeves, figure out the problems, and get them fixed.

Self-hosted runners, however, shift first-level support responsibility to you, which means someone will have to troubleshoot performance, network, or queueing issues when they happen, leaving less time for the fun coding stuff. 🙁

Not only that, but GitHub can only assist with the Actions service itself; we cannot assist with your infrastructure, Kubernetes clusters, or custom orchestration solutions. So if we determine the issue is with your system, you’ll be on your own to solve it. Without sufficient planning, you can spend a lot of time waiting for a solution that lets you get back to writing and deploying code. That can be a big price to pay for self-hosted runners.

Cost management

Finally, there’s the issue of cost. If you are offering Kubernetes or infrastructure management solutions, self-hosted runners may have some advantages. If not, then GitHub-hosted runners are likely the answer here too.

GitHub-hosted runners operate on a pay-as-you-go model with no upfront costs or commitments. Teams optimize expenses through workflow improvements and appropriate runner selection. In addition, there are built-in cost savings. For example, GitHub doesn’t charge network egress fees—a significant advantage when working with large container images on cloud platforms. GitHub also has a partnership with Docker that allows unlimited image pulls from Docker Hub by GitHub-hosted runners, which often eliminates the need to create a pass-through registry or purchase business licenses for your CI/CD processes. Maintaining, supporting, and securing the environment is handled by GitHub, avoiding additional staff and service expenses. Finally, Enterprise accounts benefit from 50,000 free monthly minutes for standard runners.

With self-hosted runners, as in other areas, organizations assume responsibility for all infrastructure, network, storage, security, and support costs. This gives you a lot of flexibility in defining the environment, right-sizing your resources, and customizing the networking. While per-minute virtual machine expenses might initially seem lower, the total cost of ownership can (and often does) exceed GitHub-hosted solutions once these additional support costs are accounted for.

Which runner is best for you?

Choosing the right runner depends on your specific needs. Self-hosted runners are most suitable when using GitHub Enterprise Server (which lacks hosted runners), if your core business involves managing infrastructure or Kubernetes, or when you have compliance requirements not met by GitHub Enterprise Cloud with data residency. Scaling and ephemerality challenges make self-hosting less ideal for Windows and macOS workloads. If self-hosting is necessary, consider a hybrid approach and use self-hosted runners just for the specific workloads where they are needed.
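A hybrid setup can even live in a single workflow: keep CI on hosted runners and reserve the self-hosted runner for the step that must reach internal systems. A sketch (the `on-prem` label and deploy script are placeholders you’d define yourself):

```yaml
name: ci-and-deploy
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest            # hosted: fast to start, zero maintenance
    steps:
      - uses: actions/checkout@v4
      - run: ./run-tests.sh           # placeholder test step

  deploy:
    needs: test
    runs-on: [self-hosted, on-prem]   # self-hosted: can reach internal systems
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh              # placeholder deployment step
```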

For most developers and the vast majority of scenarios, GitHub-hosted runners are likely your best option, unless you have highly specialized requirements or are willing to invest deeply in infrastructure to keep your CI/CD system humming. They’re especially beneficial for those new to GitHub Actions, and they let you spend your time focused on business value, new ideas, and writing code—instead of managing runners.

The post When to choose GitHub-Hosted runners or self-hosted runners with GitHub Actions appeared first on The GitHub Blog.