
The GitHub Blog

Stay inspired with updates, ideas, and insights from GitHub to aid developers in software design and development.

October 3, 2024  17:52:20
research and copywriting by Emily Akers

What began as a challenge to decipher texts buried for centuries under the ashes of Mount Vesuvius quickly turned into a historical breakthrough using AI and GitHub tools. Three students—Youssef Nader, Luke Farritor, and Julian Schilliger—worked together across time zones and borders to unlock the secrets of 2,000-year-old scrolls, ultimately winning the Vesuvius Challenge and earning $700,000 in prizes. Their work showcases the powerful intersection of AI, open-source collaboration, and the drive to solve mysteries that have puzzled scientists for centuries.

About the challenge

In March of 2023, a group of leading technologists created the Vesuvius Challenge, a competition to decipher the Herculaneum Papyri, which were buried in the eruption of Mount Vesuvius 2,000 years ago. Due to carbonization, these scrolls cannot be opened without falling to pieces. So how do we access these ancient texts?

In an interview with NPR’s Scott Simon, Brent Seales, a computer science professor at the University of Kentucky, spoke about the virtual unwrapping of scrolls and the evolution of technology in this realm. This technology has been around for decades, but a breakthrough in 2015 led to the reading of a scroll in the Dead Sea Scrolls collection. The scientists used tomography and X-rays and found success with the Dead Sea Scrolls, but were unsuccessful when it came to the Herculaneum texts. “Not only were those scrolls difficult to apply virtual unwrapping to, but the ink from the ancient world did not readily show up in the scans that we made, and we needed an AI-based approach to be able to see that ink,” Seales said. Once the scientists captured these scans of the Herculaneum papyri, technologists from around the world set out to analyze them.

See Brent Seales speak about the process of using virtual unwrapping to recover some of the text on the Herculaneum scrolls and how this led to the creation of the Vesuvius Challenge.

Meet the students

With hundreds of engineers around the world working on this, it’s no surprise that the winning group met and worked together entirely online. Youssef Nader, Luke Farritor, and Julian Schilliger teamed up through the challenge’s Discord server and chatted about the other projects they had worked on under the Vesuvius Challenge. Luke won the First Letters Prize and reached out to Youssef after Youssef was named runner-up. While we weren’t able to sit down with Luke, he described the moment he realized he had found the first letter in this interview with his school, the University of Nebraska-Lincoln.

Still screenshot from a video of Luke.
Click to hear Luke explain his experience and the moment he realized that he had discovered the first letters in the Vesuvius Challenge.

At this point in the competition, Luke was working on segmentation and Youssef had created AI detection models. Shortly thereafter, the two teamed up with Julian, who had made a breakthrough in automating segmentation while working extensively with GitHub Copilot.

“There are so many benefits to using GitHub as a student, from a free GitHub Pro account to the use of GitHub Copilot in Visual Studio Code,” Julian said. “For the Vesuvius Challenge, I had to write pipeline code most of the time. I’d write one piece of code that I needed to achieve the next goal, and if I knew what I wanted to write, I could use the autocompletion tool to help me write faster. It was a huge time-saver!”

We also sat down with Youssef who shared insight into his experience.

“We were working in different time zones, so sometimes we had to intentionally overlap with each other, which led to some very late nights and early mornings,” he said. “All of it was worth it because not only did we win the challenge, but we were able to create special memories together. I remember a night or so before the submission, Julian sent me an exciting message that around 1,600 cm² of the scrolls were being segmented by his software. We spent the better part of the night hunting for the title of the scroll.”

Youssef recalled the morning that everything started coming together. “I had almost given up hope. I had tried so many different things. In the morning, I was running one last experiment and, to my surprise, it worked. There were some parts that made me feel connected to this 2,000-year-old scroll in a way. It required tracing the writing of this ancient writer on the scroll and finding smart ways of figuring out what letter it would be based on ink deposits.”

These three students realized the dream of all papyrologists from 1754 AD onward. Papyrologist Gianluca Del Mastro recalled meeting Luke in Kentucky for the First Letters Prize. “I saw this young student in front of me. It amazed me, as I was expecting someone older. It made me realize we have entered a new world of information technology in which it’s possible to make new discoveries even if you are very young.”

The team invites everyone to take a look at their winning code on GitHub. Having used GitHub for many years and made use of the tools in the Student Developer Pack, Youssef and Julian felt it was the perfect place to share their team’s findings. “This challenge from the very beginning was to foster collaboration even in the face of competition. Housing our code on GitHub was the only thing that made sense so the community can continue to build and to have easy access to collaborate and push progress forward,” said Youssef.

After the challenge

Dr. Del Mastro had the chance to meet two of the winning teammates after his team sponsored their flight out to Naples, Italy to see the scrolls in person. It was the first time that Youssef and Julian were able to meet in person. “It was surreal to meet in person after spending so much time collaborating over the internet,” Youssef shared. While in Naples, the two went to a conference where they were able to meet some of the professors who were behind the evaluation of their work. Youssef happily reports that he, Julian, and Luke are still in touch and hope they can all work on a project together in the future.

The winning team and others stand in front of a doorway in Naples, Italy, where they traveled to see the scrolls in person.
Left to Right: Aya Elzoheiry, Youssef Nader, Julian Schilliger, Marzia D’Angelo, Claudio Vergara, Fabrizio Diozzi, Alessia Lavorante.
Front: Rossella Villa

The experience was life-changing in so many ways. Not only did the three winners help uncover part of the past, but one of them found his future. Julian shared that through the challenge, he’s met so many wonderful teachers and mentors who opened his eyes to all the work to be done at the intersection of code and history. Since completing the challenge, he has accepted a full-time role at the Vesuvius Project, where he spends his time decoding the scrolls and learning new information about the ancient past.

Julian said, “Youssef, Luke, and I won this grand prize, but this is only a small piece in the ongoing efforts to decode the scrolls. Lots of people have worked on this before 2023 and there is plenty left to be done in 2024.” If you’re interested in getting involved, check out their Discord.

Are you a student or teacher? Get started with the Student Developer Pack.

The post How students teamed up to decode 2,000-year-old texts using AI appeared first on The GitHub Blog.

October 4, 2024  22:33:22

Get ready for two days of open source magic at GitHub Universe 2024, where innovation meets inspiration at San Francisco’s iconic Fort Mason. On October 28-29, we’re transforming a dedicated area, the Open Source Zone, into a hotbed of groundbreaking ideas, live demos, and community connections featuring rising stars from our GitHub Accelerator program, champions from our Maintainer Community, friends from our GitHub Fund, and passionate creators from around the globe.

Let’s take a closer look at some of the stars of the Open Source Zone 🔎

A-Frame: Create VR magic with just a few lines of code

A-Frame is your gateway to effortlessly creating stunning VR experiences directly in your browser with just a few lines of code. Whether you’re a web developer, designer, or VR enthusiast—your next mind-blowing project is just a <script> tag away!
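To give you a sense of just how little markup a scene needs, here is a minimal example in the spirit of the “hello world” scene in A-Frame’s own documentation (the pinned version number is illustrative; check the A-Frame site for the latest release):

```html
<html>
  <head>
    <!-- One script tag pulls in the whole framework (version is illustrative). -->
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Declarative primitives: position is "x y z" in meters. -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Open that file in a browser and you have a walkable VR scene, no build step required.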

Find out how A-Frame makes it easy for developers of all levels to create immersive VR experiences with creator and maintainer, @dmarcos.

Did you know: A-Frame was recently selected for our GitHub Accelerator program earlier this year.

Home Assistant: Take control of your home—your way

Home Assistant is an open source platform for home automation that gives you complete control over your smart devices, with a strong emphasis on privacy and local management. Manage everything from lighting to climate control seamlessly—all from a single hub, without relying on the cloud. Set up and configure your own dashboards, cherry-picking from hundreds of community-built themes, cards, and other integrations.

Screenshot of Home Assistant dashboard.
What @adamdiel‘s dashboard looks like. Allegedly ಠ_ಠ

Wondering how to integrate your IoT devices or create custom automations? Bring your questions to @balloob, @missyquarry, and @frenck to find out!
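If you’re curious what a custom automation looks like before you stop by, here is a minimal sketch in Home Assistant’s YAML configuration style (the alias and entity ID are made up; use entities from your own setup):

```yaml
# Turn on a light at sunset -- a minimal automation sketch.
# The entity_id below is hypothetical.
automation:
  - alias: "Lights on at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.living_room
```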

Did you know: Home Assistant started as a weekend project, just celebrated its 11th birthday, and has evolved into a platform used by millions? 🤯

Homebrew: The essential package manager for macOS and Linux

Homebrew is an open source package manager that fills in the gaps on macOS (or Linux), making it easy to install software and tools that aren’t included by default. With simple commands, you can manage everything from essential packages to macOS apps, all neatly organized and easily customizable through straightforward scripts.

Join leaders and maintainers @mikemcquaid and @p-linnane at the Homebrew booth and brew up the perfect development setup with some expert advice!

Kubernetes: Your go-to platform for container orchestration

Kubernetes is an open source platform that automates the deployment, scaling, and management of containerized applications, simplifying the complexity of managing clusters across multiple environments.
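To make the “declarative management” idea concrete, here is a minimal Deployment manifest sketch (the names and container image are illustrative): you describe the desired state, and Kubernetes works to keep the cluster matching it.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # illustrative name
spec:
  replicas: 3                # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.27  # illustrative image
          ports:
            - containerPort: 80
```

If a pod crashes or a node goes away, the controller notices the drift from three replicas and schedules a replacement automatically.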

Curious about managing multi-cluster environments or optimizing your Kubernetes deployments? @mrbobbytables (no relation!) will be there to answer your questions and more. And don’t forget to ask: “Is it Kuber-‘netes’ or Kuber-‘neat-ees’?” Let’s settle this once and for all!

Did you know: Kubernetes is celebrating its 10-year anniversary this year—just like GitHub Universe! Since that very first commit, Kubernetes has grown from a groundbreaking idea to the gold standard for container orchestration.

Ladybird: A browser for the bold

Ladybird started as a humble HTML viewer for SerenityOS and is now evolving into an independent, cross-platform browser with its own web engine, focusing on performance, security, and privacy.

Screenshot of the Ladybird Browser showing the Ladybird website.

Meet @ADKaster, the maintainer of Ladybird, and learn what sets this browser apart from the rest. Ever wondered how Ladybird is built to be fast and secure? Drop by and find out!

Mermaid: Simplify diagrams with Markdown magic

Mermaid is a JavaScript-based tool that simplifies the creation and maintenance of diagrams and charts using a text-based syntax inspired by Markdown. It integrates seamlessly with various applications to help visually communicate complex ideas and workflows.

Did you know: You can use Mermaid directly in GitHub issues and pull requests to create and embed diagrams right in your Markdown files. Curious about how to get started? Open an issue and explore something like this curious felinoctopoda markup:

```mermaid
classDiagram
    class Animal {
        +string name
        +string color
    }

    class Octocat {
        +int numberOfArms
        +int numberOfLegs
        +specialSkill() string
    }

    class Cat {
        +int numberOfLegs
        +purr() void
    }

    class Octopus {
        +int numberOfArms
        +string inkColor
        +squirtInk() void
    }

    Animal <|-- Cat
    Animal <|-- Octopus
    Animal <|-- Octocat
    Octocat -- Cat : Uses Legs
    Octocat -- Octopus : Uses Arms
```

It should look like this:

Class diagram showing the relationships between cats, octopuses, octocats, and animals.

Node: Build fast, scalable applications with ease

Node.js is an open source, cross-platform runtime that lets you run JavaScript on the server side, making it ideal for building fast and scalable network applications. Its event-driven, non-blocking architecture efficiently handles large volumes of data, making it perfect for real-time apps and microservices.
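What does “non-blocking” look like in practice? Here is a minimal sketch in plain Node (no dependencies): the callback is queued on the event loop, so the rest of the script keeps running and finishes before the callback fires.

```javascript
// Minimal sketch of Node's event-driven, non-blocking model:
// the callback is queued on the event loop instead of blocking the script.
const order = [];

setTimeout(() => order.push('callback ran'), 0); // scheduled for later
order.push('script continued');                  // runs immediately

// The callback has not fired yet at this point:
console.log(order); // [ 'script continued' ]

setTimeout(() => {
  console.log(order); // [ 'script continued', 'callback ran' ]
}, 50);
```

The same principle is what lets one Node process juggle thousands of concurrent connections: slow I/O is scheduled, not waited on.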

Learn from experts like @ovflowd on how Node.js can power your next big project. Got questions about server-side JavaScript or optimizing performance? This is your chance to ask!

Did you know: Node.js is part of the OpenJS Foundation, supported by contributions from organizations (including us), which help fund its development and sustain its community.

Oh My Zsh: Because your terminal should be as stylish as your code

Oh My Zsh is a framework that enhances your Zsh shell with powerful features, plugins, and themes, making your terminal both functional and fabulous. With over 2,000 contributors and hundreds of themes and plugins, Oh My Zsh is a shining example of a wonderful product and community.

Screenshot of a terminal displaying a list of files.

Join @carlosala, @mcornella, and @robbyrussell to explore how Oh My Zsh can transform your workflow. Wondering how to make your terminal feel less like work and more like magic? Stop by and find out!

Swift: The language for building next-gen Apple apps

Swift is a programming language developed by Apple and released as open source in 2015. It’s designed for building fast, safe, and expressive apps across iOS, macOS, watchOS, visionOS, and beyond.

Get expert advice on Swift best practices, tools, and resources for building your next great app. Ready to ‘tailor’ your Swift app to perfection? Don’t miss this chance!

Unsloth AI: Faster, lighter custom AI models for everyone

Unsloth AI’s goal is to make custom AI models more accessible by providing an open source tool that supercharges your fine-tuning of large language models like Llama, Mistral, and Gemma, making them 2-5x faster while using up to 80% less memory. Unsloth AI is another project in our GitHub Accelerator program and was also recently backed by Y Combinator.


Meet the founders, Australian brothers @danielhanchen and @shimmyshimmer, and learn how Unsloth AI can speed up your workflow—because nobody likes to move at a sloth’s pace!

Sloth from Zootopia movie laughing. Sloowwwly.
Actual footage of readers reacting to the awful puns and dad jokes in this post!

What a lineup! The complete lineup and schedule are shown below.

Get your Universe ticket now

The post Leading the way: 10 projects in the Open Source Zone at GitHub Universe 2024 appeared first on The GitHub Blog.

October 2, 2024  17:41:42

As we kick off Cybersecurity Awareness Month, the GitHub Bug Bounty team is excited to spotlight one of the top performing security researchers who participates in the GitHub Security Bug Bounty Program, @imrerad!

As home to over 100 million developers and 420 million repositories, GitHub maintains a strong dedication to ensuring the security and reliability of the code that powers daily development activities. The GitHub Bug Bounty Program continues to play a pivotal role in advancing the security of the software ecosystem, empowering developers to create and build confidently on our platform and with our products. We firmly believe that the foundation of a successful bug bounty program is built on collaboration with skilled security researchers.

As we celebrate 10 years of the GitHub Security Bug Bounty program, we are proud of what the program has become. Not only is the program a fundamental component of GitHub’s security strategy, but we have also become more involved with the hacker community. We have been able to pay over $5.5 million in total rewards via HackerOne since 2016; travel and meet in person many of our program participants at various conferences; and we have presented a number of talks on how we as a company work on security issues. We continuously listen to feedback from the community and are striving to make our program more exciting for the researchers to hack on. We have some exciting ideas that we are working on, so stay tuned for even more announcements in the future!

To celebrate Cybersecurity Awareness Month, we’re interviewing one of the top contributing researchers in our bug bounty program and learning more about their methodology, techniques, and experiences hacking on GitHub. @imrerad specializes in command injections and logic implementation flaws and has found and reported some really interesting and complex issues.


How did you get involved with Bug Bounty? What has kept your interest?

I’ve been passionate about IT security since my late teenage years. I remember reporting vulnerabilities to companies even before bug bounty turned into a mainstream thing. I got my first reward in 2016 (from Android) and I was proud, as it was not a common thing at that time.

I’m not a full-time bug bounty hacker; I do this as a hobby in my free time, alongside a full-time job, and without sacrificing my personal life. As bug bounty programs turned into an industry standard, I realized that I’m a lucky guy with this hobby. It drives me to study more about the various technologies I encounter during research, and the recognition that comes with it is good for career development.

What keeps you coming back to it?

Its addictive nature—you always want one more finding.

What do you enjoy doing when you aren’t hacking?

I love music and try to attend shows of bands that are important to me. I also enjoy building various automations around the house that make life easier and more comfortable. For example, I’ve been working on an irrigation system recently. The next challenge is to store more water, somehow.

How do you keep up with and learn about vulnerability trends?

Bug bounty write-ups by others are an invaluable source of information: you can learn about tricks you haven’t seen, about features you haven’t been aware of, and with some luck, they could even give you an idea about yet-unconsidered additional attack vectors.

Reviewing the changelog of your target can also hint at what to focus on next. For example, in the release notes of GitHub Enterprise Server (GHES), you could see the trend of privilege escalation issues in the management console.

Besides this, the experience gained in my current and past roles as a full-time security engineer also contributes to my process at some level.

What are your favorite classes of bugs to research and why?

I like logic bugs the most, ones that are unique. Textbook vulns (for example, a reflected XSS) that can be found by off-the-shelf tools are not exciting to me. I love coding, so I also enjoy building tools to verify potential attack vectors or to find additional instances of a flaw I just discovered. With race condition issues, I relish exploring the options that improve my chances to win.

You’ve found some complex and significant bugs in your work—can you talk a bit about your process?

I don’t have a super special methodology; it is something like the following:

  1. Choose a target you like or are familiar with (I tend to be less motivated by products I don’t like, so I try to focus on others instead).
  2. Come up with a list of features that you suspect are problematic (for example, because the impact of a flaw could be devastating, or simply because the feature is hard to implement securely).
  3. Build a list of attack vectors for each.
  4. Prioritize the list.
  5. Go through the list and execute the attacks.
  6. Update and expand the list as you draw conclusions.
  7. Repeat.

Do you have any advice or recommended resources for researchers looking to get involved with Bug Bounty?

Make verbose notes. This will save you a lot of time when you eventually need to reproduce something several months later or just want to help out someone with the conclusions you made.

Don’t let prejudice fool you. Even super-talented engineers make mistakes sometimes, so don’t skip verifying attacks that you think are trivial.

Find the right balance. Sometimes, you need to invest quite some time to research a promising attack surface, and the right conclusions would require even more. It is hard to make the decision whether to stop or to pursue it even further.

Give back. Publishing write-ups about your findings and tools helps the researcher community and makes the internet safer.

Do you have any social media platforms you’d like to share with our readers?

Connect with me on LinkedIn, read my posts on Medium, or check out my tools on GitHub.


Thank you, @imrerad, for participating in GitHub’s bug bounty researcher spotlight! Each submission to our bug bounty program is a chance to make GitHub, our products, and our customers more secure, and we continue to welcome and appreciate collaboration with the security research community. So, if this inspired you to go hunting for bugs, feel free to report your findings through HackerOne.

Interested in helping us secure GitHub products and services? Check out our open roles!

The post Cybersecurity spotlight on bug bounty researcher @imrerad appeared first on The GitHub Blog.

September 30, 2024  20:14:08

When people imagine developers leveraging public code, they often think of developers using code that’s already been written and building upon it to create something new. It’s faster to build from something than to build from nothing, right? While this is transactionally true, there’s more to it than that. Central to the open source culture is not only leveraging public code but being able to explore the repository in which it exists, learn about code origins, see who else has worked on the code, increase knowledge sharing among the community, and understand possible licensing structures.

With the advent and scale of new AI technology solutions like GitHub Copilot, the way developers work is changing, and with those changes it’s important to incorporate the habits and practices developers value in new ways of working. In this case, identifying myriad sources where code may appear required the creation of a new solution—one that enables developers to prioritize these values while fostering learning and knowledge sharing at scale.

GitHub engineering teams got to work to address this, and today we’re announcing the general availability of code referencing in GitHub Copilot Chat and GitHub Copilot code completions. Developers can now choose whether to block suggestions containing matching code or allow those suggestions with information about the matches. This feature is currently available in VS Code and will be more widely available soon.

Code referencing in Copilot in VS Code

How code referencing works

With billions of files to index and a latency budget of only 10-20ms, it’s a miracle of engineering that finding specific matches is even possible. Still, when a match is found (if public code matching is allowed), a notification appears in the editor showing: (1) the matching code, (2) the file where that code appears, and (3) licensing info (if any) detected in the relevant repository. This information is shown for all the public code matches that are detected in a model response.

We’re also excited to announce that GitHub has partnered with Microsoft Azure to make the code referencing API available on Azure AI Content Safety. Azure AI Content Safety users can leverage this feature via the protected material detection for code filter. We believe in transparency as a core value of the open source community and want to ensure that this capability is available to everyone, no matter which tool you use.

Whether you’re using GitHub Copilot or other generative AI tools leveraging the code referencing API, you can depend on transparency so that you can make informed development decisions for the project at hand.

Why code referencing matters

The power of code referencing for individual developers

For individual developers using GitHub Copilot, this adds a layer of transparency and keeps you in the driver’s seat. We’ve always had a filter that users can apply to prevent Copilot from producing suggestions matching public code. Now, with code referencing, you have the additional option of allowing all suggestions: if Copilot produces a suggestion of 150 characters or more that matches public code, you’ll be notified about the matches found, the repositories the code was found in, and any licenses detected. This information helps you make more informed decisions so that you can build, with Copilot, with confidence.

The power of code referencing for businesses

Copilot helps organizations innovate faster than ever. To help businesses innovate responsibly, the option to block suggestions matching public code has always been available to admins, and using that filter ensures customers are protected by GitHub’s indemnification commitment.

For dev teams wanting to benefit from the learning code referencing enables, GitHub’s indemnity will now extend to their use of code referencing where GitHub Copilot Business or GitHub Copilot Enterprise customers comply with cited licenses. This means that Copilot Business and Copilot Enterprise customers can expand their teams’ Copilot context, use, and effectiveness while keeping existing contractual protections.

We’ve collaborated to make code referencing a reality, not just for GitHub, but for all AI dev tools, and have been driven by the values that the open source community has long cultivated and upheld—that surfacing and sharing information can unlock innovation in new and groundbreaking ways. As we continue to grow and scale our capabilities with AI, GitHub is excited to empower developers with greater transparency, knowledge sharing, and tools for innovation.

Learn more about code referencing.

The post Code referencing now generally available in GitHub Copilot and with Microsoft Azure AI appeared first on The GitHub Blog.

October 1, 2024  14:55:46

We’ve updated our Transparency Center with data from the first half of 2024 and invite you to explore the data and visualizations and download them for your own research.

In the most recent period, you may notice a significant jump in projects affected by DMCA takedowns, with 1,041 notices processed and 18,472 projects taken down in H1 2024 versus 964 notices and 6,358 projects taken down in H2 2023. This jump can largely be attributed to a single takedown.

Moderating GitHub presents challenges specific to the code collaboration environment, but policymakers, researchers, and other stakeholders are often less familiar with how a platform like GitHub works. That’s one of the reasons our policy team regularly advocates on behalf of the interests of developers, code collaboration, and open source development. Open source software is a public good, underpinning all sectors of the economy and serving as essential digital infrastructure, and moderating the home for open source software requires careful consideration to ensure that essential code remains accessible. Meanwhile, our Trust and Safety team has continually evolved our developer-first approach to content moderation in response to technological and societal developments.

In the interest of broadening understanding of code collaboration, advancing the transparency of our own governance practices, and enriching platform studies research, we are proud to share the recently published article, “Nuances and Challenges of Moderating a Code Collaboration Platform” in the Journal of Online Trust and Safety and co-authored by members of our Trust and Safety, Legal, and Policy teams. The paper is available to all, and we encourage you to read it in its entirety. It covers how moderating a code collaboration platform presents unique considerations illustrated with diverse case studies. We also consider how the new frontiers of AI will present challenges and opportunities for maintaining our developer-first standards at scale.

Clickable screenshot of a paper titled Nuances and Challenges of Moderating a Code Collaboration Platform.
Click to read the PDF.

The post The nuances and challenges of moderating a code collaboration platform appeared first on The GitHub Blog.

September 26, 2024  18:44:46

Today, GitHub Copilot Individual and Business plans include preview access to Copilot functionality, including GitHub Copilot Chat, on github.com. The integration with GitHub allows Copilot to leverage the rich context from repositories, pull requests, issues, actions, and more, providing you with more valuable interactions, more tailored coding assistance, and an AI-native developer experience with GitHub.

Doing more with what you know

With this latest release, GitHub Copilot is now ubiquitous across the IDE, Visual Studio Code, browser, and mobile for all Copilot users and is there to assist you across the software development lifecycle. Whether you’re on the browser or mobile, you can now use Copilot not only as an AI pair programmer that makes code suggestions, but also as a coding assistant powered by entire codebases, conversations between collaborators, and workflows.

Now, you can ask Copilot for help with your repositories, pull requests, issues, actions, and more.

Dig deeper with OpenAI o1

For questions that require more time and intensive analysis by GitHub Copilot to construct a response, switch into immersive mode, or go directly to github.com/copilot. You can even try using an OpenAI o1 model to power your conversation. We know that one model doesn’t fit each and every task, so while the base model for GitHub Copilot Chat, GPT-4o, may provide satisfactory explanations on pull request diffs and generate great boilerplate code, o1-preview or o1-mini may suit complex tasks like crafting advanced algorithms or helping to fix performance bugs much better. Join the waitlist for early access to OpenAI o1 for Copilot Chat in immersive mode.

Get started today

By integrating GitHub Copilot into GitHub, we’re taking another step of putting AI right where you need it—whether you’re coding in an editor or troubleshooting and collaborating with your team in github.com. And with a growing ecosystem of GitHub Copilot Extensions, you can even integrate your favorite third-party and critical internal developer tools with Copilot to keep you in your flow state. This update is designed to streamline your development process, empower you with your organization’s context, and allow you to focus on what you do best—creating great software.

As with all GitHub betas, these features are governed by our pre-release terms. We’re eager to see how you all leverage these new capabilities, and, as always, you can provide feedback in the GitHub Community.


Note: videos have been sped up for display purposes.

The post GitHub Copilot now available in github.com for Copilot Individual and Copilot Business plans appeared first on The GitHub Blog.

September 26, 2024  16:55:29

Cybersecurity Awareness Month is a global initiative that highlights the importance of protecting our digital work. At GitHub, security is the core of how we operate. We’re proud to participate and demonstrate our commitment to safeguarding our customers’ data. As such, GitHub’s Bug Bounty team is excited to celebrate Cybersecurity Awareness Month this year with some additional incentives for security researchers! These include:

  • Bonuses for new and existing researchers.
  • A bonus for providing Nuclei templates for reproductions and fix verifications.
  • Spotlight on a few of the talented security researchers who participate in the GitHub Security Bug Bounty Program.

Bonuses for new and existing researchers

For the month of October:

  • A new hacker to our program will receive an additional 20% bonus on their highest severity valid submission.
  • For returning hackers, we are offering an additional 10% bonus on their highest severity valid submission.

Note: these bonuses will only apply to one (1) submission per researcher.

Bonus for providing Nuclei templates

A valid report that also contains a functional Nuclei template that we can use to both reproduce the report and verify that it is fixed will receive an additional 5% bonus. To learn more about Nuclei, please visit this documentation.
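To give a feel for what such a template looks like, here is a minimal, hypothetical example written out from the shell. The id, path, and matcher below are invented placeholders for illustration only, not a real GitHub finding; see the Nuclei documentation for the full template syntax.

```shell
# Write a minimal, hypothetical Nuclei template to a file.
# The id, path, and matcher are invented placeholders that only
# illustrate the general shape reviewers might expect.
cat > example-template.yaml <<'EOF'
id: example-reflected-param

info:
  name: Example reflected parameter check (hypothetical)
  author: your-handle
  severity: medium

http:
  - method: GET
    path:
      - "{{BaseURL}}/search?q=nuclei-probe"
    matchers:
      - type: word
        part: body
        words:
          - "nuclei-probe"
EOF

# With nuclei installed, it would run against a target you are
# authorized to test, e.g.:
#   nuclei -t example-template.yaml -u https://target.example
```

A template like this lets the triage team reproduce the report with one command, and re-run the same check later to verify the fix.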

Researcher spotlights

Every year, we like to spotlight researchers who are participating in our program and learn more about them. In these interviews, we learn about their hunting methodology, interests, and more.

To read more about our previous spotlights, please check out:

  1. Cybersecurity spotlight on bug bounty researchers @chen-robert and @ginkoid
  2. Cybersecurity spotlight on bug bounty researcher @yvvdwf
  3. Cybersecurity spotlight on bug bounty researcher @ahacker1
  4. Cybersecurity spotlight on bug bounty researcher @inspector-ambitious
  5. Cybersecurity spotlight on bug bounty researcher @Ammar Askar

Stay tuned for more researcher spotlights this coming month!


Each submission to our bug bounty program is a chance to make GitHub, our products, the developer community, and our customers more secure, and we’re thrilled with the ongoing collaboration to make GitHub better for everyone with the help of your skills. If you are interested in participating, visit our website for details of the program’s scope, rules, and rewards.

The post Kicking off Cybersecurity Awareness Month: Researcher spotlights and additional incentives! appeared first on The GitHub Blog.

September 26, 2024  15:54:15

Working with the command line is something many developers love. Even though we love it, there are times when it can be really frustrating. What’s the command for switching to a branch? How do I fix merge conflicts? Do I have the correct credentials or permissions for my file or directory?

In our recent blogs, we showed you some of the top Git commands and useful commands for the GitHub CLI. However, there are hundreds, if not thousands, of terminal-based commands, and knowing them all would be difficult. We could search for the correct command in a browser, but at the cost of breaking our flow, and we might still not find exactly what we need.

In the previous blog, we showed you how to use --help to receive some helpful suggestions about which commands to use, but this is usually a basic list. Instead, wouldn’t it be great if we could have a conversation with our terminal and ask which commands to use? This is where GitHub Copilot in the CLI comes into play.

GitHub Copilot in the CLI

Many developers are loving GitHub Copilot Chat, and the time-saving benefits and productivity gains that come with it. So, we thought, “Why not bring GitHub Copilot to the command line?” With GitHub Copilot in the CLI, we can ask questions to help us with our terminal tasks, whether they involve Git, the GitHub CLI, or generic terminal commands.

If this sounds like something you want to try, then read on. We’ve also left some challenges for you to try yourself.

Getting started

To get started, you’ll need to make sure you have the GitHub CLI installed on your Windows, Mac, or Linux machine, and an active subscription to GitHub Copilot. If your Copilot subscription is part of a Business or Enterprise license, you’ll need to ensure your organization or enterprise’s policy allows you to use “Copilot in the CLI:”

Screenshot showing Copilot policies with four policies listed: Copilot in the CLI, Copilot Chat in the IDE, Copilot Chat in GitHub Copilot, Suggestions matching public code (duplication detection filter). The first three are set to enabled and the final one is set to allowed.

You can find these Copilot settings by clicking your profile icon in the top right-hand corner on github.com → Settings → Copilot.

Ensure you’re authenticated in your terminal with GitHub by using gh auth login. You can follow the guide in our CLI Docs to ensure you authenticate correctly.

Copilot in the CLI is a GitHub CLI extension, so you install it by typing gh extension install github/gh-copilot into your terminal of choice. Since I’m using Windows, all the examples you see here will use Windows PowerShell:

Now that you have Copilot in your CLI, you can use gh copilot to help you find information you are looking for. Let’s look at some of the most common things you can do with Copilot.

Have GitHub Copilot explain computer science concepts

GitHub Copilot is your friend when it comes to using the terminal, regardless of how familiar you are. Copilot can help explain something by using gh copilot explain, followed by a natural language statement of what you want to know more about. As an example, you might like to know how to roll back a commit:
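As an illustration of what such an explanation typically resolves to, here is a runnable sketch in a throwaway repository; git revert creates a new commit that undoes an earlier one without rewriting history. The file names and commit messages are invented for the demo.

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email demo@example.com && git config user.name demo

echo "good" > app.txt && git add app.txt && git commit -qm "good change"
echo "bad"  > app.txt && git add app.txt && git commit -qm "bad change"

# Roll back the last commit by creating a new commit that reverses it
git revert --no-edit HEAD

git log --oneline    # newest first: Revert "bad change", bad change, good change
cat app.txt          # prints: good
```

Because revert adds a new commit instead of deleting one, it is safe to use even on branches that have already been pushed and shared.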

You can receive help from Copilot when you don’t understand exactly what a particular command does. For example, a teammate recently passed me the script npx sirv-cli . to run in my terminal as part of a project we were working on. If I want to better understand what this command does, I can ask Copilot:

TRY IT: Ask Copilot to explain the difference between Git and GitHub.

If you get stuck, you can type gh copilot --help to see a list of commands and examples for how to use Copilot in the CLI:

GitHub Copilot can suggest commands

Explanations are great for understanding and building knowledge. When we want to execute a command, however, the explain command might not be enough. Instead, we can use the suggest command to have GitHub Copilot suggest an appropriate command to execute. When it comes to suggesting commands based on your questions, Copilot will follow up with another question, such as “What kind of command can I help you with?” with three options for you to choose from:

? What kind of command can I help you with?  [use arrows to move, type to filter]
> generic shell command
  gh command
  git command

The user can choose between:

  • generic shell command (terminal command)
  • gh command (GitHub CLI command)
  • git command

Copilot can then provide a suggestion based on the type of command you want to use. Let’s dive into each of the three command types.

Generic shell commands

There are hundreds of terminal-specific commands we can execute. When asking GitHub Copilot for a suggested answer, we’ll need to select generic shell command from the drop down. As an example, let’s ask Copilot how to kill a process that’s listening on a specific port:

Along the way, we are answering the questions Copilot is providing to us to help refine the prompt. At the end of each suggestion, we are able to have Copilot explain further, execute the command, copy the command, revise the command, or exit the current question.

TRY IT: Ask Copilot how to list only *.jpg files from all subfolders of a directory.

Git commands

In our recent blog, we went through some of the main Git commands every developer should know. Instead of having to search for a specific command, you can ask GitHub Copilot for help directly from the command line.

For Git commands, select git command from the Copilot drop down. If we wanted to know which branch we were on before making a commit, we could ask Copilot to suggest the best way to achieve this:

In this example, you can see I first ask to have the answer explained, and then execute the command to see that we are currently working on the main branch.

What if we accidentally checked our new changes onto the main branch, but we actually want them on a new branch? We can ask Copilot how would we go about fixing this:

Remember to also check the responses Copilot gives you. In the above example, Copilot gave me a very long answer to what was admittedly a rather long question, and it wasn’t exactly what I needed. So, I selected the option to revise the question. The result still wasn’t quite right, so let’s revise it once more and ask to add the changes to a new branch instead:

Now, I have three steps to execute in order to create a new branch, reset the previous branch, and then switch to the new branch. From there, I can make a new commit.
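Those three steps are easy to verify in a throwaway repository; the branch and file names below are invented for the demo. The new branch keeps the stray commit while main is wound back:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "initial"

# Oops: a commit lands on main that belongs on its own branch
echo "feature" > feature.txt && git add feature.txt && git commit -qm "feature work"

git branch feature-work        # 1. new branch pointing at the stray commit
git reset --hard -q HEAD~1     # 2. wind main back one commit
git switch -q feature-work     # 3. continue on the new branch

git branch --show-current      # prints: feature-work
```

Note that reset --hard rewrites main, so this recipe is only safe if the stray commit hasn’t been pushed yet.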

Once we’ve made a commit, we can ask how to update the previous commit message:

Now, we can change the previous commit message.
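The usual answer here is git commit --amend; a quick sketch in a throwaway repository (the messages are invented). Since amending replaces the commit object, avoid it on commits you’ve already pushed and shared.

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email demo@example.com && git config user.name demo

echo "v1" > notes.txt && git add notes.txt && git commit -qm "fix tyop"

# Replace the previous commit's message (creates a new commit object)
git commit -q --amend -m "fix typo"

git log -1 --format=%s         # prints: fix typo
```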

TRY IT: Ask Copilot how to merge a pull request as an admin.

GitHub commands

In our last blog, we showed you useful GitHub commands for using the GitHub CLI. Now, let’s ask GitHub Copilot when we get stuck. When we want to ask Copilot about GitHub-specific commands, choose gh command from the drop down menu in the terminal.

Let’s look at diffs—the difference between two files or two commits on GitHub. We can ask Copilot how to view these differences. Here, I’m also asking Copilot in the terminal via VS Code, and it’s also providing me with suggestions for the question:

Here, I didn’t specify in the prompt whether it was a GitHub command. By choosing gh command, Copilot knows I am looking for a GitHub-specific command, and therefore shows me the command for viewing the diff of the pull request number we selected.

Now, let’s see if there are any pull requests or issues from this repository that are assigned to me:

Copilot tells me there are none assigned to me from this repository. Winning!

Let’s put a few Git and GitHub commands together and ask how to open a pull request from a branch using a gh command. First, let’s ask Copilot to commit all my changes to a new branch, and then ensure I’m on the correct branch. After switching to the correct branch, we can ask Copilot how to open a pull request from the branch we are on:

Remember (again) to check the responses we are given. In this example, Copilot gave us the command gh pr create --base <base_branch> --head <head_branch> --title "<pull_request_title>" --body "<pull_request_body>", which spells out all the flags. If we just use gh pr create, then we are guided through the pull request process. We can follow the prompts within the command line and ask Copilot along the way for help if we get stuck. I created a draft pull request so I can work on it with my team further before converting it to an open pull request.

By answering the questions Copilot gives us, such as “What kind of command can I help you with?”, and selecting the correct option, we can have Copilot successfully execute a command. In this case, we have committed our code to a new branch, navigated to the correct branch, and opened the pull request as a draft.

TRY IT: Ask Copilot how to create a new release and edit the contents.

Working with aliases

All this typing of gh copilot suggest has got me thinking, “there’s got to be a faster way to use GitHub Copilot,” and there is. We can use the prebuilt ghcs alias to have “Copilot suggest” a command for us (and its sibling ghce for “Copilot explain”). We’ll need to configure GitHub Copilot in the CLI before we can use these aliases. There are also flags like --target (or -t for short), which allow us to specify a target for the suggestion: shell, gh, or git. In this way, we can make our conversation with Copilot much faster. To learn more about the commands and flags available, you can use --help with any Copilot command or either of the ghce and ghcs aliases.

Each system configures these aliases differently. Check out the Copilot Docs and video for how to configure aliases for you.
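On bash or zsh, that setup can be as small as one line appended to your shell profile, assuming the gh-copilot extension is already installed (see the Copilot docs for fish and PowerShell equivalents):

```shell
# Generate the ghce/ghcs alias definitions and load them in every new shell
echo 'eval "$(gh copilot alias -- bash)"' >> "$HOME/.bashrc"

# Then, for example:
#   ghcs -t git "undo the last commit"   # alias for: gh copilot suggest -t git ...
```

After opening a new shell (or sourcing your profile), ghcs and ghce are available everywhere.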

TRY IT: Configure Copilot in the CLI with aliases for even fewer keystrokes.

Tips for using GitHub Copilot in the CLI

When it comes to using GitHub Copilot in the CLI, the question you ask, also called the prompt, is really important for receiving an answer that is correct for your situation. Unlike GitHub Copilot in your editor, Copilot in the CLI doesn’t have as much context to draw from. You’ll need to ensure the prompt you write is succinct and captures the question you want to ask. If you want some tips on writing good questions, check out our guide on prompt engineering. You can always revise your question to get the answer you are looking for.

This has been a brief introduction on using Copilot from the command line. Now, you’re ready to give our “try it” examples a go. When you try these out, share your results in this discussion so we can see the answers Copilot gives you and discuss them together.

The post Boost your CLI skills with GitHub Copilot appeared first on The GitHub Blog.

September 26, 2024  16:15:01

Today, we announced that GitHub Enterprise Cloud will offer data residency, starting with the European Union (EU) on October 29, 2024, to address a critical desire from customers and enable an optimal, unified experience on GitHub for our customers.

Data residency and what it means for developers

We’ve heard for years from enterprises that being able to control where their data resides is critical for them. With data residency, organizations can now store their GitHub code and repository data in their preferred geographical region. With this need met, even more developers across the globe can build on the world’s AI-powered developer platform.

Enterprise Cloud with data residency provides enhanced user control and unique namespaces on ghe.com, isolated from the open source cloud on github.com. It’s built on the security, business continuity, and disaster recovery capabilities of Microsoft Azure.

Image of a Contoso repository displaying a unique namespace which is magnified.

This is a huge milestone for our customers and for GitHub: a multi-year effort that required extensive time, focus, and dedication across the company. We’re excited to share a behind-the-scenes look at how we leveraged GitHub to develop the next evolution of Enterprise Cloud.

Designing the architecture for the next evolution of GitHub Enterprise

This effort started in the summer of 2022 with a proof of concept (PoC) and involved teams across GitHub. We carefully considered which architecture would enable us to be successful. After iterating with different approaches, we decided to build the new offering as a feature set that extends Enterprise Cloud. This approach would allow us to be consistently in sync with features on github.com and provide the performance, reliability, and security that our customers expect. For hosting, we effectively leveraged Microsoft Azure’s scale, security, and regional footprint to produce a reliable and secure product with data residency built in, without having to build new data centers ourselves.

As the home for all developers, developer experience is critically important for us. We recognized early on that consistency was important, so we sought to minimize differences in developing for Enterprise Cloud and Enterprise Cloud with data residency. To this end, the architecture across both is very similar, reducing complexity, risk, and development costs. The deployment model is familiar to our developers: it builds on GitHub Actions. Also, changes to github.com and Enterprise Cloud with data residency are deployed minutes apart as part of a unified pipeline.

To accomplish this, we had to organize the work, modify our build and deployment systems, and validate the quality of the platform. We were able to do all three of these by using GitHub.

Organizing with GitHub Issues and Projects

To organize the project, we used GitHub Issues and Projects, taking advantage of multiple views to effectively drive work across multiple projects, more than 100 teams, and over 2,000 issues. Different stakeholders and teams could take advantage of these views to focus on the information most relevant to them. Our talented technical project management team helped coordinate updates and used the filtering and slicing capabilities of Projects to present continuously updated information for each milestone in an easily consumable way.

We also used upcoming features like issues hierarchy to help us understand relationships between issues, and issue types to help clearly classify issues across repositories. As part of using these features internally, we were able to give feedback to the teams working on them and refine the final product. Keep an eye out for future announcements for issues hierarchy and issue types coming soon!

Image of hierarchies directly inside a GitHub project.

Image of issue types.

All of these powerful features helped us keep the initiative on track. We were able to clearly understand potential risk areas and partner across multiple teams to resolve blockers and complex dependencies, keeping the project effectively moving forward across multiple years.

Building Enterprise Cloud with data residency using GitHub

GitHub has always been built using GitHub. We wanted to continue this practice to set ourselves up for success with the new data residency feature. To this end, we continued leveraging GitHub Codespaces for development and GitHub Actions for continuous integration (CI). In addition, we added deployment targets for new regions. This produced a development, testing, and CI model that required no changes for our developers and a deployment process that was tightly integrated into the existing flow.

We have previously discussed our deploy then merge model, where we deploy branches before merging into the main branch. We expanded this approach to include successful deployments to Enterprise Cloud data residency targets before changes could be merged and considered complete, continuing to use the existing GitHub merge queue. A visualization of our monolithic deployment pipeline is shown in the figure below.

Image showing a visualization of the deployment pipeline.

We start by deploying to environments used by GitHub employees in parallel. This includes the internal environment for Enterprise Cloud with data residency, discussed more in the next section. As we use GitHub every day to build GitHub, this step helps us catch issues as employees use the product, before they impact our customers. After automated and manual testing, we proceed to roll out to “Canary,” the stage where we configure our load balancers to direct an increasing percentage of github.com traffic to the updated version of the code in stages, with additional testing in between each stage. Once we successfully deploy the updated version of github.com to all users, we then deploy and validate Enterprise Cloud with data residency in the EU before finishing the process and merging the pull request.

Ensuring all deployments are successful before we merge means changes are deployed in sync across all Enterprise Cloud environments and monitored effectively. Note that in addition to deployments, we also use feature flags to gradually roll out changes to groups of customers to reduce risk. If a deployment to any target fails, we roll back the change completely. Once we have understood the failure and are ready to deploy again, the entire process starts from the beginning with the merge queue.

Finally, to maintain consistency across all teams and services, we created automation to generate deployment pipelines for over 100 services so, as new targets are introduced, each service automatically deploys to the new environment in a consistent order.

Using Enterprise Cloud with data residency ourselves

To create the best possible product, we also prioritized using it ourselves and stood up an isolated environment for this purpose. Using our GitHub migration tooling, we moved the day-to-day development for the team working on GitHub Enterprise Importer to that environment, and invested in updating our build, deploy, and development environments to support working from the data resident environment. Since its creation, we have deployed to this environment over 8,000 times. This gave us invaluable feedback about the experience of working in the product with issues, pull requests, and actions that we were able to address early in the development process. We were also able to iterate on our status page tooling and internal Service Level Objective (SLO) process with the new environment in mind. The team is continuing to work in this environment today and runs over 1,000 Actions jobs a month. This is a testament to the stability and quality we’ve been able to deliver and our commitment to this feature.

What’s next

We are proud that we’ve been able to evolve Enterprise Cloud to offer data residency while using GitHub to organize, build, deploy, and test it. We’re excited to unlock GitHub for even more developers and for you to experience what we have built, starting on October 29, 2024 in the EU, with more regions on the way.

If you’re excited about Enterprise Cloud with data residency, please join us at GitHub Universe 2024 to learn more and hear from other companies how they’ve used this to accelerate software development and innovation.

The post GitHub Enterprise Cloud with data residency: How we built the next evolution of GitHub Enterprise using GitHub appeared first on The GitHub Blog.

September 19, 2024  20:16:50

Starting today, we’re opening a preview to give developers an opportunity to test OpenAI o1-preview and o1-mini, hosted on Azure, in both GitHub Copilot and Models. Sign up to get access to use OpenAI o1 in GitHub Copilot Chat with Visual Studio Code and in the playground with GitHub Models.

OpenAI o1 is a new series of AI models equipped with advanced reasoning capabilities, trained to think through complex tasks using an internal thought process. During our exploration of using o1-preview with GitHub Copilot, we found that the model’s reasoning capability allows for a deeper understanding of code constraints and edge cases, producing a more efficient and higher-quality result. And o1-preview’s deliberate and purposeful responses made it easy to pinpoint problems and quickly implement solutions.

Now, you can test it out and start building on GitHub with o1-preview and o1-mini. During the preview, you can choose to use o1-preview or o1-mini to power Copilot Chat in VS Code in place of the current default model, GPT-4o. Toggle between models during a conversation, moving from quickly explaining APIs or generating boilerplate code to designing complex algorithms or analyzing logic bugs. Using o1-preview or o1-mini with Copilot gives you a first-hand look at the new models’ ability to tackle complex coding challenges.

You can also test either of the o1 models in the playground in GitHub Models to discover their unique capabilities and performance. And once you’re familiar with how the models work, take the next step and start to integrate the models into your own apps.

Test OpenAI o1 in a playground in the GitHub Marketplace.

With this preview, we’re excited to bring OpenAI’s latest advancements to you, whether you’re developing software along with Copilot or building the next great LLM-based product. We can’t wait to see what you build!

The post Try out OpenAI o1 in GitHub Copilot and Models appeared first on The GitHub Blog.