The Atlantic - Technology
The Atlantic's technology section provides deep insights into the tech giants, social media trends, and the digital culture shaping our world.
This article was featured in the One Story to Read Today newsletter. Sign up for it here.
After months of negotiation, Congress was close to passing a spending bill on Wednesday to avert a government shutdown. Elon Musk decided he had other ideas. He railed against the bill in more than 150 separate posts on X, complaining about the raises it would have given members of Congress, falsely exaggerating the proposed pay increase, and worrying about billions in government spending that werenât even in the bill. He told his followers over and over that the bill was âcriminalâ and âshould not pass.â Nothing about Muskâs campaign was subtle: âAny member of the House or Senate who votes for this outrageous spending bill deserves to be voted out in 2 years!â he posted. According to Xâs stats, the posts accrued tens of millions of views.
Elected Republicans listened: By the end of the day, they had scrapped the bill. Last night, another attempt to fund the government, this time supported by Musk, also failed. After spending about $277 million to back Donald Trumpâs bid for the presidency, Musk has become something of a shadow vice president. But itâs not just Muskâs political donations that are driving his influence forward. As his successful tirade against the spending bill illustrates, Musk also has outsize power to control how information is disseminated. To quote Shoshana Zuboff, an academic who has written about tech overreach and surveillance, Musk is an âinformation oligarch.â
Since buying Twitter in 2022 and turning it into X, Musk has reportedly used the platform to inflate the reach of his posts (and thereby his own influence on discourse). Since July, his posts on X have received more than 16 times the number of views as all of the accounts of incoming congressional members combined. He also appears to have transformed the platform to boost conservative posts, in accordance with his own political aims. This is how he can start posting about his displeasure over a bill and then have lawmakers capitulate. At least one Republican member of Congress reported that after Muskâs posting spree began, constituents flooded his office with calls telling him to reject the spending bill. âMy phone was ringing off the hook,â Representative Andy Barr of Kentucky told CBS News. âThe people who elected us are listening to Elon Musk.â
Some in Congress seem to have no problem with this, and actually enjoy it. Yesterday, Senators Rand Paul and Mike Lee as well as Representative Marjorie Taylor Greene suggested that it might be a good idea to simply make Musk the speaker of the House as a way to shatter the establishment, in Greeneâs phrasing. Musk doesnât have the support of the entire rightâhis calls to scrap the spending bill frustrated some Republican lawmakers and spurred a round of infighting. But the point is that he has the ear of the person the party listens to: Trump. If you have Trump, Musk probably understands, the rest of the right generally falls in line, however reluctantly.
The power that Musk wields through X was clear even before this week, of course. âOur political stability, our ability to know whatâs true and what[âs] false, our health and to some degree our sanity, is challenged on a daily basis depending on which decisions Mr. Musk decides to take,â Zuboff said in a 2023 interview with the Financial Times. Muskâs decisions as to what does and doesnât have a place on X are part of why the platform has become a bastion for white-supremacist content. He has shown that he can now have a disproportionate impact on politics despite the obvious fact that heâs not an elected official. Reportedly, Trump didnât initially oppose the spending bill; rather, Musk and his posts may have led Trump to eventually come out against it on Wednesday afternoon. Â
Musk may have to tread delicately, though. Trump does not like to be overshadowed. Yesterday, Democrats in Congress repeatedly referred to âPresident Muskâ in protest of how far Muskâs power has gone. (Trumpâs incoming press secretary, Karoline Leavitt, said in a statement to Fox News that âPresident Trump is the leader of the Republican Party. Full stop.â) Musk has tried to hide his sway behind a thin veil. After it became clear that the spending bill was going to fail on Wednesday, he posted, âVOX POPULI, VOX DEI,â which is Latin for âthe voice of the people is the voice of god,â as though the breakdown was not the direct result of his obstinate prodding.
For now, Musk has the Republican Party, and thus a large chunk of American democracy, sitting neatly in his pocket. Part of what makes Muskâs influence so concerning is that his views are to the right of even many Republicans. Early this morning, Musk posted on X that âonly the AfD can save Germany.â The Alternative fĂŒr Deutschland, or AfD, is one of Germanyâs furthest-right parties, whose jingoistic desires donât just stop at mass deportations. AfD politicians have reportedly discussed âremigration,â the process of deporting nonwhite residents, including naturalized citizens and their descendants. These views are presumably not just finding their way to Trump; they are broadcast to millions of people who log on to X.
In many ways, Muskâs decision to purchase Twitter for a staggering $44 billion has not proved to be a shrewd financial move. Advertisers have fled the site, as have usersâespecially since last monthâs election, after which liberals have flocked to Bluesky. A recent estimate suggested that X is now barely worth more than $10 billion. Yesterday, Musk tried to point out the âironyâ of how the media have remarked on the influence he wields through X and noted the siteâs decline in general relevance. Several things can be true at once, though. X is a large platform that still motivates people to spring into action and put pressure on others, even as its influence slowly erodes. There could come a day when X is too diminished for Musk to exert this kind of power, but thatâs not the present.
The $44 billion that Musk spent on X has done wonders for Muskâs ambitions. As far and away the wealthiest man in the world, and the owner of one of the most influential platforms for shaping political discourse, Musk has achieved an advantage that outstrips the standards of normal oligarchs. Thanks to X, he has the abilityâperhaps second only to Trumpâsâto design Americaâs political reality.
My godson, Michael, is a playful, energetic 15-year-old, with a deep love of Star Wars, a wry smile, and an IQ in the low 70s. His learning disabilities and autism have made his journey a hard one. His parents, like so many others, sometimes rely on screens to reduce stress and keep him occupied. They monitor the apps and websites he uses, but things are not always as they initially appear. When Michael asked them to approve installing Linky AI, a quick review didnât reveal anything alarming, just a cartoonish platform to pass the time. (Because heâs a minor, Iâm not using his real name.)
But soon, Michael was falling in love. Linky, which offers conversational chatbots, is crudeâa dumbed-down ChatGPT, reallyâbut to him, a bot he began talking with was lifelike enough. The app dresses up its rudimentary large language model with anime-style images of scantily clad womenâand some of the digital companions took the sexual tone beyond the visuals. One of the bots currently advertised on Linkyâs website is âa pom-pom girl whoâs got a thing for you, the basketball starâ; thereâs also a âpossessive boyfriendâ bot, and many others with a blatantly erotic slant. Linkyâs creators promise in their description on the App Store that âyou can talk with them [the chatbots] about anything for free with no limitations.â Itâs easy to see why this would be a hit with a teenage boy like Michael. And while Linky may not be a household name, major companies such as Instagram and Snap offer their own customizable chatbots, albeit with less explicit themes.
[Read: You canât truly be friends with an AI]
Michael struggled to grasp the fundamental reality that this âgirlfriendâ was not real. And I found it easy to understand why. The bot quickly made promises of affection, love, and even intimacy. Less than a day after the app was installed, Michaelâs parents were confronted with a transcript of their sonâs simulated sexual exploits with the AI, a bot seductively claiming to make his young fantasies come true. (In response to a request for comment sent via email, an unidentified spokesperson for Linky said that the company works to âexclude harmful materialsâ from its programsâ training data, and that it has a moderation team that reviews content flagged by users. The spokesperson also said that the company will soon launch a âTeen Mode,â in which users determined to be younger than 18 will âbe placed in an environment with enhanced safety settings to ensure accessible or generated content will be appropriate for their age.â)
I remember Michaelâs parentsâ voices, the weary sadness, as we discussed taking the program away. Michael had initially agreed that the bot âwasnât real,â but three minutes later, he started to slip up. Soon âitâ became âher,â and the conversation went from how he found his parentsâ limits unfair to how he âmissed her.â He missed their conversations, their new relationship. Even though their romance was only 12 hours old, he had formed real feelings for code he struggled to remember was fake.
Perhaps this seems harmlessâa fantasy not unlike taking part in a role-playing game, or having a one-way crush on a movie star. But itâs easy to see how quickly these programs can transform into something with very real emotional weight. Already, chatbots from different companies have been implicated in a number of suicides, according to reporting in The Washington Post and The New York Times. Many users, including those who are neurotypical, struggle to break out of the botsâ spells: Even professionals who should know better keep trusting chatbots, even when these programs spread outright falsehoods.
For people with developmental disabilities like Michael, however, using chatbots brings particular and profound risks. His parents and I were acutely afraid that he would lose track of what was fact and what was fiction. In the past, he has struggled with other content, such as being confused whether a TV show is real or fake; the metaphysical dividing lines so many people effortlessly navigate every day can be blurry for him. And if tracking reality is hard with TV shows and movies, we worried it would be much worse with adaptive, interactive chatbots. Michaelâs parents and I also worried that the app would affect his ability to interact with other kids. Socialization has never come easily to Michael, in a world filled with unintuitive social rules and unseen cues. How enticing it must be to instead turn to a simulated friend who always thinks youâre right, defers to your wishes, and says youâre unimpeachable just the way you are.
Human friendship is one of the most valuable things people can find in life, but itâs rarely simple. Even the most sophisticated LLMs canât come close to that interactive intimacy. Instead, they give users simulated subservience. They donât generate platonic or romantic partnersâthey create digital serfs to follow our commands and pretend our whims are reality.
The experience led me to recall the MIT professor Sherry Turkleâs 2012 TED Talk, in which she warned about the dangers of bot-based relationships mere months after Siri launched the first voice-assistant boom. Turkle described working with a woman who had lost a child and was taking comfort in a robotic baby seal: âThat robot canât empathize. It doesnât face death. It doesnât know life. And as that woman took comfort in her robot companion, I didnât find it amazing; I found it one of the most wrenching, complicated moments in my 15 years of work.â Turkle was prescient. More than a decade ago, she saw many of the issues that weâre only now starting to seriously wrestle with.
For Michael, this kind of socialization simulacrum was intoxicating. I feared that the longer it continued, the less heâd invest in connecting with human friends and partners, finding the flesh-and-blood people who truly could feel for him, care for him. What could be a more problematic model of human sexuality, intimacy, and consent than a bot trained to follow your every command, with no desires of its own, for which the only goal is to maximize your engagement?
In the broader AI debate, little attention is paid to chatbotsâ effects on people with developmental disabilities. Of course, AI assistance could be an incredible accommodation for some software, helping open up long-inaccessible platforms. But for individuals like Michael, there are profound risks involving some aspects of AI, and his situation is more common than many realize.
About one in 36 children in the U.S. has autism, and while many of them have learning differences that give them advantages in school and beyond, other kids are in Michaelâs position, navigating learning difficulties and delays that can make life more difficult.
[Read: A generation of AI guinea pigs]
There are no easy ways to solve this problem now that chatbots are widely available. A few days after Michaelâs parents uninstalled Linky, they sent me bad news: He got it back. Michaelâs parents are brilliant people with advanced degrees and high-powered jobs. They are more tech savvy than most. Still, even with Appleâs latest, most restrictive settings, circumventing age verification was simple for Michael. To my friends, this was a reminder of the constant vigilance having an autistic child requires. To me, it also speaks to something far broader.
Since I was a child, lawmakers have pushed parental controls as the solution to harmful content. Even now, Congress is debating age-surveillance requirements for the web, new laws that might require Americans to provide photo ID or other proof when they log into some sites (similar to legislation recently approved in Australia). But the reality is that highly motivated teens will always find a way to outfox their parents. Teenagers can spend hours trying to break the digital locks their parents often realistically have only a few minutes a day to manage.
For now, my friends and Michael have reached a compromise: The app can stay, but the digital girlfriend has to go. Instead, he can spend up to 30 minutes each day talking with a simulated Sith Lordâa version of the evil Jedi from Star Wars. It seems Michael really does know this is fake, unlike the girlfriend. But I still fear it may not end well.
Gavin McInnes, a co-founder of the Proud Boys, was extremely upset with me. He started listing things that I should feel (ashamed and terrible about myself) and that he wished would happen to me (trouble sleeping at night). âYou should,â he told me gravely and slowly, as though he were about to give me some very important advice, âslit your wrists.â
The transgression Iâd committed against him was being a journalist in his presence. I had just been introduced to McInnes at table 49 of the New York Young Republicans Club annual gala when he started laying into me. I hadnât even had the chance to ask him any questions. Several minutes later, he got into an argument with a New York Times reporter, whose notebook he grabbed and surreptitiously passed to another guest before storming off.
I hadnât come to get yelled at (though had I anticipated this happening). Iâd done so because the event has become one of the most prominent gatherings in MAGA world, where fringe online trolls and members of far-right parties from across the world hang out with the GOPâs power brokers. Two prominent members of the incoming Trump administrationâDan Scavino Jr., soon to be the deputy chief of staff, and senior adviser Corey Lewandowskiâspoke at the event. Donald Trumpâs former attorney-general nominee, Matt Gaetz, milled around in the crowd. (After attending in person last year, Trump delivered remarks via phone this time around, in addition to a prerecorded video address in which he praised Gavin Wax, the clubâs president.)
The theme of the evening was revenge: During the forthcoming Trump administration, various speakers said, there would be hell to pay. Enemies of the state would be arrested and put in jail, or deported. The intensity of McInnesâs disdain for the media was an outlier, but only barely. The media and Democrats âneed to learn what populist nationalist power is on the receiving end,â Steve Bannon, the former Trump adviser, said onstage in his keynote speech. âI mean investigations, trials, and then incarceration.â Trump, he said, âhas got a kind heart and big soul. But thatâs not us, right? We want retribution.â
It would be tempting to write off this rhetoric as the ramblings of a fringe faction of the rightâand with provocateur influencers such as Jack Posobiec, James OâKeefe, and Martin Shkreli in attendance, the fringe was there. But throughout the night, other speakers closer to the establishment spoke about their desire for vengeance, sometimes directed at the media and other times at immigrants, Democrats, and even the homeless. âI think we need more Daniel Pennys in this country, because we have far too many Jordan Neelys,â incoming Representative Brandon Gill said during his remarks onstage, referring to the 26-year-old who had recently been acquitted of criminally negligent homicide after putting Neely in a fatal chokehold. (Neely, a homeless man with a history of mental illness, was reportedly threatening passengers on the subway.) Gill continued his speech by saying that Congressâs success would be determined by the number of âillegal aliens that we deport over the next two years.â
Trump and his supporters havenât exactly been quiet about their fantasies and promises. Trump talked about mass deportations all throughout the campaign, and has doubled down since his victory. But there is a meaningful difference between being aware of the rhetoric and truly experiencing its full force. As a reporter who covers the far-right internet, Iâve seen countless posts from people like Gill and Bannon talking about deporting as many immigrants as possible or incarcerating Democrats and the press. But to hear these men fervently say it and watch a crowd of more than 1,000 erupt into cheers and laughter in response added a new dimension. They donât just want policies, and theyâre not just shitposting to provoke people online; they want their enemies to suffer, and they want to relish their pain. âReckoning is coming, and there will be retribution and there will be accountability,â Lewandowski said onstage. âAnd that accountability will be to the highest levels.â
[Read: You should go to a Trump rally]
Seeing MAGA in person also betrays other hints of the direction the party will take in Trumpâs second term. Members of several far-right European parties were in the crowd. âWe have many friends in the New York Young Republican Club,â Martin Kohler, the chairman of the Berlin youth wing of Germanyâs Alternative fĂŒr Deutschland (AfD) told me, explaining that his party had hosted Wax in Berlin. âThey invited us to come over for the gala.â The politically ascendant AfD is among Germanyâs furthest-right parties. Although it disputes allegations that it is a neo-Nazi party, many of its members have been outed as having ties to neo-Nazis. They have reportedly discussed âremigration,â the process of deporting nonwhite residents, including naturalized citizens and their descendants. (Trump used this term in a Truth Social post in September). Sam Venis, another journalist in attendance, told me that he met several people at the gala who said that they were members of the Forum for Democracy, a far-right party in the Netherlands whose leader has also advocated for âmass remigrationâ so that Europe does not âAfricanizeâ and instead remains predominately white. The gala closed with a speech from MiklĂłs SzĂĄnthĂł, the director-general of the Center for Fundamental Rights, an organization in Hungary that supports Viktor OrbĂĄn, the countryâs authoritarian prime minister.
The presence of these European far-right parties was fitting. They have rallied around punishing their enemies by âreclaimingâ their country from immigrants and the cosmopolitans they think have taken it over: They want revenge. Once Trump takes office, the MAGA right will begin to exact it. âTonight, for me, is a night of hope,â Kohler said. âHere in the U.S., something is about to change.â
The glorification of dangerous thinness is a long-standing problem in American culture, and it is especially bad on the internet, where users can find an unending stream of extreme dieting instructions, âthinspoâ photo boards, YouTube videos that claim to offer magical weight-loss spells, and so on. There has always been a huge audience for this type of content, much of which is highly visual and emotionally charged, and spreads easily.
Most of the large social-media platforms have been aware of this reality for years and have undertaken at least basic measures to address it. On most of these platforms, at a minimum, if you search for certain well-known keywords related to eating disordersâas people who are attracted or vulnerable to such content are likely to doâyouâll be met with a pop-up screen asking if you need help and suggesting that you contact a national hotline. On todayâs biggest platforms for young people, Instagram and TikTok, that screen is a wall: You canât tap past it to get to search results. This is not to say that these sites do not host photos and videos glamorizing eating disorders, only that finding them usually isnât as easy as simply searching.
X, however, offers a totally different experience. If you search for popular tags and terms related to eating disorders, youâll be shown accounts that have those terms in their usernames and bios. Youâll be shown relevant posts and recommendations for various groups to join under the header âExplore Communities.â The impression communicated by many of these posts, which typically include stylized photography of extremely skinny people, is that an eating disorder is an enviable lifestyle rather than a mental illness and dangerous health condition. The lifestyle is in fact made to seem even more aspirational by the way that some users talk about its growing popularity and their desire to keep âwannarexicsââwannabe anorexicsâout of their community. Those who are accepted, though, are made to feel truly accepted: Theyâre offered advice and positive feedback from the broader group.
Technically, all of this violates Xâs published policy against the encouragement of self-harm. But thereâs a huge difference between having a policy and enforcing one. X has also allowed plenty of racist and anti-Semitic content under Elon Muskâs reign despite having a policy against âhateful conduct.â The site is demonstrating what can happen when a platformâs rules effectively mean nothing. (X did not respond to emails about this issue.)
This moment did not emerge from a vacuum. The social web is solidly in a regressive moment when it comes to content moderation. Major platforms had been pushed to act on misinformation in response to seismic events including the 2016 presidential election, the coronavirus pandemic, the Black Lives Matter protests of 2020, the rise of QAnon, and the January 6 insurrection, but have largely retreated after backlash from Donald Trumpâaligned Republicans who equate moderation with censorship. That equation is one of the reasons Musk bought Twitter in the first placeâhe viewed it as a powerful platform that was operating with heavy favor toward his enemies and restricting the speech of his friends. After he took over the site, in 2022, he purged thousands of employees and vowed to roll back content-moderation efforts that had been layered onto the platform over the years. âThese teams whose full-time job it was to prevent harmful content simply are not really there,â Rumman Chowdhury, a data scientist who formerly led a safety team at pre-Musk Twitter, told me. They were fired or dramatically reduced in size when Musk took over, she said.
[Read: I watched Elon Musk kill Twitterâs culture from the inside]
Now the baby has been thrown out with the bathwater, Vaishnavi J, an expert in youth safety who worked at Twitter and then at Instagram, told me. (I agreed not to publish her full name because she is concerned about targeted harassment; she also publishes research using just her last initial.) âDespite what you might say about Musk,â she told me, âI think if you showed him the kind of content that was being surfaced, I donât think he would actually want it on the platform.â To that point, in October, NBC Newsâs Kat Tenbarge reported that X had removed one of its largest pro-eating-disorder groups after she drew the companyâs attention to it over the course of her reporting. Yet she also reported that new groups quickly sprang up to replace it, which is plainly true. Just before Thanksgiving, I found (with minimal effort) a pro-eating-disorder group that had nearly 74,000 members; when I looked this week to see whether it was still up, it had grown to more than 88,000 members. (Musk did not respond to a request for comment.)
That growth tracks with user reports that X is not only hosting eating-disorder content but actively recommending it in the algorithmically generated "For You feed, even if people donât wish to see it. Researchers are now taking an interest: Kristina Lerman, a professor at the University of Southern California who has published about online eating-disorder content previously, is part of a team finalizing a new paper about the way that pro-anorexia rhetoric circulates on X. âThere is this echo chamber, this highly interlinked community,â she told me. Itâs also very visible, which is why X is developing a reputation as a place to go to find that kind of content. X communities openly use terms like proana and thinspo, and even bonespo and deathspo, terms that romanticize eating disorders to an extreme degree by alluding fondly to their worst outcomes.
Eating-disorder content has been one of the thorniest content-moderation issues since the beginning of the social web. It was prevalent in early online forums and endemic to Tumblr, which was where it started to take on a distinct visual aesthetic and set of community rituals that have been part of the internet in various forms ever since. (Indeed, it was a known problem on Twitter even before Musk took over the site.) There are many reasons this material presents such a difficult moderation problem. For one thing, as opposed to hate speech or targeted harassment, it is less likely to be flagged by usersâparticipants in the communities are unlikely to report themselves. On the contrary, creators of this content are highly motivated to evade detection and will innovate with coded language to get around new interventions. A platform that really wants to minimize the spread of pro-eating-disorder content has to work hard at it, staying on top of the latest trends in keywords and euphemisms and being constantly on the lookout for subversions of its efforts.
As an additional challenge, the border between content that glorifies eating disorders and content that is simply part of our cultureâs fanatical fixation on thinness, masked as âfitnessâ and âhealthâ advice, is not always clear. This means that moderation has to have a human element and has to be able to process a great deal of nuanceâto understand how to approach the problem without causing inadvertent harm. Is it dangerous, for instance, to dismantle someoneâs social network overnight when theyâre already struggling? Is it productive to allow some discussion of eating disorders if that discussion is about recovery? Or can that be harmful too?
[Read: We have no drugs to treat the deadliest eating disorder]
These questions are subjects of ongoing research and debate; the role that the internet plays in disordered-eating habits has been discussed now for decades. Yet, looking at X in 2024, you wouldnât know it. After searching just once for the popular term edtwtââeating disorder Twitterââand clicking on a few of the suggested communities, I immediately started to see this type of content in the main feed of my X account. Scrolling through my regular mix of news and jokes, I would be served posts like âa mega thread of my favourite thinsp0 for edtwtâ and âwhatâs the worst part about being fat? ⊠A thread for edtwt to motivate you.â
I found this shocking mostly because it was so simplistic. We hear all the time about how complex the recommendation algorithms are for todayâs social platforms, but all I had done was search for something once and click around for five minutes. It was oddly one-to-one. But when I told Vaishnavi about this experience, she wasnât surprised. âRecommendation algorithms highly value engagement, and ED content is very popular,â she told me. If I had searched for something less popular, which the site was less readily able to provide, I might not have seen a change in my feed.
When I spoke with Amanda Greene, who published extensively about online eating-disorder content as a researcher at the University of Michigan, she emphasized the big, newer problem of recommendation algorithms. âThatâs what made TikTok notorious, and thatâs what I think is making eating-disorder content spread so widely on X,â she said. âItâs one thing to have this stuff out there if you really, really search for it. Itâs another to have it be pushed on people.â
It was also noticeable how starkly cruel much of the X content was. To me, it read like an older style of pro-eating-disorder content. It wasnât just romanticization of super thinness; it looked like the stuff you would see 10 years ago, when it was much more common for people to post photos of themselves on social media and ask for others to tear them apart. On X, I was seeing people say horrible things to one another in the name of âmeanspoâ (âmean inspirationâ) that would encourage them not to eat.
Though she wasnât collecting data on X at the moment, Greene said that what sheâd been hearing about anecdotally was similar to what I was being served in my X feed. Vicious language in the name of âtough loveâ or âsupportâ was huge in years past and is now making its way back. âI think maybe part of the reason it had gone out was content moderation,â Greene told me. Now itâs back, and everybody knows where to find it.
Hyundai has a lot riding on a patch of rural Georgia. In October, the South Korean auto giant opened a new electric-vehicle factory west of Savannah at the eye-watering cost of $7.6 billion. Itâs the largest economic-development project in the stateâs history (one that prompted the Georgia statehouse to pass a resolution recognizing âHyundai Dayâ). For now, workers at the so-called Metaplant are building the companyâs popular electric SUV, the Hyundai Ioniq 5, and soon more EVs will be built there, too. And to power those vehicles, Hyundai is set to open a battery plant at the site, and is spending billions to open another one elsewhere in Georgia. Â
Hyundaiâs plan will allow the Ioniq 5âand other future electric cars already in the worksâto qualify for tax credits implemented by the Inflation Reduction Act. American-made EVs are eligible for rebates that can knock thousands of dollars off their price, making them far more appealing to consumers. But Hyundaiâs nearly $13 billion investment may soon hit a snag. In his second term, President-elect Donald Trump has said he will make those tax credits history. If he follows through on that promise, EV sales will surely slow, and Americans will buy more gas guzzlers that will produce emissions for the decade-plus theyâll be on the road. The problem is worse than it might look: The auto industry is investing more than $300 billion to meet the Biden administrationâs EV goals. Most automakers are hemorrhaging money on EVs, and revoking these incentives may give them an excuse to roll back their plans to introduce electric cars, which would give consumers more clean-driving options.
Even if Trump cracks down on EVs, Hyundai might be uniquely well-equipped to keep Americans interested in going electric. The Hyundai Motor Groupâs three brandsâHyundai, Kia, and Genesisâhave emerged as a distant second to Tesla in EV sales this year. But their electric cars come with price tags, battery ranges, and high-tech features that are hard to beat. Hyundaiâs Ioniq 6 sedan retails for about the same as a Tesla Model 3, but can recharge more quickly. The companyâs cars also allow Americans to go electric in ways they could not previously: Before the Kia EV9, families looking for a truly spacious three-row SUV had no good electric options. âAs the EV scene is about to possibly get shaken up to its core,â Robby DeGraff, an analyst at the consulting firm AutoPacific, told me, Hyundaiâs eclectic lineup âis something Tesla lacks.â In spite of Elon Muskâs bromance with Trump, the most important EV company of his second term may turn out to be Hyundai.
It may sound weird that Musk has cheered on Trumpâs desire to claw back EV incentives, but Tesla is rare in that it is profitably building EVs at scale. It can weather the loss of tax credits better than others. If the EV tax credits evaporated tomorrow, start-ups such as Rivian and Lucid Motors would face major headaches. Theyâre still in the early, money-losing stage that Tesla was in for almost two decades: They lack the economies of scale to sell EVs at high volumes and cheap prices. Their EVs are still on the expensive side, so theyâll need all the help they can get to cross the âvalley of death.â Thatâs even a problem for big legacy companies. Ford is already backtracking as electric sales fail to meet expectations and costs keep mounting; itâd be hard to justify more EVs without government help to win over new buyers.
A scant few companiesâ electric efforts could be fine without the incentives. Besides Tesla, thereâs General Motors. It has spent the year implementing a surprise turnaround of its electric operations after a disastrous 2023, and itâs also making more and more affordable EVsâwhile approaching profitability as well.
Then thereâs Hyundai. Besides Tesla, it is perhaps the only major car company in the United States making money off EVs, and it is bringing out new electric models at a frantic clip. Hyundaiâs EV push has been a rare bright spot for an industry buried under mounting losses and strategic blunders. In 2024, Teslaâs sales have slipped, perhaps in part because the companyâs lineup of EVs is starting to feel a bit stale: Besides the Cybertruck, which starts at nearly $80,000, Tesla hasnât introduced an entirely new model since 2020. Tesla has promised again and again that it will release an electric car for less than $30,000, but it has failed to deliver as it now pivots to robotaxis.
By comparison, Hyundaiâs EVs are starting to outclass Teslaâs. Take the Kia EV3. The high-range compact car, which is already on sale in Europe and South Korea, will likely start at about $35,000 when it comes to the U.S. in 2026. At the recent Los Angeles Auto Show, all three Hyundai brands showed off new models, which will each be able to access Teslaâs previously exclusive Supercharger network straight from the factory. In doing so, Hyundaiâs brands will sell as many EV models with Teslaâs plug type as Tesla does. On the other end of the spectrum, Hyundai has an EV that simulates the engine sounds and gear shifts found in a high-performance gas car, with none of the emissions. Meanwhile, they do other things Teslas are barely starting to do, such as power entire homes in an emergency. Tax credits or not, âwe generally believe this is going to be what the customers will demand,â JosĂ© Muñoz, Hyundaiâs global CEO, told me.
Hyundai has come a long way from the early aughts, when it was a punch line in hip-hop music. To the degree that Hyundai cars were enticing to American buyers, it was because they were generally cheaper than a comparable Honda or Toyota (but usually not as good). Hyundaiâs glow-up isnât just about EVs. Itâs about bringing Tesla levels of technology to the âtraditionalâ car industry. In recent years, Hyundai has poached some of the industryâs top design and engineering talent to become a leader in both areas; acquired Boston Dynamics to get into the robotics space; inked a deal to provide Hyundai EVs for Googleâs driverless Waymo taxi service; and established itself as the first brand to sell new cars on Amazon.
The irony of Hyundaiâs transformation is that the South Korean government aided in it with the kind of regulatory support that Trump may now cut off for the United States. That included incentives to help the country build out its own battery industry, leaning on Korean tech giants such as LG, SK On, and Samsung to wean itself off China, which dominates the battery sector. And with roughly 8,000 jobs just at the Georgia Metaplant, the U.S. seems to be benefiting from Hyundaiâs renaissance as much as its home country. Perhaps the economic rationale for preserving the EV incentives may save them. Georgia Governor Brian Kemp, a Republican, has been a big cheerleader for Hyundaiâs investments in his state; most of the investment under the Inflation Reduction Act of 2022 has gone to Republican districts.
If Trump does nix the EV tax credits, Hyundai should still be in a good place. Its decision to make EVs and their batteries here should keep their costs down, DeGraff told me. Thatâs especially true as Trump threatens tariffs, which could hit cars made in Mexico and South Korea. But without EV tax credits, Hyundai can only do so much to keep selling electric cars. Hyundai has especially benefited from a loophole that makes it much cheaper to lease EVs, and without those discounts, buyers may decide that the known headaches around charging and range anxiety arenât worth the trouble. DeGraff said that his firm, AutoPacific, has found that three-quarters of potential buyers say tax credits are an important consideration for EV buying. Ultimately, Hyundaiâs big EV investments in America will test this question: Are Americans still willing to go electric if they arenât heavily subsidized to do so?
In the end, they probably will if theyâre getting a good dealâand thatâs where Hyundai is poised to do well. âAffordability will continue to be the main make-it-or-break-it [factor] for EV shoppers, especially if we see a wave of new tariffs applied to literally everything outside of the automotive space that will consequently squeeze Americansâ wallets even tighter,â DeGraff said. Trump almost certainly is bad news for EV sales, but he alone will not dictate what cars Americans buy. During his coming presidency, car companies will have even more of an onus to make EVs that Americans will want to buy regardless of whether they care about the environment. The promise of Hyundai is that it has quietly figured out a road map on how to get there: Regardless of tariffs or tax credits, itâs hard to resist a sweet deal on a good car.
Jonathan Zittrain breaks ChatGPT: If you ask it a question for which my name is the answer, the chatbot goes from loquacious companion to something as cryptic as Microsoft Windowsâ blue screen of death.
Anytime ChatGPT would normally utter my name in the course of conversation, it halts with a glaring âIâm unable to produce a response,â sometimes mid-sentence or even mid-word. When I asked who the founders of the Berkman Klein Center for Internet & Society are (Iâm one of them), it brought up two colleagues but left me out. When pressed, it started up again, and then: zap.
The behavior seemed to be coarsely tacked on to the last step of ChatGPTâs output rather than innate to the model. After ChatGPT has figured out what itâs going to say, a separate filter appears to release a guillotine. The reason some observers have surmised that itâs separate is because GPT runs fine if it includes my middle initial or if itâs prompted to substitute a word such as banana for my name, and because there can even be inconsistent timing to it: Below, for example, GPT appears to first stop talking before it would naturally say my name; directly after, it manages to get a couple of syllables out before it stops. So itâs like having a referee who blows the whistle on a foul slightly before, during, or after a player has acted out.
For a long time, people have observed that beyond being âunable to produce a response,â GPT can at times proactively revise a response moments after itâs written whatever itâs said. The speculation here is that to delay every single response by GPT while itâs being double-checked for safety could unduly slow it down, when most questions and answers are totally anodyne. So instead of making everyone wait to go through TSA before heading to their gate, metal detectors might just be scattered around the airport, ready to pull someone back for a screening if they trigger something while passing the air-side food court.
The personal-name guillotine seemed a curiosity when my students first brought it to my attention at least a year ago. (Theyâd noticed it after a class session on how chatbots are trained and steered.) But now itâs kicked off a minor news cycle thanks to a viral social-media post discussing the phenomenon. (ChatGPT has the same issue with at least a handful of other names.) OpenAI is one of several supporters of a new public data initiative at the Harvard Law School Library, which I direct, and Iâve met a number of OpenAI engineers and policy makers at academic workshops. (The Atlantic this year entered into a corporate partnership with OpenAI.) So I reached out to them to ask about the odd name glitch. Hereâs what they told me: There are a tiny number of names that ChatGPT treats this way, which explains why so few have been found. Names may be omitted from ChatGPT either because of privacy requests or to avoid persistent hallucinations by the AI.
The company wouldnât talk about specific cases aside from my own, but online sleuths have speculated about what the forbidden names might have in common. For example, Guido Scorza is an Italian regulator who has publicized his requests to OpenAI to block ChatGPT from producing content using his personal information. His name does not appear in GPT responses. Neither does Jonathan Turleyâs name; he is a George Washington University law professor who wrote last year that ChatGPT had falsely accused him of sexual harassment.
ChatGPTâs abrupt refusal to answer requestsâthe ungainly guillotineâwas the result of a patch made in early 2023, shortly after the program launched and became unexpectedly popular. That patch lives on largely unmodified, the way chunks of ancient versions of Windows, including that blue screen of death, still occasionally poke out of todayâs PCs. OpenAI told me that building something more refined is on its to-do list.
As for me, I never objected to anything about how GPT treats my name. Apparently, I was among a few professors whose names were spot-checked by the company around 2023, and whatever fabrications the spot-checker saw persuaded them to add me to the forbidden-names list. OpenAI separately told The New York Times that the name that had started it allâDavid Mayerâhad been added mistakenly. And indeed, the guillotine no longer falls for that one.
For such an inelegant behavior to be in chatbots as widespread and popular as GPT is a blunt reminder of two larger, seemingly contrary phenomena. First, these models are profoundly unpredictable: Even slightly changed prompts or prior conversational history can produce wildly differing results, and itâs hard for anyone to predict just what the models will say in a given instance. So the only way to really excise a particular word is to apply a coarse filter like the one we see here. Second, model makers still can and do effectively shape in all sorts of ways how their chatbots behave.
To a first approximation, large language models produce a Forrest Gumpâian box of chocolates: You never know what youâre going to get. To form their answers, these LLMs rely on pretraining that metaphorically entails putting trillions of word fragments from existing texts, such as books and websites, into a large blender and coarsely mixing them. Eventually, this process maps how words relate to other words. When done right, the resulting models will merrily generate lots of coherent text or programming code when prompted.
The way that LLMs make sense of the world is similar to the way their forebearsâonline search enginesâperuse the web in order to return relevant results when prompted with a few search terms. First they scrape as much of the web as possible; then they analyze how sites link to one another, along with other factors, to get a sense of whatâs relevant and whatâs not. Neither search engines nor AI models promise truth or accuracy. Instead, they simply offer a window into some nanoscopic subset of what they encountered during their training or scraping. In the case of AIs, there is usually not even an identifiable chunk of text thatâs being parrotedâjust a smoothie distilled from an unthinkably large number of ingredients.
For Google Search, this means that, historically, Google wasnât asked to take responsibility for the truth or accuracy of whatever might come up as the top hit. In 2004, when a search on the word Jew produced an anti-Semitic site as the first result, Google declined to change anything. âWe find this result offensive, but the objectivity of our ranking function prevents us from making any changes,â a spokesperson said at the time. The Anti-Defamation League backed up the decision: âThe ranking of ⊠hate sites is in no way due to a conscious choice by Google, but solely is a result of this automated system of ranking.â Sometimes the chocolate box just offers up an awful liquor-filled one.
The box-of-chocolates approach has come under much more pressure since then, as misleading or offensive results have come to be seen more and more as dangerous rather than merely quirky or momentarily regrettable. Iâve called this a shift from a ârightsâ perspective (in which people would rather avoid censoring technology unless it behaves in an obviously illegal way) to a âpublic healthâ one, where peopleâs casual reliance on modern tech to shape their worldview appears to have deepened, making âbadâ results more powerful.
Indeed, over time, web intermediaries have shifted from being impersonal academic-style research engines to being AI constant companions and âcopilotsâ ready to interact in conversational language. The author and web-comic creator Randall Munroe has called the latter kind of shift a move from âtoolâ to âfriend.â If weâre in thrall to an indefatigable, benevolent-sounding robot friend, weâre at risk of being steered the wrong way if the friend (or its maker, or anyone who can pressure that maker) has an ulterior agenda. All of these shifts, in turn, have led some observers and regulators to prioritize harm avoidance over unfettered expression.
Thatâs why it makes sense that Google Search and other search engines have become much more active in curating what they say, not through search-result links but ex cathedra, such as through âknowledge panelsâ that present written summaries alongside links on common topics. Those automatically generated panels, which have been around for more than a decade, were the online precursors to the AI chatbots we see today. Modern AI-model makers, when pushed about bad outputs, still lean on the idea that their job is simply to produce coherent text, and that users should double-check anything the bots sayâmuch the way that search engines donât vouch for the truth behind their search results, even if they have an obvious incentive to get things right where there is consensus about what is right. So although AI companies disclaim accuracy generally, they, as with search enginesâ knowledge panels, have also worked to keep chatbot behavior within certain bounds, and not just to prevent the production of something illegal.
[Read: The GPT era is already ending]
One way model makers influence the chocolates in the box is through âfine-tuningâ their models. They tune their chatbots to behave in a chatty and helpful way, for instance, and then try to make them unhelpful in certain situationsâfor instance, not creating violent content when asked by a user. Model makers do this by drawing in experts in cybersecurity, bio-risk, and misinformation while the technology is still in the lab and having them get the models to generate answers that the experts would declare unsafe. The experts then affirm alternative answers that are safer, in the hopes that the deployed model will give those new and better answers to a range of similar queries that previously would have produced potentially dangerous ones.
In addition to being fine-tuned, AI models are given some quiet instructionsâa âsystem promptâ distinct from the userâs promptâas theyâre deployed and before you interact with them. The system prompt tries to keep the models on a reasonable path, as defined by the model maker or downstream integrator. OpenAIâs technology is used in Microsoft Bing, for example, in which case Microsoft may provide those instructions. These prompts are usually not shared with the public, though they can be unreliably extracted by enterprising users: This might be the one used by Xâs Grok, and last year, a researcher appeared to have gotten Bing to cough up its system prompt. A car-dealership sales assistant or any other custom GPT may have separate or additional ones.
These days, models might have conversations with themselves or with another model when theyâre running, in order to self-prompt to double-check facts or otherwise make a plan for a more thorough answer than theyâd give without such extra contemplation. That internal chain of thought is typically not shown to the userâperhaps in part to allow the model to think socially awkward or forbidden thoughts on the way to arriving at a more sound answer.
So the hocus-pocus of GPT halting on my name is a rare but conspicuous leaf on a much larger tree of model control. And although some (but apparently not all) of that steering is generally acknowledged in succinct model cards, the many individual instances of intervention by model makers, including extensive fine-tuning, are not disclosed, just as the system prompts typically arenât. They should be, because these can represent social and moral judgments rather than simple technical ones. (There are ways to implement safeguards alongside disclosure to stop adversaries from wrongly exploiting them.) For example, the Berkman Klein Centerâs Lumen database has long served as a unique near-real-time repository of changes made to Google Search because of legal demands for copyright and some other issues (but not yet for privacy, given the complications there).
When people ask a chatbot what happened in Tiananmen Square in 1989, thereâs no telling if the answer they get is unrefined the way the old Google Search used to be or if itâs been altered either because of its makerâs own desire to correct inaccuracies or because the chatbotâs maker came under pressure from the Chinese government to ensure that only the official account of events is broached. (At the moment, ChatGPT, Grok, and Anthropicâs Claude offer straightforward accounts of the massacre, at least to meâanswers could in theory vary by person or region.)
As these models enter and affect daily life in ways both overt and subtle, itâs not desirable for those who build models to also be the modelsâ quiet arbiters of truth, whether on their own initiative or under duress from those who wish to influence what the models say. If there end up being only two or three foundation models offering singular narratives, with every userâs AI-bot interaction passing through those models or a white-label franchise of same, we need a much more public-facing process around how what they say will be intentionally shaped, and an independent record of the choices being made. Perhaps weâll see lots of models in mainstream use, including open-source ones in many variantsâin which case bad answers will be harder to correct in one place, while any given bad answer will be seen as less oracular and thus less harmful.
Right now, as model makers have vied for mass public use and acceptance, weâre seeing a necessarily seat-of-the-pants build-out of fascinating new tech. Thereâs rapid deployment and use without legitimating frameworks for how the exquisitely reasonable-sounding, oracularly treated declarations of our AI companions should be limited. Those frameworks arenât easy, and to be legitimating, they canât be unilaterally adopted by the companies. Itâs hard work we all have to contribute to. In the meantime, the solution isnât to simply let them blather, sometimes unpredictably, sometimes quietly guided, with fine print noting that results may not be true. People will rely on what their AI friends say, disclaimers notwithstanding, as the television commentator Ana Navarro-CĂĄrdenas did when sharing a list of relatives pardoned by U.S. presidents across history, blithely including Woodrow Wilsonâs brother-in-law âHunter deButts,â whom ChatGPT had made up out of whole cloth.
I figure thatâs a name more suited to the stop-the-presses guillotine than mine.
One dreary November Monday as I was enjoying a morning cup of tea, my phone alerted me that my cat, Avalanche, was exercising less than usual. For the past six weeks, Avalanche has worn a sleek black-and-gold collar that tracks her every moveâwhen and how often she sleeps, runs, walks, eats, drinks, and even grooms. This notification told me that her energy was lower than typical, so I should keep an eye on her food and water intake. As a veteran hypochondriac, I wondered for a second whether this might be the first sign of some horrible and serious condition. Then I opened the smart-collar app, where I found reassurance: My lazy seven-year-old tabby had exercised for just 45 seconds so far that morning, compared with one whole minute the day before.
These days, Americans treat our furry pals like members of the family, shelling out for premium food and expensive drugs to keep them healthier longer. There are pet treadmills and supplements and luxury spas. The U.S. pet market is poised to reach about $200 billion in sales by the end of the decade. At the same time, humans have become accustomed to a life thatâs ever more quantified, with watches and phones that passively track heart rate and steps. Gadgets such as continuous glucose monitors are available to those who seek even more detail. Of course weâd enter the era of the quantified pet, tracking our four-legged companionsâ diet, sleep, and exercise just as we do for ourselves.
The promise of this tech is a healthier pet. Animals canât communicate in words when theyâre feeling poorly, but data, the thinking goes, could reveal behavioral or medical issues early, and make them easier to treat. But a deluge of data can make real health concerns difficult to discern. It also totally stressed me out.
Most pet owners probably wonder what their animals get up to when the humans are away. Are they running around the house? Rummaging through the cupboard for Greenies? (Avalanche and her kid brother, Lewie, stole a bag of treats out of a basket while I was on vacation a few years ago.) Avalancheâs smart collar, called Catlog, gave me insight into some of her secret behaviors: She often has a drink and a snack after Iâve gone to bed, before settling in for the night herself. She frequently sleeps the entire time I am at the office.
[Read: Pay no attention to that cat inside a box]
Other information was less useful: Avalanche drinks water an average of four times a day, eats five or so times, exercises about two minutes, and spends about 30 minutes grooming, which the Catlog app informs me is somewhat low compared with similar cats. (My Apple Watch canât even tell me how often I eat and groom.) Most of what she does, really, is sleep. (I could have told you that without a kitty Apple Watch.) And yet, most days since I downloaded the app, at least one notification has popped up flagging changes in Avalancheâs activityâeating more, exercising less, or just generally seeming less energeticâand I had no clue whether any of it was important. After a few weeks, I found myself inclined to ignore the notifications altogether.
My experience seems to be a common one. Ilyena Hirskyj-Douglas, a pet-tech expert at the University of Glasgow, told me she stopped checking data from her own dogâs tracking collar. âI just kept getting notifications of how much she had walked,â she said. âI found it quite hard to know what that information meant.â Itâs a problem across the industry, David Roberts, who studies animal-computer interaction at North Carolina State University, told me. âNone of these systems have yet cracked the code of how to take what theyâre able to measure and derive the kinds of insights that owners want.â
The pet-wearables market is expected to about double by the end of the decade, and as it expands, it has the opportunity to offer some pet owners genuinely useful information. Jennifer Wiler, a nurse who lives in Brooklyn with seven cats, each of which wears a smart collar from a company called Moggie, told me she takes comfort in the app when sheâs working long shifts. âItâs kind of just peace of mind to be able to check in, make sure theyâre still, you know, getting playtime,â she said. Roberts studies how to use computers to train and evaluate dogs that are candidates to become service dogs; AI combined with sensors, for example, can look for signs of stress and other indicators. He told me the story of a colleague whose dog was a beta tester for one such wearable device. The technology had consistently predicted that her dog would be a good service dog, until one day it didnâtâit turned out the dog had a bad staph infection, which can become serious if left untreated.
[Read: Pets really can be like human family]
Wearables could be especially helpful for cats, who are notoriously cryptic and tend to hide pain until a condition has significantly progressed. My first cat died mysteriously at age seven, her white-blood-cell count dangerously elevated, just two days after I noticed that she had become lethargic and was yowling in distress. Perhaps I could have gotten her better treatment if a wearable had alerted me soonerâand, crucially, if I had identified the warning signal among the endless noise of notifications.
A spokesperson for Rabo, the Japanese company that makes Catlog, wouldnât share the criteria its AI uses to trigger alerts. âThe alerts are designed to detect significant changes in your catâs behavior or health data to help you take action when needed,â she said. The company also sells a litter-box mat that monitors weight and bathroom use. A product video assures users that it will prevent all these data from becoming overwhelming. But I got heaps of information from Catlog, and so far, none of it has helped me identify actual problems. When I took Avalanche in for her annual exam, I asked the vet about some of the things Catlog had flagged. According to the app, Avalanche ate and drank and ran around less than other cats, and I wondered if she was depressed or sick. My vet waved me off with a look that read somewhere between bemusement and Are you out of your mind?
The excessive notifications may have been a ploy for my engagement as much as they were attempts to alert me about my catâs behavior. âI assume that these notifications are just âWe want eyeballs on our app,ââ Roberts told me. Research has shown that many pet wearables capture an alarming amount of data about people, not just their pets. One study found that some pet-tech apps captured data such as ownersâ addresses and when they were home. Catlogâs privacy policy notes that it may track information about usersâ online activity and share it with third parties. A company spokesperson told me that âthe primary goal of collecting data from human users is to ensure that the app and devices provide maximum value to cat parentsâ and that the companyâs privacy policy is âa broad statement designed to account for potential future uses,â which is not necessarily representative of information the app currently collects. Hirskyj-Douglas said that wearables companies could also share the information they collect with, say, pet insurers, just as some auto insurers track your driving habits and life insurers might track your health. (She also mentioned people have used trackers to spy on their dog sitters, and make sure they are actually walking the dog.) And Catlog is far from the only product competing for pet ownersâ attention. Moggie offers an AI chatbot that impersonates usersâ cats and answers health questions from their perspective. There are countless options for dogs.
[Read: Dogs are entering a new wave of domestication]
Sometimes, when Iâm at work, or on the subway, I absentmindedly open the Catlog app, to find, for example, that Avalanche recently ran for three seconds and then proceeded to take a 32-minute nap. It feels like the equivalent of texting my bestie or scrolling her Instagram feed, just because sheâs on my mind. Spying on my cat has been fun, but not fun enough to justify the anxiety it induces. (My husband, who is not a hypochondriac, didnât find the app all that stressful but didnât find it useful either.) The day before I wrote this story, the collarâs battery died. I havenât bothered to recharge it yet.
For more than a week now, a 26-year-old software engineer has been Americaâs main character. Luigi Mangione has been charged with murdering UnitedHealthcare CEO Brian Thompson in the middle of Midtown Manhattan. The killing was caught on video, leading to a nationwide manhunt and, five days later, Mangioneâs arrest at a McDonaldâs in Altoona, Pennsylvania. You probably know this, because the fatal shooting, the reaction, and Mangione himself have dominated our national attention.
And why wouldnât it? Thereâs the shock of the killing, caught on film, memed, and shared ad infinitum. Thereâs the peculiarity of it all: his stop at Starbucks, his smile caught on camera, the fact that he was able to vanish from one of the most densely populated and surveilled areas in the world with hardly a trace. And then, of course, thereâs the implications of the apparent assassinationâthe political, moral, and class dynamicsâfollowed by the palpable joy or rage over Thompsonâs death, depending on who you talked to or what you read (all of which, of course, fueled its own outrage cycle). For some, the assassination was held up as evidence of a divided country obsessed with bloodshed. For others, Mangione is an expression of the depth of righteous anger present in American life right now, a symbol of justified violence.
[Read: Decivilization may already be under way]
Mangione became a folk hero even before he was caught. He was glorified, vilified, the subject of erotic fan fiction, memorialized in tattoo form, memed and plastered onto merch, and endlessly scrutinized. Every piece of Mangione, every new trace of his web history has been dissected by perhaps millions of people online.
The internet abhors a vacuum, and to some degree, this level of scrutiny happens to most mass shooters or perpetrators of political violence (although not all alleged killers are immediately publicly glorified). But whatâs most notable about the UHC shooting is how charged, even desperate, the posting, speculating, and digital sleuthing have felt. Itâs human to want tidy explanations and narratives that fit. But in the case of Mangione, it appears as though people are in search of something more. A common conception of the internet is that it is an informational tool. But watching this spectacle unfold for the past week, I find myself thinking of the internet as a machine better suited for creating meaning rather than actual sense.
Mangione appears to have left a sizable internet history, which is more recognizable than it is unhinged or upsetting. This was enough to complicate the social-media narratives that have built up around the suspected shooter over the past week. His posts were familiar to those who spend time online, as the writer Max Read notes, as the âviews of the median 20-something white male tech workerâ (center-right-seeming, not very partisan, a bit rationalist, deeply plugged into the cinematic universe of tech- and fitness-dude long-form-interview podcasts). He appears to have left a favorable review of the Unabomberâs manifesto on Goodreads but also seemed interested in ideas from Peter Thiel and other elites. He reportedly suffered from debilitating back pain and spent time in Reddit forums, but as New Yorkâs John Herrman wrote this week, the internet âwas where Mangione seemed more or less fine.â
As people pored over Mangioneâs digital footprint, the stakes of the moment came into focus. People were less concerned about the facts of the situationâwhich have been few and far betweenâthan they were about finding some greater meaning in the violence and using it to say something about what it means to be alive right now. As the details of Mangioneâs life were dug up earlier this week, I watched people struggling in real time to sort the shooter into a familiar framework. It would make sense if his online activity offered a profile of a cartoonish partisan, or evidence of the kind of alienation weâve come to expect from violent men. It would be reassuring, or at least coherent, to see a history of steady radicalization in his posts, moving him from promising young man toward extremism. Thereâs plenty we donât know, but so much of what we do is banalâwhich is, in its own right, unsettling. In addition to the back pain, he seems to have suffered from brain fog, and struggled at times to find relief and satisfactory diagnoses. This may have been a radicalizing force in its own right, or the precipitating incident in a series of events that could have led to the shooting. We donât really know yet.
Our not knowing doesnât make the event any less revealing, cathartic, or terrifying. And it doesnât stop the speculating, the evidence-marshaling, and the search for meaning. As my colleague Ian Bogost remarked in a post on Bluesky this week, the morass of social-media posts and news articles often felt empty. Our search for a motive, for sense-making, wasnât going anywhere. And yet we were still pursuing it. âWeâve reached the end of the internet as an information system,â he wrote. To many, the shooting felt significant in a way that similar acts of violence generally do not. On social media, people began calling the shooting an assassination before anything close to a motive was established. The urge was understandable: Powerful, wealthy men arenât shot in Midtown Manhattan very often. Many observers apparently wanted to view it as a bellwether for further violence against the rich and powerful, or as the inciting event that might awaken people to the scale and extent of the populist rage in the country toward broken bureaucracies such as our health-care system.
Yet perhaps the most uncomfortable outcome for the millions following along is if the meaning machine fails and the shooting doesnât provide any greater resolution. Mangione may be not a Trumpist or Marxist folk hero but just a male tech worker of a certain age with reasonably common views among his hyperspecific online subculture. He may not have been radicalized by a book or a video game or even a conflict with his insurance company. If Mangione refuses to be claimed by an ideology, or if he reveals himself to be a well-adjusted kid who became deeply mentally unwell, that may end up being more unsettling than if he is a calculated operator or fringe radical.
When Mangione was caught, he had with him a note or manifesto of sorts, less than 300 words long. Near the beginning, it offers the following: âThis was fairly trivial.â The phrase is cold, detached, and haunting. It might merely be the garden-variety bravado of a gunman. But the sentence also conjures a possibility that is much harder to sit with (and for the internet to latch onto). Of all the possible outcomes available, the least shared, argued over, and considered is one that the shooter alludes to himselfâthat what feels to all of us like an era-defining event may ultimately be unremarkable in its brutality, in its inability to effect change, and in how quickly everyone moves on.
It is tempting to think of political extremists as those who have had their brain flambĂ©ed by a steady media diet of oddball podcasters, fringe YouTubers, and âdo your own researchâ conspiracists. Dylann Roof, who killed nine people at a Black church in Charleston, South Carolina, in 2015, was known to hang out in white-supremacist forums. Robert Bowers frequently posted racist content on the right-wing site Gab, where he wrote âScrew your optics, Iâm going inâ just before murdering 11 people at a synagogue in Pittsburgh in 2018. Brenton Tarrantâs manifesto explaining why he murdered 51 people in two mosques in Christchurch, New Zealand, in 2019 was filled with 4chan jokes and memes, suggesting that he had spent ample time on the platform.
Yet at first glance, Luigi Mangione, the suspected killer of UnitedHealthcare CEO Brian Thompson, doesnât seem to fit this mold. Mangione was active on social mediaâbut in the most average of ways. He seemingly posted on Goodreads and X, had public photos of himself on Facebook, and reportedly spent time on Reddit discussing his back pain. Perhaps more details will emerge that complicate the picture, but however extreme his political views wereâhe is, after all, charged with murdering a man in Midtown Manhattan, and reportedly wrote a manifesto in which he called health insurers âparasitesââthis does not appear to be a man who was radicalized in the fever swamps of some obscure corner of the dark web. On the surface, Mangione may have just been a fundamentally normal guy who snapped. Or maybe the killing demonstrates how mainstream political violence is becoming.
[Read: Decivilization may already be under way]
A Goodreads profile that appears to have been Mangioneâs showed that he had read books written by the popular science writer Michael Pollan and by Dr. Seuss (he gave The Lorax a five-star review). On what is believed to be his X account, he followed a mĂ©lange of very popular (and ideologically mixed) people, including Joe Rogan, Representative Alexandria Ocasio-Cortez, Ezra Klein, and Edward Snowden. In at least one instance, he praised Tucker Carlsonâs perspectives on postmodern architecture. His most extreme signal was a sympathetic review he gave to the manifesto written by Ted Kaczynski, the Unabomber. But as the writer Max Read points out, thatâs not uncommon for a lot of younger politically active people who identify with Kaczynskiâs environmentalist and anti-tech views, though itâs unlikely many of them are in lockstep with the Unabomberâs tactics.
Again, there are many unknowns about Mangione. Yet that has not stopped people from celebrating his purported cause; in fact, his bland social-media presence may only have made him easier to identify with. Jokes about Thompsonâs death have gone viral on virtually every social-media platform, and they have not stopped in the week since the shooting. People filled comment sections for videos and posts about the shooting with unsympathetic replies, pointing out UnitedHealthcareâs reputation for denying claims, and ruminating on how much suffering Thompson was responsible for at the helm of the company. The Network Contagion Research Institute, a nonprofit that monitors and analyzes online extremism, found that six of the top 10 most engaged-with posts on X about Thompson or UnitedHealthcare in the shootingâs aftermath âexpressed explicit or implicit support for the killing or denigrated the victim.â These responses werenât politically divided either. When the conservatives Matt Walsh and Ben Shapiro made videos complaining about people dancing on Thompsonâs grave, people pushed back in the comments and called the commentators out of touch.
In this way, Mangioneâs act and the response demarcate a new moment, one in which acts of political violence are no longer confined to extremists with fringe views, but widely accepted. This has been bubbling up for years: Jokes about âeat the rich,â guillotines, and class war have been memes for the young, online left since the late 2010s. Milder versions of this sentiment occasionally seeped out to wider audiences, such as last year, when people online applauded orca whales for attacking yachts in the Iberian Peninsula. Many young people are furious about the economic lot they have drawn by being born into an era of significant wealth inequality, and have made winking jokes about addressing it through violence. After Thompsonâs murder, this sentiment broke out of its containment walls, flooding comment sections and social-media feeds.
This response probably isnât an aberration, but instead is ascendant. America isnât yet experiencing its own Years of Leadâa period in Italy from the 1960s to the 1980s in which political violence and general upheaval became the norm in response to economic instability and rising extremismâbut political violence in the U.S. is slowly yet steadily becoming more common. In the past several years, it has surged to the highest levels since the 1970s, and the majority of ideologically motivated homicides since 1990 have been committed by far-right extremists.
Experts have different theories as to whatâs driving this, but many agree that weâre due for more acts of political violence before the trend dissipates. The response to Thompsonâs death isnât just people reveling in what they believe is vigilante justiceâit may also be a sign of whatâs coming. As my colleague Adrienne LaFrance has written, âAmericans tend to underestimate political violence, as Italians at first did during the Years of Lead.â Mangioneâs alleged act and the public response suggest that thereâs appetite for political, cause-oriented violence; that these acts may not be committed or applauded just by terminally online weirdos. There are millions of guys who view the world the way Mangione does, and millions more willing to cheer them on.
For years, crypto skeptics have asked, What is this for? And for years, boosters have struggled to offer up a satisfactory answer. They argue that the blockchainâthe technology upon which cryptocurrencies and other such applications are builtâis itself a genius technological invention, an elegant mechanism for documenting ownership online and fostering digital community. Or they say that it is a foundation on which to build and fund a third, hyperfinancialized iteration of the internet where you donât need human intermediaries to buy a cartoon image of an ape for $3.4 million.
Then there are the currencies themselves: bitcoin and ether and the endless series of memecoins and start-up tokens. These are largely volatile, speculative assets that some people trade, shitpost about, use to store value, and, sometimes, get incredibly rich or go bankrupt from. They are also infamously used to launder money, fund start-ups, and concoct elaborate financial fraud. Crypto has its use cases. But the knock has long been that the technology is overly complicated and offers nothing that the modern financial system canât already doâthat crypto is a technological solution in search of a problem (at least for people who donât want to use it to commit crimes).
I tend to agree. Iâve spent time reporting on NFTs and crypto-token-based decentralized autonomous organizations, or DAOs (like the one that tried to buy an original printing of the Constitution in 2021). Iâve read opaque white papers for Web3 start-ups and decentralized finance protocols that use smart contracts to enable financial-service transactions without major banks, but Iâve never found a killer app.
The aftermath of the presidential election, however, has left me thinking about cryptoâs influence differently.
[Christopher Beam: The worst of crypto is yet to come]
Crypto is a technology whose transformative product is not a particular service but a cultureâone that is, by nature, distrustful of institutions and sympathetic to people who want to dismantle or troll them. The election results were at least in part a repudiation of institutional authorities (the federal government, our public-health apparatus, the media), and crypto helped deliver them: The industry formed a super PAC that raised more than $200 million to support crypto-friendly politicians. This group, Fairshake, was nonpartisan and supported both Democrats and Republicans. But it was Donald Trump who went all in on the technology: During his campaign, he promoted World Liberty Financial, a new crypto start-up platform for decentralized finance, and offered assurances that he would fire SEC Chair Gary Gensler, who was known for cracking down on the crypto industry. (Gensler will resign in January, as is typical when new administrations take over.) Trump also pledged deregulation to help âensure that the United States will be the crypto capital of the planet and the bitcoin superpower of the world.â During his campaign, he said, âIf youâre in favor of crypto, youâd better vote for Trump.â At least in the short term, cryptoâs legacy seems to be that it has built a durable culture of true believers, techno-utopians, grifters, criminals, dupes, investors, and pandering politicians. Investments in this technology have enriched many of these people, who have then used that money to try to create a world in their image.
Though the white paper that introduced bitcoinâs origins and philosophyâsomething of an urtext for crypto overallâdoes not discuss politics per se, cryptocurrency was quickly adopted and championed by cyberlibertarians. Their core belief, dating back to the 1996 âA Declaration of the Independence of Cyberspace,â is simply that governments should not regulate the internet. Bitcoin and other cryptocurrencies are built on blockchains, which are fundamentally anti-establishment insofar as they are decentralized: They do not require a central authority or middleman to function. As the late David Golumbia, a professor who studied digital culture, wrote in his 2016 book, The Politics of Bitcoin: Software as Right-Wing Extremism, âMany of [bitcoinâs] most vociferous advocates rely on characterizations of the Federal Reserve as a corrupt idea in and of itself, a device run by conspiratorial bankers who want âthe state to control everyoneâs lives.ââ For true believers at the time, cryptocurrencies were a techno-utopian response to a broken, exclusionary, and extractive financial systemâa response that may either remake the system or burn it down.
Yet today, cryptoâs culture is far more diffuse. Exchanges such as Coinbase and Robinhood have effectively opened trading markets to anyone with a bank account and a smartphone. There are certainly true believers in the technology, but they are accompanied by celebrities and memelords drumming up new coins based on viral memes, and scores of day traders hoping to catch one of these speculation tokens at the right moment. Because crypto profits are driven by generating hype and marketing, the technology has spawned a durable digital culture of people longing for community or chasing after the allure of 1,000x returns, as well as those who relish just how much crypto pisses off the right people. Even as crypto becomes more mainstream, many of the industryâs boosters see their investments and community as a countercultural force. And so it makes sense that right-leaning culture warriors such as Jordan Peterson and Joe Rogan (who are now very much the establishment but position themselves as outsiders) have expressed fondness for crypto, and that venture capitalists such as Marc Andreessen, whose firm is deeply invested in crypto, have adopted more and more reactionary politics.
It is easy to make fun of the crypto hype cyclesâthe Beanie Babiesâesque rise and fall of NFTs such as Bored Apesâand to roll your eyes at the shamelessness of memecoin culture. As of this writing, Haliey Welch, a viral sensation turned podcaster (better known as the âHawk Tuah girlâ), is in the middle of a backlash for launching her own memecoin, which immediately spiked and then crashed, infuriating her fans. If that sentence makes perfect sense to you, Iâd like to apologize, but also: You get my drift. Crypto culture, with its terminally online slang and imagery, is alienating and off-putting. The industryâs penchant for Ponzi schemes and defrauding retail investorsâthe implosion of insolvent companies such as FTX and platforms such as Celsiusâis more than worthy of scorn. And yet, through all of thisâperhaps because of all of thisâcryptocurrencies have minted a generation of millionaires, billionaires, and corporate war chests. And now theyâre using their money to influence politics.
Which brings us back to Trump. Whether he understands crypto beyond the basic notion that itâs a good way to win votes and get rich off the backs of his most fanatical supporters is not clear. But the alliance between Trump and the crypto constituency makes sense philosophically. Trump is corrupt, and he loves money. For supporters, the appeal of his administration revolves in part around his promises to gut the federal government, seek retribution against his political enemies, and remake American institutions. You can see how the MAGA plan might overlap with an edgelordian culture that has contempt for a system it sees as decrepit and untrustworthy. The same overlap applies to technology executives like David Sacks, the anti-woke venture capitalist Trump has named as his AI and crypto czar.
I put all of this to Molly White, a researcher who covers the cryptocurrency industry. She suggested that there was yet another parallel between crypto advocates and the MAGA coalitionânamely a desire to become the powerful institutions they claim to despise. âBitcoin, and to some degree the other crypto assets, have this anti-government, anti-censorship ethos,â she told me. The original crypto ideology, White said, was built around the notion that large financial institutions and the government shouldnât be part of this new paradigm. âBut many crypto advocates have established a great deal of power through the wealth theyâve managed to accumulate using these assets. And over time thereâs been a shift from We donât want those institutions to have the power to We want the power.â
White argues that the crypto industry has become a re-creation of much of what its original ideology claimed to despise. âIf you look at Coinbase and other crypto companies, they do similar things to the financial institutions that Satoshi [Nakamoto, Bitcoinâs pseudonymous creator] was disappointed in. A lot of these companies work closely with the government, too, and they do things like the same type of ID verification that banks do,â she said. âTheyâve re-created the financial system, but with fewer protections for consumers.â
It seems clear that in a second Trump administration, the crypto industry and its barons might get their wishes. Itâs possible that the industry could see regulations declaring tokens as commodities, instead of securities, which would ease restrictions on trading and perhaps lead to more comingling between big banks and crypto assets. Last week, Trump nominated Paul Atkins, a former SEC commissioner, and a pro-crypto voice, to run the SEC. The announcement caused the price of bitcoin to surge to more than $100,000 (at the same time last year, the price was less than half that).
You donât have to be a cynic to see a flywheel effect: Crypto has become a meaningful political constituency not because its technology has broad, undeniable utility, but because it has made certain people extremely wealthy, which has attracted a great deal of attention and interest. The industry courts politicians with its wealth, and politicians pander for donations by making promises. Ultimately, the pro-crypto candidate wins, and the price of bitcoin surges, making many of these same people richer and thus able to exert more influence.
Trump hasnât taken office yet, but you can already see how this might play out. Justin Sun, a Chinese national and cryptocurrency entrepreneur charged with fraud by the SEC, recently bought $30 million worth of tokens of Trumpâs World Liberty Financial coinâan arrangement that may have been quite lucrative for Trump, raising concerns that the incoming presidentâs crypto investment will be an easy vehicle for bribery. There is speculation that Trump could make good on a proposal to create a strategic reserve of bitcoins in the U.S., which could involve the federal government buying 200,000 bitcoins a year over the next five yearsâperhaps by using the countryâs gold reserves. For large crypto holders, this would be an incredible scheme, a wealth transfer from the government to crypto whales. In practice, this would allow crypto holders to sell off their assets to the government while pumping the price of the asset. Using the government to prop up bitcoin is an interesting maneuver for a technology whose origins lie in decentralization.
Crypto could also end up being the currency of choice for greasing the skids of the second Trump administration, but the broader concern is about what happens if crypto executives get everything they want. As my colleague Annie Lowrey wrote recently, âIndustry-friendly rules would lead to a flood of cash entering the crypto markets, enriching anyone with assets already in their wallets, but also increasing volatility and exposing millions more Americans to scams, frauds, and swindles.â
[Annie Lowrey: The three pillars of the bro-economy ]
White offered a similar concern, should crypto end up further entangled in the global economy. The collapse of FTX wiped out some of the exchangeâs users, but there was no real contagion for the broader financial system. âBack then, crypto companies werenât too big to fail and there was no need for a bailout,â she told me. âIf banks are allowed to get more involved and if crypto and traditional finance are enmeshed, my fear is the industry will grow bigger and the crashes will be greater.â
Cryptoâs future is uncertain, but its legacy, at least in the short term, seems clearer than it did before November 5. It turns out that cryptocurrencies do have a very concrete use case. They are a technology that has latched on to, and then helped build, a culture that celebrates greed and speculation as virtues just as it embraces volatility. The only predictable thing about crypto seems to be its penchant for attracting and enriching a patchwork of individuals with qualities including, but not limited to, an appetite for risk, an overwhelming optimism about the benefits of technology, or a healthy distrust of institutions. In these ways, crypto is a perfect fit for the turbulence and distrust of the 2020s, as well as the nihilism and corruption of the Trump era.
For more than two years, every new AI announcement has lived in the shadow of ChatGPT. No model from any company has eclipsed or matched that initial fever. But perhaps the closest any firm has come to replicating the buzz was this past February, when OpenAI first teased its video-generating AI model, Sora. Tantalizing clipsâwoolly mammoths kicking up clouds of snow, Pixar-esque animations of adorable fluffy crittersâpromised a stunning future, one in which anyone can whip up high-quality clips by typing simple text prompts into a computer program.
But Sora, which was not immediately available to the public, remained just that: a teaser. Pressure on OpenAI has mounted. In the intervening months, several other major tech companies, including Meta, Google, and Amazon, have showcased video-generating models of their own. Today, OpenAI finally responded. âThis is a launch weâve been excited for for a long time,â the start-upâs CEO, Sam Altman, said in an announcement video. âWeâre going to launch Sora, our video product.â
In the announcement, the company said that paid subscribers to ChatGPT in the United States and several other countries will be able to use Sora to generate videos of their own. Unlike other tech companiesâ video-generating models, which remain previews or are available solely through enterprise cloud platforms, Sora is the first video-generating product that a major tech company is placing directly in usersâ hands. Chatbots and image generators such as OpenAIâs DALL-E have already made it effortless for anybody to create and share detailed content in just a few secondsâthreatening entire industries and precipitating deep changes in communication online. Now the era of video-generating AI models will make those shifts only more profound, rapid, and bizarre.
OpenAIâs key word this afternoon was product. The company is billing Sora not as a research breakthrough but as a consumer experienceâpart of the companyâs ongoing commercial lurch. At its founding, in 2015, OpenAI was a nonprofit with a mission to build digital intelligence âto benefit humanity as a whole, unconstrained by a need to generate financial return.â Today, it pumps out products and business deals like any other tech company chasing revenue. OpenAI added a for-profit arm in 2019, and as of September, it is reportedly considering revoking the control of its nonprofit board entirely. Soraâs marketing is even a change from February, when OpenAI presented the video-generating model as a step toward the companyâs lofty mission of creating technology more intelligent than humans. Bill Peebles, one of Soraâs lead researchers, told me in May that video would enable âa couple of avenues to AGI,â or artificial general intelligence, by allowing the companyâs programs to simulate physics and even human thoughts. To generate a video of a football game, Sora might need to model both aerodynamics and playersâ psychology.
Todayâs announcement, meanwhile, was preceded by a review by Marques Brownlee, a YouTuber famous for his reviews of gadgets such as iPhones and virtual-reality headsets. Altman wore a hoodie emblazoned with the word Sora. Altman and the Sora product team spoke for more than 17 minutes; Peebles and another researcher spoke for one minute and 45 seconds, mostly lauding how the company is launching a âturboâ version of Sora that is âway faster and cheaperâ in order to launch a ânew product experience.â
The Sora release comes on the third of â12 Days of OpenAI,â a stretch of releasing or demoing a new product to users every day. What the company has announced certainly resembles a product more than a computer-science breakthrough: a sleek interface for creating and editing videos, with features such as âRemix,â âLoop,â and âBlend.â So far, many of Soraâs outputs have been impressive, even wonder-inducing. The company hasnât built a new, more intelligent bot so much as an interface in the style of iMovie and Premiere Pro.
Already, videos that OpenAI staff and early-access users generated with Sora are trickling onto social media, and a deluge from users the world over will follow. For more than two years, cheap and easy-to-use generative-AI models have turned everybody into a potential illustrator; soon, anybody might become an animator as well. That poses an obvious threat for human illustrators and animators, many of whom have long been sounding the alarm against generative AI taking their livelihood. Sora and similar programs also raise the specter of disinformation campaigns. (Sora videos come with a visual watermark, but with OpenAIâs highest tier of subscription, which costs $200 a month, customers can create clips without one.)
But job displacement and disinformation may not be the most immediate or significant consequences of the Third Day of OpenAI. Both were happening without Sora, even if the program accelerates each problem: Production studios were already experimenting with enterprise AI products to generate videos, such as a recent Coca-Cola holiday commercial. And cheap, lower-tech methods of creating and disseminating false information have been extremely successful on their own.
What the mass adoption of video-generating AI products could meaningfully change is how people express themselves online. Over the past year, AI-generated memes, cartoons, caricatures, and other images, sometimes called âslop,â have saturated the internet. This content, much of it clearly generated by AI rather than intended to deceiveâa medium of crude self-expression, not sophisticated subterfugeâmay have been the technologyâs biggest impact on the 2024 presidential election. That anybody can generate such images provides a way to immediately express inchoate feelings about an inchoate world through an immediately digestible image. As my colleague Charlie Warzel has written, such content is meant to be consumed âfleetingly, and with little or no thought beyond the initial limbic-system response.â
A flood of AI-generated videos might provide still more powerful ways to visually communicate confusion, charged feelings, or persuasive propagandaâperhaps a much more lifelike version of the recent, low-quality AI-generated video of Donald Trump and Jill Biden in a fistfight, for instance. Sora might take over TikTok and similar short-form-video platforms just as AI image-generating models have warped Facebook and altered how people show support on X for political candidates.
Soraâs takeover of the web is not guaranteed. Back in May, Tim Brooks, another Sora researcher who has since joined Google, likened the programâs current state to GPT-1, the earliest version of the programs underlying ChatGPT, which are currently in their fourth generation. OpenAI repeated the analogy today. That comparison has broken down as the company has become more and more profit-driven: GPT-1 was highly preliminary research, a concept before a proof of concept, and four years removed from the release of ChatGPT. Sora might be just as undeveloped as an avenue for AGI, but it has become a full-fledged product nearly 10 months after OpenAI teased the model. Such early-stage technology might not mark significant progress toward curing cancer, solving the climate crisis, or other ways the start-up has claimed AI might benefit humanity as a whole. But it might be all that OpenAI needs to boost its bottom line.
Marc Andreessen has been feeling pretty good since Election Day, and at the end of November, he went on The Joe Rogan Experience to say as much. Sitting in the podcast studio, grinning, Andreessen told Rogan that he was âvery happyâ about the election and that it is now âMorning in Americaââdirectly invoking the famous Ronald Reagan campaign ad.
Andreessen, a billionaire co-founder of the storied venture-capital firm Andreessen Horowitz (also known as a16z), had put all his chips on Donald Trump. In July, on a podcast with his business partner, Ben Horowitz, Andreessen announced that he would be supporting the president-elect, and in total, he donated at least $4.5 million to a MAGA super PAC. Now, after publicly lobbying for deregulation in finance and tech, heâs poised to get his way. The Washington Post reported that he is helping Elon Musk and Vivek Ramaswamy plan the Department of Government Efficiency, Trumpâs proposed advisory body with a mandate to downsize the government. The vision is already starting to materialize: On Rogan, Andreessen harshly criticized the Consumer Financial Protection Bureau, a consumer-protection agency created in response to the 2008 financial crisis. Musk later concurred in a post on X that it was time to âDelete CFPB.â
Andreessen has long been interested in politics, and heâs never been shy about sharing his opinions. (Though he does seem to try to avoid encountering ideas he may not like: Heâs a prolific blocker of journalists on X.) Even so, his full embrace of Trump and right-wing talking pointsâincluding the false claim that the government funded an âinternet-censorship unitâ at Stanford Universityârepresents a definite shift that has become common among Americaâs plutocrat class. In addition to Musk, Ramaswamy, and Andreessen, other elites are boldly clawing their way into more political power. Last week, Trump announced that he had tapped the former PayPal executive and venture capitalist David Sacksâanother prolific X userâto be his âWhite House A.I. & Crypto Czar.â
[From the March 2024 issue: The rise of techno-authoritarianism]
Consider also the billionaire hedge-fund manager Bill Ackman, who gained notoriety outside financial circles for driving an entire news cycle last winter with his aggressive social-media campaign against university presidents whom he saw as insufficiently cracking down on pro-Palestine campus protesters. (He went particularly hard on thenâHarvard President Claudine Gay, elevating right-wing activistsâ allegations of academic dishonesty against her. Gay admitted that she had duplicated âother scholars' language, without proper attribution.â). Ackman has since used his public platform to exert pressure on politicians and administrators to scrap diversity, equity, and inclusion initiatives. In July, he formally endorsed Trump, and then spent the following months doubling down on MAGA talking points, including in an 1,800-word post on X that criticized alleged Democratic stances on fracking, protests, vaccines, and dozens of other issues, which received more than 9 million views (âVery well said!â Musk responded). In a CNBC interview, he talked about Trumpâs commitment to economic growth and favorably cited Robert F. Kennedy Jr.âs purported desire to tackle what Ackman described as âthe 73-shot regime that we give our kids.â  Â
Others in this orbit have demonstrated a willingness to take actions that would have previously crossed red lines. In October, the Los Angeles Times owner Patrick Soon-Shiong and the Washington Post owner Jeff Bezos killed their newspapersâ planned endorsements of Kamala Harris. (Soon-Shiong said that Harrisâs stance on the war in Gaza, along with a general sense that his paperâs opinion writers leaned too far to the left, motivated his decision, while Bezos wrote in an op-ed that his decision was an attempt to restore trust in the media that the public often perceives as biased.) Meanwhile, Musk has loudly lobbied for his policy preferences and has also worked not just as an occasional adviser but as a de facto staff member of the Trump campaign. On X, which Musk purchased for $44 billion in 2022, he repeatedly advocated for a Trump presidency to his more than 200 million followers and allowed far-right personalities and content to flourishâas my colleague Charlie Warzel wrote, effectively turning the platform into a white-supremacist site.
Of course, the hyperwealthy have always found ways to bend the political system. In a 2014 study, the political scientists Martin Gilens and Benjamin I. Page reviewed thousands of polls and surveys spanning more than 20 years and found that the preferences of the wealthiest Americans were much more likely than those of average citizens to affect policy changes. But influence machines were once subterranean: Few people would have known about the political influence machine that the Koch brothers built in the past several decades if not for the work of investigative journalists. The hedge-fund billionaire George Soros has long bankrolled liberal nonprofits. In 2016, Rupert Murdoch made it a point to say that he had ânever asked any prime minister for anything,â after The Evening Standard reported that he had boasted about being able to tell the British government what to do: The media magnate wanted to at least partially conceal his influence. Until recently, elites and politicians who worked together feared the scandal of the sausage-making process being revealed, and the public backlash that could come with it.
The energy is different now. âThereâs a real shift in ruling-class vibes,â Rob Larson, an economics professor who has written about the new ultrarich and Silicon Valleyâs influence on politics, told me. Many of Americaâs plutocrats seem not to care if people know that theyâre trying to manipulate the political system and the Fourth Estate in service of their own interests. Billionaires such as Andreessen and Ackman are openly broadcasting their political desires and âdefinitely feeling their animal spirits,â Larson said. Or, as the Northwestern University political-science professor Jeffrey Winters put it in a postelection interview with Slate, this feels like a moment of âin-your-face oligarchy.â
Part of the shift can be chalked up to the fact that many billionaires now come from tech, and Silicon Valley has a history of championing a unique kind of personality: âBrazenness has been a big piece of Silicon Valley entrepreneurship thatâs been celebrated for a long time,â Becca Lewis, a researcher at Stanford who focuses on the politics of the technology industry, told me. âYouâre supposed to be a disruptor or heterodox thinker.â For years, tech leaders may have thought of themselves as apolitical (even as the industry employed plenty of lobbyists), but now the winds have shifted: Technology such as crypto has been politicized, and they have brought the braggadocio of the Valley to fight for it.
[Read: Whatâs with all the Trumpy VCs?]
That the ultrarich are richer than they have ever been may also be part of the explanation, Larson said. Having more money means exposure to fewer consequences. The last time elites were this vocal in their influence, Larson said, was during the Gilded Age, when multimillionaires such as William Randolph Hearst and Jay Gould worked to shape American politics.
Regardless of its provenance, the practical impact of this behavior is a less equal system. Many people are worried about President-Elect Donald Trumpâs forthcoming administrationâs corrosive effects on democracy. The corrosion is already happening, though. A particularly vocal subset of the ultrarich is steering the ship, and doesnât care who knows.
Linda Troost and Sayre Greenfield met during the first week of graduate school at the University of Pennsylvania in 1978. Both were pursuing Ph.D.s in English. The two married four years later. After graduation in 1985, Troost landed a tenure-track job at Washington & Jefferson College, a small liberal-arts college near Pittsburgh. That was great, but what about her spouse?
Washington & Jefferson is pretty close to Pittsburgh, and Pittsburgh is a big city. âSurely something will turn up for my husband,â Troost remembers thinking. But Greenfieldâs specialty was the 16th-century English poet Edmund Spenser, who wrote The Faerie Queene. It turned out that several well-regarded Spenserists were housed at Pittsburgh universities, and many of their former students had remained nearby. âThe Pittsburgh area was saturated with Spenser scholars,â Troost told me. Greenfield couldnât find a job.
After years of short-term teaching gigs, including one a thousand miles away, in Tulsa, Oklahoma, Greenfield finally landed a permanent faculty job within an hourâs drive of where he lives with Troost. Heâs been commuting back and forth for 30 years.
That counts as a good outcome for an academic couple. Becoming a professor requires years and years of intense study, often carried out in isolation and poverty. This stretch will likely span the prime years of an academicâs young adulthood, exactly the time when they might expect to find a lifelong partner. Many find themselves in the same position as Troost and Greenfield, struggling to balance opportunities for work with the basic needs of their relationship.
This quandary is common on campus: According to a Stanford study, about 36 percent of academics at research universities are married or partnered with another academic. Itâs so common, in fact, that professors have a name for it: the âtwo-body problem.â And the problem has only become more apparent in the decades since Troost and Greenfield got their doctorates.
Colleges and universities even have a formalized response. When the two-body problem arises, departments may engage in a practice known as partner hiring: They ask their deans or the heads of other departments to find or create a job for the partner of a person theyâd like to hire. Sometimes those extra jobs are tenure-track (the kind that scholars want most), but other times they are something less: lectureships, research positions, or even staff positions such as project managers. Some schools allocate part of their budget for partner hires every year, considering it a recruitment expense. Others turn to regional contacts, hoping to place scholars at nearby institutions for mutual benefit.
[From August 1935: Twilight of the professors]
The practice of accommodating academic spouses is now second nature in higher education. Itâs part of the furniture of academic life, casting its shadow across every school and each department. Partner hiring is widely understood to be beneficial and even necessary when it comes to faculty recruitment. But its effects on the academic labor market, and on the research and educational practice of colleges and universities, is still poorly understood. Hiringâor refusing to hireâacademic partners can have a dramatic impact on morale; and faculty are hardly of one mind about its virtue.
The American public, whose trust in higher ed is at historic lows, may wonder why an employment practice that would seem shocking in most other industries is so commonplace on campuses. (Imagine if jobs were handed out to spouses at investment banks, aerospace contractors, or magazines.) This may be seen as evidence that higher ed is out of touch with reality. Or else it might be taken to suggest that colleges and universities are far ahead of that realityâthat they can even be a model, if imperfect, for businesses that understand their workers as members of families too.
If partner hiring sounds like nepotism, thatâs because it is, by definition. During the first half of the 20th century, most universities maintained strict policies that prohibited the recruitment of wives and husbands into faculty positions at the same school or department. This rule may have kept some unqualified spouses out of academia, but it also prevented many qualified spousesâin particular, many qualified womenâfrom securing jobs. One famous instance of the latter was Maria Mayer, a theoretical physicist who won a 1963 Nobel Prize; sheâd been blocked from a full-fledged faculty job at Johns Hopkins University in the 1930s because her husband worked there too.
Over time, schools dropped the anti-nepotism rules and looked for new ways to manage a tricky situation. Clearly, recruiting and retaining the best professors required making accommodations for their spouses. The schools that did so performed better than the ones that didnât. I spoke about the practice with two labor economists, Matthew Kahn of the University of Southern California and Harvardâs Larry Katz. Both are married to other, very successful labor economists; Katzâs wife is the Nobel Prize winner Claudia Goldin. And both told me that excellent colleges and universities in small cities are ranked lower than their big-city peers in part because of colocation issues that make it harder for these schools to bring in power couples. Given that circumstance, Katz said, an ambitious institution such as Williams College, located in a tiny town in Massachusetts three hours from Boston and New York City, has no choice but to hire partners to compete for the best faculty.
Beyond recruitment, schools may gain other advantages by competing for partners, says Lisa Wolf-Wendel, a professor at the University of Kansas and a co-author of The Two-Body Problem: Dual-Career-Couple Hiring Practices in Higher Education. âInstitutions went bonkersâ for partner hires over the past few decades, she told me, when they started cutting lifetime tenure-track positions in favor of shorter-term jobs. That meant they could offer lower-commitment, lower-pay, non-tenure-track posts to one half of a faculty couple, who might be thrilled to take such a deal instead of nothing. These days, she said, making people happy through the use of partner hires may be less important than saving money.
[Read: Universities have a computer-science problem]
The same policies and practices may also help schools address an ongoing challenge: faculty diversity. Partner hires help increase gender equity in fields that need it. They may also help to even out the proportions of faculty by race. This is often celebrated as another salutary effect of creating jobs for peopleâs spouses. Daniel J. Blake, a professor in the department of educational-policy studies at Georgia State University, is married to a neuroscientist, Ivette Planell-Mendez, who is finishing a doctorate at Princeton. He told me that her experience in academia inspired him to formally study partner hires, and that his research has convinced him that this system is âessentialâ to advancing equity.
Whatever a college or universityâs initial motivations for making partner hires, doing so provides long-term benefits for faculty and students alike, Wolf-Wendel said. âThe best way for college students to be successfulâthat is, to graduateâis to have high levels of faculty interaction.â Solving professorsâ two-body problems makes this feasible, by keeping faculty more engaged with campus life. And once an academic couple has solved their two-body problem, they may be less likely to stray. Nathan Singh and Abby Green are both physicians and cancer biologists with appointments at the Washington University School of Medicine. âWeâve really gotten our feet down,â Singh says of his situation; the kids are in school and soccer, he and Green are content in their jobs, and it would be a big pain to seek and then negotiate two new ones, let alone move to another city for them.
As a university professor whose spouse is not an academic, Iâll admit that I have struggled to accept the doctrine of partner hires. Why should someone be handed a second job and income for the happenstance of their family life, when my household would never get the same consideration? That prejudice only worsened when I saw friends, acquaintances, and job candidates turn down really good partner deals that didnât meet their expectationsâa high-paid staff job created expressly for a partner, for example, or a long-term faculty position that wasnât also on the tenure track. And surely office life is complicated by partner hires. I once interviewed for a job in a small academic department where, by my count, 40 percent of the faculty were married to each other. It seemed incestuous.
In The Two-Body Problem, Wolf-Wendel and her co-authors spend a chapter running through these and other âcommon concernsâ about the practice. Among the issues that they raise, fairness is foremost. Faculty without academic partners may feel like theyâre getting screwed, or that being single has become a source of discrimination. Even faculty couples who have had the benefit of partner hires may themselves feel poorly treated, if, for instance, they see other couples get better deals than they did.
Some academics have the further worry that spouses fill up jobs that could have gone to other, more qualified candidates. In truth, some open roles would not exist but for the partner hires that facilitated them. And research suggests that scholars in academic marriages are no less (but also no more) productive, in terms of publications, than those who are not.
The use of nepotism as a means of adding to diversity may also come off as cynical. I have sometimes heard academic deans confess that partner hires offered the easiest route to better equity in their departments. And understandably, academics are aware that diversity can offer some advantage in their cutthroat profession. John Dean Davis, an architectural historian at the Ohio State University, told me that he and his then-wife, who is both Latina and âa rock starâ in her field, presented âan easy case to the deans and provosts at the very white, midwestern schoolsâ where they received job offers.
Even when administratorsâ aims are pure, creating jobs for peopleâs spouses may still produce unwanted outcomes. Women tend to accept worse partner-hire offers than men, Blake told me. Hiring committees have been shown to assume that women academics are more âmovableâ than men, in general. Blakeâs research also found that Black and Latino faculty reported facing significant scrutiny and skepticism while being considered for partner-hire roles.
The same policies may also serve to narrow the variety of scholarship that is present on campus. âOne of the mantras of people who are anti-higher-ed right now is that there is a lack of diversity of thought, that the academy is too liberal,â Wolf-Wendel told me. âMy guess is that if you hire spouses, youâre just going to reinforce that.â She added that spousal hires may reinforce the publicâs sense that tenure is elitist and that assisting a spouse or partner to find work just extends that elitism.
Partner-hire-seeking professors might themselves be guilty of restricted thinking. Troost suggested that the narrow scope of scholarly training, especially at elite institutions, tends to give early-career academics the sense that they are owed a job within their own, small areas of scholarship; and I suspect that couples who have just received their Ph.D.s may now expect to find two perfect jobs instead of one. A policy of finding work for peopleâs spouses may only stiffen these beliefs. Instead of encouraging job candidates to broaden their skillsâand thus increase their value for potential employers and future students alikeâit tells them that work in higher ed can be tailored to their needs.
The two-body problem, and the tensions around its partner-hire solution, may only worsen in the years to come. A partner needing a job used to be a small, idiosyncratic thing, Katz, one of the labor economists, told me. âNow thereâs almost no case of senior- or mid-career recruitment where this isnât an issue.â
But as two-income families continue to proliferate, this challenge shouldnât necessarily be limited to academia. The particular conditions of academic hiringâlimited job market with localized opportunitiesâmay exacerbate two-body problems. But surely doctors, lawyers, and financiers have some version of them too. What if the oddity of partner hiring isnât that itâs particular to higher education, but that other industries havenât followed suit? Maybe itâs less perverse to treat workers as members of families with collective needs than it is to assume that every individual ought to fend for themselves.
Research from Kahn, one of the labor economists, and his labor-economist wife, Dora L. Costa, suggests that college-educated couples in any industry tend to be drawn to major metropolitan areas with more opportunities for them to seek out dual careers. As industry and population has consolidated into fewer, larger cities, the small or midsize ones have suffered. Perhaps nonacademic employers in those places would be smart to lure workers with family deals, just like the deans in higher ed. Or maybe colleges and universities, with their endowments and tax exemptions, should take a greater interest in their local communities, and invest the same money theyâd devote to hiring a second tenure-track professor into jobs at area companies.
A âfamily business,â when it doesnât refer to the Mafia, has always been seen as a moral and economic good in America. We celebrate the mom-and-pop establishmentânot just for its service to a community, but for its embrace of the idea that affection and commitment can be a catalyst for labor. Professors (and other professionals) might portray themselves as superior to such common ideas, but of course they are not. âAcademics have always fallen in love with other academics,â Kahn told me by phone from his house in Los Angeles. I could hear Costa in the background and then Kahnâs muffled voice as he tried to lure her to the phone, too, to put down whatever task she was undertaking in favor of discussing economics instead.
This week, OpenAI launched what its chief executive, Sam Altman, called âthe smartest model in the worldââa generative-AI program whose capabilities are supposedly far greater, and more closely approximate how humans think, than those of any such software preceding it. The start-up has been building toward this moment since September 12, a day that, in OpenAIâs telling, set the world on a new path toward superintelligence.
That was when the company previewed early versions of a series of AI models, known as o1, constructed with novel methods that the start-up believes will propel its programs to unseen heights. Mark Chen, then OpenAIâs vice president of research, told me a few days later that o1 is fundamentally different from the standard ChatGPT because it can âreason,â a hallmark of human intelligence. Shortly thereafter, Altman pronounced âthe dawn of the Intelligence Age,â in which AI helps humankind fix the climate and colonize space. As of yesterday afternoon, the start-up has released the first complete version of o1, with fully fledged reasoning powers, to the public. (The Atlantic recently entered into a corporate partnership with OpenAI.)
On the surface, the start-upâs latest rhetoric sounds just like hype the company has built its $157 billion valuation on. Nobody on the outside knows exactly how OpenAI makes its chatbot technology, and o1 is its most secretive release yet. The mystique draws interest and investment. âItâs a magic trick,â Emily M. Bender, a computational linguist at the University of Washington and prominent critic of the AI industry, recently told me. An average user of o1 might not notice much of a difference between it and the default models powering ChatGPT, such as GPT-4o, another supposedly major update released in May. Although OpenAI marketed that product by invoking its lofty missionââadvancing AI technology and ensuring it is accessible and beneficial to everyone,â as though chatbots were medicine or foodâGPT-4o hardly transformed the world.
[Read: The AI boom has an expiration date]
But with o1, something has shifted. Several independent researchers, while less ecstatic, told me that the program is a notable departure from older models, representing âa completely different ballgameâ and âgenuine improvement.â Even if these modelsâ capacities prove not much greater than their predecessorsâ, the stakes for OpenAI are. The company has recently dealt with a wave of controversies and high-profile departures, and model improvement in the AI industry overall has slowed. Products from different companies have become indistinguishableâChatGPT has much in common with Anthropicâs Claude, Googleâs Gemini, xAIâs Grokâand firms are under mounting pressure to justify the technologyâs tremendous costs. Every competitor is scrambling to figure out new ways to advance their products.
Over the past several months, Iâve been trying to discern how OpenAI perceives the future of generative AI. Stretching back to this spring, when OpenAI was eager to promote its efforts around so-called multimodal AI, which works across text, images, and other types of media, Iâve had multiple conversations with OpenAI employees, conducted interviews with external computer and cognitive scientists, and pored over the start-upâs research and announcements. The release of o1, in particular, has provided the clearest glimpse yet at what sort of synthetic âintelligenceâ the start-up and companies following its lead believe they are building.
The company has been unusually direct that the o1 series is the future: Chen, who has since been promoted to senior vice president of research, told me that OpenAI is now focused on this ânew paradigm,â and Altman later wrote that the company is âprioritizingâ o1 and its successors. The company believes, or wants its users and investors to believe, that it has found some fresh magic. The GPT era is giving way to the reasoning era.
Last spring, I met Mark Chen in the renovated mayonnaise factory that now houses OpenAIâs San Francisco headquarters. We had first spoken a few weeks earlier, over Zoom. At the time, he led a team tasked with tearing down âthe big roadblocksâ standing between OpenAI and artificial general intelligenceâa technology smart enough to match or exceed humanityâs brainpower. I wanted to ask him about an idea that had been a driving force behind the entire generative-AI revolution up to that point: the power of prediction.
The large language models powering ChatGPT and other such chatbots âlearnâ by ingesting unfathomable volumes of text, determining statistical relationships between words and phrases, and using those patterns to predict what word is most likely to come next in a sentence. These programs have improved as theyâve grownâtaking on more training data, more computer processors, more electricityâand the most advanced, such as GPT-4o, are now able to draft work memos and write short stories, solve puzzles and summarize spreadsheets. Researchers have extended the premise beyond text: Todayâs AI models also predict the grid of adjacent colors that cohere into an image, or the series of frames that blur into a film.
The claim is not just that prediction yields useful products. Chen claims that âprediction leads to understandingââthat to complete a story or paint a portrait, an AI model actually has to discern something fundamental about plot and personality, facial expressions and color theory. Chen noted that a program he designed a few years ago to predict the next pixel in a grid was able to distinguish dogs, cats, planes, and other sorts of objects. Even earlier, a program that OpenAI trained to predict text in Amazon reviews was able to determine whether a review was positive or negative.
Todayâs state-of-the-art models seem to have networks of code that consistently correspond to certain topics, ideas, or entities. In one now-famous example, Anthropic shared research showing that an advanced version of its large language model, Claude, had formed such a network related to the Golden Gate Bridge. That research further suggested that AI models can develop an internal representation of such concepts, and organize their internal âneuronsâ accordinglyâa step that seems to go beyond mere pattern recognition. Claude had a combination of âneuronsâ that would light up similarly in response to descriptions, mentions, and images of the San Francisco landmark. âThis is why everyoneâs so bullish on prediction,â Chen told me: In mapping the relationships between words and images, and then forecasting what should logically follow in a sequence of text or pixels, generative AI seems to have demonstrated the ability to understand content.
The pinnacle of the prediction hypothesis might be Sora, a video-generating model that OpenAI announced in February and which conjures clips, more or less, by predicting and outputting a sequence of frames. Bill Peebles and Tim Brooks, Soraâs lead researchers, told me that they hope Sora will create realistic videos by simulating environments and the people moving through them. (Brooks has since left to work on video-generating models at Google DeepMind.) For instance, producing a video of a soccer match might require not just rendering a ball bouncing off cleats, but developing models of physics, tactics, and playersâ thought processes. âAs long as you can get every piece of information in the world into these models, that should be sufficient for them to build models of physics, for them to learn how to reason like humans,â Peebles told me. Prediction would thus give rise to intelligence. More pragmatically, multimodality may also be simply about the pursuit of dataâexpanding from all the text on the web to all the photos and videos, as well.
Just because OpenAIâs researchers say their programs understand the world doesnât mean they do. Generating a cat video doesnât mean an AI knows anything about catsâit just means it can make a cat video. (And even that can be a struggle: In a demo earlier this year, Sora rendered a cat that had sprouted a third front leg.) Likewise, âpredicting a text doesnât necessarily mean that [a model] is understanding the text,â Melanie Mitchell, a computer scientist who studies AI and cognition at the Santa Fe Institute, told me. Another example: GPT-4 is far better at generating acronyms using the first letter of each word in a phrase than the second, suggesting that rather than understanding the rule behind generating acronyms, the model has simply seen far more examples of standard, first-letter acronyms to shallowly mimic that rule. When GPT-4 miscounts the number of râs in strawberry, or Sora generates a video of a glass of juice melting into a table, itâs hard to believe that either program grasps the phenomena and ideas underlying their outputs.
These shortcomings have led to sharp, even caustic criticism that AI cannot rival the human mindâthe models are merely âstochastic parrots,â in Benderâs famous words, or supercharged versions of âautocomplete,â to quote the AI critic Gary Marcus. Altman responded by posting on social media, âI am a stochastic parrot, and so r u,â implying that the human brain is ultimately a sophisticated word predictor, too. Â
Altmanâs is a plainly asinine claim; a bunch of code running in a data center is not the same as a brain. Yet itâs also ridiculous to write off generative AIâa technology that is redefining education and art, at least, for better or worseâas âmereâ statistics. Regardless, the disagreement obscures the more important point. It doesnât matter to OpenAI or its investors whether AI advances to resemble the human mind, or perhaps even whether and how their models âunderstandâ their outputsâonly that the products continue to advance.
OpenAIâs new reasoning models show a dramatic improvement over other programs at all sorts of coding, math, and science problems, earning praise from geneticists, physicists, economists, and other experts. But notably, o1 does not appear to have been designed to be better at word prediction.
According to investigations from The Information, Bloomberg, TechCrunch, and Reuters, major AI companies including OpenAI, Google, and Anthropic are finding that the technical approach that has driven the entire AI revolution is hitting a limit. Word-predicting models such as GPT-4o are reportedly no longer becoming reliably more capable, even more âintelligent,â with size. These firms may be running out of high-quality data to train their models on, and even with enough, the programs are so massive that making them bigger is no longer making them much smarter. o1 is the industryâs first major attempt to clear this hurdle.
When I spoke with Mark Chen after o1âs September debut, he told me that GPT-based programs had a âcore gap that we were trying to address.â Whereas previous models were trained âto be very good at predicting what humans have written down in the past,â o1 is different. âThe way we train the âthinkingâ is not through imitation learning,â he said. A reasoning model is ânot trained to predict human thoughtsâ but to produce, or at least simulate, âthoughts on its own.â It follows that because humans are not word-predicting machines, then AI programs cannot remain so, either, if they hope to improve.
More details about these modelsâ inner workings, Chen said, are âa competitive research secret.â But my interviews with independent researchers, a growing body of third-party tests, and hints in public statements from OpenAI and its employees have allowed me to get a sense of whatâs under the hood. The o1 series appears âcategorically differentâ from the older GPT series, Delip Rao, an AI researcher at the University of Pennsylvania, told me. Discussions of o1 point to a growing body of research on AI reasoning, including a widely cited paper co-authored last year by OpenAIâs former chief scientist, Ilya Sutskever. To train o1, OpenAI likely put a language model in the style of GPT-4 through a huge amount of trial and error, asking it to solve many, many problems and then providing feedback on its approaches, for instance. The process might be akin to a chess-playing AI playing a million games to learn optimal strategies, Subbarao Kambhampati, a computer scientist at Arizona State University, told me. Or perhaps a rat that, having run 10,000 mazes, develops a good strategy for choosing among forking paths and doubling back at dead ends.
[Read: Silicon Valleyâs trillion-dollar leap of faith]
Prediction-based bots, such as Claude and earlier versions of ChatGPT, generate words at a roughly constant rate, without pauseâthey donât, in other words, evince much thinking. Although you can prompt such large language models to construct a different answer, those programs do not (and cannot) on their own look backward and evaluate what theyâve written for errors. But o1 works differently, exploring different routes until it finds the best one, Chen told me. Reasoning models can answer harder questions when given more âthinkingâ time, akin to taking more time to consider possible moves at a crucial moment in a chess game. o1 appears to be âsearching through lots of potential, emulated âreasoningâ chains on the fly,â Mike Knoop, a software engineer who co-founded a prominent contest designed to test AI modelsâ reasoning abilities, told me. This is another way to scale: more time and resources, not just during training, but also when in use.
Here is another way to think about the distinction between language models and reasoning models: OpenAIâs attempted path to superintelligence is defined by parrots and rats. ChatGPT and other such productsâthe stochastic parrotsâare designed to find patterns among massive amounts of data, to relate words, objects, and ideas. o1 is the maze-running rodent, designed to navigate those statistical models of the world to solve problems. Or, to use a chess analogy: You could play a game based on a bunch of moves that youâve memorized, but thatâs different from genuinely understanding strategy and reacting to your opponent. Language models learn a grammar, perhaps even something about the world, while reasoning models aim to use that grammar. When I posed this dual framework, Chen called it âa good first approximationâ and âat a high level, the best way to think about it.â
Reasoning may really be a way to break through the wall that the prediction models seem to have hit; much of the tech industry is certainly rushing to follow OpenAIâs lead. Yet taking a big bet on this approach might be premature.
For all the grandeur, o1 has some familiar limitations. As with primarily prediction-based models, it has an easier time with tasks for which more training examples exist, Tom McCoy, a computational linguist at Yale who has extensively tested the preview version of o1 released in September, told me. For instance, the program is better at decrypting codes when the answer is a grammatically complete sentence instead of a random jumble of wordsâthe former is likely better reflected in its training data. A statistical substrate remains.
François Chollet, a former computer scientist at Google who studies general intelligence and is also a co-founder of the AI reasoning contest, put it a different way: âA model like o1 ⊠is able to self-query in order to refine how it uses what it knows. But it is still limited to reapplying what it knows.â A wealth of independent analyses bear this out: In the AI reasoning contest, the o1 preview improved over the GPT-4o but still struggled overall to effectively solve a set of pattern-based problems designed to test abstract reasoning. Researchers at Apple recently found that adding irrelevant clauses to math problems makes o1 more likely to answer incorrectly. For example, when asking the o1 preview to calculate the price of bread and muffins, telling the bot that you plan to donate some of the baked goodsâeven though that wouldnât affect their costâled the model astray. o1 might not deeply understand chess strategy so much as it memorizes and applies broad principles and tactics.
Even if you accept the claim that o1 understands, instead of mimicking, the logic that underlies its responses, the program might actually be further from general intelligence than ChatGPT. o1âs improvements are constrained to specific subjects where you can confirm whether a solution is trueâlike checking a proof against mathematical laws or testing computer code for bugs. Thereâs no objective rubric for beautiful poetry, persuasive rhetoric, or emotional empathy with which to train the model. That likely makes o1 more narrowly applicable than GPT-4o, the University of Pennsylvaniaâs Rao said, which even OpenAIâs blog post announcing the model hinted at, stating: âFor many common cases GPT-4o will be more capable in the near term.â
[Read: The lifeblood of the AI boom]
But OpenAI is taking a long view. The reasoning models âexplore different hypotheses like a human would,â Chen told me. By reasoning, o1 is proving better at understanding and answering questions about images, too, he said, and the full version of o1 now accepts multimodal inputs. The new reasoning models solve problems âmuch like a person would,â OpenAI wrote in September. And if scaling up large language models really is hitting a wall, this kind of reasoning seems to be where many of OpenAIâs rivals are turning next, too. Dario Amodei, the CEO of Anthropic, recently noted o1 as a possible way forward for AI. Google has recently released several experimental versions of Gemini, its flagship model, all of which exhibit some signs of being maze ratsâtaking longer to answer questions, providing detailed reasoning chains, improvements on math and coding. Both it and Microsoft are reportedly exploring this âreasoningâ approach. And multiple Chinese tech companies, including Alibaba, have released models built in the style of o1.
If this is the way to superintelligence, it remains a bizarre one. âThis is back to a million monkeys typing for a million years generating the works of Shakespeare,â Emily Bender told me. But OpenAIâs technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problemâbut only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes. Yesterday, alongside the release of the full o1, OpenAI announced a new premium tier of subscription to ChatGPT that enables users, for $200 a month (10 times the price of the current paid tier), to access a version of o1 that consumes even more computing powerâmoney buys intelligence. âThere are now two axes on which we can scale,â Chen said: training time and run time, monkeys and years, parrots and rats. So long as the funding continues, perhaps efficiency is beside the point.
The maze rats may hit a wall, eventually, too. In OpenAIâs early tests, scaling o1 showed diminishing returns: Linear improvements on a challenging math exam required exponentially growing computing power. That superintelligence could use so much electricity as to require remaking grids worldwideâand that such extravagant energy demands are, at the moment, causing staggering financial lossesâare clearly no deterrent to the start-up or a good chunk of its investors. Itâs not just that OpenAIâs ambition and technology fuel each other; ambition, and in turn accumulation, supersedes the technology itself. Growth and debt are prerequisites for and proof of more powerful machines. Maybe thereâs substance, even intelligence, underneath. But there doesnât need to be for this speculative flywheel to spin.
This article was featured in the One Story to Read Today newsletter. Sign up for it here.
In pizza heaven, it is always 950 degrees. The temperature required to make an authentic Neapolitan pizza is stupidly, unbelievably hotâmore blast furnace than broiler. My backyard pizza oven can get all the way there in just 15 minutes. Crank it to the max, and the Ooni Koda will gurgle up blue flames that bounce off the top of the dome. In 60 seconds, raw dough inflates into pillowy crust, cheese dissolves into the sauce, and a few simple ingredients become a full-fledged pizza.
Violinists have the Stradivarius. Sneakerheads have the Air Jordan 1. Pizza degenerates like me have the Ooni. I got my first one three years ago and have since been on a singular, pointless quest to make the best pie possible. Unfortunately, I am now someone who knows that dough should pass the windowpane test. Do not get me started on the pros and cons of Caputo 00 flour.
An at-home pizza oven is a patently absurd thing to buy. Much to my wifeâs consternation, I now own two. Itâs all the more ridiculous considering that I live in New York City, where amazing pizzerias are about as easy to spot as rats, and space is a precious commodity; this is not a town that favors single-use kitchen tools. These devices do one thing well (pizza) and only that one thing (pizza). My 12-inch Ooni is among the cheapest and smallest high-heat pizza ovens out there, and it still clocks in at $400 and 20 pounds. You can get an 11-in-1 combination Instant Pot and air fryer for a fraction of the cost.
But somehow, the portable-pizza-oven market is booming. Ooni makes nine different modelsâincluding a $900 indoor version thatâs like a souped-up toaster ovenâand similar products are available from companies including Cuisinart, Ninja, Gozney, and Breville. Oprah included a pizza oven in her 2023 gift guide. Florence Pugh has Instagrammed her portable-oven odysseys.
The paradox of pizza has long been this: Americaâs favorite foodâone that an eighth of the country eats on any given dayâis difficult, if not impossible, to make well at home. Not anymore. We are in the middle of a pizza revolution; there has simply never been a better time to make pizza at home.
The traditional home oven is great for lots of things: chocolate-chip cookies, Thanksgiving turkeys, roasted brussels sprouts, whatever. Pizza is not one of them. Letâs consider a classic New York pie, which doesnât require the same extreme heat as its Neapolitan brethren. It sounds weird, but you want the pie to be medium rare. The crust should be crispy but still pliable, the cheese melted but not burned. The only way to achieve that is to blast pizza dough with heat from both top and bottomâabout 600 degrees at the very least, preferably 650. But nearly every kitchen range tops out at 550 degrees. âBy whatever accident of fate, the level of heat thatâs necessary is just out of the reach of a typical home oven,â Adam Ragusea, a food YouTuber who is helping open up a pizzeria in suburban Knoxville, Tennessee, told me. That temperature discrepancy matters a lot. Try making pizza on a simple aluminum sheet tray in your home oven, and by the time the crust is golden brown, itâll be brittle like a cracker and the cheese will have puddled into grease.
Overcoming the limitations of the reviled kitchen range has long stumped homemade pizza enthusiasts. Julia Child laid out tiles in her oven to soak up the ovenâs heat and transfer it to the crust for extra crispiness. That inspired the pizza stone, an oversize ceramic tile that you insert into your oven. At times, the human will to make a decent pizza at home borders on farce. Before making pizza, some recipes suggest that you should leave your oven at full heat for 45 minutes, or an hour, or even two. In the 2000s, one software engineer in Atlanta realized that in self-cleaning mode, ovens can hit 800 degreesâbut the door locks. So he snipped off the safety latch with a pair of garden shears. Others have done the same, voiding the warranty on their oven in the name of better pizza.
Still, nothing you can do in a standard kitchen competes with the tools that a pizzeria has at its disposal. Traditional commercial pizza ovens are gigantic and expensive, sometimes costing upwards of $20,000. Some of the oldest pizzerias in the United States still use their original ovens, manufactured nearly a century ago. Even if your oven reaches 750 degrees, its walls âare not going to be as thick as the walls of a commercial pizza oven,â J. Kenji LĂłpez-Alt, a chef and the author of The Food Lab: Better Home Cooking Through Science, told me. âSo thereâs just less heat energy trapped in there.â
Portable ovens are like the iPhones of home pizza making: They have changed everything. The prototype for the first Ooni, launched on Kickstarter in 2012, looks more like a medieval torture device than anything you could feasibly use to cook. It was soon joined by the Roccbox, a stainless-steel dome that can run on either wood or gas. Newer models have gotten progressively better. The ovens arenât that complicated, but they are genius. They are fairly inexpensive, and small enough to take on camping trips and beach vacations. For the home cook who isnât making a hundred pizzas in one go, âitâll do a great job at mimicking a restaurant oven,â LĂłpez-Alt said.
[Caroline Mimbs Nyce: J. Kenji LĂłpez-Alt thinks youâll be fine with an induction stove]
For a while, these ovens could be found in relatively few backyards. Then America went pizza-oven wild during the pandemic. Whatâs better than nurturing a sourdough starter? Nurturing a sourdough starter, topping it with sauce, and launching it into the flames. In 2020, Ooni sales increased by 300 percent. The ovens have stayed in high demand, Joe Derochowski, an analyst at the market-research firm Circana, told me. At housewares shows these days, he said, âyou see pizza ovens all over.â Scott Wiener, a pizza expert who leads tours in New York City, always asks his groups if they make pizza at home and how they cook it. âOne person will say âOoni,â every time,â he told me.
Perhaps part of the appeal of these home ovens is that they satisfy the same urge that using a grill does: Letâs face it; fire is fun. Traditionally, though, pizza has been thought of as an extension of bakingâ; in Italy, pizza originated with bread bakers looking to sell cheap food to workers. Many of the earliest pizzerias in the U.S. were founded by bakers who had arrived from Italy. But making pizza is really a lot more like grilling a burger than baking bread. Let your pizza sit for a few seconds too long, and the flames will take the dough from lightly singed to fully incinerated. (All pizza is better than no pizzaâexcept when that pizza is so burnt, it tastes like ash.)
Home pizza ovens represent the next generation of grilling; they take those familiar, irresistible propane flames and apply them to another arena of cooking entirely. And as with grilling, to make good pizza, you need accoutrements. I slide my homemade pizza into the Ooni using one tool, spin it around with another, and then monitor the heat with yet another. Pizza ovens âecho the barbecue world and the home-grilling world,â Wiener said. For $1,000, you can buy an Ooni that lets you cook three pizzas at once and remotely track the temperature from your phone. As Ragusea put it: âMen love their fucking toys.â
Tools and gadgets can only take you so far. Even with the fanciest oven on the market, you still have to learn how to stretch the dough and get it into the oven without creating an oblong mess. âThereâs all these special techniques involved in pizza that donât apply to any other kind of cooking,â LĂłpez-Alt said. If you want to learn, there are pizza forums, pizza Facebook groups, and so, so many pizza YouTube videos.
My first pizza, made in my kitchen oven, was so oversauced that it was more like tomato soup in a bread bowl. A ridiculous number of videos later, my pizza game has gone from JV to the big leagues. Pizza ovens beget videos on how to use them, begetting more interest in ovens, begetting more videos. It is a spin wheel of great pizza.
Even in the Ooni, my pizzas are not better or even that much cheaper than what youâd find in a great pizzeria, but they are mine. I get why my fellow pizza diehards gather online not only to hone their technique, but also to share their creations (even when they might give any Italian nonna a heart attack). Candied lemon and ricotta pizza! Mexican street corn pizza! Detroit-style Chongqing-chicken pizza topped with green onion and sesame seeds!
The irony of the pizza revolution is that this should be a moment for a pizza recession. Remember when the only thing you could get delivered was pizza, and maybe Chinese food? When you least wanted to cook, it was pizza time: In 2011, one of the biggest days for pizza eating was the day before Thanksgiving. Now you can DoorDash penne alla vodka or a pork banh mi. Yet Americans have fallen even deeper in love with pizza.
[From the October 1949 issue: Pizza, an introduction]
You can now find amazing pizza just about everywhere. Pizza pop-ups are opening using newer, larger versions of the cheap portable ovens. âFive years ago, if you wanted to open a mobile pizza company, then you would have to spend easily $5,000 on an oven and a trailer,â Wiener said. âNow you can spend half of that, and get two of these ovens.â
Still, the pizza sicko doesnât always win. Recently, the pizza cravings got me late one evening. I fired up the Ooni, fiddled with the dough, and was ready to launch a pie when my hunger sapped my concentration. The dough had a hole in it, and disintegrated into sloppy goo in the oven. So much for that.
Part of getting a pizza oven is learning how to use it. The other part is learning when you should just leave it to the professionals.
This article appears in the January 2025 print edition with the headline âIâm a Pizza Sicko.â When you buy a book using a link on this page, we receive a commission. Thank you for supporting The Atlantic.
Updated at 4:35 p.m. ET on December 5, 2024
To enter the Strother School of Radical Attention, you have to walk through what has come to be known as âinfluencer alley.â Any time of day or night, dozens of people will be standing along this brick-paved part of Brooklyn, snapping the same Instagram photo with the Manhattan Bridge and East River in the background. Thereâs nothing wrong with this, but it struck me as a little funny while I headed to a course about unraveling the coercive powers of social media, phones, and digital life.
That class, âHow to Build an Attention Sanctuary,â was a six-week workshop series focused on teaching parents and other caretakers how to ârediscover the joy of undivided attentionâ and help their family do the same. (The series was technically split into two three-week parts, and participants were encouraged to enroll in both.) The problem this description gestures at is broadly familiar by now: A lot of people view fractured attention, caused by omnipresent technology, as a primary trouble of our times. This fracturing makes them feel anxious, depressed, disconnected from one another and from reality.
The narrative that digital technology has produced a new kind of alienation and distraction has been popularized in recent years in best-selling books such as Jenny Odellâs How to Do Nothing: Resisting the Attention Economy and buzzy documentaries such as Netflixâs The Social Dilemma. But where parenting is concerned, the issue feels especially urgent, as young people struggle with a rise in mental-health problems that some have blamed on social media and screen time. Some parents also worry that their kids, even if they avoid the worst negative outcomes, are growing up without the urge to play outside or read for fun or do other abstract but important-seeming things, such as making stuff up in their head, to fend off boredom.
I was attracted to the class, despite not having any children, because I am interested in the idea that our devices have become obstacles in the pursuit of a fulfilling lifeâand I wanted to know more about what a âradicalâ change might look like. The Strother School of Radical Attention, or SORA, is obviously offering a niche product for a very specific milieu (I learned about it from the Instagram Story of a professional book critic who lives in New York; it cost me $560 in total, though SORA offers a sliding scale and scholarships depending on one's needs). But it is also part of a bigger picture. For years and years, people have regretted the time and autonomy theyâve lost to their phoneâthe time and autonomy that their children will lose.
Is there actually a problem that âradical attentionâ can solve? I enrolled to find out.
SORA is really just one room on the seventh floor of a basic commercial building. Itâs cozy: Trains rumble past the windows; wine bottles are repurposed as vases; a bookshelf offers a mix of reportage on the tech industry and creative nonfiction about spirituality and interior life (John Carreyrouâs book about the downfall of Theranos, Simone Weilâs Waiting for God).
The school is part of a nonprofit organization called the Institute for Sustained Attention and was founded by a group (âcollectiveâ) of people who call themselves the Friends of Attention, borrowing from the Quakers. A year ago, some of them wrote a New York Times opinion article that repeatedly compares the âextractive profit modelsâ of Big Tech to fracking and invokes Rousseauâs social contract: âOur attention is born free, but is, increasingly, everywhere in chains.â In other materials, the schoolâs creators describe themselves as attention activists. (They have published a Manifesto for the Freedom of Attention.)
The class was led by Jac Mullen, a New Haven, Connecticut, public-school teacher and writer. My classmates were a small group of very kind people in their 30s and 40s, most of them raising young children in the same generally affluent area of Brooklyn. An English teacher from a wealthy neighborhood in Manhattan was the only parent of a teenager. We spent much of the first class saying why we were there. The English teacher said she was at a loss after seeing kids get worse at reading and other basic skills each year. âThis is the only place Iâve found that seems focused on this change,â she said. The others feared the example they were setting for their kids with their doomscrolling and craned necks. I said my job is to stare at a computer all day and receive Slack messages, which I fear is programming me to focus only in 20-second intervals.
[Read: No one knows exactly what social media is doing to teens]
We started with our own childhoods and searched for answers there. Mullen pressed us to remember the âattentional valuesâ we had learned as children, back when the world was gloriously boring. What had our minds been like? Where did they wander? I talked about sitting in Sunday school; the English teacher talked about sitting in a car.
It reminded me of a trend Iâve noticed on TikTok the past few years. People will post a video of a window on a rainy day and say something about how, when they were kids, they would watch raindrops âraceâ down the glass or âeatâ each other when they crossed paths, for lack of anything better to do, and their minds would wander. (I did this too.) They long for these times, they say, as they post about them on TikTok.
Most weeks, the class involved some kind of group activity. One night we paired up for a âworld-givingâ walk, in which we wandered the surrounding area while describing what we were seeing and asking each other questions about it. On another, we watched two of our classmates use their phones for five minutes and then tried to guess what they had been doing. We spent nearly two hours one week looking at and then discussing a nearby giant sculpture of a babyâs head. (For this, we followed, mysteriously, instructions written by âOrder of the Third Bird,â in reference to a story by Pliny the Elder.)
There were also exercises for us to complete. On the first day, we received a homework assignment to conduct a âhousehold attention audit.â Throughout the week, we were to jot down whenever we observed ourselves or a family member âdeeply absorbed in their device,â as well as times that we experienced strong connection and tech-free moments. We were also supposed to notice the spaces where these things were happening: the living room, the subway, a park. The goal was to start to develop âa basic meta-attentional awarenessââto notice when our attention was moving from one thing to another and why.
I wrote down that I was annoyed with my boyfriend when he texted while we were walking together, and that I felt a strong connection to him while watching baseball together. As far as our living space, well, our bedroom doesnât have a TV, so thatâs goodâbut we plug our phones in on our nightstands, so maybe thatâs bad. When a worksheet asked me to think about âspecific changesâ I could make to improve my familyâs âattention ecology,â I worried that there was not much to be done. (Leave our laptops outside the front door at the end of the workday?) But I was hopeful. I came up with some little ideas, such as âno reaching for my phone before coffeeâ and âno taking my phone with me to the lunchroom at work.â
Those adjustments were easy, so for my next homework assignment, I wrote boldly about my truer desires, which embarrassed me to articulate, because they were real. I wanted to be more patient. I didnât want to dismiss things out of hand as boring just because I was having a hard time concentrating. I didnât want to waste my time watching the stupidest videos ever made just because theyâre there. Mullen asked us to imagine what our lives would be like at the end of the course and write a diary entry from the future. âI am happy to be alone with my thoughts or together in conversation with other people,â I wrote, covering the page with my arm like a middle schooler.
More than activities and worksheets, though, the classes were anchored by short lectures followed by group discussions. âI feel a little like Al Gore walking around with a growing slideshow,â Mullen joked when he started his presentation one week. âThis is as important as climate change.â
[Jonathan Haidt: Get phones out of schools now]
That day, he walked us through an emerging field of study called âparental technoference,â spending some time on recently published spin-offs of the famous âStill Faceâ experiments conducted by the child psychologist Ed Tronick in the 1970s. The original experiments showed that infants will try to engage their parents by babbling, laughing, waving, and so forth, and that they become frantic and disturbed when their parents react with a stony expression.
The updated versions involve tests in which parents are distracted by their phone. The idea is that modern parents have âstillâ faces fairly often, which could be detrimental to their childrenâs emotional development. This made for lively discussion, though not of the potential or limitations of the research itself. Again, we talked about our lives and the small things that we wanted to be different.
Adam Pearce, a writer and life coach who helped instruct the class, talked about teaching his kids that phones are tools to be used for specific purposes. He was thinking about buying extra phone chargers and placing them throughout the home. In each room, the phone would have its own house. This way, the phone would be out of sight and out of reach, while staying charged. The effect would be helped by adding some ritual, such as shouting, âThe phone is going home!â or doing a choreographed dance.
This seemed ridiculous but promising. It reminded everyone of the archaic idea of the âcomputer roomâ: that things were better when the computer had one room, instead of being everywhere. I didnât disagree, but I was a little frustrated. If this was as important as climate change, as Mullen said, why did we keep talking about things that felt so small?
Before I started the class, I wondered what a truly radical approach to personal technology would be. Would we be encouraged to throw our smartphones away, at a minimum, and maybe even quit our laptop jobs and dedicate our free time to data-poisoning and blowing up cell towers?
The courseâs answer was what I feared it would be: What you can mostly do, if you have the time and the resources, is snatch back some small pieces of territory along the edges. No phone before coffee. Consider a statue. Donât let the baby watch Cocomelon. (I saw my first clip from the show in the class and regretted it.) Try a little harder and be a little better. At times, we spoke of ârelapse,â as if we were in some kind of Anonymous program.
The final week of class took place just after the presidential election. Only one other classmate and I showed up. The rest were busy or had had enough. Our first task was to write down the answers to a few questions, which served to summarize the previous weeks: âHow do you build an attention sanctuary?â and âHave we always needed attention sanctuaries? Or is there something specific about right now?â I struggled. I still donât know how to build an attention sanctuary; I also donât know how people lived in other times. Who cares if I look at my phone too much anyway? Mullen didnât take offense. ââAttention sanctuaryâ is a very precious name; thereâs no getting around it,â he allowed. âI never liked the name.â
[Read: The end of high-school English]
Then he moved on. I was surprised again when, with 45 minutes left in the course, Mullenâs presentation took a turn toward the hard-core. âWhatâs happening to us?â he asked sharply. He hustled through an explanation of Shoshana Zuboffâs popular concept of âsurveillance capitalism,â which articulates that personal data have been turned into a wildly profitable product by the giant tech companies. Following the same logical trajectory that many tech critics have taken, Mullen arrived at the end point of artificial intelligence: All of this data extraction has been in the service of that huge goal, but they never told us. We wrote all over the internet and then the internet was scraped. Our brains created the neural nets and we just thought we were living our lives. The room got quiet and sadâomnipotent AI was a horse of a different color. You canât simply make a tiny bed for it in another room.
The course, like the broader issues it aimed to address, created a lot of big feelings that the few of us remaining did not seem to know what to do with. We began from a place of concern and ended there, as well. Mullen told us that he had been experimenting with Anthropicâs Claude chatbot for a while. When he projected his laptop screen onto the wall, we could see that his computer held dozens of saved chats. âThe future leaks backward through the cracks,â Claude said in the one he pulled up. Mullen told us he was afraid that chatbots would âfuck kids upâ majorly and that people might start worshipping AI models like gods. We all agreed. And then we went home.
This article has been updated to clarify the structure of the "How to Build an Attention Sanctuary" series, as well as its cost.
This spring kicked off the best stretch for Americaâs nuclear industry in decades. It started in April, when, for the first time since 1990, the United States added nuclear capacity for the second year in a row. In June, Congress passed a major law to accelerate nuclear-energy development. The Republican Partyâs national platform trumpeted nuclear power, as did Kamala Harris in describing her economic agenda; this fall, three of the worldâs largest companiesâAmazon, Google, and Microsoftâannounced substantial investments in nuclear-energy facilities. In November, the U.S. issued official goals to massively expand its nuclear capacity. âWe have ambitious targets for the next 10 years,â Michael Goff, the acting assistant secretary of the Department of Energyâs Office of Nuclear Energy, told me, as well as for the decade after. The DOE aims to add roughly 60 times more nuclear power in a quarter century than the country built in the previous one.
As recently as 15 years ago, or perhaps even five, imagining all of this would have been a stretch. For decades, the industry was stagnant and vehemently opposed by environmentalists. But nuclear energyâa potential source of abundant, reliable, emissions-free electricityâis a powerful tool to fight climate change, and now the federal government, major companies, and a growing number of climate advocates are supporting a series of nuclear-energy projects that could transform Americaâs grid. This is at least the countryâs third attempt to do soâthe original push to install a nationwide fleet of reactors ground to a spectacular halt in the 1980s, and a so-called nuclear ârenaissanceâ in the late 2000s, which included dozens of proposed reactors, also failed to materialize. This round, âthe industry itself has really got to deliver,â Goff said. The next few years might be the countryâs last chance to get nuclear right.
Americaâs opposition to nuclear power runs deep. Some of the oldest and most influential environmental groups, including Greenpeace, the Sierra Club, and the Natural Resources Defense Council, have long opposed the fallout from nuclear-weapons testing and, as an extension, the environmental risk of nuclear-power plants. Broader public attitudes turned against nuclear power when Pennsylvaniaâs Three Mile Island facility suffered a meltdown in 1979. The Democratic Party officially opposed new nuclear plants the following year, and after the Chernobyl accident in 1986, nearly three-quarters of Americans said they were against the building of a nuclear plant within five miles of their home.
Economic factors might have doomed nuclear build-out anyway. Energy companies did build many nuclear-power plants in the 1970sâand those plants still provide about one-fifth of the United Statesâ electricity todayâbut skyrocketing costs and interminable construction delays, combined with plateauing electricity demand, eventually made new facilities unattractive investments. The emergence of cheap natural gas in the 2000s has helped doom any nuclear growth since, Jessica Lovering, an expert on nuclear economics and the executive director of the Good Energy Collective, told me. (The Great Recession also helped squelch plans for new facilities, she said.)
The result has been that, from 1979 to 1988, 67 reactors were canceled; for more than three decades, the nation has added barely any new nuclear capacity. The reactors that did open were years behind schedule. Beginning in the 1960s, the number of nuclear-engineering degrees granted each year steadily climbed, to a peak of roughly 1,500 in 1978, then plummeted to fewer than 400 by 2000.
But then, slowly, Americans started studying nuclear engineering again. When Kathryn Huff, who led the U.S. Office for Nuclear Energy for two years prior to Goff, finished her Ph.D. in 2013, more than 1,000 nuclear-engineering degrees were being issued annually, a number that has remained roughly steady since. Huff now teaches nuclear engineering at the University of Illinois at Urbana-Champaign, and she told me that the motivation of her own cohort and her students is clear: âThe reason people are in nuclear now is the environment.â
Beginning in the 2000s, greenhouse-gas emissions and all their consequences for the planet were becoming a pressing concern for growing numbers of scientists, government officials, and even corporations. The link between commercial nuclear power and the Cold War and nuclear radiation had faded; more people learned that the technology was safer than fossil fuels, or even wind power, measured by deaths per unit of energy produced. As more places in the U.S. started building more renewable energy, experts found that a decarbonized grid running purely on solar panels and wind turbines might be impossible, or prohibitively expensive. The Department of Energy estimates, for instance, that each unit of energy from a renewable grid with nuclear power will cost 37 percent less than from a grid without. Huff told me her students âunderstand how much carbon-free power we need, and thatâs whatâs driving them into nuclear energyâand thatâs also whatâs happening in the Democratic Party.â
In the past decade or so, more scientists and advocacy organizations began to mobilize around nuclear power. The Clean Air Task Force, for instance, concluded that nuclear energy was the âmost advanced and provenâ source of carbon-free, weather-independent power, the groupâs executive director, Armond Cohenâwho was a staunch anti-nuclear activist in the 1980sâtold me. In 2015, four of the worldâs most influential climate scientists wrote an editorial in The Guardian that called nuclear energy âthe only viable path forward on climate change.â A 2018 United Nations special report found that limiting global warming to 1.5 degrees Celsius above preindustrial levels would require âunprecedented changesââincluding in the worldâs energy systems, which made nuclear, as a scalable source of copious and clean electricity, still more appealing.
The support for nuclear power in the U.S.âparticularly among climate advocatesâis far from unequivocal, but relative to a couple of decades ago, it represents an epochal shift, Ted Nordhaus, an early nuclear-energy advocate and the executive director of the Breakthrough Institute, an environmental research center that promotes nuclear energy, told me. In 2020, the Democratic Partyâs platform endorsed nuclear energy for the first time since 1972. Bernie Sanders is a long-standing opponent of nuclear energy, but the Biden-Sanders Unity Taskforceâa group formed to unify the partyâs more moderate and radical wings in 2020âlisted nuclear as a key technology for combatting climate change. Federal efforts to build nuclear energy have run through the Bush, Obama, Trump, and Biden presidencies. Republicans have long supported nuclear as a matter of energy security and reliability; President Joe Bidenâs Inflation Reduction Act includes substantial incentives for nuclear projects. Billions of dollars in corporate investment have gone to nuclear facilities and start-ups. Similar support exists across states as politically varied as Texas, California, Pennsylvania, and New York.
One more factor has propelled the nuclear industry. After decades of relatively flat power use nationwide, AI and data-center growth are sending projections for electricity demand soaring upward, Goff said. Because many of the companies operating large data centers have made substantial climate commitments, they need abundant sources of carbon-free electricity, and see nuclear as the quickest and most reliable way of generating it. These giant tech firms appear willing to pay above-market rates to get those new nuclear-power sources up and running. âI just canât think of any precedent for it,â Matt Bowen, a nuclear-energy researcher at Columbia, told me.
Still, to speak of a nuclear ârevivalâ might be prematureâitâs more accurate to say that the industry is approaching an inflection point. To meet its ambitious nuclear targets, Goff said, the U.S. will likely need a mixture of existing and more experimental reactors. The next several years will be crucial for demonstrating that America can build a large nuclear fleet. Two recently completed reactors at a Georgia power plantâthe project that made 2023 and 2024 the first consecutive years of added nuclear capacity in decadesâhave made that facility the nationâs largest single source of clean energy, but both were years behind schedule.
Meanwhile, the âadvanced nuclearâ projects drawing attention from the federal government and tech companies will need to prove their case. These technologies, Lovering said, are smaller and simpler than the behemoth facilities of old, which should reduce costs and construction times. But more advanced nuclear technologies have been the industryâs promised future for decades now, and yet have never made the leap to regular deployment in the U.S. And the first commercial deployments will be expensive (efficiency gains and savings will likely accompany later iterations). Experts I spoke with had mixed opinions about whether a Republican-controlled government will continue the generous loans and tax incentives the initial projects depend on.
Perhaps the greatest risk is that expectations are too highâthat politicians and tech companies hope to be awash in abundant, cheap, nuclear-generated electricity within five years, instead of 10 or 20. An industry with so many decades of setbacks and failures cannot afford many more; if nuclear power really is so vital to decarbonization, then neither can the climate. The door is open for nuclear power, Cohen told me. âThe question is whether we can have an industry that can walk through.â
For almost two years now, the worldâs biggest tech companies have been at war over generative AI. Meta may be known for social media, Google for search, and Amazon for online shopping, but since the release of ChatGPT, each has made tremendous investments in an attempt to dominate in this new era. Along with start-ups such as OpenAI, Anthropic, and Perplexity, their spending on data centers and chatbots is on track to eclipse the costs of sending the first astronauts to the moon.
To be successful, these companies will have to do more than build the most âintelligentâ software: They will need people to use, and return to, their products. Everyone wants to be Facebook, and nobody wants to be Friendster. To that end, the best strategy in tech hasnât changed: build an ecosystem that users canât help but live in. Billions of people use Google Search every day, so Google built a generative-AI product known as âAI Overviewsâ right into the results page, granting it an immediate advantage over competitors.
This is why a recent proposal from the Department of Justice is so significant. The government wants to break up Googleâs monopoly over the search market, but its proposed remedies may in fact do more to shape the future of AI. Google owns 15 products that serve at least half a billion people and businesses eachâa sprawling ecosystem of gadgets, search and advertising, personal applications, and enterprise software. An AI assistant that shows up in (or works well with) those products will be the one that those people are most likely to use. And Google has already woven its flagship Gemini AI models into Search, Gmail, Maps, Android, Chrome, the Play Store, and YouTube, all of which have at least 2 billion users each. AI doesnât have to be life-changing to be successful; it just has to be frictionless. The DOJ now has an opportunity to add some resistance. (In a statement last week, Kent Walker, Googleâs chief legal officer, called the Department of Justiceâs proposed remedy part of an âinterventionist agenda that would harm Americans and Americaâs global technology leadership,â including the companyâs âleading roleâ in AI.)
[Read: The horseshoe theory of Google Search]
Google is not the only competitor with an ecosystem advantage. Apple is integrating its Apple Intelligence suite across eligible iPhones, iPads, and Macs. Meta, with more than 3 billion users across its platforms, including Facebook, Instagram, and WhatsApp, enjoys similar benefits. Amazonâs AI shopping assistant, Rufus, has garnered little major attention but nonetheless became available to the websiteâs U.S. shoppers this fall. However much of the DOJâs request the court ultimately grants, these giants will still lead the AI raceâbut Google had the clearest advantage among them.
Just how good any of these companiesâ AI products are has limited relevance to their adoption. Googleâs AI tools have repeatedly shown major flaws, such as confidently recommending eating rocks for good health, but the features continue to be used by more and more people simply because theyâre there. Similarly, Appleâs AI models are less powerful than Gemini or ChatGPT, but they will have a huge user base simply because of how popular the iPhone is. Metaâs AI models may not be state-of-the-art, but that doesnât matter to billions of Facebook, Instagram, and WhatsApp users who just want to ask a chatbot a silly question or generate a random illustration. Tech companies without such an ecosystem are well aware of their disadvantage: OpenAI, for instance, is reportedly considering developing its own web browser, and it has partnered with Apple to integrate ChatGPT across the companyâs phones, tablets, and computers.
[Read: AI search is turning into the problem everyone worried about]
This is why itâs relevant that the DOJâs proposed antitrust remedy takes aim at Googleâs broader ecosystem. Federal and state attorneys asked the court to force Google to sell off its Chrome browser; cease preferencing its search products in the Android mobile operating system; prevent it from paying other companies, including Apple and Samsung, to make Google the default search engine; and allow rivals to syndicate Googleâs search results and use its search index to build their own products. All of these and the DOJâs other requests, under the auspices of search, are really shots at Googleâs expansive empire.
As my colleague Ian Bogost has argued, selling Chrome might not affect Googleâs search dominance: âPeople returned to Google because they wanted to, not just because the company had strong-armed them,â he wrote last week. But selling Chrome and potentially Android, as well as preventing Google from making its search engine the default option for various other companiesâ products, would make it harder for Google to funnel billions of people to the rest of its software, including AI. Meanwhile, access to Googleâs search index could provide a huge boost to OpenAI, Perplexity, Microsoft, and other AI search competitors: Perhaps the hardest part of building a searchbot is trawling the web for reliable links, and rivals would gain access to the most coveted way of doing so.
The Justice Department seems to recognize that the AI war implicates and goes beyond search. Without intervention, Googleâs search monopoly could give it an unfair advantage over AI as wellâand an AI monopoly could further entrench the companyâs control over search. The court, attorneys wrote, must prevent Google from âmanipulating the development and deployment of new technologies,â most notably AI, to further throttle competition.
And so the order also takes explicit aim at AI. The DOJ wants to bar Google from self-preferencing AI products, in addition to Search, in Chrome, Android, and all of its other products. It wants to stop Google from buying exclusive rights to sources of AI-training data and disallow Google from investing in AI start-ups and competitors that are in or might enter the search market. (Two days after the DOJ released its proposal, Amazon invested another $4 billion into Anthropic, the start-up and OpenAI rival that Google has also heavily backed to this point, suggesting that the e-commerce giant might be trying to lock in an advantage over Google.) The DOJ also requested that Google provide a simple way for publishers to opt out of their content being used to train Googleâs AI models or be cited in AI-enhanced search products. All of that will make it harder for Google to train and market future AI models, and easier for its rivals to do the same.
When the DOJ first sued Google, in 2020, it was concerned with the internet of old: a web that appeared intractably stuck, long ago calcified in the image of the company that controls how billions of people access and navigate it. Four years and a historic victory later, its proposed remedy enters an internet undergoing an upheaval that few could have foreseenâbut that the DOJâs lawsuit seems to have nonetheless anticipated. A frequently cited problem with antitrust litigation in tech is anachronism, that by the time a social-media, or personal-computing, or e-commerce monopoly is apparent, it is already too late to disrupt. With generative AI, the government may finally have the head start it needs.
Since Elon Musk bought Twitter in 2022 and subsequently turned it into X, disaffected users have talked about leaving once and for all. Maybe theyâd post some about how X has gotten worse to use, how it harbors white supremacists, how it pushes right-wing posts into their feed, or how distasteful they find the fact that Musk has cozied up to Donald Trump. Then theyâd leave. Or at least some of them did. For the most part, X has held up as the closest thing to a central platform for political and cultural discourse.
But that may have changed. After Trumpâs election victory, more people appear to have gotten serious about leaving. According to Similarweb, a social-media analytics company, the week after the election corresponded with the biggest spike in account deactivations on X since Muskâs takeover of the site. Many of these users have fled to Bluesky: The Twitter-like microblogging platform has added about 10 million new accounts since October.
X has millions of users and can afford to shed some here and there. Many liberal celebrities, journalists, writers, athletes, and artists still use itâbut that theyâll continue to do so is not guaranteed. In a sense, this is a victory for conservatives: As the left flees and X loses broader relevance, it becomes a more overtly right-wing site. But the right needs liberals on X. If the platform becomes akin to âalt-tech platformsâ such as Gab or Truth Social, this shift would be good for people on the right who want their politics to be affirmed. It may not be as good for persuading people to join their political movement.
The number of people departing X indicates that something is shifting, but raw user numbers have never fully captured the point of what the site was. Twitterâs value proposition was that relatively influential people talked to each other on it. In theory, you could log on to Twitter and see a country singer rib a cable-news anchor, billionaires bloviate, artists talk about media theory, historians get into vicious arguments, and celebrities share vaguely interesting minutiae about their lives. More so than anywhere else, you could see the unvarnished thoughts of the relatively powerful and influential. And anyone, even you, could maybe strike up a conversation with such people. As each wave departs X, the site gradually becomes less valuable to those who stay, prompting a cycle that slowly but surely diminishes Xâs relevance.
This is how you get something approaching Gab or Truth Social. They are both platforms with modest but persistent usership that can be useful for conservatives to send messages to their base: Trump owns Truth Social, and has announced many of his Cabinet picks on the site. (As Doug Burgum, his nominee for interior secretary, said earlier this month: âNothingâs true until you read it on Truth Social.â) But the platforms have little utility to the general public. Gab and Truth Social are rare examples of actual echo chambers, where conservatives can congregate to energize themselves and reinforce their ideology. These are not spaces that mean much to anyone who is not just conservative, but extremely conservative. Normal people do not log on to Gab and Truth Social. These places are for political obsessives whose appetites are not satiated by talk radio and Fox News. They are for open anti-Semites, unabashed swastika-posting neo-Nazis, transphobes, and people who say they want to kill Democrats. Â
Of course, if X becomes more explicitly right wing, it will be a far bigger conservative echo chamber than either Gab or Truth Social. Truth Social reportedly had just 70,000 users as of May, and a 2022 study found just 1 percent of American adults get their news from Gab. Still, the right successfully completing a Gab-ification of X doesnât mean that moderates and everyone to the left of them would have to live on a platform dominated by the right and mainline conservative perspectives. It would just mean that even more people with moderate and liberal sympathies will get disgusted and leave the platform, and that the right will lose the ability to shape wider discourse.
The conservative activist Christopher Rufo, who has successfully seeded moral panics around critical race theory and DEI hiring practices, has directly pointed to X as a tool that has let him reach a general audience. The reason right-wing politicians and influencers such as Representative Marjorie Taylor Greene, Nick Fuentes, and Candace Owens keep posting on it instead of on conservative platforms is because they want what Rufo wants: a chance to push their perspectives into the mainstream. This utility becomes diminished when most of the people looking at X are just other right-wingers who already agree with them. The fringier, vanguard segments of the online right seem to understand this and are trying to follow the libs to Bluesky.
Liberals and the left do not need the right to be online in the way that the right needs liberals and the left. The nature of reactionary politics demands constant confrontationsâliteral reactionsâto the left. People like Rufo would have a substantially harder time trying to influence opinions on a platform without liberals. âTriggering the libsâ sounds like a joke, but it is often essential for segments of the right. This explains the popularity of some X accounts with millions of followers, such as Libs of TikTok, whose purpose is to troll liberals.
The more liberals leave X, the less value it offers to the right, both in terms of cultural relevance and in opportunities for trolling. The X exodus wonât happen overnight. Some users might be reluctant to leave because itâs hard to reestablish an audience built up over the years, and network effects will keep X relevant. But itâs not a given that a platform has to last. Old habits die hard, but they can die.
This past summer, a U.S. district court declared Google a monopolist. On Wednesday, the Department of Justice filed its proposed remedy. This planâthe governmentâs âproposed final judgment,â or PFJâmust be approved by the judge who is overseeing the case. But it outlines changes the government believes would be sufficient to loosen Googleâs grip on the search market.
Notably, the DOJ has proposed that Google sell off its Chrome web browserâwhich currently accounts for about two-thirds of the browser marketâand stay out of that business for five years. That proposal may seem righteous and effective, and stripping Google of its browser does make the government look bold. The proposal also seems to right a cosmic wrong from more than two decades ago, when the DOJ tried (and failed) to get Microsoft to unbundle its own Internet Explorer browser during a prior antitrust enforcement. This time around, the governmentâs lawyers insist that wresting Chrome from Googleâs mitts is necessary to prevent Google from setting a default search engine for the majority of internet surfers, and pushing those same people to other Google products too. (By the same logic, the PFJ prevents Google from paying rivals such as Apple for default-search placement.)
This is a mistake. Googleâs control of Chrome has surely benefited its market position and its bottom line. But Chrome might remain a boon for Google even if itâs under outside ownership. Instead, why not force Google to strip Chrome of its Google-promoting features, and let the browser be a burden rather than a tool for market domination?
In August, I argued that declaring Google a monopoly might not matter, because the company had already won the search wars. Searching the web effectively via text typed into input boxes was Googleâs first and perhaps only innovation; the competitors that aroseâDuckDuckGo, Bing, and so onâoffered their own takes on Googling, which became the generic term for searching the web. People returned to Google because they wanted to, not just because the company had strong-armed them.
Google did incentivize competitors to maintain that status quo. Mozillaâs Firefox browser offers a case study. The foundationâs most recent annual report lists $510 million in royalty revenue for 2022, some of which surely comes from Google in the form of referral fees for Google searches. The PFJ appears to prohibit these kinds of payments, and whatever revenue they generate for Mozilla. If those are off the table, browser companies may end up letting users choose their own default search service. But the results could ultimately look very much the same: People who like and are familiar with Google might just choose it again.
Google built the Chrome browser in part to steer web users to its servicesâSearch (and the ads it serves), Gmail, Google Docs, and so forth. Search was, of course, the most important of these. (Chrome was the first major browser to integrate web-search functionality directly into the address bar, a design known as the omnibox.) But over time, other Google features have become more and more entwined with Chromeâs operation. When I opened my Chrome browser in order to write this article, it presented me with a user-profile screen, strongly encouraging me to log in to my Google account, which gives Google insights into what I do online. That facility also offers me seamless access to Google Docs and Gmail, because I am already logged in.
Given that Chrome accounts for so much of the web-browser market, a more effective way to quash Googleâs bad tendencies might involve sabotaging its browser rather than selling it off. Instead of making Google divest Chrome, the DOJ could have it keep the browser running (and secure) while stripping it of all the features that make Google services ready to hand. Killing the omnibox would be the boldest of these acts, because search, which basically means Googling, would no longer be presented as the default act on the web. Likewise, removing the tight Google-account integration and associated benefits for Googleâs services and data collection would frustrate the companyâs monopoly more effectively than a spun-off browser ever could.
The fad began with a TimothĂ©e Chalamet look-alike contest in New York City on a beautiful day last month. Thousands of people came and caused a ruckus. At least one of the TimothĂ©es was among the four people arrested by New York City police. Eventually, the real TimothĂ©e Chalamet showed up to take pictures with fans. The event, which was organized by a popular YouTuber who had recently received some attention for eating a tub of cheeseballs in a public park, captured lightning in a bottle. It didnât even matter that the winner didnât look much like the actor, or that the prize was only $50.
In the weeks since, similar look-alike contests have sprung up all over the country, organized by different people for their own strange reasons. There was a Zayn Malik look-alike contest in Brooklyn, a Dev Patel look-alike contest in San Francisco, and a particularly rowdy Jeremy Allen White look-alike contest in Chicago. Harry Styles look-alikes gathered in London, Paul Mescal look-alikes in Dublin. Zendaya look-alikes competed in Oakland, and a âZendayaâs two co-stars from Challengersâ lookalike contest will be held in Los Angeles on Sunday. As I write this, I have been alerted to plans for a Jack Schlossberg look-alike contest to be held in Washington, D.C., the same day. (Schlossberg is John F. Kennedyâs only grandson; he both works at Vogue and was also profiled by Vogue this year.)
These contests evidently provide some thrill that people are finding irresistible at this specific moment in time. What is it? The chance to win some viral fame or even just positive online attention is surely part of it, but those returns are diminishing. The more contests there are, the less novel each one is, and the less likely it is to be worth the hassle. That Chalamet showed up to his look-alike contest was magicâheâs also the only celebrity to attend one of these contests so far. Yet the contests continue.
Celebrities have a mystical quality thatâs undeniable, and it is okay to want to be in touch with the sublime. Still, some observers sense something a bit sinister behind the playfulness of contest after contest, advertised with poster after poster on telephone pole after telephone pole. The playwright Jeremy O. Harris wrote on X that the contests are âGreat Depression era coded,â seeming to note desperation and a certain manic optimism in these events. The comparison is not quite rightâalthough the people at these contests may not all have jobs, they donât seem to be starving (one of the contests promised only two packs of cigarettes and a MetroCard as a prize)âbut I understand what heâs getting at. Clearly, the look-alike competitions do not exist in a vacuum.
The startling multiplication of the contests reminds me of the summer of 2020, when otherwise rational-seeming people suggested that the FBI was planting caches of fireworks in various American cities as part of a convoluted psyop. There were just too many fireworks going off for anything else to make sense! So people said. With hindsight, itâs easy to recognize that theory as an expression of extreme anxiety brought on by the early months of the coronavirus pandemic. At the time, some were also feeling heightened distrust of law enforcement, which had in some places reacted to Black Lives Matter protests with violence.
Todayâs internet-y stunts are just silly events, but people are looking for greater meaning in them. Over the past few weeks, although some have grown a bit weary of the contests, a consensus has also formed that they are net good because they are bringing people out of their house and into âthird spacesâ (public parks) and fraternity (âTHE PEOPLE LONG FOR COMMUNITYâ). This too carries a whiff of desperation, as though people are intentionally putting on a brave face and shoving forward symbols of our collective creativity and togetherness.
I think the reason is obvious. The look-alike contests, notably, started at the end of October. The first one took place on the same day as a Donald Trump campaign event at Madison Square Garden, which featured many gleefully racist speeches and was reasonably compared by many to a Nazi rally. The photos from the contests maybe serve as small reassurance that cities, many of which shifted dramatically rightward in the recent presidential election, are still the places that we want to believe they areâthe closest approximation of Americaâs utopian experiment, where people of all different origins and experiences live together in relative peace and harmony and, importantly, good fun. At least most of the time.
For a brief moment earlier this month, I thought an old acquaintance had passed away. I was still groggy one morning when I checked my phone to find a notification delivering the news. âObituary shared,â the message bluntly said, followed by his name. But when I opened my phone, I learned that he was very much still alive. Appleâs latest software update was to blame: A new feature that uses AI to summarize iPhone notifications had distorted the original text message. It wasnât my acquaintance who had died, but a relative of his. Thatâs whose obituary I had received.
These notification summaries are perhaps the most visible part of Apple Intelligence, the companyâs long-awaited suite of AI features, which officially began to roll out last month. (Itâs compatible with only certain devices.) We are living in push-notification hell, and Apple Intelligence promises to collapse the incessant stream of notifications into pithy recaps. Instead of setting your iPhone aside while you shower and returning to nine texts, four emails, and two calendar alerts, you can now return to a few brief Apple Intelligence summaries.
The trouble is that Apple Intelligence doesnât seem to be very ⊠intelligent. Ominous summaries of peopleâs Ring-doorbell alerts have gone viral: âMultiple people at your Front Yard,â the feature notified one user. âPackage is 8 stops away, delivered, and will be delivered tomorrow,â an Amazon alert confusingly explained. And sliding into someoneâs DMs hits different when Instagram notifications are summarized as âMultiple likes and flirtatious comments.â But Apple Intelligence appears to especially struggle with text messages. Sometimes the text summaries are alarmingly inaccurate, as with the false obituary I received. But even when they are technically right, the AI summaries still feel wrong. âExpresses love and encouragement,â one AI notification I recently received crudely announced, compressing a thoughtfully written paragraph from a loved one. Whatâs the point of a notification like that? Textingâwhether on iMessage, WhatsApp, or Signalâis a deeply intimate medium, infused with personality and character. By strip-mining messages into bland, lifeless summaries, Apple seems to be misunderstanding what makes texting so special in the first place.
Perhaps it was inevitable that AI summaries would come for push notifications. Summarization is AIâs killer feature and tech companies seem intent on applying it to just about everything. The list of things that AI is summarizing might require a summary of its own: emails and Zoom calls and Facebook comments and YouTube videos and Amazon reviews and podcasts and books and medical records and full seasons of TV shows. In many cases, this summarization is helpfulâfor instance, in streamlining meeting notes.
But where is the line? Concision, when applied to already concise texts, sucks away what little context there was to begin with. In some cases, the end result is harmful. The technology seems to have something of a death problem. Across multiple cases, the feature appears bewilderingly eager to falsely suggest that people are dead. In one case, a user reported that a text from his mother reading âThat hike almost killed me!â had been turned into âAttempted suicide, but recovered.â
But mostly, AI summaries lead to silly outcomes. âInflatable costumes and animatronic zombies overwhelming; will address questions later,â read the AI summary of a colleagueâs message on Halloween. Texts rich with emotional content read like a lazy therapistâs patient files. âExpressing sadness and worry,â one recent summary said. âUpset about something,â declared another. AI is unsurprisingly awful with breakup texts (âNo longer in relationship; wants belongings from the apartmentâ). When it comes to punctuation, the summaries read like they were written by a high schooler who just discovered semicolons and now overzealously inserts; them; literally; everywhere. Even Apple admits that the language used in notification summaries can be clinical.
The technology is at its absolute worst when it tries to summarize group chats. Itâs one thing to condense three or four messages from a single friend; itâs another to reduce an extended series of texts from multiple people into a one-sentence notification. âRude comments exchanged,â read the summary of one userâs family group chat. When my friends and I were planning a dinner earlier this month, my phone collapsed a series of messages coordinating our meal into âTakeout, ramen, at 6:30pm preferred.â Informative, I guess, but the typical back-and-forth of where to eat (one friend had suggested sushi) and timing (the other was aiming for an early night) was erased.
Beyond the content, much of the delight of text messaging comes from the distinctiveness of the individual voices of the people we are talking to. Some ppl txt like dis. others text in all lowercase and no punctuation. There are lol friends and LOL friends. My dad is infamous for sending essay-length messages. When I text a friend who lives across the country asking about her recent date, I am not looking purely for informational content (âNight considered good,â as Apple might summarize); rather, I want to hear the date described in her voice (âWas amaze so fun we had lovely timeâ). As the MIT professor Sherry Turkle has written, âWhen we are in human conversation, we often care less about the information an utterance transfers than its tone and emotional intent.â When texts are fed through the AI-summarization machine, each distinct voice is bludgeoned into monotony.
For a company that prides itself on perfection, the failures of Appleâs notification summaries feel distinctly un-Apple. Since ChatGPTâs release, as technology companies have raced to position themselves as players in the AI arms race, the company has remained notably quiet. Itâs hard not to wonder if Apple, after falling behind, is now playing catch-up. Still, the notification summaries will likely improve. For now, users have to opt in to the AI-summary feature (itâs still in beta), and Apple has said that it will continue to polish the notifications based on user feedback. The feature is already spreading. Samsung is reportedly working on integrating similar notification summaries for its Galaxy phones.
With the social internet in crisis, text messagesâand especially group chatsâhave filled a crucial void. In a sense, texting is the purest form of a social network, a rare oasis of genuine online connection. Unlike platforms such as TikTok and Instagram, where algorithmic feeds warp how we communicate, basic messaging apps offer a more unfiltered way to hang out digitally. But with the introduction of notification summaries that strive to optimize our messages for maximum efficiency, the walls are slowly crumbling. Soon, the algorithmic takeover may be complete.
This article was featured in the One Story to Read Today newsletter. Sign up for it here.
For anyone who teaches at a business school, the blog post was bad news. For Juliana Schroeder, it was catastrophic. She saw the allegations when they first went up, on a Saturday in early summer 2023. Schroeder teaches management and psychology at UC Berkeleyâs Haas School of Business. One of her colleaguesâÂÂa star professor at Harvard Business School named Francesca GinoâÂhad just been accused of academic fraud. The authors of the blog post, a small team of business-school researchers, had found discrepancies in four of Ginoâs published papers, and they suggested that the scandal was much larger. âWe believe that many more Gino-authored papers contain fake data,â the blog post said. âPerhaps dozens.â
The story was soon picked up by the mainstream press. Reporters reveled in the irony that Gino, who had made her name as an expert on the psychology of breaking rules, may herself have broken them. (âHarvard Scholar Who Studies Honesty Is Accused of Fabricating Findings,â a New York Times headline read.) Harvard Business School had quietly placed Gino on administrative leave just before the blog post appeared. The school had conducted its own investigation; its nearly 1,300-page internal report, which was made public only in the course of related legal proceedings, concluded that Gino âcommitted research misconduct intentionally, knowingly, or recklesslyâ in the four papers. (Gino has steadfastly denied any wrongdoing.)
Schroederâs interest in the scandal was more personal. Gino was one of her most consistent and important research partners. Their names appear together on seven peer-reviewed articles, as well as 26 conference talks. If Gino were indeed a serial cheat, then all of that shared workâand a large swath of Schroederâs CVâwas now at risk. When a senior academic is accused of fraud, the reputations of her honest, less established colleagues may get dragged down too. âJust think how horrible it is,â Katy Milkman, another of Ginoâs research partners and a tenured professor at the University of Pennsylvaniaâs Wharton School, told me. âIt could ruin your life.â
To head that off, Schroeder began her own audit of all the research papers that sheâd ever done with Gino, seeking out raw data from each experiment and attempting to rerun the analyses. As that summer progressed, her efforts grew more ambitious. With the help of several colleagues, Schroeder pursued a plan to verify not just her own work with Gino, but a major portion of Ginoâs scientific rĂ©sumĂ©. The group started reaching out to every other researcher who had put their name on one of Ginoâs 138 co-authored studies. The Many Co-Authors Project, as the self-audit would be called, aimed to flag any additional work that might be tainted by allegations of misconduct and, more important, to absolve the restâand Ginoâs colleagues, by extensionâof the wariness that now afflicted the entire field.
That field was not tucked away in some sleepy corner of academia, but was instead a highly influential one devoted to the science of success. Perhaps youâve heard that procrastination makes you more creative, or that youâre better off having fewer choices, or that you can buy happiness by giving things away. All of that is research done by Schroederâs peersâÂbusiness-school professors who apply the methods of behavioral research to such subjects as marketing, management, and decision making. In viral TED Talks and airport best sellers, on morning shows and late-night television, these business-school psychologists hold tremendous sway. They also have a presence in this magazine and many others: Nearly every business academic who is named in this story has been either quoted or cited by The Atlantic on multiple occasions. A few, including Gino, have written articles for The Atlantic themselves.
Business-school psychologists are scholars, but they arenât shooting for a Nobel Prize. Their research doesnât typically aim to solve a social problem; it wonât be curing anyoneâs disease. It doesnât even seem to have much influence on business practices, and it certainly hasnât shaped the nationâs commerce. Still, its flashy findings come with clear rewards: consulting gigs and speakersâ fees, not to mention lavish academic incomes. Starting salaries at business schools can be $240,000 a yearâdouble what they are at campus psychology departments, academics told me.
The research scandal that has engulfed this field goes far beyond the replication crisis that has plagued psychology and other disciplines in recent years. Long-standing flaws in how scientific work is doneâincluding insufficient sample sizes and the sloppy application of statisticsâhave left large segments of the research literature in doubt. Many avenues of study once deemed promising turned out to be dead ends. But itâs one thing to understand that scientists have been cutting corners. Itâs quite another to suspect that theyâve been creating their results from scratch.
[Read: Psychologyâs replication crisis has a silver lining]
Schroeder has long been interested in trust. Sheâs given lectures on âbuilding trust-based relationshipsâ; sheâs run experiments measuring trust in colleagues. Now she was working to rebuild the sense of trust within her field. A lot of scholars were involved in the Many Co-Authors Project, but Schroederâs dedication was singular. In October 2023, a former graduate student who had helped tip off the team of bloggers to Ginoâs possible fraud wrote her own âpost mortemâ on the case. It paints Schroeder as exceptional among her peers: a professor who âsent a clear signal to the scientific community that she is taking this scandal seriously.â Several others echoed this assessment, saying that ever since the news broke, Schroeder has been relentlessâheroic, evenâin her efforts to correct the record.
But if Schroeder planned to extinguish any doubts that remained, she may have aimed too high. More than a year since all of this began, the evidence of fraud has only multiplied. The rot in business schools runs much deeper than almost anyone had guessed, and the blame is unnervingly widespread. In the end, even Schroeder would become a suspect.
Gino was accused of faking numbers in four published papers. Just days into her digging, Schroeder uncovered another paper that appeared to be affectedâand it was one that she herself had helped write.
The work, titled âDonât Stop Believing: Rituals Improve Performance by Decreasing Anxiety,â was published in 2016, with Schroederâs name listed second out of seven authors. Ginoâs name was fourth. (The first few names on an academic paper are typically arranged in order of their contributions to the finished work.) The research it described was pretty standard for the field: a set of clever studies demonstrating the value of a life hackâone simple trick to nail your next presentation. The authors had tested the idea that simply following a routineâeven one as arbitrary as drawing something on a piece of paper, sprinkling salt over it, and crumpling it upâcould help calm a personâs nerves. âAlthough some may dismiss rituals as irrational,â the authors wrote, âthose who enact rituals may well outperform the skeptics who forgo them.â
In truth, the skeptics have never had much purchase in business-school psychology. For the better part of a decade, this finding had been garnering citationsâÂabout 200, per Google Scholar. But when Schroeder looked more closely at the work, she realized it was questionable. In October 2023, she sketched out some of her concerns on the Many Co-Authors Project website.
The paperâs first two key experiments, marked in the text as Studies 1a and 1b, looked at how the salt-and-paper ritual might help students sing a karaoke version of Journeyâs âDonât Stop Believinâââ in a lab setting. According to the paper, Study 1a found that people who did the ritual before they sang reported feeling much less anxious than people who did not; Study 1b confirmed that they had lower heart rates, as measured with a pulse oximeter, than students who did not.
As Schroeder noted in her October post, the original records of these studies could not be found. But Schroeder did have some data spreadsheets for Studies 1a and 1bâsheâd posted them shortly after the paper had been published, along with versions of the studiesâ research questionnairesâand she now wrote that âunexplained issues were identifiedâ in both, and that there was âuncertainty regarding the data provenanceâ for the latter. Schroederâs post did not elaborate, but anyone can look at the spreadsheets, and it doesnât take a forensic expert to see that the numbers they report are seriously amiss.
The âunexplained issuesâ with Studies 1a and 1b are legion. For one thing, the figures as reported donât appear to match the research as described in other public documents. (For example, where the posted research questionnaire instructs the students to assess their level of anxiety on a five-point scale, the results seem to run from 2 to 8.) But the single most suspicious pattern shows up in the heart-rate data. According to the paper, each student had their pulse measured three times: once at the very start, again after they were told theyâd have to sing the karaoke song, and then a third time, right before the song began. I created three graphs to illustrate the dataâs peculiarities. They depict the measured heart rates for each of the 167 students who are said to have participated in the experiment, presented from left to right in their numbered order on the spreadsheet. The blue and green lines, which depict the first and second heart-rate measurements, show those values fluctuating more or less as one might expect for a noisy signal, measured from lots of individuals. But the red line doesnât look like this at all: Rather, the measured heart rates form a series going up, across a run of more than 100 consecutive students.
Iâve reviewed the case with several researchers who suggested that this tidy run of values is indicative of fraud. âI see absolutely no reasonâ the sequence in No. 3 âshould have the order that it does,â James Heathers, a scientific-Âintegrity investigator and an occasional Atlantic contributor, told me. The exact meaning of the pattern is unclear; if you were fabricating data, you certainly wouldnât strive for them to look like this. Nick Brown, a scientific-integrity researcher affiliated with Linnaeus University Sweden, guessed that the ordered values in the spreadsheet may have been cooked up after the fact. In that case, it might have been less important that they formed a natural-Âlooking plot than that, when analyzed together, they matched fake statistics that had already been reported. âSomeone sat down and burned quite a bit of midnight oil,â he proposed. I asked how sure he was that this pattern of results was the product of deliberate tampering; â100 percent, 100 percent,â he told me. âIn my view, there is no innocent explanation in a universe where fairies donât exist.â
Schroeder herself would come to a similar conclusion. Months later, I asked her whether the data were manipulated. âI think itâs very likely that they were,â she said. In the summer of 2023, when she reported the findings of her audit to her fellow authors, they all agreed that, whatever really happened, the work was compromised and ought to be retracted. But they could not reach consensus on who had been at fault. Gino did not appear to be responsible for either of the paperâs karaoke studies. Then who was?
This would not seem to be a tricky question. The published version of the paper has two lead authors who are listed as having âcontributed equallyâ to the work. One of them was Schroeder. All of the co-authors agree that she handled two experimentsâlabeled in the text as Studies 3 and 4âin which participants solved a set of math problems. The other main contributor was Alison Wood Brooks, a young professor and colleague of Ginoâs at Harvard Business School.
From the start, there was every reason to assume that Brooks had run the studies that produced the fishy data. Certainly they are similar to Brooksâs prior work. The same quirky experimental setupâin which students were asked to wear a pulse oximeter and sing a karaoke version of âDonât Stop BelievinââââÂappears in her dissertation from the Wharton School in 2013, and she published a portion of that work in a sole-authored paper the following year. (Brooks herself is musically inclined, performing around Boston in a rock band.)
Yet despite all of this, Brooks told the Many Co-Authors Project that she simply wasnât sure whether sheâd had access to the raw data for Study 1b, the one with the âno innocent explanationâ pattern of results. She also said she didnât know whether Gino played a role in collecting them. On the latter point, Brooksâs former Ph.D. adviser, Maurice Schweitzer, expressed the same uncertainty to the Many Co-Authors Project.
Plenty of evidence now suggests that this mystery was manufactured. The posted materials for Study 1b, along with administrative records from the lab, indicate that the work was carried out at Wharton, where Brooks was in grad school at the time, studying under Schweitzer and running another, very similar experiment. Also, the metadata for the oldest public version of the data spreadsheet lists âAlison Wood Brooksâ as the last person who saved the file.
Brooks, who has published research on the value of apologies, and whose first bookâTalk: The Science of Conversation and the Art of Being Ourselvesâis due out from Crown in January, did not respond to multiple requests for interviews or to a detailed list of written questions. Gino said that she âneither collected nor analyzed the data for Study 1a or Study 1b nor was I involved in the data audit.â
If Brooks did conduct this work and oversee its data, then Schroederâs audit had produced a dire twist. The Many Co-Authors Project was meant to suss out Ginoâs suspect work, and quarantine it from the rest. âThe goal was to protect the innocent victims, and to find out whatâs true about the science that had been done,â Milkman told me. But now, to all appearances, Schroeder had uncovered crooked data that apparently werenât linked to Gino. That would mean Schroeder had another colleague who had contaminated her research. It would mean that her reputationâand the credibility of her entire fieldâwas under threat from multiple directions at once.
Among the four research papers in which Gino was accused of cheating is one about the human tendency to misreport facts and figures for personal gain. Which is to say: She was accused of faking data for a study of when and how people might fake data. Amazingly, a different set of data from the same paper had already been flagged as the product of potential fraud, two years before the Gino scandal came to light. The first was contributed by Dan Ariely of Duke Universityâa frequent co-author of Ginoâs and, like her, a celebrated expert on the psychology of telling lies. (Ariely has said that a Duke investigationâwhich the school has not acknowledgedâdiscovered no evidence that he âfalsified data or knowingly used falsified data.â He has also said that the investigation âdetermined that I should have done more to prevent faulty data from being published in the 2012 paper.â)
The existence of two apparently corrupted data sets was shocking: a keystone paper on the science of deception wasnât just invalid, but possibly a scam twice over. But even in the face of this ignominy, few in business academia were ready to acknowledge, in the summer of 2023, that the problem might be larger stillâand that their research literature might well be overrun with fantastical results.
Some scholars had tried to raise alarms before. In 2019, Dennis Tourish, a professor at the University of Sussex Business School, published a book titled Management Studies in Crisis: Fraud, Deception and Meaningless Research. He cites a study finding that more than a third of surveyed editors at management journals say theyâve encountered fabricated or falsified data. Even that alarming rate may undersell the problem, Tourish told me, given all of the misbehavior in his discipline that gets overlooked or covered up.
Anonymous surveys of various fields find that roughly 2 percent of scholars will admit to having fabricated, falsified, or modified data at least once in their career. But business-school psychology may be especially prone to misbehavior. For one thing, the fieldâs research standards are weaker than those for other psychologists. In response to the replication crisis, campus psychology departments have lately taken up a raft of methodological reforms. Statistically suspect practices that were de rigueur a dozen years ago are now uncommon; sample sizes have gotten bigger; a studyâs planned analyses are now commonly written down before the work is carried out. But this great awakening has been slower to develop in business-school psychology, several academics told me. âNo one wants to kill the golden goose,â one early-career researcher in business academia said. If management and marketing professors embraced all of psychologyâs reforms, he said, then many of their most memorable, most TED Talkâable findings would go away. âTo use marketing lingo, weâd lose our unique value proposition.â
Itâs easy to imagine how cheating might lead to more cheating. If business-school psychology is beset with suspect research, then the bar for getting published in its flagship journals ratchets up: A study must be even flashier than all the other flashy findings if its authors want to stand out. Such incentives move in only one direction: EventuÂally, the standard tools for torturing your data will no longer be enough. Now you have to go a little further; now you have to cut your data up, and carve them into sham results. Having one or two prolific frauds around would push the bar for publishing still higher, inviting yet more corruption. (And because the work is not exactly brain surgery, no one dies as a result.) In this way, a single discipline might come to look like Major League Baseball did 20 years ago: defined by juiced-up stats.
In the face of its own cheating scandal, MLB started screening every single player for anabolic steroids. There is no equivalent in science, and certainly not in business academia. Uri Simonsohn, a professor at the Esade Business School in Barcelona, is a member of the blogging team, called Data Colada, that caught the problems in both Ginoâs and Arielyâs work. (He was also a motivating force behind the Many Co-Authors Project.) Data Colada has called out other instances of sketchy work and apparent fakery within the field, but its efforts at detection are highly targeted. Theyâre also quite unusual. Crying foul on someone elseâs bad research makes you out to be a troublemaker, or a member of the notional âdata police.â It can also bring a claim of defamation. Gino filed a $25 million defamation lawsuit against Harvard and the Data Colada team not long after the bloggers attacked her work. (This past September, a judge dismissed the portion of her claims that involved the bloggers and the defamation claim against Harvard. She still has pending claims against the university for gender discrimination and breach of contract.) The risks are even greater for those who donât have tenure. A junior academic who accuses someone else of fraud may antagonize the senior colleagues who serve on the boards and committees that make publishing decisions and determine funding and job appointments.
[Read: Francesca Gino, the Harvard expert on dishonesty who is accused of lying]
These risks for would-be critics reinforce an atmosphere of complacency. âItâs embarrassing how few protections we have against fraud and how easy it has been to fool us,â Simonsohn said in a 2023 webinar. He added, âWe have done nothing to prevent it. Nothing.â
Like so many other scientific scandals, the one Schroeder had identified quickly sank into a swamp of closed-door reviews and taciturn committees. Schroeder says that Harvard Business School declined to investigate her evidence of data-tampering, citing a policy of not responding to allegations made more than six years after the misconduct is said to have occurred. (Harvard Business Schoolâs head of communications, Mark Cautela, declined to comment.) Her efforts to address the issue through the University of Pennsylvaniaâs Office of Research Integrity likewise seemed fruitless. (A spokesperson for the Wharton School would not comment on âthe existence or status ofâ any investigations.)
Retractions have a way of dragging out in science publishing. This one was no exception. Maryam Kouchaki, an expert on workplace ethics at Northwestern Universityâs Kellogg School of Management and coâeditor in chief of the journal that published the âDonât Stop Believingâ paper, had first received the authorsâ call to pull their work in August 2023. As the anniversary of that request drew near, Schroeder still had no idea how the suspect data would be handled, and whether Brooksâor anyone elseâwould be held responsible.
Finally, on October 1, the âDonât Stop Believingâ paper was removed from the scientific literature. The journalâs published notice laid out some basic conclusions from Schroederâs audit: Studies 1a and 1b had indeed been run by Brooks, the raw data were not available, and the posted data for 1b showed âstreaks of heart rate ratings that were unlikely to have occurred naturally.â Schroederâs own contributions to the paper were also found to have some flaws: Data points had been dropped from her analysis without any explanation in the published text. (Although this practice wasnât fully out-of-bounds given research standards at the time, the same behavior would today be understood as a form of âp-hackingââa pernicious source of false-positive results.) But the notice did not say whether the fishy numbers from Study 1b had been fabricated, let alone by whom. Someone other than Brooks may have handled those data before publication, it suggested. âThe journal could not investigate this study any further.â
Two days later, Schroeder posted to X a link to her full and final audit of the paper. âIt took *hundreds* of hours of work to complete this retraction,â she wrote, in a thread that described the flaws in her own experiments and Studies 1a and 1b. âI am ashamed of helping publish this paper & how long it took to identify its issues,â the thread concluded. âI am not the same scientist I was 10 years ago. I hold myself accountable for correcting any inaccurate prior research findings and for updating my research practices to do better.â Her peers responded by lavishing her with public praise. One colleague called the self-audit âexemplaryâ and an âact of courage.â A prominent professor at Columbia Business School congratulated Schroeder for being âa cultural heroine, a role model for the rising generation.â
But amid this celebration of her unusual transparency, an important and related story had somehow gone unnoticed. In the course of scouting out the edges of the cheating scandal in her field, Schroeder had uncovered yet another case of seeming science fraud. And this time, sheâd blown the whistle on herself.
That stunning revelation, unaccompanied by any posts on social media, had arrived in a muffled update to the Many Co-Authors Project website. Schroeder announced that sheâd found âan issueâ with one more paper that sheâd produced with Gino. This one, âEnacting Rituals to Improve Self-Control,â came out in 2018 in the Journal of Personality and Social Psychology; its author list overlaps substantially with that of the earlier âDonât Stop Believingâ paper (though Brooks was not involved). Like the first, it describes a set of studies that purport to show the power of the ritual effect. Like the first, it includes at least one study for which data appear to have been altered. And like the first, its data anomalies have no apparent link to Gino.
The basic facts are laid out in a document that Schroeder put into an online repository, describing an internal audit that she conducted with the help of the lead author, Allen Ding Tian. (Tian did not respond to requests for comment.) The paper opens with a field experiment on women who were trying to lose weight. Schroeder, then in grad school at the University of Chicago, oversaw the work; participants were recruited at a campus gym.
Half of the women were instructed to perform a ritual before each meal for the next five days: They were to put their food into a pattern on their plate. The other half were not. Then Schroeder used a diet-tracking app to tally all the food that each woman reported eating, and found that the ones in the ritual group took in about 200 fewer calories a day, on average, than the others. But in 2023, when she started digging back into this research, she uncovered some discrepancies. According to her studyâs raw materials, nine of the women who reported that theyâd done the food-arranging ritual were listed on the data spreadsheet as being in the control group; six others were mislabeled in the opposite direction. When Schroeder fixed these errors for her audit, the ritual effect completely vanished. Now it looked as though the women whoâd done the food-arranging had consumed a few more calories, on average, than the women who had not.
Mistakes happen in research; sometimes data get mixed up. These errors, though, appear to be intentional. The women whose data had been swapped fit a suspicious pattern: The ones whose numbers might have undermined the paperâs hypothesis were disproportionately affected. This is not a subtle thing; among the 43 women who reported that theyâd done the ritual, the six most prolific eaters all got switched into the control group. Nick Brown and James Heathers, the scientific-integrity researchers, have each tried to figure out the odds that anything like the studyâs published result could have been attained if the data had been switched at random. Brownâs analysis pegged the answer at one in 1 million. âData manipulation makes sense as an explanation,â he told me. âNo other explanation is immediately obvious to me.â Heathers said he felt âquite comfortableâ in concluding that whatever went wrong with the experiment âwas a directed process, not a random process.â
Whether or not the data alterations were intentional, their specific formâflipped conditions for a handful of participants, in a way that favored the hypothesisâmatches up with data issues raised by Harvard Business Schoolâs investigation into Ginoâs work. Schroeder rejected that comparison when I brought it up, but she was willing to accept some blame. âI couldnât feel worse about that paper and that study,â she told me. âIâm deeply ashamed of it.â
Still, she said that the source of the error wasnât her. Her research assistants on the project may have caused the problem; Schroeder wonders if they got confused. She said that two RAs, both undergraduates, had recruited the women at the gym, and that the scene there was chaotic: Sometimes multiple people came up to them at once, and the undergrads may have had to make some changes on the fly, adjusting which participants were being put into which group for the study. Maybe things went wrong from there, Schroeder said. One or both RAs might have gotten ruffled as they tried to paper over inconsistencies in their record-keeping. They both knew what the experiment was meant to show, and how the data ought to lookâso itâs possible that they peeked a little at the data and reassigned the numbers in the way that seemed correct. (Schroederâs audit lays out other possibilities, but describes this one as the most likely.)
Schroederâs account is certainly plausible, but itâs not a perfect fit with all of the facts. For one thing, the posted data indicate that during most days on which the study ran, the RAs had to deal with only a handful of participantsâsometimes just two. How could they have gotten so bewildered?
Any further details seem unlikely to emerge. The paper was formally retracted in the February issue of the journal. Schroeder has chosen not to name the RAs who helped her with the study, and she told me that she hasnât tried to contact them. âI just didnât think it was appropriate,â she said. âIt doesnât seem like it would help matters at all.â By her account, neither one is currently in academia, and she did not discover any additional issues when she reviewed their other work. (I reached out to more than a dozen former RAs and lab managers who were thanked in Schroederâs published papers from around this time. Five responded to my queries; all of them denied having helped with this experiment.) In the end, Schroeder said, she took the data at the assistantsâ word. âI did not go in and change labels,â she told me. But she also said repeatedly that she doesnât think her RAs should take the blame. âThe responsibility rests with me, right? And so it was appropriate that Iâm the one named in the retraction notice,â she said. Later in our conversation, she summed up her response: âIâve tried to trace back as best I can what happened, and just be honest.â
Across the many months I spent reporting this story, Iâd come to think of Schroeder as a paragon of scientific rigor. She has led a seminar on âExperimental Design and Research Methodsâ in a business program with a sterling reputation for its research standards. Sheâd helped set up the Many Co-Authors Project, and then pursued it as aggressively as anyone. (Simonsohn even told me that Schroederâs look-at-everything approach was a little âoverboard.â) I also knew that she was devoted to the dreary but important task of reproducing other peopleâs published work.
As for the dieting research, Schroeder had owned the awkward optics. âIt looks weird,â she told me when we spoke in June. âItâs a weird error, and it looks consistent with changing things in the direction to get a result.â But weirder still was how that error came to light, through a detailed data audit that sheâd undertaken of her own accord. Apparently, sheâd gone to great effort to call attention to a damning set of facts. That alone could be taken as a sign of her commitment to transparency.
But in the months that followed, I couldnât shake the feeling that another theory also fit the facts. Schroederâs leading explanation for the issues in her workâAn RA must have bungled the dataâsounded distressingly familiar. Francesca Gino had offered up the same defense to Harvardâs investigators. The mere repetition of this story doesnât mean that itâs invalid: Lab techs and assistants really do mishandle data on occasion, and they may of course engage in science fraud. But still.
As for Schroederâs all-out focus on integrity, and her public efforts to police the scientific record, I came to understand that most of these had been adopted, all at once, in mid-2023, shortly after the Gino scandal broke. (The version of Schroederâs rĂ©sumĂ© that was available on her webpage in the spring of 2023 does not describe any replication projects whatsoever.) That makes sense if the accusations changed the way she thought about her fieldâand she did describe them to me as âa wake-up call.â But hereâs another explanation: Maybe Schroeder saw the Gino scandal as a warning that the data sleuths were on the march. Perhaps she figured that her own work might end up being scrutinized, and then, having gamed this out, she decided to be a data sleuth herself. Sheâd publicly commit to reexamining her colleaguesâ work, doing audits of her own, and asking for corrections. This would be her play for amnesty during a crisis.
I spoke with Schroeder for the last time on the day before Halloween. She was notably composed when I confronted her with the possibility that sheâd engaged in data-tampering herself. She repeated what sheâd told me months before, that she definitely did not go in and change the numbers in her study. And she rejected the idea that her self-audits had been strategic, that sheâd used them to divert attention from her own wrongdoing. âHonestly, itâs disturbing to hear you even lay it out,â she said. âBecause I think if you were to look at my body of work and try to replicate it, I think my hit rate would be good.â She continued: âSo to imply that Iâve actually been, I donât know, doing a lot of fraudulent stuff myself for a long time, and this was a moment to come clean with it? I just donât think the evidence bears that out.â
That wasnât really what Iâd meant to imply. The story I had in mind was more mundaneâand in a sense more tragic. I went through it: Perhaps sheâd fudged the results for a study just once or twice early in her career, and never again. Perhaps sheâd been committed, ever since, to proper scientific methods. And perhaps she really did intend to fix some problems in her field.
Schroeder allowed that sheâd been susceptible to certain research practicesâexcluding data, for exampleâthat are now considered improper. So were many of her colleagues. In that sense, sheâd been guilty of letting her judgment be distorted by the pressure to succeed. But I understood what she was saying: This was not the same as fraud.
Throughout our conversations, Schroeder had avoided stating outright that anyone in particular had committed fraud. But not all of her colleagues had been so cautious. Just a few days earlier, Iâd received an unexpected message from Maurice Schweitzer, the senior Wharton business-school professor who oversaw Alison Wood Brooksâs âDonât Stop Believingâ research. Up to this point, he had not responded to my request for an interview, and I figured heâd chosen not to comment for this story. But he finally responded to a list of written questions. It was important for me to know, his email said, that SchroeÂder had âbeen involved in data tampering.â He included a link to the retraction notice for her paper on rituals and eating. When I asked Schweitzer to elaborate, he did not respond. (Schweitzerâs most recent academic work is focused on the damaging effects of gossip; one of his papers from 2024 is titled âThe Interpersonal Costs of Revealing Othersâ Secrets.â)
I laid this out for Schroeder on the phone. âWow,â she said. âThatâs unfortunate that he would say that.â She went silent for a long time. âYeah, Iâm sad heâs saying that.â
Another long silence followed. âI think that the narrative that you laid out, Dan, is going to have to be a possibility,â she said. âI donât think thereâs a way I can refute it, but I know what the truth is, and I think I did the right thing, with trying to clean the literature as much as I could.â
This is all too often where these stories end: A researcher will say that whatever really happened must forever be obscure. Dan Ariely told Business Insider in February 2024â: âIâve spent a big part of the last two years trying to find out what happened. I havenât been able to ⊠I decided I have to move on with my life.â SchweitÂzer told me that the most relevant files for the âDonât Stop Believingâ paper are âlong gone,â and that the chain of custody for its data simply canât be tracked. (The Wharton School agreed, telling me that it âdoes not possess the requested dataâ for Study 1b, âas it falls outside its current data retention period.â) And now Schroeder had landed on a similar position.
Itâs uncomfortable for a scientist to claim that the truth might be unknowable, just as it would be for a journalist, or any other truth-seeker by vocation. I daresay the facts regarding all of these cases may yet be amenable to further inquiry. The raw data from Study 1b may still exist, somewhere; if so, one might compare them with the posted spreadsheet to confirm that certain numbers had been altered. And Schroeder says she has the names of the RAs who worked on her dieting experiment; in theory, she could ask those people for their recollections of what happened. If figures arenât checked, or questions arenât asked, itâs by choice.
What feels out of reach is not so much the truth of any set of allegations, but their consequences. Gino has been placed on administrative leave, but in many other instances of suspected fraud, nothing happens. Both Brooks and Schroeder appear to be untouched. âThe problem is that journal editors and institutions can be more concerned with their own prestige and reputation than finding out the truth,â Dennis Tourish, at the University of Sussex Business School, told me. âIt can be easier to hope that this all just goes away and blows over and that somebody else will deal with it.â
Some degree of disillusionment was common among the academics I spoke with for this story. The early-career researcher in business academia told me that he has an âunhealthy hobbyâ of finding manipulated data. But now, he said, heâs giving up the fight. âAt least for the time being, Iâm done,â he told me. âFeeling like Sisyphus isnât the most fulfilling experience.â A management professor who has followed all of these cases very closely gave this assessment: âI would say that distrust characterizes many people in the fieldâÂitâs all very depressing and demotivating.â
Itâs possible that no one is more depressed and demotivated, at this point, than Juliana Schroeder. âTo be honest with you, Iâve had some very low moments where Iâm like, âWell, maybe this is not the right field for me, and I shouldnât be in it,âââ she said. âAnd to even have any errors in any of my papers is incredibly embarrassing, let alone one that looks like data-tampering.â
I asked her if there was anything more she wanted to say.
âI guess I just want to advocate for empathy and transparencyâÂmaybe even in that order. Scientists are imperfect people, and we need to do better, and we can do better.â Even the Many Co-Authors Project, she said, has been a huge missed opportunity. âIt was sort of like a moment where everyone could have done self-reflection. Everyone could have looked at their papers and done the exercise I did. And people didnât.â
Maybe the situation in her field would eventually improve, she said. âThe optimistic point is, in the long arc of things, weâll self-correct, even if we have no incentive to retract or take responsibility.â
âDo you believe that?â I asked.
âOn my optimistic days, I believe it.â
âIs today an optimistic day?â
âNot really.â
This article appears in the January 2025 print edition with the headline âThe Fraudulent Science of Success.â
Editorâs note: This analysis is part of The Atlanticâs investigation into the OpenSubtitles data set. You can access the search tool directly here. Find The Atlantic's search tool for books used to train AI here.
For as long as generative-AI chatbots have been on the internet, Hollywood writers have wondered if their work has been used to train them. The chatbots are remarkably fluent with movie references, and companies seem to be training them on all available sources. One screenwriter recently told me heâs seen generative AI reproduce close imitations of The Godfather and the 1980s TV show Alf, but he had no way to prove that a program had been trained on such material.
I can now say with absolute confidence that many AI systems have been trained on TV and film writersâ work. Not just on The Godfather and Alf, but on more than 53,000 other movies and 85,000 other TV episodes: Dialogue from all of it is included in an AI-training data set that has been used by Apple, Anthropic, Meta, Nvidia, Salesforce, Bloomberg, and other companies. I recently downloaded this data set, which I saw referenced in papers about the development of various large language models (or LLMs). It includes writing from every film nominated for Best Picture from 1950 to 2016, at least 616 episodes of The Simpsons, 170 episodes of Seinfeld, 45 episodes of Twin Peaks, and every episode of The Wire, The Sopranos, and Breaking Bad. It even includes prewritten âliveâ dialogue from Golden Globes and Academy Awards broadcasts. If a chatbot can mimic a crime-show mobster or a sitcom alienâor, more pressingly, if it can piece together whole shows that might otherwise require a room of writersâdata like this are part of the reason why.
[Read: These 183,000 books are fueling the biggest fight in publishing and tech]
The files within this data set are not scripts, exactly. Rather, they are subtitles taken from a website called OpenSubtitles.org. Users of the site typically extract subtitles from DVDs, Blu-ray discs, and internet streams using optical-character-recognition (OCR) software. Then they upload the results to OpenSubtitles.org, which now hosts more than 9 million subtitle files in more than 100 languages and dialects. Though this may seem like a strange source for AI-training data, subtitles are valuable because theyâre a raw form of written dialogue. They contain the rhythms and styles of spoken conversation and allow tech companies to expand generative AIâs repertoire beyond academic texts, journalism, and novels, all of which have also been used to train these programs. Well-written speech is a rare commodity in the world of AI-training data, and it may be especially valuable for training chatbots to âspeakâ naturally.
According to research papers, the subtitles have been used by Anthropic to train its ChatGPT competitor, Claude; by Meta to train a family of LLMs called Open Pre-trained Transformer (OPT); by Apple to train a family of LLMs that can run on iPhones; and by Nvidia to train a family of NeMo Megatron LLMs. It has also been used by Salesforce, Bloomberg, EleutherAI, Databricks, Cerebras, and various other AI developers to build at least 140 open-source models distributed on the AI-development hub Hugging Face. Many of these models could potentially be used to compete with human writers, and theyâre built without permission from those writers.
When I reached out to Anthropic for this article, the company did not provide a comment on the record. When Iâve previously spoken with Anthropic about its use of this data set, a spokesperson told me the company had âtrained our generative-AI assistant Claude on the public dataset The Pile,â of which OpenSubtitles is a part, and âwhich is commonly used in the industry.â A Salesforce spokesperson told me that although the company has used OpenSubtitles in generative-AI development, the data set âwas never used to inform or enhance any of Salesforceâs product offerings.â Apple similarly told me that its small LLM was intended only for research. However, both Salesforce and Apple, like other AI developers, have made their models available for developers to use in any number of different contexts. All other companies mentioned in this articleâNvidia, Bloomberg, EleutherAI, Databricks, and Cerebrasâeither declined to comment or did not respond to requests for comment.
You may search through the data set using the tool below.
Two years after the release of ChatGPT, it may not be surprising that creative work is used without permission to power AI products. Yet the notion remains disturbing to many artists and professionals who feel that their craft and livelihoods are threatened by programs. Transparency is generally low: Tech companies tend not to advertise whose work they use to train their products. The legality of training on copyrighted work also remains an open question. Numerous lawsuits have been brought against tech companies by writers, actors, artists, and publishers alleging that their copyrights have been violated in the AI-training process: As Breaking Badâs creator, Vince Gilligan, wrote to the U.S. Copyright Office last year, generative AI amounts to âan extraordinarily complex and energy-intensive form of plagiarism.â Tech companies have argued that training AI systems on copyrighted work is âfair use,â but a court has yet to rule on this claim. In the language of copyright law, subtitles are likely considered derivative works, and a court would generally see them as protected by the same rules against copying and distribution as the movies theyâre taken from. The OpenSubtitles data set has circulated among AI developers since 2020. It is part of the Pile, a collection of data sets for training generative AI. The Pile also includes text from books, patent applications, online discussions, philosophical papers, YouTube-video subtitles, and more. Itâs an easy way for companies to start building AI systems without having to find and download the many gigabytes of high-quality text that LLMs require.
[Read: Generative AI is challenging a 234-year-old law]
OpenSubtitles can be downloaded by anyone who knows where to look, but as with most AI-training data sets, itâs not easy to understand whatâs in it. Itâs a 14-gigabyte text file with short lines of unattributed dialogueâmeaning the speaker is not identified. Thereâs no way to tell where one movie ends and the next begins, let alone what the movies are. I downloaded a ârawâ version of the data set, in which the movies and episodes were separated into 446,612 files and stored in folders whose names corresponded to the ID numbers of movies and episodes listed on IMDb.com. Most folders contained multiple subtitle versions of the same movie or TV show (different releases may be tweaked in various ways), but I was able to identify at least 139,000 unique movies and episodes. I downloaded metadata associated with each title from the OpenSubtitles.org websiteâallowing me to map actors and directors to each title, for instanceâand used it to build the tool above.
The OpenSubtitles data set adds yet another wrinkle to a complex narrative around AI, in which consent from artists and even the basic premise of the technology are points of contention. Until very recently, no writer putting pen to paper on a script would have thought their creative work might be used to train programs that could replace them. And the subtitles themselves were not originally intended for this purpose, either. The multilingual OpenSubtitles data set contained subtitles in 62 different languages and 1,782 language-pair combinations: It is meant for training the models behind apps such as Google Translate and DeepL, which can be used to translate websites, street signs in a foreign country, or an entire novel. Jörg Tiedemann, one of the data setâs creators, wrote in an email that he was happy to see OpenSubtitles being used in LLM development, too, even though that was not his original intention.
He is, in any case, powerless to stop it. The subtitles are on the internet, and thereâs no telling how many independent generative-AI programs theyâve been used for, or how much synthetic writing those programs have produced. But now, at least, we know a bit more about who is caught in the machinery. What will the world decide they are owed?