The Atlantic - Technology

The Atlantic's technology section provides deep insights into the tech giants, social media trends, and the digital culture shaping our world.

October 4, 2024  18:12:39

Terence Tao, a mathematics professor at UCLA, is a real-life superintelligence. The “Mozart of Math,” as he is sometimes called, is widely considered the world’s greatest living mathematician. He has won numerous awards, including the equivalent of a Nobel Prize for mathematics, for his advances and proofs. Right now, AI is nowhere close to his level.

But technology companies are trying to get it there. Recent, attention-grabbing generations of AI—even the almighty ChatGPT—were not built to handle mathematical reasoning. They were instead focused on language: When you asked such a program to answer a basic question, it did not understand and execute an equation or formulate a proof, but instead presented an answer based on which words were likely to appear in sequence. For instance, the original ChatGPT can’t add or multiply, but has seen enough examples of algebra to solve x + 2 = 4: “To solve the equation x + 2 = 4, subtract 2 from both sides …” Now, however, OpenAI is explicitly marketing a new line of “reasoning models,” known collectively as the o1 series, for their ability to problem-solve “much like a person” and work through complex mathematical and scientific tasks and queries. If these models are successful, they could represent a sea change for the slow, lonely work that Tao and his peers do.

[Read: OpenAI’s big reset]

After I saw Tao post his impressions of o1 online—he compared it to a “mediocre, but not completely incompetent” graduate student—I wanted to understand more about his views on the technology’s potential. In a Zoom call last week, he described a kind of AI-enabled, “industrial-scale mathematics” that has never been possible before: one in which AI, at least in the near future, is not a creative collaborator in its own right so much as a lubricant for mathematicians’ hypotheses and approaches. This new sort of math, which could unlock terrae incognitae of knowledge, will remain human at its core, embracing how people and machines have very different strengths that should be thought of as complementary rather than competing.

This conversation has been edited for length and clarity.


Matteo Wong: What was your first experience with ChatGPT?

Terence Tao: I played with it pretty much as soon as it came out. I posed some difficult math problems, and it gave pretty silly results. It was coherent English, it mentioned the right words, but there was very little depth. Anything really advanced, the early GPTs were not impressive at all. They were good for fun things—like if you wanted to explain some mathematical topic as a poem or as a story for kids. Those are quite impressive.

Wong: OpenAI says o1 can “reason,” but you compared the model to “a mediocre, but not completely incompetent” graduate student.

Tao: That initial wording went viral, but it got misinterpreted. I wasn’t saying that this tool is equivalent to a graduate student in every single aspect of graduate study. I was interested in using these tools as research assistants. A research project has a lot of tedious steps: You may have an idea and you want to flesh out computations, but you have to do it by hand and work it all out.

Wong: So it’s a mediocre or incompetent research assistant.

Tao: Right, it’s the equivalent, in terms of serving as that kind of an assistant. But I do envision a future where you do research through a conversation with a chatbot. Say you have an idea, and the chatbot goes with it and fills out all the details.

It’s already happening in some other areas. AI famously conquered chess years ago, but chess is still thriving today, because it’s now possible for a reasonably good chess player to speculate what moves are good in what situations, and they can use the chess engines to check 20 moves ahead. I can see this sort of thing happening in mathematics eventually: You have a project and ask, “What if I try this approach?” And instead of spending hours and hours actually trying to make it work, you guide a GPT to do it for you.

With o1, you can kind of do this. I gave it a problem I knew how to solve, and I tried to guide the model. First I gave it a hint, and it ignored the hint and did something else, which didn’t work. When I explained this, it apologized and said, “Okay, I’ll do it your way.” And then it carried out my instructions reasonably well, and then it got stuck again, and I had to correct it again. The model never figured out the most clever steps. It could do all the routine things, but it was very unimaginative.

One key difference between graduate students and AI is that graduate students learn. You tell an AI its approach doesn’t work, it apologizes, it will maybe temporarily correct its course, but sometimes it just snaps back to the thing it tried before. And if you start a new session with AI, you go back to square one. I’m much more patient with graduate students because I know that even if a graduate student completely fails to solve a task, they have potential to learn and self-correct.

Wong: The way OpenAI describes it, o1 can recognize its mistakes, but you’re saying that’s not the same as sustained learning, which is what actually makes mistakes useful for humans.

Tao: Yes, humans have growth. These models are static—the feedback I give to GPT-4 might be used as 0.00001 percent of the training data for GPT-5. But that’s not really the same as with a student.

AI and humans have such different models for how they learn and solve problems—I think it’s better to think of AI as a complementary way to do tasks. For a lot of tasks, having both AIs and humans doing different things will be most promising.

Wong: You’ve also said previously that computer programs might transform mathematics and make it easier for humans to collaborate with one another. How so? And does generative AI have anything to contribute here?

Tao: Technically they aren’t classified as AI, but proof assistants are useful computer tools that check whether a mathematical argument is correct or not. They enable large-scale collaboration in mathematics. That’s a very recent advent.

Math can be very fragile: If one step in a proof is wrong, the whole argument can collapse. If you make a collaborative project with 100 people, you break your proof in 100 pieces and everybody contributes one. But if they don’t coordinate with one another, the pieces might not fit properly. Because of this, it’s very rare to see more than five people on a single project.

With proof assistants, you don’t need to trust the people you’re working with, because the program gives you this 100 percent guarantee. Then you can do factory production–type, industrial-scale mathematics, which doesn’t really exist right now. One person focuses on just proving certain types of results, like a modern supply chain.

The problem is these programs are very fussy. You have to write your argument in a specialized language—you can’t just write it in English. AI may be able to do some translation from human language to the programs. Translating one language to another is almost exactly what large language models are designed to do. The dream is that you just have a conversation with a chatbot explaining your proof, and the chatbot would convert it into a proof-system language as you go.
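To make concrete what that specialized language looks like, here is a minimal, illustrative sketch in Lean, one widely used proof assistant. The statement and proof are a toy example of my own (the claim that the sum of two even numbers is even), not anything from Tao’s work; the point is only to show the kind of fully explicit text a proof assistant demands, and that a chatbot might one day generate from plain English.

-- Toy example (illustrative only): "the sum of two even numbers is even,"
-- stated and proved in Lean 4 syntax. Every quantifier, witness, and
-- rewriting step must be spelled out for the checker.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- witness: a + b; then rewrite m and n and distribute 2 * (a + b)
    ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩

Once an argument is written this way, the program certifies every step, which is what makes the 100 percent guarantee, and the large-scale collaboration, possible.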

Wong: So the chatbot isn’t a source of knowledge or ideas, but a way to interface.

Tao: Yes, it could be a really useful glue.

Wong: What are the sorts of problems that this might help solve?

Tao: The classic idea of math is that you pick some really hard problem, and then you have one or two people locked away in the attic for seven years just banging away at it. The types of problems you want to attack with AI are the opposite. The naive way you would use AI is to feed it the most difficult problem that we have in mathematics. I don’t think that’s going to be super successful, and also, we already have humans that are working on those problems.

The type of math that I’m most interested in is math that doesn’t really exist. The project that I launched just a few days ago is about an area of math called universal algebra, which is about whether certain mathematical statements or equations imply that other statements are true. The way people have studied this in the past is that they pick one or two equations and they study them to death, like how a craftsperson used to make one toy at a time, then work on the next one. Now we have factories; we can produce thousands of toys at a time. In my project, there’s a collection of about 4,000 equations, and the task is to find connections between them. Each is relatively easy, but there are a million implications. There are maybe 10 points of light, 10 equations among these thousands that have been studied reasonably well, and then there’s this whole terra incognita.
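To give a sense of what one of these implications looks like, here is a deliberately simple example of my own, not one taken from the project’s list: for a binary operation $\circ$ on a set,

\[
\bigl(\forall x,y:\ x \circ y = y\bigr) \;\Longrightarrow\; \bigl(\forall x,y,z:\ x \circ (y \circ z) = (x \circ y) \circ z\bigr),
\]

because if the operation always returns its second argument, then both sides of the associative law reduce to $z$. Each such question is routine; the scale of the project comes from how many pairs of equations there are to check.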

[Read: Science is becoming less human]

There are other fields where this transition has happened, like in genetics. It used to be that if you wanted to sequence a genome of an organism, this was an entire Ph.D. thesis. Now we have these gene-sequencing machines, and so geneticists are sequencing entire populations. You can do different types of genetics that way. Instead of narrow, deep mathematics, where an expert human works very hard on a narrow scope of problems, you could have broad, crowdsourced problems with lots of AI assistance that are maybe shallower, but at a much larger scale. And it could be a very complementary way of gaining mathematical insight.

Wong: It reminds me of how an AI program made by Google DeepMind, called AlphaFold, figured out how to predict the three-dimensional structure of proteins, which was for a long time something that had to be done one protein at a time.

Tao: Right, but that doesn’t mean protein science is obsolete. You have to change the problems you study. A hundred and fifty years ago, mathematicians’ primary usefulness was in solving partial differential equations. There are computer packages that do this automatically now. Six hundred years ago, mathematicians were building tables of sines and cosines, which were needed for navigation, but these can now be generated by computers in seconds.

I’m not super interested in duplicating the things that humans are already good at. It seems inefficient. I think at the frontier, we will always need humans and AI. They have complementary strengths. AI is very good at converting billions of pieces of data into one good answer. Humans are good at taking 10 observations and making really inspired guesses.

October 4, 2024  17:27:16

OpenAI announced this week that it has raised $6.6 billion in new funding and that the company is now valued at $157 billion overall. This is quite a feat for an organization that reportedly burns through $7 billion a year—far more cash than it brings in—but it makes sense when you realize that OpenAI’s primary product isn’t technology. It’s stories.

Case in point: Last week, CEO Sam Altman published an online manifesto titled “The Intelligence Age.” In it, he declares that the AI revolution is on the verge of unleashing boundless prosperity and radically improving human life. “We’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI,” he writes. Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics. He predicts that we may have an all-powerful superintelligence “in a few thousand days.” All we have to do is feed his technology enough energy, enough data, and enough chips.

Maybe someday Altman’s ideas about AI will prove out, but for now, his approach is textbook Silicon Valley mythmaking. In these narratives, humankind is forever on the cusp of a technological breakthrough that will transform society for the better. The hard technical problems have basically been solved—all that’s left now are the details, which will surely be worked out through market competition and old-fashioned entrepreneurship. Spend billions now; make trillions later! This was the story of the dot-com boom in the 1990s, and of nanotechnology in the 2000s. It was the story of cryptocurrency and robotics in the 2010s. The technologies never quite work out like the Altmans of the world promise, but the stories keep regulators and regular people sidelined while the entrepreneurs, engineers, and investors build empires. (The Atlantic recently entered a corporate partnership with OpenAI.)

[Read: AI doomerism is a decoy]

Despite the rhetoric, Altman’s products currently feel less like a glimpse of the future and more like the mundane, buggy present. ChatGPT and DALL-E were cutting-edge technology in 2022. People tried the chatbot and image generator for the first time and were astonished. Altman and his ilk spent the following year speaking in stage whispers about the awesome technological force that had just been unleashed upon the world. Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution. Those six months came and went. OpenAI and its competitors have released other models since then, and although tech wonks have dug into their purported advancements, for most people, the technology appears to have plateaued. GPT-4 now looks less like the precursor to an all-powerful superintelligence and more like … well, any other chatbot.

The technology itself seems much smaller once the novelty wears off. You can use a large language model to compose an email or a story—but not a particularly original one. The tools still hallucinate (meaning they confidently assert false information). They still fail in embarrassing and unexpected ways. Meanwhile, the web is filling up with useless “AI slop,” LLM-generated trash that costs practically nothing to produce and generates pennies of advertising revenue for the creator. We’re in a race to the bottom that everyone saw coming and no one is happy with. At the same time, the search for product-market fit at a scale that would justify all the inflated tech-company valuations keeps coming up short. Even OpenAI’s latest release, o1, was accompanied by a caveat from Altman that “it still seems more impressive on first use than it does after you spend more time with it.”

In Altman’s rendering, this moment in time is just a waypoint, “the doorstep of the next leap in prosperity.” He still argues that the deep-learning technique that powers ChatGPT will effectively be able to solve any problem, at any scale, so long as it has enough energy, enough computational power, and enough data. Many computer scientists are skeptical of this claim, maintaining that multiple significant scientific breakthroughs stand between us and artificial general intelligence. But Altman projects confidence that his company has it all well in hand, that science fiction will soon become reality. He may need $7 trillion or so to realize his ultimate vision—not to mention unproven fusion-energy technology—but that’s peanuts when compared with all the advances he is promising.

There’s just one tiny problem, though: Altman is no physicist. He is a serial entrepreneur, and quite clearly a talented one. He is one of Silicon Valley’s most revered talent scouts. If you look at Altman’s breakthrough successes, they all pretty much revolve around connecting early start-ups with piles of investor cash, not any particular technical innovation.

[Read: OpenAI takes its mask off]

It’s remarkable how similar Altman’s rhetoric sounds to that of his fellow billionaire techno-optimists. The project of techno-optimism, for decades now, has been to insist that if we just have faith in technological progress and free the inventors and investors from pesky regulations such as copyright law and deceptive marketing, then the marketplace will work its magic and everyone will be better off. Altman has made nice with lawmakers, insisting that artificial intelligence requires responsible regulation. But the company’s response to proposed regulation seems to be “no, not like that.” Lord, grant us regulatory clarity—but not just yet.

At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future so we don’t get too caught up in the underwhelming details of the present. Why focus on how AI is being used to harass and exploit children when you can imagine the ways it will make your life easier? It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.

Remember, these technologies already have a track record. The world can and should evaluate them, and the people building them, based on their results and their effects, not solely on their supposed potential.

October 4, 2024  00:57:52

For years, Donald Trump has taken seemingly every opportunity to attack electric vehicles. They will cause a “bloodbath” for the auto industry, he told Ohio crowds in March. “The damn things don’t go far enough, and they’re too expensive,” he declared last September. EVs are a “ridiculous Green New Deal crusade,” he said a few months earlier. “Where do I get a charge, darling?” he mocked in 2019.

But of late, the former president hasn’t quite sounded like his usual self. At the Republican National Convention in July, Trump said he is “all for electric [vehicles]. They have their application.” At a rally on Long Island last month, he brought up EVs during a winding rant. “I think they’re incredible,” he said of the cars, twice. To hear Trump tell it, the flip came at the bidding of Tesla CEO Elon Musk: “I’m for electric cars—I have to be,” he said in August, “because Elon endorsed me very strongly.” Not that Trump is unambiguously praising plug-in vehicles: He still opposes incentives to boost EV sales, which he repeated at his Long Island rally. The crowd erupted in cheers.

In America, driving green remains a blue phenomenon. Many Republicans in Congress have rejected EVs, with one senator calling them “left-wing lunacy” and part of Democrats’ “blind faith in the climate religion.” The GOP rank and file is also anti-EV. In 2022, roughly half of new EVs in America were registered in the deepest-blue counties, according to a recent analysis from UC Berkeley. That likely hasn’t changed since: A Pew survey conducted this May found that 45 percent of Democrats are at least somewhat likely to buy an EV the next time they purchase a vehicle, compared with 13 percent of Republicans.

If anyone can persuade Republican EV skeptics, it should be Trump—when he talks, his party listens. During the pandemic, his support for unproven COVID therapies was linked to increased interest in and purchases of those medications; his followers have rushed to buy his Trump-branded NFTs, watches, and sneakers. But when it comes to EVs, Trump’s apparent change of heart might not be enough to spur many Republicans to go electric: His followers’ beliefs may be too complex and deep-rooted for Trump himself to overturn.

EVs were destined for the culture wars. “When we buy a car, the model and the brand that we choose also represents a statement to our neighbors, to the public, of who we are,” Loren McDonald, an EV consultant, told me. Like the Toyota Prius in years prior, zero-emission electric cars are an easy target for Republicans who have long railed against climate change, suggesting that it’s not real, or not human-caused, or not a serious threat. EVs have been “construed as an environmental and liberal object,” Nicole Sintov, an environmental psychologist at Ohio State University who studies EV adoption, told me. Her research suggests that the cars’ perceived links to environmental benefits, social responsibility, and technological innovation might attract Democrats to them. Meanwhile, most people “don’t want to be seen doing things that their out-group does,” Sintov said, which could turn Republicans away from EVs.

Republicans’ hesitance to drive an EV is remarkably strong and sustained. The Berkeley analysis, for instance, found that the partisan divide in new EV registrations showed up not only in 2022 but also in 2021, in 2020, and in every year back to 2012, when the analysis began. It remains even after controlling for income and other pragmatic factors that might motivate or dissuade people from buying an EV, Lucas Davis, a Berkeley economist and one of the authors, told me.

All of this suggests that Trump’s flip-flop has at least the potential to “go a long way toward boosting favorability” of electric cars among Republicans, Joe Sacks, the executive director of the EV Politics Project, an advocacy group aiming to get Republicans to purchase EVs, told me. If you squint, there are already signs of changing opinions, perhaps brought on more by Musk than by the former president. After Musk’s own public swing to the far right, a majority of Republicans say he is a good ambassador for EVs, according to the EV Politics Project’s polling. Tucker Carlson began a recent review of the Tesla Cybertruck by saying that “the global-warming cult is going to force us all to drive electric vehicles,” but admitted, at the end, that it was fun to get behind the wheel. Adin Ross, an internet personality popular with young right-leaning men, recently gave Trump a Cybertruck with a custom vinyl wrap of the former president raising his fist moments after the assassination attempt in Pennsylvania. “I think it’s incredible,” Trump reacted.

But ideology might not account entirely for Republican opposition to EVs. The other explanation for the partisan gap is that material concerns with EVs—such as their cost, range, or limited charging infrastructure—happen to be a bigger issue for Republican voters than for Democrats. The bluest areas, for instance, tend to have high incomes, gasoline taxes, and population density, all of which might encourage EV purchases. EVs typically have higher sticker prices than their gas-powered counterparts, and in urban areas, people generally have to drive less, ameliorating some of the “range anxiety” that has dogged electric cars. Consider California, which accounts for more than a third of EVs in the U.S. Climate-conscious liberals in San Francisco may be seeking out EVs, but that’s not the whole story. The state government has heavily promoted driving electric, public chargers are abundant, and California has the highest gas prices in the country.

The opposite is true in many red states. For instance, many Republicans live in the South and Upper Midwest, especially in more rural areas. That might appear to account for the low EV sales in these areas, but residents also might have longer commutes, pay less for gas, and live in a public-charging desert, McDonald told me. California has more than 47,000 public charging stations, or 1.2 stations per 1,000 people; South Dakota has 265 public chargers, or less than 0.3 per 1,000 residents. “If you part all of the politics, at the end of the day I think the nonpolitical things are going to outweigh people’s decisions,” he said. “Can I afford it? Does it fit my lifestyle? Do I have access to charging?” In relatively conservative Orange County, California, 27 percent of new passenger vehicles sold this year were fully electric—higher than statewide, and higher than the adjacent, far bluer Los Angeles County.

Indeed, after the Berkeley researchers adjusted for pragmatic considerations, the statistical correlation between political ideology and new EV registrations remained strong, but decreased by 30 percent. Various other research concurs that political discord isn’t the only thing behind EVs’ partisan divide: In her own analyses, Sintov wrote to me over email, the effect of political affiliation on EV attitudes was on par with that of “perceived maintenance and fuel costs, charging convenience, and income.” McDonald’s own research has found that fuel costs and income are stronger predictors than political views. In other words, partisanship could be the “icing on the cake” for someone’s decision, McDonald said, rather than the single reason Democrats are going electric and Republicans are not.

From the climate’s perspective, Trump’s EV waffling is certainly better than the alternative. But his new tack on EVs is unclear, and it doesn’t speak to conservatives’ specific concerns, whether pragmatic or ideological. As a result, Trump is unlikely to change many minds, Jon Krosnick, a social psychologist at Stanford who researches public opinions on climate change, told me. Teslas are a “great product,” Trump has said, but not a good fit for many, perhaps even most, Americans. He’s “all for” EVs, except that they’re ruining America’s economy. “Voters who are casually observing this are pretty confused about where he is, because it is inconsistent,” Sacks said. But they know where the rest of the party firmly stands: Gas cars are better.

Perhaps most consequential about Trump’s EV comments is what the former president hasn’t changed his mind on. By continuing to say that he wants to repeal the Biden administration’s EV incentives, Trump could further entrench EV skeptics of all political persuasions. The best way to persuade Republicans to buy a Tesla or a Ford F-150 Lightning might simply be to make doing so easier and cheaper: offering tax credits, building public charging stations, training mechanics to fix these new cars. Should he win, Trump just might do the opposite.

October 4, 2024  00:56:05

Fifteen years ago, an Apple ad campaign issued a paean to the triumph of the smartphone: There’s an app for that, it said. Today, that message sounds less like a promise than a threat. There’s an app for that? If only there weren’t.

Apps are all around us now. McDonald’s has an app. Dunkin’ has an app. Every chain restaurant has an app. Every food-delivery service too: Grubhub, Uber Eats, DoorDash, Chowbus. Every supermarket and big-box store. I currently have 139 apps on my phone. These include: Menards, Home Depot, Lowe’s, Joann Fabric, Dierbergs, Target, IKEA, Walmart, Whole Foods. I recently re-downloaded the Michaels app while I was in the Michaels checkout line just so I could apply a $5 coupon that the register failed to read from the app anyway.

Even when you don’t have a store-specific app, your apps will let you pay by app. You just need to figure out (or remember, if you ever knew) whether your gardener or your hair salon takes Venmo, Cash App, PayPal, or one of the new bank-provided services such as Zelle and Paze.

It’s enough to drive you crazy, which is a process you can also track with apps for mental health, such as Headspace and Calm. Lots of apps are aiming to help you feel your best. My iPhone comes with Apple Health, but you might also find yourself with Garmin or Strava or maybe Peloton if you’re into that, or whichever app you need to scan into your local gym, or Under Armour, a polyester-shirt app that is also a jogging app. The MyChart app may help you reach a subset of your doctors and check a portion of your medical-test results. As for the rest? Different apps!

The tree of apps is always growing, always sending out its seeds. I have an app for every airline I have ever flown. And in every place I ever go, I use fresh apps to get around. In New York, I scan into the subway using just my phone, but the subway app tells me which lines are out of service. For D.C., I have the SmarTrip app. At home, in St. Louis, I have a physical pass for the Metrolink, but if I want to buy a ticket for my kid, I need to use the Transit app. For hiring a car, I’ve got the Uber app, which works almost anywhere, but I also have the app for Lyft, and Curb for taxis, just in case. Also, parking: I have ParkMobile, PayByPhone, and one other app whose name I can’t keep straight because it doesn’t sound like a parking app. (The app is called Passport. It took me many minutes of browsing on my phone to figure that out.)

If you’ve got kids, you’ll know they are the Johnny Appleseeds of pointless apps. An app may connect you to their school for accessing their schoolwork or connecting to their teachers; only thing is, you might be assigned a different app each year, or different apps for different kids in different classes. It could be Class Dojo, Brightwheel, Bloomz, or TalkingPoints. It could be ClassLink, SchoolStatus, or PowerSchool. The school bus might also have an app, so you can track it. And if your kids play sports, God help you. A friend has an app, SportsEngine, that describes itself as “the one app that does it all.” And yet, she has several more youth-sports apps on top of that.

Let’s talk about the office. Yes, there’s an app for that. There are a thousand apps for that. Google Docs has an app, as do Google Sheets, Slides, Mail, and Search. Microsoft is highly app-enabled, with separate apps for Outlook, Word, and Excel. Then, of course, you’ve got the groupware apps that allow you to coordinate with colleagues, such as Slack, Teams, Zoho, and Pumble. And the office-infrastructure apps that your employer may be using to, you know, make your job easier: Workday, Salesforce, Notion, Zendesk, Jira, Box, Loom, Okta.

[Read: The app that monetized doing nothing]

And what about all the other apps that I haven’t yet brought up, the ones that may now be cluttering your phone? What about Doova, Nork, PingPong, and Genzillo? Those are not actually apps (as far as I’m aware), but we all know that they could be, which is my point. Apps are now so numerous, and so ubiquitous, that they’ve become a form of nonsense.

Their premise is, of course, quite reasonable. Apps replaced clunky mobile websites with something clean and custom-made. They helped companies forge more direct connections with their customers, especially once push notifications came on the scene. They also made new kinds of services possible, such as geolocating nearby shops or restaurants, and camera-scanning your items for self-checkout. Apps could serve as branding too, because their icons—which are also business logos—were sitting on your smartphone screen. And apps allowed companies to collect a lot more data about their customers than websites ever did, including users’ locations, contacts, calendars, health information, and what other apps they might use and how often.

By 2021, when Apple started taking steps to curtail that data harvest, the app economy was already well established. Smartphones had become so widespread that companies could assume any customer probably had one. That meant they could use their apps to off-load effort. Instead of printing boarding passes, Delta or American Airlines encouraged passengers to use their apps. At Ikea, customers could prepay for items in the app and speed through checkout. At Chipotle or Starbucks, an app allowed each customer to specify exactly which salsa or what kind of milk they wanted without holding people up. An apartment building that adopted a laundry app (ShinePay, LaundryView, WASH-Connect, etc.) spared itself the trouble of managing payments at its machines.

In other words, apps became bureaucratized. What started as a source of fun, efficiency, and convenience became enmeshed in daily life. Now it seems like every ordinary activity has been turned into an app, while the benefit of those apps has diminished.

Parking apps offer one example of this transformation. Back before ParkMobile and its ilk, you might still have had to drop coins into a street meter. Some of those meters had credit-card readers, but you couldn’t count on finding one (or one that worked). Parking apps did away with these annoyances. They could also remind you when your time was up and, in some cases, allow you to extend your parking session remotely. Everyone seemed to win: individuals, businesses, municipalities, and, of course, the app-driven services taking their cut. But like everything, app parking grew creaky as it aged. Different parking apps took over in different places as cities chose the vendors that gave them the best deals. These days, I use ParkMobile in some parts of town and Passport in others, a detail about the world I must keep in mind if I want to station my vehicle within it. The apps themselves became more complex too, burdened by greater customization and control at the user and municipal level. Sometimes I can use Apple Pay to park with ParkMobile; other times I can’t. Street signage has changed or vanished, so now I find myself relying on the app to determine whether I even have to pay after 6 p.m. on a weekday. (Confusingly, sometimes an app will say that parking is unavailable when it really means that payment is unavailable—because payment isn’t required.) The apps sometimes sign me out, and then I have to use my password-manager app just to log back in. Or, worse, my phone might have “off-loaded” whichever parking app I need because I haven’t used it in a while, such that I have to re-download it before leaving the car.

Similar frustrations play out across many of the apps that one can—or must—use to live a normal life. Even activities that once seemed simple may get you stuck inside a thicket of competing apps. I used to open the Hulu app to watch streaming content on Hulu—an app equivalent of an old television channel. Recently, Hulu became a part of Disney+, so I now watch Hulu via the Disney+ app instead. When HBO introduced a premium service, I got the HBO Go app so I could stream its shows. Then HBO became HBO Max, and I got that app, before HBO Max turned into Max, a situation so knotty that HBO had to publish an FAQ about it.

I’d like to think that this hellscape is a temporary one. As the number of apps multiplies beyond all logic or utility, won’t people start resisting them? And if platform owners such as Apple ratchet up their privacy restrictions, won’t businesses adjust? Don’t count on it. Our app-ocalypse is much too far along already. Every crevice of contemporary life has been colonized. At every branch in your life, and with each new responsibility, apps will keep sprouting from your phone. You can’t escape them. You won’t escape them, not even as you die, because—of course—there’s an app for that too.

October 2, 2024  20:45:17

This past spring, a man in Washington State worried that his marriage was on the verge of collapse. “I am depressed and going a little crazy, still love her and want to win her back,” he typed into ChatGPT. With the chatbot’s help, he wanted to write a letter protesting her decision to file for divorce and post it to their bedroom door. “Emphasize my deep guilt, shame, and remorse for not nurturing and being a better husband, father, and provider,” he wrote. In another message, he asked ChatGPT to write his wife a poem “so epic that it could make her change her mind but not cheesy or over the top.”

The man’s chat history was included in the WildChat data set, a collection of 1 million ChatGPT conversations gathered consensually by researchers to document how people are interacting with the popular chatbot. Some conversations are filled with requests for marketing copy and homework help. Others might make you feel as if you’re gazing into the living rooms of unwitting strangers. Here, the most intimate details of people’s lives are on full display: A school case manager reveals details of specific students’ learning disabilities, a minor frets over possible legal charges, a girl laments the sound of her own laugh.

People share personal information about themselves all the time online, whether in Google searches (“best couples therapists”) or Amazon orders (“pregnancy test”). But chatbots are uniquely good at getting us to reveal details about ourselves. Common usages, such as asking for personal advice and résumé help, can expose more about a user “than they ever would have to any individual website previously,” Peter Henderson, a computer scientist at Princeton, told me in an email. For AI companies, your secrets might turn out to be a gold mine.

Would you want someone to know everything you’ve Googled this month? Probably not. But whereas most Google queries are only a few words long, chatbot conversations can stretch on, sometimes for hours, each message rich with data. And with a traditional search engine, a query that’s too specific won’t yield many results. By contrast, the more information a user includes in any one prompt to a chatbot, the better the answer they will receive. As a result, alongside text, people are uploading sensitive documents, such as medical reports, and screenshots of text conversations with their ex. With chatbots, as with search engines, it’s difficult to verify how perfectly each interaction represents a user’s real life. The man in Washington might have just been messing around with ChatGPT.

But on the whole, users are disclosing real things about themselves, and AI companies are taking note. OpenAI CEO Sam Altman recently told my colleague Charlie Warzel that he has been “positively surprised about how willing people are to share very personal details with an LLM.” In some cases, he added, users may even feel more comfortable talking with AI than they would with a friend. There’s a clear reason for this: Computers, unlike humans, don’t judge. When people converse with one another, we engage in “impression management,” says Jonathan Gratch, a professor of computer science and psychology at the University of Southern California—we intentionally regulate our behavior to hide weaknesses. People “don’t see the machine as sort of socially evaluating them in the same way that a person might,” he told me.

Of course, OpenAI and its peers promise to keep your conversations secure. But on today’s internet, privacy is an illusion. AI is no exception. This past summer, a bug in ChatGPT’s Mac-desktop app left user conversations unencrypted and briefly exposed chat logs to bad actors. Last month, a security researcher shared a vulnerability that could have allowed attackers to inject spyware into ChatGPT in order to extract conversations. (OpenAI has fixed both issues.)

Chat logs could also provide evidence in criminal investigations, just as material from platforms such as Facebook and Google Search long has. The FBI tried to discern the motive of the Donald Trump–rally shooter by looking through his search history. When former Senator Robert Menendez of New Jersey was charged with accepting gold bars from associates of the Egyptian government, his search history was a major piece of evidence that led to his conviction earlier this year. (“How much is one kilo of gold worth,” he had searched.) Chatbots are still new enough that they haven’t widely yielded evidence in lawsuits, but they might provide a much richer source of information for law enforcement, Henderson said.

AI systems also present new risks. Chatbot conversations are commonly retained by the companies that develop them and are then used to train AI models. Something you reveal to an AI tool in confidence could theoretically later be regurgitated to future users. Part of The New York Times’ lawsuit against OpenAI hinges on the claim that GPT-4 memorized passages from Times stories and then relayed them verbatim. As a result of this concern over memorization, many companies have banned ChatGPT and other bots in order to prevent corporate secrets from leaking. (The Atlantic recently entered into a corporate partnership with OpenAI.)

Of course, these are all edge cases. The man who asked ChatGPT to save his marriage probably doesn’t have to worry about his chat history appearing in court; nor are his requests for “epic” poetry likely to show up alongside his name to other users. Still, AI companies are quietly accumulating tremendous amounts of chat logs, and their data policies generally let them do what they want. That may mean—what else?—ads. So far, many AI start-ups, including OpenAI and Anthropic, have been reluctant to embrace advertising. But these companies are under great pressure to prove that the many billions in AI investment will pay off. It’s hard to imagine that generative AI might “somehow circumvent the ad-monetization scheme,” Rishi Bommasani, an AI researcher at Stanford, told me.

In the short term, that could mean that sensitive chat-log data is used to generate targeted ads much like the ones that already litter the internet. In September 2023, Snapchat, which is used by a majority of American teens, announced that it would be using content from conversations with My AI, its in-app chatbot, to personalize ads. If you ask My AI, “Who makes the best electric guitar?,” you might see a response accompanied by a sponsored link to Fender’s website.

If that sounds familiar, it should. Early versions of AI advertising may continue to look much like the sponsored links that sometimes accompany Google Search results. But because generative AI has access to such intimate information, ads could take on completely new forms. Gratch doesn’t think technology companies have figured out how best to mine user-chat data. “But it’s there on their servers,” he told me. “They’ll figure it out some day.” After all, for a large technology company, even a 1 percent difference in a user’s willingness to click on an advertisement translates into a lot of money.

People’s readiness to offer up personal details to chatbots can also reveal aspects of users’ self-image and how susceptible they are to what Gratch called “influence tactics.” In a recent evaluation, OpenAI examined how effectively its latest series of models could manipulate an older model, GPT-4o, into making a payment in a simulated game. Before safety mitigations, one of the new models was able to successfully con the older one more than 25 percent of the time. If the new models can sway GPT-4, they might also be able to sway humans. An AI company blindly optimizing for advertising revenue could encourage a chatbot to manipulatively act on private information.

The potential value of chat data could also lead companies outside the technology industry to double down on chatbot development, Nick Martin, a co-founder of the AI start-up Direqt, told me. Trader Joe’s could offer a chatbot that assists users with meal planning, or Peloton could create a bot designed to offer insights on fitness. These conversational interfaces might encourage users to reveal more about their nutrition or fitness goals than they otherwise would. Instead of companies inferring information about users from messy data trails, users are telling them their secrets outright.

For now, the most dystopian of these scenarios are largely hypothetical. A company like OpenAI, with a reputation to protect, surely isn’t going to engineer its chatbots to swindle a divorced man in distress. Nor does this mean you should quit telling ChatGPT your secrets. In the mental calculus of daily life, the marginal benefit of getting AI to assist with a stalled visa application or a complicated insurance claim may outweigh the accompanying privacy concerns. This dynamic is at play across much of the ad-supported web. The arc of the internet bends toward advertising, and AI may be no exception.

It’s easy to get swept up in all the breathless language about the world-changing potential of AI, a technology that Google’s CEO has described as “more profound than fire.” That people are willing to so easily offer up such intimate details about their life is a testament to AI’s allure. But chatbots may become the latest innovation in a long lineage of advertising technology designed to extract as much information from you as possible. In this way, they are not a radical departure from the present consumer internet, but an aggressive continuation of it. Online, your secrets are always for sale.

October 2, 2024  20:43:42

When Hurricane Helene knocked out the power in Charlotte, North Carolina, on Friday, Dustin Baker, like many other people across the Southeast, turned to a backup power source. His just happened to be an electric pickup truck. Over the weekend, Baker ran extension cords from the back of his Ford F-150 Lightning, using the truck’s battery to keep his refrigerator and freezer running. It worked so well that Baker became an energy Good Samaritan. “I ran another extension cord to my neighbor so they could run two refrigerators they have,” he told me.

Americans in hurricane territory have long kept diesel-powered generators as a way of life, but electric cars are a leap forward. An EV, at its most fundamental level, is just a big battery on wheels that can be used to power anything, not only the car itself. Some EVs pack enough juice to power a whole home for several days, or a few appliances for even longer. In the aftermath of Helene, as millions of Americans were left without power, many EV owners did just that. A vet clinic that had lost power used an electric F-150 to keep its medicines cold and continue seeing patients during the blackout. One Tesla Cybertruck owner used his car to power his home after his entire neighborhood lost power.

This feature, known as bidirectional charging, has been largely invisible during America’s ramp-up to electric driving. Many of the most popular EVs in the United States, such as Tesla’s Model Y and Model 3, don’t have it. “It just wasn’t a priority at the time,” a Tesla executive said last year about why the cars lack the feature, though the newly released Cybertruck has bidirectional charging and the company plans to introduce it into its other vehicles in 2025. Bidirectional charging is hardly perfect: Connecting your car to your home requires thousands of dollars of add-on infrastructure, and may also mean pricey upgrades such as extra wiring or a new electrical panel. The Ford Charge Station Pro, which connects the all-electric Ford F-150 to the home’s electricity system, costs about $1,300.

But Hurricane Helene is revealing the enormous potential of bidirectional charging. A new EV doesn’t come cheap, of course, but it has plenty of clear upsides over a traditional generator. The latter usually burns diesel, giving off fumes that can kill people who don’t realize that it needs to be kept outdoors; an EV sits silently in the garage, producing zero emissions as it conquers a power outage, even a lengthy one. “I lost a total of about 7 percent of my capacity,” Baker said. “Doing the math, I estimated I could get almost 2 weeks of running my freezer and refrigerator.” Plus, there’s no need to join the hurricane rush to the gas station if your vehicle runs on electricity. In Asheville, which has been especially devastated by flooding, residents have struggled to find gas for their cars.

This resilience during power outages was a major reason Jamie Courtney, who lives in Prairieville, Louisiana, decided to go electric. When Hurricane Francine slammed Louisiana last month, Courtney hadn’t yet connected his Tesla Cybertruck to his home power supply. So, like Baker, he MacGyvered a fix: Courtney ran cords from the outlets in the truck’s bed into his house to power a variety of appliances during a blackout. “We were able to run my internet router and TV, [plus] lamps, refrigerator, a window AC unit, and fans, as well as several phone, watch, and laptop chargers,” he told me. Over the course of about 24 hours, he said, all of this activity ran his Cybertruck battery down from 99 percent to 80 percent.

As a new generation of EVs (including Teslas) comes standard with bidirectional charging, the feature may become a big part of the pitch for going electric. From a consumer’s point of view, energy has always moved in one direction. People buy gasoline from the service station and burn it; they buy electricity from the power company and use it. But in an electrified world in which cars, stoves, and heating systems run on electricity rather than on fossil fuels, ordinary people can be more than passive consumers of energy. Two-way charging is not just helpful during hurricanes—you might also use some of the energy to run a stereo or power tools by plugging them into the power outlets in the truck’s bed. People have even used EV pickup trucks to power their football tailgates.

Bidirectional charging may prove to be the secret weapon that sells electrification to the South, which has generally remained far behind the West and the Northeast in electric-vehicle purchases. If EVs become widely seen as the best option for blackouts, they could entice not just the climate conscious but also the suburban dads in hurricane country with a core belief in prepping for anything. It will take a lot to overcome the widespread distrust of EVs and anxiety about a new technology, but our loathing of power outages just might do the trick.

October 1, 2024  15:24:07

Last week, Mark Zuckerberg stood on a stage in California holding what appeared to be a pair of thick black eyeglasses. His baggy T-shirt displayed Latin text that seemed to compare him to Julius Caesar—aut Zuck aut nihil—and he offered a bold declaration: These are Orion, “the most advanced glasses the world has ever seen.”

Those glasses, just a prototype for now, allow users to take video calls, watch movies, and play games in so-called augmented reality, where digital imagery is overlaid on the real world. Demo videos at Meta Connect, the company’s annual conference, showed people playing Pong on the glasses, their hands functioning as paddles, as well as using the glasses to project a TV screen onto an otherwise blank wall. “A lot of people have said that this is the craziest technology they’ve ever seen,” Zuckerberg said. And although you will not be able to buy the glasses anytime soon, Meta is hawking much simpler products in the meantime: a new Quest headset and a new round of software updates to the company’s smart Ray-Bans, which have cameras and an AI audio assistant on board, but no screen in the lenses.

Orion seems like an attempt to fuse those two devices, bringing a fully immersive computerized experience into a technology that people might actually be comfortable putting on their face. And it is not, you may have noticed, the only smart-glasses product to have emerged in recent months. Amazon, Google, Apple, and Snap are all either officially working on some version of the technology or rumored to be doing so. Their implementations are each slightly different, but they point to a single idea: that the future is about integrating computing more seamlessly into everyday life.

Smartphones are no longer exciting, and the market for them has been declining for the past few years. The primary new idea there is foldable screens, which effectively allow your phone to turn into a tablet—though tablet sales have slowed too. The virtual-reality headsets that companies have spent billions developing aren’t being widely adopted.

These companies are betting big that people want to be able to check the weather without pulling out a smartphone—and that they are more willing to wear a pair of Ray-Bans with cameras than spend hours in the metaverse. And after years of false starts on the glasses front, they’re betting that AI—despite some high-profile flops—will be what finally helps them achieve this vision.


Tech companies have been working on smart frames for decades. The first real consumer smart glasses started appearing in the late 1980s and ’90s, but none broke through. At last, in 2013, Google released its infamous Glass eyewear. A thin metal frame with a camera and tiny screen above one eye, Glass could be used to check emails, take photos, and get directions. They were advanced for their time, but the general public was spooked by the idea of face-cameras constantly surveilling them. In 2015, Google abandoned the idea that Glass might ever be a consumer product, though the frames lived on as an enterprise device until last year.

Glass’s failure didn’t deter other companies from taking a swing. In 2016, Snapchat launched its first generation of Spectacles, glasses that allowed users to capture pictures and videos from cameras mounted above each eye, then post them on their account. In 2019, Amazon jumped in, teasing its Echo Frames—camera-less smart glasses with Alexa built in—which went on sale to the public the following year. Meta, then called Facebook, launched the first iteration of its collaboration with Ray-Ban in 2021, though the frames didn’t catch on.

Then there are the virtual-reality headsets, such as Meta’s Quest line. Last summer, after Apple announced the Vision Pro, my colleague Ian Bogost deemed this the “age of goggles,” pointing out that companies have been spending billions developing immersive technology, even though the exact purpose of these expensive headsets is unclear.

Consumers also seem to be wondering what that purpose is. One analyst reports that sales of the Vision Pro were so dismal that Apple scaled back production. According to The Information, the company paused work on the next model, while Meta canceled its competitor device entirely.

[Read: The age of goggles has arrived]

In some ways, this glasses moment is something of a retreat: an acknowledgment that people may be less likely to go all in on virtual reality than they are to throw on a pair of sunglasses that happens to be able to record video. These devices are supposed to look and feel more natural, while allowing for ambient-computing features, such as the ability to play music anywhere just by speaking or start a phone call without having to put in headphones.

AI is a big part of this pitch. New advances in large language models are making modern chatbots seem smarter and more conversational, and this technology is already finding its way into the glasses. Both the Meta and Amazon frames have audio assistants built in that can answer questions (How do whales breathe?) and cue up music (play “Teenage Dirtbag”). Meta’s Ray-Bans can “look” using their cameras, offering an audio description of whatever is in their field of vision. (In my experience, accuracy can be hit or miss: When I asked the audio assistant to find a book of poetry on my bookshelf, it said there wasn’t one, overlooking an anthology with the word poetry in the title, though it did identify my copy of Joseph Rodota’s The Watergate when I asked it to find a book about the Washington landmark.) At Connect, Zuckerberg said that the company plans to keep improving the AI, with a couple of big releases coming in the next few months. These updates will give the glasses the ability to do translation in real time, as well as scan QR codes and phone numbers on flyers in front of you. The AI will also, he said, be able to “remember” such things as where you parked your car. One demo showed a woman rifling through a closet and asking the AI assistant to help her choose an outfit for a theme party.

[Read: The end of foreign-language education]

But whether AI assistants will actually be smart enough to realize all of this is still somewhat of an open question. In general, generative AI struggles to cite its sources and frequently gets things wrong, which may limit smart glasses’ overall usefulness. And, though the companies say the technology will only get better and better, that’s not entirely certain: The Wall Street Journal recently reported that, when Amazon attempted to infuse Alexa with new large language models, the assistant actually became less reliable for certain tasks.

Products such as Orion, which promise not just AI features but a full, seamless integration of the digital world into physical reality, face even steeper challenges. It’s really, really difficult to squish so many capabilities into eyewear that looks semi-normal. You need to be able to fit a battery, a camera, speakers, and processing chips all into a single device. Right now, even some of the most state-of-the-art glasses require you to be tethered to additional hardware to use them. According to The Verge’s Alex Heath, the Orion glasses require a wireless “compute puck” that can be no more than about 12 feet away from them—something Zuckerberg certainly did not mention onstage. Snap’s newest Spectacles, announced earlier this month, don’t require any extra hardware—but they have a battery life of only 45 minutes, and definitely still look big and clunky. The hardware problem has bedeviled generations of smart glasses, and there still isn’t a neat fix.


But perhaps the biggest challenge facing this generation of smart glasses is neither hardware nor software. It’s philosophical. People are stressed right now about how thoroughly technology has seeped into our everyday interactions. They feel addicted to their phones. These companies are pitching smart glasses as a salve—proposing that they could, for example, allow you to handle a text message without interrupting quality time with your toddler. “Instead of having to pull out your phone, there will just be a little hologram,” Zuckerberg said of Orion during his presentation. “And with a few subtle gestures, you can reply without getting pulled away from the moment.”

Yet committing to a world in which devices are worn on our face means committing to a world in which we might always be at least a little distracted. We could use them to quietly read our emails or scroll Instagram at a restaurant without our partner knowing. We could check our messages during a meeting while looking like we’re still paying attention. We may not need to check our phones so much, because our phones will effectively be connected to our eyeballs. Smart glasses walk a thin line between helping us be less obsessively on the internet and tethering us even more closely to it.

I spent some time this spring talking with a number of people who worked on early smart glasses. One of them was Babak Parviz, a partner at Madrona, a venture-capital firm, who previously led Google’s Glass project. We discussed the history of computers: They used to be bulky things that lived in research settings—then we got laptops, then smartphones. With Glass, the team aimed to shorten the time needed to retrieve information to seconds. “The question is, how much further do you need to take that? Do you really need to be immersed in information all the time, and have access to much faster information?” Parviz told me he’d changed his mind about what he called “information snacking,” or getting fed small bits of information throughout the day. “I think constant interruption of our regular flow by reaching out to information sources doesn’t feel very healthy to me.”

In my conversations, I asked experts whether they thought smart glasses were inevitable—and what it would take to unseat the smartphone. Some saw glasses not as a smartphone replacement at all, but as a potential addition. In general, they thought that new hardware would have to give us the ability to do something we can’t do today. Right now, companies are hoping that AI will be the thing to unlock this potential. But as with so much of the broader conversation around that technology, it’s unclear how much of this hype will actually pan out.

These devices still feel more like sketches of what could be, rather than fully realized products. The Ray-Bans and other such products can be fun and occasionally useful, but they still stumble. And although we might be closer than ever to mainstream AR glasses, they still seem a long way off.

Maybe Zuckerberg is right that Orion is the world’s most advanced pair of glasses. The question is really whether his big vision for the future is what the rest of us actually want. Glasses could be awesome. They could also be just another distraction.

September 30, 2024  18:03:33

Photographs by OK McCausland

Ayad Akhtar’s brilliant new play, McNeal, currently at the Lincoln Center Theater, is transfixing in part because it tracks without flinching the disintegration of a celebrated writer, and in part because Akhtar goes to a place that few writers have visited so effectively—the very near future, in which large language models threaten to undo our self-satisfied understanding of creativity, plagiarism, and originality. And also because Robert Downey Jr., performing onstage for the first time in more than 40 years, perfectly embodies the genius and brokenness of the title character.

I’ve been in conversation for quite some time with Akhtar, whose play Disgraced won the Pulitzer Prize in 2013, about generative artificial intelligence and its impact on cognition and creation. He’s one of the few writers I know whose position on AI can’t be reduced to the (understandable) plea For God’s sake, stop threatening my existence! In McNeal, he not only suggests that LLMs might be nondestructive utilities for human writers, but also deployed LLMs as he wrote (he’s used many of them, ChatGPT, Claude, and Gemini included). To my chagrin and astonishment, they seem to have helped him make an even better play. As you will see in our conversation, he doesn’t believe that this should be controversial.

In early September, Akhtar, Downey, Bartlett Sher—the Tony Award winner who directed McNeal—and I met at Downey’s home in New York for what turned out to be an amusing, occasionally frenetic, and sometimes even borderline profound discussion of the play, its origins, the flummoxing issues it raises, and, yes, Avengers: Age of Ultron. (Oppenheimer, for which Downey won an Academy Award, also came up.) We were joined intermittently by Susan Downey, Robert’s wife (and producing partner), and the person who believed that Akhtar’s play would tempt her husband to return to the stage. The conversation that follows is a condensed and edited version of our sprawling discussion, but I think it captures something about art and AI, and it certainly captures the exceptional qualities of three people, writer, director, and actor, who are operating at the pinnacle of their trade, without fear—perhaps without enough fear—of what is inescapably coming.


Jeffrey Goldberg: Did you write a play about a writer in the age of AI because you’re trying to figure out what your future might be?

Ayad Akhtar: We’ve been living in a regime of automated cognition, digital cognition, for a decade and a half. With AI, we’re now seeing a late downstream effect of that, and we think it’s something new, but it’s not. Technology has been transforming us now for quite some time. It’s transforming our neurochemistry. It’s transforming our societies, you know, and it’s making our emotionality within the social space different as well. It’s making us less capable of being bored, less willing to be bored, more willing to be distracted, less interested in reading.

In the midst of all this, what does it mean to be a writer trying to write in the way that I want to write? What would the new technologies mean for writers like Saul Bellow or Philip Roth, who I adore, and for the richness of their language?

Goldberg: Both of them inform the character of McNeal.

Akhtar: There are many writers inside McNeal—older writers of a certain generation whose work speaks to what is eternal in us as humans, but who maybe don’t speak as much to what is changing around us. I was actually thinking of Wallace Stevens in the age of AI at some point—“The Auroras of Autumn.” That poem is about Stevens eyeing the end of his life by the dazzling, otherworldly light of the northern lights. It’s a poem of extraordinary beauty. In this play, that dazzling display of natural wonder is actually AI. It’s no longer the sublime of nature.

Goldberg: Were you picturing Robert as you wrote this character?

Akhtar: I write to an ideal; it’s not necessarily a person.

Robert Downey Jr.: I feel that me and ideal are synonymous.

Akhtar: Robert’s embodiment of McNeal is in some ways much richer than what I wrote.

Downey: I have a really heavy, heavy allergy to paper. I’m allergic to things written on paper.

Akhtar: As I’ve discovered!

Downey: But the writing was transcendent. The last time that happened, I was reading Oppenheimer.

Goldberg: There’s Oppenheimer in this, but there’s also Age of Ultron, right?

Downey: Actually, I was thinking about that while I was reading this. And I’ll catch you guys up in the aggregate. I’m only ever doing two things: Either I’m trying to avoid threats or I’m seeking opportunities. This one is the latter. And I was thinking, Why would I be reading this? Because, I mean, I’ve been a bit of an oddball, and I was thinking, Why is this happening to me; why is this play with me? And I’m having this reaction, and it took me right back to Paul Bettany.

So that you guys understand what’s going on, this is the second Avengers film, Age of Ultron, and Bettany was playing this AI, my personal butler. The butler had gone through these iterations, and [the writer and director] Joss Whedon decided, “Let’s have you become a sentient being, a sentient being that is created from AI.” So first Bettany is the voice, and then he became this purple creature. And then there was this day when Bettany had to do a kind of soliloquy that Joss had written for him, as we are all introduced to him, wondering, Is he a threat? Can we trust him? Is he going to destroy us? And there comes this moment when we realize that he’s just seeking to understand, and be understood. And this was the moment in the middle of this genre film when we all stopped and thought, Wait, I think we might actually be talking about something important.

Goldberg: Bart, what are you exploring here?

Bartlett Sher: I’m basically exploring the deep tragedy of the life of Jacob McNeal. That’s the central issue. AI and everything around it, these are delivery systems to that exploration.

Akhtar: Robert has this wonderful moment in the play, the way he does it, in which he’s arguing for art in this very complicated conversation with a former lover. And it gets to one of the essences of the play, which is that this is an attempt to defend art even if it’s made by an indefensible person. Because in the end, human creation is still superior, and none of us is perfect. So the larger conversation around who gets to write, the morality of writing, all of that? In a way, it’s kind of emerging from that.

Goldberg: I can’t say for sure, but I think this is the first play that’s simultaneously about AI and #MeToo.

Downey: And identity and intergenerational conflict and cancel culture and misunderstanding and subintentional contempt and unconscious bias.

Goldberg: Are there any third rails you don’t touch?

Akhtar: McNeal is the third rail. He’s a vision of the artist in opposition to society. Not a flatterer of the current values, but someone who questions them: “That’s a lie. That’s not true.”

Goldberg: The timing is excellent.

Downey: In movies, you always miss the moment, or you are preempted by something. With Oppenheimer, we happened to be coming out right around the time of certain other world events, but we couldn’t have known. With this, we are literally first to market. Theater is the shortest distance between two points. You have something urgent to say, and you don’t dawdle, and you have a space like Lincoln Center that is not interested in the bottom line, but interested in the form. And you have Ayad inspiring Bart, and then you get me, the bronze medalist. But I’m super fucking motivated, because I never get this sense of immediacy and emergence happening in real time.

Goldberg: Let’s talk for a minute about the AI creative apocalypse, or if it’s a creative apocalypse at all. I prompted Claude to write a play just like McNeal, with the same plot turns and characters as your play, and I asked it to write it in your style. What emerged was a play called The Plagiarist’s Lament. I went back and forth with Claude for a while, mainly to try to get something less hackish. But in the end, I failed. What came out was something like an Ayad play, except it was bad, not good.

Akhtar: But here’s the thing. You’re just using an off-the-shelf product, not leading-edge story technology that is now becoming increasingly common in certain circles.

Goldberg: So don’t worry about today, but tomorrow?

Akhtar: The technology’s moving quickly, so it’s a reality. And worrying? I’m not trying to predict the future. And I’m also certainly not making a claim about whether it’s good or bad. I just want to understand it, because it’s coming.

Downey: To borrow from recent experience, I think we may be at a post-Trinity, pre-Hiroshima, pre-Nagasaki moment, though some people would say that we’re just at Hiroshima.

Goldberg: Hiroshima being the first real-world use of ChatGPT?

Downey: Trinity showed us that the bomb was purpose-built, and Hiroshima was showing us that the purpose was, possibly, not entirely necessary, but that it also didn’t matter, because, historically, it had already happened.

Goldberg: Right now, I’m assuming that part of the problem I had with the LLM was that I was giving it bad prompts.

Downey: One issue is that LLMs don’t get bored. We’ll be running something and Bart will go, “I’ve seen this before. I’ve done this before.” And then he says, “How can I make this new?”

The people who move culture forward are usually the high-ADD folks that we’ve tended to think either need to be medicated or all go into one line of work. They have a low threshold for boredom. And because they have this low threshold, they say, “I don’t want to do this. Do something different.” And it’s almost just to keep themselves awake. But what a great gift for creativity.

Goldberg: The three of you represent the acting side, and directing, and writing. Who’s in the most existential danger here from AI?

Downey: Anyone but me.

Akhtar: The Screen Actors Guild has dealt with the image-likeness issue in a meaningful way.

Downey: We’ve made the most noise—we, SAG—and we’re the most dramatic about everything. I remember when I was doing Chaplin, the talk was about how significant the end of the silent era was.

Goldberg: Is this the same level of disruption?

Downey: I doubt it, but not because Claude can’t currently find his ass with both hands. There are versions that are going to be significantly more advanced. But technologies that people have argued would impede art and culture have often assisted and enhanced. So is this time different? That’s what we’re always worrying about. I live in California, always wondering, Is that little rumble in the kitchen, is this the big one?

Sher: For me, I think directing is very plastic. It requires integrating a lot of different levels of activity. So actually finding a way to process that into a computer’s thinking, and actually having it work in three dimensions in terms of organizing and developing, seems very difficult to me. And I essentially do the work of the interpreter and synthesizer.

A machine can tell you what to do, but it can’t interact and connect and pull together the different strands.

Akhtar: There’s a leadership dimension to what Bart does. I mean, you wouldn’t want a computer doing that.

Sher: This could sound geeky, but what is the distinguishing quality of making art? It is to participate in something uniquely human, something that can’t be done any other way.

So if the Greeks are gathering on the hillside because they are building a space where they can hear their stories and participate in them, that’s a uniquely human experience.

Akhtar: I do think that there is something irreducibly human about the theater, and that probably over time, it is going to continue to demonstrate its value in a world where virtuality is increasingly the norm. The economic problem for the theater has been that it happens only here and only now. So it’s always been hard to monetize.

Goldberg: But I have two words for you: ABBA Voyage. I mean, it’s an extraordinarily popular show that uses CGI and motion capture to give the experience of liveness without ABBA actually being there. Not precisely theater, but it is scalable, seemingly live technology.

Downey: Strangely, this is the real trifecta: IP, technology, and taste. I think of this brand of music—which, you know, it’s not my bag, but I still really admired that somebody was passionate about that and then purpose-built the venue. And then they said, “We’re not going to go for ‘Oh my God, that looks so real.’ We’re actually going to go for a more two-dimensional effect that is rendered in a way in which the audience can complete it themselves.”

Akhtar: ABBA Voyage is an exception. But it’s still not live theater.

Sher: It’s also not possible without the ABBA experience that preceded it. It’s an augmentation; it’s not original.

Goldberg: In terms of writing, Ayad, I did what you suggested I do and asked Claude to critique its own writing, and it was actually pretty good at that. I felt like I was actually talking with someone. We were in a dialogue about pacing, clarity, word choice.

Sher: But it has no intuition at all, no intuition for Ayad’s mindset in the middle of this activity, and no understanding of how he’s seeing it.

Downey: It does have context, and context is critical. I think it’s going to start quickly modeling all of those things that we hold dear as subtleties that are unassailable. It’s going to see what’s missing in its sequence, and it’s going to focus all of its cloud-bursting energy on that.

Goldberg: It might be the producers or the studios who are in trouble, because the notes are delivered sequentially, logically, and without defensiveness. Do you think that these technologies can give better notes than the average executive?

Akhtar: I know producers in Hollywood who are already using these tools for their writers. And they’re using them empirically, saying, “This is what I think. Let’s see what the AI thinks.” And it turns out that the AI is actually pretty good at understanding certain forms. If you’ve got a corpus of texts—like, say, Law & Order; you’ve got many, many seasons of that, or you’ve got many seasons of a children’s show—those are codified forms. And the AI, if it has all those texts, can understand how words are shaped in that form.

Goldberg: So you could upload a thousand Law & Order scripts and Claude could come up with the thousandth and first.

Akhtar: About a year and a half ago, when I started playing with ChatGPT, the first thing that I started to see were processes of language that reminded me of reading Shakespeare. No writer is better at presenting context than Shakespeare. What I mean by that is Shakespeare sets everything quickly in motion. It’s almost like a chess game—you’ve got pieces, and you want to get them out as quickly as possible so you have options. Shakespeare sets the options out quickly and starts creating variations. So there is a series of words or linguistic tropes for every single play, every poem cycle, every sonnet. They all have their universe of linguistic context that is being deployed and redeployed and redeployed. And it is in that play of language that you find an accretion of meaning. It was not quite as thrilling to see the chatbot do it, but it was actually very interesting to recognize the same process.

OK McCausland for The Atlantic

Goldberg: Shakespeare was his own AI.

Downey: Because he performed as a younger man, it was all uploaded into Shakespeare’s system. So he was so familiar with the template, and he had all this experience. And similarly, all of these LLMs are in this stage where they are just beginning to be taken seriously. It’s like we’re pre–bar mitzvah, but these are sharp kids.

Goldberg: Would you use ChatGPT to write an entire piece?

Sher: Soon we’ll be having conversations about whether Claude is a better artist than ChatGPT. Could you imagine people saying, “Well, I’m not going to see that play, because it was written by this machine; I want to see this one, because it’s written by Gemini instead.”

Goldberg: Unfortunately, I can easily imagine it.

Akhtar: I’m not sure that I would use an LLM to write a play, because they’re just not very good at doing that yet, as you discovered in your own play by Claude. I don’t think they’re good enough to be making the kinds of decisions that go into making a work of art.

Goldberg: But you’re teaching the tool how to get better.

Akhtar: So what? They’ve already gone to school on my body of work.

[Read: The authors whose pirated books are powering generative AI]

Goldberg: So what? So what? Six hundred years of Gutenberg, and the printing press never made decisions on its own.

Akhtar: But we’re already within this regime where power and monetized scale exist within the hands of very few. We’re doing it every day with our phones; you’re teaching the machine everything about you and your family and your desires. This is the paradigm for the 21st century. All human activity is passing through the hands of very few people and a lot of machines.

Goldberg: McNeal is about lack of control.

Akhtar: It is. I’m just making the point that we’re not really in a different regime of power with AI. It may be even more concentrated and even more consequential, but at the end of the day, to participate in the public space in the 21st century is to participate in this structure. That’s just what it is. We don’t have an alternative, because our government has not regulated this.

Goldberg: You see the LLM as a collaborator in some ways. Where will the red line be for writers, between collaboration and plagiarism?

Akhtar: From my perspective, there are any number of artists we could look at, but the one that I would probably always spend the most time looking at is Shakespeare, and it’s tough to say that he wasn’t copying. As McNeal explains at one point in the play, King Lear shares 70 percent of its words with a previous play called King Leir, which Shakespeare knew well and used to write Lear. And it’s not just Leir. There’s that great scene in Lear where Gloucester is led to this plain and told it’s a cliff over which he’s going to jump, and that subplot is taken right out of Sir Philip Sidney. It may reflect deeper processes of cognition. It may reflect, as Bart has said, how we imitate in order to learn. All of that is just part of what we do. When that gets married to a corporate-ownership model, that is a separate issue, something that will have to get worked out over time, socially and legally. Or not, if our legislators don’t have the will to do so.

Goldberg: The final soliloquy of the play—no spoilers here—is augmented by AI.

Akhtar: This has really been a fascinating collaboration. Because I wanted some part of the play to actually be meaningfully generated by ChatGPT or some large language model—Gemini, Claude. I tried them all. And I wanted to do it because it was part of what the play was about. But the LLMs had a tough time actually delivering the goods until this week. I’ve finally had some experiences now, after many months of working with them, that are bearing fruit.

I wanted the final speech to have a quality of magic to it that resembles the kind of amazement that I knew you had felt working with the model, and that I have sometimes felt when I see the language being generated. I want the audience to have that experience.

Sher: You know, I think the problem you were facing could have been with any of your collaborators. We just had this new collaborator to help with that moment.

Goldberg: You’re blowing my mind.

Akhtar: It’s not really that controversial.

Goldberg: Yes it is. It’s totally controversial.

Downey: Well, let’s find out!

Goldberg: It’s more of a leap than you guys think.

Akhtar: It’s a play about AI. It stands to reason that I was able, over the course of many months, to finally get the AI to give me something that I could use in the play.

Downey: You know what the leap was like? A colicky little baby finally gave us a big ol’ burp.

Akhtar: That’s exactly right. That’s what happened. A lot of unsatisfying work, and then, unprompted, it finally came up with a brilliant final couplet! And that’s what I’m using for the end of the play’s final speech.

Goldberg: Amazing, and threatening.

Sher: I just can’t imagine a world in which ChatGPT could take all experience and unify it with Ayad’s interest in beauty and meaning and his obsession with classical tragedy and pull all those forces together with emotion and feeling. Because no matter how many times you prompted it, you’re still going to get The Pestilential Plagiarist, or whatever it’s called.

Downey: The reason that we’re all sitting here right now is because this motherfucker, Ayad, is so searingly sophisticated, but also on occasion—more than occasionally—hot under the collar. My new favorite cable channel is called Ayad Has Fucking Had It. He’s like the most collaborative superintelligence you will ever come across, and therefore he’s letting all this slack out to everyone around him, but once in a while, if this intelligence is entirely unappreciated for hours or days at a time, he will flare. He’ll just remind us that he can break the sound barrier if he wants to. And I get chills from that. And that’s why we’re here. It’s the human thing.

Akhtar: It’s not new for humans to use tools.

Sher: Are we going to be required to upload a system of ethics into the machines as they get more and more powerful?

Downey: Too late.

Goldberg: That’s what they promise in Silicon Valley, alignment with human values.

Downey: Two years ago was the time to do something.

Akhtar: You guys are thinking big. But I just don’t know how this is going to play out. I don’t know what it is. I’m just interested in what I’m experiencing now and in working with the technology. What’s the experience I’m having now?

Goldberg: There’s a difference between a human hack and an excellent human writer. The human hack doesn’t know that they’re bad.

Downey: This is a harebrained rabbit hole where we could constantly keep thinking of more and more ramifications. Another issue here is that certain great artists do something that most people would labor an entire life or career to come close to, and the second they’re done with it, they have contempt for it, because they go, “Eh, that’s not my best.”

Akhtar: I recognize someone in that.

Downey: All I’m saying is that I just want the feeling of those sparks flying, that new neural pathway being forced. I want to push the limits. It’s that whole thing of pushing limits. When I feel good, when I can tell Bart is kicking me, when Ayad is just lighting up, and when I’m realizing that I just got a note that revolutionized the way I’m going to try to portray something, you go, “Ooh!” And even if it’s old news to someone else, for me, it’s revolutionary.

Akhtar: Another way of putting this, what Robert is saying, is that what he’s engaged in is not problem-solving, per se. It’s not that there’s an identified problem that he is trying to solve. This is how a computer is often thinking, with a gamification sort of mindset. For Robert, there’s a richness of the present for him as he’s working that is identifying possibilities, not problems.

Sher: I’ve thought a lot about this, trying to understand the issue of GPT and creativity, and I’m a lot less worried now, because I feel that the depth of the artistic process in the theater isn’t replicable.

The amalgam of human experience and emotion and feeling that passes through artists is uniquely human and not capturable. Word orders can be taken from all kinds of sources. They can be imitated; they can be replicated; they can be reproduced in different ways. But the essential activity of what we do here in this way, and what we build, has never been safer.

Downey: And if our job is to hold the mirror up to nature, this is now part of nature. It is now part of the firmament. Nature is now inclusive of this. We’re onstage and we’re reflecting this back to you. What do you see? Do you see yourself within this picture?


This article appears in the November 2024 print edition with the headline “The Playwright in the Age of AI.”

September 29, 2024  16:16:07

This article was originally published by Quanta Magazine.

A picture may be worth a thousand words, but how many numbers is a word worth? The question may sound silly, but it happens to be the foundation that underlies large language models, or LLMs—and through them, many modern applications of artificial intelligence.

Every LLM has its own answer. In Meta’s open-source Llama 3 model, words are split into tokens represented by 4,096 numbers; for one version of GPT-3, it’s 12,288. Individually, these long numerical lists—known as “embeddings”—are just inscrutable chains of digits. But in concert, they encode mathematical relationships between words that can look surprisingly like meaning.

The basic idea behind word embeddings is decades old. To model language on a computer, start by taking every word in the dictionary and making a list of its essential features—how many is up to you, as long as it’s the same for every word. “You can almost think of it like a 20 Questions game,” says Ellie Pavlick, a computer scientist studying language models at Brown University and Google DeepMind. “Animal, vegetable, object—the features can be anything that people think are useful for distinguishing concepts.” Then assign a numerical value to each feature in the list. The word dog, for example, would score high on “furry” but low on “metallic.” The result will embed each word’s semantic associations, and its relationship to other words, into a unique string of numbers.

Researchers once specified these embeddings by hand, but now they’re generated automatically. For instance, neural networks can be trained to group words (or, technically, fragments of text called “tokens”) according to features that the network defines by itself. “Maybe one feature separates nouns and verbs really nicely, and another separates words that tend to occur after a period from words that don’t occur after a period,” Pavlick says.

[Read: Generative AI can’t cite its sources]

The downside of these machine-learned embeddings is that, unlike in a game of 20 Questions, many of the descriptions encoded in each list of numbers are not interpretable by humans. “It seems to be a grab bag of stuff,” Pavlick says. “The neural network can just make up features in any way that will help.”

But when a neural network is trained on a particular task called language modeling—which here involves predicting the next word in a sequence—the embeddings it learns are anything but arbitrary. Like iron filings lining up under a magnetic field, the values become set in such a way that words with similar associations have mathematically similar embeddings. For example, the embeddings for dog and cat will be more similar than those for dog and chair.

This phenomenon can make embeddings seem mysterious, even magical: a neural network somehow transmuting raw numbers into linguistic meaning, “like spinning straw into gold,” Pavlick says. Famous examples of “word arithmetic”—king minus man plus woman roughly equals queen—have only enhanced the aura around embeddings. They seem to act as a rich, flexible repository of what an LLM “knows.”
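To make that “word arithmetic” concrete, here is a minimal sketch in Python. The four vectors are tiny and hand-picked purely for illustration (real embeddings have thousands of learned dimensions, and these values are assumptions, not outputs of any model); the point is only the mechanics: subtract one vector, add another, then look for the word whose embedding sits closest by cosine similarity.

```python
# Illustrative only: four tiny, hand-picked "embeddings" (real models learn
# thousands of dimensions). The three dimensions here loosely stand for
# [royalty, masculinity, femininity] so the analogy holds by construction.
import numpy as np

vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king - man + woman" lands closest to "queen" among the other words.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max((w for w in vectors if w != "king"),
           key=lambda w: cosine(target, vectors[w]))
print(best)  # queen (with these toy vectors)
```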

[Read: Why does AI art look like that?]

But this supposed knowledge isn’t anything like what we’d find in a dictionary. Instead, it’s more like a map. If you imagine every embedding as a set of coordinates on a high-dimensional map shared by other embeddings, you’ll see certain patterns pop up. Certain words will cluster together, like suburbs hugging a big city. And again, dog and cat will have more similar coordinates than dog and chair.

But unlike points on a map, these coordinates refer only to one another—not to any underlying territory, the way latitude and longitude numbers indicate specific spots on Earth. Instead, the embeddings for dog or cat are more like coordinates in interstellar space: meaningless, except for how close they happen to be to other known points.

So why are the embeddings for dog and cat so similar? It’s because they take advantage of something that linguists have known for decades: Words used in similar contexts tend to have similar meanings. In the sequence “I hired a pet sitter to feed my ____,” the next word might be dog or cat, but it’s probably not chair. You don’t need a dictionary to determine this, just statistics.
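As a rough illustration of that statistical idea, here is a small sketch that builds “embeddings” directly from co-occurrence counts over an invented five-sentence corpus. The corpus, the window size, and the similarity helper are all assumptions made for illustration; real LLMs learn their embeddings with neural networks rather than raw counts, but the principle that similar contexts yield similar vectors is the same.

```python
# A minimal sketch of distributional embeddings: count which words appear
# near one another in a toy corpus, then compare the count vectors with
# cosine similarity. "dog" and "cat" share contexts; "chair" mostly doesn't.
from collections import Counter, defaultdict
import math

corpus = [
    "i hired a pet sitter to feed my dog",
    "i hired a pet sitter to feed my cat",
    "the dog sat on the chair",
    "the cat sat on the chair",
    "i bought a new chair for the office",
]

WINDOW = 2  # how many words on each side count as "context"
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if i != j:
                cooc[word][words[j]] += 1

def cosine(a, b):
    # Similarity between two sparse count vectors (word -> count).
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

print(cosine(cooc["dog"], cooc["cat"]))    # higher: near-identical contexts
print(cosine(cooc["dog"], cooc["chair"]))  # lower: less context overlap
```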

Embeddings—contextual coordinates, based on those statistics—are how an LLM can find a good starting point for making its next-word predictions, without relying on definitions.

[Read: Why AI doesn’t get slang]

Certain words in certain contexts fit together better than others, sometimes so precisely that literally no other words will do. (Imagine finishing the sentence “The current president of France is named ____.”) According to many linguists, a big part of why humans can so finely discern this sense of fit is that we don’t just relate words to one another—we actually know what they refer to, like territory on a map. Language models don’t, because embeddings don’t work that way.

Still, as a proxy for semantic meaning, embeddings have proved surprisingly effective. It’s one reason why large language models have rapidly risen to the forefront of AI. When these mathematical objects fit together in a way that coincides with our expectations, it feels like intelligence; when they don’t, we call it a “hallucination.” To the LLM, though, there’s no difference. They’re just lists of numbers, lost in space.

September 27, 2024  16:13:33

Nearly two years ago, I wrote that AI would kill the undergraduate essay. That reaction came in the immediate aftermath of ChatGPT, when the sudden appearance of its shocking capabilities seemed to present endless vistas of possibility—some liberating, some catastrophic.

Since then, the potential of generative AI has felt clear, although its practical applications in everyday life have remained somewhat nebulous. Academia remains at the forefront of this question: Everybody knows students are using AI. But how? Why? And to what effect? The answer to those questions will, at least to some extent, reveal the place that AI will find for itself in society at large.

[Read: The college essay is dead]

There have been several rough attempts to investigate student use of ChatGPT, but they have been partial: polls, online surveys, and so on. There are inherent methodological limits to any study of students using ChatGPT: The technology is so flexible and subject to different cultural contexts that drawing any broadly applicable conclusions about it is challenging. But this past June, a group of Bangladeshi researchers published a paper exploring why students use ChatGPT, and it’s at least explicit about its limitations—and broader in its implications about the nature of AI usage in the world.

Of the many factors that the paper says drive students to use ChatGPT, three are especially compelling to me. Students use AI because it saves time; because ChatGPT produces content that is, for all intents and purposes, indistinguishable from the content they might produce themselves; and because of what the researchers call the “Cognitive Miserliness of the User.” (This is my new favorite phrase: It refers to people who just don’t want to take the time to think. I know many.)

These three reasons for using AI could be lumped into the same general lousiness: “I’m just lazy, and ChatGPT saves my time,” one user in the study admitted. But the second factor—“Inseparability of Content,” as the researchers call it—is a window to a more complex reality. If you tell ChatGPT to “investigate the themes of blood and guilt in the minor characters of Macbeth at a first-year college level for 1,000 words,” or ask it to produce an introduction to such an essay, or ask it to take your draft and perfect it, or any of the innumerable fudges the technology permits, it will provide something that is more or less indistinguishable from what the student would have done if they had worked hard on the assignment. Students have always been lazy. Students have always cheated. But now, students know that a machine can do the assignment for them—and any essay that an honest, hardworking student produces is written under the shadow of that reality. Nagging at the back of their mind will be the inevitable thought: Why am I doing this when I could just push a button?

The future, for professors, is starting to clarify: Do not give your students assignments that can be duplicated by AI. They will use a machine to perform the tasks that machines can perform. Why wouldn’t they? And it will be incredibly difficult, if not outright impossible, to determine whether the resulting work has been done by ChatGPT, certainly to the standard of a disciplinary committee. There is no reliable technology for establishing definitively whether a text is AI-generated.

But I don’t think that new reality means, at all, that the tasks of writing and teaching people how to write have come to an end. To explain my hope, which is less a hope for writing than an emerging sense of the limits of artificial intelligence, I’d like to borrow an analogy that the Canadian poet Jason Guriel recently shared with me over whiskey: AI is the microwave of language.

It’s a spot-on description. Just like AI, the microwave began as a weird curiosity—an engineer in the 1940s noticed that a chocolate bar had melted while he stood next to a cavity magnetron tube. Then, after an extended period of development, it was turned into a reliable cooking tool and promoted as the solution to all domestic drudgery. “Make the greatest cooking discovery since fire,” ads for the Radarange boasted in the 1970s. “A potato that might take an hour to bake in a conventional range takes four minutes under microwaves,” The New York Times reported in 1976. As microwaves entered American households, a series of unfounded microwave scares followed: claims that it removed the nutrition from food, that it caused cancer in users. Then the microwave entered ordinary life, just part of the background. If a home doesn’t have one now, it’s a choice.

[Read: The future of writing is a lot like hip-hop]

The microwave survived because it did something useful. It performed functions that no other technology performed. And it gave people things they loved: popcorn without dishes, hot dinners in minutes, the food in fast-food restaurants.

But the microwave did not end traditional cooking, obviously. Indeed, it became clear soon enough that the microwave could do only certain things. The technologists adapted, by combining the microwave with other heat sources so that the food didn’t feel microwaved. And the public adapted. They used microwaves for certain limited kitchen tasks, not every kitchen task.

Something similar is emerging with AI. If you’re going to use AI, the key is to use it for what it’s good at, or to write with AI so that the writing doesn’t feel like AI. What AI is superb at is formulaic writing and thinking through established problems. These are hugely valuable intellectual powers, but far from the only ones.

To take the analogy in a direction that might be useful for professors who actually have to deal with the emerging future and real-life students: If you don’t want students to use AI, don’t ask them to reheat old ideas.

The advent of AI demands some changes at an administrative level. Set tasks and evaluation methods will both need alteration. Some teachers are starting to have students come in for meetings at various points in the writing process—thesis statement, planning, draft, and so on. Others are using in-class assignments. The take-home exam will be a historical phenomenon. Online writing assignments are prompt-engineering exercises at this point.

There is also an organic process under way that will change the nature of writing and therefore the activity of teaching writing. The existence of AI will change what the world values in language. “The education system’s emphasis on [cumulative grade point average] over actual knowledge and understanding, combined with the lack of live monitoring, increases the likelihood of using ChatGPT,” the study on student use says. Rote linguistic tasks, even at the highest skill level, just won’t be as impressive as they once were. Once upon a time, it might have seemed notable if a student spelled onomatopoeia correctly in a paper; by the 2000s, it just meant they had access to spell-check. The same diminution is currently happening to the composition of an opening paragraph with a clear thesis statement.

But some things won’t change. We live in a world where you can put a slice of cheese between two pieces of bread, microwave it, and eat it. But don’t you want a grilled cheese sandwich? With the bread properly buttered and crispy, with the cheese unevenly melted? Maybe with a little bowl of tomato-rice soup on the side?

The writing that matters, the writing that we are going to have to start teaching, is grilled-cheese writing—the kind that only humans can create: writing with less performance and more originality, less technical facility and more insight, less applied control and more individual splurge, less perfection and more care. The transition will be a humongous pain for people who teach students how to make sense with words. But nobody is being replaced; that much is already clear: The ideas that people want are still handmade.

September 27, 2024  14:28:12

Not long after Malcolm Gladwell’s The Tipping Point was published, in the winter of 2000, it had a tipping point of its own. His first book took up residence on the New York Times best-seller list for an unbelievable eight years. More than 5 million copies were sold in North America alone, an epidemic that spread to the carry-on bags of many actual and aspiring CEOs.

Gladwell offered three “rules” for how any social contagion happens—how, say, a crime wave builds (and can be reversed), but also how a new kind of sneaker takes over the market. The rules turned out to explain his own book’s success as well. According to his “Law of the Few,” only a small number of Connectors, Mavens, and Salesmen are needed to discover and promote a new trend. (If this taxonomy sounds familiar, that’s just another sign of how deep this book has burrowed into the culture.) In the case of The Tipping Point, word of the book spread through corporate boardrooms and among the start-up denizens of Silicon Valley. As for the second rule, “The Stickiness Factor”—the somewhat self-evident notion that a fad needs to be particularly accessible or addictive to really catch on—Gladwell’s storytelling was the necessary glue. Many readers and fellow writers over the years have correctly noted, out of jealousy or respect, that he is a master at extracting vibrant social-science research and then arranging his tidbits in a pleasurably digestible way.

Gladwell’s third Tipping Point rule, “The Power of Context,” may have been the most crucial to his breaking out: the (again rather self-evident) notion that the environment into which an idea emerges affects its reception. He emphasizes this in the author’s note of his new book, Revenge of the Tipping Point, in which he revisits his popular concepts nearly 25 years later. His debut took off, he has concluded, because “it was a hopeful book that matched the mood of a hopeful time. The year 2000 was an optimistic time. The new millennium had arrived. Crime and social problems were in free fall. The Cold War was over.”

Francis Fukuyama’s The End of History and the Last Man, published in 1992, is a good counterpart; both books epitomize an era of confidence in which clear-cut laws could lead us, in steady progression, toward ideologies, economic systems, and sneakers that would conquer all others. “Look at the world around you,” Gladwell cheerily ends The Tipping Point. “It may seem like an immovable, implacable place. It is not. With the slightest push—in just the right place—it can be tipped.”

Besides the triumphalism—9/11 was a year away—the other context for Gladwell’s assured teachings about the tidy mechanics of change was this: The internet was still young. In 2000, the World Wide Web was in its dial-up AOL phase; Mark Zuckerberg was in high school. Gladwell could easily ignore the disruption that still seemed distant, and he did. All of the epidemics in The Tipping Point travel along analog pathways, whether the word of mouth of Paul Revere’s ride that warned of British soldiers on the move, or the televised images on Sesame Street that spread literacy, or the billboards that helped propel the Airwalk shoe brand. Unhinged virality as we now know it is absent from The Tipping Point. So are our dinging phones, the memes, the entire insane attention economy.

[Andrew Ferguson: Malcolm Gladwell’s Talking to Strangers doesn’t say much]

Today, talking about social contagion without taking these forces into account would be preposterous. We are not in the world of Paul Revere and Big Bird. So when I saw the title of Gladwell’s latest book, I was sure I knew what “revenge” he had in mind: a wildly unpredictable form of communication had made a hash of his simple rules. You don’t need to be a media theorist to recognize that over the past quarter century, the speed and scale and chaotic democratization of the digital revolution have turned straight lines of transmission into intersecting squiggles and curlicues. Yet Gladwell in 2024 mentions the internet once, in passing. The role of social media, not even once.

Gladwell writes that he wanted to be less Pollyannaish this time around, and to look at the “underside of the possibilities I explored so long ago.” This means scrutinizing not just the rules that govern epidemics of all sorts (he slides between biological and social ones), but also how those rules can be manipulated. Here he gathers “cases where people—either deliberately or inadvertently, virtuously or maliciously—made choices that altered the course and shape of a contagious phenomenon.” Revenge of the Tipping Point is bookended by the dark story of the opioid epidemic. We read about how the Sackler family and their company, Purdue Pharma, identified doctors who were super-spreader prescribers of OxyContin, keeping them well stocked with pills, and about the larger context that enabled the whole enterprise: The epidemic took off in states where, historically, the regulatory culture around opioids was comparatively lax.

The introduction of unsavory actors is one main difference in the new book, which otherwise confirms his earlier message—change requires only a very small number of people. The other big new concept is what he calls the Overstory. He borrows the term from ecology: “An overstory is the upper layer of foliage in a forest, and the size and density and height of the overstory affect the behavior and development of every species far below on the forest floor.” Gladwell acknowledges that a word already exists for the social version of this—zeitgeist, the set of collective assumptions and worldviews that can hover above an entire culture or country.

Overstory, if I’m following Gladwell, is meant to expand and complicate the Power of Context. In some examples, the Overstory provides the necessary conditions for a tipping point. Waldorf schools, one of Gladwell’s examples, have an Overstory that values independent thinking; this explains the disproportionate number of unvaccinated children at many of the schools. In other circumstances, a revised Overstory is the result of a tipping: As soon as a corporate board allocates at least a third of its seats to women, to take another of his examples, it will immediately become more open and collaborative. An Overstory can cover the United States as a whole. It can also encompass a particular city or state—Miami, say, which became a ripe environment for Medicare fraud, Gladwell argues, thanks to an Overstory featuring weak institutional oversight abetted by a virulent drug trade and shifting demographics. He doesn’t detail how various Overstories might interact, though he’s emphatic about their explanatory power. “Overstories matter,” Gladwell writes in his signature bold yet blurry style. “You can create them. They can spread. They are powerful. And they can endure for decades.”

Gladwell’s methodology has taken a lot of punches: that he cherry-picks, that he is reductive, that he is Captain Obvious. I have been irritated by these habits, even when I find his books playful and stimulating. But the Overstory concept presents a unique, and revealing, problem. Unlike Gladwell’s usual love of easy formulas, this one’s vagueness would actually seem to enhance its usefulness, especially in 2024, when we consider how swiftly and fluidly cultural and social change occurs. But in Gladwell’s hands, I was disappointed to discover, the Overstory proves as blunt an instrument as any of his other rules and laws.

In one of the book’s examples, Gladwell draws on research by Anna S. Mueller and Seth Abrutyn, two sociologists who did fieldwork in an affluent American suburb from 2013 to 2016, trying to uncover the sources of a teen-suicide cluster centered in the local high school. In their book, Life Under Pressure, they concluded that the community (they gave it the pseudonym Poplar Grove) was dominated by a culture of high achievement that weighed the children down and contributed to their choice of suicide when they succumbed to the intensity. Gladwell has his Overstory. But he goes even further, calling Poplar Grove a “monoculture” in which students had zero opportunities to stand apart, to opt out of its meritocracy. Thus the first suicide became a sort of “infection,” and “once the infection is inside the walls, there is nothing to stop it.”

The idea that an American suburb in the 2010s could have its own hermetically sealed culture didn’t sit right with me—maybe because I have teenage daughters and they have phones. Think about all the other influences that might have been pummeling these children, aside from what they were hearing from their peers at school and their parents and teachers. Examining a suicide cluster in northeastern Ohio in 2017–18 similar to Poplar Grove’s, a 2021 study in the Journal of Adolescent Health called attention to the strength of virtual forces. Data showed nearly double the risk of suicidal ideation and suicide attempts among the students posting “suicide cluster-related social media content.” Or consider the controversial 2017 Netflix show 13 Reasons Why, which told the story of a girl’s suicide. Another study found a 28.9 percent uptick, nationwide, in the suicide rate for 10-to-17-year-olds in the month after it started streaming. Abrutyn himself, one of the Poplar Grove researchers, said in an interview that social media “probably plays a role in accelerating or amplifying some of the underlying things that were happening prior.”

Gladwell doesn’t consider any of this, or the possibility that other online activities—video games, YouTube channels, chat rooms—may have provided the teenagers with an escape from the hothouse of Poplar Grove or possibly heightened the appeal of suicide, scrambling any clear sense of just what constitutes a context. Surely the sociologists are right about the culture of high achievement they found, but perhaps it was one of many factors—a case not of a single Overstory, but of many competing or reinforcing Overstories. This would also make solving the problem of Poplar Grove not simply a matter of getting adults—the parents and the school—to chill out, as Gladwell suggests.

Gladwell has long insisted that change happens neatly, and he’s sticking to it. Epidemics, he writes in the new book, are “not wild and out of control.” They have a single source, and anyone can follow Ariadne’s thread back to it. He’s also sticking to a career-long dismissal and devaluation of digital communication and its possible effects—which do indeed feel wild and out of control. Back in 2002, in an afterword for the paperback edition of The Tipping Point, Gladwell wrote that he’d been asked a lot about “the effect of the Internet—in particular, email”—on his ideas. Excitement was running high about all the avenues the internet had opened up, and his answer was counterintuitive. The spike in email use was actually going to make its power more diffuse, he thought—and he again reached for the epidemic analogy. “Once you’ve had a particular strain of the flu, or the measles, you develop an immunity to it, and when too many people get immunity to a particular virus, the epidemic comes to an end,” he wrote. In other words, our online networks would become so ubiquitous that they would lose their effectiveness as tools of persuasion.

Almost a decade later, he followed this hunch even further in a much-discussed New Yorker article, “Small Change.” He was responding to the growing notion that social media would prove to be a revolutionary weapon for enabling political transformation. Gladwell dissented, presciently in some ways. He contrasted the 1960s civil-rights movement with online activism, drawing on the sociologist Mark Granovetter’s study of what he called “weak ties.” The work of desegregating lunch counters and securing voting rights in the South demanded “strong ties,” or personal, face-to-face relationships; what Gladwell saw on social media were networks based on weak ties, or casual, virtual acquaintances—too scattered for the sort of “military campaign” needed to upend the status quo. The Arab Spring’s unfolding bore out this view, as have fruitless bouts of online activism since then.

But in discounting the ways that the internet has transformed American society and politics, and not acknowledging the sort of change that weak ties can bring about, Gladwell has handicapped his analysis. Struggling to describe these online networks, he landed on “messy.” Like Wikipedia, he explained, they are subject to a “ceaseless pattern of correction and revision, amendment and debate.”

[From the October 2013 issue: Malcolm Gladwell, guru of the underdogs]

“Correction and revision, amendment and debate”—and all the ways such interactions can exhilarate and inform as well as overwhelm us: That sounds truer to our reality than the notion of a monoculture that can only be muscled out by another monoculture.

I wish Marshall McLuhan would step up at this point and give me a hand. As he argued, the media we use mold us, train our impulses. If the dominant forms of communication today are fast and loud and reactive—messy—then our culture and politics, and the paths of social contagions, will also be fast and loud and reactive. This can’t be ignored. And Gladwell should understand why.

In the last third of the book, he focuses on how Overstories come about and turns to two examples that depend on the medium of television. The first involves the hugely popular 1978 miniseries Holocaust, starring Meryl Streep. Gladwell contends that after four nights of graphic television, the idea of the Holocaust as a historical event coalesced in the public’s mind in a way that it never had before. He rhapsodizes about the influence wielded by a broadcast medium of this sort, one that reached so many people simultaneously—120 million viewers (half the country) in this case: “The stories told on television shaped the kinds of things people thought about, the conversations they had, the things they valued, the things they dismissed.”

The second example features the sitcom Will & Grace, which first aired from 1998 to 2006, and which Gladwell singles out as pivotal in laying the psychological groundwork for legalizing gay marriage. (As in his Holocaust example, Gladwell leaps over a great deal of contested history to make this big claim.) Television offered a new narrative about a gay man: Not closeted or tortured, he was in community with other gay men yet not wholly defined by his sexual identity. This was all transmitted subtly and with a laugh track, but, Gladwell writes, multiple “seasons of Will just being … a normal guy” altered the zeitgeist enough to open the country up to the possibility of gay marriage.

Television did effect change in the monocultural way that Gladwell imagines. It is a medium that maintains our attention through visual stimuli—drawing us in and shocking us with spectacles like that of naked men being lined up and shot in Holocaust, or of Will and Jack kissing in Season 2 of Will & Grace. Television is also a passive medium, and particularly effective at this kind of cultural inculcation. But network television is not the dominant medium anymore. As Gladwell himself puts it, in the one and only mention of digital communication’s impact in Revenge of the Tipping Point: “It is hard today, I realize, to accept the idea that the world could be changed by a television show. Audiences have been sliced up a hundred ways among cable, streaming services, and video games.”

What does social contagion look like today, when images and stories emerge out of the great sea of information and are just as quickly submerged? Interactivity and fierce feedback loops are constantly in play. Attention drives everything. And we are all in one another’s business. Even the notion of separate blue and red Americas, living under distinct Overstories, does not tell us much, because these seemingly separate realities are built in reaction to each other. Their narratives ping-pong back and forth hourly.

Consider a couple of recent examples. By now, the late-July virality of Tim Walz’s use of the word weird is campaign lore—the turbocharged meme began as a television clip and then proliferated on social media and rapidly entered the vocabulary of many other politicians. It also seemingly catapulted Walz to vice-presidential running mate, and redefined the Democrats as the normative party, in step with the national majority, unlike the bizarre Republicans.

[Helen Lewis: What’s genuinely weird about the online right]

The pro-Palestinian protests this past spring offer another glimpse into how new ideas now flow. When the protests began roiling college campuses, their emotional force was hard for me to understand at first—until someone showed me the short videos of war-zone horrors that were circulating by the thousands on TikTok, most made by Gazans themselves. Each clip was a gut punch: a woman emerging from a collapsed apartment building with a dead baby in her arms; burned children in a hospital; a man collapsing in grief over bodies wrapped in white shrouds. The images motivating these students were channeled directly across the world to their phones, unfiltered. The students then uploaded footage of their own protests, especially as they were suppressed, adding another layer of instigating feedback. The global exchange of self-generated videos led to clashes with the police, to rifts within the Democratic Party, all while the reason for the passion and the tension remained mostly invisible to those not scrolling certain platforms.

Even a biological epidemic, Gladwell’s central metaphor, doesn’t really lend itself to an easy story of transmission, or of consolidated immunity, either. We’re now all too familiar with COVID and its endless mutations, the mystery of long COVID, the way mask wearing was shaped by politics and culture and not merely science.

This is, indeed, all very messy, all wild and unruly. It is also the air we now breathe. The strangest thing about Gladwell’s decision to simply ignore the new pathways of social contagion is that he has the right vocabulary for understanding them. Small groups of people are usually the instigators, but these can be Trumpers hanging out in a closed Discord chat room, getting one another riled up about a stolen election, or a few influential teenage BookTokers all gushing about the same romance novel and turning it into a best seller. And Overstories do matter, but they do not have the stability and the unanimity that Gladwell imagines. Every day, dozens upon dozens of such narratives compete to define our politics, our culture; to bring issues to the fore, dragging attention one way or another.

Gladwell ends his new Tipping Point on the same note of certainty as his original. “Epidemics have rules,” he writes. “They have boundaries.” The tools to alter their course “are sitting on the table, right in front of us.” I envy his confidence. But I’ve lived through the past 25 years too, and that’s not my takeaway. We exist in gloriously, dangerously unpredictable times, and understanding how social change works surely requires one thing above all: humility.


This article appears in the November 2024 print edition with the headline “Malcolm Gladwell, Meet Mark Zuckerberg.”

October 1, 2024  18:25:46

For years now, generative AI has been used to conjure all sorts of realities—dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases—perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been.

This morning, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center’s polling found, 15 percent of high schoolers reported hearing about a “deepfake”—or AI-generated image—that depicted someone associated with their school in a sexually explicit or intimate manner. Generative-AI tools have “increased the surface area for students to become victims and for students to become perpetrators,” Elizabeth Laird, a co-author of the report and the director of equity in civic technology at CDT, told me. In other words, whatever else generative AI is good for—streamlining rote tasks, discovering new drugs, supplanting human art, attracting hundreds of billions of dollars in investments—the technology has made violating children much easier.

Today’s report joins several others documenting the alarming prevalence of AI-generated NCII. In August, Thorn, a nonprofit that monitors and combats the spread of child-sexual-abuse material (CSAM), released a report finding that 11 percent of American children ages 9 to 17 know of a peer who has used AI to generate nude images of other kids. A United Nations institute for international crime recently co-authored a report noting the use of AI-generated CSAM to groom minors and finding that, in a recent global survey of law enforcement, more than 50 percent had encountered AI-generated CSAM.

Although the number of official reports related to AI-generated CSAM is relatively small—roughly 5,000 tips in 2023 to the National Center for Missing & Exploited Children, compared with tens of millions of reports about other abusive images involving children that same year—those figures are likely undercounts and have been growing. It’s now likely that “there are thousands of new [CSAM] images being generated a day,” David Thiel, who studies AI-generated CSAM at Stanford, told me. This summer, the U.K.-based Internet Watch Foundation found that in a one-month span in the spring, more than 3,500 examples of AI-generated CSAM were uploaded to a single dark-web forum—an increase from the 2,978 uploaded during the previous September.

Overall reports of confirmed or suspected CSAM have been rising for years. AI tools have arrived amid a “perfect storm,” Sophie Maddocks, who studies image-based sexual abuse and is the director of research and outreach at the Center for Media at Risk at the University of Pennsylvania, told me. The rise of social-media platforms, encrypted-messaging apps, and accessible AI image and video generators has made it easier to create and circulate explicit, nonconsensual material on an internet that is permissive, and even encouraging, of such behavior. The result is a “general kind of extreme, exponential explosion” of AI-generated sexual-abuse imagery, Maddocks said.

[Jonathan Haidt: Get phones out of school now]

Policing all of this is a major challenge. Most people use social- and encrypted-messaging apps—which include iMessage on the iPhone, and WhatsApp—for completely unremarkable reasons. Similarly, AI tools such as face-swapping apps may have legitimate entertainment and creative value, even if they can also be abused. Meanwhile, open-source generative-AI programs, some of which may have sexually explicit images and even CSAM in their training data, are easy to download and use. Generating a fake, sexually explicit image of almost anybody is “cheaper and easier than ever before,” Alexandra Givens, the president and CEO of CDT, told me. Among U.S. schoolchildren, at least, the victims tend to be female, according to CDT’s survey.

Tech companies do have ways of detecting and stopping the spread of conventional CSAM, but those defenses are easily circumvented by AI. One of the main ways that law enforcement and tech companies such as Meta are able to detect and remove CSAM is by using a database of digital codes, a sort of visual fingerprint, that correspond to every image of abuse that researchers are aware of on the web, Rebecca Portnoff, the head of data science at Thorn, told me. These codes, known as “hashes,” are automatically created and cross-referenced so that humans don’t have to review every potentially abusive image. This has worked so far because much conventional CSAM consists of recirculated images, Thiel said. But the ease with which people can now generate slightly altered, or wholly fabricated, abusive images could quickly outpace this approach: Even if law-enforcement agencies could add 5,000 instances of AI-generated CSAM to the list each day, Thiel said, 5,000 new ones would exist the next.
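To make the cross-referencing idea concrete, here is a minimal sketch, not any company’s actual pipeline: the hash value, the uploads folder, and the flagging logic are hypothetical, and production systems rely on perceptual hashes such as PhotoDNA, which tolerate resizing and re-encoding, rather than the exact byte-level digest used below.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for an industry hash database of known abusive images.
# Real lists are maintained by organizations legally authorized to hold them.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(path: Path) -> str:
    """Compute an exact byte-level fingerprint of a file. A real system would
    use a perceptual hash so that minor edits to an image still match."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known(path: Path) -> bool:
    """Cross-reference a file against the known-hash set, so no human has to
    review images that have already been identified."""
    return fingerprint(path) in KNOWN_HASHES

if __name__ == "__main__":
    # Hypothetical uploads directory; matches would be removed and reported.
    for candidate in Path("uploads").glob("*"):
        if candidate.is_file() and is_known(candidate):
            print(f"{candidate} matches a known fingerprint")
```

The limitation the experts describe falls directly out of this design: the set catches only images whose fingerprints it already contains, so a newly generated or slightly altered picture slips through until someone adds its hash.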

In theory, AI could offer its own kind of solution to this problem. Models could be trained to detect explicit or abusive imagery, for example. Thorn has developed machine-learning models that can detect unknown CSAM. But designing such programs is difficult because of the sensitive training data required. “In the case of intimate images, it’s complicated,” Givens said. “For images involving children, it is illegal.” Training an image classifier to detect CSAM involves acquiring CSAM, which is a crime, or working with an organization that is legally authorized to store and handle such images.

“There are no silver bullets in this space,” Portnoff said, “and to be effective, you are really going to need to have layered interventions across the entire life cycle of AI.” That will likely require significant, coordinated action from AI companies, cloud-computing platforms, social-media giants, researchers, law-enforcement officials, schools, and more, which could be slow to come about. Even then, somebody who has already downloaded an open-source AI model could theoretically generate endless CSAM, and use those synthetic images to train new, abusive AI programs.

Still, the experts I spoke with weren’t fatalistic. “I do still see that window of opportunity” to stop the worst from happening, Portnoff said. “But we have to grab it before we miss it.” There is a growing awareness of and commitment to preventing the spread of synthetic CSAM. After Thiel found CSAM in one of the largest publicly available image data sets used to train AI models, the data set was taken down; it was recently reuploaded without any abusive content. In May, the White House issued a call to action for combatting CSAM to tech companies and civil society, and this summer, major AI companies including OpenAI, Google, Meta, and Microsoft agreed to a set of voluntary design principles that Thorn developed to prevent their products from generating CSAM. Two weeks ago, the White House announced another set of voluntary commitments to fight synthetic CSAM from several major tech companies. Portnoff told me that, while she always thinks “we can be moving faster,” these sorts of commitments are “encouraging for progress.”

[Read: AI is about to make social media (much) more toxic]

Tech companies, of course, are only one part of the equation. Schools also have a responsibility as the frequent sites of harm, although Laird told me that, according to CDT’s survey results, they are woefully underprepared for this crisis. In CDT’s survey, less than 20 percent of high-school students said their school had explained what deepfake NCII is, and even fewer said the school had explained how sharing such images is harmful or where to report them. A majority of parents surveyed said that their child’s school had provided no guidance relating to authentic or AI-generated NCII. Among teachers who had heard of a sexually abusive deepfake incident, less than 40 percent reported that their school had updated its sexual-harassment policies to include synthetic images. What procedures do exist tend to focus on punishing students without necessarily accounting for the fact that many adolescents may not fully understand that they are harming someone when they create or share such material. “This cuts to the core of what schools are intended to do,” Laird said, “which is to create a safe place for all students to learn and thrive.”

Synthetic sexually abusive images are a new problem, but one that governments, media outlets, companies, and civil-society groups should have begun considering, and working to prevent, years ago, when the deepfake panic began in the late 2010s. Back then, many pundits were focused on something else entirely: AI-generated political disinformation, the fear of which bred government warnings and hearings and bills and entire industries that churn to this day.

All the while, the technology had the potential to transform the creation and nature of sexually abusive images. As early as 2019, online monitoring found that 96 percent of deepfake videos were nonconsensual pornography. Advocates pointed this out, but were drowned out by fears of nationally and geopolitically devastating AI-disinformation campaigns that have yet to materialize. Political deepfakes threatened to make it impossible to believe what you see, Maddocks told me. But for victims of sexual assault and harassment, “people don’t believe what they see, anyway,” she said. “How many rape victims does it take to come forward before people believe what the rapist did?” This deepfake crisis has always been real and tangible, and is now impossible to ignore. Hopefully, it’s not too late to do something about it.

September 26, 2024  14:01:06

There’s a story about Sam Altman that has been repeated often enough to become Silicon Valley lore. In 2012, Paul Graham, a co-founder of the famed start-up accelerator Y Combinator and one of Altman’s biggest mentors, sat Altman down and asked if he wanted to take over the organization.

The decision was a peculiar one: Altman was only in his late 20s, and at least on paper, his qualifications were middling. He had dropped out of Stanford to found a company that ultimately hadn’t panned out. After seven years, he’d sold it for roughly the same amount that his investors had put in. The experience had left Altman feeling so professionally adrift that he’d retreated to an ashram. But Graham had always had intense convictions about Altman. “Within about three minutes of meeting him, I remember thinking, ‘Ah, so this is what Bill Gates must have been like when he was 19,’” Graham once wrote. Altman, too, excelled at making Graham and other powerful people in his orbit happy—a trait that one observer called Altman’s “greatest gift.” As Jessica Livingston, another YC co-founder, would tell The New Yorker in 2016, “There wasn’t a list of who should run YC and Sam at the top. It was just: Sam.” Altman would smile uncontrollably, in a way that Graham had never seen before. “Sam is extremely good at becoming powerful,” Graham said in that same article.

The elements of this story—Altman’s uncanny ability to ascend and persuade people to cede power to him—have shown up throughout his career. After co-chairing OpenAI with Elon Musk, Altman sparred with him for the title of CEO; Altman won. And in the span of just a few hours yesterday, the public learned that Mira Murati, OpenAI’s chief technology officer and the most important leader at the company besides Altman, is departing along with two other crucial executives: Bob McGrew, the chief research officer, and Barret Zoph, a vice president of research who was instrumental in launching ChatGPT and GPT-4o, the “omni” model that, during its reveal, sounded uncannily like Scarlett Johansson. To top it off, Reuters, The Wall Street Journal, and Bloomberg reported that OpenAI is planning to turn away from its nonprofit roots and become a for-profit enterprise that could be valued at $150 billion. Altman reportedly could receive 7 percent equity in the new arrangement—or the equivalent of $10.5 billion if the valuation pans out. (The Atlantic recently entered a corporate partnership with OpenAI.)

In a post on X yesterday, Altman said that the leadership departures were independent of one another and amicable, but that they were happening “all at once, so that we can work together for a smooth handover to the next generation of leadership.” As for OpenAI’s restructuring, a company spokesperson gave me a statement it has given before: “We remain focused on building AI that benefits everyone, and as we’ve previously shared, we’re working with our board to ensure that we’re best positioned to succeed in our mission.” The company will continue to run a nonprofit, although it is unclear what function it will serve.

I started reporting on OpenAI in 2019, roughly around when it first began producing noteworthy research. The company was founded as a nonprofit with a mission to ensure that AGI—a theoretical artificial general intelligence, or an AI that meets or exceeds human potential—would benefit “all of humanity.” At the time, OpenAI had just released GPT-2, the language model that would set OpenAI on a trajectory toward building ever larger models and lead to its release of ChatGPT. In the six months following the release of GPT-2, OpenAI would make many more announcements, including Altman stepping into the CEO position, its addition of a for-profit arm technically overseen and governed by the nonprofit, and a new multiyear partnership with, and $1 billion investment from, Microsoft. In August of that year, I embedded in OpenAI’s office for three days to profile the company. That was when I first noticed a growing divergence between OpenAI’s public facade, carefully built around a narrative of transparency, altruism, and collaboration, and how the company was run behind closed doors: obsessed with secrecy, profit-seeking, and competition.

I’ve continued to follow OpenAI closely ever since, and that rift has only grown—leading to repeated clashes within the company between groups who have vehemently sought to preserve their interpretation of OpenAI’s original nonprofit ethos and those who have aggressively pushed the company toward something that, in their view, better serves the mission (namely, launching products that get its technologies into the hands of more people). I am now writing a book about OpenAI, and in the process have spoken with dozens of people within and connected to the company.

In a way, all of the changes announced yesterday simply demonstrate to the public what has long been happening within the company. The nonprofit has continued to exist until now. But all of the outside investment—billions of dollars from a range of tech companies and venture-capital firms—goes directly into the for-profit, which also hires the company’s employees. The board crisis at the end of last year, in which Altman was temporarily fired, was a major test of the balance of power between the two. Of course, the money won, and Altman ended up on top.

[Read: Inside the chaos at OpenAI]

Murati and the other executives’ departures follow several leadership shake-ups since that crisis. Greg Brockman, a co-founder and OpenAI’s president, went on leave in August, and Ilya Sutskever, another co-founder and the company’s chief scientist, departed along with John Schulman, a founding research scientist, and many others. Notably, Sutskever and Murati had both approached the board with concerns about Altman’s behavior, which fed into the board’s decision to exercise its ousting power, according to The New York Times. Both executives reportedly described a pattern of Altman manipulating the people around him to get what he wanted. And Altman, many people have told me, pretty consistently gets what he wants. (Through her lawyer, Murati denied this characterization of her actions to the Times.)

The departure of executives who were present at the time of the crisis suggests that Altman’s consolidation of power is nearing completion. Will this dramatically change what OpenAI is or how it operates? I don’t think so. For the first time, OpenAI’s public structure and leadership are simply honest reflections of what the company has been—in effect, the will of a single person. “Just: Sam.”

September 24, 2024  18:35:23

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painstakingly slow in the area where it may count most: the 2024 election.

Two years ago, the Biden administration issued a blueprint for an AI Bill of Rights aiming to address “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices,” among other things. Then, last year, Biden built on that document when he issued his executive order on AI. Also in 2023, Senate Majority Leader Chuck Schumer held an AI summit in Washington that included the centibillionaires Bill Gates, Mark Zuckerberg, and Elon Musk. Several weeks later, the United Kingdom hosted an international AI Safety Summit that led to the serious-sounding “Bletchley Declaration,” which urged international cooperation on AI regulation. The risks of AI fakery in elections have not sneaked up on anybody.

Yet none of this has resulted in changes that would resolve the use of AI in U.S. political campaigns. Even worse, the two federal agencies with a chance to do something about it have punted the ball, very likely until after the election.

[Read: Chatbots are primed to warp reality]

On July 25, the Federal Communications Commission issued a proposal that would require political advertisements on TV and radio to disclose if they used AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) That seems like a step forward, but there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting starts in this year’s election. Second, the proposal immediately devolved into a partisan slugfest. A Republican FCC commissioner alleged that the Democratic National Committee was orchestrating the rule change because Democrats are falling behind the GOP in using AI in elections. Plus, he argued, this was the Federal Election Commission’s job to do.

Yet last month, the FEC announced that it won’t even try making new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video, saying that it lacks the statutory authority to regulate such misrepresentations and lamenting that it lacks the technical expertise to do so, anyway. Then, last week, the FEC compromised, announcing that it intends to enforce its existing rules against fraudulent misrepresentation regardless of the technology used to commit it. Advocates for stronger rules on AI in campaign ads, such as Public Citizen, did not find this nearly sufficient, characterizing it as a “wait-and-see approach” to handling “electoral chaos.”

Perhaps this is to be expected: The freedom of speech guaranteed by the First Amendment generally permits lying in political ads. But the American public has signaled that it would like some rules governing AI’s use in campaigns. In 2023, more than half of Americans polled responded that the federal government should outlaw all uses of AI-generated content in political ads. Going further, in 2024, about half of surveyed Americans said they thought that political candidates who intentionally manipulated audio, images, or video should be prevented from holding office or removed if they had won an election. Only 4 percent thought there should be no penalty at all.

The underlying problem is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or old-fashioned forms of disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt—again, part of our First Amendment tradition. The FEC’s remit is campaign finance, but the Supreme Court has progressively stripped its authorities. Even where it could act, the commission is often stymied by political deadlock. The FCC has more evident responsibility for regulating political advertising, but only in certain media: broadcast, robocalls, text messages. Worse yet, the FCC’s rules are not exactly robust. It has actually loosened rules on political spam over time, leading to the barrage of messages many receive today. (That said, in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the Biden ad in New Hampshire, are already illegal under a 30-year-old law.)

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and a turf war between federal agencies. And as political campaigning has gone digital, it has entered an online space with even fewer disclosure requirements or other regulations. No one seems to agree where, or whether, AI is under any of these agencies’ jurisdictions. In the absence of broad regulation, some states have made their own decisions. In 2019, California was the first state in the nation to prohibit the use of deceptively manipulated media in elections, and has strengthened these protections with a raft of newly passed laws this fall. Nineteen states have now passed laws regulating the use of deepfakes in elections.

One problem that regulators have to contend with is the wide applicability of AI: The technology can simply be used for many different things, each one demanding its own intervention. People might accept a candidate digitally airbrushing their photo to look better, but not doing the same thing to make their opponent look worse. We’re used to getting personalized campaign messages and letters signed by the candidate; is it okay to get a robocall with a voice clone of the same politician speaking our name? And what should we make of the AI-generated campaign memes now shared by figures such as Musk and Donald Trump?

[Read: The worst cat memes you’ve ever seen]

Despite the gridlock in Congress, these are issues with bipartisan interest. This makes it conceivable that something might be done, but probably not until after the 2024 election and only if legislators overcome major roadblocks. One bill under consideration, the AI Transparency in Elections Act, would instruct the FEC to require disclosure when political advertising uses media generated substantially by AI. Critics say, implausibly, that the disclosure is onerous and would increase the cost of political advertising. The Honest Ads Act would modernize campaign-finance law, extending FEC authority to definitively encompass digital advertising. However, it has languished for years because of reported opposition from the tech industry. The Protect Elections From Deceptive AI Act would ban materially deceptive AI-generated content from federal elections, as in California and other states. These are promising proposals, but libertarian and civil-liberties groups are already signaling challenges to all of these on First Amendment grounds. And, vexingly, at least one FEC commissioner has directly cited congressional consideration of some of these bills as a reason for his agency not to act on AI in the meantime.

One group that benefits from all this confusion: tech platforms. When few or no evident rules govern political expenditures online and uses of new technologies like AI, tech companies have maximum latitude to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts, as well as the voluntary policy restraints they occasionally trumpet to convince the public they don’t need greater regulation.

Big Tech has demonstrated that it will uphold these voluntary pledges only if they benefit the industry. Facebook once, briefly, banned political advertising on its platform. No longer; now it even allows ads that baselessly deny the outcome of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are trivial to evade. Several companies have volunteered to add watermarks to AI-generated content, but they are easily circumvented. Watermarks might even make disinformation worse by giving the false impression that non-watermarked images are legitimate.

This important public policy should not be left to corporations, yet Congress seems resigned not to act before the election. Schumer hinted to NBC News in August that Congress may try to attach deepfake regulations to must-pass funding or defense bills this month to ensure that they become law before the election. More recently, he has pointed to the need for action “beyond the 2024 election.”

[Read: A new front in the meme wars]

The three bills listed above are worthwhile, but they are just a start. The FEC and FCC should not be left to snipe at each other about what territory belongs to which agency. And the FEC needs more significant, structural reform to reduce partisan gridlock and enable it to get more done. We also need transparency into and governance of the algorithmic amplification of misinformation on social-media platforms. That requires limiting the pervasive influence of tech companies and their billionaire investors through stronger lobbying and campaign-finance protections.

Our regulation of electioneering never caught up to AOL, let alone social media and AI. And deceptive videos harm our democratic process, whether they are created by AI or by actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to control the coming tide of election disinformation. It needs to act more boldly to reshape the landscape of regulation for political campaigning.

September 25, 2024  21:48:30

Sign up for The Decision, a newsletter featuring our 2024 election coverage.

Do you remember what it was like when Donald Trump couldn’t stop tweeting? When it felt like, no matter the time of day or what you were doing, his caps-lock emeses were going to find you, like a heat-seeking, plain-text missile? Enjoying a nice little morning at the farmer’s market? Hold on, here’s a push alert about Trump calling Kim Jong Un “rocket man” on Twitter. Turn on the radio, and you’d hear somebody recapping his digital burbles. You could probably make the case that a large portion of the words spoken on cable-news panels from 2015 to early 2021 were at least tangentially about things that Trump pecked onto his smartphone from a reclining position.

Then January 6 happened. Twitter, worried about “the risk of further incitement of violence,” permanently suspended his account, and Trump later launched his own social-media site, Truth Social. It has far fewer users than its rivals do, and Trump now mostly bleats into the void. Occasionally, news outlets will surface one of his posts—or “Truths,” as they’re called—such as a September 12 post declaring that he would not debate Kamala Harris again. But although Elon Musk has reinstated Trump’s X account, the former president still mostly posts on Truth Social, which has had the effect of containing his wildest content. Unless you’re a die-hard Trump supporter, a journalist, or an obsessive political hobbyist, you’re likely not getting that regular glimpse into the Republican candidate’s brain. But … maybe you should be?

Last Friday, I received an email with a link to a website created by a Washington, D.C.–based web developer named Chris Herbert. The site, Trump’s Truth, is a searchable database collecting all of Trump’s Truth Social posts, even those that have been deleted. Herbert has also helpfully transcribed every speech and video Trump has posted on the platform, in part so that they can be indexed more easily by search engines such as Google. Thus, Trump’s ravings are more visible.

[Read: The MAGA aesthetic is AI slop]

Like many reporters, I’d been aware that the former president’s social-media posts had, like his rally speeches, grown progressively angrier, more erratic, and more bizarre in recent years. Having consumed enough Trump rhetoric over the past decade to melt my frontal cortex, I’ve grown accustomed to his addled style of communication. And yet, I still wasn’t adequately prepared for the immersive experience of scrolling through hundreds of his Truths and ReTruths. Even for Trump, this feed manages to shock. In the span of just a few days, you can witness the former president sharing flagrantly racist memes about Middle Easterners invading America, falsely edited videos showing Harris urging migrants to cross the border, an all-caps screed about how much better off women would be under his presidency, a diatribe about Oprah’s recent interview with Harris. It’s a lot to take in at once: Trump calling an MSNBC anchor a “bimbo,” a declaration of hatred for Taylor Swift, a claim that he “saved Flavored Vaping in 2019.”

On their own, each of these posts is concerning and more than a little sad. But consumed in the aggregate, they take on a different meaning, offering a portrait of a man who appears frequently incoherent, internet-addicted, and emotionally volatile—even by the extreme standard that Trump has already set. Trump seems unable to stop reposting pixelated memes from anonymous accounts with handles such as @1776WeThePeople1776 and @akaPR0B0SS, some of which contain unsettling messages such as a desire to indict sitting members of Congress for sedition. Trump appears to go on posting jags, sometimes well after midnight, rattling off Truths multiple times a minute. On Sunday night, from 6:20 p.m. to 6:26 p.m., Trump shared 20 different posts from conservative news sites, almost all without commentary. For a man currently engaged in the homestretch of campaigning for the presidency of the United States, he is prolific on social media, and seemingly unable to stop posting—from Friday to Monday, Trump posted or reposted 82 times.

Back in January, my colleague McKay Coppins argued that politically engaged Americans should go to a Trump rally and “listen to every word of the Republican front-runner’s speech” as “an act of civic hygiene.” Granted, Coppins wrote his article during a different time in the election cycle, at a moment when Trump was less visible, but his point still stands. Many Americans and the institutions that cover him have grown so used to Trump—to his tirades, lies, and buffoonery—that his behavior can fade into the background of our cultural discourse, his shamelessness and unfitness for office taken almost for granted. When Coppins attended a rally early this year, he recalled the “darker undercurrent” that infused Trump’s rhetoric and lurked behind many of the comments coming from supporters in the crowd. Just as important, Coppins wrote, the rally was also a reminder that “Trump is no longer the cultural phenomenon he was in 2016. Yes, the novelty has worn off. But he also seems to have lost the instinct for entertainment that once made him so interesting to audiences.”

[Read: You should go to a Trump rally]

Trump’s Truth Social posts offer a similar vibe. His feed is bleak, full of posts about America in decline. Aesthetically, it is ugly, full of doctored images and screenshots of screenshots of Facebook-style memes. Consuming a few weeks’ worth of his posts at once was enough to make me feel awful about the state of the world, not unlike how it feels to visit seedy message boards such as 4chan.

And then there’s the prose. As in his rallies, Trump rambles, his writing hard to follow. His stylistic choice to use caps lock for many of his longer posts gives the appearance that he is shouting. Unlike on Twitter, where he was constrained by character limits, Trump’s missives are too long and too convoluted to be easily digestible by aggregating media organizations. In previous iterations, Trump’s tweets were sometimes so bizarre as to be funny (or at least weird enough to be compelling); now his posts appear too fueled by grievance to be casually amusing.

[Read: Donald Trump can’t stop posting]

I realize that I’m not exactly selling the experience of taking a spin through Trump’s digital archive of incoherence. But I think it’s an instructive exercise. If you, like me, have had the experience of seeing friends or loved ones radicalized online or lost to a sea of Facebook memes and propaganda, then scrolling through Trump’s Truth Social posts will provoke a familiar feeling. On his own website, Trump doesn’t just appear unfit for the highest office in the land; he seems small, embittered, and under the influence of the kind of online outrage that usually consumes those who have been or feel alienated by broad swaths of society. It’s not (just) that Trump seems unpresidential—it’s that he seems like an unwell elderly man posting AI slop for an audience of bots on Facebook. Imagine that, instead of Donald Trump’s, you were looking at the feed of a relative. What would you say or do? Whom would you call?

A few months ago, The Atlantic’s editor in chief, Jeffrey Goldberg, wrote about the media’s “bias toward coherence” when it comes to Trump’s rhetoric, where, in an attempt to make sense of Trump’s nonsense, journalists sand down the candidate’s rough edges. Perusing Trump’s Truth Social feed, though, it is nearly impossible to find any coherence to latch on to. Since Trump came down his golden escalator in 2015, I’ve thought that the best way to understand the candidate is via plain text. There, unlike on television, his fragmented attention, peculiar thinking, and dangerous words cannot hide or be explained away. The election is 41 days away, and Trump appears as unstable as ever. But don’t take my word for it: Go see for yourself.

October 1, 2024  21:07:01

When the Three Mile Island power plant in Pennsylvania was decommissioned in 2019, it heralded the symbolic end of America’s nuclear industry. In 1979, the facility was the site of the worst nuclear disaster in the nation’s history: a partial reactor meltdown that didn’t release enough radiation to cause detectable harm to people nearby, but still turned Americans against nuclear power and prompted a host of regulations that functionally killed most nuclear build-out for decades. Many existing plants stayed online, but 40 years later, Three Mile Island joined a wave of facilities that shut down because of financial hurdles and competition from cheap natural gas, closures that cast doubt over the future of nuclear power in the United States.

Now Three Mile Island is coming back, this time as part of efforts to meet the enormous electricity demands of generative AI. The plant’s owner, Constellation Energy, announced yesterday that it is reopening the facility. Microsoft, which is seeking clean energy to power its data centers, has agreed to buy power from the reopened plant for 20 years. “This was the site of the industry’s greatest failure, and now it can be a place of rebirth,” Joseph Dominguez, the CEO of Constellation, told The New York Times. Three Mile Island plans to officially reopen in 2028, after some $1.6 billion worth of refurbishing and under a new name, the Crane Clean Energy Center.

Nuclear power and chatbots might be a perfect match. The technology underlying ChatGPT, Google’s AI Overviews, and Microsoft Copilot is extraordinarily power-hungry. These programs feed on more data, are more complex, and use more electricity-intensive hardware than traditional web algorithms. An AI-powered web search, for instance, could require five to 10 times more electricity than a traditional query.

The world is already struggling to generate enough electricity to meet the internet’s growing power demand, which AI is rapidly accelerating. Large grids and electric utilities across the U.S. are warning that AI is straining their capacity, and some of the world’s biggest data-center hubs—including Sweden, Singapore, Amsterdam, and exurban Washington, D.C.—are struggling to find power to run new constructions. The exact amount of power that AI will demand within a few years’ time is hard to predict, but it will likely be enormous: Estimates range from the equivalent of Argentina’s annual power usage to that of India.

That’s a big problem for the tech companies building these data centers, many of which have made substantial commitments to cut their emissions. Microsoft, for instance, has pledged to be “carbon negative,” or to remove more carbon from the atmosphere than it emits, by 2030. The Three Mile Island deal is part of that accounting. Instead of directly drawing power from the reopened plant, Microsoft will buy enough carbon-free nuclear energy from the facility to match the power that several of its data centers draw from the grid, a company spokesperson told me over email.

Such electricity-matching schemes, known as “power purchase agreements,” are necessary because the construction of solar, wind, and geothermal plants is not keeping pace with the demands of AI. Even if it were, these clean electricity sources might pose a more fundamental problem for tech companies: Data centers’ new, massive power demands need to be met at all hours of the day, not just when the sun shines or the wind blows.

To fill the gap, many tech companies are turning to a readily available source of abundant, reliable electricity: burning fossil fuels. In the U.S., plans to wind down coal-fired power plants are being delayed in West Virginia, Maryland, Missouri, and elsewhere to power data centers. That Microsoft will use the refurbished Three Mile Island to offset, rather than supply, its data centers’ electricity consumption suggests that the facilities will likely continue to rely on fossil fuels for some time, too. Burning fossil fuels to power AI means the new tech boom might even threaten to delay the green-energy transition.

Still, investing in nuclear energy to match data centers’ power usage also brings new sources of clean, reliable electricity to the power grid. Splitting apart atoms provides a carbon-free way to generate tremendous amounts of electricity day and night. Bobby Hollis, Microsoft’s vice president for energy, told Bloomberg that this is a key upside to the Three Mile Island revival: “We run around the clock. They run around the clock.” Microsoft is working to build a carbon-free grid to power all of its operations, data centers included. Nuclear plants will be an important component that provides what the company has elsewhere called “firm electricity” to fill in the gaps for less steady sources of clean energy, including solar and wind.

It’s not just Microsoft that is turning to nuclear. Earlier this year, Amazon purchased a Pennsylvania data center that is entirely nuclear-powered, and the company is reportedly in talks to secure nuclear power along the East Coast from another Constellation nuclear plant. Google, Microsoft, and several other companies have invested in, or agreed to buy electricity from, start-ups promising nuclear fusion—an even more powerful and cleaner form of nuclear power that remains highly experimental—as have billionaires including Sam Altman, Bill Gates, and Jeff Bezos.

Nuclear energy might not just be a good option for powering the AI boom. It might be the only clean option able to meet demand until there is a substantial build-out of solar and wind energy. A handful of other, retired reactors could come back online, and new ones may be built as well. Only the day before the Three Mile Island announcement, Jennifer Granholm, the secretary of energy, told my colleague Vann R. Newkirk II that building small nuclear reactors could become an important way to supply nonstop clean energy to data centers. Whether such construction will be fast and plentiful enough to satisfy the growing power demand is unclear. But it must be, for the generative-AI revolution to really take off. Before chatbots can finish remaking the internet, they might need to first reshape America’s physical infrastructure.

September 19, 2024  18:44:55

On the day that Elon Musk announced his intention to buy Twitter in April 2022, I tried to game out how the acquisition might go. Three scenarios seemed plausible. There was a weird/chaotic timeline, where Musk actually tried to improve the platform, but mostly just floated harebrained schemes like putting tweets on the blockchain. There was a timeline where Musk essentially reverted Twitter to its founding ethos—one that had a naive and simplistic idea of real-time global conversation. And then there was the worst-case scenario: the dark timeline and its offshoot, the darkest-darkest timeline. Here’s how I described that one:

The darkest-darkest timeline is the one where the world’s richest man runs a communications platform in a truly vengeful, dictatorial way, which involves Musk outright using Twitter as a political tool to promote extreme right-wing agendas and to punish what he calls brain-poisoned liberals.

Some 29 months later, this appears to be the timeline we’re living in. But even my grim predictions failed to anticipate the intensity of Musk’s radicalization. He is no longer teasing at his anti-woke views or just asking questions to provoke a response. To call him a troll or a puckish court jester is to sugarcoat what’s really going on: Musk has become one of the chief spokespeople of the far right’s political project, and he’s reaching people in real time at a massive scale with his message.

Since his endorsement of Donald Trump in July, Musk has become the MAGA movement’s second-most-influential figure after the nominee himself (sorry, J. D. Vance), and the most significant node in the Republican Party’s information system. Musk and his platform are to this election what Rupert Murdoch and Fox News were to past Republican campaigns—cynical manipulators and poisonous propaganda machines, pumping lies and outrage into the American political bloodstream.

Though the mask has been off for a while, Musk’s intentions have become even more blatant recently. Following Taylor Swift’s endorsement of Kamala Harris, in which Swift labeled herself a “childless cat lady” in reference to an insult deployed by Vance, Musk publicly offered to impregnate the pop star. And just this past weekend, Musk did the following:

  • amplified a conspiracy theory that ABC had leaked sample debate questions to the Harris campaign
  • falsely claimed that “the Dems want to take your kids”
  • fueled racist lies about immigrants eating pets
  • shared with his nearly 200 million followers on X that “Trump must win” to “preserve freedom and meritocracy in America”
  • insinuated that it was suspicious that “no one is even trying to assassinate Biden/Kamala,” adding a thinking-face emoji. He subsequently deleted the post and argued that it was a joke that had been well received in private. “Turns out jokes are WAY less funny if people don’t know the context and the delivery is plain text,” he wrote in a follow-up on X.

Whether Musk is telling the truth about his assassination post or offering up a feeble excuse for his earnest trolling doesn’t matter. Although he’s trying to explain this post away as just a harmless bit of context collapse, what he’s really revealing is the extent to which he is captured by his audience, pecking out posts that delight the only cohort willing to offer the attention and respect he craves. The parallels to Trump may be obvious at this point, but they also account for Musk’s ability to dominate news cycles.

[Read: Elon Musk throws a Trump rally]

Like Trump in his Apprentice and The Art of the Deal eras, Musk before his political obsessions was a celebrity famous in a different, mostly nonpolitical context. Although Musk’s volatility, contrarianism, and disdain for the press were a matter of record before his MAGA turn, his carefully constructed popular image was that of a billionaire innovator and rocket scientist (Musk was reportedly an inspiration for Tony Stark’s character in the Iron Man movie franchise). Which is to say: Many people experienced Musk’s right-wing radicalization not as inevitable, but as a shocking departure. Right-wing diehards amplified him with glee, as proof of the ascendance of their movement, while liberals and the media amplified him as a distressing example of the proliferation of online brain worms in a certain slice of Silicon Valley.

That Musk is polarizing is important, but what allows him to attract attention is this change of context. A far-right influencer like Charlie Kirk or Alex Jones is expected to spread vile racist conspiracies—that is what they’ve always done to earn their living. But as with Trump in his 2016 campaign, there is still a lingering novelty to Musk’s role as MAGA’s minister of propaganda. Many people, for example, still don’t understand why a man with unlimited resources might want to spend most of his time acting as a political party’s in-house social-media team. Musk has been a troll for a while, but his popular image as a savvy entrepreneur stayed intact until only recently. He was the subject of a largely flattering, best-selling biography as recently as last year. He appeared on the cover of this magazine in 2013 as a contender for the world’s greatest living inventor. In fact, even when Musk muses about how strange it is that no one has tried to shoot Harris, popular news outlets still cover it as a departure from an imagined status quo. On Monday, a New York Times article described Musk, a man who recently hosted a fawning interview with Donald Trump on X and has amplified conspiracy theories such as Pizzagate, as “the world’s richest man,” who “has established a reputation as an edgy plutocrat not bound by social conventions when it comes to expressing his opinions.”

[Read: Demon mode activated]

That nearly every one of Musk’s utterances is deemed newsworthy makes him a perfect vector for right-wing propaganda. Take Musk’s role in spreading the nonsense about Haitian residents in Springfield, Ohio. According to an analysis delivered by the journalist Gaby Del Valle on Vox’s Today, Explained podcast, Musk replied to a tweet by Kirk on September 8, in which the influencer had shared a screenshot from a Springfield resident on Facebook claiming that Haitians in the area were eating ducks, geese, and pets. Musk’s reply served to amplify the claim to his followers and admirers just two days before the presidential debate, where it was directly referenced by Trump onstage. The lies “left the ecosystem of right-wing Twitter partially because Elon Musk got involved,” Del Valle said. Like Trump before him, Musk is able to act as a clearinghouse for the fringier ideas coming from the far-right fever swamps.

Musk’s is the most followed account on X and, as its owner, he has reportedly asked engineers to algorithmically boost his posts on the platform. (Musk has denied that his tweets are deliberately amplified, but the platform shows them even to people who don’t follow him.) The architecture of the site, most notably the platform’s algorithmically sorted “For You” feed, routinely features Musk and news about Musk, which increases the likelihood that anything the billionaire shares will reach a wider audience on a service that is still at least somewhat influential in shaping American political discourse. It sounds conspiratorial to suggest that Musk is tweaking the algorithmic dials on his site or using X as a political weapon, but the truth is that Musk doesn’t even need to demand that his company boost a specific message. Musk has spent nearly two years installing his own account as X’s main character and shaping the platform’s architecture in his own image. The politics of X are inextricably linked to Musk’s own politics.

It would be far too simplistic to suggest that X is the reason for the chaos of our current political moment, or that Musk is solely responsible for the dangerous rhetoric that has contributed to terrorizing Haitian residents and thoroughly disrupting life in Springfield. Trump and Vance chose to amplify these messages too, and doubled down when called out on it. X is a comparatively small platform, past its prime. It was full of garbage before Musk bought the site, and its architecture goaded users into being the worst versions of themselves long before the billionaire’s heel turn. But under Musk’s stewardship, X has become the worst version of itself—a platform whose every policy and design choice seems intended to snuff out our better angels and efficiently raise our national political temperature.

X under Musk is a pressure cooker and an insidious force—not necessarily because it is as influential as it once was but because, to those who can’t quit it, the platform offers the impression that it is a mirror to the world. One hallmark of Fox News is its ability to conjure a political perma-crisis, in order to instill a pervasive sense of fear in its audience. X, with Musk as its de facto director of programming, has created an information ecosystem that operates in much the same way. But the effect isn’t felt just among MAGA true believers.

As we lurch closer to Election Day, it’s easy to feel as if we’ve all entered the Great Clenching—a national moment of assuming the crash-landing position and bracing for impact. One gets the sense that the darkest forces in American life are accelerating, that politicians, powerful billionaires, and regular citizens alike are emboldened in the worst way or further radicalized. Every scandal, gaffe, and tragedy seems to take on a new political significance—as a harbinger of a potential electoral outcome or an indicator of societal unraveling. And it is exactly this feeling that Musk and his platform stoke and feed off every day.

September 23, 2024  15:54:31

Richard Einhorn first noticed that he was losing his hearing in a way that many others do—through a missed connection, when he couldn’t make out what a colleague was saying on a phone call. He was 38, which might seem early in life to need a hearing aid but in fact is common enough. His next step was common too. “I ignored it,” Einhorn, now 72, told me. “Hearing loss is something you associate with geezers. Of course I hid it.” He didn’t seek treatment for seven years.

About 15 percent of Americans, or nearly 53 million people, have difficulty hearing, according to the CDC. Yet an AARP survey found that Americans older than 40 are more likely to get colonoscopies than hearing tests. Even though hearing starts to deteriorate in our 20s, many people think of hearing damage as a sign of old age, and the fear of being seen as old leads people to delay treatment. According to the Hearing Loss Association of America, people with hearing loss wait, on average, seven years to seek help, just as Einhorn did.

When people ignore their hearing loss, they put themselves at a higher risk for social isolation, loneliness, and even dementia. One of the best things you can do to feel less old is, ironically, get a hearing aid. And in the past two years, these devices have become cheaper, more accessible, and arguably cooler than they’ve ever been, even before the FDA approved Apple’s bid last week to turn AirPods into starter hearing aids. This new technology is more of a first step than a complete solution—think of it as analogous to drugstore reading glasses rather than prescription lenses. That, more than anything about AirPods themselves, may be the key to softening the stigma around hearing aids. Creating an easier and earlier entry point into hearing assistance could help Americans absorb the idea that hearing loss is a spectrum, and that treatment need not be a rite of passage associated with old age.


As it stands, one demographic that could especially benefit from destigmatized hearing aids is older men. “Men are at a greater risk for hearing loss early on because they have typically had more noise exposure than women,” says Steven Rauch, who specializes in hearing and balance disorders at Harvard Medical School. But men are also less likely to go to the doctor. (Several men I interviewed spoke about being prodded by their wives to go to an audiologist.) Instead, many hide their hearing loss by nodding along in conversation, by hanging back at social gatherings, by staying home.

Faking it makes the situation worse. Without treatment, hearing can decline, and people become socially isolated. “When you’re sitting in a room and people are talking and you can’t participate, you feel stupid,” says Toni Iacolucci, a communication-access advocate who waited a dozen years before she got a hearing aid. “The amount of energy you put into the facade that you can hear is just exhausting.”

Compensating for untreated hearing loss is so taxing, in fact, that it can have a meaningful impact on the brain. “Hearing loss is arguably the single largest risk factor for cognitive decline and dementia,” says Frank Lin, the director of the Cochlear Center for Hearing and Public Health at Johns Hopkins University. Lin and his colleagues have found that mild hearing loss doubles the risk of dementia, and moderate loss triples it. In this context, a hearing aid can look almost like a miracle device for slowing aging: In that same study, Lin also found that among older adults at increased risk for cognitive decline, participants who wore a hearing aid for three years experienced about 50 percent less cognitive loss than the control group.

Lin hypothesizes that the difference comes down to cognitive load. “Anybody’s brain can buffer against the pathology of dementia,” he told me. “But if you have hearing loss, too, a lot of that buffer is having to be used up to deal with hearing loss.”

In many cases, the gap between onset and treatment means years of missed conversations and declining social connection; hearing loss is associated with both loneliness and isolation. For Einhorn, who worked as a composer and a classical-record producer, his declining hearing meant maintaining a constant effort to keep up appearances. He remembers going to restaurants and tilting his head entirely to the left to favor his better ear while denying to his friends that he had any issue with his hearing; he started to avoid going to parties and to the movies. “Phone calls became hellish,” he told me. He eventually had surgery on one ear and finally started wearing hearing aids in 2010, when he suddenly lost all of his hearing on one side. “When I lost my good ear, I fell into an abyss of silence and isolation,” he says. “It was an existential crisis: Either I figure out how to deal with this, or, given the isolation I was already experiencing, it was going to become really serious.” Only then did he realize that the devices were less visible than he’d imagined and that the integration into his world was worth the ding to his vanity. Like many who use the devices, he still struggles to hear at restaurants and parties (carpets and rooms without music help), but the hearing aids have made an enormous difference in his quality of life. He still regrets the years he spent posturing instead of listening. “When you get to 72, you realize you’ve done a lot of dumb things, and not getting treatment was probably the dumbest thing I’ve ever done in my life,” he said.


That anyone is straining this much when a fix exists is a testament to how powerful ageism and the pressure to project youth can be. As long as people see the choice as one between hearing well and looking young, many will opt for faking their ability to hear. Overcoming that association with age may be the final hurdle in persuading people to try hearing aids.

Some of the barriers were, until recently, more basic. Hearing aids were available only with a prescription, which usually requires visits to an audiologist who calibrates the device. Prescription hearing aids also cost thousands of dollars and aren’t always covered by insurance. Pete Couste, for instance, did go to the doctor a couple of years after first noticing he was off pitch when playing in his band, but he decided not to get hearing aids because of the cost. Instead, he dropped out of the band and his church choir.

But these barriers are getting lower. In 2022, the FDA approved the sale of hearing aids to adults without a prescription, opening the technology up to industry for the first time. Over-the-counter options have now hit the market, including from brands such as Sony and JLab. Apple’s hearing-aid feature, compatible with some AirPod Pros, is the first FDA-approved over-the-counter hearing-aid software device and will be available later this fall via a software update. EssilorLuxottica plans to release the first-ever hearing-aid eyeglasses later this year. Learning about the over-the-counter options triggered Couste to address his hearing loss, and he ended up with prescription aids that have made a “tremendous difference” in his confidence, he told me. This year, he went to four weddings and a concert at Red Rocks; he’s even started to play saxophone again and plans to get back onstage within a year.

None of that undoes hearing aids’ association with aging, though. A selling point of the new AirPod technology is simply that “everybody wears AirPods,” Katherine Bouton, a hearing-loss advocate and the author of the memoir Shouting Won’t Help, told me. “The more you see people wearing something, the more normal it becomes.” At the same time, AirPods are typically a signal that someone’s listening to music or a podcast rather than engaging with the world around them: The AirPods might improve someone’s hearing, but they won’t necessarily make hearing loss less lonely. Even if Iacolucci’s hearing loss could be treated with AirPods, she doesn’t think they would fully address the loss’s impact: “I still have to deal with the internal stigma, which is a thousand times worse,” she told me.

The real power of the Apple technology, then, might be that it’s targeted to users with mild to moderate hearing loss. Changing the stigma around hearing loss will take far more than gadgets: It’ll require a shift in our understanding of how hearing works. “Hearing loss implies that it’s binary, which couldn’t be further from the truth,” Lin said. Most people don’t lose their hearing overnight; instead, it starts to deteriorate (along with the rest of our body) almost as soon as we reach adulthood. Over time, we permanently damage our hearing through attending loud concerts, watching fireworks, and mowing the lawn, and the world is only getting louder. By 2060, the number of Americans ages 20 years and older with hearing loss is expected to increase by 67 percent, which means that nearly 30 million more people will need treatment. If devices we already use can help people transition more easily and at a younger age to using hearing assistance, that could make the shift in identity less stark, easing the way to normalizing hearing aids and changing the idea that they’re for geezers only.

September 23, 2024  15:55:01

Updated at 9:20 a.m. ET on September 19, 2024

Yesterday, pagers used by Hezbollah operatives exploded simultaneously in Lebanon and Syria, killing at least a dozen people and injuring thousands. Today brought another mass detonation in Lebanon, this time involving walkie-talkies. The attacks are gruesome and shocking. An expert told the Associated Press that the pagers received a message that caused them to vibrate in a way that required someone to press buttons to stop it. That action appears to have triggered the explosion. At a funeral in Beirut, a loudspeaker reportedly called for people to turn off their phones, illustrating a fear that any device could actually be a bomb, including the one in your pocket.

Electronics are a global business, and the events of the past two days in Lebanon have created an unexpected information fog of war. Virtually everyone uses personal electronic devices—phones, headphones, chargers, and even, in some cases, pagers. Those devices can, under certain circumstances, create risk. Gadgets catch on fire, get hacked so that remote intruders can spy on you, or get infected with malware that turns them into botnets. Might your smartphone just explode one morning as you’re reaching for it on the nightstand? Almost surely not.

According to the Associated Press, the attack was likely carried out by hiding very small quantities of highly explosive material in the pagers. In principle, intelligence operatives in Israel, which is widely believed to have conducted both attacks, could have done so by compromising the devices in the factory. Or, given that the exploding devices seem to have specifically targeted Hezbollah rather than everyone who owned a particular model of pager, the perpetrators could have intercepted the gadgets after they left the factory. But, according to The New York Times, Israeli intelligence went even further: It set up a shell company based in Hungary, B.A.C. Consulting, to manufacture and distribute rigged electronics specifically for the purpose of selling them to Hezbollah. (B.A.C. Consulting also reportedly sold normal, non-bomb pagers to other clients.) The resulting pager bombs were apparently procured by Hezbollah months ago, and the rigged pagers and radios then sat waiting to be detonated remotely.

You are unlikely to find that your iPhone, Kindle, or Beats headphones have been modified to include PETN, the compound currently suspected to have been used in the Lebanon detonations. That’s not because such a thing can’t be done: As little as three grams of the material can produce a powerful blast, and that amount could, in principle, be crammed into even the small cavities of a circuit-packed iPhone. In theory, someone could interfere with such a device, either during manufacture or afterward. But they would have to go to great effort to do so, especially at large scale. Of course, this same risk applies not just to gadgets but to any manufactured good.

Other electronic devices have blown up without being rigged to be bombs. Yesterday, when news first broke of the pagers blowing up, some speculated that the batteries had triggered the explosion. That speculation stemmed partly from growing awareness that lithium-ion batteries carry some risk of exploding or catching on fire. The model of pager targeted in Lebanon does in fact use lithium-ion cells for power. But the intensity and precision of the explosions seen in Beirut, which were strong enough to blow off victims’ hands, couldn’t result from a lithium-ion blast, which also couldn’t be triggered at will anyway. A lithium-ion battery could cause a smaller explosion if overheated or overcharged, but these batteries pose a greater risk of starting a fire than an explosion: They can do so when punctured, allowing the flammable liquid inside to leak and ignite. That doesn’t mean your iPhone is at risk of exploding when you tap an Instagram notification. In the United States, low-quality batteries made by disreputable manufacturers and installed in low-cost devices, such as vape pens or e-bikes, pose a much greater risk than anything else.

Accidental battery fires, even from poorly made parts, couldn’t be used to carry out a simultaneous explosive attack. But that doesn’t mean you don’t own devices that could put you at risk. Consider spyware and malware, a concern commonly directed at Chinese-made gadgets. If connected to the internet, a device can convey messages or send your personal information abroad, and, in theory, it could detonate on command if it were built (or retrofitted) to do so. It feels plausible enough to put the pieces together in a way that produces fear: exploding pagers in Beirut, wide ownership of personal electronics, lithium-ion fire risk, devices connected to unknown servers far away. Words such as spyware and malware evoke the James Bond–inspired idea that a hacker at a computer half a world away can press buttons quickly and cause anyone’s phone to blow up. But even after the astonishing attack carried out in Lebanon, such a scenario remains fiction, not fact.

And yet, it’s also the case that a new type of terror has been birthed by this attack. In Lebanon and other parts of the Middle East especially, citizens can now reasonably fear that ordinary devices might also be bombs. Depending on how the devices made their way to their new owners, it’s also possible that the bomb-gadgets have leaked into more general circulation. Four children have already died.

In other words, the fear is grounded in enough fact to take root. Farther from the attacks, even here in the U.S., that same fear can be mustered, if with much less justification. Fretting that your phone is actually a bomb feels new but really isn’t. The fear is caused by bombs, the things that explode. A pager or a phone can be made into a bomb, but so can anything else.

September 23, 2024  15:58:33

The minivan dilemma: It is the least cool vehicle ever designed, yet the most useful. Offering the best value for the most function to a plurality of American drivers, a minivan can cart seven passengers or more in comfort if not style, haul more cargo than many larger trucks, and do so for a sticker price roughly a quarter cheaper than competing options. Even so, minivan sales have been falling steadily since their peak in 2000, when about 1.3 million were sold in the United States. As of last year, that figure is down by about 80 percent. Once sold in models from more than a dozen manufacturers, the minivan market now amounts to four, one each from Chrysler, Honda, Toyota, and Kia.

On account of the dilemma, a minivan is typically purchased under duress. If you live in a driving city, and especially if you have a family, a minivan conversation will eventually take place. Your older, cooler car—perhaps your Mini Cooper or your spouse’s Honda CR-V—will prove unfit for present purposes. Costco cargo, loads of mulch, sports equipment, and holiday loot all need a place to go. The same is true of car seats, which now are recommended for children as old as 7. And so, before too long: “Maybe we should get a minivan.”

This phrase is uttered with an air of resignation. The minivan was popular, but it was never cool, not even in its youth, during the 1980s. Now it’s middle-aged: The first of its type came out in ’83, which makes the minivan an elder Millennial, and it’s no more attuned than your average 41-year-old to recent trends. But why, exactly, has it earned so much derision through the years? And why was the minivan replaced, almost altogether, by the SUV?

The minivan arrived, way back when, as a savior. When Chrysler, under the former Ford chief Lee Iacocca’s direction, first conceived of the design in the late 1970s, Americans who wanted room to cart more kids and goods had only a couple of options. One was the land-yacht-style station wagon, perhaps in avocado green with faux-wood paneling. Lots of kids could pile onto its bench and jump seats, while the rear storage, accessible by hatch, allowed for easy loading. These cars were somewhat functional, but they didn’t seem that safe. The suburban family’s other choice was the full-size van—a big, boxy transport or utility vehicle. Gas for these was pricey, and their aesthetic felt unsuited to domesticity. By cultural consensus, vans were made for plumbers, kidnappers, or ex–Special Forces domestic mercenaries.

Chrysler’s minivan would steer clear of those two dead ends, and carry American families onto the open roads toward, well, youth soccer and mall commerce. It really did bring innovation: ample seating organized in rows with easy access, the ability to stow those seats in favor of a large cargo bay, a set of sliding doors, and smaller features that had not been seen before, such as the modern cupholder. And it offered all that at an affordable price with decent fuel economy.

[Read: The hardest sell in American car culture]

Pickup was quick. In the first year after introducing them, Chrysler sold 210,000 Dodge Caravans and Plymouth Voyagers, its initial two models. Overall minivan sales reached 700,000 by the end of the decade, as the station wagon all but disappeared. But the new design also generated stigma: As the child of the station wagon and the service van, the minivan quickly came to represent the family you love but must support, and also transport. In a nation where cars stood in for power and freedom, the minivan would mean the opposite. As a vehicle, it symbolized the burdens of domestic life.

That stigma only grew with time. In 1996, Automobile magazine called this backlash “somewhat understandable,” given that the members of my generation, who were at that point young adults, had “spent their childhoods strapped into the backseat of one.” Perhaps it was childhood itself that seemed uncool, rather than the car that facilitated it. In any case, minivans would soon be rendered obsolete by sport utility vehicles. The earliest SUVs were more imposing than they are today: hard-riding trucks with 4×4 capabilities, such as the Chevrolet Suburban and the Jeep Wagoneer. These were as big as or even bigger than the plumber-kidnapper vans of the 1970s, and they got terrible gas mileage, cost a lot of money, and were hard to get in or out of, especially if you were very young or even slightly old. Yet the minivan’s identity had grown toxic, and for suburban parents, the SUV played into the fantasy of being somewhere else, or doing something better.

[Read: Minivans for minigarchs]

The SUV’s promise was escape from the very sort of family life that the minivan had facilitated. In 2003, The New York Times’ John Tierney recounted how the new class of vehicles had taken over. “The minivan became so indelibly associated with suburbia that even soccer moms shunned it,” he explained. “Soon image-conscious parents were going to soccer games in vehicles designed to ford Yukon streams and invade Middle Eastern countries.” At the same time, the SUVs themselves were changing. The minivan had been built from parts and designs for a car, not a van. SUV manufacturers followed suit, until their vehicles were no longer burly trucks so much as carlike vehicles that rode higher off the ground and had a station-wagon-style cargo bay. Few even had more seats than a sedan. As the early minivans were to vans, so were these downsized SUVs to the 4x4s that came before them.

Functionally, the minivan is still the better option. It is cheaper to buy and operate, with greater cargo space and more seating and headroom. Still, these benefits are overshadowed by the minivan’s dreary semiotics. Manufacturers have tried to solve that problem. When my family reached the “Maybe we should get a minivan” milestone, I noticed that some models of the Chrysler Pacifica now offered, for a premium, blacked-out chrome grilles and rims. But to buy a poseur “sport van,” or whatever I was meant to call this try-hard, cooler version of the uncool minivan, struck me as an even sadder choice.

Beyond such minor mods, the industry hasn’t really done that much to shake away the shame from the minivan’s design. I suspect that any fix would have to be applied at the level of its DNA. The minivan was the offspring of the wagon and the van. To be reborn, another pairing must occur—but with what? Little differentiation is left in the passenger-vehicle market. Nearly all cars have adopted the SUV format, a shoe-shaped body with four swinging doors and a hatch, and true 4x4s have been all but abandoned. Perhaps the minivan could be recrossed with the boxy utility van, which seems ready for its own revival. This year, Volkswagen will begin selling a new electric version of its Microbus, one of the few direct precursors to the minivan that managed to retain an association with the counterculture despite taking on domestic functions.

However it evolves, the minivan will still be trammeled by its fundamental purpose. It is useful because it offers benefits for families, and it is uncool because family life is thought to be imprisoning. That logic cannot be overcome by mere design. In the end, the minivan dilemma has more to do with how Americans think than what we drive. Families, or at least vehicles expressly designed for them, turned out to be lamentable. We’d prefer to daydream about fording Yukon streams instead.

September 18, 2024  17:40:52

My problem is my habit of scrolling through Instagram Reels only at night, right before I go to sleep. Defenses worn down by the day, I am susceptible to nonsense, and unsure of whether what I’m seeing is “real.”

For example: I saw a video the other night of a young woman sitting in a normal-looking bedroom and telling a straight-faced story about how she had been proposed to at a Taylor Swift concert, and said no. “I was not saying no to the man. Like, my boyfriend is the love of my life. I’m gonna marry him,” she explained. “I was saying no to the proposal, if that makes sense.” She said the concert was in Liverpool, and she has no emotional tie to that city. She has no real passion for Taylor Swift, in fact. She doesn’t even have “Love Story,” the song during which the proposal was made, saved on Spotify. “It just wasn’t specific to me. You know? The girls that get it, get it.” I didn’t get it. Was she serious, and quite strange, or was I being tricked for some purpose I may never understand?

Another time, I watched a video from a woman whose Instagram bio reads “girly girl + future girl mom.” She was demonstrating how she does a full face of makeup every morning before her husband wakes up. “This is just what makes me feel good about myself,” she said. Like the people in the comments, I wished I knew whether this was a joke. Then I came across some guy telling the story of a woman who’d sent him “trick-or-treat candy” after he had ghosted her—he thought this was funny, and now they are married. No one in the comments thought this one was a joke, but some suggested it might be a stupid lie told for no reason.

Our befuddlement appears to be the point. These videos are short and, like all other Instagram Reels, they auto-play on a loop. That’s how they succeed. The people who produce them don’t want me to understand whether they’re sincere; they care only that I take the time to wonder—and that the loop keeps looping while I do. As such, their work appears to represent a novel form of content, distinct from the classic forms of attention-baiting (trolling, pranks, hoaxes, etc.). The videos aren’t meant to make you angry or upset. They aren’t playing off your curiosity. They’re just trying to confuse you—and they work.


In the past, engagement-baiters could win on social media only if you clicked on their post, or shared their post, or responded to their silly prompt. But those efforts weren’t hard to thwart. On Facebook, you could stare at a post for a few seconds, riddle out its hidden aims, then scroll past once you decided not to be fooled. On Reddit, you could give a suspiciously sensational story a read or two before participating in the comments.

But different rules apply to modern social video platforms, where the algorithms are especially aggressive at stuffing viral content into people’s feeds. Traditional engagement metrics—likes and shares and comments—are still important, but creators on these platforms are seeking views above all else. (This was the case even with older social video platforms like Vine.) Racking up a lot of views is crucial for achieving greater visibility, as well as moneymaking opportunities—and confusing people is a pretty innovative way to do it. Let’s say you come across an auto-playing TikTok, YouTube Short, or Instagram Reel that you find a bit unsettling. By the time you’ve watched it to the end a couple of times, or spent however long it takes to make your judgment on what the video really means and whether it’s sincere, you’ve already given the creators what they wanted. When I saw that young woman talk about rejecting her boyfriend’s marriage proposal at the Eras Tour, it didn’t matter that I didn’t click, didn’t buy, didn’t like, and didn’t share. I only watched the video—and then, because I was nonplussed, I watched again.

Emily Hund, a researcher at the University of Pennsylvania’s Annenberg School for Communication and the author of the recent book The Influencer Industry: The Quest for Authenticity on Social Media, told me that these videos are “really smart and almost artful.” Instead of shocking or enraging you, they merely need to be weird enough to give you pause. Hund sees them as a response to users who have spent the past 10 years looking at influencers and doubting whether what they’re saying or presenting is sincere. The new creators “are messing with our conceptions of authenticity in a way that really makes the viewer feel it,” Hund told me. “Previous genres of influencer content didn’t incite the viewer to be so uncomfortable.”

[Read: Trolling’s surprising origins in fishing]

The proposal video turned out to be the work of an online trickster named Louisa Melcher, who’s posted on X about having “niche internet fame for being a liar.” (More accurately, she has gotten niche internet fame by being a liar.) Sometimes people will post in the comments on her Instagram videos to make this clear for others. For instance, on a recent video in which she gave a chipper presentation about how to make money by selling the books out of your neighborhood’s Little Free Library, somebody responded, “I am BEGGING people on this app to learn to recognize Louisa Melcher.” So some viewers are gaining media literacy re: Louisa Melcher. Others are not. The Daily Mail has credulously written up Melcher’s videos not once but twice.

When I went to the Instagram profile of Robby Witt, the Los Angeles man whose wife supposedly won him over with Halloween candy, I confirmed that his story, like seemingly everything else he says on the internet, is untrue. “I try to be pretty transparent,” Witt told me. “My bio is more transparent than The Onion’s.” This is fair to say, as The Onion’s Instagram bio is “America’s Finest News Source,” while Witt’s includes “Fictional Stories and Satire.” But most people who come across his videos never see his profile. They watch him tell fake-seeming anecdotes only in a decontextualized feed full of all kinds of other strangers doing all kinds of other improbable things that viewers may also have to watch a few times to understand how they should respond.

Melcher’s stories addle viewers because she comes off as kind of a sociopath. Witt’s mostly get people off-balance by presenting banal fantasies. Another common format for confusion-bait appeals to the human instinct to tell people that the way they’re doing things is wrong. If you spend any time on Instagram or TikTok, you will see users correcting the way that other people wash their face, season a chicken breast, or refinish old cabinets. This happens so often in the comments on sincere videos that confusion-baiters have caught on and started doing things wrong on purpose. I’m pretty confident that this explains the woman who posts about what she makes her “blue collar husband” for dinner. She wears a stony expression and never explains herself. The meals are absurd to the point of unbelievability, but viewers can’t seem to resist asking whether she is serious, and telling her that if she is, then she is harming her husband’s health by serving him old pizza fried in canola oil.

Some confusion-baiters get less confusing as you see them more. Alexia Delarosa, a stay-at-home mom who sometimes gets called a “tradwife” (though she doesn’t identify as one), makes a point of being inconsistent: This is how she grabs attention. Many of her posts appear to be sincere. I can easily believe that she’d bake a chocolate cake from scratch, and that she keeps chickens. But other videos are more ambiguous. Does she really cook dinner for her family in an off-the-shoulder gown? Does she really press empty egg cartons into homemade paper? When you first come across her posts, it takes some time, and several auto-plays, to figure out the answer.

[Read: Sharon McMahon has no use for rage-baiting]

When I spoke with Delarosa, she confirmed that she really did make paper as a fun craft project with her kids, but it wasn’t because she just didn’t feel like going to the store, as suggested in the video’s caption. That part was a joke. She jokes often and kind of out of spite. For whatever reason, her early videos about making jam and butter got powerful negative reactions from viewers. “People said, ‘This is so unrealistic. No stay-at-home mom lives like this. This is so crazy,’” she told me. “I started playing things up a little bit, almost poking fun at myself, recognizing that what I’m doing seems a little over-the-top and silly.”

She was candid about the fact that she will deliberately try to make people pause and wonder whether she’s for real. If the papermaking had been presented as a craft project, fewer people would have paid attention. Presented as the activity of a bizarre woman who assigns herself an obscene number of unnecessary chores, the same video was harder to scroll past. “People are more likely to stop and watch it,” she said. “That’s part of creating content and getting views.” She’s noticed that some people now come to her page just for the comment sections, which are entertainment in themselves. “They want to see who gets the video or who doesn’t, who’s been here long enough and gets what I’m doing.”

Whatever their approach, the confusion-baiters are receiving a lot of attention. (Delarosa and the woman cooking for her blue-collar husband each have hundreds of thousands of followers.) What they’ll get up to next remains unclear. Nathan Fielder notwithstanding, it’s hard to make a career out of being inscrutable. Witt told me he has been making videos for 20 months and doesn’t know where he’s going with it. He only just got an agent. He still has a full-time office job. When his co-workers come up to him and say they’ve just seen one of his videos, he says, “That’s how I know something is really popping.”

What I learned from talking with him is that he is actually very nice. Also: that some people are totally comfortable with lying to everyone in the world, and they wouldn’t even be embarrassed if somebody they knew saw them doing it. This is another thing I find confusing. But it’s not a joke—it’s true.

September 18, 2024  12:18:12

Why should humans do anything, if machines can do it better? The answer is crucial to the future of human civilization—and may just lie in religious texts from centuries ago.

From the digital (Google searches and Slack chats) to the purely mechanical (washing machines and microwaves), humans use tools nearly constantly to enhance or replace our own labor. Those that save time and effort are easy to appreciate—I have yet to meet someone who misses scrubbing clothes by hand. But the rapid rise of artificial intelligence—which can now write essays and poetry, create art, and substitute for human interaction—has scrambled the relationship between technology and labor. If the creators of AI models are to be believed, all of this has happened even before the technology has reached its full potential.

As this technology improves and proliferates—and as we can delegate more of our tasks to digital assistants—each of us must decide how to devote our time and energy. As a scholar of Jewish texts, I have spent the past 12 years working with a team of engineers who use machine-learning tools to digitize and expand access to the Jewish canon. Jewish tradition says nothing of ChatGPT, but it is adamant about work. According to the ancient rabbis, meaningful, creative labor is how humans channel the divine. It’s an idea that can help us all, regardless of our faith, be discerning adopters of new applications and devices in a time of great technological change. If you have ever felt the joy of untangling a seemingly intractable problem or the adrenaline rush that comes from applying creative energy to shape the world, then you know that worthwhile labor helps us channel our best selves. And we cannot afford to cede it to the robots.

What Americans colloquially call “work” divides into two categories in ancient Hebrew. Melakhah connotes creative labor, according to early rabbinic commentaries on the biblical text. This is distinct from avodah, the word used to describe more menial toil, such as the work that the enslaved Israelites perform for their Egyptian taskmasters as described in the Book of Exodus. Pirkei Avot, a third-century rabbinic treatise filled with life advice, charges its readers to “love work.” Even then, it was part of a textual tradition that distinguishes between those kinds of work we must love and those we just love to avoid. Most of the tech tools we use on a regular basis attempt to reduce our avodah: to speed up rote labor or make backbreaking tasks easier. In a perfect world, I believe, such tools would then free people up to spend more time on our melakhah.

Melakhah is most famous in rabbinic literature as being the overarching category for the 39 types of work that are forbidden on the Jewish Sabbath. Sometimes called “thoughtful labor,” these include actions such as sowing and reaping, building and destroying, and writing and erasing. At its core, melakhah requires intention. Tasks that allow you to set it and forget it are by definition not among the most serious violations of Shabbat. According to the rabbis of the classical rabbinic period, who lived and wrote in the first six centuries of the Common Era, such tasks are not the kind of work that allows us to exercise our divinely given ability to shape and change the world.

[Read: The only productivity hack that works on me]

In Avot DeRabbi Natan, a companion volume to Pirkei Avot, the very act of Creation in which God produces the world using language is framed as a quintessential example of melakhah. “Let there be light” may seem as effortless to modern readers as “Abracadabra!” but Genesis categorizes this act as labor, noting, “God rested on the seventh day from all the work which God had made” (Gn 2:2). Avot DeRabbi Natan argues that God’s choice to describe “Let there be light” as “work” is a testament to the value of creative labor. The human capacity to work, then, is a way that we imitate God.

That conclusion may sound blasphemous in our modern age, when many social scientists and therapists insist that leaving work behind at the end of the day allows one to be a better partner and parent, whereas a failure to compartmentalize one’s job leads to burnout. But such advice, unlike the ancient texts, fails to distinguish between God’s life-giving melakhah and the soul-sucking avodah that fills so many modern lives.

Perhaps because of the nature of their jobs, many Americans talk about work as something they would not do if they had a choice. We yearn for vacations, for summer, for time spent away from the grind. And yet, the authors of Avot DeRabbi Natan consider work fundamental to human fulfillment. In the Book of Genesis, God deposits Adam in the Garden of Eden and provides him with the first-ever to-do list: “And God placed him in the garden, to work it and guard it” (Gn 2:15–16). Adam was, quite literally, in paradise—not despite the work he was doing, but because of it.

[Read: AI has become a technology of faith]

Some of the ancient texts’ lessons on work seem outdated today. Consider, for example, the extensive discourses on the many steps of the process of making fabric, beginning with shearing, cleaning the wool, combing it, and so on. The rabbis of the third century didn’t have ChatGPT, nor did they devote many words to labor-replacing technologies. But they did live in a time when people had indentured servants, so they could easily envision a life in which labor was delegated to others. The Mishnah, a rabbinic legal work compiled around the year 200, discusses a woman so wealthy that she does not need to do anything but lounge; even her spinning and weaving can be delegated to the household help. But if she does no work at all, the Mishnah warns, she will go crazy.

Modern technologies such as generative AI threaten to make 21st-century Americans like the woman in the Mishnah: Deprived of purpose, convinced that our creative output is useless because a computer can produce a result that is sometimes just as good, or even better. Much of the debate around AI hinges on the question Can a computer do it better? But Jewish texts insist that the most important question is about process, not product. Tools that offer to replace work that I find meaningful aren’t ones I’ll be using anytime soon. I feel fulfilled when I write and when I teach even though I know that emerging large language models can write essays for me and may soon be able to transmit information to my students. I enjoy using my creative powers to bake despite the existence of bakeries that mass-produce delicious cookies in far less time and for far less money than I can.

Some digital “solutions” don’t just steal melakhah, but also make rote tasks proliferate. Are 20 Slack messages really more efficient than one phone call? I can’t quit Slack or totally avoid email, but I can recognize them as forms of avodah and push back against their ubiquity. Technology that doesn’t allow me to devote more of my time to creative labor isn’t worth using.

[Read: AI can’t make music]

Jewish law views the story of Creation as a blueprint for structuring the work week. “Six days you shall labor,” proclaims the Book of Exodus—that is, six days of creative work, followed by a day of rest. The implications of this model echo throughout the Bible and beyond: The day of rest is meaningless without the preceding six days of melakhah to sanctify it. At the end of the story of Creation, the Book of Genesis tells us that God deemed the world “very good.” To have a world in which we feel invested and fulfilled—that we can deem very good—we should let the machines do the chores while we, like God, create.

September 19, 2024  01:43:19

“I’m a proud crypto bro. You’re starting to become one of us, if not already,” Farokh Sarmad, a social-media influencer, said to former President Donald Trump during a livestream on X last night. According to the platform’s listener counter, more than 1 million people tuned in for the launch of World Liberty Financial, a new crypto project promoted by Trump and his family. The former president has been posting about it on social media for several weeks.

Or at least the launch was supposed to be the purpose. Trump instead gave scant details about the project itself and kept talking about cryptocurrency more broadly. He admitted to not fully understanding the technology, saying that young people can more easily grasp it, similar to how his grandchildren learned “perfect Chinese” as toddlers.

Nevertheless, he said, “we have to be the biggest and the best” when it comes to crypto. He emphasized why those in the industry should care about the 2024 election. Both the SEC and the Biden-Harris administration, he said, have been “very hostile toward crypto. Extremely hostile like nobody can believe.” In a Harris presidency, he added, the crypto world “will be living in hell.”

Trump wasn’t always this pro-crypto. He once referred to bitcoin as a scam, but he signaled interest in the crypto world in late 2022 when he partnered in the sale of Trump-themed NFT trading cards (digital-only collectibles maintained on the blockchain). This summer, he appeared at a bitcoin conference and declared that the United States “will be the crypto capital of the planet”; at least twice, he has hosted private parties at Mar-a-Lago for holders of his NFTs. It doesn’t hurt that crypto companies are bankrolling many Republican campaigns this election.

After Trump spoke, his longtime associate Steve Witkoff and his son Donald Trump Jr. came onto the livestream to talk more specifically about the new crypto project. Though the details are fuzzy, World Liberty Financial, on its X account, describes itself as driving “mass adoption of stablecoins”—a type of cryptocurrency that is supposedly less volatile than tokens such as bitcoin because it is tied to a somewhat stable asset, such as gold. It will reportedly also host some kind of crypto exchange and sell its own token, called WLFI, which will be a governance token—meaning it will be used to vote on business decisions and will not be transferable. The company’s bio on X currently reads: “Beware of Scams! Fake tokens & airdrop offers are circulating. We aren’t live yet!”

World Liberty Financial is led by Chase Herro and Zachary Folkman. Herro was involved in a previous cryptocurrency project called Dough Finance that was hacked and lost millions of dollars, according to a Bloomberg feature published last week. Folkman is also known for running a service called Date Hotter Girls, and Herro has long aspired to be a crypto influencer. (In 2018, Bloomberg reported, he was filmed in a Rolls-Royce saying, “You can literally sell shit in a can, wrapped in piss, covered in human skin, for a billion dollars if the story’s right, because people will buy it.”) The Trumps became involved with World Liberty Financial when Witkoff arranged the meeting between Herro, Folkman, and Trump’s sons about nine months ago, Witkoff said on the livestream.

Trump’s interest in joining a crypto project seemingly came from his sons. “Barron knows so much about this,” he said on the livestream. “Barron’s a young guy, but he knows it. He talks about his wallet. He’s got four wallets or something,” he said about his 18-year-old son. “Eric and Don know it so well.” The exact responsibilities of the Trumps involved are unclear. The company reportedly lists nonspecific roles for several members of the Trump family. The former president is “chief crypto advocate,” while both Eric Trump and Don Jr. are listed as “web3 ambassadors” and Barron Trump is listed as the company’s “DeFi visionary” (DeFi being a reference to peer-to-peer financial services).

It is uncommon and maybe even unprecedented for a presidential candidate to launch a major new business venture so close to the election. Trump already holds a majority of the stock of Trump Media & Technology Group, which owns his social-media platform, Truth Social—a possible conflict of interest. With his comments about the SEC last night, he appeared to suggest that he could interfere with a regulatory agency in favor of an industry that he is now financially involved in. The closest parallel for that might be former President Lyndon B. Johnson, who put pressure on the FCC to benefit his wife’s TV and radio empire. (Though this started when he was a congressman.)

Later in the livestream, Donald Trump Jr. spoke about his belief that the country’s Founding Fathers would be supportive of decentralized finance. He views it as less political than traditional banking, he said, and stated that his family has been “totally canceled” by banks. Echoing his father, he presented the 2024 election as a matter of life and death for crypto. The Republican Party, he said, is clearly the side that is pro-freedom. The left “is the side of censorship. They’re the side that wants to jail their political opponents,” he said. “They want to overregulate everything so much that you actually eliminate so many of the natural and great things that people love about crypto.”

September 17, 2024  18:25:02

Today, Snap, the parent company of Snapchat, one of the most popular social-media apps for teenage users, is announcing a new computer that you wear directly on your face. The latest in its Spectacles line of smart glasses, which the company has been working on for about a decade, shows you interactive imagery through its lenses, placing plants or imaginary pets or even a golf-putting range into the real world around you.

So-called augmented reality (or AR) is nothing new, and neither is wearable tech. Meta makes a pair of smart glasses in partnership with Ray-Ban, and claims they’re so popular that the company can’t make them fast enough. Amazon sells an Alexa-infused version of the famous Carrera frames, which make you look like a mob boss with access to an AI assistant (Alexa, where’s the best place to hide a body?). Apple launched its Vision Pro headset—which includes an AR mode, along with a fully immersive virtual-reality one—last year. And who could forget Google Glass? Consumers have sometimes been cool on the face computers, if not outright hostile toward them, but tech companies just can’t seem to quit the idea. From that perspective, it makes sense that Snap’s new Spectacles are more a demonstration of intent than an actual product: They’re targeted to developers who will apply and pay $99 a month to use them.

But this is also, arguably, what makes them interesting. In an interview last week, Snap CEO Evan Spiegel told me that he sees smart glasses as an opportunity to “reshape what a computer is, to make it something that actually keeps us grounded in the real world rather than behind a screen.” The company hasn’t accomplished this so far, of course, but the new Spectacles—and all those other smart glasses and AR headsets—are not being released into a void. They’re arriving at a moment when people are feeling pretty turned off by phones. People are angsty about how much time they spend looking down at small screens rather than engaging with the world around them. Parents are concerned that phones are driving a teen mental-health crisis. Smartphone sales have slowed, and even the latest iPhone isn’t doing great. Companies are trying to get people excited about technology again, by pitching all sorts of new hardware ideas that break the bounds of that rectangular screen, such as lapel pins or glorified walkie-talkies that work with AI assistants. I had this moment in mind as I wore the new Spectacles earlier this month, batting colorful digital blobs away while Paramore’s “Misery Business” played in the background.  

[Read: Yet another iPhone, dear God]

Among all the new glasses options, the Spectacles are distinct. They are oriented less toward utility—say, asking Alexa to set a timer—and more toward fun. In doing so, they offer a very specific formulation for the future of computing: that it should be amusing and connective. “If we look at the history of computers, they’ve actually always kept us indoors, taken us away from people that we love,” Spiegel told me. Growing up, he explained, he loved computers, but he had to go to the computer lab to use them, which meant forgoing the opportunity to hang out with friends during recess. He thinks smart glasses are an opportunity to reinvent screens by integrating computers more naturally into one’s life.

But Spectacles are still far from perfect. For starters, they are notably heavy. When I tried the glasses, they got warm to the touch after use, despite Snap’s assurances that it had invested in a state-of-the-art cooling system. They support up to 45 minutes of continuous usage, which isn’t very long. They reminded me of snorkeling goggles. You absolutely could not wear them in daily life without someone asking you what exactly you’ve got on your head. Their lenses can be dimmed to look like sunglasses, or made clear so people can still see your eyes. The glasses are controlled by your hands, held out in front of you. You pinch your index finger and your thumb together to “click.” (The onboarding process involves practicing by popping bubbles floating a few feet from your face.)

Mostly, they’re fun. Snapchat is famously popular with young people, and the glasses feel like a piece of hardware designed for this audience—closer to a Nintendo Switch than a Google Glass. In one game developed in partnership with Lego, you can project virtual bricks onto your kitchen table and move them around to build different creations. Ask it for an additional small blue brick, and one appears before you. In collaboration with Niantic, the company behind Pokémon Go, Snap is also launching a game called Peridot Beyond that lets you care for virtual pets.

Perhaps most important, at least when it comes to Spiegel’s bigger vision, the Spectacles can sync together, so that multiple people can see the same digital creations at once. In one experience, called Imagine Together, users can shout words to create cartoons that then appear in little bubbles on the screen. “Imagine a fox!” you might say, and then a small fox appears, floating in a bubble in midair between you and your friend.

[Read: The Apple Vision Pro is spectacular and sad]

Spiegel, who has four children, dreams that someday he’ll see his kids playing together in augmented reality. I asked him what he might say to parents who would be nervous about their children adding an additional level of computing into their daily life. (Parents are already plenty concerned about screen time as is, without the screens being barely an inch from their teens’ faces.) What would he say to the parents who just want their kids to go outside? Spiegel countered that he is a go-outside-and-play parent himself—but argued that the glasses could make playing together outside more fun.

At times, I found the Spectacles genuinely amusing, in a way the current Meta and Alexa glasses aren’t. And yet, they still don’t feel essential. Any device that’s hoping to disrupt the smartphone will have to be extremely good. Whether smart glasses are indeed the future of computing will depend on whether someone can make a pair that’s useful in day-to-day life. Spectacles aren’t there yet; they’re more novelty than utility. But the philosophical argument they make is a provocative one, even if it’s just that right now—an argument. Like the imaginary pet I saw while wearing them, it technically exists, but just barely.

September 17, 2024  18:11:57


In the nine years since Donald Trump descended the golden escalator in Trump Tower, Republican politicians have become less and less likely to publicly disagree with him. But in recent days, a rift has opened up between Trump and the GOP over one of his allies. Laura Loomer, an online conspiracy theorist with a penchant for bigotry, was seen leaving Trump’s private plane with him before the presidential debate last Tuesday. The next day, Loomer, who has said that 9/11 was an “inside job,” tagged along with Trump to a 9/11 memorial event.

Republican politicians do not like her proximity to the ex-president and have said so. “Laura Loomer is a crazy conspiracy theorist who regularly utters disgusting garbage intended to divide Republicans,” and stands to “hurt President Trump’s chances of winning re-election. Enough,” Republican Senator Thom Tillis tweeted on Friday. Other Republicans, including Lindsey Graham and even Marjorie Taylor Greene, who has espoused her own racist and conspiratorial ideas, made the rare move of implicitly challenging Trump in public; Greene said that Loomer does not have “the right mentality to advise” the president. Trump’s own staff has even reportedly tried to keep Loomer away from him. She has become a rare thing for the GOP these days: a red line that the party is not willing to cross.

Republicans have good reasons for disavowing Loomer. She has described Islam as a “cancer on humanity” and said that she is “pro–white nationalism.” Last week, she posted on X that the “White House will smell like curry” if Kamala Harris wins the election. Loomer’s racism is completely unabashed and unveiled, making her a unique liability even in a party that has spent the past two weeks terrorizing immigrants in Springfield, Ohio, with racist lies.

Like Trump, Loomer almost never backs down. But she doesn’t have the same media-power-broker status as Charlie Kirk or Tucker Carlson, both of whom have flirted with racism. And unlike Greene, Loomer doesn’t have a vote in Congress. She provides less value to congressional Republicans and is thus easier to censure.

In recent days, Trump has seemed to be feeling the pressure from his allies to distance himself from her. “Loomer doesn’t work for the campaign,” Trump reminded his followers on Truth Social on Friday, noting that he does “disagree with the statements she made.” But it was hardly a rebuke at all. “Like the many millions of people who support me,” he wrote, closing out his mea culpa, “she is tired of watching the Radical Left Marxists and Fascists violently attack and smear me.”

Trump’s allies believe that, in refusing to disavow her completely, he is missing an opportunity to earn political capital. Historically, figures like Loomer have actually worked to the benefit of more mainstream conservatives. With a circus character to criticize as too extreme, politicians can sanitize their own reputations as more moderate and sensible. The legendary genteel conservative thinker William F. Buckley Jr. is credited with his “crusade” against the John Birch Society, a radical right-wing group that was famous for conspiracy theorizing and rose to prominence in the 1960s. Buckley got to have the best of both worlds, as Matthew Dallek chronicled in his book Birchers: How the John Birch Society Radicalized the American Right. Publicly, by writing op-eds criticizing the group, Buckley appeared to keep the rogue fringe from tainting the American right. But he still retained many Birchers as a core base of support for the broader conservative movement.

A similar dynamic played out with David Duke, the former Ku Klux Klan grand wizard and Louisiana state representative who mounted a primary challenge to President George H. W. Bush in 1992. Duke’s Klan baggage and overt racism were eventually deemed a headache by more mainstream members of the party. He was disavowed during the 1992 presidential campaign, while another Republican primary candidate, Pat Buchanan, faced less scrutiny despite one prominent far-right publication viewing him as “Duke without the baggage,” as the writer John Ganz puts it in his history of the 1990s political right, When the Clock Broke. (The Anti-Defamation League has called Buchanan an “unrepentant bigot” and accused him of defending an alleged Nazi war criminal.)

But even those who are publicly dismissed for blatant demonstrations of hatred are often resilient in today’s messy information ecosystem. When CNN revealed in 2020 that Blake Neff, a writer for Carlson’s Fox News show, had a history of posting racist things online, Neff resigned and was criticized by both the company’s CEO and its president in an internal staff memo. Then, in 2023, Media Matters for America noticed that Neff had been hired as a producer for Kirk, showing yet again that these kinds of people and their ideas are never truly pushed away.

Today, Loomer seems to be a version of the Birchers or David Duke—the more extreme actor who is tossed aside as a sacrificial lamb to appease moderates and the masses. Almost every member of the GOP in high-profile elected offices, from its furthest-right fringe in Greene to a moderate such as Tillis, appears to understand this. Trump seemingly does not. On Friday, he told reporters that Loomer was a “big supporter” and a “free spirit.”

The former president has trampled over most norms by now, but his connection to Loomer is also evidence that his political instincts have dulled. During the debate, he parroted positions of the extremely online right that are inscrutable to most people, even those who mostly agree with him. It is not clear whether Trump’s palling around with Loomer is a product of his descent into the internet, or vice versa, but either way, the outcome is the same: As Trump has yanked the Republican Party to the far right, he has simultaneously welcomed extremists into the mainstream GOP.