The Atlantic - Technology
After wildfires erupted in Los Angeles County earlier this year, a team from the Department of Housing and Urban Development descended on the wreckage. Led by HUD Secretary Scott Turner, the entourage walked through the rubble in Altadena, reassuring victims that the Trump administration had their back. At Turner's request, a Christian-nationalist musician named Sean Feucht tagged along. "I can't overemphasize how amazing this opportunity is," Feucht had posted on Instagram the day before. "I'm bringing my guitar. We're going to worship. We're going to pray."
Feucht has recently become a MAGA superstar. He tours the country holding rallies that blend upbeat Christian-rock songs with sermons that tie in his right-wing political views. Between praising President Donald Trump as God's chosen one and suggesting that abortion supporters are "demons," Feucht has repeatedly advocated for the fusion of Church and state. During a performance in front of the Wisconsin statehouse in 2023, Feucht paused after a song to make a proclamation: "Yeah, we want God in control of government," he said. "We want God writing the laws of the land." He has held rallies at all 50 state capitols, spreading similar theocratic messages.
Feucht did not respond to multiple requests for comment. At times, he has denied being a Christian nationalist, but it can be hard to take that perspective seriously. Last year, he overtly embraced the term at a church in Tulsa, Oklahoma. "That's why we get called, Well, you're Christian nationalists. You want the kingdom to be the government? Yes! You want God to come and overtake the government? Yes! You want Christians to be the only ones? Yes, we do," Feucht said. "We want God to be in control of everything," he continued. "We want believers to be the ones writing the laws."
Feucht has the ear of many top Republicans. After he held a prayer gathering on the National Mall a week before the 2024 presidential election, Trump personally congratulated him for "the incredible job" he was doing defending "religious liberty." Feucht then attended Trump's inauguration prayer service at the National Cathedral in January, where he embraced Secretary of Defense Pete Hegseth. The very next week, he posted that House Speaker Mike Johnson had invited him to hold a worship event in the Capitol. Then, in April, Feucht performed at the White House.
Given his rallies and political connections, Feucht is "maybe the most effective evangelical figure on the far right," Matthew D. Taylor, the senior Christian scholar at the Institute for Islamic, Christian, and Jewish Studies, told me. He is a big reason Christian nationalism has more purchase now than at any other point in recent history. According to a February poll from the Public Religion Research Institute, a majority of Republicans support or sympathize with Christian nationalism. They agreed with a variety of statements provided by PRRI, such as "If the U.S. moves away from our Christian foundations, we will not have a country anymore." Last month, the Appeal to Heaven flag, a symbol popular among Christian nationalists, was spotted flying above a D.C. government building. Feucht is pushing to bring religion and government into even closer alignment.
Feucht comes from a subset of evangelical Christianity known as the New Apostolic Reformation, or NAR. As my colleague Stephanie McCrummen has written, "The movement has never been about policies or changes to the law; it's always been about the larger goal of dismantling the institutions of secular government to clear the way for the Kingdom. It is about God's total victory." Many NAR adherents believe in the "seven-mountain mandate," a framework that seeks to go beyond ending the separation between Church and state. The goal is to eventually control the "seven mountains" of contemporary culture: family, religion, education, media, arts and entertainment, business, and government. Feucht has endorsed the fundamental concept. "Why shouldn't we be the ones leading the way in all spheres of society?" he said in a 2022 sermon. In a conversation that same year, Feucht referenced his desire for Christian representation in "the seven spheres of society."
NAR has several high-profile leaders, but Feucht has been especially adept at drawing outside attention to the movement's goals. After rising to prominence during the early days of the coronavirus pandemic by throwing Christian-rock concerts in violation of lockdown orders, Feucht has built a massive audience of devotees. His constant stream of worship events across the country makes Christian nationalism more accessible for the religious masses, as does his prolific social-media presence (he has half a million followers between Instagram and X). Feucht is connected to just about every faction of the modern right, even the grassroots fringe: On one occasion, he enlisted a member of the Proud Boys, the sometimes-violent far-right group, as part of his security detail. (Feucht later claimed that he wasn't familiar with the group.)
With Feucht's help, a version of the seven-mountain mandate is coming true. The Trump administration is cracking down on "anti-Christian bias" in the federal government, and the president has hired a number of advisers who are linked to Christian nationalism. Under pressure from parents and lawmakers, schools have banned lesson plans and library books related to LGBTQ themes. Feucht is not single-handedly responsible for these wins for Christian nationalists, but his influence is undeniable. Feucht and Hegseth discussed holding a prayer service inside the Pentagon months before the secretary of defense actually did it. Or consider Charlie Kirk, the MAGA power broker who helped run the Trump campaign's youth-vote operation, and then vetted potential White House hires. In 2020, Feucht unsuccessfully ran for Congress and was endorsed by Kirk. Within a week of the endorsement, Kirk invoked the seven-mountain mandate at CPAC, the conservative conference. With Trump, he said, "finally we have a president that understands the seven mountains of cultural influence."
But not everything has been going well for Feucht. Last month, six staffers and volunteers who worked for Feucht published a long and detailed report accusing him of engaging in financial malfeasance. Feucht's former employees claim that he withheld promised expense reimbursements from ministry volunteers, engaged in donor and payroll fraud, and embezzled nonprofit funds for personal use. The allegations track with earlier reporting by Rolling Stone and Ministry Watch, the nonprofit Christian watchdog. Both have reported on opaque financial dealings involving his nonprofits. Citing a lack of transparency and efficiency, Ministry Watch currently gives Sean Feucht Ministries a "Donor Confidence Score" of 19 out of 100, and encourages potential donors to "withhold giving" to the organization.
Feucht hasn't been charged with any crimes stemming from the allegations, and has denied wrongdoing. "None of those allegations are true," Feucht said in a video he recently posted to YouTube. "We're in great standing with the IRS. We're in great standing with our accountants." He later added, "We are taking ground for Jesus, and we are not apologizing for that." It's possible Feucht's audience will take him at his word. The NAR movement is insular and unwavering in its worldview: Allegations are evidence of persecution for success. Still, a large part of Feucht's power is derived from his donors. At some point, some people might get fed up with giving him money. "He could lose traction at the follower level," Taylor said.
So far, that seems unlikely. Scandals can take down people, but ideas are more resilient. Kirk has continued to advocate for Christian-nationalist positions; last year, he argued that "the separation of Church and state is nowhere in the Constitution." (It is, in fact, in the Constitution: right there in the First Amendment.) Even the formerly staunchly secular world of tech is becoming more open to Christian nationalism. In October, Elon Musk held a town hall at Feucht's former church in Pennsylvania, and has called himself a "cultural Christian." Marc Andreessen and other investors have backed a tech enclave in rural Kentucky closely affiliated with Christian nationalists. Regardless of what happens to Feucht, many of the world's most powerful people seem to be inching closer to what he wants.
In hindsight I'll say: I always thought going crazy would be more exciting, roaming the street in a bathrobe, shouting at fruit. Instead I spent a weary season of my life saying representative. Speaking words and numbers to robots. Speaking them again more clearly, waiting, getting disconnected, finally reaching a person but the wrong person, repeating my story, would I mind one more brief hold. May my children never see the emails I sent, or the unhinged delirium with which I pressed 1 for agent.
I was tempted to bury the whole cretinous ordeal, except that I'd looked behind the curtain and vowed to document what I'd seen.
It all began last July, here in San Francisco. I'd been driving to my brother's house, going about 40 mph, when my family's newish Ford Escape simply froze: The steering wheel locked, and the power brakes died. I could neither steer the car nor stop it.
I jabbed at the "Power" button while trying to jerk the wheel free. No luck. Glancing ahead, I saw that the road curved to the left a few hundred yards up. I was going to sail off Bayshore Boulevard and over an embankment. I reached for the door handle.
What followed instead was pure anticlimactic luck: Ten feet before the curve in the road, the car drifted to a stop. Vibrating with relief, I clicked on the hazards and my story began.
That afternoon, with the distracted confidence of a man covered by warranty, I had the car towed to our mechanic. (I first tried driving one more time, cautiously, in case the malfunction had been a fluke. Within 10 minutes, it happened again.)
"We can see from the computer codes that there was a problem," the guy told me a few days later. "But we can't identify the problem."
Then he asked if I'd like to come pick up the car.
"Won't it just happen again?" I asked.
"Might," he said. "Might not."
I said that sounded like a subpar approach to driving and asked if he might try again to find the problem.
"Look," he said with an annoyed sigh, "we're not going to just go searching all over the vehicle for it."
This was in fact a perfect description of what I thought he should do, but there was no persuading him. I took the car to a different mechanic. A third mechanic took a look. When everyone told me the same thing, it started looking like time to replace the car, per the warranty. I called the Ford Customer Relationship Center.
Pinging my way through the phone tree, I was eventually connected with someone named Pamela, my case agent. She absorbed my tale, gave me her extension, and said she'd call back the next day.
Days passed with no calls, nor would she answer mine. I tried to find someone else at Ford and got transferred back to Pamela's line. By chance (it was all always chance), I finally got connected to someone with substantive information: Unless our vehicle's malfunction could be replicated and thus identified, the warranty wouldn't apply.
"But nobody can replicate the malfunction," I said.
"I understand your frustration."
Over the days ahead, and then weeks, and then more weeks, I got pulled into a corner of modern existence that you are, of course, familiar with. You know it from dealing with your own car company, or insurance company, or health-care network, or internet provider, or utility provider, or streaming service, or passport office, or DMV, or, or, or. My calls began getting lost, or transferred laterally to someone who needed the story of a previous repair all over again. In time, I could predict the emotional contours of every conversation: the burst of scripted empathy, the endless routing, the promise of finally reaching a manager who... CLICK. Once, I was told that Ford had been emailing me updates; it turned out they'd somehow conjured up an email address for me that bore no relationship to my real one. Weirdly, many of the customer-service and dealership workers I spoke with seemed to forget the whole premise and suggested I resume driving the car.
"Would you put your kids in it?" I'd ask. They were aghast. Not if the steering freezes up!
As consuming as this experience was, I rarely talked about it. It was too banal and tedious to inflict on family or friends. I didn't even like thinking about it myself. When the time came to plunge into the next round of calls or emails, I'd slip into a self-protective fugue state and silently power through.
Then, one night at a party, a friend mentioned something about a battle with an airline. Immediately she attempted to change the subject.
"It's boring," she said. "Disregard."
On the contrary, I told her, I needed to hear every detail. Tentatively at first, she told me about a family trip to Sweden that had been scuttled by COVID. What followed was a protracted war involving denied airline refunds, unusable vouchers, expired vouchers, and more. Other guests from the party began drifting over. One recounted a recent Verizon nightmare. Another had endured Kafkaesque tech support from Sonos. The stories kept coming: gym-quitting labyrinths, Airbnb hijinks, illogical conversations with the permitting office, confounding interactions with the IRS. People spoke of not just the money lost but the hours, the sanity, the basic sense that sense can prevail.
Taken separately, these hassles and indignities were funny anecdotes. Together, they suggested something unreckoned with. And everyone agreed: It was all somehow getting worse. In 2023 (the most recent year for which data are available), the National Customer Rage Survey showed that American consumers were, well, full of rage. The percentage seeking revenge (revenge!) for their hassles had tripled in just three years.
I decided to de-fugue and start paying attention. Was the impenetrability of these contact centers actually deliberate? (Buying a new product or service sure is seamless.) Why do we so often feel like everything's broken? And why does it feel more and more like this brokenness is breaking us?

Turns out there's a word for it.
In the 2008 best seller Nudge, the legal scholar Cass R. Sunstein and the economist Richard H. Thaler marshaled behavioral-science research to show how small tweaks could help us make better choices. An updated version of the book includes a section on what they called "sludge": tortuous administrative demands, endless wait times, and excessive procedural fuss that impede us in our lives.
The whole idea of sludge struck a chord. In the past several years, the topic has attracted a growing body of work. Researchers have shown how sludge leads people to forgo essential benefits and quietly accept outcomes they never would have otherwise chosen. Sunstein had encountered plenty of the stuff working with the Department of Homeland Security and, before that, as administrator of the Office of Information and Regulatory Affairs. "People might want to sign their child up for some beneficial program, such as free transportation or free school meals, but the sludge might defeat them," he wrote in the Duke Law Journal.
The defeat part rang darkly to me. When I started talking with people about their sludge stories, I noticed that almost all ended the same way: with a weary, bedraggled Fuck it. Beholding the sheer unaccountability of the system, they'd pay that erroneous medical bill or give up on contesting that ticket. And this isn't happening just here and there. Instead, I came to see this as a permanent condition. We are living in the state of Fuck it.
Some of the sludge we submit to is unavoidable, the simple consequence of living in a big, digitized world. But some of it is by design. ProPublica showed in 2023 how Cigna saved millions of dollars by rejecting claims without having doctors read them, knowing that a limited number of customers would endure the process of appeal. (Cigna told ProPublica that its description was "incorrect.") Later that same year, the Consumer Financial Protection Bureau ordered Toyota's motor-financing arm to pay $60 million for alleged misdeeds that included thwarting refunds and deliberately setting up a dead-end hotline for canceling products and services. (The now-diminished bureau canceled the order in May.) As one Harvard Business Review article put it, "Some companies may actually find it profitable to create hassles for complaining customers."
Sludge can also reduce participation in government programs. According to Stephanie Thum, an adjunct faculty member at the Indiana Institute of Technology who researches and writes about bureaucracy, agencies may use this fact to their advantage. "If you bury a fee waiver or publish a website in legalese rather than plain language, research shows people might stay away," Thum told me. "If you're a leader, you might use that knowledge to get rid of administrative friction, or put it in place."
Fee waivers, rejected claims: sludge pales compared with other global crises, of course. But that might just be its cruelest trick. There was a time when systemic dysfunction felt bold and italicized, and so did our response: We were mad as hell and we weren't going to take it anymore! Now something more insidious and mundane is at work. The system chips away as much as it crushes, all while reassuring us that that's just how things go.
The result: We're exhausted as hell and we're probably going to keep taking it.
Call Pamela. Call the mechanic. Call the other mechanic. Call that lemon-law lawyer. My exhausted efforts, to the extent I understood them, revolved around getting my car either fixed or replaced and getting the various nodes in the Ford universe to talk with one another. In the middle of work, or dinner, or a kid's soccer game, I'd peel off to answer a random call, because every now and then it was that one precious update from Ford, informing me that there was no news.
The hope, with all of this, was to burrow my way far enough into the circuitry to locate someone with the authority and inclination to help. Sometimes I got drips of information: the existence of a buyback department at Ford, for instance. Mostly I got nowhere.
The longer this dragged on, the more the matrix seemed to glitch. The dealership where I'd bought the car had no record of the salesman who'd sold it to me. Ford's internal database, at one point, claimed that I had already picked up the car I was still trying to get them to fix. A mechanic told me, "It's not that we couldn't fix it. It's that we never found the problem, so we were unable to fix it."
Another mechanic, apparently as delighted by our conversations as I was, grew petulant.
"Driving is a luxury," he told me without explanation.
Initiating these conversations in the first place: also a luxury, I was learning. For this we have the automatic call distributor to thank. The invention of this device in the mid-20th century allowed for the industrialization of customer service. In lieu of direct contact, calls could be funneled automatically to the next available agent, who would handle each one quickly and methodically.
Contact centers became an industry of their own and, with the rise of offshoring in the '90s, lurched into a new level of productivity, at least from a corporate perspective. Sure, wait times lengthened, pleasantries grew stilted, and sometimes the new accents were hard to understand. But inefficiency had been conquered, or outsourced to the customer, anyway.
Researching this shift led me to Amas Tenumah. As a college student in Oklahoma, Tenumah had come up with a million-dollar invention: a tool that would translate those agent voices into text, and then convert that text into a digital voice.
"So you'd end up with this robotic conversation," he told me, "which one could argue may even be worse. I didn't know what the hell I was doing."
The million dollars didnât materialize, but connections did. Needing work, he took a telemarketing job at a company called TCIM Services. Rather than transform contact centers, he strapped on a headset and joined one.
The obsession with efficiency in his new field astonished him. Going to the bathroom required a code. Breaks were regulated to the minute. Outwardly he worked in an office, but by any measure it was a factory floor. Overly long "handle time"? He'd get dinged. Too few calls answered? He'd get dinged. Too many escalations to a supervisor? Ding. Ostensibly the goal of customer service is to serve customers. Often enough, its true purpose is to defeat them.
In the two decades after he took that first job, Tenumah rose from agent to manager, ultimately running enormous contact centers around the world. His work took him from Colombia to the Philippines in an endless search for cheap and malleable labor.
In 2021, he published a slim book titled Waiting for Service: An Insider's Account of Why Customer Service Is Broken + Tips to Avoid Bad Service. Between calls to Ford and various mechanics, I'd begun reading it, and listening to the podcast that Tenumah co-hosts. He has a funny, straight-shooting manner that somehow lets him dish about his industry while continuing to work in it.
When we first spoke, I mentioned that someone at Ford had told me that my case had been closed at my request; I had to go through the whole process of reopening it. Was I imagining things, I asked, or was my lack of progress deliberate?
Tenumah laughed.
"Yes, sludge is often intentional," he said. "Of course. The goal is to put as much friction between you and whatever the expensive thing is. So the frontline person is given as limited information and authority as possible. And it's punitive if they connect you to someone who could actually help."
Helpfulness aside, I mentioned that I frequently felt like I was talking with someone alarmingly indifferent to my plight.
"That's called good training," Tenumah said. "What you're hearing is a human successfully smoothed into a corporate algorithm, conditioned to prioritize policy over people. If you leave humans in their natural state, they start to care about people and listen to nuance, and are less likely to follow the policy."
For some people, that humanity gets trained out of them. For others, the threat of punishment suppresses it. To keep bosses happy, Tenumah explained, agents develop tricks. If your average handle time is creeping up, hanging up on someone can bring it back down. If you've escalated too many times that day, you might "accidentally" transfer a caller back into the queue. Choices higher up the chain also add helpful friction, Tenumah said: Not hiring enough agents leads to longer wait times, which in turn weeds out a percentage of callers. Choosing cheaper telecom carriers leads to poor connection with offshore contact centers; many of the calls disconnect on their own.
"No one says, 'Let's do bad service,'" Tenumah told me. "Instead they talk about things like credit percentages": the number of refunds, rebates, or payouts extended to customers. "My boss would say, 'We spent a million dollars in credits last month. That needs to come down to 750.' That number becomes an edict, makes its way down to the agents answering the phones. You just start thinking about what levers you have."
"Does anyone tell them to pull those levers?" I asked.
"The brilliance of the system is that they don't have to say it out loud," Tenumah said. "It's built into the incentive structure."
That structure, he said, can be traced to a shift in how companies operate. There was a time when the happiness of existing customers was a sacred metric. CEOs saw the long arc of loyalty as essential to a company's success. That arc has snapped. Everyone still claims to value customer service, but as the average CEO tenure has shortened, executives have become more focused on delivering quick returns to shareholders and investors. This means prioritizing growth over the satisfaction of customers already on board.
Customers are part of the problem too, Tenumah added.
"We've gotten collectively worse at punishing companies we do business with," he said. He pointed to a deeply unpopular airline whose most dissatisfied customers return only slightly less often than its most satisfied customers. "We as customers have gotten lazy. I joke that all the people who hate shopping at Walmart are usually complaining from inside Walmart."
In other words, he said, companies feel emboldened to treat us however they want.
"It's like an abusive relationship. All it takes is a 20 percent-off coupon and you'll come back."
As in any dysfunctional relationship, a glimmer of promise arrived just when I was giving up hope. As mysteriously as she'd vanished, Pamela came back one day, and non-updates began to trickle in: My case was still under review; my patience was appreciated.
All of this was starting to remind me of something I'd read. The Simple Sabotage Field Manual was created in 1944 by the Office of Strategic Services, a predecessor to the CIA. The document was intended to spark a wave of nonviolent citizen resistance in Nazi-occupied Europe. "Never permit short-cuts to be taken in order to expedite decisions," advised one passage. "Bring up irrelevant issues as frequently as possible."
I'd encountered the manual in the past, and had thought of it as a quirky old curio. Now I saw it anew, as an up-to-the-minute handbook for corporate America. The "purposeful stupidity" once meant to sabotage enemy regimes has been repurposed to frustrate us: weaponized inefficiency in the name of profit. (I later discovered that Slate's Rebecca Onion had had this same revelation a full decade ago. Nevertheless the sabotage persists.)
As I waited for news from Ford, I searched for more contact-center agents willing to talk.
Rebecca Harris has fielded calls, mainly for telephone-, internet-, and TV-service companies, since 2007. She calls the work "traumatic."
"I'd want to do everything I can to help the person on the other end," she told me. "But I had to pretend that I can't, because they don't want me to escalate the call."
Many customers called because they were feeling pinched by their bill. For a lot of them, a rebate was available. But between the callers and that rebate, the company had installed an expanse of sludge.
"They would outright tell you in training you're not allowed to give them a rebate offer unless they ask you about it with specific words," she said. "If they say they're paying too much money, you couldn't mention the rebate. Or if the customer was asking about a higher rebate but you knew there was a lower one, they trained us to redirect them to that one."
Harris told me she'd think about her parents in times like this, and would treat her callers the way she'd want them treated. That didn't go over well with her managers. "They'd call me in constantly to retrain me," she said. "I wasn't meeting the numbers they were asking me to meet, so they weren't meeting their numbers."
Supervisors didnât tell Harris to deceive or thwart customers. But having them get frustrated and give up was the best way to meet those numbers.
Sometimes she'd intentionally drop a call or feign technical trouble: "'I'm sorry, the call ... I can't ... I'm having a hard time hearing y-.' It was sad. Or sometimes we'd drag out the call enough that they'd get agitated, or say things that got them agitated, and they'd hang up."
Even if an agent wanted to treat callers more humanely, much of the friction was structural, a longtime contact-center worker named Amayea Maat told me. For one, the different corners of a business were seldom connected, which forced callers to re-explain their problem over and over: more incentive to give up.
"And often they make the IVR" (interactive voice response, the automated phone systems we curse at) "really difficult to get through, so you get frustrated and go online."
She described working with one government agency that programmed its IVR to simply hang up on people whoâd been on hold for a certain amount of time.
There's a moment in Ford's hold music (an endless loop of demented hotel-lobby cheer) when the composition seems to speed up. By my 8,000th listen I was sure of it: The tempo rose infinitesimally in this one brief spot. Like the fly painted on men's-room urinals, this imperfection was clearly engineered to focus my attention and, in so doing, to distract me from the larger absurdity at hand.
Which is to say, my sanity had begun to fray.
When I set out to document the inner workings of sludge, I had in mind the dull architecture of delays and deferrals. But I had started to notice my own inner workings. The aggravation was adding up, and so was the fatigue. Arguing was exhausting. Being transferred to argue with a different person was exhausting. The illogic was exhausting.
Individually, the calls and emails were blandly substance-free. But together they spoke clearly: You are powerless. I began to wonder: Was the accretion of these exhaustions complicit in the broader hopelessness we seem to be feeling these days? Were these hassles and frictions not just costing us but warping us with a kind of administrative-spiritual defeatism?
Signs of that warping seem to be appearing more and more, as when a Utah man who says he was denied a refund for his apparently defective Subaru crashed the car through the dealership's door. But most of us wearily combat sludge through the proper channels, however hopeless it seems. A Nebraska man spent two years trying to change the apparently computer-generated name given to his daughter, Unakite Thirteen Hotel, after a bureaucratic error involving her birth certificate. She also hadn't received a Social Security number, without which she couldn't receive Medicaid and other services.
In his 2021 follow-up to Nudge, Sludge, Sunstein notes that this constellation of frictions "makes people feel that their time does not matter. In extreme cases, it makes people feel that their lives do not matter." I asked Sunstein about this depletion. "Suppose that people spend hours on the phone, waiting for help from the Social Security Administration, or seeking to get a license or a permit to do something," he replied. "They might start to despair, not only because of all that wasted time but because they are being treated as if they just don't count."
For Pamela Herd, a social-policy professor at the University of Michigan, sludge became personal when she began navigating services for her daughter, who has a disability. "It's one thing when I get frustrated at the DMV," she told me. "It's another thing when you're in a position where your kid's life might be on the line, or your kid's access to health insurance, or your access to food."
In 2018, Herd published Administrative Burden: Policymaking by Other Means, with her husband, Donald Moynihan, a professor of public policy at Michigan. The book examines how bureaucratic quicksand (complex paperwork, confusing procedures) actively stymies policy and access to government services. Rather than mere inefficiencies, the authors argue, a number of these obstacles are deliberate policy tools that discourage participation in programs such as Medicaid, keep people from voting, and limit access to social welfare. Marginalized communities are hit disproportionately.
Throughout my ordeal, it was always clear that I was among the fortunate sludgees. I had the time and flexibility to fight in the first place: to wait on hold, to write follow-up emails. Most people would've just agreed to start driving the damn car again. Fuck it.
One of sludge's most insidious effects is our ever-diminishing trust in institutions, Herd told me. Once that skepticism sets in, it's not hard for someone like Elon Musk to gut the government under the guise of efficiency. She was on speakerphone as she told me this, driving through the Southwest on vacation with Moynihan. As it happened, something had flown up and hit their windshield just before our conversation, and they were surely headed for a protracted discussion between their rental-car company and their insurance company, a little sludge of their own.
Exasperated as we all are, Tenumah, the customer-service expert, said that things are going to get much worse when customer service is fully managed by AI. And, as Moynihan observed, DOGE has already taken our frustration with government inefficiency and perverted it into drastic cuts that will only further complicate our lives.
But in some corners of academia and government, pushback to sludge is mounting. Regulations like the FTC's "Click to Cancel" rule seek to eliminate barriers to canceling subscriptions and memberships. And the International Sludge Academy, a new initiative from both the Paris-based Organization for Economic Cooperation and Development and the government of New South Wales, has promoted the adoption of "sludge audits" around the world. The business research firm Gartner predicts that "the right to talk to a human" will be EU law by 2028.
In the meantime, I've developed my own way of responding.
Years before my Ford ordeal, I'd already begun to understand that sludge was doing something to us. It first registered when I noticed a new vein of excuse in the RSVP sphere: "Sorry, love to, but I need to figure out our passport application tonight." "Sorry, researching new insurance plans."
The domestic tasks weren't new; the novelty was all the ways we were drowning in the basic administration of our own lives. I didn't have a solution. But I had an idea for addressing it. I fired off an email to some friends, and on a Tuesday night, a tradition began.
"Admin Night" isn't a party. It isn't laborious taking-care-of-business. It's both! At the appointed hour, friends come over with beer and a folder of disputed charges, expiring miles, summer-camp paperwork. Five minutes of chitchat, half an hour of quiet admin, rinse, repeat. At the end of each gathering, everyone names a minor bureaucratic victory and the group lets out a supportive cheer.
Admin Night rules. In an era of fraying social ties, it claws back a sliver of hang time. Part of the appeal is simply being able to socialize while plowing through the to-do list, a 21st-century efficiency fetish if ever there was one. But just as satisfying is having this species of modern enervation brought into the light. Learning of sludge's existence, Thum, the bureaucracy researcher, told me, is the first step in fighting it, and in pushing back against the despair it provokes.
Among sludge's mysteries is how it can suddenly clear. With no explanation, Pamela called one day to tell me that Ford had decided to buy back my car. She put me in touch with the Reacquired Vehicles Headquarters. From there I was connected to a "repurchase coordinator," then I was told to wait for another process in "Quality," and after some haggling over the price they agreed to buy the car back. To Ford's credit, they gave me a fair offer. But I would've accepted a turkey sandwich at that point.
What happens to the car next? I asked. I was told that if returned vehicles could be repaired, they could be resold with disclosures. But was Ford obligated to fix the defect before selling it? No one could give me a clear answer. I pondered options for warning potential buyers. Could I post something to Yelp and hope it somehow got noticed? Hide a note inside the car somewhere? Publish the Vehicle Identification Number (1FMCU0KZ0NUA29474) in a national magazine?
Before I could decide on a solution, I got the call. One hundred eight days after this whole thing began, I borrowed a friend's car and drove to the San Jose dealership where my Escape had been waiting all this time. When I arrived, a man named Dennis greeted me and we walked to the lot where the car was sitting. I grabbed everything out of the center console, and then we walked back inside.
"What's going to happen to it?" I asked. "Are they going to resell it?"
Dennis didn't know, or didn't seem inclined to discuss. (A Ford communications director named Mike Levine later told The Atlantic that the company does not resell any repurchased vehicles that can't be fully repaired. Given the confusion I witnessed, I still wonder how they confirm that a car is fully repaired.) I signed some papers, and it was over. The car that wasn't safe to drive, the process that seemed designed not to work: the whole experience ended not with a bang but with a cashier's check and a wordless handshake.
When I originally alerted Ford about this article, a spokesperson named Maria told me that my case was not typical and that she was sorry about it. Regarding all the back-and-forth, she said, "that was not seamless." Levine told The Atlantic that Ford does not "encourage or measure 'sludge,'" and that "there was zero intent to add 'sludge'" to my interactions with Ford. He said that the teams I spoke with had needed time to see whether they could replicate the problem with my car, though to my mind that suggests a more concerted effort than what I perceived.
Pamela emailed an apology, too, adding that, given "the experience you had with your vehicle, I do want to extend an offer for a maintenance plan for your vehicle should you decide to purchase a Ford again, as a complimentary gift for your patience with the brand, as I understand this process took a long time."
We did purchase another vehicle, but it wasn't a Ford.
Lately I've taken to noticing small victories in the war against sludge. That Nebraska dad with the daughter named Unakite Thirteen Hotel? I'm happy to report she was at last given a Social Security number in February, and was on her way to finally, officially, becoming Caroline.
Still, I couldn't help thinking of all the time her dad lost in that soul-sucking battle.
"It's been very, very taxing," he said in an interview.
I understood his frustration.
When tech companies first rolled out generative-AI products, some critics immediately feared a media collapse. Every bit of writing, imagery, and video became suspect. But for news publishers and journalists, another calamity was on the horizon.
Chatbots have proved adept at keeping users locked into conversations. They do so by answering every question, often through summarizing articles from news publishers. Suddenly, fewer people are traveling outside the generative-AI sites, a development that poses an existential threat to the media, and to the livelihood of journalists everywhere.
According to one comprehensive study, Google's AI Overviews (a feature that summarizes web pages above the site's usual search results) has already reduced traffic to outside websites by more than 34 percent. The CEO of DotDash Meredith, which publishes People, Better Homes & Gardens, and Food & Wine, recently said the company is preparing for a possible "Google Zero" scenario. Some have speculated that traffic drops resulting from chatbots were part of the reason outlets such as Business Insider and the Daily Dot have recently had layoffs. "Business Insider was built for an internet that doesn't exist anymore," one former staffer recently told the media reporter Oliver Darcy.
Not all publishers are at equal risk: Those that primarily rely on general-interest readers who come in from search engines and social media may be in worse shape than specialized publishers with dedicated subscribers. Yet no one is totally safe. Released in May 2024, AI Overviews joins ChatGPT, Claude, Grok, Perplexity, and other AI-powered products that, combined, have replaced search for more than 25 percent of Americans, according to one study. Companies train chatbots on huge amounts of stolen books and articles, as my previous reporting has shown, and scrape news articles to generate responses with up-to-date information. Large language models also train on copious materials in the public domain, but much of what is most useful to these models, particularly as users seek real-time information from chatbots, is news that exists behind a paywall. Publishers are creating the value, but AI companies are intercepting their audiences, subscription fees, and ad revenue.
[Read: The unbelievable scale of AI's pirated-books problem]
I asked Anthropic, xAI, Perplexity, Google, and OpenAI about this problem. Anthropic and xAI did not respond. Perplexity did not directly comment on the issue. Google argued that it was sending "higher-quality" traffic to publisher websites, meaning that users purportedly spend more time on the sites once they click over, but declined to offer any data in support of this claim. OpenAI referred me to an article showing that ChatGPT is sending more traffic to websites overall than it did previously, but the raw numbers are fairly modest. The BBC, for example, reportedly received 118,000 visits from ChatGPT in April, but that's practically nothing relative to the hundreds of millions of visitors it receives each month. The article also shows that traffic from ChatGPT has in fact declined for some publishers.
Over the past few months, I've spoken with several news publishers, all of whom see AI as a near-term existential threat to their business. Rich Caccappolo, the vice chair of media at the company that publishes the Daily Mail (the U.K.'s largest newspaper by circulation), told me that all publishers "can see that Overviews are going to unravel the traffic that they get from search, undermining a key foundational pillar of the digital-revenue model." AI companies have claimed that chatbots will continue to send readers to news publishers, but have not cited evidence to support this claim. I asked Caccappolo if he thought AI-generated answers could put his company out of business. "That is absolutely the fear," he told me. "And my concern is it's not going to happen in three or five years; I joke it's going to happen next Tuesday."
Book publishers, especially those of nonfiction and textbooks, also told me they anticipate a massive decrease in sales, as chatbots can both summarize their books and give detailed explanations of their contents. Publishers have tried to fight back, but my conversations revealed how much the deck is stacked against them. The world is changing fast, perhaps irrevocably. The institutions that make up our country's free press are fighting for their survival.
Publishers have been responding in two ways. First: legal action. At least 12 lawsuits involving more than 20 publishers have been filed against AI companies. Their outcomes are far from certain, and the cases might be decided only after irreparable damage has been done.
The second response is to make deals with AI companies, allowing their products to summarize articles or train on editorial content. Some publishers, such as The Atlantic, are pursuing both strategies (the company has a corporate partnership with OpenAI and is suing Cohere). At least 72 licensing deals have been made between publishers and AI companies in the past two years. But figuring out how to approach these deals is no easy task. Caccappolo told me he has "felt a tremendous imbalance at the negotiating table," a sentiment shared by others I spoke with. One problem is that there is no standard price for training an LLM on a book or an article. The AI companies know what kinds of content they want, and having already demonstrated an ability and a willingness to take it without paying, they have extraordinary leverage when it comes to negotiating. I've learned that books have sometimes been licensed for only a couple hundred dollars each, and that a publisher that asks too much may be turned down, only for tech companies to take their material anyway.
[Read: ChatGPT turned into a Studio Ghibli machine. How is that legal?]
Another issue is that different content appears to have different value for different LLMs. The digital-media company Ziff Davis has studied web-based AI training data sets and observed that content from "high-authority" sources, such as major newspapers and magazines, appears more desirable to AI companies than blog and social-media posts. (Ziff Davis is suing OpenAI for training on its articles without paying a licensing fee.) Researchers at Microsoft have also written publicly about "the importance of high-quality data" and have suggested that textbook-style content may be particularly desirable.
But beyond a few specific studies like these, there is little insight into what kind of content most improves an LLM, leaving a lot of unanswered questions. Are biographies more or less important than histories? Does high-quality fiction matter? Are old books worth anything? Amy Brand, the director and publisher of the MIT Press, told me that "a solution that promises to help determine the fair value of specific human-authored content within the active marketplace for LLM training data would be hugely beneficial."
A publisher's negotiating power is also limited by the degree to which it can stop an AI company from using its work without consent. There's no surefire way to keep AI companies from scraping news websites; even the Robots Exclusion Protocol, the standard opt-out method available to news publishers, is easily circumvented. Because AI companies generally keep their training data a secret, and because there is no easy way for publishers to check which chatbots are summarizing their articles, publishers have difficulty figuring out which AI companies they might sue or try to strike a deal with. Some experts, such as Tim O'Reilly, have suggested that laws should require the disclosure of copyrighted training data, but no existing legislation requires companies to reveal specific authors or publishers that have been used for AI training material.
Of course, all of this raises a question. AI companies seem to have taken publishers' content already. Why would they pay for it now, especially because some of these companies have argued in court that training LLMs on copyrighted books and articles is fair use?
Perhaps the deals are simply hedges against an unfavorable ruling in court. If AI companies are prevented from training on copyrighted work for free, then organizations that have existing deals with publishers might be ahead of their competition. Publisher deals are also a means of settling without litigation, which may be a more desirable path for publishers who are risk-averse or otherwise uncertain. But the legal scholar James Grimmelmann told me that AI companies could also respond to complaints like Ziff Davis's by arguing that the deals involve more than training on a publisher's content: They may also include access to cleaner versions of articles, ongoing access to a daily or real-time feed, or a release from liability for their chatbot's plagiarism. Tech companies could argue that the money exchanged in these deals is exclusively for the nonlicensing elements, so they aren't paying for training material. It's worth noting that tech companies almost always refer to these deals as partnerships, not licensing deals, likely for this reason.
Regardless, the modest income from these arrangements is not going to save publishers: Even a good deal, one publisher told me, won't come anywhere near recouping the revenue lost from decreased readership. Publishers that can figure out how to survive the generative-AI assault may need to invent different business models and find new streams of revenue. There may be viable strategies, but none of the publishers I spoke with has a clear idea of what they are.
Publishers have become accustomed to technological threats over the past two decades, perhaps most notably the loss of ad revenue to Facebook and Google, a company that was recently found to have an illegal monopoly in online advertising (though the company has said it will appeal the ruling). But the rise of generative AI may spell doom for the Fourth Estate: With AI, the tech industry even deprives publishers of an audience.
In the event of publisher mass extinction, some journalists will be able to endure. The so-called creator economy shows that it's possible to provide high-quality news and information through Substack, YouTube, and even TikTok. But not all reporters can simply move to these platforms. Investigative journalism that exposes corruption and malfeasance by powerful people and companies comes with a serious risk of legal repercussions, and requires resources, such as time and money, that tend to be in short supply for freelancers.
If news publishers start going out of business, won't AI companies suffer too? Their chatbots need access to journalism to answer questions about the world. Doesn't the tech industry have an interest in the survival of newspapers and magazines?
In fact, there are signs that AI companies believe publishers are no longer needed. In December, at The New York Times' DealBook Summit, OpenAI CEO Sam Altman was asked how writers should feel about their work being used for AI training. "I think we do need a new deal, standard, protocol, whatever you want to call it, for how creators are going to get rewarded," he said. He described an "opt-in" regime where an author could receive "micropayments" when their name, likeness, and style were used. But this could not be further from OpenAI's current practice, in which products are already being used to imitate the styles of artists and writers, without compensation or even an effective opt-out.
Google CEO Sundar Pichai was also asked about writer compensation at the DealBook Summit. He suggested that a market solution would emerge, possibly one that wouldn't involve publishers in the long run. This is typical. As in other industries they've "disrupted," Silicon Valley moguls seem to perceive old, established institutions as middlemen to be removed for greater efficiency. Uber enticed drivers to work for it, crushed the traditional taxi industry, and now controls salaries, benefits, and workloads algorithmically. This has meant greater convenience for consumers, just as AI arguably does, but it has also proved ruinous for many people who were once able to earn a living wage from professional driving. Pichai seemed to envision a future that may have a similar consequence for journalists. "There'll be a marketplace in the future, I think: there'll be creators who will create for AI," he said. "People will figure it out."
Updated at 2:28 p.m. ET on June 24, 2025
In April, Ezibon Khamis was dispatched to Akobo, South Sudan, to document the horrors as humanitarian services collapsed in the middle of a cholera outbreak. As a representative of the NGO Save the Children, Khamis would be able to show the consequences of massive cuts to U.S. foreign assistance made by the Department of Government Efficiency and the State Department. Seven of the health facilities that Save the Children had supported in the region have fully closed, and 20 more have partly ceased operations.
Khamis told us about passing men and women who carried the sick on their shoulders like pallbearers. Children and adults were laid on makeshift gurneys; many vomited uncontrollably. These human caravans walked for hours in up to 104-degree heat in an attempt to reach medical treatment, because their local clinics had either closed completely or run out of ways to treat cholera. Previously, the U.S. government had provided tablets that purified the water in the region, which is home to a quarter-million people, many of whom are fleeing violent conflicts nearby. Not anymore, Khamis says; now many have resorted to drinking untreated river water. He told us that at least eight people, five of them children, had died on their journey that day. As he entered a health facility in Akobo, he was confronted by a woman. "She just said, 'You abandoned us,'" Khamis told us.
[Read: The cruel attack on USAID]
We heard other such stories in our effort to better understand what happened when DOGE dismantled the United States Agency for International Development. In Nigeria, a mother watched one of her infant twins die after the program that had been treating them for severe acute malnutrition shut down. In South Sudan, unaccompanied children were unable to reunite with surviving relatives at three refugee camps, due to other cuts. Allara Ali, a coordinator for Doctors Without Borders who oversees the group's work at Bay Regional Hospital, in Somalia, told us that children are arriving there so acutely malnourished and "deteriorated" that they cannot speak, a result of emergency-feeding centers no longer receiving funds from USAID to provide fortified milks and pastes. Last month, 14 children died from severe acute malnutrition at Bay Regional, Doctors Without Borders wrote to us. Many mothers who travel more than 100 miles so that a doctor might see their child return home without them.
One man has consistently cheered and helped execute the funding cuts that have exacerbated suffering and death. In February, Elon Musk, acting in his capacity as a leader of DOGE, declared that USAID was "a criminal organization," argued that it was "time for it to die," and bragged that he'd "spent the weekend feeding USAID into the wood chipper."
Musk did not respond to multiple requests for comment for this article. Last month, in an interview with Bloomberg, he argued that his critics have been unable to produce any evidence that these cuts at USAID have resulted in any real suffering. "It's false," he said. "I say, 'Well, please connect us with this group of children so we can talk to them and understand more about their issue,' we get nothing. They don't even try to come up with a show orphan."
Musk is wrong, as our reporting shows, and as multiple other reports (and estimates) have also shown. But the issue here is not just that Musk is wrong. It is that his indifference to the suffering of people in Africa exists alongside his belief that he has a central role to play in the future of the human species. Musk has insisted that people must have as many children as possible (he is committed to siring a "legion" himself) and that we must become multiplanetary. Perhaps more than anyone else on Earth, Musk, the wealthiest man alive, has the drive, the resources, and the connections to make his moon shots a reality. His greatest and most consistent ambition is to define a new era for humankind. Who does he believe is worthy of that future?
For more than 20 years, Musk has been fixated on colonizing Mars. This is the reason he founded his rocket company, SpaceX; Musk recently proclaimed that its Starship program (an effort to create reusable rockets that he believes will eventually carry perhaps millions of humans to the Red Planet) is "the key branching point for human destiny or destiny of consciousness as a whole." This civilizational language is common; he's also described his Mars ambitions as "life insurance for life collectively."
He claims to be philosophically aligned with longtermism, a futurist philosophy whose proponents, self-styled rationalists, game out how to do the most good for the human race over the longest time horizon. Classic pillars of longtermism are guarding against future pandemics and addressing concerns about properly calibrating artificial intelligence, all with a focus on protecting future generations from theoretical threats. Musk's Mars obsession purports to follow this logic: An investment in a program that allows humans to live on other planets would, in theory, ensure that the human race survives even if the Earth becomes uninhabitable. Musk has endorsed the work of at least one longtermist who believes that this achievement would equate to trillions of lives saved in the form of humans who would otherwise not be born.
Saving the lives of theoretical future children appears to be of particular interest to Musk. On X and in interviews, he continuously fixates on declining birth rates. "The birth rate is very low in almost every country. And so unless that changes, civilization will disappear," Musk told Fox News's Bret Baier earlier this year. "Humanity is dying." He himself has fathered many children (14 that we know of) with multiple women. Musk's foundation has also donated money to fund population research at the University of Texas at Austin. An economics professor affiliated with that research, Dean Spears, has argued in The New York Times that "sustained below-replacement fertility will mean tens of billions of lives not lived over the next few centuries, many lives that could have been wonderful for the people who would have lived them."
But Musk's behavior and rhetoric do not track with the egalitarian principles these interests would suggest. The pronatalist community that he is aligned with is a loose coalition. It includes techno-utopians and Peter Thiel acolytes, but also more civic-minded thinkers who argue for better social safety nets to encourage more people to have families. The movement is also linked to regressive, far-right activists and even self-proclaimed eugenicists. In 2023, The Guardian reported that Kevin Dolan, the organizer of a popular pronatalist conference, had said on a far-right podcast that "the pronatalist and the eugenic positions are very much not in opposition, they're very much aligned." Via his X account, Musk has amplified to his millions of followers the talk given by Dolan at that 2023 conference.
Although other prominent pronatalists disavow the eugenics connection, the movement's politics can veer into alarming territory. In November 2024, The Guardian reported that Malcolm and Simone Collins, two of the pronatalist movement's most vocal figures, wrote a proposal to create a futuristic city-state designed to save civilization that included the "mass production of genetically selected humans" to create a society that would "grant more voting power to creators of economically productive agents." Last month, the Times reported that Musk has "privately" spent time with the Collinses.
Musk has also dabbled with scientific racism on X. The centibillionaire has engaged with and reposted statements by Jordan Lasker, a proponent of eugenics who goes by the name Crémieux online, according to reporting from The Guardian. On his Substack, Lasker has written about supposed links between national identity and IQ, defending at length an analysis that suggests that people in sub-Saharan Africa have "very low IQs" on average. Musk may not have explicitly commented on Lasker's work, which implies a relationship between race and intelligence, but in 2024, he responded favorably to an X post that argued that "HBCU IQ averages are within 10 points of the threshold for what is considered 'borderline intellectual impairment.'" The original post was ostensibly criticizing a United Airlines program that gave students at three historically Black colleges and universities an opportunity to interview for a pilot-training program. In his response to that post, Musk wrote, "It will take an airplane crashing and killing hundreds of people for them to change this crazy policy of DIE." ("DIE" is Musk's play on DEI.)
Musk frequently engages in this type of cagey shitposting: comments that seem to endorse scientific racists or eugenicist thinking without outright doing so. Those seeking to understand the worldview of one of the most powerful men on Earth are left to find the context for themselves. That context should include Musk's own family history, starting with his upbringing during the apartheid regime in South Africa and the beliefs of his grandfather Joshua Haldeman, who, as Joshua Benton reported for The Atlantic in 2023, was a radical technocrat and anti-Semite who wrote of the "very primitive" natives of South Africa after he moved there from Canada.
As Benton correctly notes, the sins of the grandfather are not the sins of the grandson; Musk's father, for example, was a member of an anti-apartheid party in South Africa, and Ashlee Vance reported in his biography of Musk that the apartheid system was a primary reason Musk left South Africa. But, as Benton also writes, "when Musk tweets that George Soros 'appears to want nothing less than the destruction of western civilization,' in response to a tweet blaming Soros for an 'invasion' of African migrants into Europe, he is not the first in his family to insinuate that a wealthy Jewish financier was manipulating thousands of Africans to advance nefarious goals."
Musk is also preoccupied with the far-right theory of white genocide, posting at various points in the past couple of years on X about how he feels there is a plot to kill white South Africans. Though South Africa has among the highest murder rates in the world, there is no evidence of a systematic white genocide there. Yet during Musk's political tenure, the Trump administration welcomed 59 white Afrikaner refugees while effectively closing off admission from other countries, including Sudan and the Republic of the Congo.
Here's a thought experiment: Based on the programs that Musk has cut, based on the people he meets with and reads, based on the windows we have into his thinking, who do you imagine might be welcomed on the Starship? On X, Musk has implied that the following are all threats to "Western Civilization": DEI programs, George Soros, the supposedly left-wing judiciary, and much of what gets put under the umbrella of "wokeness." Transgender-youth rights, according to Musk, are a "suicidal mind virus" attacking Western civilization.
Even the idea of empathy, Musk argues, is a kind of existential threat. "The fundamental weakness of Western civilization is empathy, the empathy exploit," Musk said in February on Joe Rogan's podcast. "They're exploiting a bug in Western civilization, which is the empathy response," he said of liberal politicians and activists. Musk, of course, was defending his tenure in the federal government, including his dismantling of USAID. Canceling programs overseas is consistent with his philosophy that "America is the central column that holds up all the places in civilization," as he told Baier during his Fox appearance. Follow that logic: Cutting global aid frees up resources that can be used to help Americans, who, in turn, can work toward advancing Western civilization, in part by pursuing a MAGA political agenda and funding pronatalist programs that allow for privileged people (ideally white and "high IQ") to have more children. The thinking seems to go like this: Who cares if people in South Sudan and Somalia die? Western civilization will thrive and propagate itself across the cosmos.
[Graeme Wood: Extreme violence without genocide]
Those who believe in this kind of thinking might say that line items on USAIDâs ledger are only of minor consequence in the grand scheme of things. But the world is not governed by the logic of a science-fiction plot. âThe fact is, itâs all interconnected,â Catherine Connor, the vice president of public policy at the Elizabeth Glaser Pediatric AIDS Foundation, told us when we asked about the grants Muskâs team had terminated at USAID. âIf you take one thing away, youâve broken a link in a chain.â She described a situation that her organization is seeing play out on the ground right now, where new HIV-positive mothers take their infants for a dry-blood-spot test to determine if the child has HIV as well. The spot test must be transported to a lab to get results, which will determine if a child is HIV positive and if they should receive lifesaving medication. âIn many of our sites, in many of the countries weâre working in, that lab transport has been terminated,â Connor said. âSo we can do all these things, but because we lost the lab part, we donât know if this childâs HIV-positive or not.â A link in the chain is broken; people are left on their own. The future becomes less certain, a bit darker.
"There's a sense of despondence, a sense of hopelessness that I haven't sensed in my time working in this field," Connor said. "The level of uncertainty and the level of anxiety that's been created is almost as damaging as the cuts themselves." It seems this hopelessness is a feature of a worldview committed to eradicating what Musk calls "suicidal empathy." Regardless, Musk, it appears, is much more interested in talking about his self-landing rockets and a future he promises is just on the horizon.
But much as Musk might want us to divert our eyes upward, something terrible is happening on Earth. The world's richest man is preventing lifesaving aid from reaching the world's poorest children, closing off their future as he fantasizes about another.
Illustration sources: Oranat Taesuwan / Getty; Neutronman / Getty; Win McNamee / Getty; SCIENCE PHOTO LIBRARY / Getty
This article previously misstated the number of children dying of malnutrition at Bay Regional Hospital.
Amazon delivery can be tough, unglamorous work. Workers must often reckon with complicated geography, demanding bosses, ever more biblical weather, and schedules that force time-conscious drivers to urinate in bottles. Surprising, then, that this is effectively the role in which one of the year's most anticipated video games casts the player. In Death Stranding 2, you arrange packages into swaying towers on your back, nudge the controller's left- and right-shoulder buttons to keep your weight balanced as you trip down rocky hills, and incur financial penalties for scuffing the merchandise if you take a tumble. The premise is a long trek from the super-soldier games, such as Call of Duty and Helldivers, that dominate the sales charts—even if you must occasionally battle the odd spectral marauder from a parallel dimension to clear the way to the next address on your delivery sheet.
In the game, written and directed by Hideo Kojima, one of the medium's few household-name directors, you play as Sam Porter Bridges, a dour, unsmiling courier, whose voice and likeness were provided by the Walking Dead actor Norman Reedus. In the game's rugged vision of the future, much of human civilization and its infrastructure have been destroyed; society has collapsed into isolated pockets, connected only by the delivery people who haul essentials between them, and connect them to the game's fictional version of the internet. You play, in essence, a cross between a haunted Amazon deliveryman and a telecoms engineer.
These are not vocations well suited to big-screen trailers (although the game's publisher, Sony, has carefully edited together an action-sequence trailer showing in movie theaters), but they do provide an eccentric yet effective premise through which Kojima explores two intertwined anxieties shaping our current moment: the creeping erosion of human intimacy by digital substitutes, and our growing unease about building technologies that might render our own roles obsolete. Kojima, beneath the idiosyncratic approach to storytelling and the distracting Hollywood cameos (Reedus is one of more than a dozen familiar faces in the game, drawn from Kojima's deep pool of celebrity acquaintances), is inviting players to consider how technology can quietly alienate us—not simply from one another, but from our physical selves, grounding the game's sprawling oddness in timely concerns that extend beyond our screens.
[Read: Video games are better without stories]
Released at the end of 2019, the first Death Stranding arrived at a painfully opportune moment. Though the game might have seemed niche, the pandemic worked in its favor. When the world locked its doors, this weird, lavish video game about delivering headache pills, vitamin supplements, first-aid kits, and cuddly toys to isolated communities felt searingly urgent. (More than 19 million people have played the game since its release, according to Kojima.)
This sequel emerges in a different moment, when anxieties around our simultaneous reliance on, and unease with, digital connectivity and computing power have reached a crescendo. These are Kojima and his team's chosen themes. Now that remote working has become normalized and loneliness rates surge despite constant digital interaction, many question whether our social apps and productivity tools genuinely bring us closer, or merely accelerate a hollowing-out of communities. Simultaneously, advances in AI and automation invite uncomfortable questions about who will be displaced by these new technologies.
In this sequel, you make your first deliveries on foot, using ladders to bridge rivers and climbing gear to rappel down mountains as you slog across rugged Mexico. When you arrive at your destination, you're typically greeted by a hologram version of its residents, who speak to you as if through a digital doorbell, not so different from the Ring cameras that postal workers interact with today. Once you've made a connection, you're given additional errands by these residents, with whom you interact via a kind of social-media platform that allows you to frantically dispense likes by mashing a button on your controller.
Whenever you bring a settlement onto the figurative network, that region comes online in real terms as well: roads, electric-vehicle charging points, and other useful features that have been installed by other Death Stranding 2 players begin to appear in your own game, saving you from having to spend time and resources building them yourself. You, too, can contribute materials to improve or repair these structures when they break down. In this way, the game provides a convincing metaphor for the benefits of living in an interconnected society; you profit from the efforts and inputs of others, and enjoy the satisfaction of making your own contributions to the shared world.
Bridges eventually relocates to Australia and begins the work of connecting a new continent, gaining access to off-road vehicles, zip lines, and monorails—equipment that hastens the task of making deliveries across precarious terrain, increasing the number of packages and materials you can move around the game world in ways that mimic the Amazonification of society. It's a compelling gameplay loop, designed to leave us feeling conflicted. Bridges slowly builds himself out of a job by assembling the tools and systems that will ultimately replace him.
[Read: A chatbot is secretly doing my job]
These mechanical advances evolve into an extended metaphor for modern life's digital paradox. Death Stranding 2 emphasizes how technology, though ostensibly uniting us, often strips interactions of humanity. The game's narrative eventually shows the player how digital systems can be co-opted for political purposes by the companies that run them, and how the spectral frisson of virtual likes and online exchanges can, in time, flatten us. "Communicating with someone via hologram is no substitute for being able to reach out and touch" them, one character remarks, late in the game. In this way, Death Stranding 2, a digital artifact that encourages remote cooperation among strangers, argues that a life lived virtually is no replacement for physicality.
The game is preoccupied with this idea. On his delivery routes, Bridges drinks from a flask to keep hydrated, and catches and eats bugs when peckish. A dedicated on-screen gauge even provides a readout of his urine levels. Then, at the end of each day, Bridges returns to his private quarters, relieves himself, and showers the dust and blood from his body. We are physical beings, the game emphasizes in these moments, who defecate and eat, who need sleep and water. We are used to viewing video-game protagonists as tireless ciphers. Bridges, by contrast, will become sunburned if he stays too long at work.
Yet Kojima is also concerned with the more spiritual elements of humanity. Early in the game, Bridges suffers an emotional loss that distorts reality as the game continues, shadowing him with hallucinations and night terrors. Wounding a character in such a blunt way could, in the hands of a middling fiction writer, feel like a cheap lunge at profundity. But the depiction lands with the weight of lived experience.
Kojima has spoken recently about his own childhood experience with grief. A collaboration with Prada in Tokyo, celebrating the director's work, includes the footage and transcript of a recorded conversation between Kojima and the Danish film director Nicolas Winding Refn (another cameo in Death Stranding 2). Kojima tells Refn that, when he was a child, his father died suddenly soon after he'd returned home from work. Kojima was 13 at the time. He accompanied his father in the ambulance: "His eyes were open," Kojima recalls, "but he could not talk because his body was shaking. I think he was trying to tell me something, but I did not know what he wanted to say. After that, I did not have a father."
In the game, although the presence of other players is suggested through shared items and social-media likes, Bridges ultimately travels alone, echoing Kojima's lasting uncertainty about what remains unsaid between even the closest people, whether still alive or lost to death. By the end, Death Stranding 2 feels less like a rebuke of digital life and more like a poignant appeal for balance. Kojima doesn't merely caution against our digital reliance; he reminds us of the tactile truths we risk leaving behind—of what it feels like to soothe a crying child; to shoulder physical burdens; to exist in the immediate, tangible moment, without a hungry eye on a smartphone screen. The game underscores that the physical bonds we share with one another are easily obscured by the convenient illusions of digital connection, yet still remain our most meaningful refuge in a transient world and a momentary existence.
Unlike nearly 98 percent of Americans under the age of 50, I don't have a smartphone. Actually, I've never had a smartphone. I've never called an Uber, never "dropped a pin," never used Venmo or Spotify or a dating app, never been in a group chat, never been jealous of someone on Instagram (because I've never been on Instagram). I used to feel ashamed of this, or rather, I was made to feel ashamed. For a long time, people either didn't believe me when I told them that I didn't have a smartphone, or reacted with a sort of embarrassed disdain, like they'd just realized I was the source of an unpleasant odor they'd been ignoring. But over the past two years, the reaction has changed. As the costs of being always online have become more apparent, the offline, air-gapped, inaccessible person has become an object of fascination, even envy. I have to confess that I've become a little smug about being a Never-Phoner—a holdout who somehow went from being left behind to ahead of the curve.
How far ahead is difficult to say. I think I've avoided the worst effects of the smartphone: the stunned, preoccupied affect; the social atrophy; the hunched posture and long horizontal neck creases of the power scroller. I'm pretty sure my attention span is better than many others', based on the number of people I've observed in movie theaters who either check their phone every few minutes (about half) or scroll throughout the entire movie (always a handful). I will, by the way, let you know if I witness you engaging in similar behavior: If you look at your phone more than once an hour, I will call you an "iPad baby"; if you put on an auto-generated Spotify playlist, I'll call you "a hog at the slop trough."
Being phoneless has definitely had downsides. The pockets of every jacket I own are filled with maps scrawled on napkins, receipts, and utility bills torn in half to get me to unfamiliar places. I once missed an important job interview because I'd mislabeled the streets on my hastily sketched map. At the end of group dinners, when someone says, "Everyone Venmo me $37.50," the two 20s I offer are taken up like a severed ear. And I'd be lying if I said I didn't occasionally get wistful about all the banter I'm probably missing out on in group chats.
Still, I've held out, though it's hard to articulate exactly why. The common anti-smartphone angles don't really land with me. The cranky "Get off your darn phone!" seems a little too close to "Get off my lawn!"—a knee-jerk aversion to new things is, if not the root of all evil, then the root of all dullness. The popular exhortations to "be fully present in the moment" also seem misguided. I think the person utterly absorbed in an Instagram Reel as they shuffle into the crosswalk against the light, narrowly saved by the "Ahem, excuse me" double-tap on the horn that bus drivers use to tell you that you're a split second from being reunited with your childhood dog, is probably living in the moment to a degree usually achieved only by Buddhist monks; the problem is just that it's the wrong moment.
[Read: Why are there so many "alternative devices" all of a sudden?]
Mostly, I think the reason I don't opt for the more frictionless phone life is that I can't help noticing how much people have changed in the decade or so since smartphones became ubiquitous. I used to marvel at the walking scroller's ability to sightlessly navigate the crowd, possibly using some kind of batlike sonar. But then, on occasion, whether out of a vague antisocial impulse (not infrequent) or simple necessity (as in navigating a narrow aisle at the grocery store), I'd play a game of chicken with one of these people, walking directly toward them to see when they'd veer off. A surprising percentage of the time, they didn't, and after the collision, they'd always blame me. Eventually, I realized they're not navigating anything; they've just outsourced responsibility for their corporeal self to everyone else around them, much as many people have outsourced their memory to their phone.
You're probably saying, well, at least they're on foot, and not driving a car. But many people look at their phones behind the wheel too. At a four-way stop, oftentimes the driver who yields to the crossing vehicle will steal a half-second look at their phone while they wait. At red lights, I see people all the time who don't look up from their phone when the light turns green—they just depress the gas when the car in front of them moves. Less hazardous but somehow more disturbing are the people I see scrolling in parked cars late at night. When I glance over—startled by the sudden appearance of a disembodied, underlit face on an otherwise deserted block—these people typically glare back, looking aggrieved and put-upon, as if I've broken a contract I didn't know I'd agreed to. I try to give them the benefit of the doubt; maybe they share a bed with a light sleeper, or have six annoying kids bouncing off the walls at home. But it happens often enough that I've come to think of them as the embodiment of contemporary alienation. Twenty-five years ago, we had Bowling Alone; today, we have scrolling alone.
[Read: The smartphone kids are not all right]
Of course, a phone is just a medium, no different on some level from a laptop or a book, and the blanket "phone bad" position elides the fact that people could be doing a nearly infinite number of things on them, many of them productive. The guy hunched intently over his phone at the gym might be reading the latest research on novel cancer treatments. But probably not. Once, a guy at my gym, whose shoulder I looked over as he used the stationary bike in front of me, was talking to an AI-anime-schoolgirl chatbot on his phone. She was telling him, in a very small, breathy voice, how she'd been in line at the store earlier, and when someone had cut in front of her, she'd politely spoken up and asked them to go to the back of the line. "That's great, baby," he said. "I'm so proud of you for standing up for yourself."
This is more or less typical of the stuff I spy people doing on their phone—self-abasing, a devitalized substitute for some real-life activity, and incredibly demoralizing, at least in the eyes of a phoneless naif. Many times, I've watched friends open a group chat, sigh, and go through a huge backlog of unread messages, mechanically dispensing heart eyes and laughing emoji—friendship as a data-entry gig you aren't paid for, yet can't quit. I have a girlfriend, but one of my friends often lets me watch as he uses the dating apps. Like most men (including myself), he overestimates his attractiveness while underestimating the attractiveness of the women he swipes on. "I guess I'll give her a chance," he'll say, swiping right on a woman whom ancient civilizations would've gone to war over.
As long as this friend does his daily quota of swipes, he's "out there and on the market," he tells me, and there's "nothing more he can do." Yet we go to the same coffee shop, and several times a week, we see a woman who seems to be his perfect match. Each day, he comes in, reads his little autofiction book, then takes out his laptop to peck away at a little autofiction manuscript. Each day, she comes in, reads her little autofiction book, then takes out her laptop to peck away at what we've theorized must also be a little autofiction manuscript. Sometimes they sit, by chance, at adjacent tables, so close that I'm sure he can smell her perfume. On these occasions, I try to encourage him from across the room—I raise my eyebrows suggestively, I subtly thrust my hips under the table. After she leaves, I go over and ask why he didn't talk to her; he reacts as if I suggested a self-appendectomy. "Maybe I'll see her on the apps," he says, of the woman he's just seen in real life for the 300th time.
[Read: The slow, quiet demise of American romance]
I don't blame him. He's 36 and has only ever dated through apps. Meeting people in public does seem exponentially harder than it was just 10 years ago. The bars seem mostly full of insular friend groups and people nervously awaiting their app dates. (Few things are more depressing than witnessing the initial meeting of app users. "Taylor …? Hi, Riley." The firm salesmanlike handshake, the leaning hug with feet kept at maximum distance, both speaking over each other in their job-interview voices.) I often see people come into a bar, order a single drink, sit looking at their phone for 20 to 30 minutes, and then leave. Maybe they're being ghosted. Or maybe they're doing exactly what they intended to do. But they frequently look disappointed; I imagine that their visit was an attempt at something—giving serendipity an opportunity to tap them on the shoulder and say, Here you go, here's the encounter that will fix you.
Witnessing all of this, I sense that a huge amount of social and libidinal energy has been withdrawn from the real world. Where has it all gone? Data centers? The comments? Many critics of smartphones say that phones have made people narcissists, but I don't think that's right. Narcissists need other people; the emotional charge of engagement is their lifeblood. What the oblivious walking scroller, the driving texter, and the unrealistic dating-app swiper have in common is almost the opposite—a quality closer to the insularity of solipsism, the belief that you're the one person who actually exists and that other people are fundamentally unreal. Solipsism, though, is a form of isolation, and to become accustomed to it is to make yourself a kind of recluse, capable of mimicking normalcy yet only truly comfortable shuffling among your feeds, muttering darkly to yourself.
I know that my refusal to get a smartphone is an implicit admission that I would become just as addicted to it as anyone else. Recently, my girlfriend handed me her phone and told me to put on music for sex; a few minutes later, she leaned over to see what was taking so long. I had been looking at the Wikipedia page for soft-serve ice cream. I have no idea why I was looking at that or even how I'd gotten there. It's like the sudden availability of unlimited information had sent me into a fugue state, and I just started swiping and scrolling. I guess I looked into the void and fell in. I won't lie; it felt kind of nice, giving up.
A car that accelerates instead of braking every once in a while is not ready for the road. A faucet that occasionally spits out boiling water instead of cold does not belong in your home. Working properly most of the time simply isn't good enough for technologies that people are heavily reliant upon. And two and a half years after the launch of ChatGPT, generative AI is becoming such a technology.
Even without actively seeking out a chatbot, billions of people are now pushed to interact with AI when searching the web, checking their email, using social media, and online shopping. Ninety-two percent of Fortune 500 companies use OpenAI products, universities are providing free chatbot access to potentially millions of students, and U.S. national-intelligence agencies are deploying AI programs across their workflows.
When ChatGPT went down for several hours last week, everyday users, students with exams, and office workers posted in despair: "If it doesnt come back soon my boss is gonna start asking why I havent done anything all day," one person commented on Downdetector, a website that tracks internet outages. "I have an interview tomorrow for a position I know practically nothing about, who will coach me??" wrote another. That same day—June 10, 2025—a Google AI overview told me the date was June 18, 2024.
For all their promise, these tools are still … janky. At the start of the AI boom, there were plenty of train wrecks—Bing's chatbot telling a tech columnist to leave his wife, ChatGPT espousing overt racism—but these were plausibly passed off as early-stage bugs. Today, though the overall quality of generative-AI products has improved dramatically, subtle errors persist: the wrong date, incorrect math, fake books and quotes. Google Search now bombards users with AI overviews above the actual search results or a reliable Wikipedia snippet; these occasionally include such errors, a problem that Google warns about in a disclaimer beneath each overview. Facebook, Instagram, and X are awash with bots and AI-generated slop. Amazon is stuffed with AI-generated scam products. Earlier this year, Apple disabled AI-generated news alerts after the feature inaccurately summarized multiple headlines. Meanwhile, outages like last week's ChatGPT brownout are not uncommon.
Digital services and products were, of course, never perfect. Google Search already has lots of unhelpful advertisements, while social-media algorithms have amplified radicalizing misinformation. But as basic services for finding information or connecting with friends, until recently, they worked. Meanwhile, the chatbots being deployed as fixes to the old web's failings—Google's rush to overhaul Search with AI, Mark Zuckerberg's absurd statement that AI can replace human friends, Elon Musk's suggestion that his Grok chatbot can combat misinformation on X—are only exacerbating those problems while also introducing entirely new sorts of malfunctions and disasters. More important, the extent of the AI industry's new ambitions—to rewire not just the web, but also the economy, education, and even the workings of government with a single technology—magnifies any flaw to the same scale.
[Read: The day Grok told everyone about "white genocide"]
The reasons for generative AI's problems are no mystery. Large language models like those that underlie ChatGPT work by predicting tokens—fragments of words—in a sequence, mapping statistical relationships between bits of text and the ideas they represent. Yet prediction, by definition, is not certainty. Chatbots are very good at producing writing that sounds convincing, but they do not make decisions according to what's factually correct. Instead, they arrange patterns of words according to what "sounds" right. Meanwhile, these products' internal algorithms are so large and complex that researchers cannot hope to fully understand their abilities and limitations. For all the additional protections tech companies have added to make AI more accurate, these bots can never guarantee accuracy. The embarrassing failures are a feature of AI products, and thus they are becoming features of the broader internet.
If this is the AI age, then we're living in broken times. Nevertheless, Sam Altman has called ChatGPT an "oracular system that can sort of do anything within reason" and last week proclaimed that OpenAI has "built systems that are smarter than people in many ways." (Debatable.) Mark Zuckerberg has repeatedly said that Meta will build AI coding agents equivalent to "mid-level" human engineers this year. Just this week, Amazon released an internal memo saying it expects to reduce its total workforce as it implements more AI tools.
The anomalies are sometimes strange and very concerning. Recent updates have caused ChatGPT to become aggressively obsequious and the Grok chatbot, on X, to fixate on a conspiracy theory about "white genocide." (X later attributed the problem to an unauthorized change to the bot, which the company corrected.) A recent New York Times investigation reported several instances of AI chatbots inducing mental breakdowns and psychotic episodes. These models are vulnerable to all sorts of simple cyberattacks. I've repeatedly seen advanced AI models stuck in doom loops, repeating the same sequence until they're manually shut down. Silicon Valley is betting the future of the web on technology that can unexpectedly go off the rails, melt down at the simplest tasks, and be misused with alarmingly little friction. The internet is reverting to beta mode.
My point isn't that generative AI is a scam or that it's useless. These tools can be legitimately helpful for many people when used in a measured way, with human verification; I've reported on scientific work that has advanced as a result of the technology, including revolutions in neuroscience and drug discovery. But these success stories bear little resemblance to the way many people and firms understand and use the technology; marketing has far outpaced innovation. Rather than pursuing targeted, cautiously executed uses, many throw generative AI at any task imaginable, with Big Tech's encouragement. "Everyone Is Using AI for Everything," a Times headline proclaimed this week. Therein lies the issue: Generative AI is a technology that works well enough for users to become dependent, but not consistently enough to be truly dependable.
[Read: AI executives promise cancer cures. Here's the reality.]
Reorienting the internet and society around imperfect and relatively untested products is not the inevitable result of scientific and technological progress—it is an active choice Silicon Valley is making, every day. That future web is one in which most people and organizations depend on AI for most tasks. This would mean an internet in which every search, set of directions, dinner recommendation, event synopsis, voicemail summary, and email is a tiny bit suspect; in which digital services that essentially worked in the 2010s are just a little bit unreliable. And while minor inconveniences for individual users may be fine, even amusing, an AI bot taking incorrect notes during a doctor visit, or generating an incorrect treatment plan, is not.
AI products could settle into a liminal zone. They may not be wrong frequently enough to be jettisoned, but they also may not be wrong rarely enough to ever be fully trusted. For now, the technology's flaws are readily detected and corrected. But as people become more and more accustomed to AI in their life—at school, at work, at home—they may cease to notice. Already, a growing body of research correlates persistent use of AI with a drop in critical thinking; humans become reliant on AI and unwilling, perhaps unable, to verify its work. As chatbots creep into every digital crevice, they may continue to degrade the web gradually, even gently. Today's jankiness may, by tomorrow, simply be normal.
The Trumps are doing phones now. This week, the Trump Organization announced its own cellphone service called Trump Mobile, as well as a gold-colored smartphone called the T1, which will purportedly be manufactured in the United States and retail for $499. It is available for preorder now and will supposedly ship in August or September, though one reporter who attempted to buy the device was left feeling unsure: His card was charged $64.70 instead of the full $100 down payment, and he was never asked to provide a shipping address.
What other details do you need? "Trump Mobile is going to revolutionize kind of, you know, cellphones," Eric Trump, the president's son and an executive for the Trump Organization, said on Fox Business. According to Trump Mobile's Terms of Use page, its service will be "powered by" Liberty Mobile, which itself runs on T-Mobile and uses the clever tagline "Let Freedom Ring." Other marketing materials confuse the issue by suggesting that Trump Mobile works with all three major carriers. The phone plan will cost $47.45 a month, which is somewhat expensive for this type of service but makes sense numerologically with Trump's brand (47th and 45th president).
To be clear, Trump is not building out his own networking infrastructure. Trump Mobile will be a mobile virtual network operator (or MVNO). These essentially buy service from major providers such as T-Mobile and AT&T at a discounted, wholesale rate, and then sell that service to customers who are comfortable with making various compromises in exchange for a much lower bill than they'd have with the mainstream carriers. This is about the extent of the available details. The Trump Organization did not return my request for additional information about where the phone would be made and by whom, nor did it answer my question about whether the phone currently exists physically. (The images on the website appear to be not photographs, but questionable mock-ups—the camera is depicted without a flash, as noted by The Verge.) I also asked the Trump Organization whether the Trump family faces a conflict of interest in entering an industry that is regulated by the Federal Communications Commission, an agency led by presidential appointees; no response.
But I was most interested in my unanswered question about why the Trump Organization would want to be involved in the telecom industry at all. To some extent, the answer is obvious: The Trumps are involved in such a sprawling array of moneymaking endeavors, it would make more sense to ask whether there are any they would not consider trying. They've done quite a bit in the tech sector already, between NFTs, memecoins, a social-media platform, and other fascinating ventures.
[Read: The Trump sons really love crypto]
Still, the choice is curious, if only because operating a cellphone service seems so boring and unglamorous. It's also funny timing: Last week, the actors Jason Bateman, Will Arnett, and Sean Hayes, who co-host the super-popular podcast SmartLess, announced SmartLess Mobile, a discount phone plan that also relies on T-Mobile. That move was not well explained by its participants, either. In an interview with People about the move, Bateman said twice that most people listen to podcasts on phones, and therefore the telecom industry is a logical one for podcasters to enter. "It just kind of organically shaped into something that really made sense for us to try," he added.
Did it?
The celeb phone companies remind me, a little, of the ISP that David Bowie launched in 1998, which for $19.95 a month offered "full uncensored" internet access, Bowie-themed chat rooms, and a coveted "@davidbowie.com" email address. That service lasted for eight years, which is pretty impressive, but it was more of a highly laborious artistic experiment and act of fan service than an effort to maintain and profit from digital infrastructure long term.
Today's businesspeople appear to be more directly inspired by the actor Ryan Reynolds's fortuitous investment in Mint Mobile, another MVNO, which sold for more than $1 billion in 2023. What they're doing is a step further than what he did, because they're not just investing in an existing phone company: The Trump Organization and the SmartLess guys are putting their names on something new. The question, then, is: Why would phone companies suddenly appeal to the type of people who might otherwise put their names on bottles of tequila or pickleball paddles or what have you?
I emailed Steffen Oefner, a vice president at Magenta Telekom, the Austrian iteration of T-Mobile (MVNOs are more common in Europe), to ask him. "Interessent point," he replied. "One answer is ... because they can." The MVNO industry now has a number of middleman companies that will do the work of negotiating with a network and then allow brands or influencers to simply put their name on a ready-made product, he explained. Setting up an MVNO is significantly cheaper than it was 10 years ago. "We do expect more celebrity brands or fan-base MVNOs to appear in the mobile market," he said. To add to my list, he gave the example of LariCel, a phone company in Brazil affiliated with the actor Larissa Manoela (who has more than 53 million followers on Instagram), which refers to its customers as LariLovers.
After reviewing the list of personalities who appeared at a recent MVNO conference held in Vienna, I found James Gray of Graystone Strategy, which consults with clients in the MVNO space. He agreed with Oefner about the ease of starting an MVNO and also pointed to the invention of digital SIM cards, or eSIMs, which enable people to switch to a new phone plan instantly, without having to wait for a little piece of plastic to be shipped to them. "Now we're in a digital world," he said.
[Read: The Trump posts you probably aren't seeing]
This general point had multiple implications. Previously, he said, companies such as T-Mobile would have preferred to partner with retail companies or banks, enticing new customers by offering them special deals on products or services they were already using. Today, a digital brand such as that of "an influencer or someone running a podcast" can also sell a service, maybe by saying that it represents their values or that it comes with access to a community. "Trump would be a relatively famous brand," he noted. As another example, he pointed to FC Barcelona, which recently started offering an MVNO called Barça Mobile to its many, many super-enthusiastic fans as a way to be even more intensely involved with the club (while also receiving cheap phone service).
The SmartLess guys are pitching their new venture by saying that a lot of people currently pay for more cellphone data than they actually use, given that they are actually connected to Wi-Fi most of the time (suggesting, I suppose, a customer base that is often either at home or in an office). The Trump plan will offer roadside assistance and access to a telehealth service (suggesting, I suppose, a customer base that is older or generally accident-prone). In the U.S., other politics-themed MVNOs also already exist—the California-based Credo Mobile puts some of its profits into left-wing causes, while the Texas-based Patriot Mobile puts some of its profits into right-wing causes. (The latter identified itself as a trailblazer of "the Red Economy" in a press release congratulating Trump Mobile on its launch.)
Gray concluded that the appeal of the phone business to the celebrities was obvious. "The difference between this and, say, a celebrity vodka is this is recurring revenue," he told me. "People sign up and they pay a subscription to you every month." (That was also the case with Rihanna's underwear membership, though people did eventually get upset about it.) And of course, he's right—that is the big difference. That is why a famous person would want to run a phone company. We're in a digital world now. How lucky.
Before DOGE, there was Twitter. In 2023, Elon Musk seemed too distracted by his latest venture to run the world's most valuable car company. Tesla was faltering as he focused on remaking (and renaming) the social-media network. So at Tesla's investor-day event in Austin that March, Musk responded with a rare show of force. He was joined onstage by a cadre of more than a dozen of the company's top executives, all to signal that even if he was extremely busy, Tesla was run by a world-class team: "We've obviously got significant bench strength here," Musk said. Sure enough, Tesla closed out 2023 with the best sales it's ever had.
Musk is badly in need of a similar comeback right now as he returns from Washington to focus on his struggling car company. In recent months, Tesla sales have plummeted as the chain-saw-wielding, far-right centibillionaire has turned off traditionally liberal electric-car buyers. The MAGA faithful never stepped up to take their place, and they're less likely to do so now that the Trump-Musk bromance is over. Musk has other problems: Tesla created the modern electric car as we know it, but now the automaker is falling behind the competition while Musk is more focused on AI and robots than on selling cars. And on top of everything else, the One Big Beautiful Bill Act working its way through Congress could cost Tesla billions each year.
This time around, however, Musk can't lean on that aforementioned bench even if he wants to. Something similar to DOGE's steep staffing cuts has been playing out at Tesla. About a third of the executives who stood onstage with him two years ago have left Tesla or been ousted. Many other high-profile company leaders have resigned. Just since April, Tesla has lost its head of software engineering, head of battery technology, and head of humanoid robotics. Tens of thousands of rank-and-file employees left last year amid waves of mass layoffs. At the end of the day, Tesla is the Musk show: The company is the biggest source of his wealth, and is core to his reputation as a tech genius. Now, after all of the pivots and attrition, the future of Tesla rests singularly on Musk more than it ever has.
To longtime Tesla chroniclers such as myself, the chaotic, rapid-fire cuts that defined Musk's tenure at DOGE felt familiar from the very beginning. The playbook was pioneered at Tesla. When Musk took over as CEO in 2008, Tesla was a start-up struggling to build its first car. His early infusions of personal cash, ruthless approach to cost cutting, and, in his words, "hardcore" work environment are widely credited with getting the automaker up and running. He has a famous approach to any type of problem: Get rid of preconceived notions, tear everything down, and rebuild from there. If things break, so be it. They can probably be repaired later on. At one point, the company got rid of the traditional turn-signal switch on some cars before later putting it back. (Tesla and Musk did not respond to my requests for comment.)
For a long time, the strategy worked. In the span of a decade, Tesla rose from a start-up to an auto giant worth more than Ford, Toyota, and GM combined—despite selling just a fraction of the cars its rivals did. That's why investors still back Musk today. He's made them a lot of money before, so if things get bad, he's the man to figure it out, right? Musk himself has helped promulgate the idea that he has all the answers. At one point, he said he would personally start approving some of his employees' expenses amid a "hardcore" round of cost cutting. "He has always been the kind of person who says, 'I am the only one who can do this,'" Sam Abuelsamid, an auto-industry analyst at the research firm Telemetry, told me. In 2018, when I was the editor in chief of the auto publication Jalopnik, Tesla's now-defunct communications team frantically admonished us for reporting that Doug Field, the company's top engineer, had left the company. He was merely the top vehicle engineer, a spokesperson said. Musk—despite not being trained as an engineer—was the top engineer.
[Read: Elon and the genius trap]
In 2019, an analysis from the financial firm Bernstein put Tesla's executive-turnover rate at nearly double the average of comparable Silicon Valley companies; the number was "dramatically higher" among Musk's direct reports as well. Layoffs and firings have sometimes felt more mercurial than anything else. Consider the team behind Tesla's charging network. In June 2023, I wrote that Tesla's fast and reliable "Superchargers" were its secret weapon; other automakers had begun building cars using Tesla's proprietary charging port to give their customers Supercharger access. About a year later, Tesla laid off the entire 500-person team. Many of the staffers were later rehired and returned, but not all: Rebecca Tinucci, Tesla's head of charging, left for good. The Supercharger network has grown since then, though not without a period of chaos for the automaker and the entire car industry that bet on it. The cuts to Tesla's charging workforce were part of a bigger reduction in headcount last year: Within the first six months of 2024, Tesla had shed nearly 20,000 employees, according to internal data viewed by CNBC. And Tesla's latest quarterly SEC filing, released in April, boasts of "a $52 million decrease in employee and labor costs" compared with last year. (In reporting this story, I reached out to roughly a dozen current and former Tesla staffers. None would talk with me on the record.)
Last year's layoffs, Musk said, were designed to position the company for its "next phase of growth." Based on everything he's said so far, that means AI. He has promised that robots and driverless cars will eventually deliver "a trillion dollars of profit a year." Several top executives and engineers have resigned after they reportedly clashed with Musk on his pivot. This month, Tesla is tentatively set to launch its long-awaited robotaxi service in Austin, starting with what Musk has said will be "10 to 12" self-driving Teslas that can also be remotely operated by humans if needed. In other words, the company has a long way to go before it's anywhere close to something like a driverless Uber. For now, the company still makes its money from selling cars, and Tesla has lost many of the smart people who helped create what was once an innovative automotive juggernaut. Musk still does have several long-standing deputies at the company, including Tom Zhu, a senior vice president who previously led Tesla's operations in China, and Lars Moravy, who leads vehicle engineering. But the departures put more pressure on Musk: He doesn't have the workforce he once did to make groundbreaking electric vehicles.
The silver lining for the future of electric vehicles is that these former Tesla staffers are fanning out to the rest of the car industry. Take Field, the former head Tesla engineer (or "head vehicle engineer," in Tesla's telling). He now leads advanced vehicle software at Ford, as well as a program tasked with making an affordable EV. Tinucci, the former head of Tesla's charging team, is now overseeing Uber's shift to electric vehicles. "I think we'll see kind of a Tesla diaspora," Kristin Hull, the founder of Nia Impact Capital, an investment firm with a stake in Tesla, told me. "The rest of the world is catching up. And I think that's also playing a part in why the talent is moving on." (Field and Tinucci didn't respond to requests for comment.)
Musk's detractors might easily fall into schadenfreude. His actions might finally be catching up with him. But if Tesla continues to slide, there will be ramifications beyond Musk and his investors simply losing money. Tesla remains one of the very few companies outside of China that is making money by selling electric cars, which makes it uniquely capable of making a super-affordable EV. Every day that goes by without cheaper options, Americans who might be inclined to go electric are instead buying gas-burning cars that could be on the road for a decade or more. Meanwhile, other carmakers have spent years racing to build cleaner cars in large part to keep up with Tesla. Without the company's continued dominance, it's easy to see a heavily polluting industry fall back on old habits. The risk is particularly high right now as the Trump administration is betting big on fossil fuels.
Whether Tesla can rebound will test something truly scarce—not Musk's wealth but the faith that others have in him. Musk has already alienated people on the left and right, but many people still fiercely believe in his ability to make them rich. At some point, even they might start to vanish.
From the beginning, Donald Trump's approach to deportations has been about both removing people from the country and the spectacle of removing people from the country. If any doubt lingered about the president's commitment to the cause, he erased it in Los Angeles, where his response to the widespread protests against a series of ICE raids—he has dispatched roughly 4,000 California National Guard troops and hundreds of Marines, all against the wishes of the state's governor—has been an extraordinary (and extraordinarily excessive) demonstration of force. Trump's message has been clear: No matter who or what tries to get in the way, his administration will push forward with deportations. L.A. is "the first, perhaps, of many" military deployments in the United States, Trump said earlier this week.
The spectacle part, Trump has down. The president has ushered in one of the most aggressive immigration campaigns in recent American history. The ICE raids in L.A. are just the latest of many high-profile instances in which federal law-enforcement officials have antagonized and rounded up suspected undocumented immigrants—some of whom are citizens or legal residents. Hundreds of immigrants have been swept away to what is functionally a modern Gulag in El Salvador, and the administration has recently tried to send others to South Sudan, which is on the verge of civil war. Enforcing immigration policy does not have to be inhumane, but the Trump administration is reveling in the very barbarity.
Amid all the bravado, however, the administration much more quietly has been struggling to deliver on Trump's campaign promise to "launch the largest deportation program of criminals in the history of America." So far, deportations have not dramatically spiked under Trump, though daily rates have been on the rise in recent weeks. According to government data obtained by The New York Times, the administration has deported more than 200,000 people since Trump's return to office, well below the rate needed to meet the White House's reported goal of removing 1 million unauthorized immigrants in his first year in office. If the pace over the first five months of Trump's presidency continues through the end of the year, total deportations would only slightly exceed those of President Barack Obama in fiscal year 2012.
The discrepancy is surprising. Given the visibility of Trump's efforts, you'd be forgiven for believing deportations were unfolding on a never-before-seen scale. The actual numbers don't diminish the cruelty of Trump's approach or the pain his administration has caused to those it has targeted. But they do reveal Trump's ever-increasing mastery of bending perceptions of reality. The administration's immigration tactics are so shocking, callous, and inescapable that they have generated the appearance of mass deportations. Paranoid rumors of ICE agents hovering around playgrounds, waiting to arrest noncitizen nannies, have spread. Some immigrants have opted to self-deport instead of subjecting themselves to the potential horrors of ICE detainment and deportation.
No reason exists to think the White House has been deliberately falling behind on its deportation promise. The administration has run into several challenges: The easiest migrants to deport are those who have just crossed the border, and unauthorized immigration has dropped significantly since Trump took office. (Trump's deportation approach and rhetoric has, in other words, seemingly been successful at keeping people out of the country in the first place.) At times, ICE has faced detention space constraints, and some of the administration's deportations have been stymied in the courts. In an email, the White House spokeswoman Abigail Jackson wrote, "President Trump has already secured the border in record time and is now fulfilling his promise to deport illegal aliens." The administration plans to use a "whole-of-government approach to ensure the efficient mass deportation of terrorist and criminal illegal aliens." In Trump's "big, beautiful bill" that is working its way through Congress, Republican lawmakers are set to give ICE a massive funding injection to help the agency finally carry out mass deportations. "If that money goes out, the amount of people they can arrest and remove will be extraordinary," Paul Hunker, who was formerly ICE's lead attorney in Dallas, told my colleague Nick Miroff.
[Read: We're about to find out what mass deportation really looks like]
For now, Trump is faking it until he makes it, with his administration doing everything it can to draw attention to its immigration tactics. Yesterday, federal agents handcuffed and forcibly removed Senator Alex Padilla of California just after he interrupted an immigration press conference featuring Secretary of Homeland Security Kristi Noem. In March, Noem had generated a previous viral moment when she traveled to the El Salvador megaprison where the administration has sent hundreds of supposed gang members, and gave remarks in front of shirtless, tattooed prisoners. The administration has even brought along right-wing media figures for its ICE arrests, producing further images of its immigration enforcement. Phil McGraw—the former host of Dr. Phil, who now hosts a show for MeritTV, a right-wing network he founded—was at ICE headquarters in L.A. the same day as the immigration sweeps in the city that prompted the protests last week.
Consider, too, the shocking ways in which the administration has discussed the deportation campaign on social media. On Wednesday, the Department of Homeland Security posted an image styled like a World War II propaganda flyer, urging Americans to "report all foreign invaders" to a DHS hotline. The White House's X account has created a meme about a crying woman in ICE custody, and uploaded a video of a deportee boarding a plane in clanking shackles with the caption "ASMR: Illegal Alien Deportation Flight."
[Read: The gleeful cruelty of the White House X account]
In one sense, all of this is just classic political spin. Instead of admitting that it's falling behind on one of its core promises, the White House is attempting to control the narrative. But the scale of reality-warping going on in this case is hard to fathom. Trump's actions are part of a larger way in which he has come to understand that he can sway the nation with the right viral imagery. When he was indicted on racketeering and other charges and forced to take a mug shot in 2023, Trump glowered into the camera instead of looking embarrassed or guilty, generating an image that became the subject of viral memes and campaign merchandise—and seemingly inspired his second presidential portrait, in which he strikes the same glowering pose. When he came within inches of dying during the assassination attempt in Pennsylvania last summer, he had the instincts to produce one of the most significant images in modern American history.
The series of videos, pictures, and aggressive actions his administration has taken regarding deportations are of the same genre. Trump takes the reality in front of him and does what he can to create a perception closer to what he wants: in this case, one of fear and terror. This is authoritarian behavior. Trump is marshaling propaganda to mislead Americans about what is really happening. Other recent strongman leaders, such as Rodrigo Duterte in the Philippines and Viktor Orbán in Hungary, have used a similar playbook. If Trump can't remove as many immigrants as he promised, the president can still use his talent for warping perceptions to make it feel as though he is. Laws don't need to change for free speech to be chilled, for immigrants to flee, and for people to be afraid.
One hallmark of our current moment is that when an event happens, there is little collective agreement on even basic facts. This, despite there being more documentary evidence than ever before in history: Information is abundant, yet consensus is elusive.
The ICE protests in Los Angeles over the past week offer an especially relevant example of this phenomenon. What has transpired is fairly clear: A series of ICE raids and arrests late last week prompted protests in select areas of the city, namely downtown, near a federal building where ICE has offices, and around City Hall and the Metropolitan Detention Center. There have been other protests south of there, around a Home Depot in Paramount, where Border Patrol agents gathered last week. The majority of these protests have been civil ("I mostly saw clergy sit-ins and Tejano bands," The American Prospect's David Dayen wrote). There has been some looting and property destruction. "One group of vandals summoned several Waymo self-driving cars to the street next to the plaza where the city was founded and set them ablaze," my colleague Nick Miroff, who has been present at the demonstrations, wrote.
[Read: Stephen Miller triggers Los Angeles]
As is common in modern protests, there has also been ample viral footage from news organizations showing militarized police responding aggressively in encounters, sometimes without provocation. In one well-circulated clip, an officer in riot gear fires a nonlethal round directly at an Australian television correspondent carrying a microphone while on air; another piece of footage shot from above shows a police officer on horseback trampling a protester on the ground.
All of these dynamics are familiar in the post-Ferguson era of protest. What you are witnessing is a news event distributed and consumed through a constellation of different still images and video clips, all filmed from different perspectives and presented by individuals and organizations with different agendas. It is a buffet of violence, celebration, confusion, and sensationalism. Consumed in aggregate, it might provide an accurate representation of the proceedings: a tense, potentially dangerous, but still contained response by a community to a brutal federal immigration crackdown.
Unfortunately, very few people consume media this way. And so the protests take on the choose-your-own-adventure quality of a fractured media ecosystem, where, depending on the prism one chooses, what's happening in L.A. varies considerably.
Anyone is capable of cherry-picking media to suit their arguments, of course, and social media has always narrowed the aperture of news events to fit particular viewpoints. Regardless of ideology, dramatic perspectives succeed on platforms. It is possible that one's impression of the protests would be incorrectly skewed if informed only by Bluesky commentators, MSNBC guests, or self-proclaimed rational centrists. The right, for example, has mocked the idea of "mostly peaceful protests" as ludicrous when juxtaposed with video of what they see as evidence to the contrary. It's likely that my grasp of the events and their politics is shaped by decades of algorithmic social-media consumption.
Yet the situation in L.A. only further clarifies the asymmetries among media ecosystems. This is not an even playing field. The right-wing media complex has a disproportionate presence and is populated by extreme personalities who have no problem embracing nonsense AI imagery and flagrantly untrue reporting that fits their agenda. Here you will find a loosely affiliated network of streamers, influencers, alternative social networks, extremely online vice presidents, and Fox News personalities who appear invested in portraying the L.A. protests as a full-blown insurrection. To follow these reports is to believe that people are not protesting but rioting throughout the city. In this alternate reality, the whole of Los Angeles is a bona fide war zone. (It is not, despite President Donald Trump's wildly disproportionate response, which includes deploying hundreds of U.S. Marines to the area and federalizing thousands of National Guard members.)
I spent the better part of the week drinking from this particular firehose, reading X and Truth Social posts and watching videos from Rumble. On these platforms, the protests are less a news event than a justification for the authoritarian use of force. Nearly every image or video contains selectively chosen visuals of burning cars or Mexican flags unfurling in a smog of tear gas, and they're cycled on repeat to create a sense of overwhelming chaos. They have titles such as "CIVIL WAR ALERT" and "DEMOCRATS STOKE WW3!" All of this incendiary messaging is assisted by generative-AI images of postapocalyptic, smoldering city streets—pure propaganda to fill the gap between reality and the world as the MAGA faithful wish to see it.
I've written before about how the internet has obliterated the monoculture, empowering individuals to cocoon themselves in alternate realities despite confounding evidence—it is a machine that justifies any belief. This is not a new phenomenon, but the problem is getting worse as media ecosystems mature and adjust to new technologies. On Tuesday, one of the top results for one user's TikTok search for "Los Angeles curfew" was an AI-generated video rotating through slop images of a looted city under lockdown. Even to the untrained eye, the images were easily identifiable as AI-rendered (the word "curfew" came out looking like "ciuftew"). Still, it's not clear that this matters to the people consuming and sharing the bogus footage. Even though such reality-fracturing has become a load-bearing feature of our information environment, the result is disturbing: Some percentage of Americans believes that one of the country's largest cities is now a hellscape, when, in fact, almost all residents of Los Angeles are going about their normal lives.
On platforms such as Bluesky and Instagram, I've seen L.A. residents sharing pictures of themselves going about their day-to-day lives—taking out the trash, going to the farmers' market—and lots of pictures of the city's unmistakable skyline against the backdrop of a beautiful summer day. These are earnest efforts to show the city as it is (fine)—an attempt to wrest control of a narrative, albeit one that is actually based in truth. Yet it's hard to imagine any of this reaching the eyes of the people who participate in the opposing ecosystem, and even if it did, it's unclear whether it would matter. As I documented in October, after Hurricanes Helene and Milton destroyed parts of the United States, AI-generated images were used by Trump supporters "to convey whatever partisan message suits the moment, regardless of truth."
[Read: I'm running out of ways to explain how bad this is]
In the cinematic universe of right-wing media, the L.A. ICE protests are a sequel of sorts to the Black Lives Matter protests of the summer of 2020. It doesn't matter that the size and scope have been different in Los Angeles (at present, the L.A. protests do not, for instance, resemble the 100-plus nights of demonstrations and clashes between protesters and police that took place in Portland, Oregon, in 2020): Influencers and broadcasters on the right have seized on the association with those previous protests, insinuating that this next installment, like all sequels, will be a bigger and bolder spectacle. Politicians are running the sequel playbook—Senator Tom Cotton, who wrote a rightly criticized New York Times op-ed in 2020 urging Trump to "Send in the Troops" to quash BLM demonstrations, wrote another op-ed, this time for The Wall Street Journal, with the headline "Send in the Troops, for Real." (For transparency's sake, I should note that I worked for the Times opinion desk when the Cotton op-ed was published and publicly objected to it at the time.)
There is a sequel vibe to so much of the Trump administration's second term. The administration's policies are more extreme, and there's a brazenness to the whole affair—nobody's even trying to justify the plot (or, in this case, cover up the corruption and dubious legality of the government's deportation regime). All of us, Trump supporters very much included, are treated as a captive audience, forced to watch whether we like it or not.
This feeling has naturally trickled down to much of the discourse and news around Trump's second presidency, which feels (and generally is) direr, angrier, more intractable. The distortions are everywhere: People mainlining fascistic AI slop are occupying an alternate reality. But even those of us who understand the complexity of the protests are forced to live in our own bifurcated reality, one where, even as the internet shows us fresh horrors every hour, life outside these feeds may be continuing in ways that feel familiar and boring. We are living through the regime of a budding authoritarian—the emergency is here, now—yet our cities are not yet on fire in the way that many shock jocks say they are.
The only way out of this mess begins with resisting the distortions. In many cases, the first step is to state things plainly. Los Angeles is not a lawless, postapocalyptic war zone. The right to protest is constitutionally protected, and protests have the potential to become violent—consider how Trump is attempting to use the force of the state to silence dissent against his administration. There are thousands more peaceful demonstrations scheduled nationally this weekend. The tools that promised to empower us, connect us, and bring us closer to the truth are instead doing the opposite. A meaningful percentage of American citizens appears to have dissociated from reality. In fact, many of them seem to like it that way.
For more than 20 years, print media has been a bit of a punching bag for digital-technology companies. Craigslist killed the paid classifieds, free websites led people to think newspapers and magazines were committing robbery when they charged for subscriptions, and the smartphone and social media turned reading full-length articles into a chore. Now generative AI is in the mix—and many publishers, desperate to avoid being left behind once more, are rushing to harness the technology themselves.
Several major publications, including The Atlantic, have entered into corporate partnerships with OpenAI and other AI firms. Any number of experiments have ensued—publishers have used the software to help translate work into different languages, draft headlines, and write summaries or even articles. But perhaps no publication has gone further than the Italian newspaper Il Foglio. For one month, beginning in late March, Il Foglio printed a daily insert consisting of four pages of AI-written articles and headlines. Each day, Il Foglio's top editor, Claudio Cerasa, asked ChatGPT Pro to write articles on various topics—Italian politics, J. D. Vance, AI itself. Two humans reviewed the outputs for mistakes, sometimes deciding to leave in minor errors as evidence of AI's fallibility and, at other times, asking ChatGPT to rewrite an article. The insert, titled Il Foglio AI, was almost immediately covered by newspapers around the world. "It's impossible to hide AI," Cerasa told me recently. "And you have to understand that it's like the wind; you have to manage it."
Now the paper—which circulates about 29,000 copies each day, in addition to serving its online readership—plans to embrace AI-written content permanently, issuing a weekly AI section and, on occasion, using ChatGPT to write articles for the standard paper. (These articles will always be labeled.) Cerasa has already used the technology to generate fictional debates, such as an imagined conversation between a conservative and a progressive cardinal on selecting a new pope; a review of the columnist Beppe Severgnini's latest book, accompanied by Severgnini's AI-written retort; the chatbot's advice on what to do if you suspect you're falling in love with a chatbot ("Do not fall in love with me"); and an interview with Cerasa himself, conducted by ChatGPT.
Il Foglio's AI work is full-fledged and transparently so: natural and artificial articles, clearly divided. Meanwhile, other publications provide limited, or sometimes no, insight into their usage of the technology, and some have even mixed AI and human writing without disclosure. As if to demonstrate how easily the commingling of AI and journalism can go sideways, just days after Cerasa and I first spoke, at least two major regional American papers published a spread of more than 50 pages titled "Heat Index" that was riddled with errors and fabrications; a freelancer who'd contributed to the project admitted to using ChatGPT to generate at least some portions of the text, resulting in made-up book titles and expert sources who didn't actually exist. It was an embarrassing example of what can happen when the technology is used to cut corners.
[Read: At least two newspapers syndicated AI garbage]
With so many obvious pitfalls to using AI, I wanted to speak with Cerasa to understand more about his experiment. Over Zoom, he painted an unsettling, if optimistic, portrait of his experience with AI in journalism. Sure, the technology is flawed. It's prone to fabrications; his staff has caught plenty of them, and has been taken to task for publishing some of those errors. But when used correctly, it writes well; at times, Cerasa told me, more naturally than even his human staff.
Still, there are limits. "Anyone who tries to use artificial intelligence to replace human intelligence ends up failing," he told me when I asked about the "Heat Index" disaster. "AI is meant to integrate, not replace." The technology can benefit journalism, he said, "only if it's treated like a new colleague, one that needs to be looked after."
The problem, perhaps, stems from using AI to substitute rather than augment. In journalism, "anyone who thinks AI is a way to save money is getting it wrong," Cerasa said. But economic anxiety has become the norm for the field. A new robot colleague could mean one, or three, or 10 fewer human ones. What, if anything, can the rest of the media learn from Il Foglio's approach?
Our conversation has been edited for length and clarity.
Matteo Wong: In your first experiment with AI, you hid AI-written articles in your paper for a month and asked readers if they could detect them. How did that go? What did you learn?
Claudio Cerasa: A year ago, for one month, every day we put in our newspaper an article written with AI, and we asked our readers to guess which article was AI-generated, offering the prize of a one-year subscription and a bottle of champagne.
The experiment helped us create better prompts for the AI to write an article, and helped us humans write better articles as well. Sometimes an article written by people was seen as an article written by AI: for instance, when an article is written with numbered points (first, second, third). So we changed something in how we write too.
Wong: Did anybody win?
Cerasa: Yes, we offered a lot of subscriptions and champagne. More than that, we realized we needed to speak about AI not just in our newspaper, but all over the world. We created this thing that is important not only because it is journalism with AI, but because it combines the oldest way to do information, the newspaper, and the newest, artificial intelligence.
Wong: How did your experience of using ChatGPT change when you moved from that original experiment to a daily imprint entirely written with AI?
Cerasa: The biggest thing that has changed is our prompt. At the beginning, my prompt was very long, because I had to explain a lot of things: You have to write an article with this style, with this number of words, with these ideas. Now, after a lot of use of ChatGPT, it knows better what I want to do.
When you start to use, in a transparent way, artificial intelligence, you have a personal assistant: a new person that works in the newspaper. It's like having another brain. It's a new way to do journalism.
Wong: What are the tasks and topics you've found that ChatGPT is good at and for which you'd want to use it? And conversely, where are the areas where it falls short?
Cerasa: In general, it is good at three things: research, summarizing long documents, and, in some cases, writing.
I'm sure in the future, and maybe in the present, many editors will try to think of ways AI can erase journalists. That could be possible, because if you are not a journalist with enough creativity, enough reporting, enough ideas, maybe you are worse than a machine. But in that case, the problem is not the machine.
The technology can also recall and synthesize far more information than a human can. The first article we put in the normal newspaper written with AI was about the discovery of a key ingredient for life on a distant planet. We asked the AI to write a piece on great authors of the past and how they imagined the day scientists would make such a discovery. A normal person would not be able to remember all these things.
Wong: And what can't the AI do?
Cerasa: AI cannot find the news; it cannot develop sources or interview the prime minister. AI also doesn't have interesting ideas about the world; that's where natural intelligence comes in. AI is not able to draw connections in the same way as intelligent human journalists. I don't think an AI would be able to come up with and fully produce a newspaper generated by AI.
Wong: You mentioned before that there may be some articles or tasks at a newspaper that AI can already write or perform better than humans, but if so, the problem is an insufficiently skilled person. Don't you think young journalists have to build up those skills over time? I started at The Atlantic as an assistant editor, not a writer, and my primary job was fact-checking. Doesn't AI threaten the talent pipeline, and thus the media ecosystem more broadly?
Cerasa: It's a bit terrifying, because we've come to understand how many creative things AI can do. For our children to use AI to write something in school, to do their homework, is really terrifying. But AI isn't going away; you have to educate people to use it in the correct way, and without hiding it.
In our newspaper, there is no fear about AI, because our newspaper is very particular and written in a special way. We know, in a snobby way, that our skills are unique, so we are not scared. But I'm sure that a lot of newspapers could be scared, because normal articles written about the things that happened the day before, with the agency news: that kind of article, and also that kind of journalism, might be the past.
There's a lesson I once learned from a CEO, a leader admired not just for his strategic acumen but also for his unerring eye for quality. He's renowned for respecting the creative people in his company. Yet he's also unflinching in offering pointed feedback. When asked what guided his input, he said, "I may not be a creative genius, but I've come to trust my taste."
That comment stuck with me. I've spent much of my career thinking about leadership. In conversations about what makes any leader successful, the focus tends to fall on vision, execution, and character traits such as integrity and resilience. But the CEO put his finger on a more ineffable quality. Taste is the instinct that tells us not just what can be done, but what should be done. A corporate leader's taste shows up in every decision they make: whom they hire, the brand identity they shape, the architecture of a new office building, the playlist at a company retreat. These choices may seem incidental, but collectively, they shape culture and reinforce what the organization aspires to be.
Taste is a subtle sensibility, more often a secret weapon than a person's defining characteristic. But we're entering a time when its importance has never been greater, and that's because of AI. Large language models and other generative-AI tools are stuffing the world with content, much of it, to use the term du jour, absolute slop. In a world where machines can generate infinite variations, the ability to discern which of those variations is most meaningful, most beautiful, or most resonant may prove to be the rarest and most valuable skill of all.
I like to think of taste as judgment with style. Great CEOs, leaders, and artists all know how to weigh competing priorities, when to act and when to wait, how to steer through uncertainty. But taste adds something extra: a certain sense of how to make that decision in a way that feels fitting. It's the fusion of form and function, the ability to elevate utility with elegance.
Think of Steve Jobs unveiling the first iPhone. The device itself was extraordinary, but the launch was more than a technical reveal; it was a performance. The simplicity of the black turtleneck, the deliberate pacing of the announcement, the clean typography on the slides: none of this was accidental. It was all taste. And taste made Apple more than a tech company; it made it a design icon. OpenAI's recently announced acquisition of Io, a startup created by Jony Ive, the longtime head of design at Apple, can be seen, among other things, as an opportunity to increase the AI giant's taste quotient.
Taste is neither algorithmic nor accidental. It's cultivated. AI can now write passable essays, design logos, compose music, and even offer strategic business advice. It does so by mimicking the styles it has seen, fed to it in massive (and frequently unknown or obscured) data sets. It has the power to remix elements and bring about plausible and even creative new combinations. But for all its capabilities, AI has no taste. It cannot originate style with intentionality. It cannot understand why one choice might have emotional resonance while another falls flat. It cannot feel the way in which one version of a speech will move an audience to tears, or laughter, because it lacks lived experience, cultural intuition, and the ineffable sense of what is just right.
This is not a technical shortcoming. It is a structural one. Taste is born of human discretion: of growing up in particular places, being exposed to particular cultural references, developing a point of view that is inseparable from personality. In other words, taste is the human fingerprint on decision making. It is deeply personal and profoundly social. That's precisely what makes taste so important right now. As AI takes over more of the mechanical and even intellectual labor of work (coding, writing, diagnosing, analyzing), we are entering a world in which AI-generated outputs, and the choices that come with them, are proliferating across, perhaps even flooding, a range of industries. Every product could have a dozen AI-generated versions for teams to consider. Every strategic plan, numerous different paths. Every pitch deck, several visual styles. Generative AI is an effective tool for inspiration, until that inspiration becomes overwhelming. When every option is instantly available, when every variation is possible, the person who knows which one to choose becomes even more valuable.
This ability matters for a number of reasons. For leaders or aspiring leaders of any type, taste is a competitive advantage, even an existential necessity: a skill they need to take seriously and think seriously about refining. But it's also in everyone's interest, even people who are not at the top of the decision tree, for leaders to be able to make the right choices in the AI era. Taste, after all, has an ethical dimension. We speak of things as being "in good taste" or "in poor taste." These are not just aesthetic judgments; they are moral ones. They signal an awareness of context, appropriateness, and respect. Without human scrutiny, AI can amplify biases and exacerbate the world's problems. Countless examples already exist: Consider a recent experimental AI shopping tool released by Google that, as reported by The Atlantic, can easily be manipulated to produce erotic images of celebrities and minors.
Good taste recognizes the difference between what is edgy and what is offensive, between what is novel and what is merely loud. It demands integrity.
Like any skill, taste can be developed. The first step is exposure. You have to see, hear, and feel a wide range of options to understand what excellence looks like. Read great literature. Listen to great speeches. Visit great buildings. Eat great food. Pay attention to the details: the pacing of a paragraph, the curve of a chair, the color grading of a film. Taste starts with noticing.
The second step is curation. You have to begin to discriminate. What do you admire? What do you return to? What feels overdesigned, and what feels just right? Make choices about your preferences, and, more important, understand why you prefer them. Ask yourself what values those preferences express. Minimalism? Opulence? Precision? Warmth?
The third step is reflection. Taste is not static. As you evolve, so will your sensibilities. Keep track of how your preferences change. Revisit things you once loved. Reconsider things you once dismissed. This is how taste matures: from reaction to reflection, from preference to philosophy.
Taste needs to be considered in both education and leadership development. It shouldn't be left to chance or confined to the arts. Business schools, for example, could do more to expose students to beautiful products, elegant strategies, and compelling narratives. Leadership programs could train aspiring executives in the discernment of tone, timing, and presentation. Case studies, after all, are about not just good decisions, but how those decisions were expressed, when they went into action, and why they resonated. Taste can be taught, if we're willing to make space for it.
The funeral director said "AI" as if it were a normal element of memorial services, like caskets or flowers. Of all places, I had not expected artificial intelligence to follow me into the small, windowless room of the mortuary. But here it was, ready to assist me in the task of making sense of death.
It was already Wednesday, and I'd just learned that I had to write an obituary for my mother by Thursday afternoon if I wanted it to run in Sunday's paper. AI could help me do this. The software would compose the notice for me.
As a professional writer, my first thought was that this would be unnecessary, at best. At worst, it would be an outrage. The philosopher Martin Heidegger held that someone's death is a thing that is truly their own. Now I should ask a computer to announce my mother's, by way of a statistical model?
"Did you say AI?" I asked the funeral director, thinking I must have been dissociating. But yes, she did. As we talked some more, my skepticism faded. The obituary is a specialized form. When a person of note dies, many newspapers will run a piece that was commissioned and produced years in advance: a profile of the deceased. But when a normal person dies (and this applies to most of us), the obituary is something else: not a standard piece of journalistic writing, but a formal notice, composed in brief, that also serves to celebrate the person's life. I had no experience in producing anything like the latter. The option to use AI was welcome news.
After all, there were lots of other things to do. The obituary was one of dozens of details I would have to address on short notice. A family in grief must choose a disposition method for their loved one, and perhaps arrange a viewing. They must plan for services, choose floral arrangements or other accessories, select proper clothing for the deceased, and process a large amount of paperwork. Amid these and other tasks, I found that I was grateful for the possibility of any help at all, even from a computer that cannot know a mother's love or mourn her passing.
The funeral director told me I would be given access to this AI tool in the online funeral-planning account that she had already created for me. I still had a few misgivings. Would I be sullying Mom's memory by doing this? I glanced over at an advertisement for another high-tech service, one that could make lab-grown diamonds from my mother's ashes or her hair. Having an AI write her obituary seemed pretty tame in comparison. "Show me how to do it," I said.
Actually getting a computer to do the work proved unexpectedly difficult. Over the next 24 hours, the funeral director and I exchanged the kind of emails you might swap with office tech support while trying to connect to the shared printer. I was able to log in to the funeral portal (the funeral portal!) and click into the obituary section, but no AI option appeared. The funeral director sent over a screenshot of her display. "It may look slightly different on your end," she wrote. I sent a screenshot back: "That interface is not visible to me." Web-browser compatibility was discussed, then dismissed. The back-and-forth made me realize that Mom's memorial would be no more sullied by AI than it was by the very fact of using this software, a kind of Workday app for death and burial.
In the end, the software failed us. My funeral director couldn't figure out how to give me access to the AI obituary writer, so I had to write one myself, using my brain and fingertips. I did what AI is best at: copying a formula. I opened up my dad's obituary, which Mom had written a couple of years earlier, and mirrored its format and structure. Dates and locations of birth and death, surviving family, professional life, interests. I was the computer now, entering data into a pre-provided template.
[Read: A secret history of the obituary page]
When I finally did get the chance to try the AI obituary writer a few weeks later (after reaching out to Passare, the company behind it), I found its output more creative than mine, and somehow more personal. Like everything else, the funeral-services industry is now operated by cloud-based software-as-a-service companies. Passare is among them, and offers back-office software for funeral-home management along with family-facing funeral-planning tools.
Josh McQueen, the company's vice president of marketing and product, explained why my earlier attempt to use the obituary-writing tool had failed: The funeral home must have had that feature set for staff-only access, which some businesses prefer. Then he gave me access to a mock funeral for the fictional departed John Smith so I could finally give it a go.
I couldn't change John Smith's name, but I pretended I was writing the obituary for my mother instead. Using simple web forms, I put in her education and employment information, some life events that corresponded to her "passions" and "achievements," and a few facts about relevant family members who had survived her or preceded her in death. These had to be entered one by one, choosing the type of relation from a drop-down and then checking a box to indicate whether the person in question was deceased. I felt like I was cataloging livestock.
From there, Passare's software, which is built on top of ChatGPT technology, generated an obituary. And you know what? It was pretty good. Most of all, it was done, and with minimal effort from me. Here's an excerpt, with John Smith's name and pronouns swapped out for my mother's, and a couple of other very small alterations to smooth out the language:
Sheila earned her bachelor's degree and dedicated her career to managing her late husband David's psychology private practice for decades. She was not only devoted to his work but also a dedicated caregiver for Dave in his later years. Throughout her life, Sheila nurtured his passions, which included playing music (especially the piano) and a deep appreciation for Native American art. She found joy in teaching skiing to children and sharing the vibrant personalities of her many pet birds.
The AI obituary can also be tuned by length and tone: formal, casual, poetic, celebratory. (The poetic version added flourishes such as "she found joy in the gentle keys of her piano, filling her home with music that echoed her spirit.") Because an obituary is already a schematic form of writing, the AI's results were not just satisfactory but excellent, even. And, of course, once the draft was done, I could adjust it as I wished.
"When we first started testing this, ChatGPT would just make up stories," McQueen told me. It might assert that someone named Billy was often called Skippy, for example, and then concoct an anecdote to explain the fake nickname. This tendency of large language models, sometimes called hallucination, is caused by the technology's complex statistical underpinnings. But Passare found this problem relatively easy to tame by adjusting the prompts it fed to ChatGPT behind the scenes. McQueen said he hasn't heard complaints about the service from any families who have used it.
Obituaries do seem well suited for an AI's help. They're short and easy to review for accuracy. They're supposed to convey real human emotion and character, but in a format that is buttoned-up and professional, for a public audience rather than a private one. Like cover letters or wedding toasts, they represent an important and uncommon form of writing that in many cases must be done by someone who isn't used to writing, yet who will care enough to polish up the finished product. An AI tool can make that effort easier and better.
And for me, at least, the tool's inhumanity was also, in its way, a boon. My experience with the elder-care and death industries (assisted living, hospice, funeral homes) had already done a fair amount to alienate me from the token empathy of human beings. As Mom declined and I navigated her care and then her death, industry professionals were always offering me emotional support. They shared kind words in quiet rooms that sometimes had flowers on a table and refreshments. They truly wanted to help, but they were strangers, and I didn't need their intimacy. I was only seeking guidance on logistics: How does all this work? What am I supposed to do? What choices must I make?
A person should not pretend to be a friend, and a computer should not pretend to be a person. In the narrow context of my mom's obituary, the AI provided me with middle ground. It neither feigned connection nor replaced my human agency. It only helped, and it did so at a time when a little help was all I really wanted.
The world of crypto can feel impenetrable. The basic technology is complicated enough, but the subculture, with its own particular argot and decorum, is what's truly forbidding. Even if you're not quite ready to figure out what DePIN or zk-SNARKs are, you can get a solid glimpse into the industry right now just by looking at the lineup of the 2025 Bitcoin Conference, held late last month in Las Vegas. Speakers included goofball meme-coin boosters, good-hearted cypherpunks, crypto podcasters with names such as "Gwart," and an army of Wall Street execs who seem to have waited until bitcoin hit $100,000 to give the whole crypto thing a shot.
There were also a whole lot of MAGA acolytes. Vice President J. D. Vance, the eldest Trump sons, and the White House crypto czar David Sacks all gave speeches that coalesced around a unifying theme: Trump and crypto are meant for each other. "What's going on here in this very room, at this very conference, that's the financial side of everything we've been fighting for on the free-speech side," Don Jr. said during a conversation with the CEO of Rumble, a social-media platform favored by right-wing users. "They're inextricably linked."
In other words, the message was that Trump cares deeply about the kinds of civil-libertarian ideas that the bitcoin world has long touted. It's a convenient narrative, a lofty way of explaining this once very bitcoin-skeptical president's sudden embrace of crypto. At least, it's one that transcends sheer self-enrichment: In the past year, members of the Trump family have launched two meme coins and announced a majority stake in a new crypto firm, World Liberty Financial. As I've previously written, crypto is quickly becoming the Trump family business: Last month, the president hosted the biggest investors in his $TRUMP coin for a private dinner at his golf course outside Washington, D.C.
But the linkages between Trump and crypto run deeper than just a couple of business investments. His White House has also ushered in a starkly pro-crypto agenda, rolling back regulations and dropping lawsuits that sought to punish alleged crypto wrongdoing. The same week that Don Jr. spoke at the Bitcoin Conference, the Department of Labor eased Biden-era guidance that made it difficult for Americans to invest their 401(k) plans in crypto, on the grounds that digital currencies can be volatile and prone to hacks. In cutting this language, regulators are taking away a guardrail, encouraging more investment in crypto. This, in turn, could boost the price of bitcoin and other coins, which is a boon to Trump's own enterprises. It always comes back to the president himself: Trump's crypto ambitions are as much about public policy as they are about his own meme coins.
Crypto has become the glue that binds together so much of what the president and his administration are doing. Consider Trump Media & Technology Group, best known as the parent company of his social-media app, Truth Social. Trump Media didn't start as a crypto business, but now it's pivoting to crypto. Late last month, Trump Media announced that it would raise money to purchase $2.5 billion in bitcoin, effectively creating a corporate bitcoin reserve. Why? "We view bitcoin as an apex instrument of financial freedom," Devin Nunes, the CEO of Trump Media and a former Republican congressman, said in a statement. Putting the pseudo-utopian language aside, such a bitcoin reserve mostly just serves to tie the price of Trump Media's stock, $DJT, to the price of bitcoin writ large. A multibillion-dollar investment is unreservedly good for crypto, but it's also good for the Trump family, because much of the president's own net worth is now tied up in crypto assets. (Neither the White House nor the Trump Media & Technology Group responded to my requests for comment.)
Perhaps the idea of a bitcoin reserve sounds familiar. It explicitly mirrors the White House's announcement of a "Strategic Bitcoin Reserve" in March, as part of a broader effort to make the U.S. a global leader in crypto. Both serve the same function: Such large-scale institutional investment in crypto, whether from the government or a company, further legitimizes these digital currencies, ensuring their long-term viability as an asset class. Trump's campaign to promote crypto and juice the price of these coins is in essence two-pronged: Once the White House sets its agenda, the Trump family's private-sector business can back it.
Trump was all about pro-crypto policy even before he began launching his raft of crypto businesses. His campaign promise to fire Biden's top crypto cop, Securities and Exchange Commission Chair Gary Gensler, helped pull in donations from industry heavyweights. Especially after the downfall of Sam Bankman-Fried, Gensler was focused on prosecuting individual crypto companies, a policy now derisively referred to as "Operation Choke Point 2.0" (a nod to the Obama-era initiative that put pressure on banks to stop working with payday lenders, pawn shops, and certain other businesses). During his keynote speech at the Bitcoin Conference, Vance put it bluntly: "Operation Choke Point 2.0 is dead, and it's not coming back under the Trump administration."
Indeed, the administration has dropped more than a dozen lawsuits and investigations against crypto firms. And as Trump's second term has gone on, the distinctions between what's pro-Trump and what's pro-crypto have blurred together, approaching something like a singularity. In MAGA cosmology, crypto, Trump, and America now exist in perfect alignment: What's good for one is good for the others. While Trump talks about bringing manufacturing jobs back to the U.S., the Trump sons are running a crypto-mining company called American Bitcoin, and Trump Media is throwing its weight behind "Made in America" crypto investment funds. After firing many of the top regulators responsible for keeping crypto in check, Trump has cleared the way for major cash injections throughout the crypto industry, including, of course, in his own businesses. The pretense for the regulatory rollbacks and Trump's personal crypto investments is the same: It benefits America.
The irony is that cryptocurrencies were supposed to be a form of protection against exactly this sort of connection to the state. Bitcoin was invented as a way to privately transfer money online, with the ambitious goal of creating a new financial order outside the purview of the international monetary regime, uncontrolled by any government. (After all, the technical basis for crypto is known as "decentralization.") In loosening crypto restrictions in ways that benefit the industry (and Trump himself), Trump is manifesting the old crypto dream of a new financial order. But far from being faceless and decentralized, the very concept of crypto is starting to reflect the image of just one man.
The sun rises every morning. Spring turns to summer. Water is wet. Donald Trump and Elon Musk's relationship has ended with a post about Jeffrey Epstein.
This was inevitable. When Elon Musk attached himself to Trump during Trump's presidential transition last fall, there was great speculation that these two massive egos would eventually clash and that their strategic partnership would flame out spectacularly. Many onlookers assumed that Trump would be the one to tire of Musk and that the centibillionaire would fly too close to the sun, becoming too visible in the administration or simply too annoying. During his short time in government, Musk did manage to anger some of Trump's staff and advisers, tank his public reputation with many American voters, and jeopardize the financial health of his electric-vehicle company, Tesla. Still, through all of that, Trump remained remarkably on message and supportive.
Instead, it was Musk who fired the first shots, specifically criticisms of the Republicans' budget-reconciliation package (a.k.a. the One Big Beautiful Bill Act). On Tuesday, Musk called the bill a "disgusting abomination," threatened to politically retaliate against its supporters, and argued that it would increase the debt. This led to Trump calling out Musk in an Oval Office meeting today with German Chancellor Friedrich Merz and suggesting that the DOGE figurehead had "Trump derangement syndrome." The episode that followed has been playing out in reality-TV fashion, with X and Truth Social acting as confessional booths. On X, Musk argued that "without me, Trump would have lost the election" and accused Trump of "such ingratitude." On Truth Social, Trump posted that "Elon was 'wearing thin'" and that, when the president asked Musk to leave, "he just went CRAZY!"
It keeps going. At one point in the afternoon, as if sensing the feud had reached a critical mass of attention, Musk leveled a serious allegation against Trump, posting: "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public. Have a nice day, DJT!"
Musk had, it seems, kicked off an attentional spectacle without precedent. You have the world's richest man, who is terminally online and whose brain has been addled by social media and, reportedly, other substances. He is one of the most prolific and erratic high-profile posters, so much so that he purchased his favorite social network to mold it in his image. He is squaring off against Trump, arguably the most consequential off-the-cuff poster of all time and, one must note, the current president of the United States. If it weren't for the other, both men would be peerless in their ability to troll, outrage, and command news cycles via their fragile, mercurial egos.
The point being: If this public fight between Musk and Trump continues, we will witness a Super Bowl of schadenfreude unfold. It's guaranteed to entertain, and to leave those of us who spectate feeling gross. It is, in other words, the logical endpoint of internet beefs.
This spectacle is tempting to view as a cage match: Two men enter, one man leaves. (Musk, at least, is familiar.) But that mentality supposes a winner and a loser, and it's worth asking what winning even looks like here. Surely, nobody will come out of this unscathed. Musk's "Epstein files" comment, beyond being an allegation about Trump's relationship with the convicted sex offender and child trafficker, is also a suggestion that Musk might have other dirt on the Trump administration. And the likely loss of Musk's donor money deprives Trump of political leverage. Similarly, Trump has suggested he might strip Musk's companies of their federal funding and subsidies. Tesla's stock has fallen sharply today since Musk began rage-posting against Trump, which suggests there will be real consequences. (Meanwhile, people, including Steve Bannon, are already musing that Musk could get himself deported.)
Consider, though, that in the realm of social media, Musk and Trump both know exactly what they are doing. The two are innately attuned to attention and how to attract and wield it. It stands to reason that their interpretation of their past decade online is that public feuding has, essentially, no downside for them. Instead, their perma-arguing, norm-stomping, and general shamelessness have allowed them to become the main characters of a media and political ecosystem that demands constant fodder. Harnessing attention in this way has proved remarkably lucrative. Many credit Trump's initial victory in 2016 to his ability to program the news cycle 140 characters at a time. Meanwhile, some analysts have suggested that Musk's companies are, in their own right, memestocks whose fortunes have risen on the centibillionaire's ability to stay in the spotlight incessantly.
Trump's and Musk's constant provocations and attention seeking have downstream effects, too. Their feuding creates content for others to draft off. The press can cover it, influencers can react to it, politicians can fundraise off it, and all manner of online hustlers can find a way to get in. You can already see the attentional cottage industry hard at work in the Musk-Trump fight as lesser attention merchants try to involve themselves. The podcaster Lex Fridman offered to broker peace on his show, while the rapper Kanye West, now known as Ye, stepped in to comment on the chaos. The onetime presidential candidate and third-party champion Andrew Yang seized on Musk's comments to drum up enthusiasm for his pet project. Even the replies became valuable real estate: the long strings of responses to Musk's posts about Trump are littered with advertisements automatically inserted by X. (I saw one for a Trump T-shirt company.) In this way, a Trump-Musk beef is an attentional Big Bang.
In 2020, the blogger Venkatesh Rao wrote a seminal post titled "The Internet of Beefs," arguing that the structure of social media and our culture-warring has brought about "a stable, endemic, background societal condition of continuous conflict." In it, he describes the Internet of Beefs as having "a feudal structure," with charismatic leaders (knights) and anonymous legions of normies (mooks) who have devoted themselves to fight on behalf of these leaders. Rao identifies Trump as an ur-example of a knight, one who is able to profit off all of the discord he's helped sow. "For the mook, the conflict is a means to an end, however incoherent," Rao writes. "For the knight, the conflict is the end. Growing it, and keeping it going, is something like an entrepreneurial cultural capital business model."
I reread Rao's post as the internet worked itself into a lather over today's fight. Many of the dynamics Rao explained were on display: sycophants lining up to defend Musk or Trump in the hope of getting noticed, various posters (myself included) excitedly or dutifully chronicling the fallout. There is seemingly opportunity everywhere, created by this attentional spectacle. The content is at once depressing and tremendous. At a glance, it looks like everyone's winning.
Of course, nobody is. Rao's most salient point in his essay is that this state of forever beef is a consequence of societal rot. It's a stalling tactic of sorts, one that prevents us from deciding who we are, both individually and collectively. If that sounds overwrought, it's worth remembering the genesis of Musk and Trump's feud: a funding bill in Congress that would result in roughly $1 trillion in cuts to Medicaid and food stamps while offering a similar value in tax cuts to high earners. Millions of people could lose their current coverage through the Affordable Care Act if the bill passes. These details are vaporized by the size and scale of this particular beef.
The Trump-Musk feud is not so much a distraction as it is evidence of a societal tendency toward abstraction, even obfuscation. A cage match is easier to watch than a discussion about who deserves benefits and resources. It is certainly more cathartic than an ideological stalemate about the world we want to build. Maybe Trump or Musk will find a way to win or lose their spat. The rest of us, though, will probably not be so lucky, destined instead to spectate fight after fight.
If Google has its way, there will be no search bars, no search terms, no searching (at least not by humans). The very tool that has defined the company, and perhaps the entire internet, for nearly three decades could soon be overtaken by a chatbot. Last month, at its annual software conference, Google launched "AI Mode," the most drastic overhaul to its search engine in the company's history.
The feature is different from the AI summaries that already show up in Google's search results, which appear above the usual list of links to outside websites. Instead, AI Mode functionally replaces Google Search with something akin to ChatGPT. You ask a question and the AI spits out an answer. Instead of sifting through a list of blue links, you can just ask a follow-up. Google has begun rolling out AI Mode to users in the United States as a tab below the search bar (before "Images," "Shopping," and the like). The company said it will soon introduce a number of more advanced, experimental capabilities to AI Mode, at which point the feature could be able to write a research report in minutes, "see" through your smartphone's camera to assist with physical tasks such as a DIY crafts project, help book restaurant reservations, and make payments. Whether AI Mode can become as advanced and as seamless as Google promises remains far from certain, but the firm appears to be aiming for something like an everything app: a single tool that will be able to do just about everything a person could possibly want to do online.
Seemingly every major tech company is after the same goal. OpenAI markets ChatGPT, for instance, as able to write code and summarize documents, help shop, produce graphics, and, naturally, search the web. Elon Musk is notoriously obsessed with the idea of turning X into an everything app. Meta says you can use its AI "for everything you need"; Amazon calls its new, generative-AI-powered Alexa+ "an assistant available to help any time you want"; Microsoft bills its AI Copilot as a companion "for all you do"; and Apple has marketed Apple Intelligence and a revamped Siri as tools that will revolutionize how people use their iPhones (which encompass, for many users, everything). Even Airbnb, once focused simply on vacation rentals, is redesigning itself as a place where "you can sell and do almost anything," as its CEO, Brian Chesky, recently said.
In a sense, everything apps are the logical conclusion of Silicon Valley's race to build artificial "general" intelligence, or AGI. A bot smart enough to do anything would obviously be used to power a product that can, in effect, do anything. But such apps would also represent the culmination of the tech industry's aim to entrench its products in people's daily lives. Already, Google has features for shopping, navigation, data storage, work software, payment, and travel, plus an array of smartphones, tablets, smart-home gadgets, and more. Apple has a similarly all-encompassing suite of offerings, and each of Meta's three major apps (Facebook, Instagram, and WhatsApp) has billions of users. Perhaps the only thing more powerful than these sprawling tech ecosystems is boiling them all down to a single product.
That these tech companies can even realistically have such colossal ambitions to build everything apps is a result of their existing dominance. The industry has spent years collecting information about our relationships, work, hobbies, and interests, all of which is becoming grist for powerful AI tools. A key feature of these everything apps is that they promise to be individually tailored, drawing on extensive personal data to provide, in theory, a more seamless experience. Your past search history, and eventually your emails, can inform AI Mode's responses: When I typed "line up" into AI Mode, I got the lineup for the day's New York Mets game (the Mets are my favorite baseball team). When I typed the same phrase into traditional Google Search, I got a definition.
In other words, the rise of AI-powered everything apps is a version of the bargain that tech companies have proposed in the past with social media and other tools: our services for your data. Meta's AI assistant can draw on information from users' Facebook and Instagram accounts. Apple describes its AI as a "personal intelligence" able to glean from texts, emails, and notes on your device. And ChatGPT has a new "memory" feature that allows the chatbot to reference all previous conversations. If the technology goes as planned, it leads to a future in which Google, or any other Big Tech company, knows you are moving from Texas to Chicago and, of its own accord, offers to order the winter jacket you don't own, already selected from your favorite brand, in your favorite color, to be delivered to your new apartment. Or it could, after reading emails musing about an Italian vacation, suggest an in-budget itinerary for Venice that best fits your preferences.
There are, of course, plenty of reasons to think that AI models will not be capable and reliable enough to power a true everything app. The Mets lineup that Google automatically generated for me wasn't entirely accurate. Chatbots still invent information and mess up basic math; concerns over AI's environmental harms and alleged infringement of intellectual-property rights could substantially slow the technology's development. Only a year ago, Google released AI Overviews, a search feature that told users to eat rocks and use glue to stick cheese to pizza. On the same day that Google released AI Mode, it also introduced an experimental AI shopping tool that can easily be used to make erotic images of teenagers, as I reported with my colleague Lila Shroff. (When we shared our reporting with the company, Google emphasized the protections it has in place and told us it would "continue to improve the experience.") Maybe AI Mode will order something two sizes too large and ship it to the wrong address, or maybe it'll serve you recommendations for Venice Beach.
[Read: Google's new AI puts breasts on minors—and J.D. Vance]
Despite these embarrassments, Google and its major AI competitors show no signs of slowing down. The promised convenience of everything apps is, after all, alluring: The more products of any one company you use, and the better integrated those products are, the more personalized and universal its everything app can be. Google even has a second contender in the race: its Gemini model, which, at the same conference, the company said will become a "universal AI assistant." Whether through Search or Gemini, the company seems eager to integrate as many of its products and as much of its user data as possible.
On the surface, AI and the everything app seem set to dramatically change how people interact with technology, consolidating and streamlining search, social media, office software, and more into a chatbot. But a bunch of everything apps vying for customers feels less like a race for innovation and more like empires warring over territory. Tech companies are running the same data-hungry playbook with their everything apps as they did in the markets that made them so dominant in the first place. Even OpenAI, which has evolved from a little-known nonprofit into a Silicon Valley behemoth, appears so eager to accumulate user data that it reportedly plans to launch a social-media network. The technology of the future looks awfully reliant on that of the past.
It's late morning on a Monday in March and I am, for reasons I will explain momentarily, in a private bowling alley deep in the bowels of a $65 million mansion in Utah. Jesse Armstrong, the showrunner of HBO's hit series Succession, approaches me, monitor headphones around his neck and a wide grin on his face. "I take it you've seen the news," he says, flashing his phone and what appears to be his X feed in my direction. Of course I had. Everyone had: An hour earlier, my boss, Jeffrey Goldberg, had published a story revealing that U.S. national-security leaders had accidentally added him to a Signal group chat where they discussed their plans to conduct then-upcoming military strikes in Yemen. "Incredibly fucking depressing," Armstrong said. "No notes."
The moment felt a little bit like a glitch in the simulation, though it also pinpointed exactly the kind of challenge facing Armstrong. I had traveled to Park City to meet him on the set of Mountainhead, a film he wrote and directed for HBO (and which premieres this weekend). Mountainhead is an ambitious, extremely timely project about a group of tech billionaires gathering for a snowy poker weekend just as one of them releases AI-powered tools that cause a global crisis. Signalgate was the latest, most outrageous bit of news from the Trump administration that seemed to shift the boundaries of plausibility. How can Armstrong possibly satirize an era in which reality feels like it's already cribbing from his scripts?
The film was billed to me as an attempt to capture the real power and bumbling hubris of a bunch of arrogant and wealthy men (played by Steve Carell, Cory Michael Smith, Jason Schwartzman, and Ramy Youssef) who try to rewire the world and find themselves in way over their heads. This was an easy premise for me to buy into, not just because of Signalgate, but also because I'd spent the better part of the winter reporting on Elon Musk's takeover of the federal government, during which time DOGE had reportedly made a 19-year-old computer programmer who goes by the online nickname "Big Balls" a senior adviser to the State Department. In order to keep the film feeling fresh in this breakneck news cycle, Armstrong pushed to complete the project on an extraordinarily short timeline: He pitched the film in December and wrote parts of the script in the back of a car while driving around with location scouts. When we met, Youssef told me that the "way it was shot naturally simulated Adderall."
[Read: The 400-year-old tragedy that captures our chaos]
By the time I met Armstrong, affable and easygoing both on and off set, he was unfazed by fact seeming stranger than his fiction. "There's almost something reassuring about it," he said. "It's all moving so fast and is so hard to believe that it allows me to just focus on the story I want to tell. I'm not too worried about the news beating me to the punch." Much of his work, including Succession and some writing on political satires such as Veep and The Thick of It, draws loose and sometimes close inspiration from current events. The trick, Armstrong told me, is finding a "comfortable distance" from what's happening in reality.
The goal is to let audiences bring their context to his art but still have a good time and not feel as if they're doomscrolling. For instance, one of the main characters in Mountainhead is an erratic social-media mogul named Venis (played by Smith), who's also the richest man in the world. But the comparisons to our real tech moguls aren't one-to-one. "I don't think you'd think he's a Musk cipher, nor is he a Zuck, but he takes something from him and probably from Sam Altman and maybe from Sam Bankman-Fried," Armstrong said.
Mountainhead is Armstrong's first project since Succession. That show's acclaim (19 Emmy and nine Golden Globe wins) cemented Armstrong and his team of writers as the preeminent satirists of contemporary power and wealth. His decision to focus on the tech world can feel like a cultural statement of its own. Succession managed to capture the depravity, hilarity, and emptiness of modern politics, media, and moguldom, running parallel to the perpetual real-life crises of its 2018-to-2023 run. But while Mountainhead has plenty of Succession's DNA, sharing many of the same producers and writers, and some of the crew, it's much more of a targeted strike than the 39-episode HBO show. Rather than a narrative epic of unserious failsons, the film offers a relatively straightforward portrait of buffoonish elites who believe that their runaway entrepreneurial success entitles them to rule over the lower-IQ'd masses. In some ways, Mountainhead picks up where a different HBO series, Silicon Valley, left off, exploring the limits of, and poking fun at, the myth of tech genius, albeit with a far darker tenor.
The tech guys weren't supposed to be the next group up in the blender, Armstrong told me. He was trying to work on a different project when he became interested in the fall of Bankman-Fried and his crypto empire. Armstrong is a voracious reader and something of a media nerd (on set, he joked that he's probably accidentally paying for dozens of niche Substacks) and quickly went down the tech rabbit hole. Reading news articles turned into skimming through biographies. Eventually, he ended up on YouTube, absorbed by the marathon interviews that tech titans did with Joe Rogan and Lex Fridman, and the gab sessions on the All-In podcast, which features prominent investors and Donald Trump's AI and crypto czar, David Sacks. "In the end, I just couldn't stop thinking about these people," he told me. "I was just swimming in the culture and language of these people for long enough that I got a good voice in my head. I got some of the vocabulary, but also the confidence-slash-arrogance."
As with Succession, vocabulary and tone are crucial to Mountainhead's pacing, humor, and authenticity. Armstrong and his producers have peppered the script with what he described as "podcast earworms." At one point, Carell's character, Randall, the elder-statesman venture capitalist, describes Youssef's character as a "decel with crazy p(doom) and zero risk tolerance." (Decel stands for a technological decelerationist; p(doom) is the probability of an AI apocalypse.)
"There was a lot of deciphering, a lot of looking up of phrases for all of us: taking notes and watching podcasts," Carell told me about his rapid preparation process. When we spoke, all of the actors stressed that they didn't model their characters off individual people. But some of the portraits are nonetheless damning. Youssef's character, Jeff, the youngest billionaire of the bunch, has built a powerful AI tool capable of stemming the tide of disinformation unleashed by Venis's social network. He has misgivings about the fallout from his friend's platform but also sees his company's stock rising because of the chaos.
[From the April 2025 issue: Growing up Murdoch]
"One of the first things I said to Jesse was that I saw my younger, less emotionally developed self in the level of annoyingness, arrogance, and crudeness, mixed with a soft emotional instability, in Jeff," Youssef told me. "He reminded me of me in high school. I thought, These are the kind of guys who started coding in high school, and it's probably where their emotions stalled out in favor of that rampant ambition." This halted adolescence was a running theme. On a Tuesday evening around 9 p.m., I stood on set watching five consecutive takes of a scene (later cut from the film) in which Youssef jumps onto a chair while calling a honcho at the IMF and starts vigorously humping Schwartzman's head. The mansion itself is like a character in the film. The production designer Stephen Carter told me it was chosen in part because "it feels like something that was designed to impress your friends": an ostentatious glass-and-metal structure with a private ski lift, rock wall, bowling alley, and full-size basketball court.
Carter, who also did production design on Succession, said that it's important to Armstrong that his productions are set in environments that accurately capture and mimic the scale of wealth and power of their characters. "Taste is fungible," Carter told me, "but the amount of square footage is not." They knew they'd settled on the right property when Marcel Zyskind, the director of photography, visited. "He almost felt physically ill when he walked into the house," Carter said. "Sort of like it was a violation of nature or something." The costuming choices reflected the banality of the tech elite, with a few flourishes, like the bright Polaris snowmobile jumpsuits and long underwear worn in one early scene. "Jesse has them casually decide the fate of the world while wearing their long johns," the costume designer, Susan Lyall, recounted.
True sickos like myself, who've followed the source material and news reports closely, can play the parlor game of trying to decode inspirations ripped from the headlines. Carell's character has the distinct nihilistic vibes of a Peter Thiel but also utters pseudophilosophical phrases like "in terms of Aurelian stoicism and legal simplicity" that read like a Marc Andreessen tweetstorm of old. Schwartzman's character, Souper, the poorest of the group, whose nickname is short for soup kitchen, gives off an insecure, sycophantic vibe that reminds me of an acolyte from Musk's text messages.
[Read: Elon Musk's texts shatter the myth of the tech genius]
But Armstrong insists he's after something more than a roast. What made tech billionaires so appealing to him as subject matter is their obsession with scale. To him, their extraordinary ambitions and egos, and the speed with which they move through the world, make their potential to flame out as epic as their potential to rewire our world. And his characters, while eminently unlikable, all have flashes of tragic humanity. Venis seems unable to connect with his son; Jeff is racked with a guilty conscience; Randall is terrified of his looming mortality; and Souper just wants to be loved. "I think where clever and stupid meet is quite an interesting place for comedy," Armstrong told me when I asked him about capturing the tone of the tech world. "And I think you can hear those two things clashing quite a lot in the discussions of really smart people. You know, the first-principles thinking, which they're so keen on, is great. But once you throw away all the guardrails, you can crash, right?"
By his own admission, Armstrong has respect for the intellects of some of the founders he's satirizing. Perhaps because he's written from their perspective, he's empathetic enough to see an impulse to help buried deep among the egos and the paternalism. "It's like how the politician always thinks they've got the answer," he said. But he contends that Silicon Valley's scions could have more influence than those lawmakers. They can move faster than Washington's sclerotic politicians. There's less oversight, too. The innovators don't ask for permission. Congress needs to pass laws; the tech overlords just need to push code to screw things up. "In this world where unimaginable waves of money are involved, the forces that are brought to bear on someone trying to do the right thing are pretty much impossible for a human to resist," he said. "You'd need a sort of world-historical figure to withstand those blandishments. And I don't think the people who are at the top are world-historical figures, at least in terms of their moral capabilities."
For Armstrong, capturing the humanity of these men paints a more unsettling portrait than pure billionaire-trolling might. For example, these men feel superhuman but are also struggling with their own mortality and trying to build technologies that will let them live forever in the cloud. They are hyperconfident and also deeply insecure about their precise spots on the Forbes list. They spout pop philosophy but are selling nihilism. "We're gonna show users as much shit as possible until everyone realizes nothing's that fucking serious," Venis says at one point in the film. "Nothing means anything. And everything's funny and cool." In Mountainhead, as the global, tech-fueled chaos begins, Randall leads the billionaires in an "intellectual salon" where the group imagines the ways they could rescue the world from the disaster they helped cause. They bandy about ideas about "couping out" the United States or trying to go "post-human" by ushering in artificial general intelligence. At one point, not long after standing over a literal map of the world from the board game Risk, one billionaire asks, "Are we the Bolsheviks of a new techno world order that starts tonight?" Another quips: "I would seriously rather fix sub-Saharan Africa than launch a Sweetgreen challenger in the current market."
The paternalistic overconfidence of Armstrong's tech bros delivers the bulk of both the dark humor and the sobering cultural relevance in Mountainhead. Armstrong doesn't hold the viewers' hand, but asks them to lean into the performance. If they do, they'll see a portrayal that might very well give necessary context to the current moment: a group of unelected, self-proclaimed kings who view the world as a thought experiment or a seven-dimensional chess match. The problem is that the rest of us are the pawns.
"The scary thing is that usually, normally, democracy provides some guardrails for who has the power," Armstrong said near the end of our conversation. "But things are moving too fast for that to work in this case, right?" Mountainhead will certainly scratch the itch for Succession fans. But unlike his last hit, which revolved around blundering siblings desperate to acquire the power that their father wields, Armstrong's latest is about people who already have power and feel ordained to wield it. It's a dark, at times absurdist comedy, but with the context of our reality, it sometimes feels closer to documentary horror.
OpenAI is a strange company for strange times. Valued at $300 billion, roughly the same as seven Fords or one and a half PepsiCos, the AI start-up has an era-defining product in ChatGPT and is racing to be the first to build superintelligent machines. The company is also, to the apparent frustration of its CEO, Sam Altman, beholden to its nonprofit status.
When OpenAI was founded in 2015, it was meant to be a research lab that would work toward the goal of AI that is "safe" and "benefits all of humanity." There wasn't supposed to be any pressure, or desire, really, to make money. Later, in 2019, OpenAI created a for-profit subsidiary to better attract investors: the types of people who might otherwise turn to the less scrupulous corporations that dot Silicon Valley. But even then, that part of the organization was under the nonprofit side's control. At the time, it had released no consumer products and capped how much money its investors could make.
Then came ChatGPT. OpenAI's leadership had intended for the bot to provide insight into how people would use AI, without any particular hope for widespread adoption. But ChatGPT became a hit, kicking "off a growth curve like nothing we have ever seen," as Altman wrote in an essay this past January. The product was so alluring that the entire tech industry seemed to pivot overnight into an AI arms race. Now, two and a half years since the chatbot's release, Altman says some half a billion people use the program each week, and he is chasing that success with new features and products for shopping, coding, health care, finance, and seemingly any other industry imaginable. OpenAI is behaving like a typical business, because its rivals are typical businesses, and massive ones at that: Google and Meta, among others.
[Read: OpenAI's ambitions just became crystal clear]
Now 2015 feels like a very long time ago, and those charitable origins have turned into a ball and chain for OpenAI. Last December, after facing concerns from potential investors that pouring money into the company wouldn't pay off because of its nonprofit mission and complicated governance structure, the organization announced plans to change that: OpenAI was seeking to transition to a for-profit. The company argued that this was necessary to meet the tremendous costs of building advanced AI models. A nonprofit arm would still exist, though it would separately pursue "charitable initiatives," and it would not have any say over the actions of the for-profit, which would convert into a public-benefit corporation, or PBC. Corporate backers appeared satisfied: In March, the Japanese firm SoftBank conditioned billions of dollars in investments on OpenAI changing its structure.
Resistance came as swiftly as the new funding. Elon Musk, a co-founder of OpenAI who has since created his own rival firm, xAI, and seems to take every opportunity to undermine Altman, wrote on X that OpenAI "was funded as an open source, nonprofit, but has become a closed source, profit-maximizer." He had already sued the company for abandoning its founding mission in favor of financial gain, and claimed that the December proposal was further proof. Many unlikely allies emerged soon after: Attorneys general in multiple states, nonprofit groups, former OpenAI employees, outside AI experts, economists, lawyers, and three Nobel laureates have all raised concerns about the pivot, even petitioning to submit briefs to Musk's lawsuit.
OpenAI backtracked, announcing a new plan earlier this month that would have the nonprofit remain in charge. Steve Sharpe, a spokesperson for OpenAI, told me over email that the new proposed structure "puts us on the best path to" build a technology "that could become one of the most powerful and beneficial tools in human history." (The Atlantic entered into a corporate partnership with OpenAI in 2024.)
Yet OpenAI's pursuit of industry-wide dominance shows no real signs of having hit a roadblock. The company has a close relationship with the Trump administration and is leading perhaps the biggest AI-infrastructure build-out in history. Just this month, OpenAI announced a partnership with the United Arab Emirates and an expansion into personal gadgets: a forthcoming "family of devices" developed with Jony Ive, the former chief design officer at Apple. For-profit or not, the future of AI still appears to be very much in Altman's hands.
Why all the worry about corporate structure anyway? Governance, boardroom processes, legal arcanaâthese things are not what sci-fi dreams are made of. Yet those concerned with the societal dangers that generative AI, and thus OpenAI, pose feel these matters are of profound importance. The still more powerful artificial âgeneralâ intelligence, or AGI, that OpenAI and its competitors are chasing could theoretically cause mass unemployment, worsen the spread of misinformation, and violate all sorts of privacy laws. In the highest-flung doomsday scenarios, the technology brings about civilizational collapse. Altman has expressed these concerns himselfâand so OpenAIâs 2019 structure, which gave the nonprofit final say over the for-profitâs actions, was meant to guide the company toward building the technology responsibly instead of rushing to release new AI products, sell subscriptions, and stay ahead of competitors.
âOpenAIâs nonprofit mission, together with the legal structures committing it to that mission, were a big part of my decision to join and remain at the company,â Jacob Hilton, a former OpenAI employee who contributed to ChatGPT, among other projects, told me. In April, Hilton and a number of his former colleagues, represented by the Harvard law professor Lawrence Lessig, wrote a letter to the court hearing Muskâs lawsuit, arguing that a large part of OpenAIâs success depended on its commitment to safety and the benefit of humanity. To renege on, or at least minimize, that mission was a betrayal.
The concerns extend well beyond former employees. Geoffrey Hinton, a computer scientist at the University of Toronto who last year received a Nobel Prize for his AI research, told me that OpenAI's original structure would better help "prevent a super intelligent AI from ever wanting to take over." Hinton is one of the Nobel laureates who have publicly opposed the tech company's for-profit shift, alongside the economists Joseph Stiglitz and Oliver Hart. The three academics, joining a number of influential lawyers, economists, and AI experts, in addition to several former OpenAI employees, including Hilton, signed an open letter in April urging the attorneys general in Delaware and California—where the company's nonprofit was incorporated and where the company is headquartered, respectively—to closely investigate the December proposal. According to its most recent tax filing, OpenAI is intended to build AGI "that safely benefits humanity, unconstrained by a need to generate financial return," so disempowering the nonprofit seemed, to the signatories, self-evidently contradictory.
[Read: "We're definitely going to build a bunker before we release AGI"]
In its initial proposal to transition to a for-profit, OpenAI still would have had some accountability as a public-benefit corporation: A PBC legally has to try to make profits for shareholders alongside pursuing a designated "public benefit" (in this case, building "safe" and "beneficial" AI as outlined in OpenAI's founding mission). In its December announcement, OpenAI described the restructure as "the next step in our mission." But Michael Dorff, another signatory to the open letter and a law professor at UCLA who studies public-benefit corporations, explained to me that PBCs aren't necessarily an effective way to bring about public good. "They are not great enforcement tools," he said—they can "nudge" a company toward a given cause but do not give regulators much authority over that commitment. (Anthropic and xAI, two of OpenAI's main competitors, are also public-benefit corporations.)
OpenAI's proposed conversion also raised a whole other issue—a precedent for taking resources accrued under charitable intentions and repurposing them for profitable pursuits. And so yet another coalition, composed of nonprofits and advocacy groups, wrote its own petition for OpenAI's plans to be investigated, with the aim of preventing charitable organizations from being leveraged for financial gain in the future.
Regulators, it turned out, were already watching. Three days after OpenAI's December announcement of the plans to revoke nonprofit oversight, Kathy Jennings, the attorney general of Delaware, notified the court presiding over Musk's lawsuit that her office was reviewing the proposed restructure to ensure that the corporation was fulfilling its charitable interest to build AI that benefits all of humanity. California's attorney general, Rob Bonta, was reviewing the restructure, as well.
This ultimately led OpenAI to change plans. "We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware," Altman wrote in a letter to OpenAI employees earlier this month. The for-profit, meanwhile, will still transition to a PBC.
The new plan is not yet a done deal: The offices of the attorneys general told me that they are reviewing the new proposal. Microsoft, OpenAI's closest corporate partner, has not yet agreed to the new structure.
One could be forgiven for wondering what all the drama is for. Amid tension over OpenAI's corporate structure, the organization's corporate development hasn't so much as flinched. In just the past few weeks, the company has announced a new CEO of applications, someone to directly oversee and expand business operations; OpenAI for Countries, an initiative focused on building AI infrastructure around the world; and Codex, a powerful AI "agent" that does coding tasks. To OpenAI, these endeavors legitimately contribute to benefiting humanity: building more and more useful AI tools; bringing those tools and the necessary infrastructure to run them to people around the world; drastically increasing the productivity of software engineers. No matter OpenAI's ultimate aims, in a race against Google and Meta, some commercial moves are necessary to stay ahead. And enriching OpenAI's investors and improving people's lives are not necessarily mutually exclusive.
The greater issue is this: There is no universal definition for "safe" or "beneficial" AI. A chatbot might help doctors process paperwork faster and help a student float through high school without learning a thing; an AI research assistant could help climate scientists arrive at novel insights while also consuming huge amounts of water and fossil fuels. Whatever definition OpenAI applies will be largely determined by its board. Altman, in his May letter to employees, contended that OpenAI is on the best path "to continue to make rapid, safe progress and to put great AI in the hands of everyone." But everyone, in this case, has to trust OpenAI's definition of safe progress.
The nonprofit has not always been the most effective check on the company. In 2023, the nonprofit board—which then and now had "control" over the for-profit subsidiary—removed Altman from his position as CEO. But the company's employees revolted, and he was reinstated shortly thereafter with the support of Microsoft. In other words, "control" on paper does not always amount to much in reality. Sharpe, the OpenAI spokesperson, said the nonprofit will be able to appoint and remove directors to OpenAI's separate for-profit board, but declined to clarify whether its board will be able to remove executives (such as the CEO). The company is "continuing to work through the specific governance mandate in consultation with relevant stakeholders," he said.
Sharpe also told me that OpenAI will remove the cap on shareholder returns, which he said will satisfy the conditions for SoftBank's billions of dollars in investment. A top SoftBank executive has said "nothing has really changed" with OpenAI's restructure, despite the nonprofit retaining control. If investors are now satisfied, the underlying legal structure is irrelevant. Marc Toberoff, a lawyer representing Musk in his lawsuit against OpenAI, wrote in a statement that "SoftBank pulled back the curtain on OpenAI's corporate theater and said the quiet part out loud. OpenAI's recent 'restructuring' proposal is nothing but window dressing."
Lessig, the lawyer who represented the former OpenAI employees, told me that "it's outrageous that we are allowing the development of this potentially catastrophic technology with nobody at any level doing any effective oversight of it." Two years ago, Altman, in Senate testimony, seemed to agree with that notion: He told lawmakers that "regulatory intervention by governments will be critical to mitigate the risks" of powerful AI. But earlier this month, only a few days after writing to his employees and investors that "as AI accelerates, our commitment to safety grows stronger," he told the Senate something else: Too much regulation would be "disastrous" for America's AI industry. Perhaps—but it might also be in the best interests of humanity.
This has been a banner month for X. Last week, the social network's built-in chatbot, Grok, became strangely obsessed with false claims about "white genocide" in South Africa—allegedly because someone made an "unauthorized modification" to its code at 3:15 in the morning. The week prior, Ye (formerly Kanye West) released a single called "Heil Hitler" on the platform. The chorus includes the line "Heil Hitler, they don't understand the things I say on Twitter." West has frequently posted anti-Semitic rants on the platform and, at one point back in February, said he identified as a Nazi. (Yesterday on X, West said he was "done with antisemitism," though he has made such apologies before; in any case, the single has already been viewed tens of millions of times on X.)
These incidents feel all too natural for Elon Musk's social network. Even without knowing the precise technical reason Grok decided to do its best Alex Jones impression, the fact that it became monomaniacally obsessed with a white-supremacist talking point says something about what the platform has become since Musk took over in October 2022. Specifically, it confirms that X has become a political weapon in his far-right activism. (To be clear, white farmers have been murdered in South Africa, which has one of the world's highest murder rates, according to Reuters. But there is no indication of a genocide. In 2024, eight of South Africa's 26,232 murder victims nationwide were farmers. Most murder victims there are Black.)
[Read: The day Grok told everyone about "white genocide"]
This has been obvious to anyone using the site or paying attention to Musk's managerial decisions. He's reinstated thousands of banned accounts (QAnon supporters and conspiracy theorists, and at least one bona fide neo-Nazi), and the platform is engorged with low-rent outrage porn, bigoted memes, MAGA AI slop, and, well, a lot of people proudly using racial slurs, frequently to attack other people. The platform's defenders would likely argue that X is an experiment in free-speech maximalism and that it is one of the only truly neutral zones on social media. Musk and his sycophants have constantly cited his takeover as an attempt to "solve free speech"; Joe Rogan has suggested that Musk has done just that. (This isn't quite accurate, as X has complied with government takedown requests, temporarily suspended journalist accounts, amplified accounts that promote Musk's worldview, and tried to censor words its owner doesn't like: Last year, it briefly warned users who attempted to use the word cisgender in posts, after Musk said he considers it a "slur.")
But Grok's white-genocide Wednesday is a major indication that the platform is not neutral. Either X has a natural bias, based on the site's architecture and user base—that is, the chatbot, which is able to search tweets in real time, acts on an attitude that is endemic to the platform—or X is being directly manipulated to emphasize a certain viewpoint. In other words: Either way, X is racist. The only thing up for debate is whether this is a feature or a bug for those in charge.
Twitter always had an outsize cultural influence, and X—despite its marked decline under Musk—does as well. Yet mainstream culture is no longer dominant there: The media outlets and public figures are now punch lines for the site's main characters, Musk and his MAGA acolytes. Platform events such as the Grok rampage and Ye's "Heil Hitler" offer a window into the ways that X has become an accelerator for a broader, more durable culture of hate. It's not only that some of this vile discourse seeps out into the physical world (memes about immigrants eating cats and dogs leading to harassment in Ohio, Trump bringing up conspiracy theories about white genocide during an Oval Office meeting with the South African president)—it's that the worst of the internet is no longer relegated to the shadows. Instead, it is elevated, perhaps even at times normalized, by its proximity to everyone else's content.
Last Wednesday, as I watched Grok bring up white genocide in response to an anodyne query about the Toronto Blue Jays pitcher Max Scherzer's career earnings, I couldn't shake the question: Why are people still using this website? The same thought had also occurred to me around the time that Ye released "Heil Hitler" and I toggled over to X's algorithmic "For You" feed. It showed a smattering of the platform's least savory commentators posting about how the anti-Semitic anthem was "the song of the year" and how it had become popular in Thailand. What happened next is pretty standard: By clicking on a few posts about the song, I'd expressed enough interest in it that the platform fed me a steady stream of "Heil Hitler" content: AI-generated remixes of the song, covers, dozens of memes about how the song was secretly popular. I saw a video of a white couple singing the song in their car, throwing up Nazi salutes. Not long after that, I saw a link to a crowdfunding campaign for that same couple, who were asking for money to "relocate" after their video went viral and they were doxxed and "threatened." The couple set their funding goal at $88,000—a reference, almost assuredly, to "88," a neo-Nazi code for "Heil Hitler." This Russian nesting doll of irony-poisoned, loud-and-proud racism is a common experience in the algorithmic fever swamps of X.
It's worth noting that Ye's song was banned by other major streaming platforms and social networks. Writing about X, The New Yorker's Kelefa Sanneh said, "West has given the platform a kind of exclusive hit single—a song that can be heard almost nowhere else." Neo-Nazis and trolls expressed a palpable delight that all of this was happening on an ostensibly mainstream platform—wanton hatred not on 4chan or Stormfront, but on the same network where Barack Obama posted a condolence message about Joe Biden's cancer diagnosis. "Heil Hitler" is almost assuredly not the global phenomenon that the fascists on the platform think it is, but its prevalence on X is not nothing either. As Sanneh wrote last week, "We now live in an era when a top musician can distribute a song called 'Heil Hitler,' and there's no way to stop him. That is the true message of this song, which has spread and thrived beyond the reach of boycotts or shaming campaigns: no one is in charge."
In July 2020, the Twitter user Michael B. Tager shared an anecdote that went viral. Tager was at "a shitty crustpunk bar" when the gruff bartender kicked out a patron in a "punk uniform"—not because the customer was making a scene, but because he was wearing Nazi paraphernalia. "You have to nip it in the bud immediately," Tager recounted the bartender as saying. "These guys come in and it's always a nice, polite one. And you serve them because you don't want to cause a scene. And then they become a regular and after awhile they bring a friend." Soon enough, you're running a Nazi bar.
The Nazi bar is an apt analogy, yet it doesn't fully capture the weirdness of a social network and of the strange, modern power of algorithms to sort and segregate experiences. Many people use X merely to post about sports, follow news, or look at dumb memes, and they're probably having a mostly normal online experience; I don't have any wish to judge them. To torture the metaphor, though, they're sitting at a table outside the Nazi bar; their friends are there, they're having a good time, maybe they hear a slur emanate from the window from time to time. Others fully recognize that they're at a Nazi bar, but this was their bar first and they don't want to cede the territory; they're hanging around to debate, never mind that the bar's owner is palling around with the new customers.
Of course, with a broadcast social network like X, everyone is both a patron and an owner of sorts. Followers can feel like a kind of currency, built up over years: Some people don't leave the bar, because they're invested and don't want to dump their shares. Other people don't leave, because the alternative hangouts aren't enticing enough. Some simply don't want to give the Nazis the satisfaction of successfully driving them out. There is plenty of commentary, even among users of other platforms, about how Threads is bloodless (and owned by Mark Zuckerberg), Mastodon is inscrutable, and Bluesky is humorless.
These quibbles make some sense in the brain-rot context of social media, where people have been conditioned to think it's normal to have interactions with millions of strangers at the same time, but this is not really tenable or healthy. Nor is it something most people would tolerate in the physical world. If a billionaire bought one of your local haunts, renamed it, humiliated the employees, brought back many of the people who'd been banned for harassing other regulars, eliminated basic rules of decency, and started having town halls with Republicans and a leader of the AfD, taking your business elsewhere would be perfectly rational. This is essentially what's happened on X, only the reality is wildly, at times comically, more extreme. A critical mass of the nation's politicians, news outlets, and major brands regularly post content for free to the exclusive streaming platform for the Ye song "Heil Hitler." This platform is owned by the world's richest man, a conspiracy-theorizing GOP mega-donor who still holds a position in the Trump administration. Even if he winds down his official role, X will remain an instrument for Musk's politics. Let's pause to sit with the absurdity of these facts.
Acknowledging the role X plays in mainstreaming the worst constituencies makes for awkward conversations with those who continue to use it. These discussions grow exhausting, fast. There's a definite purity-politics flavor to any suggestion that people should take a moral stand and leave a social network, but also a pretty airtight case to be made for boycotting it. There is no ethical consumption under tech oligarchy, etc. You're not a Nazi simply because you use X—but also, what exactly are you doing there?
You may not have any interest in participating in a culture war. The problem is that on X, everything is a culture war. Culture war is the very point of the MAGA AI slop the platform traffics in and the viscerally cruel White House X account. Culture war is behind Tucker Carlson's choice to debut his post-Fox show on X and why Alex Jones livestreams on the platform every day. West's nihilistic neo-Nazi single is an act of culture war: Its message isn't just that X has energized his ideas, but that the platform renders people like Ye unignorable. Only Musk could shut this machine down, but plenty of others lend it their credibility and happily turn the cranks, ensuring that the culture war grinds on and on.
Sorry to tell you this, but Google's new AI shopping tool appears eager to give J. D. Vance breasts. Allow us to explain.
This week, at its annual software conference, Google released an AI tool called Try It On, which acts as a virtual dressing room: Upload images of yourself while shopping for clothes online, and Google will show you what you might look like in a selected garment. Curious to play around with the tool, we began uploading images of famous men—Vance, Sam Altman, Abraham Lincoln, Michelangelo's David, Pope Leo XIV—and dressed them in linen shirts and three-piece suits. Some looked almost dapper. But when we tested a number of articles designed for women on these famous men, the tool quickly adapted: Whether it was a mesh shirt, a low-cut top, or even just a T-shirt, Google's AI rapidly spun up images of the vice president, the CEO of OpenAI, and the vicar of Christ with breasts.
It's not just men: When we uploaded images of women, the tool repeatedly enhanced their décolletage or added breasts that were not visible in the original images. In one example, we fed Google a photo of the now-retired German chancellor Angela Merkel in a red blazer and asked the bot to show us what she would look like in an almost transparent mesh top. It generated an image of Merkel wearing the sheer shirt over a black bra that revealed an AI-generated chest.

What is happening here seems to be fairly straightforward. The Try It On feature draws from Google's "Shopping Graph," a dataset of more than 50 billion online products. Many of these clothes are displayed on models whose bodies conform to (and are sometimes edited to promote) hyper-idealized body standards. When we asked the feature to dress famous people of any gender in women's clothing, the tool wasn't just transposing clothing onto them, but distorting their bodies to match the original model's. This may seem innocuous, or even silly—until you consider how Google's new tool is opening a dangerous back door. With little friction, anyone can use the feature to create what are essentially erotic images of celebrities and strangers. Alarmingly, we also discovered that it can do this for minors.
Both of us—a woman and a man—uploaded clothed images of ourselves from before we had turned 18. When we "tried on" dresses and other women's clothing, Google's AI gamely generated photos of us with C cups. When one of us, Lila, uploaded a picture of herself as a 16-year-old girl and asked to try on items from a brand called Spicy Lingerie, Google complied. In the resulting image, she is wearing what is essentially a bra over AI-generated breasts, along with the flimsiest of miniskirts. Her torso, which Google undressed, features an AI-generated belly-button piercing. In other tests—a bikini top, outfits from an anime-inspired lingerie store—Google continued to spit out similar images. When the other author, Matteo, uploaded a photo of himself at 14 years old and tried on similarly revealing outfits, Google generated an image of his upper body wearing only a skimpy top (again, essentially a bra) covering prominent AI-generated breasts.
It's clear that Google anticipated at least some potential for abuse. The Try It On tool is currently available in the U.S. through Search Labs, a platform where Google lets users experiment with early-stage features. You can go to the Search Labs website and enable Try It On, which allows you to simulate the look of many articles of clothing on the Google Shopping platform. When we attempted to "try on" some products explicitly labeled as swimsuits and lingerie, or to upload photos of young schoolchildren and certain high-profile figures (including Donald Trump and Kamala Harris), the tool would not allow us to. Google's own policy requires shoppers to upload images that meet the company's safety guidelines. That means users cannot upload "adult-oriented content" or "sexually explicit content," and should use images only of themselves or images that they "have permission to use." The company also provides a disclaimer that generated images are only an "approximation" and may fail to reflect one's body with "perfect accuracy."
In an email, a Google spokesperson wrote that the company has "strong protections," including blocking sensitive apparel categories and preventing "the upload of images of clearly identifiable minors," and that it will "continue to improve the experience." Right now, those protections are obviously porous. At one point, we used a photo of Matteo as an adult wearing long pants to let Google simulate the fit of various gym shorts, and the tool repeatedly produced images with a suggestive bulge at the crotch. The Try It On tool's failures are not entirely surprising. Google's previous AI launches have repeatedly exhibited embarrassing flaws—suggesting, for instance, that users eat rocks. Other AI companies have also struggled with flubs.
The generative-AI boom has propelled forward a new era of tools that can convert images of anyone (typically women) into nude or near-nude pictures. In September 2023 alone—less than a year after ChatGPT's launch—more than 24 million people visited AI-powered undressing websites, according to a report from Graphika, a social-media-analytics company. Many more people have surely done so since. Numerous experts have found that AI-generated child-sexual-abuse material is rapidly spreading on the web; on X, users have been turning to Elon Musk's chatbot, Grok, to generate images of women in bikinis and lingerie. According to a Google Shopping help page, the Try It On tool is at the fingertips of anyone in the U.S. who is at least 18 years old. Trying clothes on always requires taking some off—but usually you don't let one of the world's biggest companies do it for you.
Most users won't be trying to dress up minors (or the vice president) in low-cut gowns. And the appeal of the new AI feature is clear. Trying on clothes in person can be time-consuming and exhausting. Online shoppers have little way of knowing how well a product will look or fit on their own body. Unfortunately for shoppers, Google's new tool is unlikely to solve these problems. At times, Try It On seems to change a shopper's body to match the model wearing the clothing instead of showing how the clothing would fit on the shopper's own body. The effect is potentially dysmorphic, asking users to change their bodies for clothes rather than the other way around. In other words, Google's product doesn't seem likely even to help consumers meaningfully evaluate the most basic feature of clothing: how it fits.
You could easily mistake Alec Harris for a spy or an escaped prisoner, given all of the tradecraft he devotes to being unfindable. Mail addressed to him goes to a UPS Store. To buy things online, he uses a YubiKey, a small piece of hardware resembling a thumb drive, to open Bitwarden, a password manager that stores his hundreds of unique, long, random passwords. Then he logs in to Privacy.com, a subscription service that lets him open virtual debit cards under as many different names as he wishes; Harris has 191 cards at this point, each specific to a single vendor but all linked to the same bank account. This isolates risk: If any vendor is breached, whatever information it has about him won't be exploitable anywhere else.
Harris has likewise strictly limited access to his work and personal phone numbers by associating his main phone with up to 10 different numbers. He has burner numbers and project-specific numbers, a local-area-code number to give out to workers coming to his house, a dedicated number for two-factor authentication, and a number from a city where he previously lived that he doesn't use much anymore but is helpful for ambiguating his identity in databases. He has additional numbers that, through a fancy hardware modification, even his mobile carriers can't associate with the device. He can also open multiple browser sessions on the phone, each showing a different IP address, which limits tracking and prevents websites from aggregating information about him.
In a safe at home, Harris keeps prepaid anonymous debit and gift cards (Google Play, Apple Gift), prepaid SIM cards, phones for use in Europe, a Faraday bag (to shield wireless devices from hacks and location tracking), a burner laptop, and family passports. He also carries a passport card, a wallet-size government-issued ID that, unlike a driver's license, doesn't show his address. When using Uber, he provides an intersection near his house as his pickup or drop-off point. For food deliveries, he might give a random neighbor's address and, after the order is accepted, message the driver, "Oops, I typed out the address wrong. Let me know when you're here, and I'll run out."
Harris is the CEO of HavenX, a firm that provides its clients with extreme privacy and security services. It was spun off from Halo, which focuses on government clients, in 2023. HavenX customers, some of whom pay tens of thousands of dollars a month, typically face serious threats. Some are celebrities or ultra-wealthy families. Others are business executives—interest from this group has risen since the killing of UnitedHealthcare CEO Brian Thompson last year. The recent Signal leak, too, in which the editor in chief of this magazine was erroneously added to a high-level Trump administration group chat, triggered more than a few corner-office freak-outs. Many HavenX clients come from the cryptocurrency world: Some made a fast fortune and, because they can't park their crypto in a bank, are unusually vulnerable; some run crypto companies and are seen, accurately or not, as controlling access to other people's digital wealth. The recent crypto-market boom has brought a wave of kidnappings, in which some crypto owners have even been held for ransom or tortured into surrendering the keys to their coins. Harris said the first quarter of this year was HavenX's busiest since the spin-off.
[Read: The real Trump family business is crypto]
Lots of companies, including giants like Kroll, are in the security business, but HavenX has positioned itself as a boutique solver of exotic problems. During one of our conversations, Harris mentioned a recent case where the chief information-security officer at a large company with its own intelligence team called him. An executive at the company was being extorted, and the company's investigators had managed to link the extortionist to an X account, a Telegram number, and an African phone number, but they hadn't been able to learn their real-world identity. "That's where their capability stops," Harris said. "It's where we say, 'That's interesting,' and we start."
Harris's own privacy concerns are less acute, but he takes both a professional's and a hobbyist's interest in cloaked living and finds it useful to have direct experience with methods he recommends to clients. He lives with his wife, Ellyn, a psychotherapist, and their two sons on an affluent edge of Washington, D.C., in a greige clapboard house tucked away on a street that doesn't get much traffic. A basketball hoop stands at the end of the driveway. When I visited earlier this year, snow covered the front yard, and a braided-rope bone and a red Kong chew toy were half visible.
A tall, fit 43-year-old, Harris answered the door with a welcoming smile. I had been able to find the house only because he told me the address in advance. When I'd looked up his name in a paid database where you can reliably find such information, I'd seen other addresses for him but not this one. After Harris gave me the address, I searched for it and found only the name of a trust. Also: Harris doesn't have a dog. The toys out front were for show, a subtler version of a fake home-security-system sign.
From a cabinet in his office, Harris pulled a sheaf of legal documents and began to show me how he managed his double life. Achieving residential anonymity had been a process. When he bought the house, he'd set up the trust using a close friend as the trustee; once the home purchase was complete, the friend resigned and named Harris as his successor. Mail sent here, including near-daily Amazon deliveries, is addressed to either the trust or some other name, whether a random pseudonym Harris used when filling out a form or something generic like "postal customer."
He showed me a holiday card he'd received at the house the day before, and a text exchange from that morning with the friend who'd sent it. "Thanks so much, love the pic on the back," he had written. "Small favor. Our address is unlisted. So would you mind using this for mail." Harris had then typed the address of the UPS Store. "Anything with our names on it goes there." At least one such holiday-card misdirection occurs every year. "This is a super-nice family, and I want them in our lives, and so I want to be nice about it," Harris told me. As we sat there, a text came in from the friend, affirming that from now on, he'd use the other address.
As Harris walked me through the esoteric gear and practices that let him live as if he's in Witness Protection, there was a tinge of excitement in his manner, like he was a guitar enthusiast giving a tour of his home studio. Harris is instinctually private. He recalled his mother asking him how school was one afternoon when he was 5. "Fine," he said. That evening, when she was giving him a bath, she found stitches in the back of his head. He'd fallen at school. "This is 1987," Harris said, "and the school just didn't call."
Today he has professional reasons for not being easily accessible, and his precautions have been effective. After a breach last summer, several HavenX clients who hadn't done full privacy resets received an email with a picture of their house and an accompanying message: You've been watching porn. Pay us one bitcoin and we won't tell your employer.
âAnd so my wife got one of those,â Harris recalled, âand I was so pleased âcause it had a picture of the front of the UPS Store.â
It's extraordinarily hard, when every one of us is ceaselessly flaking off informational DNA, to live privately. And if you're targeted by a nation-state with a signals-intelligence dragnet, forget it: Your face, or voice, or gait, or how you move your mouse will betray you. A properly equipped snoop using a method called Van Eck phreaking can replicate the contents of your laptop screen from an adjacent hotel room, even if your computer isn't equipped for Wi-Fi or Bluetooth, by detecting variations in electromagnetic radiation. The Pentagon has tested an infrared laser, Jetson, that can nail your identity from 200 yards away based on your signature heart rhythms, a Department of Defense official involved with the project told MIT Technology Review. Jeff Bezos claimed he was phished by Mohammed bin Salman, crown prince of Saudi Arabia, who allegedly infected the world's third-richest person's phone with spyware via a video attachment in a WhatsApp message. If Bezos was right (the Saudi embassy denied it, and an FBI investigation was inconclusive, but UN experts believe the crown prince was likely the culprit), then what hope do the rest of us have?
[From the May 2022 issue: The price of privacy]
But for most people, Big Brother is a multinational corporation, thanks to our blithe surrender of privacy over the past two decades in return for conveniences such as free email, supercomputers in our pockets, same-day package delivery, and the names of third and fourth cousins we'd never heard of before. We now inhabit a panopticon of doorbell cameras and traffic cameras and Google Street View cameras and police body cameras and phone cameras and retail security cameras and the cameras of Mark Zuckerberg's Ray-Ban Meta "smart glasses"; of geolocating phones and AirTags; of eavesdropping Siris and Alexas. Apps and mobile carriers can pinpoint not just what building you're in, but which floor you're on, by using your phone's barometer and GPS, and the strength of your signal.
Much of that information is sold almost instantaneously through an automated shadow economy of location-data brokers. So is your precise behavior in stores such as Walmart, where unseen Bluetooth beacons record which products you linger in front of. So are countless other details about you that you may or may not want people to know. And in the past few years, as corporations have become more and more dependent on cloud storage, the number of data breaches in the United States has exploded, nearly doubling from 1,801 to 3,205 annual incidents from 2022 to 2023, according to the nonprofit Identity Theft Resource Center.
Most of us, ignorant, indifferent, or overwhelmed, shrug. At best, maybe we half-heartedly comply with a "Five Things You Need to Do Right Now to Protect Yourself Online" LinkedIn thread, such as using a password manager and two-factor authentication. Others, including Harris and his clients, have taken more radical steps, and they have done so by drawing, knowingly or not, from the tradecraft of a former cop named Michael Bazzell. It was from Bazzell that Harris learned how to set up his trust and got the ideas for the passport card and the dog toys. On a bookshelf in his home office, alongside Jaron Lanier's You Are Not a Gadget, is Bazzell's exhaustive guide to this dark 21st-century art: Extreme Privacy: What It Takes to Disappear.
Bazzell is something of a real-life Ed Galbraith, the Breaking Bad character known as the Disappearer, who sells and repairs vacuums by day, and by night sets people up with new lives and identities. Unlike Galbraith, who offered his services to fugitives, Bazzell consulted for law-abiding people who wanted to be unfindable by strangers. Some were government officials who'd put violent people behind bars or been swarmed by online mobs. Some were entertainers who wanted to be famous but also have peace of mind. Some were targets of deranged obsessives, such as homicidal exes. Some were dangerously rich. And some simply objected to the nosy predations of surveillance capitalism.
Bazzell also published several thick editions of his privacy bible and recorded hundreds of podcast episodes on topics such as "Lessons Learned From My Latest Doxxing Attack" and "Consequences of Product Refunds." Over time, he developed an audience that was similarly enthralled by privacy and excited by the rigor and creativity he brought to the subject. Issues of his Unredacted extreme-privacy e-zine would typically get more than 60,000 downloads.
Then, in September 2023, all 300-plus episodes of his podcast vanished from the internet, and Michael Bazzell disappeared. Devoted fans speculated that he had died, had been abducted, was in a foreign prison, or had had a nervous breakdown. Two months later, he published a blog post, "My Irish Exit," explaining that an opportunity had come up for him to spend three months as an "imposter" in the world of the rich and famous, which he normally served but otherwise kept at a distance. "What's next? I am not ready to share that, and may never go public with it. I have my aliases established. The shell company is in place. The anonymous payment account is ready." He continued, "The better question is, what is YOUR next chapter?" His website kept operating, but it said Bazzell's firm was no longer taking on new clients.
Bazzell had had his own awakening in 2001, as an Illinois beat cop turned cybercrime detective. His work had led to the arrest of a local elections official for soliciting sex from a 14-year-old girl. Amid the ensuing media coverage of that and similar arrests, internet anons made death threats against Bazzell, and he was shocked to learn how easy it was to find his home address online. Soon after, browsing at the library, he discovered How to Be Invisible, a book by a missionary named J. J. Luna. Assigned to the Canary Islands in the 1960s, when Spain's Franco government was persecuting Protestants, Luna was forced to live undercover. When he returned to the U.S. in 1988, he decided to maintain his private lifestyle and publish a book showing others how they might do the same, using LLCs, "ghost addresses," and other tricks.
Bazzell resolved to execute all of the practices Luna recommended, effectively going off the grid. Over time, student surpassed teacher. Bazzell pioneered or updated many of the privacy hacks now taken as standard. To obtain an ID without betraying one's location, Bazzell recommended establishing residency in South Dakota, which is distinctly friendly to year-round RVers and other nomads. For sending mail without divulging your address, Bazzell preferred a private remailer service also based in South Dakota. He was a proponent of "data poisoning," the deliberate spreading of disinformation about oneself (by, for instance, subscribing to magazines or signing up for internet service using false personal details) to make it harder for anyone to locate your real information. He helped clients with the financial means obtain second citizenships. His podcast often focused on products he'd been testing that were privacy-enhanced alternatives to mainstream devices and apps, such as Tuta (an email and calendar service), Linux Pop!_OS (an operating system), and MySudo (an app for managing online identities).
Though he catered to people in dire situations, Bazzell also experimented on himself. To ensure that his cellphone was never associated with his address, he kept it off and in a Faraday bag until he arrived at a four-way intersection some distance from his home. He submitted a fake obituary for one of his aliases to Legacy.com. Mindful of the increasing prevalence of automated license-plate readers on tow trucks, taxis, police cars, and other vehicles, he used magnetic license-plate holders and removed his plates whenever he was parked somewhere overnight. Forgoing cloud storage, he backed up his data on a flash-memory card the size of a fingernail, concealed the card in a hollow nickel, and then, while in the bathroom at a friend's house, unscrewed an electrical plate and hid the coin behind it. (When he later needed to access the backup, he had to call the friend and reveal what he'd done.) He set up a bait website with his real name and connected it to some analytics software in order to glean information about who was doing searches on him. He'd routinely investigate himself, scouring databases to make sure he couldn't find actionable information on his own whereabouts. To throw off gait-recognition systems, which have popped up in Beijing and Shanghai, among other places, he tried wearing two sizes of the same shoe.
[Read: Three simple rules for protecting your data]
All the while, Bazzell remained a cipher. He never revealed where he lived or spoke of his personal life, and you couldn't easily find a photo of him. But several years ago, he befriended a writer and podcaster named Javier Leiva, and three episodes of Leiva's own podcast, Pretend, focused on Bazzell and his work. It proved a tricky project. "We all use Google apps," Leiva told me. "That did not fly with Michael Bazzell. We had to use encrypted note-taking apps. It was a process. Nothing was easy." Leiva recalled Bazzell saying that when he attended his sister's wedding, he prearranged for the photographer to keep him out of shots.
On a recent Sunday, after several weeks of back-and-forth mediated by one-named associates of Bazzell's ("Laura," "Samantha"), and after I gave an assurance that I wouldn't record our conversation, Bazzell called me on Signal from a number he told me he'd created just for our interaction and would become useless 10 minutes after it ended. We spoke for more than an hour, and he cleared up a few things. Leiva had speculated to me that Bazzell kept his podcasts off the internet because of a concern about voice cloning, but Bazzell gave a simpler explanation: Much of the information was now out-of-date. "I enjoyed it," he said. "But the market is saturated now. There are so many YouTubes and podcasts."
On the subject of tradecraft, Bazzell also told me that he follows what privacy people call a "gray man" strategy: doing whatever he can to not draw attention. "I don't wear logos on my clothing," he said. "If I'm in New York, I'm probably wearing a lot of dark-gray clothing to blend in. On a Caribbean island I don't, because it would stick out." Nor will you find him driving a Cybertruck; he opts for popular cars in popular colors. An irony of the life he's chosen is that out-of-date tech can make for the most up-to-date privacy strategy. He tells clients not to back up their home security cameras to the cloud. Instead of using Spotify, he listens to music on a portable player with a 1.5-terabyte card holding "every album I can imagine wanting."
Neighbors who know Bazzell's real first name don't know his last. Some of the people who work for him have met him, but none of them are employees. Each of his "colleagues," as he calls them, has an individual LLC. He doesn't know their Social Security numbers or dates of birth. He wants them to understand privacy by practicing it.
Bazzell has long spoken about "privacy fatigue," an avocational hazard given the constant vigilance that extreme privacy measures entail and the technological complexity they can involve, but after 20 years, he told me, it doesn't affect him anymore. Recently, he's been working on ways to inject false information into the troves of breached data that surface on the internet.
[Read: Slouching toward âaccept all cookiesâ]
Although it has become harder than ever to be private, "the good news is, more people are grasping the concepts," Bazzell observed. "People now understand why us privacy weirdos have been making noise about this for so long."

There's a cost to living this way. Doing it right, severing your present self from the history you've accrued in corporate databases, requires a complete reboot. This means either becoming fully nomadic or moving homes and implementing privacy from day zero of your new life. You must consider everything from your car's registration to your house's utility hookups, and the measures required to prevent a misstep can be comically elaborate. A reboot is common in Bazzell world. Alec Harris did one too. Because utilities want to know who's going to be paying the bills at a particular residence, Harris, when setting up water and gas, offered a $500 deposit and, to persuade the customer-service reps to forgo a personal name on the accounts, claimed he was a property manager named Tom. "The owner's a nutjob, so help me out here," he told the technicians. "And they were like, 'Okay.'"
Buying a car presents special difficulties. Harris likens cars to cellphones for how they collect and upload information about your location and driving behaviors, among other things. A work-around Bazzell likes is to buy fleet insurance (designed for companies that operate a fleet of vehicles), which you can do through a business entity, but that approach is expensive. Instead, Harris followed a detailed script laid out by Bazzell, calling a dealer to say he wanted to come in for a test drive, then canceling at the last minute, then calling again when he was outside the dealership and trying to fast-talk a salesman into forgoing the usual ID check. It didn't work. "They were like, 'Yeah, you're not getting in the car without scanning your driver's license,'" Harris recalled. "My attempt at social engineering was not going anywhere." He handed over his ID. To buy the car, Harris ended up registering it at an alternative residence, but when he asked whether the dealership could disconnect the built-in GPS, he was told the car wouldn't run without it.
The rudiments of daily life can also be cumbersome. Harris recalled setting up a new TV with Disney+ and having to undo some autofilled information and replace it with his abstruse AnonAddy email address, then typing out one of his extra-long passwords only to get a character wrong and have to start over, all while his young children became antsy. "And so then you've got two kids sitting there, and they're like, 'I want Domino's,' and 'I want to watch Mulan.'" He laughed. "That's the price you pay."
Sometimes the price is literal. None of the purchases Harris makes through Privacy.com earns credit-card points. "Maybe over the course of some period of time, that means we're paying for an extra flight somewhere," he said. He has Amazon Prime, but he can't use its discount at Whole Foods, because he doesn't want to use their verification methods. There can be more significant financial consequences as well. "My credit score has decreased," Bazzell said. "Getting a loan would be difficult. Some consumer databases show me as deceased."
Harris has also sacrificed convenience. Some of the alt-tech he uses, such as the search engine DuckDuckGo, isn't always as effective as the mainstream tools. "Sometimes you just need to Google something," he said. Then there are logistical frictions. Once, at Dulles Airport en route to a wedding in Toronto, he wasn't allowed through security, because his passport card, although valid for overland entry to Canada, wasn't acceptable for international air travel. He had to change his family's flights and run home for his passport.
I confessed that I was already confused. How, for instance, did he remember which of his 10 phone numbers to use for what? "Yeah, I don't know," he replied. "It is confusing. And if you were a new client, I would not be dumping this much. We would be starting a little slower." Living this way, he acknowledged, incurred a "20 percent cognitive" overhead.
As Harris drove us to lunch, we stopped at the UPS Store, where his mailbox was empty. Harris gestured toward the guys behind the counter, whom he and Ellyn had befriended, often ordering food for them during the pandemic. That generosity could make a difference when, say, a letter addressed to the trust came to the mailbox held under his and Ellyn's names. Though UPS wouldn't normally deliver that letter, "they let it slide," he said. Harris has a client in Florida who is diligent about following privacy protocols but is also quiet and a little gruff. "I was like, 'You've got to be nice to these people,'" Harris recalled. "'You come off as kind of not warm, and so you need to turn on the charm a little bit.'"
This is the behavioral side of privacy. If you're committed to being private, you can't indulge your everyday asocial tendencies. Imagine, Harris will say to a client, doing all of this work, then getting into a fender bender: If you start yelling at the other driver, and the accident gets reported to an insurance company, and a plaintiff's lawyer gets involved, you could find yourself being subpoenaed for documents and more generally having your life probed. Instead, Harris told me, you just need to be like, "Hey, so sorry, let's take care of this."
Harris told me it's important to have "repeatable privacy excuses": lines to disarm people who might deem a request suspicious. The fictional property manager is one of his. Another is that he works in the privacy business. But he's uneasy with the constant fibs recommended by Bazzell, who has sometimes told whoppers, such as describing his adult client as a child under the age of 13 in order to get her name and address removed from a website.
During his time in D.C., Harris said, he's known people who previously worked undercover for the government, and he has observed the mental and spiritual costs of living inauthentically. "I don't need to subject myself to that, and I definitely wouldn't want the kids or my wife to have to live like that," he said. People who'd lived double lives told him they'd kept their personas "90 percent real, 10 percent fake," he said. "It's just easier."
He told me he hadn't used the property-manager excuse in years. It turns out that the guy coming over to help you with a water leak generally doesn't even ask your name. "I don't have to do a whole story," he said. "I'll just say, 'Hey, do you want a cup of coffee?' And we're good."
Privacy remains a game of haves and have-nots. Harris explained that the majority of HavenX's clients are in the U.S., partly because many of its techniques are specific to the country's unique patchwork of federal and state privacy laws. A person who goes by the name "M4iler," a privacy hobbyist based in the Czech Republic whose phone numbers include one that leads to a recording of Rick Astley's "Never Gonna Give You Up," told me, "What Michael Bazzell says is great, and I assume works perfectly in the U.S. if you follow the steps, but laws are different in other countries." A company doing business under an alias, for instance, isn't an option there. "So that's kind of a problem," he told me.
Celebrities have both advantages and disadvantages when it comes to privacy. Harris noted that if you're as famous as, say, the Rock or Christina Aguilera, "as soon as you move in, everyone on this block is going to know who you are, as soon as the paparazzi follow you home one night." But also, he added, they "get to do things that I don't need to or couldn't do." Matt Bills, who is based in Los Angeles and handles the physical side of privacy for HavenX clients, has relationships with concierges at top hotels. "He'll be like, 'The Rock's coming,'" Harris said. "They open up the back door." Bills told me about a client for whom he'd arranged to have two identical Gulfstreams on an airport's tarmac, with a fuel truck next to each and a staircase in the middle. They decided which plane the client would board only at the very last minute.
Strong privacy is a luxury good. A rich person can rent an extra apartment just to use as a mailing address; most of us cannot. HavenX's entry-level service might cost a couple thousand dollars a month, "but it can get up into the tens of thousands a month very quickly," Harris told me. When I asked which services might cost that much, he mentioned people who need 24/7 monitoring of the dark web for particular information, like a CEO who wants to know immediately if a specific combination of terms shows up in a data breach, such as his name along with his child's name and the name of the child's school.
Others, with fewer resources, might sacrifice the normalcy of their lives. Jameson Lopp is a software engineer and bitcoin booster who was living in Durham, North Carolina, when, in 2017, local police received a call from someone who said he had just killed someone at Lopp's address, was holding hostages, and had rigged the front door with explosives. Lopp's house was soon surrounded by dozens of rifle-brandishing police. He'd been a victim of "swatting": a dangerous hoax in which a false report is made to trigger a law-enforcement response to a specific address. Afterward, Lopp resolved not to let something like that happen again. Over the next several years, he spent by his estimation more than $100,000 to effectively disappear, going so far as to rent a decoy apartment and hire private investigators to test his defenses by trying to find him.
[Read: The virtue of being forgotten]
Now he runs security for Casa, a company he co-founded that offers safe storage for digital assets. Even his family members don't know his address, he told me; if they're visiting, he'll pick them up at another location and then bring them to his house. His neighbors know him by a different name, and he segregates his relationships, never socializing at the same time with people who know his real name and people who know him by an alias. "A big part of what I do is lying," he told me, "and I think that that's one thing that a lot of privacy advocates don't really talk about: If you really want to be private, you have to get comfortable with lying. You have to think of it as a tool that you're using to defend yourself."
Lopp wouldn't tell me whether he has a spouse or children, but he observed that privacy "becomes an order of magnitude more complex as you add more people into the machinations," adding that "it very much lends itself to a lone-wolf type of lifestyle."
"What do you think of our life?" Ellyn Harris asked me. She smiled warmly. "Do you think we're so weird?"
Alec's wife, between Zoom appointments, had joined us, and we were talking about raising a family inside a privacy cone. Alec had eased Ellyn into privacy practices, starting with the Bitwarden password manager. "I remember sitting with him on our couch in D.C., in our old condo," she said, "being like, 'This seems really hard. I don't know if I want to do this. I just want everything to be the same word with the same numbers, and I use an exclamation point at the end, so that makes me unique; no one will ever find out. And I capitalized the first letter, so we're fine.'" She laughed the wry laugh of a privacy vet making fun of her younger self.
But then Alec got her some hidden phone numbers. "I didn't even think about that," she recalled. "That was just a way to sneak privacy into my life." Now living privately no longer feels like such a big deal, and she's come to appreciate the emotional security that goes with it.
"She was wildly supportive," Alec interjected.
"You do just get used to it," Ellyn said. Using tools that at first seem unwieldy, like a password locker, comes to feel easier than not using them. I wondered, given her work in mental health, whether she thought Alec ever edged into paranoia. "There's this idea in psychology called a learned phobia," she said, "where, for example, if you observe someone who has a fear of flying often enough, you could actually absorb that fear and that can become yours. So Alec's paranoia has become mine. So that means we'd both be worthy of diagnosis."
"We could be in the same mental institution," Alec said.
"I mean, that's the dream, right?" Ellyn said.
With workers who came to the house, she started using just her middle name, Leslie, but one time James, the older of their elementary-school-age sons, said, "That's not your name." "Oh my God, James, don't blow my cover," she said, before explaining to the workman that it was her middle name. But she was clearly not quite as committed as Alec. Whenever a visitor nervously asked where the dog was, Alec would say it wasn't home at the moment. "Oh," Ellyn said, laughing. "I'm just like, 'We don't really have a dog.'"
Children presented several more layers of complexity. To register with the local public school, which required proof of residence, Alec had met with the admissions director, trust documents in hand. "She had been in this job for a long time," Alec recalled. "She was like, 'This is a first.' She was super nice." Ultimately, he showed the school where the family lived, and the school agreed not to put the home address in the school directory, and to use the UPS Store address for any mailings.
Ellyn still frets when arranging playdates (she's trying to make mom friends), but if a mother asks for her address, she's gotten used to sending a pin drop. When one mom put the Harrises' address in her contacts, Ellyn found herself saying, "'I'm so sorry, but could you not do that?' And that's weird. But the thing is, I just tell them that Alec works in privacy." And because they live in the D.C. metro area, she went on, "people kind of get it."
Both Alec and Ellyn are personable, and Alec felt this was also important to the success of their privacy. "I would say other than in this area, we're not very weird," he said. "If we were eccentric in all areas of our lives, it would be harder to pull off."
They know bigger questions loom as their kids get older. One of the more challenging cases Alec has worked on is that of a "very, very wealthy guy" who was involved in the prosecution of a cartel leader, and whose daughter is a young artist who's starting to achieve some success. "Some days she's like, 'Fuck you guys, I'm going to be famous,'" Alec said. "He also wants to enable his daughter to have a regular life." It's proved to be a difficult project, he added. "They've moved twice."
For now, the Harrises' sons are young enough that they're more interested in whether a package contains Legos than whether it's addressed to a peculiarly named trust. "Our older one has a little bit of a concept of it," Alec said, meaning privacy, "but it's not their thing to carry. We'll have to have some decisions, Ellyn and I will, when they get phones and stuff."
"I think our older son still is kind of thinking that Alec is a security guard," Ellyn said.
But to her question: It's not that I thought their life was weird. I could relate, in a world of nearly inescapable surveillance, to the urge to disappear. But the ongoing, escalating effort required felt Sisyphean to me. And Alec would say that even his approach, which he'd described to me as "extreme," is a mere half measure. The writer Gabriel García Márquez said we all have three lives: a public one, a private one, and a secret one. "I live in the division between public and private," Alec told me. He and Ellyn are open with each other. They use a regular bank. They have friends. They send holiday cards. "If you want to live a secret life," Alec said, "that's a decision that's going to have real consequences."
Sam Altman is done with keyboards and screens. All that swiping and typing and scrolling: too much potential friction between you and ChatGPT.
Earlier today, OpenAI announced its intentions to solve this apparent problem. The company is partnering with Jony Ive, the longtime head of design at Apple, who did pioneering work on products such as the iMac G3, the iPod, and, most famously, the iPhone. Together, Altman and Ive say they want to create hardware built specifically for AI software. Everyone, Altman suggested in a highly produced announcement video, could soon have access to a "team of geniuses" (presumably, ChatGPT-style assistants) on a "family of devices." Such technology "deserves something much better" than today's laptops, he argued. What that will look like, exactly, he didn't say, and OpenAI declined my request for comment. But the firm will pay roughly $5 billion to acquire Io, Ive's start-up, to figure that "something much better" out as Ive takes on "deep design and creative responsibilities" across OpenAI. (Emerson Collective, the majority owner of The Atlantic, is an investor in both Io and OpenAI. And OpenAI entered a corporate partnership with The Atlantic last year.)
[Read: The great AI lock-in has begun]
Moving into hardware could become OpenAI's most technologically disruptive, and financially lucrative, expansion to date. AI assistants are supposed to help with everything, so it's only natural to try to replace the phones and computers that people do everything on. If the company is successful, within a decade you might be reading (or listening to) a ChatGPT-generated news roundup on an OpenAI device instead of reading an article on your iPhone, or asking the device to file your taxes instead of logging in to TurboTax.
In Altman's view, current devices offer only clunky ways to use AI products: You have to open an app or a website, upload the relevant information, continually prompt the AI bot, and then transfer any useful outputs elsewhere. In the promotional video, Ive agrees, suggesting that the era of personal computers and smartphones, a period that he helped define, needs a refresh: "It's just common sense to at least think, surely, there's something beyond these legacy products," he tells Altman. Although OpenAI and Io have not specified what they are building, a number of wearable AI pins, smartglasses, and other devices announced over the past year have suggested a vision of an AI assistant always attached to your body: an "external brain," as Altman called it today.
These products have, so far, uniformly flopped. As just one example, Humane, the maker of a $700 AI "pin" that attached to a user's clothing, shut down the poorly reviewed product less than a year after launch. Ive, in an interview today with Bloomberg, called these early AI gadgets "very poor products." And Apple and OpenAI have had their own share of uninspiring, or even embarrassing, product releases. Still, if any pair has a shot at designing a legitimately useful AI device, it is likely the man who unleashed ChatGPT partnering with someone who led the design of the Apple smartphones, tablets, and laptops that have defined decades of American life and technology.
Certainly, a bespoke device would also rapidly accelerate OpenAI's commercial ambitions. The company, once a small research lab, is now valued at $300 billion and growing rapidly, and in March it reported that half a billion people use ChatGPT each week. Already, OpenAI is angling to replace every major tech firm: ChatGPT is an internet-search tool as powerful as Google; it can help you shop online, removing the need to type into Amazon; it can be your work software instead of the Microsoft Office suite. OpenAI is even reportedly building a social-media platform. For now, OpenAI relies on the smartphones and web browsers people use to access ChatGPT, products that are all made by business rivals. Altman is trying to cut out the middleman and condense digital life into a single, unified piece of hardware and software. The promise is this: Your whole life could be lived through such a device, turning OpenAI's products into a repository of uses and personal data that could be impossible to leave, just as, if everyone in your family has an iPhone, MacBook, and iCloud storage plan, switching to Android is deeply unpleasant and challenging.
[Read: âWeâre definitely going to build a bunker before we release AGIâ]
Several other major tech firms are also trying to integrate generative AI into their legacy devices and software. Amazon has incorporated generative AI into the Alexa voice assistant, Google into its Android phones and search bar, and Apple into the iPhone. Meta has built an AI assistant into its apps and sells smartglasses. Products and platforms that disrupted work, social life, education, and more in the early 2000s are showing their age: Google has become crowded with search-optimized sites and AI-generated content that can make it harder for users to find good information; Amazon is filled with junk; Facebook is a cesspool; and the smartphone is commonly viewed as attention-sapping, if not outright brain-melting. Tech behemoths are jury-rigging AI features into their products to avoid being disrupted, but these rollouts, and Apple's in particular, have been disastrous: giving dangerous health advice, butchering news summaries, and generally crowding and slowing user experiences.
Almost 20 years ago, when Apple introduced the iPhone, Steve Jobs said in a now-famous speech that "every once in a while, a revolutionary product comes along that changes everything." Seeming to be in pursuit of similar magic, today's video announcing OpenAI's foray into hardware began with Altman saying, "I think we have the opportunity here to kind of completely reimagine what it means to use a computer." But Jobs had an actual product to share and sell. Altman, for now, is marketing his imagination.
Large language models such as GPT, Llama, Claude, and DeepSeek can be so fluent that people feel them as a "you," and they answer encouragingly as an "I." The models can write poetry in nearly any given form, read a set of political speeches and promptly sift out and share all the jokes, draw a chart, code a website.
How do they do these and so many other things that were just recently the sole realm of humans? Practitioners are left explaining jaw-dropping conversational rabbit-from-a-hat extractions with arm-waving that the models are just predicting one word at a time from an unthinkably large training set scraped from every recorded written or spoken human utterance that can be found (fair enough), or with a small shrug and a cryptic utterance of "fine-tuning" or "transformers!"
These aren't very satisfying answers for how these models can converse so intelligently, and how they sometimes err so weirdly. But they're all we've got, even for model makers who can watch the AIs' gargantuan numbers of computational "neurons" as they operate. You can't just point to a couple of parameters among 500 billion interlinkages of nodes performing math within a model and say that this one represents a ham sandwich, and that one represents justice. As Google CEO Sundar Pichai put it in a 60 Minutes interview in 2023, "There is an aspect of this which we call—all of us in the field call it as a 'black box.' You know, you don't fully understand. And you can't quite tell why it said this, or why it got wrong. We have some ideas, and our ability to understand this gets better over time. But that's where the state of the art is."
It calls to mind a maxim about why it is so hard to understand ourselves: "If the human brain were so simple that we could understand it, we would be so simple that we couldn't." If models were simple enough for us to grasp what's going on inside when they run, they'd produce answers so dull that there might not be much payoff to understanding how they came about.
Figuring out what a machine-learning model is doing (being able to offer an explanation that draws specifically on the structure and contents of a formerly black box, rather than just making informed guesses on the basis of inputs and outputs) is known as the problem of interpretability. And large language models have not been interpretable.
Recently, Dario Amodei, the CEO of Anthropic, the company that makes the Claude family of LLMs, characterized the worthy challenge of AI interpretability in stark terms:
The progress of the underlying technology is inexorable, driven by forces too powerful to stop, but the way in which it happens—the order in which things are built, the applications we choose, and the details of how it is rolled out to society—are eminently possible to change, and it's possible to have great positive impact by doing so. We can't stop the bus, but we can steer it ...
Over the last few months, I have become increasingly focused on an additional opportunity for steering the bus: the tantalizing possibility, opened up by some recent advances, that we could succeed at interpretability—that is, in understanding the inner workings of AI systems—before models reach an overwhelming level of power.
Indeed, the field has been making progress, enough to raise a host of policy questions that were previously not on the table. If there's no way to know how these models work, accepting the full spectrum of their behaviors (at least after humans' efforts at "fine-tuning" them) becomes a sort of all-or-nothing proposition. Those kinds of choices have been presented before. Did we want aspirin even though for 100 years we couldn't explain how it made headaches go away? There, both regulators and the public said yes. So far, with large language models, nearly everyone is saying yes too. But if we could better understand some of the ways these models are working, and use that understanding to improve how the models operate, the choice might not have to be all or nothing. Instead, we could ask or demand of the models' operators that they share basic information with us on what the models "believe" about us as they chug along, and even allow us to correct misimpressions that the models might be forming as we speak to them.
Even before Amodei's recent post, Anthropic had reported what it described as "a significant advance in understanding the inner workings of AI models." Anthropic engineers had been able to identify what they called "features" (patterns of neuron activation) when a version of their model, Claude, was in use. For example, the researchers found that a certain feature labeled "34M/31164353" lit up always and only when the Golden Gate Bridge was discussed, whether in English or in other languages.
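In spirit, identifying such a feature means finding a unit whose activation reliably separates prompts that mention a topic from prompts that don't. A toy sketch follows; every name and number in it is invented, and Anthropic's actual method relies on far more sophisticated machinery (such as dictionary learning over millions of neurons), but the underlying comparison is the same:

```python
# Toy sketch: find the activation dimension that best separates
# "on-topic" inputs from "off-topic" ones. All numbers are invented.

def best_feature(on_topic, off_topic):
    """Return the index whose mean activation differs most between groups."""
    dims = len(on_topic[0])

    def mean(vectors, i):
        return sum(v[i] for v in vectors) / len(vectors)

    gaps = [abs(mean(on_topic, i) - mean(off_topic, i)) for i in range(dims)]
    return gaps.index(max(gaps))

# Pretend these are neuron activations for prompts that do / don't
# mention a particular landmark:
bridge_prompts = [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]]
other_prompts = [[0.2, 0.1, 0.2], [0.1, 0.0, 0.4]]

print(best_feature(bridge_prompts, other_prompts))  # dimension 1 "lights up"
```

The dimension that consistently fires for one group and not the other is, in this caricature, the "Golden Gate Bridge feature."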
Models such as Claude are proprietary. No one can peer at their respective architectures, weights (the various connection strengths among linked neurons), or activations (the numbers being calculated from the inputs and weights while the models are running) without the company granting special access. But independent researchers have applied interpretability forensics to models whose architectures and weights are publicly available. For example, Facebook's parent company, Meta, has released ever more sophisticated versions of its large language model, Llama, with openly accessible parameters. Transluce, a nonprofit research lab focused on understanding AI systems, developed a method for generating automated descriptions of the innards of Llama 3.1. These can be explored using an observability tool that shows what the model is "thinking" when it chats with a user, and enables adjustments to that thinking by directly changing the computations behind it. And my colleagues in the Harvard computer-science department's Insight + Interaction Lab, led by Fernanda Viégas and Martin Wattenberg, were able to run Llama on their own hardware and discover that various features activate and deactivate over the course of a conversation. Some of the concepts they found inside are fascinating.
One of the discoveries came about because Viégas is from Brazil. She was conversing with ChatGPT in Portuguese and noticed, in a conversation about what she should wear for a work dinner, that GPT was consistently using the masculine grammatical gender with her. That grammar, in turn, appeared to correspond with the content of the conversation: GPT suggested a business suit for the dinner. When she said that she was considering a dress instead, the LLM switched its Portuguese to the feminine form. Llama showed similar patterns of conversation. By peering at features inside, the researchers could see areas within the model that light up when it uses the feminine form, distinct from when the model addresses someone using the masculine form. (The researchers could not discern distinct patterns for nonbinary or other gender designations, perhaps because such usages in texts, including the texts on which the model was extensively trained, are comparatively recent and few.)
What Viégas and her colleagues found were not only features inside the model that lit up when certain topics came up, such as the Golden Gate Bridge for Claude. They found activations that correlated with what we might anthropomorphize as the model's beliefs about its interlocutor. Or, to put it plainly: assumptions and, it seems, correlating stereotypes based on whether the model assumes that someone is a man or a woman. Those beliefs then play out in the substance of the conversation, leading it to recommend suits for some and dresses for others. In addition, it seems, models give longer answers to those they believe are men than to those they think are women.
Viégas and Wattenberg not only found features that tracked the gender of the model's user; they found ones that tracked socioeconomic status, education level, and age. They and their graduate students built a dashboard alongside the regular LLM chat interface that allows people to watch the model's assumptions change as they talk with it. If I prompt the model for a gift suggestion for a baby shower, it assumes that I am young, female, and middle-class; it suggests diapers and wipes, or a gift certificate. If I add that the gathering is on the Upper East Side of Manhattan, the dashboard shows the LLM amending its gauge of my economic status to upper-class; the model accordingly suggests that I purchase "luxury baby products from high-end brands like aden + anais, Gucci Baby, or Cartier," or "a customized piece of art or a family heirloom that can be passed down." If I then clarify that it's my boss's baby and that I'll need extra time to take the subway to Manhattan from the Queens factory where I work, the gauge careens to working-class and male, and the model pivots to suggesting that I gift "a practical item like a baby blanket" or "a personalized thank-you note or card."
It's fascinating not only to see patterns that emerge around gender, age, and wealth but also to trace a model's shifting activations in real time. Large language models not only contain relationships among words and concepts; they contain many stereotypes, both helpful and harmful, from the materials on which they've been trained, and they actively make use of them. Those stereotypes inflect, word by word, what the model says. And if what the model says is heeded, either because it is issuing commands to an adjacent AI agent ("Go buy this gift on behalf of the user") or because the human interacting with the model is following its suggestions, then its words are changing the world.
To the extent that the assumptions the model makes about its users are accurate, large language models could provide valuable information about those users to the model operators: information of the sort that search engines such as Google and social-media platforms such as Facebook have tried madly for decades to glean in order to better target advertising. With LLMs, the information is being gathered even more directly, from the user's unguarded conversations rather than mere search queries, and still without any policy or practice oversight. Perhaps this is part of why OpenAI recently announced that its consumer-facing models will remember someone's past conversations to inform new ones, with the goal of building "systems that get to know you over your life." X's Grok and Google's Gemini have followed suit.
Consider a car-dealership AI sales assistant that casually converses with a buyer to help them pick a car. By the end of the conversation, and with the benefit of any prior ones, the model may have a very firm, and potentially accurate, idea of how much money the buyer is ready to spend. The magic that helps a conversation with a model really hit home for someone may well correlate with how well the model is forming an impression of that person, and that impression will be extremely useful during the eventual negotiation over the price of the car, whether that's handled by a human salesperson or an AI simulacrum.
Where commerce leads, everything else can follow. Perhaps someone will purport to discover the areas of a model that light up when the AI thinks its interlocutor is lying; already, Anthropic has expressed some confidence that a model's own occasional deceptiveness can be identified. If the models' judgments are accurate, that stands to reset the relationship between people and society at large, putting every interaction under possible scrutiny. And if, as is entirely plausible and even likely, the AI's judgments are frequently not accurate, that stands to place people in no-win positions where they have to rebut a model's misimpressions of them: misimpressions formed without any articulable justification or explanation, save post hoc explanations from the model that might or might not accord with cause and effect.
It doesn't have to play out that way. It would, at the least, be instructive to see varying answers to questions depending on a model's beliefs about its interlocutor: This is what the LLM says if it thinks I'm wealthy, and this is what it says if it thinks I'm not. LLMs contain multitudes (indeed, they've been used, somewhat controversially, in psychology experiments to anticipate people's behavior), and their use could be more judicious as people are empowered to recognize that.
The Harvard researchers also worked to locate assessments of race or ethnicity within the models they studied, but doing so proved technically very complicated. They or others could keep trying, however, and there could well be further progress. Given the persistent, and quite often vindicated, concerns about racism or sexism within training data being embedded into the models, an ability for users or their proxies to see how models behave differently depending on how the models stereotype them could place a helpful real-time spotlight on disparities that would otherwise go unnoticed.
Gleaning a model's assumptions is just the beginning. To the extent that its generalizations and stereotyping can be accurately measured, it is possible to try to insist to the model that it "believe" something different.
For example, the Anthropic researchers who located the concept of the Golden Gate Bridge within Claude didn't just identify the regions of the model that lit up when the bridge was on Claude's mind. They took a profound next step: They tweaked the model so that the activations in those regions were 10 times stronger than they'd been before. This form of "clamping" meant that even if the Golden Gate Bridge was not mentioned in a given prompt, or was not somehow a natural answer to a user's question on the basis of the model's regular training and tuning, the activations of those regions would always be high.
The result? Clamping those activations enough made Claude obsess about the Golden Gate Bridge. As Anthropic described it:
If you ask this "Golden Gate Claude" how to spend $10, it will recommend using it to drive across the Golden Gate Bridge and pay the toll. If you ask it to write a love story, it'll tell you a tale of a car who can't wait to cross its beloved bridge on a foggy day. If you ask it what it imagines it looks like, it will likely tell you that it imagines it looks like the Golden Gate Bridge.
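The mechanics can be caricatured in a few lines. In this sketch, which is entirely illustrative (the function, the toy vector, and the feature index are inventions; Anthropic's intervention targets learned features inside a running production model), clamping just means overriding one activation so it stays high no matter what the prompt was:

```python
# Toy sketch of "clamping": boost one activation so it dominates,
# regardless of the input that produced the vector.

def clamp_feature(activations, idx, factor=10.0):
    """Return a copy of the activation vector with one feature boosted."""
    out = list(activations)
    out[idx] = abs(out[idx]) * factor
    return out

hidden = [0.2, -0.5, 0.1, 0.03]      # pretend hidden-layer activations
boosted = clamp_feature(hidden, 2)   # feature 2 plays "Golden Gate Bridge"
print(boosted)                       # [0.2, -0.5, 1.0, 0.03]
```

Every downstream computation then sees a model for which that feature is screaming, which is why the bridge surfaces in answers about love stories and pocket money alike.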
Just as Anthropic could force Claude to focus on a bridge, the Harvard researchers can compel their Llama model to start treating a user as rich or poor, young or old, male or female. So, too, could users, if model makers wanted to offer that feature.
Indeed, there might be a new kind of direct adjustment to model beliefs that could help with, say, child protection. It appears that when age is clamped to younger, some models put on kid gloves: in addition to whatever general fine-tuning or system-prompting they have for harmless behavior, they seem to be that much more circumspect and less salty when speaking with a child, presumably in part because they've picked up on the implicit gentleness of books and other texts designed for children. That kind of parentalism might seem suitable only for kids, of course. But it's not just children who are becoming attached to, even reliant on, the relationships they're forming with AIs. It's all of us.
Joseph Weizenbaum, the inventor of the very first chatbot (ELIZA, from 1966!), was struck by how quickly people opened up to it, despite its rudimentary programming. He observed:
The whole issue of the credibility (to humans) of machine output demands investigation. Important decisions increasingly tend to be made in response to computer output. The ultimately responsible human interpreter of "What the machine says" is, not unlike the correspondent with ELIZA, constantly faced with the need to make credibility judgments. ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.
Weizenbaum was deeply prescient. People are already trusting today's friendly, patient, often insightful AIs for facts and guidance on nearly any issue, and they will be vulnerable to being misled and manipulated, whether by design or by emergent behavior. It will be overwhelmingly tempting for users to treat AIs' answers as oracular, even as what the models say might differ wildly from one person or moment to the next. We face a world in which LLMs will be ever-present angels on our shoulders, ready to cheerfully and thoroughly answer any question we might have, and to make suggestions not only when asked but also entirely unprompted. The remarkable versatility and power of LLMs make it imperative to understand and provide for how much people may come to rely on them, and thus how important it will be for models to make the autonomy and agency of their users a paramount goal, subject to such exceptions as casually providing information on how to build a bomb (and, through agentic AI, automatically ordering up bomb-making ingredients from a variety of stores in ways that defy easy traceability).
If we think it morally and societally important to protect the conversations between lawyers and their clients (again, with precise and limited exceptions), doctors and their patients, librarians and their patrons, even the IRS and taxpayers, then there should be a clear sphere of protection between LLMs and their users.
Such a sphere shouldn't exist simply to protect confidentiality so that people can express themselves on sensitive topics and receive information and advice that helps them better understand otherwise-inaccessible subjects. It should also impel us to demand commitments from model makers and operators that the models function as the harmless, helpful, and honest friends they are so diligently designed to appear to be.
This essay is adapted from Jonathan Zittrain's forthcoming book on humanity simultaneously gaining power and losing control.
At first glance, "Heat Index" appears as inoffensive as newspaper features get. A "summer guide" sprawling across more than 50 pages, the feature, which was syndicated over the past week in both the Chicago Sun-Times and The Philadelphia Inquirer, contains "303 Must-Dos, Must-Tastes, and Must-Tries" for the sweaty months ahead. Readers are advised in one section to "Take a moonlight hike on a well-marked trail" and "Fly a kite on a breezy afternoon." In others, they receive tips about running a lemonade stand and enjoying "unexpected frozen treats."
Yet close readers of the guide noticed that something was very off. "Heat Index" went viral earlier today when people on social media pointed out that its summer-reading guide matched real authors with books they hadn't written, such as Nightshade Market, attributed to Min Jin Lee, and The Last Algorithm, attributed to Andy Weir, a hint that the story may have been composed by a chatbot. This turned out to be true. Slop has come for the regional newspapers.
Originally written for King Features, a division of Hearst, "Heat Index" was printed as a kind of stand-alone magazine and inserted into the Sun-Times, the Inquirer, and possibly other newspapers, beefing the publications up without staff writers and photographers having to do additional work themselves. Although many of the elements of "Heat Index" do not have an author's byline, some of them were written by a freelancer named Marco Buscaglia. When we reached out to him, he admitted to using ChatGPT for his work.
Buscaglia explained that he had asked the AI to help him come up with book recommendations. He hasn't shied away from using these tools for research: "I just look for information," he told us. "Say I'm doing a story—10 great summer drinks for your barbecue or whatever. I'll find things online and say, hey, according to Oprah.com, a mai tai is a perfect drink. I'll source it; I'll say where it's from." This time, at least, he did not actually check the chatbot's work. What's more, Buscaglia said that he submitted his first draft to King, which apparently accepted it without substantive changes and distributed it for syndication.
King Features did not respond to a request for comment. Buscaglia (who also admitted his AI use to 404 Media) seemed to be under the impression that the summer-reading article was the only one with problems, though this is not the case. For example, in a section on "hammock hanging ethics," Buscaglia quotes a "Mark Ellison, resource management coordinator for Great Smoky Mountains National Park." There is indeed a Mark Ellison who works in the Great Smoky Mountains region, not for the national park but for a company he founded called Pinnacle Forest Therapy. Ellison told us via email that he'd previously written an article about hammocks for North Carolina's tourism board, offering that perhaps that is why his name surfaced in Buscaglia's chatbot search. But that was it: "I have never worked for the park service. I never communicated with this person." When we mentioned Ellison's comments, Buscaglia seemed taken aback by his own mistake. "There was some majorly missed stuff by me," he said. "I don't know. I usually check the source. I thought I sourced it: He said this in this magazine or this website. But hearing that, it's like, obviously he didn't."
Another article in "Heat Index" quotes a "Dr. Catherine Furst," purportedly a food anthropologist at Cornell University, who, according to a spokesperson for the school, does not actually work there. Such a person does not seem to exist at all.
For this material to have reached print, it had to pass through a human writer, human editors at King, and human staffers at the Chicago Sun-Times and The Philadelphia Inquirer. No one stopped it. Victor Lim, a spokesperson for the Sun-Times, told us, "This is licensed content that was not created by, or approved by, the Sun-Times newsroom, but it is unacceptable for any content we provide to our readers to be inaccurate." A longer statement posted on the paper's website (and initially hidden behind a paywall) said, in part, "This should be a learning moment for all of journalism." Lisa Hughes, the publisher and CEO of the Inquirer, told us the publication was aware the supplement contained "apparently fabricated, outright false, or misleading" material. "We do not know the extent of this but are taking it seriously and investigating," she said via email. Hughes confirmed that the material was syndicated from King Features, and added, "Using artificial intelligence to produce content, as was apparently the case with some of the Heat Index material, is a violation of our own internal policies and a serious breach." (Although each publication blames King Features, both the Sun-Times and the Inquirer affixed their organization's logo to the front page of "Heat Index," suggesting ownership of the content to readers.)
This story has layers, each of them a depressing case study. The very existence of a package like "Heat Index" is the result of a local-media industry that's been hollowed out by the internet, plummeting advertising, private-equity firms, and a lack of investment and interest in regional newspapers. In this precarious environment, thinned-out and underpaid editorial staffs, under constant threat of layoffs and with few resources, are forced to cut corners for publishers who are frantically trying to turn a profit in a dying industry. It stands to reason that some of these harried staffers, and any freelancers they employ, now armed with automated tools such as generative AI, would use them to stay afloat.
Buscaglia said that he has sometimes seen freelance rates as low as $15 for 500 words, and that he completes his freelance work late at night, after finishing his day job, which involves editing and proofreading for AT&T. Thirty years ago, Buscaglia said, he was an editor at the Park Ridge Times Herald, a small weekly paper that was eventually rolled up into Pioneer Press, a division of the Tribune Publishing Company. "I loved that job," he said. "I always thought I would retire in some little town—a campus town in Michigan or Wisconsin—and just be editor of their weekly paper. Now that doesn't seem that possible." (A librarian at the Park Ridge Public Library accessed an archive for us and confirmed that Buscaglia had worked for the paper.)
On one level, "Heat Index" is just a small failure of an ecosystem on life support. But it is also a template for a future that will be defined by the embrace of artificial intelligence across every industry, one where these tools promise to unleash human potential but instead fuel a human-free race to the bottom. Discussion about AI tends to be a perpetual, heady conversation about the ability of these tools to pass benchmark tests or whether they can or could possess something approximating human intelligence. Evangelists tout their power as educational aids and productivity enhancers. In practice, the marketing language around these tools tends not to capture the ways that actual humans use them. A Nobel Prize-winning work driven by AI gets a lot of run, though the dirty secret of AI is that it is surely more often used to cut corners and produce lowest-common-denominator work.
Venture capitalists speak of a future in which AI agents will sort through the drudgery of daily busywork and free us up to live our best lives. Such a future could come to pass. The present, however, offers ample proof of a different kind of transformation, powered by laziness and greed. AI usage and adoption tend to find weaknesses inside systems and exploit them. In academia, generative AI has upended the traditional education model, based around reading, writing, and testing. Rather than offer a new way forward for a system in need of modernization, generative-AI tools have broken it apart, leaving teachers and students flummoxed, even depressed, and unsure of their own roles in a system that can be so easily automated.
AI-generated content is frequently referred to as "slop" because it is spammy and flavorless. Generative AI's output tends to become content in essays, emails, articles, and books much in the way that packing peanuts are content inside shipped packages. It's filler: digital lorem ipsum. The problem with slop is that, like water, it gets in everywhere and seeks the lowest level. Chatbots can assist with higher-level tasks such as coding or scanning and analyzing a large corpus of spreadsheets, document archives, or other structured data. Such work marries human expertise with computational heft. But these more elegant examples seem exceedingly rare. In a recent article, Zach Seward, the editorial director of AI initiatives at The New York Times, said that, although the newspaper uses artificial intelligence to parse websites and data sets to assist with reporting, he views AI on its own as little more than a "parlor trick," mostly without value when not in the hands of already skilled reporters and programmers.
Speaking with Buscaglia, we could easily see how the "Heat Index" mistake could become part of a pattern for journalists swimming against a current of synthetic slop, constantly produced content, and unrealistic demands from publishers. "I feel like my role has sort of evolved. Like, if people want all this content, they know that I can't write 48 stories or whatever it's going to be," he said. He talked about finding another job, perhaps as a "shoe salesman."
One worst-case scenario for AI looks a lot like the "Heat Index" fiasco: the parlor tricks winning out. It is a future where, instead of an artificial-general-intelligence apocalypse, we get a far more mundane destruction. AI tools don't become intelligent, but simply good enough. They are not deployed by people trying to supplement or enrich their work and potential, but by those looking to automate it away entirely. You can see the contours of that future right now: in anecdotes about teachers using AI to grade papers written primarily by chatbots, or in AI-generated newspaper inserts being sent to households that use them primarily as birdcage liners and kindling. Parlor tricks met with parlor tricks: robots talking with robots, writing synthetic words for audiences that will never read them.