
The Verge - Artificial Intelligence

Explore how AI is seamlessly integrating into our lives, beyond the hype, reshaping technology's role across various sectors.

April 3, 2025  19:25:52

Amazon is testing a new “Buy for Me” button that will let you purchase products from third-party websites without leaving the e-commerce giant’s mobile app. The feature is powered by agentic AI, allowing the company to purchase items on your behalf.

Last month, Amazon rolled out a test that directs you to other brands’ websites for products it doesn’t sell. But now, instead of directing you to the website to fill out your payment details and shipping address, “Buy for Me” is supposed to do all the work for you. The feature runs on Amazon’s Nova AI system, which now includes a new model capable of performing actions within your browser, along with Anthropic’s Claude.

When you tap on an item that supports the feature, you’ll see all the product details directly within the Amazon app. Pressing the “Buy for Me” button will bring up an Amazon checkout page, where you can verify your payment information.

Amazon will then use AI to “securely” provide your “encrypted name, address, and payment details to complete the checkout process on the brand’s website.” The company says it can’t view previous or separate orders from third-party sites. Even though you’ll be able to track your orders directly on Amazon, you’ll have to visit the other brand’s site for customer service and returns.

Amazon doesn’t say whether it will get a cut of a “Buy for Me” purchase but notes that third-party companies can opt out. “Buy for Me” is currently available to a “subset” of users in the US on iOS and Android devices. Amazon is also testing it with a limited number of brands and products for now, but it plans to expand it in the future.

April 3, 2025  16:42:26

Two leading AI labs, OpenAI and Anthropic, just announced major initiatives in higher education. It’s the constant one-upping we’ve all become familiar with: this week, Anthropic dropped its announcement at 8AM Wednesday, while OpenAI followed with nearly identical news at 8AM Thursday.

For Anthropic, this week’s announcement was its first major academic push. It launched Claude for Education, a university-focused version of its chatbot. The company also announced partnerships with Northeastern University, London School of Economics (LSE), and Champlain College, along with Internet2, which builds university tech infrastructure, and Instructure (maker of Canvas) to increase “equitable access to tools that support universities as they integrate AI.”

At the center of Anthropic’s education-focused offering is “Learning mode,” a new feature that changes how Claude interacts with students. Instead of just providing answers, the press release says Learning mode will use Socratic questioning to guide students through problems, asking “How would you approach this?” or “What evidence supports your conclusion?” — with the goal of helping students ”develop critical thin …

Read the full story at The Verge.

April 3, 2025  16:19:21

Though Blumhouse’s first M3gan feature was a near-perfect blend of techno-horror and ridiculous comedy, the sequel looks like it’s going to blow its predecessor out of the water.

Set a couple of years after the first film, M3gan 2.0 once again centers roboticist Gemma (Allison Williams) and her niece Cady (Violet McGraw) — two of the few people who managed to survive M3gan’s (Amie Donald / Jenna Davis) initial murder spree. 

As an advocate for stricter regulations on the very same kind of artificial intelligence she helped create, Gemma knows how dangerous it would be if she were to bring M3gan back. But when a defense contractor uses Gemma’s code to create an even more advanced killer robot called Amelia (Ivanna Sakhno), who immediately sets out to conquer the world, Gemma has no choice but to give M3gan a second chance at life.

While the first M3gan was definitely silly, this trailer’s use of “Oops!… I Did It Again” and its focus on M3gan flying through the sky in a wingsuit make clear that writer Akela Cooper and director Gerard Johnstone are fully leaning into the franchise’s camp. Blumhouse’s other recent AI panic horrors haven’t exactly sparked joy, but M3gan 2.0 feels like it’s going to be one hell of a good time when it hits theaters on June 27th.

April 3, 2025  15:52:52

On today’s episode of Decoder, we’re talking about AI, art, and the controversial collision between the two — a debate that, to be honest, is an absolute mess. If you’ve been on the internet this past week, you undoubtedly know that controversy was just kicked up a notch by the Studio Ghibli memes — pictures cribbing the style of the legendary Japanese film studio. These images, powered by OpenAI’s new image generator, are everywhere; OpenAI CEO Sam Altman has even been posting some examples to his personal X account. And they’ve widened an already pretty stark rift between AI boosters and critics. 

Brian Merchant, a good friend of The Verge and author of the newsletter and book Blood in the Machine, wrote one of the best analyses of the Ghibli trend last week. So I invited him onto the show to discuss this particular situation and also to help me figure out the ongoing AI art debate more broadly as it continues to collide with legal frameworks like copyright. 

Merchant and I tend to agree a lot more than we disagree when it comes to the technology industry. So I did my best to really take the other side here and push on these ideas as hard as I could. Technology and art have always been in a dance with each other; that’s part of the founding ethos of The Verge. So I think it’s important to put AI in that context — not least because we can see the obvious joy regular people find in using some of these tools to express themselves in ways they might not otherwise be able to.

But there’s expressing yourself, and then there’s churning out AI anime slop that is designed to evoke classics like Spirited Away and My Neighbor Totoro in a way that devalues and even outright steals from actual human artists. Add in the way the Trump administration jumped on the trend by “Ghibli-fying” a deportation photo, and it’s not hard to see why a lot of folks perceive this tool as utterly grotesque and offensive. Or “an insult to life itself,” as Ghibli cofounder Hayao Miyazaki once famously said of an AI demo he witnessed in 2016.

So you’ll hear Merchant and me really go back and forth, digging in on what this all means — for art and artists, and for a creative economy that has long since transitioned from the world of physical scarcity to one of limitless digital supply. And, most importantly, we spent a lot of time talking about how we should feel using these tools at all when they might pose very real threats to people’s livelihoods and the ongoing climate crisis.

I’ll warn you: there are no easy answers here, and I don’t think Merchant and I came to a single conclusion. I don’t even think we wanted to. But I think this conversation helped me consider more clearly how to think about AI and art. Let me know what you think.

If you’d like to read more on what we talked about in this episode, check out the links below:

  • OpenAI’s Studio Ghibli meme factory is an insult to art itself | Brian Merchant
  • Seattle engineer’s Ghibli-style image goes viral | Seattle Times
  • OpenAI just raised another $40 billion round from SoftBank | The Verge
  • ChatGPT “added one million users in the last hour.” | The Verge
  • ChatGPT’s Ghibli filter is political now, but it always was | The Verge
  • OpenAI, Google ask the government to let them train on content they don’t own | The Verge
  • Studio Ghibli in the age of A.I. reproduction | Max Read
  • OpenAI has a Studio Ghibli problem | Vergecast
  • AI slop is a brute force attack on the algorithms that control reality | 404 Media
  • The New Aesthetics of Fascism | New Socialist

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

April 3, 2025  15:10:02

When President Donald Trump began yesterday’s announcement of the White House’s latest trade policy brandishing a novelty-sized cardboard sign labeled “Reciprocal Tariffs,” the immediate and nearly unanimous response was bafflement. Trump slapped a 10 percent baseline tariff on all imports into the US, including from uninhabited islands, plus absurdly high rates on specific countries, supposedly based on “tariffs charged to the USA” — which didn’t match up to other, non-cardboard-sign-based estimates. Stock markets have plummeted and consumers are facing down sharp price hikes on potentially almost everything they buy. 

Where did these numbers come from? Apparently, an oversimplified calculation that several major AI chatbots happen to recommend.

Economist James Surowiecki quickly reverse-engineered a possible explanation for the tariff pricing. He found you could recreate each of the White House’s numbers by simply taking a given country’s trade deficit with the US and dividing it by their total exports to the US. Halve that number, and you get a ready-to-use “discounted reciprocal tariff.” The White House objected to this claim and published the formula it …

Read the full story at The Verge.
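As a back-of-the-envelope check, the calculation Surowiecki described — deficit divided by imports, halved, with the 10 percent baseline as a floor — can be sketched in a few lines of Python. The function name and the dollar figures below are purely illustrative, not anything the White House published:

```python
def reciprocal_tariff(trade_deficit: float, imports_from_country: float,
                      baseline: float = 0.10) -> float:
    """Surowiecki's reverse-engineered formula: a country's trade deficit
    with the US divided by its total exports to the US, halved, and
    floored at the 10 percent baseline applied to all imports."""
    rate = (trade_deficit / imports_from_country) / 2
    return max(rate, baseline)

# Hypothetical country: a $100B deficit on $200B of exports to the US
# gives (100 / 200) / 2 = 0.25, i.e. a 25 percent "discounted" tariff.
print(reciprocal_tariff(100, 200))   # 0.25

# A small deficit falls below the floor and gets the 10 percent baseline.
print(reciprocal_tariff(5, 200))     # 0.1
```

Running real countries’ trade figures through a formula like this is how Surowiecki matched the numbers on the cardboard sign.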

April 3, 2025  10:48:46

Google has added a new feature to NotebookLM that lets the AI note-taking tool find its own web sources to summarize and narrate. Instead of manually uploading sources like documents or YouTube links, users can now tap the “Discover” button and simply describe the topic they want to get a better understanding of, with the tool then gathering web sources around the subject.

Google says the Discover feature started rolling out on Wednesday, and will take “about a week or so” to be available to all users.

NotebookLM will hunt through “hundreds of potential web sources in seconds” according to Google, analyzing the most relevant options and then presenting a list of up to ten recommendations, each with a summary explaining its relevance. Users can select which of these sources they want NotebookLM to reference, and import them to use in other features, including FAQs, Briefing Docs, and podcast-like Audio Overviews that use AI hosts to discuss a topic.

A GIF demonstrating NotebookLM’s new Discover sources feature.

Sources will be saved within NotebookLM to allow users to read them directly and use them as references for citations, note-taking, and question-answering capabilities. Google says that Discover sources is the first of several Gemini-powered NotebookLM features that are being developed to make it easier for users to find relevant notebook reference materials.

Another capability spun from this is “I’m Feeling Curious” — a button that prompts NotebookLM to generate sources on a completely random topic. It’s a good way to see what the feature is capable of, but also a fun way to learn about new subjects, much like Wikipedia’s random article feature.

April 2, 2025  17:20:07

Sissie Hsiao, the Google exec who oversaw the launch of the company’s AI chatbot, is stepping down as head of the Gemini app, according to a report from Semafor. A memo seen by the outlet reveals that Google Labs vice president Josh Woodward will take her place.

In the memo, Google DeepMind CEO Demis Hassabis said the move will “sharpen our focus on the next evolution of the Gemini app,” reports Semafor. Google spokesperson Alex Joseph confirmed Semafor’s reporting but declined to comment.

Hsiao has worked at Google for nearly 20 years, first joining the company in 2006 as a product manager for Search and Docs. Google appointed Hsiao as the head of Gemini apps in 2021, where she most notably helped launch the company’s answer to ChatGPT, originally called Bard. Hsiao’s memo said she will take a “short break” before coming back to Google in a new position, Semafor reports.

As noted by Semafor, Woodward will remain the head of Google Labs “while shaping the next chapter of Gemini.” Woodward helped develop NotebookLM, Google’s AI-powered note-taking app, which features an Audio Overview tool that transforms research into a podcast with two AI “hosts.” Google has since brought NotebookLM to its One AI Premium subscription and now lets users make AI podcasts from Gemini’s Deep Research, too.

April 2, 2025  13:00:00

Adobe is updating Premiere Pro with AI-powered features that aim to provide creatives with faster and better video editing results. Version 25.2 of Premiere Pro is launching today, bringing tools for locating, translating, and extending video footage out of beta and into general availability for every user.

The most notable is Generative Extend, which Adobe announced in October as one of the first tools powered by its Firefly generative AI video model. The feature allows users to extend clips by up to two seconds, providing more options for transitions or correcting unexpected movements without having to reshoot footage. Generative Extend can now generate clips in 4K quality and will extend ambient background audio — up to ten seconds for audio alone, or two when paired with video extension — though this won’t extend speech or music.

Generative Extend is completely free to use for a “limited time,” according to Adobe, after which the feature will require users to spend Firefly generative credits. Creative Cloud subscriptions provide a monthly allocation ranging from 25 to 1,000 credits depending on the plan. An additional Firefly credit subscription is also available starting at $10, which grants 2,000 credits per month. Adobe has not specified how many credits the Generative Extend feature will eventually consume, but says that “price will vary based on the format, frame rate, and resolution of your video.”

The latest version of Premiere Pro also includes the new AI-powered Search panel that automatically recognizes the content of clips within your video library. This enables users to search for footage using text descriptions that include objects, locations, camera angles, and effects, such as looking for “close-ups of hands working in a kitchen.” Another feature, Premiere Color Management, automatically transforms log and raw files directly to SDR or HDR without lookup tables to make it easier to jump right into editing, alongside a new wide-gamut color pipeline.

A screenshot of the new search panel inside Adobe Premiere Pro

Premiere Pro can now also use AI to automatically translate video captions into 27 different languages, with users able to display multiple caption tracks simultaneously during editing. Adobe also says that the Premiere Pro update provides better speed and performance across both Apple silicon and Windows devices.

The latest version of Premiere Pro is launching alongside After Effects 25.2, which provides new HDR monitoring capabilities, animation controls, support for 3D FBX models, and animated environmental light effects. A new High Performance Preview Playback feature also makes it easier to preview longer compositions thanks to a new caching system that utilizes both RAM and local disks, rather than RAM alone.

April 3, 2025  15:51:06

What was once a humble research lab has transformed into one of the biggest consumer technology companies of all time.

OpenAI, founded in 2015 to develop artificial general intelligence (AGI) — AI systems with human-level intelligence — has transformed dramatically since launching ChatGPT, which was once considered to be “the fastest-growing consumer application in history.” Most cofounders have left either to create a competitor or work for one. The company has secured billions in funding and partnerships with Apple and Microsoft, even announcing a $500 billion datacenter project called Stargate. Meanwhile, it faces copyright lawsuits from authors and news organizations, legal action from cofounder Elon Musk over the company’s alleged departure from its original mission, and criticism for burning through cash despite projected billions in revenue. After Altman’s brief ouster, OpenAI is now expected to restructure from a nonprofit-led organization to a full for-profit company to stabilize operations and reassure investors.

As San Francisco’s hottest AI company continues to barrel toward ever-growing valuations, its claims become more nebulous. Altman expects we may see “the first AI agents ‘join the workforce’ and materially change the output of companies” in 2025, and says his team is “now confident we know how to build AGI as we have traditionally understood it.” A decade since it was founded, OpenAI has become synonymous with the future of AI, with the tech industry and beyond closely monitoring its next moves.

All of the news and updates about OpenAI continue below.

April 1, 2025  18:31:20
A screenshot from a Runway AI-generated video

AI startup Runway says its latest AI video model can generate consistent scenes and people across multiple shots, according to an announcement. AI-generated videos can struggle with maintaining consistent storytelling, but Runway claims on X that the new model, Gen-4, should allow users more “continuity and control” while telling stories. 

Currently rolling out to paid and enterprise users, the new Gen-4 video synthesis model allows users to generate characters and objects across shots using a single reference image. Users then describe the composition they want, and the model generates consistent outputs from multiple angles.

As an example, the startup released a video of a woman maintaining her appearance in different shots and contexts across a variety of lighting conditions.

The release comes less than a year after Runway announced its Gen-3 Alpha video generator. That model extended the length of videos users could produce, but it also sparked controversy, as it had reportedly been trained on thousands of scraped YouTube videos and pirated films.