The Guardian - AI
Read about the latest happenings in AI with features and news from The Guardian’s global perspective.
Experts explain the pontiff’s appeal as the most recent AI images of Francis, with the singer Madonna, go viral
For the pope, it was the wrong kind of madonna.
The pop legend, she of the 80s anthem Like a Prayer, has stirred controversy in recent weeks by posting deepfake images on social media that show the pontiff embracing her. The images have fanned the flames of a debate already raging over the creation of AI art in which Pope Francis plays a symbolic, and unwilling, role.
One algorithm identified the five strongest notes in each drink more accurately than any one of a panel of experts
Notch up another win for artificial intelligence. Researchers have used the technology to predict the notes that waft off whisky and determine whether a dram was made in the US or Scotland.
The work is a step towards automated systems that can predict the complex aroma of whisky from its molecular makeup. Expert panels usually assess woody, smoky, buttery or caramel aromas, which can help to ensure they don’t vary substantially between batches of the same product.
Exclusive: Coalition of musicians, photographers and newspapers insist existing copyright laws must be respected
Writers, publishers, musicians, photographers, movie producers and newspapers have rejected the Labour government’s plan to create a copyright exemption to help artificial intelligence companies train their algorithms.
In a joint statement, bodies representing thousands of creatives dismissed the proposal made by ministers on Tuesday that would allow companies such as OpenAI, Google and Meta to train their AI systems on published works unless their owners actively opt out.
Ex-staff at outsourcing company Samasource claim they vetted unspeakably graphic videos in harsh conditions
- More than 140 Kenya Facebook moderators sue after diagnoses of severe PTSD
- ‘The work damaged me’: ex-Facebook moderators describe effect of horrific content
Smashing bricks against the side of your house is not a normal way to wind down after work. Nor is biting your own arm or being scared to go to sleep. But that was the reality for one of the 144 people diagnosed with severe post-traumatic stress disorder after moderating horrific content for Facebook.
The young mother in her 20s did the job in Nairobi for more than two years during which she claims she had to vet unspeakably graphic videos. These included extreme sexual deviancy and bestiality, child abuse, torture, dismemberment and murder, which caused her to vomit, according to court filings. On one occasion she recalled having to watch a person having sex with a snake.
Imagination Technologies had licences with two Chinese firms – but said it had not ‘implemented transactions’ that would enable the use of technology for military purposes
Chinese engineers developing chips for artificial intelligence that can be used in “advanced weapons systems” have gained access to cutting-edge UK technology, the Guardian can reveal.
Described by analysts as “China’s premier AI chip designers”, Moore Threads and Biren Technology are subject to US export restrictions over their development of chips that “can be used to provide artificial intelligence capabilities to further development of weapons of mass destruction, advanced weapons systems and hi-tech surveillance applications that create national security concerns”.
AI’s impact on communication | The perfect gift for men | Poll tax revival | Czech mate | The end of history
The AI backlash is here (Losing our voice? Fears AI tone-shifting tech could flatten communication, 11 December). Yesterday, I wrote some notes on Microsoft WordPad and felt a sense of freedom as even my poor spelling wasn’t corrected. My WhatsApp messages are becoming pithy two-fingered salutes to the endless suggestions for improvement. Poor grammar is now a welcome sign of a brain and beating heart. I are human. I speak proper.
Edward Bick
Hereford
• Re Joel Snape’s piece (Worried about what to buy the man in your life for Christmas? The perfect gift may be more modest than you think, 11 December), last Christmas my wife gave me a ball of rubber bands and this year I’m buying them for all my friends. No more bulky clothes pegs to seal those opened food packets.
Melvyn Rust
St Albans
GM is shutting down its robotaxi business, Tesla is creating one of its own – what does the future hold for self-driving?
Welcome back. This week in tech: General Motors says goodbye to robotaxis but not self-driving cars; one woman’s fight to keep AI out of applications for housing; Salt Typhoon; and tech’s donations to Donald Trump. Thank you for joining me.
Tenant-screening systems like SafeRent are often used in place of humans as a way to ‘avoid engaging’ directly with the applicants and pass the blame for a denial to a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs who sued the company.
The property management company told Louis the software alone decided to reject her, but the SafeRent report indicated it was the management company that set the threshold for how high someone needed to score to have their application accepted.
Consultation suggests opt-out scheme for creatives who don’t want their work used by Google, OpenAI and others
Campaigners for the protection of the rights of creatives have criticised a UK government proposal to let artificial intelligence companies train their algorithms on their works under a new copyright exemption.
Book publishers said the proposal put out for consultation on Tuesday was “entirely untested and unevidenced” while Beeban Kidron, a crossbench peer campaigning to protect artists’ and creatives’ rights, said she was “very disappointed”.
Ministry of Defence says risk with Textio tool is low and ‘robust safeguards’ have been put in place by suppliers
An artificial intelligence tool hosted by Amazon and designed to boost UK Ministry of Defence recruitment puts defence personnel at risk of being identified publicly, according to a government assessment.
Data used in the automated Textio system, which improves the drafting of defence job adverts and attracts more diverse candidates by making their language more inclusive, includes the names, roles and emails of military personnel, and is stored using Amazon Web Services (AWS) in the US. This means “a data breach may have concerning consequences, ie identification of defence personnel”, according to documents detailing government AI systems published for the first time today.
- The possibility of inappropriate lesson material being generated by an AI-powered lesson-planning tool used by teachers, based on OpenAI’s powerful large language model GPT-4o. The AI saves teachers time and can personalise lesson plans rapidly in a way that may otherwise not be possible.
- “Hallucinations” by a chatbot deployed to answer queries about the welfare of children in the family courts. However, it also offers round-the-clock information and reduces queue times for people who need to speak to a human agent.
- “Erroneous operation of the code” and “incorrect input data” in HM Treasury’s new PolicyEngine, which uses machine learning to model tax and benefit changes “with greater accuracy than existing approaches”.
- “A degradation of human reasoning” if users of an AI tool that prioritises food hygiene inspection risks become over-reliant on the system. It may also result in “consistently scoring establishments of a certain type much lower”, though it should mean faster inspections of places more likely to break hygiene rules.
The former Friends star criticised the film, which makes extensive use of an AI-driven tool called Metaphysic Live to de-age and face-swap actors
Tom Hanks’ new film Here has been criticised as “an endorsement for AI” by former Friends star Lisa Kudrow.
Kudrow was discussing the implications of ageing with host Dax Shepard on the Armchair Expert podcast and pointed to Here as the harbinger of crisis for the film industry.
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system
The email arrived out of the blue: it was the university code of conduct team. Albert, a 19-year-old undergraduate English student, scanned the content, stunned. He had been accused of using artificial intelligence to complete a piece of assessed work. If he did not attend a hearing to address the claims made by his professor, or respond to the email, he would receive an automatic fail on the module. The problem was, he hadn’t cheated.
Albert, who asked to remain anonymous, was distraught. It might not have been his best effort, but he’d worked hard on the essay. He certainly didn’t use AI to write it: “And to be accused of it because of ‘signpost phrases’, such as ‘in addition to’ and ‘in contrast’, felt very demeaning.” The consequences of the accusation rattled around his mind – if he failed this module, he might have to retake the entire year – but having to defend himself cut deep. “It felt like a slap in the face of my hard work for the entire module over one poorly written essay,” he says. “I had studied hard and was generally a straight-A student – one bad essay suddenly meant I used AI?”
Despite a stellar reference from a landlord of 17 years, Mary Louis was rejected after being screened by firm SafeRent
Three hundred twenty-four. That was the score Mary Louis was given by an AI-powered tenant screening tool. The software, SafeRent, didn’t explain in its 11-page report how the score was calculated or how it weighed various factors. It didn’t say what the score actually signified. It just displayed Louis’s number and determined it was too low. In a box next to the result, the report read: “Score recommendation: DECLINE”.
Louis, who works as a security guard, had applied for an apartment in an eastern Massachusetts suburb. At the time she toured the unit, the management company said she shouldn’t have a problem having her application accepted. Though she had a low credit score and some credit card debt, she had a stellar reference from her landlord of 17 years, who said she consistently paid her rent on time. She would also be using a voucher for low-income renters, guaranteeing the management company would receive at least some portion of the monthly rent in government payments. Her son, also named on the voucher, had a high credit score, indicating he could serve as a backstop against missed payments.
Notifications from a new Apple product falsely suggested the BBC claimed the New York gunman Luigi Mangione had killed himself
The BBC says it has filed a complaint with the US tech giant Apple over AI-generated fake news that was shared on iPhones and attributed to the broadcaster.
Apple Intelligence, which was launched in Britain this week, produces grouped notifications from several information sites that have been generated by artificial intelligence.
OpenAI’s new ‘o1’ system seeks to solve the limits to growth but it raises concerns about control and the risks of smart machines
More than 300 million people use OpenAI’s ChatGPT each week, a testament to the technology’s appeal. This month, the company unveiled a “pro mode” for its new “o1” AI system, offering human-level reasoning — for 10 times the current $20 monthly subscription fee. One of its advanced behaviours appears to be self-preservation. In testing, when the system was led to believe it would be shut down, it attempted to disable an oversight mechanism. When “o1” found memos about its replacement, it tried copying itself and overwriting its core code. Creepy? Absolutely.
More realistically, the move probably reflects the system’s programming to optimise outcomes rather than demonstrating intentions or awareness. The idea of creating intelligent machines induces feelings of unease. In computing this is the gorilla problem: 7m years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. The concern is that just as gorillas lost control over their fate to humans, humans might lose control to superintelligent AI. It is not obvious that we can control machines that are smarter than us.
Singer-songwriter follows Paul McCartney and others in speaking out as ministers consider opt-out system
Kate Bush has called on ministers to protect artists from AI using their copyrighted works amid growing concerns from high-profile creatives and ongoing political uncertainty over how to handle the issue.
The reclusive singer-songwriter has joined the actors Julianne Moore, Kevin Bacon, Rosario Dawson, Stephen Fry and Hugh Bonneville in signing a petition, now backed by over 36,000 creatives, which states the “unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted”.
UK’s AI research body’s ‘ability to be a serious scientific organisation’ is in danger, 90 staff tell trustees
Staff at the UK’s national institute for artificial intelligence have warned that its credibility is in “serious jeopardy” and raised doubts over the organisation’s future amid senior departures and a cost-cutting programme.
More than 90 staff at the government-backed Alan Turing Institute have written to its board of trustees expressing concerns about its leadership.
As Apple Intelligence rollout continues, linguists say tools to rewrite texts and emails can miss nuance and character
Is that you? Or is it the bot? Linguists have said the nuance and character of human language is at risk, as Apple becomes the latest tech firm to launch artificial intelligence tools that can rewrite texts and emails to make users sound more friendly or professional.
The new technology promises the ability to lighten a grumpy missive or turn arcane language into something a five-year-old could understand, and will be available on UK iPhones, iPads and Macs from Wednesday.
Australian federal police says it has ‘no choice’ due to the vast amount of data examined in investigations
- Follow our Australia news live blog for latest updates
- Get our breaking news email, free app or daily news podcast
The Australian federal police says it has “no choice” but to lean into using artificial intelligence, and is increasingly using the technology to search seized phones and other devices, given the vast amount of data examined in investigations.
The AFP’s manager for technology strategy and data, Benjamin Lamont, said investigations conducted by the agency involve an average of 40 terabytes’ worth of data. This includes material from the 58,000 referrals a year it receives at its child exploitation centre, while a cyber incident is being reported every six minutes.
Former Beatle speaks out amid fears the rise of AI threatens income streams for music, news and book publishers
Paul McCartney has backed calls for laws to stop mass copyright theft by companies building generative artificial intelligence, warning AI “could just take over”.
The former Beatle said it would be “a very sad thing indeed” if young composers and writers could not protect their intellectual property from the rise of algorithmic models, which so far have learned by digesting mountains of copyrighted material.
I tried Sora, OpenAI’s new tool, and it just left me sad. Are we ready for a world in which we can never tell what is real?
I recently had the opportunity to see a demo of Sora, OpenAI’s video generation tool which was released in the US on Monday, and it was so impressive it made me worried for the future. The new technology works like an AI text or image generator: write a prompt, and it produces a short video clip. In the pre-launch demo I was shown, an OpenAI representative asked the tool to create footage of a tree frog in the Amazon, in the style of a nature documentary. The result was uncannily realistic, with aerial camera shots swooping down on to the rainforest, before settling on a closeup of the frog. The animal looked as vivid and real as any nature documentary subject.
Yet despite the technological feat, as I watched the tree frog I felt less amazed than sad. It certainly looked the part, but we all knew that what we were seeing wasn’t real. The tree frog, the branch it clung to, the rainforest it lived in: none of these things existed, and they never had. The scene, although visually impressive, was hollow.
Victoria Turk is a London-based journalist covering technology, culture and society