AI Platforms and Concerns

Articles and Resources about Technology Infrastructure that Impacts Global Systems

“The most interesting, yet challenging aspect of my job is [working out] how we get that balance between being really bold, moving at velocity, tremendous pace and innovation, and at the same time doing it responsibly, safely, ethically,” said Tom Lue, a Google DeepMind vice-president with responsibility for policy, legal, safety and governance, who stopped work for 30 minutes to talk to the Guardian.

Donald Trump’s White House takes a permissive approach to AI regulation and there is no comprehensive nationwide legislation in the US or the UK. Yoshua Bengio, a computer scientist known as a godfather of AI, said in a TED Talk this summer: “A sandwich has more regulation than AI.”

The competitors have therefore found they bear responsibility for setting the limits of what AIs should be allowed to do.

“Our calculus is not so much looking over our shoulders at what [the other] companies are doing, but how do we make sure that we are the ones in the lead, so that we have influence in impacting how this technology is developed and setting the norms across society,” said Lue. “You have to be in a position of strength and leadership to set that.”

“If it’s just a race and all gas, no brakes and it’s basically a race to the bottom, that’s a terrible outcome for society,” said Lue, who is pushing for coordinated action between the racers and governments.

But strict state regulation may not be the answer either. “We support regulation that’s going to help AI be delivered to the world in a way that’s positive,” said Helen King, Google DeepMind’s vice-president for responsibility. “The tricky part is always how do you regulate in a way that doesn’t actually slow down the good guys and give the bad guys loopholes.”

Anthropic this month revealed that its Claude Code AI, widely seen as the best system for automating computer programming, was used by a Chinese state-sponsored group in “the first documented case of a cyber-attack largely executed without human intervention at scale”.

It sent shivers through some. “Wake the f up,” said one US senator on X. “This is going to destroy us – sooner than we think.” By contrast, Prof Yann LeCun, who is about to step down after 12 years as Meta’s chief AI scientist, said Anthropic was “scaring everyone” to encourage regulation that might hinder rivals.

Tests of other state-of-the-art models found they sometimes sabotaged programming intended to ensure humans can interrupt them, a worrying trait called “shutdown resistance”.

But with nearly $2bn a week in new venture capital investment pouring into generative AI in the first half of 2025, the pressure to realise profits will quickly rise. Tech companies learned they could make fortunes from monetising human attention on social media, with serious social problems as a result. The fear is that profit maximisation in the age of AGI could have far greater adverse consequences.

“It is imbalanced because it’s such a costly technology,” he said. “Early on, the companies working on AI were very open about the techniques they were using. They published, and it was quasi-academic. But then [they] started cracking down and saying, ‘No, we don’t want to talk about … the technology under the hood, because it’s too important to us – it’s proprietary’.” He now calls on governments to create a counterweight to the huge AI firms by investing in a facility for independent, academic research. It would have a similar function to Cern, the state-funded organisation for high-energy physics on the France-Switzerland border. The European Commission president, Ursula von der Leyen, has called for something similar, and advocates believe it could steer the technology towards trustworthy, public-interest outcomes.

“These are technologies that are going to produce the greatest boost in productivity ever seen,” Etchemendy said. “You have to make sure that the benefits are spread through society, rather than benefiting Elon Musk.”

They were part of a rapidly growing community of entrepreneurs hustling to apply AI to real-world money-making ideas, and there was zero support for any brakes on progress towards AGI that would allow its social impacts to be checked. “We don’t do that in Silicon Valley,” said one. “If everyone here stops, it still keeps going,” said another. “It’s really hard to opt out now.” Another declared: “Morality is best thought of as a machine-learning problem.” Their neighbour said AI meant every cancer would be cured in 10 years. Aggressive, clever and hyped up, the young talent driving the AI boom wants it all, and fast.

Frequently you hear AI researchers say they want the push to AGI to “go well”. It is a vague phrase suggesting a wish that the technology should not cause harm, but its woolliness masks trepidation.

Altman has talked about “crazy sci-fi technology becoming reality” and having “extremely deep worries about what technology is doing to kids”. He admitted: “No one knows what happens next. It’s like, we’re gonna figure this out. It’s this weird emergent thing.”

“There’s clearly real risks,” he said in an interview with the comedian Theo Von, which was short on laughs. “It kind of feels like you should be able to say something more than that, but in truth, I think all we know right now is that we have discovered … something extraordinary that is going to reshape the course of our history.”

AI and Social Media: Platforms, Harms, Digital Justice

“Across Europe, a generation is suffering through a silent crisis,” says a new report from People vs Big Tech – a coalition of more than 140 digital rights NGOs from around Europe – and Ctrl+Alt+Reclaim, their youth-led spin-off. A big factor is “the design and dominance of social media platforms”.

Ctrl+Alt+Reclaim, for people aged 15 to 29, came about in September last year when People vs Big Tech put out a call – on social media, paradoxically. About 20 young people who were already active on these issues came together at a “boot camp” in London. “We were really given the tools to create the movement that we wanted to build,” says McLaren, who attended with her partner. “They booked a big room, they brought the food, pencils, paper, everything we needed. And they were like: ‘This is your space, and we’re here to help.’”

In researching her book Logging Off: The Human Cost of our Digital World, Walton, 26, also became aware of how little control young people have over the content that is algorithmically served up to them. “We don’t really have any choice over what our feeds look like. Despite the fact there are things where you can say, ‘I don’t want to see this type of content’, within a week, you’re still seeing it again.”

Alycia Colijn, 29, another member of Ctrl+Alt+Reclaim, knows something about this. She studied data science and marketing analytics at university in Rotterdam, researching AI-driven algorithms – how they can be used to manipulate behaviour, and in whose interests. During her studies she began to think: “It’s weird that I’m trained to gather as much data as I can, and to build a model that can respond to or predict what people want to buy, but I’ve never had a conversation around ethics.” Now she is researching these issues as co-founder of Encode Europe, which advocates for human-centric AI. “I realised how much power these algorithms have over us; over our society, but also over our democracies,” she says. “Can we still speak of free will if the best psychologists in the world are building algorithms that make us addicted?”

The more she learned, the more concerned Colijn became. “We made social media into a social experiment,” she says. “It turned out to be the place where you could best gather personal data from individuals. Data turned into the new gold, and then tech bros became some of the most powerful people in the world, even though they aren’t necessarily known for caring about society.”

Social media companies have had ample opportunities to respond to these myriad harms, but invariably they have chosen not to. Just as McLaren found with Snapchat and the fisha accounts, hateful and racist content is still only minimally moderated on platforms such as X, Instagram, Snapchat and YouTube. After Donald Trump’s re-election, Mark Zuckerberg stated at the start of this year that Meta would be reducing factcheckers across Facebook and Instagram, just as X has under Elon Musk. This has facilitated the free flow of misinformation. Meta, Amazon and Google were also among the companies announcing they were rolling back their diversity, equity and inclusion initiatives after Trump’s election.

The political shift to the right, in the US and Europe, has inevitably affected these platforms’ tolerance of hateful and racist content, says Yassine. “People feel like now they have more rights to be harmful than rights to be protected.”

“Big tech, combined with the AI innovators, say they are the growth of tomorrow’s economy, and that we have to trust them. I don’t think that’s true,” says Colijn. She also disagrees with their argument that regulation harms innovation. “The only thing deregulation fosters is harmful innovation. If we want responsible innovation, we need regulation in place.”

Almost all the activists in Ctrl+Alt+Reclaim attest to having had some form of screen addiction. As much as social media has brought them together, it has also led to much less face-to-face socialising. “I’ve had to sort of rewire my brain to get used to the awkwardness and get comfortable with being in a social setting and not knowing anyone,” says Walton. “Actually, it would be really nice to return to proper connection.”

In fact, that scenario was entirely possible. The origins of Mr DeepFakes stretch back to 2017-18, when AI porn was just beginning to build on social media sites such as Reddit. One anonymous Redditor and AI porn “pioneer” who went by the name “deepfakes” (and is thus credited with coining the term) gave an early interview to Vice about its potential. Shortly after, though, in early 2018, Reddit banned deepfake porn from its site.

“We have screenshots from their message boards at that time, and the deepfake community, which was small, was freaking out and jumping ship,” says Compton. This is when Mr DeepFakes was created, under the early domain name dpfks.com. The administrator carried the same username – dpfks – and was the person who advertised for volunteers to work as moderators, and who posted rules and guidelines, deepfake videos and an in-depth guide to using software for deepfake porn.

“What’s so depressing about reading the messages and seeing the genesis is realising how easily governments could have stopped this in its tracks,” says Compton. “The people doing it didn’t believe they were going to be allowed free rein. They were saying: ‘They’re coming for us!’, ‘They’re never going to let us do this!’ But as they continued without any problems at all, you see this growing emboldenment.”

On 4 May, Mr DeepFakes shut down. A notice on its homepage blamed “data loss” caused by the withdrawal of a “critical service provider”. “We will not be relaunching,” it continued. “Any website claiming this is fake. This domain will eventually expire and we are not responsible for future use. This message will be removed in about a week.”

“After Mr DeepFakes shut down, I got an automatic email from one of them which said: ‘If you want anything made, let me know … Mr DeepFakes is down – but of course, we keep working.’”

The Data (Use and Access) Act, which received royal assent in June, has made the creation of a deepfake intimate image without consent a criminal offence, and has also criminalised requesting others to create such an image – as Jodie’s best friend had done when he posted images of her on forums. Both offences now carry a custodial sentence of up to six months, as well as an unlimited fine. It is a huge victory, won fast, in a space where progress has been mind-bendingly slow.

However, this is not the story of a new government determined to tackle an ever-evolving crime. The change was fought for and pushed through by a small group of victims and experts who formed a WhatsApp group called Heroes. Owen organised this core group of experts and campaigners, which included Clare McGlynn, professor of law at Durham University; the Revenge Porn Helpline; Not Your Porn; MyImageMyChoice; the End Violence Against Women Coalition (EVAW); and survivors such as Jodie. They formed a tight team, aiming to criminalise both the creation and the requesting of deepfake intimate images.

Elena Michael, co-founder of Not Your Porn, which supports victims of intimate image abuse and campaigns for change, agrees. “For a decade, the victims and experts working in the field have been kept in separate rooms to the people making the laws, and the laws we’ve had just haven’t worked,” she says. “When the government doesn’t listen to survivors, they’re not just devaluing their experience, they’re also discarding their expertise. These women have had to fight for years. There’s nothing they don’t know about what change is needed.”

The new law criminalised the making of sexually explicit deepfakes and the requesting of them. It was consent-based: if the deepfake was made without consent, it was a crime; the intentions behind it made no difference. It also included forced deletion – anyone convicted of creating a deepfake image would be forced to delete it. This was crucial, given one victim’s experience when, after a conviction for intimate image abuse, police returned devices to her perpetrator with the images still on them.

“Every single line represented a woman’s real experience,” says Owen. “Still, this is significant progress,” she adds. “We’re now able to say, ‘Creating this stuff without consent is wrong.’ It’s unlawful and it’s wrong. The message is clear and that’s great.”


Articles & Resources:

Child Advocacy and AI Concerns

Fairplay: https://fairplayforkids.org/

For over 25 years, Fairplay has been the leading voice fighting to enhance children’s well-being by eliminating the exploitative and harmful business practices of marketers and Big Tech. Join us to create a world where kids can be kids!

Thorn: https://www.thorn.org/

We’re building a digital safety net to protect children. At Thorn, we put technology to work each day to enhance child safety. Thanks to a combination of donor support and earned revenue, we can focus our efforts across four key pillars to protect children from harm in the digital age.

Labor and AI Concerns

The letter contains a range of demands for Amazon concerning its impact on the workplace and the environment. Staffers are calling on the company to power all its data centers with clean energy, make sure its AI-powered products and services do not enable “violence, surveillance and mass deportation”, and form a working group composed of non-managers “that will have significant ownership over org-level goals and how or if AI should be used in their orgs, how or if AI-related layoffs or headcount freezes are implemented, and how to mitigate or minimize the collateral effects of AI use, such as environmental impact”.

Workers emphasized they are not against AI outright; rather, they want it to be developed sustainably and with input from the people building and using it. “I see Amazon using AI to justify a power grab over community resources like water and energy, but also over its own workers, who are increasingly subject to surveillance, work speedups, and implicit threats of layoffs,” said the senior software engineer. “There is a culture of fear around openly discussing the drawbacks of AI at work, and one thing the letter is setting out to accomplish is to show our colleagues that many of us feel this way and that another path is possible.”

AI Leadership & Media

At a time when the majority of Americans distrust big tech and believe artificial intelligence will harm society, Silicon Valley has built its own network of alternative media where CEOs, founders and investors are the unchallenged and beloved stars. What was once the province of a few fawning podcasters has grown into a fully fledged ecosystem of publications and shows supported by some of the tech industry’s most powerful. The a16z Substack also announced this month that the firm was launching an eight-week new media fellowship for “operators, creators, and storytellers shaping the future of media”. The fellowship includes collaborating with a16z’s new media operation, which it describes as being made up of “online legends” creating a “single place where founders acquire the legitimacy, taste, brand-building, expertise, and momentum they need to win the narrative battle online”.

In addition to a16z’s media effort, Palantir launched a digital and print publication earlier this year called the Republic, which mimics academic journals and thinktank-style magazines such as Foreign Affairs. The journal is funded by the Palantir Foundation for Defense Policy and International Affairs, a non-profit chaired by Palantir’s chief executive, Alex Karp – though he works there only 0.01 hours per week, according to 2023 tax filings.

“Far too many people who should not have a platform do. And there are far too many people who should have a platform but do not,” states the Republic, whose editorial team is made up of senior Palantir executives. Even if much of this new media isn’t aiming to expose wrongdoing or challenge people in power, it is not exactly without value. The content the tech industry is creating frequently reflects how its elites see themselves and the world they want to build – one with less government regulation and fewer probing questions about how their companies are run. Even the most banal questions can offer a glimpse into the heads of people who exist primarily in guarded boardrooms and gated compounds.
