AI Risks & Management

AI Risk Management Frameworks

Knowledge Collapse

AI Moderators and Safety Perspectives

A dozen AI raters, workers who check an AI’s responses for accuracy and groundedness, told the Guardian that, after seeing how chatbots and image generators actually function and just how wrong their output can be, they have begun urging friends and family not to use generative AI at all, or at least to use it cautiously. These raters work on a range of AI models, including Google’s Gemini, Elon Musk’s Grok, and several smaller or lesser-known bots.

“It shows there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored,” said Alex Mahadevan, director of MediaWise at Poynter, a media literacy program. “So this means when we see the final [version of the] chatbot, we can expect the same type of errors they’re experiencing. It does not bode well for a public that is increasingly going to LLMs for news and information.”

AI workers said they distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality. Brook Hansen, an AI worker on Amazon Mechanical Turk, explained that while she doesn’t mistrust generative AI as a concept, she also doesn’t trust the companies that develop and deploy these tools. For her, the biggest turning point was realizing how little support the people training these systems receive.

“We’re expected to help make the model better, yet we’re often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks,” said Hansen, who has been doing data work since 2010 and has had a part in training some of Silicon Valley’s most popular AI models. “If workers aren’t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical? For me, that gap between what’s expected of us and what we’re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.”

Dispensing false information in a confident tone, rather than declining to answer when no reliable answer is available, is a major flaw of generative AI, experts say. An audit by the media literacy non-profit NewsGuard of the top 10 generative AI models, including ChatGPT, Gemini and Meta’s AI, found that the chatbots’ non-response rates fell from 31% in August 2024 to 0% in August 2025. Over the same period, the chatbots’ likelihood of repeating false information nearly doubled, from 18% to 35%, NewsGuard found. None of the companies responded to NewsGuard’s request for comment at the time.

“I wouldn’t trust any facts [the bot] offers up without checking them myself – it’s just not reliable,” said another Google AI rater, requesting anonymity due to a nondisclosure agreement she signed with the contracting company. She warns people about using it and echoed another rater’s point that workers with only cursory knowledge are tasked with rating answers to medical questions and sensitive ethical ones, too. “This is not an ethical robot. It’s just a robot.”

For another Google worker, the biggest concern with AI training is the feedback given to AI models by raters like him. “After having seen how bad the data is that goes into supposedly training the model, I knew there was absolutely no way it could ever be trained correctly like that,” he said. He used the term “garbage in, garbage out”, a principle in computing which holds that if you feed bad or incomplete data into a system, its output will carry the same flaws. The rater avoids using generative AI and has also “advised every family member and friend of mine to not buy newer phones that have AI integrated in them, to resist automatic updates if possible that add AI integration, and to not tell AI anything personal”, he said.

Whenever the topic of AI comes up in a social conversation, Hansen reminds people that AI is not magic, explaining the army of invisible workers behind it, the unreliability of the information and how environmentally damaging it is.
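To make “garbage in, garbage out” concrete, here is a minimal sketch in Python. The claims, labels, rating counts and noise rates are all hypothetical, invented for illustration, and the “model” is just a toy that memorizes the majority label its raters give, not how real LLM training works. It shows the principle directionally: as the share of mislabeled training examples grows, the model’s agreement with ground truth degrades.

```python
import random

def train(examples):
    """'Train' by memorizing the most common label seen for each input."""
    table = {}
    for text, label in examples:
        table.setdefault(text, []).append(label)
    return {text: max(set(labels), key=labels.count)
            for text, labels in table.items()}

def corrupt(examples, rate):
    """Flip a fraction of labels, simulating low-quality rater feedback."""
    flipped = []
    for text, label in examples:
        if random.random() < rate:
            label = "inaccurate" if label == "accurate" else "accurate"
        flipped.append((text, label))
    return flipped

random.seed(0)
# Hypothetical ground truth: 1,000 claims, each rated three times.
claims = [(f"claim {i}", "accurate" if i % 2 else "inaccurate")
          for i in range(1000)]
dataset = claims * 3

for rate in (0.0, 0.2, 0.5):
    model = train(corrupt(dataset, rate))
    truth = dict(claims)
    agreement = sum(model[t] == truth[t] for t in model) / len(model)
    print(f"label noise {rate:.0%} -> agreement with ground truth {agreement:.0%}")
```

The point of the toy is only directional: a system learning from rater feedback can be no more reliable than that feedback, which is exactly the gap the workers describe.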

“Once you’ve seen how these systems are cobbled together – the biases, the rushed timelines, the constant compromises – you stop seeing AI as futuristic and start seeing it as fragile,” said Adio Dinika, who studies the labor behind AI at the Distributed AI Research Institute. “In my experience it’s always people who don’t understand AI who are enchanted by it.”

“Where does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?” she said. “We are just starting to ask those questions, so in most cases the general public does not have access to the truth, but just like the textile industry, if we keep asking and pushing, change is possible.”

AI in Politics

Trump has managed to turn America’s idea of itself entirely upside down. And he has done it with the active consent of an entire political party. Speaker of the House Mike Johnson, when asked about the poop video, for once did not bother lying that he had not seen it. Instead he said: “The president uses social media to make the point. You can argue he’s probably the most effective person who’s ever used social media.”

Misconceptions about Brain-Computer Interfaces

https://pmc.ncbi.nlm.nih.gov/articles/PMC11004276/

Social Media Algorithms Profit from Scammers and Fraud

Algorithms Amplify Polarization

AI Deep Fakes, Misinformation, Violence
