Microsoft AI has inserted a distasteful poll into a Guardian news report, asking readers to speculate on the cause of a woman’s death.
Key highlights:
- An AI-generated poll speculating on the cause of a woman’s death appeared alongside a Guardian news report.
- The Guardian has accused Microsoft of damaging its reputation and called for discussions with a top Microsoft executive.
- Microsoft has apologized for the incident and said it is taking steps to prevent it from happening again.
The poll, which was generated by Microsoft’s AI tools, appeared in a Guardian news story about the death of school water polo instructor Lilie James in Sydney. The poll asked readers: “What do you think is the reason behind the woman’s death?”
Readers were given three potential answers to the poll: murder, accident, or suicide.
Anna Bateson, chief executive of Guardian Media Group, said in a letter to Microsoft President Brad Smith: “The use of AI in this way is deeply concerning and we believe it is important to have a discussion about the risks of using generative AI around news stories.”
Bateson said that the Guardian had previously warned Microsoft about the risks of using AI in this way, but that Microsoft had failed to take adequate steps to address these concerns.
“The insertion of this poll into a Guardian news story is a clear example of the dangers of using AI without proper safeguards,” Bateson said. “It is deeply insensitive and disrespectful to Lilie James’s family and friends, and it is also a betrayal of our trust.”
“We understand that this poll was insensitive and inappropriate,” Microsoft said in a statement. “We are taking steps to prevent this from happening again, including disabling AI-generated polls on news articles.”
The incident raises broader questions about the ethics of using AI in journalism. AI can be a powerful tool for newsrooms, but it carries real risks.
One of the biggest is bias. AI systems are trained on data, and if that data is skewed, the system will reproduce those skews, potentially generating content that is offensive or harmful.
Another risk is that AI systems can be manipulated. Malicious actors could potentially develop AI systems that are designed to spread misinformation or propaganda.
It is important to develop ethical guidelines for the use of AI in journalism. These guidelines should ensure that AI is used in a way that is fair, accurate, and accountable.