Microsoft’s artificial intelligence capabilities have stirred controversy, with reports that the company’s AI image generator produces obscene and indecent images.
According to CNN, a Microsoft employee has filed a complaint with the US Federal Trade Commission, alleging that the company’s AI tool Copilot Designer has “systemic issues” that allow it to consistently generate sexualized and demeaning images of women.
The whistleblower, software engineering lead Shane Jones, accused Microsoft of deceptively marketing the tool as safe for young users despite its propensity to generate such disturbing material. The complaint comes as the tech industry grapples with the challenges of ethically building and deploying AI image-generation tools.
Google previously faced similar criticism over how its Gemini AI generated images of people based on race, prompting the company to pause human image generation while it worked on fixes. As artificial intelligence becomes more prevalent in consumer products, ensuring that these systems adhere to ethical standards is vital.