Microsoft Engineer Terrified to See AI-Generated Image

Timebusinesstoday.com – A Microsoft engineer said he was alarmed by images created by Microsoft's own artificial intelligence (AI) technology, so much so that he sent letters to the Federal Trade Commission (FTC) and to Microsoft's board of directors.

Shane Jones, the engineer, warned that Copilot Designer, the AI image generator previously known as Bing Image Creator, could produce inappropriate and even disturbing images.

After repeated testing, Jones found that Microsoft's safeguards failed to stop the AI from generating images that were violent, sexually explicit, drug-related, or tied to conspiracy theories.

Jones tried to warn Microsoft, but he said the company did not investigate or take action. "This was an eye-opening moment. That's when I first realized this is not a safe model," he wrote.

As reported by CNBC and quoted by Pilihannetizen.id, Microsoft's AI can generate images from simple text prompts. With certain wording, however, the resulting illustrations can be inappropriate.

The service could, for example, produce images of teenagers with assault rifles, sexualized images of women, and depictions of underage drinking and drug use.

Jones has worked at Microsoft for six years and is currently a software engineering manager at the company's headquarters in Redmond, Washington. He does not work on Copilot in a professional capacity; rather, he is one of the employees who test the company's AI technology in their free time to see where problems might arise.

With his letters, he hopes action will be taken to address the problem. Microsoft, for its part, says it is committed to ensuring its services are safe and to listening to employees' concerns.
