Microsoft report describes its artificial intelligence safety efforts | News Brief


The report, released Wednesday, describes the company’s disclosures about its AI program as going beyond what it and other major AI companies agreed to with the White House last summer.

“This report enables us to share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust,” the company said in a statement Wednesday.

“We are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI practices,” the company said.

Microsoft released the report amid ongoing concern about chatbots that rely on generative AI, which have at times been shown to produce odd, alarming, and unreliable responses.

In February 2023, shortly after releasing its Copilot chatbot, the company said that longer conversations involving 15 questions or more could lead the chatbot to “become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.”

The company said it is relying on real-world feedback from users to improve functionality.

In March, a Microsoft engineer brought his concerns to the Federal Trade Commission (FTC) about offensive and fraudulent imagery that could be created using the company’s AI imaging tool, Copilot Designer, which is powered by an OpenAI model, CNBC reported.

In January, the FTC began investigating the financial relationships, involving many billions of dollars, between major investors and leading AI companies, including Microsoft, Alphabet (Google), Amazon, OpenAI, and Anthropic PBC, the FTC said.

The U.K.’s Competition and Markets Authority has also been investigating Microsoft’s relationships with Mistral AI, Inflection AI, and other startups, as well as Amazon’s AI partnerships.

Congress has held a number of hearings about possibly placing guardrails on AI, due to its potential to inflict harm and sow chaos.
