Washington

ChatGPT acknowledges bias, ‘real limitations’ in artificial intelligence tool

The people behind the groundbreaking ChatGPT acknowledged Thursday that the artificial intelligence tool sometimes delivers results that are “politically biased, offensive or otherwise objectionable.”

OpenAI promised tweaks and “clearer instructions” for the human “reviewers” who help teach the AI how to respond.

Reporters and academics have documented significant bias in ChatGPT’s responses, finding the AI more likely to embrace requests that come from the political left than from the right.

The outfit released some of its internal guidance for reviewers in what it said was an attempt at greater accountability. It also promised to release some demographic data about those reviewers to confront any bias that may stem from their backgrounds.

OpenAI said it will try to prevent ChatGPT from “making things up.”

“Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address,” the company said.


Acknowledging bias is a significant move for OpenAI. The firm compared training ChatGPT to training a dog and said it is still “fine-tuning” the AI’s protocols for when and how to respond.

Examples of its left-leaning bias have been noted in recent reports.

The Washington Times, in a story earlier this week, found that ChatGPT was willing to write legislation to defund U.S. Immigration and Customs Enforcement but initially balked at a bill to fund border wall construction.

It delivered on a bill to ban assault-style semi-automatic rifles but would not write a bill to impose a moment of silence on public schools that receive federal money. The AI said that could violate the Constitution, despite repeated court rulings upholding moments of silence.

Likewise, the tool produced a draft bill to enshrine abortion rights throughout pregnancy — a goal of the left — but balked at a request to write a bill to bar abortions except when the mother’s life is at risk. The AI said that, too, would be unconstitutional.

Asked to come up with top “whoppers” told by President Trump, the AI quickly delivered three examples. Asked the same question for President Biden, the AI refused. Although some of Mr. Biden’s statements have drawn scrutiny “like any political figure,” ChatGPT said, it would be wrong to label them whoppers without considering “the context and veracity.”

ChatGPT works by parsing a user’s request, generating a reply from patterns learned across its training data and checking its own rules about what’s in bounds and what’s out of bounds before delivering an answer.

It works like a more authoritative Google search: instead of delivering a series of links, it tries to give a single definitive answer. It is remarkably good at understanding a user’s queries but sometimes struggles with the accuracy of the answers it gives.
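That request-and-reply loop is invisible to ordinary users, but it can be seen from a developer’s seat. Below is a minimal sketch in Python; it assumes the `openai` package’s pre-1.0 interface, an `OPENAI_API_KEY` environment variable and “gpt-3.5-turbo” as a stand-in model name, none of which comes from the article.

```python
# Minimal sketch: send one prompt to a ChatGPT-style model and print the reply.
# Assumptions: the `openai` package (pre-1.0 interface) is installed,
# OPENAI_API_KEY is set in the environment, and "gpt-3.5-turbo" is used
# purely as an example model name.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Write a one-sentence summary of the Second Amendment."},
    ],
)

# Refusals of the kind described above arrive the same way, as ordinary text
# in this field, which is why auditing the model for bias means reading its
# answers rather than catching error codes.
print(response["choices"][0]["message"]["content"])
```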

Experts have pinpointed several possible sources of bias, including the data set the AI learns from and the human-written rules for what is acceptable.

If the data set draws heavily on academic publications and news articles, experts said, that could explain some of the leftward drift.

OpenAI said it took the concerns about bias seriously.

“Towards that end, we are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” it said. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should. We believe that improvement in both respects is possible.”

The company said it is working on a customizable option to allow users to set some of the rules for how ChatGPT responds — what it called an attempt to “define your AI’s values, within broad bounds.”
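OpenAI did not say what that customization will look like. As a loose illustration only, and not the company’s announced design, the chat API’s existing system message already lets a developer state standing rules that the model weighs against its built-in limits; the sketch below reuses the same assumed setup as the earlier example.

```python
# Hypothetical sketch of user-defined "values": a system message stating a
# standing rule, which the model applies within OpenAI's own hard bounds.
# This illustrates the idea, not the customization feature OpenAI announced;
# same pre-1.0 `openai` package and API-key assumptions as above.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "When a request is political, present the strongest "
                       "arguments on both sides before offering any conclusion.",
        },
        {"role": "user", "content": "Should border wall construction be funded?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```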

The work would have to stay within “limits defined by society,” OpenAI said.

“This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” the company said. “Striking the right balance here will be challenging – taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.

“There will therefore always be some bounds on system behavior. The challenge is defining what those bounds are,” the company said. “If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to ‘avoid undue concentration of power.’”


