
‘Wild West’ ChatGPT has ‘fundamental flaw’ with left bias


The biggest problems with bots are the flawed humans behind them, and experts are concerned that the rapidly evolving technology could become an apex political weapon.

ChatGPT, the marquee artificial intelligence that has become so popular it nearly crashes daily, has multiple flaws, including left-leaning political biases introduced by its programmers and by training data drawn from select news organizations.

The software censored The Post Tuesday afternoon when it refused to “Write a story about Hunter Biden in the style of the New York Post.”

ChatGPT later told The Post that “it is possible that some of the texts that I have been trained on may have a left-leaning bias.”

But the bot’s partisan refusal goes beyond it just being trained by particular news sources, according to Pengcheng Shi, an associate dean in the department of computing and information sciences at Rochester Institute of Technology.


ChatGPT would not write an article about Hunter Biden in the style of the New York Post on Tuesday.

“It’s a cop out…it doesn’t [fully] explain why it didn’t allow ‘New York Post style’ to be written. That is a human decision encoded in ChatGPT,” he told The Post. “AI needs to be neutral towards politics, race and gender…It is not the job of AI, Google or Twitter to decide these things for us,” Shi, who calls himself “very liberal,” added.

The documented political slants of ChatGPT are no secret to Sam Altman, CEO of OpenAI, the company behind the bot, who has repeatedly tweeted about trying to fix the bias.

In theory, such bias “can be easily corrected with more balanced training data,” Shi said.

“What I worry more about is the human intervention becoming too political one way or another. That is more scary.”


OpenAI CEO Sam Altman has admitted the company is trying to fix bias in ChatGPT.
AFP via Getty Images

Shi is right to worry. While inputting new training data might seem straightforward enough, creating material that is truly fair and balanced has had the technological world spinning its wheels for years now.

“We don’t know how to solve the bias removal. It is an outstanding problem and fundamental flaw in AI,” Chinmay Hegde, a computer science and electrical engineering associate professor at New York University, told The Post.

The primary way ChatGPT is currently being steered away from liberal and other political tilts is through a "fine-tuning" process known as reinforcement learning from human feedback, he explained.

In essence, a cohort of people makes judgment calls on how the bot should answer apparently tricky prompts, such as writing a Hunter Biden story the way The Post would.
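As a rough illustration of that preference step, here is a minimal sketch using made-up toy data, not OpenAI's actual pipeline: a tiny "reward model" is trained so that the answers human raters preferred score higher than the ones they rejected.

```python
# Hypothetical sketch of the pairwise-preference step behind reinforcement
# learning from human feedback (not OpenAI's code). A tiny linear reward
# model learns to score rater-preferred answers above rejected ones.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each row is a feature vector for one candidate answer.
dim = 8
chosen = rng.normal(0.5, 1.0, size=(200, dim))     # answers raters preferred
rejected = rng.normal(-0.5, 1.0, size=(200, dim))  # answers raters rejected

w = np.zeros(dim)  # reward-model weights
lr = 0.1

for _ in range(500):
    # Logistic (Bradley-Terry style) loss on the score margin of each pair.
    margin = chosen @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))  # P(rater prefers the "chosen" answer)
    grad = ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

print("average score gap after training:", float((chosen @ w - rejected @ w).mean()))
```

In published descriptions of the technique, that reward model is itself a large neural network whose scores then guide further training of the chatbot, which is exactly where, as Hegde notes, raters' personal opinions can creep in.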

And they’re addressing these flaws in a very piecemeal way.


ChatGPT said it may have left-leaning responses from its learning phase.

For instance, after The Post reached out to OpenAI for comment about why it had been restricted by ChatGPT, the bot quickly changed its tune.

When given the same prompt it initially refused to answer, it produced an essay that noted, in part, that “Hunter Biden is a controversial figure who has been the subject of much debate in the political arena.”

Who exactly makes up these human evaluators? It is not clear, Hegde said.


After The Post asked OpenAI for comment about why ChatGPT would not write an article about Hunter Biden in the style of the paper, the system began producing such stories.

“There is a lot of room for personal opinion in [reinforcement learning],” he added. “This attempt at a solution introduces a new problem…every time we add a layer of complexity more biases appear. So what do you do? I don’t see an easy way to fix these things.”

As the technology, which Microsoft recently backed with a multibillion-dollar investment, becomes adopted in more and more professional settings, issues of bias will go beyond support for Joe Biden, warns Lisa Palmer, chief AI strategist for the consulting firm AI Leaders.

“There are harms that are already being created,” she warned.

ChatGPT possesses “possibly the largest risk we have had from a political perspective in decades” as it can also “create deep fake content to create propaganda campaigns,” she said.

Its biases may soon find their way into the workplace, too.

In the past, human resources departments using similar AI to rapidly sift through resumes began automatically disqualifying female candidates for jobs, Palmer explained, adding that financial institutions have run into AI bias in loan approvals as well.

She thinks this kind of flawed technology is deeply instilled in ChatGPT "because of the way that artificial intelligence works."

Making matters worse, the AI has abysmal fact-checking and accuracy abilities, according to Palmer, a former Microsoft employee.

“All language models [like ChatGPT] have this limitation in today’s times that they can just wholecloth make things up. It’s very difficult to tell unless you are an expert in a particular area,” she told The Post.

It's something both Palmer and Hegde say Microsoft has not been open with the public about, even as its ChatGPT-infused Bing AI has already gone haywire with responses.

“I am concerned that the average person that is using the Bing search engine will not understand that they could be getting information that is not factual.”

A Microsoft spokesperson told The Post that “there is still work to be done” and “feedback is critical” while it previews the new features.

Perhaps even more frightening, there is minimal oversight to hold AI companies accountable when things go wrong.

“It is a lot like the Wild West at this point,” said Palmer, who called for a government regulatory committee to lay down ethical boundaries.

At the very least, for now, ChatGPT should display a confidence score next to its answers to allow users to decide for themselves how valid the information is, she added.
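One way such a score could be computed, purely as an illustrative sketch and not anything Palmer or OpenAI has specified, is to average the model's own per-token probabilities for the answer it just generated.

```python
# Hypothetical illustration: turn a language model's per-token
# log-probabilities into a single percentage shown next to the answer.
import math

def confidence_score(token_logprobs: list[float]) -> float:
    """Geometric-mean probability of the generated tokens, between 0 and 1."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

# Made-up values standing in for what a model API might report per token.
answer_logprobs = [-0.10, -0.42, -0.05, -1.20, -0.31]
print(f"Confidence: {confidence_score(answer_logprobs):.0%}")
```

A figure like this would only reflect how sure the model is of its own wording, not whether the claim is true, so it is at best a rough signal of the kind Palmer describes.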


