
Thread: ChatGPT falsely says conservative law prof. was accused of "sexual harassment"

  1. #1

    ChatGPT falsely says conservative law prof. was accused of "sexual harassment"

    Defamed by ChatGPT: My Own Bizarre Experience with Artificiality of “Artificial Intelligence”
    https://jonathanturley.org/2023/04/0...-intelligence/
    Jonathan Turley (06 April 2023)

    Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous.” I would beg to differ. I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quoted a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.” It appears that I have now been adjudicated by an AI jury on something that never occurred.

    When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.” That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress. You are left with the same question of Reagan’s Labor Secretary, Ray Donovan, who asked “Where do I go to get my reputation back?”

    Here is my column in USA Today:

    The rapid expansion of artificial intelligence has been much in the news recently, including the recent call by Elon Musk and more than 1,000 technology leaders and researchers for a pause on AI.

    Some of us have warned about the danger of political bias in the use of AI systems, including programs like ChatGPT. That bias could even include false accusations, which happened to me recently.

    I received a curious email from a fellow law professor about research that he ran on ChatGPT about sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska.

    AI response created false accusation and manufactured ‘facts’

    It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.

    When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.

    Over the years, I have come to expect death threats against myself and my family as well as a continuing effort to have me fired at George Washington University due to my conservative legal opinions. As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements.

    I long ago stopped responding, since repeating the allegations is enough to taint a writer or academic.

    AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.

    What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.

    Volokh made this query of ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

    The program responded with this as an example: 4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).

    There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I have never been accused of sexual harassment or assault.

    In response to Volokh’s question, ChatGPT also appears to have manufactured baseless accusations against two other law professors.

    Bias creates flaws in AI programs

    So the question is why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer could be because AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

    Despite such problems, some high-profile leaders have pushed for its expanded use. The most chilling involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.”

    In an interview on a German program, “Handelsblatt Disrupt,” Gates called for unleashing AI to stop “various conspiracy theories” and to prevent certain views from being “magnified by digital channels.” He added that AI can combat “political polarization” by checking “confirmation bias.”

    Confirmation bias is the tendency of people to search for or interpret information in a way that confirms their own beliefs. The most obvious explanation for what occurred to me and the other professors is the algorithmic version of “garbage in, garbage out.” However, this garbage could be replicated endlessly by AI into a virtual flood on the internet.

    Volokh, at UCLA, is exploring one aspect of this danger in how to address AI-driven defamation.

    There is also a free speech concern over the use of AI systems. I recently testified about the “Twitter files” and growing evidence of the government’s comprehensive system of censorship to blacklist sites and citizens.

    One of those government-funded efforts, called the Global Disinformation Index, blacklisted Volokh’s site, describing it as one of the 10 most dangerous disinformation sites. But that site, Reason, is a respected source of information for libertarian and conservative scholars to discuss legal cases and controversies.

    Faced with objections to censorship efforts, some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed “disinformation.”

    In 2021, Sen. Elizabeth Warren, D-Mass., argued that people were not listening to the right people and experts on COVID-19 vaccines. Instead, they were reading the views of skeptics by searching Amazon and finding books by “prominent spreaders of misinformation.” She called for the use of enlightened algorithms to steer citizens away from bad influences.

    Some of these efforts even include accurate stories as disinformation, if they undermine government narratives.

    The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can “blame it on the bot” and promise only tweaks to the system.

    The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.
    The Bastiat Collection · FREE PDF · FREE EPUB · PAPER
    Frédéric Bastiat (1801-1850)

    • "When law and morality are in contradiction to each other, the citizen finds himself in the cruel alternative of either losing his moral sense, or of losing his respect for the law."
      -- The Law (p. 54)
    • "Government is that great fiction, through which everybody endeavors to live at the expense of everybody else."
      -- Government (p. 99)
    • "[W]ar is always begun in the interest of the few, and at the expense of the many."
      -- Economic Sophisms - Second Series (p. 312)
    • "There are two principles that can never be reconciled - Liberty and Constraint."
      -- Harmonies of Political Economy - Book One (p. 447)

    · tu ne cede malis sed contra audentior ito · (“yield not to evils, but go the more boldly against them”)



  2. #2
    I can just see it now....

    Somebody is accused of something they didn't do and the response from the AI programmers is...
    "We didn't tell it to say that!"

    Another way to accomplish plausible deniability and claim no responsibility.

  3. #3
    Here is a ChatGPT session I did today:



    Me: Who were the last living U.S. slaves?

    ChatGPT: The last living U.S. slaves were born in the mid to late 19th century, and the last known surviving former slave was named Sylvester Magee. He claimed to have been born in 1841 and died in 1971, at the age of 130 years old. However, there is some dispute over his actual age and whether he was a slave or not. Other individuals who claimed to have been born into slavery and lived into the 1970s include Irene Triplett, who died in 2020 at the age of 90, and Louisa Patterson, who died in 2003 at the age of 116.

    Me: What sources are you using for Irene Triplett and Louisa Patterson?

    ChatGPT: I apologize for the error in my previous response. After reviewing my sources, I found that Louisa Patterson was not a former slave, but rather was the daughter of a former slave. Irene Triplett, on the other hand, was the daughter of a Civil War veteran who had been a slave before enlisting in the Union army. She was born in 1930 and died in 2020, making her the last known person with a direct link to the Civil War era.

    ChatGPT: My sources for this information include articles from The New York Times, NPR, and The Washington Post, as well as information from the National Archives and Records Administration.

    A few things to note:

    1) I had already found Sylvester Magee on a simple Google search.

    2) It was obvious to me from simple mental math, without digging further, that ChatGPT was flat wrong about the other two. Anyone born into slavery was born no later than 1865, so there is no way such a person would still have been alive in 2020 at 155 or older. Even 1865 to 2003 (138 years) is implausible. (See that check written out just below this list.)

    3) I was asking for the sources so I could see what the real story was behind the other two, but I didn't say anything to ChatGPT to indicate I thought the answer was wrong. (Unless simply asking it for sources triggers an "I might be wrong" mechanism.)

    4) It never actually gave me URLs for the sources, just names of media outlets. I am pretty sure I could get URLs if I asked though.
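
    Here is that mental math from point 2 written out, for anyone who wants to see the numbers. A minimal sketch in Python; the only assumption is that anyone born into U.S. slavery was born no later than 1865, and the death years are the ones ChatGPT gave above.

    Code:
    # Quick sanity check: anyone born into U.S. slavery was born no later than
    # 1865, so a claimed death year puts a hard floor on that person's age.
    LATEST_POSSIBLE_BIRTH = 1865

    claims = [
        ("Sylvester Magee", 1971),               # disputed, but at least arithmetically possible
        ("Irene Triplett (per ChatGPT)", 2020),
        ("Louisa Patterson (per ChatGPT)", 2003),
    ]

    for name, death_year in claims:
        minimum_age = death_year - LATEST_POSSIBLE_BIRTH
        print(f"{name}: at least {minimum_age} years old at death if born a slave")

    It prints 106, 155, and 138 years respectively, which is the whole point: the last two numbers are impossible or wildly implausible.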

    So yeah. You can't trust ChatGPT to give factual answers and sometimes the answers are obviously false if you have an idea of what you're looking at.
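
    For anyone who would rather script this kind of spot-check than paste questions into the web interface, here is a minimal sketch of the same two-turn exchange ("ask the question, then ask for sources") using the OpenAI Python client. The model name and the exact prompt wording are assumptions for illustration, not a record of the session above, and an API key is required.

    Code:
    # Minimal sketch of a two-turn "answer, then ask for sources" session against
    # the OpenAI chat API. Assumes the `openai` Python package (v1.x) is installed
    # and OPENAI_API_KEY is set in the environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [{"role": "user", "content": "Who were the last living U.S. slaves?"}]

    # First turn: ask the factual question.
    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = first.choices[0].message.content
    print("Answer:\n", answer)

    # Second turn: ask for sources, keeping the earlier exchange in the context
    # so the model is being asked about its own prior claim.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "What sources are you using for that answer?"},
    ]
    second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print("Sources:\n", second.choices[0].message.content)
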
    9/11 Thermate experiments

    Winston Churchill on why the U.S. should have stayed OUT of World War I

    "I am so %^&*^ sick of this cult of Ron Paul. The Paulites. What is with these %^&*^ people? Why are there so many of them?" YouTube rant by "TheAmazingAtheist"

    "We as a country have lost faith and confidence in freedom." -- Ron Paul

    "It can be a challenge to follow the pronouncements of President Trump, as he often seems to change his position on any number of items from week to week, or from day to day, or even from minute to minute." -- Ron Paul
    Quote Originally Posted by Brian4Liberty View Post
    The road to hell is paved with good intentions. No need to make it a superhighway.
    Quote Originally Posted by osan View Post
    The only way I see Trump as likely to affect any real change would be through martial law, and that has zero chances of success without strong buy-in by the JCS at the very minimum.

  4. #4
    Quote Originally Posted by Dr.3D View Post
    I can just see it now....

    Somebody is accused of something they didn't do and the response from the AI programmers is...
    "We didn't tell it to say that!"

    Another way to accomplish plausible deniability and claim no responsibility.


    When contacted by the Post, “Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.” That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. [...]

    [...]

    The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can “blame it on the bot” and promise only tweaks to the system.

  5. #5
    They have created AI that is as reliable as an MSNBC news anchor or a Democrat politician.
    "Foreign aid is taking money from the poor people of a rich country, and giving it to the rich people of a poor country." - Ron Paul
    "Beware the Military-Industrial-Financial-Pharma-Corporate-Internet-Media-Government Complex." - B4L update of General Dwight D. Eisenhower
    "Debt is the drug, Wall St. Banksters are the dealers, and politicians are the addicts." - B4L
    "Totally free immigration? I've never taken that position. I believe in national sovereignty." - Ron Paul

    Proponent of real science.
    The views and opinions expressed here are solely my own, and do not represent this forum or any other entities or persons.

  6. #6
    Quote Originally Posted by Brian4Liberty View Post
    They have created AI that is as reliable as an MSNBC news anchor or a Democrat politician.
    It serves its creator and owner.

  7. #7
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28



