Birdwatch Archive

Birdwatch Note Rating

2024-05-19 02:08:32 UTC - NOT_HELPFUL

Rated by Participant: 44D37B28BB48DC159452A9CA33131B81BAD7EEE7B76CE00D57D29AE6A86C7482

Original Note:

Language models like ChatGPT can provide inconsistent responses, particularly on sensitive or controversial topics. The specific example provided, where ChatGPT allegedly exhibited bias against Palestinians, could be an instance of hallucination rather than actual bias. https://chatgpt.com/share/3e567ef1-2c6d-4b85-a490-4e2de975d36d https://www.cnbc.com/2023/05/31/openai-is-pursuing-a-new-way-to-fight-ai-hallucinations.html

All Information

  • noteId - 1791970563688165660
  • participantId -
  • raterParticipantId - 44D37B28BB48DC159452A9CA33131B81BAD7EEE7B76CE00D57D29AE6A86C7482
  • createdAtMillis - 1716084512348
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - NOT_HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 0
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 0
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 0
  • helpfulImportantContext - 0
  • helpfulUnbiasedLanguage - 0
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 1
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 1
  • notHelpfulOpinionSpeculation - 1
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 179197056368816566044D37B28BB48DC159452A9CA33131B81BAD7EEE7B76CE00D57D29AE6A86C7482
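The fields above mirror one row of the public Community Notes (Birdwatch) ratings data: createdAtMillis is a Unix timestamp in milliseconds, and ratingsId is the noteId and raterParticipantId concatenated. Below is a minimal Python sketch of how such a record might be represented and sanity-checked; the dataclass and property names are illustrative, not part of any official tooling, and only a subset of the rating fields is modeled.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class NoteRating:
    """One Birdwatch/Community Notes rating record (subset of the fields listed above)."""
    note_id: str
    rater_participant_id: str
    created_at_millis: int
    helpfulness_level: str
    not_helpful_missing_key_points: int = 0
    not_helpful_irrelevant_sources: int = 0
    not_helpful_opinion_speculation: int = 0

    @property
    def created_at_utc(self) -> datetime:
        # createdAtMillis is milliseconds since the Unix epoch.
        return datetime.fromtimestamp(self.created_at_millis / 1000, tz=timezone.utc)

    @property
    def ratings_id(self) -> str:
        # ratingsId is simply noteId followed by raterParticipantId.
        return self.note_id + self.rater_participant_id


rating = NoteRating(
    note_id="1791970563688165660",
    rater_participant_id="44D37B28BB48DC159452A9CA33131B81BAD7EEE7B76CE00D57D29AE6A86C7482",
    created_at_millis=1716084512348,
    helpfulness_level="NOT_HELPFUL",
    not_helpful_missing_key_points=1,
    not_helpful_irrelevant_sources=1,
    not_helpful_opinion_speculation=1,
)

print(rating.created_at_utc)  # 2024-05-19 02:08:32.348000+00:00, matching the header timestamp
print(rating.ratings_id)      # matches the ratingsId field above
```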