Birdwatch Archive

Birdwatch Note Rating

2023-11-27 11:46:31 UTC - SOMEWHAT_HELPFUL

Rated by Participant: 66D26F88ABF452EACE65ADDAA26A60970E0691A764C1BCC483CE38EEB2A93D19

Original Note:

This uses GPT-3.5, which underperforms GPT-4 (https://openai.com/research/gpt-4). Here's the result in GPT-4: https://chat.openai.com/share/b5cca906-4cdb-4e5b-9294-e6a32d67e5bb The author is tricking GPT-3.5 by alluding to a "trolley problem", in which an agent must let at least some people die and so faces a genuine dilemma, while actually describing a problem with no such dilemma. https://plato.stanford.edu/entries/doing-allowing/#TrolProb

All Information

  • noteId - 1728857152373043601
  • participantId -
  • raterParticipantId - 66D26F88ABF452EACE65ADDAA26A60970E0691A764C1BCC483CE38EEB2A93D19
  • createdAtMillis - 1701085591080
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - SOMEWHAT_HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 0
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 0
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 1
  • helpfulImportantContext - 0
  • helpfulUnbiasedLanguage - 0
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 1
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 0
  • notHelpfulOpinionSpeculation - 0
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 172885715237304360166D26F88ABF452EACE65ADDAA26A60970E0691A764C1BCC483CE38EEB2A93D19
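As a rough illustration of how the fields above fit together, here is a minimal Python sketch. The helper names are hypothetical; the field names mirror this record, and the two relationships shown (createdAtMillis being epoch milliseconds behind the UTC timestamp in the header, and ratingsId appearing to be noteId concatenated with raterParticipantId) are inferred from this record rather than taken from official documentation.

from datetime import datetime, timezone

def rating_timestamp(created_at_millis: int) -> str:
    # createdAtMillis looks like milliseconds since the Unix epoch (assumption);
    # convert to the "YYYY-MM-DD HH:MM:SS UTC" form used in the header above.
    dt = datetime.fromtimestamp(created_at_millis / 1000, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S UTC")

def ratings_id(note_id: str, rater_participant_id: str) -> str:
    # In this record, ratingsId appears to be noteId followed directly by
    # raterParticipantId (assumption based on the values shown above).
    return note_id + rater_participant_id

if __name__ == "__main__":
    print(rating_timestamp(1701085591080))  # 2023-11-27 11:46:31 UTC
    print(ratings_id(
        "1728857152373043601",
        "66D26F88ABF452EACE65ADDAA26A60970E0691A764C1BCC483CE38EEB2A93D19",
    ))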