Birdwatch Archive

Birdwatch Note Rating

2023-11-30 21:55:09 UTC - HELPFUL

Rated by Participant: BB4251E6804CB0FCC8F0057D51F4D18CE6511A839AE3DB3C3118DE61E8342645

Original Note:

https://t.co/97exJtFHWC The paper details an adversarial attack method on GPT. No way to know output context as prompts or photoshop could create this. In any case, as of the time of this writing this adversarial attack apparently no longer works.



All Information

  • noteId - 1730225511547240826
  • participantId -
  • raterParticipantId - BB4251E6804CB0FCC8F0057D51F4D18CE6511A839AE3DB3C3118DE61E8342645
  • createdAtMillis - 1701381309845
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 1
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 1
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 0
  • helpfulImportantContext - 0
  • helpfulUnbiasedLanguage - 0
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 0
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 0
  • notHelpfulOpinionSpeculation - 0
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 1730225511547240826BB4251E6804CB0FCC8F0057D51F4D18CE6511A839AE3DB3C3118DE61E8342645
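A minimal Python sketch of how the raw fields above relate to the summary at the top of the record: createdAtMillis is a Unix epoch timestamp in milliseconds (it converts to the "2023-11-30 21:55:09 UTC" shown above), and ratingsId is the noteId concatenated with the raterParticipantId. The variable names below are illustrative, not part of any official Birdwatch tooling.

  from datetime import datetime, timezone

  # Fields copied verbatim from the rating record above.
  note_id = "1730225511547240826"
  rater_participant_id = (
      "BB4251E6804CB0FCC8F0057D51F4D18CE6511A839AE3DB3C3118DE61E8342645"
  )
  created_at_millis = 1701381309845

  # createdAtMillis is milliseconds since the Unix epoch; converting it
  # reproduces the UTC timestamp in the record header.
  created_at = datetime.fromtimestamp(created_at_millis / 1000, tz=timezone.utc)
  print(created_at.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2023-11-30 21:55:09 UTC

  # ratingsId is simply noteId + raterParticipantId.
  assert note_id + rater_participant_id == (
      "1730225511547240826"
      "BB4251E6804CB0FCC8F0057D51F4D18CE6511A839AE3DB3C3118DE61E8342645"
  )

The same conversion applies to any rating row in the archive, so it can be used to sort or filter ratings by time when working with an export of these records.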