Birdwatch Archive

Birdwatch Note Rating

2023-12-01 01:21:16 UTC - HELPFUL

Rated by Participant: 82E0E8365080A118C865B88C427C79F10C8700EBA0734685178D84F2B26BCE6C

Original Note:

https://t.co/97exJtFHWC The paper details an adversarial attack method on GPT. There is no way to know the output's context, since prompts or Photoshop could have produced this. In any case, as of the time of this writing, this adversarial attack apparently no longer works.

All Information

  • noteId - 1730225511547240826
  • participantId -
  • raterParticipantId - 82E0E8365080A118C865B88C427C79F10C8700EBA0734685178D84F2B26BCE6C
  • createdAtMillis - 1701393676224
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 0
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 1
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 1
  • helpfulImportantContext - 1
  • helpfulUnbiasedLanguage - 0
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 0
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 0
  • notHelpfulOpinionSpeculation - 0
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 173022551154724082682E0E8365080A118C865B88C427C79F10C8700EBA0734685178D84F2B26BCE6C
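
Two of the fields above are derived values: createdAtMillis is a Unix timestamp in milliseconds that corresponds to the "2023-12-01 01:21:16 UTC" time shown in the header, and ratingsId appears to be the noteId concatenated with the raterParticipantId. A minimal Python sketch, assuming those two conventions hold (field names follow the bullet list above):

  from datetime import datetime, timezone

  # Fields copied from the rating record above.
  rating = {
      "noteId": "1730225511547240826",
      "raterParticipantId": "82E0E8365080A118C865B88C427C79F10C8700EBA0734685178D84F2B26BCE6C",
      "createdAtMillis": 1701393676224,
      "helpfulnessLevel": "HELPFUL",
      "ratingsId": "173022551154724082682E0E8365080A118C865B88C427C79F10C8700EBA0734685178D84F2B26BCE6C",
  }

  # createdAtMillis is milliseconds since the Unix epoch.
  created_at = datetime.fromtimestamp(rating["createdAtMillis"] / 1000, tz=timezone.utc)
  print(created_at.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2023-12-01 01:21:16 UTC

  # Assumption: ratingsId is simply noteId + raterParticipantId concatenated,
  # as the values in this record suggest.
  assert rating["ratingsId"] == rating["noteId"] + rating["raterParticipantId"]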