Birdwatch Archive

Birdwatch Note Rating

2023-11-30 22:28:14 UTC - HELPFUL

Rated by Participant: 208D39263B75707E98B369D91CE517E2252105FB74BCECCB5519B4DAE14B805C

Original Note:

https://t.co/97exJtFHWC The paper details an adversarial attack method on GPT. There is no way to know the context of the output, since prompts or Photoshop could have produced it. In any case, as of this writing the adversarial attack apparently no longer works.



All Information

  • noteId - 1730225511547240826
  • participantId -
  • raterParticipantId - 208D39263B75707E98B369D91CE517E2252105FB74BCECCB5519B4DAE14B805C
  • createdAtMillis - 1701383294825
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 1
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 1
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 1
  • helpfulImportantContext - 1
  • helpfulUnbiasedLanguage - 1
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 0
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 0
  • notHelpfulOpinionSpeculation - 0
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 1730225511547240826208D39263B75707E98B369D91CE517E2252105FB74BCECCB5519B4DAE14B805C
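The bullet list above mirrors a single row of the public Community Notes ratings data. Below is a minimal Python sketch, assuming the record is available as a plain dict keyed by the field names listed above (the values are copied from this entry), showing how the derived fields fit together: createdAtMillis converts to the UTC time shown in the header, ratingsId appears to be the noteId concatenated with the raterParticipantId, and the helpful* fields set to 1 are the tags the rater selected.

from datetime import datetime, timezone

# Values copied from the record above; other fields omitted for brevity.
record = {
    "noteId": "1730225511547240826",
    "raterParticipantId": "208D39263B75707E98B369D91CE517E2252105FB74BCECCB5519B4DAE14B805C",
    "createdAtMillis": 1701383294825,
    "helpfulnessLevel": "HELPFUL",
    "helpfulClear": 1,
    "helpfulGoodSources": 1,
    "helpfulAddressesClaim": 1,
    "helpfulImportantContext": 1,
    "helpfulUnbiasedLanguage": 1,
    "ratingsId": "1730225511547240826208D39263B75707E98B369D91CE517E2252105FB74BCECCB5519B4DAE14B805C",
}

# createdAtMillis is a Unix timestamp in milliseconds; converting it to UTC
# reproduces the human-readable time shown at the top of this entry.
created_at = datetime.fromtimestamp(record["createdAtMillis"] / 1000, tz=timezone.utc)
print(created_at.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2023-11-30 22:28:14 UTC

# ratingsId looks like the noteId concatenated with the raterParticipantId.
assert record["ratingsId"] == record["noteId"] + record["raterParticipantId"]

# The helpful* tag fields with a value of 1 are the reasons the rater gave.
selected_tags = [key for key, value in record.items()
                 if key.startswith("helpful") and value == 1]
print(selected_tags)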