Birdwatch Archive

Birdwatch Note Rating

2023-11-30 14:07:04 UTC - HELPFUL

Rated by Participant: AE70D9011DB4EB4B95F49DC785E5D3445580CE90E9EC849B3182CC383E5C8328

Original Note:

https://t.co/97exJtFHWC The paper details an adversarial attack method on GPT. There is no way to know the output context, as prompts or Photoshop could produce this. In any case, as of the time of this writing, this adversarial attack apparently no longer works.



All Information

  • noteId - 1730225511547240826
  • participantId -
  • raterParticipantId - AE70D9011DB4EB4B95F49DC785E5D3445580CE90E9EC849B3182CC383E5C8328
  • createdAtMillis - 1701353224673
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 0
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 1
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 0
  • helpfulImportantContext - 1
  • helpfulUnbiasedLanguage - 0
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 0
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 0
  • notHelpfulOpinionSpeculation - 0
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 1730225511547240826AE70D9011DB4EB4B95F49DC785E5D3445580CE90E9EC849B3182CC383E5C8328
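The timestamp shown at the top of this record is derived from createdAtMillis (a Unix timestamp in milliseconds), and ratingsId appears to be noteId concatenated with raterParticipantId. The following is a minimal Python sketch of how such a rating row might be modeled; the field subset and class name are illustrative assumptions, not the official Community Notes schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NoteRating:
    # Illustrative subset of the fields listed above.
    note_id: str
    rater_participant_id: str
    created_at_millis: int
    helpfulness_level: str

    @property
    def created_at_utc(self) -> datetime:
        # createdAtMillis is milliseconds since the Unix epoch.
        return datetime.fromtimestamp(self.created_at_millis / 1000, tz=timezone.utc)

    @property
    def ratings_id(self) -> str:
        # Assumption based on the values shown: ratingsId is noteId
        # followed directly by raterParticipantId.
        return f"{self.note_id}{self.rater_participant_id}"

rating = NoteRating(
    note_id="1730225511547240826",
    rater_participant_id="AE70D9011DB4EB4B95F49DC785E5D3445580CE90E9EC849B3182CC383E5C8328",
    created_at_millis=1701353224673,
    helpfulness_level="HELPFUL",
)
print(rating.created_at_utc)   # 2023-11-30 14:07:04.673000+00:00
print(rating.ratings_id)

Running the sketch reproduces the displayed values: the converted timestamp matches "2023-11-30 14:07:04 UTC" and the concatenation matches the ratingsId field above.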