Birdwatch Archive

Birdwatch Note Rating

2023-11-30 14:40:22 UTC - HELPFUL

Rated by Participant: 5015FD0A7E4E6904E7772D050A233BEDF157E5CA8DB66BC9782296547AEDB2E1

Original Note:

https://t.co/97exJtFHWC The paper details an adversarial attack method on GPT. There is no way to know the context of the output, since prompts or Photoshop could have created it. In any case, as of the time of this writing, this adversarial attack apparently no longer works.

All Information

  • noteId - 1730225511547240826
  • participantId -
  • raterParticipantId - 5015FD0A7E4E6904E7772D050A233BEDF157E5CA8DB66BC9782296547AEDB2E1
  • createdAtMillis - 1701355222541
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 1
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 0
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 1
  • helpfulImportantContext - 1
  • helpfulUnbiasedLanguage - 0
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 0
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 0
  • notHelpfulOpinionSpeculation - 0
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 17302255115472408265015FD0A7E4E6904E7772D050A233BEDF157E5CA8DB66BC9782296547AEDB2E1
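
The fields above follow the schema used in the public Community Notes ratings data: createdAtMillis is a Unix timestamp in milliseconds (1701355222541 corresponds to 2023-11-30 14:40:22 UTC, matching the header), and in this record ratingsId is simply the noteId concatenated with the raterParticipantId. A minimal Python sketch of such a record follows; it assumes only the field names listed here, and the NoteRating class and its helper methods are illustrative, not part of the archive or any official library.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class NoteRating:
        # One note-rating record, using the field names shown in the list above.
        note_id: str
        rater_participant_id: str
        created_at_millis: int
        version: int
        helpfulness_level: str
        helpful_clear: int = 0
        helpful_addresses_claim: int = 0
        helpful_important_context: int = 0

        def created_at_utc(self) -> datetime:
            # createdAtMillis is a Unix timestamp in milliseconds.
            return datetime.fromtimestamp(self.created_at_millis / 1000, tz=timezone.utc)

        def ratings_id(self) -> str:
            # In this record, ratingsId is noteId concatenated with raterParticipantId.
            return self.note_id + self.rater_participant_id

    rating = NoteRating(
        note_id="1730225511547240826",
        rater_participant_id="5015FD0A7E4E6904E7772D050A233BEDF157E5CA8DB66BC9782296547AEDB2E1",
        created_at_millis=1701355222541,
        version=2,
        helpfulness_level="HELPFUL",
        helpful_clear=1,
        helpful_addresses_claim=1,
        helpful_important_context=1,
    )

    print(rating.created_at_utc())  # 2023-11-30 14:40:22.541000+00:00
    print(rating.ratings_id())      # 17302255115472408265015FD0A7E4E6904E7772D050A233BEDF157E5CA8DB66BC9782296547AEDB2E1

Running the two print statements reproduces the header timestamp and the ratingsId value shown in the list above, which is how those two derived fields relate to the raw ones.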