Birdwatch Archive

Birdwatch Note Rating

2023-01-25 20:06:54 UTC - HELPFUL

Rated by Participant: 070CCDC8D69338B729D71F68BA6CCCD6DA3867C15B1FB4C5603A7A38D4EFD0BE

Original Note:

The cited article (which has not been peer-reviewed) does not conclude that ChatGPT had “passed” the USMLE, as has been stated in this tweet. Rather, the results approached passing but did not meet/reach it, with accuracy for Step 1 68.0%, Step 2CK 58.3%, and Step 3 62.4%. https://t.co/GGKbYfeUFR



All Information

  • noteId - 1617787710852177920
  • participantId -
  • raterParticipantId - 070CCDC8D69338B729D71F68BA6CCCD6DA3867C15B1FB4C5603A7A38D4EFD0BE
  • createdAtMillis - 1674677214365
  • version - 2
  • agree - 0
  • disagree - 0
  • helpful - 0
  • notHelpful - 0
  • helpfulnessLevel - HELPFUL
  • helpfulOther - 0
  • helpfulInformative - 0
  • helpfulClear - 1
  • helpfulEmpathetic - 0
  • helpfulGoodSources - 0
  • helpfulUniqueContext - 0
  • helpfulAddressesClaim - 1
  • helpfulImportantContext - 1
  • helpfulUnbiasedLanguage - 0
  • notHelpfulOther - 0
  • notHelpfulIncorrect - 0
  • notHelpfulSourcesMissingOrUnreliable - 0
  • notHelpfulOpinionSpeculationOrBias - 0
  • notHelpfulMissingKeyPoints - 0
  • notHelpfulOutdated - 0
  • notHelpfulHardToUnderstand - 0
  • notHelpfulArgumentativeOrBiased - 0
  • notHelpfulOffTopic - 0
  • notHelpfulSpamHarassmentOrAbuse - 0
  • notHelpfulIrrelevantSources - 0
  • notHelpfulOpinionSpeculation - 0
  • notHelpfulNoteNotNeeded - 0
  • ratingsId - 1617787710852177920070CCDC8D69338B729D71F68BA6CCCD6DA3867C15B1FB4C5603A7A38D4EFD0BE
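A rating record like the one above can be looked up in the public Community Notes (Birdwatch) data export. The following is a minimal sketch, assuming a tab-separated ratings file named ratings-00000.tsv with the column names listed above; the file name and exact schema are assumptions and may differ from the current download. Note that ratingsId is simply the noteId concatenated with the raterParticipantId.

```python
# Minimal sketch: locate this rating in an assumed Community Notes ratings export
# ("ratings-00000.tsv" is a placeholder file name; columns follow the fields above).
import pandas as pd

NOTE_ID = "1617787710852177920"
RATER_ID = "070CCDC8D69338B729D71F68BA6CCCD6DA3867C15B1FB4C5603A7A38D4EFD0BE"

# Read IDs as strings so large snowflake IDs are not mangled by numeric parsing.
ratings = pd.read_csv(
    "ratings-00000.tsv",
    sep="\t",
    dtype={"noteId": str, "raterParticipantId": str},
)

row = ratings[
    (ratings["noteId"] == NOTE_ID)
    & (ratings["raterParticipantId"] == RATER_ID)
]

# The helpful* columns are 0/1 tag flags; list the tags this rater selected.
tag_cols = [
    c for c in ratings.columns
    if c.startswith("helpful") and c != "helpfulnessLevel"
]
if not row.empty:
    selected = [c for c in tag_cols if row.iloc[0][c] == 1]
    print(row.iloc[0]["helpfulnessLevel"], selected)
```

For the record shown here, this would print HELPFUL along with the selected tags helpfulClear, helpfulAddressesClaim, and helpfulImportantContext.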