ChatGPT struggles in medical exams but shows potential
ChatGPT continues to impress. It is constantly improving, offers increasingly advanced capabilities, and powers a growing range of AI-based applications. But is it really as flawless as widely believed? Despite its power, ChatGPT has not managed to pass medical exams.
9:51 AM EST, November 22, 2024
Artificial intelligence is undeniably a cornerstone of our future. Robots already handle many tasks, which is beneficial insofar as it reduces the workload on humans, though it also raises concerns about future employment. Despite these advances, ChatGPT is still not ready for medical diagnosis, as it has failed to pass most medical exams.
ChatGPT couldn't handle the medical exams
People are increasingly probing the limits of artificial intelligence. We know it can mimic voices, create images, and write scientific papers. This raises the question: could it also diagnose human ailments? In theory, yes, since it could match symptoms against a database of diseases. In practice, however, ChatGPT still cannot pass the medical exams that would qualify it to do so.
Researchers from Collegium Medicum UMK ran an experiment in which artificial intelligence sat medical specialty exams. ChatGPT performed poorly in internal medicine, scoring between 48% and 53%, well below the passing threshold. For comparison, average student scores on the same exam ranged from 65% to 72%.
The researchers observed that ChatGPT handles simple questions well, while more complex problems make its answers less precise. Besides internal medicine, ChatGPT also sat other specialty exams, scoring best in allergology (71%) and worst in cardiology (44%).
Can ChatGPT replace a doctor?
For now, artificial intelligence's capabilities are too limited to compete with qualified doctors, though this is expected to change over time. ChatGPT is under continuous development and remains one of the fastest-evolving tools of its kind.
Interestingly, on medical forums ChatGPT is often regarded as a more empathetic source of advice, and some patients argue that its answers are more accurate than those they receive from human specialists on the same platforms.