AI shatters CAPTCHA defenses, sparks major security concerns
The CAPTCHA security system has been cracked by artificial intelligence. Researchers from ETH Zurich have demonstrated that a well-prepared AI model can solve these security "puzzles" so reliably that the system cannot distinguish between humans and machines. The vulnerability affects the reCAPTCHA v2 variant.
9:56 AM EDT, September 27, 2024
TechRadar detailed the discovery based on an analysis shared by the researchers. In the experiments, a popular object-detection model called YOLO was used to solve reCAPTCHA v2 tasks on behalf of a human. These puzzles typically ask the user to select images with specific content, such as those showing traffic lights or motorcycles.
Until now, such tests were generally considered an effective way to verify whether a human was genuinely at the computer or whether a script was performing the task. The study shows, however, that a well-prepared YOLO model, trained on 14,000 street images, could identify the correct images as reliably as a human.
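The researchers' own code is not reproduced here, but the general approach can be pictured with a short, hypothetical sketch: a pretrained YOLO object-detection model (accessed here via the ultralytics Python package, which the article does not name) is run over each tile of the image grid, and a tile is marked for clicking whenever the prompted class is detected. The file paths, grid size, and model weights below are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: selecting CAPTCHA-style grid tiles with a pretrained
# YOLO model. Paths, grid layout, and weights are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small COCO-pretrained model; the researchers fine-tuned on street imagery

TARGET_CLASS = "traffic light"                       # class named in the CAPTCHA prompt
tile_paths = [f"tiles/tile_{i}.png" for i in range(9)]  # hypothetical cropped 3x3 grid tiles

selected = []
for i, path in enumerate(tile_paths):
    result = model(path, verbose=False)[0]           # run detection on one tile
    labels = {result.names[int(c)] for c in result.boxes.cls}  # detected class names
    if TARGET_CLASS in labels:
        selected.append(i)                           # this tile appears to contain the target object

print("Tiles to click:", selected)
```

In practice a browser-automation layer would then click the selected tiles; the detection step above is the part the study attributes to YOLO.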
Even when the model made a mistake, it succeeded on subsequent attempts, since the system allows multiple tries. Moreover, the AI's success rate did not drop even when additional CAPTCHA defenses, such as mouse-movement analysis or browser-history checks, were activated. The AI mimicked human behavior convincingly enough to trick the system, raising serious security concerns.
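The article does not describe how human-like behavior was imitated. Purely as an illustration, one common way to evade simple mouse-movement analysis is to move the cursor along a smooth, slightly randomized curve rather than a straight line. The sketch below generates such a path with a quadratic Bezier curve; the jitter range and step count are assumptions for demonstration only.

```python
# Hypothetical sketch: a "human-looking" cursor path as a quadratic Bezier curve.
import random

def bezier_path(start, end, steps=50):
    """Return a list of (x, y) points curving from start to end."""
    cx = (start[0] + end[0]) / 2 + random.uniform(-100, 100)  # jittered control point
    cy = (start[1] + end[1]) / 2 + random.uniform(-100, 100)
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * start[0] + 2 * (1 - t) * t * cx + t ** 2 * end[0]
        y = (1 - t) ** 2 * start[1] + 2 * (1 - t) * t * cy + t ** 2 * end[1]
        path.append((round(x), round(y)))
    return path

# Example: curve from the page corner toward a tile centered at (640, 480)
print(bezier_path((0, 0), (640, 480))[:5])
```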
This research signals a significant problem for administrators responsible for the security of online services. Although the work is academic, the technique can be put into practice, which makes it crucial to revisit and strengthen website security systems so that AI cannot defeat them. Given the rapid pace of AI development, however, that may prove quite a challenge.