It started out as a reliable way to protect websites from the blight of automated bots. But the Completely Automated Public Turing test to tell Computers and Humans Apart, or CAPTCHA, may have met its match.
Recent advances in AI software have proven very successful at beating many versions of this widely used test.
By using machine learning to pass the tests, researchers have struck a massive blow to the reliability of the CAPTCHA system; in many cases, their automated bot traffic was mistaken for genuine human users.
An excellent track record but not unbroken
CAPTCHA tests have, up until now, proved very successful at distinguishing bots from human beings. Google’s reCAPTCHA is so stringent that even human participants manage to complete the test only 87% of the time.
But now, a team of hot-shot developers funded by Vicarious, a U.S.-based AI firm backed by Amazon’s Jeff Bezos and Facebook founder Mark Zuckerberg, appears to have cracked it. The team has developed an algorithm that mimics how the human brain decodes the complex visual puzzles displayed during CAPTCHA tests.
Scaled-down software, barrier-busting bots
Before these developments, computer scientists relied on neural networks to identify images. These groups of individual units work together to analyze the separate layers of an image to achieve a collective result.
The trouble with this method is that thousands, even millions of examples are required for the AI technology to learn the skills it needs to complete a task.
The Vicarious group’s Recursive Cortical Network (RCN) software is different. It requires far less computing power, and far fewer human-labeled examples, to achieve positive results. RCN uses algorithms to analyze the pixels in a given example and identify the image.
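Vicarious has not published RCN’s internals here, so as a loose, hedged illustration of the underlying idea — recognizing a shape from its pixels using a single stored example rather than millions of training samples — consider this toy template matcher (all names and glyphs are invented for the sketch; this is not RCN itself):

```python
# Toy illustration: identify a binary "glyph" by pixel agreement with one
# stored example per class, rather than by training on huge datasets.
# This is NOT Vicarious's RCN, only a sketch of data-efficient matching.

# 5x5 binary glyphs stored as tuples of rows (1 = ink, 0 = blank)
TEMPLATES = {
    "T": ((1, 1, 1, 1, 1),
          (0, 0, 1, 0, 0),
          (0, 0, 1, 0, 0),
          (0, 0, 1, 0, 0),
          (0, 0, 1, 0, 0)),
    "L": ((1, 0, 0, 0, 0),
          (1, 0, 0, 0, 0),
          (1, 0, 0, 0, 0),
          (1, 0, 0, 0, 0),
          (1, 1, 1, 1, 1)),
}

def pixel_agreement(a, b):
    """Fraction of pixels on which two same-sized glyphs agree."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    matches = sum(1 for x, y in zip(flat_a, flat_b) if x == y)
    return matches / len(flat_a)

def classify(glyph):
    """Return the template label whose pixels best match the input."""
    return max(TEMPLATES, key=lambda label: pixel_agreement(glyph, TEMPLATES[label]))

# A noisy "T" with one stray pixel is still closest to the T template.
noisy_t = ((1, 1, 1, 1, 1),
           (0, 0, 1, 0, 0),
           (0, 0, 1, 0, 0),
           (0, 1, 1, 0, 0),   # stray pixel
           (0, 0, 1, 0, 0))
print(classify(noisy_t))  # prints T
```

A real CAPTCHA breaker faces distorted, overlapping characters that simple templates cannot handle, which is where RCN’s more sophisticated, brain-inspired modeling reportedly comes in.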
A staggering success rate
The results from Vicarious have been astounding. The group recorded a 67% success rate against reCAPTCHA and a 57% success rate against PayPal’s and Yahoo!’s CAPTCHA tests.
Back in 2013, Vicarious announced it had beaten Google’s text-based CAPTCHA test with a 90% success rate. But, with massive advancements in the technology since then, the latest reported developments are even more impressive.
Automated attacks are imminent
CAPTCHA was initially developed to halt the spread of fake accounts and websites. The technology has evolved into a method for protecting sites against automated attacks. But all this is about to change.
Malicious actors will undoubtedly hijack the algorithms created to beat CAPTCHA and use AI to commit cybercrimes. Even now, criminals are using existing AI technologies to beat the system.
Sentry MBA is a popular tool used by hackers to facilitate credential stuffing attacks. In these scenarios, scores of stolen credentials are tested against third-party sites to gain access to user accounts.
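Credential stuffing has a recognizable signature from the defender’s side: a single source racking up failed logins across many different usernames. As a minimal sketch (the threshold and all names are illustrative inventions, not taken from any particular product), a site might flag that pattern like this:

```python
from collections import defaultdict

# Sketch of a credential-stuffing detector: flag a source IP whose failed
# logins span many DISTINCT usernames -- the signature of a stolen
# credential list being replayed. The threshold here is illustrative.
DISTINCT_USER_THRESHOLD = 3

def suspicious_ips(failed_logins, threshold=DISTINCT_USER_THRESHOLD):
    """failed_logins: iterable of (source_ip, username) failed attempts.
    Returns the set of IPs whose failures span >= threshold distinct users."""
    users_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        users_per_ip[ip].add(user)
    return {ip for ip, users in users_per_ip.items() if len(users) >= threshold}

attempts = [
    ("10.0.0.5", "alice"), ("10.0.0.5", "bob"),
    ("10.0.0.5", "carol"), ("10.0.0.5", "dave"),     # one IP, many users
    ("192.168.1.9", "alice"), ("192.168.1.9", "alice"),  # one user retrying
]
print(suspicious_ips(attempts))  # {'10.0.0.5'}
```

In practice, defenses layer signals like this with rate limiting and device fingerprinting, because attackers rotate IPs to dilute any single counter.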
Hackers also use Sentry MBA’s optical character recognition function, powered by machine learning, to attempt CAPTCHA bypass. In many cases, it’s a resounding success.
Today, automated attacks are on the rise and even the most secure CAPTCHA tests are beatable. There’s never been a more critical time to secure your user accounts.