After a long and fierce internal debate, I’ve finally decided to start posting my thoughts about an issue that is being drowned out by the AI hype. I have a presence on LinkedIn, and as anyone who spends time on that platform knows, AI is the subject of many, if not most, posts and articles. My general view of AI is positive: I use it in my practice, and it has been a beneficial tool for my work. But beneath the surface, my inner contrarian keeps coming back to the same thought.
It begins with AI agents. Agents are the next step in AI’s progression, and the hype around them is unavoidable. I can’t count the number of articles and social media posts I’ve read in the past year claiming that agents let you get work done while you sleep. Agents are another powerful tool I’ve used and benefited from, but they’ve spurred a recurring thought that I just can’t shake.
The title of this website and newsletter is Beyond CAPTCHA, so I should probably explain what I mean by that. Unless you work in information technology or security, you may not know what a CAPTCHA is or what it does. I don’t work in either of those fields, but I’ve become increasingly interested in CAPTCHAs as I’ve followed the development of AI tools. CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” Most descriptions of CAPTCHA and reCAPTCHA that I’ve read are technical, but at the risk of oversimplifying, a CAPTCHA is a website security tool used to distinguish human interaction from automated bots. It has been around for over two decades. Most people know CAPTCHAs as those annoying fuzzy, squiggly letters and numbers you’re required to retype when logging into a website, or, alternatively, a grid of pictures where you must click every square containing a traffic light or a bicycle before being allowed to proceed.
I’ve struggled with CAPTCHAs just like everyone else (“Is that part of a bike in that square?”), but I recognize their importance. The trouble is that while CAPTCHAs have been a useful, albeit flawed, web security tool, they are no longer effective at distinguishing human interaction from AI agents. In fact, research from over three years ago found that AI is actually better at solving CAPTCHA puzzles than humans are. That’s a problem.
Web security has slowly moved away from the human/bot distinction and toward a trust/no-trust evaluation. That makes sense, but I’d argue that we still need to be able to distinguish between a human and an AI agent, not only as a practical matter of web security, but also for the sake of civilization. I know that sounds like hyperbole, but knowing whether you’re interacting with a human or an AI on social media, or while gaming or dating online, seems pretty important.
My plan moving forward is to investigate and learn what comes after CAPTCHA; in other words, how will we tell whether we’re interacting with a human on the web? I approach the question from a legal, non-technical point of view, while acknowledging that technical tools will be central to answering it. I’m also investigating as someone in the financial services industry, where fraud is already a very big problem. Finally, most tech news and coverage on this topic comes from the coasts, and I believe an upper-Midwest perspective might be a useful, thought-provoking alternative to the siloed echo chambers on the coasts and within the tech companies themselves.
I don’t pretend to have the answers. The joke is that lawyers tend to answer questions by saying, “It depends.” LinkedIn would have you believe that it is career and reputational suicide for a lawyer to admit they don’t know the answer to a question. Despite those risks, I’ll go where my curiosity leads me, and share what I learn here. I hope that others will find it interesting and useful too. Please join me; I’m excited to see where this leads.