Washington Must Resist Impulsive AI Regulation
On November 30, 2022, OpenAI released the chatbot ChatGPT. While it was not the first chatbot released to the public, it has far surpassed its predecessors. From writing poems in the style of Edgar Allan Poe to holding fluent, nuanced conversations with its users, it is a concrete demonstration of how far AI research has come.
Since its release, many prominent figures, from Elon Musk to the Biden White House, have expressed understandable but misguided concerns over AI's risks. These concerns have prompted calls for regulations to guide AI research, and some have gone so far as to demand a total pause on that research altogether. However, suspicion of AI is driven primarily by ignorance and fearmongering, as was on full display in the recent Senate Judiciary hearing with OpenAI CEO Sam Altman.
At the hearing, lawmakers voiced worries about job displacement, privacy, and even the use of AI to craft messages on taboo topics. The only clear takeaway was how little lawmakers understand AI technology. This combination of ignorance, fear, and the desire to steer an industry that has barely gotten off the ground is not a recipe for well-considered regulation. What is more likely is that legislators will play catch-up with an industry few of them understand and, along the way, fall victim to all the worst pitfalls of regulation.
One such pitfall is the potential for regulators to constrain the AI market to well-established firms, and AI companies know it. One of Altman's suggestions to the Judiciary Committee was that he himself help design the rules for AI development. To state the obvious, this is a blatant attempt to write rules favorable to OpenAI and to deny potential competitors the regulation-free environment that ChatGPT and its designers benefited from.
Another bad idea from the Committee was creating an expert regulatory body to oversee the industry; in other words, the same solution Washington has reached for with every large industry that has appeared before it. The problem with this broad approach is twofold. First, the concerns around AI are primarily hypothetical, so legislation addressing them would be grounded in speculation rather than concrete realities. Second, AI is a general-purpose tool used across other sectors. A single regulator cannot sensibly govern a technology embedded in education, medicine, and even ordinary office work.
None of this is to say that AI will not require new regulations and laws, as every other industry in America does; it most certainly will. But rules should be based on concrete realities, not fear of the unknown or a desire to stay on top of every new industry. Lawmakers should also learn from their past failures with market concentration and regulatory capture, and ensure that new laws apply equally across the industry rather than shielding incumbents.
There is more potential harm than benefit in the government taking an activist role in shaping nascent AI. Despite the pop-culture consensus, AI will not reenact the Terminator films in real time or trap us in the Matrix. AI alignment, if it ever becomes a problem, will be a slowly approaching one. There is no indication that ChatGPT or any other AI is close to achieving any semblance of consciousness, nor is there any consensus that artificial general intelligence is even possible: a system performing commands that would require intelligence from a human does not make it intelligent.

Concerns about job displacement, while plausible, should be viewed through a historical lens. Every innovation displaces jobs, but without fail it also creates new and better jobs that nobody foresaw when the innovation first entered the market. Already, ChatGPT has shown remarkable promise in assisting with medical diagnoses based on the data it was trained on. Just this once, the government should try a watchful-waiting approach and avoid damaging an industry before it is established.