In the last year, the advances made in artificial intelligence have been astounding. AI tools became widely available roughly a year ago, and already their presence is pervasive. At least 15 companies now offer advanced AI tools, and they are being progressively adopted across a variety of sectors. But despite their benefits, AI platforms also have their downsides. Many are worried about AI-driven job displacement, even in presumably protected careers like coding and programming. Likewise, there is the potential that AI could be used with malicious intent or to spread disinformation. Copyright and privacy concerns surrounding AI’s use add to these worries. These current issues have prompted some to advocate for an AI regulatory toolbox.
The challenge of AI regulation involves instituting protections without undermining innovation. There are certainly concerns about AI’s use that deserve attention. However, that doesn’t mean strict oversight is needed or that restrictions should be automatically imposed. In other areas subject to excessive government oversight, businesses can struggle to compete and excel in their marketplaces. In addition, premature oversight, especially of a new technology, risks being either too aggressive or too lenient. This is where the U.S. now finds itself, and it appears the government may be ready to announce an AI regulatory toolbox soon. But before we embrace a U.S. government AI plan, it’s worth examining these potential actions in greater detail.
Concerns Over Unregulated AI Use
When any technology develops rapidly, worries often emerge alongside its advantages. This has been the case with artificial intelligence. Given its capacity to provide a wealth of information in seconds, new threats have appeared. For example, AI might help some nations or private actors develop nuclear weapons capabilities. Before AI, the missing pieces needed to solve such problems may have prevented these developments. The same is true for bioterrorism weapons. These situations pose threats to national security, for which some believe the only solution is an AI regulatory toolbox. But even with such efforts, the challenge of AI regulation in this instance lies in crafting effective policies. Given AI’s rapid growth, there’s no guarantee regulatory efforts will work. And governments tend to be poorly informed when dealing with such technologies.
Of course, these are not the only concerns driving AI regulatory toolbox development. Many fear the potential for AI to create deepfakes and misinformation. Research already shows that AI is prone to such problems, including misinterpretations and incorrect conclusions. But what’s even more worrisome is the intentional creation of such misinformation for malicious or persuasive purposes. Here again, the challenge of AI regulation lies in crafting policies that actually prevent these occurrences. Fact-checking and other strategies are worthwhile. But ultimately, it is AI training and AI platforms themselves that must change. The same applies to protecting consumer privacy rights and safeguarding copyrights on images and creative works. Governments can only go so far in creating effective regulations before causing more harm than good.
The Risks of Excessive Government Oversight
When it comes to a government-created AI regulatory toolbox, the potential for harm is noteworthy. The most obvious threat is to innovation, since any regulation can become an obstacle. Companies that must comply with newly imposed rules naturally divert resources and energy into less-productive areas. Not only does this hamper creative pursuits, but it can also undermine their ability to compete. When it comes to AI, this is a big deal, considering the U.S. is competing with China and other countries. Given this, the challenge of any AI regulation will relate to its effect on ongoing technology development. Safeguards are important, but not to the extent that they undermine competition.
The impact on innovation and competition isn’t the only challenge of AI regulation. Other potential harms of an AI regulatory toolbox also exist. For example, when regulations are overly restrictive, they can make it more difficult to attract top talent. Foreign nations with fewer regulations tend to look more appealing to technology workers. At the same time, AI regulations can add to development costs, since attention to new rules is required. Such restrictions can also delay the adoption of AI by non-technology sectors. If navigating the regulatory environment is too complex, some industries will delay using AI. Clearly, an excessive AI regulatory toolbox can have far-reaching effects.
Say No to Government, Yes to Markets
In Europe, China, and Israel, government-created AI regulatory toolbox pursuits have already begun. The same is expected in the U.S., at least from the executive branch. While executive orders do not carry the same power as legislation, there is cause for alarm. Given the challenge of AI regulation, a government approach is unlikely to be effective. AI technology is too new and advancing too rapidly. In such fields, it’s common for government oversight to go too far and threaten business advantage. Even where technologies are better understood, regulations often fall short of their intentions. They may offer some protections, but they do so at a significant cost.
Instead, the better approach is to let the market develop its own regulations and protections for consumers. Ultimately, the threats that AI poses are common ones that all stakeholders want addressed. We have already seen market-based solutions introduced to deal with image copyright concerns. Nightshade and Glaze are recent software innovations that allow image creators and artists to protect their works. Similar solutions related to consumer privacy, disinformation, and national security threats can be devised. While these market-based efforts could benefit from government support, a government-based AI regulatory toolbox is not required. Because both the challenge of AI regulation and its risks are substantial, markets are better equipped to handle oversight. This is the direction we should be pursuing rather than one that is government-induced.