The speed with which generative AI has made its impact on the world is nothing short of remarkable. Many of its effects are yet to be known, and visions of the future vary from one expert to the next. Big Tech seems well-positioned to dictate AI’s course, but this prospect sparks fear among many, including politicians and government officials who see AI as a social and national security threat. As a result, there’s a growing push for AI government policy to provide effective U.S. AI regulation. Of course, not everyone agrees that such regulation is needed, which is why tensions between business and government are growing in this area.
(It’s perilous to program AI with biases–read why in this Bold story.)
For those wanting tighter U.S. AI regulation, a variety of factors fuel their concerns. Issues involving data ownership rights, antitrust activities, and privacy protections top the list. At the same time, there are those who don’t want to perpetuate a powerful few in Big Tech. Without AI government policy, they believe, smaller tech companies and startups will lose out. This is a key point of debate, however, since AI is already fueling hundreds of new startups, and competition within the AI industry has never been so fierce. Regardless, the question must be asked: should governments regulate AI?
The Past Fueling Present Fears
When it comes to U.S. AI regulation pressures, such government oversight isn’t new. When desktop computers and web software emerged, there were worries over monopolies and industry control. Then Google demonstrated its dominance in search, eventually stimulating concerns about data privacy. And with social media, similar issues, along with the erosion of societal values, were added to the mix. In this regard, calls for AI government policy perhaps aren’t surprising. But there is one major difference: the speed with which fears about the technology have developed. For AI, the honeymoon period during which policymakers were merely intrigued has been much shorter than with prior technological debuts. Already, copyright lawsuits and government agency claims of corporate antitrust are occurring. Whereas this took years in the past, it has taken only months for generative AI.
Part of the problem accelerating demands for U.S. AI regulation is the fact that major tech players already dominate these other areas. Big Tech companies like Google, Microsoft, Meta, and even Amazon are all-powerful in today’s world. And these are the same companies positioning themselves to maintain this power in the brave new world of AI. OpenAI, which has taken the lead in generative AI, has strong backing from Microsoft, as does Mistral. Anthropic, another rising AI star, has partnered with Google and Amazon. This means today’s Big Tech giants will likely be the AI giants of tomorrow. As a result, officials want to avoid repeating with AI the scenario that played out with previous technological breakthroughs. And the only way they see to do so is through AI government policy.
Arguments For and Against AI Regulations
In October of this past year, the U.S. issued an executive order that represented the first step toward U.S. AI regulation. The order called for greater safety, security, and trustworthiness of AI through the development of AI government policy, and it gave several federal agencies a limited number of days to address AI-related concerns in these areas. The arguments in favor of these actions stemmed from fears over potential misuse of AI, including fraud as well as actions leading to discrimination, biased perspectives, and disinformation campaigns. National security and societal protections were the main rationales for these actions, but economic ones were also present: a desire to avoid antitrust practices and to expand AI opportunities for all. These provided the foundations for establishing an AI government policy sooner rather than later.
(Open-source AI might be the future–read why in this Bold story.)
Naturally, AI companies have a much different perspective on U.S. AI regulation. While most businesses support standards for AI platforms, few want a strict AI government policy, and they have good reasons for this preference. For one, officials cite potential antitrust issues, but in actuality, several companies are developing both large and small platforms. Similarly, open-source code and models are available to AI startups and developers, and access to these tools is evidence that monopolistic pursuits aren’t in place. In an ideal world, AI companies would establish their own set of standards as an industry, setting the bar for monitoring disinformation, discrimination, security threats, and privacy concerns. Having standards forced on the industry by a government that poorly understands AI and its potential, however, could be detrimental.
Letting Businesses Lead the Way
The problem with any government oversight, including AI government policy, pertains to its negative repercussions. When strict oversight and regulations exist, creativity and innovation take a step back. Resources must be devoted to ensuring compliance as well as compliance reporting, and given opportunity costs, those investments cannot be used to advance the industry, whether in AI or elsewhere. At the same time, a U.S. AI regulation framework could be ill-informed. We have seen how this is still playing out in the cannabis industry and its access to banking. The failure to include companies in determining standards and best practices causes these types of problems, and the same would occur with U.S. AI regulation if the government acts unilaterally.
That doesn’t mean that no AI government policy is needed. In fact, by leading the way, an AI government policy in the U.S. could establish global practices in AI, which could significantly benefit the country and American businesses down the road. But the key to realizing such policy-related benefits is to include AI technology firms in the policy-making process. In addition to allowing market forces to guide AI advancement, guardrails should be determined with businesses’ input. This is the logical approach to U.S. AI regulation. Should government regulate AI? To a limited extent, perhaps yes, but certainly not on its own. If the U.S. wants to lead in AI development, then allowing technology firms a degree of autonomy is essential.
Generative AI and Higher Education Are the Perfect Match–Read How in this Bold Story!