As the U.S. Sprints Ahead on AI, Values Can’t Be Left Behind

About the authors: Chris Inglis is a visiting professor at the U.S. Air Force Academy and the U.S. Naval Academy and most recently served as National Cyber Director in the White House. Inglis is a former deputy director of the National Security Agency. Mary Aiken is the chair and professor of the Cyberpsychology Department at Capitol Technology University in Washington, D.C., and is the author of The Cyber Effect. Jamil N. Jaffer is the founder and executive director of the National Security Institute at George Mason University Law School and a former senior cybersecurity executive and national security official. Inglis, Aiken, and Jaffer serve on the Strategic Advisory Board of Paladin Capital Group, a technology venture capital firm.

Alongside the rapid development of artificial intelligence has come an explosion of both excitement and fear. Indeed, where you fall on the excitement-to-fear spectrum is probably a good indicator of whether you believe government should be an enabler, a regulator, or a monopolist owner of AI capabilities. The best answer is almost certainly somewhere in the middle.

The three of us have spent careers at the intersection of national security and technology in government and industry, advising allies across the globe, and investing hard-earned pension fund money in this space. We would offer that the right path is to align these roles rather than make false choices among them. Indeed, if society wants to take advantage of the myriad opportunities AI has to offer, while limiting the very real threats that certain applications may pose, urgent action is required to ensure the trust, safety, and security of these capabilities.

The opportunities offered by generative AI are boundless. It has the potential to raise all boats, upskilling workers across a range of industries and enabling them to focus on higher-level, more rewarding tasks. However, some critics are appropriately concerned about misuse and bias in AI. Some even argue that AI could threaten our very existence and ought to be treated like a dangerous pathogen or nuclear weapon.

There is little consensus on what role government should play in these issues. Some have argued that the U.S. government should nationalize the AI industry. Others have advocated a more laissez-faire approach, with government supporting basic research and addressing specific concerns but largely giving the private sector broad license to develop new, game-changing technologies.

The regulation (or not) of AI is a global issue, and governments are already moving in ways that reflect the broad spectrum of societal and governance frameworks. China, unsurprisingly, has imposed detailed regulations, and our European counterparts are headed down the same road. The White House has been more nuanced, announcing a series of voluntary commitments by leading companies, while Congress has initiated a series of conversations on potential legislation. Interestingly, India, a hotbed of tech innovation (and regulation), currently has no specific regulatory framework for AI. Meanwhile, details of a pending U.S. executive order on the government's use of AI, and on potential limits on the export (or employment) of certain capabilities, remain sparse; the order is expected in the near future and will likely help shape the U.S. innovation space through the federal government's purchasing power.

As they try to act quickly, it will be all too easy for governments to overlook human and societal factors. Getting this right matters because generative AI is only the first in a series of massive technological innovations yet to come, from quantum computing to bioengineering at scale.

It’s vital that we protect our current lead in innovation. That matters for economic and national security both in the U.S. and among our allies. That lead depends on the rapid development and deployment of new and novel technologies. To keep it, however, we must ensure that the values at the heart of our free and open society are given equal priority.

Nationalizing AI is clearly incompatible with those values. At every major turn of innovation, some have argued that governments, not the private sector, are best placed to own and protect such technologies. Indeed, protecting society from the perceived perils of technological advancement has often been a key argument for government ownership of the means of production. Thankfully, solving this problem in a way that honors our values is wholly achievable. One need only look back to how government control of cryptography was handled in the early 1990s to see that this can work. Carefully aligning the various interests at stake allowed us to foster innovation, permit responsible public use, and protect our national security.

Moreover, the U.S. has largely rejected government control in favor of private sector-driven capitalism, augmented, when necessary, by the use of limited government power. This has made the U.S. one of the most innovative nations the world has ever known. Abandoning that approach now—at a time of massive change and opportunity—would be a mistake. And it would ignore the clear reality that government, on its own, simply can’t drive—much less constrain—the kind of innovation that is critical to maintaining American and allied leadership in AI and other emerging technologies. 

At the same time, the government's response cannot simply be hands-off. Unleashing market forces without accounting for core societal values could produce bad outcomes, including a backlash that significantly constrains innovation over time. Moreover, with allies already moving to regulate, the opportunity for a purely market-based approach is likely already in the rearview mirror.

To square this circle, the government ought to develop frameworks, based on rapidly evolving private sector best practices, to inculcate trust, safety, and security into the core of the AI development and deployment process. Such protocols should ensure new technology capabilities are resilient, trustworthy, and secure-by-design before deployment. 

Likewise, it’s critical to incentivize the American and allied investors and innovators who are putting money and shoe leather into backing the companies and developing the technologies that promote trust, safety, and security. Doing so will speed success. In parallel, we must reach a consensus on the most effective responses to the real challenges posed by AI.

For example, there is clear consensus in the U.S. that the use of lethal force shouldn’t be delegated to fully automated systems. That is, a human must be “in-the-loop” and must make the final call when lethal force is used, even when technology augments that decision-making. Given this consensus, it seems clear that any AI-focused regulatory or legislative effort should focus on human resilience as well as well-architected technology. If we get it right, humans will remain at the center of the system, enabled and served by technologies arriving with ever greater speed, rather than finding themselves awash in a sea of self-directed systems that seek to replace human aspiration with machine efficiency.

Such an approach may seem obvious and accords well with our values, but to be effective, policymakers must provide clear guidance and direction on human decision-making mediated by AI. In many ways, the issues today resemble those of the early days of computing. We missed an opportunity then to build systems that were inherently resilient and secure by design. Unfortunately, the rush to monetize has left us playing a near-constant game of catch-up on cybersecurity. Let’s not make the same mistake with AI.

Trust, safety, and security must be the watchwords that guide us—whether in investment, innovation, or legislation—as we sprint to take advantage of the vast opportunities being created by AI and other emerging technologies. As a nation, there is little doubt that we can continue to innovate rapidly at scale while also protecting and preserving the values that allow us that opportunity.

Guest commentaries like this one are written by authors outside the Barron’s and MarketWatch newsroom. They reflect the perspective and opinions of the authors. Submit commentary proposals and other feedback to [email protected].
