The federal government created the National Security Commission on AI in 2018 to recommend ways to advance the development of AI and related technologies to address national security and defense needs. The independent, bipartisan group of technologists, national security professionals, business executives and academic leaders released its final report in March. The 756-page report includes dire warnings about the U.S. falling behind adversaries that are already using AI to spread disinformation, hone cyberattacks and gain a technological advantage on the battlefield. It also offers comprehensive recommendations for boosting AI development in the U.S. and emphasizes AI’s vast potential for good.

Experts say the use of trustworthy, reliable AI across government is essential to ensuring both public and agency confidence in the technology and its outcomes. In a recent survey of FCW readers, 60% of respondents said the biggest obstacle to using AI was a lack of employees with the right skill set, followed closely by budget constraints (54%) and legacy technology that doesn’t support or integrate with AI (42%).

Fortunately, government leaders are looking for ways to facilitate AI adoption. In June, the Biden administration established a task force to create a blueprint for the National AI Research Resource (NAIRR), as specified in the National AI Initiative Act of 2020. Read the latest insights from industry thought leaders in Carahsoft’s Innovation in Government® report on AI.
“Federal spending on artificial intelligence and machine learning (AI/ML) has been growing year over year and could exceed $4 billion by 2023, according to a report by Bloomberg Government. In addition to the increased spending, in 2021 there have been a number of federal actions that further lay the groundwork for AI adoption. Most notably, the National Security Commission on AI called for the government to invest $200 billion in AI over the next 10 years. The federal government is now investing in AI at all levels, but there is much work yet to be done to enable the government’s adoption of AI. NVIDIA, a leader in AI computing, has been helping to enable the government’s AI journey in five key areas.”
Read more insights from NVIDIA’s Inception Lead for Public Sector, Margaret Amori.
“The outcome of an AI model should be properly explained and communicated, and agencies must follow ethical standards for responsible AI. The Defense Department’s recent memo on AI spells out the importance of ensuring that the technology is traceable, governable and reliable, which maps to our core tenets at H2O.ai. Fortunately, the industry continues to make advances in the interpretability of increasingly complex machine learning models. There are many methods and tools that enable users to automatically create documentation to explain how a model works, identify biased data and detect changes in important elements, such as data drift. By enhancing operational transparency and oversight, agencies can ensure that they’re using ethical AI.”
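One of the checks mentioned above, detecting data drift, can be illustrated with a simple statistic. The sketch below uses the Population Stability Index (PSI), a common generic drift measure; it is not an H2O.ai API, and all names are illustrative.

```python
# Minimal sketch: flag data drift by comparing a model's training-time
# feature distribution against what it sees in production. Uses the
# Population Stability Index (PSI), a common drift measure; stdlib only.
from collections import Counter
import math

def psi(expected, actual, bins=10):
    """PSI between two numeric samples, bucketed on expected's range.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 major drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def bucket(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        n = len(xs)
        # floor empty buckets at a small epsilon so the log stays finite
        return [max(counts.get(i, 0) / n, 1e-4) for i in range(bins)]
    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]        # training-time distribution
shifted = [0.5 + i / 200 for i in range(100)]   # production data, shifted up
print(psi(baseline, baseline))        # 0.0: identical distributions
print(psi(baseline, shifted) > 0.25)  # True: major drift, retrain or review
```

In practice a monitoring pipeline would compute a statistic like this per feature on a schedule and alert when it crosses a threshold, which is the kind of automated oversight the quote describes.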
Read more insights from H2O.ai’s Vice President of Federal, Rohit Dhanda.
“The problems that government needs to solve are often massive and complex. For example, huge and varied volumes of data are necessary for agencies to respond to natural and man-made disasters. It’s impossible for humans to glean insights from information at the scale and speed that machines can, which means AI is a perfect fit for government challenges. The optimal strategy is to consolidate AI tools and choose a platform that supports all of them. That approach requires less work than stitching together disparate tools, but more importantly, it ensures that those tools will work seamlessly together. As a result, agencies can streamline the way AI models are trained and deployed, and they can benefit from the expertise of the vendor’s team of AI experts. The goal is a turnkey AI solution that will scale in response to agencies’ needs.”
Read more insights from Clarifai’s Founder and CEO, Matt Zeiler.
“The government’s progress on cloud adoption is central to its success with AI. Because agencies are trying to find answers to new questions, they need greater and more flexible compute and storage power than they have had in the past. Cloud technology provides a robust platform for those activities that is scalable to meet the demands of machine learning algorithms. AI is fueled by data, but information sharing has been a challenge in government for quite some time because organizational culture often encourages data owners to maintain discrete control over their information. Agencies must change that attitude so people can use whatever data they need and are entitled to access to address mission challenges.”
Read more insights from Cloudera Government Solutions’ President, Rob Carey.
“Let’s say a program office wants to buy an algorithm to understand and better inform investments in public housing. Such an algorithm can make sure that every dollar spent is going to the best use. But what if the data from one part of the country is a little different? Or what if an unusual situation happens, such as a drought slowly displacing thousands of people? These subtle changes might result in untrustworthy results. Performance scores will remain high, and yet millions of dollars will be misspent and people will be worse off. Every day, new weaknesses are discovered as algorithms are revealed to generate racist, sexist or other undesirable behaviors. How do agencies know if an algorithm is reliable? How can they compare algorithms from different vendors? These problems are slowing federal AI programs to a snail’s pace.”
Read more insights from CalypsoAI’s Director of Product, Mitchell Sipus.
“Deciding how to leverage government data to increase the safety and security of our country is a central question in computer vision. Most solutions rely on machine learning models that have been trained with real-world data. Unfortunately, 80% of the work required for an artificial intelligence project is collecting and preparing data. As a result, capturing and labeling the right data becomes a heavy resource burden. That’s why synthetic data is a game-changer in AI. It reduces the time and costs involved in training the models because it removes the need for manual collection and labeling.”
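The appeal of synthetic data is that labels come free: because the scene is generated programmatically, the ground truth is known by construction. The toy sketch below illustrates the idea with a stand-in "renderer"; the names are ours, not a CVEDIA API, and a real pipeline would use a 3-D engine rather than nested lists.

```python
# Hypothetical sketch of synthetic training data for computer vision:
# render simple scenes programmatically so every sample arrives with a
# ground-truth label, removing manual collection and annotation.
import random

def make_sample(size=32, obj=6, rng=random):
    """Return a size x size grayscale 'image' (nested lists) containing one
    bright square, plus its ground-truth bounding box (x, y, w, h)."""
    x = rng.randrange(0, size - obj)
    y = rng.randrange(0, size - obj)
    img = [[0.0] * size for _ in range(size)]
    for r in range(y, y + obj):
        for c in range(x, x + obj):
            img[r][c] = 1.0
    return img, (x, y, obj, obj)  # the label is known by construction

random.seed(0)
dataset = [make_sample() for _ in range(1000)]  # 1,000 labeled samples
img, (x, y, w, h) = dataset[0]
print(len(dataset), img[y][x], (w, h))  # 1000 labeled pairs, no annotators
```

Scaling from 1,000 to 1,000,000 samples here is a loop bound, not a labeling budget, which is the cost reduction the quote points to.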
Read more insights from CVEDIA’s Head of Data Science, Miguel Ferreira.
“Before we can scale AI and machine learning solutions for ourselves or our customers, we must first provide frictionless access to the robust tools and processes that every data scientist requires. To do that, we tap the best open-source software, develop some of our own internally and partner with leading tech companies like NVIDIA. Our AI Factory is a modular ecosystem for training, deploying and then sustaining AI models at scale. It streamlines access to compute, data, software and security to eliminate many of the complexities that can plague machine learning operations. As a result, users can focus on building models that solve complex challenges and allow them to iterate rapidly.”
Read more insights from Lockheed Martin’s Vice President of AI, Justin Taylor, Director of AI Foundations, Greg Forrest, and Director of AI Innovations, Mike Harasimowicz.
“AI and high-performance computing are intrinsically tied together. To process data and AI algorithms at the speed the technology requires, agencies need to upgrade their hardware, and HPC architectures are the natural choice. Today, agencies can essentially buy HPC systems for AI workloads that are scalable, flexible and efficient right off the shelf. Dell Technologies and other companies are building infrastructure solutions, and customers are layering on the software stack from places like GitHub. Companies are also creating composable architectures with virtual workloads across compute, memory, networking, storage devices and software.”
Read more insights from Dell Technologies.
Download the full Innovation in Government® report for more insights from these government AI leaders and additional industry research from FCW.