Building Mission-Driven AI That Lasts: A Federal Agency Roadmap for Success

A recent Massachusetts Institute of Technology (MIT) study revealed that 95% of enterprise artificial intelligence (AI) pilots fail to deliver measurable returns. For Federal agencies managing citizen data, classified information and critical infrastructure, this is not just a learning curve; it is a fundamental breakdown of how AI initiatives are conceived and executed. The disconnect between AI proliferation and AI success stems from a common pattern—agencies prioritizing tools over outcomes, launching disconnected pilots without enterprise alignment and lacking the governance structures to ensure accountability. The path forward requires a deliberate shift: starting with mission-driven use cases, building on clean and governed agency data and ensuring sustainable adoption through people-centered strategies.

Mission-Driven Use Case Development First, Technology Second

The fastest way to stall an AI initiative is to start with the technology instead of the mission. Too often, agencies approach AI adoption by asking “what can we do with generative AI (GenAI)?” rather than “what operational problem needs solving?” This approach yields pilots that work in limited scenarios but often fail to scale because the model, data and governance do not translate to enterprise-level needs. Strong AI use cases are not discovered after implementation; they are designed deliberately around mission outcomes and real operational constraints. Agencies should begin by defining a specific challenge or opportunity, whether that is a slow manual process, a resource-intensive workflow or error-prone operations. The critical test is simple: if success would not fundamentally change how the mission operates, it is not the right use case to prioritize.

Identifying stakeholders early is equally essential. Program owners, analysts, operators and leadership must validate whether AI will genuinely help or simply add noise to an already complex technology landscape. Agencies must also be explicit about outcomes—faster decisions, fewer errors, reduced backlogs, better procurement insights or reclaimed staff time. Without clearly articulated outcomes, measuring success or defining return on investment becomes impossible. A practical prioritization matrix can guide agencies in filtering use cases into four categories:

  • high-impact, high-effort investments for enterprise transformation
  • high-impact, low-effort quick wins ideal for pilots
  • low-impact distractions to avoid entirely
  • interesting but non-urgent projects to defer

By focusing on tightly scoped problems with clear ownership and contained risk, agencies can deliver meaningful pilots that demonstrate real value and build momentum for broader adoption.

Data Foundation and Governance as the Critical Success Factor

Most AI models in use today are generalized Large Language Models (LLMs) trained on public internet data. These models are faster to deploy and have lower upfront costs, making them attractive for proofs of concept. However, they lack understanding of an agency’s unique mission, culture and decision-making context. For lasting, mission-critical AI, agencies should consider Small Language Models (SLMs) trained on agency-specific data. These models are more energy efficient, operationally reliable and context-aware, producing fewer errors. The challenge lies in fragmented data environments where records are spread across systems, formats and classification levels. This is where records management and data governance professionals become invaluable, helping to locate data and establish controls that transform data from a liability into a strategic asset.

AI learns directly from the data it was trained on and from how humans categorized it through reinforcement learning from human feedback. If the underlying information is disorganized, untagged or incomplete, the model will reproduce those flaws at scale. Properly governed, annotated and categorized data produces outputs that are accurate, explainable and trustworthy. Unstructured data—emails, PDFs, chat logs, memos, case files—represents roughly 80% of all agency information and contains the real story of mission operations. Yet most tools focus on structured data like databases and spreadsheets, missing the valuable context hidden in human-generated content. In-place data management addresses cost and security concerns by training and running models where data already lives, minimizing movement and preserving security boundaries. When Chief Data Officers (CDOs) and Chief AI Officers (CAIOs) collaborate under a shared governance model that includes Chief Information Security Officers (CISOs), Chief Information Officers (CIOs), legal teams and records leaders, innovation becomes both safer and faster because trust and accountability are built from the start.

The AI Failure Crisis and Its Root Causes

Federal AI adoption has accelerated faster than almost any other technology in Government history, yet this growth comes with significant risk. Currently, there is no Federal statute enacted by Congress to regulate AI across sectors, leaving agencies to rely on self-assessments and voluntary guidelines. The Office of Management and Budget (OMB) M-24-10 requires agencies to apply risk management and governance controls to high-impact AI systems, but without uniform standards for measuring impact or frameworks for compliance, agencies struggle to implement meaningful safeguards.

Many AI projects begin in isolation, driven by excitement about new tools or pressure to deliver results quickly, without engaging CIOs, CDOs or records management teams. Solutions may work adequately for limited use cases but lack the foundation to scale because governance, data quality and stakeholder alignment were afterthoughts rather than prerequisites. This pattern creates an explosion of activity with limited longevity, the very definition of a bubble. Experts report that Government is a generation behind industry in AI governance, a concerning gap given the sensitive citizen data, classified information and critical infrastructure at stake. If agencies rush to deploy AI without proper governance, they multiply the surface area for data errors, bias and compliance breakdowns. Expansion without oversight increases exposure rather than capability.

Sustainable Adoption Through People and Partnership

Even well-designed AI initiatives fail without sustained human engagement and vendor commitment. Vendors must remain engaged beyond initial implementation, continuing to train systems, monitor performance, incorporate feedback and deliver updates. If a vendor disappears after the sale, agencies are left without the support needed to refine and sustain their AI investments. This reinforces why starting with genuine use cases matters: when AI addresses tangible operational pain points, users are motivated to engage with and trust the technology.

Training cannot be a one-time orientation. Structured, continuous learning programs ensure that users understand not just the technology, but the workflows and data that feed it. Agencies should design AI for growth from the outset, building in governance controls, planning for scalability and considering reuse potential beyond the initial deployment. This “build once, reuse often” approach delivers efficiency gains and cost savings while making funding approval easier.

In an era where understanding how to learn has become the most essential skill, professionals must remain elastic and curious about topics that may fall outside traditional scopes, whether data governance for operational staff or technical architecture for mission leaders. By prioritizing mission-driven use cases, establishing robust data foundations, implementing governance as an enabler rather than a barrier and investing in people alongside technology, Federal agencies can move beyond experimental pilots to deliver AI that creates lasting, measurable impact.

To explore proven strategies for building mission-driven AI that lasts, watch ZL Technologies’ webinar, “From Noise to Impact: Building Mission-Driven AI in the Agency.”

From Noise to Impact: How Agencies Can Build Real AI Use Cases

Insights from Federal data, legal and technology leaders on turning AI potential into mission-driven action

Everyone’s talking about AI. But in Government, where budgets are tight, oversight is strict and the stakes are high, talk isn’t enough. Agencies need AI use cases that solve real problems, not just generate headlines.

At a recent panel discussion in D.C. hosted by ZL Tech and Carahsoft, experts from data, legal and tech roles shared their insights on how Federal agencies can move from experimentation to impact. Their message was clear: success with AI starts with governance, strategy and the right people at the table.


1. Want Real AI? Start at the Top

The biggest challenge agencies face? Starting small and remaining siloed.

“Start at the highest, most strategic level of the organization,” said Matthew Versaggi, a White House Presidential Innovation Fellow for AI. “Don’t begin in your own department; by then it’s too narrow. Instead, ask: what’s the most impactful agency-wide use case we can build toward?”

The panelists emphasized that departmental pain points might improve workflows, but agency-wide pain points tied to the mission are where AI can truly move the needle.

“Without a structured process, you’re just chasing your tail,” added Kon Leong, CEO of ZL Tech. “Start small, but make sure your experiment is scalable and aligned to long-term strategy.”


2. Governance Isn’t a Roadblock. It’s the Roadmap.

AI can’t succeed without trust in the data. And trust depends on governance.

“Governance is accountability,” said Leong. “It’s what separates scalable, sustainable innovation from science experiments.”

Jason Baron, a professor and former senior Government attorney, described governance as a mesh, not a silo: “True governance links your CISO, CIO, records officers, FOIA leads, legal teams—all under shared policy and ownership. We used to work in silos. That has to end.”

And as Matthew pointed out, AI governance isn’t a blocker; it’s an enabler: “AI governance becomes the mechanism for sustaining innovation. If we’re going to compete globally, we have to embrace it.”


3. Talk to Your CDO—Yes, You Have One

One of the most actionable takeaways: if you’re not already talking to your Chief Data Officer, you’re behind.

“Every agency has a CDO,” said Jason. “Go find them. Hopefully you like them. Have a conversation.”

CDOs are uniquely positioned to bridge mission needs with data access and policy. As one attendee noted during the session, “Awareness is the first step. Records and governance leaders are finally getting a seat at the table.”

It’s no longer enough for legal, records and privacy teams to operate in isolation. Building AI responsibly requires alignment—and that starts with the CDO.


4. Unstructured Data Is the Game-Changer

Structured data, like spreadsheets and databases, has been the traditional foundation for reporting and analytics. But that’s not where the majority of Government data lives.

“Unstructured data is radioactive,” said Leong. “That’s where every crisis lives. And now, it’s center stage in AI.”

Unstructured data includes everything from emails and PDFs to file shares, chat logs and documents. It makes up more than 80% of enterprise data, yet many agencies lack visibility or control over it.

Jason gave a real-world Federal perspective: “As a records guy, I’d take out my watch and wait to see how long it took vendors to say ‘FOIA’ or ‘FedRAMP.’ If they don’t understand the challenges around Federal unstructured data, they’re not serious.”


5. Use the Impact vs. Effort Matrix to Prioritize Wisely

With hundreds of possible AI use cases, how can agencies filter out distractions and find the ones worth pursuing?

Panelists recommended the Impact vs. Effort Matrix—a simple yet powerful tool to map use cases by how much effort they require and how much impact they’ll deliver.

What Is the Impact vs. Effort Matrix?

This tool helps agencies focus on what’s worth doing, especially when time, talent and resources are limited. Each AI idea gets placed into one of four categories:

  • Quick Wins (High Impact, Low Effort): Prioritize these immediately.
  • Major Projects (High Impact, High Effort): Worth the investment—plan carefully.
  • Fill-Ins (Low Impact, Low Effort): Do when time permits.
  • Thankless Tasks (Low Impact, High Effort): Avoid or minimize these.

“We see hundreds of AI ideas across agencies,” one panelist said. “But when you apply the matrix, only a handful have real traction. The juice has to be worth the squeeze.”

The matrix helps filter noise and ensure teams are spending time on the projects most likely to scale, succeed and support the mission.
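As a rough illustration, the quadrant logic described above can be sketched as a simple triage function. Everything here is hypothetical: the 1-10 scoring scale, the threshold and the example use cases are illustrative assumptions, not part of any agency framework.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int  # hypothetical 1-10 score: expected mission impact
    effort: int  # hypothetical 1-10 score: estimated delivery effort

def triage(uc: UseCase, threshold: int = 5) -> str:
    """Place a use case into one of the four Impact vs. Effort quadrants."""
    high_impact = uc.impact > threshold
    high_effort = uc.effort > threshold
    if high_impact and not high_effort:
        return "Quick Win"        # prioritize immediately
    if high_impact and high_effort:
        return "Major Project"    # worth the investment; plan carefully
    if not high_impact and not high_effort:
        return "Fill-In"          # do when time permits
    return "Thankless Task"       # avoid or minimize

# Illustrative (invented) candidate ideas scored by a hypothetical review board
ideas = [
    UseCase("Summarize request backlog", impact=8, effort=3),
    UseCase("Agency-wide records classification", impact=9, effort=8),
    UseCase("Novelty chatbot", impact=2, effort=2),
]

for uc in ideas:
    print(f"{uc.name}: {triage(uc)}")
```

In practice the value of the matrix comes less from the arithmetic than from forcing stakeholders to score impact and effort explicitly before any tooling decision is made.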


6. Build with Scale in Mind, Even If You Start Small

AI is experimental. Not every idea will pan out. But successful projects need a path to grow from day one.

“Do a small test with an enterprise mindset,” said Matthew. “Security, governance and scale should be built in from the start.”

Leong agreed: “Get your data ducks in a row, and everything else will follow. You don’t want to make long-term bets on projects that were never designed to scale.”


7. Custom or Off-the-Shelf? Choose Based on Complexity

Should agencies build custom platforms or adapt off-the-shelf tools? It depends.

“Don’t overpay for generic tools,” said Matthew. “But for deep, high-end capabilities, you may need in-house builds—just know the tradeoffs.”

The more specialized the use case, the more likely a tailored solution is required. But whether buying or building, the panel emphasized the importance of involving records officers, legal teams and subject matter experts (SMEs) early—not just the CIO chasing the next shiny object.


Final Thought: The Data Is There. The Champions Are Too.

The core message of the session? Agencies already have the data—and they have the people who care about getting it right.

What’s missing is coordination, prioritization and a strong governance foundation.

Start with strategy. Talk to your CDO. Use the matrix. Build with intent.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including ZL Tech, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.