4 ways AI agents change the way we approach Identity Security

As if gaining visibility into all human and non-human identities weren’t a big enough task for security teams, adding AI agents to the mix takes identity complexity to a new level. Organizations of all sizes are confronting this new reality, and few can confidently say they know about every AI agent running in their environment.

That uncertainty is not a knowledge gap. It is an attack surface. 

Gartner’s new report on IAM for AI agents states the core truth plainly: “Purpose/intent cannot be discovered after the fact by monitoring and observability capabilities.”

That is not just analyst language. It is a fundamental shift in how we need to think about governing agents. You cannot govern agents by watching them after the fact. You must know who they are, what they are for, and who is accountable before they run.

The numbers that should change your priorities

Gartner’s data reinforces the urgency. By 2029, over 50% of successful attacks against AI agents will exploit access control weaknesses. By 2028, 90% of organizations that share credentials between humans and agents will need to make significant investments to undo that design.

Those numbers are consequences, not causes. The root cause is structural: IAM maturity for agents is uneven. The Gartner lifecycle maturity assessment makes this visible. Authentication and monitoring capabilities are relatively mature. Identity registration and authorization are not. That gap is the story. 

Weak identity registration means the agent was never properly onboarded as an identity. No defined owner. No declared purpose. No documented scope. It has credentials and it is running, but nobody can tell you who built it, what it is supposed to do, or what happens when it breaks. When registration is weak, ownership is unclear. And when ownership is unclear, accountability does not exist. 

Weak authorization means the agent has more access than it needs. It can reach databases, APIs, and workflows that have nothing to do with its intended function. Nobody scoped it down because nobody defined what “down” looks like. When authorization is weak, privilege is excessive.

Now combine excessive privilege with autonomy. An agent that can reason, chain tools, and act on its own, with more access than it should have, and no one clearly accountable for what it does. That is the exploitable attack surface. That is the chain revealed in Gartner’s data.

You cannot protect what you cannot see

Before you can govern agents, you need to find them. All of them. Not just the ones your platform team sanctioned. The ones that developers spun up to solve an issue. The ones contractors built. The ones that exist because someone needed to “just get this working.” 

We hear this consistently from security teams. As one InfoSec manager at a professional services firm put it: “We do not find out about it until someone goes and does an actual audit of the system.” 

Gartner’s assessment confirms it: identity registration is one of the least mature IAM capabilities for AI agents. Most organizations cannot answer the basics: What is this agent supposed to do? Who owns it? What happens when it breaks? 

Discovery is not a checkbox. It is the foundation. Without it, every policy you write is based on assumptions, and assumptions do not survive first contact with autonomous agents operating at machine speed.

The identity registration gap

Most organizations are trying to govern agents with the wrong tools. They are monitoring. They are logging. But monitoring tells you what happened. Identity registration tells you what should happen. Authorization enforces the boundary between them. 

If your governance model depends on catching problems after they occur, you are always going to be behind. 

This is where many organizations reach for familiar tools. IGA platforms can help with registration and lifecycle management. IAM solutions like Okta or Entra ID can register agent identities. These are necessary steps. But they stop there. They can tell you an agent exists and who requested it. They cannot enforce anything at the moment that agent acts. 

That is the gap: governance on paper versus enforcement in production. 

Agents are identities, but not like any you have managed before

The way I read Gartner’s recommendations, there is a unifying thread: treat AI agents like you would treat any identity in your organization. They authenticate. They access resources. They act on behalf of someone. That is not a tool. That is an identity. 

But agents are more complex than traditional identities. They are what we call composite identities. They combine the blast radius of service accounts with the unpredictability of human decision-making at machine speed.

Four reasons that make them different: 

  • They act autonomously, unlike service accounts that execute predefined operations.
  • They may inherit human delegation, creating privilege escalation risk.
  • They may chain multiple machine identities in a single task.
  • They may operate across trust boundaries your IAM system was not designed to handle.

Think about how you onboard an employee. You do not give them admin access on day one. You define their role, their manager, their scope. You review their access as responsibilities change. Agents need that same lifecycle. But right now, most organizations are skipping straight to “give them credentials and hope for the best.” 

What runtime enforcement actually looks like

Gartner calls out the authorization gap. But what does closing that gap look like in practice? 

Even modern IAM systems, including conditional access and continuous evaluation, were designed primarily to evaluate who is signing in and what that identity is generally allowed to do. Agents introduce a different problem. They do not just sign in. They execute. They invoke tools dynamically. They operate across multiple identity contexts within a single task. 

Traditional conditional access evaluates who is signing in and under what conditions. Agent governance must also evaluate what is being executed, at the moment of execution.

Here is what that looks like: an agent is about to call a tool, read from a database, trigger an API, or execute a workflow. Before that happens, there is a decision point. Runtime enforcement evaluates the composite identity: the human owner, the agent itself, the tool credentials, and the defined purpose, all at execution time. Is this agent authenticated? Does it have permission for this specific action? Is this behavior consistent with its intended function? 

That is runtime enforcement. Not configuration-time policies that assume the agent will behave as designed. Decisions at execution time, every time.
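To make that decision point concrete, here is a minimal Python sketch of an execution-time check over a composite identity. Everything here is illustrative: the `CompositeIdentity` type, the scope names, and `authorize_action` are assumptions made for the example, not Silverfort's actual model or API.

```python
from dataclasses import dataclass, field

# Hypothetical composite identity: the accountable human owner, the
# agent itself, and the scopes the agent was registered with.
@dataclass
class CompositeIdentity:
    human_owner: str            # set at registration, never informal
    agent_id: str
    authenticated: bool
    declared_purpose: str       # e.g. "invoice-processing"
    allowed_actions: set = field(default_factory=set)

def authorize_action(identity: CompositeIdentity, action: str, purpose: str) -> bool:
    """Evaluated at execution time, before the tool call runs."""
    if not identity.authenticated:
        return False            # is this agent authenticated?
    if action not in identity.allowed_actions:
        return False            # permission for this specific action?
    if purpose != identity.declared_purpose:
        return False            # consistent with its intended function?
    return True

agent = CompositeIdentity(
    human_owner="j.doe",
    agent_id="invoice-bot-01",
    authenticated=True,
    declared_purpose="invoice-processing",
    allowed_actions={"read:invoices", "write:ledger"},
)

print(authorize_action(agent, "read:invoices", "invoice-processing"))    # True
print(authorize_action(agent, "read:hr-records", "invoice-processing"))  # False
```

The point is the ordering: the evaluation happens before the tool call, and a failed check stops execution rather than logging it for later review.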

What Silverfort does differently

If the failure pattern is identity immaturity, then the control point must also be identity. Most AI agent security approaches start at the model or application layer. We start at the identity layer. Because if identity is uncontrolled, everything above is fragile. 

Human accountability by design

Every AI agent is explicitly tied to a real human owner in policy. Not informally. Not in documentation. In enforcement logic.

Every action can be traced back to a real chain of accountability: which human owns this agent, what identity the agent is operating under, and what credentials it uses to access resources. That is what we mean by composite identity. And it is what makes enforcement possible before monitoring even begins.

Runtime enforcement at the identity layer

Silverfort enforces at the identity decision point at runtime. For MCP-connected agents, that means sitting in line between the agent and the MCP server. For platform-native agents, enforcement is delivered through native integration, directly within the platform. 

Before a tool call executes, we evaluate identity, context, delegation, and policy in real time. If the action exceeds scope, it does not execute. This is not configuration-time IAM. This is execution-time identity enforcement. That distinction matters. 

Least privilege that survives autonomy

Static least privilege assumes predictable behavior. Agents break that assumption. They reason. They chain tools. They drift from what they were originally authorized to do. Least privilege must be validated at runtime, not just set at provisioning. 

That means if an agent tries to access a resource outside its declared purpose, it gets blocked. If delegated privileges start expanding beyond what was originally scoped, they are contained. This is the same enforcement model we apply to humans and service accounts, now extended to AI agents.

One Identity Security Platform

AI Agent Security is not a standalone product. Agents sit at the intersection of human identities, non-human identities, service accounts, cloud resources, SaaS applications, and protocol layers like MCP. If those domains are secured separately, agents will exploit the seams. 

Silverfort unifies this. One policy framework. One observability layer. One enforcement architecture. Across humans, machines, and AI. That is the architectural difference.

Enabling AI innovation without slowing it down

Security leaders are not trying to stop AI adoption. They are trying to make sure it does not outrun their ability to govern it. The organizations moving fastest with AI agents are the ones that figured out early: the right security model is a speed advantage, not a drag. 

Cars have brakes so you can drive fast. The same principle applies here. 

But the brakes only work if they’re connected to the same system. Today, most organizations secure human identities in one tool, service accounts in another, and AI agents (if at all) in a third. If those domains are secured separately, agents will exploit the seams. 

That’s the reason teams need a unified Identity Security Platform:

  • One policy framework means a CISO can define “no agent accesses production data without human approval” once and have it applied across every agent, every platform, every protocol. No per-tool configuration. No coverage gaps.
  • One observability layer means when an agent acts, you see the full chain: which human triggered it, which NHI it authenticated with, which tool it called, and what data it touched. Not three dashboards stitched together after the fact, but a single view that makes incident response possible in minutes instead of days.
  • One enforcement point means policy is applied at runtime, at the moment of action, not retroactively through quarterly access reviews. When an agent requests access, the decision happens inline. Allow, deny, or step up. Before the action executes, not after. 
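To illustrate the "define once, enforce everywhere" idea, here is a hypothetical sketch of such a policy and its inline decision. The structure, field names, and decision values are invented for this example; a real platform's policy language would differ.

```python
# A hypothetical, declarative rendering of the single-policy idea:
# one rule, evaluated inline for every agent request on every platform.
POLICY = {
    "name": "prod-data-requires-approval",
    "applies_to": "any-agent",          # every agent, platform, protocol
    "resource_tag": "production-data",
    "on_match": "step_up",              # escalate to the human owner
}

def decide(request: dict) -> str:
    """Return 'allow', 'deny', or 'step_up' before the action executes."""
    if not request.get("agent_id"):
        return "deny"                   # unregistered actor: no decision possible
    if POLICY["resource_tag"] in request.get("resource_tags", []):
        if request.get("human_approved"):
            return "allow"
        return POLICY["on_match"]       # pause and require human approval
    return "allow"                      # outside policy scope

# An agent touching production data without approval is stepped up:
print(decide({"agent_id": "etl-bot", "resource_tags": ["production-data"]}))
# The same request with human approval proceeds:
print(decide({"agent_id": "etl-bot", "resource_tags": ["production-data"],
              "human_approved": True}))
```

The decision happens inline, per request, which is what distinguishes this from a quarterly access review over the same policy text.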

This is what shifts AI agent security from a governance exercise to an operational capability. Discovery tells you what exists. Registration tells you who owns it. Runtime enforcement tells agents what they’re actually allowed to do, in the moment, every time. 

AI agents represent the next frontier of identity. Identity Security must evolve accordingly, from governance alone to continuous, runtime enforcement. Discover what is running. Register who owns it. Enforce at the moment of execution. That is the path. 

The Gartner report is worth reading in full: https://www.silverfort.com/landing-page/campaign/gartner-report-iam-for-agents/

Want to learn how Silverfort discovers and protects AI agent identities? See AI Agent Security in action.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Silverfort, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

This post originally appeared on Silverfort.com, and is re-published with permission.

Ignite. Innovate. Impact: Key Takeaways from NAWB The Forum 2026

For the first time in over 40 years, the National Association of Workforce Boards (NAWB) took its premier annual event on the road, landing in Las Vegas for The Forum 2026. This year’s theme, “Ignite. Innovate. Impact,” signaled a bold shift in how the workforce system addresses rapid economic change, emerging technology and legislative uncertainty.

Whether you missed the sessions or just need a refresher to share with your board, here is a summary of the major trends and tactical insights that defined the conference.

1. The Era of Generative AI: From Hype to Implementation

Perhaps the biggest “main stage” topic this year was the shift from talking about AI to using it. Sessions like “What AI ISN’T: Rethinking ChatGPT and Policy” and “The Current State of AI in Workforce Development” moved past the buzzwords.

Key Takeaways:

  • Capacity Building: AI is being framed as a tool to “do more with less” as boards face funding constraints. By automating routine administrative tasks, staff can shift focus to high-value human services like coaching and relationship building.
  • The “Human” Edge: Despite the automation, speakers emphasized that AI-exposed occupations still require human judgment, creativity and “core employability skills” (soft skills), which workforce boards are uniquely positioned to teach.
  • New Credentials: Discussion centered on emerging credentials for AI quality assurance, prompt design and data annotation as new entry points for job seekers.

2. Advocacy & WIOA Reauthorization

With the workforce system at a crossroads, advocacy was a central pillar of the 2026 agenda. The message from the “Inside the Beltway” updates was clear: workforce boards must be their own best storytellers.

Strategic Priorities:

  • WIOA Flexibility: NAWB continues to push for the reauthorization of the Workforce Innovation and Opportunity Act (WIOA), specifically advocating against “one-size-fits-all” mandates and for the reduction of state-level set-asides (from 15% to 10%) to return more funding to local control.
  • Data-driven evidence: Utilize current employment data from authoritative sources to substantiate your achievements.
  • Short-Term Pell: There was significant momentum around expanding Pell Grant eligibility for high-quality, short-term skills development programs that align with in-demand careers.

3. Solving the Childcare & Trades Equation

A standout session focused on the intersection of labor and family support: “Meeting Big Needs with Big Solutions.” Using Pierce County Labor and the Machinists Institute as a model, the session explored how investing in childcare for trades workers is no longer a “benefit”. It is a critical infrastructure requirement for a stable workforce.

4. Expanding the Apprenticeship Model

Registered Apprenticeships (RA) were highlighted as the gold standard for sustainable sector pipelines.

  • Influence Meets Industry: Sessions focused on making RA a “household name” beyond just the construction trades, expanding into Logistics, Electric Vehicles (EV) and even Childcare.
  • Public-Private Funding: A major theme was leveraging diverse funding streams (not just WIOA) to sustain apprenticeship momentum during economic shifts.

5. Organizational Resilience & Leadership

For Executive Directors and Board Chairs, the conference offered a deep dive into “Full Throttle Leadership.”

  • Contingency Planning: A specialized pre-conference session focused on helping boards navigate labor market shocks and talent shortages with decisive, proactive planning.
  • Culture Matters: Insights from the Eastern Kentucky Concentrated Employment Program (EKCEP) highlighted how a “culture of performance” can increase engagement among employees and elected officials alike.

Why it Matters for Our Community

The shift to Las Vegas was more than a venue change; it was a metaphor for the “nationwide tour of innovation” that NAWB is championing. The 2026 Forum made it clear that the future of work isn’t just about jobs; it’s about ecosystems.

As we bring these insights back to our local regions, our focus should remain on:

  1. Embracing AI ethically to improve service delivery.
  2. Advocating for local control and flexible funding.
  3. Integrating supportive services (like childcare) directly into our workforce strategies.

We had a great time and learned a lot. Schedule a meeting to chat more about the conference.

How AI is Reshaping Courts and Legal Operations 

The conversation around artificial intelligence (AI) in the legal system has fundamentally shifted from courts and legal organizations debating whether it belongs in legal environments to how to integrate AI responsibly into daily operations. For courts facing expanding caseloads, staffing shortages and budget constraints, AI-powered legal technologies have become operational tools for improving efficiency, access to justice and administrative effectiveness across the legal lifecycle. While AI can significantly enhance legal workflows, responsibility for judgment, accuracy and decision-making must remain with human professionals. 

From Policy Discussion to Practical Adoption 

The American Bar Association’s (ABA) Year 2 Report on the Impact of AI on the Practice of Law makes clear that AI adoption in the legal profession has entered a new phase. Early concerns centered on ethics, confidentiality and professional responsibility. Today, the focus has shifted toward responsible deployment, governance and workflow integration where efficiency gains are immediate and measurable. These applications allow courts to redirect limited staff resources toward higher-value legal and judicial work rather than routine manual processes. 

Common AI-enabled courtroom use cases already in practice include: 

  • Organizing and searching large volumes of filings, briefs and evidence 
  • Creating unofficial or preliminary real-time transcriptions 
  • Summarizing motions, exhibits and prior case materials 
  • Supporting scheduling, workload analysis and calendar management 

This is especially important for Federal, State and Local courts that must maintain service levels despite limited resources. AI-enabled legal technologies provide a validated path to modernizing court operations while preserving judicial independence, transparency and accountability. 

Real-World Applications Delivering Value 

AI adoption is already producing tangible operational benefits across court systems. 

Administrative and workflow automation applications include drafting routine administrative orders and standard court notices, managing scheduling and calendar coordination, conducting workload studies and organizing court documents and filings for improved retrieval. These implementations reduce administrative burden while improving consistency in standard legal processes. 

Document review and case support capabilities allow legal teams to summarize briefs, motions, pleadings, depositions and exhibits at scale. AI systems create timelines of relevant events across large case records and assist with legal research when trained on reputable legal authorities. Some implementations identify misstated law or omitted legal authority in filings, though human verification remains mandatory for all outputs. 

Transcription, translation and accessibility services are also being rapidly adopted. Courts are generating unofficial or preliminary real-time transcriptions to accelerate case documentation. Systems provide preliminary translations of foreign-language documents and support accessibility services for self-represented litigants navigating complex court procedures. These applications expand access to justice by reducing cost barriers and improving navigation of legal systems for citizens. 

Scaling Court Operations Under Budget Constraints 

Rising caseloads combined with constrained budgets make AI adoption particularly relevant for Government legal operations. Technology adoption has emerged as the primary driver of scalability for courts that cannot expand head count. By automating manual processes such as transcription, document review, evidence management and research, AI allows existing staff to handle higher volumes while maintaining or improving service quality.  

This approach aligns with broader access-to-justice goals highlighted in the ABA report. AI-enabled tools are already helping courts improve case management, streamline dispute resolution processes and support self-represented litigants through better access to information and court services. These gains are particularly impactful for jurisdictions seeking to modernize legacy systems while preserving fairness, transparency and judicial independence. 

Human Oversight and Accountability 

While AI delivers meaningful efficiency gains, the ABA report stresses that AI-generated outputs may appear authoritative while containing factual or legal inaccuracies. The risk of hallucinations has not been fully resolved in any current generative AI (GenAI) tools. As a result, AI should not replace judges or court staff, nor should it be treated as an authoritative source of truth. Instead, AI should serve as an assistive technology that augments human expertise, improving documentation quality, accelerating research and making information more accessible. 

Judicial guidelines outlined in the report reinforce several critical principles: 

  • Judges and attorneys remain fully responsible for accuracy and legal reasoning 
  • AI-generated content must always be reviewed for correctness and relevance 
  • Overreliance on AI can introduce risks such as automation bias or misinformation 

Courts adopting AI must establish clear governance frameworks that address privacy, security, transparency and oversight. Human verification of AI outputs is essential to ensuring that AI enhances documentation quality and accelerates legal research without compromising accuracy, professional responsibility and public trust. 

Responsible Adoption Through Trusted Procurement 

The ABA emphasizes that responsible AI adoption is not optional; it is a leadership responsibility. Human oversight, ethical use policies and ongoing evaluation remain essential to ensuring AI strengthens, rather than undermines, trust in the justice system. 

Carahsoft, The Trusted Government IT Solutions Provider®, works with leading legal tech software providers to help Federal, State and Local courts modernize legacy systems, reduce administrative burden and implement AI responsibly at scale. By making these technologies accessible through trusted procurement vehicles, Carahsoft enables courts and Government legal organizations to adopt AI while aligning with established legal, ethical and operational requirements.  

AI is not a substitute for legal expertise, but it is quickly becoming an indispensable tool for courts seeking efficiency, consistency and scalability. By procuring AI solutions through Carahsoft, Government courts can ensure their modernization demands will be met while maintaining legal and ethical standards. As AI continues to reshape legal operations, organizations that pair technology deployment with clear governance, training and accountability frameworks will be better positioned to deliver improved services to the public.  

Ready to explore AI-enabled legal technology solutions? Explore Carahsoft’s Legal & Courtroom Technology Solutions portfolio or take a Self-Guided Tour. 

Contact Carahsoft’s team at LegalTech@carahsoft.com to discuss AI solutions tailored for your organization’s needs.  

Unified Financial Intelligence: Why Government Finance Teams Have a Data Foundation Problem, Not a Data Problem

How Incorta, Google and Carahsoft help State, Local, education and Federal civilian agencies move from slow close cycles to real-time, AI-ready financial insight

I spend a lot of my time talking with Government finance leaders—CFOs, comptrollers, budget directors—and the conversation almost always starts with AI and ends with data. Almost every agency I talk to eventually runs into the same wall: their data isn’t ready. As we move toward agentic AI—AI that takes actions and makes decisions on its own, not just answers questions—the demands on that foundation multiply fast. Until it’s right, AI remains a slide in a strategy deck. That’s the problem Incorta was built to solve.

Nowhere is this more obvious than in Public Sector financial management, where the stakes are high, the infrastructure is often decades old and the expectation for transparency has never been greater. If we want to talk seriously about Unified Financial Intelligence in Government, we have to talk seriously about the data brain underneath it—the trusted, real-time, contextual foundation that AI agents depend on to make accurate, explainable decisions. Without it, you don’t have an AI problem. You have a data problem dressed up as one.

The Real Bottleneck: Government Finance Needs a Data Brain

Public Sector finance teams are under more pressure than ever: leaner budgets, post-pandemic fiscal gaps, enrollment volatility and a mandate to do more with less. New White House and OMB directives are accelerating the AI timeline—agencies are being asked to demonstrate AI-ready infrastructure now, not in a future budget cycle.

For CFOs, comptrollers and finance teams, that pressure is concrete. Close cycles still take days or weeks. Analysts spend more time gathering data than using it. When leadership questions a number, the answer is “let me pull it manually”—because the system shows aggregates, not the transactions behind them.

The root cause isn’t a lack of tools or talent. Financial data is scattered across GL, procurement, grants, payroll and project systems—each with its own codes and timing—and traditional ETL strips out the very context that makes it useful. That’s the data brain problem.

What the Data Brain Has to Deliver

For finance, AI isn’t about prettier dashboards. It’s about answering hard questions: why did this variance occur? Where are the early signals of fraud, waste or abuse? What does next quarter look like if this assumption changes? To answer those credibly, AI needs a data brain.

That data brain has to deliver three things: granularity (100% transactional detail), timeliness (near real-time, not last week’s batch) and context (preserved relationships—purchase orders to vendors, funds to appropriations, payroll to projects).
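A toy example makes the contrast concrete. The records below are invented; what matters is that the transaction-grain store keeps the purchase order, vendor and fund relationships that a summarized extract throws away, so an aggregate can always be traced back to its transactions.

```python
# Toy illustration of why summarization breaks traceability.
# Every record here is invented for the example.
transactions = [
    {"txn_id": "T-1001", "po": "PO-77", "vendor": "Acme Corp", "fund": "F-OPS", "amount": 1200.00},
    {"txn_id": "T-1002", "po": "PO-78", "vendor": "Acme Corp", "fund": "F-OPS", "amount": 800.00},
    {"txn_id": "T-1003", "po": "PO-79", "vendor": "Globex",    "fund": "F-CAP", "amount": 500.00},
]

# Traditional ETL output: an aggregate with the relationships stripped out.
# You can see the number, but not the transactions behind it.
summary = {"F-OPS": 2000.00, "F-CAP": 500.00}

# A transaction-grain store preserves context, so a questioned number
# drills down to the exact transactions that produced it.
def drill_down(fund: str) -> list:
    return [t for t in transactions if t["fund"] == fund]

# The aggregate reconciles to its underlying detail:
assert sum(t["amount"] for t in drill_down("F-OPS")) == summary["F-OPS"]
print([t["txn_id"] for t in drill_down("F-OPS")])
```

When the detail is gone, the only way to answer "why is F-OPS at 2,000?" is to go pull it manually from the source system; when the detail is kept, the answer is one query away.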

Traditional ETL gives you the opposite of a data brain: summarized, stale data stripped of business logic. When you layer AI on top of it, the model fills in the gaps—and for Government finance, that’s not a technical problem. If an AI-assisted answer can’t be traced back to the exact transaction, your auditors and oversight bodies won’t accept it.

That’s how you get hallucinations instead of financial intelligence.

The “AI problem” and the “data problem” in Government finance are actually the same problem. Build the data brain, and Unified Financial Intelligence follows.

What Changes When You Have a Data Brain

Take a Federal civilian agency we worked with: 24-hour data refresh cycles, manual reconciliation, spreadsheets and email chains just to close the books. Analysts spent most of their time getting data into a usable format—not using it.

After implementing Incorta with Google Cloud, that agency went from 24-hour to 15-minute data refreshes for key financial subject areas.

  • From periodic close to continuous audit. Anomalies surface in near real-time—before they snowball, not after month-end.
  • From “check the dashboard” to “follow the data.” The CFO questions a number; the analyst drills to the exact transaction, in the same environment.
  • From data gathering to value creation. Analysts shift from reconciliation to scenario modeling and real decisions.

That’s Unified Financial Intelligence with a data brain underneath it: full, timely, contextual access to the truth—and the time to actually use it.

How Incorta Builds the Data Brain

The traditional path to modernizing financial data in Government is measured in years and eight-figure budgets—and most of us have seen how that story ends. At Incorta, we took a different approach: build the data brain for Government finance on Google Cloud without requiring agencies to tear out what’s already there. Three pillars make that possible:

  1. Direct access to ERP data in its native form – Incorta connects directly to Oracle EBS, Oracle Fusion, SAP and Workday, ingesting data in its native schema—no heavy transformation, no lost business context.
  2. Prebuilt blueprints for Public Sector financial systems – A library of prebuilt blueprints captures how ERP tables relate, how funds and projects are structured and how to translate that into analytics-ready models—removing months of data engineering work.
  3. Landing it all in Google BigQuery for AI-ready analytics – The result is a production-ready financial data brain in Google BigQuery—granular, near real-time and fully contextualized—standing up in weeks, not months or years, with Gemini for Government and agentic AI tools ready to operate on top.

On top of this, Incorta layers AI-powered insights with built-in hallucination mitigation, role-based access controls, audit trails and mirrored source system permissions—so agencies can scale AI without sacrificing governance.
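As a rough illustration of mirrored source-system permissions, the hypothetical sketch below reuses ERP role grants to filter analytics rows, so the warehouse never shows a user more than their ERP role allows. Roles, grants and rows are all invented for the example, not Incorta's actual mechanism.

```python
# Hypothetical "mirrored source system permissions": the analytics
# layer reuses the ERP's own role grants rather than maintaining a
# second, drift-prone set of permissions.
ERP_GRANTS = {
    "budget_analyst": {"F-OPS"},           # funds visible to each ERP role
    "comptroller": {"F-OPS", "F-CAP"},
}

ROWS = [
    {"fund": "F-OPS", "amount": 1200.00},
    {"fund": "F-CAP", "amount": 500.00},
]

def visible_rows(role: str) -> list:
    allowed = ERP_GRANTS.get(role, set())  # unknown role sees nothing
    return [r for r in ROWS if r["fund"] in allowed]

print(len(visible_rows("budget_analyst")))  # 1
print(len(visible_rows("comptroller")))     # 2
```

The design choice worth noting: because the grants are mirrored from the source system, a permission change in the ERP automatically narrows or widens what AI tools operating on the warehouse can see.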

Carahsoft plays a crucial role in this story by making it easy for agencies to get started—through existing contract vehicles and the Google Cloud Marketplace—without embarking on another risky, bespoke IT project.

Where State, Local, Education and Federal Civilian Finance Teams Are Starting

State budget offices need real-time visibility into appropriations and fund balances—so leadership responds to revenue shifts, not monthly reports. Local Governments want to move from reactive spreadsheets to proactive scenario planning and cleaner audits. Education finance teams need unified views of budgets, grants and financial aid to navigate enrollment volatility. Federal civilian CFO offices are pursuing continuous close and early AI-driven detection of fraud, waste and abuse. In every case: build the data brain first, and the downstream AI use cases become operational, not experimental.

Getting Started Doesn’t Have to Be a Multi-Year Commitment

One of the most consistent concerns I hear is: “We’ve been burned by big data projects before. We can’t sign up for another multi-year transformation.” That hesitation is completely rational—and it’s exactly why we’ve structured our approach with Google and Carahsoft to deliver value in weeks, not years.

A practical entry point is a Unified Financial Intelligence Modernization Assessment—a focused engagement to assess your ERP landscape, map how your data lands in BigQuery (secure, governed, auditable) and define a 60- to 90-day outcome that shows what the data brain delivers in your environment.

Incorta is available through Carahsoft on the Google Cloud Marketplace—most agencies can use existing contracts and cloud commitments to get started, no new RFX required.

The Bottom Line

State, Local, education and Federal civilian finance teams don’t need another dashboard. They need the data brain that makes Unified Financial Intelligence possible—access to all of their financial data, in near real-time, with full business context, so they can shift from gathering data to actually using it.

That’s what Incorta, Google and Carahsoft are building together for Government. In an environment where agencies are being asked to do more with less, standing up that data brain in weeks rather than years isn’t just a nice-to-have. It’s the difference between a finance function that’s keeping up and one that’s falling behind.

→ Request a live Agentic AI demo — see Incorta + Google in action on your mission data.

→ Try free for 30 days on Google Cloud Marketplace — software free; infrastructure costs may apply.

→ Get started with the Unified Financial Intelligence Modernization Assessment — map your data brain and define a 60- to 90-day outcome.

Ready to explore what real-time financial intelligence looks like for your agency? Learn more about Incorta’s Government solutions on Carahsoft’s Incorta microsite. Watch our joint Incorta + Google session on AI-ready financial data for Public Sector.
Contact the Carahsoft Team ☎ (703) 871-8548  |  ✉ incorta@carahsoft.com

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Incorta, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Smart Guarding: How AI can be Used to Enhance Vacant Building Security

After 2020, the landscape of corporate real estate changed dramatically. Companies across multiple sectors, including technology, transitioned from working in the office to either hybrid or fully remote models. Vacancy rates on corporate campuses increased to 15-20%, opening companies up to a multitude of liabilities and operational challenges. Artificial intelligence (AI) has brought a new edge to vacant building security. Smart Guarding and Solar Guard units elevate the security posture of vacant buildings, defend corporate assets and subsequently deliver a Return on Investment (ROI) through effective security measures.

Risks of Vacant Building Stewardship

Vacant buildings come with a series of unique risks to the company that either owns or leases the building. These locations are particularly attractive for criminal activity, especially trespassing and vandalism. Companies also face other risks such as copper theft and squatting that result in higher insurance claims, causing rising premiums. Further challenges come from the range in potential responses from law enforcement. The crime rate in the area will greatly affect how quickly police respond to the call, or whether they will respond at all if there is not an active incident.

Traditional security models for vacant buildings rely heavily on human patrols and come with their own operational drawbacks. A commonly used term in security, “warm body” guards, describes guards who are physically present but do only the bare minimum required to complete the job; in other words, these guards are just a warm body whose physical presence alone is deemed enough to deter criminal activity. Depending on the size and scope of the campus, these security measures can cost up to $25,000 per month. The ROI is negligible at best, and companies are often left with an expensive yet ineffective security protocol.

With property vacancy on the rise, companies need a solution that is cost effective but does not sacrifice protection or increase their risk profile. That solution lies in the integration of cutting-edge technology with human security.

The Modern Security Guard: Smart Guarding and Solar Guard

Prior to the existence of AI, the Silicon Valley Model sought to enhance building security by combining electronic access control in a building with a fleshed-out in-person security protocol. This gives companies the opportunity to employ security guards with relevant prior experience, such as former law enforcement and military members, who have effective communication and customer support skills. The key to success is a combination of the right people on site and the proper technological processes in place.

Sentry AI’s Smart Guarding takes this approach a step further by integrating AI agents into the security protocols. A range of sensors is installed across the building. These can include:

  • Cameras
  • Microphones
  • Motion sensors
  • Turnstiles
  • Fire detection (smoke detectors, heat detectors, etc.)

With the number of sensors that exist in a single building, a Security Operations Center (SOC) analyst can easily be overwhelmed by the sheer volume of alerts. An AI agent established at the core of this alert system can absorb the information, interpret the incoming data and pass on only the relevant security alerts to the SOC analysts.
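
The triage flow described above can be sketched in a few lines. The sensor types, severity weights and escalation threshold below are illustrative assumptions for the sketch, not details of Sentry AI's actual product:

```python
from dataclasses import dataclass

# Hypothetical severity weights per sensor type; a real deployment would
# tune these against historical incident data.
SEVERITY = {"fire": 3, "motion": 1, "camera_person": 2, "audio_glass_break": 2}

@dataclass
class Alert:
    sensor_type: str
    zone: str

def triage(alerts, escalate_at=3):
    """Aggregate raw alerts by zone and escalate only zones whose combined
    severity crosses a threshold, shielding SOC analysts from one-off,
    low-severity noise."""
    scores = {}
    for a in alerts:
        scores[a.zone] = scores.get(a.zone, 0) + SEVERITY.get(a.sensor_type, 0)
    return [zone for zone, score in scores.items() if score >= escalate_at]

alerts = [
    Alert("motion", "lobby"),
    Alert("camera_person", "lobby"),  # motion plus a person sighting: escalate
    Alert("motion", "parking"),       # isolated motion: suppressed as noise
]
print(triage(alerts))  # only "lobby" reaches the analyst queue
```

The design choice the sketch illustrates is that the agent forwards aggregated, correlated events rather than every raw sensor reading, which is what cuts through alert fatigue.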

The AI agent itself can also be proactive and mitigate ongoing security risks. The AI can impersonate a human guard, using any language, tone of voice or even slang if required. By voicing details such as the intruder’s clothing or appearance, the agent creates the impression of an on-site security guard without actually engaging physically with the intruder. After announcing a security presence, the agent will tell the intruder to leave and threaten police intervention if they do not. The agent can also activate sirens and lights to trigger a flight response from the intruder. This is all managed without human intervention.

In some situations, companies need a security solution that does not rely on the building’s network, property owner or landlord. Sentry AI offers the Solar Guard solution for exactly these situations. The Solar Guard is a self-contained mobile unit with a tall mast and several solar panels. Energy collected throughout the day is stored in batteries contained within the unit to power it throughout the night or in adverse weather conditions. At the top of the mast, the Solar Guard has lights, speakers, a cellular modem and dual lens cameras that give a 360-degree field of vision.

As vacancy rates in corporate buildings continue to climb, companies are searching for impactful, cost-effective ways to improve their buildings’ security posture. AI-powered security protocols such as Solar Guard and Smart Guarding decrease the risk to personnel and cut through alert fatigue. By combining modern technological advancements with knowledgeable SOC analysts, companies gain ROI and protect their assets when personnel are not present.

To discover how Smart Guarding can elevate security in your vacant facilities, watch Sentry AI’s webinar, “Using AI to Protect Vacant Facilities.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Sentry AI, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Securing AI Adoption in Government: From Mandates to Implementation

One of today’s top trends is artificial intelligence (AI), specifically how the Public Sector can adopt it while maintaining the security, governance and oversight essential for mission-critical operations. With AI jumping from number three to number one on Federal Chief Information Officers’ (CIO) priority lists and 80% of CIOs under explicit cost savings mandates, the question is no longer whether to deploy AI but how to do so securely at scale.

The recent overhaul of the Federal Acquisition Regulation (FAR) marks the most significant rewrite in over 40 years, fundamentally shifting how Federal agencies operate and procure technology. As generative AI (GenAI) deployments move into mission-critical environments, agencies need practical frameworks that balance speed with verification.

Moving From Speed to Velocity

As the Public Sector enters the age of AI, with $4 trillion in Private Sector investment in data centers, agencies face a fundamental design challenge: designing AI systems that adapt to human workflows rather than forcing humans to adapt to systems. This distinction matters most in Government and defense contexts where lives depend on maintaining human oversight for deliberate decisions.

The Department of War’s (DoW) Acquisition Transformation Strategy (ATS) offers a proven model of buying outcomes in increments. Instead of funding calendar time through traditional program structures, agencies should fund missions through portfolios that deliver outcomes in weeks or months. This approach structures procurement in modular increments that integrate with evolving architecture while funding capability and delivery, not timelines.

Velocity differs from speed in its directional precision. Agencies can accelerate procurement through fast-lane processes while maintaining governance through evidence gates that verify operational performance, user adoption, cyber risk posture and sustainment realities. This framework preserves ethical obligations while delivering measurable results.

Prerequisites for Secure AI Implementation

Before deploying AI tools in production environments, agencies need foundational elements in place:

Policy frameworks that define where AI can be a part of the process and establish clear boundaries for all personnel. Training and enablement programs ensure teams understand governance requirements and security policies. Several Federal agencies have already created AI centers of excellence to help establish standards and create processes around how they are implementing AI.

End-to-end visibility across the entire software delivery process enables agencies to track where AI agents operate and what actions they perform. Without comprehensive visibility, governance becomes theoretical rather than operational.

Contextual accuracy determines output quality: AI systems deliver accurate, usable results only when provided with the right context, making data quality and integration critical prerequisites.

Built-in guardrails must exist before AI implementation. Security scans on every code change and controls preventing critical vulnerabilities from merging into production branches become essential as agencies move into the agentic AI era.

Practical AI Use Cases That Deliver Value

GitLab’s most recent DevSecOps survey reports that AI currently handles about 25% of the work in Public Sector organizations, with leadership targeting 50% automation. The most successful implementations focus on code generation, testing and documentation, areas where AI delivers immediate, measurable impacts.

Federal customers using GitLab’s AI capabilities report significant efficiency gains in code review processes. AI-powered first-pass reviews reduce time while maintaining quality standards. Test generation and legacy code modernization have proven particularly effective.

Compliance automation represents an emerging high-value use case. GitLab teams are developing compliance agents that access code repositories, Continuous Integration/Continuous Deployment (CI/CD) pipelines and security vulnerability data to automatically populate Security Technical Implementation Guide (STIG) checklists. Security team leaders review and adjust outputs as necessary, reducing administrative burden while allowing teams to focus on strengthening application security posture.
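
As a rough illustration of what such a compliance agent does, the sketch below maps security-scan findings onto checklist items so a reviewer starts from a pre-populated draft rather than a blank checklist. The check IDs, finding fields and status labels are hypothetical, not GitLab's actual API or the official STIG format:

```python
# Hypothetical sketch: pre-populate a STIG-style checklist from scan findings.
# A human reviewer still adjusts the draft before submission.
def populate_checklist(checklist, findings):
    """Mark a checklist item "open" if any critical/high finding maps to its
    rule ID; otherwise draft it as "not_a_finding" for reviewer confirmation."""
    flagged = {f["rule_id"] for f in findings if f["severity"] in ("critical", "high")}
    return {item: ("open" if item in flagged else "not_a_finding") for item in checklist}

checklist = ["V-222400", "V-222401", "V-222402"]           # illustrative rule IDs
findings = [{"rule_id": "V-222401", "severity": "critical"}]
print(populate_checklist(checklist, findings))
```

The point of the sketch is the division of labor described in the article: automation drafts the checklist from pipeline data, and security team leaders review and adjust the output.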

Prioritizing AI Governance Frameworks

With 35% of Public Sector professionals using unofficial AI tools at work, agencies need governance frameworks that address shadow IT risks without stifling innovation. A risk-based approach identifies high-impact systems within critical infrastructure and implements controls that prevent systemic failures.

Effective governance prioritizes AI adoption around innovation while maintaining public trust. Agencies must identify high-impact areas and understand system interdependencies; as more systems connect, understanding how one system impacts another becomes essential for appropriate segmentation and risk management.

Building on Secure Foundations

Agencies cannot build on a shaky foundation. Federal AI and cybersecurity strategies must align around building responsibility into the process from the start. This requires shifting from governing static systems to engineering systems that can evolve safely, integrating assurances, accountability and human judgment as foundational design constraints instead of downstream checks.

Before deploying advanced AI capabilities, agencies should strengthen foundational practices: standardizing workflows, implementing security by design and ensuring basic guardrails are in place. AI cannot compensate for weak foundations in the software development lifecycle. The path forward requires doubling down on fundamentals while strategically adopting AI where it delivers clear value.

To learn more about implementing secure AI solutions, watch GitLab’s full webinar, “Cyber in the AI Era: Building Foundations for Secure Adoption.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including GitLab, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

10 Healthcare Technology Predictions Shaping 2026 

Carahsoft, The Trusted IT Solutions Provider for the Healthcare Industry™, supports healthcare organizations in their mission to deliver efficient, high-quality care across the enterprise. Our comprehensive portfolio of healthcare solutions addresses critical needs across clinical systems, patient experience, enterprise operations, infrastructure and more. We help healthcare organizations streamline workflows, reduce administrative burden and improve security, maximizing the value of technology investments. As healthcare continues to evolve through regulatory changes, innovation and shifting care delivery models, these 10 trends represent the most significant opportunities and challenges facing the industry in 2026. 

Interoperability: From Compliance Exercise to Strategic Asset 

The 21st Century Cures Act and the Office of the National Coordinator’s (ONC) Health Data, Technology and Interoperability (HTI)-1 Final Rule have pushed standardized Fast Healthcare Interoperability Resources (FHIR)-based Application Programming Interfaces (APIs) and expanded data classes into the market. The Center for Medicare and Medicaid Services’ (CMS) Interoperability and Prior Authorization Final Rule adds pressure on both payers and providers to exchange information seamlessly. In 2026, however, organizations that treated these regulations as checkbox compliance activities will watch competitors turn interoperability into operational advantage. 

Real-time data feeds reduce prior authorization delays. Integration platforms surface insights that drive value-based care arrangements. Data warehouses built for exchange, not just storage, become the foundation for population health management. The early adopters are not just meeting regulatory requirements. They are using data exchange to reduce administrative burden, improve care coordination across settings and unlock revenue opportunities that siloed systems leave on the table.  

The Transparent Use of AI in Healthcare 

In 2026, healthcare leaders will shift from asking whether they should use AI to how to document and explain it. The HTI-1 Final Rule introduced algorithm transparency requirements: disclosure when artificial intelligence (AI) and Machine Learning (ML) algorithms influence clinical decisions. Clinical teams need to understand when AI-driven insights are guiding care recommendations, and patients deserve to know when algorithms influence their treatment plans.

Regulatory bodies expect organizations to prove their AI tools meet safety and efficiency standards. The organizations that move early on AI governance frameworks, establish clear documentation standards and train clinicians on algorithm literacy will be ready when transparency moves from recommended to required.  

AI will also be used as the voice of healthcare. Call center staff miss operational targets by spending 25 minutes on a single call; AI, however, can make 50+ simultaneous calls while giving each patient the time they need. This capability transforms patient engagement at scale. AI enables follow-up with 100% of discharges, identifying interventions that prevent readmissions and materially impact the quadruple aim: better outcomes, better patient experiences, lower costs and improved clinician satisfaction.

Telemedicine Shifts to Integrated Care Model 

Telemedicine exploded during the pandemic as an emergency solution. In 2026, leading organizations will stop treating telehealth as a separate channel and start embedding it into the care continuum. Digital front doors guide patients to the right care setting, whether that is video, in-person or asynchronous messaging. 

The technology exists and the patient demand has been proven, but what is missing is the operational maturity to weave virtual care into clinical workflows, reimbursement models and quality measurement. Organizations that integrate this technology into their environments will deliver better access without fracturing the care experience. 

The Revenue Cycle  

Healthcare organizations have been exploring AI in clinical settings (ambient documentation, diagnostic support, care coordination), but the revenue cycle may deliver faster, more measurable returns. Prior authorization is a prime target. AI can automate documentation assembly, predict approval likelihood and flag missing information before submission.

Coding accuracy is another opportunity. Natural Language Processing (NLP) tools can analyze clinical documentation and suggest appropriate diagnosis and procedure codes, reducing claim denials and capturing revenue that incomplete documentation would otherwise forfeit. The Chief Financial Officer (CFO) conversation around AI will shift in 2026. Revenue cycle leaders will demonstrate tangible Return on Investment (ROI): fewer denials, faster reimbursement and reduced administrative costs. These wins will fund broader AI adoption across the enterprise.

Value-Based Care 

The shift to value-based care has been talked about for years, but 2026 is when data infrastructure limitations become impossible to ignore. Value-based contracts require organizations to track outcomes across care settings, measure quality metrics in real time and identify high-risk patients before they become high cost. Siloed Electronic Health Records (EHRs), fragmented data warehouses and manual reporting processes cannot support these requirements. 

Organizations need integration platforms that pull data from multiple sources, such as inpatient, outpatient, lab, pharmacy and claims. They need analytics tools that surface actionable insights, not just dashboards, and they need governance frameworks that ensure data quality and consistency. 

The healthcare organizations succeeding in value-based arrangements are not necessarily the largest or best-resourced. They are the ones that invested early in data infrastructure and developed the analytical capabilities to turn information into action.

Cybersecurity: From IT Issue to Board-Level Risk 

The proposed changes to the Health Insurance Portability and Accountability Act (HIPAA) Security Rule, published in December 2024, represent a significant escalation in regulatory expectations. If finalized in 2026, covered entities will face requirements for data encryption, Multi-Factor Authentication (MFA), network segmentation, vulnerability scanning and penetration testing. The Department of Health and Human Services’ (DHHS) Cybersecurity Performance Goals provide a voluntary framework, but the proposed HIPAA updates suggest these practices may become mandatory.

Chief Information Security Officers (CISOs) who can translate technical risks into business impacts will gain influence. Organizations that invest in both technology controls and governance frameworks will build resilience that extends beyond compliance checkboxes. Organizations that elevate cybersecurity to a strategic priority will be better prepared when threats escalate. 

The Digital Front Door 

Patient expectations have changed. People expect to schedule appointments, complete intake forms and access their health information online. The digital front door is more than a patient portal. It is a comprehensive strategy to meet patients where they are. In 2026, leading organizations will integrate digital patient engagement tools into a seamless experience, reducing administrative burden on staff, improving patient access and generating operational efficiencies. 

However, digital tools that do not connect to existing workflows create more problems than they solve. Integration of patient-facing technology with operational systems eliminates duplicate work and improves patient and staff experiences. 

Rural Healthcare Transformation 

The Rural Health Transformation Program represents the most significant Federal investment in rural healthcare infrastructure, with $50 billion over five years starting in 2026. This funding creates opportunities for technology investments in rural hospitals and health systems, particularly patient-facing solutions, technical assistance for IT and cybersecurity and innovative care models that often depend on digital tools.

Rural organizations that prepare strong applications will access resources that can transform their operational capabilities. However, rural organizations often lack the IT staff, strategic planning capacity and vendor relationships that larger systems have. The organizations that succeed in securing and deploying these funds will be those that partner with experienced implementation teams, prioritize high-impact use cases and build sustainable technology roadmaps. 

Technology vendors and solution providers should pay attention to this program. It represents a market opportunity to support underserved communities with solutions that improve access, reduce costs and strengthen resilience. 

Workforce Solutions Beyond Scheduling and Talent Management 

Healthcare’s workforce crisis continues as burnout and turnover remain high. Traditional solutions help but do not solve the underlying challenges or the impact staffing shortages have on care delivery and patient experience. In 2026, forward-thinking organizations will expand their workforce technology strategy beyond administrative efficiency to include tools that directly reduce clinician burden and improve job satisfaction.

Clinical and operational technologies improve the work experience, and organizations that recognize this and invest accordingly will differentiate themselves in competitive labor markets. Workforce development technology such as training platforms, competency management systems and career advancement tools can help organizations grow talent internally rather than recruiting externally. This is especially valuable for rural hospitals that cannot compete with compensation alone. The organizations that treat workforce challenges as technology opportunities will build more resilient, engaged and effective teams. 

The Role of Process Automation 

Healthcare has embraced automation in administrative functions like claims processing, appointment reminders and billing. These applications deliver clear ROI and do not require clinical engagement. Clinical applications, however, require different considerations than back-office automation. These workflows involve judgment, variability and patient safety concerns.

Automation in clinical settings requires trust. Clinicians need to understand how automated processes work, when to intervene and how to escalate exceptions. IT and operational leaders need to ensure automation enhances workflows rather than creating workarounds that introduce new risks. Healthcare organizations that approach automation thoughtfully will reduce burden, improve efficiency and demonstrate that technology can support instead of complicate clinical work. 

These trends represent opportunities for healthcare organizations to leverage technology in pursuit of better outcomes, improved efficiency and stronger financial performance. The organizations with clear priorities, engaged leadership and commitment to implementation will position themselves for success. As regulatory requirements evolve and patient expectations rise, technology partnerships become essential to delivering high-quality care while managing costs and operational complexity. 

Explore Carahsoft’s Healthcare Technology solutions portfolio to discover compliant, secure solutions tailored for healthcare organizations.  

Download Carahsoft’s Healthcare Buyer’s Guide to evaluate solutions that meet your organization’s operational and compliance requirements. 

Contact the Healthcare Team at (571) 591-6080 or Healthcare@carahsoft.com to discuss solutions that accelerate your technology adoption. 

From Chaos to Confidence: Building Modern Data Strategy for Government Agencies

Government agencies hold vast amounts of data but struggle to extract value from it. Historically, agencies prioritized completeness over usefulness, resulting in years of manual efforts to organize data without surfacing valuable insights. Information remained trapped in siloed systems and inaccessible formats. As artificial intelligence (AI) transforms Government operations, its success depends not on new technology but on organized, accessible and secure data. Moving from reactive data management to a proactive strategy requires rethinking how data is classified, shared and protected.

The Evolution from Data Chaos to Strategic Data Organization

Agencies have long battled data disorganization, often with approaches that created more problems. Mandating perfect data organization before system development proved counterproductive. Projects stalled as teams pursued an impossible standard of completeness through governance structures that prioritized control over utility.

Rather than starting with comprehensive inventories, agencies should ask: What do I need to know that I cannot answer today? This question identifies the data that actually matters, assigns ownership and establishes automated processes to keep information current. Focusing on real business questions, not theoretical perfection, reveals the most-used data and delivers immediate value.

This shift reframes data as a strategic asset rather than a compliance burden. Modern data organization requires data domains that map to key business functions and governance that enables access and early wins. The goal is speed and relevance over exhaustive documentation.

The Complexity and Criticality of Unstructured Data

Unstructured data, including Office documents, PDFs, imagery, drone footage, building blueprints, redlined contracts and multimedia recordings, poses a great challenge as it continues to grow dramatically. Construction agencies hold scanned blueprints from the 1950s alongside modern Computer-Aided Design (CAD) files. Legal teams generate years of contract negotiations with intelligence hidden in redlines and clause changes. Contact centers produce customer feedback that defies easy categorization yet contains critical insights. Emerging technologies like drones for monitoring or automated transcription continually introduce new data formats.

Extracting value requires technologies that classify, tag and analyze at scale. Optical Character Recognition (OCR) must identify Social Security numbers in images; classification engines need to distinguish between Controlled Unclassified Information (CUI) and Federal Contract Information (FCI) for Cybersecurity Maturity Model Certification (CMMC); multimodal tools must process audio, video and geospatial content. The challenge is organizing today’s data while preparing for tomorrow’s formats and making legacy information accessible and actionable.
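
As a minimal illustration of the kind of rule such classification engines apply, a pattern match for US Social Security numbers over OCR-extracted text might look like the sketch below. A production classifier would layer validation and context rules on top to reduce false positives:

```python
import re

# Illustrative pattern for the common NNN-NN-NNNN SSN layout in OCR output.
# Real systems also validate area/group numbers and use surrounding context
# (e.g. the label "SSN") before flagging a document for CUI-style handling.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def flag_pii(ocr_text):
    """Return True if the extracted text appears to contain an SSN."""
    return bool(SSN_PATTERN.search(ocr_text))

print(flag_pii("Applicant SSN: 123-45-6789"))  # matches the SSN layout
print(flag_pii("Invoice 123-456-789"))         # different grouping, no match
```

The same tag-then-route pattern extends to other classifiers: once a document is flagged, downstream handling (access restrictions, encryption, retention) can be applied automatically.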

Security, Access Control and Zero Trust in Modern Data Environments

As data moves into cloud, mobile and collaborative platforms, agencies face heightened security concerns. Traditional perimeter-based models, designed to secure from the outside in, no longer fit work patterns where employees access sensitive information from multiple devices and locations.

Zero Trust Architecture (ZTA) reframes security by treating trust as a vulnerability. Every access request requires continuous verification. Field-level encryption at rest and in transit becomes essential. Authentication must balance robust security with usability to avoid workarounds. Agencies must evaluate whether solutions meet FedRAMP requirements, CMMC standards and other frameworks while implementing least-privilege access and continuous monitoring.

Effective security requires a layered design across three dimensions:

  • Storage – encryption and data handling
  • Systems – secure communications between platforms
  • Access – authentication and authorization

Agencies that succeed build security into workflows rather than adding it afterward, enabling legitimate access while preventing exposure.
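
The continuous-verification idea behind Zero Trust can be illustrated with a toy access check: every request is re-evaluated against device posture, least-privilege role grants and session freshness, rather than being trusted once at a network perimeter. The roles and the 15-minute re-verification window below are assumptions for the sketch, not a standard:

```python
from datetime import datetime, timedelta, timezone

# Illustrative least-privilege role grants; real systems pull these from a
# policy engine, not a hard-coded table.
ROLE_GRANTS = {"analyst": {"read"}, "records_officer": {"read", "write"}}

def authorize(role, action, device_compliant, last_verified, now=None):
    """Allow a request only if the device passes posture checks, the session
    was verified recently and the role explicitly grants the action."""
    now = now or datetime.now(timezone.utc)
    fresh = (now - last_verified) <= timedelta(minutes=15)  # assumed window
    return device_compliant and fresh and action in ROLE_GRANTS.get(role, set())

now = datetime.now(timezone.utc)
print(authorize("analyst", "read", True, now))            # allowed
print(authorize("analyst", "write", True, now))           # denied: least privilege
print(authorize("records_officer", "write", False, now))  # denied: device posture
```

Note that a denial can come from any of the three layers independently, which mirrors the layered storage/systems/access design described above.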

Trust, Governance and the Fear of Sharing

Agencies hesitate to share data because they distrust its accuracy, currency or interpretation. Data owners understand nuances and limitations, but this context rarely transfers to others, leading to misinterpretation and errors. These issues stem from inconsistent definitions across systems, incomplete or outdated records and uncertainty about whether data reflects current operations.

Fear of misuse leads to data hoarding, which protects teams but limits organizational intelligence. Breaking this cycle requires comprehensive governance that enables rather than restricts. Effective approaches include:

  • Automating processes to ensure information is current
  • Assigning clear data ownership and accountability for quality
  • Creating data guilds for sharing best practices across organizational silos

Training, both technical and contextual, is essential. Early wins establish reliability, building trust and confidence.

AI Readiness and the Data Foundation Imperative

AI offers significant promise but depends entirely on data quality. AI cannot grant access to sensitive data, cleanse disorganized datasets or prevent hallucinations when trained on incomplete or contradictory information. AI amplifies existing data conditions: strong organization enables powerful AI applications; chaotic data yields unreliable outputs.

AI readiness intensifies longstanding challenges. Classification becomes non-negotiable when AI can process millions of documents but needs rules for handling personally identifiable information (PII), CUI and regulated data. Permissions must prevent accidental exposure or improper data combinations. Data cleansing, which includes identifying duplicates, correcting inconsistencies and validating accuracy, becomes a prerequisite for responsible AI deployment.

Because AI technologies evolve quickly, agencies must remain tool agnostic and focus on outcomes. Architecture can support multiple AI platforms and multimodal processing of text, audio, video and geospatial data. Agencies must assess current data maturity and invest in classification, cleansing and cultural alignment to ensure AI success.

Building Your Agency’s Data Strategy

Government agencies stand at a crossroads where old approaches to data management no longer suffice, yet the path forward remains challenging to navigate. Key steps include:

  • Start with the questions that matter, not perfect organization
  • Treat unstructured data as a high-value intelligence source
  • Implement security that enables legitimate access
  • Build trust through governance and early wins
  • Recognize that AI readiness begins with solid data fundamentals

Success does not require a sudden overhaul; it requires strategic focus, incremental progress and organizational commitment to treating data as the strategic asset it represents.

To dive deeper into practical strategies for organizing, securing and leveraging your agency’s data, watch the full webinar “Make Your Data Work for You – Solutions for Securing and Sharing Data Correctly” hosted by Egnyte and Carahsoft.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Egnyte, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

How Microsoft’s OneGov Agreement Brings Affordable AI-Enhanced Productivity to the Federal Government

Federal agencies need to advance artificial intelligence (AI) adoption and transform Government by modernizing legacy IT systems. Microsoft’s OneGov Portfolio delivers AI-powered collaboration capabilities through pre-negotiated discounts, giving agencies a simple, predictable way to obtain Microsoft solutions at significant cost savings.

Aligned with the General Services Administration’s (GSA) OneGov strategy to unify agencies and reduce technology silos, the program provides Federal agencies with streamlined access to Microsoft 365 Copilot, cybersecurity and monitoring tools, as well as tools that assist with citizen engagement and streamline operations. This approach simplifies procurement, accelerates deployment and delivers measurable productivity gains across mission-critical operations.

Enhanced Productivity and Secure Collaboration

The Microsoft OneGov offer provides the AI-powered productivity capabilities of Microsoft Copilot with applications agencies are using today like Word, Outlook and Teams. The platform enables users to draft content, analyze complex datasets and automate repetitive processes without switching between systems or learning new interfaces.

Government‑tailored versions of the Microsoft 365 applications operate within Microsoft’s U.S. sovereign cloud environment, giving agencies secure channels for cross-agency communication. Agencies also receive cloud storage through Microsoft OneDrive for secure, real-time collaboration and AI capabilities through Microsoft Copilot that accelerate daily workflows, including:

  • Content generation: Microsoft Copilot generates first-draft documents in Word, reducing time spent on routine writing tasks and enabling staff to focus on substantive review and refinement.
  • Accelerated communication: Microsoft Copilot summarizes lengthy email threads and drafts responses in Outlook, streamlining correspondence management across complex organizational structures.
  • Process automation: Users build agents in Microsoft Copilot to orchestrate multi-step processes, reducing manual effort and minimizing errors in repetitive workflows.

Entra ID, Microsoft’s identity management platform, supports secure collaboration across agencies. Administrators gain automated access policies, conditional access controls and enforcement of least-privilege principles, ensuring users access only content explicitly authorized for their roles.
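For a sense of what a conditional access policy looks like in practice, here is a sketch of a policy body in the shape used by the Microsoft Graph API (`POST /identity/conditionalAccess/policies`). The group ID is a placeholder and field values are illustrative; consult the Graph documentation before relying on any of this:

```python
# Illustrative conditional access policy body in the Microsoft Graph schema.
# "<agency-staff-group-id>" is a placeholder, and the "report-only" state
# shown here lets administrators observe impact before enforcement.
policy = {
    "displayName": "Require MFA for all cloud apps",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeGroups": ["<agency-staff-group-id>"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```

Starting policies in report-only mode is a common way to validate least-privilege rules against real traffic without locking users out.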

The offer includes built-in automation and bulk-assignment tools that streamline license deployment and management for agencies of all sizes. Once licenses are deployed, they are readily available to users, expediting the onboarding process.

Meeting Federal Security and Compliance Requirements

Solutions deployed through Microsoft’s Government Community Cloud (GCC) and Government Community Cloud High (GCC‑High) operate in U.S. sovereign cloud environments designed to meet Federal compliance standards. The offer supports FedRAMP High authorization and Department of Defense (DoD) Impact Level 4 (IL4) requirements through comprehensive security controls:

  • Encrypted data handling protects information in transit and at rest.
  • Role‑based access control and continuous monitoring provide layered security.
  • Data residency guarantees ensure information remains within authorized geographic boundaries.
  • Zero Trust Architecture (ZTA) enforces identity‑based access, least‑privilege permissions and robust conditional access policies across all services.

Simplified Procurement for Federal Buyers

Microsoft’s OneGov offer provides Federal agencies with pre-negotiated, standardized pricing at discounts of up to 70% off standard GSA rates. The program supports agency-wide purchasing, reduces duplicative contracting and provides multi‑year discounts on solutions such as Microsoft 365 G5 and Copilot.

All purchases remain within the GSA Multiple Award Schedule (MAS), streamlining administrative tasks and simplifying budget planning. This structure enables agencies to act quickly on modernization initiatives while maintaining compliance with Federal procurement regulations.

Deployment and Adoption

Microsoft offers end-customer development funds through the OneGov Portfolio to assist agencies with rapid deployment, implementation and adoption of these tools.

The Power of Strategic Partnerships

As The Trusted Government IT Solutions Provider®, Carahsoft worked closely with Microsoft to add OneGov offers to Carahsoft’s GSA MAS, making pricing widely accessible and offering standardized discounts ranging from 50% to 100% to Federal agencies. This partnership delivers pricing advantages on Azure Services, Microsoft 365, Copilot and Dynamics 365.

Microsoft and Carahsoft provide comprehensive support for environment qualification, anniversary alignment, suite conversions and deployment across GCC, GCC-High and DoD environments. By combining OneGov incentives with existing enterprise agreements, agencies gain simplified procurement, predictable pricing and meaningful cost savings that accelerate modernization timelines.

Explore Microsoft’s OneGov portfolio to discover available solutions aligned with the needs of Federal agencies.

Contact the Microsoft Team at (844) 673-8468 or Microsoft@carahsoft.com to receive pricing details or schedule an overview of OneGov offerings for your agency.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Microsoft, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Building Mission-Driven AI That Lasts: A Federal Agency Roadmap for Success

A recent Massachusetts Institute of Technology (MIT) study revealed that 95% of artificial intelligence (AI) projects fail before they even get started. For Federal agencies managing citizen data, classified information and critical infrastructure, this is not just a learning curve; it is a fundamental breakdown of how AI initiatives are conceived and executed. The disconnect between AI proliferation and AI success stems from a common pattern—agencies prioritizing tools over outcomes, launching disconnected pilots without enterprise alignment and lacking the governance structures to ensure accountability. The path forward requires a deliberate shift: starting with mission-driven use cases, building on clean, governed agency data and ensuring sustainable adoption through people-centered strategies.

Mission-Driven Use Case Development First, Technology Second

The fastest way to stall an AI initiative is to start with the technology instead of the mission. Too often, agencies approach AI adoption by asking “what can we do with generative AI (GenAI)?” rather than “what operational problem needs solving?” This approach yields pilots that work in limited scenarios but often fail to scale because the model, data and governance do not translate to enterprise-level needs. Strong AI use cases are not discovered after implementation; they are designed deliberately around mission outcomes and real operational constraints. Agencies should begin by defining a specific challenge or opportunity, whether it is a slow and manual process, resource-intensive workflows or error-prone operations. The critical test is simple: if success would not fundamentally change how the mission operates, it is not the right use case to prioritize.

Identifying stakeholders early is equally essential. Program owners, analysts, operators and leadership must validate whether AI will genuinely help or simply add noise to an already complex technology landscape. Agencies must also be explicit about outcomes—faster decisions, fewer errors, reduced backlogs, better procurement insights or reclaimed staff time. Without clearly articulated outcomes, measuring success or defining return on investment becomes impossible. A practical prioritization matrix can guide agencies in filtering use cases into four categories:

  • high-impact, high-effort investments for enterprise transformation
  • high-impact, low-effort quick wins ideal for pilots
  • low-impact distractions to avoid entirely
  • interesting but non-urgent projects to defer

By focusing on tightly scoped problems with clear ownership and contained risk, agencies can deliver meaningful pilots that demonstrate real value and build momentum for broader adoption.
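The four-quadrant triage above reduces to a simple scoring rule. The thresholds and example use cases below are invented for illustration; real scoring would come from stakeholder assessment:

```python
# Hypothetical impact/effort triage matching the four categories above.
# Scores run 1-10; the threshold of 5 is an arbitrary illustrative cutoff.
def triage(impact: int, effort: int, threshold: int = 5) -> str:
    if impact >= threshold:
        # High impact: worth doing — effort decides investment vs. pilot.
        return "enterprise investment" if effort >= threshold else "quick win"
    # Low impact: defer if cheap and interesting, avoid if costly.
    return "defer" if effort < threshold else "avoid"

use_cases = {
    "FOIA backlog summarization": (8, 3),   # invented example scores
    "agency-wide case management AI": (9, 9),
    "cafeteria menu chatbot": (2, 2),
}
ranked = {name: triage(i, e) for name, (i, e) in use_cases.items()}
```

Even a crude scoring pass like this forces the conversation the text calls for: naming the impact and effort of each candidate before any tooling is chosen.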

Data Foundation and Governance as the Critical Success Factor

Most AI models in use today are generalized Large Language Models (LLMs) trained on public internet data. These models are faster to deploy and have lower upfront costs, making them attractive for proofs of concept. However, they lack understanding of an agency’s unique mission, culture and decision-making context. For lasting, mission-critical AI, agencies should consider Small Language Models (SLMs) trained on agency-specific data. These models are more energy efficient, operationally reliable and context-aware, producing fewer mistakes. The challenge lies in fragmented data environments where records are spread across systems, formats and classification levels. This is where records management and data governance professionals become invaluable, helping to locate data and establish controls that transform data from a liability into a strategic asset.

AI learns directly from the data it was trained on and from how humans categorized it through reinforcement learning from human feedback. If the underlying information is disorganized, untagged or incomplete, the model will reproduce those flaws at scale. Properly governed, annotated and categorized data produces outputs that are accurate, explainable and trustworthy. Unstructured data—emails, PDFs, chat logs, memos, case files—represents roughly 80% of all agency information and contains the real story of mission operations. Yet most tools focus on structured data like databases and spreadsheets, missing the valuable context hidden in human-generated content. In-place data management addresses cost and security concerns by training and running models where data already lives, minimizing movement and preserving security boundaries. When Chief Data Officers (CDOs) and Chief AI Officers (CAIOs) collaborate under a shared governance model that includes Chief Information Security Officers (CISOs), Chief Information Officers (CIOs), legal teams and records leaders, innovation becomes both safer and faster because trust and accountability are built from the start.

The AI Failure Crisis and Its Root Causes

Federal AI adoption has accelerated faster than almost any other technology in Government history, yet this growth comes with significant risk. Currently, there is no Federal statute enacted by Congress to regulate AI across sectors, leaving agencies to rely on self-assessments and voluntary guidelines. The Office of Management and Budget (OMB) M-24-10 requires agencies to apply risk management and governance controls to high-impact AI systems, but without uniform standards for measuring impact or frameworks for compliance, agencies struggle to implement meaningful safeguards.

Many AI projects begin in isolation, driven by excitement about new tools or pressure to deliver results quickly, without engaging CIOs, CDOs or records management teams. Solutions may work adequately for limited use cases but lack the foundation to scale because governance, data quality and stakeholder alignment were afterthoughts rather than prerequisites. This pattern creates an explosion of activity with limited longevity, the very definition of a bubble. Experts report that Government is a generation behind industry in AI governance, a concerning gap given the sensitive citizen data, classified information and critical infrastructure at stake. If agencies rush to deploy AI without proper governance, they multiply the surface area for data errors, bias and compliance breakdowns. Expansion without oversight increases exposure rather than capability.

Sustainable Adoption Through People and Partnership

Even well-designed AI initiatives fail without sustained human engagement and vendor commitment. Vendors must remain engaged beyond initial implementation, continuing to train systems, monitor performance, incorporate feedback and deliver updates. If a vendor disappears after the sale, agencies are left without the support needed to refine and sustain their AI investments. This reinforces why starting with genuine use cases matters: when AI addresses tangible operational pain points, users are motivated to engage with and trust the technology.

Training cannot be a one-time orientation. Structured, continuous learning programs ensure that users understand not just the technology, but the workflows and data that feed it. Agencies should design AI for growth from the outset, building in governance controls, planning for scalability and considering reuse potential beyond the initial deployment. This “build once, reuse often” approach delivers efficiency gains and cost savings while making funding approval easier.

In an era where understanding how to learn has become the most essential skill, professionals must remain elastic and curious about topics that may fall outside traditional scopes, whether data governance for operational staff or technical architecture for mission leaders. By prioritizing mission-driven use cases, establishing robust data foundations, implementing governance as an enabler rather than a barrier and investing in people alongside technology, Federal agencies can move beyond experimental pilots to deliver AI that creates lasting, measurable impact.

To explore proven strategies for building mission-driven AI that lasts, watch ZL Technologies’ webinar, “From Noise to Impact: Building Mission-Driven AI in the Agency.”