Built for This Moment (and All Those to Come) Introducing Symantec CBX: Finally, a security platform for smaller teams fighting larger threats

  • Disconnected, vendor-dependent security stacks leave smaller teams blind to threats and overwhelmed by noise they’re not equipped to manage.
  • Symantec CBX unifies Symantec and Carbon Black capabilities into a cloud-based XDR platform that delivers native telemetry correlation, AI-driven insights and enterprise-grade protections without enterprise-level complexity.
  • Built for resource-constrained teams, Symantec CBX reduces costs, cuts alert fatigue, accelerates response and gives organizations a long-overdue advantage against increasingly sophisticated, AI-powered attacks.
  • See Symantec CBX in action in Booth N-5345 at RSAC 2026 Conference.

It’s time for the cybersecurity industry to face an uncomfortable truth: The tools meant to make organizations safer are often the very systems slowing them down, and sometimes leaving them vulnerable.

The problem is that security stacks are built over time from disparate tools that prevent analysts from seeing the full operating environment. Smaller security teams have relied on vendors to solve the challenge of integrating various products—and too often, vendors have fallen short, making it too difficult to gather and correlate the telemetry needed to understand what’s really happening across endpoints, networks and data.

While large enterprises have the resources to manage and integrate complex security stacks, left behind are the organizations that make up the largest swath of the cybersecurity customer market: smaller, less-resourced security teams that increasingly face AI-powered, enterprise-grade threats but lack the budgets and in-house expertise to implement enterprise-grade defenses. These sophisticated attacks can decimate smaller organizations, turning them into casualties of an escalating cyber war fueled by nefarious AI agents that never miss a day of work.

These security teams don’t just need better tools. They need an advantage. Now they have one.

XDR from the pioneer of EDR

Today, we’re introducing Symantec CBX, a groundbreaking new extended detection and response (XDR) solution that combines all the best capabilities of Symantec and Carbon Black into a unified, cloud-based platform. Symantec CBX is the first new product to integrate features from these two iconic brands. But more importantly, it’s the first fully featured XDR platform built expressly for smaller teams looking to evolve their security protections, but that lack the expertise and resources needed to configure and optimize traditional enterprise-class XDR solutions.

In Symantec CBX, we’ve distilled decades of innovation from Symantec and Carbon Black into a platform that solves the problem of correlating and making sense of telemetry across endpoints, networks and data. Typically, the various tools within security stacks attempt this via API integrations. But those fragmented couplings are often incomplete and leave dangerous gaps in visibility and actionable insight. Security analysts may understand that something is happening—they just don’t always know what it is or what to do about it.

The problem grows worse as attack surfaces expand. Organizations send more and more data to costly SIEM platforms, leading to a waterfall of challenges, from endless false positives that waste analyst time to murky outcomes that frustrate corporate management looking for evidence that security programs are working. These are costs smaller organizations can’t afford.

Symantec CBX solves this by combining Symantec’s robust prevention, data security and network security features with Carbon Black’s pioneering EDR technology in a single cloud platform, delivering deep visibility, exceptional threat detection and rapid response across attack surfaces. Spared from log-centric ingestion, security teams detect incidents more precisely and can act more confidently.

Native correlation is just the beginning

With Symantec CBX, native telemetry correlation sits at the center of a vast array of advanced capabilities that, until today, were available only from multiple point solutions. In CBX, we have integrated breakthrough features from Symantec and Carbon Black that make teams smarter and more efficient. Here’s what security teams can look forward to:

AI that makes life easier for humans at the helm. We’ve strategically deployed AI to deliver meaningful improvements to security workflows, resulting in capabilities that simply aren’t available anywhere else. Take Carbon Black Threat Tracer, which allows any analyst to see all adversary activity in a single pane. (Even junior analysts can understand immediately where attackers came in, how they executed their attack and what data they accessed across endpoint, network, email and cloud environments.) The CBX platform also includes Symantec Adaptive Protection, which uses AI to stop living off the land (LOTL) attacks before they do damage. And Symantec’s Incident Prediction, the groundbreaking feature we introduced last year, predicts an attacker’s next four to five moves so teams can stop threat actors moving laterally to steal data or shut down systems.

More complete insights for faster remediation. Incident Summaries, another AI-powered feature, gathers comprehensive incident data and presents it in well-written, intuitive summaries with remediation guidance, so any analyst can engage mitigation when and where it makes sense.

Enterprise-grade network and data protections. Drawing from the best of Symantec Secure Web Gateway (SWG) and Symantec DLP solutions, this new XDR platform defends the network and data domains by stopping malicious traffic at the network edge, while packaging data security essentials from our acclaimed DLP offerings to ensure that sensitive data stays where it belongs. Via the integrated Symantec Cloud SWG Express, this new platform even supports post-quantum cryptography protocols, thus shielding organizations from the threat of increasingly common “harvest now, decrypt later” attacks and relieving concerns over the prospect of attackers someday unlocking encrypted data.

Meaningful outcomes and rapid time to value. Security managers are expected to continuously improve their team’s performance, but that’s not easy when disjointed solutions create needless friction and confusion, and multiple dashboards steal time from an already busy day. We built Symantec CBX with the features and unified management console that enable the outcomes security teams need most: driving down SIEM and operational costs, rescuing analysts from alert fatigue, speeding time to resolution, meeting governance requirements and demonstrating progress by improving metrics.

Out-of-the-box policy configurations make CBX easy to implement and deliver immediate value.

The Goldilocks platform for the heart of the market

Symantec CBX is aimed squarely at the heart of the cybersecurity market, empowering and enabling security teams of virtually any size with a platform that puts them first. No other XDR solution is built so specifically for organizations laboring under tight budgets, too few resources, a persistent lack of senior expertise, chronic alert fatigue and the ever more daunting threat of AI-powered attacks.

Symantec CBX is the XDR platform for this moment and this market. As the first new solution from Broadcom to integrate capabilities from both Symantec and Carbon Black, CBX is the realization of our strategy to deliver on the “better together” pledge we made when these two legendary brands first came together under Broadcom’s Enterprise Security Group. And it’s the ideal solution for our global network of Catalyst Partners, with their deep regional expertise and close customer relationships, as they help organizations struggling to keep up in an environment of constant change and unrelenting challenges.

Overwhelmed security teams need an advantage, and now they have one.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Broadcom, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

This post originally appeared on Security.com, and is re-published with permission.

4 ways AI agents change the way we approach Identity Security

As if gaining visibility into all human and non-human identities wasn’t a big enough task for security teams, adding AI agents into the mix takes identity complexity to a new level. Organizations of all sizes are tackling this new reality, where it feels premature to confidently say they know about all the AI agents running in their environment. 

That uncertainty is not a knowledge gap. It is an attack surface. 

Gartner’s new report on IAM for AI agents names the real nugget of truth: “Purpose/intent cannot be discovered after the fact by monitoring and observability capabilities.”

That is not just analyst language. It is a fundamental shift in how we need to think about governing agents. You cannot govern agents by watching them after the fact. You must know who they are, what they are for, and who is accountable before they run. 

The numbers that should change your priorities

Gartner’s data reinforces the urgency. By 2029, over 50% of successful attacks against AI agents will exploit access control weaknesses. And a year earlier, by 2028, 90% of organizations that share credentials between humans and agents will need to make significant investments to undo that design.

Those numbers are consequences, not causes. The root cause is structural: IAM maturity for agents is uneven. The Gartner lifecycle maturity assessment makes this visible. Authentication and monitoring capabilities are relatively mature. Identity registration and authorization are not. That gap is the story. 

Weak identity registration means the agent was never properly onboarded as an identity. No defined owner. No declared purpose. No documented scope. It has credentials and it is running, but nobody can tell you who built it, what it is supposed to do, or what happens when it breaks. When registration is weak, ownership is unclear. And when ownership is unclear, accountability does not exist. 
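To make the registration gap concrete, here is a minimal sketch of what a proper agent registration record might capture before the agent ever runs. The field names and the `is_governable` check are illustrative assumptions, not any vendor's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistration:
    """Hypothetical registration record for an AI agent identity."""
    agent_id: str
    owner: str       # accountable human owner
    purpose: str     # declared intent, fixed before the agent runs
    allowed_scopes: list = field(default_factory=list)  # e.g. ["erp:read"]

    def is_governable(self) -> bool:
        # An agent with no owner, no purpose, or no scope cannot be governed:
        # nobody can say who built it or what it is supposed to do.
        return bool(self.owner and self.purpose and self.allowed_scopes)

# A properly onboarded agent
good = AgentRegistration("invoice-bot-01", "jane.doe", "invoice triage", ["erp:read"])
# An agent that has credentials and is running, but was never registered
orphan = AgentRegistration("shadow-agent", "", "", [])

print(good.is_governable())    # True
print(orphan.is_governable())  # False
```

The point of the sketch is the failure mode in the second object: credentials exist, the agent runs, but with no owner or declared purpose there is nothing for authorization or accountability to attach to.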

Weak authorization means the agent has more access than it needs. It can reach databases, APIs, and workflows that have nothing to do with its intended function. Nobody scoped it down because nobody defined what “down” looks like. When authorization is weak, privilege is excessive.

Now combine excessive privilege with autonomy. An agent that can reason, chain tools, and act on its own, with more access than it should have, and no one clearly accountable for what it does. That is the exploitable attack surface. That is the chain revealed in Gartner’s data.

You cannot protect what you cannot see

Before you can govern agents, you need to find them. All of them. Not just the ones your platform team sanctioned. The ones that developers spun up to solve an issue. The ones contractors built. The ones that exist because someone needed to “just get this working.” 

We hear this consistently from security teams. As one InfoSec manager at a professional services firm put it: “We do not find out about it until someone goes and does an actual audit of the system.” 

Gartner’s assessment confirms it: identity registration is one of the least mature IAM capabilities for AI agents. Most organizations cannot answer the basics: What is this agent supposed to do? Who owns it? What happens when it breaks? 

Discovery is not a checkbox. It is the foundation. Without it, every policy you write is based on assumptions, and assumptions do not survive first contact with autonomous agents operating at machine speed.

The identity registration gap

Most organizations are trying to govern agents with the wrong tools. They are monitoring. They are logging. But monitoring tells you what happened. Identity registration tells you what should happen. Authorization enforces the boundary between them. 

If your governance model depends on catching problems after they occur, you are always going to be behind. 

This is where many organizations reach for familiar tools. IGA platforms can help with registration and lifecycle management. IAM solutions like Okta or Entra ID can register agent identities. These are necessary steps. But they stop there. They can tell you an agent exists and who requested it. They cannot enforce anything at the moment that agent acts. 

That is the gap: governance on paper versus enforcement in production. 

Agents are identities, but not like any you have managed before

The way I read Gartner’s recommendations, there is a unifying thread: treat AI agents like you would treat any identity in your organization. They authenticate. They access resources. They act on behalf of someone. That is not a tool. That is an identity. 

But agents are more complex than traditional identities. They are what we call composite identities. They combine the blast radius of service accounts with the unpredictability of human decision-making at machine speed.

Four reasons that make them different: 

  • They act autonomously, unlike service accounts that execute predefined operations.
  • They may inherit human delegation, creating privilege escalation risk.
  • They may chain multiple machine identities in a single task.
  • They may operate across trust boundaries your IAM system was not designed to handle.

Think about how you onboard an employee. You do not give them admin access on day one. You define their role, their manager, their scope. You review their access as responsibilities change. Agents need that same lifecycle. But right now, most organizations are skipping straight to “give them credentials and hope for the best.” 

What runtime enforcement actually looks like

Gartner calls out the authorization gap. But what does closing that gap look like in practice? 

Even modern IAM systems, including conditional access and continuous evaluation, were designed primarily to evaluate who is signing in and what that identity is generally allowed to do. Agents introduce a different problem. They do not just sign in. They execute. They invoke tools dynamically. They operate across multiple identity contexts within a single task. 

Traditional conditional access evaluates who is signing in and under what conditions. Agent governance must also evaluate what is being executed, at the moment of execution. 

Here is what that looks like: an agent is about to call a tool, read from a database, trigger an API, or execute a workflow. Before that happens, there is a decision point. Runtime enforcement evaluates the composite identity: the human owner, the agent itself, the tool credentials, and the defined purpose, all at execution time. Is this agent authenticated? Does it have permission for this specific action? Is this behavior consistent with its intended function? 

That is runtime enforcement. Not configuration-time policies that assume the agent will behave as designed. Decisions at execution time, every time.
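The decision point described above can be sketched in a few lines. This is a hypothetical gate evaluated before each tool call, under the assumption that the agent carries a composite identity (human owner, agent, tool credential) and a policy declaring its scope; none of these names reflect Silverfort's actual API:

```python
# Illustrative runtime enforcement sketch: evaluate the composite identity
# and the requested action at execution time, not at configuration time.

def authorize_tool_call(identity: dict, action: str, policy: dict) -> str:
    """Return an allow/deny/step_up decision for one tool invocation."""
    # 1. The composite identity must be complete: no accountable human
    #    owner means no decision can be traced, so the call is denied.
    if not identity.get("human_owner"):
        return "deny"
    # 2. The action must fall inside the agent's declared scope.
    if action not in policy.get("allowed_actions", []):
        return "deny"
    # 3. Sensitive actions escalate to the human owner instead of running.
    if action in policy.get("step_up_actions", []):
        return "step_up"
    return "allow"

identity = {"human_owner": "jane.doe", "agent": "report-bot",
            "tool_credential": "svc-reporting"}
policy = {"allowed_actions": ["db:read", "db:export"],
          "step_up_actions": ["db:export"]}

print(authorize_tool_call(identity, "db:read", policy))    # allow
print(authorize_tool_call(identity, "db:export", policy))  # step_up
print(authorize_tool_call(identity, "db:drop", policy))    # deny
```

The contrast with configuration-time IAM is that this function runs on every invocation, so an agent that drifts outside its declared purpose is stopped at the moment it tries, not discovered in a later review.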

What Silverfort does differently

If the failure pattern is identity immaturity, then the control point must also be identity. Most AI agent security approaches start at the model or application layer. We start at the identity layer. Because if identity is uncontrolled, everything above is fragile. 

Human accountability by design

Every AI agent is explicitly tied to a real human owner in policy. Not informally. Not in documentation. In enforcement logic.

Every action can be traced back to a real chain of accountability: which human owns this agent, what identity the agent is operating under, and what credentials it uses to access resources. That is what we mean by composite identity. And it is what makes enforcement possible before monitoring even begins.

Runtime enforcement at the identity layer

Silverfort enforces at the identity decision point at runtime. For MCP-connected agents, that means sitting inline between the agent and the MCP server. For platform-native agents, enforcement is delivered through native integration, directly within the platform. 

Before a tool call executes, we evaluate identity, context, delegation, and policy in real time. If the action exceeds scope, it does not execute. This is not configuration-time IAM. This is execution-time identity enforcement. That distinction matters. 

Least privilege that survives autonomy

Static least privilege assumes predictable behavior. Agents break that assumption. They reason. They chain tools. They drift from what they were originally authorized to do. Least privilege must be validated at runtime, not just set at provisioning. 

That means if an agent tries to access a resource outside its declared purpose, it gets blocked. If delegated privileges start expanding beyond what was originally scoped, they are contained. This is the same enforcement model we apply to humans and service accounts, now extended to AI agents.

One Identity Security Platform

AI Agent Security is not a standalone product. Agents sit at the intersection of human identities, non-human identities, service accounts, cloud resources, SaaS applications, and protocol layers like MCP. If those domains are secured separately, agents will exploit the seams. 

Silverfort unifies this. One policy framework. One observability layer. One enforcement architecture. Across humans, machines, and AI. That is the architectural difference.

Enabling AI innovation without slowing it down

Security leaders are not trying to stop AI adoption. They are trying to make sure it does not outrun their ability to govern it. The organizations moving fastest with AI agents are the ones that figured out early: the right security model is a speed advantage, not a drag. 

Cars have brakes so you can drive fast. The same principle applies here. 

But the brakes only work if they’re connected to the same system. Today, most organizations secure human identities in one tool, service accounts in another, and AI agents (if at all) in a third. If those domains are secured separately, agents will exploit the seams. 

That’s the reason teams need a unified Identity Security Platform:

  • One policy framework means a CISO can define “no agent accesses production data without human approval” once and have it applied across every agent, every platform, every protocol. No per-tool configuration. No coverage gaps.
  • One observability layer means when an agent acts, you see the full chain: which human triggered it, which NHI it authenticated with, which tool it called, and what data it touched. Not three dashboards stitched together after the fact, but a single view that makes incident response possible in minutes instead of days.
  • One enforcement point means policy is applied at runtime, at the moment of action, not retroactively through quarterly access reviews. When an agent requests access, the decision happens inline. Allow, deny, or step up. Before the action executes, not after. 
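The "define once, apply everywhere" idea in the bullets above can be sketched as a single shared rule evaluated for every agent on every platform. The rule, agent names, and resource prefixes here are hypothetical, chosen only to mirror the CISO example of gating production data behind human approval:

```python
# One policy framework: a single rule, defined once, evaluated for every
# agent and platform -- no per-tool configuration, no coverage gaps.

RULE = {"resource_prefix": "prod:", "requires_human_approval": True}

def evaluate(agent: dict, resource: str, human_approved: bool) -> bool:
    """Return True if the agent's action may proceed under the shared rule."""
    if resource.startswith(RULE["resource_prefix"]) and RULE["requires_human_approval"]:
        # Production data is gated behind explicit human approval.
        return human_approved
    return True  # resources outside production are not gated by this rule

agents = [{"name": "crm-agent", "platform": "mcp"},
          {"name": "hr-agent", "platform": "native"}]

for agent in agents:
    # The same rule applies regardless of which platform the agent runs on.
    allowed = evaluate(agent, "prod:customers", human_approved=False)
    print(agent["name"], allowed)  # both print False: blocked without approval
```

Because the rule lives in one place, changing it (say, widening the gated prefix) changes behavior for every agent at once, which is precisely what per-tool configuration cannot guarantee.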

This is what shifts AI agent security from a governance exercise to an operational capability. Discovery tells you what exists. Registration tells you who owns it. Runtime enforcement tells agents what they’re actually allowed to do, in the moment, every time. 

AI agents represent the next frontier of identity. Identity Security must evolve accordingly, from governance alone to continuous, runtime enforcement. Discover what is running. Register who owns it. Enforce at the moment of execution. That is the path. 

The Gartner report is worth reading in full: https://www.silverfort.com/landing-page/campaign/gartner-report-iam-for-agents/.

Want to learn how Silverfort discovers and protects AI agent identities? See AI Agent Security in action.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Silverfort, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

This post originally appeared on Silverfort.com, and is re-published with permission.

Ignite. Innovate. Impact: Key Takeaways from NAWB The Forum 2026

For the first time in over 40 years, the National Association of Workforce Boards (NAWB) took its premier annual event on the road, landing in Las Vegas for The Forum 2026. This year’s theme, “Ignite. Innovate. Impact,” signaled a bold shift in how the workforce system addresses rapid economic change, emerging technology and legislative uncertainty.

Whether you missed the sessions or just need a refresher to share with your board, here is a summary of the major trends and tactical insights that defined the conference.

1. The Era of Generative AI: From Hype to Implementation

Perhaps the biggest “main stage” topic this year was the shift from talking about AI to using it. Sessions like “What AI ISN’T: Rethinking ChatGPT and Policy” and “The Current State of AI in Workforce Development” moved past the buzzwords.

Key Takeaways:

  • Capacity Building: AI is being framed as a tool to “do more with less” as boards face funding constraints. By automating routine administrative tasks, staff can shift focus to high-value human services like coaching and relationship building.
  • The “Human” Edge: Despite the automation, speakers emphasized that AI-exposed occupations still require human judgment, creativity and “core employability skills” (soft skills), which workforce boards are uniquely positioned to teach.
  • New Credentials: Discussion centered on emerging credentials for AI quality assurance, prompt design and data annotation as new entry points for job seekers.

2. Advocacy & WIOA Reauthorization

With the workforce system at a crossroads, advocacy was a central pillar of the 2026 agenda. The message from the “Inside the Beltway” updates was clear: workforce boards must be their own best storytellers.

Strategic Priorities:

  • WIOA Flexibility: NAWB continues to push for the reauthorization of the Workforce Innovation and Opportunity Act (WIOA), specifically advocating against “one-size-fits-all” mandates and for the reduction of state-level set-asides (from 15% to 10%) to return more funding to local control.
  • Data-driven evidence: Utilize current employment data from authoritative sources to substantiate your achievements.
  • Short-Term Pell: There was significant momentum around expanding Pell Grant eligibility for high-quality, short-term skills development programs that align with in-demand careers.

3. Solving the Childcare & Trades Equation

A standout session focused on the intersection of labor and family support: “Meeting Big Needs with Big Solutions.” Using Pierce County Labor and the Machinists Institute as a model, the session explored how investing in childcare for trades workers is no longer a “benefit”; it is a critical infrastructure requirement for a stable workforce.

4. Expanding the Apprenticeship Model

Registered Apprenticeships (RA) were highlighted as the gold standard for sustainable sector pipelines.

  • Influence Meets Industry: Sessions focused on making RA a “household name” beyond just the construction trades, expanding into Logistics, Electric Vehicles (EV) and even Childcare.
  • Public-Private Funding: A major theme was leveraging diverse funding streams (not just WIOA) to sustain apprenticeship momentum during economic shifts.

5. Organizational Resilience & Leadership

For Executive Directors and Board Chairs, the conference offered a deep dive into “Full Throttle Leadership.”

  • Contingency Planning: A specialized pre-conference session focused on helping boards navigate labor market shocks and talent shortages with decisive, proactive planning.
  • Culture Matters: Insights from the Eastern Kentucky Concentrated Employment Program (EKCEP) highlighted how a “culture of performance” can increase engagement among employees and elected officials alike.

Why it Matters for Our Community

The shift to Las Vegas was more than a venue change; it was a metaphor for the “nationwide tour of innovation” that NAWB is championing. The 2026 Forum made it clear that the future of work isn’t just about jobs; it’s about ecosystems.

As we bring these insights back to our local regions, our focus should remain on:

  1. Embracing AI ethically to improve service delivery.
  2. Advocating for local control and flexible funding.
  3. Integrating supportive services (like childcare) directly into our workforce strategies.

We had a great time and learned a lot. Schedule a meeting to chat more about the conference.

The Top 5 Insights for Government from HIMSS 2026 

Healthcare and technology leaders convened at the Healthcare Information and Management Systems Society (HIMSS) 2026 conference with a shared sense of urgency as the Federal health ecosystem is undergoing one of its most significant transformations in decades. Across panel sessions, discussions highlighted both the structural challenges and strategic investments shaping Government health agencies, from modernizing public health data infrastructure to addressing long-standing interoperability barriers that have fragmented care delivery.  

Five critical insights emerged that define a path toward a more connected, data-driven and patient-centered Federal healthcare system. 

Federal AI Policy Is Being Rebuilt Around Coordination, Not Fragmentation 

Leaders from the Department of Health and Human Services (HHS) emphasized that agency-by-agency artificial intelligence (AI) experimentation is ending. With dozens of programs across its divisions, HHS has restructured its AI strategy around three coordinated pillars: regulation, reimbursement and research/development.  

Historically fragmented efforts created conflicting signals and limited cross-agency innovation. Now, the Secretary’s office serves as an alignment layer, ensuring regulatory decisions at the Food and Drug Administration (FDA), reimbursement policies at the Centers for Medicare and Medicaid Services (CMS) and research investments at the Advanced Research Projects Agency for Health (ARPA-H) are coordinated. The goal is not to expand Government roles, but to remove barriers and accelerate adoption of existing technologies. 

The FDA is rethinking how AI-enabled medical technologies are regulated. After authorizing more than 1,000 AI and machine learning products, primarily in radiology but expanding into other domains, the agency recognizes the limits of a pre-market framework designed for static hardware, not continuously evolving software. Leaders described a shift toward lighter pre-market review paired with stronger post-market surveillance, focusing on real-world performance, model drift and patient outcomes. This approach requires new regulatory frameworks and enhanced data-sharing between developers, providers and regulators.  

ARPA-H complements this work by funding high-risk, high-reward innovations not supported through traditional mechanisms. Notably, no generative AI (GenAI) technology capable of providing clinical care has received FDA authorization, a gap the agency aims to close. One flagship initiative supports AI systems capable of performing comprehensive physician functions, developed alongside the FDA to establish new regulatory pathways. Additionally, ARPA-H is investing in “supervising agents,” systems that monitor and control deployed AI, addressing the scalability limits of human oversight. 

The VIP Sets a New National Standard for Health Data Exchange 

The Department of Veterans Affairs (VA) positioned itself as a national convener for interoperability through the Veteran Interoperability Pledge (VIP), which unites leading health systems to improve care coordination for veterans regardless of where they receive care.  

Grounded in the Elizabeth Dole Act, the initiative mandates rapid adoption of national interoperability standards across care coordination, benefits, identity matching, quality measurement and public health. VA leaders outlined a layered interoperability model—from foundational standards such as X12, Fast Healthcare Interoperability Resources (FHIR) and Bulk FHIR, to data quality frameworks like Patient Information Quality Improvement (PIQI) and ultimately to advanced analytics and decision support. The key message: interoperability is foundational, but value is created through what is built on top of it. 

Operationally, the VIP is already enabling real-world capabilities. The Veteran Confirmation Application Programming Interface (API) allows Electronic Health Records (EHRs) to verify veteran status in real time, supporting eligibility recommendations under the Promise to Address Comprehensive Toxics (PACT) Act and the Comprehensive Prevention, Access to Care and Treatment (COMPACT) Act. Two workgroups are developing recommendations for identity verification and care coordination workflows, targeting submission by the end of March. A structured cadence of monthly plenaries and bi-weekly workgroups ensures continuous alignment between policy, standards and implementation. 

Seamless Collaboration Requires Breaking Down Technical and Cultural Barriers 

Federal, State and Local leaders underscored that populations served by multiple programs cannot be effectively supported by siloed agencies. Both technical and cultural barriers must be addressed simultaneously. 

At the Federal level, CMS, VA and the Indian Health Service (IHS) are advancing shared infrastructure and lowering redundancy. CMS is transitioning from Government-developed systems to commercial platforms, accelerating innovation and enabling AI tools that now reach approximately 80% of its workforce, saving an estimated 5.5 hours per employee weekly. The agency is also adopting a multicloud strategy for resilience and fostering talent pipelines through partnerships with institutions like the University of Maryland. 

IHS is undergoing a similar transition to commercial platforms, improving AI integration and expanding access to advanced tools in rural and tribal communities. Enterprise services help ensure equitable access where local technical resources are limited. The VA is modernizing security processes to reduce delays in technology adoption and leveraging physical locations to support identity verification, improving access for veterans struggling with digital enrollment. 

Bridging the digital divide also requires workforce and literacy solutions. Baltimore City panelists highlighted the need to translate Federal data into local action, particularly around social determinants of health, including housing and economic mobility. Community health workers were cited as essential connectors and should be integrated into digital strategies from the outset. 

Public Health Data Infrastructure Must Shift from Detection to Prediction 

The Centers for Disease Control and Prevention (CDC) acknowledged that current public health infrastructure is designed for detection, not prediction. While improvements have been made since COVID-19, a broader transformation is still underway.  

The One CDC Data Platform (1CDP) serves as a central hub, enabling flexible data exchange, reusable capabilities and advanced analytics. Its purpose is to shift focus from manual data processing to proactive analysis and decision making. Leaders envision disease forecasting becoming as routine as weather forecasting, with real-time modeling to guide early intervention. 

State-level examples illustrate this shift. Illinois is consolidating siloed systems into a unified cloud platform, while addressing cultural resistance to data sharing. Louisiana is focusing on targeted, use-case-driven improvements tied to Medicaid and public health outcomes. Mississippi is prioritizing foundational infrastructure and workforce readiness before scaling analytics. Across all three states, the consensus is clear that interoperability only delivers value when tied to actionable outcomes. 

The VA’s NextGen CCN Redesigns Care Delivery at National Scale 

Community care is one of the fastest-growing components of the VA healthcare system. Of the 17 million veterans served, roughly 6.3 million use VA healthcare annually, with 2-3 million accessing community providers. Programs introduced through the Choice Act and Maintaining Internal Systems and Strengthening Integrated Outside Networks (MISSION) Act expanded access but created operational and financial complexity. 

The Next Generation Community Care Network (NextGen CCN) addresses these challenges through a comprehensive redesign of how the VA manages external care. Expected to launch in early 2027, the program introduces a more competitive ecosystem involving insurers, providers and technology partners. 

Key capabilities include improved care coordination, real-time data exchange, standardized quality benchmarks and outcomes-based reimbursement. Interoperability is foundational to these goals, enabling performance measurement and accountability. The program also prioritizes transparency and trust across stakeholders, ensuring a shared understanding of care delivery. Together, these efforts are designed to position the VA to deliver high-quality, fiscally responsible care while continuing to expand access for a veteran population whose demographics and care needs are rapidly evolving. 

Charting the Course for Federal Health IT Modernization 

HIMSS 2026 reinforced that progress in Federal healthcare requires aligned investment across AI governance, interoperability, cross-agency collaboration, data infrastructure and care delivery redesign. Government health agencies are not simply adding new technologies onto existing systems; they are rethinking how they organize, share data and operate as an integrated ecosystem. Sustained success will depend on aligned standards, cultural transformation and technologies that translate strategy into measurable outcomes. 

As Carahsoft, The Trusted Government IT Solutions Provider™, continues supporting Federal health IT modernization, these insights inform how industry can partner with Government to deliver a more connected, data-driven and patient-centered healthcare system. 

Explore Carahsoft’s Healthcare Technology portfolio of leading solutions that support Federal healthcare modernization priorities including AI, interoperability, cloud infrastructure and advanced analytics. 

Contact the Health IT Team at Healthcare@Carahsoft.com or (571) 591-6080 to learn more. 

How AI is Reshaping Courts and Legal Operations 

The conversation around artificial intelligence (AI) in the legal system has fundamentally shifted: courts and legal organizations are no longer debating whether AI belongs in legal environments, but how to integrate it responsibly into daily operations. For courts facing expanding caseloads, staffing shortages and budget constraints, AI-powered legal technologies have become operational tools for improving efficiency, access to justice and administrative effectiveness across the legal lifecycle. While AI can significantly enhance legal workflows, responsibility for judgment, accuracy and decision-making must remain with human professionals. 

From Policy Discussion to Practical Adoption 

The American Bar Association’s (ABA) Year 2 Report on the Impact of AI on the Practice of Law makes clear that AI adoption in the legal profession has entered a new phase. Early concerns centered on ethics, confidentiality and professional responsibility. Today, the focus has shifted toward responsible deployment, governance and workflow integration where efficiency gains are immediate and measurable. These applications allow courts to redirect limited staff resources toward higher-value legal and judicial work rather than routine manual processes. 

Common AI-enabled courtroom use cases already in practice include: 

  • Organizing and searching large volumes of filings, briefs and evidence 
  • Creating unofficial or preliminary real-time transcriptions 
  • Summarizing motions, exhibits and prior case materials 
  • Supporting scheduling, workload analysis and calendar management 

This is especially important for Federal, State and Local courts that must maintain service levels despite limited resources. AI-enabled legal technologies provide a validated path to modernizing court operations while preserving judicial independence, transparency and accountability. 

Real-World Applications Delivering Value 

AI adoption is already producing tangible operational benefits across court systems. 

Administrative and workflow automation applications include drafting routine administrative orders and standard court notices, managing scheduling and calendar coordination, conducting workload studies and organizing court documents and filings for improved retrieval. These implementations reduce administrative burden while improving consistency in standard legal processes. 

Document review and case support capabilities allow legal teams to summarize briefs, motions, pleadings, depositions and exhibits at scale. AI systems create timelines of relevant events across large case records and assist with legal research when trained on reputable legal authorities. Some implementations identify misstated law or omitted legal authority in filings, though human verification remains mandatory for all outputs. 
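The timeline-assembly step described above can be sketched simply: given events an AI system has extracted from case documents (source, date, description), merge and order them into a single case chronology. The event records here are hypothetical examples, not drawn from any real filing.

```python
# Sketch of building a case timeline from events extracted across
# documents. Each record keeps its source so reviewers can verify
# the underlying filing (human verification remains mandatory).
from datetime import date

def build_timeline(extracted_events):
    """Sort extracted events chronologically into one case timeline."""
    return sorted(extracted_events, key=lambda e: e["date"])

events = [
    {"date": date(2024, 3, 1), "source": "Exhibit B", "event": "Contract signed"},
    {"date": date(2023, 11, 15), "source": "Deposition", "event": "First meeting"},
    {"date": date(2024, 6, 9), "source": "Motion", "event": "Complaint filed"},
]

for e in build_timeline(events):
    print(e["date"], "-", e["event"], f"({e['source']})")
```

Real deployments add entity resolution and date extraction from unstructured text; the ordering-and-provenance pattern stays the same.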

Transcription, translation and accessibility services are also being rapidly adopted. Courts are generating unofficial or preliminary real-time transcriptions to accelerate case documentation. Systems provide preliminary translations of foreign-language documents and support accessibility services for self-represented litigants navigating complex court procedures. These applications expand access to justice by reducing cost barriers and improving navigation of legal systems for citizens. 

Scaling Court Operations Under Budget Constraints 

Rising caseloads combined with constrained budgets make AI adoption particularly relevant for Government legal operations. Technology adoption has emerged as the primary driver of scalability for courts that cannot expand head count. By automating manual processes such as transcription, document review, evidence management and research, AI allows existing staff to handle higher volumes while maintaining or improving service quality.  

This approach aligns with broader access-to-justice goals highlighted in the ABA report. AI-enabled tools are already helping courts improve case management, streamline dispute resolution processes and support self-represented litigants through better access to information and court services. These gains are particularly impactful for jurisdictions seeking to modernize legacy systems while preserving fairness, transparency and judicial independence. 

Human Oversight and Accountability 

While AI delivers meaningful efficiency gains, the ABA report stresses that AI-generated outputs may appear authoritative while containing factual or legal inaccuracies. The risk of hallucinations has not been fully resolved in any current generative AI (GenAI) tools. As a result, AI should not replace judges or court staff, nor should it be treated as an authoritative source of truth. Instead, AI should serve as an assistive technology that augments human expertise, improving documentation quality, accelerating research and making information more accessible. 

Judicial guidelines outlined in the report reinforce several critical principles: 

  • Judges and attorneys remain fully responsible for accuracy and legal reasoning 
  • AI-generated content must always be reviewed for correctness and relevance 
  • Overreliance on AI can introduce risks such as automation bias or misinformation 

Courts adopting AI must establish clear governance frameworks that address privacy, security, transparency and oversight. Human verification of AI outputs is essential to ensuring that AI enhances documentation quality and accelerates legal research without compromising accuracy, professional responsibility and public trust. 

Responsible Adoption Through Trusted Procurement 

The ABA emphasizes that responsible AI adoption is not optional; it is a leadership responsibility. Human oversight, ethical use policies and ongoing evaluation remain essential to ensuring AI strengthens, rather than undermines, trust in the justice system. 

Carahsoft, The Trusted Government IT Solutions Provider®, works with leading legal tech software providers to help Federal, State and Local courts modernize legacy systems, reduce administrative burden and implement AI responsibly at scale. By making these technologies accessible through trusted procurement vehicles, Carahsoft enables courts and Government legal organizations to adopt AI while aligning with established legal, ethical and operational requirements.  

AI is not a substitute for legal expertise, but it is quickly becoming an indispensable tool for courts seeking efficiency, consistency and scalability. By procuring AI solutions through Carahsoft, Government courts can ensure their modernization demands will be met while maintaining legal and ethical standards. As AI continues to reshape legal operations, organizations that pair technology deployment with clear governance, training and accountability frameworks will be better positioned to deliver improved services to the public.  

Ready to explore AI-enabled legal technology solutions? Explore Carahsoft’s Legal & Courtroom Technology Solutions portfolio or take a Self-Guided Tour. 

Contact Carahsoft’s team at LegalTech@carahsoft.com to discuss AI solutions tailored for your organization’s needs.  

Unified Financial Intelligence: Why Government Finance Teams Have a Data Foundation Problem, Not a Data Problem

How Incorta, Google and Carahsoft help State, Local, education and Federal civilian agencies move from slow close cycles to real-time, AI-ready financial insight

I spend a lot of my time talking with Government finance leaders—CFOs, comptrollers, budget directors—and the conversation almost always starts with AI and ends with data. Almost every agency I talk to eventually runs into the same wall: their data isn’t ready. As we move toward agentic AI—AI that takes actions and makes decisions on its own, not just answers questions—the demands on that foundation multiply fast. Until it’s right, AI remains a slide in a strategy deck. That’s the problem Incorta was built to solve.

Nowhere is this more obvious than in Public Sector financial management, where the stakes are high, the infrastructure is often decades old and the expectation for transparency has never been greater. If we want to talk seriously about Unified Financial Intelligence in Government, we have to talk seriously about the data brain underneath it—the trusted, real-time, contextual foundation that AI agents depend on to make accurate, explainable decisions. Without it, you don’t have an AI problem. You have a data problem dressed up as one.

The Real Bottleneck: Government Finance Needs a Data Brain

Public Sector finance teams are under more pressure than ever: leaner budgets, post-pandemic fiscal gaps, enrollment volatility and a mandate to do more with less. New White House and OMB directives are accelerating the AI timeline—agencies are being asked to demonstrate AI-ready infrastructure now, not in a future budget cycle.

For CFOs, comptrollers and finance teams, that pressure is concrete. Close cycles still take days or weeks. Analysts spend more time gathering data than using it. When leadership questions a number, the answer is “let me pull it manually”—because the system shows aggregates, not the transactions behind them.

The root cause isn’t a lack of tools or talent. Financial data is scattered across GL, procurement, grants, payroll and project systems—each with its own codes and timing—and traditional ETL strips out the very context that makes it useful. That’s the data brain problem.

What the Data Brain Has to Deliver

For finance, AI isn’t about prettier dashboards. It’s about answering hard questions: why did this variance occur? Where are the early signals of fraud, waste or abuse? What does next quarter look like if this assumption changes? To answer those credibly, AI needs a data brain.

That data brain has to deliver three things: granularity (100% transactional detail), timeliness (near real-time, not last week’s batch) and context (preserved relationships—purchase orders to vendors, funds to appropriations, payroll to projects).

Traditional ETL gives you the opposite of a data brain: summarized, stale data stripped of business logic. When you layer AI on top of it, the model fills in the gaps—and for Government finance, that is not just a technical problem. If an AI-assisted answer can’t be traced back to the exact transaction, your auditors and oversight bodies won’t accept it. That’s how you get hallucinations instead of financial intelligence.

The “AI problem” and the “data problem” in Government finance are actually the same problem. Build the data brain, and Unified Financial Intelligence follows.

What Changes When You Have a Data Brain

Take a Federal civilian agency we worked with: 24-hour data refresh cycles, manual reconciliation, spreadsheets and email chains just to close the books. Analysts spent most of their time getting data into a usable format—not using it.

After implementing Incorta with Google Cloud, that agency went from 24-hour to 15-minute data refreshes for key financial subject areas.

  • From periodic close to continuous audit. Anomalies surface in near real-time—before they snowball, not after month-end.
  • From “check the dashboard” to “follow the data.” The CFO questions a number; the analyst drills to the exact transaction, in the same environment.
  • From data gathering to value creation. Analysts shift from reconciliation to scenario modeling and real decisions.

That’s Unified Financial Intelligence with a data brain underneath it: full, timely, contextual access to the truth—and the time to actually use it.
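The “follow the data” pattern above—an aggregate that stays linked to the transactions behind it—can be sketched in a few lines. The transactions and fund codes here are hypothetical sample data, not any agency’s records.

```python
# Minimal sketch of aggregate-to-transaction drill-down: the total a
# CFO sees is computed from, and remains linked to, the underlying
# transactions, so any number can be traced back to its source.
from collections import defaultdict

transactions = [
    {"id": "T-001", "fund": "GF", "vendor": "Acme", "amount": 1200.00},
    {"id": "T-002", "fund": "GF", "vendor": "Beta", "amount": 800.00},
    {"id": "T-003", "fund": "CP", "vendor": "Acme", "amount": 500.00},
]

def summarize_by_fund(txns):
    """Aggregate while keeping the transaction IDs behind each total."""
    summary = defaultdict(lambda: {"total": 0.0, "txn_ids": []})
    for t in txns:
        summary[t["fund"]]["total"] += t["amount"]
        summary[t["fund"]]["txn_ids"].append(t["id"])
    return dict(summary)

def drill_down(txns, fund):
    """Return the exact transactions behind a fund-level total."""
    return [t for t in txns if t["fund"] == fund]

summary = summarize_by_fund(transactions)
print(summary["GF"]["total"])          # 2000.0
print(drill_down(transactions, "GF"))  # the two GF transactions
```

The point is structural: when the analytical layer retains 100% transactional detail, the analyst’s drill-down is a query, not a manual reconciliation.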

How Incorta Builds the Data Brain

The traditional path to modernizing financial data in Government is measured in years and eight-figure budgets—and most of us have seen how that story ends. At Incorta, we took a different approach: build the data brain for Government finance on Google Cloud without requiring agencies to tear out what’s already there. Three pillars make that possible:

  1. Direct access to ERP data in its native form – Incorta connects directly to Oracle EBS, Oracle Fusion, SAP and Workday, ingesting data in its native schema—no heavy transformation, no lost business context.
  2. Prebuilt blueprints for Public Sector financial systems – A library of prebuilt blueprints captures how ERP tables relate, how funds and projects are structured and how to translate that into analytics-ready models—removing months of data engineering work.
  3. Landing it all in Google BigQuery for AI-ready analytics – The result is a production-ready financial data brain in Google BigQuery—granular, near real-time and fully contextualized—stood up in weeks, not months or years, with Gemini for Government and agentic AI tools ready to operate on top.

On top of this, Incorta layers AI-powered insights with built-in hallucination mitigation, role-based access controls, audit trails and mirrored source system permissions—so agencies can scale AI without sacrificing governance.

Carahsoft plays a crucial role in this story by making it easy for agencies to get started—through existing contract vehicles and the Google Cloud Marketplace—without embarking on another risky, bespoke IT project.

Where State, Local, Education and Federal Civilian Finance Teams Are Starting

State budget offices need real-time visibility into appropriations and fund balances—so leadership responds to revenue shifts, not monthly reports. Local Governments want to move from reactive spreadsheets to proactive scenario planning and cleaner audits. Education finance teams need unified views of budgets, grants and financial aid to navigate enrollment volatility. Federal civilian CFO offices are pursuing continuous close and early AI-driven detection of fraud, waste and abuse. In every case: build the data brain first, and the downstream AI use cases become operational, not experimental.

Getting Started Doesn’t Have to Be a Multi-Year Commitment

One of the most consistent concerns I hear is: “We’ve been burned by big data projects before. We can’t sign up for another multi-year transformation.” That hesitation is completely rational—and it’s exactly why we’ve structured our approach with Google and Carahsoft to deliver value in weeks, not years.

A practical entry point is a Unified Financial Intelligence Modernization Assessment—a focused engagement to assess your ERP landscape, map how your data lands in BigQuery (secure, governed, auditable) and define a 60- to 90-day outcome that shows what the data brain delivers in your environment.

Incorta is available through Carahsoft on the Google Cloud Marketplace—most agencies can use existing contracts and cloud commitments to get started, no new RFX required.

The Bottom Line

State, Local, education and Federal civilian finance teams don’t need another dashboard. They need the data brain that makes Unified Financial Intelligence possible—access to all of their financial data, in near real-time, with full business context, so they can shift from gathering data to actually using it.

That’s what Incorta, Google and Carahsoft are building together for Government. In an environment where agencies are being asked to do more with less, standing up that data brain in weeks rather than years isn’t just a nice-to-have. It’s the difference between a finance function that’s keeping up and one that’s falling behind.

→ Request a live Agentic AI demo — see Incorta + Google in action on your mission data.

→ Try free for 30 days on Google Cloud Marketplace — software free; infrastructure costs may apply.

→ Get started with the Unified Financial Intelligence Modernization Assessment — map your data brain and define a 60- to 90-day outcome.

Ready to explore what real-time financial intelligence looks like for your agency? Learn more about Incorta’s Government solutions on Carahsoft’s Incorta microsite. Watch our joint Incorta + Google session on AI-ready financial data for Public Sector.
Contact the Carahsoft Team ☎ (703) 871-8548  |  ✉ incorta@carahsoft.com

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Incorta, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Weathering the Storm: Migrating to the Cloud in Government

Government agencies are under increasing pressure to modernize IT systems and deliver secure, efficient digital services. Migrating to the cloud is a critical step in this transformation, but the journey can feel like navigating a storm. In our latest CarahCast podcast episode, “Weather the Storm of Migrating to the Cloud,” experts share strategies to help agencies adopt cloud solutions with confidence.

Why Cloud Migration Matters 

Cloud adoption enables scalability, resilience and innovation. Agencies can reduce reliance on outdated legacy systems, strengthen disaster recovery and improve citizen services. 

Key Benefits: 

  • Efficiency: Lower costs and improved scalability.
  • Resilience: Faster adaptation to crises and cybersecurity threats.
  • Innovation: Access to artificial intelligence (AI), analytics and automation.
  • Citizen experience: Reliable digital services that build trust.

Key Challenges: 

Despite its benefits, migration presents hurdles:

  • Security and compliance requirements
  • Legacy infrastructure integration
  • Budget limitations
  • Cultural resistance to change
  • Vendor management and lock‑in risks

Expert Insights from CarahCast 

Podcast experts highlight that migration is not one‑size‑fits‑all. Key takeaways include:

  • Start small with pilot projects to prove value.
  • Embed security and compliance at every stage.
  • Engage stakeholders across IT, leadership and end‑users.

As one guest noted, “Cloud migration is about resilience, not just moving workloads.”

Best Practices to Weather the Storm 

To navigate the complexities of cloud migration, agencies should: 

  • Define a clear roadmap with goals and milestones.
  • Use hybrid approaches to balance on‑premises and cloud systems.
  • Invest in staff training and change management.
  • Partner with trusted vendors and experts.
  • Measure success with KPIs like uptime and cost savings. 

Real‑World Examples 

Agencies nationwide are already seeing results:

  • State Governments modernized licensing systems to reduce wait times.
  • Federal departments leveraged cloud analytics for disaster response.
  • Local Governments adopted cloud collaboration tools to streamline operations. 

Listen to the Podcast

For deeper insights, tune in to CarahCast: Weather the Storm of Migrating to the Cloud. Hear directly from experts guiding agencies through successful migrations.

Migrating to the cloud may seem daunting, but with the right strategy, agencies can emerge stronger, more resilient and better equipped to serve citizens. The CarahCast podcast is your trusted resource for navigating this journey. Subscribe today to stay informed on the latest technology trends shaping Government.

Smart Guarding: How AI can be Used to Enhance Vacant Building Security

After 2020, the landscape of corporate real estate changed dramatically. Companies across multiple sectors, including technology, transitioned from in-office work to hybrid or fully remote models. Vacancy rates on corporate campuses increased to 15-20%, opening companies up to a multitude of liabilities and operational challenges. Artificial intelligence (AI) has brought a new edge to vacant building security. Smart guarding and solar guards elevate the security posture of vacant buildings, defend corporate assets and deliver a Return on Investment (ROI) through effective security measures.

Risks of Vacant Building Stewardship

Vacant buildings come with a series of unique risks to the company that either owns or leases the building. These locations are particularly attractive for criminal activity, especially trespassing and vandalism. Companies also face other risks such as copper theft and squatting that result in higher insurance claims, causing rising premiums. Further challenges come from the range in potential responses from law enforcement. The crime rate in the area will greatly affect how quickly police respond to the call, or whether they will respond at all if there is not an active incident.

Traditional security models for vacant buildings rely heavily on human patrols and come with their own operational drawbacks. A common term in the industry, “warm body” guards, describes guards who are physically present but do only the bare minimum required to complete the job; in other words, their physical presence alone is deemed enough to deter criminal activity. Depending on the size and scope of the campus, these security measures can cost up to $25,000 per month. The ROI is negligible at best, and companies are often left with an expensive yet ineffective security protocol.

With property vacancy on the rise, companies need a solution that is cost effective but does not sacrifice protection or increase their risk profile. That solution lies in the integration of cutting-edge technology with human security.

The Modern Security Guard: Smart Guarding and Solar Guard

Prior to the existence of AI, the Silicon Valley Model sought to enhance building security by combining electronic access control with a fully developed in-person security protocol. This model gives companies the opportunity to employ security guards with relevant prior experience, such as former law enforcement and military members, who have strong communication and customer support skills. The key to success is a combination of the right people on site and the proper technological processes in place.

Sentry AI’s Smart Guarding takes this approach a step further by integrating AI agents into the security protocols. A range of sensors is installed across the building. These can include:

  • Cameras
  • Microphones
  • Motion sensors
  • Turnstiles
  • Fire detection (smoke detectors, heat detectors, etc.)

With the number of sensors that exist in a singular building, a Security Operations Center (SOC) analyst can get easily overwhelmed by the sheer volume of alerts. An AI agent established at the core of this alert system can absorb the information, interpret the incoming data and pass on the relevant security alerts to the SOC analysts.
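The triage role described above can be sketched as a simple filter sitting between raw sensor events and the SOC analyst: safety events always escalate, after-hours security events escalate, and routine daytime activity is suppressed. The rules and event fields here are illustrative assumptions, not Sentry AI’s actual logic.

```python
# Sketch of AI-agent alert triage: partition raw sensor events into
# analyst-facing alerts and suppressed noise. Rules are illustrative.

SECURITY_SENSORS = {"camera", "motion", "turnstile"}
SAFETY_SENSORS = {"smoke", "heat"}

def triage(events):
    """Split raw sensor events into prioritized alerts and suppressed noise."""
    alerts, suppressed = [], []
    for ev in events:
        if ev["sensor"] in SAFETY_SENSORS:
            alerts.append({**ev, "priority": "critical"})   # fire risk: always escalate
        elif ev["sensor"] in SECURITY_SENSORS and ev.get("after_hours"):
            alerts.append({**ev, "priority": "high"})       # possible intrusion
        else:
            suppressed.append(ev)                           # routine daytime activity
    return alerts, suppressed

events = [
    {"sensor": "motion", "zone": "lobby", "after_hours": True},
    {"sensor": "motion", "zone": "lobby", "after_hours": False},
    {"sensor": "smoke", "zone": "floor2", "after_hours": False},
]
alerts, noise = triage(events)
print(len(alerts), len(noise))  # 2 1
```

A production agent would apply learned models rather than fixed rules, but the effect is the same: the analyst sees two actionable alerts instead of every raw event.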

The AI agent itself can also be proactive and mitigate ongoing security risks. The AI can impersonate a human guard, using any language, tone of voice or even slang if required. By voicing details such as the intruder’s clothing or appearance, the agent creates the impression of an on-site security guard without actually engaging physically with the intruder. After announcing a security presence, the agent will tell the intruder to leave and threaten police intervention if they do not. The agent can also activate sirens and lights to trigger a flight response from the intruder. This is all managed without human intervention.

In some situations, companies need a security solution that does not rely on the building’s network, property owner or landlord. Sentry AI offers the Solar Guard solution for these exact situations. The Solar Guard is a self-contained mobile unit with a tall mast and several solar panels. Energy collected throughout the day is stored in batteries contained within the unit to power it through the night or in adverse weather conditions. At the top of the mast, the Solar Guard has lights, speakers, a cellular modem and dual lens cameras that give a 360-degree field of vision.

As vacancy rates in corporate buildings continue to climb, companies are searching for impactful, cost-effective ways to improve their security posture. AI-powered security protocols such as Solar Guard and Smart Guarding decrease the risk to personnel and cut through alert fatigue. By combining modern technological advancements with knowledgeable SOC analysts, companies gain ROI and protect their assets when personnel are not present.

To discover how Smart Guarding can elevate security in your vacant facilities, watch Sentry AI’s webinar, “Using AI to Protect Vacant Facilities.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Sentry AI, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

The Federal 100 Signals Optimization in Federal IT

The Federal 100 reflects more than individual achievement; it reveals how technology is fueling great things in Federal Government. Serving as a judge this cycle provided a front-row view of the work happening across agencies and the priorities shaping it.

Optimization had been a stated priority for years. Over the past cycle, it became visible in day-to-day decisions. Leaders were recognized for tightening how technology environments operate: setting clearer enterprise direction, reinforcing shared standards and embedding modernization into routine governance.

That shift showed up across security, acquisition, data strategy and workforce systems. Programs moved beyond isolated efforts and began operating with greater cohesion across components.

That pattern was especially visible in national security organizations.

Many of this year’s honorees came from Defense, DHS and Energy. When agencies responsible for the nation’s most demanding missions lead in enterprise alignment, platform standardization and structured governance, it signals that these practices are no longer experimental. They are operational. They are institutional. And they are delivering measurable mission impact.

Enterprise Leadership Drove Alignment

The leaders who stood out had enterprise reach. They worked across organizational boundaries and aligned components around shared priorities.

That leadership showed up in measurable ways: faster ATO approvals, stronger FedRAMP execution and authorization built into delivery rather than added at the end. Identity now anchors security strategy, reinforcing Zero Trust and allowing bureaus to operate on common foundations.

What this means for the vendor community:
Agencies are aligning at the enterprise level and across organizations. Solutions that integrate across components and scale cleanly will move more easily.

Optimization Became the Operating Model

Optimization is now part of how agencies operate. Leaders are simplifying architectures, cutting duplicated data and strengthening shared platforms so systems connect without unnecessary friction. Unnecessary data movement and storage are being designed out of the system rather than absorbed as the cost of doing business.

The impact was measurable:

  • Identity consolidation lowered integration complexity
  • Multi-cloud strategies improved resilience
  • Enterprise data fabrics reduced duplication
  • Shared platforms supported multiple bureaus

What this means for the vendor community:
Agencies need solutions built to integrate cleanly and minimize unnecessary data movement.

AI Moves from Pilot to Program

AI is no longer confined to experimentation. Programs that began as pilots are gaining executive ownership and defined accountability.

For the first time, Chief AI Officer roles were recognized, reflecting formal accountability for the deployment and governance of AI. That shows AI is maturing into the same category as cybersecurity and cloud: a capability that requires strategy, standards and sustained leadership.

What this means for the vendor community:
Fast proofs of concept with a plan to move into production are important. Solutions must support enterprise integration and sustained use.

The Rules of Government Buying Are Changing

Several of this year’s leaders are literally rewriting the rules of Government buying. The FAR rewrite reshapes the governing framework, and OneGov pushes a long-promised goal into practice: aligning agencies around shared buying strategies instead of fragmented procurements. Expanded use of OTAs and CSOs rounds out the shift by speeding access to new technology.

The combined effect is a more coordinated, more flexible acquisition environment.

What this means for the vendor community:
Vendors who understand enterprise procurement strategies, regulatory shifts and alternative purchasing pathways will be best positioned to support their customers effectively.

Workforce Modernization Is Delivering Results

Workforce systems are undergoing substantive modernization. Agencies are eliminating long-standing backlogs and delivering near real-time workforce data to leadership.

Modernization is extending into core business operations and strengthening how agencies hire, manage and support their people.

What this means for the vendor community:
Demand is strong for secure, scalable workforce platforms that integrate with enterprise systems and deliver timely insight.

Emerging Technologies Are Strengthening the Mission Edge

Advanced capabilities are being deployed with clear mission impact. Autonomous systems are extending operational reach. Operational technology security efforts are hardening critical infrastructure. Post-quantum planning is addressing future cryptographic risk. High-performance computing is accelerating analysis and modeling tied directly to national priorities.

These efforts reflect growing confidence in deploying advanced technology within demanding mission environments.

What this means for the vendor community:
Government is embracing new and emerging technologies. This shift creates significant opportunities for vendors prepared to innovate and adapt to changing procurement models.    

What This Signals for the Year Ahead

Federal IT is operating with greater urgency and focus, with speed and mission impact as top priorities.

Enterprise leadership coordinates large organizations. Optimization shapes architecture decisions. AI now has named accountability. Acquisition frameworks are being revised. Workforce and emerging technologies are delivering measurable outcomes.

The leaders recognized this year are shaping how Government will function over the next decade, not just how it will deploy the next tool. Congratulations to all the winners.

Securing AI Adoption in Government: From Mandates to Implementation

One of today’s top trends is artificial intelligence (AI), specifically how the Public Sector can adopt it while maintaining the security, governance and oversight essential for mission-critical operations. With AI jumping from number three to one on Federal Chief Information Officers’ (CIO) priority lists and 80% of CIOs under explicit cost savings mandates, the question is no longer whether to deploy AI but how to do so securely at scale.

The recent overhaul of the Federal Acquisition Regulation (FAR) marks the most significant rewrite in over 40 years, fundamentally shifting how Federal agencies operate and procure technology. As generative AI (GenAI) deployments move into mission-critical environments, agencies need practical frameworks that balance speed with verification.

Moving From Speed to Velocity

As the Public Sector enters the age of AI, backed by $4 trillion in Private Sector investment in data centers, agencies face a fundamental design challenge: building AI systems that adapt to human workflows rather than forcing humans to adapt to systems. This distinction matters most in Government and defense contexts where lives depend on maintaining human oversight for deliberate decisions.

The Department of War’s (DoW) Acquisition Transformation Strategy (ATS) offers a proven model of buying outcomes in increments. Instead of funding calendar time through traditional program structures, agencies should fund missions through portfolios that deliver outcomes in weeks or months. This approach structures procurement in modular increments that integrate with evolving architecture while funding capability and delivery, not timelines.

Velocity differs from speed in its directional precision. Agencies can accelerate procurement through fast-lane processes while maintaining governance through evidence gates that verify operational performance, user adoption, cyber risk posture and sustainment realities. This framework preserves ethical obligations while delivering measurable results.

Prerequisites for Secure AI Implementation

Before deploying AI tools in production environments, agencies need foundational elements in place:

[Embedded image: GitLab, Securing AI Adoption in Government blog, 2026]

Policy frameworks that define where AI can be a part of the process and establish clear boundaries for all personnel. Training and enablement programs ensure teams understand governance requirements and security policies. Several Federal agencies have already created AI centers of excellence to help establish standards and create processes around how they are implementing AI.

End-to-end visibility across the entire software delivery process enables agencies to track where AI agents operate and what actions they perform. Without comprehensive visibility, governance becomes theoretical rather than operational.

Contextual accuracy determines output quality: AI systems deliver accurate, usable results only when provided with the right context, making data quality and integration critical prerequisites.

Built-in guardrails must exist before AI implementation. Security scans on every code change and controls preventing critical vulnerabilities from merging into production branches become essential as agencies move into the agentic AI era.
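As a concrete illustration of "security scans on every code change," the sketch below shows a minimal GitLab CI configuration that includes GitLab's published SAST and secret-detection templates. Note that actually blocking critical vulnerabilities from merging is typically enforced separately, through scan result (merge request approval) policies in the project's security settings, not in this file alone.

```yaml
# .gitlab-ci.yml — minimal sketch: run security scans on every code change.
# The template paths below are GitLab's published CI templates; merge-blocking
# on critical findings is configured via scan result policies, not here.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml

stages:
  - test   # the included scan jobs attach to the "test" stage by default
```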

Practical AI Use Cases That Deliver Value

GitLab’s most recent DevSecOps survey reports that AI currently handles about 25% of the work in Public Sector organizations, with leadership targeting 50% automation. The most successful implementations focus on code generation, testing and documentation, areas where AI delivers immediate, measurable impacts.

Federal customers using GitLab’s AI capabilities report significant efficiency gains in code review processes. AI-powered first-pass reviews reduce time while maintaining quality standards. Test generation and legacy code modernization have proven particularly effective.

Compliance automation represents an emerging high-value use case. GitLab teams are developing compliance agents that access code repositories, Continuous Integration/Continuous Deployment (CI/CD) pipelines and security vulnerability data to automatically populate Security Technical Implementation Guide (STIG) checklists. Security team leaders review and adjust outputs as necessary, reducing administrative burden while allowing teams to focus on strengthening application security posture.
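To make the compliance-agent idea concrete, here is a minimal Python sketch of one step such an agent might perform: mapping scanner findings to STIG checklist statuses. Every name here (`Finding`, `STIG_RULES`, `build_checklist`, and the rule IDs) is a hypothetical illustration, not a GitLab API or an official STIG mapping; a human reviewer would still adjust the resulting statuses, as the source describes.

```python
"""Hypothetical compliance-agent step: turn scan findings into a draft
STIG checklist. Names and rule IDs are illustrative, not official."""
from dataclasses import dataclass


@dataclass
class Finding:
    scanner_id: str   # identifier emitted by a security scan
    severity: str     # e.g. "critical", "high"


# Assumed mapping from scanner rule IDs to STIG rule IDs (illustrative only).
STIG_RULES = {
    "sql_injection": "V-222604",
    "hardcoded_secret": "V-222642",
}


def build_checklist(findings):
    """Mark a mapped STIG rule 'Open' when a finding hits it, otherwise
    'NotAFinding'. Output is a draft for human review, not a final answer."""
    hits = {STIG_RULES[f.scanner_id]
            for f in findings if f.scanner_id in STIG_RULES}
    return {rule: ("Open" if rule in hits else "NotAFinding")
            for rule in STIG_RULES.values()}


checklist = build_checklist([Finding("sql_injection", "critical")])
```

The draft checklist would then be surfaced to security team leaders for review, matching the human-in-the-loop pattern the source describes.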

Prioritizing AI Governance Frameworks

With 35% of Public Sector professionals using unofficial AI tools at work, agencies need governance frameworks that address shadow IT risks without stifling innovation. A risk-based approach identifies high-impact systems within critical infrastructure and implements controls that prevent systemic failures.

Effective governance prioritizes innovation in AI adoption while maintaining public trust. Agencies must identify high-impact areas and understand system interdependencies; as more systems connect, understanding how one system impacts another becomes essential for appropriate segmentation and risk management.

Building on Secure Foundations

Agencies cannot build on a shaky foundation. Federal AI and cybersecurity strategies must align around building responsibility into the process from the start. This requires shifting from governing static systems to engineering systems that can evolve safely, integrating assurances, accountability and human judgment as foundational design constraints instead of downstream checks.

Before deploying advanced AI capabilities, agencies should strengthen foundational practices: standardizing workflows, implementing security by design and ensuring basic guardrails are in place. AI cannot compensate for weak foundations in the software development lifecycle. The path forward requires doubling down on fundamentals while strategically adopting AI where it delivers clear value.

To learn more about implementing secure AI solutions, watch GitLab’s full webinar, “Cyber in the AI Era: Building Foundations for Secure Adoption.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including GitLab, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.