VMware Private AI: Secure, Scalable AI Adoption for Healthcare

Demand for artificial intelligence (AI) is nearly universal, with approximately 98% of healthcare executives reporting a desire to implement or expand AI capabilities, yet most remain stalled at the starting line. The barrier is not a lack of ambition, but rather the complexity of execution. Fragmented platforms, unclear procurement pathways and the difficulty of integrating AI with sensitive patient data have made deployment feel out of reach for many care teams. Broadcom’s VMware Private AI, now natively embedded within VMware Cloud Foundation (VCF) 9, is designed to change that equation.

From Add-On to Foundation: The VCF 9 Integration

The most significant architectural shift in Broadcom’s AI strategy over the past year is the evolution of VMware Private AI from a standalone service into a core component of the platform. With VCF 9, organizations that already hold VCF licensing have immediate access to Private AI capabilities without separate procurement or added complexity.

This shift is especially meaningful for healthcare IT leaders tasked with balancing innovation and compliance in highly regulated environments. By embedding AI capabilities directly into the foundational infrastructure layer, VMware Private AI eliminates the “moving parts” that have historically made AI deployments costly and unpredictable. Healthcare organizations can now activate and govern AI workloads within an environment they already operate and trust.

Five Components Built for Production-Ready AI

VMware Private AI is organized around five functional pillars, each designed to address a specific stage of the AI lifecycle, from model governance to real-world deployment:

  • Model Store: A secure repository where models are curated, tested and governed before entering production, ensuring that only validated, policy-compliant models are used in clinical or administrative environments.
  • Service Infrastructure: Templatized deep learning virtual machines (VMs) that can be provisioned on demand, accelerating deployment timelines while maintaining standardization and security controls.
  • Model Runtime: The generative AI (GenAI) execution layer that handles active model inference, forming the operational core of the Private AI environment.
  • Model Insights and Action: Tools that support model interaction, response logic and fine-tuning, enabling teams to continuously refine AI performance using real operational data.
  • Vector Databases with Retrieval Augmented Generation (RAG): Instead of retraining base models with proprietary data, RAG enables AI systems to retrieve and reference internal knowledge in real time, delivering accurate, contextually relevant outputs without exposing sensitive data externally.

Keeping Healthcare Data Where It Belongs

Data sovereignty remains a non-negotiable priority in healthcare. Patient records, clinical notes and operational data are governed by strict regulatory requirements, and any AI solution that routes this information through public cloud services or third-party providers introduces significant compliance risk.

VMware Private AI addresses this directly through its RAG-based architecture. By connecting AI models to internal data sources—including SharePoint repositories, local file systems and internal databases—and processing information within the organization’s own infrastructure, the solution ensures that sensitive data never leaves the controlled environment. Documents are segmented into discrete chunks that the model can reference contextually, producing outputs grounded in the organization’s actual knowledge base rather than generic training data.
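The chunk-and-retrieve flow described above can be sketched in a few lines. This toy example stands in for a production setup: it uses a simple bag-of-words similarity in place of a real embedding model and vector database, and the sample text, chunk size and function names are all illustrative rather than part of VMware Private AI.

```python
import math
from collections import Counter

def chunk(text, size=8):
    """Segment a document into fixed-size word chunks (toy segmentation)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Index an internal document, then ground the model's prompt in retrieved context
# instead of retraining the model on the data.
docs = chunk("Discharge protocol: patients recovering from cardiac surgery "
             "should schedule a follow-up visit within seven days ...")
context = retrieve("cardiac surgery follow-up", docs, k=1)
prompt = f"Answer using only this context:\n{context[0]}\nQuestion: ..."
```

In a production RAG deployment, `embed` would call an embedding model and `retrieve` would query a vector database, but the control flow—segment, index, retrieve, ground the prompt—stays the same, and the data never leaves the environment.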

Additionally, new observability tools provide administrators with real-time visibility into model health, capacity utilization and Application Programming Interface (API) access patterns, supporting both operational continuity and security monitoring.

Healthcare Use Cases: From Clinic to Back Office

VMware Private AI supports a broad range of healthcare applications across four primary domains:

  • Clinical Decision Support: AI-assisted tools that help clinicians navigate complex case data support precision medicine and population health initiatives.
  • Administrative Automation: Automated documentation, clinical annotation and digital chat assistance for care teams reduce clerical burden, staff burnout and documentation backlogs.
  • Patient Engagement: AI-powered digital assistants that guide patients through post-discharge treatment plans improve adherence and reduce readmission risk.
  • Operational Efficiency: Predictive maintenance for medical equipment and AI-driven resource allocation optimize capacity management for healthcare systems.

The broader vision is a shift toward ambient intelligence: AI that monitors, learns and assists in real time without requiring manual prompting, freeing care teams to focus on patients rather than administrative systems.

A Practical Framework for Getting Started

Not all AI use cases offer the same balance of value and implementation complexity. Broadcom recommends a prioritization framework that evaluates each potential application against two key dimensions:

  • The value delivered to patients or the organization
  • The complexity required for deployment

By starting with high-value, low-complexity use cases, such as administrative automation or patient communication, organizations can build momentum, demonstrate Return on Investment (ROI) and develop internal expertise before advancing to more complex clinical applications.

This phased approach reflects a broader evolution in healthcare AI. It is no longer confined to research environments; it is now an operational capability. Organizations that approach AI with deliberate governance, clear prioritization and secure foundational infrastructure will be best positioned to realize its full potential.

Explore how VMware’s Private AI capabilities can support your organization’s clinical and operational goals.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including VMware, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

The Importance of Creativity in Government and How Creative Software Improves Digital Workflows

In today’s rapidly changing world, Government agencies are under immense pressure to deliver efficient, transparent and citizen-focused services. They often work with limited budgets and follow strict rules. Although creativity is commonly associated with the Private Sector, it has become increasingly important in the Government space. Creative thinking allows employees to develop better solutions for complex challenges, such as emergency response and policy implementation. Adobe’s creative software plays a valuable role in this shift by helping agencies improve their digital workflows, reduce delays and operate more effectively while meeting high standards for security and compliance.

The Value of Creativity in the Public Sector

Creativity in the Public Sector goes beyond new ideas. It helps agencies address important issues like public health, infrastructure improvements and fair access to services. By encouraging fresh thinking, Government teams can create clearer communications for citizens, present complex data in simple ways and design programs that truly meet community needs. When creativity is supported, agencies tend to achieve better results, build stronger public trust and adapt more easily to change. Without creative approaches, traditional processes can limit progress and make it harder to serve the public effectively.

Enhancing Digital Workflows with Creative Software

One area where creativity makes a real difference is in digital workflows. Many Government operations still depend on manual, paper-based steps that take considerable time and effort. Creative software tools help transform these into faster, more collaborative digital processes. Applications for graphic design, video production, document creation and data visualization enable teams to produce professional materials more efficiently. This includes public awareness campaigns, reports and e-learning training resources. Improved system integration also makes it easier for departments to share information and collaborate effectively. 

Bottlenecks remain a common challenge in Government. Excessive paperwork, lengthy approval processes and outdated systems often cause delays, increase costs and reduce productivity. Creative software and automation offer a practical way to address these issues. By simplifying routine tasks, agencies can save significant time and resources. Features such as electronic signatures, document templates and real-time collaboration help speed up processes that could take up to twice as long using traditional methods. 

Real-World Success Stories

Several Government agencies have seen clear benefits from creative software. Adobe Creative Cloud and Adobe Document Cloud, featuring Adobe Acrobat and Adobe Acrobat Sign, further help by automating document-related tasks. The City of Denver used Adobe Creative Cloud to strengthen its online services and public outreach campaigns (City of Denver Case Study, n.d.). The Federal Aviation Administration (FAA) integrated these tools to modernize its grants management process, reducing paperwork and allowing funding for major infrastructure projects to proceed at a faster pace (FAA Case Study, n.d.). The United States Marine Corps achieved a 38 percent reduction in eLearning production costs by updating its training workflows with Adobe solutions (USMC Case Study, n.d.). The U.S. Census Bureau also realized substantial savings, between $1.4 billion and $1.9 billion, by digitizing forms and outreach efforts (US Census Bureau Case Study, n.d.). Importantly, Adobe’s tools are designed to meet strict Federal security, accessibility and compliance requirements.

A Step Toward More Effective Government

By embracing creativity through secure and accessible creative software tools, Government agencies can reduce operational bottlenecks and deliver better service to the public, supporting greater efficiency, innovation and accountability.

Check out our on-demand webinar series for more information about how Adobe solutions empower teams to streamline workflows, harness AI-driven tools and elevate creative output.

Sources

“City and County of Denver Case Study.” https://business.adobe.com/customer-success-stories/city-county-denver-case-study.html

“Automating digital documents to improve government efficiency and effectiveness.” May 1, 2024. https://blog.adobe.com/en/publish/2024/05/01/automating-digital-documents-improve-government-efficiency-effectiveness

“USMC Extends Elite Training to the Digital Classroom.” https://business.adobe.com/customer-success-stories/usmc-case-study.html

Adobe Customer Success Story – “U.S. Census Bureau.” The savings range reflects estimates from Government Accountability Office (GAO) reporting on the 2020 Census digital innovations. https://business.adobe.com/customer-success-stories/us-census-bureau-case-study.html

Adobe Customer Use Cases. “Government Solutions: Efficient, Impactful, Modernized.”


Minimizing the Attack Surface: The Onion Model vs. Core-First Protection

Historical Context of Layered Security

The onion model emerged during the growth of enterprise IT when organizations responded to new threats by adding new defensive layers. Each incident or compliance requirement led to another perimeter or middleware control. While effective in the short term, this layered approach produced patchwork systems with overlapping functionality, inconsistent policies and gaps that attackers could exploit.

The Onion Model and Its Vulnerabilities

The traditional “onion model” of cybersecurity layers defenses concentrically around a central database. Each layer is intended to provide a barrier against intrusion, but the cumulative effect is often an expanded and more complex attack surface. From the inside out, the layers typically include:

  1. Database (Data) – the core asset containing customer records, financial transactions, intellectual property, logs and other sensitive information.
  2. Schema & Validation – enforcement of data formats, constraints and integrity checks designed to prevent malformed or malicious inputs from reaching the core.
  3. Application Logic & APIs – business rules and access methods that determine how applications interact with the database, often exposing numerous interfaces.
  4. Access Controls & Identity (IAM) – authentication and authorization services (passwords, tokens, SSO, MFA) that regulate who can reach protected resources.
  5. Encryption Services – cryptographic mechanisms for protecting data at rest and in transit, including key management, TLS/SSL and disk-level encryption.
  6. Firewalls / Perimeter Security – network boundary defenses, intrusion detection systems, packet filtering and monitoring services designed to repel external threats.

Why the Attack Surface Expands

While each layer aims to protect the core, collectively they create new opportunities for exploitation:

  • Integration Points – every interface or protocol boundary becomes a seam that can be misconfigured or attacked.
  • Configuration Complexity – with more interdependent systems, administrators must manage extensive policy sets and security rules, increasing the likelihood of mistakes.
  • Expanded Targets – each layer (firewalls, IAM, middleware, encryption appliances) presents its own vulnerabilities, requiring constant patching and monitoring.
  • Dependency Chains – the failure of a single outer system can cascade inward, leaving the core exposed despite the presence of other controls.

In practice, adding more layers often enlarges the attack surface instead of shrinking it. Attackers exploit this complexity, probing for the weakest link among numerous entry points.

Operational Cost of a Typical Attack Surface

Beyond theoretical weaknesses, a large attack surface carries real operational costs. Tool sprawl burdens administrators with dozens of systems to configure and maintain. Overlapping monitoring layers generate alert fatigue, obscuring genuine threats. Security budgets become diluted, funding maintenance of redundant defenses rather than reinforcing the integrity of the data itself.

Modern Threat Landscape

Today’s adversaries exploit weaknesses that layered defenses cannot easily address. Lateral movement bypasses layers once attackers are inside a network. Supply chain compromises enter through trusted applications, neutralizing perimeter filters. Zero-day exploits render outer walls ineffective overnight. Core-first security, with protection embedded at the data level, ensures confidentiality and integrity even in the face of these modern tactics.

Architectural Simplicity as Security

Simpler architectures are inherently more secure. Each removed integration point reduces the trusted computing base and the probability of misconfiguration. By embedding protections directly into the data layer, Walacor collapses overlapping controls, producing a system that is easier to audit, verify and trust. This simplicity is itself a security multiplier.

The Core-First Alternative

A core-first security model inverts the paradigm by embedding protections at the data layer itself rather than relying primarily on external systems:

  • Record-Level Encryption and Validation – each data element carries its own cryptographic safeguards, ensuring confidentiality and authenticity.
  • Immutable Integrity Proofs – cryptographic hashes and proofs guarantee that tampering is detectable, independent of outer defenses.
  • Minimized Trust Dependencies – fewer external layers are required for assurance, reducing the number of systems that must be defended and configured.
  • Resilience Under Breach – even if outer controls fail, the data itself remains cryptographically protected and resistant.

This approach shrinks the attack surface by concentrating security at the point of greatest value: the data. Instead of expanding outward with additional complexity, it reduces potential vectors for compromise.
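As a rough illustration of the record-level idea (a generic sketch, not Walacor's actual implementation), each record can carry its own integrity proof that remains verifiable even when every outer defensive layer has been bypassed. The key, field names and record contents below are all hypothetical.

```python
import hashlib
import hmac
import json

# Illustrative per-deployment secret; a real system would use managed keys.
KEY = b"per-deployment secret (illustrative only)"

def seal(record: dict) -> dict:
    """Attach a per-record integrity proof: an HMAC over canonical JSON."""
    payload = json.dumps(record, sort_keys=True).encode()
    proof = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "proof": proof}

def verify(sealed: dict) -> bool:
    """Detect tampering without relying on any outer defensive layer."""
    payload = json.dumps(sealed["record"], sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["proof"])

row = seal({"id": 17, "amount": 250})
assert verify(row)                    # intact record passes verification
row["record"]["amount"] = 999999      # simulated breach past the outer layers
assert not verify(row)                # tampering is detectable at the core
```

The point of the sketch is the trust boundary: verification depends only on the record and its proof, so a compromised firewall, IAM layer or application tier cannot silently alter data without detection.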

Walacor and Core-First Protection

Walacor implements the core-first philosophy by embedding immutability, cryptographic enforcement and schema validation directly into the data layer. Rather than building outward layers that expand the attack surface, Walacor collapses unnecessary perimeter complexity and anchors protection where it cannot be bypassed: the data itself.

  • Data-Level Cryptography – each record is encrypted and bound to proofs of authenticity, eliminating reliance on external encryption appliances.
  • Immutable Storage – records are tamper-evident at the core, reducing the need for overlapping monitoring systems.
  • Integrated Validation – schema and policy checks occur at write-time, blocking invalid or hostile data without middleware add-ons.
  • Shrinking the Attack Surface – because Walacor renders many outer layers redundant, there are fewer interfaces to defend, fewer seams to misconfigure and fewer targets for attackers.

Walacor demonstrates that the most effective way to minimize the attack surface is to concentrate defenses in the core, ensuring data integrity and confidentiality regardless of the state of external systems.

Agents, AI and the Attack Surface

The emergence of intelligent agents and AI-driven systems adds a new dimension to the attack surface discussion. Agents interact with data across multiple contexts—querying, transforming and making autonomous decisions. In a traditional layered model, each of these interactions multiplies the integration points and potential vulnerabilities. Malicious prompts, poisoned training data or compromised connectors can all bypass outer defenses to reach sensitive information.

A core-first model directly addresses this risk. By cryptographically securing and validating data at the record level, Walacor ensures that even AI agents cannot be tricked into handling falsified or tampered records. Every data element carries its own assurance, creating a trustworthy substrate for automated reasoning and machine learning pipelines.

In this way, AI becomes a consumer of verifiable data rather than a potential vector for hidden compromise, aligning intelligent agents with the same guarantees that protect human operators.

Forward-Looking Implications

A core-first approach lays the groundwork for enduring benefits. Immutable, verifiable data strengthens sovereignty in federated and multicloud environments. Compliance becomes easier, as audit trails and integrity proofs are inherent to the system rather than bolted on. This architecture future-proofs sensitive systems, ensuring resilience against evolving threats.

Reinforcing the Core-First Premise

The onion model reflects a reactionary philosophy that often results in excessive complexity and a sprawling attack surface. A core-first strategy simplifies the architecture by embedding protection directly into the data layer, eliminating unnecessary exposure and ensuring that sensitive information remains secure even in hostile conditions.

To learn more about a core-first approach to cybersecurity, contact Walacor.


The Importance of Securing the Software Supply Chain

Moving Upstream: The Evolution of Software Supply Chain Attacks

The software supply chain consists of multiple components, touching every piece of code from the moment of conception to the moment of deployment into a Government application. This includes a variety of software, including third-party libraries, open source components, build tools and software architecture, making it a valuable target to hackers.

The software supply chain threat landscape has evolved from a series of disjointed yet targeted attacks to a broader upstream poisoning strategy. Historically, malicious actors targeted specific agencies; today, they have shifted to targeting upstream public software libraries and repositories. These open source libraries are used by thousands of Government agencies and can cause untold damage in a single attack. In the Public Sector, a compromised supply chain does not just mean a data leak—it can constitute a threat to national security.

Several real-world cyberattacks exemplify this pattern change, including the 2025 Shai-Hulud software supply chain attack and the 2025 GlassWorm Integrated Development Environment (IDE) extension cyberattack. Malicious actors contribute seemingly helpful code to public open source projects, code that contains hidden backdoors or vulnerabilities and grants access to systems run by Government agencies.

Some hackers target the developer toolchain and IDE more broadly, as shown in the GlassWorm IDE extension cyberattack. GlassWorm was a self-propagating threat initially injected through an IDE extension downloaded from a popular extension marketplace. Other malicious actors have targeted artificial intelligence (AI)-powered supply chains, taking advantage of the speed and power of AI to propagate sophisticated, multi-pronged threat campaigns against the developer ecosystem.

Setting Up for Success: Security Built Into the Process

In February 2022, the US Government published the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF) to combat threats to the software supply chain. This publication divides its guidance into four main practice groups:

  • Preparing the organization
  • Protecting the software
  • Producing well-secured software
  • Responding to vulnerabilities

These groups shift the model from fragmented security tools stitched together toward a unified process in which security is baked directly into the developer’s workflow. For agencies, this framework provides a common language for developing a cohesive, secure and regulated software supply chain.

One of the ways developers can secure their supply chains is through Software Bills of Materials (SBOMs). SBOMs are essentially recipes for software; they outline all of the components inside a piece of software. They became required through Executive Order (EO) 14028, but creating them manually at the speed of modern DevSecOps is nearly impossible. Furthermore, as the Government manages risk and prepares for quantum-safe cryptography, the ability to support industry-standard and Federal compliance requirements for the Software Package Data Exchange (SPDX) and CycloneDX SBOM formats, which include Vulnerability Exploitability Exchange (VEX) and cryptographic information, is mandatory for mission success.

The automation of SBOMs affects multiple components of the software supply chain:

  • Real-Time Visibility: Agencies have insight into all aspects of the software supply chain, from the deployment of a new line of code to the introduction of common vulnerabilities and exposures (CVEs) into their inventory.
  • Reach of Vulnerability: DevSecOps teams can examine a vulnerable part of a library and determine whether it is actually executed, the path to remediation and how agencies should prioritize remediation efforts.
  • Continuous Compliance: Every automated SBOM ensures that every release is compliant with Federal standards without requiring manual audit every time.
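To make the "recipe" idea concrete, here is a minimal sketch of what an automated build step might emit, using a small illustrative subset of the CycloneDX JSON format; the component names, versions and helper function are made up for the example.

```python
import json

def make_sbom(components):
    """Emit a minimal CycloneDX-style JSON SBOM (illustrative subset of the spec)."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                # Package URL (purl) gives each component a traceable identity.
                "purl": f"pkg:pypi/{name}@{version}",
            }
            for name, version in components
        ],
    }

# A CI/CD step would collect the resolved dependency list and emit one SBOM
# per release, keeping the inventory current without manual audits.
sbom = make_sbom([("requests", "2.32.0"), ("urllib3", "2.2.1")])
print(json.dumps(sbom, indent=2))
```

A full SBOM would also carry metadata such as suppliers, hashes, licenses and VEX statements, but even this skeleton shows why automation matters: the document must be regenerated on every build to stay truthful.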

Beyond SBOMs, Federal agencies can focus on implementing other safeguards. Developing a curation process to vet open source libraries and components before they are ever downloaded is a critical first step. Agencies should examine potential application and service exposures, such as leaked credentials or backdoors in the software architecture. Additionally, securing the code at the binary level ensures that what was tested and developed is exactly what is run in production.

The JFrog Software Supply Chain Platform: All in One

From inception of code to runtime during mission-critical operations, having a single platform that provides security and visibility across the Software Development Life Cycle (SDLC) is crucial. The JFrog Platform ensures those factors by focusing on universal binary management. It supports over 30 package types, including Docker, Maven and Python. JFrog Artifactory, JFrog’s universal artifact repository manager, manages these packages from one place, providing a single source of truth for developers who support mission-critical applications.

JFrog does not just look at the top layer for vulnerabilities and exposures; it scans deep into every dependency and sub-dependency within the binary to protect developer tools and infrastructure. Signed evidence at every gate creates end-to-end traceability from the developer’s IDE to edge deployment. The JFrog Platform is compatible with multiple network environments, from on-premises to hybrid to a flexible multicloud strategy.

As the Government modernizes its approach to digital transformation, agencies need industry partners that provide visibility into the next frontier. Security starts and extends across the software supply chain, from the inception of the code at the binary level to deployment of the application. The JFrog Platform delivers unprecedented trust assurance and risk mitigation through its signature binary-level security and positions its Public Sector customers and partners at the bleeding edge of innovation.

Explore JFrog’s DevSecOps solutions and how JFrog can protect Public Sector software supply chains from code to production.


Built for This Moment (and All Those to Come): Introducing Symantec CBX

Finally, a security platform for smaller teams fighting larger threats

  • Disconnected, vendor-dependent security stacks leave smaller teams blind to threats and overwhelmed by noise they’re not equipped to manage.
  • Symantec CBX unifies Symantec and Carbon Black capabilities into a cloud-based XDR platform that delivers native telemetry correlation, AI-driven insights and enterprise-grade protections without enterprise-level complexity.
  • Built for resource-constrained teams, Symantec CBX reduces costs, cuts alert fatigue, accelerates response and gives organizations a long-overdue advantage against increasingly sophisticated, AI-powered attacks.
  • See Symantec CBX in action in Booth N-5345 at RSAC 2026 Conference.

It’s time for the cybersecurity industry to face an uncomfortable truth: The tools meant to make organizations safer are often the very systems slowing them down, and sometimes leaving them vulnerable.

The problem is that security stacks are built over time from disparate tools that prevent analysts from seeing the full operating environment. Smaller security teams have relied on vendors to solve the challenge of integrating various products—and too often, vendors have fallen short, making it too difficult to gather and correlate the telemetry needed to understand what’s really happening across endpoints, networks and data.

While large enterprises have the resources to manage and integrate complex security stacks, left behind are the organizations that make up the largest swath of the cybersecurity customer market: smaller, less-resourced security teams that increasingly face AI-powered, enterprise-grade threats but lack the budgets and in-house expertise to implement enterprise-grade defenses. These sophisticated attacks can decimate smaller organizations, turning them into casualties of an escalating cyber war fueled by nefarious AI agents that never miss a day of work.

These security teams don’t just need better tools. They need an advantage. Now they have one.

XDR from the pioneer of EDR

Today, we’re introducing Symantec CBX, a groundbreaking new extended detection and response (XDR) solution that combines all the best capabilities of Symantec and Carbon Black into a unified, cloud-based platform. Symantec CBX is the first new product to integrate features from these two iconic brands. But more importantly, it’s the first fully featured XDR platform built expressly for smaller teams that want to evolve their security protections but lack the expertise and resources needed to configure and optimize traditional enterprise-class XDR solutions.

In Symantec CBX, we’ve distilled decades of innovation from Symantec and Carbon Black into a platform that solves the problem of correlating and making sense of telemetry across endpoints, networks and data. Typically, the various tools within security stacks attempt this via API integrations. But those fragmented couplings are often incomplete and leave dangerous gaps in visibility and actionable insight. Security analysts may understand that something is happening—they just don’t always know what it is or what to do about it.

The problem grows worse as attack surfaces expand. Organizations send more and more data to costly SIEM platforms, leading to a waterfall of challenges, from endless false positives that waste analyst time to murky outcomes that frustrate corporate management looking for evidence that security programs are working. These are costs smaller organizations can’t afford.

Symantec CBX solves this by combining Symantec’s robust prevention, data security and network security features with Carbon Black’s pioneering EDR technology in a single cloud platform, delivering deep visibility, exceptional threat detection and rapid response across attack surfaces. Spared from log-centric ingestion, security teams detect incidents more precisely and can act more confidently.

Native correlation is just the beginning

With Symantec CBX, native telemetry correlation sits at the center of a vast array of advanced capabilities that, until today, were available only from multiple point solutions. In CBX, we have integrated breakthrough features from Symantec and Carbon Black that make teams smarter and more efficient. Here’s what security teams can look forward to:

AI that makes life easier for humans at the helm. We’ve strategically deployed AI to deliver meaningful improvements to security workflows, resulting in capabilities that simply aren’t available anywhere else. Take Carbon Black Threat Tracer, which allows any analyst to see all adversary activity in a single pane: even junior analysts can understand immediately where attackers came in, how they executed their attack and what data they accessed across endpoint, network, email and cloud environments. The CBX platform also includes Symantec Adaptive Protection, which uses AI to stop living off the land (LOTL) attacks before they do damage. And Symantec’s Incident Prediction, the groundbreaking feature we introduced last year, predicts an attacker’s next four to five moves so teams can stop threat actors from moving laterally to steal data or shut down systems.

More complete insights for faster remediation. Incident Summaries, another AI-powered feature, gathers comprehensive data about incidents and presents it in well-written, intuitive summaries with remediation guidance so any analyst can engage mitigation when and where it makes sense.

Enterprise-grade network and data protections. Drawing from the best of Symantec Secure Web Gateway (SWG) and Symantec DLP solutions, this new XDR platform defends the network and data domains by stopping malicious traffic at the network edge, while packaging data security essentials from our acclaimed DLP offerings to ensure that sensitive data stays where it belongs. Via the integrated Symantec Cloud SWG Express, this new platform even supports post-quantum cryptography protocols, shielding organizations from increasingly common “harvest now, decrypt later” attacks and relieving concerns that attackers may someday unlock encrypted data.

Meaningful outcomes and rapid time to value. Security managers are expected to continuously improve their team’s performance, but that’s not easy when disjointed solutions create needless friction and confusion, and multiple dashboards steal time from an already busy day. We built Symantec CBX with the features and unified management console that enable the outcomes security teams need most: driving down SIEM and operational costs, rescuing analysts from alert fatigue, speeding time to resolution, meeting governance requirements and demonstrating progress by improving metrics.

Out-of-the-box policy configurations make CBX easy to implement and deliver immediate value.

The Goldilocks platform for the heart of the market

Symantec CBX is aimed squarely at the heart of the cybersecurity market, empowering and enabling security teams of virtually any size with a platform that puts them first. No other XDR solution is built so specifically for organizations laboring under tight budgets, too few resources, a persistent lack of senior expertise, chronic alert fatigue and the ever-more-daunting threat of AI-powered attacks.

Symantec CBX is the XDR platform for this moment and this market. As the first new solution from Broadcom to integrate capabilities from both Symantec and Carbon Black, CBX is the realization of our strategy to deliver on the “better together” pledge we made when these two legendary brands first came together under Broadcom’s Enterprise Security Group. And it’s the ideal solution for our global network of Catalyst Partners, with their deep regional expertise and close customer relationships, as they help organizations struggling to keep up in an environment of constant change and unrelenting challenges.

Overwhelmed security teams need an advantage, and now they have one.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Broadcom, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

This post originally appeared on Security.com, and is re-published with permission.

4 ways AI agents change the way we approach Identity Security

As if gaining visibility into all human and non-human identities wasn’t a big enough task for security teams, adding AI agents into the mix takes identity complexity to a new level. Organizations of all sizes are tackling this new reality, where it feels premature to confidently say they know about all the AI agents running in their environment. 

That uncertainty is not a knowledge gap. It is an attack surface. 

Gartner’s new report on IAM for AI agents names the real nugget of truth: “Purpose/intent cannot be discovered after the fact by monitoring and observability capabilities.”

That is not just analyst language. It is a fundamental shift in how we need to think about governing agents. You cannot govern agents by watching them after the fact. You must know who they are, what they are for, and who is accountable before they run. 

The numbers that should change your priorities

Gartner’s data reinforces the urgency. By 2029, over 50% of successful attacks against AI agents will exploit access control weaknesses. By 2028, 90% of organizations that share credentials between humans and agents will need to make significant investments to undo that design.

Those numbers are consequences, not causes. The root cause is structural: IAM maturity for agents is uneven. The Gartner lifecycle maturity assessment makes this visible. Authentication and monitoring capabilities are relatively mature. Identity registration and authorization are not. That gap is the story. 

Weak identity registration means the agent was never properly onboarded as an identity. No defined owner. No declared purpose. No documented scope. It has credentials and it is running, but nobody can tell you who built it, what it is supposed to do, or what happens when it breaks. When registration is weak, ownership is unclear. And when ownership is unclear, accountability does not exist. 

Weak authorization means the agent has more access than it needs. It can reach databases, APIs, and workflows that have nothing to do with its intended function. Nobody scoped it down because nobody defined what “down” looks like. When authorization is weak, privilege is excessive.

Now combine excessive privilege with autonomy. An agent that can reason, chain tools, and act on its own, with more access than it should have, and no one clearly accountable for what it does. That is the exploitable attack surface. That is the chain revealed in Gartner’s data.

You cannot protect what you cannot see

Before you can govern agents, you need to find them. All of them. Not just the ones your platform team sanctioned. The ones that developers spun up to solve an issue. The ones contractors built. The ones that exist because someone needed to “just get this working.” 

We hear this consistently from security teams. As one InfoSec manager at a professional services firm put it: “We do not find out about it until someone goes and does an actual audit of the system.” 

Gartner’s assessment confirms it: identity registration is one of the least mature IAM capabilities for AI agents. Most organizations cannot answer the basics: What is this agent supposed to do? Who owns it? What happens when it breaks? 

Discovery is not a checkbox. It is the foundation. Without it, every policy you write is based on assumptions, and assumptions do not survive first contact with autonomous agents operating at machine speed.

The identity registration gap

Most organizations are trying to govern agents with the wrong tools. They are monitoring. They are logging. But monitoring tells you what happened. Identity registration tells you what should happen. Authorization enforces the boundary between them. 

If your governance model depends on catching problems after they occur, you are always going to be behind. 

This is where many organizations reach for familiar tools. IGA platforms can help with registration and lifecycle management. IAM solutions like Okta or Entra ID can register agent identities. These are necessary steps. But they stop there. They can tell you an agent exists and who requested it. They cannot enforce anything at the moment that agent acts. 

That is the gap: governance on paper versus enforcement in production. 

Agents are identities, but not like any you have managed before

The way I read Gartner’s recommendations, there is a unifying thread: treat AI agents like you would treat any identity in your organization. They authenticate. They access resources. They act on behalf of someone. That is not a tool. That is an identity. 

But agents are more complex than traditional identities. They are what we call composite identities. They combine the blast radius of service accounts with the unpredictability of human decision-making at machine speed.

Four reasons that make them different: 

  • They act autonomously, unlike service accounts that execute predefined operations.
  • They may inherit human delegation, creating privilege escalation risk.
  • They may chain multiple machine identities in a single task.
  • They may operate across trust boundaries your IAM system was not designed to handle.
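The four properties above can be made concrete as a data model. The sketch below is illustrative only: `CompositeIdentity` and all of its fields are hypothetical names, not a real product schema. It shows the minimum a registration record needs to capture — an accountable human owner, a declared purpose, delegated scopes and the chain of machine identities the agent may invoke.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CompositeIdentity:
    """Hypothetical model of an AI agent as a composite identity."""
    agent_id: str
    human_owner: str                      # accountable person, set at registration
    declared_purpose: str                 # what the agent is for
    delegated_scopes: frozenset = field(default_factory=frozenset)
    chained_identities: tuple = ()        # machine identities it may invoke

    def within_scope(self, requested_scope: str) -> bool:
        # An agent may never exceed what its human owner delegated.
        return requested_scope in self.delegated_scopes

agent = CompositeIdentity(
    agent_id="invoice-bot-01",
    human_owner="jane.doe@example.com",
    declared_purpose="summarize invoices",
    delegated_scopes=frozenset({"erp:read"}),
    chained_identities=("svc-erp-reader",),
)

print(agent.within_scope("erp:read"))    # delegated, so permitted
print(agent.within_scope("erp:write"))   # never delegated, so refused
```

With a record like this, the questions weak registration leaves unanswered — who owns it, what it is for, what it may touch — have machine-checkable answers.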

Think about how you onboard an employee. You do not give them admin access on day one. You define their role, their manager, their scope. You review their access as responsibilities change. Agents need that same lifecycle. But right now, most organizations are skipping straight to “give them credentials and hope for the best.” 

What runtime enforcement actually looks like

Gartner calls out the authorization gap. But what does closing that gap look like in practice? 

Even modern IAM systems, including conditional access and continuous evaluation, were designed primarily to evaluate who is signing in and what that identity is generally allowed to do. Agents introduce a different problem. They do not just sign in. They execute. They invoke tools dynamically. They operate across multiple identity contexts within a single task. 

Traditional conditional access evaluates who is signing in and under what conditions. Agent governance must also evaluate what is being executed, at the moment of execution. 

Here is what that looks like: an agent is about to call a tool, read from a database, trigger an API, or execute a workflow. Before that happens, there is a decision point. Runtime enforcement evaluates the composite identity: the human owner, the agent itself, the tool credentials, and the defined purpose, all at execution time. Is this agent authenticated? Does it have permission for this specific action? Is this behavior consistent with its intended function? 

That is runtime enforcement. Not configuration-time policies that assume the agent will behave as designed. Decisions at execution time, every time.
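The decision point described above can be sketched as a small policy-evaluation function that runs before every tool call. Everything here is a hypothetical illustration — `Decision`, `evaluate` and the policy fields are invented for the sketch, not any vendor's API — but it captures the shape of an execution-time check: authenticate, verify scope, and escalate to a human when the action is sensitive.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    STEP_UP = "step_up"   # pause and require human approval before executing

def evaluate(agent: dict, action: dict, policy: dict) -> Decision:
    """Evaluate a single tool call at the moment of execution."""
    if not agent.get("authenticated"):
        return Decision.DENY
    if action["tool"] not in policy["allowed_tools"]:
        return Decision.DENY                      # outside declared scope
    if action.get("touches_production_data") and policy["require_human_approval"]:
        return Decision.STEP_UP                   # escalate to the human owner
    return Decision.ALLOW

policy = {"allowed_tools": {"crm.read", "report.generate"},
          "require_human_approval": True}
agent = {"id": "agent-7", "owner": "ops@example.com", "authenticated": True}

print(evaluate(agent, {"tool": "crm.read"}, policy))   # → Decision.ALLOW
print(evaluate(agent, {"tool": "db.write"}, policy))   # → Decision.DENY
print(evaluate(agent, {"tool": "report.generate",
                       "touches_production_data": True}, policy))  # → Decision.STEP_UP
```

The point of the sketch is the timing: the check happens inline, per action, rather than once at provisioning.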

What Silverfort does differently

If the failure pattern is identity immaturity, then the control point must also be identity. Most AI agent security approaches start at the model or application layer. We start at the identity layer. Because if identity is uncontrolled, everything above is fragile. 

Human accountability by design

Every AI agent is explicitly tied to a real human owner in policy. Not informally. Not in documentation. In enforcement logic.

Every action can be traced back to a real chain of accountability: which human owns this agent, what identity the agent is operating under, and what credentials it uses to access resources. That is what we mean by composite identity. And it is what makes enforcement possible before monitoring even begins.

Runtime enforcement at the identity layer

Silverfort enforces at the identity decision point at runtime. For MCP-connected agents, that means sitting in line between the agent and the MCP server. For platform-native agents, enforcement is delivered through native integration, directly within the platform. 

Before a tool call executes, we evaluate identity, context, delegation, and policy in real time. If the action exceeds scope, it does not execute. This is not configuration-time IAM. This is execution-time identity enforcement. That distinction matters. 

Least privilege that survives autonomy

Static least privilege assumes predictable behavior. Agents break that assumption. They reason. They chain tools. They drift from what they were originally authorized to do. Least privilege must be validated at runtime, not just set at provisioning. 

That means if an agent tries to access a resource outside its declared purpose, it gets blocked. If delegated privileges start expanding beyond what was originally scoped, they are contained. This is the same enforcement model we apply to humans and service accounts, now extended to AI agents.

One Identity Security Platform

AI Agent Security is not a standalone product. Agents sit at the intersection of human identities, non-human identities, service accounts, cloud resources, SaaS applications, and protocol layers like MCP. If those domains are secured separately, agents will exploit the seams. 

Silverfort unifies this. One policy framework. One observability layer. One enforcement architecture. Across humans, machines, and AI. That is the architectural difference.

Enabling AI innovation without slowing it down

Security leaders are not trying to stop AI adoption. They are trying to make sure it does not outrun their ability to govern it. The organizations moving fastest with AI agents are the ones that figured out early: the right security model is a speed advantage, not a drag. 

Cars have brakes so you can drive fast. The same principle applies here. 

But, the brakes only work if they’re connected to the same system. Today, most organizations secure human identities in one tool, service accounts in another, and AI agents (if at all) in a third. If those domains are secured separately, agents will exploit the seams. 

That’s the reason teams need a unified Identity Security Platform:

  • One policy framework means a CISO can define “no agent accesses production data without human approval” once and have it applied across every agent, every platform, every protocol. No per-tool configuration. No coverage gaps.
  • One observability layer means when an agent acts, you see the full chain: which human triggered it, which NHI it authenticated with, which tool it called, and what data it touched. Not three dashboards stitched together after the fact, but a single view that makes incident response possible in minutes instead of days.
  • One enforcement point means policy is applied at runtime, at the moment of action, not retroactively through quarterly access reviews. When an agent requests access, the decision happens inline. Allow, deny, or step up. Before the action executes, not after. 
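The observability bullet above — one view of the full chain instead of three stitched dashboards — amounts to emitting every agent action as a single correlated record. A minimal sketch, with entirely hypothetical field names (this is not any vendor's log schema):

```python
import datetime
import json

def audit_record(human, agent, nhi, tool, data_touched):
    """One correlated record per agent action: the whole identity chain in one place."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "human_trigger": human,        # which human triggered the agent
        "agent": agent,                # the agent identity itself
        "nhi": nhi,                    # non-human identity it authenticated with
        "tool": tool,                  # tool or API it called
        "data_touched": data_touched,  # resources it accessed
    }

record = audit_record("jane@example.com", "invoice-bot-01",
                      "svc-erp-reader", "erp.query", ["invoices_2025"])
print(json.dumps(record, indent=2))
```

Because the chain is captured at write time, incident response becomes a lookup rather than a cross-dashboard reconstruction.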

This is what shifts AI agent security from a governance exercise to an operational capability. Discovery tells you what exists. Registration tells you who owns it. Runtime enforcement tells agents what they’re actually allowed to do, in the moment, every time. 

AI agents represent the next frontier of identity. Identity Security must evolve accordingly, from governance alone to continuous, runtime enforcement. Discover what is running. Register who owns it. Enforce at the moment of execution. That is the path. 

The Gartner report is worth reading in full: https://www.silverfort.com/landing-page/campaign/gartner-report-iam-for-agents/.

Want to learn how Silverfort discovers and protects AI agent identities? See AI Agent Security in action.


This post originally appeared on Silverfort.com, and is re-published with permission.

How Government Agencies Can Modernize Transportation with Uber for Business

State and Local Government agencies are under pressure to do more with less while still delivering reliable services. In many agencies, transportation is fragmented, with larger organizations juggling four or five separate vendor contracts across departments. There is an over-reliance on legacy vendors that are significantly more expensive, including specialty vendors that are important for certain populations and services but may not be necessary for every rider. In many cases, these systems require rides to be booked days in advance, sometimes through offline means such as phone calls. This lack of centralization also limits reporting and visibility into how transportation dollars are being spent.

Uber for Business helps Government agencies move away from a fragmented model by offering a single enterprise platform that can support a variety of transportation needs across departments. With more than 9.4 million participating drivers and couriers, Uber has the largest rideshare network in the world. Centralized administration and reporting provide agencies with a complete view of their transportation programs while reducing the burden on staff who currently manage rides manually.

Supporting Employee Travel and Community Programs

Agencies are using Uber for Business in several capacities. One major use case is employee travel. Many agencies still rely on rental cars or motor pools for staff traveling for work. Uber for Business provides an alternative that can also augment existing fleet operations, helping reduce reliance on basic sedans while allowing fleet teams to focus on specialized vehicles. Agencies can set controls around who can ride, when they can ride and what trip options are available. This is especially appealing as many employees are already familiar with using Uber in their personal lives, making it a seamless and intuitive option to extend into official Government travel.

Agencies are also using Uber for Business to support community-facing programs, including:

  • Court systems use rideshares to transport victims and witnesses, ensuring they arrive reliably and on time and have access to a familiar mode of transportation.
  • Social service departments and similar programs are using rideshare to close mobility gaps for the populations they serve, including workforce reentry, recidivism and youth and family programs that need reliable transportation to access essential services or job opportunities.
  • Public safety and transportation agencies are leveraging rideshare to support safe ride and anti-driving under the influence (DUI) campaigns, helping reduce impaired driving by providing residents with accessible transportation alternatives during high-risk times.

Delivering Value Quickly

One of the clearest advantages of Uber for Business is how quickly agencies can begin seeing value. For program managers responsible for overseeing social service and community programs, the benefits can be immediate when constituents are able to get where they need to go more reliably. Smoother transportation can make programs easier to manage and more effective overall.

Programs can be set up as fast as a couple of days. This speed can be especially important when agencies have immediate transportation needs or are looking for a fast, low-lift way to modernize existing processes.

Reducing Costs and Administrative Burden


Cost savings are another major driver for adoption. Through Uber’s partnership with Carahsoft, the solution is available through a National Association of State Procurement Officials (NASPO) agreement that includes built-in incentives for agencies. Uber also applies a tax exemption tag when setting up programs so eligible rides are exempt from applicable taxes.

Beyond discounts and tax advantages, agencies can realize significant operational efficiencies. Program managers no longer need to call in rides or worry about whether clients are reaching their destinations. Instead, they can see trips in real time, communicate with drivers during the booking process and distribute ride credits easily. These streamlined workflows reduce administrative effort and help programs run more efficiently.

Improving Visibility, Compliance and Oversight

For agencies in large counties, Uber for Business can be set up with a parent account that all department accounts fall under. This gives agencies centralized administration rights and better reporting across the organization. It also supports auditing and grant compliance by allowing administrators to view granular details for each trip.

Centralization also helps agencies capture unmanaged transportation spending that may otherwise happen informally across departments. Instead of relying on ad hoc rideshare use with little oversight, agencies can bring transportation activity into one system and enforce internal policies more consistently.

Enhancing the Transportation Experience

Ease of use is a major reason agencies are adopting Uber for Business. For riders, the biggest advantage is on-demand access. Rather than scheduling transportation days in advance, riders can get a trip when they need it. This flexibility can make a meaningful difference for participants in social service and workforce reentry programs, where reliable access to transportation can affect whether someone is able to reach work, court or other essential services.

Uber has also invested in accessibility features, building tools for riders who may not have a cell phone or the Uber app, as well as for those who speak another language or have low vision or hearing-related disabilities. For Government agencies focused on serving all constituents, not just most, these capabilities can help expand access and improve inclusivity.

A Centralized Transportation Strategy

According to Uber, the most successful deployments happen when an executive or procurement leader helps identify which departments across an agency could benefit from a more modern, efficient mobility solution. That agency-wide visibility makes it easier to structure the right program from the start, including setting up the parent account, selecting the right products for different departments and developing an implementation and training plan for staff. This kind of centralized planning can help agencies move beyond isolated pilots and create a transportation strategy that serves multiple departments and use cases through one platform.

For agencies just getting started, most programs can be up and running in less than a month. While some agencies may choose to run their own solicitation process, others can take advantage of existing contracts through NASPO and Carahsoft to start immediately. In emergency situations, deployment can be done within a day. Uber can move as fast as an agency requires.

As agencies look for ways to improve service delivery, manage budgets more carefully and give employees and constituents more reliable transportation options, Uber for Business provides a scalable and flexible model for modernization. From employee travel and fleet augmentation to court systems and social services, a centralized rideshare platform can help agencies simplify operations, improve oversight and better meet transportation needs across the communities they serve.

To learn more about how Uber provides modern travel and rideshare options to Government agencies, view their Uber for Business portfolio.


Keep More, Store Less: The Case for Advanced Compression in Federal EDR

How agencies can retain full-fidelity data without overspending on storage

Endpoint detection and response (EDR) depends on data. The more telemetry you collect, the more context you have to detect threats, investigate incidents and meet Federal compliance requirements.

But data volume is also the problem. Federal agencies generate massive amounts of endpoint telemetry every day. Process activity. File changes. Network connections. User behavior. Multiply that across thousands of devices and storage requirements quickly grow beyond what many teams can sustain.

Security teams often face a difficult tradeoff: retain full-fidelity data and absorb higher storage costs, or limit retention and risk losing critical visibility.

That tradeoff is no longer necessary. Advanced data compression changes the economics of endpoint visibility. Agencies can retain unfiltered telemetry for extended periods without expanding storage budgets or adding operational complexity.

The Visibility–Storage Tradeoff is No Longer Sustainable

Federal cybersecurity requirements continue to raise the bar for telemetry collection and retention. Agencies must support Zero Trust initiatives, continuous monitoring programs and audit readiness. Modernization efforts increase the number of connected endpoints, including cloud workloads, remote systems and contractor-managed devices. Each new endpoint expands the telemetry footprint.

At the same time, budgets remain under scrutiny. Storage infrastructure must compete with other mission priorities and security leaders must justify every dollar. When storage costs climb, teams often respond in predictable ways:

  • Reduce retention windows
  • Sample or filter telemetry
  • Drop lower-priority event types
  • Offload data to external archives that are difficult to query

Each of these approaches creates blind spots. Shorter retention windows limit historical investigations, filtered data weakens threat hunting and fragmented storage slows response times.

In a threat context where adversaries can dwell quietly for months, incomplete data is a liability. Agencies need a way to collect and retain comprehensive telemetry without creating unsustainable storage growth.

Compression-First Architectures Improve Data Retention

Traditional security platforms treat compression as an afterthought. Data is collected at scale, stored in raw or lightly optimized formats and compressed later in the pipeline. By then, infrastructure costs are already locked in.

A compression-first architecture takes a different approach. Advanced compression techniques reduce data size at ingest. Telemetry is optimized as it enters the platform, not after it has consumed storage resources. The result is a significantly smaller storage footprint without sacrificing fidelity. For Federal security operations centers (SOCs), this shift has meaningful impact:

  • Longer retention without higher cost – Agencies can retain 180 days or more of full-fidelity telemetry while remaining within budget constraints.
  • Unfiltered visibility – Teams do not need to decide in advance which data might matter later. They can keep it all.
  • Faster investigations – Optimized storage enables efficient querying across large datasets, supporting threat hunting and incident response.
  • Simplified architecture – Native compression reduces the need for external storage tiers or complex archival systems.
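As a rough illustration of why compressing at ingest changes the economics: endpoint telemetry is highly repetitive (the same field names, image paths and user accounts recur constantly), so even a generic codec shrinks it dramatically before it ever consumes storage. The sketch below uses Python's stdlib zlib on simulated events; the numbers are illustrative only and say nothing about any specific vendor's compression.

```python
import json
import zlib

# Simulate 10,000 process-activity events with the repetitive structure
# typical of endpoint telemetry.
events = [json.dumps({"host": f"host-{i % 50}",
                      "event": "process_start",
                      "image": "C:\\Windows\\System32\\svchost.exe",
                      "user": "SYSTEM"}) for i in range(10_000)]
raw = "\n".join(events).encode()

# Compress at ingest, before the data hits the storage tier.
compressed = zlib.compress(raw, level=9)
ratio = len(raw) / len(compressed)

print(f"raw: {len(raw):,} bytes, compressed: {len(compressed):,} bytes")
print(f"approximate ratio: {ratio:.0f}x")
```

Even this naive approach yields a double-digit reduction on repetitive telemetry, which is the intuition behind retaining 180 days of full-fidelity data within the same storage budget.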

Instead of managing tradeoffs, security teams regain flexibility.

Full-Fidelity Data Supports Compliance and Zero Trust

Federal mandates increasingly require measurable security maturity. Continuous monitoring, device-level visibility and documented audit trails are central to that effort, and retention depth matters.

When agencies can access complete endpoint histories, they strengthen their ability to:

  • Validate Zero Trust controls within the device pillar
  • Reconstruct events during forensic investigations
  • Demonstrate compliance with evolving Federal security requirements
  • Support reporting obligations tied to vulnerability and risk management

Short retention windows make it harder to answer fundamental questions: When did this behavior begin? Was lateral movement attempted? Did similar activity occur on other systems?

With compressed full-fidelity data, those questions become easier to answer and teams can look back months, not days. This level of historical visibility supports stronger analytics, more informed risk decisions and more defensible reporting.

Cost Efficiency Matters Under Federal Scrutiny

Every Federal technology investment must demonstrate operational value. Advanced compression directly addresses cost concerns in several ways:

  • Reduces total storage consumption
  • Delays or eliminates additional infrastructure purchases
  • Lowers operational overhead tied to managing multiple storage systems
  • Minimizes data movement between tiers

At the same time, it strengthens the overall security posture by preserving data that might otherwise be discarded. This combination of efficiency and depth is particularly important for agencies balancing modernization initiatives with budget discipline.

Security cannot become a cost center that expands without limit. It must scale responsibly. Compression-first EDR architecture supports that balance.

The Federal security community no longer needs to accept a compromise between cost and visibility. Advanced data compression enables agencies to:

  • Collect unfiltered endpoint telemetry
  • Retain data for extended periods
  • Support Zero Trust maturity
  • Strengthen investigative capabilities
  • Maintain fiscal discipline

As agencies define the next standard for Federal EDR, data strategy must be part of the conversation. Retention, accessibility and efficiency determine whether telemetry delivers long-term value.

Carbon Black and Carahsoft help Federal agencies adopt a compression-first approach to endpoint detection and response, so teams can keep more data, store less and operate with confidence.

Contact us to learn how your agency can adopt a compression-first approach to endpoint visibility while staying within budget.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Broadcom, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Unified Financial Intelligence: Why Government Finance Teams Have a Data Foundation Problem, Not a Data Problem

How Incorta, Google and Carahsoft help State, Local, Education and Federal civilian agencies move from slow close cycles to real-time, AI-ready financial insight

I spend a lot of my time talking with Government finance leaders—CFOs, comptrollers, budget directors—and the conversation almost always starts with AI and ends with data. Almost every agency I talk to eventually runs into the same wall: their data isn’t ready. As we move toward agentic AI—AI that takes actions and makes decisions on its own, not just answers questions—the demands on that foundation multiply fast. Until it’s right, AI remains a slide in a strategy deck. That’s the problem Incorta was built to solve.

Nowhere is this more obvious than in Public Sector financial management, where the stakes are high, the infrastructure is often decades old and the expectation for transparency has never been greater. If we want to talk seriously about Unified Financial Intelligence in Government, we have to talk seriously about the data brain underneath it—the trusted, real-time, contextual foundation that AI agents depend on to make accurate, explainable decisions. Without it, you don’t have an AI problem. You have a data problem dressed up as one.

The Real Bottleneck: Government Finance Needs a Data Brain

Public Sector finance teams are under more pressure than ever: leaner budgets, post-pandemic fiscal gaps, enrollment volatility and a mandate to do more with less. New White House and OMB directives are accelerating the AI timeline—agencies are being asked to demonstrate AI-ready infrastructure now, not in a future budget cycle.

For CFOs, comptrollers and finance teams, that pressure is concrete. Close cycles still take days or weeks. Analysts spend more time gathering data than using it. When leadership questions a number, the answer is “let me pull it manually”—because the system shows aggregates, not the transactions behind them.

The root cause isn’t a lack of tools or talent. Financial data is scattered across GL, procurement, grants, payroll and project systems—each with its own codes and timing—and traditional ETL strips out the very context that makes it useful. That’s the data brain problem.

What the Data Brain Has to Deliver

For finance, AI isn’t about prettier dashboards. It’s about answering hard questions: why did this variance occur? Where are the early signals of fraud, waste or abuse? What does next quarter look like if this assumption changes? To answer those credibly, AI needs a data brain.

That data brain has to deliver three things: granularity (100% transactional detail), timeliness (near real-time, not last week’s batch) and context (preserved relationships—purchase orders to vendors, funds to appropriations, payroll to projects).
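To make "context" concrete, here is a toy Python sketch (hypothetical field names and amounts) contrasting an aggregate-only view with one that preserves purchase-order relationships, so a questioned number can be traced to the exact transactions behind it:

```python
# Toy data with hypothetical schemas, for illustration only.
purchase_orders = {
    "PO-1001": {"vendor": "Acme Corp", "fund": "FND-OPS"},
    "PO-1002": {"vendor": "Globex", "fund": "FND-GRANTS"},
}
transactions = [
    {"txn_id": "T-1", "po": "PO-1001", "amount": 12_500.00},
    {"txn_id": "T-2", "po": "PO-1001", "amount": 3_200.00},
    {"txn_id": "T-3", "po": "PO-1002", "amount": 48_000.00},
]

# Summarized view: the aggregates a dashboard typically shows.
by_fund = {}
for t in transactions:
    fund = purchase_orders[t["po"]]["fund"]
    by_fund[fund] = by_fund.get(fund, 0) + t["amount"]

def explain(fund):
    """Drill from a fund-level number to the transactions behind it,
    possible only because the PO-to-fund relationships were preserved."""
    return [t for t in transactions
            if purchase_orders[t["po"]]["fund"] == fund]

print(by_fund)             # the number leadership questions
print(explain("FND-OPS"))  # the transactions that answer the question
```

When the pipeline keeps only `by_fund`, the `explain` step is impossible; that is the traceability gap auditors reject.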

Traditional ETL gives you the opposite of a data brain: summarized, stale data stripped of business logic. When you layer AI on top of it, the model fills in the gaps, and for Government finance that is more than a technical problem. If an AI-assisted answer can't be traced back to the exact transaction, your auditors and oversight bodies won't accept it.

That’s how you get hallucinations instead of financial intelligence.

The “AI problem” and the “data problem” in Government finance are actually the same problem. Build the data brain, and Unified Financial Intelligence follows.

What Changes When You Have a Data Brain

Take a Federal civilian agency we worked with: 24-hour data refresh cycles, manual reconciliation, spreadsheets and email chains just to close the books. Analysts spent most of their time getting data into a usable format—not using it.

After implementing Incorta with Google Cloud, that agency went from 24-hour to 15-minute data refreshes for key financial subject areas.

  • From periodic close to continuous audit. Anomalies surface in near real-time—before they snowball, not after month-end.
  • From “check the dashboard” to “follow the data.” The CFO questions a number; the analyst drills to the exact transaction, in the same environment.
  • From data gathering to value creation. Analysts shift from reconciliation to scenario modeling and real decisions.

That’s Unified Financial Intelligence with a data brain underneath it: full, timely, contextual access to the truth—and the time to actually use it.
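A generic pattern behind refresh cycles like that is watermark-based incremental extraction: each cycle pulls only the rows modified since the last successful load. The sketch below illustrates the idea with simulated rows; it is not a description of Incorta's internals:

```python
from datetime import datetime

# Simulated source table with a last-modified timestamp per row.
source_rows = [
    {"id": 1, "modified": datetime(2024, 1, 1, 9, 0), "amount": 100},
    {"id": 2, "modified": datetime(2024, 1, 1, 9, 7), "amount": 250},
    {"id": 3, "modified": datetime(2024, 1, 1, 9, 20), "amount": 75},
]

def incremental_extract(rows, watermark):
    """Return only rows changed since the last successful refresh."""
    return [r for r in rows if r["modified"] > watermark]

# A daily batch reloads everything; a 15-minute cycle only moves the
# rows that changed since the previous watermark.
batch_0915 = incremental_extract(source_rows, datetime(2024, 1, 1, 9, 0))
batch_0930 = incremental_extract(source_rows, datetime(2024, 1, 1, 9, 15))

print(len(batch_0915), "rows at 09:15;", len(batch_0930), "row at 09:30")
```

Because each cycle moves so little data, refreshing every 15 minutes costs far less than one nightly full reload, which is what makes near real-time subject areas practical.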

How Incorta Builds the Data Brain

The traditional path to modernizing financial data in Government is measured in years and eight-figure budgets—and most of us have seen how that story ends. At Incorta, we took a different approach: build the data brain for Government finance on Google Cloud without requiring agencies to tear out what’s already there. Three pillars make that possible:

  1. Direct access to ERP data in its native form – Incorta connects directly to Oracle EBS, Oracle Fusion, SAP and Workday, ingesting data in its native schema—no heavy transformation, no lost business context.
  2. Prebuilt blueprints for Public Sector financial systems – A library of prebuilt blueprints captures how ERP tables relate, how funds and projects are structured and how to translate that into analytics-ready models—removing months of data engineering work.
  3. Landing it all in Google BigQuery for AI-ready analytics – The result is a production-ready financial data brain in Google BigQuery—granular, near real-time and fully contextualized—standing up in weeks, not months or years, with Gemini for Government and agentic AI tools ready to operate on top.

On top of this, Incorta layers AI-powered insights with built-in hallucination mitigation, role-based access controls, audit trails and mirrored source system permissions—so agencies can scale AI without sacrificing governance.

Carahsoft plays a crucial role in this story by making it easy for agencies to get started—through existing contract vehicles and the Google Cloud Marketplace—without embarking on another risky, bespoke IT project.

Where State, Local, Education and Federal Civilian Finance Teams Are Starting

State budget offices need real-time visibility into appropriations and fund balances—so leadership responds to revenue shifts, not monthly reports. Local Governments want to move from reactive spreadsheets to proactive scenario planning and cleaner audits. Education finance teams need unified views of budgets, grants and financial aid to navigate enrollment volatility. Federal civilian CFO offices are pursuing continuous close and early AI-driven detection of fraud, waste and abuse. In every case: build the data brain first, and the downstream AI use cases become operational, not experimental.

Getting Started Doesn’t Have to Be a Multi-Year Commitment

One of the most consistent concerns I hear is: “We’ve been burned by big data projects before. We can’t sign up for another multi-year transformation.” That hesitation is completely rational—and it’s exactly why we’ve structured our approach with Google and Carahsoft to deliver value in weeks, not years.

A practical entry point is a Unified Financial Intelligence Modernization Assessment—a focused engagement to assess your ERP landscape, map how your data lands in BigQuery (secure, governed, auditable) and define a 60- to 90-day outcome that shows what the data brain delivers in your environment.

Incorta is available through Carahsoft on the Google Cloud Marketplace—most agencies can use existing contracts and cloud commitments to get started, no new RFX required.

The Bottom Line

State, Local, Education and Federal civilian finance teams don’t need another dashboard. They need the data brain that makes Unified Financial Intelligence possible—access to all of their financial data, in near real-time, with full business context, so they can shift from gathering data to actually using it.

That’s what Incorta, Google and Carahsoft are building together for Government. In an environment where agencies are being asked to do more with less, standing up that data brain in weeks rather than years isn’t just a nice-to-have. It’s the difference between a finance function that’s keeping up and one that’s falling behind.

→ Request a live Agentic AI demo — see Incorta + Google in action on your mission data.

→ Try free for 30 days on Google Cloud Marketplace — software free; infrastructure costs may apply.

→ Get started with the Unified Financial Intelligence Modernization Assessment — map your data brain and define a 60- to 90-day outcome.

Ready to explore what real-time financial intelligence looks like for your agency? Learn more about Incorta’s Government solutions on Carahsoft’s Incorta microsite. Watch our joint Incorta + Google session on AI-ready financial data for Public Sector.
Contact the Carahsoft Team ☎ (703) 871-8548  |  ✉ incorta@carahsoft.com

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Incorta, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Integrated Threat Hunting: A Smarter Path for Stretched Federal SOCs

Why visibility, automation and collaboration are now mission-critical

Federal Security Operations Center (SOC) teams are under relentless pressure. Teams are increasingly stretched thin as agencies grapple with AI-enhanced threats, Zero Trust requirements and operational mandates like FISMA 2.0. Despite limited staff and growing workloads, the mission remains clear: defend critical infrastructure, secure sensitive data and maintain compliance.

When critical alerts demand split-second context, fragmented tools and siloed data only make matters worse. Analysts lose time switching between platforms, and revalidating and responding to quickly escalating threats takes time away from mission continuity.

Federal SOCs require integrated, intelligence-driven platforms that support end-to-end threat visibility, rapid response and secure information sharing.

Modern Federal SOCs Face Mounting Challenges

Staffing shortfalls are now a systemic issue. The cybersecurity talent gap currently exceeds 5.5 million unfilled roles globally, with Federal agencies competing for a shrinking pool of qualified professionals.

Meanwhile, tool sprawl and console fatigue complicate workflows. Analysts must juggle multiple platforms to correlate data, validate incidents and track lateral movement, all while meeting increasingly complex compliance reporting mandates.

Agencies must also contend with:

  • AI-generated malware that evades signature-based detection
  • Expanding attack surfaces from hybrid environments and remote endpoints
  • Escalating compliance expectations tied to FISMA modernization, OMB M-24-14 and Zero Trust architecture maturity

To keep pace, teams need tools that consolidate, correlate and streamline.

Real-time Response Enhances SOC Agility

Threat impact is defined by the time it takes to respond properly. Delayed containment leads to higher costs and increased exposure. That’s why real-time response is now essential to any defensible cybersecurity posture.

Modern endpoint detection and response (EDR) platforms allow teams to:

  • Isolate compromised endpoints instantly
  • Terminate malicious processes at the source
  • Prevent data exfiltration in-flight
  • Apply automated playbooks for repeatable, standards-based remediation

These capabilities reduce manual intervention and align with CISA’s SOAR guidance, enabling SOCs to act swiftly within a Zero Trust model. For Federal teams, this also supports audit-readiness with timestamped forensic records that meet FISMA and OMB compliance requirements.
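The shape of such an automated playbook can be sketched as follows. The isolate and terminate calls are placeholders, not Carbon Black's actual API; the point is the standardized, repeatable steps and the timestamped audit trail they leave behind:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("playbook")

def isolate_endpoint(host):
    # Placeholder for an EDR API call that quarantines a device.
    log.info("isolated %s", host)
    return True

def kill_process(host, pid):
    # Placeholder for an EDR API call that terminates a process.
    log.info("killed pid %d on %s", pid, host)
    return True

def run_playbook(alert):
    """Standardized remediation that records a timestamped audit trail."""
    audit = {
        "alert_id": alert["id"],
        "started": datetime.now(timezone.utc).isoformat(),
        "steps": [],
    }
    if alert["severity"] >= 8:  # containment threshold is an assumption
        audit["steps"].append(("isolate", isolate_endpoint(alert["host"])))
    audit["steps"].append(("kill", kill_process(alert["host"], alert["pid"])))
    audit["finished"] = datetime.now(timezone.utc).isoformat()
    return audit

result = run_playbook({"id": "A-42", "host": "ws-017", "pid": 4321, "severity": 9})
print(result["steps"])
```

The returned audit record is what supports defensible, compliance-ready reporting: every action is attributable to a specific alert with start and finish timestamps.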

Unified Telemetry Accelerates Threat Hunting

Siloed data weakens an analyst’s ability to detect patterns and perform deep investigations. By unifying endpoint telemetry across devices and environments, teams gain access to richer datasets and longer retention windows for root cause analysis.

Carbon Black EDR captures high-fidelity endpoint activity and retains up to 180 days of telemetry, letting teams uncover threats that may have originated weeks or months prior.

With behavior-based analytics, SOCs can move past static signatures and detect anomalies faster, pinpointing lateral movement, privilege escalation and indicators of compromise before damage escalates.
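The value of a long retention window is easy to illustrate: with the full history on hand, "when did this behavior begin?" becomes a simple lookback query. A minimal sketch with hypothetical in-memory events stands in for querying the platform:

```python
from datetime import datetime, timedelta

now = datetime(2024, 6, 1)
# Hypothetical retained telemetry; with a 180-day window the earliest
# sighting is still available instead of having aged out.
events = [
    {"ts": now - timedelta(days=170), "host": "srv-02", "ioc": "bad.exe"},
    {"ts": now - timedelta(days=12), "host": "ws-114", "ioc": "bad.exe"},
]

def first_seen(events, ioc):
    """Find the earliest sighting of an indicator across the window."""
    hits = sorted((e for e in events if e["ioc"] == ioc),
                  key=lambda e: e["ts"])
    return hits[0] if hits else None

origin = first_seen(events, "bad.exe")
print(origin["host"], (now - origin["ts"]).days, "days ago")
```

With only 30 days of retention, the `srv-02` sighting would be gone and the investigation would wrongly conclude the activity began two weeks ago.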

Collaboration and Data Sharing Reduce Operational Risk

Cybersecurity is a team sport, but without integrated data sharing, even the best defenses can fall short. Fragmented environments limit visibility, making it difficult to act on shared intelligence across tools and agency teams.

Integrated platforms streamline threat intelligence sharing through features such as:

  • The Carbon Black Data Forwarder, which simplifies integration with SIEM/SOAR platforms
  • API-driven data sharing that supports automation and collaboration
  • Compatibility with Zero Trust frameworks, particularly the Device Pillar of OMB M-24-14

With cross-environment visibility and collective learning, SOC teams can improve incident response while advancing cybersecurity maturity across the agency.

Work Smarter, Not Harder

Federal SOCs face high-stakes situations where time and clarity are critical and decisions impact lives in real time. Every alert demands focus. Every decision must be defensible. To operate effectively under pressure, teams need platforms that reduce noise, unify workflows and enable smart action.

Carbon Black and Carahsoft help Federal teams do more with less. We empower analysts with the real-time insights and interoperability they need to protect what matters most.

Contact us to learn how your agency can simplify threat detection, response and collaboration with Carbon Black EDR.
