Building a Security Strategy for Agentic AI: A Framework for State and Local Government

As artificial intelligence (AI) evolves from simple chatbots to autonomous agents capable of making independent decisions, State and Local Government agencies face a fundamental shift in cybersecurity requirements. Recent research shows 59% of State and Local Government respondents report already using some form of generative AI (GenAI), with 55% planning to deploy AI agents for employee support within the next two years. Yet this rapid adoption brings unprecedented security challenges. Because AI agents are designed to pursue goals autonomously, even adapting when security measures block their path, Chief Information Security Officers (CISOs) responsible for safeguarding Government networks must rethink traditional defenses and embrace a new security paradigm.

The Emergence of Agentic AI and Its Unique Security Challenges

AI agents represent a significant departure from the GenAI tools many agencies currently use. While traditional Large Language Models (LLMs) respond to prompts and return information, much as a support chatbot does, AI agents and agentic systems are autonomous software programs that can plan, reflect, use tools, maintain memory and collaborate with other agents to achieve specific goals. These capabilities make them powerful productivity tools, but they also introduce failure modes that conventional software simply does not have. Unlike deterministic systems that crash when something goes wrong, AI agents can fail silently through collusion, context loss or corrupted cognitive states that propagate errors throughout connected systems. Research examining the real-world performance of AI agents found that single-turn tasks had a 62% failure rate, with success rates dropping even further in multi-turn scenarios.

When Veracode examined 100 LLMs performing programming tasks, the models introduced security vulnerabilities 45% of the time. For State and Local agencies handling sensitive citizen data, managing critical infrastructure or supporting public safety operations, these error rates demand careful attention within robust security frameworks designed specifically for autonomous systems.

The New Security Paradigm: From Human-Centric to Agent-Inclusive Workforce Protection

AI agents, the newest coworkers, amplify insider threats by combining human-like autonomy with capabilities that exceed human limitations. While employees work within bounded motivation and finite skills, AI agents possess boundless motivation to achieve goals, uncapped skills that continuously improve and infinite willpower, constrained only by computational capacity. They will not simply attempt to access a file once, get blocked by a lack of permissions, get frustrated and go home for the day the way an employee might; they will persistently pursue objectives, potentially finding novel ways around security controls.

This transformation fundamentally changes the attack surface agencies must protect. Data breaches continue to impose significant financial and operational strain across the public sector, with many state and local organizations reporting cumulative annual costs that reach into the millions. AI agents and agentic systems collapse traditional security models by operating as autonomous workforce members who interact with systems, access data and make decisions without direct human oversight. They can be compromised through threats specific to agentic AI, such as goal and intent hijacking, memory poisoning, resource exhaustion or excessive agency that can lead to unauthorized actions, all in pursuit of achieving programmed objectives. For Government agencies managing limited security budgets while protecting essential citizen services, this exponential increase in potential attack vectors demands proactive frameworks rather than reactive responses.

The AEGIS Framework: A Six-Domain Approach to Securing Agentic AI

Forrester’s Agentic AI Enterprise Guardrails for Information Security (AEGIS) framework provides a comprehensive approach that helps CISOs secure autonomous AI systems across six critical domains.

Governance, Risk and Compliance (GRC) establishes oversight functions and continuous monitoring capabilities. Identity and Access Management (IAM) addresses the unique challenge of agent identities, which combine characteristics of both machine and human identities. Data Security focuses on classifying data appropriately, implementing controls for agent memory and considering data enclaves and anonymization from a privacy perspective.

Application Security evaluates risks across the entire software development lifecycle (SDLC), implements Development, Security and Operations (DevSecOps) best practices, assesses the software supply chain and uses adversarial red team testing to validate safety and security controls. This domain also focuses on embedding telemetry that gives security teams visibility into agent behavior and decision making. Threat Management ensures logs are accessible to security operations center analysts, enabling detection of behavioral anomalies and supporting forensic investigations. Zero Trust Architecture (ZTA) applies principles such as network access layer controls for agent workloads, continuous validation of the agent’s runtime environment and monitoring of agent-to-agent communication.

Underlying the framework are three core principles:

  • Least Agency extends least privilege to focus on decisions and actions, ensuring agents have only the minimum set of permissions, capabilities, tools and decision-making authority necessary to complete specific tasks; a minimal sketch follows this list.
  • Continuous Risk Management replaces periodic audits with ongoing evaluation of data, model and agent integrity.
  • Securing Intent requires organizations to understand whether agent actions are malicious or benign, intentional or unintentional, enabling proper investigation when failures occur.
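
To make Least Agency concrete, here is one possible enforcement pattern: a deny-by-default gate that scopes an agent's tools and data access to a single task. The class, task and tool names below are illustrative assumptions, not part of Forrester's framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskGrant:
    """Hypothetical per-task grant: the minimum tools and data scopes an agent may use."""
    task: str
    allowed_tools: frozenset[str]
    allowed_data_scopes: frozenset[str]

class LeastAgencyGate:
    """Deny-by-default check applied before every tool invocation."""
    def __init__(self, grants: dict[str, TaskGrant]):
        self.grants = grants

    def authorize(self, task: str, tool: str, data_scope: str) -> bool:
        grant = self.grants.get(task)
        if grant is None:
            return False  # no grant on file: the agent has no agency for this task
        return tool in grant.allowed_tools and data_scope in grant.allowed_data_scopes

# Example: a benefits-inquiry agent may search the knowledge base but never call payroll APIs.
gate = LeastAgencyGate({
    "benefits_inquiry": TaskGrant(
        task="benefits_inquiry",
        allowed_tools=frozenset({"kb_search", "case_lookup"}),
        allowed_data_scopes=frozenset({"public_policy_docs"}),
    )
})
assert gate.authorize("benefits_inquiry", "kb_search", "public_policy_docs")
assert not gate.authorize("benefits_inquiry", "payroll_api", "employee_records")
```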

Practical Implementation: Agent Onboarding and Governance

Forrester’s “Agent on a Page” concept provides a practical tool for establishing structure, consistency and alignment of AI agents to business goals before activation by outlining each agent’s owner, core purpose, operational context, knowledge base, specific tasks, functional alignment, tool access and cooperation patterns. This documentation gives business stakeholders clear success criteria, while security teams use it as a threat model and as input into Forrester’s AEGIS framework to identify control gaps, missing guardrails and vulnerabilities, and to establish baselines against which agent behavior can be validated.
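
One way to capture an "Agent on a Page" in machine-readable form is sketched below. The fields mirror the attributes listed above, but the schema itself is illustrative, not Forrester's specification; security teams could diff a deployed agent's observed behavior against a record like this.

```python
from dataclasses import dataclass

@dataclass
class AgentOnAPage:
    """Illustrative schema mirroring the attributes described above."""
    owner: str
    core_purpose: str
    operational_context: str
    knowledge_base: list[str]
    tasks: list[str]
    functional_alignment: str
    tool_access: list[str]
    cooperation_patterns: list[str]

helpdesk_agent = AgentOnAPage(
    owner="IT Service Desk Manager",
    core_purpose="Resolve tier-1 employee IT requests",
    operational_context="Internal network only; business hours",
    knowledge_base=["it_policy_docs", "service_catalog"],
    tasks=["password_reset_guidance", "ticket_triage"],
    functional_alignment="Employee support",
    tool_access=["ticketing_api_read", "kb_search"],
    cooperation_patterns=["escalates_to_human_tier2"],
)
```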

Similar to employee onboarding, agents require explicit programming on compliance frameworks, data privacy restrictions, scope of work and organizational norms. They must understand cooperation boundaries, operational context, knowledge sources and collaboration patterns. Agencies already deploying agents may have some of this documentation; those starting should collaborate between business owners and security teams to develop these frameworks.

Building a Secure Foundation for Autonomous AI

State and Local Government agencies stand at a critical inflection point. AI agents promise significant productivity gains across employee support, investigation assistance and first responder capabilities. Yet deploying these autonomous systems without appropriate security frameworks creates unacceptable risks for organizations managing citizen data and essential public services. The AEGIS framework provides a comprehensive approach to securing agentic AI before widespread deployment, enabling agencies to realize benefits while maintaining security postures that citizens expect.

Organizations should begin by reviewing Forrester’s AEGIS framework to understand how it maps to existing frameworks and compliance requirements such as the NIST AI Risk Management Framework (RMF), the EU AI Act and the OWASP Top 10 for LLMs. Forming an AI governance committee grounded in AEGIS principles helps establish organizational buy-in. Discovery processes that identify which departments are exploring AI agents enable targeted control implementation. Agencies that establish strong foundations now position themselves to adopt autonomous AI confidently and securely.

To explore the complete AEGIS framework and gain deeper insights into securing agentic AI for State and Local Government, watch Carahsoft’s full webinar featuring Forrester, “Full Throttle, Firm Control: Build Your Trust Strategy for Agentic AI.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Forrester, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

FedRAMP 20x: Modernizing Cloud Security Authorization Through Automation and Continuous Assurance

FedRAMP authorization has long required extensive documentation, static point-in-time assessments and timelines of 18–24 months. This approach has slowed innovation for Federal agencies seeking secure cloud solutions and for vendors pursuing Government contracts.

FedRAMP 20x reimagines authorization through automation, machine-readable evidence and continuous monitoring, shifting compliance from document-driven processes to data-driven assurance. It also reshapes how Federal agencies, Cloud Service Providers (CSPs) and Third-Party Assessment Organizations (3PAOs) collaborate to secure Government environments.

The Shift from REV 5 to 20x

Traditional FedRAMP authorization follows a linear, document-heavy process where CSPs write extensive System Security Plans (SSPs), undergo annual assessments and exchange static artifacts with 3PAOs. FedRAMP 20x maintains the same security requirements from National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 Revision 5 (REV 5) but transforms how evidence is validated. Instead of screenshots or single-moment spreadsheets, 20x uses logs, configuration files and automated integrations that reflect real-time security posture. This enables continuous assurance, with systems remaining audit-ready and controls validated through actual telemetry and configuration baselines.

The result is a more dynamic, risk-focused model that moves beyond top-down waterfall processes that often obscure security conditions.

Modernized Compliance

FedRAMP 20x requires robust compliance automation built on five pillars:

  1. Control normalization
  2. Engineering
  3. Infrastructure
  4. Evidence generation
  5. Reporting

Controls must be technically engineered into Continuous Integration/Continuous Deployment (CI/CD) pipelines, an approach often described as “compliance-as-code.” Supporting infrastructure must generate evidence in a reliable, machine-readable format such as NIST Open Security Controls Assessment Language (OSCAL) or JavaScript Object Notation (JSON) so CSPs, agencies and 3PAOs can share data rather than documents. This approach transforms compliance work from writing narratives and taking screenshots to building monitoring systems that continuously validate control effectiveness.
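
As a minimal illustration of machine-readable evidence, the sketch below emits a JSON record from an automated control check. The field names are hypothetical and far simpler than a full OSCAL assessment result; a production pipeline would emit OSCAL.

```python
import json
from datetime import datetime, timezone

def collect_evidence(control_id: str, check_name: str, passed: bool, raw: dict) -> str:
    """Emit a machine-readable evidence record for one automated control check.
    Hypothetical schema; real pipelines would produce OSCAL assessment results."""
    record = {
        "control_id": control_id,  # e.g., an SP 800-53 REV 5 control such as IA-2
        "check": check_name,
        "status": "satisfied" if passed else "not-satisfied",
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "evidence": raw,  # actual telemetry, not a screenshot
    }
    return json.dumps(record, indent=2)

# Example: evidence that MFA is enforced, pulled from live identity provider configuration.
print(collect_evidence(
    control_id="IA-2",
    check_name="mfa_enforced_for_privileged_users",
    passed=True,
    raw={"idp_policy": "require_mfa", "exempt_accounts": 0},
))
```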

While artificial intelligence (AI) tools are emerging as assistants, the foundation remains consistent instrumentation and automated evidence collection. Organizations must invest in platforms capable of real-time logging, automated vulnerability scanning, Application Programming Interface (API)-driven evidence collection and continuous control monitoring, moving beyond spreadsheets or basic ticketing systems to true automated Governance, Risk and Compliance (GRC).

Maintaining Security Standards

FedRAMP 20x reduces the barriers to entry for small CSPs. Under the traditional REV 5 model, many providers faced prohibitive costs and timelines, often waiting indefinitely for Joint Authorization Board (JAB) review without agency sponsorship. The 20x pilot eliminates this sponsor requirement and accelerates review: organizations using automation have achieved authorization in six months.

RegScale, FedRAMP 20x blog, embedded image, 2025

RegScale, leveraging its own platform with features like automated evidence collection and AI-assisted control validation, completed its SSP and evidence in approximately three weeks and achieved full authorization within six months of audit start. This acceleration does not weaken security; rather, continuous monitoring and real-time evidence provide greater assurance than annual snapshots.

Another benefit of the 20x approach is that the machine-readable evidence can be reused for other frameworks, enabling a “certify once and comply many” approach across:

  • System and Organization Controls 2 (SOC 2)
  • International Organization for Standardization (ISO) 27001
  • Cloud Security Alliance (CSA) Security, Trust, Assurance and Risk (STAR)

For cloud-native organizations already operating with infrastructure as code (IaC) and automated pipelines, 20x aligns Federal compliance with modern DevSecOps practices.

Cultural and Organizational Change Management

The greatest challenge with FedRAMP 20x is cultural, not technological. Many organizations already possess the necessary tools but continue to rely on manual processes built over 15–20 years. Shifting to automation requires replacing “no hope” environments, where compliance is viewed as endless documentation, with the recognition that more efficient, sustainable operations are both possible and necessary.

Teams must actively retrain themselves to think operationally rather than as checklist validators. The transition also requires breaking down silos between security and compliance teams, agencies and 3PAOs, ensuring all stakeholders rely on the same real-time telemetry instead of debating the meaning of outdated screenshots. Federal agencies must also educate risk owners and embrace new evidence formats and methodologies. Ultimately, this is as much an organizational transformation as a technical one.

Continuous Monitoring and Real-Time Risk Management

FedRAMP 20x redefines relationships between CSPs, agencies and 3PAOs by replacing periodic reviews with continuous monitoring and near real-time risk visibility. Instead of exchanging PDFs, stakeholders share dashboards, datasets and evidence repositories that all parties can access. Auditors can review assessments based on evidence collected minutes or hours ago rather than relying on outdated artifacts.

Continuous monitoring supports 20x by allowing agencies to track configuration drift, Plan of Action and Milestones (POA&M) status and control effectiveness at regular cadences. The definition of “continuous” varies by control type; some controls require minute-by-minute validation, while policy controls may be validated quarterly or semi-annually.
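
That varying definition of "continuous" can be expressed as configuration. The sketch below maps hypothetical control families to validation cadences and flags overdue checks; the families and intervals are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical cadences: technical controls validate frequently, policy controls less often.
CADENCES = {
    "configuration_drift": timedelta(minutes=15),
    "vulnerability_scanning": timedelta(hours=1),
    "access_reviews": timedelta(days=30),
    "policy_attestation": timedelta(days=90),  # quarterly
}

def overdue_checks(last_validated: dict[str, datetime]) -> list[str]:
    """Return control families whose last validation exceeds the allowed cadence."""
    now = datetime.now(timezone.utc)
    epoch = datetime.min.replace(tzinfo=timezone.utc)  # never-validated controls are overdue
    return [
        family for family, cadence in CADENCES.items()
        if now - last_validated.get(family, epoch) > cadence
    ]
```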

For agencies, continuous assurance delivers better risk management capabilities, but only if they invest time in understanding how to interpret machine-readable formats such as OSCAL. Adoption varies, with some agencies already capable while others continue developing this capacity.

Moving Forward with Confidence

FedRAMP 20x is a strategic shift that aligns Federal authorization with modern DevSecOps, delivering faster innovation without reducing security standards. Since launching in March 2025, the pilot has processed 27 submissions and granted 13 authorizations, demonstrating scalability and viability.

With 20x, agencies gain improved risk visibility, reduced vendor timelines and access to innovative cloud solutions previously delayed by lengthy authorizations. However, success is not guaranteed. It requires adopting continuous assurance, investing in platforms that support machine-readable evidence and educating risk owners to interpret dynamic data. CSPs must centralize systems of record, instrument environments for continuous evidence collection and adopt standardized mappings that facilitate automation.  

The organizations that thrive will be those that use FedRAMP 20x as a motivator to replace outdated habits, engineer controls properly and embrace automation as an enhancement, not a replacement, of human expertise.

Discover how FedRAMP 20x is transforming Federal cloud authorization by watching the webinar, “FedRAMP 20x in Motion: What Early Results Mean for Federal Agencies,” featuring insights from RegScale and the CSA.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including RegScale, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Where the Physical Meets the Virtual: How Digital Twins Transform Flood Management

Roughly 2 billion people worldwide are at risk of flooding, and that number grows steadily every year. With flooding ranking as the most frequent and costly natural disaster, Federal, State and Local Governments must find ways to translate historical and real-time data into predictive models for emergency response. Digital twins powered by Artificial Intelligence (AI) substantially shorten simulation cycles, compare complex variables and precisely estimate future flood scenarios.

Challenges with Traditional Forecast Models

Examining the traditional forecast modeling process uncovers a series of disadvantages that keep early flood warning systems from functioning at maximum potential. These flood algorithms often have long modeling and simulation times, and analysts rarely have the luxury of running a model multiple times to maximize its accuracy during an emergency response. As forecasting areas grow larger, these models require more time, more compute power and more analysts to run properly.

There are also issues with the data fed into traditional forecast models. Analysts often have data that is unreliable or unavailable for the locales where an accurate early flood warning is needed. Invalid data can also be created when outdated models misrepresent geospatial features. When that data cannot be compared against other current or historical data points, the overall quality of the dataset decreases.

Along with the disadvantages of the traditional models themselves, the nature of flooding presents its own unique set of challenges for analysts. Freeform or uncontained water is an incredibly difficult element to measure properly, especially when it is in motion. Additionally, weather is often microregional: rainfall can differ drastically between areas only hundreds of feet apart, making accurate assessment of rainfall across an entire municipality or county nearly impossible.

To address these challenges, analysts examine existing models and determine how emerging technology can complement those frameworks to function in a more proactive manner.

Digital Twins and Flood Management

Predictive models are the cornerstone of emergency response, and merging the physical world with digital information is crucial to producing accurate information for public servants to use in the field. This is achieved through digital twins: virtual representations of real-life components and processes. In this case, a digital twin of an Area of Interest (AOI), such as a town or county, can incorporate multiple variables that contribute to a flood scenario, including elevation, stormwater infrastructure, commercial and residential construction, precipitation and natural geographic features. The model then forecasts flooding based on real-time and historical data.

To create a digital twin, analysts select a designated AOI and break it down into a gridded matrix. These cells can be as precise as 50 feet by 50 feet, depending on the resolution required for a specific model and the resolution of the available geospatial data. This way, the model can take into account the spatial variation of different geological data elements within the AOI, including infiltration rate and soil type. Relevant data points are often available through the town or county in question, or through the United States Geological Survey (USGS). Once compiled, this information can be processed in a Geographic Information System (GIS) to create a digital twin to be used in flood forecasting.
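
A gridded AOI can be represented as simple arrays before it is loaded into a GIS. The sketch below builds a hypothetical 50-by-50-foot grid and attaches per-cell attributes; the values are synthetic stand-ins for real county GIS or USGS data.

```python
import numpy as np

CELL_FT = 50  # grid resolution in feet, matching the precision described above

def build_grid(width_ft: int, height_ft: int):
    """Create per-cell attribute layers for a hypothetical AOI.
    Real values would come from county GIS layers or USGS datasets."""
    rows, cols = height_ft // CELL_FT, width_ft // CELL_FT
    elevation_ft = np.zeros((rows, cols))            # from a digital elevation model
    infiltration_in_hr = np.full((rows, cols), 0.5)  # soil infiltration rate by soil type
    impervious_frac = np.zeros((rows, cols))         # pavement/rooftop coverage
    return elevation_ft, infiltration_in_hr, impervious_frac

# A one-mile-square AOI becomes a 105 x 105 grid of 50-foot cells.
elev, infil, imperv = build_grid(5280, 5280)
print(elev.shape)  # (105, 105)
```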

The digital twin can remain static for some time, but it must be updated as:

  • The landscape changes due to urbanization
  • Structures are built and demolished
  • Coastlines and water levels shift

The more current and comprehensive the data incorporated into the digital twin, the more accurate the flood forecast and the more efficient the emergency response.

The Power of the Hybrid Model

As stated previously, one of the major challenges facing public servants in flood management is the time it takes to run simulations. AI models, trained on pairs of input and output data, dramatically cut model run times during storm events. Analysts can produce forecasts in seconds or minutes, where the underlying hydraulic and hydrologic models previously took hours or days to run. This rapid model-scoring process means multiple AI models can run at once, taking uncertainty in multiple parameters into account, reconciling divergent flooding estimates and producing more accurate predictions.
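
The hybrid approach amounts to training a fast statistical surrogate on input/output pairs from the slow physics model. Here is a minimal sketch with scikit-learn, using synthetic data in place of archived hydraulic and hydrologic runs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for archived simulation runs: per-cell rainfall, elevation
# and imperviousness as inputs; peak flood depth as the output to learn.
X = rng.random((2000, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.05, 2000)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X, y)  # trained once, offline, on the slow model's results

# During a storm, scoring takes milliseconds instead of hours of simulation.
forecast_inputs = rng.random((105 * 105, 3))  # one row per grid cell
depths = surrogate.predict(forecast_inputs)
print(depths.shape)
```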

When AI meets the real-world accuracy of digital twins, Government agencies can quickly and effectively plan for worst-case scenarios in flood emergencies. These hybrid models can pinpoint areas on a large scale that are susceptible to complex issues during a flood, such as trash accumulation. These models can also outline in real time the cause and effect of decisions made by Government officials. In other words, if officials make infrastructure changes to solve a water challenge in one location, a hybrid model can show whether the solution inadvertently created additional challenges elsewhere.

According to experts in the field, collaboration is the key to flood management success. This synergetic approach is echoed in the use of digital twins and AI predictive models. Using historical and real-time data to simulate future events will ultimately allow Government officials to plan and respond to flood scenarios safely and effectively.

Discover how digital twins and accompanying technology can transform flood management by watching SAS’s webinar “From Sensors to Digital Twins: Real-Time Flood Management with Data & AI”.

How Snyk Helps Federal Agencies Prepare for the Genesis Mission Era of AI-Driven Science

The White House’s new Genesis Mission signals a major shift in how the Federal Government plans to accelerate discovery using AI, national lab computing power and massive scientific datasets. For agencies, this means a new wave of AI-enabled research programs, expanded public-private collaboration and a significant increase in the use of software, data pipelines and cloud resources to drive scientific missions. Along with this opportunity comes a simple truth: AI can only accelerate discovery if the software behind it is secure.

That’s where Snyk supports agencies: by enabling developers, researchers and mission teams to build secure software from the start, aligned with Secure by Design principles and modern Federal cybersecurity expectations.

Why the Genesis Mission introduces new security pressure for agencies

  • More data and more experimentation: Agencies will be unlocking and federating large datasets, many of which were never designed for AI-scale access. This increases exposure risk and requires tighter control over data lineage, permissions and software pipelines.
  • More partners in the loop: National labs, other Federal entities, commercial cloud providers, academia and industry vendors will work together under new shared platforms. That means expanded software supply chains and stricter expectations for transparency and assurance.
  • Faster development cycles: Scientific models, simulations, AI workflows and data-processing pipelines will move at an accelerated pace. Traditional security review processes won’t be able to keep up.
  • Higher stakes for misconfigurations: AI workloads rely heavily on containers, open source, infrastructure-as-code and cloud services. A single misconfiguration in a pipeline, cluster or library could compromise sensitive scientific work.

Federal agencies need secure-by-default pipelines that can scale with mission speed.

Four ways Snyk supports Federal agencies

1.  Secures software supply chains for AI, HPC and scientific workloads

Snyk gives agencies visibility into all components used in AI and research software, including open source libraries, containers and IaC templates. Snyk helps agencies identify vulnerable or risky components early, enforce approved library lists, automatically produce Software Bills of Materials (SBOMs) and meet Federal supply chain expectations such as Secure by Design, NIST SP 800-218 and Executive Order (EO) 14028.

2.  Embeds security for CI/CD, model-training and data pipelines

Whether agencies run pipelines in cloud environments, HPC clusters or hybrid infrastructures, Snyk integrates directly into:

  • GitHub, GitLab and Bitbucket
  • Jenkins, GitHub Actions and CircleCI
  • Container build systems
  • AI/ML workflow orchestration tools

This ensures vulnerabilities, misconfigurations and secrets are caught before software reaches production environments or shared research platforms.
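
A pipeline step wiring these scans together might look like the sketch below. `snyk test`, `snyk container test` and `snyk iac test` are documented Snyk CLI commands, but the Python wrapper, image name and fail-the-build policy are assumptions, not Snyk's prescribed setup.

```python
import subprocess
import sys

def run_scan(cmd: list[str]) -> bool:
    """Run one Snyk CLI scan; a non-zero exit code indicates findings or an error."""
    result = subprocess.run(cmd)
    return result.returncode == 0

# Hypothetical gate for a research pipeline: fail the build if any scan finds issues.
scans = [
    ["snyk", "test", "--severity-threshold=high"],                           # dependencies
    ["snyk", "container", "test", "registry.example.gov/ml-train:latest"],   # image
    ["snyk", "iac", "test", "deploy/"],                                      # IaC templates
]

if not all(run_scan(scan) for scan in scans):
    sys.exit("Security gate failed: fix findings before promoting to shared platforms.")
```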

3.  Cloud and container security for AI compute systems

The Genesis Mission relies on secure computing—including cloud GPUs, containerized workloads, HPC clusters, research VMs and hybrid infrastructure. Snyk helps agencies detect misconfigurations across cloud infrastructure, secure container images powering AI workloads, scan infrastructure-as-code templates before deployment and protect credentials and secrets used in research pipelines.

4.  Practical “secure by design” implementation

Snyk meets developers and researchers inside the tools they already use by providing automated fix recommendations, IDE plug-ins for secure coding, policy enforcement for high-risk components and fast feedback loops that align with Agile R&D teams. This operationalizes Secure by Design in a way that won’t slow down experiments, model training or rapid prototyping.

Why this matters for Federal missions

The Genesis Mission is accelerating scientific discovery across:

  • Clean energy and grid modernization
  • Fusion and advanced nuclear research
  • Materials science and critical minerals
  • Biotechnology and health research
  • Quantum, semiconductors and microelectronics
  • Climate modeling and Earth science

These domains rely heavily on software, data and compute, and securing those systems is essential for mission success.

Snyk helps agencies build software that is secure by design, fully transparent and aligned with Federal AI safety expectations. With Snyk’s AI Security Platform, agencies gain end-to-end protection across code, dependencies, containers and AI pipelines, enabling trustworthy and compliant AI systems that can power the next generation of U.S. Government missions, exactly what the Genesis Mission requires.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Snyk, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Securing Federal Access: How Identity Visibility Drives Zero Trust Success

Federal agencies face mounting pressure to implement Zero Trust frameworks but often struggle with where to begin. The answer lies in understanding identity telemetry, the insights into who has access to what and how threat actors exploit identities to gain privilege and maintain persistence. Because threat actors increasingly steal credentials and pose as legitimate users, Federal agencies can no longer rely solely on detection tools that trigger alarms after attacks succeed. This shift demands a new approach to Zero Trust, one beginning with comprehensive visibility into the identity attack surface before implementing controls.

From Detection to Prevention

Federal agencies have historically relied on detection-based security tools like Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR) solutions to identify malicious activity. While still valuable, these reactive tools are inadequate on their own, as adversaries compromise both human and non-human credentials and operate undetected for extended periods. Using legitimate credentials, threat actors gain persistent access and escalate permissions while evading detection.

The missing component is proactive threat hunting that maps potential identity exposures before they are exploited. This requires aggregating identity data across the entire IT environment and analyzing how threat actors could leverage poor identity hygiene, such as overprivileged accounts, insecure Virtual Private Networks (VPNs), exposed passwords and secrets, blind spots in third-party access and dormant identities, to gain access to critical assets and data. Zero Trust relies on knowing exactly how identities function across the environment; without this visibility, agencies are essentially enforcing Zero Trust policies blindly, wasting time and money that should go toward protection capabilities that are resilient against cyberattacks. Identity telemetry should guide agencies in building proactive identity security and mature Zero Trust capabilities.

The Fragmented Identity Visibility Problem

Federal environments span on-prem Active Directory (AD), multicloud environments, federated identity providers and numerous Software-as-a-Service (SaaS) applications. The confusion, overlap and complex interactions across these environments are difficult to track, limiting end-to-end visibility into hidden attack paths for lateral movement and escalation.

These “unknown trust relationships” or “paths to privilege” stem from:

  • Identity provider misconfigurations replicating over-permissive access
  • Nested group memberships granting indirect privileges
  • Federation relationships enabling cross-domain escalation
  • Generic “all access” group rights elevating unprivileged users

These exposures exist between siloed systems and provide entry points for threat actors. Addressing this requires aggregating identity data, mapping cross-domain relationships and calculating the true privilege of human, non-human and AI-based identities. This exposes blind spots and transforms an unknowable attack surface into a manageable identity landscape.

True Privilege Calculation

Traditional privilege assessments focus on group membership and cloud role assignments but miss factors like nested groups, cloud application ownership, misconfigured identity providers and federation pathways. These elements often elevate an identity’s privilege far beyond what surface-level audits reveal.

BeyondTrust, Securing Federal Access blog, embedded image, 2025

True privilege calculation measures an identity’s effective and actual privilege across all connected systems and domains, including relationships, configurations and escalation pathways. For example, an identity that appears low-privileged in AD may federate into Identity and Access Management (IAM) roles and elevate its privilege. This visibility supports key Zero Trust decisions, such as:

  • What access should be continuously verified
  • Where least privilege enforcement has gaps
  • Which accounts are most likely to be targeted
  • Where to place micro-segmentation boundaries

Given the scale and complexity of modern Federal environments, manual calculation is impossible. Automated solutions must continuously analyze permissions, relationships and identity provider configurations while mapping escalation paths. True privilege calculation transforms Zero Trust from theory into an actionable strategy that carries agencies from implementation to Zero Trust maturity.
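
Conceptually, true privilege is a reachability computation over a graph of trust relationships. The sketch below walks hypothetical edges (nested groups, federation, role assumption) to find every privilege an identity can ultimately reach; it illustrates the idea, not BeyondTrust's implementation.

```python
from collections import deque

# Hypothetical trust edges: "A -> B" means holding A lets you obtain B.
edges = {
    "user:jdoe": ["group:helpdesk"],
    "group:helpdesk": ["group:it-ops"],        # nested group membership
    "group:it-ops": ["role:cloud-admin"],      # federation into a cloud IAM role
    "role:cloud-admin": ["privilege:delete-prod-data"],
}

def true_privilege(identity: str) -> set[str]:
    """Breadth-first walk of trust relationships to compute effective privilege."""
    reached, queue = set(), deque([identity])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

# jdoe looks low-privileged in AD, yet transitively reaches a destructive privilege.
print(true_privilege("user:jdoe"))
```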

Critical Attack Vectors

Dormant privileged accounts, often left active after personnel departures or reorganizations, retain elevated permissions long after their use ends. Threat actors frequently identify and reactivate these accounts to move laterally and maintain persistence using legitimate credentials. Effective identity hygiene requires:

  • Continuous monitoring of new dormant accounts
  • Cleanup of existing dormant or misconfigured accounts and standing privilege
  • Behavioral detection to flag unusual privilege escalation attempts or unexpected activity

Identity security cannot be a point-in-time exercise. Without visibility and a proactive approach, configurations drift and dormant accounts accumulate. Agencies must continuously identify dormant privileged accounts and immediately investigate if one suddenly becomes active, one of the strongest indicators of compromise. Continuous visibility transforms identity hygiene from a reactive, alert-based approach into actionable telemetry for proactive threat hunting against current and known attack risks.
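
A first-pass dormant-account sweep is straightforward to express. The sketch below flags enabled privileged accounts with no recent sign-in; the 90-day threshold and record layout are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

DORMANT_AFTER = timedelta(days=90)  # illustrative threshold; tune per agency policy

def find_dormant_privileged(accounts: list[dict]) -> list[str]:
    """Flag enabled, privileged accounts with no sign-in inside the dormancy window."""
    cutoff = datetime.now(timezone.utc) - DORMANT_AFTER
    return [
        a["name"] for a in accounts
        if a["enabled"] and a["privileged"] and a["last_sign_in"] < cutoff
    ]

accounts = [
    {"name": "svc-backup", "enabled": True, "privileged": True,
     "last_sign_in": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"name": "jsmith", "enabled": True, "privileged": True,
     "last_sign_in": datetime.now(timezone.utc)},
]
print(find_dormant_privileged(accounts))  # ['svc-backup']; investigate if it wakes up
```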

The Expanding Identity Attack Surface

The identity attack surface extends far beyond human users to service principals, cloud workloads, Application Programming Interface (API) credentials and automated systems, collectively known as “non-human identities.” These accounts often have elevated privileges but lack safeguards like password rotation, Multi-Factor Authentication (MFA) or behavioral analytics, creating significant security gaps.

Agentic AI introduces new challenges. Unlike traditional service accounts, AI agents act autonomously based on their instructions, tools and knowledge sources. A seemingly low-privilege agent could escalate privileges by interacting with other agents, creating complex escalation chains. Understanding an AI agent’s effective capability, not just its assigned permissions, is essential.

AI and non-human identity risks come from interconnected relationships. An AI agent running as a cloud workload may access secrets, interact with privileged systems or execute commands across domains. True privilege calculation for these entities requires mapping downstream actions they could initiate. Federal agencies need governance designed for non-human identities and AI agents, including:

  • True privilege calculation of escalation paths
  • Comprehensive inventory across all systems
  • Monitoring of potential blast radius as AI adoption accelerates
  • Context and knowledge of AI use and where agents are being deployed
  • Visibility into AI agent instructions, tools and knowledge sources

Investing in identity visibility now prepares agencies for emerging challenges as AI adoption becomes more prevalent.

Federal agencies must secure hybrid environments against adversaries who exploit identities rather than technical vulnerabilities. The path forward requires shifting from reactive detection to proactive threat hunting, eliminating fragmented visibility, measuring true privilege across all domains, maintaining continuous identity hygiene and extending visibility to non-human identities and agentic AI. Identity telemetry provides the data foundation needed for Zero Trust maturity, showing agencies where and how to strengthen their security posture.

Discover how comprehensive identity visibility drives Zero Trust maturity by watching BeyondTrust and Optiv+Clearshark’s webinar, “Securing Federal Access: Identity Security Insights for a Zero Trust Future.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including BeyondTrust, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Better Together: How Eightfold.ai and Empyra Are Transforming Government Workforce Services

Proven Results:

  • 30% faster job placement (Washington, D.C.)
  • 36% increase in engagement among underserved populations
  • 65% increase in training module completions
  • 71% increase in job applications submitted
  • 30% faster reemployment for Reemployment Services and Eligibility Assessment (RESEA) participants (Florida Department of Commerce)

State and Local Governments are rethinking the way they connect job candidates with meaningful employment. Eightfold.ai and Empyra have partnered to combine advanced AI-driven talent matching with configurable case management. Together, they deliver a unified, secure environment that helps agencies modernize operations, improve employment outcomes and provide a more efficient, personalized experience for both job seekers and employers.

AI-Driven Workforce Modernization

Eightfold.ai was built by former Google and Facebook engineers to be the world’s most intelligent talent-matching platform, matching candidates to the right jobs. Trained on more than a decade of global labor market data, its neural network goes beyond keyword searches, interpreting:

  • Skills
  • Roles
  • Qualifications

The platform continuously learns from interactions across job seekers, employers and case managers, moving agencies away from time-consuming resume screening toward a data-driven system that identifies talent by capability and aptitude.

Through its Career Navigator, Eightfold.ai provides:

  • Visual career pathways
  • Transferable skill identification
  • Gap analysis
  • Training from State-approved providers

This transforms the labor exchange into a dynamic environment that supports both immediate reemployment and long-term career mobility.

Integrated Case Management and Service Delivery

Empyra’s myOneFlow consolidates workforce and social service delivery into a single, configurable platform. By capturing data once and reusing it across workflows, the system reduces duplication and frees staff to focus on engagement rather than paperwork. Designed as a Commercial Off-The-Shelf (COTS), Workforce Innovation and Opportunity Act (WIOA)-ready system, myOneFlow includes Participant Individual Record Layout (PIRL) and performance reporting out of the box. As funding and requirements evolve, its flexible architecture allows agencies to tailor:

  • Forms
  • Eligibility rules
  • Intake processes

Eightfold.ai, Better Together Eightfold.ai and Empyra blog, embedded image, 2025

The platform streamlines the participant journey by automating:

  • Intake
  • Enrollment
  • Eligibility determination
  • Business rules to identify program fit
  • Referrals to partners for housing, education, training or employment resources

Participants can complete tasks and upload documents from any device via the mobile app. Beyond WIOA, myOneFlow also supports:

  • Apprenticeship management
  • Temporary Assistance for Needy Families (TANF)
  • Supplemental Nutrition Assistance Program (SNAP) tracking
  • Domestic-violence programs
  • Municipal grants

By consolidating these functions, myOneFlow gives agencies flexibility to manage multiple programs efficiently within one adaptive system.

“Better Together” Integration Between Eightfold.ai and Empyra

Together, Eightfold.ai and myOneFlow create a single front door for job seekers, case managers and employers. Unified identity management with Single Sign-On (SSO) and shared data models ensure information remains consistent across platforms.

Here’s how the integration works:

  • Participants register in myOneFlow
  • Their intake data automatically populates into Eightfold.ai
  • The AI engine generates skills assessments, job recommendations and career pathways
  • Applications, training and other activities sync back into myOneFlow

Case managers gain a real-time view of participant progress without manual entry, while employers benefit from accurate candidate matching and streamlined recruiting tools. Behind the scenes, Eightfold.ai and Empyra operate a coordinated support model and incorporate agency feedback into joint product enhancements.

Trust, Security and Compliance

Both platforms meet rigorous standards, including:

  • FedRAMP
  • Tx-RAMP
  • System and Organization Controls 2 (SOC 2)
  • Department of Defense (DoD) Impact Level 4 (IL4)
  • International Organization for Standardization (ISO) 27001

They also adhere to evolving regulations, including the European Union Artificial Intelligence (EU AI) Act, Texas Department of Information Resources (DIR) requirements and other State privacy laws.

myOneFlow enforces:

  • Role-based access controls
  • Audit logging
  • Deduplication safeguards

Building the Future of Workforce Modernization

Eightfold.ai and Empyra’s myOneFlow demonstrate what is possible when AI, automation and integration align with mission-driven goals. The integrated solution helps agencies:

  • Deliver faster services
  • Improve job matching accuracy
  • Reduce administrative burden
  • Strengthen engagement
  • Maximize limited resources

Workforce organizations can now create a more responsive, equitable and efficient system, empowering job seekers, supporting employers and advancing mission outcomes.

Watch the full webinar, “AI-Centric Innovation: Modernizing Workforce Agencies,” to see the full demonstration of Eightfold.ai and Empyra’s integrated approach to workforce transformation.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Eightfold.ai and Empyra, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Emerging Trends in Artificial Intelligence and What They Mean for Risk Management

Artificial intelligence (AI) is a valuable risk management tool, but it also poses a degree of risk. As AI becomes more prevalent, it opens new possibilities while simultaneously raising new concerns.

Federal agencies and contractors have a responsibility to closely monitor developments in the scope and capacity of AI. In this article, we’ll explore some of the top emerging trends in AI, and we’ll explain their impact on risk management strategies for Federal agencies and contractors.

What are the Emerging Trends in Artificial Intelligence?

With its enormous capacity for pattern recognition, prediction and analytics, AI can be instrumental in identifying risk and driving solutions. Here are some of the most promising new AI applications for risk management.

Predictive Analytics

Predictive AI is widely used in applications like network surveillance, fraud detection and supply chain management. Here’s how it works.

Machine learning (ML) tools, a subset of AI, rapidly “read” and analyze reams of historical data to find patterns. Historical data can mean anything from network traffic patterns to consumer behavior. Since machine learning tools can analyze vast datasets, they find subtle patterns that might not be evident to a human analyst working slowly through the same data. This kind of predictive analysis helps organizations identify risks before they escalate.

Once ML identifies the patterns, it can use them to make highly specific and accurate predictions. That can mean, for example, predicting website traffic and preventing unexpected outages due to increased usage. It can also mean spotting the warning signs of new computer viruses or identifying phishing emails.
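
To ground the idea, the sketch below fits an unsupervised anomaly detector to synthetic network-traffic features and scores new observations; a real deployment would feed it actual telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic historical features: requests per minute and average payload size (MB).
normal_traffic = rng.normal(loc=[100, 2.0], scale=[10, 0.2], size=(5000, 2))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)  # learn what "typical" looks like from history

# Score new observations: -1 marks an anomaly worth investigating.
new_traffic = np.array([[102, 2.1],     # ordinary
                        [900, 45.0]])   # possible outage precursor or attack
print(detector.predict(new_traffic))    # e.g., [ 1 -1]
```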

Generative AI

Generative AI (GenAI) is often discussed in terms of its content creation capabilities, but the technology also has enormous potential for risk management.

GenAI can rapidly synthesize data from a wide range of inputs and use it to create a coherent analysis. For example, GenAI can make predictions about supply chain disruptions, based on weather patterns, geopolitical issues and market demand. Many generative systems use natural language processing to interpret context, summarize information and support more accurate decisions.

GenAI can also come up with solutions to the problems it identifies. The technology excels at breaking down silos and drawing connections between different sources of information. For example, the technology can suggest alternative shipping routes or suppliers in the event of a supply chain disruption.

It’s worth noting that, like any other AI tool, generative AI does best with human oversight. GenAI analysis should never be accepted at face value. Rather, employees can use it as an inspiration or a jumping-off point for further planning. Human expertise should always play a key role in the planning process, since GenAI isn’t always accurate.

Adaptive Risk Modeling

AI tools are capable of continuous learning and real-time analysis. Those capabilities lay the groundwork for adaptive risk modeling.

Adaptive risk modeling allows for a dynamic understanding of risk factors, instead of the traditional static approach. The old way of calculating risk relied on identifying patterns in historical data and using a linear model with a simple cause-and-effect analysis.

In contrast, adaptive risk modeling uses machine learning and deep learning to continually scan data sets for changes or new patterns. Instead of a static, linear model, AI risk modeling can build a dynamic model and continually update it.
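
Incremental learners make the "continually update" step concrete. The sketch below refreshes a risk classifier as new labeled events arrive, using scikit-learn's `partial_fit` on synthetic data; the features and labels are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(7)
model = SGDClassifier(loss="log_loss")

# Initial fit on a first batch of historical events (features -> risk label 0/1).
X0 = rng.random((500, 4))
y0 = (X0[:, 0] + X0[:, 3] > 1.0).astype(int)
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# As new events stream in, the model updates instead of being rebuilt from scratch.
for _ in range(10):
    X_new = rng.random((50, 4))
    y_new = (X_new[:, 0] + X_new[:, 3] > 1.0).astype(int)
    model.partial_fit(X_new, y_new)

print(model.predict(rng.random((3, 4))))
```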

Use Cases for AI Risk Management Tools

AI is widely used across the Public and Private Sectors to predict and manage risk, including risk introduced by third parties. Here are some of the common use cases.

Federal Government Use Cases

A growing number of Federal agencies use AI tools to increase efficiency in their work. Some are beginning to pilot AI-powered agents to automate routine tasks and provide real-time recommendations for employees.

  • The Department of Labor leverages AI chatbots to answer inquiries about procurement and contracts.
  • The Patent and Trademark Office uses AI to rapidly surface important documents.
  • The Centers for Disease Control and Prevention uses AI tools to track the spread of foodborne illnesses.

Financial Sector

Lenders increasingly use AI tools to assess the risk of issuing loans. Because AI can collect and analyze large data sets, the technology provides a comprehensive way to assess creditworthiness.

Financial institutions also use AI for fraud detection. AI tools can spot patterns in typical customer behavior and identify anomalies that could indicate fraud.

Insurance Industry

Insurance companies frequently use AI for underwriting, including risk assessment and risk mitigation. AI is also a useful tool for processing claims and searching for fraud.

Generative AI is also often used to provide frontline services to customers. For example, chatbots answer straightforward questions, provide triage and refer more complex questions to human operators.

Risks Associated with AI Technologies

AI is a valuable tool in mitigating risk, but it’s important to be aware of the risks the tools themselves present.

Chief among those risks is the problem of algorithmic bias. AI and ML excel at identifying patterns and codifying them. However, this means that AI is only as good as the data that feeds it. If AI/ML tools are trained on biased data, the tools will codify the biases embedded in that data. AI/ML takes the unspoken prejudices in datasets and turns them into hard and fast rules, which inform every decision going forward.

Agencies must also consider data privacy implications when AI tools process sensitive or regulated data. If human operators do not question the algorithm’s output, there’s a real risk that bias will become deeply ingrained, causing lasting harm to individuals and organizations and even creating regulatory compliance issues.

Addressing AI Bias

Federal agencies and contractors must understand exactly how AI tools are being deployed. Operators should frequently look “under the hood” of the AI algorithms, asking questions about how the outputs are generated. Opening the “black box” allows organizations to check for bias and prevent it from being codified. Strong data ethics practices ensure that AI systems are trained on fair, transparent and accountable data sources.

It’s best practice to implement a cross-functional AI governance council or team to oversee artificial intelligence. It’s also important to work closely with a trusted partner who has experience integrating AI into a GRC platform. The best AI tools help humans manage a Federal agency efficiently. The question is how to make the most of the available technology while mitigating the associated risk.

From Pilot to Production: Operationalizing Healthcare GenAI in Secure Multicloud Environments

Healthcare organizations are under immense pressure from shrinking margins, tightening regulations, rising patient expectations and increasingly complex data environments. While generative artificial intelligence (GenAI) has emerged as a powerful tool, most healthcare systems still struggle to move from experimentation to measurable outcomes. Leaders are asking the same questions: Where do we start? How do we ensure security and compliance? How fast should the Return on Investment (ROI) appear?

The answer is not simply selecting a model; it is building a strategy and infrastructure that transform AI from a promising pilot into an enterprise engine for clinical, operational and financial improvement.

Start With High-Impact Use Cases that Deliver Early ROI

The path to operationalizing GenAI begins with use cases that are narrow enough to implement quickly, but meaningful enough to prove value. Start where measurable gains are most attainable, such as document processing, contract review, claims analysis, compliance workflows and call center optimization.

One of the strongest early candidates is Protected Health Information (PHI) de-identification, where AI can accelerate research access while protecting privacy. Many organizations are also applying GenAI to claims review, using models to flag missing attachments, coding inconsistencies or errors that commonly drive costly denials. With first-pass denial rates hovering in the 17–25% range industry-wide, automating this analysis can generate immediate financial return.

These targeted wins build executive confidence, secure budget and create organizational momentum, which is critical before expanding to more complex clinical or patient-facing scenarios.

Build Trust by Grounding the Model in Your Own Data

Accuracy and trust determine whether healthcare AI is adopted or ignored. General-purpose models are not sufficient for healthcare, where language is deeply nuanced and context dependent. Instead, organizations should ground GenAI in their own governed data sources, such as Electronic Health Records (EHRs), Customer Relationship Management (CRM) platforms, care summaries, research documents or internal policies.

To achieve this, many leaders are adopting Retrieval-Augmented Generation (RAG) with vector databases, which allows models to pull precise information from internal systems in real time. Vector databases are a foundational accelerator, enabling faster, more accurate retrieval across structured and unstructured data. This approach delivers three business advantages:

  1. Higher accuracy and confidence in model responses
  2. Stronger control of PHI and sensitive data
  3. Traceability, which is essential for audits, appeals and clinical validation

Grounding the model in an organization’s own data turns GenAI from a creative tool into a trusted operational system.
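
A toy version of retrieval-augmented generation clarifies the mechanics: embed governed documents, retrieve the nearest passages for a query and hand only those passages to the model. The bag-of-words embedding here is a deliberately crude stand-in for a production embedding model and vector database.

```python
import numpy as np

DOCS = [
    "Policy 12: PHI may only be shared with authorized care teams.",
    "Claims with missing attachments are denied on first pass.",
    "Appeals must be filed within 60 days of denial.",
]

VOCAB = sorted({w for d in DOCS for w in d.lower().split()})

def embed(text: str) -> np.ndarray:
    """Crude bag-of-words vector; production systems use learned embeddings."""
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents whose vectors are most similar to the query."""
    q = embed(query)
    scores = [
        (np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-9), doc)
        for doc in DOCS for d in [embed(doc)]
    ]
    return [doc for _, doc in sorted(scores, reverse=True)[:k]]

context = retrieve("why was this claim denied?")
prompt = f"Answer using only this context:\n{context}\nQuestion: why was the claim denied?"
print(prompt)  # the grounded prompt is what gets sent to the LLM
```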

Use a Secure Multicloud Strategy to Reduce Risk and Increase Agility

John Snow Labs, Operationalizing Healthcare GenAI blog, embedded image, 2025

To operationalize GenAI responsibly, healthcare organizations should design for security, compliance and flexibility from day one. By separating PHI and non-PHI workloads, a multicloud strategy helps healthcare organizations:

  • Isolate sensitive data to minimize breach impact and simplify governance
  • Reduce lock-in risk and leverage the strengths of different cloud platforms
  • Tap into more innovative options, since each cloud offers unique AI tooling
  • Optimize cost and performance by matching workloads to the right environment

Multicloud design also supports stronger compliance postures by enabling auditability, identity controls, monitoring and bias/hallucination safeguards, all of which must be proven to regulators and accrediting bodies.

Avoid “Pilot Purgatory” and Build a Path to Production

Many healthcare AI programs fail not because the technology underperforms, but because the organization never assigns ownership or a path to scale. To prevent “pilot purgatory,” where pilots drag on without measurable outcomes, organizations should:

  • Create a defined production roadmap before the pilot begins
  • Empower a cross-functional AI Center of Excellence (COE) to own outcomes
  • Secure both clinical and administrative stakeholders
  • Treat GenAI as an enterprise capability, not a one-off project

This shift enables the same investment to support multiple use cases, expanding impact while lowering cost per interaction over time.

Continuously Measure, Optimize and Expand

An operational GenAI program is never “set it and forget it.” It is important to continuously track Key Performance Indicators (KPIs) to guide optimization and justify expansion. Recommended KPIs include:

  • Cost per interaction
  • Accuracy and confidence
  • Time saved per task or workflow
  • Time to response (latency and model speed)
  • User satisfaction (providers, staff and patients)

By evaluating these metrics regularly, healthcare organizations can expand from early wins to enterprise scale, from research and development to patient support, revenue cycle, compliance and beyond.

Align People, Data and Infrastructure for AI Success

Technology alone is not the determining factor of AI success in healthcare; alignment is. Success requires a shared vision from leadership, responsible data groundwork, a secure multicloud foundation and continuous measurement to maintain trust and value. With the right approach, GenAI can improve patient satisfaction, strengthen trust, accelerate research and innovation, reduce administrative burden and deliver measurable ROI in weeks rather than years.

Carahsoft and John Snow Labs help healthcare leaders accelerate this journey, combining secure infrastructure, domain-specific healthcare AI and proven deployment models. To explore how your organization can operationalize GenAI safely and effectively, watch the full webinar, “Lessons Learned from Harnessing Healthcare Generative AI in a Hybrid Multi-Cloud Environment.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including John Snow Labs, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

How AI-Powered Records Management Transforms Government Operations from Reactive to Proactive

Government agencies today must manage an unprecedented volume of digital documents. As digital transformation accelerates across Federal, State and Local agencies, the challenge is not just managing more content; it is extracting actionable intelligence while maintaining compliance, security and operational efficiency. Artificial intelligence (AI) has transformed enterprise records management, replacing manual processes with automated, predictive systems that improve decision making and resource allocation across the mission.

AI-Powered Auto-Classification for Document Management

Effective classification is the foundation of records management, and AI has altered this traditionally complex process. Modern AI models can accurately classify structured documents like invoices or purchase orders with as few as ten training examples. This represents a major improvement over legacy systems that required zonal Optical Character Recognition (OCR) configuration, separator pages and precise layout specifications.

AI models employ multiple techniques, including computer vision, text extraction and contextual reasoning, to identify document types with high confidence. Unlike older pattern-matching tools, today’s AI adapts to variations in structure and format, making classification scalable for agencies managing thousands of document types across different departments.

Training has also become more accessible. Agencies can simply label documents, point the AI to those examples and generate a working classification system. Accuracy improves over time through human review, and confidence scores allow agencies to set thresholds and route low-confidence results to human reviewers.
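The confidence-threshold routing described above can be sketched in a few lines. In the example below, classify_document() is a hypothetical stand-in for the trained classifier, and the threshold is an arbitrary value an agency would tune against its own review capacity.

```python
# Sketch of confidence-threshold routing at ingestion. classify_document()
# is a hypothetical stand-in for the AI classifier described above.
import random

DOC_TYPES = ["invoice", "purchase_order", "permit_application"]
CONFIDENCE_THRESHOLD = 0.85  # agency-set threshold (illustrative value)

def classify_document(text: str) -> tuple[str, float]:
    """Placeholder: a real classifier returns a type and a confidence score."""
    return random.choice(DOC_TYPES), random.uniform(0.5, 1.0)

def ingest(text: str) -> dict:
    doc_type, confidence = classify_document(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        queue = "auto"           # apply retention schedule and permissions
    else:
        queue = "human_review"   # low confidence: route to a reviewer
    return {"type": doc_type, "confidence": round(confidence, 2), "queue": queue}

print(ingest("INVOICE #1047 ... total due: $1,250.00"))
```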

Accurate classification directly impacts record retention, access control and content discovery. Without it, employees cannot find necessary documents, retention schedules are misapplied and access permissions become inconsistent. Robust AI-powered classification at ingestion ensures downstream processes function as intended.

Intelligent Data Extraction from Structured and Unstructured Documents

Once documents are classified, agencies must extract meaningful information, an area where AI delivers transformative capabilities. Modern machine learning models locate key-value pairs anywhere on a document, using contextual understanding rather than fixed positions or label formats. AI can also answer natural-language queries, mirroring human logic. If a person can explain how they would find a piece of information, that logic can be written as a prompt for the model.

These capabilities work across structured and unstructured formats. Work that previously required specialized staff and years of experience can now be configured with simple prompts. Confidence scoring ensures accuracy. When the model is uncertain, items are routed to human reviewers. This combines automation’s speed and consistency with human judgment where needed.
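As a rough sketch of prompt-driven extraction with confidence routing, the example below treats extract() as a hypothetical model call; the field names, prompts and threshold are illustrative, not a specific product's API.

```python
# Sketch: prompt-driven key-value extraction with confidence routing.
# extract() is a hypothetical stand-in for an extraction model call.
def extract(document: str, prompt: str) -> dict:
    """Placeholder: a real model locates the value by context, not position."""
    return {"value": "2026-03-31", "confidence": 0.91}

# Prompts written the way a person would explain where to find the value
FIELDS = {
    "permit_expiration": "Find the permit expiration date, often near 'valid until'.",
    "applicant_name": "Find the applicant's full legal name on the first page.",
}

def process(document: str, threshold: float = 0.8) -> dict:
    results = {}
    for field, prompt in FIELDS.items():
        hit = extract(document, prompt)
        # Route uncertain extractions to a human instead of a system of record
        results[field] = hit if hit["confidence"] >= threshold else "NEEDS_REVIEW"
    return results

print(process("PERMIT ... valid until 2026-03-31 ..."))
```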

For Government agencies, AI extraction improves compliance and reporting. Licensing applications, permit requests, inspection reports and countless other documents can be automatically processed, with extracted data populating systems of record and triggering workflows. Information once locked in PDFs or paper becomes structured, searchable and actionable.

AI-Driven Deduplication and Data Quality Management


Duplicate data is a productivity drain and a compliance risk. Redundant documents accumulate quickly across forwarded emails, multiple repositories and inconsistent processes. This creates unnecessary work, consumes storage and complicates compliance with data retention requirements.

Legacy deduplication relied on hash matching, but this fails to detect most real-world duplicates. AI-based deduplication analyzes document classifications and extracted metadata to determine true duplicates based on agency-defined rules. If those elements match, the system flags the items as duplicates regardless of differences in headers or formatting.
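A simplified version of this rule-based matching is grouping records on classification plus extracted metadata, as in the sketch below. The key fields shown are hypothetical examples of agency-defined rules.

```python
# Sketch: content-based deduplication keyed on classification and extracted
# metadata rather than file hashes. Field choices are hypothetical rules.
from collections import defaultdict

records = [
    {"id": 1, "type": "invoice", "vendor": "Acme", "number": "1047", "total": "1250.00"},
    {"id": 2, "type": "invoice", "vendor": "Acme", "number": "1047", "total": "1250.00"},  # forwarded copy
    {"id": 3, "type": "invoice", "vendor": "Acme", "number": "1048", "total": "300.00"},
]

DEDUP_KEYS = ("type", "vendor", "number", "total")  # agency-defined rule

# Group by the metadata key; a hash comparison would miss records 1 and 2
# because forwarding changed their headers and byte content.
groups = defaultdict(list)
for r in records:
    groups[tuple(r[k] for k in DEDUP_KEYS)].append(r["id"])

duplicates = {key: ids for key, ids in groups.items() if len(ids) > 1}
print(duplicates)  # {('invoice', 'Acme', '1047', '1250.00'): [1, 2]}
```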

This content-based deduplication reduces storage costs, simplifies retention compliance and minimizes cybersecurity exposure. Retaining unnecessary data increases legal risk during litigation and discovery and expands the attack surface for cyber threats. AI allows agencies to retain only necessary data, reducing operational and security liabilities.

Enhanced Workflow Automation with Predictive Analytics

High-quality, classified and extracted data unlocks the full value of predictive analytics, enabling Government agencies to shift from reactive problem-solving to proactive planning. This capability uses historical data to predict outcomes, such as numeric values, binary decisions or multiclass classifications.

Platforms like VisualVault allow agencies to train predictive models without data science expertise. Professional services teams configure the models, demonstrate how they work and train agency employees to manage them.

Public sector agencies already use predictive analytics to forecast safety incidents at licensed facilities. Historical inspection data, comprising conditions, violations and corrective actions, allows models to identify facilities with a high probability of future serious events. When inspections reveal patterns associated with increased risk, inspectors and licensing officials are automatically alerted, enabling early intervention.
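A facility-risk model of this kind can be prototyped in a few lines with standard tooling. The sketch below uses scikit-learn with entirely hypothetical features, training data and alert threshold; it illustrates the binary-classification case mentioned above.

```python
# Sketch: a binary facility-risk model trained on historical inspection
# features. All data, feature choices and thresholds are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Features per facility: [open_violations, days_since_inspection, repeat_findings]
X = [[0, 30, 0], [5, 400, 3], [1, 90, 0], [7, 365, 4], [0, 60, 1], [6, 500, 2]]
y = [0, 1, 0, 1, 0, 1]  # 1 = serious incident within the following year

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a facility and alert inspectors when predicted risk crosses a threshold
risk = model.predict_proba([[4, 380, 2]])[0][1]
if risk > 0.7:
    print(f"ALERT: elevated risk ({risk:.0%}); schedule early inspection")
```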

Predictive analytics also strengthens performance management. Agencies can compare their metrics against industry norms, seeing where they stand within their sector. This supports investment decisions and enables precise tracking of improvement outcomes.

Agencies should focus on automating controls that meaningfully reduce risk, not simply on increasing the percentage of automated controls. High-impact controls should be prioritized for automation and predictive monitoring to maximize security and operational benefits.

For decision makers, predictive analytics delivers the context and accuracy needed to make fast, informed decisions across claims, vendor management, resource allocation and strategic planning.

Digital Transformation as Organizational Necessity

Despite rapid technological advancement, human expertise remains essential. AI systems are designed to operate behind the scenes and do not require users to understand machine learning (ML) concepts. Small teams define the required outcomes (what must be classified, what data must be extracted and what predictions will improve decisions), while professional services teams configure the system accordingly.

AI adoption does not inherently reduce headcount. Historically, technology shifts transform jobs rather than eliminate them. Workflows move from manual tasks like sorting documents to higher-value work such as analysis, decision making and innovation. Employees focus on defining requirements, reviewing AI outputs and applying human judgment where it adds value.

The Measurable Value of AI Implementation

Agencies can begin their journey by identifying their key performance indicators and the business outcomes they want to improve:

  • What pain points cause the most friction?
  • Where do backlogs accumulate?
  • Which processes create the most risk?

This ensures implementation is tied to measurable outcomes. AI success depends on clear requirements, proper process, staff training and strong governance. Agencies should adopt AI incrementally, starting with high-value use cases that deliver quick wins, then expanding into more complex workflows and predictive models as confidence grows.

Digitization mandates and the rise of generative AI have accelerated content creation beyond expectations, driving significant growth for platforms like VisualVault. The agencies that succeed will be those that embrace this shift and modernize now.

Watch VisualVault’s webinar “Employing AI to Bring Order and Value to Enterprise Records Management” to explore detailed demonstrations of AI-powered classification, extraction and predictive analytics capabilities that can transform your agency’s records management operations.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including VisualVault, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

From Data Silos to Life-Saving Decisions: How Technology is Transforming Healthcare Delivery

Healthcare organizations continuously navigate complex challenges as patient demand grows. Imaging volumes are rising faster than radiology capacity can scale. Public health agencies manage vast amounts of data across disconnected systems. Administrative tasks consume time that healthcare staff would rather spend on patient care.

These operational realities create opportunities for technology to make a meaningful difference. Leading healthcare organizations are already transforming these challenges into improved outcomes through strategic technology deployments enabled by streamlined procurement.

As The Trusted IT Solutions Provider for the Healthcare Industry™, Carahsoft offers a robust portfolio of healthcare technology solutions that make positive changes in the quality, safety and effectiveness of healthcare delivery systems. Streamlined procurement is available through Carahsoft’s reseller partners and numerous contract vehicles including GSA Schedule, NASPO ValuePoint, E&I Cooperative Services and The Quilt.

Key Takeaways:

  • AI diagnostics improve radiology efficiency by up to 40%, addressing the looming shortage of 42,000 radiologists by 2033.
  • Unified data platforms enable more than 80% of emergency departments to share real-time data with the CDC.
  • Automated workflows cut processing times by 50%, freeing staff for patient care.
  • Zero Trust security protects patient data while enabling hybrid cloud operations.
  • Streamlined procurement accelerates deployment from months to weeks.

AI-Powered Diagnostics: Addressing the Radiology Crisis

By 2033, the U.S. faces a shortage of up to 42,000 radiologists as imaging volumes rise 5% annually while residency positions increase just 2%.

At Northwestern Medicine, Dr. Mozziyar Etemadi, Clinical Director of Advanced Technologies, deployed a generative AI solution with Dell Technologies and NVIDIA that analyzes chest X-rays and generates draft reports instantaneously. Results: radiology efficiency improved by up to 40% without compromising diagnostic accuracy. The system flagged unexpected pneumothorax cases with 72.7% sensitivity and 99.9% specificity – lifesaving in emergency settings.

The technology runs on Dell PowerEdge XE9680 servers with NVIDIA H100 GPUs, deployed on premises to maintain HIPAA compliance. Northwestern is now developing predictive models for entire electronic records.

Public Health Surveillance: Rapid Outbreak Response

The CDC faced a critical challenge: essential health data trapped in disconnected silos across thousands of facilities.

The CDC’s partnership with Cloudera created a unified platform consolidating data from hospitals, laboratories and wastewater testing sites. More than 80% of non-federal emergency departments now send data to CDC, enabling comprehensive threat monitoring. When measles spiked across 15 states in 2025, officials had integrated visualizations within days.

The CDC’s One CDC Data Platform (1CDP), established in 2024, provides state, tribal, local and territorial agencies with streamlined access to core datasets and analytics, enabling faster disease trend detection and proactive strategies.

Accelerating Cancer Research Collaboration

The National Cancer Institute partnered with Google Cloud and Barnacle AI to introduce NanCI – a platform leveraging AI-driven recommendations to connect researchers with collaboration opportunities, literature and events. The solution demonstrates how AI extends beyond clinical care to accelerate scientific discovery across Government, Education and Healthcare sectors.

Operational Excellence: Freeing Caregivers to Care

Workforce coordination: Healthcare organizations use BlackBerry AtHoc, available through Carahsoft’s reseller network and contract vehicles, to streamline staffing and scheduling processes. The event management platform helps ensure personnel are coordinated efficiently across departments, which is essential for maintaining high standards of patient care.

Financial automation: Community Health Centers of Florida implemented Laserfiche’s enterprise content management system, cutting processing time by 50% and eliminating manual data entry. “I cannot fathom processing the current volume of invoices ‘the old way,’” said Dee Bradshaw, director of purchasing. “Laserfiche has cut our processing time in half.”

Every hour freed from administrative burdens is an hour caregivers get back to spend with their patients.

Modern, Secure Infrastructure

California Department of State Hospitals deployed Rubrik’s data management platform to integrate legacy systems with modern hybrid cloud environments. Rubrik’s Zero Trust Data Security framework minimized ransomware vulnerability while ensuring Federal compliance.  

St. Luke’s University Healthcare Network used Rubrik for faster backups, near-instant recovery and seamless hybrid IT integration, strengthening cyber defenses while freeing IT staff to support clinical teams.

Federal agencies, State and Local Governments and Education institutions face similar Zero Trust security and hybrid cloud integration requirements.

Explore Carahsoft’s cybersecurity solutions at www.carahsoft.com/solve/cybersecurity.

Meeting Demand at Scale

NYC Health + Hospitals deployed Snowflake’s Data Cloud, which consolidated separate data sources into a unified platform. This integration eliminated silos, provided real-time visibility and enabled data-driven decisions at the point of care for vulnerable populations.

The Carahsoft Advantage

For Healthcare Organizations: Faster access to solutions, simplified procurement through pre-negotiated contracts, integrated solutions across technology verticals, dedicated healthcare technology expertise. Simplify your organization’s procurement journey with Carahsoft.

For Reseller Partners: Opportunities to deliver comprehensive solutions, access to leading vendors through established contract vehicles, sales enablement and marketing support. Become a Carahsoft reseller partner.

For Technology Vendors: Expanded reach across Federal, State and Local Government, Education and Healthcare markets, simplified Healthcare sales through hundreds of contract vehicles. Join our partner ecosystem.

Ready to explore healthcare technology solutions?