Securing Air-Gapped and Classified Environments: The Importance of Customized Endpoint Protection

Military and intelligence agencies manage extremely sensitive information, and their missions often require them to operate in high-risk environments where even a slight security breach or exposure of sensitive data can have disastrous consequences for the mission and for national security. Their most vital networks are air-gapped—disconnected from the internet—so cloud-native security tools cannot protect these sensitive assets.

There are myriad reasons organizations choose to air-gap their systems. To effectively secure classified networks, weapons systems, tactical field systems and critical infrastructure, agencies face the challenge of building and maintaining a security strategy that spans endpoint, network and data security defenses and delivers strong cyber command and control without relying on internet connectivity.

No Single Strategy Is 100% Attack-Proof

Physically or logically air-gapping networks is a sound security strategy that defense, intelligence and civilian agencies employ to prevent access to sensitive or classified systems and operations. Yet isolation alone is not enough to ensure airtight security.

While air-gapping does reduce remote risk, it does not eliminate cyber risk. Air-gapped environments are designed to block external adversaries by isolating networks from the internet or a broader enterprise. But that isolation inevitably shifts risk toward the people who do have access: admins, operators, contractors, maintenance staff and trusted vendors. Solving one problem creates an unintended consequence: by blocking outsiders, risk becomes concentrated among insiders.

In most air-gapped environments, a small set of users has elevated access. Patching and updates are slow, and monitoring is limited or entirely local to the air-gapped network. Because the systems are isolated, physical presence is required, which increases insider impact. This makes insiders the most capable attack vector, whether through malicious or simply negligent behavior.

Air-gapped environments make heavy use of Universal Serial Bus (USB) drives, compact discs (CDs), digital versatile discs (DVDs), portable Solid-State Drives (SSDs) and sneakernet to move data from system to system and to apply updates and patches. Every transfer offers an opportunity for tampering, and these environments often lack the continuous monitoring needed to spot and stop these risks, resulting in threat detection gaps and delays. A mature data protection strategy is vital in air-gapped environments to thwart insider threats.

Because air-gapped systems rely entirely on local security measures, organizations must build layered, robust defenses to secure classified and sensitive assets. Local protection is everything, and for high-risk agencies that means monitoring and securing every single endpoint.

How Endpoint Protection Fills the Gaps

Endpoint protection is a broad term describing the technology and strategies used to secure end-user devices such as laptops, desktops and mobile devices. Because these devices see the most direct human interaction while housing vital data, they are exceptionally vulnerable to cyberattacks, even in air-gapped networks. To avoid critical breaches, security operators must be able to detect, prevent and respond to threats on each endpoint device in any given environment, especially when those devices interact with classified data.

Many organizations are turning to cloud-native endpoint security solutions that depend upon cloud-based machine learning for anomaly detection. While these tools may be suitable for some systems and environments, they depend on the cloud to function, so they cannot operate in disconnected or air-gapped environments. This opens security gaps, leaving devices vulnerable to cyberattacks and insider threats. Security teams can solve this problem by investing in endpoint protection approaches well-suited to air-gapped environments, enabling the visibility and control necessary to safeguard these critical systems.

The Benefits of Customizable Endpoint Protection

The ability to tailor security for nuanced policy control and security monitoring—including specific configurations for user roles, device types or classification levels—is crucial to a strong security posture. Endpoint security solutions must also operate independently of the cloud, running behavioral analytics even in fully isolated network enclaves.

When a threat occurs, detailed information is vital to protecting high-value assets, and robust air-gapped endpoint security systems enable rapid identification and threat mitigation while providing analysts with forensic data for investigation. This critical context also informs refinements to tailor and optimize the security approach for the environment’s unique mission.

Implementing a Zero Trust approach is just as vital to reducing threats in air-gapped environments as it is in internet-facing networks. Hardening systems so that only trusted software can execute enables the mission while denying the attacker.

Safeguarding data from insider threats is another important element of a mature air-gapped security operation. Data Loss Prevention (DLP) offers an important countermeasure against cybersecurity risk in air-gapped environments and allows security teams to ensure that organizational data is appropriately controlled.

Two Industry Leaders, One Unbreakable Line of Defense

Defense and intelligence agencies cannot afford security gaps left by tooling unsuited to defending disconnected networks and endpoints. They need an endpoint security suite built for their world—one that delivers advanced security capabilities to offline, high-stakes and mission-critical IT systems. Symantec and Carbon Black deliver exactly that: proven protection designed for Federal environments.

Both solutions are purpose-built for Government, but each brings its own strengths to the field:

  • Symantec delivers powerful static and dynamic malware analysis, plus built-in USB device management to automatically flag and quarantine malicious media. Symantec also offers an industry-leading DLP solution well-suited to air-gapped environments where ensuring data is properly safeguarded is mission-critical.
  • Carbon Black provides deep behavioral detection and advanced Endpoint Detection and Response (EDR), capturing forensic logs, watchlists tuned to the unique environment and analytics to support detailed investigations. Carbon Black also enables organizations to establish a positive security model with policy-based governance to ensure their systems only execute trusted software and use only allowed removable media devices.

Together, these two renowned brands offer proven, mature solutions that safeguard air-gapped environments and data, providing the visibility to identify threats and streamline investigations, and the protection policies to neutralize them. Their combined detection and granular visibility close the gaps left by cloud-reliant platforms—especially necessary in disconnected air-gapped and bandwidth-constrained environments—giving agencies the command and control they need to stop threats before they compromise the mission.

Watch the expert webinar to hear how Department of War guest speakers are addressing their endpoint security gaps.

Can’t get enough? Download NextGov/FCW’s latest article for deeper insights on the fight to secure air-gapped environments.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Broadcom, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Cybersecurity Automation: Strengthening Defense in a Resource-strapped Environment

If you work in a Government agency or as a contractor, you feel the pressure to do more with less every day. Security teams in particular must reduce response times despite limited staff and resources.

Cybersecurity automation offers a practical way to manage this workload without relying on constant hiring. Two core compliance frameworks shape this work: the National Institute of Standards and Technology (NIST) Cybersecurity Framework and the Cybersecurity Maturity Model Certification (CMMC).

NIST organizes cybersecurity activities into five functions: Identify, Protect, Detect, Respond and Recover. Meanwhile, CMMC defines maturity levels and specific practices across domains, such as access control, auditing and incident response. Let’s explore three cybersecurity automation strategies that help organizations strengthen their defense.

Why Cybersecurity Automation Is Important

For security teams, a typical day revolves around manual triage, status chasing and spreadsheet maintenance. Cybersecurity automation changes that by pulling live data from your systems to maintain current asset and risk inventories. This happens without asking people to update information by hand.

Under NIST’s Identify function, this means you can see where your critical assets live and how they change over time. On the other hand, the Protect function benefits from automated patching, network segmentation and access monitoring that do not depend on someone remembering to run a script.

Cybersecurity automation also strengthens access control. It enables security professionals to manage who joins, moves and leaves networks and critical systems. At the same time, it keeps user privileges aligned with each user’s role.
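
As a simple illustration, the sketch below reconciles account entitlements against a role matrix and flags leavers whose accounts remain enabled. The roles, entitlements and account fields are illustrative assumptions, not a reference implementation:

```python
from datetime import date

# Illustrative role matrix: the entitlements each role *should* have.
# Roles, entitlements and account fields are assumptions for this sketch.
ROLE_ENTITLEMENTS = {
    "analyst": {"siem_read", "ticketing"},
    "admin": {"siem_read", "siem_admin", "ticketing", "patch_mgmt"},
}

def reconcile(accounts):
    """Flag privilege drift and leavers whose accounts are still enabled."""
    findings = []
    for acct in accounts:
        if acct["status"] == "separated" and acct["enabled"]:
            findings.append((acct["user"], "leaver account still enabled"))
            continue
        allowed = ROLE_ENTITLEMENTS.get(acct["role"], set())
        excess = acct["entitlements"] - allowed
        if excess:
            findings.append((acct["user"], f"excess entitlements: {sorted(excess)}"))
    return findings

accounts = [
    {"user": "jdoe", "role": "analyst", "status": "active", "enabled": True,
     "entitlements": {"siem_read", "ticketing", "patch_mgmt"}},
    {"user": "rsmith", "role": "admin", "status": "separated", "enabled": True,
     "entitlements": {"siem_admin"}},
]

for user, issue in reconcile(accounts):
    print(f"{date.today()} REVIEW {user}: {issue}")
```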

This automation handles all your repeatable tasks, allowing you and your teams to spend more time on strategic risk decisions instead of routine checks. You can keep pace with security requirements even when headcount is tight.

Three Ways Cybersecurity Automation Reduces Risks

The main purpose of automating cybersecurity is to minimize threats and speed up recovery and incident response times. Below are three cybersecurity automation strategies that help achieve that:

Smarter Threat Detection

Staff shortages directly or indirectly impact almost every step of your security process, including your ability to watch for threats around the clock. With manual scans and periodic log reviews, your team is more likely to leave gaps that adversaries can exploit.

Cybersecurity automation closes those gaps by running continuous monitoring and correlating logs across your security operations center. It also surfaces patterns, such as unusual data transfers or login behaviors, that deserve a closer look. This lines up directly with the Detect function of the NIST Cybersecurity Framework, which emphasizes the timely discovery of cybersecurity events.

Automated anomaly detection can learn what “normal” looks like in your environment and instantly flag deviations for investigation. Your analysts don’t have to stare at dashboards all day. This way, you give your security operations greater depth without adding more people to the roster.

Additionally, CMMC reinforces this need through the Audit and Accountability (AU) domain, which expects systematic collection, protection and review of audit logs. Automation can collect and timestamp events, retain them according to policy and perform first-level analysis to find suspicious sequences. If you work in Government services, this type of threat detection raises your confidence that your team won't miss meaningful events.
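
To make that concrete, here is a minimal sketch of first-level analysis: events are timestamped on collection, then scanned for a classic suspicious sequence (repeated login failures followed by a success). The event fields and the threshold are illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime, timezone

def ingest(raw_events):
    """Timestamp events on collection, in the spirit of AU-domain requirements."""
    for event in raw_events:
        event["collected_at"] = datetime.now(timezone.utc).isoformat()
    return raw_events

def first_level_analysis(events, threshold=3):
    """Flag users whose repeated login failures are followed by a success."""
    failures = defaultdict(int)
    alerts = []
    for event in sorted(events, key=lambda e: e["time"]):
        if event["action"] == "login_failure":
            failures[event["user"]] += 1
        elif event["action"] == "login_success":
            if failures[event["user"]] >= threshold:
                alerts.append(f"{event['user']}: success after "
                              f"{failures[event['user']]} failed attempts")
            failures[event["user"]] = 0
    return alerts

events = ingest(
    [{"user": "ops1", "action": "login_failure", "time": f"2025-01-01T02:0{i}:00Z"}
     for i in range(4)]
    + [{"user": "ops1", "action": "login_success", "time": "2025-01-01T02:05:00Z"}]
)
for alert in first_level_analysis(events):
    print("SUSPICIOUS:", alert)
```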

Faster Incident Response and Recovery

Security teams feel the need for more staff members, especially when something goes wrong. A strong incident response plan only helps if you can execute it quickly and consistently.

Cybersecurity automation brings that plan into action by triggering playbooks as soon as a qualifying event occurs. The automated system instantly isolates affected systems, blocks malicious IP addresses and starts forensics workflows without waiting for someone to manually coordinate the steps.
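
A skeletal playbook trigger might look like the sketch below. The containment and ticketing functions are placeholders for whatever EDR, firewall and SOAR APIs an agency actually uses:

```python
QUALIFYING_TYPES = {"ransomware_behavior", "c2_beacon"}  # illustrative triggers

def isolate_host(host):
    print(f"[containment] isolating {host}")   # placeholder for an EDR API call

def block_ip(ip):
    print(f"[containment] blocking {ip}")      # placeholder for a firewall call

def open_forensics_case(alert):
    print(f"[forensics] case opened for {alert['id']}")  # placeholder for ticketing

def run_playbook(alert):
    """Run containment automatically when a qualifying event fires."""
    if alert["type"] not in QUALIFYING_TYPES:
        return  # non-qualifying alerts stay in the analyst triage queue
    isolate_host(alert["host"])
    for ip in alert.get("remote_ips", []):
        block_ip(ip)
    open_forensics_case(alert)

run_playbook({"id": "A-1042", "type": "c2_beacon",
              "host": "ws-17", "remote_ips": ["203.0.113.9"]})
```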

NIST’s Respond and Recover functions call for well-defined processes that you can rely on during stressful situations. With automation in place, regular backups can be created and tested on schedule. Automation also ensures that recovery is complete before systems return to production and that every step is logged for later review.

CMMC’s IR (Incident Response) domain expects this level of definition and documentation. This is much easier to achieve via automation than phone calls or ad hoc emails.

Compliance Made More Manageable

Agencies and contractors working in regulated environments must show that they consistently follow their stated controls. NIST SP 800-53 includes controls that can be supported through cybersecurity automation, such as CA-7 for continuous monitoring: automation runs assessments on a defined cadence and produces standardized reports for reviewers.

For security teams, this means they can rely on their automation solutions to maintain an up-to-date record of control performance.

CMMC evaluates maturity across Risk Assessment (RA) and Security Assessment (CA) domains. Automation can help you bring together threat, vulnerability and asset information to support cybersecurity activities without adding new layers of manual work. These include objective risk scoring, tracking remediation activities and monitoring third-party risks.

This automates the flow of information and helps security teams, auditors and compliance leaders easily interpret the results. You still own the decisions, but security automation makes it much easier to show how your program aligns with compliance requirements.

Choosing the Right Cybersecurity Automation Platform

If you’ve already started planning to put these strategies into practice, you may still be wondering which security automation platform to choose. As a general rule of thumb, look for a solution that:

  • Connects to your existing cybersecurity technology, tools and processes
  • Supports a range of users, from CISOs and risk officers to analysts and auditors
  • Offers no-code or low-code options, as they allow security teams to design and adjust workflows without requiring many development resources
  • Aligns with your long-term Governance, Risk and Compliance (GRC) strategy while giving you quick wins in log review, alert triage, incident response and control testing
  • Maps to NIST and CMMC requirements
  • Comes with robust reporting and a strong user experience

Onspring offers all these features to security teams. Their no-code GRC platform connects risk, compliance and audit data so you can manage policies, assessments and issues in one place.

The platform has strong social proof. Their customers report saving up to 70% of the time they once spent managing policies, consolidating 12% of their applications and improving overall business efficiency by 33%.

Onspring also automates repetitive tasks and displays everything on spreadsheets and dashboards for easy collaboration. It also has GovCloud support for Government environments, which enables CISOs, auditors and security teams to manage security-related functions on autopilot.

Connect with Onspring’s team to understand how their cybersecurity automation capabilities can reduce risks in diverse environments.

Discover How Automation Reduces Cybersecurity Risks

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Onspring, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Building a Security Strategy for Agentic AI: A Framework for State and Local Government

As artificial intelligence (AI) evolves from simple chatbots to autonomous agents capable of making independent decisions, State and Local Government agencies face a fundamental shift in cybersecurity requirements. Recent research shows 59% of State and Local Government respondents report already using some form of generative AI (GenAI), with 55% planning to deploy AI agents for employee support within the next two years. Yet this rapid adoption brings unprecedented security challenges. Because AI agents are designed to pursue goals autonomously, even adapting when security measures block their path, Chief Information Security Officers (CISOs) responsible for safeguarding Government networks must rethink traditional defenses and embrace a new security paradigm.

The Emergence of Agentic AI and Its Unique Security Challenges

AI agents represent a significant departure from the GenAI tools many agencies currently use. While traditional Large Language Models (LLMs) respond to prompts and return information, much like a support chatbot, AI agents and agentic systems are autonomous software programs that can plan, reflect, use tools, maintain memory and collaborate with other agents to achieve specific goals. These capabilities make them powerful productivity tools, but they also introduce failure modes that conventional software simply does not have. Unlike deterministic systems that crash when something goes wrong, AI agents can fail silently through collusion, context loss or corrupted cognitive states that propagate errors throughout connected systems. Research examining the real-world performance of AI agents found that single-turn tasks had a 62% failure rate, with success rates dropping even further in multi-turn scenarios.

When Veracode examined 100 LLMs performing programming tasks, these systems introduced risky security vulnerabilities 45% of the time. For State and Local agencies handling sensitive citizen data, managing critical infrastructure or supporting public safety operations, these error rates demand careful attention within robust security frameworks designed specifically for autonomous systems.

The New Security Paradigm: From Human-Centric to Agent-Inclusive Workforce Protection

AI agents, the newest coworkers, amplify insider threats by combining human-like autonomy with capabilities that exceed human limitations. While employees work within bounded motivation and finite skills, AI agents possess boundless motivation to achieve goals, uncapped skills that continuously improve and infinite willpower, constrained only by computational capacity. They will not simply make a single attempt to access a file, get blocked by a lack of permissions, get frustrated and go home for the day the way an employee might; they will persistently pursue objectives, potentially finding novel ways around security controls.

This transformation fundamentally changes the attack surface agencies must protect. Data breaches continue to impose significant financial and operational strain across the public sector, with many State and Local organizations reporting cumulative annual costs that reach into the millions. AI agents and agentic systems collapse traditional security models by operating as autonomous workforce members who interact with systems, access data and make decisions without direct human oversight. They can be compromised through threats specific to agentic AI, such as goal and intent hijacking, memory poisoning, resource exhaustion or excessive agency that can lead to unauthorized actions, all in pursuit of achieving programmed objectives. For Government agencies managing limited security budgets while protecting essential citizen services, this exponential increase in potential attack vectors demands proactive frameworks rather than reactive responses.

The AEGIS Framework: A Six-Domain Approach to Securing Agentic AI

Forrester’s Agentic AI Enterprise Guardrails for Information Security (AEGIS) framework provides a comprehensive approach that helps CISOs secure autonomous AI systems across six critical domains.

Governance, Risk and Compliance (GRC) establishes oversight functions and continuous monitoring capabilities. Identity and Access Management (IAM) addresses the unique challenge of agent identities, which combine characteristics of both machine and human identities. Data Security focuses on classifying data appropriately, implementing controls for agent memory and considering data enclaves and anonymization from a privacy perspective.

Application Security evaluates risks across the entire software development lifecycle (SDLC), implements Development, Security and Operations (DevSecOps) best practices, assesses the software supply chain and uses adversarial red team testing to validate safety and security controls. This domain focuses on embedding telemetry that gives security teams visibility into agent behavior and decision making. Threat Management ensures logs are accessible to security operations center analysts, enabling detection of behavioral anomalies and supporting forensic investigations. Zero Trust Architecture (ZTA) principles also apply, such as network access layer controls for agent workloads, continuous validation of the agent’s runtime environment and monitoring of agent-to-agent communication.

Underlying the framework are three core principles:

  • Least Agency extends least privilege to focus on decisions and actions, ensuring agents have only the minimum set of permissions, capabilities, tools and decision-making authority necessary to complete specific tasks.
  • Continuous Risk Management replaces periodic audits with ongoing evaluation of data, model and agent integrity.
  • Securing Intent requires organizations to understand whether agent actions are malicious or benign, intentional or unintentional, enabling proper investigation when failures occur.

Practical Implementation: Agent Onboarding and Governance

Forrester’s “Agent on a Page” concept offers a practical tool for bringing structure, consistency and alignment of AI agents to business goals before activation by outlining each agent’s owner, core purpose, operational context, knowledge base, specific tasks, functional alignment, tool access and cooperation patterns. This documentation gives business stakeholders clear success criteria, while security teams use it as a threat model and as input into Forrester’s AEGIS framework to identify control gaps, missing guardrails and vulnerabilities and to establish baselines against which agent behavior can be validated.

Similar to employee onboarding, agents require explicit programming on compliance frameworks, data privacy restrictions, scope of work and organizational norms. They must understand cooperation boundaries, operational context, knowledge sources and collaboration patterns. Agencies already deploying agents may have some of this documentation; those just starting should have business owners and security teams collaborate to develop it.

Building a Secure Foundation for Autonomous AI

State and Local Government agencies stand at a critical inflection point. AI agents promise significant productivity gains across employee support, investigation assistance and first responder capabilities. Yet deploying these autonomous systems without appropriate security frameworks creates unacceptable risks for organizations managing citizen data and essential public services. The AEGIS framework provides a comprehensive approach to securing agentic AI before widespread deployment, enabling agencies to realize benefits while maintaining security postures that citizens expect.

Organizations should begin by reviewing Forrester’s AEGIS framework to understand how it maps to existing compliance requirements such as the NIST AI RMF, the EU AI Act and the OWASP Top 10 for LLMs. Forming AI governance committees around AEGIS principles helps establish organizational buy-in. Discovery processes that identify which departments are exploring AI agents enable targeted control implementation. Agencies that establish strong foundations now position themselves to adopt autonomous AI confidently and securely.

To explore the complete AEGIS framework and gain deeper insights into securing agentic AI for State and Local Government, watch Carahsoft’s full webinar featuring Forrester, “Full Throttle, Firm Control: Build Your Trust Strategy for Agentic AI.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Forrester, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

FedRAMP 20x: Modernizing Cloud Security Authorization Through Automation and Continuous Assurance

FedRAMP authorization has long required extensive documentation, static point-in-time assessments and timelines of 18–24 months. This approach has slowed innovation for Federal agencies seeking secure cloud solutions and for vendors pursuing Government contracts.

FedRAMP 20x reimagines authorization through automation, machine-readable evidence and continuous monitoring, shifting compliance from document-driven processes to data-driven assurance. It also reshapes how Federal agencies, Cloud Service Providers (CSPs) and Third-Party Assessment Organizations (3PAOs) collaborate to secure Government environments.

The Shift from REV 5 to 20x

Traditional FedRAMP authorization follows a linear, document-heavy process where CSPs write extensive System Security Plans (SSPs), undergo annual assessments and exchange static artifacts with 3PAOs. FedRAMP 20x maintains the same security requirements from National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 Revision 5 (REV 5) but transforms how evidence is validated. Instead of screenshots or single-moment spreadsheets, 20x uses logs, configuration files and automated integrations that reflect real-time security posture. This enables continuous assurance, with systems remaining audit-ready and controls validated through actual telemetry and configuration baselines.

The result is a more dynamic, risk-focused model that moves beyond top-down waterfall processes that often obscure security conditions.

Modernized Compliance

FedRAMP 20x requires robust compliance automation built on five pillars:

  1. Control normalization
  2. Engineering
  3. Infrastructure
  4. Evidence generation
  5. Reporting

Controls must be technically engineered into Continuous Integration/Continuous Deployment (CI/CD) pipelines, an approach often described as “compliance-as-code.” Supporting infrastructure must generate evidence in a reliable, machine-readable format such as NIST Open Security Controls Assessment Language (OSCAL) or JavaScript Object Notation (JSON) so CSPs, agencies and 3PAOs can share data rather than documents. This approach transforms compliance work from writing narratives and taking screenshots to building monitoring systems that continuously validate control effectiveness.
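
As a rough illustration of sharing data rather than documents, the sketch below emits a single timestamped, machine-readable finding as JSON. The structure is OSCAL-inspired but deliberately simplified; it is not the full OSCAL assessment-results schema, and the control, target and threshold are illustrative:

```python
import json
from datetime import datetime, timezone

def emit_finding(control_id, target, satisfied, observation):
    """Build one timestamped, machine-readable finding for a control check."""
    return {
        "control-id": control_id,
        "target": target,
        "status": "satisfied" if satisfied else "not-satisfied",
        "observation": observation,
        "collected": datetime.now(timezone.utc).isoformat(),
        "method": "automated",
    }

# Illustrative automated check: is the session timeout within policy?
config = {"session_timeout_minutes": 15}
finding = emit_finding(
    control_id="ac-12",
    target="web-frontend",
    satisfied=config["session_timeout_minutes"] <= 15,
    observation=f"session timeout set to {config['session_timeout_minutes']} minutes",
)
print(json.dumps(finding, indent=2))  # shareable with agency and 3PAO reviewers
```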

While artificial intelligence (AI) tools are emerging as assistants, the foundation remains consistent instrumentation and automated evidence collection. Organizations must invest in platforms capable of real-time logging, automated vulnerability scanning, Application Programming Interface (API)-driven evidence collection and continuous control monitoring, moving beyond spreadsheets or basic ticketing systems to true automated Governance, Risk and Compliance (GRC).

Maintaining Security Standards

FedRAMP 20x reduces the barriers to entry for small CSPs. Under the traditional REV 5 model, many providers faced prohibitive costs and timelines, often waiting indefinitely for Joint Authorization Board (JAB) review without agency sponsorship. The 20x pilot eliminates this sponsor requirement and accelerates review: organizations using automation have achieved authorization in six months.


RegScale, leveraging its own platform with features like automated evidence collection and AI-assisted control validation, completed its SSP and evidence in approximately three weeks and achieved full authorization within six months of audit start. This acceleration does not weaken security; rather, continuous monitoring and real-time evidence provide greater assurance than annual snapshots.

Another benefit of the 20x approach is that the machine-readable evidence can be reused for other frameworks, enabling a “certify once and comply many” approach across:

  • System and Organization Controls 2 (SOC 2)
  • International Organization for Standardization (ISO) 27001
  • Cloud Security Alliance (CSA) Security, Trust, Assurance and Risk (STAR)

For cloud-native organizations already operating with infrastructure as code (IaC) and automated pipelines, 20x aligns Federal compliance with modern DevSecOps practices.

Cultural and Organizational Change Management

The greatest challenge with FedRAMP 20x is cultural, not technological. Many organizations already possess the necessary tools but continue to rely on manual processes built over 15–20 years. Shifting to automation requires replacing “no hope” environments, where compliance is viewed as endless documentation, with the recognition that more efficient, sustainable operations are both possible and necessary.

Teams must actively retrain themselves to think operationally rather than as checklist validators. The transition also requires breaking down silos between security and compliance teams, agencies and 3PAOs, ensuring all stakeholders rely on the same real-time telemetry instead of debating the meaning of outdated screenshots. Federal agencies must also educate risk owners and embrace new evidence formats and methodologies. Ultimately, this is as much an organizational transformation as a technical one.

Continuous Monitoring and Real-Time Risk Management

FedRAMP 20x redefines relationships between CSPs, agencies and 3PAOs by replacing periodic reviews with continuous monitoring and near real-time risk visibility. Instead of exchanging PDFs, stakeholders share dashboards, datasets and evidence repositories that all parties can access. Auditors can review assessments based on evidence collected minutes or hours ago rather than relying on outdated artifacts.

Continuous monitoring supports 20x by allowing agencies to track configuration drift, Plan of Action and Milestones (POA&M) status and control effectiveness on regular cadences. The definition of “continuous” varies by control type; some controls require minute-by-minute validation, while policy controls may be validated quarterly or semi-annually.
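
A cadence-aware monitor can be sketched in a few lines; the control-to-interval mapping below is purely illustrative:

```python
from datetime import datetime, timedelta, timezone

# "Continuous" means different things per control; intervals are illustrative.
CADENCE = {
    "configuration drift (CM-6)": timedelta(minutes=5),
    "vulnerability scan (RA-5)": timedelta(days=1),
    "policy review (PL-1)": timedelta(days=90),
}

def due_checks(last_run, now=None):
    """Return the controls whose validation interval has elapsed."""
    now = now or datetime.now(timezone.utc)
    never = datetime.min.replace(tzinfo=timezone.utc)
    return [control for control, interval in CADENCE.items()
            if now - last_run.get(control, never) >= interval]

last_run = {"configuration drift (CM-6)":
            datetime.now(timezone.utc) - timedelta(minutes=7)}
print(due_checks(last_run))  # drift check is overdue; the rest have never run
```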

For agencies, continuous assurance delivers better risk management capabilities, but only if they invest time in understanding how to interpret machine-readable formats such as OSCAL. Adoption varies, with some agencies already capable while others continue developing this capacity.

Moving Forward with Confidence

FedRAMP 20x is a strategic shift that aligns Federal authorization with modern DevSecOps, delivering faster innovation without reducing security standards. Since launching in March 2025, the pilot has processed 27 submissions and granted 13 authorizations, demonstrating scalability and viability.

With 20x, agencies gain improved risk visibility, reduced vendor timelines and access to innovative cloud solutions previously delayed by lengthy authorizations. However, success is not guaranteed. It requires adopting continuous assurance, investing in platforms that support machine-readable evidence and educating risk owners to interpret dynamic data. CSPs must centralize systems of record, instrument environments for continuous evidence collection and adopt standardized mappings that facilitate automation.  

The organizations that thrive will be those that use FedRAMP 20x as a motivator to replace outdated habits, engineer controls properly and embrace automation as an enhancement, not a replacement, of human expertise.

Discover how FedRAMP 20x is transforming Federal cloud authorization by watching the webinar, “FedRAMP 20x in Motion: What Early Results Mean for Federal Agencies,” featuring insights from RegScale and the CSA.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including RegScale, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Where the Physical Meets the Virtual: How Digital Twins Transform Flood Management

Roughly 2 billion people globally are at risk of flooding, and that number grows steadily every year. With flooding ranking as the most frequent and costly natural disaster, Federal, State and Local Governments must find ways to translate historical and real-time data into predictive models for emergency response. Digital twins powered by Artificial Intelligence (AI) substantially shorten simulation cycles, compare complex variables and precisely estimate future flood scenarios.

Challenges with Traditional Forecast Models

Examining the traditional forecast modeling process uncovers a series of disadvantages that keep early flood warning systems from functioning at maximum potential. These flood algorithms often have long modeling and simulation times, and in an emergency response analysts do not have the luxury of running simulations multiple times to make the model as accurate as possible. As forecasting areas get larger, these models need more time, more compute power and more analysts to run properly.

There are also issues with the data input into traditional forecast models. Analysts often face data that is unreliable or unavailable in the locales needed to issue an accurate early flood warning. Incorrect data can also be created when outdated models misrepresent geospatial features, and when this invalid data cannot be compared with other current or historical data points, the overall quality of the data decreases.

Along with the disadvantages of the traditional models themselves, the nature of flooding presents its own unique set of challenges for analysts. Freeform or uncontained water is an incredibly difficult element to measure properly, especially when it is in motion. Additionally, weather is often microregional: rainfall can differ drastically between two areas only hundreds of feet apart, making accurate assessment of rainfall across entire municipalities or counties nearly impossible.

To address these challenges, analysts examine existing models and determine how emerging technology can complement those frameworks to function in a more proactive manner.

Digital Twins and Flood Management

Predictive models are the cornerstone of emergency response, and merging the physical world with digital information is crucial to producing accurate information that public servants can use in the field. This is achieved by creating digital twins: virtual representations of real-life components and processes. In this case, a digital twin of an Area of Interest (AOI), such as a town or a county, can incorporate the many variables that contribute to a flood scenario, including elevation, stormwater infrastructure, commercial and residential construction, precipitation and natural geographic features. The model then forecasts flooding based on real-time and historical data.

To create a digital twin, analysts select a designated AOI and break it down into a gridded matrix. These cells can be as precise as 50 feet by 50 feet, depending on the resolution required for a specific model and the resolution of the available geospatial data. This way, the model can take into account the spatial variation of different geological data elements within the AOI, including infiltration rate and soil type. Relevant data points are often available through the town or county in question, or through the United States Geological Survey (USGS). Once compiled, this information can be processed in a Geographic Information System (GIS) to create a digital twin to be used in flood forecasting.
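
As a toy illustration of the gridding step, the sketch below breaks an AOI bounding box into 50-foot cells and attaches per-cell attributes. The sampling functions and attribute values are stand-ins; a real build would query GIS layers rather than formulas:

```python
CELL_FT = 50  # grid resolution; chosen per model needs and data resolution

def build_grid(width_ft, height_ft, elevation_fn, soil_fn):
    """Break an AOI bounding box into cells with per-cell model attributes."""
    grid = {}
    for i in range(width_ft // CELL_FT):
        for j in range(height_ft // CELL_FT):
            x, y = i * CELL_FT, j * CELL_FT
            soil = soil_fn(x, y)
            grid[(i, j)] = {
                "elevation_ft": elevation_fn(x, y),
                "soil_type": soil,
                "infiltration_in_per_hr": 0.3 if soil == "clay" else 1.0,
            }
    return grid

# Stand-in sampling functions; a real build would query raster and vector
# layers (USGS DEMs, soil surveys, stormwater infrastructure maps) in a GIS.
grid = build_grid(
    width_ft=500, height_ft=500,
    elevation_fn=lambda x, y: 100 + 0.01 * x - 0.02 * y,
    soil_fn=lambda x, y: "clay" if y < 250 else "loam",
)
print(len(grid), "cells;", grid[(0, 0)])
```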

The digital twin can remain static for some time, but it must be updated as the area it represents changes:

  • The landscape shifts due to urbanization
  • Structures are built and demolished
  • Coastlines and water levels change

The more data, and the more current the data, incorporated into the digital twin, the more accurate the flood forecast and the more efficient the emergency response will be.

The Power of the Hybrid Model

As stated previously, one of the major challenges facing public servants in flood management is the time it takes to run simulations. AI models, trained on pairs of simulation inputs and outputs, dramatically cut model run times during storm events. Analysts can produce forecasts in seconds or minutes, where the underlying hydraulic and hydrologic models previously took hours or days to run. This rapid prediction via model scoring means multiple AI models can run at once, taking uncertainty in multiple parameters into account, reconciling divergent flooding estimates and producing more accurate predictions.
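
This hybrid pattern can be sketched as a surrogate model: train a fast learner on input/output pairs from past hydraulic simulations, then score thousands of uncertain scenarios during an event. The sketch below uses scikit-learn with synthetic data purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in archive of past simulation runs: inputs are (rainfall in inches,
# storm duration in hours, tide in feet); output is peak flood depth in feet.
X = rng.uniform([0, 1, 0], [10, 24, 5], size=(500, 3))
y = 0.4 * X[:, 0] + 0.05 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 500)

# Train the fast surrogate once, offline, on the slow model's input/output pairs.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# During a storm, score many uncertain scenarios in moments instead of
# re-running the physics-based model for each one.
scenarios = rng.uniform([3, 4, 1], [7, 12, 3], size=(1000, 3))
depths = surrogate.predict(scenarios)
print(f"median {np.median(depths):.2f} ft, 95th pct {np.percentile(depths, 95):.2f} ft")
```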

When AI meets the real-world accuracy of digital twins, Government agencies can quickly and effectively plan for worst-case scenarios in flood emergencies. These hybrid models can pinpoint areas on a large scale that are susceptible to complex issues during a flood, such as trash accumulation. They can also outline in real time the cause and effect of decisions made by Government officials. In other words, if officials make infrastructure changes to solve a water challenge in one location, a hybrid model can show whether the solution inadvertently created additional challenges elsewhere.

According to experts in the field, collaboration is the key to flood management success. This synergetic approach is echoed in the use of digital twins and AI predictive models. Using historical and real-time data to simulate future events will ultimately allow Government officials to plan and respond to flood scenarios safely and effectively.

Discover how digital twins and accompanying technology can transform flood management by watching SAS’s webinar “From Sensors to Digital Twins: Real-Time Flood Management with Data & AI”.

How Snyk Helps Federal Agencies Prepare for the Genesis Mission Era of AI-Driven Science

The White House’s new Genesis Mission signals a major shift in how the Federal Government plans to accelerate discovery using AI, national lab computing power and massive scientific datasets. For agencies, this means a new wave of AI-enabled research programs, expanded public-private collaboration and a significant increase in the use of software, data pipelines and cloud resources to drive scientific missions. Along with this opportunity comes a simple truth: AI can only accelerate discovery if the software behind it is secure.

That’s where Snyk supports agencies—by enabling developers, researchers and mission teams to build secure software from the start, aligned to Secure by Design and modern Federal cybersecurity expectations.

Why the Genesis Mission introduces new security pressure for agencies

  • More data and more experimentation: Agencies will be unlocking and federating large datasets, many of which were never designed for AI-scale access. This increases exposure risk and requires tighter control over data lineage, permissions and software pipelines.
  • More partners in the loop: National labs, other Federal entities, commercial cloud providers, academia and industry vendors will work together under new shared platforms. That means expanded software supply chains and stricter expectations for transparency and assurance.
  • Faster development cycles: Scientific models, simulations, AI workflows and data-processing pipelines will move at an accelerated pace. Traditional security review processes won’t be able to keep up.
  • Higher stakes for misconfigurations: AI workloads rely heavily on containers, open source, infrastructure-as-code and cloud services. A single misconfiguration in a pipeline, cluster or library could compromise sensitive scientific work.

Federal agencies need secure-by-default pipelines that can scale with mission speed.

Four ways Snyk supports Federal agencies

1.  Secures software supply chains for AI, HPC and scientific workloads

Snyk gives agencies visibility into all components used in AI and research software—including open source libraries, containers and IaC templates. Snyk helps agencies identify vulnerable or risky components early, enforce approved library lists, produce SBOMs automatically and meet Federal supply chain expectations (Secure by Design, NIST 800-218, EO 14028, etc.).

2.  Embeds security for CI/CD, model-training and data pipelines

Whether agencies run pipelines in cloud environments, HPC clusters or hybrid infrastructures, Snyk integrates directly into:

  • GitHub / GitLab / Bitbucket
  • Jenkins, GitHub Actions, CircleCI
  • Container build systems
  • AI/ML workflow orchestration tools

This ensures vulnerabilities, misconfigurations and secrets are caught before software reaches production environments or shared research platforms.
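
For example, a pipeline gate might wrap the Snyk CLI as sketched below. This assumes `snyk test --json` is installed and authenticated on the runner, and the JSON field names shown may vary by CLI version:

```python
import json
import subprocess
import sys

def gate(max_allowed=None):
    """Fail the pipeline when the dependency scan reports severe issues."""
    max_allowed = max_allowed or {"critical": 0, "high": 0}
    # Assumes the Snyk CLI is installed and authenticated on the CI runner.
    proc = subprocess.run(["snyk", "test", "--json"],
                          capture_output=True, text=True)
    report = json.loads(proc.stdout)
    counts = {}
    for vuln in report.get("vulnerabilities", []):  # field may vary by version
        severity = vuln.get("severity", "unknown")
        counts[severity] = counts.get(severity, 0) + 1
    print("severity counts:", counts)
    for severity, limit in max_allowed.items():
        if counts.get(severity, 0) > limit:
            sys.exit(f"gate failed: {counts[severity]} {severity} issues "
                     f"(limit {limit})")

if __name__ == "__main__":
    gate()
```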

3.  Cloud and container security for AI compute systems

The Genesis Mission relies on secure computing—including cloud GPUs, containerized workloads, HPC clusters, research VMs and hybrid infrastructure. Snyk helps agencies detect misconfigurations across cloud infrastructure, secure container images powering AI workloads, scan infrastructure-as-code templates before deployment and protect credentials and secrets used in research pipelines.

4.  Practical “secure by design” implementation

Snyk meets developers and researchers inside the tools they already use by providing automated fix recommendations, IDE plug-ins for secure coding, policy enforcement for high-risk components, as well as fast feedback loops that align with Agile R&D teams. This operationalizes Secure-by-Design in a way that won’t slow down experiments, model training or rapid prototyping.

Why this matters for Federal missions

The Genesis Mission is accelerating scientific discovery across:

  • Clean energy and grid modernization
  • Fusion and advanced nuclear research
  • Materials science and critical minerals
  • Biotechnology and health research
  • Quantum, semiconductors and microelectronics
  • Climate modeling and Earth science

These domains rely heavily on software, data and compute, and securing those systems is essential for mission success.

Snyk helps agencies build software that is secure by design, fully transparent and aligned with Federal AI safety expectations. With Snyk’s AI Security Platform, agencies gain end-to-end protection across code, dependencies, containers and AI pipelines, enabling trustworthy and compliant AI systems that can power the next generation of U.S. Government missions–exactly what the Genesis Mission requires.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Snyk, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Securing Federal Access: How Identity Visibility Drives Zero Trust Success

Federal agencies face mounting pressure to implement Zero Trust frameworks but often struggle with where to begin. The answer lies in understanding identity telemetry, the insights into who has access to what and how threat actors exploit identities to gain privilege and maintain persistence. Because threat actors increasingly steal credentials and pose as legitimate users, Federal agencies can no longer rely solely on detection tools that trigger alarms after attacks succeed. This shift demands a new approach to Zero Trust, one beginning with comprehensive visibility into the identity attack surface before implementing controls.

From Detection to Prevention

Federal agencies have historically relied on detection-based security tools like Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR) solutions to detect malicious activity. While still valuable, these reactive tools are inadequate on their own: adversaries are compromising both human and non-human credentials and operating undetected for extended periods. Using legitimate credentials, threat actors gain persistent access and escalate permissions while evading detection.

The missing component is proactive threat hunting that maps potential identity exposures before they are exploited. This requires aggregating identity data across the entire IT environment and analyzing how threat actors could leverage poor identity hygiene, such as overprivileged accounts, insecure Virtual Private Networks (VPNs), exposed passwords and secrets, blind spots in third-party access and dormant identities, to gain access to critical assets and data. Zero Trust relies on knowing exactly how identities function across the environment; without this visibility, agencies are essentially enforcing Zero Trust policies blindly and wasting time and money by not investing in protection capabilities that are resilient against cyberattacks. Identity telemetry should guide agencies in building proactive identity security and mature Zero Trust capabilities.

The Fragmented Identity Visibility Problem

Federal environments span on-premises Active Directory (AD), multicloud environments, federated identity providers and numerous Software-as-a-Service (SaaS) applications. The confusion, overlap and complex interactions across these environments are difficult to track, limiting end-to-end visibility into the hidden attack paths used for lateral movement and escalation.

These “unknown trust relationships” or “paths to privilege” stem from:

  • Identity provider misconfigurations replicating over-permissive access
  • Nested group memberships granting indirect privileges
  • Federation relationships enabling cross-domain escalation
  • Generic “all access” group rights elevating unprivileged users

These exposures exist between siloed systems and provide entry points for threat actors. Addressing them requires aggregating identity data, mapping cross-domain relationships and calculating true privilege for human, non-human and AI-based identities. This exposes blind spots and transforms an unknowable attack surface into a manageable identity landscape.

True Privilege Calculation

Traditional privilege assessments focus on group membership and cloud role assignments but miss factors like nested groups, cloud application ownership, misconfigured identity providers and federation pathways. These elements often elevate an identity’s privilege far beyond what surface-level audits reveal.


True privilege calculation measures an identity’s effective and actual privilege across all connected systems and domains, including relationships, configurations and escalation pathways. For example, an identity that appears low-privileged in AD may federate into Identity and Access Management (IAM) roles and elevate its privilege. This visibility supports key Zero Trust decisions, such as:

  • What access should be continuously verified
  • Where gaps exist in least privilege enforcement
  • Which accounts are most likely to be targeted
  • Where to place micro-segmentation boundaries

Given the scale and complexity of modern Federal environments, manual calculation is impossible. Automated solutions must continuously analyze permissions, relationships and identity provider configurations while mapping escalation paths. True privilege calculation transforms Zero Trust from theory into an actionable strategy that carries agencies from implementation to maturity.
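
One common way to automate this is to model identities, groups, federation links and roles as a directed graph and treat “true privilege” as transitive reachability. The sketch below uses the networkx library with entities and edges invented purely for illustration:

```python
import networkx as nx

# A directed edge means "can reach or assume": group nesting, app ownership
# and federation all become edges. Entities here are illustrative.
g = nx.DiGraph()
g.add_edges_from([
    ("user:jdoe", "group:helpdesk"),                  # direct membership
    ("group:helpdesk", "group:workstation-admins"),   # nested group
    ("group:workstation-admins", "app:mdm-console"),  # application ownership
    ("user:jdoe", "idp:saml-federation"),             # federation relationship
    ("idp:saml-federation", "cloud-role:power-user"),
    ("cloud-role:power-user", "asset:citizen-db"),
])

def true_privilege(identity):
    """Everything transitively reachable, not just direct assignments."""
    return nx.descendants(g, identity)

print(sorted(true_privilege("user:jdoe")))
for path in nx.all_simple_paths(g, "user:jdoe", "asset:citizen-db"):
    print(" -> ".join(path))  # each path is a candidate escalation chain
```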

Critical Attack Vectors

Dormant privileged accounts, often left active after personnel departures or reorganizations, retain elevated permissions long after their use ends. Threat actors frequently identify and reactivate these accounts to move laterally and maintain persistence using legitimate credentials. Effective identity hygiene requires:

  • Continuous monitoring for newly dormant accounts
  • Cleanup of existing dormant or misconfigured accounts and standing privilege
  • Behavioral detection to flag unusual privilege escalation attempts or unexpected activity

Identity security cannot be a point-in-time exercise. Without visibility and a proactive approach, configurations drift and dormant accounts accumulate. Agencies must continuously identify dormant privileged accounts and immediately investigate if they suddenly become active, one of the strongest indicators of compromise. Continuous visibility transforms identity hygiene from a reactive, alert-based approach into actionable telemetry for proactive threat hunting around current and known attack risks.
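
A minimal triage loop for this hygiene pattern might look like the following sketch; the 90-day threshold and account fields are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

DORMANT_AFTER = timedelta(days=90)  # illustrative dormancy threshold

def triage(accounts, now=None):
    """Flag dormant privileged accounts; escalate if one suddenly wakes up."""
    now = now or datetime.now(timezone.utc)
    for acct in accounts:
        if not acct["privileged"]:
            continue
        idle = now - acct["last_logon"]
        if idle >= DORMANT_AFTER:
            yield ("DORMANT", acct["name"], f"idle {idle.days}d; review or disable")
        elif acct.get("was_dormant") and idle < timedelta(days=1):
            # A long-dormant privileged account turning active is a strong
            # indicator of compromise and warrants immediate investigation.
            yield ("INVESTIGATE", acct["name"], "dormant account became active")

now = datetime.now(timezone.utc)
accounts = [
    {"name": "svc-backup", "privileged": True,
     "last_logon": now - timedelta(days=200)},
    {"name": "adm-legacy", "privileged": True, "was_dormant": True,
     "last_logon": now - timedelta(hours=2)},
]
for level, name, reason in triage(accounts, now):
    print(level, name, "-", reason)
```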

The Expanding Identity Attack Surface

The identity attack surface extends far beyond human users to service principals, cloud workloads, Application Programming Interface (API) credentials and automated systems, collectively known as “non-human identities.” These accounts often have elevated privileges but lack safeguards like password rotation, Multi-Factor Authentication (MFA) or behavioral analytics, creating significant security gaps.

Agentic AI introduces new challenges. Unlike traditional service accounts, AI agents act autonomously based on their instructions, tools and knowledge sources. A seemingly low-privilege agent could escalate privileges by interacting with other agents, creating complex escalation chains. Understanding an AI agent’s effective capability, not just its assigned permissions, is essential.

AI and non-human identity risks come from interconnected relationships. An AI agent running as a cloud workload may access secrets, interact with privileged systems or execute commands across domains. True privilege calculation for these entities requires mapping downstream actions they could initiate. Federal agencies need governance designed for non-human identities and AI agents, including:

  • True privilege calculation of escalation paths
  • Comprehensive inventory across all systems
  • Monitoring of potential blast radius as AI adoption accelerates
  • Context and knowledge of AI use and where agents are being deployed
  • Visibility into AI agent instructions, tools and knowledge sources

Investing in identity visibility now prepares agencies for emerging challenges as AI adoption becomes more prevalent.

Federal agencies must secure hybrid environments against adversaries who exploit identities rather than technical vulnerabilities. The path forward requires shifting from reactive detection to proactive threat hunting, eliminating fragmented visibility, measuring true privilege across all domains, maintaining continuous identity hygiene and extending visibility to non-human identities and agentic AI. Identity telemetry provides the data foundation needed for Zero Trust maturity, showing agencies where and how to strengthen their security posture.

Discover how comprehensive identity visibility drives Zero Trust maturity by watching BeyondTrust and Optiv+Clearshark’s webinar, “Securing Federal Access: Identity Security Insights for a Zero Trust Future.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including BeyondTrust, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Better Together: How Eightfold.ai and Empyra Are Transforming Government Workforce Services

Proven Results:

  • 30% faster job placement (Washington, D.C.)
  • 36% increase in engagement among underserved populations
  • 65% increase in training module completions
  • 71% increase in job applications submitted
  • 30% faster reemployment for RESEA participants (Florida Department of Commerce)

State and Local Governments are rethinking the way they connect job candidates with meaningful employment. Eightfold.ai and Empyra have joined forces to combine advanced AI-driven talent matching with configurable case management. Together, they deliver a unified, secure environment that helps agencies modernize operations, improve employment outcomes and provide a more efficient, personalized experience for both job seekers and employers.

AI-Driven Workforce Modernization

Eightfold.ai was built by former Google and Facebook engineers to be the world’s most intelligent talent matching platform, matching candidates to the right jobs. Drawing on more than a decade of global labor market data, its neural network goes beyond keyword searches, interpreting:

  • Skills
  • Roles
  • Qualifications

The platform continuously learns from interactions across job seekers, employers and case managers, moving agencies away from time-consuming resume screening toward a data-driven system that identifies talent by capability and aptitude.

Through its Career Navigator, Eightfold.ai provides:

  • Visual career pathways
  • Transferable skill identification
  • Gap analysis
  • Training from State-approved providers

This transforms the labor exchange into a dynamic environment that supports both immediate reemployment and long-term career mobility.

Integrated Case Management and Service Delivery

Empyra’s myOneFlow consolidates workforce and social service delivery into a single, configurable platform. By capturing data once and reusing it across workflows, the system reduces duplication and frees staff to focus on engagement rather than paperwork. Designed as a Commercial Off-The-Shelf (COTS), Workforce Innovation and Opportunity Act (WIOA)-ready system, myOneFlow includes Participant Individual Record Layout (PIRL) and performance reporting out of the box. As funding and requirements evolve, its flexible architecture allows agencies to tailor:

  • Forms
  • Eligibility rules
  • Intake processes

The platform streamlines the participant journey by automating the following steps (a minimal rules sketch follows the list):

  • Intake
  • Enrollment
  • Eligibility determination
  • Business rules to identify program fit
  • Referrals to partners for housing, education, training or employment resources
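
myOneFlow’s rule configuration is not public, so the sketch below is hypothetical: it shows how configurable eligibility rules could drive automated eligibility determination and program fit from an intake record captured once. The thresholds are placeholders, not real program criteria.

```python
# Hypothetical sketch; myOneFlow's actual rule engine is not public.
# Configurable eligibility rules evaluate a single intake record and
# return the programs a participant may fit.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Participant:
    age: int
    income: float       # monthly household income
    household_size: int

# Each program is a name plus a predicate; thresholds are placeholders.
RULES: dict[str, Callable[[Participant], bool]] = {
    "WIOA Adult": lambda p: p.age >= 18,
    "SNAP": lambda p: p.income < 1_580 * p.household_size,
    "TANF": lambda p: p.household_size > 1 and p.income < 1_000,
}

def eligible_programs(p: Participant) -> list[str]:
    return [name for name, rule in RULES.items() if rule(p)]

print(eligible_programs(Participant(age=34, income=2_100, household_size=3)))
```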

Participants can complete tasks and upload documents from any device via the mobile app. Beyond WIOA, myOneFlow also supports:

  • Apprenticeship management
  • Temporary Assistance for Needy Families (TANF)
  • Supplemental Nutrition Assistance Program (SNAP) tracking
  • Domestic-violence programs
  • Municipal grants

By consolidating these functions, myOneFlow gives agencies flexibility to manage multiple programs efficiently within one adaptive system.

“Better Together” Integration Between Eightfold.ai and Empyra

Together, Eightfold.ai and myOneFlow create a single front door for job seekers, case managers and employers. Unified identity management with Single Sign-On (SSO) and shared data models ensure information remains consistent across platforms.

Here’s how the integration works (a simplified code sketch follows the list):

  • Participants register in myOneFlow
  • Their intake data automatically populates into Eightfold.ai
  • The AI engine generates skills assessments, job recommendations and career pathways
  • Applications, training and other activities sync back into myOneFlow
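
Neither platform publishes these APIs, so the sketch below uses invented function and field names purely to illustrate the round trip: intake data flows out to the matching engine, and results sync back into the case record.

```python
# Hypothetical data flow only: the real myOneFlow and Eightfold.ai APIs
# are not public, and every name here is invented for illustration.
def register_participant(case_mgmt: dict, intake: dict) -> str:
    pid = f"P{len(case_mgmt) + 1}"
    case_mgmt[pid] = {"intake": intake, "activities": []}
    return pid

def push_to_matching_engine(intake: dict) -> dict:
    # Stand-in for the AI engine: returns assessments and recommendations.
    return {"skills": intake["skills"], "recommended_jobs": ["Data Analyst"]}

def sync_back(case_mgmt: dict, pid: str, results: dict) -> None:
    # Case managers see progress without manual re-entry.
    case_mgmt[pid]["activities"].append(results)

case_records: dict = {}
pid = register_participant(case_records, {"name": "J. Doe", "skills": ["SQL"]})
sync_back(case_records, pid, push_to_matching_engine(case_records[pid]["intake"]))
print(case_records[pid])
```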

Case managers gain a real-time view of participant progress without manual entry, while employers benefit from accurate candidate matching and streamlined recruiting tools. Behind the scenes, Eightfold.ai and Empyra operate a coordinated support model and incorporate agency feedback into joint product enhancements.

Trust, Security and Compliance

Both platforms meet rigorous standards, including:

  • FedRAMP
  • Texas Risk and Authorization Management Program (TX-RAMP)
  • System and Organization Controls 2 (SOC 2)
  • Department of Defense (DoD) Impact Level 4 (IL4)
  • International Organization for Standardization (ISO) 27001

They also adhere to evolving regulations, including the European Union Artificial Intelligence (EU AI) Act, Texas Department of Information Resources (DIR) requirements and other State privacy laws.

myOneFlow enforces the following controls (a generic sketch follows the list):

  • Role-based access controls
  • Audit logging
  • Deduplication safeguards
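
As a generic illustration of the first two controls (not myOneFlow’s implementation), role-based access control paired with audit logging can be reduced to a permission table plus a logged decision:

```python
# Generic RBAC-with-audit-logging sketch; not myOneFlow's implementation.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")

PERMISSIONS = {
    "case_manager": {"read_case", "update_case"},
    "auditor": {"read_case"},
}

def access(user: str, role: str, action: str, record_id: str) -> bool:
    allowed = action in PERMISSIONS.get(role, set())
    logging.info("user=%s role=%s action=%s record=%s allowed=%s",
                 user, role, action, record_id, allowed)  # audit trail
    return allowed

access("alice", "auditor", "update_case", "case-42")  # denied and logged
```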

Building the Future of Workforce Modernization

Eightfold.ai and Empyra’s myOneFlow demonstrate what is possible when AI, automation and integration align with mission-driven goals. The integrated solution helps agencies:

  • Deliver faster services
  • Improve job matching accuracy
  • Reduce administrative burden
  • Strengthen engagement
  • Maximize limited resources

Workforce organizations can now create a more responsive, equitable and efficient system, empowering job seekers, supporting employers and advancing mission outcomes.

Watch the full webinar, “AI-Centric Innovation: Modernizing Workforce Agencies,” to see the full demonstration of Eightfold.ai and Empyra’s integrated approach to workforce transformation.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Eightfold.ai and Empyra, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Emerging Trends in Artificial Intelligence and What They Mean for Risk Management

Artificial intelligence (AI) is a valuable risk management tool, but it also poses a degree of risk. As AI becomes more prevalent, it opens new possibilities while simultaneously raising new concerns.

Federal agencies and contractors have a responsibility to closely monitor developments in the scope and capacity of AI. In this article, we’ll explore some of the top emerging trends in AI, and we’ll explain their impact on risk management strategies for Federal agencies and contractors.

What are the Emerging Trends in Artificial Intelligence?

With its enormous capacity for pattern recognition, prediction and analytics, AI can be instrumental in identifying risk and driving solutions. Here are some of the most promising new AI applications for risk management.

Predictive Analytics

Predictive AI is widely used in applications like network surveillance, fraud detection and supply chain management. Here’s how it works.

Machine learning (ML) tools, a subset of AI, rapidly “read” and analyze reams of historical data to find patterns. Historical data can mean anything from network traffic patterns to consumer behavior. Since machine learning tools can analyze vast datasets, they find subtle patterns that might not be evident to a human analyst working their way slowly through the same data. This kind of predictive analysis helps organizations identify risks before they escalate.

Once ML identifies the patterns, it can use them to make highly specific and accurate predictions. That can mean, for example, predicting website traffic and preventing unexpected outages due to increased usage. It can also mean spotting the warning signs of new computer viruses or identifying phishing emails.
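
As a simplified, self-contained example of this pattern, an unsupervised model such as scikit-learn’s IsolationForest can learn what “normal” traffic looks like and flag departures from it. The data below is synthetic.

```python
# Simplified anomaly-detection example on synthetic "network traffic"
# features (requests per minute, bytes per request).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[100, 500], scale=[10, 50], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([[102, 490], [900, 20_000]])  # second row is an outlier
print(model.predict(new_events))  # 1 = looks normal, -1 = flagged anomaly
```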

Generative AI

Generative AI (GenAI) is often discussed in terms of its content creation capabilities, but the technology also has enormous potential for risk management.

GenAI can rapidly synthesize data from a wide range of inputs and use it to create a coherent analysis. For example, GenAI can make predictions about supply chain disruptions, based on weather patterns, geopolitical issues and market demand. Many generative systems use natural language processing to interpret context, summarize information and support more accurate decisions.

GenAI can also come up with solutions to the problems it identifies. The technology excels at breaking down silos and drawing connections between different sources of information. For example, the technology can suggest alternative shipping routes or suppliers in the event of a supply chain disruption.

It’s worth noting that, like any other AI tool, generative AI does best with human oversight. GenAI analysis should never be accepted at face value. Rather, employees can use it as an inspiration or a jumping-off point for further planning. Human expertise should always play a key role in the planning process, since GenAI isn’t always accurate.

Adaptive Risk Modeling

AI tools are capable of continuous learning and real-time analysis. Those capabilities lay the groundwork for adaptive risk modeling.

Adaptive risk modeling allows for a dynamic understanding of risk factors, instead of the traditional static approach. The old way of calculating risk relied on identifying patterns in historical data and using a linear model with a simple cause-and-effect analysis.

In contrast, adaptive risk modeling uses machine learning and deep learning to continually scan data sets for changes or new patterns. Instead of a static, linear model, AI risk modeling can build a dynamic model and continually update it.
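
A minimal sketch of that idea, assuming scikit-learn’s incremental partial_fit interface and synthetic data: the model is updated batch by batch as new events arrive, rather than fit once to a static snapshot.

```python
# Online-learning sketch: the risk model updates continually as new
# batches of events arrive (synthetic data and labels for illustration).
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = benign event, 1 = risky event

rng = np.random.default_rng(1)
for _ in range(10):  # each loop simulates a newly arrived batch
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in risk label
    model.partial_fit(X, y, classes=classes)       # incremental update

print(model.predict(rng.normal(size=(3, 3))))  # scores fresh events
```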

Use Cases for AI Risk Management Tools

AI is widely used in the Public and Private Sectors to predict and manage risk, including risk introduced by third parties. Here are some of the common use cases.

Federal Government Use Cases

A growing number of Federal agencies use AI tools to increase efficiency in their work. Some are beginning to pilot AI-powered agents to automate routine tasks and provide real-time recommendations for employees.

  • The Department of Labor leverages AI chatbots to answer inquiries about procurement and contracts.
  • The Patent and Trademark Office uses AI to rapidly surface important documents.
  • The Centers for Disease Control uses AI tools to track the spread of foodborne illnesses.

Financial Sector

Lenders increasingly use AI tools to assess the risk of issuing loans. Because AI can collect and analyze large data sets, the technology provides a comprehensive way to assess creditworthiness.

Financial institutions also use AI for fraud detection. AI tools can spot patterns in typical customer behavior and identify anomalies that could indicate fraud.

Insurance Industry

Insurance companies frequently use AI for underwriting, including risk assessment and risk mitigation. AI is also a useful tool for processing claims and searching for fraud.

Generative AI is also often used to provide frontline services to customers. For example, chatbots answer straightforward questions, provide triage and refer more complex questions to human operators.

Risks Associated with AI Technologies

AI is a valuable tool in mitigating risk, but it’s important to be aware of the risks the tools themselves present.

Chief among those risks is the problem of algorithmic bias. AI and ML excel at identifying patterns and codifying them. However, this means that AI is only as good as the data that feeds it. If AI/ML tools are trained on biased data, the tools will codify the biases embedded in that data. AI/ML takes the unspoken prejudices in datasets and turns them into hard and fast rules, which inform every decision going forward.

Agencies must also consider data privacy implications when AI tools process sensitive or regulated data. If human operators do not question the algorithm’s output, there’s a real risk that bias will become deeply ingrained, causing lasting harm to individuals and organizations and even creating regulatory compliance issues.

Addressing AI Bias

Federal agencies and contractors must understand exactly how AI tools are being deployed. Operators should frequently look “under the hood” of the AI algorithms, asking questions about how the outputs are generated. Opening the “black box” allows organizations to check for bias and prevent it from being codified. Strong data ethics practices ensure that AI systems are trained on fair, transparent and accountable data sources.

It’s best practice to implement a cross-functional AI governance council or team to oversee artificial intelligence. It’s also important to work closely with a trusted partner who has experience integrating AI into a Governance, Risk and Compliance (GRC) platform. The best AI tools help humans manage a Federal agency with efficiency. The question is how to make the most of the available technology while mitigating the associated risk.

From Pilot to Production: Operationalizing Healthcare GenAI in Secure Multicloud Environments

Healthcare organizations are under immense pressure from shrinking margins, tightening regulations, rising patient expectations and increasingly complex data environments. While generative artificial intelligence (GenAI) has emerged as a powerful tool, most healthcare systems still struggle to move from experimentation to measurable outcomes. Leaders are asking the same questions: Where do we start? How do we ensure security and compliance? How fast should the Return on Investment (ROI) appear?

The answer is not simply selecting a model; it is building the strategy and infrastructure that transform AI from a promising pilot into an enterprise engine for clinical, operational and financial improvement.

Start With High-Impact Use Cases that Deliver Early ROI

The path to operationalizing GenAI begins with use cases that are narrow enough to implement quickly, but meaningful enough to prove value. Start where measurable gains are most attainable, such as document processing, contract review, claims analysis, compliance workflows and call center optimization.

One of the strongest early candidates is Protected Health Information (PHI) de-identification, where AI can accelerate research access while protecting privacy. Many organizations are also applying GenAI to claims review, using models to flag missing attachments, coding inconsistencies or errors that commonly drive costly denials. With first-pass denial rates hovering in the 17–25% range industry-wide, automating this analysis can generate immediate financial return.
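
Production de-identification relies on trained clinical NLP models, but a toy sketch conveys the mechanics: detect identifier patterns and replace them with placeholder tags. The MRN format below is invented, and real PHI coverage (names, dates, addresses) requires far more than regular expressions.

```python
# Toy PHI-redaction sketch; production systems use clinical NLP models,
# not just regular expressions. The MRN format here is invented.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN:\s*\d+\b"),
}

def deidentify(note: str) -> str:
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)  # replace identifier with a tag
    return note

print(deidentify("Patient MRN: 483921, SSN 123-45-6789, call 555-867-5309."))
```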

These targeted wins build executive confidence, secure budget and create organizational momentum, which is critical before expanding to more complex clinical or patient-facing scenarios.

Build Trust by Grounding the Model in Your Own Data

Accuracy and trust determine whether healthcare AI is adopted or ignored. General-purpose models are not sufficient for healthcare, where language is deeply nuanced and context dependent. Instead, organizations should ground GenAI in their own governed data sources, such as Electronic Health Records (EHRs), Customer Relationship Management (CRM) platforms, care summaries, research documents or internal policies.

To achieve this, many leaders are adopting Retrieval-Augmented Generation (RAG) with vector databases, which allows models to pull precise information from internal systems in real time. Vector databases are a foundational accelerator, enabling faster, more accurate retrieval across structured and unstructured data. This approach delivers three business advantages:

  1. Higher accuracy and confidence in model responses
  2. Stronger control of PHI and sensitive data
  3. Traceability, which is essential for audits, appeals and clinical validation

Grounding the model in an organization’s own data turns GenAI from a creative tool into a trusted operational system.
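
A minimal, self-contained sketch of the RAG pattern follows. Crude term-frequency vectors stand in for a trained embedding model and a managed vector database, but the flow is the same: retrieve the best-matching internal passage, then ground the prompt in it.

```python
# Minimal RAG sketch: term-frequency vectors stand in for a real
# embedding model and vector database; documents are invented examples.
import numpy as np

DOCS = [
    "Prior-authorization policy: imaging claims require attachment A-12.",
    "The cafeteria menu rotates weekly.",
]
VOCAB = sorted({word for doc in DOCS for word in doc.lower().split()})

def vectorize(text: str) -> np.ndarray:
    counts = np.array([text.lower().split().count(w) for w in VOCAB], float)
    norm = np.linalg.norm(counts)
    return counts / norm if norm else counts

INDEX = np.stack([vectorize(doc) for doc in DOCS])  # stand-in vector DB

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = INDEX @ vectorize(query)
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

context = retrieve("What attachment do imaging claims require?")
print(f"Answer using only this context: {context}\nQuestion: ...")
```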

Use a Secure Multicloud Strategy to Reduce Risk and Increase Agility


To operationalize GenAI responsibly, healthcare organizations should design for security, compliance and flexibility from day one. By separating PHI and non-PHI workloads, a multicloud strategy helps healthcare organizations:

  • Isolate sensitive data to minimize breach impact and simplify governance
  • Reduce lock-in risk and leverage the strengths of different cloud platforms
  • Tap into more innovative options, since each cloud offers unique AI tooling
  • Optimize cost and performance by matching workloads to the right environment

Multicloud design also supports stronger compliance postures by enabling auditability, identity controls, monitoring and bias/hallucination safeguards, all of which must be proven to regulators and accrediting bodies.

Avoid “Pilot Purgatory” and Build a Path to Production

Many healthcare AI programs fail not because the technology underperforms, but because the organization never assigns ownership or a path to scale. To prevent “pilot purgatory,” where pilots drag on without measurable outcomes, organizations should:

  • Create a defined production roadmap before the pilot begins
  • Empower a cross-functional AI Center of Excellence (COE) to own outcomes
  • Secure both clinical and administrative stakeholders
  • Treat GenAI as an enterprise capability, not a one-off project

This shift enables the same investment to support multiple use cases, expanding impact while lowering cost per interaction over time.

Continuously Measure, Optimize and Expand

An operational GenAI program is never “set it and forget it.” It is important to continuously track Key Performance Indicators (KPIs) to guide optimization and justify expansion. Recommended KPIs include the following (a simple tracking sketch follows the list):

  • Cost per interaction
  • Accuracy and confidence
  • Time saved per task or workflow
  • Time to response (latency and model speed)
  • User satisfaction (providers, staff and patients)
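
As a simple illustration of how these KPIs might be computed from interaction logs (field names are invented):

```python
# KPI sketch over hypothetical interaction logs; field names are invented.
from statistics import mean

interactions = [
    {"cost": 0.04, "latency_s": 1.2, "correct": True,  "seconds_saved": 180},
    {"cost": 0.06, "latency_s": 2.1, "correct": False, "seconds_saved": 0},
    {"cost": 0.05, "latency_s": 1.5, "correct": True,  "seconds_saved": 240},
]

kpis = {
    "cost_per_interaction": mean(i["cost"] for i in interactions),
    "accuracy": mean(i["correct"] for i in interactions),
    "avg_time_saved_s": mean(i["seconds_saved"] for i in interactions),
    "avg_latency_s": mean(i["latency_s"] for i in interactions),
}
print(kpis)
```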

By evaluating these metrics regularly, healthcare organizations can expand from early wins to enterprise scale, from research and development to patient support, revenue cycle, compliance and beyond.

Align People, Data and Infrastructure for AI Success

Technology alone does not determine AI success in healthcare; alignment does. Success requires a shared vision from leadership, responsible data groundwork, a secure multicloud foundation and continuous measurement to maintain trust and value. With the right approach, GenAI can improve patient satisfaction, strengthen trust, accelerate research and innovation, reduce administrative burden and deliver measurable ROI in weeks rather than years.

Carahsoft and John Snow Labs help healthcare leaders accelerate this journey, combining secure infrastructure, domain-specific healthcare AI and proven deployment models. To explore how your organization can operationalize GenAI safely and effectively, watch the full webinar, “Lessons Learned from Harnessing Healthcare Generative AI in a Hybrid Multi-Cloud Environment.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including John Snow Labs, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.