Where the Physical Meets the Virtual: How Digital Twins Transform Flood Management

Roughly 2 billion people globally are at risk of flooding, and that number grows steadily every year. With flooding ranking as the most frequent and costly natural disaster, Federal, State and Local Governments must find ways to translate historical and real-time data into predictive models for emergency response. Digital twins powered by Artificial Intelligence (AI) can substantially shorten simulation cycles, compare complex variables and more precisely estimate future flood scenarios.

Challenges with Traditional Forecast Models

Examining the traditional forecast modeling process uncovers a series of disadvantages that keep early flood warning systems from functioning at their full potential. These flood algorithms often have long modeling and simulation times, and during an emergency response analysts do not have the luxury of running outcomes multiple times to maximize model accuracy. As forecasting areas grow larger, these models need more time, more compute power and more analysts to run properly.

There are also issues with the data fed into traditional forecast models. Analysts often find the data unreliable or simply unavailable for the locales where an accurate early flood warning is needed. Invalid data can also be created when outdated models misrepresent geospatial features. When such data cannot be compared with other current or historical data points, the overall quality of the data decreases.

Along with the disadvantages of the traditional models themselves, the nature of flooding presents its own unique set of challenges for analysts. Freeform or uncontained water is an incredibly difficult element to measure properly, especially when it is in motion. Additionally, weather is often microregional: rainfall can differ drastically between two areas only hundreds of feet apart, making accurate assessments of rainfall across entire municipalities or counties nearly impossible.

To address these challenges, analysts examine existing models and determine how emerging technology can complement those frameworks to function in a more proactive manner.

Digital Twins and Flood Management

Predictive models are the cornerstone of emergency response, and the merging of the physical world with digital information is crucial to outputting accurate information for public servants to utilize in the field. This is achieved through the creation of digital twins, or virtual representations of real-life components and processes. In this case, digital twins of an Area of Interest (AOI), such as a town or a county, can consist of multiple variables that contribute to different factors in a flood scenario, including elevation, stormwater infrastructure, commercial and residential construction, precipitation and natural geographic features. The model then forecasts flooding based on real-time and historical data.

To create a digital twin, analysts select a designated AOI and break it down into a gridded matrix. These cells can be as precise as 50 feet by 50 feet, depending on the resolution required for a specific model and the resolution of the available geospatial data. This way, the model can take into account the spatial variation of different geological data elements within the AOI, including infiltration rate and soil type. Relevant data points are often available through the town or county in question, or through the United States Geological Survey (USGS). Once compiled, this information can be processed in a Geographic Information System (GIS) to create a digital twin to be used in flood forecasting.
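The gridded structure described above can be sketched in a few lines of Python. This is a toy illustration only: the `Cell` attributes, cell values and grid size are invented for the example and do not reflect any actual GIS schema.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """One 50 ft x 50 ft cell of the gridded AOI (attributes are illustrative)."""
    elevation_ft: float
    soil_type: str
    infiltration_in_per_hr: float

def build_grid(rows, cols, elevation, soil, infiltration):
    """Assemble a gridded digital twin from per-cell attribute layers."""
    return [[Cell(elevation[r][c], soil[r][c], infiltration[r][c])
             for c in range(cols)] for r in range(rows)]

# A toy 2x2 AOI with made-up values
grid = build_grid(
    2, 2,
    elevation=[[120.0, 118.5], [119.2, 117.8]],
    soil=[["loam", "clay"], ["sand", "loam"]],
    infiltration=[[0.5, 0.1], [1.2, 0.5]],
)
print(grid[0][1].soil_type)  # → clay
```

In practice these layers would come from USGS or county GIS data at whatever resolution the source imagery supports.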

The digital twin can remain static for some time, but it must often be updated as:

  • The landscape changes due to urbanization
  • Structures are built and demolished
  • Coastlines and water levels shift

The more data, and the more current the data, incorporated into the digital twin, the more accurate the flood forecast and the more efficient the emergency response will be.

The Power of the Hybrid Model

As stated previously, one of the major challenges facing public servants in flood management is the time it takes to run simulations. AI models, trained on pairs of simulation inputs and outputs, dramatically cut down run times during storm events. Analysts can produce forecasts in seconds or minutes, where the underlying hydraulic and hydrologic model may previously have taken hours or days. This rapid prediction via model scoring means multiple AI models can run at once, accounting for uncertainty in multiple parameters, reconciling differing flood estimates and producing more accurate predictions.

When AI meets the real-world accuracy of digital twins, Government agencies can quickly and effectively plan for worst-case scenarios in flood emergencies. These hybrid models can pinpoint areas on a large scale that are susceptible to complex issues during a flood, such as trash accumulation. Subsequently, these models can outline in real-time the cause and effect of decisions made by Government officials. In other words, if officials make infrastructure changes to solve a water challenge in one location, a hybrid model can show if the solution inadvertently created additional challenges elsewhere.

According to experts in the field, collaboration is the key to flood management success. This synergetic approach is echoed in the use of digital twins and AI predictive models. Using historical and real-time data to simulate future events will ultimately allow Government officials to plan and respond to flood scenarios safely and effectively.

Discover how digital twins and accompanying technology can transform flood management by watching SAS’s webinar “From Sensors to Digital Twins: Real-Time Flood Management with Data & AI”.

How Snyk Helps Federal Agencies Prepare for the Genesis Mission Era of AI-Driven Science

The White House’s new Genesis Mission signals a major shift in how the Federal Government plans to accelerate discovery using AI, national lab computing power and massive scientific datasets. For agencies, this means a new wave of AI-enabled research programs, expanded public-private collaboration and a significant increase in the use of software, data pipelines and cloud resources to drive scientific missions. Along with this opportunity comes a simple truth: AI can only accelerate discovery if the software behind it is secure.

That’s where Snyk supports agencies—by enabling developers, researchers and mission teams to build secure software from the start, aligned to Secure by Design and modern Federal cybersecurity expectations.

Why the Genesis Mission introduces new security pressure for agencies

  • More data and more experimentation: Agencies will be unlocking and federating large datasets, many of which were never designed for AI-scale access. This increases exposure risk and requires tighter control over data lineage, permissions and software pipelines.
  • More partners in the loop: National labs, other Federal entities, commercial cloud providers, academia and industry vendors will work together under new shared platforms. That means expanded software supply chains and stricter expectations for transparency and assurance.
  • Faster development cycles: Scientific models, simulations, AI workflows and data-processing pipelines will move at an accelerated pace. Traditional security review processes won’t be able to keep up.
  • Higher stakes for misconfigurations: AI workloads rely heavily on containers, open source, infrastructure-as-code and cloud services. A single misconfiguration in a pipeline, cluster or library could compromise sensitive scientific work.

Federal agencies need secure-by-default pipelines that can scale with mission speed.

Four ways Snyk supports Federal agencies

1.  Secures software supply chains for AI, HPC and scientific workloads

Snyk gives agencies visibility into all components used in AI and research software—including open source libraries, containers and IaC templates. Snyk helps agencies identify vulnerable or risky components early, enforce approved library lists, produce SBOMs automatically and meet Federal supply chain expectations (Secure by Design, NIST 800-218, EO 14028, etc.).

2.  Embeds security for CI/CD, model-training and data pipelines

Whether agencies run pipelines in cloud environments, HPC clusters or hybrid infrastructures, Snyk integrates directly into:

  • GitHub / GitLab / Bitbucket
  • Jenkins, GitHub Actions, CircleCI
  • Container build systems
  • AI/ML workflow orchestration tools

This ensures vulnerabilities, misconfigurations and secrets are caught before software reaches production environments or shared research platforms.

3.  Cloud and container security for AI compute systems

The Genesis Mission relies on secure computing—including cloud GPUs, containerized workloads, HPC clusters, research VMs and hybrid infrastructure. Snyk helps agencies detect misconfigurations across cloud infrastructure, secure container images powering AI workloads, scan infrastructure-as-code templates before deployment and protect credentials and secrets used in research pipelines.

4.  Practical “secure by design” implementation

Snyk meets developers and researchers inside the tools they already use by providing automated fix recommendations, IDE plug-ins for secure coding, policy enforcement for high-risk components, as well as fast feedback loops that align with Agile R&D teams. This operationalizes Secure by Design in a way that won’t slow down experiments, model training or rapid prototyping.

Why this matters for Federal missions

The Genesis Mission is accelerating scientific discovery across:

  • Clean energy and grid modernization
  • Fusion and advanced nuclear research
  • Materials science and critical minerals
  • Biotechnology and health research
  • Quantum, semiconductors and microelectronics
  • Climate modeling and Earth science

These domains rely heavily on software, data and compute, and securing those systems is essential for mission success.

Snyk helps agencies build software that is secure by design, fully transparent and aligned with Federal AI safety expectations. With Snyk’s AI Security Platform, agencies gain end-to-end protection across code, dependencies, containers and AI pipelines, enabling trustworthy and compliant AI systems that can power the next generation of U.S. Government missions, exactly what the Genesis Mission requires.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Snyk, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Securing Federal Access: How Identity Visibility Drives Zero Trust Success

Federal agencies face mounting pressure to implement Zero Trust frameworks but often struggle with where to begin. The answer lies in understanding identity telemetry, the insights into who has access to what and how threat actors exploit identities to gain privilege and maintain persistence. Because threat actors increasingly steal credentials and pose as legitimate users, Federal agencies can no longer rely solely on detection tools that trigger alarms after attacks succeed. This shift demands a new approach to Zero Trust, one beginning with comprehensive visibility into the identity attack surface before implementing controls.

From Detection to Prevention

Federal agencies have historically relied on detection-based security tools like Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR) solutions to detect malicious activity. While still valuable, these reactive tools fall short as adversaries compromise both human and non-human credentials and operate undetected for extended periods. Using legitimate credentials, threat actors gain persistent access and escalate permissions while evading detection.

The missing component is proactive threat hunting that maps potential identity exposures before they are exploited. This requires aggregating identity data across the entire IT environment and analyzing how threat actors could leverage poor identity hygiene, such as overprivileged accounts, insecure Virtual Private Networks (VPNs), exposed passwords and secrets, blind spots in third-party access and dormant identities, to gain access to critical assets and data. Zero Trust relies on knowing exactly how identities function across the environment. Without this visibility, agencies are essentially enforcing Zero Trust policies blindly, wasting time and money on controls that are not resilient against cyberattacks. Identity telemetry should guide agencies in building proactive identity protection and mature Zero Trust capabilities.

The Fragmented Identity Visibility Problem

Federal environments span on-prem Active Directory (AD), multicloud environments, federated identity providers and numerous Software-as-a-Service (SaaS) applications. The overlap and complex interactions across these environments are difficult to track, limiting end-to-end visibility into hidden attack paths for lateral movement and escalation.

These “unknown trust relationships” or “paths to privilege” stem from:

  • Identity provider misconfigurations replicating over-permissive access
  • Nested group memberships granting indirect privileges
  • Federation relationships enabling cross-domain escalation
  • Generic “all access” group rights elevating unprivileged users

These exposures exist between siloed systems and provide entry points for threat actors. Addressing this requires aggregating identity data, mapping cross-domain relationships and calculating the true privilege of human, non-human and AI-based identities. This exposes blind spots and transforms an unknowable attack surface into a manageable identity landscape.

True Privilege Calculation

Traditional privilege assessments focus on group membership and cloud role assignments but miss factors like nested groups, cloud application ownership, misconfigured identity providers and federation pathways. These elements often elevate an identity’s privilege far beyond what surface-level audits reveal.


True privilege calculation measures an identity’s effective privilege across all connected systems and domains, including relationships, configurations and escalation pathways. For example, an identity that appears low-privileged in AD may federate into Identity and Access Management (IAM) roles and elevate its privilege. This visibility supports key Zero Trust decisions, such as:

  • Which access should be continuously verified
  • Where least privilege enforcement has gaps
  • Which accounts are most likely to be targeted
  • Where to place micro-segmentation boundaries

Given the scale and complexity of modern Federal environments, manual calculation is impossible. Automated solutions must continuously analyze permissions, relationships and identity provider configurations while mapping escalation paths. True privilege calculation transforms Zero Trust from theory into an actionable strategy that carries agencies from initial implementation to Zero Trust maturity.
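The escalation-path analysis behind true privilege calculation can be sketched as a graph traversal: treat every group membership, federation link and role assumption as an edge, then walk outward from an identity to find everything it can effectively reach. The identities, roles and trust edges below are entirely hypothetical.

```python
from collections import deque

def true_privilege(identity, edges):
    """Return every identity/role reachable via trust or escalation edges.

    `edges` maps an identity to the identities/roles it can assume
    (group membership, federation, role assumption). Illustrative only.
    """
    reachable, queue = set(), deque([identity])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in reachable:
                reachable.add(nxt)
                queue.append(nxt)
    return reachable

# A hypothetical environment: a "low-privilege" AD service account
# federates into a cloud role that in turn reaches an admin group.
edges = {
    "ad:svc-report": ["cloud:role/reader"],
    "cloud:role/reader": ["cloud:role/pipeline-owner"],
    "cloud:role/pipeline-owner": ["cloud:group/admins"],
}
print(sorted(true_privilege("ad:svc-report", edges)))
```

A surface-level audit of `ad:svc-report` alone would miss that it transitively reaches `cloud:group/admins`; automated tooling performs this traversal continuously across millions of edges.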

Critical Attack Vectors

Dormant privileged accounts, often left active after personnel departures or reorganizations, retain elevated permissions long after their use ends. Threat actors frequently identify and reactivate these accounts to move laterally and maintain persistence using legitimate credentials. Effective identity hygiene requires:

  • Continuous monitoring of new dormant accounts
  • Cleanup of existing dormant or misconfigured accounts and standing privilege
  • Behavioral detection to flag unusual privilege escalation attempts or unexpected activity

Identity security cannot be a point-in-time exercise. Without visibility and a proactive approach, configurations drift and dormant accounts accumulate. Agencies must continuously identify dormant privileged accounts and immediately investigate if they suddenly become active, one of the strongest indicators of compromise. Continuous visibility transforms identity hygiene from a reactive alert-based approach to actionable telemetry for proactive threat hunting around current and known attack risk.
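The continuous dormant-account check described above can be illustrated with a short sketch over a hypothetical directory export; the account names, dates and 90-day threshold are invented for the example.

```python
from datetime import date, timedelta

def flag_dormant(accounts, today, threshold_days=90):
    """Flag privileged accounts with no sign-in within threshold_days."""
    cutoff = today - timedelta(days=threshold_days)
    return [a["name"] for a in accounts
            if a["privileged"] and a["last_login"] < cutoff]

# Illustrative directory export
accounts = [
    {"name": "da-admin-old", "privileged": True,  "last_login": date(2025, 1, 5)},
    {"name": "jsmith",       "privileged": False, "last_login": date(2025, 1, 2)},
    {"name": "ops-admin",    "privileged": True,  "last_login": date(2025, 6, 1)},
]
print(flag_dormant(accounts, today=date(2025, 6, 15)))  # → ['da-admin-old']
```

Run on a schedule, a check like this also makes the "dormant account suddenly active" signal cheap to detect: any flagged account that later appears in sign-in logs warrants immediate investigation.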

The Expanding Identity Attack Surface

The identity attack surface extends far beyond human users to service principals, cloud workloads, Application Programming Interface (API) credentials and automated systems, collectively known as “non-human identities.” These accounts often have elevated privileges but lack safeguards like password rotation, Multi-Factor Authentication (MFA) or behavioral analytics, creating significant security gaps.

Agentic AI introduces new challenges. Unlike traditional service accounts, AI agents act autonomously based on their instructions, tools and knowledge sources. A seemingly low-privilege agent could escalate privileges by interacting with other agents, creating complex escalation chains. Understanding an AI agent’s effective capability, not just its assigned permissions, is essential.

AI and non-human identity risks come from interconnected relationships. An AI agent running as a cloud workload may access secrets, interact with privileged systems or execute commands across domains. True privilege calculation for these entities requires mapping downstream actions they could initiate. Federal agencies need governance designed for non-human identities and AI agents, including:

  • True privilege calculation of escalation paths
  • Comprehensive inventory across all systems
  • Monitoring of potential blast radius as AI adoption accelerates
  • Context and knowledge of AI use and where agents are being deployed
  • Visibility into AI agent instructions, tools and knowledge sources

Investing in identity visibility now prepares agencies for emerging challenges as AI adoption becomes more prevalent.

Federal agencies must secure hybrid environments against adversaries who exploit identities rather than technical vulnerabilities. The path forward requires shifting from reactive detection to proactive threat hunting, eliminating fragmented visibility, measuring true privilege across all domains, maintaining continuous identity hygiene and extending visibility to non-human identities and agentic AI. Identity telemetry provides the data foundation needed for Zero Trust maturity, showing agencies where and how to strengthen their security posture.

Discover how comprehensive identity visibility drives Zero Trust maturity by watching BeyondTrust and Optiv+Clearshark’s webinar, “Securing Federal Access: Identity Security Insights for a Zero Trust Future.”


Better Together: How Eightfold.ai and Empyra Are Transforming Government Workforce Services

Proven Results:

  • 30% faster job placement (Washington, D.C.)
  • 36% increase in engagement among underserved populations
  • 65% increase in training module completions
  • 71% increase in job applications submitted
  • 30% faster reemployment for RESEA participants (Florida Department of Commerce)

State and Local Governments are rethinking the way they connect job candidates with meaningful employment. Eightfold.ai and Empyra have partnered to combine advanced AI-driven talent matching with configurable case management. Together, they deliver a unified, secure environment that helps agencies modernize operations, improve employment outcomes and provide a more efficient, personalized experience for both job seekers and employers.

AI-Driven Workforce Modernization

Eightfold.ai was built by former Google and Facebook engineers to be the world’s most intelligent talent matching platform, matching candidates to the right jobs. Trained on more than a decade of global labor market data, its neural network goes beyond keyword searches, interpreting:

  • Skills
  • Roles
  • Qualifications

The platform continuously learns from interactions across job seekers, employers and case managers, moving agencies away from time-consuming resume screening toward a data-driven system that identifies talent by capability and aptitude.

Through its Career Navigator, Eightfold.ai provides:

  • Visual career pathways
  • Transferable skill identification
  • Gap analysis
  • Training from State-approved providers

This transforms the labor exchange into a dynamic environment that supports both immediate reemployment and long-term career mobility.

Integrated Case Management and Service Delivery

Empyra’s myOneFlow consolidates workforce and social service delivery into a single, configurable platform. By capturing data once and reusing it across workflows, the system reduces duplication and frees staff to focus on engagement rather than paperwork. Designed as a Commercial Off-The-Shelf (COTS), Workforce Innovation and Opportunity Act (WIOA)-ready system, myOneFlow includes Participant Individual Record Layout (PIRL) and performance reporting out of the box. As funding and requirements evolve, its flexible architecture allows agencies to tailor:

  • Forms
  • Eligibility rules
  • Intake processes

The platform streamlines the participant journey by automating:

  • Intake
  • Enrollment
  • Eligibility determination
  • Business rules to identify program fit
  • Referrals to partners for housing, education, training or employment resources

Participants can complete tasks and upload documents from any device via the mobile app. Beyond WIOA, myOneFlow also supports:

  • Apprenticeship management
  • Temporary Assistance for Needy Families (TANF)
  • Supplemental Nutrition Assistance Program (SNAP) tracking
  • Domestic-violence programs
  • Municipal grants

By consolidating these functions, myOneFlow gives agencies flexibility to manage multiple programs efficiently within one adaptive system.

“Better Together” Integration Between Eightfold.ai and Empyra

Together Eightfold.ai and myOneFlow create a single front door for job seekers, case managers and employers. Unified identity management with Single Sign-On (SSO) and shared data models ensure information remains consistent across platforms.

Here’s how the integration works:

  • Participants register in myOneFlow
  • Their intake data automatically populates into Eightfold.ai
  • The AI engine generates skills assessments, job recommendations and career pathways
  • Applications, training and other activities sync back into myOneFlow

Case managers gain a real-time view of participant progress without manual entry, while employers benefit from accurate candidate matching and streamlined recruiting tools. Behind the scenes, Eightfold.ai and Empyra operate a coordinated support model and incorporate agency feedback into joint product enhancements.

Trust, Security and Compliance

Both platforms meet rigorous standards, including:

  • FedRAMP
  • Tx-RAMP
  • System and Organization Controls 2 (SOC 2)
  • Department of Defense (DoD) Impact Level 4 (IL4)
  • International Organization for Standardization (ISO) 27001

They also adhere to evolving regulations, including the European Union Artificial Intelligence (EU AI) Act, Texas Department of Information Resources (DIR) requirements and other State privacy laws.

myOneFlow enforces:

  • Role-based access controls
  • Audit logging
  • Deduplication safeguards

Building the Future of Workforce Modernization

Eightfold.ai and Empyra’s myOneFlow demonstrate what is possible when AI, automation and integration align with mission-driven goals. The integrated solution helps agencies:

  • Deliver faster services
  • Improve job matching accuracy
  • Reduce administrative burden
  • Strengthen engagement
  • Maximize limited resources

Workforce organizations can now create a more responsive, equitable and efficient system, empowering job seekers, supporting employers and advancing mission outcomes.

Watch the full webinar, “AI-Centric Innovation: Modernizing Workforce Agencies,” to see the full demonstration of Eightfold.ai and Empyra’s integrated approach to workforce transformation.


Emerging Trends in Artificial Intelligence and What They Mean for Risk Management

Artificial intelligence (AI) is a valuable risk management tool, but it also poses a degree of risk. As AI becomes more prevalent, it opens new possibilities while simultaneously raising new concerns.

Federal agencies and contractors have a responsibility to closely monitor developments in the scope and capacity of AI. In this article, we’ll explore some of the top emerging trends in AI, and we’ll explain their impact on risk management strategies for Federal agencies and contractors.

What are the Emerging Trends in Artificial Intelligence?

With its enormous capacity for pattern recognition, prediction and analytics, AI can be instrumental in identifying risk and driving solutions. Here are some of the most promising new AI applications for risk management.

Predictive Analytics

Predictive AI is widely used in applications like network surveillance, fraud detection and supply chain management. Here’s how it works.

Machine learning (ML) tools, a subset of AI, rapidly “read” and analyze reams of historical data to find patterns. Historical data can mean anything from network traffic patterns to consumer behavior. Since machine learning tools can analyze vast datasets, they find subtle patterns that might not be evident to a human analyst working their way slowly through the same data. This kind of predictive analysis helps organizations identify risks before they escalate.

Once ML identifies the patterns, it can use them to make highly specific and accurate predictions. That can mean, for example, predicting website traffic and preventing unexpected outages due to increased usage. It can also mean spotting the warning signs of new computer viruses or identifying phishing emails.
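As a toy illustration of this kind of pattern-based flagging, the sketch below marks data points that fall far outside the statistical pattern of the rest. The traffic counts are invented, and real systems use far richer models than a standard-deviation test.

```python
from statistics import mean, stdev

def anomalies(history, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in history if abs(x - mu) > threshold * sigma]

# Hourly request counts with one obvious spike (values invented)
traffic = [100, 104, 98, 101, 99, 103, 500, 97, 102, 100]
print(anomalies(traffic))  # → [500]
```

The same shape of check, pattern learned from history plus a deviation test, underlies far more sophisticated fraud- and intrusion-detection models.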

Generative AI

Generative AI (GenAI) is often discussed in terms of its content creation capabilities, but the technology also has enormous potential for risk management.

GenAI can rapidly synthesize data from a wide range of inputs and use it to create a coherent analysis. For example, GenAI can make predictions about supply chain disruptions, based on weather patterns, geopolitical issues and market demand. Many generative systems use natural language processing to interpret context, summarize information and support more accurate decisions.

GenAI can also come up with solutions to the problems it identifies. The technology excels at breaking down silos and drawing connections between different sources of information. For example, the technology can suggest alternative shipping routes or suppliers in the event of a supply chain disruption.

It’s worth noting that, like any other AI tool, generative AI does best with human oversight. GenAI analysis should never be accepted at face value. Rather, employees can use it as an inspiration or a jumping-off point for further planning. Human expertise should always play a key role in the planning process, since GenAI isn’t always accurate.

Adaptive Risk Modeling

AI tools are capable of continuous learning and real-time analysis. Those capabilities lay the groundwork for adaptive risk modeling.

Adaptive risk modeling allows for a dynamic understanding of risk factors, instead of the traditional static approach. The old way of calculating risk relied on identifying patterns in historical data and using a linear model with a simple cause-and-effect analysis.

In contrast, adaptive risk modeling uses machine learning and deep learning to continually scan data sets for changes or new patterns. Instead of a static, linear model, AI risk modeling can build a dynamic model and continually update it.
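One minimal way to illustrate the continual-update idea is an exponentially weighted estimate that revises itself with every new observation, rather than being fit once on historical data. The smoothing factor and risk readings below are illustrative only.

```python
class AdaptiveRiskScore:
    """Exponentially weighted risk estimate that updates with each observation.

    Unlike a static model fit once on historical data, the estimate shifts
    as new readings arrive; `alpha` controls how fast old data is forgotten.
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.score = None

    def update(self, observation):
        if self.score is None:
            self.score = observation
        else:
            self.score = self.alpha * observation + (1 - self.alpha) * self.score
        return self.score

model = AdaptiveRiskScore(alpha=0.5)
for reading in [0.2, 0.2, 0.8, 0.9]:   # risk signal drifts upward
    model.update(reading)
print(round(model.score, 3))  # → 0.7
```

Production adaptive-risk systems replace this scalar with continuously retrained ML models, but the principle is the same: the model tracks the data instead of freezing a snapshot of it.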

Use Cases for AI Risk Management Tools

AI is widely used in the Public and Private Sectors to predict and manage risk, even with third parties involved. Here are some of the common use cases.

Federal Government Use Cases

A growing number of Federal agencies use AI tools to increase efficiency in their work. Some are beginning to pilot AI-powered agents to automate routine tasks and provide real-time recommendations for employees.

  • The Department of Labor leverages AI chatbots to answer inquiries about procurement and contracts.
  • The Patent and Trademark Office uses AI to rapidly surface important documents.
  • The Centers for Disease Control uses AI tools to track the spread of foodborne illnesses.

Financial Sector

Lenders increasingly use AI tools to assess the risk of issuing loans. Because AI can collect and analyze large data sets, the technology provides a comprehensive way to assess creditworthiness.

Financial institutions also use AI for fraud detection. AI tools can spot patterns in typical customer behavior and identify anomalies that could indicate fraud.

Insurance Industry

Insurance companies frequently use AI for underwriting, including risk assessment and risk mitigation. AI is also a useful tool for processing claims and searching for fraud.

Generative AI is also often used to provide frontline services to customers. For example, chatbots answer straightforward questions, provide triage and refer more complex questions to human operators.

Risks Associated with AI Technologies

AI is a valuable tool in mitigating risk, but it’s important to be aware of the risks the tools themselves present.

Chief among those risks is the problem of algorithmic bias. AI and ML excel at identifying patterns and codifying them. However, this means that AI is only as good as the data that feeds it. If AI/ML tools are trained on biased data, the tools will codify the biases embedded in that data. AI/ML takes the unspoken prejudices in datasets and turns them into hard and fast rules, which inform every decision going forward.

Agencies must also consider data privacy implications when AI tools process sensitive or regulated data. If human operators do not question the algorithm’s output, there’s a real risk that bias will become deeply ingrained, causing lasting harm to individuals and organizations and even creating regulatory compliance issues.

Addressing AI Bias

Federal agencies and contractors must understand exactly how AI tools are being deployed. Operators should frequently look “under the hood” of the AI algorithms, asking questions about how the outputs are generated. Opening the “black box” allows organizations to check for bias and prevent it from being codified. Strong data ethics practices ensure that AI systems are trained on fair, transparent and accountable data sources.

It’s best practice to implement a cross-functional AI governance council or team to oversee artificial intelligence. It’s also important to work closely with a trusted partner who has experience integrating AI into a GRC platform. The best AI tools help humans manage a Federal agency with efficiency. The question is how to make the most of the available technology while mitigating the associated risks.

From Pilot to Production: Operationalizing Healthcare GenAI in Secure Multicloud Environments

Healthcare organizations are under immense pressure from shrinking margins, tightening regulations, rising patient expectations and increasingly complex data environments. While generative artificial intelligence (GenAI) has emerged as a powerful tool, most healthcare systems still struggle to move from experimentation to measurable outcomes. Leaders are asking the same questions: Where do we start? How do we ensure security and compliance? How fast should the Return on Investment (ROI) appear?

The answer is not simply selecting a model; it is building a strategy and infrastructure that transforms AI from a promising pilot into an enterprise engine for clinical, operational and financial improvement.

Start With High-Impact Use Cases that Deliver Early ROI

The path to operationalizing GenAI begins with use cases that are narrow enough to implement quickly, but meaningful enough to prove value. Start where measurable gains are most attainable, such as document processing, contract review, claims analysis, compliance workflows and call center optimization.

One of the strongest early candidates is Protected Health Information (PHI) de-identification, where AI can accelerate research access while protecting privacy. Many organizations are also applying GenAI to claims review, using models to flag missing attachments, coding inconsistencies or errors that commonly drive costly denials. With first-pass denial rates hovering in the 17–25% range industry-wide, automating this analysis can generate immediate financial return.

These targeted wins build executive confidence, secure budget and create organizational momentum, which is critical before expanding to more complex clinical or patient-facing scenarios.
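As an illustration of the claims-review idea, here is a hypothetical pre-submission check in Python. The required attachments and the code-format pattern are invented assumptions, not actual payer rules, and a production system would use a trained model rather than hard-coded checks:

```python
import re

# Hypothetical pre-submission claims check, flagging issues that commonly
# drive first-pass denials (missing attachments, malformed codes).
REQUIRED_ATTACHMENTS = {"eob", "referral"}            # assumed requirements
VALID_CODE = re.compile(r"^[A-Z]\d{2}(\.\d{1,2})?$")  # ICD-10-like shape, illustrative

def review_claim(claim):
    """Return a list of human-readable issues found in the claim."""
    issues = []
    missing = REQUIRED_ATTACHMENTS - set(claim.get("attachments", []))
    if missing:
        issues.append(f"missing attachments: {sorted(missing)}")
    for code in claim.get("diagnosis_codes", []):
        if not VALID_CODE.match(code):
            issues.append(f"malformed code: {code}")
    return issues

claim = {"attachments": ["eob"], "diagnosis_codes": ["J45.9", "bad-code"]}
print(review_claim(claim))
```

A clean claim returns an empty list, so the same function can gate automatic submission while routing flagged claims to a reviewer.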

Build Trust by Grounding the Model in Your Own Data

Accuracy and trust determine whether healthcare AI is adopted or ignored. General-purpose models are not sufficient for healthcare, where language is deeply nuanced and context dependent. Instead, organizations should ground GenAI in their own governed data sources, such as Electronic Health Records (EHRs), Customer Relationship Management (CRM) platforms, care summaries, research documents or internal policies.

To achieve this, many leaders are adopting Retrieval-Augmented Generation (RAG) with vector databases, which allows models to pull precise information from internal systems in real time. Vector databases are a foundational accelerator, enabling faster, more accurate retrieval across structured and unstructured data. This approach delivers three business advantages:

  1. Higher accuracy and confidence in model responses
  2. Stronger control of PHI and sensitive data
  3. Traceability, which is essential for audits, appeals and clinical validation

Grounding the model in an organization’s own data turns GenAI from a creative tool into a trusted operational system.
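To illustrate the retrieval step, here is a minimal RAG-style sketch. It uses a toy bag-of-words similarity in place of a real embedding model and vector database, and the documents are invented stand-ins for governed internal sources:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a trained
    # embedding model and store vectors in a vector database.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical internal documents standing in for governed sources
# (EHR notes, policies, care summaries).
documents = [
    "Discharge policy: patients require a follow-up appointment within 14 days.",
    "Claims with missing attachments are returned to the submitting provider.",
    "Research data must be de-identified before leaving the secure enclave.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    # Rank stored documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    # The grounded prompt (query plus retrieved context) is what gets sent
    # to the generative model, which also yields traceability for audits.
    context = "\n".join(retrieve(query, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Why was my claim returned?"))
```

Because every answer is built from retrieved internal passages, the same mechanism that improves accuracy also records which sources informed each response.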

Use a Secure Multicloud Strategy to Reduce Risk and Increase Agility


To operationalize GenAI responsibly, healthcare organizations should design for security, compliance and flexibility from day one. By separating PHI and non-PHI workloads, a multicloud strategy helps healthcare organizations:

  • Isolate sensitive data to minimize breach impact and simplify governance
  • Reduce lock-in risk and leverage the strengths of different cloud platforms
  • Tap into more innovative options, since each cloud offers unique AI tooling
  • Optimize cost and performance by matching workloads to the right environment

Multicloud design also supports stronger compliance postures by enabling auditability, identity controls, monitoring and bias/hallucination safeguards, all of which must be proven to regulators and accrediting bodies.

Avoid “Pilot Purgatory” and Build a Path to Production

Many healthcare AI programs fail not because the technology underperforms, but because the organization never assigns ownership or a path to scale. To prevent “pilot purgatory,” in which pilot projects drag on without measurable outcomes or a route to production, organizations should:

  • Create a defined production roadmap before the pilot begins
  • Empower a cross-functional AI Center of Excellence (COE) to own outcomes
  • Secure both clinical and administrative stakeholders
  • Treat GenAI as an enterprise capability, not a one-off project

This shift enables the same investment to support multiple use cases, expanding impact while lowering cost per interaction over time.

Continuously Measure, Optimize and Expand

An operational GenAI program is never “set it and forget it.” It is important to continuously track Key Performance Indicators (KPIs) to guide optimization and justify expansion. Recommended KPIs include:

  • Cost per interaction
  • Accuracy and confidence
  • Time saved per task or workflow
  • Time to response (latency and model speed)
  • User satisfaction (providers, staff and patients)

By evaluating these metrics regularly, healthcare organizations can expand from early wins to enterprise scale, from research and development to patient support, revenue cycle, compliance and beyond.

Align People, Data and Infrastructure For AI Success

Technology alone is not the determining factor of AI success in the healthcare space; alignment is. Success requires a shared vision from leadership, responsible data groundwork, a secure multicloud foundation and continuous measurement to maintain trust and value. With the right approach, GenAI can improve patient satisfaction, strengthen trust, accelerate research and innovation, reduce administrative burden and deliver measurable ROI in weeks rather than years.

Carahsoft and John Snow Labs help healthcare leaders accelerate this journey, combining secure infrastructure, domain-specific healthcare AI and proven deployment models. To explore how your organization can operationalize GenAI safely and effectively, watch the full webinar, “Lessons Learned from Harnessing Healthcare Generative AI in a Hybrid Multi-Cloud Environment.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including John Snow Labs, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

How AI-Powered Records Management Transforms Government Operations from Reactive to Proactive

Government agencies today must manage an unprecedented volume of digital documents. As digital transformation accelerates across Federal, State and Local agencies, the challenge is not just managing more content; it is extracting actionable intelligence while maintaining compliance, security and operational efficiency. Artificial intelligence (AI) has transformed enterprise records management, replacing manual processes with automated, predictive systems that improve decision making and resource allocation across the mission.

AI-Powered Auto-Classification for Document Management

Effective classification is the foundation of records management, and AI has altered this traditionally complex process. Modern AI models can accurately classify structured documents like invoices or purchase orders with as few as ten training examples. This represents a major improvement over legacy systems that required zonal Optical Character Recognition (OCR) configuration, separator pages and precise layout specifications.

AI models employ multiple techniques, including computer vision, text extraction and contextual reasoning, to identify document types with high confidence. Unlike older pattern-matching tools, today’s AI adapts to variations in structure and format, making classification scalable for agencies managing thousands of document types across different departments.

Training has also become more accessible. Agencies can simply label documents, point the AI to those examples and generate a working classification system. Accuracy improves over time through human review, and confidence scores allow agencies to set thresholds and route low-confidence results to human reviewers.

Accurate classification directly impacts record retention, access control and content discovery. Without it, employees cannot find necessary documents, retention schedules are misapplied and access permissions become inconsistent. Robust AI-powered classification at ingestion ensures downstream processes function as intended.
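The threshold-and-route pattern described above can be sketched as follows; the documents, confidence scores and threshold are invented for illustration, standing in for a trained classifier’s output:

```python
# Illustrative confidence-threshold routing for auto-classification results.
CONFIDENCE_THRESHOLD = 0.85  # assumed agency-set threshold

predictions = [
    {"doc": "scan-001.pdf", "type": "invoice",        "confidence": 0.97},
    {"doc": "scan-002.pdf", "type": "purchase order", "confidence": 0.91},
    {"doc": "scan-003.pdf", "type": "permit",         "confidence": 0.62},
]

def route(prediction, threshold=CONFIDENCE_THRESHOLD):
    """File high-confidence classifications automatically; queue the rest
    for human review, whose corrections can later retrain the model."""
    if prediction["confidence"] >= threshold:
        return "auto-filed"
    return "human-review"

for p in predictions:
    print(p["doc"], "->", route(p))
```

Raising or lowering the threshold is how an agency trades automation volume against the amount of human review it can staff.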

Intelligent Data Extraction from Structured and Unstructured Documents

Once documents are classified, agencies must extract meaningful information, an area where AI delivers transformative capabilities. Modern machine learning models locate key-value pairs anywhere on a document, using contextual understanding rather than fixed positions or label formats. AI can also answer natural-language queries, mirroring human logic. If a person can explain how they would find a piece of information, that logic can be written as a prompt for the model.

These capabilities work across structured and unstructured formats. Work that previously required specialized staff and years of experience can now be configured with simple prompts. Confidence scoring ensures accuracy. When the model is uncertain, items are routed to human reviewers. This combines automation’s speed and consistency with human judgment where needed.

For Government agencies, AI extraction improves compliance and reporting. Licensing applications, permit requests, inspection reports and countless other documents can be automatically processed, with extracted data populating systems of record and triggering workflows. Information once locked in PDFs or paper becomes structured, searchable and actionable.
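In outline, the confidence-based routing of extracted fields might look like this; the fields, values and threshold are invented for illustration:

```python
# Hypothetical extraction output: field values paired with model confidence.
# Fields below the threshold are routed to a human reviewer.
REVIEW_THRESHOLD = 0.90  # assumed threshold

extracted = {
    "applicant_name": ("Jane Doe", 0.98),
    "permit_type":    ("demolition", 0.95),
    "parcel_id":      ("14-0032-A", 0.71),  # low confidence: needs review
}

def split_by_confidence(fields, threshold=REVIEW_THRESHOLD):
    """Separate confidently extracted fields from those needing review."""
    accepted, needs_review = {}, {}
    for name, (value, score) in fields.items():
        (accepted if score >= threshold else needs_review)[name] = value
    return accepted, needs_review

accepted, needs_review = split_by_confidence(extracted)
print(sorted(accepted))      # high-confidence fields
print(sorted(needs_review))  # fields routed to a human reviewer
```

The accepted fields can populate the system of record directly, while the review queue preserves human judgment exactly where the model is uncertain.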

AI-Driven Deduplication and Data Quality Management


Duplicate data is a productivity drain and a compliance risk. Redundant documents accumulate quickly across forwarded emails, multiple repositories and inconsistent processes. This creates unnecessary work, consumes storage and complicates compliance with data retention requirements.

Legacy deduplication relied on hash matching, but this fails to detect most real-world duplicates. AI-based deduplication analyzes document classifications and extracted metadata to determine true duplicates based on agency-defined rules. If the elements match according to customer rules, the system flags the items as duplicates regardless of differences in headers or formatting.

This content-based deduplication reduces storage costs, simplifies retention compliance and minimizes cybersecurity exposure. Retaining unnecessary data increases legal risk during litigation and discovery and expands the attack surface for cyber threats. AI allows agencies to retain only necessary data, reducing operational and security liabilities.
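A minimal sketch of rule-based, content-level deduplication; the key fields and normalization here are hypothetical agency rules, applied to metadata an extraction step would produce:

```python
# Content-based deduplication sketch: documents count as duplicates when the
# agency-defined key fields match, regardless of formatting differences.
KEY_FIELDS = ("document_type", "invoice_number", "amount")  # assumed rule

def dedup_key(metadata):
    # Normalize each key field so case and whitespace differences don't matter.
    return tuple(str(metadata.get(f, "")).strip().lower() for f in KEY_FIELDS)

def find_duplicates(docs):
    """Return (kept_id, duplicate_id) pairs based on the key fields."""
    seen, dupes = {}, []
    for doc in docs:
        key = dedup_key(doc)
        if key in seen:
            dupes.append((seen[key]["id"], doc["id"]))
        else:
            seen[key] = doc
    return dupes

docs = [
    {"id": "a1", "document_type": "Invoice", "invoice_number": "INV-42", "amount": "100.00"},
    {"id": "a2", "document_type": "invoice", "invoice_number": "inv-42 ", "amount": "100.00"},
]
print(find_duplicates(docs))
```

A byte-level hash would miss this pair because the headers differ, which is exactly the gap content-based matching closes.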

Enhanced Workflow Automation with Predictive Analytics

High-quality, classified and extracted data unlocks the full value of predictive analytics, enabling Government agencies to shift from reactive problem-solving to proactive planning. This capability uses historical data to predict outcomes, such as numeric values, binary decisions or multiclass classifications.

Platforms like VisualVault allow agencies to train predictive models without data science expertise. Professional services teams configure the models, demonstrate how they work and train agency employees to manage them.

Public sector agencies already use predictive analytics to forecast safety incidents at licensed facilities. Historical inspection data comprised of conditions, violations and corrective actions allows models to identify facilities with a high probability of future serious events. When inspections reveal patterns associated with increased risk, inspectors and licensing officials are automatically alerted, enabling early intervention.
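A heavily simplified sketch of this alerting pattern, with assumed violation weights standing in for a trained model’s learned parameters:

```python
# Toy risk-scoring sketch: weight historical violation categories and alert
# inspectors when a facility's score crosses a threshold. A real deployment
# would use a trained predictive model, as described above.
WEIGHTS = {"fire_safety": 3.0, "sanitation": 2.0, "paperwork": 0.5}  # assumed
ALERT_THRESHOLD = 5.0  # assumed

def risk_score(violation_history):
    # Unknown violation categories get a default weight of 1.0.
    return sum(WEIGHTS.get(v, 1.0) for v in violation_history)

def facilities_to_flag(histories):
    """Return facility IDs whose history crosses the alert threshold."""
    return [fac for fac, hist in histories.items()
            if risk_score(hist) >= ALERT_THRESHOLD]

histories = {
    "facility-A": ["paperwork"],
    "facility-B": ["fire_safety", "sanitation", "fire_safety"],
}
print(facilities_to_flag(histories))
```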

Predictive analytics also strengthens performance management. Agencies can compare their metrics against industry norms, seeing where they stand within their sector. This supports investment decisions and enables precise tracking of improvement outcomes.

Agencies should focus on automating controls that meaningfully reduce risk, not simply on increasing the percentage of automated controls. High-impact controls should be prioritized for automation and predictive monitoring to maximize security and operational benefits.

For decision makers, predictive analytics delivers the context and accuracy needed to make fast, informed decisions across claims, vendor management, resource allocation and strategic planning.

Digital Transformation as Organizational Necessity

Despite rapid technological advancement, human expertise remains essential. AI systems are designed to operate behind the scenes and do not require users to understand machine learning (ML) concepts. Small teams define the required outcomes, what must be classified, what data must be extracted and what predictions will improve decisions, while professional services configure the system accordingly.

AI adoption does not inherently reduce headcount. Historically, technology shifts transform jobs rather than eliminate them. Workflows move from manual tasks like sorting documents to higher-value work such as analysis, decision making and innovation. Employees focus on defining requirements, reviewing AI outputs and applying human judgment where it adds value.

The Measurable Value of AI Implementation

Agencies can begin their journey by identifying their key performance indicators and the business outcomes they want to improve:

  • What pain points cause the most friction?
  • Where do backlogs accumulate?
  • Which processes create the most risk?

This ensures implementation is tied to measurable outcomes. AI success depends on clear requirements, proper process, staff training and strong governance. Agencies should adopt AI incrementally, starting with high-value use cases that deliver quick wins, then expanding into more complex workflows and predictive models as confidence grows.

Digitization mandates and the rise of generative AI have accelerated content creation beyond expectations, driving significant growth for platforms like VisualVault. The agencies that succeed will be those that embrace this shift and modernize now.

Watch VisualVault’s webinar “Employing AI to Bring Order and Value to Enterprise Records Management” to explore detailed demonstrations of AI-powered classification, extraction and predictive analytics capabilities that can transform your agency’s records management operations.


From Data Silos to Life-Saving Decisions: How Technology is Transforming Healthcare Delivery

Healthcare organizations continuously navigate complex challenges as patient demand grows. Imaging volumes are rising faster than radiology capacity can scale. Public health agencies manage vast amounts of data across disconnected systems. Administrative tasks consume time that healthcare staff would rather spend on patient care.

These operational realities create opportunities for technology to make a meaningful difference. Leading healthcare organizations are already transforming these challenges into improved outcomes through strategic technology deployments enabled by streamlined procurement.

As The Trusted IT Solutions Provider for the Healthcare Industry™, Carahsoft offers a robust portfolio of healthcare technology solutions that make positive changes in the quality, safety and effectiveness of healthcare delivery systems. Streamlined procurement is available through Carahsoft’s reseller partners and numerous contract vehicles including GSA Schedule, NASPO ValuePoint, E&I Cooperative Services and The Quilt.

Key Takeaways:

  • AI diagnostics improve radiology efficiency by up to 40%, addressing a looming shortage of up to 42,000 radiologists by 2033.
  • Unified data platforms enable more than 80% of emergency departments to share real-time data with the CDC.
  • Automated workflows cut processing times by 50%, freeing staff for patient care.
  • Zero Trust security protects patient data while enabling hybrid cloud operations.
  • Streamlined procurement accelerates deployment from months to weeks.

AI-Powered Diagnostics: Addressing the Radiology Crisis

By 2033 the U.S. faces a shortage of up to 42,000 radiologists as imaging volumes rise 5% annually while residency positions increase just 2%.

At Northwestern Medicine, Dr. Mozziyar Etemadi, Clinical Director of Advanced Technologies, deployed a generative AI solution with Dell Technologies and NVIDIA that analyzes chest X-rays and generates draft reports instantaneously. Results: radiology efficiency improved by up to 40% without compromising diagnostic accuracy. The system flagged unexpected pneumothorax cases with 72.7% sensitivity and 99.9% specificity – lifesaving in emergency settings.

The technology runs on Dell PowerEdge XE9680 servers with NVIDIA H100 GPUs, deployed on premises to maintain HIPAA compliance. Northwestern is now developing predictive models for entire electronic records.

Public Health Surveillance: Rapid Outbreak Response

The CDC faced a critical challenge: essential health data trapped in disconnected silos across thousands of facilities.

The CDC’s partnership with Cloudera created a unified platform consolidating data from hospitals, laboratories and wastewater testing sites. More than 80% of non-federal emergency departments now send data to CDC, enabling comprehensive threat monitoring. When measles spiked across 15 states in 2025, officials had integrated visualizations within days.

The CDC’s One CDC Data Platform (1CDP), established in 2024, provides state, tribal, local and territorial agencies with streamlined access to core datasets and analytics, enabling faster disease trend detection and proactive strategies.

Accelerating Cancer Research Collaboration

The National Cancer Institute partnered with Google Cloud and Barnacle AI to introduce NanCI – a platform leveraging AI-driven recommendations to connect researchers with collaboration opportunities, literature and events. The solution demonstrates how AI extends beyond clinical care to accelerate scientific discovery across Government, Education and Healthcare sectors.

Operational Excellence: Freeing Caregivers to Care

Workforce coordination: Healthcare organizations use BlackBerry AtHoc, available through Carahsoft’s reseller network and contract vehicles, to streamline staffing and scheduling processes. The event management platform helps ensure personnel are coordinated efficiently across departments, which is essential for maintaining high standards of patient care.

Financial automation: Community Health Centers of Florida implemented Laserfiche’s enterprise content management system, cutting processing time by 50% and eliminating manual data entry. “I cannot fathom processing the current volume of invoices ‘the old way,’” said Dee Bradshaw, director of purchasing. “Laserfiche has cut our processing time in half.”

Every hour freed from administrative burdens is an hour caregivers get back to spend with their patients.

Modern, Secure Infrastructure

California Department of State Hospitals deployed Rubrik’s data management platform to integrate legacy systems with modern hybrid cloud environments. Rubrik’s Zero Trust Data Security framework minimized ransomware vulnerability while ensuring Federal compliance.  

St. Luke’s University Healthcare Network used Rubrik for faster backups, near-instant recovery and seamless hybrid IT integration, strengthening cyber defenses while freeing IT staff to support clinical teams.

Federal agencies, State and Local Governments and Education institutions face similar Zero Trust security and hybrid cloud integration requirements.

Explore Carahsoft’s cybersecurity solutions at www.carahsoft.com/solve/cybersecurity.

Meeting Demand at Scale

NYC Health + Hospitals deployed Snowflake’s Data Cloud, which consolidated separate data sources into a unified platform. This integration eradicated silos, provided real-time visibility and enabled data-driven decisions at the point of care for vulnerable populations.

The Carahsoft Advantage

For Healthcare Organizations: Faster access to solutions, simplified procurement through pre-negotiated contracts, integrated solutions across technology verticals, dedicated healthcare technology expertise. Simplify your organization’s procurement journey with Carahsoft.

For Reseller Partners: Opportunities to deliver comprehensive solutions, access to leading vendors through established contract vehicles, sales enablement and marketing support. Become a Carahsoft reseller partner.

For Technology Vendors: Expanded reach across Federal, State and Local Government, Education and Healthcare markets, simplified Healthcare sales through hundreds of contract vehicles. Join our partner ecosystem.

Ready to explore healthcare technology solutions?

Understanding CMMC: A Roadmap for Federal Contractors

The Department of Defense (DoD) recently announced new cybersecurity compliance mandates for contractors and subcontractors in the DoD’s supply chain. Private companies that process, store or transmit DoD data are now required to comply with the Cybersecurity Maturity Model Certification, or CMMC.

The new mandate impacts every private company that handles Federal Contract Information (FCI) or Controlled Unclassified Information (CUI). That’s a large group: According to the DoD’s own estimation, at least 220,000 private companies currently have access to FCI and CUI and require CMMC certification.

Because the CMMC is relatively new, some organizations may be struggling to understand their obligations. Learn more about exactly what the CMMC is and what steps organizations should take right now to be prepared for audits and remain eligible for DoD contracts.

What Is CMMC?

CMMC is the cybersecurity compliance framework used by the Department of Defense. High-profile security breaches like SolarWinds highlighted the need for rigorous data protection throughout the DoD supply chain. The DoD uses the CMMC framework to vet potential contractors and subcontractors and protect against third-party data breaches.

There are three CMMC certification levels: 1, 2 and 3. The different levels correspond to the degree of sensitive information being handled. All companies that contract with DoD need to have at least Level 1 CMMC, while companies that handle more sensitive information will need to have Level 2 or Level 3 cybersecurity compliance certifications.

Recent Changes to CMMC

The CMMC has recently been revised. The original version, CMMC 1.0, was implemented in 2019. The new version, CMMC 2.0, came into effect at the end of 2024.

Contractors must now comply with CMMC 2.0, although implementation is taking place in stages. For any organization contracting with the Defense Department, the most important takeaway is that you absolutely must be CMMC compliant to continue working with the Department.

What Level of CMMC Certification Do You Need?

If your organization handles any FCI or CUI, you’ll need CMMC certification. Which level is right for you? You can’t know for certain until you apply for a contract, as there is some variation from one contract to another.

However, you can make an educated guess about the certification you’ll need. The DoD’s Scoping and Assessment Guide also provides more detail about the standards for each level.

Level 1 CMMC

Level 1 is the most straightforward CMMC certification. It doesn’t require third-party auditing; contractors do a self-assessment to get the certification.

Level 1 is usually appropriate for contractors who handle FCI material and nothing else. FCI is unclassified Government information that isn’t publicly available. Details about Government employees or facilities, for example, might be categorized as FCI. Although the information is sensitive, it is not considered critical enough to require the extra protection of a Level 2 or Level 3 certification.

Level 2 CMMC

If your organization handles both CUI and FCI, you will probably require Level 2 CMMC certification.

In many cases, Level 2 certification is straightforward and can be achieved through a self-certification process. However, in some cases you will need to pass a third-party audit for Level 2 certification. The procedure depends on the sensitivity of the data you’ll be handling. The more sensitive the information, the more precautions the DoD puts in place to prevent a potentially disastrous security breach.

Level 3 CMMC

Level 3 CMMC is the most serious and the most difficult certification to obtain. If your organization routinely handles both CUI and FCI and also deals with material that impacts DoD operations, then you may need this certification.

Level 3 CMMC mandates stricter protections than the other two certification levels. It’s required in cases where a data breach could create widespread problems for the Department of Defense, or even for national security.

To obtain Level 3 CMMC certification, you must undergo a Government audit. The Government will thoroughly assess your security system and determine whether it meets the appropriate standards for certification.
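The level descriptions above can be condensed into a rough rule of thumb. The mapping below is only an illustration of that logic; the contract itself is always authoritative:

```python
# Rule-of-thumb mapping from the data an organization handles to the CMMC
# level it will likely need, summarizing the level descriptions above.
def likely_cmmc_level(handles_fci, handles_cui, critical_to_dod_operations):
    if handles_cui and critical_to_dod_operations:
        return 3  # Government audit required
    if handles_cui:
        return 2  # self-certification or third-party audit, by sensitivity
    if handles_fci:
        return 1  # self-assessment
    return 0      # no CMMC certification needed

print(likely_cmmc_level(handles_fci=True, handles_cui=False,
                        critical_to_dod_operations=False))
```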

What Is the Cybersecurity Compliance Timeline?

CMMC 2.0 came into effect in December 2024. From that date on, organizations working with the Department of Defense are mandated to begin implementing CMMC compliance according to a 4-phase plan.

Phase 1

Phase 1 requirements went into effect on November 10, 2025, when CMMC clauses began appearing in new DoD solicitations. During Phase 1, prospective new DoD contractors are required to conduct a self-assessment to ensure cybersecurity compliance according to Level 1 or 2 CMMC.

Phase 2

The full Level 2 standard comes into effect in November 2026, ushering in Phase 2 of CMMC 2.0. At this stage, contractors handling more sensitive CUI become subject to third-party audits to verify compliance with Level 2 certification requirements.

Phase 3

Phase 3 is set to begin in November 2027. At that time, organizations that handle the most sensitive data will be mandated to undergo a Government-run security audit to ensure compliance with Level 3 CMMC certification.

Phase 4

In November 2028, all new defense contracts will contain language stipulating the CMMC level requirement.

What Steps Should You Take To Comply with the CMMC?

Cybersecurity compliance is fairly straightforward and can be broken down into a few key steps.

Step One: Preparation

Determine which certification level is appropriate for your organization and its needs. Begin by deciding which contracts you’d like to apply for, and use the contracts to decide the appropriate certification level.

Remember that it’s always a good idea to aim for the lowest appropriate certification level, as higher levels are more difficult to obtain. If you are not dealing with highly sensitive data, it’s not worth trying to obtain the Level 3 certification.

Step Two: Internal Assessment

Conduct a preliminary assessment of your organization, analyzing where you will need to make changes to achieve cybersecurity compliance.

It’s good practice to do this in two stages. First, complete a self-assessment. Next, check your assessment with an objective source.

Step Three: Third-Party Audit

If you’re working towards Level 2 or Level 3 certification, you’ll need to be audited, either by an approved third-party auditor or by the Government. The CMMC marketplace makes it easy to set up the assessment. Again, you should first perform a self-assessment to make sure that you’ve addressed any shortfalls in your organization before you undergo this audit.

Step Four: Course Correction

The audit may reveal deficiencies in your security system. If so, you may be granted time to correct these deficiencies and still successfully apply for your CMMC certification.

Once you receive your CMMC certification, you’ll need to renew it once a year to confirm that your organization is keeping up with DoD best practices for cybersecurity.

Get Started With the CMMC Certification Process

Artificial Intelligence and Cybersecurity: A Federal Perspective

As artificial intelligence (AI) continues to expand across Government operations, Federal agencies must integrate advanced AI technology to strengthen cybersecurity while staying ahead of new cyber threats. This is especially crucial in environments where critical systems, personally identifiable information (PII) and critical infrastructure are constantly targeted by sophisticated adversaries.

AI is a double-edged sword. Malicious actors now use machine learning techniques, deep learning and generative AI to scale cyberattacks at unprecedented speed. At the same time, security teams are successfully deploying advanced AI algorithms, security tools and threat intelligence to detect, defend and respond faster. Striking the right balance is essential for Federal leaders responsible for safeguarding national interests.

In this article, we’ll talk about how to find the right balance between exploiting AI’s capabilities and guarding against the risks. We’ll also explore the specific threats agencies face today, and discuss how AI can help by automating risk management.

The Growing Cybersecurity Challenge

Ransomware, large-scale phishing campaigns and deepfake social engineering attacks are accelerating due to advancements in AI systems and large language models (LLMs). Cybercriminals can cast a wider net than ever before, with little effort and at a low cost to themselves, especially when targeting critical infrastructure and Federal systems.

Increased Threats

It’s worth noting that even benign AI applications are paving the way for more cyber events. When Government agencies adopt AI tools, they automatically expand their networks and their “attack surfaces,” requiring new security measures and stronger vulnerability assessment practices.

AI’s automation and speed enable large-scale attacks. AI can rapidly scan and scrape online databases and analyze network traffic, looking for potential targets to attack. Hackers can use AI’s code-generation and no-code automation capabilities to produce malware at high speed, and to send out phishing emails at a larger scale than ever before. Generative AI models can also create credible “deepfake” video and audio at high speed.

The vast majority of these attacks are unsuccessful, but it only takes one careless end user clicking a link to a malicious website, or to a site that has slipped past domain blocking, to cause a breach. That’s why it’s so important for security teams to be on their guard. Fortunately, AI tools can also help. Just as no-code automation helps hackers, it also helps agencies protect themselves against threats.

Leveraging AI Tools To Fight Cyberattacks

The same capabilities that can make AI useful for hackers also make it a great tool in fighting cyber threats. Automation, speed and the ability to identify patterns are all invaluable for countering online threats.

Using AI to Identify Phishing Attacks

AI excels at assisting with phishing detection. AI and Machine Learning (ML) tools can quickly “read” incoming emails and texts and scan them for telltale signs of danger, like unusual sender addresses. AI’s natural language processing capabilities also help. NLP tools scan incoming messages for unusual phrasing or a strange tone, which might indicate a phishing attack.
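As a concrete illustration of this kind of scanning, here is a minimal sketch of a rule-based phishing scorer in Python. The sender pattern and phrase list are hypothetical stand-ins for illustration only; real filters rely on trained ML and NLP models rather than fixed keywords.

```python
import re

# Hypothetical phrase list; production filters use trained models, not fixed keywords.
SUSPICIOUS_PHRASES = ["verify your account", "urgent action required", "click here immediately"]

def phishing_score(sender: str, body: str) -> float:
    """Score an email from 0.0 (looks benign) to 1.0 (likely phishing)."""
    score = 0.0
    # Unusual sender address: an illustrative check for suspicious top-level domains.
    if re.search(r"@.*\.(ru|tk|zip)$", sender):
        score += 0.4
    # Telltale phrasing, a simple keyword stand-in for NLP tone analysis.
    body_lower = body.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in body_lower:
            score += 0.3
    return min(score, 1.0)
```

A message scoring above some threshold would be quarantined or flagged for review rather than delivered.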

Most spam folders are powered by AI and ML tools. These tools are constantly learning on the job, too. Whenever you mark an incoming email “spam,” your software learns a little more about what you consider to be spam. Going forward, it incorporates that information into its workflow.
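That learn-from-feedback loop can be sketched with a toy word-frequency filter. The `SpamLearner` class and its majority-count scoring rule are deliberate simplifications, not how any production spam filter actually works.

```python
from collections import Counter

class SpamLearner:
    """Toy filter that updates word counts each time a user marks a message."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def mark(self, text: str, is_spam: bool) -> None:
        # Incorporate the user's judgment into future classifications.
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def is_spam(self, text: str) -> bool:
        # Classify by which vocabulary the message's words resemble more.
        words = text.lower().split()
        spam_hits = sum(self.spam_words[w] for w in words)
        ham_hits = sum(self.ham_words[w] for w in words)
        return spam_hits > ham_hits
```

Each call to `mark` shifts the filter's vocabulary, mirroring how marking an email as spam teaches your mail client over time.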

Using AI To Scan for Malware

AI-powered antivirus tools scan for malware more effectively than older antivirus detection systems. The AI software scans and analyzes huge quantities of data in network traffic and system logs to identify patterns that could indicate a virus. Because deep learning models are so good at identifying patterns and spotting anomalies, they can often spot new viruses early on.

Older antivirus software relies on known viral signatures. While useful, these tools can’t keep up with new threats evolving through AI algorithms. That’s the AI difference: predictive pattern detection supports proactive cybersecurity solutions and strengthens incident response.
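A simple statistical version of this anomaly spotting can be sketched as follows. Real systems use deep learning over many features, but the core idea of flagging samples far from a learned baseline is the same; the byte-count input and the z-score threshold here are assumptions for illustration.

```python
import statistics

def find_anomalies(byte_counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of traffic samples far from the baseline.

    Flags any sample more than `threshold` standard deviations from the mean,
    a stand-in for the richer anomaly models deep learning systems learn.
    """
    mean = statistics.mean(byte_counts)
    stdev = statistics.stdev(byte_counts)
    return [i for i, v in enumerate(byte_counts)
            if abs(v - mean) > threshold * stdev]
```

A sudden spike in bytes sent by one host, for example, stands out against weeks of steady traffic and gets flagged for investigation.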

Using AI To Identify Threats From Within

AI can help to spot attacks from within. The software establishes a baseline of user behavior, like normal login hours and normal patterns of data access. When there’s a change in that baseline, the AI tool flags it for further investigation.

AI looks for changes like unusual activity outside of a team member’s normal working hours or location-based aberrations. For example, if a member of your team normally logs in at 9 a.m. and out at 5 p.m., the AI tool will notice if they start logging in again at midnight to download files. Even if they have authorization to view that information, it’s worth asking why they suddenly need to access it at an unusual time. In the same vein, further review may be warranted if an employee views a record from an atypical IP address.
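The baseline idea above can be sketched in a few lines. The (min, max) login-hour window is a deliberately simple stand-in for the behavioral models real insider-threat tools learn.

```python
def build_baseline(login_hours: list[int]) -> tuple[int, int]:
    """Learn a baseline as the (earliest, latest) hour seen in historical logins."""
    return min(login_hours), max(login_hours)

def flag_logins(baseline: tuple[int, int], new_logins: list[int]) -> list[int]:
    """Return login hours that fall outside the established baseline window."""
    lo, hi = baseline
    return [h for h in new_logins if not (lo <= h <= hi)]
```

A 9-to-5 employee's midnight login would fall outside the learned window and be flagged, even though the credentials themselves are valid.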

Using AI To Actively Fight Threats

Beyond identifying cyber threats, AI tools can proactively defend systems. They block or isolate compromised devices, enforce malicious domain blocking, apply system patches and notify security teams of attempted attacks.

AI-backed incident response workflows reduce the spread of malware and help protect the network even when one endpoint is compromised.
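An automated containment step might look roughly like this sketch. The alert fields and action names (`quarantined_host`, `notified_soc`, and so on) are hypothetical, not the API of any real SOAR product.

```python
# Hypothetical playbook state; a real system would call firewall and EDR APIs.
BLOCKLIST: set[str] = set()
QUARANTINED: set[str] = set()

def respond(alert: dict) -> list[str]:
    """Contain a detected threat and record the actions taken for the security team."""
    actions = []
    if alert.get("malicious_domain"):
        BLOCKLIST.add(alert["malicious_domain"])    # enforce domain blocking
        actions.append("blocked_domain")
    if alert.get("compromised_host"):
        QUARANTINED.add(alert["compromised_host"])  # isolate the endpoint
        actions.append("quarantined_host")
    actions.append("notified_soc")                  # always alert human analysts
    return actions
```

Containing the endpoint first and notifying analysts afterward is what lets the network stay protected even while humans are still triaging the alert.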

Exercising Precaution: Building Guardrails for AI

AI is a valuable tool for fighting cyber threats. However, it’s important to protect your network and end users against AI’s natural pitfalls. Federal agencies have a special responsibility to install guardrails in accordance with the relevant regulations and guidelines.

AI guardrails ensure that the technology behaves according to ethical standards, avoiding bias and making appropriate use of sensitive data. To some extent, AI itself can help enforce these guardrails: generative AI tools can routinely scan for ethical problems and alert managers to any new issues.

However, human oversight remains crucial, and agencies should appoint managers to be directly accountable for AI supervision. The NIST AI Risk Management Framework provides detailed guidance for managers and anyone else involved in managing AI guardrails.

Making the Best Use of AI

Government agencies can’t turn their backs on AI. The technology offers too many benefits to stop using it. However, leaders must be aware that expanding AI use also expands their attack surface, and they must stay alert to the dangers posed by AI-enabled cyberattacks.

The first step? Inform yourself about how AI can impact your agency. To get started, learn about AI integration into GRC today.