Building a Security Strategy for Agentic AI: A Framework for State and Local Government

As artificial intelligence (AI) evolves from simple chatbots to autonomous agents capable of making independent decisions, State and Local Government agencies face a fundamental shift in cybersecurity requirements. Recent research shows 59% of State and Local Government respondents report already using some form of generative AI (GenAI), with 55% planning to deploy AI agents for employee support within the next two years. Yet this rapid adoption brings unprecedented security challenges. Because AI agents are designed to pursue goals autonomously, even adapting when security measures block their path, Chief Information Security Officers (CISOs) responsible for safeguarding Government networks must rethink traditional defenses and embrace a new security paradigm.

The Emergence of Agentic AI and Its Unique Security Challenges

AI agents represent a significant departure from the GenAI tools many agencies currently use. While traditional Large Language Models (LLMs) respond to prompts and return information, as a support chatbot does, AI agents and agentic systems are autonomous software programs that can plan, reflect, use tools, maintain memory and collaborate with other agents to achieve specific goals. These capabilities make them powerful productivity tools, but they also introduce failure modes that conventional software simply does not have. Unlike deterministic systems that crash when something goes wrong, AI agents can fail silently through collusion, context loss or corrupted cognitive states that propagate errors throughout connected systems. Research examining the real-world performance of AI agents found that single-turn tasks had a 62% failure rate, with success rates dropping even further for multi-turn scenarios.

When Veracode examined 100 LLMs performing programming tasks, these systems introduced risky security vulnerabilities 45% of the time. For State and Local agencies handling sensitive citizen data, managing critical infrastructure or supporting public safety operations, these error rates demand careful attention within robust security frameworks designed specifically for autonomous systems.

The New Security Paradigm: From Human-Centric to Agent-Inclusive Workforce Protection

AI agents, the newest coworkers, amplify insider threats by combining human-like autonomy with capabilities that exceed human limitations. While employees work within bounded motivation and finite skills, AI agents possess boundless motivation to achieve goals, uncapped skills that continuously improve and infinite willpower, constrained only by computational capacity. They will not simply make a single attempt to access a file, get blocked due to a lack of permissions, get frustrated and go home for the day the way an employee might; they will persistently pursue objectives, potentially finding novel ways around security controls.

This transformation fundamentally changes the attack surface agencies must protect. Data breaches continue to impose significant financial and operational strain across the public sector, with many state and local organizations reporting cumulative annual costs that reach into the millions. AI agents and agentic systems collapse traditional security models by operating as autonomous workforce members who interact with systems, access data and make decisions without direct human oversight. They can be compromised through threats specific to agentic AI, such as goal and intent hijacking, memory poisoning, resource exhaustion or excessive agency that can lead to unauthorized actions, all in pursuit of achieving programmed objectives. For Government agencies managing limited security budgets while protecting essential citizen services, this exponential increase in potential attack vectors demands proactive frameworks rather than reactive responses.

The AEGIS Framework: A Six-Domain Approach to Securing Agentic AI

Forrester’s Agentic AI Enterprise Guardrails for Information Security (AEGIS) framework gives CISOs a comprehensive approach to securing autonomous AI systems across six critical domains.

Governance, Risk and Compliance (GRC) establishes oversight functions and continuous monitoring capabilities. Identity and Access Management (IAM) addresses the unique challenge of agent identities, which combine characteristics of both machine and human identities. Data Security focuses on classifying data appropriately, implementing controls for agent memory and considering data enclaves and anonymization from a privacy perspective.

Application Security evaluates risks across the entire software development lifecycle (SDLC), implements Development, Security and Operations (DevSecOps) best practices, assesses the software supply chain and uses adversarial red team testing to validate safety and security controls. This domain focuses on embedding telemetry that gives security teams visibility into agent behavior and decision making. Threat Management ensures logs are accessible to security operations center analysts, enabling detection of behavioral anomalies and supporting forensic investigations. Zero Trust Architecture (ZTA) principles also apply, such as implementing network access layer controls for agent workloads, continuously validating the agent’s runtime environment and monitoring agent-to-agent communication.

Underlying the framework are three core principles:

  • Least Agency extends least privilege to focus on decisions and actions, ensuring agents have only the minimum set of permissions, capabilities, tools and decision making necessary to complete specific tasks.
  • Continuous Risk Management replaces periodic audits with ongoing evaluation of data, model and agent integrity.
  • Securing Intent requires organizations to understand whether agent actions are malicious or benign, intentional or unintentional, enabling proper investigation when failures occur.
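The Least Agency principle above can be made concrete with a small sketch. This is an illustrative example only; the task names, action names and `authorize` function are hypothetical, not part of the AEGIS framework itself. The idea is that every tool call an agent attempts is checked against the minimal grant defined for its current task.

```python
# Hypothetical "least agency" gate: an agent may perform an action only
# if that action is in the minimal set granted for its assigned task.
ALLOWED_ACTIONS = {
    "summarize_ticket": {"read_ticket", "read_kb"},
    "reset_password": {"read_directory", "send_reset_email"},
}

def authorize(task: str, action: str) -> bool:
    """Permit an action only when the task's minimal grant includes it."""
    return action in ALLOWED_ACTIONS.get(task, set())

# An agent resetting a password may send a reset email...
assert authorize("reset_password", "send_reset_email")
# ...but may not read tickets, even though another task is allowed to.
assert not authorize("reset_password", "read_ticket")
```

In practice such checks would sit in the agent runtime or tool-invocation layer, so a hijacked goal cannot expand an agent's reach beyond the task it was onboarded for.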

Practical Implementation: Agent Onboarding and Governance

Forrester’s “Agent on a Page” concept is a practical tool for bringing structure and consistency to AI agents, and for aligning them with business goals before activation, by outlining each agent’s owner, core purpose, operational context, knowledge base, specific tasks, functional alignment, tool access and cooperation patterns. This documentation gives business stakeholders clear success criteria, while security teams use it as a threat model and as input to Forrester’s AEGIS framework to identify control gaps, missing guardrails and vulnerabilities, and to establish baselines against which to validate agent behavior.

Similar to employee onboarding, agents require explicit programming on compliance frameworks, data privacy restrictions, scope of work and organizational norms. They must understand cooperation boundaries, operational context, knowledge sources and collaboration patterns. Agencies already deploying agents may have some of this documentation in place; those just starting should bring business owners and security teams together to develop it.

Building a Secure Foundation for Autonomous AI

State and Local Government agencies stand at a critical inflection point. AI agents promise significant productivity gains across employee support, investigation assistance and first responder capabilities. Yet deploying these autonomous systems without appropriate security frameworks creates unacceptable risks for organizations managing citizen data and essential public services. The AEGIS framework provides a comprehensive approach to securing agentic AI before widespread deployment, enabling agencies to realize benefits while maintaining security postures that citizens expect.

Organizations should begin by reviewing Forrester’s AEGIS framework to understand how it maps to existing guidance such as the NIST AI RMF, the EU AI Act and the OWASP Top 10 for LLMs. Forming AI governance committees grounded in AEGIS principles helps establish organizational buy-in. Discovery processes identifying which departments are exploring AI agents enable targeted control implementation. Agencies that establish strong foundations now position themselves to adopt autonomous AI confidently and securely.

To explore the complete AEGIS framework and gain deeper insights into securing agentic AI for State and Local Government, watch Carahsoft’s full webinar featuring Forrester, “Full Throttle, Firm Control: Build Your Trust Strategy for Agentic AI.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Forrester, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

FedRAMP 20x: Modernizing Cloud Security Authorization Through Automation and Continuous Assurance

FedRAMP authorization has long required extensive documentation, static point-in-time assessments and timelines of 18–24 months. This approach has slowed innovation for Federal agencies seeking secure cloud solutions and for vendors pursuing Government contracts.

FedRAMP 20x reimagines authorization through automation, machine-readable evidence and continuous monitoring, shifting compliance from document-driven processes to data-driven assurance. It also reshapes how Federal agencies, Cloud Service Providers (CSPs) and Third-Party Assessment Organizations (3PAOs) collaborate to secure Government environments.

The Shift from REV 5 to 20x

Traditional FedRAMP authorization follows a linear, document-heavy process where CSPs write extensive System Security Plans (SSPs), undergo annual assessments and exchange static artifacts with 3PAOs. FedRAMP 20x maintains the same security requirements from National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 Revision 5 (REV 5) but transforms how evidence is validated. Instead of screenshots or single-moment spreadsheets, 20x uses logs, configuration files and automated integrations that reflect real-time security posture. This enables continuous assurance, with systems remaining audit-ready and controls validated through actual telemetry and configuration baselines.

The result is a more dynamic, risk-focused model that moves beyond top-down waterfall processes that often obscure security conditions.

Modernized Compliance

FedRAMP 20x requires robust compliance automation built on five pillars:

  1. Control normalization
  2. Engineering
  3. Infrastructure
  4. Evidence generation
  5. Reporting

Controls must be technically engineered into Continuous Integration/Continuous Deployment (CI/CD) pipelines, an approach often described as “compliance-as-code.” Supporting infrastructure must generate evidence in a reliable, machine-readable format such as NIST Open Security Controls Assessment Language (OSCAL) or JavaScript Object Notation (JSON) so CSPs, agencies and 3PAOs can share data rather than documents. This approach transforms compliance work from writing narratives and taking screenshots to building monitoring systems that continuously validate control effectiveness.
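To make the compliance-as-code idea above concrete, here is a minimal sketch of a pipeline step that validates one control and emits a machine-readable evidence record. The record layout, control check and resource names are hypothetical simplifications for illustration, not the actual OSCAL schema or any specific vendor's format.

```python
# Minimal compliance-as-code sketch: evaluate a control, emit evidence
# as JSON that CSPs, agencies and 3PAOs could share instead of documents.
import json
from datetime import datetime, timezone

def check_encryption_at_rest(volumes: list) -> dict:
    """Evaluate one control and return a timestamped evidence record."""
    failing = [v["id"] for v in volumes if not v.get("encrypted")]
    return {
        "control_id": "SC-28",  # NIST SP 800-53 Protection of Information at Rest
        "status": "pass" if not failing else "fail",
        "failing_resources": failing,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical configuration data pulled from a cloud API or IaC state.
volumes = [{"id": "vol-1", "encrypted": True},
           {"id": "vol-2", "encrypted": False}]
evidence = check_encryption_at_rest(volumes)
print(json.dumps(evidence, indent=2))
```

Run on every pipeline execution, a check like this keeps the system continuously audit-ready: the evidence reflects the environment as it is now, not as it was at the last annual assessment.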

While artificial intelligence (AI) tools are emerging as assistants, the foundation remains consistent instrumentation and automated evidence collection. Organizations must invest in platforms capable of real-time logging, automated vulnerability scanning, Application Programming Interface (API)-driven evidence collection and continuous control monitoring, moving beyond spreadsheets or basic ticketing systems to true automated Governance, Risk and Compliance (GRC).

Maintaining Security Standards

FedRAMP 20x reduces the barriers to entry for small CSPs. Under the traditional REV 5 model, many providers faced prohibitive costs and timelines, often waiting indefinitely for Joint Authorization Board (JAB) review without agency sponsorship. The 20x pilot eliminates this sponsor requirement and accelerates review: organizations using automation have achieved authorization in six months.

RegScale, FedRAMP 20x blog, embedded image, 2025

RegScale, leveraging its own platform with features like automated evidence collection and AI-assisted control validation, completed its SSP and evidence in approximately three weeks and achieved full authorization within six months of audit start. This acceleration does not weaken security; rather, continuous monitoring and real-time evidence provide greater assurance than annual snapshots.

Another benefit of the 20x approach is that the machine-readable evidence can be reused for other frameworks, enabling a “certify once and comply many” approach across:

  • System and Organization Controls 2 (SOC 2)
  • International Organization for Standardization (ISO) 27001
  • Cloud Security Alliance (CSA) Security, Trust, Assurance and Risk (STAR)

For cloud-native organizations already operating with infrastructure as code (IaC) and automated pipelines, 20x aligns Federal compliance with modern DevSecOps practices.

Cultural and Organizational Change Management

The greatest challenge with FedRAMP 20x is cultural, not technological. Many organizations already possess the necessary tools but continue to rely on manual processes built over 15–20 years. Shifting to automation requires replacing “no hope” environments, where compliance is viewed as endless documentation, with the recognition that more efficient, sustainable operations are both possible and necessary.

Teams must actively retrain themselves to think operationally rather than as checklist validators. The transition also requires breaking down silos between security and compliance teams, agencies and 3PAOs, ensuring all stakeholders rely on the same real-time telemetry instead of debating the meaning of outdated screenshots. Federal agencies must also educate risk owners and embrace new evidence formats and methodologies. Ultimately, this is as much an organizational transformation as a technical one.

Continuous Monitoring and Real-Time Risk Management

FedRAMP 20x redefines relationships between CSPs, agencies and 3PAOs by replacing periodic reviews with continuous monitoring and near real-time risk visibility. Instead of exchanging PDFs, stakeholders share dashboards, datasets and evidence repositories that all parties can access. Auditors can review assessments based on evidence collected minutes or hours ago rather than relying on outdated artifacts.

Continuous monitoring supports 20x by allowing agencies to track configuration drift, Plan of Action and Milestones (POA&M) status and control effectiveness at a regular cadence. The definition of “continuous” varies by control type; some controls require minute-by-minute validation, while policy controls may be validated quarterly or semi-annually.

For agencies, continuous assurance delivers better risk management capabilities, but only if they invest time in understanding how to interpret machine-readable formats such as OSCAL. Adoption varies, with some agencies already capable while others continue developing this capacity.

Moving Forward with Confidence

FedRAMP 20x is a strategic shift that aligns Federal authorization with modern DevSecOps, delivering faster innovation without reducing security standards. Since launching in March 2025, the pilot has processed 27 submissions and granted 13 authorizations, demonstrating scalability and viability.

With 20x, agencies gain improved risk visibility, reduced vendor timelines and access to innovative cloud solutions previously delayed by lengthy authorizations. However, success is not guaranteed. It requires adopting continuous assurance, investing in platforms that support machine-readable evidence and educating risk owners to interpret dynamic data. CSPs must centralize systems of record, instrument environments for continuous evidence collection and adopt standardized mappings that facilitate automation.  

The organizations that thrive will be those that use FedRAMP 20x as a motivator to replace outdated habits, engineer controls properly and embrace automation as an enhancement of, not a replacement for, human expertise.

Discover how FedRAMP 20x is transforming Federal cloud authorization by watching the webinar, “FedRAMP 20x in Motion: What Early Results Mean for Federal Agencies,” featuring insights from RegScale and the CSA.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including RegScale, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

How Snyk Helps Federal Agencies Prepare for the Genesis Mission Era of AI-Driven Science

The White House’s new Genesis Mission signals a major shift in how the Federal Government plans to accelerate discovery using AI, national lab computing power and massive scientific datasets. For agencies, this means a new wave of AI-enabled research programs, expanded public-private collaboration and a significant increase in the use of software, data pipelines and cloud resources to drive scientific missions. Along with this opportunity comes a simple truth: AI can only accelerate discovery if the software behind it is secure.

That’s where Snyk supports agencies—by enabling developers, researchers and mission teams to build secure software from the start, aligned to Secure by Design and modern Federal cybersecurity expectations.

Why the Genesis Mission introduces new security pressure for agencies

  • More data and more experimentation: Agencies will be unlocking and federating large datasets, many of which were never designed for AI-scale access. This increases exposure risk and requires tighter control over data lineage, permissions and software pipelines.
  • More partners in the loop: National labs, other Federal entities, commercial cloud providers, academia and industry vendors will work together under new shared platforms. That means expanded software supply chains and stricter expectations for transparency and assurance.
  • Faster development cycles: Scientific models, simulations, AI workflows and data-processing pipelines will move at an accelerated pace. Traditional security review processes won’t be able to keep up.
  • Higher stakes for misconfigurations: AI workloads rely heavily on containers, open source, infrastructure-as-code and cloud services. A single misconfiguration in a pipeline, cluster or library could compromise sensitive scientific work.

Federal agencies need secure-by-default pipelines that can scale with mission speed.

Four ways Snyk supports Federal agencies

1.  Secures software supply chains for AI, HPC and scientific workloads

Snyk gives agencies visibility into all components used in AI and research software, including open source libraries, containers and IaC templates. Snyk helps agencies identify vulnerable or risky components early, enforce approved library lists, produce SBOMs automatically and meet Federal supply chain expectations (Secure by Design, NIST SP 800-218, EO 14028 and more).

2.  Embeds security for CI/CD, model-training and data pipelines

Whether agencies run pipelines in cloud environments, HPC clusters or hybrid infrastructures, Snyk integrates directly into:

  • GitHub, GitLab and Bitbucket
  • Jenkins, GitHub Actions and CircleCI
  • Container build systems
  • AI/ML workflow orchestration tools

This ensures vulnerabilities, misconfigurations and secrets are caught before software reaches production environments or shared research platforms.

3.  Cloud and container security for AI compute systems

The Genesis Mission relies on secure computing—including cloud GPUs, containerized workloads, HPC clusters, research VMs and hybrid infrastructure. Snyk helps agencies detect misconfigurations across cloud infrastructure, secure container images powering AI workloads, scan infrastructure-as-code templates before deployment and protect credentials and secrets used in research pipelines.

4.  Practical “secure by design” implementation

Snyk meets developers and researchers inside the tools they already use by providing automated fix recommendations, IDE plug-ins for secure coding, policy enforcement for high-risk components and fast feedback loops that align with Agile R&D teams. This operationalizes Secure by Design in a way that won’t slow down experiments, model training or rapid prototyping.

Why this matters for Federal missions

The Genesis Mission is accelerating scientific discovery across:

  • Clean energy and grid modernization
  • Fusion and advanced nuclear research
  • Materials science and critical minerals
  • Biotechnology and health research
  • Quantum, semiconductors and microelectronics
  • Climate modeling and Earth science

These domains rely heavily on software, data and compute, and securing those systems is essential for mission success.

Snyk helps agencies build software that is secure by design, fully transparent and aligned with Federal AI safety expectations. With Snyk’s AI Security Platform, agencies gain end-to-end protection across code, dependencies, containers and AI pipelines, enabling trustworthy and compliant AI systems that can power the next generation of U.S. Government missions, exactly what the Genesis Mission requires.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Snyk, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Securing Federal Access: How Identity Visibility Drives Zero Trust Success

Federal agencies face mounting pressure to implement Zero Trust frameworks but often struggle with where to begin. The answer lies in understanding identity telemetry, the insights into who has access to what and how threat actors exploit identities to gain privilege and maintain persistence. Because threat actors increasingly steal credentials and pose as legitimate users, Federal agencies can no longer rely solely on detection tools that trigger alarms after attacks succeed. This shift demands a new approach to Zero Trust, one beginning with comprehensive visibility into the identity attack surface before implementing controls.

From Detection to Prevention

Federal agencies have historically relied on detection-based security tools like Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR) solutions to detect malicious activity. While still valuable, these reactive tools are inadequate on their own, as adversaries compromise both human and non-human credentials and operate undetected for extended periods. Using legitimate credentials, threat actors gain persistent access and escalate permissions while evading detection.

The missing component is proactive threat hunting that maps potential identity exposures before they are exploited. This requires aggregating identity data across the entire IT environment and analyzing how threat actors could leverage poor identity hygiene, such as overprivileged accounts, insecure Virtual Private Networks (VPNs), exposed passwords and secrets, blind spots in third-party access and dormant identities, to gain access to critical assets and data. Zero Trust relies on knowing exactly how identities function across the environment; without this visibility, agencies are essentially enforcing Zero Trust policies blindly, wasting time and money on controls that are not resilient against cyberattacks. Identity telemetry should guide agencies in building proactive identity security and mature Zero Trust capabilities.

The Fragmented Identity Visibility Problem

Federal environments span on-premises Active Directory (AD), multicloud environments, federated identity providers and numerous Software-as-a-Service (SaaS) applications. The resulting overlap and complex interactions across these environments are difficult to track, limiting end-to-end visibility into the hidden attack paths used for lateral movement and escalation.

These “unknown trust relationships” or “paths to privilege” stem from:

  • Identity provider misconfigurations replicating over-permissive access
  • Nested group memberships granting indirect privileges
  • Federation relationships enabling cross-domain escalation
  • Generic “all access” group rights elevating unprivileged users

These exposures exist between siloed systems and provide entry points for threat actors. Addressing this requires aggregating identity data, mapping cross-domain relationships and calculating the true privilege of human, non-human and AI-based identities. This exposes blind spots and transforms an unknowable attack surface into a manageable identity landscape.

True Privilege Calculation

Traditional privilege assessments focus on group membership and cloud role assignments but miss factors like nested groups, cloud application ownership, misconfigured identity providers and federation pathways. These elements often elevate an identity’s privilege far beyond what surface-level audits reveal.

BeyondTrust, Securing Federal Access blog, embedded image, 2025

True privilege calculation measures an identity’s effective and actual privilege across all connected systems and domains, including relationships, configurations and escalation pathways. For example, an identity that appears low-privileged in AD may federate into Identity and Access Management (IAM) roles and elevate its privilege. This visibility supports key Zero Trust decisions, such as:

  • Which access should be continuously verified
  • Where least privilege enforcement has gaps
  • Which accounts are most likely to be targeted
  • Where to place micro-segmentation boundaries

Given the scale and complexity of modern Federal environments, manual calculation is impossible. Automated solutions must continuously analyze permissions, relationships and identity provider configurations while mapping escalation paths. True privilege calculation transforms Zero Trust from theory into an actionable strategy that carries agencies from initial implementation to Zero Trust maturity.
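The core of true privilege calculation can be sketched as graph reachability: identities, groups and roles are nodes, and trust relationships (nested memberships, federation, role assumption) are edges. The example below is a hypothetical illustration; the identity names and edge data are invented, and a real solution would ingest this graph from AD, cloud IAM and identity provider APIs.

```python
# Illustrative true-privilege sketch: compute everything an identity
# can reach through trust relationships, not just what it holds directly.
from collections import deque

# Hypothetical trust edges: who or what each node can become or use.
EDGES = {
    "alice": ["helpdesk_group"],
    "helpdesk_group": ["it_ops_group"],    # nested group membership
    "it_ops_group": ["cloud_admin_role"],  # federation into a cloud IAM role
}

def true_privilege(identity: str) -> set:
    """Breadth-first traversal over trust edges from one identity."""
    reached, queue = set(), deque([identity])
    while queue:
        node = queue.popleft()
        for nxt in EDGES.get(node, []):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

# A surface-level audit sees only "helpdesk_group"; reachability
# reveals the full escalation path ending at "cloud_admin_role".
print(true_privilege("alice"))
```

The same traversal answers the Zero Trust questions above: any identity whose reachable set includes a privileged role is a candidate for continuous verification, tighter least privilege or a micro-segmentation boundary.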

Critical Attack Vectors

Dormant privileged accounts, often left active after personnel departures or reorganizations, retain elevated permissions long after their use ends. Threat actors frequently identify and reactivate these accounts to move laterally and maintain persistence using legitimate credentials. Effective identity hygiene requires:

  • Continuous monitoring of new dormant accounts
  • Cleanup of existing dormant or misconfigured accounts and standing privilege
  • Behavioral detection to flag unusual privilege escalation attempts or unexpected activity

Identity security cannot be a point-in-time exercise. Without visibility and a proactive approach, configurations drift and dormant accounts accumulate. Agencies must continuously identify dormant privileged accounts and immediately investigate if they suddenly become active, one of the strongest indicators of compromise. Continuous visibility transforms identity hygiene from a reactive, alert-based approach into actionable telemetry for proactive threat hunting against current and known attack risks.
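The reactivation indicator described above reduces to a simple check over identity telemetry. This is a hedged sketch: the field names, the 90-day dormancy threshold and the account data are assumptions for illustration, and a production detector would also weigh behavioral context.

```python
# Hypothetical dormant-account indicator: flag any account inactive past
# a threshold that suddenly shows new activity (a strong compromise signal).
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=90)  # assumed policy value

def is_suspicious_reactivation(last_seen: datetime, now: datetime) -> bool:
    """True when a long-dormant account becomes active again."""
    return (now - last_seen) > DORMANCY_THRESHOLD

now = datetime(2025, 6, 1)
# An account idle since January logging in again in June: investigate.
assert is_suspicious_reactivation(datetime(2025, 1, 10), now)
# An account active last week: normal activity.
assert not is_suspicious_reactivation(datetime(2025, 5, 25), now)
```

Running this check continuously against login telemetry, rather than during periodic audits, is what turns dormant-account cleanup from a point-in-time exercise into ongoing hygiene.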

The Expanding Identity Attack Surface

The identity attack surface extends far beyond human users to service principals, cloud workloads, Application Programming Interface (API) credentials and automated systems, collectively known as “non-human identities.” These accounts often have elevated privileges but lack safeguards like password rotation, Multi-Factor Authentication (MFA) or behavioral analytics, creating significant security gaps.

Agentic AI introduces new challenges. Unlike traditional service accounts, AI agents act autonomously based on their instructions, tools and knowledge sources. A seemingly low-privilege agent could escalate privileges by interacting with other agents, creating complex escalation chains. Understanding an AI agent’s effective capability, not just its assigned permissions, is essential.

AI and non-human identity risks come from interconnected relationships. An AI agent running as a cloud workload may access secrets, interact with privileged systems or execute commands across domains. True privilege calculation for these entities requires mapping downstream actions they could initiate. Federal agencies need governance designed for non-human identities and AI agents, including:

  • True privilege calculation of escalation paths
  • Comprehensive inventory across all systems
  • Monitoring of potential blast radius as AI adoption accelerates
  • Context and knowledge of AI use and where agents are being deployed
  • Visibility into AI agent instructions, tools and knowledge sources

Investing in identity visibility now prepares agencies for emerging challenges as AI adoption becomes more prevalent.

Federal agencies must secure hybrid environments against adversaries who exploit identities rather than technical vulnerabilities. The path forward requires shifting from reactive detection to proactive threat hunting, eliminating fragmented visibility, measuring true privilege across all domains, maintaining continuous identity hygiene and extending visibility to non-human identities and agentic AI. Identity telemetry provides the data foundation needed for Zero Trust maturity, showing agencies where and how to strengthen their security posture.

Discover how comprehensive identity visibility drives Zero Trust maturity by watching BeyondTrust and Optiv+Clearshark’s webinar, “Securing Federal Access: Identity Security Insights for a Zero Trust Future.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including BeyondTrust, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Better Together: How Eightfold.ai and Empyra Are Transforming Government Workforce Services

Proven Results:

  • 30% faster job placement (Washington, D.C.)
  • 36% increase in engagement among underserved populations
  • 65% increase in training module completions
  • 71% increase in job applications submitted
  • 30% faster reemployment for RESEA participants (Florida Department of Commerce)

State and Local Governments are rethinking the way they connect job candidates with meaningful employment. Eightfold.ai and Empyra have partnered to combine advanced AI-driven talent matching with configurable case management. Together, they deliver a unified, secure environment that helps agencies modernize operations, improve employment outcomes and provide a more efficient, personalized experience for both job seekers and employers.

AI-Driven Workforce Modernization

Eightfold.ai was built by former Google and Facebook engineers to be the world’s most intelligent talent matching platform. Drawing on more than a decade of global labor market data, its neural network goes beyond keyword searches, interpreting:

  • Skills
  • Roles
  • Qualifications

The platform continuously learns from interactions across job seekers, employers and case managers, moving agencies away from time-consuming resume screening toward a data-driven system that identifies talent by capability and aptitude.

Through its Career Navigator, Eightfold.ai provides:

  • Visual career pathways
  • Transferable skill identification
  • Gap analysis
  • Training from State-approved providers

This transforms the labor exchange into a dynamic environment that supports both immediate reemployment and long-term career mobility.

Integrated Case Management and Service Delivery

Empyra’s myOneFlow consolidates workforce and social service delivery into a single, configurable platform. By capturing data once and reusing it across workflows, the system reduces duplication and frees staff to focus on engagement rather than paperwork. Designed as a Commercial Off-The-Shelf (COTS), Workforce Innovation and Opportunity Act (WIOA)-ready system, myOneFlow includes Participant Individual Record Layout (PIRL) and performance reporting out of the box. As funding and requirements evolve, its flexible architecture allows agencies to tailor:

  • Forms
  • Eligibility rules
  • Intake processes

The platform streamlines the participant journey by automating:

  • Intake
  • Enrollment
  • Eligibility determination
  • Business rules to identify program fit
  • Referrals to partners for housing, education, training or employment resources

Participants can complete tasks and upload documents from any device via the mobile app. Beyond WIOA, myOneFlow also supports:

  • Apprenticeship management
  • Temporary Assistance for Needy Families (TANF)
  • Supplemental Nutrition Assistance Program (SNAP) tracking
  • Domestic-violence programs
  • Municipal grants

By consolidating these functions, myOneFlow gives agencies flexibility to manage multiple programs efficiently within one adaptive system.

“Better Together” Integration Between Eightfold.ai and Empyra

Together, Eightfold.ai and myOneFlow create a single front door for job seekers, case managers and employers. Unified identity management with Single Sign-On (SSO) and shared data models ensure information remains consistent across platforms.

Here’s how the integration works:

  • Participants register in myOneFlow
  • Their intake data automatically populates into Eightfold.ai
  • The AI engine generates skills assessments, job recommendations and career pathways
  • Applications, training and other activities sync back into myOneFlow

Case managers gain a real-time view of participant progress without manual entry, while employers benefit from accurate candidate matching and streamlined recruiting tools. Behind the scenes, Eightfold.ai and Empyra operate a coordinated support model and incorporate agency feedback into joint product enhancements.
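As a rough illustration of the handoff described above, the sketch below shows intake data flowing from a case-management record into a matching engine and recommendations flowing back. The function names, field names and API shape are hypothetical, not the actual myOneFlow or Eightfold.ai interfaces.

```python
# Hypothetical sketch of a case-management -> talent-matching sync.
# All endpoint and field names here are illustrative only.

def map_intake_to_profile(intake: dict) -> dict:
    """Translate an intake record into a talent-profile payload."""
    return {
        "external_id": intake["participant_id"],
        "name": intake["full_name"],
        "skills": intake.get("self_reported_skills", []),
        "work_history": intake.get("employment_history", []),
    }

def sync_participant(intake: dict, matching_api) -> dict:
    """Push intake data to the matching engine and pull recommendations back."""
    profile = map_intake_to_profile(intake)
    matching_api.upsert_profile(profile)  # register or update the profile
    recs = matching_api.get_recommendations(profile["external_id"])
    # Recommendations flow back into the case-management record, so case
    # managers see progress without manual re-entry.
    return {"participant_id": intake["participant_id"], "recommendations": recs}
```

Capturing the mapping in one place is what keeps the two systems' data models consistent as forms and eligibility rules change.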

Trust, Security and Compliance

Both platforms meet rigorous standards, including:

  • FedRAMP
  • TX-RAMP
  • System and Organization Controls 2 (SOC 2)
  • Department of Defense (DoD) Impact Level 4 (IL4)
  • International Organization for Standardization (ISO) 27001

They also adhere to evolving regulations, including the European Union (EU) AI Act, Texas Department of Information Resources (DIR) requirements and other State privacy laws.

myOneFlow enforces:

  • Role-based access controls
  • Audit logging
  • Deduplication safeguards

Building the Future of Workforce Modernization

Eightfold.ai and Empyra’s myOneFlow demonstrate what is possible when AI, automation and integration align with mission-driven goals. The integrated solution helps agencies:

  • Deliver faster services
  • Improve job matching accuracy
  • Reduce administrative burden
  • Strengthen engagement
  • Maximize limited resources

Workforce organizations can now create a more responsive, equitable and efficient system, empowering job seekers, supporting employers and advancing mission outcomes.

Watch the full webinar, “AI-Centric Innovation: Modernizing Workforce Agencies,” to see a demonstration of Eightfold.ai and Empyra’s integrated approach to workforce transformation.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Eightfold.ai and Empyra, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

From Pilot to Production: Operationalizing Healthcare GenAI in Secure Multicloud Environments

Healthcare organizations are under immense pressure from shrinking margins, tightening regulations, rising patient expectations and increasingly complex data environments. While generative artificial intelligence (GenAI) has emerged as a powerful tool, most healthcare systems still struggle to move from experimentation to measurable outcomes. Leaders are asking the same questions: Where do we start? How do we ensure security and compliance? How fast should the Return on Investment (ROI) appear?

The answer is not simply selecting a model; it is building a strategy and infrastructure that transforms AI from a promising pilot into an enterprise engine for clinical, operational and financial improvement.

Start With High-Impact Use Cases that Deliver Early ROI

The path to operationalizing GenAI begins with use cases that are narrow enough to implement quickly, but meaningful enough to prove value. Start where measurable gains are most attainable, such as document processing, contract review, claims analysis, compliance workflows and call center optimization.

One of the strongest early candidates is Protected Health Information (PHI) de-identification, where AI can accelerate research access while protecting privacy. Many organizations are also applying GenAI to claims review, using models to flag missing attachments, coding inconsistencies or errors that commonly drive costly denials. With first-pass denial rates hovering in the 17–25% range industry-wide, automating this analysis can generate immediate financial return.

These targeted wins build executive confidence, secure budget and create organizational momentum, which is critical before expanding to more complex clinical or patient-facing scenarios.

Build Trust by Grounding the Model in Your Own Data

Accuracy and trust determine whether healthcare AI is adopted or ignored. General-purpose models are not sufficient for healthcare, where language is deeply nuanced and context dependent. Instead, organizations should ground GenAI in their own governed data sources, such as Electronic Health Records (EHRs), Customer Relationship Management (CRM) platforms, care summaries, research documents or internal policies.

To achieve this, many leaders are adopting Retrieval-Augmented Generation (RAG) with vector databases, which allows models to pull precise information from internal systems in real time. Vector databases are a foundational accelerator, enabling faster, more accurate retrieval across structured and unstructured data. This approach delivers three business advantages:

  1. Higher accuracy and confidence in model responses
  2. Stronger control of PHI and sensitive data
  3. Traceability, which is essential for audits, appeals and clinical validation

Grounding the model in an organization’s own data turns GenAI from a creative tool into a trusted operational system.
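As a rough sketch of the RAG pattern described above, the toy example below ranks internal documents against a query and assembles a grounded prompt. The bag-of-words "embedding" stands in for a real vector database, and all names are illustrative rather than any specific product's API.

```python
# Minimal sketch of Retrieval-Augmented Generation over internal documents.
# A toy bag-of-words embedding substitutes for a real vector database.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    # Citing the retrieved passages in the prompt is what makes the model's
    # answer traceable for audits, appeals and clinical validation.
    context = "\n".join(f"[{i+1}] {d}" for i, d in enumerate(retrieve(query, docs)))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
```

In production, the embedding and retrieval steps would run against a governed vector store over EHR, CRM and policy content, but the overall flow is the same.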

Use a Secure Multicloud Strategy to Reduce Risk and Increase Agility


To operationalize GenAI responsibly, healthcare organizations should design for security, compliance and flexibility from day one. By separating PHI and non-PHI workloads, a multicloud strategy helps healthcare organizations:

  • Isolate sensitive data to minimize breach impact and simplify governance
  • Reduce lock-in risk and leverage the strengths of different cloud platforms
  • Tap into more innovative options, since each cloud offers unique AI tooling
  • Optimize cost and performance by matching workloads to the right environment

Multicloud design also supports stronger compliance postures by enabling auditability, identity controls, monitoring and bias/hallucination safeguards, all of which must be proven to regulators and accrediting bodies.

Avoid “Pilot Purgatory” and Build a Path to Production

Many healthcare AI programs fail not because the technology underperforms, but because the organization never assigns ownership or a path to scale. To prevent “pilot purgatory,” pilots that drag on indefinitely without measurable outcomes, organizations should:

  • Create a defined production roadmap before the pilot begins
  • Empower a cross-functional AI Center of Excellence (COE) to own outcomes
  • Secure both clinical and administrative stakeholders
  • Treat GenAI as an enterprise capability, not a one-off project

This shift enables the same investment to support multiple use cases, expanding impact while lowering cost per interaction over time.

Continuously Measure, Optimize and Expand

An operational GenAI program is never “set it and forget it.” It is important to continuously track Key Performance Indicators (KPIs) to guide optimization and justify expansion. Recommended KPIs include:

  • Cost per interaction
  • Accuracy and confidence
  • Time saved per task or workflow
  • Time to response (latency and model speed)
  • User satisfaction (providers, staff and patients)

By evaluating these metrics regularly, healthcare organizations can expand from early wins to enterprise scale, from research and development to patient support, revenue cycle, compliance and beyond.
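As a minimal illustration, the KPIs listed above could be computed from interaction logs along these lines; the field names and metric definitions here are hypothetical and would be tuned per agency and use case.

```python
# Sketch of rolling KPI computation from GenAI interaction logs.
# Field names and metric definitions are illustrative only.

def compute_kpis(interactions: list[dict]) -> dict[str, float]:
    """Aggregate per-interaction records into program-level KPIs."""
    n = len(interactions)
    if n == 0:
        return {}
    return {
        "cost_per_interaction": sum(i["cost"] for i in interactions) / n,
        "avg_latency_s": sum(i["latency_s"] for i in interactions) / n,
        # Fraction of interactions judged correct by reviewers or users.
        "accuracy_rate": sum(i["correct"] for i in interactions) / n,
        "avg_time_saved_min": sum(i["time_saved_min"] for i in interactions) / n,
    }
```

Tracking these on a regular cadence is what turns a pilot's anecdotes into the evidence needed to justify expansion.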

Align People, Data and Infrastructure For AI Success

Technology alone is not the determining factor of AI success in healthcare; alignment is. Success requires a shared vision from leadership, responsible data groundwork, a secure multicloud foundation and continuous measurement to maintain trust and value. With the right approach, GenAI can improve patient satisfaction, strengthen trust, accelerate research and innovation, reduce administrative burden and deliver measurable ROI in weeks rather than years.

Carahsoft and John Snow Labs help healthcare leaders accelerate this journey, combining secure infrastructure, domain-specific healthcare AI and proven deployment models. To explore how your organization can operationalize GenAI safely and effectively, watch the full webinar, “Lessons Learned from Harnessing Healthcare Generative AI in a Hybrid Multi-Cloud Environment.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including John Snow Labs, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

How AI-Powered Records Management Transforms Government Operations from Reactive to Proactive

Government agencies today must manage an unprecedented volume of digital documents. As digital transformation accelerates across Federal, State and Local agencies, the challenge is not just managing more content, it is extracting actionable intelligence while maintaining compliance, security and operational efficiency. Artificial intelligence (AI) has transformed enterprise records management, replacing manual processes with automated, predictive systems that improve decision making and resource allocation across the mission.

AI-Powered Auto-Classification for Document Management

Effective classification is the foundation of records management, and AI has transformed this traditionally complex process. Modern AI models can accurately classify structured documents like invoices or purchase orders with as few as ten training examples. This represents a major improvement over legacy systems that required zonal Optical Character Recognition (OCR) configuration, separator pages and precise layout specifications.

AI models employ multiple techniques, including computer vision, text extraction and contextual reasoning, to identify document types with high confidence. Unlike older pattern-matching tools, today’s AI adapts to variations in structure and format, making classification scalable for agencies managing thousands of document types across different departments.

Training has also become more accessible. Agencies can simply label documents, point the AI to those examples and generate a working classification system. Accuracy improves over time through human review, and confidence scores allow agencies to set thresholds and route low-confidence results to human reviewers.

Accurate classification directly impacts record retention, access control and content discovery. Without it, employees cannot find necessary documents, retention schedules are misapplied and access permissions become inconsistent. Robust AI-powered classification at ingestion ensures downstream processes function as intended.
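The confidence-threshold routing described above can be sketched as follows. The classifier output format, threshold value and status labels are illustrative, not the actual API of VisualVault or any specific platform.

```python
# Sketch of confidence-threshold routing for auto-classification.
# The prediction format and threshold are illustrative only.

def route_document(doc_id: str, predictions: dict[str, float],
                   threshold: float = 0.85) -> dict:
    """Accept the top label if confident enough, else queue for human review."""
    label, confidence = max(predictions.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return {"doc_id": doc_id, "label": label, "status": "auto-classified"}
    # Low-confidence results go to a reviewer; the reviewer's correction can
    # then be fed back as a new training example to improve accuracy over time.
    return {"doc_id": doc_id, "label": label, "status": "needs-review"}
```

Agencies tune the threshold per document type: a high bar for records driving retention or access decisions, a lower one for low-risk correspondence.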

Intelligent Data Extraction from Structured and Unstructured Documents

Once documents are classified, agencies must extract meaningful information, an area where AI delivers transformative capabilities. Modern machine learning models locate key-value pairs anywhere on a document, using contextual understanding rather than fixed positions or label formats. AI can also answer natural-language queries, mirroring human logic. If a person can explain how they would find a piece of information, that logic can be written as a prompt for the model.

These capabilities work across structured and unstructured formats. Work that previously required specialized staff and years of experience can now be configured with simple prompts. Confidence scoring ensures accuracy. When the model is uncertain, items are routed to human reviewers. This combines automation’s speed and consistency with human judgment where needed.

For Government agencies, AI extraction improves compliance and reporting. Licensing applications, permit requests, inspection reports and countless other documents can be automatically processed, with extracted data populating systems of record and triggering workflows. Information once locked in PDFs or paper becomes structured, searchable and actionable.

AI-Driven Deduplication and Data Quality Management


Duplicate data is a productivity drain and a compliance risk. Redundant documents accumulate quickly across forwarded emails, multiple repositories and inconsistent processes. This creates unnecessary work, consumes storage and complicates compliance with data retention requirements.

Legacy deduplication relied on hash matching, but this fails to detect most real-world duplicates. AI-based deduplication analyzes document classifications and extracted metadata to determine true duplicates based on agency-defined rules. If the key elements match according to those rules, the system flags the items as duplicates regardless of differences in headers or formatting.

This content-based deduplication reduces storage costs, simplifies retention compliance and minimizes cybersecurity exposure. Retaining unnecessary data increases legal risk during litigation and discovery and expands the attack surface for cyber threats. AI allows agencies to retain only necessary data, reducing operational and security liabilities.
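Rule-based duplicate detection over extracted metadata might look like the sketch below; the key fields and normalization rules are hypothetical stand-ins for agency-defined rules.

```python
# Sketch of content-based deduplication: two records are duplicates when the
# agency-defined key fields match after normalization, regardless of header
# or formatting differences. Field names are illustrative only.

def dedup_key(meta: dict, key_fields: tuple[str, ...]) -> tuple:
    """Build a comparison key from normalized extracted metadata."""
    return tuple(str(meta.get(f, "")).strip().lower() for f in key_fields)

def find_duplicates(records: list[dict],
                    key_fields=("doc_type", "case_number", "issue_date")):
    """Return (kept_id, duplicate_id) pairs for records sharing a dedup key."""
    seen, dupes = {}, []
    for rec in records:
        key = dedup_key(rec, key_fields)
        if key in seen:
            dupes.append((seen[key]["id"], rec["id"]))  # (kept, duplicate)
        else:
            seen[key] = rec
    return dupes
```

Because the comparison runs on extracted metadata rather than raw bytes, a forwarded copy with a different email header still matches the original.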

Enhanced Workflow Automation with Predictive Analytics

High-quality, classified and extracted data unlocks the full value of predictive analytics, enabling Government agencies to shift from reactive problem-solving to proactive planning. This capability uses historical data to predict outcomes, such as numeric values, binary decisions or multiclass classifications.

Platforms like VisualVault allow agencies to train predictive models without data science expertise. Professional services teams configure the models, demonstrate how they work and train agency employees to manage them.

Public sector agencies already use predictive analytics to forecast safety incidents at licensed facilities. Historical inspection data, comprising conditions, violations and corrective actions, allows models to identify facilities with a high probability of future serious events. When inspections reveal patterns associated with increased risk, inspectors and licensing officials are automatically alerted, enabling early intervention.
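The forecasting workflow described above can be sketched in miniature: estimate how often each violation type has preceded a serious event, then score facilities by their current violations. This frequency-based estimate is a toy stand-in for a trained model, and the field names are hypothetical.

```python
# Toy sketch of risk scoring from historical inspection data. A real
# deployment would use a trained predictive model; this frequency-based
# estimate only shows the shape of the workflow.

def fit_violation_rates(history: list[dict]) -> dict[str, float]:
    """Estimate P(serious event | violation type) from past inspections."""
    counts, events = {}, {}
    for rec in history:
        for v in rec["violations"]:
            counts[v] = counts.get(v, 0) + 1
            events[v] = events.get(v, 0) + (1 if rec["serious_event"] else 0)
    return {v: events[v] / counts[v] for v in counts}

def facility_risk(violations: list[str], rates: dict[str, float]) -> float:
    """Combine per-violation rates into one score, assuming independence."""
    no_event = 1.0
    for v in violations:
        no_event *= 1.0 - rates.get(v, 0.0)  # P(no event) from this violation
    return 1.0 - no_event
```

Facilities whose score crosses an agency-set threshold would trigger the automatic alert to inspectors and licensing officials.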

Predictive analytics also strengthens performance management. Agencies can compare their metrics against industry norms, seeing where they stand within their sector. This supports investment decisions and enables precise tracking of improvement outcomes.

Agencies should focus on automating controls that meaningfully reduce risk, not simply increasing the percentage of automated controls. High-impact controls should be prioritized for automation and predictive monitoring to maximize security and operational benefits.

For decision makers, predictive analytics delivers the context and accuracy needed to make fast, informed decisions across claims, vendor management, resource allocation and strategic planning.

Digital Transformation as Organizational Necessity

Despite rapid technological advancement, human expertise remains essential. AI systems are designed to operate behind the scenes and do not require users to understand machine learning (ML) concepts. Small teams define the required outcomes (what must be classified, what data must be extracted and what predictions will improve decisions), while professional services teams configure the system accordingly.

AI adoption does not inherently reduce headcount. Historically, technology shifts transform jobs rather than eliminate them. Workflows move from manual tasks like sorting documents to higher-value work such as analysis, decision making and innovation. Employees focus on defining requirements, reviewing AI outputs and applying human judgment where it adds value.

The Measurable Value of AI Implementation

Agencies can begin their journey by identifying their key performance indicators and the business outcomes they want to improve:

  • What pain points cause the most friction?
  • Where do backlogs accumulate?
  • Which processes create the most risk?

This ensures implementation is tied to measurable outcomes. AI success depends on clear requirements, proper process, staff training and strong governance. Agencies should adopt AI incrementally, starting with high-value use cases that deliver quick wins, then expanding into more complex workflows and predictive models as confidence grows.

Digitization mandates and the rise of generative AI have accelerated content creation beyond expectations, driving significant growth for platforms like VisualVault. The agencies that succeed will be those that embrace this shift and modernize now.

Watch VisualVault’s webinar “Employing AI to Bring Order and Value to Enterprise Records Management” to explore detailed demonstrations of AI-powered classification, extraction and predictive analytics capabilities that can transform your agency’s records management operations.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including VisualVault, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

The Process-Oriented View: CISO Visibility During an OT Attack

When a cyber incident occurs in an operational technology (OT) environment, understanding what is actually happening can become difficult. Control systems may continue to display normal readings even if attackers have begun manipulating logic or feedback within Programmable Logic Controllers (PLCs) or Human-Machine Interfaces (HMIs). Operators see stable values while underlying conditions start to diverge from what is shown on screen.

If process data at the controller level is falsified, every connected monitoring and cybersecurity tool reflects the same false picture. At that point, the Chief Information Security Officer (CISO) and operations team lose reliable visibility into the physical process that underpins production and safety.

The choices that follow each carry risk:

  • Shutting down operations may prevent escalation but could also cause costly downtime if the intrusion is contained to the network.
  • Continuing to operate may expose critical assets to damage if the manipulation extends to the process layer.

A recent cyber event at Norway’s Risevatnet dam illustrates this limitation. During the incident, operators lost visibility into parts of the control system, yet intrusion detection and monitoring tools reported no anomalies. The breach was discovered only when on-site personnel noticed irregular behavior in equipment operations.

This outcome speaks to a broader issue in OT cybersecurity. Network-based detection tools can confirm whether communication channels are functioning, but they cannot independently verify whether the process data itself is genuine. If attackers manipulate information within PLCs or HMIs, every connected dashboard, alarm and analytic layer reflects the same falsified values. In effect, the system becomes blind at the moment visibility is most needed.

The Risevatnet case shows how quickly a cybersecurity failure can become an operational one. When control room data appears normal, incident response slows and decisions depend on incomplete or misleading information. Without a way to validate what is happening at the physical process level, teams must rely on manual observation or external cues, a reactive approach that offers no real protection in complex or distributed environments.

SIGA’s SigaML², available through Carahsoft, addresses this visibility gap by providing an independent, out-of-band view of the industrial process. The system collects unfiltered electrical signals directly from field I/Os (data that cannot be spoofed or altered) and applies multi-level analytics across Purdue Levels 0–4 to detect anomalies and false-data injections in real time.

Its components work together to create an evidence-based view of the process:

  1. SigaGuard sensors capture raw electrical data directly from equipment.
  2. SigaGuardX software correlates information across Levels 0–4 to identify inconsistencies and possible manipulations.
  3. S-PAS simulation tools allow cybersecurity and operations teams to rehearse attack scenarios and refine incident response playbooks.

These capabilities give CISOs and plant operators verifiable insight during an active incident, helping determine whether an event is operational or cyber in nature and guiding containment or recovery actions.
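The core idea of out-of-band validation can be illustrated conceptually: compare the values the control system reports against independently measured Level 0 signals and flag divergence. The signal names, tolerance and comparison logic below are illustrative, not SIGA's actual implementation.

```python
# Conceptual sketch of out-of-band process validation: flag signals where
# the HMI-reported value diverges from an independently measured raw signal.
# Signal names and the tolerance value are illustrative only.

def detect_false_data(reported: dict[str, float],
                      measured: dict[str, float],
                      tolerance: float = 0.05) -> list[str]:
    """Return signals whose reported value diverges beyond tolerance."""
    anomalies = []
    for signal, raw in measured.items():
        shown = reported.get(signal)
        if shown is None:
            continue  # no reported counterpart to validate against
        base = max(abs(raw), 1e-9)  # avoid division by zero near 0
        if abs(shown - raw) / base > tolerance:
            anomalies.append(signal)  # possible false-data injection
    return anomalies
```

Because the measured values come from the electrical layer rather than the control network, a compromised PLC or HMI cannot make both sides of the comparison agree.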

Regulatory frameworks including Network and Information Security Directive 2 (NIS2), Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) and the latest National Institute of Standards and Technology (NIST) guidance highlight the importance of process-level monitoring and validation.

As oversight expands, CISOs and plant operators are expected to provide verifiable evidence of what occurred during an event, beyond network logs or alarms. Meeting that requirement depends on having data sources that remain trustworthy even when control networks are compromised.

SigaML² provides that capability, giving security and operations teams a direct, unaltered view of the physical process when clarity matters most.

Explore how SIGA’s cyber-physical security solutions empower CISOs with greater visibility during OT attacks. Visit Carahsoft’s SIGA solutions page to discover how your agency can enhance its infrastructure resilience.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including SIGA, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Building Sustainable Automation: How Government Agencies Can Scale IT Operations for the AI Era

Despite investing in numerous automation tools, Government agencies still struggle to achieve true operational efficiency. The issue is not a lack of technology, but the need to better align organizational processes with automation strategies. Agencies often find that automation scattered across teams does not equate to automation at scale.

For State and Local Government agencies navigating budget constraints, workforce transitions and mounting pressure to adopt artificial intelligence (AI), understanding how to make automation sustainable is now mission critical.

Understanding the Foundation

The most effective automation transformations begin not with technology selection but with process evaluation. Agencies that achieve lasting results recognize that automation amplifies existing workflows, accelerating efficient processes while exposing areas in need of standardization. The key lies in establishing organizational readiness before scaling solutions.

Experience shows that technical excellence alone does not guarantee adoption. Many organizations implement advanced automation tools only to see them underutilized because processes were not standardized first. This pattern repeats across ticketing, project management and AI initiatives when solutions are deployed before process design. Sustainable change requires equal focus on culture, workflow and collaboration.

The distinction between organizational and technical capability becomes clear during initiatives like enterprise-wide patching. While patching might appear technically simple, it requires coordination across teams, standardized processes and consistent execution. When approached strategically, patching strengthens structures and communication across departments.

Moving Beyond Linear Scaling

Traditional methods for managing IT complexity have centered on workforce expansion, but modern infrastructure requires new thinking. As organizations add personnel to manage new systems, coordination overhead grows, reducing visibility and collaboration, which then drives additional staffing needs. This challenge extends beyond budgets. Larger teams face higher coordination demands, and IT professionals often overlook their time as an organizational resource until capacity constraints emerge. The question is not just about staffing; it is about designing systems that scale efficiently.

For Government agencies, this issue is especially pressing. Retirements and limited hiring flexibility leave positions unfilled, putting institutional knowledge at risk and resulting in expanding workloads for current employees. In this environment, automation becomes a strategic enabler for maintaining service levels and mission delivery. Manual processes scale linearly, while infrastructure complexity grows exponentially. Centralized automation helps break this cycle by handling routine operations, freeing staff to focus on work that demands human expertise.

Creating Connected Workflows

Sustainable automation strategies move beyond isolated, team-specific implementations toward centralized platforms that enable consistent workflows across the organization. Many agencies have distributed automation capabilities, where infrastructure teams automate provisioning, security teams automate compliance validation and network teams automate configuration, but these workflows often lack seamless integration.


A single application deployment spans multiple domains, such as provisioning, networking, security scanning, compliance validation and monitoring. When automation operates independently, staff must still coordinate manual handoffs between automated steps. According to Conway’s Law, organizations design systems that reflect their communication structures; fragmented communication results in fragmented architecture.

Centralized platforms address this by establishing shared, standardized automation for common tasks. Instead of multiple teams maintaining separate scripts, one validated and documented process can serve all. This approach enhances auditability, improves consistency, enables scalable growth and eliminates redundant development. Updates to shared workflows require modifying a single authoritative source rather than tracking changes across multiple implementations.

Importantly, centralization is as much about culture and process as technology. Success depends on clear communication of the value of standardization, demonstrating tangible benefits and building trust that centralized approaches will serve all teams effectively. When alignment is achieved, automation platforms reach their full potential, transforming disconnected efforts into unified, scalable operations.

Building the Foundation for Advanced Technologies

The growing interest in AI has created momentum for agencies to explore new solutions, but success requires careful groundwork. Agencies realize the greatest benefits from AI when they have first established stable, standardized automation foundations. MIT research shows that 95% of enterprise AI solutions encounter challenges not because of model quality but due to integration difficulties and organizational readiness. Effective AI deployment depends on how well technology integrates with existing workflows.

Many agencies have expanded infrastructure incrementally, developing complex architectures held together by manual processes and specialized expertise. Deploying AI on such foundations is difficult. AI cannot effectively optimize systems when the underlying processes lack consistent automation. In practice, agencies deploying AI to optimize Customer Relationship Management (CRM) operations or automate incident response achieve better results when data and workflows are standardized. This consistency enables organizations to act confidently on AI-driven insights.

Building AI readiness involves working backward from AI’s requirements: integrated systems that share data reliably, standardized processes that AI can learn from and consistent execution that produces trustworthy patterns. Agencies that mature their automation capabilities create the foundation AI needs to succeed, significantly improving the likelihood of achieving meaningful results from AI investments.

Partnering for Success

Achieving sustainable automation is a progressive journey best supported by experienced partners. Leading strategies emphasize a “crawl, walk, run” approach:

  1. Start with a manageable scope
  2. Expand systematically
  3. Build organizational capability over time

This measured progression ensures transformation occurs sustainably for the teams implementing and maintaining these systems.

Many agencies are undertaking comprehensive automation for the first time, making guidance from experienced organizations like Red Hat particularly valuable. Effective partnerships emphasize knowledge transfer over dependency, helping agencies build autonomous, capable teams rather than relying on long-term external support.

The results of this approach are measurable. Red Hat customers have achieved 50% faster network provisioning, 65% reductions in certain provisioning activities and 67% improvements in other operational areas, freeing staff for innovation and strategic initiatives. These gains also reduce unplanned downtime and improve the overall quality of life for IT teams.

This journey addresses multiple organizational objectives simultaneously. Leadership achieves cost optimization and stronger security, while practitioners gain time, efficiency and better work-life balance. Sustainable automation delivers across these dimensions because the same standardization that drives efficiency also enhances security and empowers staff to focus on meaningful challenges.


Government agencies have reached a pivotal moment where growing infrastructure complexity demands a more evolved approach to IT operations. The path forward lies in fundamentally integrating automation into organizational processes and culture. By prioritizing standardization, embracing centralization and partnering for sustainable transformation, agencies can develop scalable automation strategies that prepare their organizations to leverage emerging technologies like AI. To discover proven strategies for building sustainable automation foundations that prepare your agency for advanced technology adoption, watch Red Hat’s webinar, “The Backbone of Modern Government: Sustainable Automation at Scale.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Red Hat, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Billington CyberSecurity Summit: AI Takes Center Stage

Premier U.S. Government cyber conference previews AI on offense, on defense and as a target

  • While adversaries can boost the quality and volume of attacks with artificial intelligence (AI), defenders will apply AI to counter attacks with predictive and proactive defenses.
  • The advent of Agentic AI will accelerate this trend and provide more avenues for attack, but defenders will always have the advantage by being able to train AIs with proprietary information and use them to identify vulnerabilities before attackers do.
  • The transition to post-quantum cryptography will be an industry-wide heavy lift, with extensive rewriting of code to meet post-quantum standards.

Recently, I had the opportunity to share some of my experience and insights at the Billington CyberSecurity Summit in Washington, D.C. Moderated by Chris Townsend, Global Vice President of Public Sector at Elastic, our panel session, “The Future of Cyber Threat: Anticipating Threat Actors’ Next Steps,” explored how threat actors are evolving and what organizations can do now to prepare. Not surprisingly, AI was a hot topic. We also discussed quantum computing, emerging threats and the cybersecurity staffing shortage.

How Attackers Will Leverage AI

Attackers are already using AI to power their attacks, but it is important not to over-sensationalize AI’s impact: the proportion of AI-driven attacks is still quite small relative to the overall volume of malicious activity we are seeing. However, we expect that proportion to grow quickly.

One of the main ways attackers are using it now is to create phishing materials, because it addresses a weak point for many threat actors, who often are not native English speakers. Attacks that are technically sophisticated can fail because they begin with a spear phishing email whose spelling or grammar is wrong. Large Language Models (LLMs) solve that problem brilliantly because if there is one thing they are good at, it is creating plausible narratives in perfect English.

The other area where we see attackers using AI is automating their work. We have already documented examples of code that appears to have been written by an AI.

In the short term, AI will not enable adversaries to do anything new, but we expect it to enhance the quality and volume of their attacks. AI is lowering the entry bar for threat actors. They do not even need to know how to code anymore. Naturally, the number of attacks will begin to go up.

In the medium term, the arrival of Agentic AI is likely to accelerate malicious activity levels, since agents can act autonomously, further minimizing the level of input needed from attackers.

We have already done some research on how agents could be abused and proven that they can already be used to carry out a basic spear phishing attack and deliver malicious code to a target. Agents are still in their infancy, and it is only a matter of time before they become capable of carrying out more sophisticated attacks with minimal instruction.

Preparing For the Quantum Era

The advent of quantum computing presents another significant challenge for cybersecurity. Quantum computers have the potential to break current encryption standards, making it imperative for organizations to transition to post-quantum encryption algorithms.

Adversaries are already preparing for this shift. The “harvest now, decrypt later” strategy involves stealing encrypted data today with the intention of decrypting it once quantum computing becomes viable.
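One common way to reason about this timing risk is Mosca's inequality: if the number of years data must remain secret plus the years needed to migrate to post-quantum cryptography exceeds the years until a cryptographically relevant quantum computer exists, data harvested today is already exposed. A minimal sketch, with all year values hypothetical:

```python
# Mosca's inequality, a simplified "harvest now, decrypt later" risk check.
# If x (secrecy shelf life) + y (migration time) > z (time to a capable
# quantum computer), data encrypted today is at risk. Year values are
# illustrative assumptions, not forecasts.

def at_risk(shelf_life_years: float, migration_years: float,
            years_to_quantum: float) -> bool:
    """Return True when x + y > z, i.e. secrets outlive current encryption."""
    return shelf_life_years + migration_years > years_to_quantum

# Example: records must stay confidential for 10 years, migration takes 5,
# and a capable quantum computer is estimated to be 12 years away.
print(at_risk(10, 5, 12))  # True: the migration clock is already running
```

The practical takeaway is that long-lived sensitive data makes the quantum threat a present-day problem, regardless of when quantum computers actually arrive.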

This process of transitioning to post-quantum encryption is not without its challenges. Decades of work have gone into refining and protecting the implementation of existing encryption methods, and we now face the task of revising and rewriting code using new, post-quantum standards. This will inevitably introduce a new generation of bugs, but we will have the benefit of AI to mitigate them.

It Does Not Stop Here

Conferences such as Billington are essential as we navigate this complex landscape. The summit embodies the Public and Private Sector collaboration that will be key to realizing better cyber defense outcomes moving forward. Together, with partners like Carahsoft delivering mission-critical industry expertise to U.S. Federal and Public Sector agencies, we can anticipate and counter the next generation of cyber threats, ensuring the safety and resilience of our digital ecosystems.

Learn more about how industry icons like Symantec and Carbon Black are putting AI on the front lines of cybersecurity.

Want to learn how Symantec, Carbon Black and Carahsoft can strengthen your cybersecurity posture? Contact us at Broadcom@Carahsoft.com for more information.


This post originally appeared on security.com, and is re-published with permission.