As artificial intelligence (AI) evolves from simple chatbots to autonomous agents capable of making independent decisions, State and Local Government agencies face a fundamental shift in cybersecurity requirements. Recent research shows 59% of State and Local Government respondents report already using some form of generative AI (GenAI), with 55% planning to deploy AI agents for employee support within the next two years. Yet this rapid adoption brings unprecedented security challenges. Because AI agents are designed to pursue goals autonomously, even adapting when security measures block their path, Chief Information Security Officers (CISOs) responsible for safeguarding Government networks must rethink traditional defenses and embrace a new security paradigm.
The Emergence of Agentic AI and Its Unique Security Challenges
AI agents represent a significant departure from the GenAI tools many agencies currently use. While traditional Large Language Models (LLMs) respond to prompts and return information such as a support chatbot, AI agents and agentic systems are autonomous software programs that can plan, reflect, use tools, maintain memory and collaborate with other agents to achieve specific goals. These capabilities make them powerful productivity tools, but they also introduce failure modes that conventional software simply does not have. Unlike deterministic systems that crash when something goes wrong, AI agents can fail silently through collusion, context loss or corrupted cognitive states that propagate errors throughout connected systems. Research examining the real-world performance of AI agents found that single-term tasks had a 62% failure rate, with success rates dropping even further for multi-term scenarios.
When Veracode examined 100 LLMs performing programming tasks, these systems introduced risky security vulnerabilities 45% of the time. For State and Local agencies handling sensitive citizen data, managing critical infrastructure or supporting public safety operations, these error rates demand careful attention within robust security frameworks designed specifically for autonomous systems.
The New Security Paradigm: From Human-Centric to Agent-Inclusive Workforce Protection
AI agents, the newest coworker, amplify insider threats by combining human-like autonomy with capabilities that exceed human limitations. While employees work within bounded motivation and finite skills, AI agents possess boundless motivation to achieve goals, uncapped skills that continuously improve and infinite willpower, constrained only by computational capacity. They will not simply make a single attempt to access a file, get blocked due to a lack of permissions, get frustrated and go home for the day the way an employee might; they will persistently pursue objectives, potentially finding novel ways around security controls.
This transformation fundamentally changes the attack surface agencies must protect. Data breaches continue to impose significant financial and operational strain across the public sector, with many state and local organizations reporting cumulative annual costs that reach into the millions. AI agents and agentic systems collapse traditional security models by operating as autonomous workforce members who interact with systems, access data and make decisions without direct human oversight. They can be compromised through threats specific to agentic AI, such as goal and intent hijacking, memory poisoning, resource exhaustion or excessive agency that can lead to unauthorized actions, all in pursuit of achieving programmed objectives. For Government agencies managing limited security budgets while protecting essential citizen services, this exponential increase in potential attack vectors demands proactive frameworks rather than reactive responses.
The AEGIS Framework: A Six-Domain Approach to Securing Agentic AI

Forrester’s Agentic AI Enterprise Guardrails for Information Security (AEGIS) framework provides a comprehensive approach to helping CISOs in securing autonomous AI systems across six critical domains.
Governance, Risk and Compliance (GRC) establish oversight functions and continuous monitoring capabilities. Identity and Access Management (IAM) address the unique challenge of agent identities that combine characteristics of both machine and human identities. Data Security focuses on classifying data appropriately, implementing controls for agent memory and considering data enclaves and anonymization from privacy perspectives.
Application Security evaluates risks across the entire software development lifecycle (SDLC), implements Development, Security and Operations (DevSecOps) best practices, assesses the software supply chain and uses adversarial red team testing to validate safety and security controls. This domain focuses on embedding telemetry that gives security teams visibility into agent behavior and decision making. Threat Management ensures logs are accessible to security operations center analysts, enabling detection of behavioral anomalies and supporting forensic investigations. Zero Trust Architecture (ZTA) principles apply such as implementing network access layer controls for agent workloads, continuous validation of the agent’s runtime environment and monitoring of agent to agent communication.
Underlying the framework are three core principles:
- Least Agency extends least privilege to focus on decisions and actions, ensuring agents have only the minimum set of permissions, capabilities, tools and decision making necessary to complete specific tasks.
- Continuous Risk Management replaces periodic audits with ongoing evaluation of data, model and agent integrity.
- Securing Intent requires organizations to understand whether agent actions are malicious or benign, intentional or unintentional, enabling proper investigation when failures occur.
Practical Implementation: Agent Onboarding and Governance
Forrester’s “Agent on a Page” concept provides a practical tool for providing structure, consistency and alignment of AI agents to business goals before activation, by outlining each agent’s owner, core purpose, operational context, knowledge base, specific tasks, functional alignment, tool access and cooperation patterns. This documentation gives business stakeholders clear success criteria, while security teams use it as a threat model and input into Forrester’s AEGIS framework to identify gaps in controls, missing guardrails, vulnerabilities and establish baselines to validate agent behavior against.
Similar to employee onboarding, agents require explicit programming on compliance frameworks, data privacy restrictions, scope of work and organizational norms. They must understand cooperation boundaries, operational context, knowledge sources and collaboration patterns. Agencies already deploying agents may have some of this documentation; those starting should collaborate between business owners and security teams to develop these frameworks.
Building a Secure Foundation for Autonomous AI
State and Local Government agencies stand at a critical inflection point. AI agents promise significant productivity gains across employee support, investigation assistance and first responder capabilities. Yet deploying these autonomous systems without appropriate security frameworks creates unacceptable risks for organizations managing citizen data and essential public services. The AEGIS framework provides a comprehensive approach to securing agentic AI before widespread deployment, enabling agencies to realize benefits while maintaining security postures that citizens expect.
Organizations should begin by reviewing the Forrester’s AEGIS framework to understand how it maps to existing compliance requirements such as NIST AI RMF, the EU AI Act and OWASP Top 10 for LLMs. Forming AI governance committees using AEGIS principles help establish organizational buy-in. Discovery processes identifying which departments are exploring AI agents enable targeted control implementation. Agencies that establish strong foundations now position themselves to adopt autonomous AI confidently and securely.
To explore the complete AEGIS framework and gain deeper insights into securing agentic AI for State and Local Government, watch Carahsoft’s full webinar featuring Forrester, “Full Throttle, Firm Control: Build Your Trust Strategy for Agentic AI.”
Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Forrester, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.
