Preparing Federal Systems for Post-Quantum Security: A Strategic Approach

Federal agencies face an urgent timeline to protect their most sensitive data from quantum computing threats. Quantum computers leverage physics principles like superposition and entanglement to perform certain calculations far faster than classical computers, posing a significant threat to current encryption standards. Adversaries employ “harvest now, decrypt later” tactics, collecting encrypted data and storing it until a quantum computer is powerful enough to break the encryption. The National Institute of Standards and Technology (NIST) has released standardized Post-Quantum Cryptography (PQC) algorithms designed to withstand quantum attacks, ensuring long-term data security. The U.S. Federal Government has also issued guidance urging Federal agencies to update their IT infrastructure and deploy crypto-agile solutions that use today’s classical encryption algorithms while providing the ability to upgrade to PQC algorithms as the threat matures.

With the Cloud Security Alliance projecting cryptographically relevant quantum computers by 2030, agencies must implement these quantum-resistant algorithms before current security measures become obsolete.

The Quantum Threat Landscape

Current public key infrastructure (PKI), which underpins the internet, code signing and authentication, faces an existential threat from quantum computing. This vulnerability extends beyond theoretical concerns to three specific risk areas affecting Federal systems:

  1. Harvest Now, Decrypt Later: Attackers intercept communications and data today, storing them until quantum computers can break the encryption—potentially exposing Government secrets and sensitive information.
  2. Forged Signatures: Quantum capabilities could enable impersonation of trusted entities, allowing attackers to load malicious software onto long-life devices or create fraudulent financial transactions that impact both commercial and Federal Government systems.
  3. Man-in-the-Middle Attacks: Advanced quantum computing could facilitate access to secure systems, potentially compromising military command and control (C2) environments, disrupting critical infrastructure and interfering with elections.

The most vulnerable assets are those containing long-lived data, including decades of trade secrets, classified information and lifetime healthcare and personally identifiable information. Short-lived data that exists for hours or months faces considerably less risk from quantum-enabled decryption.

Post-Quantum Cryptography Standards and Timeline

The standardization of quantum-resistant algorithms represents the culmination of an eight-year process spearheaded by NIST. In August 2024, NIST published its final standards for three critical algorithms:

  • ML-KEM (formerly Crystals-Kyber) | FIPS 203 | Key Encapsulation
  • ML-DSA (formerly Crystals-Dilithium) | FIPS 204 | Digital Signature
  • SLH-DSA (formerly SPHINCS+) | FIPS 205 | Stateless Hash-Based Signature

A fourth algorithm, FN-DSA (formerly Falcon), is still pending finalization. Simultaneously, NIST has released Internal Report (IR) 8547, providing comprehensive guidelines for transitioning from quantum-vulnerable cryptographic algorithms to PQC.

The National Security Agency’s (NSA) Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), released in September 2022 with an FAQ update in April 2024, outlines specific PQC requirements for National Security Systems. These standards have become reference points for Federal agencies beyond classified environments, establishing a staggered implementation timeline:

  • 2025-2030: Software/firmware signing
  • 2025-2033: Browsers, servers and cloud services
  • 2026-2030: Traditional networking equipment
  • 2027: Begin implementation of operating systems

Crypto Agility and Transition Strategy

It is essential for Federal agencies to deploy crypto-agile solutions that provide the ability to quickly modify underlying cryptographic primitives with flexible, upgradable technology. This capability allows organizations to support both current algorithms and future quantum-resistant ones without hardware replacement.
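The idea can be sketched in code: callers depend on a generic key-encapsulation interface, and the concrete algorithm is selected by policy at runtime, so upgrading to PQC becomes a configuration change rather than a code change. The registry entries below are illustrative stand-ins (random bytes hashed with SHA-256), not real cryptography, and the algorithm names are used only as labels.

```python
import hashlib
import os
from typing import Callable, Dict, Tuple

def _mock_kem(label: bytes) -> Callable[[], Tuple[bytes, bytes]]:
    """Build a placeholder KEM; real deployments would wrap actual libraries."""
    def encapsulate() -> Tuple[bytes, bytes]:
        secret = os.urandom(32)
        ciphertext = hashlib.sha256(label + secret).digest()
        return ciphertext, secret
    return encapsulate

# Algorithms are registered behind one interface; call sites never hard-code one.
KEM_REGISTRY: Dict[str, Callable[[], Tuple[bytes, bytes]]] = {
    "rsa-oaep": _mock_kem(b"classical"),   # quantum-vulnerable legacy option
    "ml-kem-768": _mock_kem(b"pqc"),       # stand-in labeled after FIPS 203
}

def encapsulate_key(policy_algorithm: str) -> Tuple[bytes, bytes]:
    """Resolve the algorithm from policy, enabling swap-out without code changes."""
    return KEM_REGISTRY[policy_algorithm]()

ciphertext, shared_secret = encapsulate_key("ml-kem-768")
```

Because the policy string is the only thing that names an algorithm, retiring a quantum-vulnerable primitive is a registry and configuration update, which is the essence of crypto agility.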

A comprehensive transition strategy includes seven critical steps:

  1. Awareness: Understand the challenges, risks and necessary actions to prepare for quantum threats.
  2. Inventory and Prioritize: Catalog cryptographic technologies and identify high-risk systems—a process the Cybersecurity and Infrastructure Security Agency (CISA) mandated via spreadsheet submission last year.
  3. Automate Discovery: Implement tools that continuously identify and inventory cryptographic assets, recognizing that manual inventories quickly become outdated.
  4. Set Up a PQC Test Environment: Establish testing platforms to evaluate how quantum-resistant algorithms affect performance, as these algorithms generate larger keys that may impact systems differently.
  5. Practice Crypto Agility: Ensure systems can support both classical algorithms and quantum-resistant alternatives, which may require modernizing end-of-life hardware security modules.
  6. Quantum Key Generation: Leverage quantum random number generation to create quantum-capable keys.
  7. Implement Quantum-Resistant Algorithms: Deploy PQC solutions across systems, beginning with high-risk assets while preparing for a multi-year process.

Practical Implementation of PQC


Federal agencies should look beyond algorithms to consider the full scope of implementation requirements. The quantum threat extends to communication protocols including Transport Layer Security (TLS), Internet Protocol Security (IPSec) and Secure Shell (SSH). It also affects certificates like X.509 for identities and code signing, as well as key management protocols.

Hardware security modules (HSMs) and high-speed network encryptors serve as critical components in quantum-resistant infrastructure. These devices must support hybrid approaches that combine classical encryption with PQC to maintain backward compatibility while adding quantum protection.
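A minimal sketch of the hybrid idea: the session key is derived from both a classical shared secret (e.g., from ECDH) and a PQC shared secret (e.g., from ML-KEM), so an attacker must break both algorithms. The two input secrets below are random placeholders standing in for real key-exchange outputs, and the single-block HKDF is a simplified form of RFC 5869.

```python
import hashlib
import hmac
import os

classical_secret = os.urandom(32)   # stand-in for an ECDH shared secret
pqc_secret = os.urandom(32)         # stand-in for an ML-KEM shared secret

def hkdf_extract_expand(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256, single expand block (length <= 32)."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()   # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]  # expand

# Concatenate both secrets before derivation, as common hybrid designs do:
# compromise of either input alone does not reveal the session key.
session_key = hkdf_extract_expand(classical_secret + pqc_secret,
                                  info=b"hybrid-tls-example")
```

This is the backward-compatible pattern: peers that only support classical exchange still interoperate, while hybrid-capable peers gain quantum protection.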

The National Cybersecurity Center of Excellence (NCCoE) is coordinating a major post-quantum crypto migration project involving more than 40 collaborators, including industry, academia, financial sectors and Government partners. This initiative has already produced testing artifacts and integration frameworks available through NIST Special Publication (SP) 1800-38.

Crypto Discovery and Inventory Management

Automated discovery tools represent a crucial capability for maintaining an accurate and current inventory of cryptographic assets. Unlike the one-time manual inventories many agencies completed in 2022-2023, these tools enable continuous monitoring of cryptographic implementations across the enterprise.

Several vendors offer specialized solutions for cryptographic discovery, including InfoSec Global, SandboxAQ and IBM. These tools can:

  • Discover and classify cryptographic material across environments
  • Identify which assets are managed or unmanaged
  • Determine vulnerability to quantum attacks
  • Support centralized crypto management and policies
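One narrow slice of that discovery capability can be sketched as a filesystem scan that finds PEM-encoded key material and flags quantum-vulnerable algorithm families by their PEM labels. This is a hypothetical illustration only; real discovery tools also inspect TLS configurations, code-signing chains, HSMs and network traffic.

```python
import re
import tempfile
from pathlib import Path

# PEM labels for key types whose underlying math quantum computers can break.
QUANTUM_VULNERABLE = {"RSA PRIVATE KEY", "EC PRIVATE KEY", "DSA PRIVATE KEY"}
PEM_LABEL = re.compile(r"-----BEGIN ([A-Z0-9 ]+)-----")

def scan_for_keys(root: str) -> list:
    """Walk a directory tree and classify PEM key material found in *.pem files."""
    findings = []
    for path in Path(root).rglob("*.pem"):
        for label in PEM_LABEL.findall(path.read_text(errors="ignore")):
            findings.append({
                "file": str(path),
                "type": label,
                "quantum_vulnerable": label in QUANTUM_VULNERABLE,
            })
    return findings

# Demonstrate against a throwaway directory containing one legacy RSA key file.
with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "legacy.pem"
    sample.write_text("-----BEGIN RSA PRIVATE KEY-----\nMIIB...\n"
                      "-----END RSA PRIVATE KEY-----\n")
    findings = scan_for_keys(tmp)
```

Run continuously (rather than as a one-time spreadsheet exercise), output like this feeds the centralized inventory the bullets above describe.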

The Cloud Security Alliance has coined the term “Y2Q” (Years to Quantum) as an analogy to the “Y2K bug,” highlighting the need for systematic preparation. However, the quantum threat represents a potentially more significant risk than Y2K, with the alliance’s countdown projecting a cryptographically relevant quantum computer capable of breaking current cryptography by April 14, 2030.

Moving Forward with Quantum-Resistant Security

The transition to post-quantum cryptography is not optional for Federal agencies—it is an imperative. While the process requires significant investment in time and resources, the alternative—leaving sensitive Government data vulnerable to decryption—poses an unacceptable risk to national security.

Agencies should begin by evaluating their existing cryptographic inventory, prioritizing systems with long-lived sensitive data and developing implementation roadmaps aligned with NIST and NSA timelines. By taking incremental steps today toward quantum-resistant infrastructure, Federal organizations can ensure their critical information remains secure in the quantum computing era.

To learn more about implementing quantum-resistant security in Federal environments, watch Thales Trusted Cyber Technologies’ (TCT) webinar, “CTO Sessions: Best Practices for Implementing Quantum-Resistant Security.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Thales TCT, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

From Concept to Implementation: Operationalizing Zero Trust Architecture in Government Environments

Zero Trust has evolved over the last 15 years into a cornerstone of Federal cybersecurity strategy, influencing enterprises as well as State and Local Governments. While the principles of continuous authentication and least privilege are widely accepted, many organizations still need the industry’s support with implementation.

The National Institute of Standards and Technology’s (NIST) National Cybersecurity Center of Excellence (NCCoE) has bridged this gap by offering practical guidance for applying Zero Trust concepts in real-world solutions.

Understanding Zero Trust Principles

Zero Trust is a cybersecurity strategy built on the assumption that networks are already compromised, making it the most resilient approach for securing today’s hybrid environments. Rather than relying on network perimeters, Zero Trust focuses on continuous authentication and verification of every access request, regardless of where those resources are located.

This approach requires organizations to secure all communications through encryption and authentication, grant access on a per-session basis with least privileges, implement dynamic policies, continuously monitor resource integrity and authenticate before allowing access. The objective is to reduce implicit trust between enterprise systems to minimize lateral movement by potential attackers.

Organizations must also collect and analyze as much contextual information as possible to create more granular access policies and strengthen current controls for an enhanced Zero Trust Architecture (ZTA).
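The tenets above can be sketched as a simple policy decision function: each request is evaluated per session against identity, device posture and contextual risk, returning deny, step-up authentication or allow. The signals, roles and thresholds below are hypothetical, chosen only to illustrate the decision flow.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    device_compliant: bool   # e.g., disk encryption on, EDR agent healthy
    mfa_completed: bool
    risk_score: float        # 0.0 (low) to 1.0 (high), e.g., from behavior analytics

def decide(req: AccessRequest, resource_roles: set) -> str:
    """Per-session decision: least privilege first, then device, then risk."""
    if req.user_role not in resource_roles:      # least-privilege role check
        return "deny"
    if not req.device_compliant:                 # continuous device verification
        return "deny"
    if req.risk_score > 0.7 and not req.mfa_completed:
        return "step-up-mfa"                     # risk-based authentication
    return "allow"

# A compliant device with elevated contextual risk triggers step-up MFA
# instead of a blanket allow or deny.
decision = decide(
    AccessRequest("analyst", device_compliant=True,
                  mfa_completed=False, risk_score=0.9),
    resource_roles={"analyst", "admin"},
)
```

The key property is that trust is computed per request from current context; nothing is granted merely because the request originates inside a network perimeter.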

NIST’s Role and Guidance

NIST has been instrumental in defining and operationalizing Zero Trust through guidance documents and practical demonstrations like Special Publication (SP) 800-207, published in 2020, which established the foundation for ZTA. Building on this framework, NIST’s NCCoE worked with industry, Government and academia to launch a project to show how these concepts could be implemented in real-world environments.  

Initially focused on three example implementations, the project expanded to 19 different ZTA implementations using technologies from 24 industry collaborators, including Palo Alto Networks.

These implementations were built around three primary deployment approaches:

  1. Enhanced Identity Governance: Emphasizes identity and attribute-based access control, ensuring access decisions are linked to user identity, roles and context.
  2. Microsegmentation: Uses smart devices such as firewalls, smart switches or specialized gateways to isolate and protect specific resources.
  3. Software-Defined Perimeter (SDP): Creates a software overlay to protect infrastructure—like servers and routers—by concealing it from unauthorized users.

Although not included in SP 800-207, the project also recognized Secure Access Service Edge (SASE) as an emerging deployment model that integrates network and security functions into a unified, cloud-delivered service.

Practical Implementation Strategies


The NCCoE project tackled the critical question: where should organizations start on their Zero Trust journey? By adopting an agile, incremental approach with “crawl, walk and run” stages, the project phased its implementation based on deployment approaches. This allowed gradual, manageable builds while addressing real-world complexities.

Technologies such as firewalls, SASE with Software-Defined Wide Area Network (SD-WAN) and Endpoint Detection and Response (EDR) using Palo Alto Networks Cortex XDR® were utilized, with remote worker scenarios reflecting modern hybrid environments. NIST SP 1800-35 outlines the phased approach and provides a practice guide, including technologies, reference architectures, use cases, tested scenarios and security controls built into each implementation.

One of the most significant challenges addressed was interoperability between different security solutions. Rather than overhauling infrastructure, organizations can leverage existing technologies while gradually introducing new solutions to enhance security and move toward a mature ZTA.

Integrating Technology Solutions

The NCCoE highlighted how comprehensive security platforms enable Zero Trust principles across hybrid environments. Palo Alto Networks presented a comprehensive ZTA built with artificial intelligence (AI) and machine learning (ML), leveraging capabilities including Cloud Identity Engine for federated identity management, next-generation firewalls for microsegmentation, cloud-delivered security services, SASE for remote access and EDR.

The approach focused on three key objectives:

  1. Continuous trust verification and threat prevention
  2. Single policy enforcement across all environments
  3. Interoperability with other security solutions

AI was embedded throughout the platform—from policy creation to user and device analysis—ensuring that Zero Trust policies are enforced consistently and adapted automatically in response to evolving threats. This intelligent strategy provides a scalable and resilient foundation for securing modern, hybrid environments.

Community Collaboration and a Holistic Approach

The success of the NCCoE project underscored the importance of collaboration between Government and industry to develop practical Zero Trust solutions. This partnership enabled the development of a holistic security monitoring system that can track user behavior across on-premises, cloud and remote environments. The integration of AI and ML streamlined incident response, reducing mean time to detection and resolution.

Experts recommend that organizations begin their Zero Trust journey with fundamental capabilities such as identity, credential and access management (ICAM), endpoint security and compliance and data security. Implementing multi-factor authentication (MFA), integrated with existing Active Directory (AD) systems or identity providers, is an effective first step in strengthening access security. Monitoring network traffic and endpoint behavior using threat intelligence, user behavior analytics and AI allows organizations to proactively detect and respond to threats, providing a solid foundation for a resilient ZTA.

The journey to operationalizing Zero Trust continues to evolve, with NIST planning updates to their guidance documents to address emerging technologies like SASE and special considerations for operational technology (OT) environments. By adopting the principles, frameworks and practical implementation approaches demonstrated through the NCCoE project, Government agencies can develop more resilient security architectures that protect resources across diverse environments.

To learn more about implementing ZTAs in Government environments, watch the full webinar “Operationalizing Zero Trust: NIST and End-to-End Zero Trust Architectures,” presented by Palo Alto Networks, NIST and Carahsoft.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Palo Alto Networks, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Modern Fraud Threats in Government Relief Programs: How Agencies Can Defend Against Cybercrime

A recent investigation by CBS News’ “60 Minutes” has highlighted a significant issue: organized crime rings, often operating from overseas, are using stolen identities to steal billions of dollars from U.S. Federal and State programs. These sophisticated fraud schemes specifically target public assistance initiatives, taking advantage of digital vulnerabilities and overwhelmed systems. The COVID-19 pandemic accelerated the delivery of relief funds, presenting new challenges for security systems still being implemented.

As these cyber-enabled crimes grow in complexity and scale, Public Sector organizations must evolve their defenses. HUMAN Security offers a modern solution that aligns with Public Sector standards and frameworks, like the NIST Cybersecurity Framework, to protect against automated fraud, account takeovers and bot-driven exploitation.

The Expanding Threat Landscape: Government Fraud at Scale

The fraud rings described in the CBS report do not fit the Hollywood stereotype of a lone hacker in a basement. These are industrial-scale operations run by criminal syndicates that:

  • Use stolen or synthetic identities to apply for public benefits such as unemployment insurance, COVID relief, food assistance and housing vouchers.

  • Leverage bots and automated scripts to rapidly test stolen credentials against Government login portals.

  • Host phishing websites and fake document generators to fool verification systems.

  • Exploit the lack of robust digital defenses in legacy Public Sector infrastructure.

At the height of the pandemic, the U.S. prioritized the rapid distribution of trillions in relief funds to support individuals and businesses in crisis. In the urgency to deliver aid quickly, some agencies adjusted standard fraud controls—creating unforeseen opportunities for bad actors. According to the CBS report, an estimated $280 billion was lost to fraud, with an additional $123 billion categorized as wasted or misused.

The tactics employed have now evolved into permanent tools of financial exploitation. Many cybercriminals continue to exploit social welfare and Government programs by leveraging automation and AI. Fraud isn’t slowing down—it’s scaling up.

Why Public Sector Agencies Are Attractive Targets

Government systems present a unique target profile for attackers due to a combination of high-value data, broad user bases and strained IT resources. Here’s why the Public Sector is particularly vulnerable:

1. High Payout Potential

Each successful fraudulent claim can yield thousands of dollars in benefits. Fraudsters often operate in bulk, submitting thousands of applications using stolen identities.

2. Legacy Infrastructure

Many State and Local agencies still operate on outdated software stacks that lack modern bot detection or behavior-based threat analysis.

3. Lack of Real-Time Monitoring

Fraudulent applications often go undetected until after funds are disbursed. Manual review processes are insufficient to handle the volume of claims.

4. Increased Script & API Vulnerabilities

Fraudsters exploit front-end vulnerabilities, such as JavaScript manipulation or misuse of APIs, to simulate real user activity, bypass verification checks and deploy fake documents.

HUMAN Security: A Modern Solution for a Modern Threat


HUMAN Security specializes in protecting organizations from automated attacks, fraud and abuse by distinguishing between real users and malicious bots. HUMAN’s solutions are uniquely positioned to help Public Sector agencies address the specific types of fraud exposed by 60 Minutes.

1. Bot and Automation Mitigation

Fraudsters frequently use bots to submit applications at scale, probe systems for weaknesses and conduct credential stuffing attacks. The HUMAN Defense Platform analyzes over 20 trillion digital interactions weekly to identify anomalies in real time.

Through behavioral analysis, device fingerprinting and machine learning, we can help Public Sector clients:

  • Detect non-human interaction patterns
  • Prevent fake accounts from being created
  • Block bot-driven denial-of-service or overload attempts
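A toy example in the spirit of the behavioral analysis described above: submission timestamps that are both very fast and very regular are characteristic of scripted clients rather than people. The thresholds are invented for illustration; production systems combine hundreds of behavioral, device and network signals.

```python
from statistics import mean, pstdev

def looks_automated(timestamps: list) -> bool:
    """Flag event streams whose inter-event gaps are implausibly fast and uniform."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False                      # too little evidence to judge
    too_fast = mean(gaps) < 0.5           # sub-500 ms between form submissions
    too_regular = pstdev(gaps) < 0.05     # near-constant, machine-like cadence
    return too_fast and too_regular

# A script firing every 200 ms versus a person pausing to read and type.
bot_like = looks_automated([0.0, 0.2, 0.4, 0.6, 0.8])
human_like = looks_automated([0.0, 3.1, 9.4, 14.2, 30.8])
```

Even this crude check shows why per-request signatures alone are insufficient: it is the pattern across requests, not any single request, that reveals automation.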

2. Account Takeover & Credential Abuse Defense

Many fraud schemes begin with access to a real person’s Government credentials. We prevent account takeovers by identifying compromised credentials in real time and helping clients stop unauthorized login attempts.

Our Application Protection Package also integrates into public-facing login portals to block brute-force attempts and detect unusual login behavior.

3. Fake Identity and Synthetic Account Prevention

Fraudsters use fake IDs or generated synthetic identities to bypass identity checks. Our behavior-based analytics distinguish real users from fabricated personas—stopping fake account creation before it starts.

4. Real-Time Threat Intelligence

By continuously monitoring emerging threats, we equip Public Sector clients with up-to-date information to counteract evolving fraud tactics.

5. Integration with Public Sector Frameworks

With leading-edge solutions that align with standards like the NIST Cybersecurity Framework, HUMAN integrates seamlessly into existing Government infrastructures and helps Public Sector clients meet compliance and regulatory requirements.

Real-World Benefits to Government Agencies

By adopting fraud protection solutions, public agencies can:

  • Minimize Fraud Risk: Real-time prevention minimizes the risk of sending funds to bad actors.

  • Protect Citizens: Reduce identity theft and unauthorized access to sensitive citizen data.

  • Build Trust: Demonstrating robust cybersecurity fosters public trust in digital Government systems.

  • Streamline Compliance: Meet modern standards like PCI DSS 4.0 requirements 6.4.3 and 11.6.1 and NIST CSF with confidence.

  • Save Taxpayer Dollars: Every fraudulent dollar blocked is money that can be returned to real beneficiaries or saved for future programs.

A Call to Action for Government Leaders

The fraud revealed in the CBS 60 Minutes report isn’t an isolated event—it’s a warning sign. Digital transformation has accelerated across public agencies, but fraud defenses haven’t always kept pace.

Government leaders must take a proactive stance by:

  • Modernizing fraud detection capabilities

  • Closing visibility gaps across digital infrastructure

  • Adopting behavior-based, real-time defenses like HUMAN Security

  • Aligning security strategy with established frameworks (NIST, PCI DSS)

Fraud is no longer just a compliance risk—it’s a national security issue. As public trust and taxpayer funds hang in the balance, Government agencies must embrace modern, intelligent and automated defense systems to keep fraudsters out.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including HUMAN Security, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Bridging Identity Governance and Dynamic Access: The Anatomy of a Contextual and Dynamic Access Policy

As organizations adapt to increasingly complex IT ecosystems, traditional static access policies fail to meet modern security demands. This installment continues to explore how identity attributes and governance controls shape contextual and dynamic access policies, building on the previous articles “Governing Identity Attributes in a Contextual and Dynamic Access Control Environment” and “SailPoint Identity Security: The Foundation of DoD ICAM and Zero Trust.” It examines the role of identity governance controls, such as role-based access (dynamic or policy-based), lifecycle management and separation of duties, as the foundation for real-time decision-making and compliance. Together, these approaches not only mitigate evolving threats but also align with critical standards like NIST SP 800-207, the NIST CSF and DHS CISA recommendations, enabling secure, adaptive and scalable access ecosystems. Discover how this integration empowers organizations to achieve Zero Trust principles, enhance operational resilience and maintain regulatory compliance in an era of dynamic threats.

Author’s Note: While I referenced DoD instruction and guidance, the examples in this document can be applied to the NIST Cybersecurity Framework and NIST SP 800-53 controls as well. My next article will speak specifically to the applicability of the DHS CDM MUR and future proposed DEFEND capabilities.


Defining Contextual and Dynamic Access Policies

Contextual and dynamic access policies adapt access decisions based on real-time inputs, including user identity, device security posture, behavioral patterns, and environmental risks. By focusing on current context rather than static attributes, these policies mitigate risks such as over-provisioning or unauthorized access.

Key Features:

  • Contextual Awareness: Evaluates real-time signals such as login frequency, device encryption status, geolocation, and threat intelligence.
  • Dynamic Decision-Making: Enforces least-privilege access dynamically and incorporates risk-based authentication (e.g., triggering MFA only under high-risk scenarios).
  • Identity Governance Integration: Leverages governance structures to align access with roles, responsibilities, and compliance standards.

The Role of Identity Governance Controls

Identity governance forms the backbone of effective contextual and dynamic access policies by providing the structure needed for secure access management. Core components include:

  • Role-Based Access Control (RBAC), Dynamic/Policy-based: Defines roles and associated entitlements to reduce excessive or inappropriate access.
  • Access Reviews: Ensures periodic validation of user access rights, aligning with business needs and compliance mandates.
  • Separation of Duties (SoD): Prevents conflicts of interest by limiting excessive control over critical processes.
  • Lifecycle Management: Automates the provisioning and de-provisioning of access rights as roles change.
  • Policy Framework: Establishes clear baselines for determining who can access what resources under specific conditions.
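The separation-of-duties control above can be sketched as a simple check: given each user's entitlements and a set of conflicting entitlement pairs, flag any user who holds both sides of a conflict. The entitlement names are invented for illustration; real governance platforms evaluate such policies continuously across all connected systems.

```python
# Pairs of entitlements that one person must not hold simultaneously.
SOD_CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"submit_claim", "adjudicate_claim"}),
}

def sod_violations(user_entitlements: dict) -> dict:
    """Return, per user, every conflicting entitlement pair they fully hold."""
    violations = {}
    for user, entitlements in user_entitlements.items():
        hits = [pair for pair in SOD_CONFLICTS if pair <= entitlements]
        if hits:
            violations[user] = hits
    return violations

violations = sod_violations({
    "alice": {"create_vendor", "approve_payment"},   # holds both sides: flagged
    "bob": {"create_vendor"},                        # one side only: fine
})
```

In practice this check runs at provisioning time (to block a conflicting grant) and during periodic access reviews (to catch conflicts that accumulate as roles change).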

Balancing Runtime Evaluation and Governance Controls

While governance controls establish structured, policy-driven access frameworks, runtime evaluations add the flexibility to adapt to real-time risks. Together, they create a layered security approach:

  • Baseline Governance: Sets foundational access rights using role-based policies and lifecycle management.
  • Dynamic Contextualization: Enhances governance by factoring in real-time conditions to ensure access decisions reflect current risk levels.
  • Feedback Loops: Insights from runtime evaluations inform and refine governance policies over time.

Benefits of Integration

By combining governance controls with contextual access policies, organizations achieve:

  • Enhanced security through continuous evaluation and dynamic risk mitigation.
  • Improved compliance with regulatory frameworks like GDPR, HIPAA, and NIST standards.
  • Operational efficiency by automating access reviews and reducing administrative overhead.

The integration of contextual and dynamic access policies with identity governance controls addresses the dual needs of flexibility and security in modern cybersecurity strategies. By combining structured governance with real-time adaptability, organizations can mitigate risks, ensure compliance, and achieve a proactive security posture that aligns with evolving business needs and regulatory demands. This layered approach represents the future of access management in a rapidly changing digital environment.


To learn more about how SailPoint can support your organization’s efforts within identity governance, cybersecurity and Zero Trust, view our resource, “The Anatomy of a Contextual and Dynamic Access Policy.”


Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including SailPoint, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Governing Identity Attributes in a Contextual and Dynamic Access Control Environment

In the rapidly evolving landscape of cybersecurity, federal agencies, the Department of Defense (DoD), and critical infrastructure sectors face unique challenges in governing identity attributes within dynamic and contextual access control environments. The Department of Defense Instruction 8520.04, Identity Authentication for Information Systems, underscores the importance of identity governance in establishing trust and managing access across DoD systems. In parallel, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (DHS CISA) guidance and the National Institute of Standards and Technology (NIST) frameworks further emphasize the critical need for secure and adaptive access controls in safeguarding critical infrastructure and federal systems.

This article examines the governance of identity attributes in this complex environment, linking these practices to Attribute-Based Access Control (ABAC) and Role-Based Access Control (RBAC) models. It highlights how adherence to DoD 8520.04, CISA’s Zero Trust Maturity Model, and NIST guidelines enable organizations to maintain the accuracy, security, and provenance of identity attributes. These efforts are particularly crucial for critical infrastructure, where the ability to dynamically evaluate and protect access can prevent disruptions to essential services and minimize security risks. By integrating these principles, organizations not only achieve regulatory compliance but also strengthen their defense against evolving threats, ensuring the resilience of national security systems and vital infrastructure.


Importance of Governing Identity Attributes

Dynamic Access Control

In a dynamic access control environment (Zero Trust), access decisions are made based on real-time evaluation of identity attributes and contextual information. Identity governance plays a pivotal role in ensuring that these attributes are accurate, up-to-date, and relevant. Effective identity governance facilitates:

  • Real-time Access Decisions: By maintaining a comprehensive and current view of identity attributes, organizations can make informed and timely access decisions, ensuring that users have appropriate access rights based on their roles, responsibilities, and the context of their access request.
  • Adaptive Security: Identity governance enables adaptive security measures that can dynamically adjust access controls in response to changing risk levels, user behaviors, and environmental conditions.
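The kind of real-time, context-aware decision described above can be sketched as a small evaluation function that weighs identity attributes together with request context. This is an illustrative sketch only; the attribute names, risk threshold, and outcomes are hypothetical and not drawn from any specific product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str               # identity attribute: the user's current role
    device_compliant: bool  # contextual signal: device health at request time
    risk_score: float       # contextual signal: 0.0 (low risk) to 1.0 (high)

def decide(req: AccessRequest) -> str:
    """Evaluate identity attributes plus context for each access request."""
    if not req.device_compliant:
        return "deny"      # unhealthy device: block regardless of role
    if req.risk_score > 0.7:
        return "step-up"   # elevated risk: require re-authentication
    if req.role in {"analyst", "admin"}:
        return "allow"
    return "deny"          # default-deny for unrecognized roles
```

Because the decision is computed per request rather than cached at login, a change in device health or risk score immediately changes the outcome, which is the essence of adaptive security.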

Attribute Provenance

Attribute provenance refers to the history and origin of identity attributes. Understanding the provenance of attributes is critical for ensuring their reliability and trustworthiness. Identity governance supports attribute provenance by:

  • Tracking Attribute Sources: Implementing mechanisms to track the origins of identity attributes, including the systems and processes involved in their creation and modification.
  • Ensuring Data Integrity: Establishing validation and verification processes to ensure the integrity and accuracy of identity attributes over time.

Attribute Protection

Protecting identity attributes from unauthorized access, alteration, or misuse is fundamental to maintaining a secure access control environment. Identity governance enhances attribute protection through:

  • Access Controls: Implementing stringent access controls to limit who can view, modify, or manage identity attributes.
  • Encryption and Masking: Utilizing encryption and data masking techniques to protect sensitive identity attributes both at rest and in transit.
  • Monitoring and Auditing: Continuously monitoring and auditing access to identity attributes to detect and respond to any suspicious activities or policy violations.
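The masking technique mentioned above can be illustrated with a minimal sketch: sensitive attribute values are redacted before being returned to viewers without a need to know. The field names and masking rule here are invented for illustration.

```python
# Hypothetical set of attributes treated as sensitive in this sketch.
SENSITIVE_FIELDS = {"ssn", "date_of_birth"}

def mask_attributes(attributes: dict, viewer_cleared: bool) -> dict:
    """Return a copy of the record with sensitive values masked
    for viewers who lack the required clearance."""
    if viewer_cleared:
        return dict(attributes)
    return {
        key: ("***" if key in SENSITIVE_FIELDS else value)
        for key, value in attributes.items()
    }
```

In practice the clearance check would itself be an attribute-driven access decision, and masking would be paired with encryption at rest and in transit as the text describes.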

Attribute Effectiveness

The effectiveness of identity attributes in supporting access control decisions is contingent upon their relevance, accuracy, and granularity. Identity governance ensures attribute effectiveness by:

  • Regular Reviews and Updates: Conducting periodic reviews and updates of identity attributes to align with evolving business needs, regulatory requirements, and security policies.
  • Feedback Mechanisms: Establishing feedback mechanisms to assess the effectiveness of identity attributes in real-world access control scenarios and make necessary adjustments.

Risks Associated with ABAC and RBAC

ABAC Risks

ABAC relies on the evaluation of attributes to make access control decisions. While ABAC offers flexibility and granularity, it also presents several risks:

  • Complexity: The complexity of managing a large number of attributes and policies can lead to misconfigurations and errors, potentially resulting in unauthorized access or access denials.
  • Scalability: As the number of attributes and policies grows, the scalability of the ABAC system can be challenged, affecting performance and responsiveness.
  • Attribute Quality: The effectiveness of ABAC is heavily dependent on the quality of the attributes. Inaccurate, outdated, or incomplete attributes can compromise access control decisions.

RBAC Risks

RBAC assigns access rights based on predefined roles. While RBAC simplifies access management, it also has inherent risks:

  • Role Explosion: The proliferation of roles to accommodate varying access needs can lead to role explosion, complicating role management and increasing administrative overhead.
  • Stale Roles: Over time, roles may become stale or misaligned with current job functions, leading to over-privileged or under-privileged access.
  • Inflexibility: RBAC may lack the flexibility to handle dynamic and context-specific access requirements, limiting its effectiveness in modern, agile environments.
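The contrast between the two models can be made concrete with a short sketch: the RBAC check consults only a static role-to-permission table, while the ABAC check evaluates attributes of the user, resource, and environment at decision time. All roles, attributes, and policies below are hypothetical.

```python
# RBAC: access derives from a predefined role-to-permission mapping.
ROLE_PERMISSIONS = {"auditor": {"read"}, "operator": {"read", "write"}}

def rbac_allows(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

# ABAC: access derives from attributes evaluated per request.
def abac_allows(user: dict, resource: dict, context: dict, action: str) -> bool:
    return (
        user["clearance"] >= resource["sensitivity"]    # user attribute
        and context["network"] == "internal"            # environmental attribute
        and action in resource["permitted_actions"]     # resource attribute
    )
```

The sketch also hints at the risks named above: the RBAC table must grow a new role for every access variation (role explosion), while the ABAC function is only as trustworthy as the attribute values fed into it (attribute quality).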

Importance to a Zero Trust Model

The Zero Trust model is predicated on the principle of “never trust, always verify,” emphasizing continuous verification of identity and context for access decisions. Governing identity attributes is integral to the Zero Trust model for several reasons:

  • Continuous Verification: Accurate and reliable identity attributes are essential for continuous verification processes that dynamically assess access requests in real-time.
  • Context-Aware Security: By governing identity attributes, organizations can implement context-aware security measures that consider a wide range of factors, including user behavior, device health, and network conditions.
  • Minimizing Attack Surface: Effective governance of identity attributes helps minimize the attack surface by ensuring that access rights are tightly controlled and aligned with current security policies and threat landscapes.

Governing identity attributes is a cornerstone of modern access control strategies, particularly within the dynamic and contextual environments that characterize today’s IT ecosystems. By supporting dynamic access, ensuring attribute provenance, protection, and effectiveness, and addressing the risks associated with ABAC and RBAC, identity governance enhances the security and efficiency of access control mechanisms. In the context of a Zero Trust model, the rigorous governance of identity attributes is indispensable for maintaining robust and adaptive security postures, ultimately contributing to the resilience and integrity of organizational systems and data.

To learn more about SailPoint’s cybersecurity capabilities and how they can support mission-critical DoD initiatives, view our technology solutions portfolio. Additionally, check out our other blog highlighting the latest insights into “The Role of Identity Governance in the Implementation of DoD Instruction 8520.04”.


Vice President for StateRAMP Solutions, Carahsoft: StateRAMP: Recognizing the Importance of Framework Harmonization

StateRAMP builds on the National Institute of Standards and Technology (NIST) Special Publication 800-53 standard, which underpins FedRAMP’s approach to cloud security for Federal agencies by offering a consistent framework for security assessment, authorization and continuous monitoring. Recognizing the need for a similar framework at the State and Local levels, StateRAMP has been developed to tailor these Federal standards to the unique needs of State and Local Governments.  

Key to StateRAMP’s initiative is the focus on framework harmonization, which aligns State and Local regulations with broader Federal and industry standards. This harmonization includes efforts like FedRAMP/TX-RAMP reciprocity and the CJIS task force, making compliance more streamlined. By mapping more compliance frameworks to one another, StateRAMP helps Government agencies and industry players leverage existing work, avoid redundancy and facilitate smoother procurement of secure technologies. Carahsoft supports this mission by partnering with StateRAMP Authorized vendors and engaging in initiatives that promote these harmonization efforts, such as the StateRAMP Cyber Summit and Federal News Networks’ StateRAMP Exchange.  

Developing Framework Harmonization 

CSPs often operate across multiple sectors and industries, each regulated by distinct frameworks such as FedRAMP, CJIS, IRS Publication 1075, PCI DSS, FISMA and HIPAA. Managing compliance across multiple frameworks can lead to redundant processes, inefficiencies and complexity. These challenges have underscored the need for framework harmonization—aligning various cybersecurity frameworks to create a more cohesive and streamlined process.


With the FedRAMP transition to the NIST SP 800-53 Rev. 5 requirements in 2023, StateRAMP began working towards harmonization with FedRAMP across all impact levels. Through the StateRAMP Fast Track Program, CSPs pursuing FedRAMP authorization can leverage the same compliance documentation, including Plans of Actions and Milestones (POA&M), System Security Plans (SSP), security controls matrix and Third Party Assessment Organization (3PAO) audits, to achieve StateRAMP authorization.  

Reciprocity between StateRAMP and TX-RAMP has been established to streamline cybersecurity compliance for CSPs working with Texas state agencies, higher education institutions and public community colleges. CSPs that achieve a StateRAMP Ready or Authorized status are eligible to attain TX-RAMP certification at the same impact level through an established process. Additionally, StateRAMP’s Progressing Security Snapshot Program offers a pathway to provisional TX-RAMP certification, enabling CSPs to engage with Texas agencies while working towards StateRAMP compliance. Once CSPs have enrolled in the Snapshot Program or have engaged with a 3PAO to conduct an audit, they are added to the Progressing Product List, a public directory of products and their cybersecurity maturity status. This reciprocity eases the burden of navigating multiple compliance frameworks and certifications.  

Harmonized frameworks enable CSPs to align with the cybersecurity objectives of various organizations while simultaneously addressing a broader range of threats and vulnerabilities, improving overall security. StateRAMP’s focus is to align requirements across the Federal, State, Local and Educational sectors to reduce the cost of development and deployment through a unified set of standards. To ensure the Public and Private Sectors work in alignment, StateRAMP members have access to the same guidance, tools and resources necessary for implementing a harmonized framework. This initiative will streamline the compliance process through a unified approach to cybersecurity that ensures adherence to industry and regulatory requirements. 

The Future of StateRAMP  

StateRAMP has rolled out an overlay to its Moderate Impact Level baseline that maps to Criminal Justice Information Services (CJIS) Security Policy. This overlay is designed to strengthen cloud security in the law enforcement sector, helping assess a product’s potential for CJIS compliance in safeguarding critical information.  

At the 2024 StateRAMP Cyber Summit, Deputy Information Security Officer Jeffrey Campbell from the FBI CJIS addressed the challenges state and local entities face when adopting cloud technologies. He explained that while state constituents frequently asked if they could use FedRAMP for cloud initiatives, the answer was often complicated because FedRAMP alone does not fully meet CJIS requirements. “You can use vendors vetted through FedRAMP, that is going to get you maybe 80% of these requirements. There’s still 20% you’re going to have to do on your own,” Campbell noted. He emphasized that, through framework harmonization, StateRAMP can bridge this compliance gap, offering states a viable way to meet several parallel security standards.

Another initiative is the NASPO/StateRAMP Task Force, which was formed to unite procurement officials, cybersecurity experts, Government officials and industry experts together with IT professionals. The task force aims to produce tools and resources for procurement officials nationwide to make the StateRAMP adoption process more streamlined and consistent. 

Though still relatively new, StateRAMP is gaining traction, with 28 participating states as of October 2024. As cyberattacks become more sophisticated, cybersecurity compliance has become a larger point of emphasis at every level of Government to protect sensitive data. StateRAMP is working to bring all stakeholders together to drive toward a common understanding and acceptance of a shared security standard. StateRAMP’s proactive steps to embrace framework harmonization are helping CSPs and State and Local Governments move towards a more secure digital future.

To learn more about the advantages the StateRAMP program offers State Governments and technology suppliers watch the Federal News Network’s StateRAMP Exchange, presented by Carahsoft.  

To learn more about framework harmonization and gain valuable insights into others, such as cloud security, risk management and procurement best practices, watch the StateRAMP Cyber Summit, presented by Carahsoft. 

Third-Party Risk Management: Moving from Reactive to Proactive

In today’s interconnected world, cyber threats are more sophisticated, with 83% of cyberattacks originating externally, according to the 2023 Verizon Data Breach Investigations Report (DBIR). This has prompted organizations to rethink third-party risk management. The 2023 Gartner Reimagining Third Party Cybersecurity Risk Management Survey found that 65% of security leaders increased their budgets, 76% invested more time and resources and 66% enhanced automation tools to combat third-party risks. Despite these efforts, 45% still reported increased disruptions from supply chain vulnerabilities, highlighting the need for more effective strategies.

Information vs Actionable Alerts

The constant evolution and splintering of illicit actors pose a challenge for organizations. Many threat groups have short lifespans or re-form due to law enforcement takedowns, infighting and shifts in ransomware-as-a-service networks, making it difficult for organizations to keep pace. A countermeasure against one attack may quickly become outdated as these threats evolve, requiring constant adaptation to new variations.

In cybersecurity, information is abundant, but decision-makers must distinguish between information and actionable alerts. Information provides awareness but does not always drive immediate action, whereas alerts deliver real-time insights, enabling quick threat identification and response. Public data and real-time alerts help detect threats not visible in existing systems, allowing organizations to make proactive defense adjustments.

Strategies for Managing Third-Party Risk


Managing third-party risk has become a critical challenge. The NIST Cybersecurity Framework (CSF) 2.0 emphasizes that governance must be approached holistically and highlights the importance of comprehensive third-party risk management. Many organizations rely on vendor surveys, attestations and security ratings, but these provide merely a snapshot in time and are often revisited only during contract negotiations. The NIST CSF 2.0 calls for continuous monitoring—a practice many organizations follow, though it is often limited to identifying trends and anomalies in internal telemetry data, rather than extending to third-party systems where potential risks may go unnoticed. Failing to consistently assess changes in third-party risks leaves organizations vulnerable to attack.

Many contracts require self-reporting, but this relies on the vendor detecting breaches, and there is no direct visibility into third-party systems like there is with internal systems. Understanding where data is stored, how it is handled and whether it is compromised is critical, but organizations often struggle to continuously monitor these systems. Government organizations, in particular, must manage their operations with limited budgets, making it difficult to scale with the growing number of vendors and service providers they need to oversee. Threat actors exploit this by targeting smaller vendors to access larger organizations.

Current strategies rely too heavily on initial vetting and lack sufficient post-contract monitoring. Continuous monitoring is no longer optional—it is essential. Organizations need to assess third-party risks not only at the start of a relationship but also as they evolve over time. This proactive approach is crucial in defending against the ever-changing threat landscape.

Proactively Identifying Risk

Proactively identifying and mitigating risks is essential for Government organizations, particularly as threat actors increasingly leverage publicly available data to plan their attacks. Transparency programs, such as USAspending.gov and city-level open checkbook platforms, while necessary for showing how public funds are used, can inadvertently provide a playbook for illicit actors to target vendors and suppliers involved in Government projects. Public data often becomes the first indicator of an impending breach, giving organizations a narrow window—sometimes just 24 hours—to understand threat actors’ operations and take proactive action.

To shift from reactive to proactive, organizations must enhance capabilities in three critical areas:

  1. Speed is vital for detecting threats in real time. Using AI to examine open source and threat intelligence data helps organizations avoid delays caused by time-consuming searches.
  2. The scope of monitoring must extend beyond traditional sources to deep web forums and dark web sites, evaluating text, images and indicators that mimic official branding.
  3. While real-time information is essential, excessive data can lead to alert fatigue. AI models that filter and tag relevant information enable security teams to focus on the most significant risks.
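The third point above, filtering and tagging to reduce alert fatigue, can be sketched as a simple triage pass that surfaces only high-severity alerts touching an organization's own vendor list. The vendor names, fields, and severity threshold are illustrative, not taken from any particular product.

```python
# Hypothetical list of third parties the organization monitors.
WATCHED_VENDORS = {"acme-cloud", "example-logistics"}

def triage(alerts: list[dict], min_severity: int = 7) -> list[dict]:
    """Keep only high-severity alerts that mention a monitored vendor,
    discarding background noise before it reaches the security team."""
    return [
        alert for alert in alerts
        if alert["severity"] >= min_severity
        and alert["vendor"] in WATCHED_VENDORS
    ]
```

A production pipeline would replace the static severity field with model-assigned relevance scores, but the principle is the same: the team reviews a short, ranked list rather than the raw feed.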

Proactively addressing third-party risks requires organizations to stay prepared for immediate threats. By leveraging public data, they can strengthen defenses and act before vulnerabilities are exploited.

While self-reporting and AI tools are valuable, organizations must take ownership of their risk management by conducting their own due diligence. The ability to continuously monitor, identify and mitigate risks presents not just a challenge but an opportunity for growth and improvement. Ultimately, it is the organization’s reputation and security at stake, making proactive risk management key to staying ahead of today’s evolving threats.

To learn more about proactive third-party risk management strategies, watch Dataminr’s webinar “A New Paradigm for Managing Third-Party Risk with OSINT and AI.”


Securing Systems Through Segmentation and Zero Trust

Zero Trust is a cybersecurity strategy that recognizes trust itself as a vulnerability that malicious actors may exploit. Traditionally, systems operated by granting permissions, visibility and trust to a user once they gained access. Rather than merely minimizing trust and the opportunity for breaches, Zero Trust eliminates the concept of trusted packets, systems and users altogether.

Implementing Zero Trust’s Fundamental Design Concepts

While breaches are inevitable, agencies can equip themselves with a Zero Trust framework to prevent successful cyber-attacks. Per the National Institute of Standards and Technology (NIST) architecture, Zero Trust encompasses identity, access permissions and microsegmentation; all three enforcement points are required to complete the Zero Trust model. While security products are a component of a Government agency’s implementation of Zero Trust, it is a strategy that requires proper planning.

To successfully implement Zero Trust, agencies must understand its fundamental design concepts.

  • Focus on business outcomes: Determine key agency objectives and design strategies with those in mind.

  • Design security strategies from the “inside out”: Typically, networks are designed from the “outside in,” beginning with the software and moving on to the data. This can introduce vulnerabilities. By designing software accessibility around the data and assets that need to be protected, agencies can personalize security and minimize vulnerabilities.

  • Determine who or what needs to have access: Individuals should default with the least amount of privilege, having additional access granted on a need-to-know basis.

  • Inspect and log all traffic: Multiple factors should be considered to determine whether to allow traffic, not just authentication. Understanding what traffic is moving in and out of the network prevents breaches.

Fundamentally, Zero Trust is simple. Trust is a human concept, not a digital concept. Once agencies understand the basics of Zero Trust, they can decide which tactics they will use to help them deploy it across their network.

Breaking Up Breaches with Segmentation


In other security strategies, security is implemented at perimeters or endpoints. This places IT far from the data that needs monitoring. The average time between a breach and its discovery is 277 days, and breaches are usually discovered by independent third parties. With flat, unsegmented networks, once attackers gain access, they can take advantage of the entire system. Zero Trust alleviates this by transforming a system’s attack surface into a “protect surface.” Through proper segmentation, systems make the attack surface as small as possible, then place defenses adjacent to that surface to protect it. This area then becomes a more manageable surface for agencies to monitor and protect, closing the time gap between breach and discovery.

Once the strategy method is chosen, agencies must decide which tactics and tools they will use to deploy Zero Trust. Here is a simple, five-step process for deploying Zero Trust.

1. Define the protect surface: It is important to start with knowing what data needs protection. A great first step is to identify the DAAS elements—data, assets, applications and services. Segmentation can help separate these four elements and place each on its own protect surface, giving IT employees a manageable surface to monitor.

2. Map transaction flows: With a robust protect surface, agencies can begin tailoring their Zero Trust environment. Understanding how the entire system functions together is imperative. With visibility into transaction flow mapping, agencies can build and architect the environment around the protect surface.

3. Architect a Zero Trust environment: Agencies should personalize their security to best fit their protect surface. That way, Zero Trust can work for the agency and its environment.

4. Create policy: It is important to ask questions when creating policy, as Zero Trust is a set of granular allowance rules. Who should be allowed access and via what application? When should access be enabled? Where is the data located on the protect surface? Why is the agency doing this? These questions help agencies map out their personalized cybersecurity strategy.

5. Monitor and maintain the protect surface: By creating an anti-fragile system, which increases its capability after exposure to shocks and violations, agencies can adapt and strengthen from stressors.

Segmentation is vital to the theory of Zero Trust. Through centralized management, agencies can utilize segmentation to their benefit, positioning IT adjacent to the specialized surface they protect. Zero Trust can be a learning curve. By implementing each protect surface individually, agencies can avoid becoming overwhelmed. Building from the foundation up allows agencies to control their networks. Additional technologies, such as artificial intelligence (AI) and machine learning (ML), help give defenders the advantage by enabling them to focus on protect surfaces. Through a personalized and carefully planned Zero Trust strategy, agencies can stop breaches and protect their network and data.
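The granular allowance rules described in step 4 can be sketched as explicit who/what/via-which-application tuples, with everything not matched denied by default. The rule contents below are invented purely for illustration.

```python
# Each rule answers the policy questions: who may reach which
# protect surface, and via which application.
ALLOW_RULES = [
    {"who": "hr-staff", "what": "payroll-db", "app": "payroll-web"},
    {"who": "dba",      "what": "payroll-db", "app": "sql-client"},
]

def is_allowed(who: str, what: str, app: str) -> bool:
    """Default-deny: traffic passes only when an explicit rule matches."""
    return any(
        rule["who"] == who and rule["what"] == what and rule["app"] == app
        for rule in ALLOW_RULES
    )
```

Note the default-deny posture: an HR user reaching the payroll database through an unexpected application is blocked even though the same user and destination appear in another rule, which is exactly the lateral movement segmentation is meant to stop.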

Illumio & Zero Trust

Zero Trust often incorporates threat-hunting solutions to detect a problem and then try to block or remove it. But no solution will ever be 100% effective, and it must be assumed that eventually a threat will slip through undetected. Undetected threats will eventually move between workloads, further compromising the network. Illumio, a cloud computing security company that specializes in Zero Trust microsegmentation, can future-proof agencies against malware.

While threat-hunting tools focus on the workload, Illumio focuses on the segment, which means that Illumio enforces the protect surface via the vectors used by any and all threats that try to breach it. Even complex AI-generated malware that emerges in the near future will need to move across segments, so Illumio protects the environment today against threats that will appear tomorrow.

To learn more about Zero Trust and Segmentation, visit Illumio’s webinar, Segmentation is the Foundation of Zero Trust.

FedRAMP Rev. 5 Baselines are Here, Now What?

The FedRAMP Joint Authorization Board (JAB) has given the green light to the update to FedRAMP Rev. 5. With this revision, FedRAMP baselines are now updated in line with the National Institute of Standards and Technology’s (NIST) SP 800-53 Rev. 5 Catalog of Security and Privacy Controls for Information Systems and Organizations and SP 800-53B Control Baselines for Information Systems and Organizations. This transformation brings opportunities and challenges for all stakeholders involved, including Cloud Service Providers (CSPs), Third Party Assessment Organizations (3PAOs), and Federal Agencies. But worry not – with RegScale, we have your back! Let’s dive in and understand the impact and how to prepare for the coming changes.

Decoding the Transition

The transition has been in the works for a very long time, and FedRAMP has updated many of their controls to accurately reflect updates in technology since Rev. 4 was published in 2015. FedRAMP Rev. 5 brings with it significant updates to the security controls to meet emerging threats, including new families such as supply chain risk management, and places a greater emphasis on privacy controls. FedRAMP continues to strongly encourage package submission in NIST Open Security Controls Assessment Language (OSCAL) format to accelerate review and approval processes. To aid with a clear comprehension of the updates, FedRAMP has also released a Rev. 4 to Rev. 5 Baseline Comparison Summary. There are more than 250 controls with significant changes, including several whole new families of controls.
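Identifying the delta between two baselines is, at its core, a set comparison over control identifiers. The sketch below uses invented control IDs purely to illustrate the mechanics; the authoritative comparison is FedRAMP's published Rev. 4 to Rev. 5 Baseline Comparison Summary.

```python
def baseline_delta(rev4: set[str], rev5: set[str]) -> dict:
    """Compare two control baselines by control ID."""
    return {
        "added":   sorted(rev5 - rev4),  # new in Rev. 5
        "removed": sorted(rev4 - rev5),  # withdrawn since Rev. 4
        "carried": sorted(rev4 & rev5),  # in both (wording may still differ)
    }
```

Carried-over controls still need review: a control can keep its ID while its requirements change substantially, so the set comparison is only the first pass of a gap analysis.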

In the coming weeks, FedRAMP plans to release a series of updated OSCAL baseline profiles, resolved profile catalogs, System Security Plan (SSP), Security Assessment Plan (SAP), Security Assessment Report (SAR), and Plans of Action and Milestones (POA&M) templates as well as supporting guides for each of these.

What is OSCAL, You Ask?


OSCAL is a set of standards for digitizing the authorization package through common machine-readable formats developed by NIST in conjunction with the FedRAMP PMO and industry. NIST defines it as a “set of hierarchical, formatted, XML-, JSON- and YAML-based formats that provide a standardized representation for different categories of security information pertaining to the publication, implementation, and assessment of security controls.” OSCAL makes it easier to validate the quality of your FedRAMP packages and expedites the review of those packages.
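Because OSCAL content is plain JSON (or XML/YAML), package metadata can be read and validated with standard tooling rather than by parsing Word documents. The fragment below is a hand-made, heavily abbreviated illustration, not a complete or schema-valid OSCAL SSP.

```python
import json

# Hypothetical, abbreviated OSCAL-style SSP fragment for illustration only.
ssp_json = """
{
  "system-security-plan": {
    "metadata": {"title": "Example CSP SSP", "oscal-version": "1.0.4"}
  }
}
"""

package = json.loads(ssp_json)
metadata = package["system-security-plan"]["metadata"]
title = metadata["title"]  # machine-readable access to package fields
```

This is what "machine-readable" buys reviewers in practice: a script can check required fields, versions, and control references across hundreds of packages in seconds.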

The Impact on CSPs

FedRAMP has published the CSP Transition Plan, providing a comprehensive roadmap and tool for CSPs to identify the scope of the Rev. 5 controls that require testing and offering support for everyone based on their stage in the FedRAMP authorization process. Timelines for the full transition range from immediate to 12-18 months. You should find a technology partner to assist you regardless of your FedRAMP stage so that you can quickly and completely adapt from Rev. 4 to Rev. 5 baselines as well as update, review, and submit your packages in both human-readable (Word, Excel) and machine-readable (OSCAL) formats.

If you are a CSP just getting started with your FedRAMP journey…

As of May 30, 2023, CSPs in the “planning” stage of FedRAMP authorization must adopt the new Rev. 5 baseline in their controls documentation and testing and submit their packages in the updated FedRAMP templates as they become available. You are in the planning phase if you:

  • Are applying for FedRAMP or are in the readiness review process
  • Have not partnered with a federal agency prior to May 30, 2023
  • Have not contracted with a 3PAO for a Rev. 4 assessment prior to May 30, 2023
  • Have a JAB prioritization but have not begun an assessment after the release of the Rev. 5 baselines and templates

If you are a CSP in the “Initiation” phase

CSPs in the initiation phase will complete an Authority to Operate (ATO) using the Rev. 4 baseline and templates. By the later of the issuance of your ATO or September 1, 2023, you will identify the delta between your Rev. 4 implementation and the Rev. 5 requirements, develop plans to address the differences, and document those plans in the SSP and POA&M. You are in the initiation phase if any of the following apply prior to May 30, 2023:

  • Prioritized for the JAB and under contract with a 3PAO or in a 3PAO assessment
  • Have been assessed and are working toward P-ATO package submission
  • Kicked off the JAB P-ATO review process
  • Partnered with a federal agency and are:
    • Currently under contract with a 3PAO
    • Undergoing a 3PAO assessment
    • Assessed and have submitted the package for Agency ATO review

If you are a Fully Authorized CSP

You are in the “continuous monitoring” phase if you are a CSP with a current FedRAMP authorization. By September 1, 2023, you need to identify the delta between your current Rev. 4 implementation and the Rev. 5 requirements, develop plans to address the differences and document those plans in the SSP and POA&M. By October 2, 2023, you should update plans based on any shared controls.

If your latest assessment was completed between January 2 and July 3, 2023, you have a maximum of one year from the date of the last assessment to complete all implementation and testing activities for Rev. 5. If your annual assessment is scheduled between July 3 and December 15, 2023, you will need to complete all implementation and testing activities no later than your next scheduled annual assessment in 2023/2024.

A Complete Technology and Transition Partner

The transition to FedRAMP Rev. 5 is not just about meeting the new requirements but doing so in the most efficient and seamless manner. You should focus on your core business while technology like RegScale handles the intricacies of the compliance transition.

Beyond compliance documentation, RegScale serves as a comprehensive FedRAMP compliance technology and transition partner. Our platform assists with mapping your security controls against FedRAMP and NIST SP 800-53 baselines for Rev. 4 and Rev. 5, supports gap analysis, provides remediation support, and enables continuous monitoring and improvement. The platform currently includes FedRAMP support and tools to develop human-readable and OSCAL-formatted content for Catalogs, Profiles, SSPs, Components, SAPs, SARs, POA&Ms and Asset Inventory. To help eliminate the friction and confusion of where to begin with OSCAL, RegScale provides an intuitive Graphical User Interface (GUI) to build artifacts using our wizards and then easily export them as valid OSCAL. By automating the creation of audit-ready documentation and allowing direct submission to the FedRAMP Project Management Office (PMO) through OSCAL and/or Word/Excel templates, RegScale provides a seamless transition experience to Rev. 5, reducing complexities and saving you valuable time and resources.

In closing, it is crucial for all CSPs and stakeholders to review the new mandates and the CSP Transition Plan and begin planning to address the updated templates. Let RegScale help make the shift to FedRAMP Rev. 5 a streamlined, efficient, and effective process with minimum costs and business disruptions.

This post originally appeared on Regscale.com and is re-published with permission.

View our webinar to learn more about the low-cost approaches for handling the transition to Rev 5.

    How Palantir Meets IL6 Security Requirements with Apollo

    Building secure software requires robust delivery and management processes, with the ability to quickly detect and fix issues, discover new vulnerabilities, and deploy patches. This is especially difficult when services are run in restricted, air-gapped environments or remote locations, and was the main reason we built Palantir Apollo.

    With Apollo, we are able to patch, update, or make changes to a service in 3.5 minutes on average and have significantly reduced the time required to remediate production issues, from hours to under 5 minutes.

    For 20 years, Palantir has worked alongside partners in the defense and intelligence spaces. We have encoded our learnings for managing software in national security contexts. In October 2022, Palantir received an Impact Level 6 (IL6) provisional authorization (PA) from the Defense Information Systems Agency (DISA) for our federal cloud service offering.

    IL6 accreditation is a powerful endorsement, recognizing that Palantir has met DISA’s rigorous security and compliance standards and making it easier for U.S. Government entities to use Palantir products for some of their most sensitive work.

    The road to IL6 accreditation can be challenging and costly. In this blog post, we share how we designed a consistent, cross-network deployment model using Palantir Apollo’s built-in features and controls in order to satisfy the requirements for operating in IL6 environments.

    What are FedRAMP, IL5, and IL6?

    With the rise of cloud computing in the government, DISA defined the operating standards for software providers seeking to offer their services in government cloud environments. These standards are meant to ensure that providers demonstrate best practices when securing the sensitive work happening in their products.

    DISA’s standards are based on a framework that measures risk in a provider’s holistic cloud offering. Providers must demonstrate both their products and their operating strategy are deployed with safety controls aligned to various levels of data sensitivity. In general, more controls mean less risk in a provider’s offering, making it eligible to handle data at higher sensitivity levels.


    Impact Levels (ILs) are defined in DISA’s Cloud Computing SRG as Department of Defense (DoD)-developed categories for leveraging cloud computing based on the “potential impact should the confidentiality or the integrity of the information be compromised.” There are currently four defined ILs (2, 4, 5, and 6), with IL6 being the highest and the only IL covering potentially classified data that “could be expected to have a serious adverse effect on organizational operations” (the SRG is available for download as a .zip from the DoD Cyber Exchange).

    Defining these standards allows DISA to enable a “Do Once, Use Many” approach to software accreditation that was pioneered with the FedRAMP program. For commercial providers, IL6 authorization means government agencies can fast-track use of their services instead of running lengthy and bespoke audit and accreditation processes. The DoD maintains a Cloud Service Catalog that lists offerings that have already been granted PAs, making it easy for potential user groups to pick vetted products.

    NIST and the Risk Management Framework

    The DoD bases its security evaluations on the National Institute of Standards and Technology’s (NIST) Risk Management Framework (RMF), which outlines a generic process used widely across the U.S. Government to evaluate IT systems.

    The RMF provides guidance for identifying which security controls exist in a system so that the RMF user can assess the system and determine if it meets the users’ needs, like the set of requirements DISA established for IL6.

    Controls are descriptive and focus on whole system characteristics, including those of the organization that created and operates the system. For example, the Remote Access (AC-17) control is defined as:

    The organization:

    • Establishes and documents usage restrictions, configuration/connection requirements, and implementation guidance for each type of remote access allowed;
    • Authorizes remote access to the information system prior to allowing such connections.

    Because of how controls are defined, a primary aspect of the IL6 authorization process is demonstrating how a system behaves to match control descriptions.

    Demonstrating NIST Controls with Apollo

    Apollo was designed with many of the NIST controls in mind, which made it easier for us to assemble and demonstrate an IL6-eligible offering using Apollo’s out-of-the box features.

    Below we share how Apollo allows us to address six of the twenty NIST Control Families (categories of risk management controls) that are major themes in the hundreds of controls adopted as IL6 requirements.

    System and Services Acquisition (SA) and Supply Chain Risk Management (SR)

    The System and Services Acquisition (SA) family and related Supply Chain Risk Management (SR) family (created in Revision 5 of the RMF guidelines) cover the controls and processes that verify the integrity of the components of a system. These measures ensure that component parts have been vetted and evaluated, and that the system has safeguards in place as it inevitably evolves, including if a new component is added or a version is upgraded.

    In a software context, modern applications are now composed of hundreds of individual software libraries, many of which come from the open source community. Securing a system’s software supply chain requires knowing when new vulnerabilities are found in code that’s running in the system, which happens nearly every day.

    Apollo helped us address SA and SR controls because it has container vulnerability scanning built directly into it.

    Figure 1: The security scan status appears for each Release on the Product page for an open-source distribution of Redis

    When a new Product Release becomes available, Apollo automatically scans the Release to see if it’s subject to any of the vulnerabilities in public security catalogs, like MITRE’s Common Vulnerabilities and Exposures (CVE) List.

    If Apollo finds that a Release has known vulnerabilities, it alerts the team at Palantir responsible for developing the Product in order to make sure a team member updates the code to patch the issue. Additionally, our information security teams use vulnerability severity to define criteria for what can be deployed while still keeping our system within IL6 requirements.
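
    The scan-then-gate pattern described above can be sketched in a few lines. This is an illustrative sketch, not Apollo’s implementation: the catalog contents, package names, and severity policy are all assumptions.

```python
# Hypothetical CVE catalog mapping package versions to known
# findings; real scanners pull this from feeds like the CVE List.
CVE_CATALOG = {
    "redis:6.2.5": [("CVE-2021-41099", "HIGH")],
    "openssl:1.1.1k": [("CVE-2021-3711", "CRITICAL")],
}

# Example policy: block deployment at or above this severity.
SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}
BLOCK_AT = "CRITICAL"

def scan_release(dependencies):
    """Collect all known findings for a release's dependency list."""
    findings = []
    for dep in dependencies:
        findings.extend(CVE_CATALOG.get(dep, []))
    return findings

def deployable(findings):
    """Apply the severity policy to a release's findings."""
    threshold = SEVERITY_RANK[BLOCK_AT]
    return all(SEVERITY_RANK[sev] < threshold for _, sev in findings)

release = ["redis:6.2.5", "left-pad:1.0.0"]
findings = scan_release(release)
print(findings)              # [('CVE-2021-41099', 'HIGH')]
print(deployable(findings))  # True -- HIGH is below the CRITICAL cutoff
```

    The key design point is that the severity policy is data, not code, so an information security team can tighten the deployment criteria without touching the scanner.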

    Figure 2: An Apollo scan of an open-source distribution of Redis shows active CVEs

    Scanning for these weak spots in our system is now an automatic part of Apollo and a crucial element in making sure our IL6 services remain secure. Without it, mapping newly discovered security findings to where they’re used in a software platform is an arduous, manual process that’s intractable as the complexity of a platform grows, and would make it difficult or impossible to accurately estimate the security of a system’s components.

    Configuration Management (CM)

    The Configuration Management (CM) group covers the safety controls that exist in the system for validating and applying changes to production environments.

    CM controls include the existence of review and approval steps when changing configuration, as well as the ability within the system for administrators to assign approval authority to different users based on what kind of change is proposed.

    Apollo maintains a YAML-based configuration file for each individual microservice within its configuration management service. Any proposed configuration change creates a Change Request (CR), which then has to be reviewed by the owner of the product or environment.

    Changes within our IL6 environments are sent to Palantir’s centralized team of operations personnel, Baseline, which verifies that the Change won’t cause disruptions and approves the new configuration to be applied by Apollo. In development and testing environments, Product teams are responsible for approving changes. Because each service has its own configuration, it’s possible to fine-tune an approval flow for whatever’s most appropriate for an individual product or environment.
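
    The approval routing described above can be modeled simply: each environment has a designated approver, and every state change is recorded for the audit trail. The data model and routing table below are assumptions for illustration, not Apollo’s API.

```python
# Sketch of a change-request workflow with per-environment approvers.
from dataclasses import dataclass, field

# Hypothetical routing: centralized ops approve production changes,
# product teams approve changes in development and testing.
APPROVERS = {
    "il6-prod": "baseline-team",
    "dev": "product-team",
}

@dataclass
class ChangeRequest:
    service: str
    environment: str
    proposed_config: dict
    status: str = "pending"
    history: list = field(default_factory=list)

    def approve(self, approver: str):
        # Only the designated owner for this environment may approve.
        if approver != APPROVERS[self.environment]:
            raise PermissionError(
                f"{approver} cannot approve {self.environment} changes")
        self.status = "approved"
        # Recording who approved and when also supports AU controls.
        self.history.append(("approved", approver))

cr = ChangeRequest("search-api", "il6-prod", {"replicas": 3})
cr.approve("baseline-team")
print(cr.status)  # approved
```

    Because the routing table is per-environment, the same mechanism yields strict review in production and lightweight review in dev, mirroring the fine-tuned approval flows described above.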

    Figure 3: An example Change Request to remove a Product from an Environment

    A history of changes is saved and made available for each service, where you can see who approved a CR and when, which also addresses Audit and Accountability (AU) controls.

    When a change is made, Apollo first validates it and then applies it during configured maintenance windows, which helps to avoid the human error that’s common in managing service configuration, like introducing an untested typo that interrupts production services. This added stability has made our systems easier to manage and, consequently, easier to keep secure.

    Incident Response (IR)

    The Incident Response (IR) control family pertains to how effectively an organization can respond to incidents in their software, including when its system comes under attack from bad actors.

    A crucial aspect to meeting IR goals is being able to quickly patch a system, quarantine only the affected parts of the system, and restore services as quickly as is safely possible.

    A major feature that Apollo brings to our response process is the ability to quickly ship code updates across network lines. If a product owner needs to patch a service, they simply need to make a code change. From there, a release is generated, and Apollo prepares an export for IL6 that is applied automatically once it’s transferred by our Network Operations Center (NOC) team according to IL6 security protocols. Apollo performs the upgrade without intervention, which removes expensive coordination steps between the product owner and the NOC.

    Figure 4: How Apollo works across network lines to an air-gapped deployment

    Additionally, Apollo allows us to save Templates of our Environments that contain configuration that is separate from the infrastructure itself. This has made it easy for us to take a “cattle, not pets” approach to underlying infrastructure. With secrets and other configuration decoupled from the Kubernetes cluster or VMs that run the services, we can easily reapply them onto new infrastructure should an incident ever pop up, making it simple to isolate and replace nodes of a service.

    Figure 5: Templates make it easy to manage Environments that all use the same baseline

    Contingency Planning (CP)

    Contingency Planning (CP) controls demonstrate preparedness should service instability arise that would otherwise interrupt services. This includes the human component of training personnel to respond appropriately, as well as automatic controls that kick in when problems are detected.

    We address the CP family by using Apollo’s in-platform monitoring and alerting, which allows product or environment owners to define alerting thresholds based on open-standard metric types, including Prometheus’s metrics format.

    Figure 6: Monitors configured for all of the Products in an Environment make it easy to track the health of software components

    Apollo monitors our IL6 services and routes alerts to members of our NOC team through an embedded alert inbox. Alerts are automatically linked to relevant service logging and any associated Apollo activity, which has drastically sped up the remediation process when services or infrastructure experience unexpected issues. The NOC is able to address alerts by following runbooks prepared for and linked to within alerts. When needed, alerts are triaged to teams that own the product for more input.

    Because we’ve standardized our monitors in Apollo, we’ve been able to create straightforward protocols and processes for responding to incidents, which means we can act on contingency plans more quickly and ensure our systems remain secure.

    Access Control (AC)

    The Access Control (AC) control family describes the measures in a system for managing accounts and ensuring accounts are only given the appropriate levels of permissions to perform actions in the system.

    Robustly addressing AC controls includes having a flexible system where individual actions can be granted based on what a user needs to be able to do within a specific context.

    In Apollo, every action and API has an associated role, which can be assigned to individual users or Apollo Teams, which are managed within Apollo and can be mirrored from an SSO provider.

    Roles necessary for operating environments (e.g. approving the installation of a new component) are granted to our Baseline team, and are restricted as needed to a smaller group of environment owners based on an environment’s compliance requirements. Team management is reserved for administrators, and roles that include product lifecycle actions (e.g. recalling a product release) are given to development teams.
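
    The role model described above is a standard role-based access control (RBAC) check: every action maps to a role, roles are granted to users or teams, and an action is permitted only if one of the caller’s roles grants it. The role and action names below are hypothetical, chosen to echo the examples in the text.

```python
# Illustrative RBAC check; not Apollo's actual role catalog.
ROLE_GRANTS = {
    "baseline-team": {"environment:install", "environment:configure"},
    "product-team": {"release:publish", "release:recall"},
    "admin": {"team:manage"},
}

USER_ROLES = {
    "alice": {"baseline-team"},
    "bob": {"product-team"},
}

def can(user: str, action: str) -> bool:
    """True if any of the user's roles grants the action."""
    return any(action in ROLE_GRANTS[role]
               for role in USER_ROLES.get(user, ()))

print(can("alice", "environment:install"))  # True
print(can("bob", "environment:install"))    # False
```

    Because permissions are granted per action rather than per system, each user holds only the grants their function requires, which is the principle of least privilege in practice.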

    Figure 7: Products and Environments have configurable ownership that ensures the right team is monitoring their resources

    Having a single system to divide responsibilities by functional areas means that our access control system is consistent and easy to understand. Further, being able to granularly assign roles to perform different actions makes it possible to meet the principle of least-privilege system access that underpins AC controls.

    Conclusion

    The bar to operate with IL6 information is rightfully a high one. We know obtaining IL6 authorization can feel like a long process — however, we believe this should not prevent the best technology from being available to the U.S. Government. It’s with that belief that we built Apollo, which became the foundation for how we deploy to all of our highly secure and regulated environments, including FedRAMP, IL5, and IL6.

    Additionally, we recently started a new program, FedStart, where we partner with organizations just starting their accreditation journey to bring their technology to these environments. If you’re interested in working together, reach out to us at fedstart@palantir.com for more information.

    Get in touch if you want to learn more about how Apollo can help you deploy to any kind of air-gapped environment, and check out the Apollo Content Hub for white papers and other case studies.

    This post originally appeared on Palantir.com and is re-published with permission.

    Download our Resource, “Solution Overview: Palantir—Apollo” to learn more about how Palantir Technologies can support your organization.