Bridging Identity Governance and Dynamic Access: The Anatomy of a Contextual and Dynamic Access Policy

As organizations adapt to increasingly complex IT ecosystems, traditional static access policies fail to meet modern security demands. This installment continues to explore how identity attributes and governance controls shape contextual and dynamic access policies, as highlighted in the previous articles, "Governing Identity Attributes in a Contextual and Dynamic Access Control Environment" and "SailPoint Identity Security: The Foundation of DoD ICAM and Zero Trust." It examines the role of identity governance controls, such as role-based access (dynamic or policy-based), lifecycle management, and separation of duties, as the foundation for real-time decision-making and compliance. Together, these approaches not only mitigate evolving threats but also align with critical standards such as NIST SP 800-207, the NIST CSF, and DHS CISA recommendations, enabling secure, adaptive, and scalable access ecosystems. Discover how this integration empowers organizations to achieve zero-trust principles, enhance operational resilience, and maintain regulatory compliance in an era of dynamic threats.

Author's Note: While I referenced DoD instruction and guidance, the examples in this document can be applied to the NIST Cybersecurity Framework and NIST SP 800-53 controls as well. My next article will speak specifically to the applicability of the DHS CDM MUR and future proposed DEFEND capabilities.


Defining Contextual and Dynamic Access Policies

Contextual and dynamic access policies adapt access decisions based on real-time inputs, including user identity, device security posture, behavioral patterns, and environmental risks. By focusing on current context rather than static attributes, these policies mitigate risks such as over-provisioning or unauthorized access.

Key Features:

  • Contextual Awareness: Evaluates real-time signals such as login frequency, device encryption status, geolocation, and threat intelligence.
  • Dynamic Decision-Making: Enforces least-privilege access dynamically and incorporates risk-based authentication (e.g., triggering MFA only under high-risk scenarios).
  • Identity Governance Integration: Leverages governance structures to align access with roles, responsibilities, and compliance standards.
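
To make these features concrete, the sketch below shows how a policy engine might combine a governance baseline with real-time context to reach a dynamic decision such as step-up MFA. It is a minimal, hypothetical illustration; the signal names, weights, and thresholds are invented rather than drawn from any SailPoint product.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """Real-time signals gathered at the moment of the access request."""
    device_encrypted: bool
    geo_risk: float        # 0.0 (trusted location) .. 1.0 (high-risk)
    anomalous_login: bool  # e.g., unusual hours or impossible travel

def decide(role_entitlements: set, resource: str, ctx: AccessContext) -> str:
    """Return 'allow', 'step_up_mfa', or 'deny' from governance plus context."""
    # Governance baseline: the user's role must grant the entitlement at all.
    if resource not in role_entitlements:
        return "deny"
    # Contextual risk score built from real-time signals (illustrative weights).
    risk = 0.0
    if not ctx.device_encrypted:
        risk += 0.4
    if ctx.anomalous_login:
        risk += 0.3
    risk += 0.3 * ctx.geo_risk
    # Dynamic decision: low risk passes, moderate risk triggers MFA, high risk is denied.
    if risk < 0.3:
        return "allow"
    if risk < 0.7:
        return "step_up_mfa"
    return "deny"

# An entitled user on an unencrypted device in a risky location gets a step-up challenge.
print(decide({"finance-report"}, "finance-report",
             AccessContext(device_encrypted=False, geo_risk=0.8, anomalous_login=False)))
```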

The Role of Identity Governance Controls

Identity governance forms the backbone of effective contextual and dynamic access policies by providing the structure needed for secure access management. Core components include:

  • Role-Based Access Control (RBAC), Dynamic/Policy-based: Defines roles and associated entitlements to reduce excessive or inappropriate access.
  • Access Reviews: Ensures periodic validation of user access rights, aligning with business needs and compliance mandates.
  • Separation of Duties (SoD): Prevents conflicts of interest by limiting excessive control over critical processes.
  • Lifecycle Management: Automates the provisioning and de-provisioning of access rights as roles change.
  • Policy Framework: Establishes clear baselines for determining who can access what resources under specific conditions.
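
As an illustration of how two of these controls interact, the hypothetical sketch below checks a proposed role assignment against a separation-of-duties rule set before provisioning. The rule pairs and role names are invented for the example and are not a specific product's policy model.

```python
# Hypothetical separation-of-duties rules: role pairs that must never be held together.
SOD_CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"submit_timesheet", "approve_timesheet"}),
}

def sod_violations(current_roles: set, requested_role: str) -> list:
    """Return every SoD rule that the requested assignment would violate."""
    proposed = current_roles | {requested_role}
    return [rule for rule in SOD_CONFLICTS if rule <= proposed]

# Lifecycle example: a user who can create vendors requests payment-approval rights.
violations = sod_violations({"create_vendor"}, "approve_payment")
if violations:
    print("Blocked by separation of duties:", [sorted(v) for v in violations])
else:
    print("Provisioned")
```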

Balancing Runtime Evaluation and Governance Controls

While governance controls establish structured, policy-driven access frameworks, runtime evaluations add the flexibility to adapt to real-time risks. Together, they create a layered security approach:

  • Baseline Governance: Sets foundational access rights using role-based policies and lifecycle management.
  • Dynamic Contextualization: Enhances governance by factoring in real-time conditions to ensure access decisions reflect current risk levels.
  • Feedback Loops: Insights from runtime evaluations inform and refine governance policies over time.

Benefits of Integration

By combining governance controls with contextual access policies, organizations achieve:

  • Enhanced security through continuous evaluation and dynamic risk mitigation.
  • Improved compliance with regulatory frameworks like GDPR, HIPAA, and NIST standards.
  • Operational efficiency by automating access reviews and reducing administrative overhead.

The integration of contextual and dynamic access policies with identity governance controls addresses the dual needs of flexibility and security in modern cybersecurity strategies. By combining structured governance with real-time adaptability, organizations can mitigate risks, ensure compliance, and achieve a proactive security posture that aligns with evolving business needs and regulatory demands. This layered approach represents the future of access management in a rapidly changing digital environment.


To learn more about how SailPoint can support your organization’s efforts within identity governance, cybersecurity and Zero Trust, view our resource, “The Anatomy of a Contextual and Dynamic Access Policy.”


Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including SailPoint, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Governing Identity Attributes in a Contextual and Dynamic Access Control Environment

In the rapidly evolving landscape of cybersecurity, federal agencies, the Department of Defense (DoD), and critical infrastructure sectors face unique challenges in governing identity attributes within dynamic and contextual access control environments. The Department of Defense Instruction 8520.04, Identity Authentication for Information Systems, underscores the importance of identity governance in establishing trust and managing access across DoD systems. In parallel, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (DHS CISA) guidance and the National Institute of Standards and Technology (NIST) frameworks further emphasize the critical need for secure and adaptive access controls in safeguarding critical infrastructure and federal systems.

This article examines the governance of identity attributes in this complex environment, linking these practices to Attribute-Based Access Control (ABAC) and Role-Based Access Control (RBAC) models. It highlights how adherence to DoD 8520.04, CISA’s Zero Trust Maturity Model, and NIST guidelines enables organizations to maintain the accuracy, security, and provenance of identity attributes. These efforts are particularly crucial for critical infrastructure, where the ability to dynamically evaluate and protect access can prevent disruptions to essential services and minimize security risks. By integrating these principles, organizations not only achieve regulatory compliance but also strengthen their defense against evolving threats, ensuring the resilience of national security systems and vital infrastructure.


Importance of Governing Identity Attributes

Dynamic Access Control

In a dynamic access control environment (Zero Trust), access decisions are made based on real-time evaluation of identity attributes and contextual information. Identity governance plays a pivotal role in ensuring that these attributes are accurate, up-to-date, and relevant. Effective identity governance facilitates:

  • Real-time Access Decisions: By maintaining a comprehensive and current view of identity attributes, organizations can make informed and timely access decisions, ensuring that users have appropriate access rights based on their roles, responsibilities, and the context of their access request.
  • Adaptive Security: Identity governance enables adaptive security measures that can dynamically adjust access controls in response to changing risk levels, user behaviors, and environmental conditions.

Attribute Provenance

Attribute provenance refers to the history and origin of identity attributes. Understanding the provenance of attributes is critical for ensuring their reliability and trustworthiness. Identity governance supports attribute provenance by:

  • Tracking Attribute Sources: Implementing mechanisms to track the origins of identity attributes, including the systems and processes involved in their creation and modification.
  • Ensuring Data Integrity: Establishing validation and verification processes to ensure the integrity and accuracy of identity attributes over time.
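
A provenance record can be as simple as capturing the asserting source, a timestamp, and an integrity digest each time an attribute changes. The sketch below is a generic illustration of that idea, assuming an invented record structure rather than any particular governance product's schema.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AttributeRecord:
    """One identity attribute along with a provenance trail of prior values."""
    name: str
    value: str
    source: str                       # authoritative system that asserted the value
    history: list = field(default_factory=list)

    def update(self, new_value: str, source: str) -> None:
        # Keep the prior value, who asserted it, when, and a digest for integrity checks.
        self.history.append({
            "value": self.value,
            "source": self.source,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "digest": hashlib.sha256(self.value.encode()).hexdigest(),
        })
        self.value, self.source = new_value, source

clearance = AttributeRecord("clearance_level", "secret", source="hr-system")
clearance.update("top_secret", source="security-office")
print(clearance.history[0]["source"], "->", clearance.source)  # hr-system -> security-office
```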

Attribute Protection

Protecting identity attributes from unauthorized access, alteration, or misuse is fundamental to maintaining a secure access control environment. Identity governance enhances attribute protection through:

  • Access Controls: Implementing stringent access controls to limit who can view, modify, or manage identity attributes.
  • Encryption and Masking: Utilizing encryption and data masking techniques to protect sensitive identity attributes both at rest and in transit.
  • Monitoring and Auditing: Continuously monitoring and auditing access to identity attributes to detect and respond to any suspicious activities or policy violations.
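
As a small example of the masking technique mentioned above, the snippet below keeps a sensitive attribute usable for display or logging without exposing its full value. A real deployment would also encrypt the stored value; this is only an illustrative helper.

```python
def mask_attribute(value: str, visible: int = 4) -> str:
    """Mask all but the last few characters of a sensitive identity attribute."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

print(mask_attribute("123-45-6789"))  # *******6789
```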

Attribute Effectiveness

The effectiveness of identity attributes in supporting access control decisions is contingent upon their relevance, accuracy, and granularity. Identity governance ensures attribute effectiveness by:

  • Regular Reviews and Updates: Conducting periodic reviews and updates of identity attributes to align with evolving business needs, regulatory requirements, and security policies.
  • Feedback Mechanisms: Establishing feedback mechanisms to assess the effectiveness of identity attributes in real-world access control scenarios and make necessary adjustments.

Risks Associated with ABAC and RBAC

ABAC Risks

ABAC relies on the evaluation of attributes to make access control decisions. While ABAC offers flexibility and granularity, it also presents several risks:

  • Complexity: The complexity of managing a large number of attributes and policies can lead to misconfigurations and errors, potentially resulting in unauthorized access or access denials.
  • Scalability: As the number of attributes and policies grows, the scalability of the ABAC system can be challenged, affecting performance and responsiveness.
  • Attribute Quality: The effectiveness of ABAC is heavily dependent on the quality of the attributes. Inaccurate, outdated, or incomplete attributes can compromise access control decisions.

RBAC Risks

RBAC assigns access rights based on predefined roles. While RBAC simplifies access management, it also has inherent risks:

  • Role Explosion: The proliferation of roles to accommodate varying access needs can lead to role explosion, complicating role management and increasing administrative overhead.
  • Stale Roles: Over time, roles may become stale or misaligned with current job functions, leading to over-privileged or under-privileged access.
  • Inflexibility: RBAC may lack the flexibility to handle dynamic and context-specific access requirements, limiting its effectiveness in modern, agile environments.
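
To make the contrast between the two models concrete, here is a hypothetical side-by-side sketch: the RBAC check consults only a static role-to-entitlement mapping, while the ABAC check evaluates attributes of the user, resource, and environment at request time. The roles, attributes, and rule are invented for illustration.

```python
# RBAC: a static mapping of roles to permitted actions.
ROLE_PERMISSIONS = {
    "analyst": {"read_report"},
    "admin": {"read_report", "delete_report"},
}

def rbac_allows(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

# ABAC: a policy evaluated over attributes of the user, resource, and environment.
def abac_allows(user: dict, resource: dict, env: dict) -> bool:
    return (
        user["department"] == resource["owning_department"]
        and user["clearance"] >= resource["sensitivity"]
        and env["device_compliant"]
    )

print(rbac_allows("analyst", "delete_report"))  # False: the role lacks the entitlement
print(abac_allows(
    {"department": "finance", "clearance": 3},
    {"owning_department": "finance", "sensitivity": 2},
    {"device_compliant": True},
))  # True: the attributes satisfy the policy at request time
```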

Importance to a Zero Trust Model

The Zero Trust model is predicated on the principle of “never trust, always verify,” emphasizing continuous verification of identity and context for access decisions. Governing identity attributes is integral to the Zero Trust model for several reasons:

  • Continuous Verification: Accurate and reliable identity attributes are essential for continuous verification processes that dynamically assess access requests in real-time.
  • Context-Aware Security: By governing identity attributes, organizations can implement context-aware security measures that consider a wide range of factors, including user behavior, device health, and network conditions.
  • Minimizing Attack Surface: Effective governance of identity attributes helps minimize the attack surface by ensuring that access rights are tightly controlled and aligned with current security policies and threat landscapes.

Governing identity attributes is a cornerstone of modern access control strategies, particularly within the dynamic and contextual environments that characterize today’s IT ecosystems. By supporting dynamic access, ensuring attribute provenance, protection, and effectiveness, and addressing the risks associated with ABAC and RBAC, identity governance enhances the security and efficiency of access control mechanisms. In the context of a Zero Trust model, the rigorous governance of identity attributes is indispensable for maintaining robust and adaptive security postures, ultimately contributing to the resilience and integrity of organizational systems and data.

To learn more about SailPoint’s cybersecurity capabilities and how it can support mission-critical DoD initiatives, view our technology solutions portfolio. Additionally, check out our other blog highlighting the latest insights into “The Role of Identity Governance in the Implementation of DoD Instruction 8520.04”.


StateRAMP: Recognizing the Importance of Framework Harmonization

By Carahsoft's Vice President for StateRAMP Solutions

StateRAMP builds on the National Institute of Standards and Technology (NIST) Special Publication 800-53 standard, which underpins FedRAMP’s approach to cloud security for Federal agencies by offering a consistent framework for security assessment, authorization and continuous monitoring. Recognizing the need for a similar framework at the State and Local levels, StateRAMP has been developed to tailor these Federal standards to the unique needs of State and Local Governments.  

Key to StateRAMP’s initiative is the focus on framework harmonization, which aligns State and Local regulations with broader Federal and industry standards. This harmonization includes efforts like FedRAMP/TX-RAMP reciprocity and the CJIS task force, making compliance more streamlined. By mapping more compliance frameworks to one another, StateRAMP helps Government agencies and industry players leverage existing work, avoid redundancy and facilitate smoother procurement of secure technologies. Carahsoft supports this mission by partnering with StateRAMP Authorized vendors and engaging in initiatives that promote these harmonization efforts, such as the StateRAMP Cyber Summit and Federal News Networks’ StateRAMP Exchange.  

Developing Framework Harmonization 

CSPs often operate across multiple sectors and industries, each regulated by distinct frameworks such as FedRAMP, CJIS, IRS Publication 1075, PCI DSS, FISMA, and HIPAA. Managing compliance across multiple frameworks can lead to redundant processes, inefficiencies and complexity. These challenges have emphasized the need for framework harmonization: aligning various cybersecurity frameworks to create a more cohesive and streamlined process.


With the FedRAMP transition to the NIST SP 800-53 Rev. 5 requirements in 2023, StateRAMP began working towards harmonization with FedRAMP across all impact levels. Through the StateRAMP Fast Track Program, CSPs pursuing FedRAMP authorization can leverage the same compliance documentation, including Plans of Action and Milestones (POA&M), System Security Plans (SSP), security controls matrix and Third Party Assessment Organization (3PAO) audits, to achieve StateRAMP authorization.

Reciprocity between StateRAMP and TX-RAMP has been established to streamline cybersecurity compliance for CSPs working with Texas state agencies, higher education institutions and public community colleges. CSPs that achieve a StateRAMP Ready or Authorized status are eligible to attain TX-RAMP certification at the same impact level through an established process. Additionally, StateRAMP’s Progressing Security Snapshot Program offers a pathway to provisional TX-RAMP certification, enabling CSPs to engage with Texas agencies while working towards StateRAMP compliance. Once CSPs have enrolled in the Snapshot Program or have engaged with a 3PAO to conduct an audit, they are added to the Progressing Product List, a public directory of products and their cybersecurity maturity status. This reciprocity eases the burden of navigating multiple compliance frameworks and certifications.  

Harmonized frameworks enable CSPs to align with the cybersecurity objectives of various organizations while simultaneously addressing a broader range of threats and vulnerabilities, improving overall security. StateRAMP’s focus is to align requirements across the Federal, State, Local and Educational sectors to reduce the cost of development and deployment through a unified set of standards. To ensure the Public and Private Sectors work in alignment, StateRAMP members have access to the same guidance, tools and resources necessary for implementing a harmonized framework. This initiative will streamline the compliance process through a unified approach to cybersecurity that ensures adherence to industry and regulatory requirements. 

The Future of StateRAMP  

StateRAMP has rolled out an overlay to its Moderate Impact Level baseline that maps to Criminal Justice Information Services (CJIS) Security Policy. This overlay is designed to strengthen cloud security in the law enforcement sector, helping assess a product’s potential for CJIS compliance in safeguarding critical information.  

At the 2024 StateRAMP Cyber Summit, Deputy Information Security Officer Jeffrey Campbell from the FBI CJIS addressed the challenges state and local entities face when adopting cloud technologies. He explained that while state constituents frequently asked if they could use FedRAMP for cloud initiatives, the answer was often complicated because FedRAMP alone does not fully meet CJIS requirements. “You can use vendors vetted through FedRAMP, that is going to get you maybe 80% of these requirements. There’s still 20% you’re going to have to do on your own,” Campbell noted. He emphasized that, through framework harmonization, StateRAMP can bridge this compliance gap, offering states a viable solution to achieve several parallel security standards.

Another initiative is the NASPO/StateRAMP Task Force, which was formed to bring together procurement officials, cybersecurity experts, Government officials, industry representatives and IT professionals. The task force aims to produce tools and resources for procurement officials nationwide to make the StateRAMP adoption process more streamlined and consistent.

Though still relatively new, StateRAMP is gaining traction, with 28 participating states as of October 2024. As cyberattacks become more sophisticated, cybersecurity compliance has become a larger point of emphasis at every level of Government to protect sensitive data. StateRAMP is working to bring all stakeholders together to drive toward a common understanding and acceptance of standardized security requirements. StateRAMP’s proactive steps to embrace framework harmonization are helping CSPs and State and Local Governments move towards a more secure digital future.

To learn more about the advantages the StateRAMP program offers State Governments and technology suppliers watch the Federal News Network’s StateRAMP Exchange, presented by Carahsoft.  

To learn more about framework harmonization and gain valuable insights into others, such as cloud security, risk management and procurement best practices, watch the StateRAMP Cyber Summit, presented by Carahsoft. 

Third-Party Risk Management: Moving from Reactive to Proactive

In today’s interconnected world, cyber threats are more sophisticated, with 83% of cyberattacks originating externally, according to the 2023 Verizon Data Breach Investigations Report (DBIR). This has prompted organizations to rethink third-party risk management. The 2023 Gartner Reimagining Third Party Cybersecurity Risk Management Survey found that 65% of security leaders increased their budgets, 76% invested more time and resources and 66% enhanced automation tools to combat third-party risks. Despite these efforts, 45% still reported increased disruptions from supply chain vulnerabilities, highlighting the need for more effective strategies.

Information vs Actionable Alerts

The constant evolution and splintering of illicit actors pose a challenge for organizations. Many threat groups have short lifespans or re-form due to law enforcement takedowns, infighting and shifts in ransomware-as-a-service networks, making it difficult for organizations to keep pace. A countermeasure against one attack may quickly become outdated as these threats evolve, requiring constant adaptation to new variations.

In cybersecurity, information is abundant, but decision-makers must distinguish between information and actionable alerts. Information provides awareness but does not always drive immediate action, whereas alerts deliver real-time insights, enabling quick threat identification and response. Public data and real-time alerts help detect threats not visible in existing systems, allowing organizations to make proactive defense adjustments.

Strategies for Managing Third-Party Risk


Managing third-party risk has become a critical challenge. The NIST Cybersecurity Framework (CSF) 2.0 emphasizes that governance must be approached holistically and highlights the importance of comprehensive third-party risk management. Many organizations rely on vendor surveys, attestations and security ratings, but these provide merely a snapshot in time and are often revisited only during contract negotiations. The NIST CSF 2.0 calls for continuous monitoring—a practice many organizations follow, though it is often limited to identifying trends and anomalies in internal telemetry data, rather than extending to third-party systems where potential risks may go unnoticed. Failing to consistently assess changes in third-party risks leaves organizations vulnerable to attack.

Many contracts require self-reporting, but this relies on the vendor detecting breaches, and there is no direct visibility into third-party systems like there is with internal systems. Understanding where data is stored, how it is handled and whether it is compromised is critical, but organizations often struggle to continuously monitor these systems. Government organizations, in particular, must manage their operations with limited budgets, making it difficult to scale with the growing number of vendors and service providers they need to oversee. Threat actors exploit this by targeting smaller vendors to access larger organizations.

Current strategies rely too heavily on initial vetting and lack sufficient post-contract monitoring. Continuous monitoring is no longer optional—it is essential. Organizations need to assess third-party risks not only at the start of a relationship but also as they evolve over time. This proactive approach is crucial in defending against the ever-changing threat landscape.

Proactively Identifying Risk

Proactively identifying and mitigating risks is essential for Government organizations, particularly as threat actors increasingly leverage publicly available data to plan their attacks. Transparency programs, such as USAspending.gov and city-level open checkbook platforms, while necessary for showing how public funds are used, can inadvertently provide a playbook for illicit actors to target vendors and suppliers involved in Government projects. Public data often becomes the first indicator of an impending breach, giving organizations a narrow window—sometimes just 24 hours—to understand threat actors’ operations and take proactive action.

To shift from reactive to proactive, organizations must enhance capabilities in three critical areas:

  1. Speed is vital for detecting threats in real time. Using AI to examine open source and threat intelligence data helps organizations avoid delays caused by time-consuming searches.
  2. The scope of monitoring must extend beyond traditional sources to deep web forums and dark web sites, evaluating text, images and indicators that mimic official branding.
  3. While real-time information is essential, excessive data can lead to alert fatigue. AI models that filter and tag relevant information enable security teams to focus on the most significant risks.
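
As a simplified illustration of the third point, the sketch below filters and tags a raw alert stream so that only items above a relevance threshold reach analysts. The keywords, scoring, and threshold are invented for the example and do not represent Dataminr's models.

```python
WATCHED_VENDORS = {"acme-logistics", "example-cloud"}
HIGH_RISK_TERMS = {"ransomware", "data leak", "credential dump"}

def triage(alerts: list) -> list:
    """Score raw alerts and keep only those worth an analyst's attention."""
    kept = []
    for alert in alerts:
        text = alert["text"].lower()
        score = 0
        if any(vendor in text for vendor in WATCHED_VENDORS):
            score += 2                      # mentions a monitored third party
        if any(term in text for term in HIGH_RISK_TERMS):
            score += 1                      # mentions a high-risk activity
        if score >= 2:                      # threshold tuned to limit alert fatigue
            kept.append({**alert, "score": score})
    return sorted(kept, key=lambda a: a["score"], reverse=True)

print(triage([
    {"text": "Ransomware crew claims breach of acme-logistics portal"},
    {"text": "General phishing statistics for Q3"},
]))
```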

Proactively addressing third-party risks requires organizations to stay prepared for immediate threats. By leveraging public data, they can strengthen defenses and act before vulnerabilities are exploited.

While self-reporting and AI tools are valuable, organizations must take ownership of their risk management by conducting their own due diligence. The ability to continuously monitor, identify and mitigate risks presents not just a challenge but an opportunity for growth and improvement. Ultimately, it is the organization’s reputation and security at stake, making proactive risk management key to staying ahead of today’s evolving threats.

To learn more about proactive third-party risk management strategies, watch Dataminr’s webinar “A New Paradigm for Managing Third-Party Risk with OSINT and AI.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Dataminr, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Securing Systems Through Segmentation and Zero Trust

Zero Trust is a cybersecurity strategy that recognizes trust as a vulnerability that malicious actors can exploit within system environments. Traditionally, systems operated by granting permissions, visibility and trust to a user once they gained access. Rather than merely minimizing trust and the opportunity for breaches, Zero Trust eliminates trusted packets, systems and users altogether.

Implementing Zero Trust’s Fundamental Design Concepts

While breaches are inevitable, agencies can equip themselves with a Zero Trust framework to prevent successful cyberattacks. Zero Trust encompasses identity, access permissions and microsegmentation, per the National Institute of Standards and Technology (NIST) architecture. All three enforcement points are required to complete the Zero Trust model. While security products are a component of a Government agency’s implementation of Zero Trust, it is a strategy that requires proper planning.

To successfully implement Zero Trust, agencies must understand its fundamental design concepts.

  • Focus on business outcomes: Determine key agency objectives and design strategies with those in mind.

  • Design security strategies from the “inside out”: Typically, networks are designed from the “outside in,” beginning with the software and moving on to the data. This can introduce vulnerabilities. By designing software accessibility around the data and assets that need to be protected, agencies can personalize security and minimize vulnerabilities.

  • Determine who or what needs to have access: Individuals should default with the least amount of privilege, having additional access granted on a need-to-know basis.

  • Inspect and log all traffic: Multiple factors should be considered to determine whether to allow traffic, not just authentication. Understanding what traffic is moving in and out of the network prevents breaches.

Fundamentally, Zero Trust is simple. Trust is a human concept, not a digital concept. Once agencies understand the basics of Zero Trust, they can decide which tactics they will use to help them deploy it across their network.

Breaking Up Breaches with Segmentation


In other security strategies, security is implemented at perimeters or endpoints. This places IT far from the data that needs monitoring. The average time between a breach and its discovery is 277 days, and breaches are usually discovered by independent third parties. With flat, unsegmented networks, once attackers gain access, they can take advantage of the entire system. Zero Trust alleviates this by transforming a system’s attack surface into a “protect surface.” Through proper segmentation, systems make the attack surface as small as possible, then position defenders adjacent to that smaller surface to protect it. This area then becomes a more manageable surface for agencies to monitor and protect, eliminating the time gap between breach and discovery.

Once the strategy method is chosen, agencies must decide which tactics and tools they will use to deploy Zero Trust. Here is a simple, five-step process for deploying Zero Trust.

1. Define the protect surface: It is important to start with knowing what data needs protection. A great first step is to follow the DAAS elements: protect data, assets, applications and services. Segmentation can help separate these four elements and place each on its own protect surface, giving IT employees a manageable surface to monitor.

2. Map transaction flows: With a robust protect surface, agencies can begin tailoring their Zero Trust environment. Understanding how the entire system functions together is imperative. With visibility into transaction flow mapping, agencies can build and architect the environment around the protect surface.

3. Architect a Zero Trust environment: Agencies should personalize their security to best fit their protect surface. That way, Zero Trust can work for the agency and its environment.

4. Create policy: It is important to ask questions when creating policy, as Zero Trust is a set of granular allowance rules. Who should be allowed access and via what application? When should access be enabled? Where is the data located on the protect surface? Why is the agency doing this? These questions help agencies map out their personalized cybersecurity strategy (see the sketch after this list).

5. Monitor and maintain the protect surface: By creating an anti-fragile system, which increases its capability after exposure to shocks and violations, agencies can adapt and strengthen from stressors.
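
The sketch below illustrates step 4 with a hypothetical, default-deny allow list: each rule names who may reach what, over which service port, and why, and anything without an explicit rule is blocked. The labels and ports are invented and do not reflect Illumio's policy syntax.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AllowRule:
    source_label: str  # who: the workload or user group making the request
    dest_label: str    # what: the protect surface being reached
    port: int          # via what: the application or service port
    note: str          # why: the business justification behind the rule

POLICY = [
    AllowRule("web-frontend", "payments-db", 5432, "checkout flow reads order data"),
    AllowRule("backup-agent", "payments-db", 5432, "nightly backup window"),
]

def is_allowed(source_label: str, dest_label: str, port: int) -> bool:
    """Default-deny: traffic passes only if an explicit allow rule matches."""
    return any(
        r.source_label == source_label and r.dest_label == dest_label and r.port == port
        for r in POLICY
    )

print(is_allowed("web-frontend", "payments-db", 5432))  # True: explicitly allowed
print(is_allowed("web-frontend", "hr-records", 443))    # False: no rule, so denied
```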

Segmentation is vital to the theory of Zero Trust. Through centralized management, agencies can utilize segmentation to their benefit, positioning IT adjacent to the specialized surface they protect. Zero Trust can be a learning curve. By implementing each protect surface individually, agencies can avoid becoming overwhelmed. Building from the foundation up allows agencies to control their networks. Additional technologies, such as artificial intelligence (AI) and machine learning (ML), help give defenders the advantage by enabling them to focus on protect surfaces. Through a personalized and carefully planned Zero Trust strategy, agencies can stop breaches and protect their network and data.

    Illumio & Zero Trust

    Zero Trust often incorporates threat-hunting solutions, to detect a problem and then try to block or remove it. But no solution will ever be 100% and it must be assumed that eventually a threat will slip through, undetected. Undetected threats will eventually move between workloads, further compromising the network. Illumio, a cloud computing security company that specializes in Zero Trust micro segmentation, can future-proof agencies against malware.

    While threat-hunting tools focus on the workload, Illumio focuses on the segment, which means that Illumio enforces the Protect Surface via the vectors used by any and all threats that try to breach it. Any complex AI-generated malware which will appear in the near future will also want to move across segments, and Illumio will protect the environment today against threats which will appear tomorrow.

    To learn more about Zero Trust and Segmentation, visit Illumio’s webinar, Segmentation is the Foundation of Zero Trust.

    FedRAMP Rev. 5 Baselines are Here, Now What?

    The FedRAMP Joint Authorization Board (JAB) has given the green light to update to FedRAMP Rev. 5. With this revision, FedRAMP baselines are now updated in line with the National Institute of Standards and Technology’s (NIST) SP 800-53 Rev. 5 Catalog of Security and Privacy Controls for Information Systems and Organizations and SP 800-53B Control Baselines for Information Systems and Organizations. This transformation brings opportunities and challenges for all stakeholders involved, including Cloud Service Providers (CSP), Third Party Assessment Organizations (3PAOs), and Federal Agencies. But worry not – with RegScale, we have your back! Let’s dive in and understand the impact and how to prepare for the coming changes.

    Decoding the Transition

    The transition has been in the works for a very long time, and FedRAMP has updated many of their controls to accurately reflect updates in technology since Rev. 4 was published in 2015. FedRAMP Rev. 5 brings with it significant updates to the security controls to meet emerging threats, including new families such as supply chain risk management, and places a greater emphasis on privacy controls. FedRAMP continues to strongly encourage package submission in NIST Open Security Controls Assessment Language (OSCAL) format to accelerate review and approval processes. To aid with a clear comprehension of the updates, FedRAMP has also released a Rev. 4 to Rev. 5 Baseline Comparison Summary. There are more than 250 controls with significant changes, including several whole new families of controls.

    In the coming weeks, FedRAMP plans to release a series of updated OSCAL baseline profiles, resolved profile catalogs, System Security Plan (SSP), Security Assessment Plan (SAP), Security Assessment Report (SAR), and Plans of Action and Milestones (POA&M) templates as well as supporting guides for each of these.

    What is OSCAL, You Ask?


    OSCAL is a set of standards for digitizing the authorization package through common machine-readable formats developed by NIST in conjunction with the FedRAMP PMO and industry. NIST defines it as a “set of hierarchical, formatted, XML-, JSON- and YAML-based formats that provide a standardized representation for different categories of security information pertaining to the publication, implementation, and assessment of security controls.” OSCAL makes it easier to validate the quality of your FedRAMP packages and expedites the review of those packages.
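
    As a rough illustration of what machine-readable authorization content looks like, the snippet below assembles a heavily simplified, OSCAL-like JSON fragment. It is schematic only: real OSCAL documents contain many more required fields and must validate against NIST's published schemas.

```python
import json
import uuid

# A heavily simplified, OSCAL-like fragment describing one implemented control.
# Real OSCAL documents require many more fields and must validate against NIST's schemas.
ssp_fragment = {
    "system-security-plan": {
        "uuid": str(uuid.uuid4()),
        "metadata": {"title": "Example Cloud Service Offering SSP", "oscal-version": "1.0.4"},
        "control-implementation": {
            "implemented-requirements": [
                {
                    "uuid": str(uuid.uuid4()),
                    "control-id": "ac-17",
                    "remarks": "Remote access is brokered through a hardened bastion service.",
                }
            ]
        },
    }
}

print(json.dumps(ssp_fragment, indent=2))
```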

    The Impact on CSPs

    FedRAMP has published the CSP Transition Plan, providing a comprehensive roadmap and tool for CSPs to identify the scope of the Rev. 5 controls that require testing and offering support for everyone based on their stage in the FedRAMP authorization process. Timelines for the full transition range from immediate to 12-18 months. You should find a technology partner to assist you regardless of your FedRAMP stage so that you can quickly and completely adapt from Rev. 4 to Rev. 5 baselines as well as update, review, and submit your packages in both human-readable (Word, Excel) and machine-readable (OSCAL) formats.

    If you are a CSP just getting started with your FedRAMP journey…

    As of May 30, 2023, CSPs in the "planning" stage of FedRAMP authorization must adopt the new Rev. 5 baseline in their controls documentation and testing and submit their packages in the updated FedRAMP templates as they become available. You are in the planning phase if you:

    • Are applying for FedRAMP or are in the readiness review process
    • Have not partnered with a federal agency prior to May 30, 2023
    • Have not contracted with a 3PAO for a Rev. 4 assessment prior to May 30, 2023
    • Have a JAB prioritization but have not begun an assessment after the release of the Rev. 5 baselines and templates

    If you are a CSP in the “Initiation” phase

    CSPs in the initiation phase will complete an Authority to Operate (ATO) using the Rev. 4 baseline and templates. By the latest of the issuance of your ATO or September 1, 2023, you will identify the delta between your Rev. 4 implementation and the Rev. 5 requirements, develop plans to address the differences, and document those plans in the SSP and POA&M. You are in the initiation phase if any of the following apply prior to May 30, 2023:

    • Prioritized for the JAB and are under contract with a 3PAO or in 3PAO assessment
    • Have been assessed and are working toward P-ATO package submission
    • Kicked off the JAB P-ATO review process
    • Partnered with a federal agency and are:
      • Currently under contract with a 3PAO
      • Undergoing a 3PAO assessment
      • Have been assessed and have submitted the package for Agency ATO review

    If you are a Fully Authorized CSP

    You are in the “continuous monitoring” phase if you are a CSP with a current FedRAMP authorization. By September 1, 2023, you need to identify the delta between your current Rev. 4 implementation and the Rev. 5 requirements, develop plans to address the differences and document those plans in the SSP and POA&M. By October 2, 2023, you should update plans based on any shared controls.

    If your latest assessment was completed between January 2 and July 3, 2023, you have a maximum of one year from the date of the last assessment to complete all implementation and testing activities for Rev. 5. If your annual assessment is scheduled between July 3 and December 15, 2023, you will need to complete all implementation and testing activities no later than your next scheduled annual assessment in 2023/2024.

    A Complete Technology and Transition Partner

    The transition to FedRAMP Rev. 5 is not just about meeting the new requirements but doing so in the most efficient and seamless manner. You should focus on your core business while technology like RegScale handles the intricacies of the compliance transition.

    Beyond compliance documentation, RegScale serves as a comprehensive FedRAMP compliance technology and transition partner. Our platform assists with mapping your security controls against FedRAMP and NIST SP 800-53 baselines for Rev. 4 and Rev. 5, supports gap analysis, provides remediation support, and enables continuous monitoring and improvement. The platform currently includes FedRAMP support and tools to develop human-readable and OSCAL-formatted content for Catalogs, Profiles, SSPs, Components, SAPs, SARs, POAMs and Asset Inventory. To help eliminate the friction and confusion of where to begin with OSCAL, RegScale provides an intuitive Graphical User Interface (GUI) to build artifacts using our wizards and then easily export them as valid OSCAL. By automating the creation of audit-ready documentation and allowing direct submission to the FedRAMP Project Management Office (PMO) through OSCAL and/or Word/Excel templates, RegScale provides a seamless transition experience to Rev. 5, reducing complexities and saving you valuable time and resources.

    In closing, it is crucial for all CSPs and stakeholders to review the new mandates and the CSP Transition Plan and begin planning to address the updated templates. Let RegScale help make the shift to FedRAMP Rev. 5 a streamlined, efficient, and effective process with minimum costs and business disruptions.

    This post originally appeared on Regscale.com and is re-published with permission.

    View our webinar to learn more about the low-cost approaches for handling the transition to Rev 5.

    How Palantir Meets IL6 Security Requirements with Apollo

    Building secure software requires robust delivery and management processes, with the ability to quickly detect and fix issues, discover new vulnerabilities, and deploy patches. This is especially difficult when services are run in restricted, air-gapped environments or remote locations, and was the main reason we built Palantir Apollo.

    With Apollo, we are able to patch, update, or make changes to a service in 3.5 minutes on average and have significantly reduced the time required to remediate production issues, from hours to under 5 minutes.

    For 20 years, Palantir has worked alongside partners in the defense and intelligence spaces. We have encoded our learnings for managing software in national security contexts. In October 2022, Palantir received an Impact Level 6 (IL6) provisional authorization (PA) from the Defense Information Systems Agency (DISA) for our federal cloud service offering.

    IL6 accreditation is a powerful endorsement, recognizing that Palantir has met DISA’s rigorous security and compliance standards and making it easier for U.S. Government entities to use Palantir products for some of their most sensitive work.

    The road to IL6 accreditation can be challenging and costly. In this blog post, we share how we designed a consistent, cross-network deployment model using Palantir Apollo’s built-in features and controls in order to satisfy the requirements for operating in IL6 environments.

    What are FedRAMP, IL5, and IL6?

    With the rise of cloud computing in the government, DISA defined the operating standards for software providers seeking to offer their services in government cloud environments. These standards are meant to ensure that providers demonstrate best practices when securing the sensitive work happening in their products.

    DISA’s standards are based on a framework that measures risk in a provider’s holistic cloud offering. Providers must demonstrate both their products and their operating strategy are deployed with safety controls aligned to various levels of data sensitivity. In general, more controls mean less risk in a provider’s offering, making it eligible to handle data at higher sensitivity levels.


    Impact Levels (ILs) are defined in DISA’s Cloud Computing SRG as Department of Defense (DoD)-developed categories for leveraging cloud computing based on the “potential impact should the confidentiality or the integrity of the information be compromised.” There are currently four defined ILs (2, 4, 5, and 6), with IL6 being the highest and the only IL covering potentially classified data that “could be expected to have a serious adverse effect on organizational operations” (the SRG is available for download as a .zip from here).

    Defining these standards allows DISA to enable a “Do Once, Use Many” approach to software accreditation that was pioneered with the FedRAMP program. For commercial providers, IL6 authorization means government agencies can fast track use of their services in place of having to run lengthy and bespoke audit and accreditation processes. The DoD maintains a Cloud Service Catalog that lists offerings that have already been granted PAs, making it easy for potential user groups to pick vetted products.

    NIST and the Risk Management Framework

    The DoD bases its security evaluations on the National Institute of Standards and Technology’s (NIST) Risk Management Framework (RMF), which outlines a generic process used widely across the U.S. Government to evaluate IT systems.

    The RMF provides guidance for identifying which security controls exist in a system so that the RMF user can assess the system and determine if it meets the users’ needs, like the set of requirements DISA established for IL6.

    Controls are descriptive and focus on whole system characteristics, including those of the organization that created and operates the system. For example, the Remote Access (AC-17) control is defined as:

    The organization:

    • Establishes and documents usage restrictions, configuration/connection requirements, and implementation guidance for each type of remote access allowed;
    • Authorizes remote access to the information system prior to allowing such connections.

    Because of how controls are defined, a primary aspect of the IL6 authorization process is demonstrating how a system behaves to match control descriptions.

    Demonstrating NIST Controls with Apollo

    Apollo was designed with many of the NIST controls in mind, which made it easier for us to assemble and demonstrate an IL6-eligible offering using Apollo’s out-of-the-box features.

    Below we share how Apollo allows us to address six of the twenty NIST Control Families (categories of risk management controls) that are major themes in the hundreds of controls adopted as IL6 requirements.

    System and Services Acquisition (SA) and Supply Chain Risk Management (SR)

    The System and Services Acquisition (SA) family and related Supply Chain Risk Management (SR) family (created in Revision 5 of the RMF guidelines) cover the controls and processes that verify the integrity of the components of a system. These measures ensure that component parts have been vetted and evaluated, and that the system has safeguards in place as it inevitably evolves, including if a new component is added or a version is upgraded.

    In a software context, modern applications are now composed of hundreds of individual software libraries, many of which come from the open source community. Securing a system’s software supply chain requires knowing when new vulnerabilities are found in code that’s running in the system, which happens nearly every day.

    Apollo helped us address SA and SR controls because it has container vulnerability scanning built directly into it.

    Figure 1: The security scan status appears for each Release on the Product page for an open-source distribution of Redis

    When a new Product Release becomes available, Apollo automatically scans the Release to see if it’s subject to any of the vulnerabilities in public security catalogs, like MITRE’s Common Vulnerabilities and Exposures (CVE) List.

    If Apollo finds that a Release has known vulnerabilities, it alerts the team at Palantir responsible for developing the Product in order to make sure a team member updates the code to patch the issue. Additionally, our information security teams use vulnerability severity to define criteria for what can be deployed while still keeping our system within IL6 requirements.

    Figure 2: An Apollo scan of an open-source distribution of Redis shows active CVEs

    Scanning for these weak spots in our system is now an automatic part of Apollo and a crucial element in making sure our IL6 services remain secure. Without it, mapping newly discovered security findings to where they’re used in a software platform is an arduous, manual process that’s intractable as the complexity of a platform grows, and would make it difficult or impossible to accurately estimate the security of a system’s components.
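
    The gating logic described above can be sketched in a few lines: compare a release's component versions against a vulnerability feed and block deployment when any finding exceeds a severity threshold. The feed entries, versions, and threshold are invented; Apollo's actual scanner and policies are far more sophisticated.

```python
# Hypothetical vulnerability feed entries: (component, affected_version, cvss_score).
VULN_FEED = [
    ("redis", "6.2.5", 9.8),
    ("openssl", "1.1.1k", 7.5),
]

MAX_ALLOWED_CVSS = 7.0  # illustrative deployment-policy threshold

def release_findings(components: dict) -> list:
    """Return (component, score) for every known vulnerability in this release."""
    return [
        (name, score)
        for name, version, score in VULN_FEED
        if components.get(name) == version
    ]

release = {"redis": "6.2.5", "openssl": "3.0.2"}
findings = release_findings(release)
blocked = any(score > MAX_ALLOWED_CVSS for _, score in findings)
print(findings, "-> deployment blocked" if blocked else "-> eligible to deploy")
```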

    Configuration Management (CM)

    The Configuration Management (CM) group covers the safety controls that exist in the system for validating and applying changes to production environments.

    CM controls include the existence of review and approval steps when changing configuration, as well as the ability within the system for administrators to assign approval authority to different users based on what kind of change is proposed.

    Apollo maintains a YML-based configuration file for each individual microservice within its configuration management service. Any proposed configuration change creates a Change Request (CR), which then has to be reviewed by the owner of the product or environment.

    Changes within our IL6 environments are sent to Palantir’s centralized team of operations personnel, Baseline, which verifies that the Change won’t cause disruptions and approves the new configuration to be applied by Apollo. In development and testing environments, Product teams are responsible for approving changes. Because each service has its own configuration, it’s possible to fine-tune an approval flow for whatever’s most appropriate for an individual product or environment.

    Figure 3: An example Change Request to remove a Product from an Environment

    A history of changes is saved and made available for each service, where you can see who approved a CR and when, which also addresses Audit and Accountability (AU) controls.

    When a change is made, Apollo first validates it and then applies it during configured maintenance windows, which helps to avoid the human error that’s common in managing service configuration, like introducing an untested typo that interrupts production services. This added stability has made our systems easier to manage and, consequently, easier to keep secure.
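
    A minimal sketch of the review-and-approve flow described above, assuming a simple mapping of environments to approving teams. The environment names, team names, and fields are hypothetical and are not Apollo's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical approval routing: which team must approve changes in each environment.
APPROVERS = {"il6-prod": "baseline-ops", "dev": "product-team"}

@dataclass
class ChangeRequest:
    environment: str
    service: str
    proposed_config: dict
    approvals: list = field(default_factory=list)

    def approve(self, team: str) -> None:
        if team != APPROVERS[self.environment]:
            raise PermissionError(f"{team} cannot approve changes in {self.environment}")
        self.approvals.append(team)

    def can_apply(self) -> bool:
        # Configuration is applied only after the owning team has signed off.
        return APPROVERS[self.environment] in self.approvals

cr = ChangeRequest("il6-prod", "search-service", {"replicas": 5})
cr.approve("baseline-ops")
print(cr.can_apply())  # True: approved by the team that owns the environment
```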

    Incident Response (IR)

    The Incident Response (IR) control family pertains to how effectively an organization can respond to incidents in their software, including when its system comes under attack from bad actors.

    A crucial aspect to meeting IR goals is being able to quickly patch a system, quarantine only the affected parts of the system, and restore services as quickly as is safely possible.

    A major feature that Apollo brings to our response process is the ability to quickly ship code updates across network lines. If a product owner needs to patch a service, they simply need to make a code change. From there, a release is generated, and Apollo prepares an export for IL6 that is applied automatically once it’s transferred by our Network Operations Center (NOC) team according to IL6 security protocols. Apollo performs the upgrade without intervention, which removes expensive coordination steps between the product owner and the NOC.

    Figure 4: How Apollo works across network lines to an air-gapped deployment

    Additionally, Apollo allows us to save Templates of our Environments that contain configuration that is separate from the infrastructure itself. This has made it easy for us to take a “cattle, not pets” approach to underlying infrastructure. With secrets and other configuration decoupled from the Kubernetes cluster or VMs that run the services, we can easily reapply them onto new infrastructure should an incident ever pop up, making it simple to isolate and replace nodes of a service.

    Figure 5: Templates make it easy to manage Environments that all use the same baseline

    Contingency Planning (CP)

    Contingency Planning (CP) controls demonstrate preparedness should service instability arise that would otherwise interrupt services. This includes the human component of training personnel to respond appropriately, as well as automatic controls that kick in when problems are detected.

    We address the CP family by using Apollo’s in-platform monitoring and alerting, which allows product or environment owners to define alerting thresholds based on open-standard metric types, including Prometheus’s metrics format.

    Figure 6: Monitors configured for all of the Products in an Environment make it easy to track the health of software components

    Apollo monitors our IL6 services and routes alerts to members of our NOC team through an embedded alert inbox. Alerts are automatically linked to relevant service logging and any associated Apollo activity, which has drastically sped up the remediation process when services or infrastructure experience unexpected issues. The NOC is able to address alerts by following runbooks prepared for and linked to within alerts. When needed, alerts are triaged to teams that own the product for more input.

    Because we’ve standardized our monitors in Apollo, we’ve been able to create straightforward protocols and processes for responding to incidents, which means we are able to action contingency plans quicker and ensure our systems remain secure.
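
    A toy version of that threshold-based monitoring: parse a Prometheus-style exposition line and raise an alert when the value crosses a configured threshold. The metric names and thresholds are illustrative, not Apollo's monitor configuration.

```python
def parse_metric(line: str):
    """Parse a minimal Prometheus-style exposition line such as 'queue_depth 420'."""
    name, value = line.rsplit(" ", 1)
    return name, float(value)

# Illustrative monitor definitions: metric name -> alert threshold.
MONITORS = {"request_error_ratio": 0.05, "queue_depth": 1000}

def evaluate(lines: list) -> list:
    """Return an alert message for every metric that crosses its threshold."""
    alerts = []
    for line in lines:
        name, value = parse_metric(line)
        threshold = MONITORS.get(name)
        if threshold is not None and value > threshold:
            alerts.append(f"ALERT {name}={value} exceeds {threshold}")
    return alerts

print(evaluate(["request_error_ratio 0.12", "queue_depth 250"]))
```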

    Access Control (AC)

    The Access Control (AC) control family describes the measures in a system for managing accounts and ensuring accounts are only given the appropriate levels of permissions to perform actions in the system.

    Robustly addressing AC controls includes having a flexible system where individual actions can be granted based on what a user needs to be able to do within a specific context.

    In Apollo, every action and API has an associated role, which can be assigned to individual users or Apollo Teams, which are managed within Apollo and can be mirrored from an SSO provider.

    Roles necessary to operating environments (e.g. approving the installation of a new component) are granted to our Baseline team, and are restricted as needed to a smaller group of environment owners based on an environment’s compliance requirements. Team management is reserved for administrators, and roles that include product lifecycle actions (e.g. recalling a product release) are given to development teams.

    Figure 7: Products and Environments have configurable ownership that ensures the right team is monitoring their resources

    Having a single system to divide responsibilities by functional areas means that our access control system is consistent and easy to understand. Further, being able to granularly assign roles to perform different actions makes it possible to meet the principle of least-privilege system access that underpins AC controls.

    Conclusion

    The bar to operate with IL6 information is rightfully a high one. We know obtaining IL6 authorization can feel like a long process — however, we believe this should not prevent the best technology from being available to the U.S. Government. It’s with that belief that we built Apollo, which became the foundation for how we deploy to all of our highly secure and regulated environments, including FedRAMP, IL5, and IL6.

    Additionally, we recently started a new program, FedStart, where we partner with organizations just starting their accreditation journey to bring their technology to these environments. If you’re interested in working together, reach out to us at fedstart@palantir.com for more information.

    Get in touch if you want to learn more about how Apollo can help you deploy to any kind of air-gapped environment, and check out the Apollo Content Hub for white papers and other case studies.

    This post originally appeared on Palantir.com and is re-published with permission.

    Download our Resource, “Solution Overview: Palantir—Apollo” to learn more about how Palantir Technologies can support your organization.

    Ransomware Security Strategies

    One of the first challenges in combatting ransomware is recognizing the imminence of an attack and the impact it could have on one's own organization. In a survey by ActualTech Media and Ransomware.org, 60% of companies reported spending zero to four hours per month on ransomware preparedness.[1] Getting collective buy-in from administrators can be difficult, since the cybersecurity measures put into place cannot show their full value until a ransomware attack is attempted; however, when compared to the number and scale of attacks occurring, greater attention to cybersecurity is imperative. The NIST Cybersecurity Framework (CSF) provides a guiding set of principles that inform strategies for mitigating ransomware risk. Addressing ransomware starts with identifying a security program, followed by protection, prevention, detection, recovery and then security improvements. Ideally, companies would follow this CSF outline, but in reality the path looks different for most organizations. Due to feasibility and critical priority, many companies first establish detection and recovery methods, followed by protection, prevention and security improvement.

    RANSOMWARE DETECTION AND RECOVERY

When ransomware hits an organization, the biggest immediate concern is finding the problem and returning to business as usual. Many resources exist to assist with this, including asset management tools that automatically inventory every device on the network and monitor for potential ways malware can get in. Implementing edge detection allows companies to be alerted early if the network has been compromised and to quickly identify which accounts and devices require isolation and additional measures to prevent further spread to other servers, accounts and storage units. Anti-virus programs are also helpful for monitoring endpoints for indicators of compromise or malware. By achieving early detection, companies can contain the malware and reduce data loss.[2] It also helps prevent extended downtime, which is very costly for operations and business reputation. Apart from the actual ransom, the downtime alone caused by cyberattacks in 2020 cost American businesses $20.9 billion.[1]
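One simple way to approximate this kind of early detection, sketched below with hypothetical paths and thresholds, is to flag an unusual burst of file modifications on a monitored share, a common signature of in-progress encryption. A production tool would use OS-level file events rather than polling:

```python
# Minimal sketch: flag a possible ransomware event when many files in a
# monitored directory change within a short window. Paths and thresholds
# are illustrative only.
import os
import time

WATCH_DIR = "/srv/shared"      # hypothetical file share
WINDOW_SECONDS = 60
CHANGE_THRESHOLD = 500         # tune to the share's normal activity

def snapshot(path: str) -> dict[str, float]:
    """Map each file to its last-modified time."""
    result = {}
    for root, _, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                result[full] = os.path.getmtime(full)
            except OSError:
                continue
    return result

before = snapshot(WATCH_DIR)
time.sleep(WINDOW_SECONDS)
after = snapshot(WATCH_DIR)

changed = [f for f, mtime in after.items() if before.get(f) != mtime]
if len(changed) > CHANGE_THRESHOLD:
    print(f"ALERT: {len(changed)} files changed in {WINDOW_SECONDS}s - investigate possible ransomware")
```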

Once malware has been detected, a company’s recovery plan and preparation are put to the test. IT specialists and company administrators need an emergency plan so there are straightforward steps to recovery. Backups not only need to be created and stored off-site, but also updated regularly and tested to ensure they are a solid base for a system restoration. With most traditional backup systems, data cannot be recovered fast enough to neutralize the ransomware’s impact on operations. Instead, a new strategy must be adopted: rather than 200,000 files taking eight or more hours to restore via traditional backups, millions of files must be recoverable in minutes. Granular, immutable, verifiable snapshots are required to successfully recover all of an organization’s data.[2]
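To make "verifiable" concrete, the following sketch records a cryptographic hash for every file at snapshot time and re-checks those hashes before a restore. The function names are illustrative; real backup platforms handle this internally:

```python
# Hedged sketch: verify backed-up files against hashes recorded at snapshot time,
# so a restore point can be trusted before it is used for recovery.
import hashlib
import json
import os

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_manifest(snapshot_dir: str, manifest_path: str) -> None:
    """Write a manifest of relative file path -> hash for a snapshot directory."""
    manifest = {}
    for root, _, files in os.walk(snapshot_dir):
        for name in files:
            full = os.path.join(root, name)
            manifest[os.path.relpath(full, snapshot_dir)] = file_sha256(full)
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)

def verify_snapshot(snapshot_dir: str, manifest_path: str) -> list[str]:
    """Return files that are missing or whose hash no longer matches the manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    bad = []
    for rel, digest in manifest.items():
        full = os.path.join(snapshot_dir, rel)
        if not os.path.exists(full) or file_sha256(full) != digest:
            bad.append(rel)
    return bad
```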

Infographic: Carahsoft Ransomware Cybersecurity Blog Series (2023)

The Sophos “State of Ransomware” report indicated that 77% of healthcare organizations that did not experience a ransomware attack in 2021 attributed it to efforts such as backups and cyber insurance, which help with remediation but not prevention. This exposed an ongoing misunderstanding within the industry about cybersecurity methods.[3] Obtaining cyber insurance does not prevent future attacks; instituting proper security strategies, however, does decrease susceptibility to ransomware. Recovery tools and insurance provide support during post-breach response, but organizations should also strive to prevent the attack in the first place, which requires implementing protection and prevention. The Government Accountability Office (GAO) considers cyber insurance a valuable resource but notes that it is increasingly hard to acquire due to the massive volume of cyberattacks, a higher bar of entry and more requirements to gain coverage and receive payouts. This leaves organizations that do not have sufficient security or insurance to face the recovery process and expensive remediation costs alone.[4]

    RANSOMWARE PROTECTION AND PREVENTION

While most organizations invest in attack detection and recovery strategies, the protection aspect of the NIST CSF is equally important and an essential element in reducing the amount of recovery needed. Protection and prevention of ransomware attacks begin with establishing system routines and measures that make it more difficult for hackers to infiltrate. By implementing Zero Trust user principles such as Multi-Factor Authentication (MFA), institutions and agencies can protect themselves by verifying the identity of employees. Poor password hygiene is one of the leading gateways to malware infiltration, making thorough employee training and password management software a baseline for reducing risk. The average user has access to over 20 million corporate files, making each employee a critical part of keeping the network safe and a significant liability if they are not vigilant and following best practices.[2] Segmenting the network to provide user-specific access to data and system resources also creates safety barriers, so that in the event of an attack the entire network is not automatically compromised. Around 80% of critical infrastructure companies without Zero Trust policies experienced a $1.17 million increase in breach costs, bringing the average to $5.4 million per attack in 2022.[5]

Comprehensive Zero Trust authentication and data access controls that prevent any single account from reaching the entire company’s files are a first step in this process. File indexing, which classifies the sensitivity of the information a file contains, allows companies to better allocate resources and prioritize protection of the most important or confidential files.[2] Automating these processes eases IT teams’ responsibilities and reduces the chance of error. Incorporating artificial intelligence (AI) and machine learning (ML) also expedites the identification of confidential information through metadata tags and enables advanced detection of suspicious network and user activity, thereby minimizing inefficiencies.[6]
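As a simplified illustration of indexing files by sensitivity, a small rules-based tagger might look like the sketch below. The patterns and labels are hypothetical; commercial classifiers rely on ML models and far richer signals:

```python
# Illustrative sketch: tag documents by sensitivity using simple pattern matching,
# so protection effort can be prioritized for the most confidential files.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> str:
    """Return a coarse sensitivity label for a document's text."""
    hits = {name for name, pattern in PATTERNS.items() if pattern.search(text)}
    if {"ssn", "credit_card"} & hits:
        return "confidential"
    if "email" in hits:
        return "internal"
    return "public"

print(classify("Contact jane.doe@example.gov about invoice 42"))   # internal
print(classify("Employee SSN: 123-45-6789"))                       # confidential
```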

    Organizations must rigorously search for security gaps and proactively work to close them. Some other measures to incorporate include:

    • Filtering for phishing emails and providing awareness training to minimize the possibility of a user accidentally clicking a malicious link
    • Utilizing firewalls to block unusual network traffic and segment the network to impede malware system communications
    • Monitoring software licenses to ensure they are updated and systems are adequately patched
    • Removing expired and extraneous user credentials and unused legacy technology
• Tracking vulnerabilities on Internet of Things (IoT) and operational technology (OT) devices, as well as employees’ personal devices used for work (BYOD), throughout the entire connection lifecycle
    • Implementing Zero Trust cloud security with container scanning and proxies like a Cloud Access Security Broker (CASB) and Zero Trust Network Access (ZTNA)

    RANSOMWARE SECURITY IMPROVEMENT

    Following an attack, companies have the opportunity to grow and improve from the situation as well as share resources with other public and private sector companies to strengthen defenses. Incident reporting is a key strategy to prevent future ransomware incidents and a top priority for the Cybersecurity and Infrastructure Security Agency (CISA). Agencies and organizations must support each other to defend against these cyber threats that affect every industry.[7]

To support this greater focus on information sharing, the Cyber Incident Reporting for Critical Infrastructure Act of 2022 took effect in March 2022, requiring a more stringent timeline for disclosing cybersecurity attacks and ransomware payments to the government. CISA also now has the authority to subpoena critical infrastructure organizations that do not report cybersecurity incidents within 72 hours of a cyberattack or ransom payments within 24 hours.[8]

This threat information sharing requirement, along with other recent rules on reporting attack incidents, strengthens organizations’ security posture and reduces the success rate of cyberattacks. Through these joint efforts and public-private partnerships, companies can recover faster, resume normal operations and support other businesses in the defense of their industry and the nation.[9]

To assist with incorporating these cybersecurity best practices, Congress passed the Infrastructure Investment and Jobs Act (Public Law 117–58), which offers $2 billion to “modernize and secure federal, state, and local IT and networks; protect critical infrastructure and utilities; and support public or private entities as they respond to and recover from significant cyberattacks and breaches.”[10]

    RANSOMWARE RISK MITIGATION

Tech modernization, while crucial to agencies’ and organizations’ survival and growth, also presents unique challenges in protecting those technologies.[11] In the journey to secure legacy and updated systems alike, companies must take the time to honestly evaluate their cybersecurity standing across the ransomware cycle and ensure their readiness to handle an attack. Utilizing NIST CSF security strategies and other resources helps organizations mitigate risk and empowers other companies to learn and protect their systems. By implementing best practices and technologies to address cyber hacks and data breaches, companies show that they value both their customers and their own bottom line. Proactive cybersecurity measures are key for all companies to stem the tide of ransomware attacks and protect the continued growth of their organizations.

     

    Learn about the current state of ransomware and its impact across sectors in our Ransomware Series. Visit our website to learn how Carahsoft and its partners are providing solutions to assist in the fight against ransomware.

     

    Resources:

    [1] “Everything You Need to Know About Ransomware,” Ransomware.org, https://ransomware.org/

    [2] “Protect, Detect & Recover: The Three Prongs of a Ransomware Defense Strategy for Your Enterprise Files,” Nasuni, https://media.erepublic.com/document/Whitepaper-_A_Three_Prong_Ransomware_Strategy_-_Nasuni.pdf

    [3] “The State of Ransomware in Healthcare 2022,” Sophos, https://news.sophos.com/en-us/2022/06/01/the-state-of-ransomware-in-healthcare-2022/

    [4] “Healthcare data breach costs reach record high at $10M per attack: IBM report,” Fierce Healthcare, https://www.fiercehealthcare.com/health-tech/healthcare-data-breach-costs-reach-record-high-10m-attack-ibm-report

    [5] “Cyber Attacks Against Critical Infrastructure Quietly Increase,” Government Technology, https://www.govtech.com/blogs/lohrmann-on-cybersecurity/cyber-attacks-against-critical-infrastructure-quietly-increase

    [6] “Four Best Practices for Protecting Data Wherever it Exists,” Dell Technologies and Carahsoft, https://www.carahsoft.com/2nd-page/dell-4-best-practices-federal-data-security-protection-report-2022#page=4

    [7] “Ransomware Hackers Will Still Target Smaller Critical Infrastructure, CISA Director Warns,” Nextgov, https://www.nextgov.com/cybersecurity/2022/07/ransomware-hackers-will-still-target-smaller-critical-infrastructure-cisa-director-warns/374953/

    [8] “DHS Convenes Regulators, Law Enforcement Agencies on Cyber Incident Reporting,” Nextgov, https://www.nextgov.com/cybersecurity/2022/07/dhs-convenes-regulators-law-enforcement-agencies-cyber-incident-reporting/374968/

    [9] “Ransomware Attacks on Hospitals Have Changed,” AHA Center for Health Innovation, https://www.aha.org/center/cybersecurity-and-risk-advisory-services/ransomware-attacks-hospitals-have-changed

    [10] “FACT SHEET: Top 10 Programs in the Bipartisan Infrastructure Investment and Jobs Act That You May Not Have Heard About.” The White House, https://www.whitehouse.gov/briefing-room/statements-releases/2021/08/03/fact-sheet-top-10-programs-in-the-bipartisan-infrastructure-investment-and-jobs-act-that-you-may-not-have-heard-about/

    [11] “Global Data Protection Index 2021,” Dell Technologies, https://www.dell.com/en-us/dt/data-protection/gdpi/index.htm#pdf-overlay=//www.delltechnologies.com/asset/en-us/products/data-protection/industry-market/global-data-protection-index-key-findings.pdf

    Infographic Resources:

    “Ransomware and Energy and Utilities,” AT&T Cybersecurity, https://cybersecurity.att.com/blogs/security-essentials/ransomware-and-energy-and-utilities

    Understanding the Philosophy and Complementary Nature of DFARS and CMMC 2.0

    With each passing year, new cybersecurity challenges arise with growing impact and complexity. The federal government and military in particular must be extremely attentive to combat these threats. In response to increased hacker attacks, the Department of Defense (DoD) has formulated several information management and cybersecurity standards, such as DFARS and CMMC, to reduce the risk of system compromises. By complying with these guidelines, government contractors partner with the DoD to mitigate security breaches.

    WHAT ARE THE DFARS & CMMC FRAMEWORKS?

The Defense Federal Acquisition Regulation Supplement (DFARS) expands on the standards that companies must follow to begin or renew a contract with the DoD. These regulations in Clause 252.204-7012 (7012), “Safeguarding Covered Defense Information and Cyber Incident Reporting,” revolve around protecting Controlled Unclassified Information (CUI) from falling into the wrong hands through unauthorized access or disclosure.[1] DFARS was initiated in 2016 as a set of requirements for contractors within the Defense Industrial Base (DIB)[2] to increase their data education, physical security, cybersecurity measures, cyber-attack reporting and alerts to the DoD. The requirements in Clause 7012 allow attack patterns to be assessed and more adequately countered through refined regulations.[3] By enhancing security in these areas, the DoD strives to protect the national economy and sensitive data by reducing vulnerabilities and monitoring threats.

To achieve DFARS Clause 252.204-7012 compliance, companies must develop security standards in 14 areas by conducting a gap analysis to identify the company’s current standing and protocols, establishing a remediation plan to align with DFARS standards, continuously tracking suspicious activity and reporting security breaches. Finally, contractors must complete a National Institute of Standards and Technology (NIST) SP 800-171 DoD Basic Assessment and document their compliance in the Supplier Performance Risk System (SPRS).[3]
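For context, the DoD’s SP 800-171 Assessment Methodology scores the Basic Assessment by starting from a perfect score of 110 and subtracting a weighted value (1, 3, or 5 points) for each requirement that is not yet implemented. The sketch below illustrates that arithmetic with made-up assessment results; the requirement IDs and weights shown are placeholders, not the official table:

```python
# Rough illustration of the NIST SP 800-171 DoD Basic Assessment arithmetic:
# start at 110 and subtract each unimplemented requirement's weight (1, 3, or 5).
# Requirement IDs and weights below are placeholders for illustration only.

PERFECT_SCORE = 110

# Hypothetical assessment results: requirement ID -> (weight, implemented?)
results = {
    "3.1.1": (5, True),
    "3.5.3": (5, False),    # e.g., multifactor authentication not yet deployed
    "3.13.11": (3, False),
    "3.12.4": (1, True),
}

score = PERFECT_SCORE - sum(weight for weight, implemented in results.values() if not implemented)
print(f"Basic Assessment score to report in SPRS: {score}")   # 110 - 5 - 3 = 102
```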

    In 2020, the DoD launched the Cybersecurity Maturity Model Certification (CMMC) and initially announced it as a replacement to DFARS. The DoD later clarified that CMMC was an additional but complementary framework.[4] Any prime or subcontractor handling national security information and seeking to work with the DoD must follow both DFARS Clause 7012 cybersecurity standards and the appropriate level of CMMC to match the degree of their information sensitivity.

    RECENT UPDATES TO CMMC

    Because of the initial confusion surrounding CMMC, in November 2021, the DoD released CMMC 2.0 to clarify the original specifications. This update reduced the original five maturity levels to three and made compliance more feasible for small businesses by not requiring third-party assessments for the first tier. CMMC 2.0 also provides additional flexibility in the compliance timeline.[5]

    In the new version, the tiers build on each other and include:

• Level 1 – Foundational: requires the fulfillment of 17 best practices, verified through annual self-assessment
• Level 2 – Advanced: aligns with the 110 security requirements of NIST SP 800-171. Some contracts allow verification through annual self-assessment, while others require triennial third-party assessment (determined per contract)
• Level 3 – Expert: builds on Level 2 with additional requirements drawn from NIST SP 800-172, verified through triennial government-led assessment

These distinct levels allow companies to comply with the tier that matches their involvement with CUI. A company’s level also dictates which contracts it is permitted to bid on. Companies that already comply with DFARS have a head start in achieving CMMC 2.0 compliance.[2]

    The NIST SP 800-172 document describes three goals for these frameworks to prevent malicious activity from compromising CUI:

    • Develop infiltration-resistant systems
    • Install damage-limiting procedures
    • Promote cyber resiliency and attack survivability[6]

    With this new release, the DoD aims to streamline the process and lower the barrier of entry to save contractors’ resources. Allowing companies to create Plans of Action & Milestones (POA&Ms) as a placeholder enables them to work towards compliance while still receiving contract awards.[5]

CMMC 2.0 is expected to be officially published in March 2023, followed by a 60-day feedback period. After the targeted finalization date of May 2023, contracts will begin requiring bidders to attain a specific maturity level before applying. While the CMMC 2.0 program will have an extended rollout, companies should start their journey towards compliance now. The Cyber Accreditation Body (Cyber AB) estimates 8-12 weeks for the average maturity level assessment to process.[2] Compliance costs depend on the gap between a company’s existing cybersecurity posture and its desired CMMC level. In some cases, the DoD notes, contracts can cover contractor cybersecurity upgrades under “allowable costs.”[7]

    DIFFERENCES BETWEEN DFARS & CMMC

Both the DFARS and CMMC frameworks center on data protection through security controls; however, they differ in how compliance is assessed. With DFARS Clause 252.204-7012, organizations monitor their own systems without external inspection or verification of proper data generation, storage and transmission. CMMC 2.0 combines self-assessment with assessments by CMMC Third-Party Assessment Organizations (C3PAOs), which determine an organization’s eligibility for a specific maturity level.[8]

Another difference between DFARS and CMMC is the tiered structure of CMMC. DFARS Clause 7012 contains only one tier, which lays out ground rules for handling CUI and increasing security in the DIB. CMMC, by contrast, institutes maturity levels to classify the extent of an organization’s cybersecurity protections. The first CMMC 2.0 maturity level contains fewer requirements than NIST SP 800-171, which is the basis for DFARS Clause 7012. Level 2 is identical to NIST SP 800-171 and nearly the same as DFARS Clause 7012, with the exception of additional assessments, while the final CMMC level requires more guardrails.[2]

Although similar in some respects, DFARS Clause 252.204-7012 and CMMC are not interchangeable standards. Qualifying for one does not automatically confer qualification or compliance with the other.

    IMPORTANCE OF DFARS & CMMC

Implementing DFARS Clause 252.204-7012 and CMMC guidelines not only meets DoD contracting requirements; it also helps protect national security and the economy, and builds a solid foundation of data and cyber health that establishes an organization’s credibility and furthers its reputation in the field.

    These standards have a large impact on the DoD contracting industry with the integration of DFARS Clause 7012 and CMMC affecting an estimated 100,000 companies.[9] In FY2020, the DoD spent over $665 billion on contracts.[10] According to the US Council of Economic Advisors, the national economy could lose over $1 trillion by 2026 because of cyber-attacks. By following regulations such as DFARS Clause 7012 and CMMC, contractors can do their part to fortify their data security and strengthen national security.[3]

Instituting adequate cyber hygiene, such as server health checks, multi-factor authentication and zero trust user profiles, not only enables companies to meet DoD mandates but also safeguards organizations from increasingly frequent attacks.

While CMMC 2.0 is expected to have a five-year phase-in process and is not an immediate requirement across the board, it is imperative that contractors begin investigating their compliance status and start the preparatory work to meet the requirements of their desired maturity level. By planning ahead and starting the process now, organizations can adequately budget for compliance and gain a proactive advantage by being ready before all contracts officially require CMMC compliance.

    Failure to comply can result in major consequences for companies including fines, a halt on current contracts and a future ban on working with the DoD. An organization’s disqualification from contracts would also cause revenue loss and harm their reputation in the field.[3] A lack of cybersecurity information management standards could also expose companies to serious data breaches and repair costs.

DFARS & CMMC: UNIVERSALLY PROTECTIVE MEASURES

    Executing a strong, proactive cybersecurity approach is crucial. DFARS and CMMC standards offer guidance in implementing a flexible operational strategy and threat response sufficient to withstand attacks. Together these programs provide safeguards for sensitive information, increase DIB cybersecurity to address advancing threats, institute accountability measures while maintaining a streamlined process, and encourage public trust through good ethics. While DFARS and CMMC are different, they complement each other in protecting national interests and ultimately promoting contractors’ best interests as well.

    Visit Carahsoft’s CMMC resource hub and find out how we can help companies meet CMMC and NIST 800-171 and 800-172 guidelines. Carahsoft partners with great companies and subject matter experts that can help you prepare for CMMC assessment and remediate gaps to compliance in your environment.

     

    [1] “Implementation of DFARS Clause 252.204-7012, Safeguarding Covered Defense Information and Cyber Incident Reporting,” Office of the Under Secretary of Defense, https://www.acq.osd.mil/dpap/policy/policyvault/USA002829-17-DPAP.pdf

    [2] “Understanding the Relationship Between DFARS and CMMC,” SCA Security, https://scasecurity.com/blog/the-role-of-dfars-in-cmmc/

    [3] “What Is DFARS? (+ Your Compliance Checklist),” SCA Security, https://scasecurity.com/blog/what-is-dfars/

    [4] “Fundamentals of Cybersecurity Maturity Model Certification (CMMC) 2.0,” Apptega, https://www.apptega.com/frameworks/cmmc-certification/

    [5] “CMMC 2.0: What You Need to Know About the Latest Version,” SCA Security, https://scasecurity.com/blog/cmmc-2-0/

    [6] “Your Guide to the New CMMC 2.0 Levels,” SCA Security, https://scasecurity.com/blog/your-guide-to-the-new-cmmc-2-0-levels/

    [7] “What Is CMMC?” CISCO, https://www.cisco.com/c/en/us/products/security/what-is-cmmc.html#~the-basics-of-cmmc

    [8] “What is the Difference Between CMMC and DFARS?” FTP Today, https://www.ftptoday.com/blog/difference-between-cmmc-dfars#:~:text=The%20biggest%20difference%20between%20the,government%20agencies%20they%20partner%20with

    [9] “DFARS Interim Rule Compliance 101: What You Need to Know,” SCA Security, https://scasecurity.com/blog/defense-federal-acquisition-regulation/

    [10] “The Importance of CMMC And Its Impact,” SeaGlass Technology, https://www.seaglasstechnology.com/the-importance-of-cmmc-and-its-impact/

    The Advantages of a Risk-Based Approach to Security in Government

When the U.S. Government started the Federal Cloud Computing Initiative in 2009, it had a traditional, perimeter-based, on-premises approach to security that was largely focused on securing hardware and meeting compliance requirements. The government knew its approach for cloud had to be different, so it created FedRAMP. FedRAMP’s focus on securing multi-tenant cloud environments shifted security from a hardware-focused mentality to one centered on data security and managed risk.

FedRAMP’s use of NIST’s Risk Management Framework has continued to expand how the government can use cloud services. When FedRAMP launched, it was predicted that only 25% of Federal IT systems would be suited for cloud computing. By using a risk-based approach to security, FedRAMP has introduced additional security guidelines that now make more than 75% of Federal data suitable for cloud computing.

    Benefiting from a Risk-Based Approach

The NIST Risk Management Framework allows Federal agencies to take a risk-based approach by treating data as the first element of security. In this approach, before determining any security requirements for a system, agencies must first determine what data they will be putting into it. The security requirements are then matched to the data itself.

Starting with the data allows agencies to better understand the risk of that data being manipulated, seen by the wrong person, or made unavailable, because ultimately those are the outcomes that securing a system is meant to prevent. The security process now allows the government to look at how a system protects against those threats holistically, rather than component by component.
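A concrete example of starting with the data is the FIPS 199 high-water-mark categorization used in the NIST Risk Management Framework: a system’s overall impact level is the highest impact rating across the confidentiality, integrity and availability of the data it processes. The sketch below, using hypothetical data types and ratings, shows that calculation:

```python
# Hedged sketch of FIPS 199-style security categorization: the system's category
# is the high-water mark of impact levels across all data types it processes.
# The data types and their impact ratings below are hypothetical examples.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

# data type -> (confidentiality, integrity, availability) impact
data_types = {
    "public web content":   ("low", "low", "moderate"),
    "citizen case records": ("moderate", "moderate", "moderate"),
    "law enforcement data": ("high", "moderate", "moderate"),
}

def high_water_mark(types: dict[str, tuple[str, str, str]]) -> str:
    """Return the highest impact level across all data types and security objectives."""
    all_levels = [level for impacts in types.values() for level in impacts]
    return max(all_levels, key=lambda level: LEVELS[level])

print(high_water_mark(data_types))   # "high" -> the system inherits a High baseline
```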

Defense in Depth Enables Risk Management

FedRAMP’s risk-based approach uses a concept called defense in depth, which protects data in different ways across the multiple components of a system so that the components work collectively. Focusing on the system as a whole allows for more adaptable security even when certain components have known weaknesses.

A simplified way of thinking about defense in depth is the Swiss cheese model, which is used in other industries such as healthcare, aviation and engineering. Each individual layer might have holes in it (like a slice of Swiss cheese), but when the layers are stacked on top of each other, they form a solid barrier with no holes running all the way through.

    Expanding the Risk Based Approach

The Federal government, to its credit, is working to bring a risk-based approach to many of its cybersecurity programs in addition to FedRAMP. DHS’s CDM program took this to heart with a phased rollout of capabilities. The Trusted Internet Connection 3.0 trust zones approach undoes the “all or nothing” model of previous iterations and focuses on data classifications and the type of service being used. Both of these programs have also focused on ensuring compatibility with FedRAMP and the NIST Risk Management Framework.

One of the newer concepts related to risk management is Zero Trust. In short, any time data is accessed, there is a check that whoever is accessing that data is supposed to see it, and this happens continuously while using any system. In effect, there is zero trust whenever data is accessed; trust has to be proven from component to component. How is this risk-based, and how does it fit the Swiss cheese model? Instead of allowing access to an entire system, you create segmentation that lets many people use the same system, each with a different level of access to data based on their unique need. This allows security teams to match data access with risk within a single system or across interconnected systems.
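A minimal sketch of this idea, with made-up segments, attributes and thresholds, is shown below: every data request is re-evaluated against the caller’s entitlements, device posture and contextual risk, and nothing is trusted by default:

```python
# Minimal Zero Trust-style sketch: every data access is re-evaluated against the
# caller's entitlements and request context; nothing is trusted by default.
# Segment names, attributes, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    segment: str          # data segment being accessed, e.g. "hr-records"
    device_compliant: bool
    risk_score: float     # 0.0 (benign) to 1.0 (high risk)

ENTITLEMENTS = {
    "alice": {"hr-records"},
    "bob": {"finance-ledger"},
}

def authorize(req: Request) -> bool:
    """Allow access only if the user is entitled to the segment, the device is
    compliant, and the contextual risk is acceptable, checked on every request."""
    entitled = req.segment in ENTITLEMENTS.get(req.user, set())
    return entitled and req.device_compliant and req.risk_score < 0.7

print(authorize(Request("alice", "hr-records", True, 0.2)))      # True
print(authorize(Request("alice", "finance-ledger", True, 0.2)))  # False: not entitled
print(authorize(Request("bob", "finance-ledger", False, 0.2)))   # False: non-compliant device
```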

    Salesforce and Risk Based Security

At Salesforce, we have invested heavily in adopting FedRAMP’s risk-based approach to security. We have two offerings that meet the strict FedRAMP Moderate and FedRAMP High security requirements. When Federal agencies use Salesforce’s #1 CRM, they get to leverage the best of the risk management framework, allowing them to innovate at the speed 2020 demands and scale to unprecedented levels, all while ensuring government data is secure.

    Visit our website to learn more about the GovForward: Multicloud Series and FedRAMP through our additional resources.