Preparing Federal Systems for Post-Quantum Security: A Strategic Approach

Federal agencies face an urgent timeline to protect their most sensitive data from quantum computing threats. Quantum computers leverage physics principles such as superposition and entanglement to perform certain calculations far faster than classical computers, posing a significant threat to current encryption standards. Adversaries already employ “harvest now, decrypt later” tactics, collecting encrypted data and storing it until a quantum computer powerful enough to break the encryption exists. The National Institute of Standards and Technology (NIST) has released standardized Post-Quantum Cryptography (PQC) algorithms designed to withstand quantum attacks and ensure long-term data security. The U.S. Federal Government has also issued guidance urging Federal agencies to update their IT infrastructure and deploy crypto-agile solutions that use today’s classical encryption algorithms while providing the ability to upgrade to PQC algorithms to combat this threat.

With the Cloud Security Alliance projecting cryptographically relevant quantum computers by 2030, agencies must implement these quantum-resistant algorithms before current security measures become obsolete.

The Quantum Threat Landscape

Current public key infrastructure (PKI), which underpins the internet, code signing and authentication, faces an existential threat from quantum computing. This vulnerability extends beyond theoretical concerns to three specific risk areas affecting Federal systems:

  1. Harvest Now, Decrypt Later: Attackers intercept communications and data today, storing them until quantum computers can break the encryption—potentially exposing Government secrets and sensitive information.
  2. Forged Signatures: Quantum capabilities could enable impersonation of trusted entities, allowing attackers to load malicious software onto long-life devices or create fraudulent financial transactions that impact both commercial and Federal Government systems.
  3. Man-in-the-Middle Attacks: Advanced quantum computing could facilitate access to secure systems, potentially compromising military command and control (C2) environments, disrupting critical infrastructure and interfering with elections.

The most vulnerable assets are those containing long-lived data, including decades of trade secrets, classified information and lifetime healthcare and personally identifiable information. Short-lived data that exists for only hours or months faces considerably less risk from quantum-enabled decryption.

Post-Quantum Cryptography Standards and Timeline

The standardization of quantum-resistant algorithms represents the culmination of an eight-year process spearheaded by NIST. In August 2024, NIST published its final standards for three critical algorithms:

  • ML-KEM (formerly CRYSTALS-Kyber) | FIPS 203 | Key Encapsulation
  • ML-DSA (formerly CRYSTALS-Dilithium) | FIPS 204 | Digital Signature
  • SLH-DSA (formerly SPHINCS+) | FIPS 205 | Stateless Hash-Based Digital Signature

A fourth algorithm, FN-DSA (formerly Falcon), is still pending finalization. NIST has also released Internal Report (IR) 8547, which provides comprehensive guidelines for transitioning from quantum-vulnerable cryptographic algorithms to PQC.
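To make the key-encapsulation standard concrete, the following minimal sketch runs an ML-KEM (FIPS 203) encapsulation and decapsulation round trip. It assumes the open-source liboqs-python bindings (the oqs package), which this article does not name; the algorithm identifier string varies by library version, so treat this as illustrative rather than as guidance toward any particular product.

```python
# Minimal ML-KEM (FIPS 203) round trip using the open-source liboqs-python
# bindings ("pip install liboqs-python"). The algorithm identifier varies by
# library version; older releases expose the pre-standard name "Kyber768".
import oqs

ALG = "ML-KEM-768"  # assumed name; check oqs.get_enabled_kem_mechanisms()

# Receiver: generate a key pair and publish the public key.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender: encapsulate a fresh shared secret against the public key.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, sender_secret = sender.encap_secret(public_key)

    # Receiver: recover the same shared secret from the ciphertext.
    receiver_secret = receiver.decap_secret(ciphertext)

assert sender_secret == receiver_secret
print(f"{ALG}: {len(public_key)}-byte public key, "
      f"{len(ciphertext)}-byte ciphertext, "
      f"{len(sender_secret)}-byte shared secret")
```

Running a loop like this against representative workloads is a quick way to see how the larger ML-KEM keys and ciphertexts compare with the RSA and elliptic-curve material they will eventually replace.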

The National Security Agency’s (NSA) Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), released in September 2022 with an FAQ update in April 2024, outlines specific PQC requirements for National Security Systems. These standards have become reference points for Federal agencies beyond classified environments, establishing a staggered implementation timeline:

  • 2025-2030: Software/firmware signing
  • 2025-2033: Browsers, servers and cloud services
  • 2026-2030: Traditional networking equipment
  • 2027-2033: Operating systems

Crypto Agility and Transition Strategy

It is essential for Federal agencies to deploy crypto-agile solutions: flexible, upgradable technology that makes it possible to swap out the underlying cryptographic primitives quickly. This capability allows organizations to support both current algorithms and future quantum-resistant ones without hardware replacement.
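In application code, crypto agility often comes down to hiding the primitive behind a small interface and selecting the concrete algorithm from configuration. The sketch below is a hedged illustration of that pattern, not a prescribed design: the classical implementation uses the widely available cryptography package, and the commented-out registry entry marks where a standards-based ML-KEM implementation would plug in once the agency’s chosen library supports it.

```python
# Illustrative crypto-agility pattern: callers request a key-establishment
# primitive from a registry by name, so moving from classical to post-quantum
# algorithms becomes a configuration change rather than a code change.
# Requires the "cryptography" package; the PQC entry below is a placeholder.
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)


class X25519KeyExchange:
    """Classical Diffie-Hellman over Curve25519 (quantum-vulnerable)."""

    def __init__(self):
        self._private = X25519PrivateKey.generate()

    def public_bytes(self) -> bytes:
        return self._private.public_key().public_bytes_raw()

    def shared_secret(self, peer_public_bytes: bytes) -> bytes:
        peer = X25519PublicKey.from_public_bytes(peer_public_bytes)
        return self._private.exchange(peer)


# Registry keyed by a configuration string. A post-quantum implementation
# (for example, an ML-KEM wrapper) would be registered here without touching
# any calling code.
KEY_ESTABLISHMENT = {
    "classical-x25519": X25519KeyExchange,
    # "pqc-ml-kem-768": MlKem768KeyExchange,  # hypothetical future entry
}


def make_key_establishment(configured_name: str):
    return KEY_ESTABLISHMENT[configured_name]()


if __name__ == "__main__":
    alice = make_key_establishment("classical-x25519")
    bob = make_key_establishment("classical-x25519")
    assert alice.shared_secret(bob.public_bytes()) == bob.shared_secret(alice.public_bytes())
    print("key agreement succeeded via the registry-selected primitive")
```

A real deployment would also need a common interface shape that covers both Diffie-Hellman-style exchanges and KEM-style encapsulation, which is exactly the kind of abstraction crypto-agile products provide.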

A comprehensive transition strategy includes seven critical steps:

  1. Awareness: Understand the challenges, risks and necessary actions to prepare for quantum threats.
  2. Inventory and Prioritize: Catalog cryptographic technologies and identify high-risk systems—a process the Cybersecurity and Infrastructure Security Agency (CISA) mandated via spreadsheet submission last year.
  3. Automate Discovery: Implement tools that continuously identify and inventory cryptographic assets, recognizing that manual inventories quickly become outdated.
  4. Set Up a PQC Test Environment: Establish testing platforms to evaluate how quantum-resistant algorithms affect performance, as these algorithms generate larger keys that may impact systems differently (a size-comparison sketch follows this list).
  5. Practice Crypto Agility: Ensure systems can support both classical algorithms and quantum-resistant alternatives, which may require modernizing end-of-life hardware security modules.
  6. Quantum Key Generation: Leverage quantum random number generation to create quantum-capable keys.
  7. Implement Quantum-Resistant Algorithms: Deploy PQC solutions across systems, beginning with high-risk assets while preparing for a multi-year process.
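As step 4 suggests, one of the first things a test environment should quantify is how much larger PQC keys and signatures are than the elliptic-curve material they replace. The rough measurement below assumes the liboqs-python bindings and the cryptography package are installed, and that the ML-DSA identifier matches the installed library version; it is a sketch, not a benchmark.

```python
# Rough size comparison between a classical Ed25519 signature and an ML-DSA
# (FIPS 204) signature, the kind of measurement a PQC test environment would
# gather. Assumes liboqs-python ("oqs") and the "cryptography" package; the
# ML-DSA identifier may differ by library version (older: "Dilithium3").
import oqs
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

message = b"firmware-update-manifest"

# Classical baseline: Ed25519.
ed_key = Ed25519PrivateKey.generate()
ed_pub = ed_key.public_key().public_bytes_raw()
ed_sig = ed_key.sign(message)
print(f"Ed25519   public key: {len(ed_pub):5d} B   signature: {len(ed_sig):5d} B")

# Post-quantum: ML-DSA-65.
with oqs.Signature("ML-DSA-65") as signer:
    pq_pub = signer.generate_keypair()
    pq_sig = signer.sign(message)
    print(f"ML-DSA-65 public key: {len(pq_pub):5d} B   signature: {len(pq_sig):5d} B")
```

The gap in sizes is what drives the performance and protocol questions (packet sizes, certificate chains, constrained devices) that the test environment exists to answer.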

Practical Implementation of PQC


Federal agencies should look beyond algorithms to consider the full scope of implementation requirements. The quantum threat extends to communication protocols including Transport Layer Security (TLS), Internet Protocol Security (IPSec) and Secure Shell (SSH). It also affects certificates like X.509 for identities and code signing, as well as key management protocols.

Hardware security modules (HSMs) and high-speed network encryptors serve as critical components in quantum-resistant infrastructure. These devices must support hybrid approaches that combine classical encryption with PQC to maintain backward compatibility while adding quantum protection.
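A common hybrid construction, and one way to read the guidance above, is to run a classical key agreement and a PQC key encapsulation side by side and feed both secrets into a single key-derivation step, so the resulting session key stays safe as long as either primitive holds. The sketch below assumes the cryptography package for X25519 and HKDF and the liboqs-python bindings for ML-KEM; it illustrates the idea only and is not a profile of any specific protocol or product.

```python
# Illustrative hybrid key establishment: combine a classical X25519 shared
# secret with an ML-KEM-768 shared secret through HKDF so the derived key
# remains protected if either primitive is later broken.
# Assumes the "cryptography" package and liboqs-python ("oqs").
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical component: X25519 Diffie-Hellman.
client_ecdh = X25519PrivateKey.generate()
server_ecdh = X25519PrivateKey.generate()
classical_secret = client_ecdh.exchange(server_ecdh.public_key())

# Post-quantum component: ML-KEM-768 encapsulation (the identifier may vary
# by library version; older releases use "Kyber768").
with oqs.KeyEncapsulation("ML-KEM-768") as server_kem:
    kem_public_key = server_kem.generate_keypair()
    with oqs.KeyEncapsulation("ML-KEM-768") as client_kem:
        kem_ciphertext, pq_secret = client_kem.encap_secret(kem_public_key)
    assert server_kem.decap_secret(kem_ciphertext) == pq_secret

# Combine both secrets into one session key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"illustrative-hybrid-session-key",
).derive(classical_secret + pq_secret)

print(f"derived {len(session_key)}-byte hybrid session key")
```

Devices that advertise hybrid support implement this kind of combination inside the hardware itself rather than leaving it to application code.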

The National Cybersecurity Center of Excellence (NCCoE) is coordinating a major post-quantum crypto migration project involving more than 40 collaborators, including industry, academia, financial sectors and Government partners. This initiative has already produced testing artifacts and integration frameworks available through NIST Special Publication (SP) 1800-38.

Crypto Discovery and Inventory Management

Automated discovery tools represent a crucial capability for maintaining an accurate and current inventory of cryptographic assets. Unlike the one-time manual inventories many agencies completed in 2022-2023, these tools enable continuous monitoring of cryptographic implementations across the enterprise.

Several vendors offer specialized solutions for cryptographic discovery, including InfoSec Global, SandboxAQ and IBM. These tools can:

  • Discover and classify cryptographic material across environments
  • Identify which assets are managed or unmanaged
  • Determine vulnerability to quantum attacks
  • Support centralized crypto management and policies
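Commercial discovery platforms reach far beyond what a script can do, but a minimal sketch of the underlying idea, assuming the cryptography package and a directory of PEM-encoded certificates, is to walk the file system, parse each certificate and flag the quantum-vulnerable key types:

```python
# Minimal illustration of cryptographic discovery: walk a directory, parse
# PEM-encoded X.509 certificates and flag quantum-vulnerable key types.
# Real discovery tools cover many more sources (TLS endpoints, code signing,
# key stores, HSMs); this sketch assumes the "cryptography" package.
import sys
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa


def classify(cert: x509.Certificate) -> str:
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size} (quantum-vulnerable)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"ECC {key.curve.name} (quantum-vulnerable)"
    return type(key).__name__  # fall back to the key class name


def scan(root: Path) -> None:
    for path in root.rglob("*.pem"):
        try:
            cert = x509.load_pem_x509_certificate(path.read_bytes())
        except ValueError:
            continue  # not a certificate (private key, CSR, unrelated PEM)
        print(f"{path}: {cert.subject.rfc4514_string()}: {classify(cert)}")


if __name__ == "__main__":
    scan(Path(sys.argv[1] if len(sys.argv) > 1 else "."))
```

The value of the commercial tools is that they run this kind of classification continuously and across every environment, so the inventory never goes stale.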

The Cloud Security Alliance has coined the term “Y2Q” (Years to Quantum) as an analogy to the “Y2K bug,” highlighting the need for systematic preparation. However, the quantum threat represents a potentially more significant risk than Y2K: the alliance’s Y2Q countdown marks April 14, 2030 as the projected date by which a cryptographically relevant quantum computer could break current cryptography.

Moving Forward with Quantum-Resistant Security

The transition to post-quantum cryptography is not optional for Federal agencies—it is an imperative. While the process requires significant investment in time and resources, the alternative—leaving sensitive Government data vulnerable to decryption—poses an unacceptable risk to national security.

Agencies should begin by evaluating their existing cryptographic inventory, prioritizing systems with long-lived sensitive data and developing implementation roadmaps aligned with NIST and NSA timelines. By taking incremental steps today toward quantum-resistant infrastructure, Federal organizations can ensure their critical information remains secure in the quantum computing era.

To learn more about implementing quantum-resistant security in Federal environments, watch Thales Trusted Cyber Technologies’ (TCT) webinar, “CTO Sessions: Best Practices for Implementing Quantum-Resistant Security.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Thales TCT, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry-leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

A New Era in Government Cybersecurity

Securing government systems was a complex undertaking even before the pandemic. In response to that crisis, agencies rapidly deployed cloud technology, mobile devices and collaboration tools for remote employees — and added new vulnerabilities and IT management challenges to an already long list of cybersecurity priorities. Malicious actors have taken note of the new opportunities and continue to mount increasingly sophisticated attacks on government systems and critical infrastructure.

To keep pace with those risks, government teams need multifaceted yet holistic strategies that address a wide range of threats to network endpoints, identity and access management, and data. In addition, agencies must strike the right balance of productivity and security for a mix of on-site and remote employees — a key concern of 75% of the respondents to a recent FCW reader survey.

Fortunately, zero trust has been gaining traction because of its ability to address key challenges related to identity management, endpoint security and data protection. Interest in zero trust has skyrocketed thanks to a mandate in the Biden administration’s 2021 Executive Order on Improving the Nation’s Cybersecurity. But although zero trust can play a key role in ensuring that only authorized users have access to IT systems and data, it doesn’t always protect against human mistakes. In addition, security responsibilities have crossed traditional internal boundaries, and agencies are finding that they need to unify the priorities of security teams and mission owners. Learn how agencies can continue to evolve cybersecurity architecture and strategy, given the increased attack rate and creativity of malicious actors in Carahsoft’s Innovation in Government® report.

 

The Power of Real-Time Cyber Intelligence  

“Government agencies are realizing that if they are going to mitigate cybersecurity risks and respond to breaches more quickly, they need access to real-time operational intelligence. However, they also recognize that their security products and intelligence sources must be readily integrated. A security operations center (SOC) can’t function when it has 50 products that don’t talk to one another and whose data can’t be easily fused and normalized. Many organizations try to manually corroborate a notable security event with other data, such as external threat intelligence, feedback from an endpoint detection and response platform, or information from the Department of Homeland Security. A manual process is slow, inefficient and ultimately doomed to failure.”

Read more insights from Splunk’s chief cybersecurity advisor for public sector, Paul Kurtz.

 

Treating Identity as Critical Infrastructure  

“Agencies can assess the state of their identity infrastructure by continually asking whether they are delivering the right capabilities to their employees, the public and other customers and whether they are doing so in a way that matches how people live and work today. We all have high expectations for capabilities and usability because of our daily interactions with smartphones. We’re used to conducting our business quickly and efficiently, and agencies should likewise be building enterprise systems that support the fast and efficient delivery of government services. Furthermore, agencies should build those systems with a line of sight to the future.”

Read more insights from Okta’s federal chief security officer, Sean Frazier.

 

The Importance of Future-Proofing Cybersecurity

“Access control through multifactor authentication is an important aspect of both directives. The combination of username and password is not sufficient to secure access to IT systems. Agencies also need to deploy strong multifactor authentication that relies on some type of hardware- or software-based token for granting access to the environment and then to the data. Furthermore, the White House executive order mandates the protection of data through encryption not only when it is at rest but also when it is moving to and from the network edge and beyond.”

Read more insights from Thales TCT’s deputy CTO, Gina Scinta.

 

The Game-Changing Nature of Cyber Resiliency

“The COVID-19 pandemic prompted the largest modernization effort the government has ever seen. However, in addition to the many benefits of that modernization, hybrid work environments have added an ever-growing number of endpoints and created new identity-based vulnerabilities for attackers to exploit. Agencies can be more strategic in their approach to endpoint security by focusing on cyber resiliency. Although the term has been around for several years, it has been emphasized recently by the National Institute of Standards and Technology (NIST).”

Read more insights from SentinelOne’s vice president of federal sales, Todd Helfrich.

 

Galvanizing Agencies into Action on Cybersecurity

“The Executive Order on Improving the Nation’s Cybersecurity has spurred agencies to modernize the way they protect IT systems and data. Now there is a shared commitment to the steps that IT leaders should take, and agencies have been galvanized into action. For example, zero trust was mostly just a buzzword for agencies prior to the executive order, and now it is something that federal agencies are seriously exploring. They’re going beyond reading whitepapers to asking for vendor demos and testing ideas.”

Read more insights from Cribl’s senior director of market strategy, Nick Heudecker.

 

Aligning Your Digital Collaboration to Zero Trust

“Guest access provides people outside your organization access to content inside your M365 workspaces (i.e., Teams, SharePoint and Groups). A health care-focused agency could use guest accounts to collaborate with grantees and their site staff or academic researchers. A defense-focused agency could use guest access to coordinate with local law enforcement to plan incident response or correspond about special event planning. Despite the benefits, agencies need policies and reporting when using features like guest access to ensure your information stays protected.”

Read more insights from AvePoint’s director of federal strategy for public sector, Jay Leask.

 

Download the full Innovation in Government® report for more insights from these digital transformation thought leaders and additional industry research from FCW.

The Ongoing Quest for Cybersecurity

 

Government agencies were already under pressure to modernize their cybersecurity strategies before the pandemic hit, and as workplaces closed and government employees struggled to access data and systems from makeshift home offices, the cybersecurity risks grew. The use of virtual private networks in the U.S. increased to match the early spike in COVID-19 cases, rising 124% in the two weeks from March 8 to March 22, 2020, according to Statista. Around the same time, the Cybersecurity and Infrastructure Security Agency (CISA) issued an alert titled “Enterprise VPN Security,” which offered both warnings and guidance on how to handle the surge in usage.

With so many employees logging in remotely, agencies found that they had to shift their focus from securing a well-defined perimeter to securing the data that fuels government operations. In a recent survey of FCW readers, protecting data topped the list of cybersecurity priorities, with 75% of respondents citing it.

In response to such concerns, CISA released its Ransomware Guide in September 2020. And in May 2021, President Joe Biden mandated that agencies adopt zero trust in his Executive Order on Improving the Nation’s Cybersecurity; the National Security Agency had released a paper a few months ahead of that mandate titled “Embracing a Zero Trust Security Model.” Read the latest insights from industry thought leaders in Carahsoft’s Innovation in Government® report on cybersecurity.

 

The Future of Cybersecurity is Autonomous

“Analysts have too much atomic data and not enough context about that data. When they don’t have the full picture, they can’t take appropriate action. Re-creating each attack by hand takes painstaking care. And though analysts often relish this challenge, there’s simply not the time to do so for every presented case. Forward-thinking organizations are using artificial intelligence/machine learning (AI/ML) capabilities to fortify user endpoints and server workloads across an array of operating systems. These automations are designed to monitor the growing number of attack vectors in real time and present the full context of an attack in an easy-to-understand view that’s modeled after a kill chain.”

Read more insights from SentinelOne’s COO, Nick Warner.

 

Tailoring Zero Trust to Individual Users

“Zero trust is an important construct for helping agencies protect their infrastructure in today’s cybersecurity landscape. It focuses on accrediting individuals and their access to government resources. Agencies should make those decisions about access based on a comprehensive understanding of users. Security policies that treat all users as equally risky can be restrictive. Such policies set the bar high and hamper employees’ ability to work, or they set the bar low, which defeats the purpose of having security. Instead, agencies should evaluate users on an individual basis by taking the time to understand what employees do and how they do it — what’s normal behavior and what’s not. Then they can assess the risk of an individual based on that context.”

Read more insights from Forcepoint’s President of Global Governments and Critical Infrastructure, Sean Berg.

 

Modernizing Security for a Mobile Workforce

“Securing data and apps begins with positively identifying the user. In government, agencies have used multifactor authentication and all kinds of certificates, but those are simple pass/fail security checks. Once users are allowed to cross the security barrier, they often have wide-ranging access to government resources. This means adversaries and malicious (or careless) insiders passing the security checks receive free rein as well. Government needs to move to a continuous authentication model, which leads to better security and a better user experience. It involves seamlessly authenticating users every step of the way — when they touch the keyboard or scroll through an app on a screen. That activity, down to the microscopic vibrations in a person’s fingertip, can be sensed and understood so that IT administrators can answer the question: Is this really the authenticated user, or is it somebody else?”

Read more insights from BlackBerry’s Chief Evangelist, Brian Robison.

 

The Dangers that Lurk in Mobile Apps

“Government employees are increasingly reliant on mobile applications to do their jobs. But without formal monitoring programs in place, agencies might be unaware of the risks inherent in commercial and government-built apps. As a result, few agencies are investing resources and time to address a serious problem. The average mobile device has 60 to 80 apps, representing a huge potential for vulnerabilities at agencies whose employees are using those devices for work. Thousands of apps could be tracking employees or intercepting data. NowSecure founder Andrew Hoog has said mobile apps are the ultimate surveillance tool, given the mix of personal and mission activities in one space.”

Read more insights from NowSecure’s Chief Mobility Officer, Brian Reed.

 

Why Data is a Critical Cybersecurity Tool

“Once agencies have gathered their data in a scalable, flexible platform, they can apply artificial intelligence to derive insights from the data. AI speeds analysis and is particularly effective when agencies move from signature-based to behavior-based threat detection. A signature-based approach is good for detecting threats we already know about, but a behavior-based AI approach can adapt to new threats by looking for anomalies such as changes in the behavior of a server or endpoint device. AI also helps with investigations by reconstructing the sequence of events that happened during an intrusion, which fuels agencies’ ability to prevent future attacks. With AI, agencies can start to apply more sophisticated algorithms in their hunt for vulnerabilities and cyber threats.”

Read more insights from Cloudera’s Principal Solutions Engineer and Cybersecurity SME Lead, Carolyn Duby.

 

Zero Trust Data Management Foils Ransomware Attacks

“Agencies must ensure recoverability because none of these protections matter if they can’t recover data and systems that run their critical missions and operations. Agencies need to gather and protect data at the edges of their networks, in their data centers and across different clouds. And regardless of where agencies decide to store that data, they need to be able to access it instantly. Recoverability service-level agreements of minutes and hours are possible and delivered today across the whole of government and the Defense Department. Gone are the days of weeks and months to get back online.”

Read more insights from Rubrik’s Public-Sector CTO, Jeffrey Phelan.

 

Reclaiming Control over Complex IT Environments

“When employees were sitting in a government office behind a firewall, IT administrators had a clearly defined perimeter to protect. Now IT administrators are still focused on protecting the agency’s mission and assets, but the responsibility has become more difficult because they’ve lost some visibility and control over the infrastructure. In response, many organizations are moving toward strategies based on zero trust, which requires validating users and devices before they connect to government systems, or least privilege, which involves only giving employees access to the resources and applications they need to perform their jobs. Zero trust and least privilege require continuous monitoring and a risk-based approach to adding or removing authorizations.”

Read more insights from SolarWinds’ Group Vice President of Product, Brandon Shopp.

 

The Role of Authentication in Data Protection

“Users who need to access low-risk applications and data — for example, publicly available product information — can use an authentication method such as one-time password tokens. But if that same user wants to access higher-value data such as corporate finance records, the required level of authentication should increase, perhaps requiring public-key infrastructure (PKI) authentication with a smartcard. The key is to manage those activities via one pane of glass or one platform that supports the entire risk-based and continuous authentication process. In the past, we’ve been able to base decisions on where users are located — for example, whether they’re accessing data from within the network or remotely via VPN — but that is no longer enough. New technology tools enable agencies to gain a deeper understanding of users’ online behavior so they can make more informed decisions about authentication.”

Read more insights from Thales TCT’s Vice President of Product Management, Bill Becker.

 

Verification and Validation to Enhance Zero Trust

“Networking teams rely on standard configurations to maintain the security policy. These standard configurations dictate connectivity and traffic flows to ensure users can access appropriate resources while preventing unauthorized access. The idea of a standard configuration seems simple, but maintaining it is extremely difficult. Validating configurations is clearly mission critical, but monitoring and validating network behavior are even more telling and help ensure that policies are not inadvertently being circumvented and that there is no unintended connectivity.”

Read more insights from Forward Networks’ Technical Solutions Architect, Kevin Kuhls.

 

Extending Zero Trust Down to the File Level

“A software-defined perimeter integrates proven, standards-based security tools to create the ideal foundation for zero trust. When used together, those two approaches give agencies the granularity to customize their security protocols. For example, the IT team could allow USB mice but not USB thumb drives that can store data, and they could block potentially unwanted applications that anti-malware engines might not identify as malicious, such as bitcoin-mining or file-sharing apps. Zero trust is a mindset rather than a specific group of tools. The National Institute of Standards and Technology’s Special Publication 800-207 on zero trust architecture advocates taking a holistic approach to authenticating devices and users and extending that attitude to agency assets, services and workflows.”

Read more insights from OPSWAT’s Senior Director of Government Sales, Michael Hylton.

 

Download the full Innovation in Government® report for more insights from these government cybersecurity leaders and additional industry research from FCW.

No-Excuse Defenses Against Supply Chain Attacks

 

A supply chain attack aims to damage an organization by targeting less secure elements in its supply network. The initial victim becomes a steppingstone to infiltrate other networks. Exploiting a service provider’s data supply chain or traditional manufacturer supply chain has been the objective in many recent major data breaches. There was a 78% increase in supply chain attacks from 2018 to 2019—and 45% of those attacks targeted federal agencies.

Instead of directly compromising an agency, attackers infiltrate an integrator or partner. That helps attackers bypass the strong existing defenses of agencies themselves. Once inside the network, attackers can move vertically, compromising other vendors, software, IT contractors or IoT devices. Attackers also have the option of moving horizontally, taking advantage of connections to other agencies or contractors that share joint projects.

The 2013 attack against Target is the classic example of a supply chain attack. Attackers used stolen credentials from Target’s HVAC systems vendor to access the retailer’s network and move laterally into the systems that stored customer payment information.

The Scope of the Cybersecurity Problem

The movement of nation-states into the cyberattack business has increased attackers’ technological capabilities. A recent study found that once Russian actors infiltrate a network, the targeted organization has only about 19 minutes to mitigate the risk and shut down the intrusion before the attackers move to another server, PC or device in the network. Moreover, the risk to government agencies is growing in a number of alarming ways.

  • Thales’ federal data threat report showed that 60% of federal agencies have been compromised at least once.
  • 35% of federal agencies were compromised just last year.
  • Of that 35%, 14% had also been compromised the year before.
  • COVID-19 has increased the use of BYOD policies.
  • IoT also multiplies the availability of soft targets.

Some 94% of malware is delivered by email, and most people get dozens of emails a day, making it hard to police all of them. The recent SolarWinds compromise, for example, relied on dormant malware hidden inside a legitimate software update.

Supply Chain Attack Scenarios

A secure file gateway is a next-generation technology that handles attacks in a fundamentally different way from most cybersecurity solutions, stopping a threat before it spreads into a network. Many cybersecurity vendors focus on the execution of an attack, determining how it happened after it has occurred. A secure file gateway helps agencies prevent the attack from being executed in the first place while still allowing the agency to access its environment and remain productive.

Rather than quarantining problematic files the way most antivirus programs would, agencies need a solution that sanitizes them. A secure file gateway cleans a file by stripping out and quarantining the malicious content, then placing the known-good content in a new template that the end user can safely open.

For example, a small law firm might send a message to an insurance provider, unaware that malicious code is hidden inside an attached Excel spreadsheet. When the end user opens the spreadsheet, it launches a shell session that gives the attacker a foothold in the insurance provider’s network. A secure file gateway instead breaks the file into pieces and examines each one, removing the malicious content embedded in the Excel file and thwarting the attack before it ever reaches the network. The end user receives a sanitized message with a new Excel spreadsheet that does not contain the malicious code.
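As a hedged, toy illustration of that rebuild-into-a-clean-template idea (real secure file gateways inspect many file formats and embedded objects far more deeply), the sketch below uses the openpyxl package to copy only cell values out of an untrusted workbook into a brand-new file, leaving macros, embedded objects and formulas behind:

```python
# Toy illustration of content disarm and reconstruction (CDR) for a
# spreadsheet: copy only cell values from an untrusted workbook into a
# brand-new one, so macros, embedded objects and formulas never carry over.
# Real secure file gateways handle many formats far more thoroughly; this
# sketch assumes the "openpyxl" package and .xlsx/.xlsm input files.
from openpyxl import Workbook, load_workbook


def sanitize_workbook(untrusted_path: str, clean_path: str) -> None:
    # data_only=True reads cached results instead of formulas; macros and
    # embedded content are simply never written to the new workbook.
    source = load_workbook(untrusted_path, data_only=True)
    clean = Workbook()
    clean.remove(clean.active)  # drop the default empty sheet

    for sheet in source.worksheets:
        target = clean.create_sheet(title=sheet.title)
        for row in sheet.iter_rows():
            for cell in row:
                if cell.value is not None:
                    target.cell(row=cell.row, column=cell.column, value=cell.value)

    clean.save(clean_path)


if __name__ == "__main__":
    sanitize_workbook("untrusted.xlsm", "sanitized.xlsx")
```

A production gateway also preserves formatting and legitimate active content where policy allows, which is where the commercial products differ from a sketch like this.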

Enabling Both Safety and Productivity

In another scenario, a legitimate-looking email message might contain a link offering free ice cream that is actually a threat with an embedded shell file. The secure file gateway processes the message directly, stripping away the shell file and retaining the real message. It sanitizes messages as they are being downloaded to end users’ desktops, ensuring that end users receive usable files no matter what happens.

By the time the end user receives the files, they’re 100% sanitized and safe to be inside the organization’s infrastructure. Another cybersecurity solution might have blocked or quarantined the message altogether. If the end user wanted to get the information in the message, it would need to be released from quarantine and scrubbed by the security team.

With a secure file gateway, an agency’s employees can use files without having to wrestle with the security team about which files are safe to use. A dashboard allows security personnel to see which files have been sanitized. The solution enables agency productivity without compromising security.

A good gateway solution also retains copies of the original and the sanitized version so an agency can investigate the attempted attack. Ordinarily, when these types of attacks occur, the file gets executed on the user’s machine and deletes itself. That prevents the security team from triaging the file or understanding exactly what it did when executed. By retaining the original file, a secure file gateway makes it easier for security teams to examine it and learn where it entered the system.

 

View Thales and Votiro’s webinar to learn more about supply chain attacks and how to solve these cybersecurity issues.