The Importance of Securing the Software Supply Chain

Moving Upstream: The Evolution of Software Supply Chain Attacks

The software supply chain consists of multiple components, touching every piece of code from the moment of conception to the moment of deployment into a Government application. It spans third-party libraries, open source components, build tools and software architecture, making it a valuable target for attackers.

The software supply chain threat landscape has evolved from a series of disjointed yet targeted attacks to a broader upstream poisoning strategy. Historically, malicious actors targeted specific agencies; today, they target upstream public software libraries and repositories. Because these open source libraries are used by thousands of Government agencies, a single attack can cause untold damage. In the Public Sector, a compromised supply chain does not just mean a data leak; it can constitute a threat to national security.

Several real-world cyberattacks exemplify this pattern change, including the 2025 Shai-Hulud software supply chain attack and the 2025 GlassWorm Integrated Development Environment (IDE) extension cyberattack. Malicious actors contribute seemingly helpful code to public open source projects, but the contributions contain hidden backdoors or vulnerabilities that can grant access to systems run by Government agencies.

Some hackers target the developer toolchain and IDE more broadly, as shown in the GlassWorm IDE extension cyberattack. GlassWorm was self-propagating malware whose initial infection vector was a malicious IDE extension distributed through a popular extension marketplace. Other malicious actors have targeted artificial intelligence (AI)-powered supply chains, taking advantage of the speed and power of AI to propagate sophisticated, multi-pronged threat campaigns against the developer ecosystem.

Setting Up for Success: Security Built Into the Process

In February 2022, the US Government published the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF) to combat threats to the software supply chain. The publication organizes its guidance into four main practice groups:

  • Preparing the organization
  • Protecting the software
  • Producing well-secured software
  • Responding to vulnerabilities

These groups shift the model away from fragmented security tools stitched together and toward a unified process in which security is baked directly into the developer’s workflow. For agencies, this framework provides a common language from which they can all develop a cohesive, secure and regulated software supply chain.

One of the ways developers can secure their supply chains is through a Software Bill of Materials (SBOM). An SBOM is essentially a recipe for software; it outlines all of the components inside a piece of software. SBOMs became required through Executive Order (EO) 14028, but creating them manually at the speed of modern DevSecOps is nearly impossible. Furthermore, as the Government manages risk and prepares for quantum-safe cryptography, the ability to support industry-standard and Federal compliance requirements for the Software Package Data Exchange (SPDX) and CycloneDX SBOM formats, which include Vulnerability Exploitability Exchange (VEX) and cryptographic information, is mandatory for mission success.

The automation of SBOMs affects multiple components of the software supply chain:

  • Real-Time Visibility: Agencies gain insight into every aspect of the software supply chain, from the deployment of a new line of code to the moment a common vulnerability and exposure (CVE) enters their inventory.
  • Reach of Vulnerability: DevSecOps teams can examine a vulnerable part of a library, determine whether the vulnerable code is actually executed, identify the path of remediation and decide how to prioritize remediation efforts.
  • Continuous Compliance: Automated SBOM generation ensures that every release is compliant with Federal standards without requiring a manual audit each time.
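To make the automation concrete, here is a minimal sketch of what assembling an SBOM in the CycloneDX JSON format might look like. The top-level field names follow the public CycloneDX schema, but the `make_cyclonedx_sbom` helper and its sample component are illustrative, not part of any agency or vendor tooling.

```python
import json
import uuid
from datetime import datetime, timezone

def make_cyclonedx_sbom(components):
    """Assemble a minimal CycloneDX-style SBOM document as a dict.

    `components` is a list of (name, version, purl) tuples collected
    by whatever build-time scanner an agency already runs.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
        "components": [
            {"type": "library", "name": n, "version": v, "purl": p}
            for n, v, p in components
        ],
    }

sbom = make_cyclonedx_sbom([
    ("requests", "2.31.0", "pkg:pypi/requests@2.31.0"),
])
print(json.dumps(sbom, indent=2))
```

Emitting a document like this on every build is what turns the SBOM from a periodic audit artifact into a continuously refreshed inventory.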

Beyond SBOMs, Federal agencies can focus on implementing other safeguards. Developing a curation process to vet open source libraries and components before they are ever downloaded is a critical first step. Agencies should examine potential application and service exposures, such as leaked credentials or backdoors in the software architecture. Additionally, securing the code at the binary level ensures that what was tested and developed is exactly what is run in production.
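One way to read "securing the code at the binary level" is digest pinning: record a cryptographic hash of the exact artifact that passed testing, and refuse to deploy anything that hashes differently. A minimal sketch, using only the standard library; the helper names are illustrative, not any vendor's API.

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of an artifact on disk, in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, expected_digest):
    """Refuse to promote a binary whose digest differs from the digest
    recorded when the artifact was built and tested."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(f"artifact {path} changed after testing")
    return True
```

A deployment gate that calls `verify_artifact` before promotion guarantees that what was tested is byte-for-byte what runs in production.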

The JFrog Software Supply Chain Platform: All in One

From the inception of code to runtime during mission-critical operations, having a single platform that provides security and visibility across the Software Development Life Cycle (SDLC) is crucial. The JFrog Platform delivers both by focusing on universal binary management. It supports over 30 open source package types, including Docker, Maven and Python. JFrog Artifactory, JFrog’s universal artifact repository manager, manages these packages from one place, providing a single source of truth for developers who support mission-critical applications.

JFrog does not just look at the top layer for vulnerabilities and exposures; the platform scans deep into every dependency and sub-dependency within the binary to protect developer tools and infrastructure. Signed evidence at every gate creates end-to-end traceability from the developer’s IDE to edge deployment. The JFrog Platform is compatible with multiple network environments, from on-prem to hybrid to a flexible multicloud strategy.

As the Government modernizes its approach to digital transformation, agencies need industry partners that provide visibility into the next frontier. Security starts with and extends across the software supply chain, from the inception of the code at the binary level to the deployment of the application. The JFrog Platform delivers unprecedented trust assurance and risk mitigation through its signature binary-level security and positions its Public Sector customers and partners at the bleeding edge of innovation.

Explore JFrog’s DevSecOps solutions and how JFrog can protect Public Sector software supply chains from code to production.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including JFrog, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Built for This Moment (and All Those to Come): Introducing Symantec CBX

Finally, a security platform for smaller teams fighting larger threats

  • Disconnected, vendor-dependent security stacks leave smaller teams blind to threats and overwhelmed by noise they’re not equipped to manage.
  • Symantec CBX unifies Symantec and Carbon Black capabilities into a cloud-based XDR platform that delivers native telemetry correlation, AI-driven insights and enterprise-grade protections without enterprise-level complexity.
  • Built for resource-constrained teams, Symantec CBX reduces costs, cuts alert fatigue, accelerates response and gives organizations a long-overdue advantage against increasingly sophisticated, AI-powered attacks.
  • See Symantec CBX in action in Booth N-5345 at RSAC 2026 Conference.

It’s time for the cybersecurity industry to face an uncomfortable truth: The tools meant to make organizations safer are often the very systems slowing them down, and sometimes leaving them vulnerable.

The problem is that security stacks are built over time from disparate tools that prevent analysts from seeing the full operating environment. Smaller security teams have relied on vendors to solve the challenge of integrating various products—and too often, vendors have fallen short, making it too difficult to gather and correlate the telemetry needed to understand what’s really happening across endpoints, networks and data.

While large enterprises have the resources to manage and integrate complex security stacks, left behind are the organizations that make up the largest swath of the cybersecurity customer market: smaller, less-resourced security teams that increasingly face AI-powered, enterprise-grade threats but lack the budgets and in-house expertise to implement enterprise-grade defenses. These sophisticated attacks can decimate smaller organizations, turning them into casualties of an escalating cyber war fueled by nefarious AI agents that never miss a day of work.

These security teams don’t just need better tools. They need an advantage. Now they have one.

XDR from the pioneer of EDR

Today, we’re introducing Symantec CBX, a groundbreaking new extended detection and response (XDR) solution that combines all the best capabilities of Symantec and Carbon Black into a unified, cloud-based platform. Symantec CBX is the first new product to integrate features from these two iconic brands. But more importantly, it’s the first fully featured XDR platform built expressly for smaller teams looking to evolve their security protections, but that lack the expertise and resources needed to configure and optimize traditional enterprise-class XDR solutions.

In Symantec CBX, we’ve distilled decades of innovation from Symantec and Carbon Black into a platform that solves the problem of correlating and making sense of telemetry across endpoints, networks and data. Typically, the various tools within security stacks attempt this via API integrations. But those fragmented couplings are often incomplete and leave dangerous gaps in visibility and actionable insight. Security analysts may understand that something is happening—they just don’t always know what it is or what to do about it.

The problem grows worse as attack surfaces expand. Organizations send more and more data to costly SIEM platforms, leading to a waterfall of challenges, from endless false positives that waste analyst time to murky outcomes that frustrate corporate management looking for evidence that security programs are working. These are costs smaller organizations can’t afford.

Symantec CBX solves this by combining Symantec’s robust prevention, data security and network security features with Carbon Black’s pioneering EDR technology in a single cloud platform, delivering deep visibility, exceptional threat detection and rapid response across attack surfaces. Freed from log-centric ingestion, security teams detect incidents more precisely and can act more confidently.

Native correlation is just the beginning

With Symantec CBX, native telemetry correlation sits at the center of a vast array of advanced capabilities that, until today, were available only from multiple point solutions. In CBX, we have integrated breakthrough features from Symantec and Carbon Black that make teams smarter and more efficient. Here’s what security teams can look forward to:

AI that makes life easier for humans at the helm. We’ve strategically deployed AI to deliver meaningful improvements to security workflows, resulting in capabilities that simply aren’t available anywhere else. Take Carbon Black Threat Tracer, which allows any analyst to see all adversary activity in a single pane. (Even junior analysts can understand immediately where attackers came in, how they executed their attack and what data they accessed across endpoint, network, email and cloud environments.) The CBX platform also includes Symantec Adaptive Protection, which uses AI to stop living off the land (LOTL) attacks before they do damage. And Symantec’s Incident Prediction, the groundbreaking feature we introduced last year, predicts an attacker’s next four to five moves so teams can stop threat actors moving laterally to steal data or shut down systems.

More complete insights for faster remediation. Incident Summaries, another AI-powered feature, gathers comprehensive data about incidents and presents them in well-written, intuitive summaries and remediation guidance so any analyst can engage mitigation when and where it makes sense.

Enterprise-grade network and data protections. Drawing from the best of the Symantec Secure Web Gateway (SWG) and Symantec DLP solutions, this new XDR platform defends the network and data domains by stopping malicious traffic at the network edge, while packaging data security essentials from our acclaimed DLP offerings to ensure that sensitive data stays where it belongs. Via the integrated Symantec Cloud SWG Express, the platform even supports post-quantum cryptography protocols, shielding organizations from increasingly common “harvest now, decrypt later” attacks and relieving concerns over the prospect of attackers someday unlocking encrypted data.

Meaningful outcomes and rapid time to value. Security managers are expected to continuously improve their team’s performance, but that’s not easy when disjointed solutions create needless friction and confusion, and multiple dashboards steal time from an already busy day. We built Symantec CBX with the features and unified management console that enable the outcomes security teams need most: driving down SIEM and operational costs, rescuing analysts from alert fatigue, speeding time to resolution, meeting governance requirements and demonstrating progress by improving metrics.

Out-of-the-box policy configurations make CBX easy to implement and deliver immediate value.

The Goldilocks platform for the heart of the market

Symantec CBX is aimed squarely at the heart of the cybersecurity market, empowering and enabling security teams of virtually any size with a platform that puts them first. No other XDR solution is built so specifically for organizations laboring under tight budgets, too few resources, a persistent lack of senior expertise, chronic alert fatigue and the ever-more-daunting threat of AI-powered attacks.

Symantec CBX is the XDR platform for this moment and this market. As the first new solution from Broadcom to integrate capabilities from both Symantec and Carbon Black, CBX is the realization of our strategy to deliver on the “better together” pledge we made when these two legendary brands first came together under Broadcom’s Enterprise Security Group. And it’s the ideal solution for our global network of Catalyst Partners, with their deep regional expertise and close customer relationships, as they help organizations struggling to keep up in an environment of constant change and unrelenting challenges.

Overwhelmed security teams need an advantage, and now they have one.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Broadcom, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

This post originally appeared on Security.com, and is re-published with permission.

4 ways AI agents change the way we approach Identity Security

As if gaining visibility into all human and non-human identities wasn’t a big enough task for security teams, adding AI agents into the mix takes identity complexity to a new level. Organizations of all sizes are tackling this new reality, where it feels premature to confidently say they know about all the AI agents running in their environment. 

That uncertainty is not a knowledge gap. It is an attack surface. 

Gartner’s new report on IAM for AI agents names the real nugget of truth: “Purpose/intent cannot be discovered after the fact by monitoring and observability capabilities.”

That is not just analyst language. It is a fundamental shift in how we need to think about governing agents. You cannot govern agents by watching them after the fact. You must know who they are, what they are for, and who is accountable before they run.

The numbers that should change your priorities

Gartner’s data reinforces the urgency. By 2029, over 50% of successful attacks against AI agents will exploit access control weaknesses. A year earlier, by 2028, 90% of organizations that share credentials between humans and agents will need to make significant investments to undo that design.

Those numbers are consequences, not causes. The root cause is structural: IAM maturity for agents is uneven. The Gartner lifecycle maturity assessment makes this visible. Authentication and monitoring capabilities are relatively mature. Identity registration and authorization are not. That gap is the story. 

Weak identity registration means the agent was never properly onboarded as an identity. No defined owner. No declared purpose. No documented scope. It has credentials and it is running, but nobody can tell you who built it, what it is supposed to do, or what happens when it breaks. When registration is weak, ownership is unclear. And when ownership is unclear, accountability does not exist. 

Weak authorization means the agent has more access than it needs. It can reach databases, APIs, and workflows that have nothing to do with its intended function. Nobody scoped it down because nobody defined what “down” looks like. When authorization is weak, privilege is excessive.

Now combine excessive privilege with autonomy. An agent that can reason, chain tools, and act on its own, with more access than it should have, and no one clearly accountable for what it does. That is the exploitable attack surface. That is the chain revealed in Gartner’s data.

You cannot protect what you cannot see

Before you can govern agents, you need to find them. All of them. Not just the ones your platform team sanctioned. The ones that developers spun up to solve an issue. The ones contractors built. The ones that exist because someone needed to “just get this working.” 

We hear this consistently from security teams. As one InfoSec manager at a professional services firm put it: “We do not find out about it until someone goes and does an actual audit of the system.” 

Gartner’s assessment confirms it: identity registration is one of the least mature IAM capabilities for AI agents. Most organizations cannot answer the basics: What is this agent supposed to do? Who owns it? What happens when it breaks? 

Discovery is not a checkbox. It is the foundation. Without it, every policy you write is based on assumptions, and assumptions do not survive first contact with autonomous agents operating at machine speed.

The identity registration gap

Most organizations are trying to govern agents with the wrong tools. They are monitoring. They are logging. But monitoring tells you what happened. Identity registration tells you what should happen. Authorization enforces the boundary between them. 

If your governance model depends on catching problems after they occur, you are always going to be behind. 

This is where many organizations reach for familiar tools. IGA platforms can help with registration and lifecycle management. IAM solutions like Okta or Entra ID can register agent identities. These are necessary steps. But they stop there. They can tell you an agent exists and who requested it. They cannot enforce anything at the moment that agent acts. 

That is the gap: governance on paper versus enforcement in production. 

Agents are identities, but not like any you have managed before

The way I read Gartner’s recommendations, there is a unifying thread: treat AI agents like you would treat any identity in your organization. They authenticate. They access resources. They act on behalf of someone. That is not a tool. That is an identity. 

But agents are more complex than traditional identities. They are what we call composite identities. They combine the blast radius of service accounts with the unpredictability of human decision-making at machine speed.

Four reasons that make them different: 

  • They act autonomously, unlike service accounts that execute predefined operations.
  • They may inherit human delegation, creating privilege escalation risk.
  • They may chain multiple machine identities in a single task.
  • They may operate across trust boundaries your IAM system was not designed to handle.

Think about how you onboard an employee. You do not give them admin access on day one. You define their role, their manager, their scope. You review their access as responsibilities change. Agents need that same lifecycle. But right now, most organizations are skipping straight to “give them credentials and hope for the best.” 
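The onboarding analogy can be sketched as a registration record that must exist before any credentials are issued. Everything here, the `AgentIdentity` class, the sample owner and the tool names, is a hypothetical illustration, not Silverfort’s or any IAM vendor’s actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Registration record created before an agent ever gets credentials."""
    agent_id: str
    owner: str                 # an accountable human, not a team alias
    purpose: str               # declared intent, reviewable later
    allowed_tools: frozenset   # explicit scope; empty means "nothing"

def is_governable(agent: AgentIdentity) -> bool:
    """Without a declared owner and purpose, there is no accountability."""
    return bool(agent.owner and agent.purpose)

# A properly onboarded agent: owner, purpose and scope all declared.
report_bot = AgentIdentity(
    agent_id="agent-0042",
    owner="j.doe@example.gov",
    purpose="summarize weekly ticket queues",
    allowed_tools=frozenset({"tickets.read"}),
)

# An agent someone spun up to "just get this working": credentials, no record.
orphan = AgentIdentity(agent_id="agent-0099", owner="", purpose="",
                       allowed_tools=frozenset())
```

The point of the record is not bureaucracy: every later control, from scoping to revocation, hangs off the owner and purpose fields declared here.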

What runtime enforcement actually looks like

Gartner calls out the authorization gap. But what does closing that gap look like in practice? 

Even modern IAM systems, including conditional access and continuous evaluation, were designed primarily to evaluate who is signing in and what that identity is generally allowed to do. Agents introduce a different problem. They do not just sign in. They execute. They invoke tools dynamically. They operate across multiple identity contexts within a single task. 

Traditional conditional access evaluates who is signing in and under what conditions. Agent governance must also evaluate what is being executed, at the moment of execution.

Here is what that looks like: an agent is about to call a tool, read from a database, trigger an API, or execute a workflow. Before that happens, there is a decision point. Runtime enforcement evaluates the composite identity: the human owner, the agent itself, the tool credentials, and the defined purpose, all at execution time. Is this agent authenticated? Does it have permission for this specific action? Is this behavior consistent with its intended function? 

That is runtime enforcement. Not configuration-time policies that assume the agent will behave as designed. Decisions at execution time, every time.
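A stripped-down sketch of that decision point might look like the following. The registry structure and agent names are illustrative assumptions, not Silverfort’s actual enforcement engine; the point is only that the allow/deny decision consults the registration record at the moment the action is requested.

```python
def authorize(agent_id: str, action: str, registry: dict) -> str:
    """Make the allow/deny decision at the moment of execution.

    `registry` maps agent ids to their registration record (owner plus
    allowed actions). Unregistered agents are denied outright; registered
    agents may only perform actions inside their declared scope.
    """
    record = registry.get(agent_id)
    if record is None:
        return "deny"  # never registered: no owner, no accountability
    if action not in record["allowed_actions"]:
        return "deny"  # outside declared scope: excessive privilege
    return "allow"

registry = {
    "agent-0042": {
        "owner": "j.doe@example.gov",
        "allowed_actions": {"tickets.read"},
    },
}

print(authorize("agent-0042", "tickets.read", registry))  # allow
print(authorize("agent-0042", "db.write", registry))      # deny
print(authorize("agent-0099", "tickets.read", registry))  # deny
```

Note that the check runs on every action, not once at sign-in: an agent that drifts from its declared purpose is stopped at the first out-of-scope call.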

What Silverfort does differently

If the failure pattern is identity immaturity, then the control point must also be identity. Most AI agent security approaches start at the model or application layer. We start at the identity layer. Because if identity is uncontrolled, everything above is fragile. 

Human accountability by design

Every AI agent is explicitly tied to a real human owner in policy. Not informally. Not in documentation. In enforcement logic.

Every action can be traced back to a real chain of accountability: which human owns this agent, what identity the agent is operating under, and what credentials it uses to access resources. That is what we mean by composite identity. And it is what makes enforcement possible before monitoring even begins.

Runtime enforcement at the identity layer

Silverfort enforces at the identity decision point at runtime. For MCP-connected agents, that means sitting in line between the agent and the MCP server. For platform-native agents, enforcement is delivered through native integration, directly within the platform. 

Before a tool call executes, we evaluate identity, context, delegation, and policy in real time. If the action exceeds scope, it does not execute. This is not configuration-time IAM. This is execution-time identity enforcement. That distinction matters. 

Least privilege that survives autonomy

Static least privilege assumes predictable behavior. Agents break that assumption. They reason. They chain tools. They drift from what they were originally authorized to do. Least privilege must be validated at runtime, not just set at provisioning. 

That means if an agent tries to access a resource outside its declared purpose, it gets blocked. If delegated privileges start expanding beyond what was originally scoped, they are contained. This is the same enforcement model we apply to humans and service accounts, now extended to AI agents.

One Identity Security Platform

AI Agent Security is not a standalone product. Agents sit at the intersection of human identities, non-human identities, service accounts, cloud resources, SaaS applications, and protocol layers like MCP. If those domains are secured separately, agents will exploit the seams. 

Silverfort unifies this. One policy framework. One observability layer. One enforcement architecture. Across humans, machines, and AI. That is the architectural difference.

Enabling AI innovation without slowing it down

Security leaders are not trying to stop AI adoption. They are trying to make sure it does not outrun their ability to govern it. The organizations moving fastest with AI agents are the ones that figured out early: the right security model is a speed advantage, not a drag. 

Cars have brakes so you can drive fast. The same principle applies here. 

But the brakes only work if they’re connected to the same system. Today, most organizations secure human identities in one tool, service accounts in another, and AI agents (if at all) in a third. If those domains are secured separately, agents will exploit the seams.

That’s the reason teams need a unified Identity Security Platform:

  • One policy framework means a CISO can define “no agent accesses production data without human approval” once and have it applied across every agent, every platform, every protocol. No per-tool configuration. No coverage gaps.
  • One observability layer means when an agent acts, you see the full chain: which human triggered it, which NHI it authenticated with, which tool it called, and what data it touched. Not three dashboards stitched together after the fact, but a single view that makes incident response possible in minutes instead of days.
  • One enforcement point means policy is applied at runtime, at the moment of action, not retroactively through quarterly access reviews. When an agent requests access, the decision happens inline. Allow, deny, or step up. Before the action executes, not after. 
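The CISO example above, "no agent accesses production data without human approval," can be sketched as one rule evaluated inline for every agent. The `prod/` path convention and the `evaluate` function are illustrative assumptions, not a real product API.

```python
# One rule, defined once, evaluated inline for every agent on every platform:
# "no agent accesses production data without human approval".
def evaluate(resource: str, human_approved: bool) -> str:
    """Return the inline decision for a requested action."""
    if resource.startswith("prod/") and not human_approved:
        return "step-up"  # pause the action pending human approval
    return "allow"

print(evaluate("prod/customers.db", human_approved=False))  # step-up
print(evaluate("prod/customers.db", human_approved=True))   # allow
print(evaluate("staging/cache", human_approved=False))      # allow
```

Because there is a single decision function rather than per-tool configuration, the rule cannot silently diverge between platforms, which is the coverage-gap problem the bullets describe.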

This is what shifts AI agent security from a governance exercise to an operational capability. Discovery tells you what exists. Registration tells you who owns it. Runtime enforcement tells agents what they’re actually allowed to do, in the moment, every time. 

AI agents represent the next frontier of identity. Identity Security must evolve accordingly, from governance alone to continuous, runtime enforcement. Discover what is running. Register who owns it. Enforce at the moment of execution. That is the path. 

The Gartner report is worth reading in full: https://www.silverfort.com/landing-page/campaign/gartner-report-iam-for-agents/.

Want to learn how Silverfort discovers and protects AI agent identities? See AI Agent Security in action.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Silverfort, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

This post originally appeared on Silverfort.com, and is re-published with permission.

How Government Agencies Can Modernize Transportation with Uber for Business

State and Local Government agencies are under pressure to do more with less while still delivering reliable services. Transportation is fragmented in many agencies; larger ones may juggle four or five separate vendor contracts across departments. There is an over-reliance on legacy vendors that are significantly more expensive, including specialty vendors that are important for certain populations and services but may not be necessary for every rider. In many cases, these systems require rides to be booked days in advance, sometimes through offline means such as phone calls. This lack of centralization also limits reporting and visibility into how transportation dollars are being spent.

Uber for Business helps Government agencies move away from a fragmented model by offering a single enterprise platform that can support a variety of transportation needs across departments. With more than 9.4 million participating drivers and couriers, Uber has the largest rideshare network in the world. Centralized administration and reporting provide agencies with a complete view of their transportation programs while reducing the burden on staff who currently manage rides manually.

Supporting Employee Travel and Community Programs

Agencies are using Uber for Business in several capacities. One major use case is employee travel. Many agencies still rely on rental cars or motor pools for staff traveling for work. Uber for Business provides an alternative that can also augment existing fleet operations, helping reduce reliance on basic sedans while allowing fleet teams to focus on specialized vehicles. Agencies can set controls around who can ride, when they can ride and what trip options are available. This is especially appealing as many employees are already familiar with using Uber in their personal lives, making it a seamless and intuitive option to extend into official Government travel.

Agencies are also using Uber for Business to support community-facing programs:

  • Court systems use rideshares to transport victims and witnesses, ensuring they arrive reliably and on time with a mode of transportation they are familiar with.
  • Social service departments and similar programs use rideshare to close mobility gaps for the populations they serve, including workforce reentry, recidivism and youth and family programs that need reliable transportation to access essential services or job opportunities.
  • Public safety and transportation agencies leverage rideshare to support anti-driving under the influence (DUI) and safe ride campaigns, helping reduce impaired driving by providing residents with accessible transportation alternatives during high-risk times.

Delivering Value Quickly

One of the clearest advantages of Uber for Business is how quickly agencies can begin seeing value. For program managers responsible for overseeing social service and community programs, the benefits can be immediate when constituents are able to get where they need to go more reliably. Smoother transportation can make programs easier to manage and more effective overall.

Programs can be set up as fast as a couple of days. This speed can be especially important when agencies have immediate transportation needs or are looking for a fast, low-lift way to modernize existing processes.

Reducing Costs and Administrative Burden


Cost savings are another major driver for adoption. Through Uber’s partnership with Carahsoft, the solution is available through a National Association of State Procurement Officials (NASPO) agreement that includes built-in incentives for agencies. Uber also applies a tax exemption tag when setting up programs so eligible rides are exempt from applicable taxes.

Beyond discounts and tax advantages, agencies can realize significant operational efficiencies. Program managers no longer need to call in rides or worry about whether clients are reaching their destinations. Instead, they can see trips in real time, communicate with drivers during the booking process and distribute ride credits easily. These streamlined workflows reduce administrative effort and help programs run more efficiently.

Improving Visibility, Compliance and Oversight

For agencies in large counties, Uber for Business can be set up with a parent account that all department accounts fall under. This gives agencies centralized administration rights and better reporting across the organization. It also supports auditing and grant compliance by allowing administrators to view granular details for each trip.

Centralization also helps agencies capture unmanaged transportation spending that may otherwise happen informally across departments. Instead of relying on ad hoc rideshare use with little oversight, agencies can bring transportation activity into one system and enforce internal policies more consistently.

Enhancing the Transportation Experience

Ease of use is a major reason agencies are adopting Uber for Business. For riders, the biggest advantage is on-demand access. Rather than scheduling transportation days in advance, riders can get a trip when they need it. This flexibility can make a meaningful difference for participants in social service and workforce reentry programs, where reliable access to transportation can affect whether someone is able to reach work, court or other essential services.

Uber has also invested in accessibility features, building tools for riders who may not have a cell phone or the Uber app, as well as for those who speak another language or have low vision or hearing-related disabilities. For Government agencies focused on serving all constituents, not just most, these capabilities can help expand access and improve inclusivity.

A Centralized Transportation Strategy

According to Uber, the most successful deployments happen when an executive or procurement leader helps identify which departments across an agency could benefit from a more modern, efficient mobility solution. That agency-wide visibility makes it easier to structure the right program from the start, including setting up the parent account, selecting the right products for different departments and developing an implementation and training plan for staff. This kind of centralized planning can help agencies move beyond isolated pilots and create a transportation strategy that serves multiple departments and use cases through one platform.

For agencies just getting started, most programs can be up and running in less than a month. While some agencies may choose to run their own solicitation process, others can take advantage of existing contracts through NASPO and Carahsoft to start immediately. In emergency situations, deployment can be done within a day. Uber can move as fast as an agency requires.

As agencies look for ways to improve service delivery, manage budgets more carefully and give employees and constituents more reliable transportation options, Uber for Business provides a scalable and flexible model for modernization. From employee travel and fleet augmentation to court systems and social services, a centralized rideshare platform can help agencies simplify operations, improve oversight and better meet transportation needs across the communities they serve.

To learn more about how Uber provides modern travel and rideshare options to Government agencies, view their Uber for Business portfolio.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Uber for Business, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Integration Over Innovation: Cybersecurity’s Real Differentiator

Chief Information Security Officers (CISOs) and security leaders are navigating an overwhelming number of platforms, tools and point solutions, each promising to close gaps in an organization’s security posture. The cybersecurity market is accelerating toward Zero Trust Architectures (ZTAs), artificial intelligence (AI) and machine learning for threat detection and toward Extended Detection and Response (XDR) platforms, as organizations attempt to proactively identify and contain increasingly complex cyberattacks.

At the same time, rising concerns around supply chain exposure, remote workforce vulnerabilities and the rapid expansion of Internet of Things (IoT) and Operational Technology (OT) environments are fueling investments in managed security services, Secure Access Service Edge (SASE) and identity-centric controls.

Yet despite rapid innovation and a global cybersecurity market exceeding $200 billion, organizations continue to face breaches, operational disruptions and threats that slip past even sophisticated defenses. The issue is not a shortage of solutions—it is the complexity created when those solutions are deployed without operational alignment.

The Commercial CISO’s Distinct Mandate

The problem is not a lack of innovation; it is a lack of integration. Because commercial organizations are not bound to a single prescriptive security model (NIST, ISO 27001, SOC 2, etc.), every decision about what to buy, integrate and prioritize is made in the service of protecting:

  • The company
  • Customers
  • Employees
  • Daily operations

This imperative requires every tool, team and process to function as part of a coherent, connected system.

A breach is not just a security event; it is a reputational crisis, a failure of customer trust and a direct threat to revenue and competitive standing. The organizations best positioned to respond to evolving threats are not necessarily those with the most advanced individual tools, but rather those that have built environments where those tools work together.

The Integration Problem: When Tools Multiply, So Do the Gaps

Organizations must invest in cybersecurity deliberately. Pilots are often promising, and initial results can look impressive, but the real test comes in year two when hidden interoperability failures emerge. Across industries, tools that perform well in isolated environments often struggle when integrated into broader operations. The result is predictable: more complexity, slower response times and critical threats falling through the cracks.

As organizations expand across hybrid and multicloud environments, the attack surface grows more complex, increasing the need for interoperable systems rather than isolated tools. Security silos are not just an architectural inconvenience—they are an operational risk. When endpoint tools cannot exchange data with a Security Information and Event Management (SIEM) system, or identity management platforms operate independently from network monitoring, organizations lose the visibility needed to detect threats before they become incidents. In competitive markets, loss of visibility is measured not only in recovery costs, but also in eroded customer trust.
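The silo problem described above is easiest to see in the data itself: two tools describing the same user activity in incompatible shapes cannot be correlated until someone maps them onto a common schema. The sketch below is purely illustrative — the field names, event values and two-source setup are invented for the example, not drawn from any particular product:

```python
# Illustrative only: normalizing events from two hypothetical tools into one
# schema so a SIEM-style correlation rule can see them side by side.
# All field names here are invented for the example.

def normalize(raw: dict, source: str) -> dict:
    """Map tool-specific fields onto a single common event shape."""
    if source == "endpoint":
        return {"ts": raw["timestamp"], "host": raw["device"],
                "user": raw["account"], "action": raw["event_type"]}
    if source == "identity":
        return {"ts": raw["time"], "host": raw.get("workstation", "unknown"),
                "user": raw["principal"], "action": raw["activity"]}
    raise ValueError(f"unknown source: {source}")

# Two raw events about the same user, in incompatible native formats.
edr_event = {"timestamp": "2026-03-01T12:00:00Z", "device": "ws-042",
             "account": "jdoe", "event_type": "process_start"}
idp_event = {"time": "2026-03-01T12:00:05Z", "principal": "jdoe",
             "activity": "mfa_denied"}

timeline = [normalize(edr_event, "endpoint"), normalize(idp_event, "identity")]

# With one schema, a single rule can now correlate both events for "jdoe".
assert all(e["user"] == "jdoe" for e in timeline)
```

The point is not the mapping code itself but what its absence costs: without it, the suspicious process start and the failed MFA check live in separate consoles and never form one story.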

For commercial organizations, gaps have consequences beyond IT, affecting customer relationships, brand reputation, third-party liability and the bottom line. The lesson is not to stop investing in new capabilities. It is to recognize that the value of any tool is determined less by its individual features than by how effectively it connects with the systems around it. Integration is the differentiator between a security environment that performs under pressure and one that does not.

What Resilient Organizations Do Differently

For every commercial organization struggling with fragmented tools and reactive security, there are others that have made different decisions, and the difference is rarely budget or access to technology. It is discipline, prioritization and a deliberate commitment to building environments that hold together under real-world operational pressure.

Resilient organizations share a recognizable set of characteristics:

  • Operational consistency is prioritized over tool proliferation.
  • Security maturity is measured through effectiveness, not the number of solutions implemented.
  • Visibility is consolidated into unified frameworks that give security teams a coherent view of the threat landscape.
  • Rapid response is made possible through connected tools, clear escalation paths, tested playbooks, and teams that understand how their responsibilities fit into the broader security operation.

The fastest-growing segment of cybersecurity is not isolated tools, but AI-enabled platforms designed to unify detection, visibility and response across environments. According to Grand View Research, the cybersecurity market is evolving from standalone, reactive solutions toward integrated, intelligence-driven security frameworks that emphasize proactive detection and automated response as foundational elements of organizational resilience. Organizations that operationalize integrated detection and response frameworks are better positioned to reduce dwell time, contain incidents and minimize operational disruption.

Perspective Across the Ecosystem

As the Trusted IT Solutions Provider, Carahsoft works with 450+ vendors and 1,300+ resellers across multiple sectors, lending a key perspective: tools that succeed in pilot or concept fail if they do not integrate into the broader operational ecosystem.

Observing such patterns has helped CISOs prioritize solutions that actually reduce risk, and has provided insight into which integrations truly hold up under real-world operational pressure.

Organizations that succeed focus on building connected environments where people, tools and processes are aligned, rather than accumulating capabilities in isolation.

For CISOs and security leaders, the question is not whether to invest in innovative technology, but how to ensure every investment strengthens the whole, not just the individual part. Every investment should reinforce operational clarity, accelerate decision-making and reduce friction during high-pressure moments.

In a threat landscape defined by speed and complexity, integration is a strategic requirement. The organizations that recognize this will not just withstand disruptions; they will navigate them with confidence, resilience and a measurable competitive advantage.

Learn more about the leading cybersecurity solutions that are changing the way organizations are safeguarding their entire cyber ecosystem by exploring Carahsoft’s expansive Cybersecurity Portfolio.

Ignite. Innovate. Impact: Key Takeaways from NAWB The Forum 2026

For the first time in over 40 years, the National Association of Workforce Boards (NAWB) took its premier annual event on the road, landing in Las Vegas for The Forum 2026. This year’s theme, “Ignite. Innovate. Impact,” signaled a bold shift in how the workforce system addresses rapid economic change, emerging technology and legislative uncertainty.

Whether you missed the sessions or just need a refresher to share with your board, here is a summary of the major trends and tactical insights that defined the conference.

1. The Era of Generative AI: From Hype to Implementation

Perhaps the biggest “main stage” topic this year was the shift from talking about AI to using it. Sessions like “What AI ISN’T: Rethinking ChatGPT and Policy” and “The Current State of AI in Workforce Development” moved past the buzzwords.

Key Takeaways:

  • Capacity Building: AI is being framed as a tool to “do more with less” as boards face funding constraints. By automating routine administrative tasks, staff can shift focus to high-value human services like coaching and relationship building.
  • The “Human” Edge: Despite the automation, speakers emphasized that AI-exposed occupations still require human judgment, creativity and “core employability skills” (soft skills), which workforce boards are uniquely positioned to teach.
  • New Credentials: Discussion centered on emerging credentials for AI quality assurance, prompt design and data annotation as new entry points for job seekers.

2. Advocacy & WIOA Reauthorization

With the workforce system at a crossroads, advocacy was a central pillar of the 2026 agenda. The message from the “Inside the Beltway” updates was clear: workforce boards must be their own best storytellers.

Strategic Priorities:

  • WIOA Flexibility: NAWB continues to push for the reauthorization of the Workforce Innovation and Opportunity Act (WIOA), specifically advocating against “one-size-fits-all” mandates and for the reduction of state-level set-asides (from 15% to 10%) to return more funding to local control.
  • Data-driven evidence: Utilize current employment data from authoritative sources to substantiate your achievements.
  • Short-Term Pell: There was significant momentum around expanding Pell Grant eligibility for high-quality, short-term skills development programs that align with in-demand careers.

3. Solving the Childcare & Trades Equation

A standout session focused on the intersection of labor and family support: “Meeting Big Needs with Big Solutions.” Using Pierce County Labor and the Machinists Institute as a model, the session explored how investing in childcare for trades workers is no longer a “benefit” but a critical infrastructure requirement for a stable workforce.

4. Expanding the Apprenticeship Model

Registered Apprenticeships (RA) were highlighted as the gold standard for sustainable sector pipelines.

  • Influence Meets Industry: Sessions focused on making RA a “household name” beyond just the construction trades, expanding into Logistics, Electric Vehicles (EV) and even Childcare.
  • Public-Private Funding: A major theme was leveraging diverse funding streams (not just WIOA) to sustain apprenticeship momentum during economic shifts.

5. Organizational Resilience & Leadership

For Executive Directors and Board Chairs, the conference offered a deep dive into “Full Throttle Leadership.”

  • Contingency Planning: A specialized pre-conference session focused on helping boards navigate labor market shocks and talent shortages with decisive, proactive planning.
  • Culture Matters: Insights from the Eastern Kentucky Concentrated Employment Program (EKCEP) highlighted how a “culture of performance” can increase engagement among employees and elected officials alike.

Why it Matters for Our Community

The shift to Las Vegas was more than a venue change; it was a metaphor for the “nationwide tour of innovation” that NAWB is championing. The 2026 Forum made it clear that the future of work isn’t just about jobs; it’s about ecosystems.

As we bring these insights back to our local regions, our focus should remain on:

  1. Embracing AI ethically to improve service delivery.
  2. Advocating for local control and flexible funding.
  3. Integrating supportive services (like childcare) directly into our workforce strategies.

We had a great time and learned a lot. Schedule a meeting to chat more about the conference.

Keep More, Store Less: The Case for Advanced Compression in Federal EDR

How agencies can retain full-fidelity data without overspending on storage

Endpoint detection and response (EDR) depends on data. The more telemetry you collect, the more context you have to detect threats, investigate incidents and meet Federal compliance requirements.

But data volume is also the problem. Federal agencies generate massive amounts of endpoint telemetry every day. Process activity. File changes. Network connections. User behavior. Multiply that across thousands of devices and storage requirements quickly grow beyond what many teams can sustain.

Security teams often face a difficult tradeoff: retain full-fidelity data and absorb higher storage costs, or limit retention and risk losing critical visibility.

That tradeoff is no longer necessary. Advanced data compression changes the economics of endpoint visibility. Agencies can retain unfiltered telemetry for extended periods without expanding storage budgets or adding operational complexity.

The Visibility–Storage Tradeoff Is No Longer Sustainable

Federal cybersecurity requirements continue to raise the bar for telemetry collection and retention. Agencies must support Zero Trust initiatives, continuous monitoring programs and audit readiness. Modernization efforts increase the number of connected endpoints, including cloud workloads, remote systems and contractor-managed devices. Each new endpoint expands the telemetry footprint.

At the same time, budgets remain under scrutiny. Storage infrastructure must compete with other mission priorities and security leaders must justify every dollar. When storage costs climb, teams often respond in predictable ways:

  • Reduce retention windows
  • Sample or filter telemetry
  • Drop lower-priority event types
  • Offload data to external archives that are difficult to query

Each of these approaches creates blind spots. Shorter retention windows limit historical investigations, filtered data weakens threat hunting and fragmented storage slows response times.

In a threat context where adversaries can dwell quietly for months, incomplete data is a liability. Agencies need a way to collect and retain comprehensive telemetry without creating unsustainable storage growth.
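The scale problem is easy to quantify with a back-of-envelope estimate. The figures below are illustrative assumptions, not vendor benchmarks — actual per-endpoint telemetry volume and achievable compression ratios vary widely by environment:

```python
# Back-of-envelope estimate of endpoint telemetry storage over a retention
# window. All inputs are illustrative assumptions, not measured values.

ENDPOINTS = 10_000            # managed devices (assumed)
MB_PER_ENDPOINT_PER_DAY = 5   # raw telemetry per device per day (assumed)
RETENTION_DAYS = 180          # target retention window

def storage_tb(compression_ratio: float) -> float:
    """Total storage in terabytes for the full retention window."""
    raw_mb = ENDPOINTS * MB_PER_ENDPOINT_PER_DAY * RETENTION_DAYS
    return raw_mb / compression_ratio / 1_000_000  # MB -> TB

print(f"Uncompressed: {storage_tb(1.0):.1f} TB")              # → 9.0 TB
print(f"10:1 compression at ingest: {storage_tb(10.0):.1f} TB")  # → 0.9 TB
```

Even with modest assumptions, an order-of-magnitude compression ratio is the difference between a retention window that fits the budget and one that forces the filtering and sampling described above.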

Compression-First Architectures Improve Data Retention

Traditional security platforms treat compression as an afterthought. Data is collected at scale, stored in raw or lightly optimized formats and compressed later in the pipeline. By then, infrastructure costs are already locked in.

A compression-first architecture takes a different approach. Advanced compression techniques reduce data size at ingest. Telemetry is optimized as it enters the platform, not after it has consumed storage resources. The result is a significantly smaller storage footprint without sacrificing fidelity. For Federal security operations centers (SOCs), this shift has meaningful impact:

  • Longer retention without higher cost – Agencies can retain 180 days or more of full-fidelity telemetry while remaining within budget constraints.
  • Unfiltered visibility – Teams do not need to decide in advance which data might matter later. They can keep it all.
  • Faster investigations – Optimized storage enables efficient querying across large datasets, supporting threat hunting and incident response.
  • Simplified architecture – Native compression reduces the need for external storage tiers or complex archival systems.

Instead of managing tradeoffs, security teams regain flexibility.
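The compress-at-ingest idea can be sketched in a few lines. This is a minimal illustration of the pattern only — stdlib `zlib` stands in for whatever codec and storage layer a real platform uses, and the event shape is invented:

```python
# Minimal sketch of compression-first ingest: each telemetry record is
# compressed as it arrives, before it ever occupies raw storage.
# zlib is a stand-in codec; the event fields are invented for the example.
import json
import zlib

def ingest(event: dict, store: list) -> None:
    """Compress a record at ingest time and append it to the store."""
    store.append(zlib.compress(json.dumps(event).encode(), level=6))

def query(store: list) -> list:
    """Decompress on demand; full fidelity is preserved."""
    return [json.loads(zlib.decompress(blob)) for blob in store]

store: list = []
event = {"host": "ws-042", "proc": "powershell.exe",
         "net": ["10.0.0.7:443"] * 50}  # repetitive telemetry compresses well
ingest(event, store)

raw_size = len(json.dumps(event).encode())
compressed_size = len(store[0])
assert compressed_size < raw_size        # smaller footprint at ingest
assert query(store)[0] == event          # no fidelity lost on readback
```

Because endpoint telemetry is highly repetitive (the same hosts, processes and destinations recur constantly), it compresses far better than general-purpose data, which is what makes the longer retention windows described above affordable.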

Full-Fidelity Data Supports Compliance and Zero Trust

Federal mandates increasingly require measurable security maturity. Continuous monitoring, device-level visibility and documented audit trails are central to that effort, and retention depth matters.

When agencies can access complete endpoint histories, they strengthen their ability to:

  • Validate Zero Trust controls within the device pillar
  • Reconstruct events during forensic investigations
  • Demonstrate compliance with evolving Federal security requirements
  • Support reporting obligations tied to vulnerability and risk management

Short retention windows make it harder to answer fundamental questions: When did this behavior begin? Was lateral movement attempted? Did similar activity occur on other systems?

With compressed full-fidelity data, those questions become easier to answer and teams can look back months, not days. This level of historical visibility supports stronger analytics, more informed risk decisions and more defensible reporting.

Cost Efficiency Matters Under Federal Scrutiny

Every Federal technology investment must demonstrate operational value. Advanced compression directly addresses cost concerns in several ways:

  • Reduces total storage consumption
  • Delays or eliminates additional infrastructure purchases
  • Lowers operational overhead tied to managing multiple storage systems
  • Minimizes data movement between tiers

At the same time, it strengthens the overall security posture by preserving data that might otherwise be discarded. This combination of efficiency and depth is particularly important for agencies balancing modernization initiatives with budget discipline.

Security cannot become a cost center that expands without limit. It must scale responsibly. Compression-first EDR architecture supports that balance.

The Federal security community no longer needs to accept a compromise between cost and visibility. Advanced data compression enables agencies to:

  • Collect unfiltered endpoint telemetry
  • Retain data for extended periods
  • Support Zero Trust maturity
  • Strengthen investigative capabilities
  • Maintain fiscal discipline

As agencies define the next standard for Federal EDR, data strategy must be part of the conversation. Retention, accessibility and efficiency determine whether telemetry delivers long-term value.

Carbon Black and Carahsoft help Federal agencies adopt a compression-first approach to endpoint detection and response, so teams can keep more data, store less and operate with confidence.

Contact us to learn how your agency can adopt a compression-first approach to endpoint visibility while staying within budget.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Broadcom, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

The Top 5 Insights for Government from HIMSS 2026 

Healthcare and technology leaders convened at the Healthcare Information and Management Systems Society (HIMSS) 2026 conference with a shared sense of urgency as the Federal health ecosystem is undergoing one of its most significant transformations in decades. Across panel sessions, discussions highlighted both the structural challenges and strategic investments shaping Government health agencies, from modernizing public health data infrastructure to addressing long-standing interoperability barriers that have fragmented care delivery.  

Five critical insights emerged that define a path toward a more connected, data-driven and patient-centered Federal healthcare system. 

Federal AI Policy Is Being Rebuilt Around Coordination, Not Fragmentation 

Leaders from the Department of Health and Human Services (HHS) emphasized that agency-by-agency artificial intelligence (AI) experimentation is ending. With dozens of programs across its divisions, HHS has restructured its AI strategy around three coordinated pillars: regulation, reimbursement and research/development.  

Historically fragmented efforts created conflicting signals and limited cross-agency innovation. Now, the Secretary’s office serves as an alignment layer, ensuring regulatory decisions at the Food and Drug Administration (FDA), reimbursement policies at the Centers for Medicare and Medicaid Services (CMS) and research investments at the Advanced Research Projects Agency for Health (ARPA-H) are coordinated. The goal is not to expand Government roles, but to remove barriers and accelerate adoption of existing technologies. 

The FDA is rethinking how AI-enabled medical technologies are regulated. After authorizing more than 1,000 AI and machine learning products, primarily in radiology but expanding into other domains, the agency recognizes the limits of a pre-market framework designed for static hardware, not continuously evolving software. Leaders described a shift toward lighter pre-market review paired with stronger post-market surveillance, focusing on real-world performance, model drift and patient outcomes. This approach requires new regulatory frameworks and enhanced data-sharing between developers, providers and regulators.  

ARPA-H complements this work by funding high-risk, high-reward innovations not supported through traditional mechanisms. Notably, no generative AI (GenAI) technology capable of providing clinical care has received FDA authorization, a gap the agency aims to close. One flagship initiative supports AI systems capable of performing comprehensive physician functions, developed alongside the FDA to establish new regulatory pathways. Additionally, ARPA-H is investing in “supervising agents,” systems that monitor and control deployed AI, addressing the scalability limits of human oversight. 

The VIP Sets a New National Standard for Health Data Exchange 

The Department of Veterans Affairs (VA) positioned itself as a national convener for interoperability through the Veteran Interoperability Pledge (VIP), which unites leading health systems to improve care coordination for veterans regardless of where they receive care.  

Grounded in the Elizabeth Dole Act, the initiative mandates rapid adoption of national interoperability standards across care coordination, benefits, identity matching, quality measurement and public health. VA leaders outlined a layered interoperability model—from foundational standards such as X12, Fast Healthcare Interoperability Resources (FHIR) and Bulk FHIR, to data quality frameworks like Patient Information Quality Improvement (PIQI) and ultimately to advanced analytics and decision support. The key message: interoperability is foundational, but value is created through what is built on top of it. 

Operationally, the VIP is already enabling real-world capabilities. The Veteran Confirmation Application Programming Interface (API) allows Electronic Health Records (EHRs) to verify veteran status in real time, supporting eligibility recommendations under the Promise to Address Comprehensive Toxics (PACT) Act and the Comprehensive Prevention, Access to Care and Treatment (COMPACT) Act. Two workgroups are developing recommendations for identity verification and care coordination workflows, targeting submission by the end of March. A structured cadence of monthly plenaries and bi-weekly workgroups ensures continuous alignment between policy, standards and implementation. 

Seamless Collaboration Requires Breaking Down Technical and Cultural Barriers 

Federal, State and Local leaders underscored that populations served by multiple programs cannot be effectively supported by siloed agencies. Both technical and cultural barriers must be addressed simultaneously. 

At the Federal level, CMS, VA and the Indian Health Service (IHS) are advancing shared infrastructure and lowering redundancy. CMS is transitioning from Government-developed systems to commercial platforms, accelerating innovation and enabling AI tools that now reach approximately 80% of its workforce, saving an estimated 5.5 hours per employee weekly. The agency is also adopting a multicloud strategy for resilience and fostering talent pipelines through partnerships with institutions like the University of Maryland. 

IHS is undergoing a similar transition to commercial platforms, improving AI integration and expanding access to advanced tools in rural and tribal communities. Enterprise services help ensure equitable access where local technical resources are limited. The VA is modernizing security processes to reduce delays in technology adoption and leveraging physical locations to support identity verification, improving access for veterans struggling with digital enrollment. 

Bridging the digital divide also requires workforce and literacy solutions. Baltimore City panelists highlighted the need to translate Federal data into local action, particularly around social determinants of health, including housing and economic mobility. Community health workers were cited as essential connectors and should be integrated into digital strategies from the outset. 

Public Health Data Infrastructure Must Shift from Detection to Prediction 

The Centers for Disease Control and Prevention (CDC) acknowledged that current public health infrastructure is designed for detection, not prediction. While improvements have been made since COVID-19, a broader transformation is still underway.  

The One CDC Data Platform (1CDP) serves as a central hub, enabling flexible data exchange, reusable capabilities and advanced analytics. Its purpose is to shift focus from manual data processing to proactive analysis and decision making. Leaders envision disease forecasting becoming as routine as weather forecasting, with real-time modeling to guide early intervention. 

State-level examples illustrate this shift. Illinois is consolidating siloed systems into a unified cloud platform, while addressing cultural resistance to data sharing. Louisiana is focusing on targeted, use-case-driven improvements tied to Medicaid and public health outcomes. Mississippi is prioritizing foundational infrastructure and workforce readiness before scaling analytics. Across all three states, the consensus is clear that interoperability only delivers value when tied to actionable outcomes. 

The VA’s NextGen CCN Redesigns Care Delivery at National Scale 

Community care is one of the fastest-growing components of the VA healthcare system. Of the 17 million veterans served, roughly 6.3 million use VA healthcare annually, with 2-3 million accessing community providers. Programs introduced through the Choice Act and Maintaining Internal Systems and Strengthening Integrated Outside Networks (MISSION) Act expanded access but created operational and financial complexity. 

The Next Generation Community Care Network (NextGen CCN) addresses these challenges through a comprehensive redesign of how the VA manages external care. Expected to launch in early 2027, the program introduces a more competitive ecosystem involving insurers, providers and technology partners. 

Key capabilities include improved care coordination, real-time data exchange, standardized quality benchmarks and outcomes-based reimbursement. Interoperability is foundational to these goals, enabling performance measurement and accountability. The program also prioritizes transparency and trust across stakeholders, ensuring a shared understanding of care delivery. Together, these efforts are designed to position the VA to deliver high-quality, fiscally responsible care while continuing to expand access for a veteran population whose demographics and care needs are rapidly evolving. 

Charting the Course for Federal Health IT Modernization 

HIMSS 2026 reinforced that progress in Federal healthcare requires aligned investment across AI governance, interoperability, cross-agency collaboration, data infrastructure and care delivery redesign. Government health agencies are not simply adding new technologies onto existing systems; they are rethinking how they organize, share data and operate as an integrated ecosystem. Sustained success will depend on aligned standards, cultural transformation and technologies that translate strategy into measurable outcomes. 

As Carahsoft, The Trusted Government IT Solutions Provider™, continues supporting Federal health IT modernization, these insights inform how industry can partner with Government to deliver a more connected, data-driven and patient-centered healthcare system. 

Explore Carahsoft’s Healthcare Technology portfolio of leading solutions that support Federal healthcare modernization priorities including AI, interoperability, cloud infrastructure and advanced analytics. 

Contact the Health IT Team at Healthcare@Carahsoft.com or (571) 591-6080 to learn more. 

Why Supply Chain Risk Management is Now a Public Sector Resilience Priority

From ransomware disrupting city services to vendor failures impacting school operations, supply chain failures seem to be dominating the headlines lately. Naturally, whether your organization is in the Private or Public Sector, you’ll want to avoid attracting attention for the wrong reasons.

The best way to do that is to prioritize implementing best practices to safeguard critical vendors and services from cybersecurity risks and operational disruptions. In this guide, we’ll cover the NIST framework, how it applies to Public Sector organizations and how you can use NIST best practices to reduce risk and maintain public trust. Even private sector teams increasingly rely on NIST supply chain risk management practices when working with Government partners, especially across information technology environments.

Why Is Supply Chain Risk Management Important?

Managing supplier risk should be a fundamental part of any data-driven business's operations, but it's all the more important for Public Sector organizations, whether that means Federal, State or Local services.

Why? Without clear practices for identifying, assessing and mitigating vendor and operational risk, you could expose your organization to a whole host of potential issues, including:

  • Financial losses: Even nonprofit organizations depend on reliable financial backing from Governments and other entities. Those revenue streams can be endangered when an overlooked security risk becomes an operational disruption.
  • Reputational damage: Eroded consumer trust can be as costly as any disruption in service or productivity. When your organization attracts the wrong kind of attention, like for suffering a data breach or failing to fulfill obligations, earning that trust back can be a difficult feat.
  • Regulatory violations: In worst-case scenarios, failing to catch a supply chain risk before it becomes a major problem can lead to your organization falling afoul of relevant regulations and facing stiff consequences like fines or legal fees.

Learn more: Quick Guide: What is Operational Risk Management?

When Does an Organization Need a Supply Chain Risk Management Framework?

The purpose of using a risk management framework is to standardize the process of identifying, assessing and mitigating potential threats and vulnerabilities to your organization’s supply chain. If your organization’s ability to provide services, attract new users and secure funding would be severely impacted by a potential data breach or supply chain disruption, then you’d most likely benefit from using a framework to ensure consistent supplier security.

State, Local and Education (SLED) entities are all the more likely to need a framework for regulating risk assessments and mitigation steps. Since the services provided by these entities are typically essential to a community, it's that much more important to take all necessary actions to secure your supply chain and prevent service interruptions whenever possible.

What Is the NIST Risk Management Framework?

The National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) is the go-to solution public service organizations have been using to mitigate vendor, technology and cybersecurity risks for more than a decade. Aligned with the requirements of the Federal Information Security Modernization Act (FISMA) of 2014, this framework can be used to set risk management standards across Federal agencies and the organizations that work with them.

Today, the NIST framework is a main point of reference for any organization looking to implement a secure and reliable process for managing cybersecurity risks and other potential supply chain issues. The framework is a living document regularly updated to meet the latest challenges in the data privacy space.

Learn more: What is NIST RMF? Risk Management Framework

What Are the NIST Best Practices for Supply Chain Management?

The 2022 revision of NIST SP 800-161 offers comprehensive guidelines for handling supply chain risks related to information and communications technology. These recommendations are divided into three main categories: foundational practices, sustaining practices and enhancing practices.

Think of these categories as sequential stages. You’ll need to implement foundational practices before you move on to sustaining practices, and sustaining must come before enhancing.

1. Foundational Practices: Establishing a Process for Supply Chain Risk Management

Some of the best practices recommended in NIST SP 800-161 for creating a foundation for a supply chain risk management process include:

  • Dedicate a multidisciplinary team to your vendor and technology risk oversight
  • Create and fill dedicated roles for risk oversight procedures
  • Gain support from senior leadership to ensure adequate resources
  • Implement a governance hierarchy and a governance structure
  • Codify processes for identifying and assessing the criticality of your suppliers, products and services, and for conducting formal risk assessments, preferably using FIPS 199 impact levels
  • Establish internal checks and balances for compliance
  • Integrate risk oversight practices into your policies regarding supplier selection
  • Raise internal awareness and understanding of the importance of supply chain risk management
  • Create processes and practices for quality control and consistent development practices
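
The FIPS 199 impact levels referenced above follow a simple "high-water mark" rule: a system's overall security categorization is the highest impact level (Low, Moderate or High) assigned to any of its confidentiality, integrity and availability objectives. A minimal sketch of that rule, where the system names and per-objective ratings are purely hypothetical:

```python
# FIPS 199 "high-water mark" categorization: a system's overall impact level
# is the highest level assigned to any of its three security objectives
# (confidentiality, integrity, availability).

LEVELS = ["LOW", "MODERATE", "HIGH"]  # ordered least to most severe

def categorize(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall FIPS 199 categorization for one system."""
    ratings = (confidentiality, integrity, availability)
    return max(ratings, key=LEVELS.index)

# Hypothetical supplier-provided systems rated per objective
systems = {
    "payroll_saas":   ("MODERATE", "HIGH", "LOW"),
    "public_website": ("LOW", "LOW", "MODERATE"),
}

for name, cia in systems.items():
    print(f"{name}: {categorize(*cia)}")
```

In practice, these categorizations feed directly into the criticality assessments of suppliers, products and services described in the bullet above.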

Learn more: Guide: Risk Management Strategies To Future-Proof Your Organization

2. Sustaining Practices: Improving the Efficacy of Your Supply Chain Risk Management

Some of the best practices recommended in NIST SP 800-161 for building on your foundational risk management processes include:

  • Implement third-party risk assessments
  • Create a program for monitoring suppliers
  • Define and quantify levels of acceptable risk
  • Determine key supplier risk metrics and create procedures for tracking and reporting them
  • Formalize your information sharing procedures
  • Establish a training program for vendor risk practices
  • Integrate supply chain risk management practices into your supplier contracts
  • Solicit supplier participation in contingency planning and incident response
  • Collaborate with suppliers to address risk factors
  • Expand supply chain risk management training to all applicable roles across your organization
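
Defining acceptable risk levels and tracking key supplier metrics, as the practices above recommend, can start as simply as a weighted score checked against an organizational threshold. The metrics, weights and threshold below are illustrative assumptions, not values prescribed by NIST SP 800-161:

```python
# Illustrative weighted supplier risk score. Each metric is rated 0-10
# (10 = worst); the weighted sum is compared against an acceptable-risk
# threshold the organization defines. Metric names, weights and the
# threshold are hypothetical examples, not NIST-prescribed values.

WEIGHTS = {"security_posture": 0.4, "financial_health": 0.2,
           "incident_history": 0.3, "concentration": 0.1}
ACCEPTABLE_RISK = 5.0  # scores above this trigger a formal review

def risk_score(metrics: dict) -> float:
    """Weighted sum of supplier metric ratings, rounded for reporting."""
    return round(sum(WEIGHTS[m] * v for m, v in metrics.items()), 2)

supplier = {"security_posture": 7, "financial_health": 3,
            "incident_history": 6, "concentration": 2}

score = risk_score(supplier)
print(score, "REVIEW" if score > ACCEPTABLE_RISK else "ACCEPTABLE")
```

Tracking a score like this over time gives the monitoring program a consistent, reportable metric per supplier.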

Learn more: How to Mitigate Third-Party Risks in Your Supply Chain

3. Enhancing Practices: Predicting Supply Chain Issues Before They Impact Your Business

Some of the best practices recommended in NIST SP 800-161 for building a structured supply chain risk management program include:

  • Codify processes for quantitative risk analysis, optimize risk response resources and measure your return on investment
  • Use insights gained over time to identify key risk factors and create predictive strategies to address risks before they arise
  • Introduce automation into your cybersecurity oversight procedures whenever possible
  • Join a community of practice where you can improve your cybersecurity risk management practices
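
Quantitative risk analysis and return-on-investment measurement often start from the classic annualized loss expectancy (ALE) model: ALE equals single loss expectancy (SLE, the asset value times an exposure factor) multiplied by the annualized rate of occurrence. A sketch with entirely made-up figures:

```python
# Classic quantitative risk model: ALE = SLE x ARO, where
# SLE (single loss expectancy) = asset value x exposure factor.
# All dollar figures and occurrence rates below are hypothetical.

def ale(asset_value: float, exposure_factor: float,
        occurrences_per_year: float) -> float:
    sle = asset_value * exposure_factor   # loss from a single incident
    return sle * occurrences_per_year     # expected annual loss

before = ale(asset_value=500_000, exposure_factor=0.4, occurrences_per_year=0.5)
after = ale(asset_value=500_000, exposure_factor=0.4, occurrences_per_year=0.1)
control_cost = 30_000

# Rough ROI of a control: annual loss avoided minus what the control costs
print(f"ALE before: ${before:,.0f}, after: ${after:,.0f}, "
      f"net benefit: ${before - after - control_cost:,.0f}")
```

Comparing the loss a control avoids against its cost is one common way to "optimize risk response resources and measure your return on investment," as the first bullet above puts it.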

Learn more: 5 Reasons Your Company Should Automate Third-Party Risk Management – Onspring

Additional NIST Resources

Organizations implementing a supply chain risk management program often reference several complementary NIST publications, such as NIST SP 800-37 (the Risk Management Framework) and NIST SP 800-53 (security and privacy controls for information systems).

How to Future-Proof Your Vendor Risk Program

It’s impossible to overstate the importance of recognizing and addressing risk factors in your supply chain when your organization is responsible for providing or securing State and Local services. The best guide to follow when establishing or enhancing your supplier risk program is the NIST Risk Management Framework. A structured platform can help Public Sector teams manage these challenges more effectively while taking advantage of AI advancements without exposing their organizations to unnecessary risk.

See how Onspring’s platform supports these efforts and get a demo today.

How AI is Reshaping Courts and Legal Operations 

The conversation around artificial intelligence (AI) in the legal system has fundamentally shifted from courts and legal organizations debating whether it belongs in legal environments to how to integrate AI responsibly into daily operations. For courts facing expanding caseloads, staffing shortages and budget constraints, AI-powered legal technologies have become operational tools for improving efficiency, access to justice and administrative effectiveness across the legal lifecycle. While AI can significantly enhance legal workflows, responsibility for judgment, accuracy and decision-making must remain with human professionals. 

From Policy Discussion to Practical Adoption 

The American Bar Association’s (ABA) Year 2 Report on the Impact of AI on the Practice of Law makes clear that AI adoption in the legal profession has entered a new phase. Early concerns centered on ethics, confidentiality and professional responsibility. Today, the focus has shifted toward responsible deployment, governance and workflow integration where efficiency gains are immediate and measurable. These applications allow courts to redirect limited staff resources toward higher-value legal and judicial work rather than routine manual processes. 

Common AI-enabled courtroom use cases already in practice include: 

  • Organizing and searching large volumes of filings, briefs and evidence 
  • Creating unofficial or preliminary real-time transcriptions 
  • Summarizing motions, exhibits and prior case materials 
  • Supporting scheduling, workload analysis and calendar management 

This is especially important for Federal, State and Local courts that must maintain service levels despite limited resources. AI-enabled legal technologies provide a validated path to modernizing court operations while preserving judicial independence, transparency and accountability. 

Real-World Applications Delivering Value 

AI adoption is already producing tangible operational benefits across court systems. 

Administrative and workflow automation applications include drafting routine administrative orders and standard court notices, managing scheduling and calendar coordination, conducting workload studies and organizing court documents and filings for improved retrieval. These implementations reduce administrative burden while improving consistency in standard legal processes. 

Document review and case support capabilities allow legal teams to summarize briefs, motions, pleadings, depositions and exhibits at scale. AI systems create timelines of relevant events across large case records and assist with legal research when trained on reputable legal authorities. Some implementations identify misstated law or omitted legal authority in filings, though human verification remains mandatory for all outputs. 

Transcription, translation and accessibility services are also being rapidly adopted. Courts are generating unofficial or preliminary real-time transcriptions to accelerate case documentation. Systems provide preliminary translations of foreign-language documents and support accessibility services for self-represented litigants navigating complex court procedures. These applications expand access to justice by reducing cost barriers and improving navigation of legal systems for citizens. 

Scaling Court Operations Under Budget Constraints 

Rising caseloads combined with constrained budgets make AI adoption particularly relevant for Government legal operations. Technology adoption has emerged as the primary driver of scalability for courts that cannot expand head count. By automating manual processes such as transcription, document review, evidence management and research, AI allows existing staff to handle higher volumes while maintaining or improving service quality.  

This approach aligns with broader access-to-justice goals highlighted in the ABA report. AI-enabled tools are already helping courts improve case management, streamline dispute resolution processes and support self-represented litigants through better access to information and court services. These gains are particularly impactful for jurisdictions seeking to modernize legacy systems while preserving fairness, transparency and judicial independence. 

Human Oversight and Accountability 

While AI delivers meaningful efficiency gains, the ABA report stresses that AI-generated outputs may appear authoritative while containing factual or legal inaccuracies. The risk of hallucinations has not been fully resolved in any current generative AI (GenAI) tools. As a result, AI should not replace judges or court staff, nor should it be treated as an authoritative source of truth. Instead, AI should serve as an assistive technology that augments human expertise, improving documentation quality, accelerating research and making information more accessible. 

Judicial guidelines outlined in the report reinforce several critical principles: 

  • Judges and attorneys remain fully responsible for accuracy and legal reasoning 
  • AI-generated content must always be reviewed for correctness and relevance 
  • Overreliance on AI can introduce risks such as automation bias or misinformation 

Courts adopting AI must establish clear governance frameworks that address privacy, security, transparency and oversight. Human verification of AI outputs is essential to ensuring that AI enhances documentation quality and accelerates legal research without compromising accuracy, professional responsibility and public trust. 

Responsible Adoption Through Trusted Procurement 

The ABA emphasizes that responsible AI adoption is not optional; it is a leadership responsibility. Human oversight, ethical use policies and ongoing evaluation remain essential to ensuring AI strengthens, rather than undermines, trust in the justice system. 

Carahsoft, The Trusted Government IT Solutions Provider®, works with leading legal tech software providers to help Federal, State and Local courts modernize legacy systems, reduce administrative burden and implement AI responsibly at scale. By making these technologies accessible through trusted procurement vehicles, Carahsoft enables courts and Government legal organizations to adopt AI while aligning with established legal, ethical and operational requirements.  

AI is not a substitute for legal expertise, but it is quickly becoming an indispensable tool for courts seeking efficiency, consistency and scalability. By procuring AI solutions through Carahsoft, Government courts can ensure their modernization demands will be met while maintaining legal and ethical standards. As AI continues to reshape legal operations, organizations that pair technology deployment with clear governance, training and accountability frameworks will be better positioned to deliver improved services to the public.  

Ready to explore AI-enabled legal technology solutions? Explore Carahsoft’s Legal & Courtroom Technology Solutions portfolio or take a Self-Guided Tour. 

Contact Carahsoft’s team at LegalTech@carahsoft.com to discuss AI solutions tailored for your organization’s needs.