The Importance of Creativity in Government and How Creative Software Improves Digital Workflows

In today’s rapidly changing world, Government agencies are under immense pressure to deliver efficient, transparent and citizen-focused services. They often work with limited budgets and follow strict rules. Although creativity is commonly associated with the Private Sector, it has become increasingly important in the Government space. Creative thinking allows employees to develop better solutions for complex challenges, such as emergency response and policy implementation. Adobe’s creative software plays a valuable role in this shift by helping agencies improve their digital workflows, reduce delays and operate more effectively while meeting high standards for security and compliance.

The Value of Creativity in the Public Sector

Creativity in the Public Sector goes beyond new ideas. It helps agencies address important issues like public health, infrastructure improvements and fair access to services. By encouraging fresh thinking, Government teams can create clearer communications for citizens, present complex data in simple ways and design programs that truly meet community needs. When creativity is supported, agencies tend to achieve better results, build stronger public trust and adapt more easily to change. Without creative approaches, traditional processes can limit progress and make it harder to serve the public effectively.

Enhancing Digital Workflows with Creative Software

One area where creativity makes a real difference is in digital workflows. Many Government operations still depend on manual, paper-based steps that take considerable time and effort. Creative software tools help transform these into faster, more collaborative digital processes. Applications for graphic design, video production, document creation and data visualization enable teams to produce professional materials more efficiently. This includes public awareness campaigns, reports and e-learning training resources. Improved system integration also makes it easier for departments to share information and collaborate effectively. 

Bottlenecks remain a common challenge in Government. Excessive paperwork, lengthy approval processes and outdated systems often cause delays, increase costs and reduce productivity. Creative software and automation offer a practical way to address these issues. By simplifying routine tasks, agencies can save significant time and resources. Features such as electronic signatures, document templates and real-time collaboration help speed up processes that could take up to twice as long using traditional methods. 

Real-World Success Stories

Several Government agencies have seen clear benefits from creative software. The City of Denver used Adobe Creative Cloud to strengthen its online services and public outreach campaigns (City of Denver Case Study, n.d.). Adobe Document Cloud, featuring Adobe Acrobat and Adobe Acrobat Sign, further helps by automating document-related tasks. The Federal Aviation Administration (FAA) integrated these tools to modernize its grants management process, reducing paperwork and allowing funding for major infrastructure projects to proceed at a faster pace (FAA Case Study, n.d.). The United States Marine Corps achieved a 38 percent reduction in eLearning production costs by updating its training workflows with Adobe solutions (USMC Case Study, n.d.). The U.S. Census Bureau also realized substantial savings—between $1.4 billion and $1.9 billion—by digitizing forms and outreach efforts (US Census Bureau Case Study, n.d.). Importantly, Adobe’s tools are designed to meet strict Federal security, accessibility and compliance requirements.

A Step Toward More Effective Government

By embracing creativity through secure and accessible creative software tools, Government agencies can reduce operational bottlenecks and deliver better service to the public, supporting greater efficiency, innovation and accountability.

Check out our on-demand webinar series for more information about how Adobe solutions empower teams to streamline workflows, harness AI-driven tools and elevate creative output.

Sources

“City and County of Denver Case Study.” https://business.adobe.com/customer-success-stories/city-county-denver-case-study.html

“Automating digital documents to improve government efficiency and effectiveness.” May 1, 2024. https://blog.adobe.com/en/publish/2024/05/01/automating-digital-documents-improve-government-efficiency-effectiveness

“USMC Extends Elite Training to the Digital Classroom.” https://business.adobe.com/customer-success-stories/usmc-case-study.html

Adobe Customer Success Story – “U.S. Census Bureau.” The savings range reflects estimates from Government Accountability Office (GAO) reporting on the 2020 Census digital innovations. https://business.adobe.com/customer-success-stories/us-census-bureau-case-study.html

Adobe Customer Use Cases. Government Solutions: Efficient, Impactful, Modernized

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Adobe, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

How Standardized APIs Streamline AI Integration into Government Workflows

As agencies increase their investment in artificial intelligence (AI), the most pressing challenge is no longer just developing advanced models. It’s ensuring those models fit seamlessly into the operational workflows that underpin essential public services. These processes are deeply embedded in systems built over decades and require reliability above all else. Abrupt changes could introduce mission risk, especially in regulatory enforcement, public benefits and defense environments.

Standardized APIs offer a proven path forward. Acting as controlled, reusable interface points, APIs allow AI-powered automation in the Public Sector to augment legacy systems without destabilizing them. They expose core logic as callable services, enabling integration without overhaul. In this way, APIs bridge the gap between technical advancement and operational continuity, enabling mission-ready integration without disrupting how teams or programs operate.

Bridging Legacy and Innovation Through API Abstraction

Legacy infrastructure remains central to many Federal operations. Replacing it entirely is often impractical, but delaying AI modernization carries operational risks. Standardized APIs provide a strategic link between modern AI capabilities and existing Public Sector systems. By abstracting backend complexity, they make it possible to integrate AI into mission workflows without extensive code changes.

Abstraction layers allow AI models to access structured and unstructured data, delivering AI-driven inferences and task automation within secure, controlled environments. Because APIs provide a consistent interface, AI capabilities can evolve independently of the systems they enhance. This decoupling supports agility without sacrificing system stability, which is critical for maintaining resilience in a fast-changing technological landscape.

Accelerating Secure AI Adoption Through Operational Consistency

Government teams need to move quickly, but without compromising trust. Standardized APIs enable faster deployment by removing common bottlenecks in system integration. They streamline the delivery of secure enterprise-grade AI by enforcing consistency across environments—cloud, on-premises and edge—delivering the performance and efficiency expected from accelerated computing platforms.

These APIs also reinforce compliance with Government AI security standards. By embedding role-based access, encryption and logging at the interface level, AI solutions for the Federal Government can be monitored and governed with confidence, forming a technical foundation for responsible AI deployment.
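To make the idea concrete, the sketch below shows, in simplified form, what interface-level governance can look like: every model call passes through one controlled wrapper that checks the caller's role and writes an audit entry. It is an illustrative example only; the role names, actions and audit format are hypothetical, not drawn from any specific agency deployment.

```python
import datetime

# Hypothetical role-to-permission mapping; a real deployment would use an
# identity provider and a policy engine rather than a hard-coded dict.
PERMISSIONS = {
    "analyst": {"summarize"},
    "admin": {"summarize", "classify"},
}

AUDIT_LOG = []  # in practice, an append-only, centrally managed log


def call_ai_service(user, role, action, payload):
    """Mediate every model call through one controlled interface point."""
    allowed = action in PERMISSIONS.get(role, set())
    # Log the request whether or not it is allowed, for full traceability.
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "allowed": allowed,
    })
    if not allowed:
        return {"error": "access denied"}
    # Placeholder for the actual model invocation behind the interface.
    return {"result": f"{action} completed for {len(payload)} chars"}
```

Because both the permission check and the logging live at the interface rather than in each application, governance policy can change in one place without touching the callers.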

Supporting Mission-Ready AI Through Infrastructure Portability

Modern Government AI strategies must be infrastructure-agnostic. Agencies operate in hybrid environments, and AI services need to follow. A standardized API layer enables portability by decoupling AI tools from underlying infrastructure, allowing them to be moved or replicated across platforms without changes to the core logic or dependency on specific hardware configurations.

Portability is especially important for mission-critical operations where performance, latency and security vary by deployment context. Whether in secure data centers, cloud environments or tactical edge scenarios, standardized APIs keep infrastructure aligned with mission needs.

Lifecycle Management for Sustainable AI Operations

Agencies must manage the entire lifecycle, from versioning and deployment to monitoring and updates. APIs simplify lifecycle management by introducing structured controls around model exposure, usage and evolution.

Versioning at the endpoint level preserves backward compatibility, allowing existing applications to continue operating while new capabilities are deployed. Monitoring and audit tools track how models are used, by whom and with what data, enabling full traceability and supporting AI compliance in the Public Sector.
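Endpoint-level versioning can be illustrated with a minimal routing sketch: the v1 path stays in place for existing applications while v2 is registered alongside it. The paths and handlers below are invented for illustration; a production API gateway would add authentication, logging and formal deprecation policies on top of this pattern.

```python
# A minimal, illustrative endpoint registry. Versioned paths let existing
# callers stay on v1 while v2 rolls out under a new endpoint.
ROUTES = {}


def register(path, handler):
    """Expose a capability as a callable service at a versioned path."""
    ROUTES[path] = handler


def dispatch(path, payload):
    """Route a request to whichever version the caller asked for."""
    handler = ROUTES.get(path)
    if handler is None:
        return {"status": 404}
    return {"status": 200, "body": handler(payload)}


# v1 remains available so existing applications keep working...
register("/v1/summarize", lambda text: text[:20])
# ...while v2 introduces new behavior without breaking anyone.
register("/v2/summarize", lambda text: text[:20].upper())
```

Dependent applications upgrade by changing one path string on their own schedule, which is what preserves backward compatibility while new capabilities are deployed.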

Collaboration and Workforce Enablement Through Shared Interfaces

API-driven design encourages reuse and collaboration. Once an AI capability is exposed via a standardized API, it can be reused across departments, avoiding redundant development and improving consistency. A federated approach supports AI data governance in Government by making it easier to enforce policies across distributed teams and can also support interagency collaboration where appropriate governance models are in place.

Workforce readiness is equally critical. By abstracting technical complexity, APIs enable Government teams to interact with AI capabilities through standardized, well-documented interfaces, lowering the barrier to adoption and empowering teams to manage their own AI workflows using the skills they already have. Rather than requiring deep ML expertise, this approach lets staff build and deploy with confidence.

A useful mental model is to think of APIs as shared utilities: once an AI capability like summarization or classification is made available via API, it can be reused across programs the way electricity travels across a grid, without rebuilding the generator each time.

Evaluating API Readiness for Long-Term Government AI Success

When evaluating API readiness as part of a Government AI strategy, leaders should consider whether the API layer truly supports integration with the agency’s operational reality. This includes the ability to ingest both structured and unstructured data, interface with current tools and extend across agency-specific workflows.

Security should be integral, not layered in later. APIs must offer native support for encryption, authentication and fine-grained access control, and provide clear audit trails that satisfy compliance frameworks central to secure and responsible AI deployment in Government. Lifecycle support is equally vital: robust APIs must facilitate controlled versioning, rollback and real-time observability, including monitoring, logging and alerting, to ensure performance and trust are never compromised.

Scalability across infrastructure is another benchmark. APIs must perform consistently across cloud, edge and on-premises environments without friction. And since no agency succeeds in isolation, a mature API ecosystem should include reference implementations, shared patterns and a strong developer community to reduce implementation time and cost.

These attributes, taken together, define whether a technology stack is suitable for the mission and whether it can scale securely, responsibly and efficiently as part of a long-term digital transformation roadmap.

API-First Integration: A Catalyst for Scalable, Trusted AI

For Government agencies modernizing AI operations, standardized APIs represent more than a technical solution – they are a strategic enabler of scalable, secure and mission-aligned innovation. By offering a flexible integration layer, APIs make it possible to accelerate adoption, reduce duplication and build trustworthy AI-powered automation in the Public Sector.

Rather than forcing a complete rebuild of legacy infrastructure, APIs allow agencies to evolve at their own pace. They provide the foundation for responsible, compliant and cost-effective AI integration while keeping Government teams in full control.

Agencies that adopt this approach can shift from isolated pilots to enterprise-scale systems where AI becomes a routine, reliable part of Public Sector operations. Standardized APIs transform secure enterprise AI from a strategic aspiration into an operational reality, enabling repeatable success across mission workflows.

Custom AI Without the Complexity: How Automated Fine-Tuning Accelerates Mission-Ready Models

In the evolving era of generative artificial intelligence (AI), pre-packaged AI often falls short in the Public Sector. Off-the-shelf models typically lack the context needed to perform at the standards required by Government use cases, and building AI models from scratch remains too resource-intensive for most agencies.

However, a middle path has emerged, powered by advancements in fine-tuning, accelerated computing and security-conscious infrastructure. This new approach enables agencies to adapt robust foundation models to mission-specific needs quickly, securely and without the traditional complexity of AI customization.

What’s changing isn’t just technology; it’s the framework for how Government thinks about AI readiness. By grounding strategy in full-stack development principles and AI lifecycle management, Public Sector AI leaders can begin moving from research to real-world impact at mission speed.

Accelerated Fine-Tuning, Engineered for Agility

Traditional approaches to AI model development often fail to transition from proof-of-concept to production. They can’t keep pace with mission timelines or infrastructure constraints. This is where automated, accelerated fine-tuning plays a transformative role.

By enabling targeted optimization of foundation models, teams can iterate quickly and cost-effectively. This significantly reduces compute requirements and accelerates iteration cycles, enabling rapid experimentation using sensitive data.

These capabilities allow Federal teams to develop and refine models using their existing infrastructure, removing a major roadblock to operational AI. When fine-tuning is seamlessly integrated with the hardware and orchestration stack, model updates are no longer bottlenecks. They become core to a continuous delivery process.

Security Built In, Not Added On

For Federal leaders, security is not negotiable. It’s foundational. AI platforms must be designed from the ground up to operate securely, not simply comply with policy.

Modern development stacks address this by combining containerized workloads, Zero Trust access control and built-in compliance with frameworks like FISMA and NIST 800-53. These capabilities allow agencies to maintain control of sensitive data while leveraging state-of-the-art model development tools.

Equally important is the ability to trace every stage of a model’s lifecycle. Visibility into data lineage and model provenance is essential for building public trust, ensuring transparency and simplifying audit and ATO processes.

Unifying the AI Lifecycle Under One Stack

The journey from raw data to mission-ready application spans preprocessing, evaluation, deployment and real-time monitoring. Without a unified platform to manage this lifecycle, Government teams face silos, drift and duplication of effort.

The most effective AI solutions deliver a full-stack environment where teams collaborate on the same infrastructure. This alignment ensures that experimentation is not only fast but replicable; models don’t need to be rebuilt for deployment because they are ready to ship by design.

Operational continuity is especially important in Federal settings, where changes in leadership or mission can disrupt priorities. A unified lifecycle platform provides the flexibility to pivot quickly while maintaining compliance and consistency and can help overstretched teams scale AI impact without proportionally scaling headcount.

Mission-Tuned AI for Complex Government Domains

Generic models often struggle to perform in specialized domains. These challenges are amplified in Government, where datasets are often sparse, highly structured or privacy-restricted.

Fine-tuning large language models using domain-specific data is the most effective way to close this gap. When paired with synthetic data generation and tools like retrieval-augmented generation (RAG), agencies can create models that operate with high accuracy without increasing exposure to outside data sources.
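A toy sketch can illustrate the retrieval step behind RAG. The keyword-overlap retriever below is deliberately simplistic (real systems use vector embeddings over an approved corpus), and the document text is invented, but it shows the core idea: the model's prompt is grounded in agency-held data rather than outside sources.

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query. A toy retriever:
    production RAG pipelines use embedding similarity, not word counts."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query, documents):
    """Assemble a prompt grounded only in retrieved, agency-held text."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because the context passed to the model comes exclusively from the retrieved documents, accuracy improves in the target domain without widening exposure to external data.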

These models can be deployed across diverse environments thanks to the flexibility of modern accelerated computing platforms, whether in the cloud, on premises or at the tactical edge. This portability, achieved through containerized AI microservices and optimized orchestration, is critical for Government teams.

From Exploration to Execution

The case for custom AI in Government is no longer theoretical. Advances in hardware-accelerated fine-tuning, lifecycle-integrated orchestration and secure, portable inference environments have made the once-difficult possible and practical.

The goal isn’t simply to deploy AI faster but to deploy AI that is trustworthy, domain-aware and cost-efficient, with solutions that enhance mission effectiveness without compromising governance.

As Public Sector leaders navigate tight budgets, workforce reductions and mounting oversight, platforms that streamline AI delivery can provide much-needed relief. Rather than requiring new teams or expensive retraining, agencies can scale with existing staff and systems.

This moment represents a shift from experimentation to operationalization. The agencies that act now—building their capabilities on a modernized, full-stack AI architecture—will not only realize early wins but will be best positioned to adapt to the accelerating pace of AI innovation in the years ahead.

Why API-Driven Architecture is the Backbone of Scalable Government AI Solutions

As artificial intelligence (AI) advances from exploratory pilots to mission-critical systems, Government agencies face an increasingly urgent challenge: how to modernize intelligently without destabilizing the core infrastructure that supports essential services. From public benefits to regulatory enforcement, Government operations depend on reliable systems—and yet the demand for more agile, intelligent and data-driven services is accelerating.

In this environment, Application Programming Interface (API)-driven architecture offers more than a technical advantage. It provides a framework that aligns with how Government adopts innovation: carefully, incrementally and with strong requirements for security, oversight and continuity. For AI and technology leaders shaping the future of digital Government, APIs are not just useful—they are foundational.

Modernization Without Disruption

Public Sector systems are often mission critical and decades old, built long before real-time inference or machine learning were technical considerations. Replacing these systems would be cost-prohibitive, slow and risky. However, ignoring them is not an option when they contain the data and logic upon which essential functions depend.

API-first design offers a bridge. Instead of rewriting these systems, agencies can overlay intelligent services that interact with them via stable, controlled interfaces. For example, a model trained to extract structured fields from unstructured forms can be accessed as a service. The model can be invoked as needed, without being embedded in the legacy system, decoupling innovation from infrastructure.

That modularity makes progress manageable. Teams can test AI services in narrow use cases, assess results and scale adoption in stages. It also protects staff from abrupt shifts, enabling workforce transition and training to occur alongside technical deployment. For leaders evaluating enterprise readiness, this suggests prioritizing architecture that enables incremental adoption of AI capabilities without high-risk disruption.

Embedding Security and Compliance from Day One

In the Public Sector, systems must be secure and compliant by design. Requirements for data protection, access control, identity management and auditable decision-making are foundational. AI systems must align with those standards from the outset.

An API-first approach gives agencies a way to build governance directly into the AI deployment framework. Rather than relying on one-off integrations, every interaction with an AI model can be mediated through an API that enforces strict controls: authenticating requests, encrypting data, logging transactions and rate-limiting traffic all contribute to system resilience.

Just as important is the flexibility to deploy AI capabilities in controlled environments. Whether in air-gapped systems, private cloud infrastructure or hybrid networks, API-exposed services can meet the traceability and isolation requirements essential to mission-critical operations. Decision makers should seek solutions that support environment-agnostic deployment and align with relevant security and data sovereignty frameworks.

Scaling Through Reuse, Not Redundancy

A frequent challenge in agency AI programs is the repetition of effort across teams. Without a unified strategy, different groups may develop overlapping models for classification, summarization or extraction—resulting in redundant investment and inconsistent performance.

API-driven architecture supports reuse as a foundational capability. Once a model is trained, validated, and deployed as a callable service, it can be shared securely across programs.

A federated model allows each office to maintain autonomy while benefiting from shared resources and proven capabilities. This not only accelerates adoption but also improves consistency and reduces the burden on overextended technical teams. Agencies should look for platforms that facilitate model sharing, usage tracking and consumption governance to reduce redundancy and scale effectively.

Bringing Discipline to the AI Lifecycle

AI systems evolve. Models are retrained, refined and replaced to address performance gaps, policy changes or bias mitigation. Without lifecycle controls, these changes can introduce instability or compliance risk.

Deploying models through well-governed APIs introduces discipline. New versions can be released under new endpoints, allowing dependent applications to upgrade at their own pace. Logs can track which models are in use, by whom and for what purpose, enabling structured deprecation and full auditability.

Lifecycle control in AI mirrors DevSecOps practices that have already been adopted in many Government IT environments. Evaluate solutions that support endpoint versioning, access analytics and governance-ready observability to ensure stability and trust throughout the AI lifecycle.

Keeping Options Open in a Fast-Changing Landscape

The AI technology stack is rapidly evolving. New models, deployment frameworks and cost-performance tradeoffs continue to emerge. For agencies operating on long procurement cycles, flexibility is not optional. It is essential for long-term sustainability.

API abstraction allows teams to decouple applications from specific model implementations. A chatbot or summarization service can continue operating even if the underlying model is swapped or updated, supporting continuity and reducing the risk of vendor or architecture lock-in.
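This decoupling can be sketched as a stable service interface with swappable model backends. The class names and toy summarizers below are hypothetical, not any vendor's API; the point is that callers depend only on the interface, so the underlying model can be replaced without touching them.

```python
class SummarizerBackend:
    """Abstract interface the service depends on, not a concrete model."""

    def summarize(self, text):
        raise NotImplementedError


class ModelA(SummarizerBackend):
    def summarize(self, text):
        return text.split(".")[0]  # toy model: return the first sentence


class ModelB(SummarizerBackend):
    def summarize(self, text):
        return " ".join(text.split()[:5])  # toy model: first five words


class SummarizationService:
    """The stable endpoint applications call; unaware of the model behind it."""

    def __init__(self, backend):
        self._backend = backend

    def swap_backend(self, backend):
        # Upgrade or replace the model without any change to callers.
        self._backend = backend

    def handle(self, text):
        return self._backend.summarize(text)
```

Swapping `ModelA` for `ModelB` changes results but not the calling code, which is exactly the continuity property that reduces vendor and architecture lock-in.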

Flexibility supports hybrid deployment models where mission-sensitive workloads remain on-premises, and others run in trusted cloud environments. Leaders should prioritize runtime abstraction and model backend flexibility to preserve choice and adaptability as technology evolves. When possible, platforms should also expose APIs through open standards such as Representational State Transfer (REST), OpenAPI or GraphQL to ensure interoperability across systems and vendors.

Enabling Responsible, Scalable AI in Government

Responsible AI requires more than principles—it demands a technical foundation that makes oversight and accountability operational. API-first architecture provides this foundation.

Every request can be logged, every model version tracked and every output monitored for alignment with policy and mission needs. This observability not only supports compliance audits but also enables continuous performance assessment and model improvement. Built-in telemetry from API gateways can offer insights into usage trends, model health and performance, supporting both governance and optimization efforts.

Equally important, API-based integration supports human-centered adoption. Agencies can augment existing workflows, develop AI copilots and embed decision-support tools without forcing radical system changes. Government employees benefit from AI-enhanced tools, improving efficiency, insight and mission outcomes without overwhelming the workforce or introducing operational risk.

For technology and program leaders building AI strategy and capability benchmarks, this architecture offers a durable path forward, enabling secure, scalable and auditable adoption. Agencies can modernize at their own pace while maintaining full control over how AI is introduced, used and governed.

APIs do not just connect systems; they enable strategy. They create a common language between legacy operations and next-generation intelligence. For agencies tasked with delivering modern, secure and responsive public services, API-driven architecture is not just a recommendation; it is the foundation of mission-aligned innovation.

The Top 5 Insights for Government from Sea-Air-Space 2026 

Sea-Air-Space 2026 convened naval leaders, defense technologists and industry partners with renewed urgency. Across panels, one message resonated clearly: the United States cannot sustain maritime superiority through technology and tactics alone. The industrial, organizational and digital foundations of naval power are being re-examined and, in many cases, rebuilt. 

From domestic shipbuilding to space-enabled operational speed and the cultural transformation modern cybersecurity requires, the conference presented a sea services enterprise in motion.  

Five critical insights emerged to define the path forward for naval readiness in an era of sustained great power competition.  

Shipbuilding Strength Starts with Industrial and Commercial Foundations 

Panel discussions on maritime dominance challenged the foundational assumption that naval strength begins with warships; it starts with the economics and infrastructure behind them. To put this in perspective, the United States was a world leader in shipbuilding until 1975. Today it builds less than 0.1 percent of global commercial ships, while China has become the world’s leading shipbuilder, followed by South Korea and Japan. Without a self-sustaining domestic shipbuilding sector anchored in commercial demand, the U.S. cannot field or sustain the naval force it needs. This is a strategic imperative. 

Assured shipping access emerged as a critical operational concern. In crisis, the assumption that commercial shipping will always be available dissolves as capacity reprices, realigns and becomes politically unavailable. This gap between theoretical and reliable access directly affects forward naval operations, contested logistics and distributed maritime operations that depend on commercial sealift. 

The policy implication is clear: maritime power cannot be separated from maritime commerce. Deregulatory frameworks, investment incentives and alignment across Government agencies were cited as necessary conditions, not peripheral considerations, for restoring an industrial base, including maintenance and repair capacity, that makes naval deterrence credible and sustainable. 

Force Design Modernization Demands Speed, Scale and Cost Discipline 

Lt. Gen. Paul Rock and other Marine Corps leaders framed Force Design not as a completed transformation but as an ongoing operational imperative. The shift from legacy formations toward multi-domain distribution across the littorals, with reduced signature and expanded logistics reach, requires industry to deliver capability faster, at greater volume and at a sustainable cost structure. Uniformed and industry panelists alike returned to speed, scale and cost as the defining metrics of partnership value. 

Logistics modernization stood out as a near-term priority. Maj. Gen. Andy Niebel, Assistant Deputy Commandant for Installation and Logistics, described sustaining distributed forces forward as a defining Force Design execution challenge, especially in a contested environment. Advanced manufacturing, such as producing and repairing components at forward locations, and the resolution of technical data rights barriers were highlighted as targets for industry engagement. Rear-echelon sustainment alone cannot support the dispersed, low-signature posture Force Design envisions.  

Admiral (Ret) Mike Rogers, former Commander of U.S. Cyber Command and National Security Agency, also emphasized engaging industry at the “problem level” rather than the “solution level” by presenting operational deficiencies to the Private Sector instead of prescribing widget requirements. This approach unlocks more solutions and better leverages innovation from non-traditional suppliers, dual-use technology providers and venture-backed entrants into the defense industrial ecosystem. 

Space is the Enabling Domain for Every Other Domain of Operations 

Multi-domain integration discussions reinforced the principle that space is not one domain among equals. It is the foundational layer upon which sea, air, land and cyber operations depend for timing, navigation, targeting and communications. Rear Admiral Tracy Hines, Deputy Director of Global Space Operations at U.S. Space Command, noted that no military operation of consequence occurs today without space as an enabler, a reality our adversaries have designed capabilities to exploit. 

The Space Development Agency’s (SDA) acquisition model is a template for delivering space capabilities at operational speed. By structuring satellite constellations around two-year launch cycles with five-year satellite lifetimes, SDA compresses traditional spiral development into a continuous refresh cycle, limiting requirements creep, maintaining technological currency and ensuring the architecture evolves faster than adversary counter-space capabilities. 

Developing dedicated maritime space officers and a trained sea services cadre was cited as essential to realizing this capability. Space domain awareness, or understanding the real-time health, availability and vulnerability of orbital and terrestrial space assets, requires personnel who understand both the naval operating environment and the physics and threat dynamics of space. Artificial intelligence (AI) tools are increasingly helping analysts manage the volume and complexity of space situational data. 

Cyber Resilience Requires Visibility into Operational Technology 

Cybersecurity panelists drew a distinction with significant implications for naval acquisition and maintenance: Operational Technology (OT) presents a fundamentally different threat surface than traditional IT. Legacy systems built decades ago without cybersecurity in mind are now network-connected, creating vulnerabilities that adversaries are actively seeking to exploit across afloat and shore infrastructure. 

Coast Guard leaders highlighted their model of deploying cyber protection teams to assess port and maritime transportation systems, treating cybersecurity readiness as part of physical safety and operational resilience. The emphasis was not on perfect security but on limiting impact and maintaining the ability to respond and recover after penetration, making resilience, rather than prevention alone, the goal. 

Building cyber resilience at scale requires cultural and technological change. Panelists noted that cybersecurity must evolve from an individual compliance exercise to a shared organizational process where intelligence flows directly to operators, vulnerabilities are treated as tactical liabilities and industry partnerships provide reach and expertise no single service can generate internally. AI was identified as valuable for managing threat noise, prioritizing response actions and balancing speed with security. 

Interoperability Is Won Through the Convergence of Training and Technology 

Interoperability discussions returned to the lesson that technological superiority does not guarantee operational success. The most capable systems deliver a decisive advantage only when operators are trained to employ them together across services, with coalition partners and in degraded communications and distributed command environments. Admiral Thad Allen, former Commandant of the U.S. Coast Guard, framed joint maritime interoperability not as a coordination challenge but as a warfighting imperative that must be built into training regimens rather than assumed from capability inventories. 

Ms. Barbara Supplee, Executive Vice President of the Army and Navy Business Group at SAIC, cited AI as a meaningful interoperability accelerator when applied to the right problems, including reducing operational data processing time, helping communities get ahead of emerging threats and enabling distributed forces to maintain a coherent common operating picture. But panelists cautioned that AI adoption must be paired with institutional investment in training operators to use new tools effectively, not simply acquiring them. 

Panelists emphasized that the most valuable interoperability gains come from working through complexity together by embedding analysts and operators with joint and industry partners, surfacing unit-level capability gaps and designing experiments that change one variable at a time to generate actionable insight. The sea services are making progress, but leaders were clear that integration must accelerate to match how quickly adversaries are learning to operate across domains simultaneously. 

Sea-Air-Space 2026 reinforced that sustainable maritime superiority requires synchronized investment across industrial foundations, space capabilities, cyber resilience, Force Design execution and multi-domain training. The sea services are not simply fielding new platforms; they are rethinking the economic, organizational and technological systems that generate and sustain naval power. Progress depends on industry partners that understand the full challenge and can deliver at the speed, scale and cost the mission demands. 

Explore Carahsoft’s defense portfolio of leading solutions that support naval modernization priorities including AI, cybersecurity, cloud infrastructure and advanced analytics. 

Contact the Aerospace and Maritime team at DOW@carahsoft.com or (888) 662-2724 to discuss how Carahsoft’s technology partners can support your mission requirements. 

Beyond “Checklist” Compliance: Resilience in Healthcare Cybersecurity

For healthcare and medical institutions, handling sensitive information comes with the territory of patient care. In 1996, the Health Insurance Portability and Accountability Act (HIPAA) established regulations for protecting patient privacy; however, it offers few guidelines for how institutions can best configure their cybersecurity against a modern threat landscape. Additionally, cybersecurity compliance is often approached as a checklist exercise. In practice, most organizations manage multiple overlapping frameworks independently, leading to duplicated work, fragmented processes and limited visibility into actual risk.

Challenges in Healthcare Cybersecurity Compliance

Healthcare and medical institutions handle an incredible amount of sensitive data, including Protected Health Information (PHI) and Personally Identifiable Information (PII). Some institutions may also have Government contracts, in which case they will also handle Controlled Unclassified Information (CUI). This makes these institutions particularly enticing targets for hackers.

Ransomware is on the rise, largely targeting mid-market and small specialty practices. Within a single month in the fall of 2025, ransomware attacks increased by 67%, driven primarily by 18 different threat actors. Ransomware affects multiple systems and can effectively paralyze an organization. The stakes rise the second a cyberattack is launched; in a hospital where patients rely on technology to keep them healthy, the pressure to remediate the issue is immediate. In these moments, the ability to understand control effectiveness and respond quickly across systems becomes critical, something fragmented compliance programs often struggle to support.

Beyond external threats, many healthcare organizations face an internal operational challenge: the same controls are often assessed and maintained across multiple frameworks, with remediation and evidence tracked separately. This creates inefficiencies that increase cost and slow response times, even when security investments are in place.

When it comes to following cybersecurity compliance standards, healthcare organizations often approach these standards from a position of self-protection. This is not without precedent. Originally enacted in 1863 to prevent the sale of defective goods to the Government, the False Claims Act (FCA) today is used to prevent the filing of false claims to Medicare and Medicaid. Under FCA, liability can be applied broadly to anyone in the healthcare system, from administrators to nurses and physicians. Additionally, every ransomware attack exposes patient PHI and PII, opening the door to class action lawsuits.

What is NIST-CSF?

To establish uniform guidelines for cybersecurity standards across the Public Sector, the National Institute of Standards and Technology (NIST) published the Cybersecurity Framework (CSF). NIST-CSF 2.0 breaks compliance down into six core functions:

  • Govern: This section focuses on how an organization can establish, communicate and monitor cybersecurity risk management strategy, expectations and policy, including a recovery plan.
  • Identify: Once an organization understands its threat landscape, it can identify critical processes and assets and document information flows.
  • Protect: An organization puts safeguards in place to manage cybersecurity risks, training users in proper protocols, securing sensitive assets and conducting regular data back-ups.
  • Detect: When anomalous activity is detected, the organization isolates and analyzes the activity, determining the estimated scope of the impact and continuously monitoring all systems for adverse effects.
  • Respond: After an incident is evaluated, appropriate action is taken. Organizations collect data, prioritize incidents and escalate required actions as needed.
  • Recover: Once an incident has been resolved, an organization should execute its recovery plan. This includes quality checks and communication with both internal and external stakeholders.

Frameworks like NIST-CSF provide a strong foundation, but the challenge is not understanding the functions; it is operationalizing them across multiple frameworks at once. The model breaks compliance down in non-technical language and allows healthcare organizations to approach their cybersecurity framework from a posture of resilience. However, in environments where multiple frameworks are in use, organizations must also consider how these controls align across requirements to avoid repeated effort and inconsistent implementation. NIST-CSF cannot be relied on alone; it states up front that it is not a maturity scale, meaning it cannot measure how developed or effective an organization’s policies are. Additionally, no two healthcare or medical institutions face the same threat landscape. There is no “one size fits all” solution for compliance; each organization must find and adapt a compliance framework that works best for it.

Steps to Strengthen Cybersecurity Posture

Healthcare organizations require clear lines of delineation concerning liability after a cybersecurity breach. It needs to be clear that Security Operations Center (SOC) analysts and other cybersecurity team members do not own the risk; rather, they are simply reporting on risk and identifying the stakeholders that own the risk. It is critical that the Chief Information Security Officer (CISO) remain an objective, honest conveyor of vulnerability and risk intelligence.

Compliance frameworks set the overall goal for cybersecurity, providing a compass by which healthcare organizations can align budgets, staff and policies. To do this, an institution must fully understand its risk tolerance, a process known as risk framing. For example, if an institution chooses to implement a compliance framework focusing solely on HIPAA, it could potentially be neglecting necessary protections for CUI and could face Civil Monetary Penalties (CMP) or the loss of Government contracts or Federal funding. It is critical to examine an entire ecosystem and bolster its weakest points.

Another step in examining that landscape is understanding where multiple frameworks intersect and how they interact with each other. Without a unified approach, organizations often end up performing the same assessments and remediation activities multiple times, creating unnecessary overhead and delaying progress. Simply assuming that alignment across frameworks results in effective compliance creates blind spots, especially when controls are implemented and assessed inconsistently. Ultimately, devoting time and resources to continuous monitoring will keep PHI and PII secure and keep medical institutions running smoothly.
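The idea of mapping where frameworks intersect can be made concrete with a control crosswalk. The sketch below is a hypothetical Python example (the control names and framework identifiers are illustrative, not an authoritative mapping): each internal control lists every framework that requires it, so the control is assessed once and its evidence reused everywhere it applies.

```python
# Hypothetical control crosswalk: one internal control, many frameworks.
# Assessing the control once and reusing the evidence avoids the
# duplicated assessment-and-remediation work described above.
CONTROL_CROSSWALK = {
    "encrypt-phi-at-rest": {
        "frameworks": ["HIPAA 164.312(a)(2)(iv)", "NIST-CSF PR.DS-01"],
        "owner": "Infrastructure",
    },
    "quarterly-access-review": {
        "frameworks": ["HIPAA 164.308(a)(4)", "NIST-CSF PR.AA-05"],
        "owner": "Identity Team",
    },
}

def assessments_saved(crosswalk: dict) -> int:
    """Count duplicate assessments avoided by assessing each control once
    instead of once per framework that requires it."""
    return sum(len(c["frameworks"]) - 1 for c in crosswalk.values())
```

In this toy mapping, each of the two controls satisfies two frameworks, so a unified approach saves two redundant assessments; real crosswalks span many more controls and frameworks.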

There is no such thing as static compliance; healthcare institutions need to continuously monitor their environment to ensure that their systems are secure. As regulatory requirements continue to evolve, organizations that reduce fragmentation and align controls across frameworks will be better positioned to maintain readiness, respond to threats, and improve their overall cybersecurity maturity.

Increasingly, this means moving toward a more unified, control-based approach, where compliance is not managed as separate efforts, but as a continuous, operational system.

Watch Cyturus’ The Day After Compliance—Healthcare and Medical Institutions webinar to explore more about compliance and observability in healthcare organizations.

Minimizing the Attack Surface: The Onion Model vs. Core-First Protection

Historical Context of Layered Security

The onion model emerged during the growth of enterprise IT when organizations responded to new threats by adding new defensive layers. Each incident or compliance requirement led to another perimeter or middleware control. While effective in the short term, this layered approach produced patchwork systems with overlapping functionality, inconsistent policies and gaps that attackers could exploit.

The Onion Model and Its Vulnerabilities

The traditional “onion model” of cybersecurity layers defenses concentrically around a central database. Each layer is intended to provide a barrier against intrusion, but the cumulative effect is often an expanded and more complex attack surface. From the inside out, the layers typically include:

  1. Database (Data) – the core asset containing customer records, financial transactions, intellectual property, logs and other sensitive information.
  2. Schema & Validation – enforcement of data formats, constraints and integrity checks designed to prevent malformed or malicious inputs from reaching the core.
  3. Application Logic & APIs – business rules and access methods that determine how applications interact with the database, often exposing numerous interfaces.
  4. Access Controls & Identity (IAM) – authentication and authorization services (passwords, tokens, SSO, MFA) that regulate who can reach protected resources.
  5. Encryption Services – cryptographic mechanisms for protecting data at rest and in transit, including key management, TLS/SSL and disk-level encryption.
  6. Firewalls / Perimeter Security – network boundary defenses, intrusion detection systems, packet filtering and monitoring services designed to repel external threats.

Why the Attack Surface Expands

While each layer aims to protect the core, collectively they create new opportunities for exploitation:

  • Integration Points – every interface or protocol boundary becomes a seam that can be misconfigured or attacked.
  • Configuration Complexity – with more interdependent systems, administrators must manage extensive policy sets and security rules, increasing the likelihood of mistakes.
  • Expanded Targets – each layer (firewalls, IAM, middleware, encryption appliances) presents its own vulnerabilities, requiring constant patching and monitoring.
  • Dependency Chains – the failure of a single outer system can cascade inward, leaving the core exposed despite the presence of other controls.

In practice, adding more layers often enlarges the attack surface instead of shrinking it. Attackers exploit this complexity, probing for the weakest link among numerous entry points.

Operational Cost of a Typical Attack Surface

Beyond theoretical weaknesses, a large attack surface carries real operational costs. Tool sprawl burdens administrators with dozens of systems to configure and maintain.

Overlapping monitoring layers generate alert fatigue, obscuring genuine threats. Security budgets become diluted, funding maintenance of redundant defenses rather than reinforcing the integrity of the data itself.

Modern Threat Landscape

Today’s adversaries exploit weaknesses that layered defenses cannot easily address. Lateral movement bypasses layers once attackers are inside a network. Supply chain compromises enter through trusted applications, neutralizing perimeter filters. Zero-day exploits render outer walls ineffective overnight. Core-first security, with protection embedded at the data level, ensures confidentiality and integrity even in the face of these modern tactics.

Architectural Simplicity as Security

Simpler architectures are inherently more secure. Each removed integration point reduces the trusted computing base and the probability of misconfiguration. By embedding protections directly into the data layer, Walacor collapses overlapping controls, producing a system that is easier to audit, verify and trust. This simplicity is itself a security multiplier.

The Core-First Alternative

A core-first security model inverts the paradigm by embedding protections at the data layer itself rather than relying primarily on external systems:

  • Record-Level Encryption and Validation – each data element carries its own cryptographic safeguards, ensuring confidentiality and authenticity.
  • Immutable Integrity Proofs – cryptographic hashes and proofs guarantee that tampering is detectable, independent of outer defenses.
  • Minimized Trust Dependencies – fewer external layers are required for assurance, reducing the number of systems that must be defended and configured.
  • Resilience Under Breach – even if outer controls fail, the data itself remains cryptographically protected and resistant.

This approach shrinks the attack surface by concentrating security at the point of greatest value: the data. Instead of expanding outward with additional complexity, it reduces potential vectors for compromise.
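As a rough illustration of record-level protection, the sketch below attaches an HMAC-SHA256 integrity proof to each record so that tampering is detectable without relying on any outer layer. This is a minimal, generic example, not Walacor's actual implementation; key management and encryption are omitted for brevity.

```python
import hashlib
import hmac
import json

def seal_record(record: dict, key: bytes) -> dict:
    """Attach a per-record integrity proof: an HMAC-SHA256 over a
    canonical JSON serialization of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    proof = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "proof": proof}

def verify_record(sealed: dict, key: bytes) -> bool:
    """Recompute the proof; any modification to the record invalidates it,
    regardless of whether perimeter defenses held."""
    payload = json.dumps(sealed["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["proof"])
```

Because the proof travels with the record itself, a consumer can detect tampering even after a breach of every outer layer, which is the essence of the core-first argument.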

Walacor and Core-First Protection

Walacor implements the core-first philosophy by embedding immutability, cryptographic enforcement and schema validation directly into the data layer. Rather than building outward layers that expand the attack surface, Walacor collapses unnecessary perimeter complexity and anchors protection where it cannot be bypassed: the data itself.

  • Data-Level Cryptography – each record is encrypted and bound to proofs of authenticity, eliminating reliance on external encryption appliances.
  • Immutable Storage – records are tamper-evident at the core, reducing the need for overlapping monitoring systems.
  • Integrated Validation – schema and policy checks occur at write-time, blocking invalid or hostile data without middleware add-ons.
  • Shrinking the Attack Surface – because Walacor renders many outer layers redundant, there are fewer interfaces to defend, fewer seams to misconfigure and fewer targets for attackers.

Walacor demonstrates that the most effective way to minimize the attack surface is to concentrate defenses in the core, ensuring data integrity and confidentiality regardless of the state of external systems.
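The write-time validation idea can be sketched in a few lines: a record is checked against a schema before it is ever persisted, so invalid or hostile data never reaches storage. The schema format and field names below are hypothetical illustrations, not Walacor's API.

```python
# Hypothetical write-time validation: records that fail the schema check
# are rejected before they touch the store, with no middleware involved.
SCHEMA = {"id": int, "name": str, "balance": float}

class ValidationError(ValueError):
    """Raised when a record does not conform to the schema."""

def write_record(store: list, record: dict, schema: dict = SCHEMA) -> None:
    # Field set must match the schema exactly.
    if set(record) != set(schema):
        raise ValidationError(f"unexpected fields: {set(record) ^ set(schema)}")
    # Every field must have the declared type.
    for field, expected in schema.items():
        if not isinstance(record[field], expected):
            raise ValidationError(f"{field!r} must be {expected.__name__}")
    store.append(record)  # only valid records ever reach the store
```

Moving this check to the point of write, rather than a separate middleware layer, removes one integration seam from the architecture, which is exactly the simplification the core-first model argues for.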

Agents, AI and the Attack Surface

The emergence of intelligent agents and AI-driven systems adds a new dimension to the attack surface discussion. Agents interact with data across multiple contexts—querying, transforming and making autonomous decisions. In a traditional layered model, each of these interactions multiplies the integration points and potential vulnerabilities. Malicious prompts, poisoned training data or compromised connectors can all bypass outer defenses to reach sensitive information.

A core-first model directly addresses this risk. By cryptographically securing and validating data at the record level, Walacor ensures that even AI agents cannot be tricked into handling falsified or tampered records. Every data element carries its own assurance, creating a trustworthy substrate for automated reasoning and machine learning pipelines.

In this way, AI becomes a consumer of verifiable data rather than a potential vector for hidden compromise, aligning intelligent agents with the same guarantees that protect human operators.

Forward-Looking Implications

A core-first approach lays the groundwork for enduring benefits. Immutable, verifiable data strengthens sovereignty in federated and multicloud environments. Compliance becomes easier, as audit trails and integrity proofs are inherent to the system rather than bolted on. This architecture future-proofs sensitive systems, ensuring resilience against evolving threats.

Reinforcing the Core-First Premise

The onion model reflects a reactionary philosophy that often results in excessive complexity and a sprawling attack surface. A core-first strategy simplifies the architecture by embedding protection directly into the data layer, eliminating unnecessary exposure and ensuring that sensitive information remains secure even in hostile conditions.

To learn more about a core-first approach to cybersecurity, contact Walacor.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Walacor, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Doing More with Less: How Government Agencies are Rethinking Cybersecurity

In December 2025, Carahsoft and Broadcom commissioned Forrester Consulting to survey 212 U.S. Government cybersecurity decision makers about the state of Public Sector security operations following the budget and headcount reductions of early 2025. What they found was a sector under sustained pressure, but also one actively searching for smarter, more resilient ways forward. The findings provide a candid assessment of where agencies stand today and the steps required to strengthen their cybersecurity posture in an era of constrained resources.

Budget Instability and Cybersecurity Gaps

Budget instability remains widespread, with 38% of agency budgets still classified as mostly or completely fiscally unstable. Another fifth of agencies reported no change since the initial cuts were enacted. The result is a cybersecurity landscape where teams are being asked to protect increasingly complex digital environments with fewer people, fewer tools and less financial runway than they had even a year ago. Over half of the respondents report that budget constraints have moderately or significantly impacted their ability to maintain core security operations. Perhaps most telling, just 38% of cybersecurity leaders express confidence in their agency’s security posture following headcount reductions.

The areas most exposed under current resource limitations are network security, data protection and incident response. Roughly a third of respondents also flagged concerns around endpoint security, visibility, analytics and compliance. For agencies already navigating a complex regulatory and threat environment, these vulnerabilities represent more than operational friction; they signal genuine risk to mission-critical systems and the sensitive data agencies are entrusted to protect. As leadership teams work to roadmap investments for the year ahead, two priorities have risen to the top: securing critical infrastructure against bad actors and integrating artificial intelligence (AI) and cybersecurity capabilities.  

Rising Breach Risk in a Leaner Environment

Understanding the current risk landscape is an essential first step toward addressing it effectively. 86% of respondents anticipate an increase in potential compromises or breaches in the coming year due to the recent staffing and funding reductions. More than a quarter expect breach numbers to climb by 1–10%, while over 20% anticipate increases of 30% or more. For agencies responsible for protecting sensitive Government data and public-facing services, this trajectory demands immediate strategic attention. The connection between resource reduction and elevated risk is already being experienced across teams, where reduced personnel have created measurable gaps in detection, response and remediation capacity.

The operational data reinforces this concern. 61% of respondents report that security incidents overall have increased in frequency, while 65% say their mean time to remediate (MTTR) has been negatively affected. Over half indicate their ability to secure technology and architecture delivery has also suffered. These are not isolated data points; they reflect a compounding effect where each unaddressed gap creates the conditions for the next. Agencies that do not act strategically in prioritizing their highest-risk exposure areas will face growing difficulty in maintaining the compliance posture and operational resilience their missions demand.

AI and Automation as Force Multipliers for Lean Teams

Amid the challenges, a clear opportunity is emerging. Agencies are increasingly recognizing that AI and automation are essential tools for maintaining security effectiveness when human capacity is stretched thin. 72% of respondents indicated openness to automation tools as a means of enhancing cybersecurity resilience. The top priority areas for automation adoption include incident response, network security, compliance and data protection, precisely the domains where resource gaps are most acute.

Forrester’s recommendations reinforce this direction. Leveraging AI to automate network traffic analysis, policy validation and alert triage allows teams to concentrate on high-confidence threats such as data exfiltration and lateral movement, rather than being consumed by manual tasks. Applied effectively, AI can help offset staffing shortfalls, reduce analyst burnout and preserve, or even improve, mean time to investigate (MTTI) and MTTR metrics. Agencies that invest in AI-driven security tools now are not just responding to a short-term resource problem; they are building a more adaptive, scalable security model that can sustain performance through continued uncertainty. This is a strategic shift as much as a technical one, and cybersecurity leaders who embrace it early will be better positioned to protect their environments long-term.

Strategic Consolidation as the Path Forward

The data points toward a clear prescription: agencies must work smarter, not just harder, with the resources available to them.

On the investment side, respondents are focusing limited resources where they will have the greatest impact: threat detection, incident response, network infrastructure modernization and process automation. Forrester recommends that agencies rationalize their security stack to eliminate overlapping capabilities, adopt consolidated platform solutions such as Endpoint Detection and Response (EDR) or unified network security platforms and reduce one-off tool purchases that contribute to sprawl and complexity. Critically, agencies should plan for sustained lean operations rather than assume a return to pre-2025 staffing or budget levels. Redesigning operating models around automation, risk prioritization and efficiency will be the defining factor for resilient agencies.

The findings from this Forrester study make one thing clear: the agencies that will emerge strongest from this period of constraint are those that treat resource limitations not as a barrier, but as a forcing function for smarter, more deliberate security strategy. By concentrating investments in high-risk areas, embracing AI and automation and consolidating their security stack, Government cybersecurity teams can build a leaner, more resilient security posture that holds up under pressure, today and in the years ahead.

Download the full study, “Smarter Security for Leaner Budgets and Teams,” and join our webinar as Government and industry experts examine the key findings in depth and discuss the path forward.

A commissioned study conducted by Forrester Consulting on behalf of Carahsoft and Broadcom, March 2026.

Top 10 Autonomy and Robotics Events for Government in 2026 

Autonomy and robotics are reshaping how Government agencies approach defense, public safety, infrastructure and mission-critical operations. From Uncrewed Aerial Systems (UASs) and artificial intelligence (AI)-enabled platforms to geospatial intelligence (GEOINT) tools and autonomous maritime solutions, these technologies are accelerating innovation across every domain of the Public Sector. Carahsoft Technology Corp., The Trusted Government IT Solutions Provider®, is a leading resource for Government agencies navigating this rapidly advancing field, connecting agencies with a robust ecosystem of vendor partners and solutions tailored to the unique demands of defense, law enforcement and civilian missions. Below, we highlight the top autonomy and robotics events of 2026 where Carahsoft will be present to help Government professionals explore, evaluate and adopt the latest in autonomous technology. 

Sea-Air-Space 

April 19–22, 2026 | National Harbor, MD | In-Person Event 

Sea-Air-Space, hosted by the Navy League of the United States, is North America’s largest annual maritime defense exposition, drawing policy makers, senior military leaders, program managers and industry decision makers from across the sea services. The event spans four expansive exhibit hall experiences and 22 sessions—including keynotes, strategy luncheons and expert-led industry discussions—focused on the future of maritime, naval and defense operations. Government attendees will find timely value in sessions addressing AI and robotics for sustainment and manufacturing, naval IT modernization, cybersecurity for critical infrastructure and the Marine Corps’ evolving force structure. 

Carahsoft will showcase its aerospace and maritime technology solutions and partner ecosystem at Sea-Air-Space 2026, giving attendees direct access to innovative capabilities spanning autonomous systems, defense communications and advanced maritime technologies. Stop by Carahsoft’s booth (#415) at Sea-Air-Space and explore technologies from our 36 demoing partners. Our team will be on hand throughout the event to engage with naval and defense professionals on how Carahsoft’s trusted partnerships can support their mission requirements. 

GEOINT Symposium 

May 3–6, 2026 | Aurora, CO | In-Person Event 

Hosted annually by the United States Geospatial Intelligence Foundation (USGIF), the GEOINT Symposium is the nation’s foremost gathering of GEOINT professionals dedicated to advancing the GEOINT tradecraft across Government, industry, academia and professional organizations. The event explores the intersection of technology and national security, engaging experts and innovators to address challenges and opportunities in today’s complex geopolitical landscape. With more than 33 events across the program—including 14 dedicated sessions, morning and afternoon training tracks and rich networking opportunities—GEOINT 2026 provides exceptional value for professionals at the forefront of geospatial and autonomous intelligence. 

Sessions to look out for:  

  • Main Stage Panels: National security executives and industry professionals will discuss advancements redefining GEOINT, providing insights into the latest developments and future direction. 
  • Training Sessions: Participants can engage in hands-on training on topics such as mission planning, precision timing and navigation, enhancing their practical skills and knowledge in GEOINT applications. 

Carahsoft will have a strong presence at GEOINT 2026, featuring a pavilion (Booth #1823) with partner demos throughout the show. As intelligence agencies pursue enhanced situational awareness, precision analytics and real-time decision superiority, we remain focused on linking GEOINT professionals with capabilities that amplify mission effectiveness. Additionally, Carahsoft will host a networking reception offering an evening of food, music and networking. Check back for more details closer to the event!  

XPONENTIAL 2026 

May 11–14, 2026 | Detroit, MI | In-Person Event 

The Association for Uncrewed Vehicle Systems International’s (AUVSI) XPONENTIAL is the premier global event for uncrewed systems and autonomous technology, connecting professionals across the air, land, sea and space autonomy domains in one expansive program. The conference encompasses regulatory and policy sessions, technical workshops, live demonstrations and hundreds of exhibitors representing the full spectrum of autonomous capabilities available today. A standout addition for 2026 is the Law-Tech Connect Workshop (May 13–14), a co-located program bringing together legal, policy and technical leaders to navigate the evolving regulatory and legal landscape governing uncrewed and autonomous systems. 

Carahsoft will be exhibiting at XPONENTIAL 2026 at booth #34022 with live technology demonstrations from our autonomy and robotics vendor partners, offering Government attendees hands-on opportunities to explore mission-enabling solutions across multiple domains. Our team will be available throughout the event to help agencies identify and evaluate the technologies best suited to their operational requirements and compliance obligations. 

SOF Week 

May 18–21, 2026 | Tampa, FL | In-Person Event 

SOF Week is the leading annual conference for the international Special Operations Forces (SOF) community, jointly sponsored by U.S. Special Operations Command (USSOCOM) and the Global SOF Foundation. The event unites thousands of special operators, defense industry leaders and international partners around trailblazing capabilities, strategic priorities and next-generation technologies shaping the future of SOF missions.  

Sessions to look out for:  

  • ISR, GEOINT and Mission Planning Technologies  
  • SOF Interoperability and Multi-Domain Operations  
  • Emerging Technologies Supporting Tactical Decision-Making  

Carahsoft will host a pavilion (#633 – SOF Warrior Zone) at SOF Week, reinforcing our profound respect for operators who depend on superior GEOINT and technology advantages in high-stakes environments. Our team will collaborate with SOF professionals throughout the week to explore how geospatial innovations, autonomous systems and advanced communications enable mission success while keeping operators safe.  

Commercial UAV Expo 

September 1–3, 2026 | Las Vegas, NV | In-Person Event 

Commercial Unmanned Aerial Vehicles (UAV) Expo is one of the premier commercial drone events in North America, featuring dedicated education tracks, keynote presentations, breakout sessions and an expansive exhibit hall focused on the commercial integration of UAS technology across high-impact industries. The event addresses drone operations across various verticals, including energy, infrastructure, public safety and logistics, making it an essential gathering for Government professionals responsible for evaluating, adopting and managing UAS programs. Attendees gain valuable exposure to regulatory developments, emerging industry trends and real-world case studies that directly inform how agencies can leverage drone technology to enhance operations and achieve mission outcomes. 

Carahsoft will be present at Commercial UAV Expo 2026 with live technology demonstrations from select vendor partners, providing Government and Public Sector attendees direct access to innovative UAS capabilities and expertise. Our team looks forward to engaging with agencies navigating drone integration decisions and helping them connect with the right solutions through Carahsoft’s trusted partner network. 

AUSA Annual Meeting and Exposition 

October 12–14, 2026 | Washington, D.C. | In-Person Event 

The Association of the United States Army (AUSA) Annual Meeting and Exposition is the largest land power exposition and professional development forum in North America, designed to deliver the Army’s message by spotlighting organizational capabilities and a wide array of industry products and services. Over three days, attendees engage with State-of-the-Army presentations, panel discussions on military and national security subjects and extensive networking events that connect leaders across Government, industry and academia. For professionals focused on land power modernization and the evolving role of autonomous and robotic systems in ground operations, AUSA remains an indispensable annual event. 

Carahsoft will be at booth #4255 on the AUSA show floor, allowing Army and defense professionals to engage with our comprehensive portfolio of autonomy, robotics and defense technology solutions. Our team looks forward to connecting with mission-focused leaders to explore how Carahsoft’s trusted partner ecosystem can support land power modernization and the adoption of next-generation technologies across the force. 

FAA Drone and AAM Symposium 

November 2026 | Washington, D.C. | In-Person Event 

The Federal Aviation Administration (FAA) Drone and Advanced Air Mobility (AAM) Symposium brings together representatives from the FAA, Government agencies, international aviation experts, industry leaders and academia to accelerate the safe and efficient integration of drones and advanced air mobility platforms into the National Airspace System. Presenters and panelists address the latest developments in diverse drone applications and the regulatory path for advanced air mobility aircraft, including air taxis, into controlled and uncontrolled airspace. The symposium is a critical annual forum for shaping the frameworks and operational standards that will define the future of aviation, autonomous flight and airspace management across the United States. 

Carahsoft is actively exploring sponsorship and participation opportunities at the 2026 FAA Drone and AAM Symposium, reflecting our continued investment in the autonomous aviation community.  

More Events 

Geo Week 

February 16–18, 2026 | Denver, CO | In-Person Event 

Geo Week is a premier industry gathering that unites geospatial and mapping professionals, technologists and industry leaders to explore advancements in spatial intelligence, digital mapping, Light Detection and Ranging (LiDAR), reality capture, AI and machine learning (ML), mobile mapping, digital twins and integrated data workflows. With more than 50 conference sessions, keynotes, workshops, panel discussions and exhibit hall theater talks, the event delivers real-world applications across infrastructure, construction, transportation and emergency response. Government attendees will find value in sessions focused on UAS and drone integration for mapping and inspection, AI-driven geospatial workflows and Public Sector case studies highlighting practical outcomes across agencies. 

Carahsoft brought together our geospatial and autonomy technology partners to support Government attendees exploring the latest spatial intelligence solutions at Geo Week 2026. Our team discussed how Carahsoft’s vendor ecosystem can address agency needs in mapping, autonomous systems and actionable geospatial data. 

Drone Responders National Public Safety UAS Conference 

March 10–11, 2026 | Williamsburg, VA | In-Person Event 

The Drone Responders National Public Safety UAS Conference is a key annual event dedicated to advancing the use of UAS by first responders and public safety agencies. As a nonprofit-driven initiative, the conference serves as a hub for knowledge-sharing, best practices and innovative solutions tailored to the operational realities of emergency management and law enforcement. Sessions addressed critical topics including hurricane response operations, law enforcement tactical detection and mitigation, and new FAA public safety waivers—equipping attendees with actionable insights to strengthen their UAS programs. 

Carahsoft served as an Exhibitor Sponsor at this year’s conference, supporting the public safety community’s growing need for trusted UAS technology solutions. Our participation reflects Carahsoft’s long-standing commitment to equipping first responders and public safety agencies with the tools they need to protect communities and execute time-sensitive missions. 

Unmanned and Autonomous Systems Summit 

April 8–9, 2026 | Washington, D.C. | In-Person Event 

The 14th Annual Unmanned and Autonomous Systems Summit convenes key experts, decision makers and innovators from the Department of War (DoW), military services, industry and academia for in-depth dialogue on the advancements driving unmanned and autonomous technologies in military defense. As the battlespace becomes increasingly defined by drone dominance and the ability to produce, maneuver and sustain UASs at scale, this summit examines how the DoW is developing comprehensive drone guidance to ensure operational superiority, responsible integration and strategic deterrence.  

Sessions to look out for: 

  • Counter-UAS in Multi-Domain Operations 
  • Defense-Industrial Acceleration in Uncrewed Systems 
  • Emerging Autonomous Platforms for the Modern Warfighter 

Carahsoft participated as an Exhibitor Sponsor at the Unmanned and Autonomous Systems Summit, engaging directly with defense professionals who are shaping the future of uncrewed operations. Our team connected mission-focused attendees with our portfolio of autonomy and defense technology partners to help advance the capabilities of tomorrow’s warfighter. 

From battlefield autonomy and naval defense to public safety UAS programs and commercial drone integration, these events represent the full breadth of opportunities shaping the future of Government autonomy and robotics. Carahsoft is proud to be a trusted presence across this landscape, connecting Public Sector agencies with the technology solutions, vendor partnerships and expert insights needed to advance their missions in an era of rapid technological change.  

To learn more or get involved in any of the above events, please contact us at AutonomousTechMarketing@Carahsoft.com. 

For more information on Carahsoft and our industry-leading Autonomy and Robotics technology partners’ events, visit our Autonomy and Robotics solutions portfolio. 

The Importance of Securing the Software Supply Chain

Moving Upstream: The Evolution of Software Supply Chain Attacks

The software supply chain consists of multiple components, touching every piece of code from the moment of conception to the moment of deployment into a Government application. These components include third-party libraries, open source dependencies, build tools and software architecture, making the supply chain a valuable target for hackers.

The software supply chain threat landscape has evolved from a series of disjointed yet targeted attacks to a broader upstream poisoning strategy. Historically, malicious actors targeted specific agencies; today, they have shifted to targeting upstream public software libraries and repositories. These open source libraries are used by thousands of Government agencies, so a single compromise can cause untold damage. In the Public Sector, a compromised supply chain does not just mean a data leak—it can constitute a threat to national security.

Several real-world cyberattacks exemplify this pattern change, including the 2025 Shai-Hulud software supply chain attack and the 2025 GlassWorm Integrated Development Environment (IDE) extension cyberattack. Malicious actors contribute code that appears helpful to public open source projects but contains hidden backdoors or vulnerabilities, granting access to systems run by Government agencies.

Some hackers target the developer toolchain and IDE more broadly, as shown in the GlassWorm IDE extension cyberattack. GlassWorm was a self-propagating piece of malware whose initial infection vector was an extension downloaded from a popular IDE extension marketplace. Other malicious actors have targeted artificial intelligence (AI)-powered supply chains, taking advantage of the speed and power of AI to propagate sophisticated, multi-pronged threat campaigns against the developer ecosystem.

Setting Up for Success: Security Built Into the Process

In February 2022, the U.S. Government published the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF) to combat threats to the software supply chain. This publication divides its guidance into four main practice groups:

  • Preparing the organization
  • Protecting the software
  • Producing well-secured software
  • Responding to vulnerabilities

These groups shift the model away from fragmented security tools stitched together and toward a unified process in which security is baked directly into the developer’s workflow. For agencies, this framework provides a common language for developing a cohesive, secure and regulated software supply chain.

One of the ways developers can secure their supply chains is through Software Bills of Materials (SBOMs). SBOMs are essentially recipes for software; they outline all of the components inside a piece of software. SBOMs became required through Executive Order (EO) 14028, but creating them manually at the speed of modern DevSecOps is nearly impossible. Furthermore, as the Government manages risk and prepares for quantum-safe cryptography, the ability to support industry-standard and Federal compliance requirements for the Software Package Data Exchange (SPDX) and CycloneDX SBOM formats, which include Vulnerability Exploitability Exchange (VEX) and cryptographic information, is mandatory for mission success.
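To make the "recipe" idea concrete, the minimal sketch below builds a CycloneDX-style SBOM document as a Python dictionary. The component name, version and purl are hypothetical examples, and a real SBOM produced by tooling would carry many more fields (suppliers, hashes, VEX data); this only illustrates the basic shape of the format.

```python
import json

# Minimal sketch of a CycloneDX-style SBOM (illustrative only).
# The component name, version and purl below are hypothetical examples,
# not a complete or authoritative CycloneDX document.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "example-logging-lib",  # hypothetical dependency
            "version": "2.4.1",
            "purl": "pkg:pypi/example-logging-lib@2.4.1",
        }
    ],
}

def list_components(bom: dict) -> list:
    """Return 'name@version' strings for every component in the SBOM."""
    return [f"{c['name']}@{c['version']}" for c in bom.get("components", [])]

print(json.dumps(sbom, indent=2))
print(list_components(sbom))
```

Because the document is plain JSON, downstream tools can walk the `components` array to inventory exactly what shipped in a release.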

The automation of SBOMs affects multiple components of the software supply chain:

  • Real-Time Visibility: Agencies gain insight into every aspect of the software supply chain, from the deployment of a new line of code to the introduction of common vulnerabilities and exposures (CVEs) into their inventory.
  • Reach of Vulnerability: DevSecOps teams can examine a vulnerable part of a library and determine whether the vulnerable code is actually executed, the path to remediation and how to prioritize remediation efforts.
  • Continuous Compliance: Automated SBOMs ensure that every release is compliant with Federal standards without requiring a manual audit each time.
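The visibility and compliance benefits above can be sketched as an automated release gate: compare each SBOM component against a feed of known-vulnerable versions and block the release on a match. The `KNOWN_VULNERABLE` table and the CVE identifier are hypothetical; a real pipeline would query a vulnerability database rather than a hard-coded dictionary.

```python
# Sketch of an automated compliance gate over an SBOM's component list.
# The vulnerable-version table and CVE ID below are hypothetical examples.
KNOWN_VULNERABLE = {
    ("example-logging-lib", "2.4.1"): "CVE-2025-0001",  # hypothetical CVE
}

def check_sbom(components):
    """Return (name, version, cve) for each vulnerable component found."""
    findings = []
    for name, version in components:
        cve = KNOWN_VULNERABLE.get((name, version))
        if cve:
            findings.append((name, version, cve))
    return findings

# A hypothetical release's components, as extracted from its SBOM.
release = [("example-logging-lib", "2.4.1"), ("safe-lib", "1.0.0")]
findings = check_sbom(release)
if findings:
    print(f"Release blocked: {findings}")
else:
    print("Release compliant")
```

Running a check like this on every build is what turns an SBOM from static paperwork into continuous compliance.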

Beyond SBOMs, Federal agencies can focus on implementing other safeguards. Developing a curation process to vet open source libraries and components before they are ever downloaded is a critical first step. Agencies should examine potential application and service exposures, such as leaked credentials or backdoors in the software architecture. Additionally, securing the code at the binary level ensures that what was tested and developed is exactly what is run in production.
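A curation process like the one described can be reduced to a simple gate: a package is fetchable only if it has already passed agency review. The sketch below assumes a hypothetical allowlist and package names; real curation systems add version pinning, signature checks and provenance review on top of this basic idea.

```python
# Sketch of a curation gate: vet a requested open source package against an
# agency-maintained allowlist before it is ever downloaded. The allowlist
# contents and package names here are hypothetical examples.
APPROVED_PACKAGES = {"requests", "numpy"}  # packages that passed curation review

def is_download_allowed(package: str) -> bool:
    """Allow a download only if the package has passed the curation process."""
    return package in APPROVED_PACKAGES

for pkg in ("requests", "totally-legit-logger"):
    status = "allowed" if is_download_allowed(pkg) else "blocked (unvetted)"
    print(f"{pkg}: {status}")
```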

The JFrog Software Supply Chain Platform: All in One

From the inception of code to runtime during mission-critical operations, having a single platform that provides security and visibility across the Software Development Life Cycle (SDLC) is crucial. The JFrog Platform delivers both by focusing on universal binary management, supporting more than 30 package formats, including Docker, Maven and Python. JFrog Artifactory, JFrog’s universal artifact repository manager, manages these packages in one place, providing a single source of truth for developers who support mission-critical applications.

JFrog does not just scan the top layer for vulnerabilities and exposures; it scans deep into every dependency and sub-dependency within the binary to protect developer tools and infrastructure. Signed evidence at every gate creates end-to-end traceability from the developer’s IDE to edge deployment. The JFrog Platform is compatible with multiple network environments, from on-premises to hybrid to flexible multicloud strategies.

As the Government modernizes its approach to digital transformation, agencies need industry partners that provide visibility into the next frontier. Security starts and extends across the software supply chain, from the inception of the code at the binary level to the deployment of the application. The JFrog Platform delivers unprecedented trust assurance and risk mitigation through its signature binary-level security and positions its Public Sector customers and partners at the bleeding edge of innovation.

Explore JFrog’s DevSecOps solutions and how JFrog can protect Public Sector software supply chains from code to production.

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including JFrog, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.