Building Mission-Driven AI That Lasts: A Federal Agency Roadmap for Success

A recent Massachusetts Institute of Technology (MIT) study revealed that 95% of artificial intelligence (AI) pilot projects fail to deliver measurable results. For Federal agencies managing citizen data, classified information and critical infrastructure, this is not just a learning curve; it is a fundamental breakdown of how AI initiatives are conceived and executed. The disconnect between AI proliferation and AI success stems from a common pattern—agencies prioritizing tools over outcomes, launching disconnected pilots without enterprise alignment and lacking the governance structures to ensure accountability. The path forward requires a deliberate shift: starting with mission-driven use cases, building on clean, governed agency data and ensuring sustainable adoption through people-centered strategies.

Mission-Driven Use Case Development First, Technology Second

The fastest way to stall an AI initiative is to start with the technology instead of the mission. Too often, agencies approach AI adoption by asking “what can we do with generative AI (GenAI)?” rather than “what operational problem needs solving?” This approach yields pilots that work in limited scenarios but often fail to scale because the model, data and governance do not translate to enterprise-level needs. Strong AI use cases are not discovered after implementation; they are designed deliberately around mission outcomes and real operational constraints. Agencies should begin by defining a specific challenge or opportunity, whether it is a slow and manual process, resource-intensive workflows or error-prone operations. The critical test is simple: if success would not fundamentally change how the mission operates, it is not the right use case to prioritize.

Identifying stakeholders early is equally essential. Program owners, analysts, operators and leadership must validate whether AI will genuinely help or simply add noise to an already complex technology landscape. Agencies must also be explicit about outcomes—faster decisions, fewer errors, reduced backlogs, better procurement insights or reclaimed staff time. Without clearly articulated outcomes, measuring success or defining return on investment becomes impossible. A practical prioritization matrix can guide agencies in filtering use cases into four categories:

  • high-impact, high-effort investments for enterprise transformation
  • high-impact, low-effort quick wins ideal for pilots
  • low-impact distractions to avoid entirely
  • interesting but non-urgent projects to defer

By focusing on tightly scoped problems with clear ownership and contained risk, agencies can deliver meaningful pilots that demonstrate real value and build momentum for broader adoption.

Data Foundation and Governance as the Critical Success Factor

Most AI models in use today are generalized Large Language Models (LLMs) trained on public internet data. These models are faster to deploy and have lower upfront costs, making them attractive for proofs of concept. However, they lack understanding of an agency’s unique mission, culture and decision-making context. For lasting, mission-critical AI, agencies should consider Small Language Models (SLMs) trained on agency-specific data. These models are more energy efficient, more operationally reliable and more context-aware, making fewer mistakes. The challenge lies in fragmented data environments where records are spread across systems, formats and classification levels. This is where records management and data governance professionals become invaluable, helping to locate data and establish controls that transform data from a liability into a strategic asset.

AI learns directly from the data it was trained on and from how humans categorized it through reinforcement learning from human feedback. If the underlying information is disorganized, untagged or incomplete, the model will reproduce those flaws at scale. Properly governed, annotated and categorized data produces outputs that are accurate, explainable and trustworthy. Unstructured data—emails, PDFs, chat logs, memos, case files—represents roughly 80% of all agency information and contains the real story of mission operations. Yet most tools focus on structured data like databases and spreadsheets, missing the valuable context hidden in human-generated content. In-place data management addresses cost and security concerns by training and running models where data already lives, minimizing movement and preserving security boundaries. When Chief Data Officers (CDOs) and Chief AI Officers (CAIOs) collaborate under a shared governance model that includes Chief Information Security Officers (CISOs), Chief Information Officers (CIOs), legal teams and records leaders, innovation becomes both safer and faster because trust and accountability are built from the start.

The AI Failure Crisis and Its Root Causes

Federal AI adoption has accelerated faster than almost any other technology in Government history, yet this growth comes with significant risk. Currently, there is no Federal statute enacted by Congress to regulate AI across sectors, leaving agencies to rely on self-assessments and voluntary guidelines. The Office of Management and Budget (OMB) M-24-10 requires agencies to apply risk management and governance controls to high-impact AI systems, but without uniform standards for measuring impact or frameworks for compliance, agencies struggle to implement meaningful safeguards.

Many AI projects begin in isolation, driven by excitement about new tools or pressure to deliver results quickly, without engaging CIOs, CDOs or records management teams. Solutions may work adequately for limited use cases but lack the foundation to scale because governance, data quality and stakeholder alignment were afterthoughts rather than prerequisites. This pattern creates an explosion of activity with limited longevity, the very definition of a bubble. Experts report that Government is a generation behind industry in AI governance, a concerning gap given the sensitive citizen data, classified information and critical infrastructure at stake. If agencies rush to deploy AI without proper governance, they multiply the surface area for data errors, bias and compliance breakdowns. Expansion without oversight increases exposure rather than capability.

Sustainable Adoption Through People and Partnership

Even well-designed AI initiatives fail without sustained human engagement and vendor commitment. Vendors must remain engaged beyond initial implementation, continuing to train systems, monitor performance, incorporate feedback and deliver updates. If a vendor disappears after the sale, agencies are left without the support needed to refine and sustain their AI investments. This reinforces why starting with genuine use cases matters: when AI addresses tangible operational pain points, users are motivated to engage with and trust the technology.

Training cannot be a one-time orientation. Structured, continuous learning programs ensure that users understand not just the technology, but the workflows and data that feed it. Agencies should design AI for growth from the outset, building in governance controls, planning for scalability and considering reuse potential beyond the initial deployment. This “build once, reuse often” approach delivers efficiency gains and cost savings while making funding approval easier.

In an era where understanding how to learn has become the most essential skill, professionals must remain elastic and curious about topics that may fall outside traditional scopes, whether data governance for operational staff or technical architecture for mission leaders. By prioritizing mission-driven use cases, establishing robust data foundations, implementing governance as an enabler rather than a barrier and investing in people alongside technology, Federal agencies can move beyond experimental pilots to deliver AI that creates lasting, measurable impact.

To explore proven strategies for building mission-driven AI that lasts, watch ZL Technologies’ webinar, “From Noise to Impact: Building Mission-Driven AI in the Agency.”

The Year of Expansion for GenAI in Government

Generative AI (GenAI) is entering a pivotal new phase in 2026, marked by rapid advances in accuracy, reliability and mainstream integration. In 2025, GenAI became embedded in our everyday lives – from AI-generated overviews in search engines to classrooms adapting to powerful, readily accessible large language models. At the Federal level, 2025 White House guidance instructs agencies to push forward with AI infrastructure, building secure data centers to support the compute necessary to bring innovative, American-built AI into our most vital missions.

GenAI’s unique content generation capabilities can be used to increase efficiency and productivity in U.S. Government agencies in the form of chatbots, text-to-speech audio generation, AI task managers, coding assistance and other Natural Language Processing (NLP) models. With the rising momentum created by America’s AI Action Plan and increased budgets for AI in areas such as the Department of War (DoW) and the Department of Veterans Affairs (VA), 2026 is the year of expansion for GenAI.

Augmenting Agencies in Task Execution

In Government agencies, GenAI commonly removes routine and repetitive workflows, freeing up users to focus on strategic tasks. GenAI works best in mission-support roles, supplementing human work by improving written communication, increasing the efficiency of accessing information, enabling program status tracking and more. Personalized learning paths and AI assistants can augment current roles.

There are various use cases for GenAI. Program-specific examples include:

  • Defense
    • The DoW has deployed GenAI.mil – a secure, bespoke platform that leverages generative AI to enhance efficiency, speed and operational effectiveness in our most critical defense and national security missions.
  • FEMA & NOAA
    • In severe weather and disaster situations, GenAI has been used to perform tasks like weather and disaster prediction and response. Some GenAI models have even been more accurate than traditional deterministic models, suggesting GenAI has a strong use case in research and science.
  • GSA
    • GSA has launched USAi, a secure GenAI evaluation suite that has helped employees draft emails, generate code and summarize documents.
  • The Department of Veterans Affairs
    • GenAI has been used to automate various medical imaging processes to enhance veterans’ diagnostic services.
  • Healthcare & Department of Health and Human Services
    • Generative AI has enabled healthcare systems to enhance medical images, generate molecular structures for potential drugs and create realistic patient data for AI training.
    • To support containment of the poliovirus, the Department of Health and Human Services initiated an effort to use GenAI to extract information from publications and identify outbreaks in areas previously thought to be polio-free.

Procurement of GenAI solutions is being simplified and expedited by the Federal Government, increasing agencies’ ability to use innovative solutions to solve complex problems. GSA’s OneGov strategy delivers generative AI to the government by removing a major barrier to AI adoption: cost. Through the OneGov agreements, popular GenAI solutions are available for $1, and agencies are given the opportunity to experiment with AI and see what works best for their specific use cases. This strategy aligns with America’s broader AI policy framework – allowing agencies to take advantage of the speed, automation and modernization capabilities provided by AI. Carahsoft’s dedicated OneGov page serves as a centralized resource for determining product availability and identifying procurement pathways.

Federal Guidance for AI Usage

GenAI is already being used successfully in the US Government, and recent Federal guidance cements AI’s place in Government operations. 2025 executive orders (EOs), such as “Removing Barriers to American Leadership in Artificial Intelligence,” pave the way for increased usage of the technology. See below for an overview of relevant generative AI-focused memos and EOs released in the last few months.

Launching the Genesis Mission – November 24, 2025

The Genesis Mission places AI at the forefront of scientific and economic growth and calls for an integrated platform to enable AI-automated research and discovery. The next wave of Federal AI will prioritize scalable compute orchestration, secure model training environments, hypothesis-testing AI agents, supply-chain rigor and measurable national return on investment – evaluated by acceleration in discovery velocity, compressed innovation cycles and compounding mission impact, not extended pilots.

Ensuring a National Policy Framework for Artificial Intelligence, December 11, 2025

This EO builds on the previously established framework by ensuring that state-by-state regulatory laws do not act as barriers to fast AI adoption and that ideological bias is not embedded into AI tools used within each state. By creating a unified framework, the EO aims to position America to win the AI race.

M-26-04: Increasing Public Trust in AI Through Unbiased AI Principles, December 11, 2025

In response to Executive Order 14319, OMB released M-26-04, which establishes two principles for unbiased AI: that it is truth-seeking and that it is ideologically neutral. All LLMs procured by a Government agency must abide by the unbiased AI requirements established in this memo.

Transforming the Defense Innovation Ecosystem to Accelerate Warfighting Advantage, January 9, 2026

This DoW memo formalizes AI as a core warfighting capability across DoW operations and streamlines integration and acceleration of adoption.

War Department’s AI Acceleration Strategy to Secure American Military AI Dominance, January 11, 2026

The DoW’s January 2026 memo outlines its AI dominance strategy. It calls for establishing an AI-first warfighting force – echoing earlier EOs and removing barriers that would hinder the adoption of practical, mission-first AI solutions for the DoW. It highlights the previously mentioned GenAI.mil program, which provides direct access to leading GenAI solutions for the DoW, enhancing the speed and ease of AI adoption.

Department of War’s Arsenal of Freedom Tour, January 2026

A new “AI SWAT Team,” led by the CDAO, is charged with removing barriers and increasing data sharing to speed up AI deployment. The DoW’s AI strategy and the SWAT Team enforcing it show that the Department’s measure of AI success is how fast usable data reaches operational systems. Organizations that improve data access, quality and interoperability will be able to maintain strategic advantage.

Recent guidance establishes a framework for AI adoption and usage, enabling fast, common-sense deployment to ensure America wins the AI race. While agencies are encouraged to push forward, they must maintain the highest levels of security.

Building the Foundation for Successful Generative AI in Government

As Generative AI moves beyond pilot programs and into operational use, agencies must ensure these systems meet the established requirements for security, reliability and data protection. Because GenAI dynamically generates content, it must be deployed within secure environments where sensitive information remains protected and outputs are grounded in trusted data sources. Federal guidance emphasizes strong governance, secure infrastructure and validation mechanisms to ensure AI-generated outputs remain accurate and mission-relevant. With these controls in place, agencies can scale Generative AI to support mission execution while maintaining full confidence in the integrity of their systems and data.

Current Federal recommendations include utilizing and onboarding:

  • Risk management solutions
  • On-prem and cloud data security
  • Impact Level (IL) 5 and 6 security standards for mission-critical or classified information
  • Air gapping, which physically isolates computer systems and networks to prevent breaches
  • Model Context Protocol (MCP), an open standard for connecting AI applications to external systems and data sources
  • Zero Trust Architecture (ZTA), a security strategy that continuously verifies users and devices before granting access to network resources
  • Data governance for Retrieval-Augmented Generation (RAG), which enables content filtering and identity validation, as illustrated in the sketch below
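
To make the last item concrete, the minimal sketch below shows how a RAG retrieval step might enforce content filtering and identity validation before any document reaches the model. It is a generic Python illustration under stated assumptions: the document fields, clearance levels and keyword-overlap scoring are invented for the example, not drawn from any specific product.

from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    classification: str                 # e.g. "public", "cui", "secret"
    allowed_roles: set = field(default_factory=set)

CLASSIFICATION_RANK = {"public": 0, "cui": 1, "secret": 2}

def authorized(doc: Document, user_clearance: str, user_roles: set) -> bool:
    """Identity validation: the user needs sufficient clearance and, if the
    document is role-restricted, at least one matching role."""
    rank_ok = CLASSIFICATION_RANK[user_clearance] >= CLASSIFICATION_RANK[doc.classification]
    role_ok = not doc.allowed_roles or bool(doc.allowed_roles & user_roles)
    return rank_ok and role_ok

def retrieve_for_rag(query: str, corpus: list, user_clearance: str, user_roles: set, k: int = 3) -> list:
    """Content filtering: rank by naive keyword overlap, but only among
    documents the requesting user is authorized to see."""
    allowed = [d for d in corpus if authorized(d, user_clearance, user_roles)]
    terms = set(query.lower().split())
    return sorted(allowed,
                  key=lambda d: len(terms & set(d.text.lower().split())),
                  reverse=True)[:k]

if __name__ == "__main__":
    corpus = [
        Document("1", "Benefits claims backlog report for Q3", "cui", {"claims_analyst"}),
        Document("2", "Public FAQ on filing a benefits claim", "public"),
        Document("3", "Incident summary for the classified network", "secret", {"security_officer"}),
    ]
    # An analyst with CUI clearance never sees document 3, regardless of relevance.
    context = retrieve_for_rag("claims backlog", corpus, "cui", {"claims_analyst"})
    print([d.doc_id for d in context])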

Agencies are strongly encouraged to draw on guidance from reputable experts, including the National Institute of Standards and Technology (NIST), whose AI Risk Management Framework (RMF) offers a proven foundation for responsible adoption. In addition to technical protocols, it is helpful to keep a human in the loop to audit and observe GenAI output, minimizing chatbot errors. Cybersecurity risks, including data poisoning, data leakage and hallucinations, must be actively monitored to ensure models operate safely and consistently across Government missions.

Keeping security at the forefront is vital for GenAI’s success in Government. With thoughtful governance and strong safeguards, GenAI can advance agency missions without compromising security. The stakes are high, but so is the opportunity.

As The Trusted IT Solutions Provider for Government™, Carahsoft offers a comprehensive portfolio of AI and GenAI solutions designed to meet the unique security, compliance and operational requirements of Federal, State and Local Government agencies. From secure on-premises deployments to cloud-based platforms that meet Impact Level 5 and 6 standards, Carahsoft’s technology partners deliver the tools agencies need to implement AI responsibly and effectively.

Visit Carahsoft’s AI Solutions portfolio to explore GenAI platforms, risk management frameworks and Zero Trust security solutions that align with Federal guidance and support mission-critical operations.

Explore OneGov offerings available through Carahsoft.

Contact Carahsoft’s AI team to discuss how GenAI can transform your agency’s workflows while maintaining the highest security standards.

From Pilot to Production: Operationalizing Healthcare GenAI in Secure Multicloud Environments

Healthcare organizations are under immense pressure from shrinking margins, tightening regulations, rising patient expectations and increasingly complex data environments. While generative artificial intelligence (GenAI) has emerged as a powerful tool, most healthcare systems still struggle to move from experimentation to measurable outcomes. Leaders are asking the same questions: Where do we start? How do we ensure security and compliance? How fast should the Return on Investment (ROI) appear?

The answer is not simply selecting a model; it is building the strategy and infrastructure that transform AI from a promising pilot into an enterprise engine for clinical, operational and financial improvement.

Start With High-Impact Use Cases that Deliver Early ROI

The path to operationalizing GenAI begins with use cases that are narrow enough to implement quickly, but meaningful enough to prove value. Start where measurable gains are most attainable, such as document processing, contract review, claims analysis, compliance workflows and call center optimization.

One of the strongest early candidates is Protected Health Information (PHI) de-identification, where AI can accelerate research access while protecting privacy. Many organizations are also applying GenAI to claims review, using models to flag missing attachments, coding inconsistencies or errors that commonly drive costly denials. With first-pass denial rates hovering in the 17–25% range industry-wide, automating this analysis can generate immediate financial return.
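
To make the claims-review idea concrete, the minimal sketch below runs a few deterministic pre-submission checks of the kind a GenAI-assisted review might surface. The field names, required attachments and rules are hypothetical, chosen only to illustrate flagging missing attachments and malformed codes before a claim is filed; they do not represent any payer's or vendor's actual rule set.

import re

REQUIRED_ATTACHMENTS = {"itemized_bill", "physician_notes"}                # hypothetical payer rule
ICD10_PATTERN = re.compile(r"^[A-TV-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")   # illustrative format check

def review_claim(claim: dict) -> list:
    """Return human-readable flags for issues that commonly drive first-pass denials."""
    flags = []
    missing = REQUIRED_ATTACHMENTS - set(claim.get("attachments", []))
    if missing:
        flags.append("missing attachments: " + ", ".join(sorted(missing)))
    for code in claim.get("diagnosis_codes", []):
        if not ICD10_PATTERN.match(code):
            flags.append(f"malformed ICD-10 code: {code}")
    if claim.get("requires_prior_auth") and not claim.get("prior_authorization"):
        flags.append("prior authorization required but not on file")
    return flags

if __name__ == "__main__":
    claim = {
        "attachments": ["itemized_bill"],
        "diagnosis_codes": ["E11.9", "BADCODE"],
        "requires_prior_auth": True,
        "prior_authorization": None,
    }
    for flag in review_claim(claim):
        print(flag)          # flags feed a worklist that billing staff review before submission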

These targeted wins build executive confidence, secure budget and create organizational momentum, which is critical before expanding to more complex clinical or patient-facing scenarios.

Build Trust by Grounding the Model in Your Own Data

Accuracy and trust determine whether healthcare AI is adopted or ignored. General-purpose models are not sufficient for healthcare, where language is deeply nuanced and context dependent. Instead, organizations should ground GenAI in their own governed data sources, such as Electronic Health Records (EHRs), Customer Relationship Management (CRM) platforms, care summaries, research documents or internal policies.

To achieve this, many leaders are adopting Retrieval-Augmented Generation (RAG) with vector databases, which allows models to pull precise information from internal systems in real time. Vector databases are a foundational accelerator, enabling faster, more accurate retrieval across structured and unstructured data. This approach delivers three business advantages:

  1. Higher accuracy and confidence in model responses
  2. Stronger control of PHI and sensitive data
  3. Traceability, which is essential for audits, appeals and clinical validation

Grounding the model in an organization’s own data turns GenAI from a creative tool into a trusted operational system.
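
The pattern can be sketched in a few lines of Python. The example below substitutes a toy bag-of-words similarity for a real embedding model and vector database, and the document snippets are invented, but it shows the two steps that matter: retrieving governed passages for a question and building a prompt that cites its sources so answers stay traceable.

import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real deployment would use a learned
    embedding model and store the vectors in a managed vector database."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, documents: list, k: int = 2) -> list:
    """Retrieval step: return the k documents most similar to the question."""
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d["text"])), reverse=True)[:k]

def build_prompt(question: str, passages: list) -> str:
    """Grounding step: cite retrieved passages by source ID, preserving the
    traceability needed for audits, appeals and clinical validation."""
    context = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    return f"Answer using only the sources below and cite them.\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    docs = [   # invented snippets standing in for governed EHR and policy content
        {"source": "policy-017", "text": "Prior authorization is required for outpatient MRI studies."},
        {"source": "denials-faq", "text": "Claims missing an itemized bill are denied on first pass."},
        {"source": "ehr-summary-442", "text": "Patient discharged with cardiology follow-up within 14 days."},
    ]
    passages = retrieve("Why was the outpatient MRI claim denied?", docs)
    print(build_prompt("Why was the outpatient MRI claim denied?", passages))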

Use a Secure Multicloud Strategy to Reduce Risk and Increase Agility

To operationalize GenAI responsibly, healthcare organizations should design for security, compliance and flexibility from day one. By separating PHI and non-PHI workloads, a multicloud strategy helps healthcare organizations:

  • Isolate sensitive data to minimize breach impact and simplify governance
  • Reduce lock-in risk and leverage the strengths of different cloud platforms
  • Tap into more innovative options, since each cloud offers unique AI tooling
  • Optimize cost and performance by matching workloads to the right environment

Multicloud design also supports stronger compliance postures by enabling auditability, identity controls, monitoring and bias/hallucination safeguards, all of which must be proven to regulators and accrediting bodies.

Avoid “Pilot Purgatory” and Build a Path to Production

Many healthcare AI programs fail not because the technology underperforms, but because the organization never assigns ownership or a path to scale. To prevent “pilot purgatory,” in which short-term projects drag on without measurable outcomes, organizations should:

  • Create a defined production roadmap before the pilot begins
  • Empower a cross-functional AI Center of Excellence (COE) to own outcomes
  • Secure both clinical and administrative stakeholders
  • Treat GenAI as an enterprise capability, not a one-off project

This shift enables the same investment to support multiple use cases, expanding impact while lowering cost per interaction over time.

Continuously Measure, Optimize and Expand

An operational GenAI program is never “set it and forget it.” It is important to continuously track Key Performance Indicators (KPIs) to guide optimization and justify expansion. Recommended KPIs include:

  • Cost per interaction
  • Accuracy and confidence
  • Time saved per task or workflow
  • Time to response (latency and model speed)
  • User satisfaction (providers, staff and patients)

By evaluating these metrics regularly, healthcare organizations can expand from early wins to enterprise scale, from research and development to patient support, revenue cycle, compliance and beyond.
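
As a minimal illustration of how such tracking might look in practice, the sketch below aggregates a per-interaction log into the KPIs listed above. The record fields, token-based cost rate and sample values are assumptions made for the example, not benchmarks from the webinar or any particular deployment.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Interaction:
    tokens_in: int
    tokens_out: int
    latency_s: float
    correct: bool            # from human review or gold-standard checks
    minutes_saved: float     # versus the manual baseline for the task
    satisfaction: int        # 1-5 survey score from the provider, staff member or patient

COST_PER_1K_TOKENS = 0.002   # hypothetical blended rate; substitute actual pricing

def kpi_report(log: list) -> dict:
    """Roll a per-interaction log up into program-level KPIs."""
    return {
        "cost_per_interaction": mean((i.tokens_in + i.tokens_out) / 1000 * COST_PER_1K_TOKENS for i in log),
        "accuracy": mean(1.0 if i.correct else 0.0 for i in log),
        "avg_minutes_saved": mean(i.minutes_saved for i in log),
        "avg_latency_s": mean(i.latency_s for i in log),
        "avg_satisfaction": mean(i.satisfaction for i in log),
    }

if __name__ == "__main__":
    log = [
        Interaction(1200, 400, 2.1, True, 12.0, 5),
        Interaction(900, 350, 1.8, True, 8.0, 4),
        Interaction(1500, 600, 3.2, False, 0.0, 3),
    ]
    print(kpi_report(log))   # reviewed at each expansion decision point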

Align People, Data and Infrastructure For AI Success

Technology alone is not the determining factor of AI success in the healthcare space; alignment is. Success requires a shared vision from leadership, responsible data groundwork, a secure multicloud foundation and continuous measurement to maintain trust and value. With the right approach, GenAI can improve patient satisfaction, strengthen trust, accelerate research and innovation, reduce administrative burden and deliver measurable ROI in weeks rather than years.

Carahsoft and John Snow Labs help healthcare leaders accelerate this journey, combining secure infrastructure, domain-specific healthcare AI and proven deployment models. To explore how your organization can operationalize GenAI safely and effectively, watch the full webinar, “Lessons Learned from Harnessing Healthcare Generative AI in a Hybrid Multi-Cloud Environment.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including John Snow Labs, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Strengthening Cybersecurity in the Age of Low-Code and AI: Addressing Emerging Risks

As new technologies like low-code/no-code development and generative AI (GenAI) revolutionize how we build and interact with software, they also bring about new security challenges—especially for the public sector. Protecting sensitive information and online accounts is more critical than ever, as cybercriminals look to exploit gaps in these emerging systems. Ensuring robust security and threat visibility is now essential for safeguarding against the risks associated with these advancements, especially as traditional safeguards become less effective in the face of evolving threats.


Low-code Development Exposes New Risk

One of the unintended consequences of our shift to a low-code/no-code development paradigm is the delegation of complex development tasks to Large Language Models (LLMs) and GenAI systems, often bypassing seasoned developers and architects. This opens new opportunities for cybercriminals. These systems excel at functional requirements—‘Build me a website that accepts customer checkout requests’—but they rarely infer non-functional needs, like security, unless explicitly instructed.

In traditional software development, security considerations are often implicit, stemming from the experience of developers and architects who’ve spent years learning from real-world failures. GenAI, however, lacks this depth of experience and focuses narrowly on the task at hand. The result? Incomplete or inadequate security measures in software developed through these systems. As organizations lean more heavily on GenAI, we risk creating an insecure software ecosystem ripe for exploitation by threat actors.


The Proliferation of Knowledge-Based Verification Attacks

We’re on the brink of a surge in automated attacks exploiting vulnerabilities in Knowledge-Based Verification (KBV) systems. Account creation and password reset processes often rely on KBV—such as answering questions about your mother’s maiden name or the street you grew up on—but this information is increasingly accessible to malicious actors. Large-scale data breaches, like the one that exposed millions of Social Security numbers last year, are eroding the effectiveness of this approach to confirming identity.

As these personal details become more widely available through data breaches and online marketplaces, attackers can easily bypass KBV systems. Worse yet, threat actors can now leverage LLMs to develop sophisticated tools to mine personal data at scale and orchestrate automated attacks against these KBV systems. Organizations face an urgent challenge: how to protect accounts in a world where traditional KBV methods are no longer secure or reliable while still offering users a legitimate path to create an account or regain access when needed.


LLM Safeguards Can Be Overridden or Bypassed by Running Models Locally

With the proliferation of local LLM instances and tools like Ollama, we’ll see safeguards embedded in commercial LLMs eroded or bypassed entirely. Running models locally can allow threat actors to fine-tune them, removing restrictions on malicious activity and enabling custom models optimized for cybercrime. This creates a new frontier for scaled attacks that are faster, more targeted, and harder to detect until it’s too late.

Imagine a threat actor fine-tuning a model to craft phishing campaigns, identify vulnerabilities in software, or automate account takeovers. The ability to localize and modify these models fundamentally shifts the balance, empowering attackers with tools tailored to their malicious intent. The guardrails built into commercial LLMs are no match for this growing trend, amplifying the need for robust detection and defense strategies at every level.

As the public sector continues to adopt innovative technologies, staying ahead of emerging cyber threats is crucial. The increasing sophistication of attacks, such as those targeting KBV systems and leveraging GenAI, highlights the need for stronger protections. By prioritizing comprehensive security measures and threat detection, organizations can mitigate the risks of these evolving vulnerabilities and safeguard their sensitive data and online accounts against malicious actors. It is essential to build and maintain resilient security strategies to ensure the integrity of digital infrastructures in this rapidly changing environment.


To learn more about how HUMAN Security helps the public sector protect citizen accounts, sensitive information, and critical infrastructure, click here.


Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including HUMAN Security, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Exploring the Future of Healthcare with Generative AI

Artificial intelligence (AI) is an active field of research and development with numerous applications. Generative AI, a newer technique, focuses on creating content—learning from large datasets to generate new text, images and other outputs. In 2024, many healthcare organizations are embracing generative AI, particularly for chatbots. Chatbots, which facilitate human-computer interactions, have existed for a while, but generative AI now enables more natural, conversational exchanges that closely mimic human interactions. Generative AI is not a short-term investment or a passing trend; it is a decade-long effort that will continue to evolve as more organizations adopt it.

Leveraging Generative AI

When implementing generative AI, healthcare organizations should consider areas to invest in, such as employee productivity or supporting healthcare providers in patient care.

Key factors to consider when leveraging generative AI:

  1. Use case identification: Identify a challenge that generative AI can solve, but do not assume it will address all problems. Evaluate varying levels of burden reduction across use cases to determine its value.
  2. Data: Ensure enough data is available for generative AI to provide better services. Identify inefficiencies in manual tasks and ensure data compliance, as AI results depend on learning from data.
  3. Responsible AI: Verify that the solution follows responsible AI guidelines and Federal recommendations. Focus on accuracy, addressing hallucinations—instances where the model provides incorrect information, such as responses that are grammatically correct but nonsensical or outdated.
  4. Total cost of ownership: Generative AI is expensive, especially regarding hardware consumption. Consider if the same problem can be solved with more optimized models, reducing the need for costly hardware.

Harnessing LLMs for Healthcare

Natural language processing (NLP) has advanced significantly in recent decades, heavily relying on AI to process language. Machine learning, a core concept of AI, enables computers to learn from data using algorithms and draw independent conclusions. Large language models (LLMs) combine NLP, generative AI and machine learning to generate text from vast language datasets. LLMs support various areas in healthcare, including operational efficiency, patient care, clinical decision support and patient engagement post-discharge. AI is particularly helpful in processing large amounts of structured and unstructured data, which often goes unused.

When implementing AI in healthcare, responsible AI and data compliance are crucial. Robustness refers to how well models handle common errors like typos in healthcare documentation, ensuring they can accurately interpret how providers write and speak.

Fairness, especially in addressing biases related to age, origin or ethnicity, is also critical. Any AI model must avoid discrimination; for instance, if a model’s accuracy for female patients is lower than for males, the bias must be addressed. Coverage ensures the model understands key concepts even when phrasing changes.
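
One simple way to make that check concrete is to compare accuracy across subgroups, as in the sketch below. The records and attribute names are hypothetical; a production fairness evaluation would cover more attributes, metrics and statistical tests.

from collections import defaultdict

def subgroup_accuracy(records: list, attribute: str = "sex") -> dict:
    """Compare accuracy across values of a protected attribute; a persistent
    gap between groups signals bias that must be investigated and addressed."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        group = r[attribute]
        totals[group] += 1
        hits[group] += int(r["prediction"] == r["label"])
    return {group: hits[group] / totals[group] for group in totals}

if __name__ == "__main__":
    records = [   # hypothetical evaluation records, not real patient data
        {"sex": "F", "prediction": "readmit", "label": "readmit"},
        {"sex": "F", "prediction": "no_readmit", "label": "readmit"},
        {"sex": "M", "prediction": "readmit", "label": "readmit"},
        {"sex": "M", "prediction": "no_readmit", "label": "no_readmit"},
    ]
    print(subgroup_accuracy(records))   # e.g. {'F': 0.5, 'M': 1.0} -> a gap worth addressing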

Data leakage is another concern. If training data is poorly partitioned, it can lead to overfitting, where the model “learns” answers instead of predicting outcomes from historical data. Leakage can also expose personal information during training, raising privacy issues.
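
A common guard against this kind of leakage is to partition at the patient level rather than the record level, so the same patient never appears in both training and test data. The sketch below shows the idea with invented records and field names; it is illustrative, not a complete data-splitting pipeline.

import random

def patient_level_split(records: list, test_fraction: float = 0.2, seed: int = 0):
    """Split by patient ID so no patient appears in both sets; splitting
    individual notes instead lets the model memorize patients, inflating
    test scores (leakage) rather than measuring real predictive power."""
    patient_ids = sorted({r["patient_id"] for r in records})
    random.Random(seed).shuffle(patient_ids)
    cutoff = int(len(patient_ids) * (1 - test_fraction))
    train_ids = set(patient_ids[:cutoff])
    train = [r for r in records if r["patient_id"] in train_ids]
    test = [r for r in records if r["patient_id"] not in train_ids]
    return train, test

if __name__ == "__main__":
    records = [{"patient_id": f"p{i // 3}", "note": f"visit {i}"} for i in range(30)]   # invented
    train, test = patient_level_split(records)
    assert not ({r["patient_id"] for r in train} & {r["patient_id"] for r in test})
    print(len(train), len(test))   # e.g. 24 training records, 6 test records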

LLMs are often expensive, but healthcare-specific models outperform general-purpose ones in efficiency and optimization. For example, healthcare-specific models have shown better results than GPT-3.5 and GPT-4 in tasks like ICD-10 extraction and de-identification. Each model offers different accuracy and performance depending on the use case. Organizations must decide whether a fine-tuned, task-specific model or a general-purpose model used in a zero-shot setting is more suitable.

Buy Versus Build

When it comes to the “buy versus build” decision, the advantage of buying is the decreased time to production compared to building from scratch. Leveraging a task-specific medical LLM that a provider has already developed costs a healthcare organization about 10 times less than building its own solution. While some staff will still be needed for DevOps to manage, maintain and deploy the infrastructure, overall staffing requirements are much lower than if building from the ground up.

Even after launch, staffing requirements are not expected to decrease: LLMs continuously evolve, requiring updates and feature enhancements. Even so, while in production, software maintenance and support costs are significantly lower—about 20 times lower—than trying to train and maintain a model independently. Many organizations that build their own healthcare model quickly realize training is extremely costly in terms of hardware, software and staffing.

Optimizing the Future of Healthcare

When deciding on healthcare AI solutions, especially with the rise of generative AI, every healthcare organization should assess where to begin by identifying their pain points. They must ensure they have the data required to train AI models to provide accurate insights. Healthcare AI is not just about choosing software solutions; it is about considering the total cost of ownership for both software and hardware. While hardware costs are expected to decrease, running LLMs remains a costly endeavor. If organizations can use more optimized machine learning models for specific healthcare purposes instead of LLMs, it is worth considering from a cost perspective.

Learn how to implement secure, efficient and compliant AI solutions while reducing costs and improving accuracy in healthcare applications in John Snow Labs’ webinar “De-clutter the World of Generative AI in Healthcare.”

Discover how John Snow Labs’ Medical Chatbot can transform healthcare by providing real-time, accurate and compliant information to improve patient care and streamline operations.

Generative AI: Improving Efficiency for SLED Agencies

Today, users engage with generative AI like a personal assistant, granting it access to their personal calendars and assigning it tasks such as making dinner reservations to make life easier. On the professional level, employees turn to AI to expedite difficult or repetitive tasks and make their work easier. By educating employees on the security ramifications of generative AI, and by properly implementing it into their agency, State and Local Government and Education Market (SLED) decision makers can accelerate and improve their day-to-day processes.

Updated Security Parameters

When it comes to sensitive data, agencies and individuals should always maintain a broad scope of vigilance. With generative AI, agencies need to consider who has access to that information, and which adversaries may potentially exploit that information.

Employees should be trained to spot red flags and use AI safely. With the increase in deep fakes, such as voice masking or impersonation, employees need to be able to spot suspicious phone calls and videos. With proper training to detect and report these instances, employees can help prevent hacking attempts. It is difficult to prevent employees from using generative AI, even in specific scenarios where sensitive data is present. Agencies should therefore switch to sanctioned vendors, which gives them access to fully tracked logs. It is critical to prevent sensitive information from passing into public AI, where it will be shared with others.

By design, AI is a black box. While agencies and users cannot know what goes on between input and output, they should only trust generative AI packages that have dependable service hosts. Agencies, especially SLED agencies that handle sensitive information, need to be guaranteed that their data will remain contained by reliable parent companies. By negotiating through contract vehicles, agencies can maintain visibility over the flow of data by learning whether their information is being retained and for how long.

Saving Time with Generative AI

Some of the first generative AI models were built for translation machines such as Google Translate. Many services, such as Zoom, employ generative AI as plugins that transcribe language in real time for the appropriate audience. These models initially generated very literal translations; however, intent and context in communication are critical. Users often go to third-party generative AI models to translate emails or web pages, trusting their automation capabilities to understand and mirror context and intent better than the built-in translation services that many legacy software features offer.

Generative AI can help with drafting emails, broadcasting information, meeting deadlines and responding to agents, ultimately expediting processes. This can be especially helpful with overworked translators. While generative AI works to complete the main translations, the workers can focus on reviewing translations, expediting and perfecting the process. While there will ultimately always be a need for human interaction from a promotional, proofreading and understanding perspective, generative AI can speed up communication.

Generative AI can reduce the number of steps users take. By leading users from step A to step C, bypassing the difficult or time-consuming step B, generative AI keeps users on track. And for models trained on a SLED agency’s own data, users can always reference internal documents if questions arise. This scales back the amount of busy work, reducing time spent finding information. Generative AI can also expedite the synthesis of search data. In the past, search engines could locate documents for agencies. Now, agencies going through SLED records can not only find the document itself but also find the information within it and analyze that information before returning it to the user.

By accelerating the day-to-day tasks of employees, generative AI frees up creative minds to complete more vital, thorough and intricate projects, improving utility.

AI has been integral to Broadcom’s product solutions in user and enterprise IT. When properly implemented, generative AI can enhance technology, cybersecurity, analytics and productivity. To learn more about how Broadcom can help implement secure generative AI in SLED spaces, view Broadcom’s SLED focused cybersecurity solutions.

EdTech Talks: Modernizing Education with Artificial Intelligence and Machine Learning

Schools must embrace change alongside their growing generations to equip students for the future. Artificial intelligence (AI) and machine learning (ML) are two evolving, expansive technologies that are creating a monumental impact in the private and Public Sector, with education institutions being no exception. At Carahsoft’s annual EdTech Talks Summit, education leaders explored how AI and ML are changing the way teachers instruct, the way students learn and the way administrators approach technology in schools.

As a baseline, when considering AI for K-12 and higher education, administrators should follow several guiding principles for responsible and trustworthy use of AI.

  • Human-centricity: Promote human well-being, individuality and equity
  • Inclusivity: Ensure accessibility and diverse perspectives
  • Accountability: Proactively identify and mitigate adverse impacts
  • Transparency: Instruct students and teachers on proper usage, including potential risks and how decisions are made
  • Robustness: Operate reliably and safely while enabling mechanisms that assess and manage potential risks
  • Privacy and security: Respect the privacy of data subjects

Generative AI in Education

Generative AI is still fairly new to the education space and educators are on both sides of the spectrum of acceptance—some prefer to erase it from their schools while others are open to embracing the up-and-coming technology for use cases not only in the classroom, but also to prepare students for the future workforce.

For example, one of the first technologies educators may be inclined to use when adopting AI in the classroom is detection tools. Dr. Anand Rao, Professor of Communications and Chair of the Department of Communications and Digital Studies at the University of Mary Washington in Virginia, recommends against implementing this technology because it could negatively affect vulnerable students. AI detection is not 100% correct in every instance. For some students, English may not be their first language, and a detection tool could potentially identify their work as AI generated because it may be more formulaic. While detection tools can be utilized in a positive way to ensure honesty is upheld within students’ work, teachers and professors should use their discretion when interpreting the results of detection tools.

AI literacy is one of the most important principles for instructors to explore, deliberate and establish guidelines for. Since generative AI platforms such as ChatGPT and other tools like detection programs are still modernizing, students and faculty should go through a test period to learn how they work and understand whether they are comfortable utilizing them. As a next step, IT teams must be prepared to begin implementation and consider cybersecurity in that process.

Analytics and Data in AI

Education data grows exponentially with each new school year; however, collecting, evaluating and taking action based on the insights of that data is a long yet vital process. Instructors and administrators must leverage platforms that can help automate and analyze new and archived data to make the most informed decisions for their schools using the AI analytics lifecycle. This includes managing data efficiently, interpreting observations made about data and finally, creating a plan to incorporate constructive action to address needs discovered via the data. Using this strategy, schools can be better prepared to tackle real world questions and scenarios and provide students and teachers with the tools and processes they need to be successful.

This year’s EdTech Talks Summit event aimed to educate academic IT decision makers and end users about the current challenges and solutions surrounding student growth and development, security, AI and ML, and the cost-saving, modernization benefits of today’s leading EdTech solutions. The Education sector faces new challenges every school year, and it is imperative now more than ever that the IT industry and Government work together to provide the safest and most successful learning environments for all students.

Visit the EdTech Talks Conference Resource Center to view panel discussions and other innovative insights surrounding security, AI and student success from Carahsoft and our partners.

 

About Carahsoft in the Education Market  

Carahsoft Technology Corp. is The Trusted Education IT Solutions Provider™.  

Together with our technology manufacturers and reseller partners, we are committed to providing IT products, services and training to support Education organizations.  

Carahsoft is a leading IT distributor and top-performing E&I Cooperative Services, Golden State Technology Solutions, Internet2, NJSBA, OMNIA Partners and The Quilt contract holder, enhancing student learning and enabling faculty to meet the needs of Higher Education institutions.  

To Learn more about Carahsoft’s Education Solutions, please visit us at http://www.carahsoft.com/education

To learn more about Carahsoft’s AI Solutions, please visit us at https://www.carahsoft.com/solve/ai-machine-learning

The 12 Artificial Intelligence Events for Government in 2024

Last year set a landmark standard for innovation in artificial intelligence (AI). Federal, State, and Local Governments and Federal Systems Integrators are eager to learn how they can implement AI technology within their agencies. With the recent Presidential Executive Order for AI, many Public Sector-focused events in 2024 will explore AI modernizations, from accelerated computing in cloud to the data center, secure generative AI, cybersecurity, workforce planning and more.

We have compiled the top AI events for Government for 2024 that you will not want to miss.

1. AI for Government Summit

May 2, 2024, Reston, VA | In-Person Event

The AI for Government Summit is a half-day event designed to bring together Government officials, AI experts and industry leaders to explore the transformative potential of AI in the public sector. As Governments worldwide increasingly adopt AI technologies to enhance efficiency, improve services and address complex challenges, this summit will serve as a platform for collaboration, discussion and sharing knowledge on the latest advancements and best practices in AI deployment within Government organizations.

Sessions to look out for: Cybersecurity & AI – Safeguarding the Government and Generative AI Government Use Case Panel 

Carahsoft is proud to host this inaugural event alongside FedInsider. Join us and over 100 of our AI & machine learning technology and solution providers as they speak to AI adoption in the Public Sector and how they are using AI to solve our government’s most critical challenges. Attendees will also hear from top government decision-makers as they share unique insights into their current AI projects.

2. NVIDIA GTC 

March 18 – 21, 2024, San Jose, CA | Hybrid Event

Come connect with a dream team of industry luminaries, developers, researchers, and business strategists helping shape what’s next in AI and accelerated computing. From the highly anticipated keynote by NVIDIA CEO Jensen Huang to over 600 inspiring sessions, 200+ exhibits, and tons of unique networking events, GTC delivers something for every technical level and interest area. Whether you join us in person or virtually, you are in for an incredible experience at the conference for the era of AI.

Sessions to look out for: What’s Next in Generative AI and Robotics in the Age of Generative AI 

Carahsoft serves as NVIDIA’s Master Aggregator working with resellers, systems integrators, and consultants. Our team provides NVIDIA products, services, and training through hundreds of contract vehicles.

Carahsoft is proud to be the host of the GTC Public Sector Reception on Tuesday, March 19th.  

Please visit Carahsoft and our partners at the following booths:

  • Government IT Solutions: Carahsoft (#1726), Government Acquisitions (#1820), World Wide Technology (#929)
  • AI/ML & Data Analytics: Anaconda (#1701), Dataiku (#1704), Datadog (#1033), DataRobot (#1603), Deepgram (#1719), Domino Data Labs (#1612), Gretel.AI (G130), H2O.AI (G124), HEAVY.AI (#1803), Kinetica (I132), Lilt (I123), Primer.AI (I126), Red Hat (#1605), Run:AI (#1408), Snowflake (#930), Weights & Biases (#1505 & G115)
  • AI Infrastructure: Dell (#1216), DDN (#1521), Edge Impulse (#434), Lambda Data Lab (#616), Lenovo (#1740), Liqid (#1525), Pure Storage (#1529), Rescale (#1804), Rendered.AI (#330), Supermicro (#1016), Weka (#1517)
  • Industry Leaders: AWS (#708), Google Cloud (#808), HPE (#408), Hitachi Vantara (#308), IBM (#1324), Microsoft (#1108), VAST Data (#1424), VMware (#1604)

3. 5th Annual Artificial Intelligence Summit  

March 21, 2024, Falls Church, VA | In-Person Event  

Join the Potomac Officers Club’s 5th Annual AI Summit, where federal leaders and industry experts converge to explore the transformative power of artificial intelligence. Discover innovative AI advancements, engage in dynamic discussions, and forge strategic collaborations with key partners at this annual gathering of the movers and shakers in the AI field. Hosted by Executive Mosaic, this summit will be held in Falls Church, Virginia.  

Sessions to look out for: Leveraging Collaboration to Accelerate AI Adoption in the DoD and Operationalizing AI in Government: Getting Things Done with Automation  

Carahsoft is the master aggregator for Percipient AI, a Silver Sponsor, and Primer AI, the Platinum Sponsor. Mark Brunner, President of Federal at Primer AI, will also be speaking at the event. 

4. INSA Spring Symposium: How AI is Transforming the IC

April 4, 2024, Arlington, VA | In-Person Event

Join 300+ intelligence and national security professionals at INSA’s Spring Symposium, How Artificial Intelligence is Transforming the IC, on Thursday, April 4, from 8:00 am-4:30 pm at the INSA/NRECA Conference Center in Arlington, VA. Key leaders from government, academia, and industry will discuss cutting-edge AI innovations transforming intelligence analysis, top priorities and concerns from government stakeholders, developments in ethics and oversight, challenges and opportunities facing the public and private sector and more!

Session to look out for: AI Ready? Challenges from a Data-Centric Viewpoint

Meet with Carahsoft partners AWS, Google Cloud, Intel, and Primer.

5. Google Next ‘24  

April 9 – 11, Las Vegas, NV | In-Person Event  

Explore new horizons in AI at Google Cloud Next ’24 in Las Vegas, April 9–11 at Mandalay Bay Convention Center. Dive into AI use cases, learn how to stay ahead of cyberthreats with frontline intelligence and AI powered security and boost data and thrive in a new era of AI. Plus, see our latest in AI, productivity and collaboration, and security from Google Public Sector.  

Carahsoft will be a sponsor of Google Next ‘24 with a significant public sector presence and plans to host a reception as well. 

6. SC24  

November 17 – 22, 2024, Atlanta, GA | Hybrid Event  

Supercomputing (SC) is the longest running and largest high performance computing conference. SC is an unparalleled mix of thousands of scientists, engineers, researchers, educators, programmers, and developers. Hosted by The Association for Computing Machinery & IEEE Computer Society, SC24 is hosted in Atlanta, Georgia.   

Carahsoft is proud to attend SC24 for a fourth year as the master aggregator serving the public sector. Carahsoft will be hosting an extensive partner pavilion showcasing daily demos of our technology and solution partners, demonstrating use-cases in AI and HPC intended for higher-ed organizations, research institutions, government agencies, and more.  

Join us at our public sector reception for a night of networking with leading decision-makers and solution experts on November 20. 

7. Elastic Public Sector Summit ‘24  

March 13, 2024, Pentagon City, VA | In-Person Event  

Join top Federal program executives and IT leaders to learn firsthand how advances in data management, search and analytics capabilities are helping agencies turn data into mission value faster and more productively for citizens and Government employees. Learn how agencies are leveraging these capabilities for cybersecurity, operational resilience, and preparing for the new era of generative AI. FedScoop, Elastic and Carahsoft will co-host this summit in Pentagon City, Virginia.   

As a top-level sponsor of Elastic’s Public Sector Summit, Carahsoft will host a pavilion on the exhibit floor that features Elastic’s foremost technology partners for the hundreds of projected government attendees.

8. CDAO Government

September 17 – 19, 2024, Washington DC | In-Person Event  

This event brings together the latest technological advancements and practical examples to apply key data-driven strategies to solve challenges in Government and greater society. Join a unique mix of academia, industry and Government thought leaders at the forefront of research and explore real-world case studies to discover the value of data and analytics. Located in Washington, D.C., CDAO Government will be hosted by Corinium Intelligence.   

Carahsoft was proud to be a Premier Sponsor at the 2023 CDAO Government, alongside many of our vendor partners, including Cloudera, HP, Alation, Informatica, Progress|MarkLogic, Snowflake, Tyler Technologies, Alteryx, Coursera, DataRobot, Databricks, Elastic, Immuta, Primer AI and Qlik.

Carahsoft looks forward to participating as a leading sponsor again at the 2024 CDAO Government.  

9. OODACON

November 5 – 6, Reston, VA | In-Person Event 

The world is at a transition point where technology is enabling rapid changes that can drive both positive and negative outcomes for humanity. It is also empowering many bad actors and poses new threats. The essence of OODAcon lies in its capacity to forge a robust community of leaders, experts, and practitioners that serve as a collective force that can propel us towards a brighter future.  

Join us at the Carahsoft Conference and Collaboration Center to discuss how disruptive technology can solve the most pressing issues of today. 

10. AWS Public Sector Summit 

June 26-27, 2024, Washington DC | In-Person Event 

Join Carahsoft and our partners for two days of innovation, collaboration and global representation. Designed to unite the global cloud computing community, AWS Summits educate customers about AWS products and services, providing them with the skills they need to build, deploy and operate their infrastructure and applications.

As a top-level sponsor of AWS’ Public Sector Summit, Carahsoft will host a pavilion on the exhibit floor that features AWS’ foremost technology partners for the thousands of projected government attendees. 

Learn More About Previously Held Events

11. CDAO Advantage DoD24 Defense Data & AI Symposium  

Carahsoft was at CDAO’s inaugural Advantage DoD 2024: Defense Data & AI Symposium from February 20th to 22nd at the Washington Hilton in Washington, DC. The symposium provided a platform for over 1000 government officials, industry leaders, academia, and partners to converge and explore the latest advancements in data, analytics, and artificial intelligence in support of the U.S. Department of Defense mission. Carahsoft had a small tabletop partner pavilion, featuring our vendor partners Alteryx, DataRobot, Collibra, Elastic, Databricks, PTFS, EDB, Weights & Biases, and Clarifai.

Throughout the symposium, attendees from diverse backgrounds, including technical programmers, policymakers, and human resources professionals, gained valuable insights into emerging technologies and best practices for integrating data-driven strategies into organizational frameworks. Attendees also enjoyed two networking receptions hosted by Booz Allen Hamilton and C3.ai.

The agenda featured compelling speaking sessions on topics such as:

  1. Task Force Lima – The Way Forward (Goals and Progress)
  2. LLMs and Cybersecurity: Practical Examples and a Look Ahead
  3. DoD GenAI Use Cases and Acceptability Criteria

12. Using Generative AI & Machine Learning in the Enterprise  

This intimate one-day, 500-person conference featured curated data science sessions that brought industry leaders and specialists face-to-face to educate one another on innovative solutions in generative AI, machine learning, predictive analytics and best practices. Attendees saw a mix of use cases, technical talks and workshops, and walked away with actionable insights from those working on the frontlines of machine learning in the enterprise. Hosted by Data Science Salon, the event was held in Austin, Texas.

Carahsoft partners NVIDIA and John Snow Labs, two leading AI and machine learning solution providers, were in attendance. Carahsoft serves as the master aggregator for both NVIDIA and John Snow Labs, providing government agencies with solutions that fulfill mission needs from trustworthy technology and industry partners.

While the landscape of government events has always been in flux, the pace of change in 2024 feels downright dizzying. From navigating hybrid gatherings to crafting data-driven experiences, the pressure is on to connect, inform, and engage. This is where the power of AI steps in, not as a silver bullet, but as a toolbox brimming with innovative solutions. Carahsoft’s curated list of Top 12 AI for Government Events is just the starting point. So, do not let the future intimidate you; embrace it. Dive into the possibilities, explore these AI tools, and get ready to redefine what a government event can be. Your citizens—and your data—will thank you.  

To learn more or get involved in any of the above events, please contact us at AITeam@carahsoft.com. For more information on Carahsoft and our industry-leading AI technology partners’ events, visit our AI solutions portfolio and events page.

Building a Foundation for an AI Future

It might seem like agencies are hesitant to adopt artificial intelligence. But really, it is quite the opposite. As Lori Wade, the Intelligence Community’s chief data officer, put it: “It is no longer just about the volume of data, it is about who can collect, access, exploit and gain actionable insight the fastest.” The realization is clear: Humans alone cannot keep pace. They need AI so they can make decisions based on the most relevant and most current information — and make those decisions in a timely manner. It is really as simple as that. Download the guide, “Building the Foundation for Your AI Future,” to pick up pointers on data management and AI, plus take a glimpse at the latest technology developments, tips for best practices and an explanation of the early value that AI is delivering to agencies across government. 

 

How to Revolutionize Government Translation with Generative AI

“In situations where accurate and timely translations are crucial, the shortage of qualified and vetted linguists poses significant challenges. Equally, non-linguist analysts are not equipped with secure, at-desk tools to translate foreign language material at the speed of relevance. For example, during the ongoing war in Ukraine, there has been a scarcity of linguists available to provide real-time updates on the ground. This shortage not only has affected the ability to gather vital intelligence but also hindered the timely dissemination of information to national security and defense agencies in the U.S. and abroad.”

Read more insights from Jesse Rosenbaum, Vice President of Business Development and National Security at Lilt. 

 

How Graph Databases Drive a Paradigm Shift in Data Platform Technology  

“Federal agencies are awash in data. With recent modernization efforts, including the wide-scale adoption of cloud platforms and applications, it is easier than ever for agencies to receive streaming data on everything from logistics to finances to cybersecurity. But that volume of data requires new solutions to process and analyze it. Older methods like SQL and NoSQL simply are not up to the task of analyzing all of the connections between the government’s many massive databases. That is where the new graph paradigm of data platform technology comes in.”

Read more insights from Michael Moore, Principal for Partner Solutions and Technology at Neo4j. 
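
As a rough illustration of the graph paradigm Moore describes, the sketch below shows how a connection-centric question can be asked directly of a graph database. It assumes the open-source Neo4j Python driver; the connection details and the Agency/Vendor data model are hypothetical placeholders rather than anything from the original post.

# Minimal sketch: asking a connection-centric question with the Neo4j Python driver.
# The URI, credentials and the Agency/Vendor/contract relationships are all
# hypothetical, purely for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find vendors linked to an agency through chains of up to three award or
# subcontract relationships, a multi-hop question that is awkward to express
# as a series of SQL self-joins.
query = """
MATCH (a:Agency {name: $agency})-[:AWARDED|SUBCONTRACTED_TO*1..3]->(v:Vendor)
RETURN DISTINCT v.name AS vendor
"""

with driver.session() as session:
    for record in session.run(query, agency="Example Agency"):
        print(record["vendor"])

driver.close()

The variable-length pattern (*1..3) traverses relationships rather than joining tables, which is the connection-analysis gap the graph paradigm is meant to close.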

 

How Agencies Can Upskill in AI to Achieve a Data Mesh Model  

“Data mesh behavior actually goes a step further. AI has become so easy to use, business owners can actually join in the development alongside the data scientists. Therein lies the challenge: Upskilling subject matter experts across an entire organization is a big lift. The way it works best is to start with a center of excellence, a small group of people who begin working with business owners across the enterprise, office by office. They can then prove the value and evangelize it, and then the agency can move to a hub-and-spoke model, where the data scientists are co-developing alongside business owners. As successes pile up, the data scientists can take a step back and allow frontline workers to do the development, governing the new data products on their own.”

Read more insights from Doug Bryan, Field Chief Data Officer at Dataiku. 

 

How Agencies Can Build a Data Foundation for Generative AI  

“Generative artificial intelligence tools are making waves in the technology world, most famously ChatGPT. Although the code of these tools is significant, their real power stems from the data they are trained on. Gathering and correctly formatting the data, then transforming it to yield accurate predictions, often represents the most challenging aspect of developing these tools. Federal agencies that want to start leveraging generative AI already have massive amounts of data on which to train the technology. But to successfully implement these tools, they need to ensure the quality of their data before trusting any decisions they might make.”

Read more insights from Nasheb Ismaily, Principal Solutions Engineer at Cloudera. 
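
As a loose illustration of the data-quality step Ismaily highlights, the sketch below runs a few basic checks on a tabular extract before it is used to train or ground a generative AI model. It assumes the pandas library; the file name and the record_date column are hypothetical placeholders.

# Minimal sketch of basic data-quality checks on agency records prior to
# using them with a generative AI model. The file name and column names
# are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("agency_records.csv")

# Surface the most common quality problems: duplicate rows, missing values
# and dates that cannot be parsed.
duplicate_rows = df.duplicated().sum()
missing_by_column = df.isna().sum()
bad_dates = pd.to_datetime(df["record_date"], errors="coerce").isna().sum()

print(f"Duplicate rows: {duplicate_rows}")
print("Missing values per column:")
print(missing_by_column)
print(f"Unparseable dates in record_date: {bad_dates}")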

 

How to Democratize Data as a Catalyst for Effective Decision-Making  

“One of the key best practices in the Office of Management and Budget’s Federal Data Strategy calls for using data to guide decision-making. But that is easier said than done when the ability to analyze the data, much less access it, is limited to an agency’s often overworked and understaffed data science specialists. But now that every line of federal business has their own data silo and a mandate to use that data to guide decisions, agencies need a way to democratize access to that data and empower every federal employee to become an analyst.”

Read more insights from Kevin Woo, Director of Federal Sales at Alteryx. 

 

Download the full Expert Edition for more insights from these artificial intelligence leaders, additional government interviews, historical perspectives and industry research. 

Generative AI, DevSecOps and Cybersecurity Highlighted for the Air Force and Space Force at DAFITC 2023

Thousands of Space Force and Air Force personnel and industry experts convened to discuss the most current and significant threats confronting global networks and national defense at the 2023 Department of the Air Force Information Technology and Cyberpower Education & Training (DAFITC) Event. Throughout the many educational sessions, thought leaders presented on a myriad of topics, including artificial intelligence (AI), DevSecOps solutions and cybersecurity strategies, to collaborate on the advancement of public safety.

Leveraging Generative AI in the DoD

At the event, experts outlined three distinct use cases for generative AI in military training.

  • Text to Text: This type of generative AI takes input text and outputs written content in a different format. Text to Text is associated with tasks such as content creation, summarization, evaluation, prediction and coding (see the brief sketch after this list).
  • Text to Audio: Text to Audio AI can enhance accessibility and inclusion by creating audio content from written materials to support e-learning and education and to facilitate language translation.
  • Text to Video: Text to Video AI is primarily geared towards generating video content from a script to aid the military with language learning and training initiatives.
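
As a brief illustration of the Text to Text category, the sketch below summarizes a short training passage with an open-source model. It assumes the Hugging Face transformers library is installed; the model choice and sample text are placeholders, not tools discussed at the event.

# Minimal sketch of a text-to-text (summarization) workflow using the
# open-source Hugging Face transformers library. The model and sample
# text are illustrative placeholders only.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

training_doc = (
    "Lesson 1 covers flight-line safety procedures, including foreign object "
    "debris walks, hearing protection requirements and the reporting chain for "
    "hazards observed during routine maintenance operations."
)

# Produce a short summary that could feed a study guide or course outline.
summary = summarizer(training_doc, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])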

Dr. Lynne Graves, a representative of the Department of the Air Force Chief Data and Artificial Intelligence Office (CDAO), provided attendees with a brief timeline of how the USAF will fully adopt artificial intelligence. The overarching aim for AI integration is to make it an integral part of everyday training, exercises and operations within the Department of Defense (DoD).

  • In FY23, the DoD is focusing on pipeline assessment. Using red teaming, in which ethical hackers run simulations to identify weaknesses in the system, internal military personnel work to improve their infrastructure and mitigate vulnerabilities at the different stages of the pipeline.
  • In FY24, the emphasis will be on the Red Force Migration policy, which involves developing, funding and scaling the necessary strategies.
  • In FY25, the goal is for the department to become AI-ready. This entails preparing for AI adoption at all agency levels: establishing a standard model card that explains the context for a model’s intended use and other important information (a loose illustration follows this list), creating a comprehensive repository of data and implementing tools for extensive testing, evaluation and verification.
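
As a loose illustration of what a standard model card might capture, the sketch below records basic context about a model’s intended use in a machine-readable file. The field names and values are generic examples and not the DoD’s actual template.

# Loose illustration of model card contents written to a JSON file.
# Field names and values are generic examples, not the DoD's template.
import json

model_card = {
    "model_name": "example-readiness-classifier",
    "version": "0.1.0",
    "intended_use": "Triage maintenance reports for human review; not for autonomous decisions.",
    "training_data": "Synthetic maintenance logs, 2020-2023 (illustrative).",
    "evaluation": {"accuracy": 0.91, "test_set": "held-out synthetic logs"},
    "limitations": ["Not evaluated on handwritten forms", "English-only"],
    "point_of_contact": "model-owner@example.mil",
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)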

USSF Supra Coders Utilize DevSecOps for Innovation

The current operations of United States Space Force (USSF) Supra Coders involve a range of activities that combine modeling, simulation and expertise in replicating threats. These operations are conducted globally, and currently include orbit-related activities, replication of DA ASAT (Direct Ascent Anti-Satellite) capabilities and the reproduction of adversarial Space Domain Awareness (SDA).

The USSF Supra Coders have encountered limitations with software solutions, including restrictions tied to standalone systems, licensing structures with associated costs and limited adaptability to meet the specific needs of aggressors and USSF requirements. DevSecOps presents a multifaceted strategy for mitigating the capability gaps identified by the USSF Supra Coders. It can help create more effective and efficient software solutions through seamless integration of security protocols, streamlined system integration processes, optimized costs and enhanced customizability.

Cybersecurity Within the Space Force

Cybersecurity is a shared responsibility across the DoD but is especially relevant for the U.S. Space Force. As a relatively new branch of the military, the Space Force is still developing its cyber strategies. Because its capabilities are reached entirely through virtual links, the USSF must prioritize secure practices from the outset and make informed decisions to protect its networks and data.

Currently, the Space Force is engaged in the initial phases of pre-mission analysis for its cyber component, which serves as a critical element for establishing and maintaining infrastructure through the integration of command and control (C2). These cyber capabilities encounter a series of complex challenges, which necessitate a multifaceted approach including the following solutions:

  • Enforcing Consistent Cybersecurity Compliance
  • Developing Secure Methods to Safely Retire Old Technology
  • Enhancing Cryptography Visibility
  • Understanding Security Certificate Complexity
  • Identifying Vulnerabilities and Mitigating Unknown Cyber Risks

While the Space Force faces a uniquely heightened imperative to bolster its cybersecurity capabilities, given its inherent reliance on information technology and networks in the space domain, the entire community must collaborate effectively to achieve the cybersecurity capabilities military leaders have targeted for 2027.

The integration of generative AI in military training, innovations through DevSecOps by the USSF Supra Coders and cybersecurity initiatives of the Space Force collectively highlight the evolving landscape of advanced technologies within the Department of Defense. Technology providers can come alongside the military to support these efforts with new solutions that enhance the DoD’s capabilities and security.

 

Visit Carahsoft’s Department of Defense market and DevSecOps vertical solutions portfolios to learn more about DAFITC 2023 and how Carahsoft can support your organization in these critical areas. 

*The information contained in this blog has been written based on the thought-leadership discussions presented by speakers at DAFITC 2023.*