Insights from MESC: Modernization, AI and the Cloud 

At this year’s Medicaid Enterprise Systems Conference (MESC), held August 11-14 in Milwaukee, Wisconsin, Federal officials, technology partners and thought leaders joined to share insights on modernization, artificial intelligence (AI), fraud prevention and cloud adoption. 

Carahsoft attended MESC alongside its partners to facilitate connections between healthcare systems and technology vendors. 

Here are the top five insights from MESC.

1. Unified Observability Enables Modernization

As agencies modernize, observability is becoming critical to success. In the session “You Can’t Modernize What You Can’t See: Observability for Medicaid Cloud Future,” Datadog speakers Greg Reeder, Senior Director of Public Sector Marketing, Ryan Gault, Regional Sales Director for SLED East, and Abe Rosloff, Enterprise Solution Engineer for SLED, discussed best practices for the cloud. Observability reduces the time it takes teams to find system bottlenecks. With Datadog’s unified observability SaaS platform, agencies can utilize real-time monitoring to oversee critical systems. The Centers for Medicare & Medicaid Services (CMS) has released new modules that run alongside 30-year-old modular systems. Connecting cloud services to legacy systems can be tricky, so Datadog’s unified platform provides a layer of visibility that empowers users with insight in the form of an easy-to-use dashboard. Working with large Medicaid agencies on both coasts under contracts with the National Association of State Procurement Officials (NASPO), the US General Services Administration (GSA) and OMNIA, Datadog helps pinpoint issues quickly through its SaaS platform. 

With Datadog’s unified monitoring, agencies gain real-time visibility into user interactions to proactively identify and address potential breaches before they escalate. 
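The kind of bottleneck detection described above boils down to comparing live metrics against service-level thresholds. A minimal sketch of that idea in Python (illustrative only, not Datadog’s actual API; the service names and threshold are hypothetical):

```python
from statistics import quantiles

# Hypothetical per-service response times in milliseconds; a real
# observability platform would stream these from instrumented apps.
metrics = {
    "eligibility-api": [120, 135, 110, 890, 905, 130],
    "claims-portal": [95, 102, 99, 101, 97, 100],
}

THRESHOLD_MS = 500  # illustrative service-level objective

def find_bottlenecks(samples_by_service, threshold):
    """Flag services whose 95th-percentile latency exceeds the threshold."""
    flagged = {}
    for service, samples in samples_by_service.items():
        p95 = quantiles(samples, n=20)[-1]  # 95th percentile cut point
        if p95 > threshold:
            flagged[service] = round(p95, 1)
    return flagged

print(find_bottlenecks(metrics, THRESHOLD_MS))
```

A unified platform automates this check continuously across every service and surfaces the flagged outliers on a dashboard, which is what shrinks the time to find a bottleneck.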

2. AI Use Cases for Medicaid 

In the session “Avoiding AI Landmines in Medicaid: What Worked, What Didn’t and What We’ve Learned,” speakers Eng Tan and Cynthia Afkhami from Automated Health Systems (AHS) discussed the pros and cons of various AI use cases. AHS recounted its digital journey in AI adoption: it took three months for users to learn terminology, three months to build the model and six to nine months to test it. While building the AI was easy, the production stage was more challenging. Instead of traditional IT testing, AI requires strategic, human-centered testing. In one case study, AHS designed Infobot, a “digital butler” for customer support representatives that helped translate notes between Spanish and English. This resulted in reduced wait times, a 3% increase in scores and reduced attrition. AHS attributes this success to management buy-in, IT management support, staff participation, senior management support and focused training. Ultimately, AHS recommends having a solid foundation of data management, governance frameworks and team training before implementing AI. 

Learn how AI can help streamline processes to build stronger, more resilient healthcare systems with Carahsoft’s AI solutions. 

3. How GenAI Can Empower Healthcare 

In the session “Unleashing the Power of Generative AI on Care Planning in Healthcare,” speakers Mason Tanaka, the Deputy Commissioner and CIO from Alabama Medicaid Agency, and Andy Pitman, the State and Local Government Health and Human Services (HHS) Strategy Director for Microsoft, showcased Alabama Medicaid’s goals and guiding principles behind its new generative AI (genAI) platform, AI Continuum. In collaboration with Microsoft, Alabama Medicaid aims to bring genAI into healthcare planning by operationalizing various tools in testing and production to help teams reclaim ownership of the planning and design process. Through AI Continuum, nurses and social workers can support clinical decisions and use predictive risk stratification analytics to secure better patient outcomes. 

Leverage genAI insights and maximize your data’s potential through Databricks’s optimized data infrastructure and scalable, predictive machine learning models. 

4. Modernization, the Cloud and AI

In the session “CMS-5: MES State of States,” CMS speakers Ed Dolly, Deputy Director of the Data Systems Group, and Eugene Gabriyelov, Director of the Division of State Systems, reviewed the progress made in Medicaid Enterprise Systems (MES) and future goals.  

The top takeaways include: 

  • The time spent creating modular certifications has been reduced from two years to fourteen months 
  • With over 1,300 MES submissions received annually, timely operational reporting has become more significant 
  • Metric reporting is essential for continued funding 

Looking ahead to the upcoming fiscal year, goals include receiving operational metric reports from each state, standardizing interfaces, expanding AI usage and increasing collaboration across the nation. 

Modernize with confidence to improve efficiency, transparency and accountability. Visit Broadcom’s page to learn how their solutions support interoperability.  

5. Fighting Fraud with DNA Prevention

In the session “Fighting fraud: Insights from Illinois Health and Family Services (HFS) Office of Inspector General (OIG)’s internally developed Fraud, Waste and Misuse (FWM) early warning system,” speakers Wei-Shin Wang, Bureau Chief at the Bureau of Fraud Science and Technology, Douglas Steinley, data analytics consultant at the University of Missouri, Ben Xu, Senior Developer at the Bureau of Fraud Science and Technology, and Jon DeShazo, Senior Lead Consultant at Leads, discussed Dynamic Network Analysis (DNA)-powered fraud prevention. The FWM early warning system, which was built with NTT DATA and powered by DNA, has seen success in uncovering fraud patterns early on. Within the system, there are modules for profiles, reports and inquiries, advanced surveillance and risk detection, audit support and system usability. Early results illustrate that automation and analytics can strengthen program integrity while reducing manual oversight.  
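The general idea behind network-based fraud analysis can be sketched simply: build a graph linking providers to beneficiaries and flag providers whose networks overlap suspiciously. The following Python sketch is an illustration of that concept only, not the HFS OIG system; all records and thresholds are hypothetical:

```python
from itertools import combinations

# Hypothetical billing records: (provider, beneficiary) pairs.
claims = [
    ("prov_A", "p1"), ("prov_A", "p2"), ("prov_A", "p3"),
    ("prov_B", "p1"), ("prov_B", "p2"), ("prov_B", "p3"),
    ("prov_C", "p7"), ("prov_C", "p8"),
]

def shared_patient_pairs(records, min_shared=3):
    """Flag provider pairs whose beneficiary networks overlap heavily,
    a common early-warning signal in network-based fraud analysis."""
    patients = {}
    for provider, beneficiary in records:
        patients.setdefault(provider, set()).add(beneficiary)
    flagged = []
    for a, b in combinations(sorted(patients), 2):
        shared = patients[a] & patients[b]
        if len(shared) >= min_shared:
            flagged.append((a, b, len(shared)))
    return flagged

print(shared_patient_pairs(claims))
```

An early warning system layers many such graph signals together, surfacing suspicious clusters for investigators before losses accumulate.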

Leverage streamlined operations and fraud-resilient technology with Equifax’s trusted data analytics solutions.  

——————– 

As medical systems evolve, the common theme is clear: innovation must balance modernization with accountability, human-centered design and measurable outcomes. 

Carahsoft’s healthcare portfolio equips agencies with cloud, genAI and analytics solutions that streamline operations and strengthen program integrity. By improving efficiency, reducing fraud and eliminating unnecessary costs, agencies can reinvest in sustainable healthcare that prioritizes improving patient outcomes.  

Looking to modernize with the latest in healthcare technology? Visit Carahsoft’s broad range of contract vehicles to access Government healthcare solutions quickly and confidently.  

The Role of AI Infrastructure in Government  

To maintain the United States’ position as a leader in AI advancements, and to comply with the latest White House guidance, Government agencies must harness AI-enabling capabilities, such as secure cloud computing platforms, high-performance data processing systems and scalable machine learning frameworks, for critical functions such as cybersecurity, predictive analytics and economic competitiveness. As with any new technology, AI requires updated infrastructure to power these advanced capabilities. 

The Capabilities of AI Infrastructure 

AI infrastructure refers to the hardware and software needed to create and deploy AI-powered applications and solutions. It enables both AI, the technology that simulates the way people think, and machine learning (ML), a focus area of AI that utilizes data and algorithms to imitate the way humans learn, becoming more accurate as it is given more data. AI infrastructure enables users to create and deploy AI and ML apps, such as chatbots, facial and speech recognition and computer vision. 
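The defining trait of ML named above, predictions derived from data rather than hand-written rules, can be illustrated with a minimal sketch in pure Python (a toy nearest-centroid classifier; the labels and features are invented for illustration):

```python
# Minimal illustration of machine learning: a nearest-centroid
# classifier whose predictions come from data, not hand-coded rules.

def train(examples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        c = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            c[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in c] for label, c in sums.items()}

def predict(centroids, features):
    """Assign the label of the closest centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy data: [uptime hours, error count] -> system health label.
data = [([1, 9], "at-risk"), ([2, 8], "at-risk"),
        ([9, 1], "healthy"), ([8, 2], "healthy")]
model = train(data)
print(predict(model, [7, 3]))  # expected: "healthy"
```

More training examples shift the centroids toward the true class averages, which is why accuracy tends to improve as more data is fed in, and why production ML systems need the storage, compute and data pipelines described in the following sections.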

Building the infrastructure for AI requires data storage and processing, compute resources, ML frameworks and MLOps platforms, which together provide the processing capabilities needed to run AI workloads and train ML models.  

AI Infrastructure Deep Dive 

Below are the six pillars that define a strong AI foundation, each continuously evolving to keep pace with the next generation of AI capabilities. 

Specialized Compute 
In 2025, AI solutions rely on more than GPUs; they use a mix of processors designed for different types of AI tasks. This makes it faster and more cost-effective to train, update and run today’s complex models. As AI systems become more advanced, many models are growing larger and require high-performance computing (HPC) solutions. Smaller models, on the other hand, can run on cloud-based architecture with lower compute needs. 

Data Preparation 

The success of an AI solution can tie back to how well the data is prepared before it’s used. Modern AI infrastructure now includes built-in tools to clean, label and organize data at scale, sometimes using AI itself to automate the work. This ensures models are trained on accurate, relevant information, while also tagging and tracking data to meet security, compliance and transparency requirements. 
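The cleaning, deduplication and normalization steps described above can be sketched in a few lines of Python. This is an illustrative pipeline fragment, not any specific vendor’s tooling; the records and field names are invented:

```python
# Illustrative data-preparation step: drop incomplete records,
# deduplicate and normalize fields before they reach model training.

raw_records = [
    {"id": "001", "name": " Ada Lovelace ", "state": "wi"},
    {"id": "001", "name": " Ada Lovelace ", "state": "wi"},  # duplicate
    {"id": "002", "name": "Grace Hopper", "state": None},    # incomplete
    {"id": "003", "name": "Alan Turing", "state": "IL"},
]

def prepare(records):
    """Return cleaned, deduplicated records suitable for training."""
    seen, cleaned = set(), []
    for rec in records:
        if any(v is None for v in rec.values()):
            continue  # drop rows with missing fields
        normalized = {k: v.strip().upper() if k == "state" else v.strip()
                      for k, v in rec.items()}
        key = normalized["id"]
        if key in seen:
            continue  # drop duplicate IDs
        seen.add(key)
        cleaned.append(normalized)
    return cleaned

print(prepare(raw_records))
```

At scale, the same logic runs inside distributed pipelines, with each transformation logged so that data lineage can be audited for compliance.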

Data Storage 
Because today’s AI solutions are increasingly advanced, additional data is required to train the models. AI now depends on lightning-fast data storage that can grow alongside datasets. New tools also make it possible to keep sensitive data in specific locations or environments, meeting strict privacy and Government requirements without slowing down AI workflows. 

Networking 
As AI models get bigger, the speed of moving information between systems is critical. New high-speed networks reduce delays so AI can process and deliver results in near real-time, even across large environments. 

Software & Orchestration 
Managing AI today requires controlling the entire process from development to deployment. Modern platforms help teams easily update models, track their history and ensure they run efficiently whether in the cloud, on-premises, or in secure Government networks. 

Security & Governance 
AI infrastructure in 2025 is built with security at its core. It goes through rigorous testing to ensure it meets Government compliance standards and protects sensitive information. It is important to choose solutions from providers that continuously monitor their models, ensuring they’re safe, reliable and ready to be audited at any time. 

Government agencies can utilize all of these AI infrastructure capabilities to enable AI solutions that improve workflows and maintain global competitiveness. 

AI Infrastructure: A National Priority 

Executive Order 14141 names AI infrastructure, including data centers and compute clusters that are powered by clean energy, as a national priority for upholding U.S. leadership, national security and competitiveness.  

The order encourages Government agencies to secure supply chains, integrate clean energy and collaborate with the private sector. It also directs Federal agencies to make Federal lands and sites available for clean power generation and gigawatt-scale AI data centers. 

In alignment with the Executive Order, the Department of Energy (DOE) has released a Request for Information (RFI) on using its sites to build AI data centers, citing that they would enable AI training and inference, scientific research and other essential services.  

Most recently, the AI Action Plan outlines recommended policy actions for building AI infrastructure such as data centers, semiconductor manufacturing facilities and energy infrastructure. The goal of the AI Action Plan is to streamline AI adoption and, in turn, speed up and scale the development of AI infrastructure at the Federal level. National security, AI incident response, cybersecurity and secure-by-design systems are highlighted as vital pillars of the AI Action Plan’s infrastructure guidance. By sharing specific steps to achieve safe and secure AI infrastructure, such as identifying available Federal land, training the workforce, building data centers and keeping security at the core, the AI Action Plan outlines clear next steps agencies must take to push AI adoption forward.  

In an increasingly technology-driven landscape, AI infrastructure allows Government agencies to modernize their operations and deliver more efficient, responsive services. Strategic investment in AI infrastructure enables agencies to enhance decision-making processes, reduce operational costs, protect national security interests and fulfill their core mandate of serving citizens. Once this foundation is in place, agencies can begin to build and deploy solutions that directly support their missions. The next blog in our series will explore how this infrastructure enables Generative AI and its potential for transforming Government workflows. 

Carahsoft’s ecosystem of hardware and software vendors is equipped to connect agencies with the latest technology for AI, including the infrastructure needed to run it. To learn more about AI infrastructure solutions that are tailored for the Public Sector, visit Carahsoft’s Page on AI Solutions. 

Nutanix AHV and Rubrik’s Layered Security – The Key to System Resilience and Efficiency

Protecting critical infrastructure from cyber threats and ensuring business continuity in the face of disasters is a top priority for organizations today. Luckily, Nutanix AHV, a modern, secure virtualization platform that powers and enhances virtual machines (VMs), can help. Rubrik’s integrated solutions fortify AHV environments against ransomware attacks and enable efficient disaster recovery. By leveraging features like immutable backups, anomaly detection and on-demand cloud-based disaster recovery, organizations can enhance their cyber resilience and minimize the impact of disruptive incidents.

A Simple and Secure Path to VM Management

Nutanix AHV is simple to use and secure by design. The platform works through a centralized control plane, where AHV is integrated into a single application programming interface (API), eliminating complicated setup on the customer side. By providing consistent management and a unified virtualization layer, Nutanix AHV allows organizations to fulfill mission objectives.

Nutanix AHV includes several built-in security features, such as micro-segmentation, data insights, audit trails, ransomware protection and data age analytics.

Nutanix features:

  • Built-in, self-healing abilities protect against disk failure, node failure and more
  • A vulnerability patch summary automatically alerts users about susceptibility risks and anomalies that need to be addressed
  • A life cycle manager provides readiness testing and deployment testing
  • More than one copy of backup data, ensuring that users do not lose valuable information
  • Multi-site replication, including to and from the public cloud

Securing data in Nutanix AHV requires more than basic perimeter defenses; it demands a multi-layered strategy. With Rubrik’s data protection capabilities, which include immutable backups, automatic encryption and logical air-gapping, agencies and organizations can recover information within minutes and resume mission objectives in the event of a breach.

Securing Data with Rubrik’s Rapid Recovery Abilities

Rubrik, a security cloud solution provider that keeps your data resilient, enables the near-instant recovery of virtual machines and data within the Nutanix AHV environment. Rubrik provides multiple recovery options within AHV, such as file-level recovery, live mount, export, mounting virtual disks and downloadable virtual disk files. Through Rubrik, businesses can recover files from older hypervisors into newer AHV environments without bringing the older hypervisors online. Once granted access to the AHV environment, Rubrik automatically discovers and integrates protocols and base-level policies for VMs. Rubrik’s recovery process restores data in minutes, regardless of VM size. As VMs grow larger, frequently reaching 50 terabytes, this speedy and precise response keeps organizations’ incident response plans swift and efficient. After scanning the metadata, users are granted file-level recovery following anomaly detection, giving them oversight of affected data.

As the data that organizations manage grows exponentially, data security becomes critical to business functions. Rubrik offers comprehensive data security, continuously monitoring and remediating data risks within the network.

Rubrik also provides constant monitoring of backups. Typically, businesses do not monitor data backlogs, which increases the likelihood that they miss attackers who sit in the system environment for days before collecting data. With Rubrik’s threat monitoring and hunting, organizations can search through backups and detect when an anomaly entered the environment. Through Nutanix and Rubrik’s integration, IT teams can reduce complexity, gain oversight, cut operational costs and improve resiliency and efficiency.

Automation: The Key to a Proactive Incident Response

Modern cyber threats require a proactive approach to incident response. With automation and orchestration, facilitated by the combined capabilities of Nutanix and Rubrik, organizations can detect, respond to and recover from cyber incidents more efficiently.

Rubrik has built-in anomaly detection, which searches protected data for unusual behavior, such as mass deletion or encryption. As the volume of data on a network increases, organizations often have sensitive data they are not actively monitoring, or do not even know may be exposed. Rubrik clusters continuously scan protected data for anomalies, sensitive data and known indicators of compromise (IOCs), allowing customers to select resolution options, such as isolating compromised VMs or restoring production systems from last known good copies.
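The core of this kind of anomaly detection, spotting a mass-deletion or encryption event as a spike in change volume between backups, can be sketched as follows. This is a conceptual illustration, not Rubrik’s implementation; the snapshot data and spike factor are hypothetical:

```python
# Each snapshot records how many files changed since the previous
# backup; a sudden spike can indicate mass encryption or deletion.

snapshots = [
    {"id": "snap-01", "files_changed": 120},
    {"id": "snap-02", "files_changed": 135},
    {"id": "snap-03", "files_changed": 110},
    {"id": "snap-04", "files_changed": 9800},  # ransomware-like spike
]

def flag_anomalies(history, spike_factor=5):
    """Flag snapshots whose change volume exceeds spike_factor times
    the average of all prior snapshots."""
    anomalies = []
    for i in range(1, len(history)):
        baseline = sum(s["files_changed"] for s in history[:i]) / i
        if history[i]["files_changed"] > spike_factor * baseline:
            anomalies.append(history[i]["id"])
    return anomalies

print(flag_anomalies(snapshots))
```

Because the check runs against the backup history rather than live systems, it can also pinpoint when an attacker first entered the environment, which is what makes the last known good copy identifiable.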

Readiness impacts recovery time, and recovery time impacts organizational operations. Nutanix AHV’s recovery orchestration allows IT teams to organize VMs into a set of templates, which can be used to create blueprints and launch application recovery. Nutanix also gives organizations the flexibility to apply policy to each workload, taking control of network security and business continuity/disaster recovery (BC/DR) policy with VM-level granularity. By allowing organizations to map out their application owners, Nutanix AHV enables businesses to move from a reactive to a proactive security posture, minimizing the impact of attacks and ensuring swift recovery.
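The blueprint idea above, grouping VMs into templates with an ordered recovery sequence, can be sketched like this. It is an illustration of the concept under assumed names, not Nutanix’s actual blueprint format:

```python
# Illustrative recovery blueprint: boot stages ensure dependencies
# (database before app servers before web tier) come up in order.

blueprint = {
    "name": "claims-app-recovery",
    "stages": [
        {"order": 1, "vms": ["db-01"]},
        {"order": 2, "vms": ["app-01", "app-02"]},
        {"order": 3, "vms": ["web-01"]},
    ],
}

def recovery_sequence(plan):
    """Return VMs in the order they should be restored."""
    ordered = sorted(plan["stages"], key=lambda s: s["order"])
    return [vm for stage in ordered for vm in stage["vms"]]

print(recovery_sequence(blueprint))
```

Encoding the recovery order ahead of time is what turns an incident from an improvised scramble into a rehearsed, repeatable procedure.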

Nutanix and Rubrik’s integration creates a powerful security and operational synergy, giving organizations the tools they need for network safety and, when necessary, a swift and comprehensive restoration of critical systems so they can resume business missions. Nutanix AHV enables organizations to reduce complexity, improve security and achieve a higher level of resilience and operational efficiency.

To learn more about how Nutanix AHV and Rubrik’s integration delivers streamlined data protection, rapid recovery and robust incident response capabilities, watch our webinar, Fortifying AHV: Cyber Recovery and Incident Response with Nutanix and Rubrik.