Cyberattack Trends Impacting Local Government and Education Sectors

Today’s cybercriminals are no longer driven solely by financial gain; the geopolitical impact of attacks has shifted, with nation-state actors now targeting critical infrastructure. While Local Governments have long been in the crosshairs, schools have also become key targets, especially after COVID-19. The pandemic’s disruption to education has left a lasting impact, making attacks on schools and Local Governments both physically and psychologically significant. These institutions, essential to society, are under siege not just for their sensitive data but for their societal importance. With advanced capabilities and financial backing, nation-state actors are accelerating their efforts, heightening the urgency for robust cybersecurity.

Why Threat Actors Target Local Government and Education

Local Governments are frequent cyberattack targets due to their political significance and the essential services they provide. When one city is attacked, neighboring cities often become hyper-vigilant, particularly smaller municipalities managing critical services like water supply. These vital functions make them high-value targets. While financial institutions are seen as obvious targets for their direct connection to money, Government agencies hold more financial value than many realize. The stakes are even higher when political positions are involved, making Local Governments attractive to financially motivated attackers and nation-state actors seeking leverage.

Education has also become increasingly vulnerable. Schools were initially targeted for geopolitical reasons, with attackers seeking to influence the “hearts and minds” of society by disrupting education. However, cybercriminals discovered the financial value of student records, which are worth more on the dark web than credit card or healthcare information because students rarely monitor their credit. This extended window for identity theft, combined with the vast amount of data schools hold, makes educational institutions prime targets for cybercriminals.

Both Local Governments and schools face shared challenges in defending their systems. For Governments, Supervisory Control and Data Acquisition (SCADA) networks that manage infrastructure are often isolated but still present large attack surfaces due to their distributed nature. Schools, on the other hand, struggle with the complexity of students bringing their own devices, which introduces uncontrolled entry points into the network. These vulnerabilities make Local Government and education uniquely attractive and susceptible targets in the cyber landscape.

Two Main Attack Vectors: Phishing and Infostealers

Cybercriminals use various tactics to infiltrate Local Governments and schools, exploiting both technological weaknesses and human behavior. People are often the weakest link, making them prime targets for attackers. The rise of artificial intelligence (AI) has further advanced these attacks, making them more difficult to detect. While agencies and schools cannot fully eliminate the risk through training alone, understanding these evolving threats can significantly reduce the chances of successful attacks.

Phishing and information stealing are two of the most prevalent methods used by cybercriminals. Research from Lumu Technologies shows that phishing accounts for 52% of attacks, while information stealing makes up 48%, illustrating their near-equal presence as cyber threats.

Phishing

Phishing is often used to gain initial access to a network, serving as the entry point in approximately 90% of attacks. By tricking users into clicking malicious links or downloading malware, attackers establish a presence in the system. This preliminary malware allows them to move laterally, escalate privileges and locate sensitive data. Attackers either sell the data or use it to launch ransomware attacks. In ransomware scenarios, the attacker takes control of the network, encrypts critical data and issues a ransom demand. Phishing is thus the starting point for a larger chain of events leading to data theft and/or financial extortion.

Information Stealing

Infostealers are designed to capture sensitive information, often to sell on the dark web or to facilitate ransomware attacks. Like intelligence operations, they collect data to spread through an environment or identify new attack points. Keyloggers record keystrokes to capture usernames and passwords for unauthorized access. Other methods include form grabbers, which intercept forms and alter them, and browser hijackers, which mimic legitimate sites to bypass multi-factor authentication. Sensitive data from Local Government and education sectors is highly valuable, with threat actors intensifying efforts to exploit it for profit.

In addition to phishing and infostealers, cybercriminals continually find new ways to exploit technology and human behavior, such as man-in-the-middle (MITM) attacks, credential stuffing and supply chain attacks. These often-overlooked attack vectors can cause significant damage to agencies and schools. Recognizing these methods is crucial for developing comprehensive defenses.

Why These Attack Methods are Successful

These attack methods succeed against Local Governments and schools due to the constantly evolving nature of cyber warfare. Like traditional warfare, attackers adapt, finding new ways in after one vulnerability is closed. Defenders must be equally dynamic.

Even with security measures like Endpoint Detection and Response (EDR), attackers find ways to bypass them. EDR relies on behavior analysis, which takes time, while attackers use advanced AI to quickly develop new methods. Local Governments and schools are often slower to adapt, giving attackers an advantage. The challenge is not just implementing security measures but continuously evolving defenses to keep up with new threats.

AI Versus AI

In the battle against evolving cyberattacks, Local Governments and schools must leverage advanced technologies like AI and automation. As attackers adopt AI to improve the sophistication and speed of attacks, defenders need equally powerful tools. Cybercriminals use AI to bypass traditional defenses, identifying weaknesses faster than humans can.

To keep up, Local Government and education sectors must deploy AI-driven systems to detect threats in real time. AI helps identify vulnerabilities, enabling proactive defense, while automation blocks threats at machine speed. For smaller institutions with limited resources, automation is especially crucial to defend against attacks effectively.

In a landscape where cyber threats continually evolve, matching the speed and sophistication of attackers is crucial for a strong cyber defense. Government agencies and educational institutions must stay vigilant, leveraging AI and automation to outpace attackers and protect the critical infrastructure and data that comprise the foundation of society.

Discover the latest trends in cyberattacks and learn how AI and automation are reshaping the fight against modern cybercriminals in Lumu Technologies’ webinar, “Emerging Cyber Attack Trends Targeting Local Government & Education.”

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including Lumu Technologies, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Grammarly and Carahsoft: Elevating Secure, Private Government Communication

Grammarly and Carahsoft have partnered to provide Government agencies with trustworthy AI assistance supported by robust security measures. Thanks to this collaboration, Government agencies gain access to Grammarly’s trusted AI assistant, which can help them improve communication and boost operational efficiency. This partnership marks a notable advancement in supporting Government agencies in navigating the evolving digital landscape.

Unlocking the Benefits for Government Agencies

As a recognized leader in providing IT solutions to the public sector, Carahsoft offers extensive experience navigating the Government procurement process. Combined with Grammarly’s AI assistant, their expertise creates a powerful resource for Government agencies aiming to improve efficiency and productivity. When your agency works with Carahsoft and Grammarly, you’ll experience the following benefits:

  1. Rapid Implementation: Our streamlined setup process enables agencies to implement Grammarly across their organization in one day. This allows teams to start benefiting from enhanced communication support almost immediately.
  2. Time Efficiency: On average, our users save about 35 minutes per day per person on communication tasks. This time can be redirected toward more strategic tasks, leading to improved project outcomes and better service delivery to the public.
  3. Enhanced Communication Quality: Effective communication is crucial for Government agencies. Grammarly’s tools help teams craft clear, concise, and impactful messages, ensuring that important information is conveyed accurately. With over 70,000 teams already benefiting from our services, our track record speaks for itself.
  4. Boosting Brand Compliance: Our advanced communication tools can help agencies improve brand compliance by a remarkable 71%. This consistency in communication enhances public trust and strengthens the agency’s reputation.

Our Commitment to Privacy, Security, and Compliance

Grammarly’s commitment to enterprise-grade security offers significant benefits for Government agencies. As a trusted partner, Grammarly adheres to the highest industry standards, ensuring that sensitive information remains secure. The collaboration with Carahsoft further underscores this dedication. Grammarly provides tailored AI solutions that meet the specific security needs of the public sector. By emphasizing stringent security measures, Grammarly helps agencies confidently use their tools while safeguarding critical data.

Additionally, Grammarly’s subscription-based revenue model ensures that customer content is never sold, placing a strong emphasis on user privacy and control. This transparency is essential for Government agencies, allowing them to maintain oversight of their data usage at all times. With a solid foundation supported by third-party audits and certifications, Grammarly provides compliance and regulatory support that agencies can rely on, reinforcing their ability to operate within legal and ethical boundaries while maximizing operational efficiency.

Empowering the Public Sector with AI

Through our partnership with Carahsoft, we are dedicated to helping Government agencies lead, learn, and grow amid evolving demands. With Grammarly, your teams can confidently communicate, innovate, and serve the public more effectively.

For more information on implementing Grammarly within your agency, visit our website or contact Carahsoft today! Together, we can enhance Government operations’ efficiency and ensure that every message counts.

How to Accelerate the Journey to Government Compliance with CCM

Government agencies are inundated with a vast amount of daily Governance, Risk, and Compliance (GRC) tasks and processes. Achieving regulatory compliance, an arduous process, can take up precious time that could be reallocated to other business-critical missions.

Continuous controls monitoring (CCM) is one solution. CCM leverages AI and extreme automation to help cut down on manual processes, allowing agencies to overcome regulatory hurdles, supercharge their staff, and make better risk-based decisions with fast, cost-effective automations.

Improving the Compliance Process

Creating a quality compliance report comes with heavy, manual processing time. CCM can help significantly by taking on much of the cumbersome busywork, cutting 60-80% of the manual tasks required by GRC programs.

It can also help overcome hurdles to reaching valuable security authorizations. Completing an Authorization to Operate (ATO) package can take roughly six months to finish — but that process can be reduced to two weeks with the right CCM platform. CCM also gives agencies a leg up in gaining Continuous Authorization to Operate (cATO) by leveraging OSCAL, a machine-readable format that standardizes security control documentation and enables automated validation.
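
Because OSCAL content is published as structured JSON, XML or YAML, even a small script can inventory the controls an ATO package must address. The sketch below is illustrative only: it assumes a locally downloaded copy of an OSCAL catalog (the file name is hypothetical) and walks its control hierarchy, and it does not represent any particular CCM platform’s implementation.

```python
import json

# Minimal sketch: list control IDs and titles from an OSCAL catalog.
# Assumes a locally downloaded catalog file (path is illustrative), e.g. the
# NIST SP 800-53 Rev. 5 catalog published in OSCAL JSON format.
with open("NIST_SP-800-53_rev5_catalog.json") as f:
    catalog = json.load(f)["catalog"]

def walk_controls(controls, depth=1):
    """Recursively print control IDs and titles, including nested enhancements."""
    for control in controls:
        print("  " * depth + f"{control['id']}: {control['title']}")
        walk_controls(control.get("controls", []), depth + 1)

for group in catalog.get("groups", []):
    print(f"[{group['id']}] {group['title']}")
    walk_controls(group.get("controls", []))
```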

The Time-Saving Capabilities of Machine Learning and AI

In the past year, advances in machine learning (including large language models and generative AI) have created exciting new possibilities for GRC teams. AI and machine learning (ML) can offer everything from better data analysis to proactive risk management to a major reduction in manual processes. Here are a few of the most compelling use cases for AI-enabled GRC:

  • Help employees proactively monitor traffic
  • Review code for errors unlikely to be caught by the human eye
  • Explain complex controls and procedures in everyday language, bridging knowledge gaps
  • Generate accurate, up-to-date documentation in one click

Overall, AI allows agencies to move faster, with more accuracy, and with better visibility. To free up staff to complete mission-critical objectives, agencies should create their own AI/ML usage strategies and implement them within a Compliance as Code framework.

How RegScale’s CCM Leverages Compliance-Trained AI

RegScale’s AI-enabled platform, RegML, combines CCM and leading large language model (LLM) tools to streamline compliance management with intelligent automation and precision. This approach improves compliance by significantly reducing manual labor and costs. It also provides user-friendly summaries and guidance and improves accuracy and precision in documentation, freeing up staff to focus on core business objectives.

RegML has four main AI features:

  • AI Extractor, which automatically derives compliance documentation from existing policies and procedures.
  • AI Explainer, which is designed to demystify control statements by providing users with simple explanations of intricate controls.
  • AI Author, which helps draft control implementation statements in the context of relevant regulations and requirements. This process allows writers to focus on editing a draft, leading to fewer errors and better accuracy.
  • AI Auditor, which identifies gaps in controls and provides suggestions for improvement. This frees up teams to work on more critical tasks like fixing gaps and implementing controls.

CCM and the Future

Today, more and more work is being done in the cloud. As data becomes ephemeral and serverless, cybersecurity has become more important than ever — as have the mandatory frameworks governing it. Meanwhile, regulations such as NIST’s Secure Software Development Framework (SSDF), the Digital Operational Resilience Act (DORA), Securities and Exchange Commission (SEC) rules, Cybersecurity and Infrastructure Security Agency (CISA) mandates, and the European Union’s AI Act have undergone or are expected to undergo changes.

These shifting frameworks only make CCM more integral, as its AI features allow users to ensure that they are thoroughly compliant at every step of the process. By freeing time for additional tasks, and by maintaining adherence to changing regulations, CCM enables organizations to improve their GRC programs and streamline their operations.

To learn more about how RegScale’s CCM platform provides a layer of security around AI usage, watch its webinar “How AI is Revolutionizing Government Compliance.”

Third-Party Risk Management: Moving from Reactive to Proactive

In today’s interconnected world, cyber threats are more sophisticated, with 83% of cyberattacks originating externally, according to the 2023 Verizon Data Breach Investigations Report (DBIR). This has prompted organizations to rethink third-party risk management. The 2023 Gartner Reimagining Third Party Cybersecurity Risk Management Survey found that 65% of security leaders increased their budgets, 76% invested more time and resources and 66% enhanced automation tools to combat third-party risks. Despite these efforts, 45% still reported increased disruptions from supply chain vulnerabilities, highlighting the need for more effective strategies.

Information vs Actionable Alerts

The constant evolution and splintering of illicit actors pose a challenge for organizations. Many threat groups have short lifespans or re-form due to law enforcement takedowns, infighting and shifts in ransomware-as-a-service networks, making it difficult for organizations to keep pace. A countermeasure against one attack may quickly become outdated as these threats evolve, requiring constant adaptation to new variations.

In cybersecurity, information is abundant, but decision-makers must distinguish between information and actionable alerts. Information provides awareness but does not always drive immediate action, whereas alerts deliver real-time insights, enabling quick threat identification and response. Public data and real-time alerts help detect threats not visible in existing systems, allowing organizations to make proactive defense adjustments.

Strategies for Managing Third-Party Risk

Managing third-party risk has become a critical challenge. The NIST Cybersecurity Framework (CSF) 2.0 emphasizes that governance must be approached holistically and highlights the importance of comprehensive third-party risk management. Many organizations rely on vendor surveys, attestations and security ratings, but these provide merely a snapshot in time and are often revisited only during contract negotiations. The NIST CSF 2.0 calls for continuous monitoring—a practice many organizations follow, though it is often limited to identifying trends and anomalies in internal telemetry data, rather than extending to third-party systems where potential risks may go unnoticed. Failing to consistently assess changes in third-party risks leaves organizations vulnerable to attack.

Many contracts require self-reporting, but this relies on the vendor detecting breaches, and there is no direct visibility into third-party systems like there is with internal systems. Understanding where data is stored, how it is handled and whether it is compromised is critical, but organizations often struggle to continuously monitor these systems. Government organizations, in particular, must manage their operations with limited budgets, making it difficult to scale with the growing number of vendors and service providers they need to oversee. Threat actors exploit this by targeting smaller vendors to access larger organizations.

Current strategies rely too heavily on initial vetting and lack sufficient post-contract monitoring. Continuous monitoring is no longer optional—it is essential. Organizations need to assess third-party risks not only at the start of a relationship but also as they evolve over time. This proactive approach is crucial in defending against the ever-changing threat landscape.

Proactively Identifying Risk

Proactively identifying and mitigating risks is essential for Government organizations, particularly as threat actors increasingly leverage publicly available data to plan their attacks. Transparency programs, such as USAspending.gov and city-level open checkbook platforms, while necessary for showing how public funds are used, can inadvertently provide a playbook for illicit actors to target vendors and suppliers involved in Government projects. Public data often becomes the first indicator of an impending breach, giving organizations a narrow window—sometimes just 24 hours—to understand threat actors’ operations and take proactive action.

To shift from reactive to proactive, organizations must enhance capabilities in three critical areas:

  1. Speed is vital for detecting threats in real time. Using AI to examine open source and threat intelligence data helps organizations avoid delays caused by time-consuming searches.
  2. The scope of monitoring must extend beyond traditional sources to deep web forums and dark web sites, evaluating text, images and indicators that mimic official branding.
  3. While real-time information is essential, excessive data can lead to alert fatigue. AI models that filter and tag relevant information enable security teams to focus on the most significant risks, as sketched below.
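
As a purely illustrative sketch of the third point, the snippet below uses an off-the-shelf zero-shot classifier from the open source Hugging Face Transformers library to tag incoming alert text and suppress anything judged irrelevant. The model, labels and threshold are assumptions made for the example; they are not Dataminr’s method or models.

```python
from transformers import pipeline  # Hugging Face Transformers

# Illustrative only: model name, labels and threshold are assumptions for this sketch.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

alerts = [
    "New ransomware leak site post names a county water utility vendor",
    "Annual charity 5K announced by the parks department",
]
labels = ["third-party breach", "ransomware", "physical security", "irrelevant"]

for alert in alerts:
    result = classifier(alert, candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    # Only surface alerts whose top tag clears a relevance threshold.
    if top_label != "irrelevant" and top_score > 0.5:
        print(f"ESCALATE [{top_label} {top_score:.2f}]: {alert}")
```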

Proactively addressing third-party risks requires organizations to stay prepared for immediate threats. By leveraging public data, they can strengthen defenses and act before vulnerabilities are exploited.

While self-reporting and AI tools are valuable, organizations must take ownership of their risk management by conducting their own due diligence. The ability to continuously monitor, identify and mitigate risks presents not just a challenge but an opportunity for growth and improvement. Ultimately, it is the organization’s reputation and security at stake, making proactive risk management key to staying ahead of today’s evolving threats.

To learn more about proactive third-party risk management strategies, watch Dataminr’s webinar “A New Paradigm for Managing Third-Party Risk with OSINT and AI.”

“Giving Back is in Our DNA”: How AvePoint is Driving Social Change in the Tech Industry

AvePoint (NASDAQ: AVPT) is the global leader in robust data management and governance with over 21,000 customers across the globe, helping them secure their collaboration environments across Microsoft, Google and Salesforce. Using AI, AvePoint enables organizations to modernize their digital workplace and improve data governance, enhancing productivity, collaboration and security. In addition to helping its customers thrive within their digital collaboration systems, AvePoint is dedicated to philanthropy, reflecting a core mission to drive positive change in the technology industry and their communities.

Internal and External Charitable Efforts

AvePoint’s philanthropy efforts reflect the company’s core values of diversity, equity and inclusion (DEI), with a focus on using technology to drive social impact. Recognizing the tech industry’s challenges with underrepresentation, especially for women and people of color, AvePoint supports groups like Girls Who Code to break stereotypes about women in technology. AvePoint also fosters change within the organization through employee resource groups like AvePoint Veterans, Black AvePoint Excellence, Women in Technology (WIT), Latinx and Queers and Allies (Q&A), all aimed at fostering inclusivity and providing a supportive environment.

Community engagement is integral to AvePoint’s mission, with events designed to blend philanthropy and collaboration. For instance, Black AvePoint Excellence (BAE) hosts an annual gala for partners and customers, typically held around Juneteenth. Likewise, during Pride Month, AvePoint’s Queers and Allies group invited a guest speaker to discuss the significance of Pride Month and what the organization could do to be more inclusive and equitable both internally and externally. These events reflect AvePoint’s culture of integrating ongoing education and fostering empathy, so employees can better serve their communities, extending positive change outward.

Beyond internal efforts, AvePoint’s philanthropic events align with Public Sector initiatives by giving back to communities through local charities where events are held. These collaborations not only contribute to community needs but also highlight AvePoint’s commitment to giving back in meaningful, locally impactful ways.

Past contributions include:

  • At the 2023 National Association of State Technology Directors (NASTD) Conference, AvePoint hosted a cornhole game, raising $2,500 for the Boston Children’s Hospital.
  • In 2023, at the TribalNet Conference in San Diego, California, AvePoint had two surfboards for attendees to decorate that were donated to the Groundswell Community Project.
  • AvePoint partnered with Carahsoft at NASTD 2024 and held a mini-golf game, donating $5,000 to The Minneapolis Foundation.
  • Partnering with Carahsoft for the second time, AvePoint hosted another mini-golf challenge at the 2024 Municipal Information Systems Association of California (MISAC) Conference, raising $3,000 for Patriots and Paws.

AvePoint’s recent partnership with Carahsoft’s Doing Good Team has enhanced these initiatives, particularly by streamlining charity verification and maximizing contributions. By combining resources, AvePoint and Carahsoft can expand their philanthropic impact, support reputable charities and foster community support. AvePoint’s ongoing commitment to diversity, inclusivity and technological advancement drives these charitable efforts, aiming to make a lasting difference in the communities they serve.

A Culture of Support and Service

AvePoint’s philanthropic efforts are deeply influenced by CEO Dr. Tianyi Jiang, who has prioritized giving back to the technology community throughout the company’s 23-year history. This commitment to social responsibility is exemplified by initiatives like a partnership with Cornell University to mentor the next generation of engineers and entrepreneurs. This leadership-driven ethos resonates throughout the company, promoting charitable engagement at both organizational and individual levels, across the U.S. and globally.

Beyond organized company initiatives, AvePoint encourages employees to pursue their own charitable passions with a donation matching program to support causes that resonate personally with team members. Employees are also empowered to volunteer, with flexibility to balance work and service. AvePoint’s support for these independent initiatives illustrates how the company’s culture of giving is woven into its fabric, encouraging employees to contribute both professionally and personally.

AvePoint’s culture of giving is grounded in values that empower employees to engage in meaningful initiatives, both through company-supported efforts and personal causes. Leadership’s passion for community impact inspires employees at all levels to pursue organized and independent philanthropic efforts, always met with AvePoint’s encouragement and resources. As seen in examples across the organization, this culture of service is more than a formal policy—it is embedded in the company’s DNA, guiding AvePoint’s commitment to making a positive difference within and beyond the technology industry.

Explore the AvePoint culture of giving back on our Careers Blog, and learn more about how the company supports the Public Sector with our award-winning technology here.

Exploring the Future of Healthcare with Generative AI

Artificial intelligence (AI) is an active field of research and development with numerous applications. Generative AI, a newer technique, focuses on creating content—learning from large datasets to generate new text, images and other outputs. In 2024, many healthcare organizations are embracing generative AI, particularly for creating chatbots. Chatbots, which facilitate human-computer interactions, have existed for a while, but generative AI now enables more natural, conversational exchanges, closely mimicking human interactions. Generative AI is not a short-term investment or a passing trend; it is a decade-long effort that will continue to evolve as more organizations adopt it.

Leveraging Generative AI

When implementing generative AI, healthcare organizations should consider areas to invest in, such as employee productivity or supporting healthcare providers in patient care.

Key factors to consider when leveraging generative AI:

  1. Use case identification: Identify a challenge that generative AI can solve, but do not assume it will address all problems. Evaluate varying levels of burden reduction across use cases to determine its value.
  2. Data: Ensure enough data is available for generative AI to provide better services. Identify inefficiencies in manual tasks and ensure data compliance, as AI results depend on learning from data.
  3. Responsible AI: Verify that the solution follows responsible AI guidelines and Federal recommendations. Focus on accuracy, addressing hallucinations, where the model produces incorrect information such as responses that are grammatically correct but nonsensical or outdated.
  4. Total cost of ownership: Generative AI is expensive, especially regarding hardware consumption. Consider if the same problem can be solved with more optimized models, reducing the need for costly hardware.

Harnessing LLMs for Healthcare

Natural language processing (NLP) has advanced significantly in recent decades, heavily relying on AI to process language. Machine learning, a core concept of AI, enables computers to learn from data using algorithms and draw independent conclusions. Large language models (LLMs) combine NLP, generative AI and machine learning to generate text from vast language datasets. LLMs support various areas in healthcare, including operational efficiency, patient care, clinical decision support and patient engagement post-discharge. AI is particularly helpful in processing large amounts of structured and unstructured data, which often goes unused.

When implementing AI in healthcare, responsible AI and data compliance are crucial. Robustness refers to how well models handle common errors like typos in healthcare documentation, ensuring they can accurately interpret how providers write and speak.

Fairness, especially in addressing biases related to age, origin or ethnicity, is also critical. Any AI model must avoid discrimination; for instance, if a model’s accuracy for female patients is lower than for males, the bias must be addressed. Coverage ensures the model understands key concepts even when phrasing changes.

Data leakage is another concern. If training data is poorly partitioned, it can lead to overfitting, where the model “learns” answers instead of predicting outcomes from historical data. Leakage can also expose personal information during training, raising privacy issues.
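
One common safeguard is to partition training and test data by patient rather than by individual record, so that no patient’s notes appear on both sides of the split. The sketch below is a generic illustration using scikit-learn’s group-aware splitter with made-up data; the field names and grouping choice are assumptions, not a John Snow Labs recipe.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Illustrative records: features X, labels y, and a patient identifier per note.
# Keeping every note from a given patient on one side of the split prevents the
# model from "learning" that patient's answers and leaking them into evaluation.
X = np.random.rand(10, 5)          # e.g. embeddings of clinical notes
y = np.random.randint(0, 2, 10)    # e.g. a binary outcome label
patient_ids = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient appears in both the training and the test partitions.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
print("train patients:", sorted(set(patient_ids[train_idx])))
print("test patients:", sorted(set(patient_ids[test_idx])))
```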

LLMs are often expensive, but healthcare-specific models outperform general-purpose ones in efficiency and optimization. For example, healthcare-specific models have shown better results than GPT-3.5 and GPT-4 in tasks like ICD-10 extraction and de-identification. Each model offers different accuracy and performance depending on the use case. Organizations must decide whether a pre-trained model or one trained using zero-shot learning is more suitable.

Buy Versus Build

When it comes to the “buy versus build” decision, the advantage of buying is the decreased time to production compared to building from scratch. Leveraging a task-specific medical LLM that a provider has already developed costs a healthcare organization about 10 times less than building its own solution. While some staff will still be needed for DevOps to manage, maintain and deploy the infrastructure, overall staffing requirements are much lower than if building from the ground up.

Even after launching, staffing requirements are not expected to decrease. LLMs continuously evolve, requiring updates and feature enhancements. While in production, software maintenance and support costs are significantly lower—about 20 times less—than trying to train and maintain a model independently. Many organizations that build their healthcare model quickly realize training is extremely costly in terms of hardware, software and staffing.

Optimizing the Future of Healthcare

When deciding on healthcare AI solutions, especially with the rise of generative AI, every healthcare organization should assess where to begin by identifying their pain points. They must ensure they have the data required to train AI models to provide accurate insights. Healthcare AI is not just about choosing software solutions; it is about considering the total cost of ownership for both software and hardware. While hardware costs are expected to decrease, running LLMs remains a costly endeavor. If organizations can use more optimized machine learning models for specific healthcare purposes instead of LLMs, it is worth considering from a cost perspective.

Learn how to implement secure, efficient and compliant AI solutions while reducing costs and improving accuracy in healthcare applications in John Snow Labs’ webinar “De-clutter the World of Generative AI in Healthcare.”

Discover how John Snow Labs’ Medical Chatbot can transform healthcare by providing real-time, accurate and compliant information to improve patient care and streamline operations.

Highlights from the SANS Government Security Forum on Zero Trust, CMMC Compliance and AI

Carahsoft Technology Corporation, a leader in Government IT solutions, partnered with the SANS Institute for the fourth year in a row to host the 2024 Government Security Solutions Forum. The event gathered cybersecurity professionals and Public Sector leaders to address evolving cyber threats facing Government agencies. Experts led discussions on key topics, including Zero Trust implementation, achieving Cybersecurity Maturity Model Certification (CMMC) compliance and harnessing artificial intelligence (AI). This blog highlights key takeaways from three of the six sessions on these pressing industry topics, providing actionable insights to strengthen cybersecurity defenses in today’s digital landscape. During the event, visual artist Ashton Rodenhiser summarized the sessions in the illustrations featured in this blog.

Zero Trust Implementation

During the session “Zero Trust Implementation Strategies,” experts explored the growing challenges security professionals face with emerging technologies and provided key insights into building a robust Zero Trust framework.

As new technologies rapidly emerge, security professionals face increasing challenges in keeping pace, especially with the integration of on-prem environments and the cloud. A key principle of Zero Trust is the enforcement of least privilege policies, which requires a shift in how identity management is applied. This begins with strong governance to ensure the accuracy and reliability of policies and attributes.

Building a comprehensive security framework also involves implementing contextual authorization through micro-segmentation, considering factors like device, location and time to create a robust protective barrier. Furthermore, integrating identity management with Endpoint Detection and Response (EDR) tools is becoming increasingly important for tracking authorized processes and addressing the extended presence of threat actors who exploit admin identities to execute malware.
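
To make the idea of contextual authorization concrete, the sketch below shows a minimal attribute-based check that combines role, device posture, location and time of day before granting access. It is a simplified illustration with invented attribute names and thresholds, not the policy model of any specific Zero Trust product discussed in the session.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    user_role: str
    device_managed: bool
    location: str
    request_time: time

def authorize(req: AccessRequest) -> bool:
    """Illustrative contextual check: every attribute must pass (least privilege)."""
    in_business_hours = time(7, 0) <= req.request_time <= time(19, 0)
    return (
        req.user_role in {"water-ops-engineer", "scada-admin"}  # explicit allow list
        and req.device_managed                                   # only managed endpoints
        and req.location == "US"                                 # expected geography
        and in_business_hours                                    # expected time window
    )

print(authorize(AccessRequest("scada-admin", True, "US", time(9, 30))))  # True
print(authorize(AccessRequest("scada-admin", True, "US", time(2, 15))))  # False: off-hours
```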

One of the biggest challenges in managing security policies is their complexity. Many security policies lack human readability due to their intricate structure, making automation essential for managing actions and enforcing compliance. The National Security Agency’s (NSA) recent Zero Trust guide emphasizes automation as a key pillar, highlighting its importance in responding to data flow deviations and maintaining security.

Despite the advanced systems in place, human error continues to be a major vulnerability. Employees can unknowingly compromise security through phishing attacks or by interacting with malicious links. To mitigate this, organizations must prioritize improving employee awareness and addressing the human factor as a critical component of cybersecurity.

Explore how Carahsoft’s Zero Trust portfolio can help Government agencies implement a comprehensive Zero Trust strategy, strengthening their security and protecting critical assets.

Achieving CMMC Compliance

The session “Navigating Supply Chain Security and CMMC Compliance” provided valuable insights into the upcoming implementation of the CMMC framework and its implications for Defense Industrial Base (DIB) organizations. This certification will ensure that DIB organizations meet stringent cybersecurity standards through third-party assessments and will soon be mandatory for both prime contractors and subcontractors working with the Department of Defense (DoD).

CMMC consists of multiple certification levels, with Level 1 covering basic practices for Federal Contract Information (FCI) and Level 2 addressing 110 practices based on NIST 800-171, extending to around 320 actions. To prepare, organizations should work with Registered Practitioner Organizations (RPOs) to assess their readiness. These RPOs employ Certified CMMC Professionals (CCPs) and Certified CMMC Assessors (CCAs), who are trained and certified by the Cybersecurity Assessor and Instructor Certification Organization (CAICO), a subsidiary of Cyber AB, which oversees the curriculum and training programs.

After preparation, organizations will undergo an official assessment by a CMMC Third-Party Assessment Organization (C3PAO), which hires CCPs and CCAs to evaluate the cybersecurity measures in place. As the CMMC rule takes effect, organizations must ensure they work with certified professionals listed on the Cyber AB marketplace, as uncertified entities will not be recognized by the DoD.

Given the complexity of CMMC and the fact that preparation for certification can take at least six months, organizations are encouraged to start early to meet the new requirements.

Carahsoft is proud to be part of the CMMC ecosystem, with around 800 employees focused on cybersecurity and partnerships with over 150 vendors. By closely tracking policies and industry trends, Carahsoft aligns customer needs with relevant technologies, promoting “better together” integrations to maximize the value of existing investments. Carahsoft works with vendors that address every CMMC maturity level and capability domain, guiding customers through the complex decision-making process to ensure that they select the most suitable technologies to fill security gaps effectively and efficiently. Explore Carahsoft’s CMMC portfolio.

Harnessing AI

Amid the complexities of cybersecurity, effective threat detection and response are increasingly reliant on advanced technologies like AI. The session “Harnessing AI for Advanced Threat Detection” explored the benefits and risks of integrating AI into security operations, highlighting key strategies for balancing automation with rigorous security practices.

“Advanced threat detection” spans various aspects of security operations, including the development and collection of threat intelligence. AI offers significant benefits in early threat detection, helping organizations quickly identify and respond to malicious activity. However, its use must be approached cautiously across the entire security chain.

With the rise of generative AI, industries are applying AI to automate time-consuming tasks. A key benefit is AI’s ability to condense information quickly. Tasks like threat searching or intelligence analysis, which once took hours, can now be completed in minutes, freeing experts to focus on higher-level tasks. This “toil reduction” is vital, as AI automates routine work and creates immediate efficiencies with minimal effort.

While AI brings advantages, there are inherent risks in implementing AI models and infrastructure. It is crucial to approach AI from two perspectives: using it to enhance security while ensuring the security of AI itself.

Organizations must also consider how they can trust AI-generated information. Trust and validation are essential. Provenance—knowing the source of data and models—is key to building confidence. While AI can handle most of the work, experienced engineers and analysts are still needed to verify and analyze the results so security teams can focus on more complex matters.

The siloed nature of work within security operations may limit intelligence sharing. Maintaining control of input data is critical, especially with public models hosted by technology vendors. If training data enters public models, organizations may compromise sensitive information. In regulated environments, private models offer safer options, allowing companies to train AI while retaining control.

When integrating AI into security operations, organizations should build trust by validating each use case, allowing AI to be operationalized while ensuring accuracy. Experimentation is key to identifying where AI can provide a return on investment. However, implementing AI requires careful consideration of security models, AI safety and governance, particularly as organizations scale AI into operations.

Unlock the potential of AI to drive innovation and efficiency in Government organizations with Carahsoft’s AI and machine learning portfolio.

Frank Briguglio, Federal CTO at SailPoint, and Fatih Akar, Security Product Manager at VMRay, led the discussion on Zero Trust. Melanie ‘Kyle’ Gingrich, Interim Executive Director at The Cyber AB, provided guidance on navigating CMMC compliance. Josh Lemon, Director of Managed Detection and Response at Uptycs, and Ron Bushar, Managing Director of Mandiant Solutions at Google Public Sector, explored the role of AI in advanced threat detection.

Explore more insightful sessions on how Public Sector cybersecurity teams are strengthening their security posture by watching the SANS 2024 Government Security Forum in partnership with Carahsoft.

Creating a Unified eLearning Environment to Deliver a Comprehensive Educational Experience

What to Consider When Building a Unified eLearning Environment

The core components of a unified eLearning Environment are content creation, delivery of the information, and tracking the effectiveness of the training. Adobe provides a cohesive platform for organizations to succeed in all three phases of this process. The advantage of having these tools under one umbrella is that they work seamlessly together, so the focus can be on the training and not the technology behind it. In this post, we will look at what tools can be leveraged to create dynamic engaging content, how you can deliver that content in new and immersive ways, and where you can track and manage the effectiveness of the training in an easily digestible manner.

Creating Content that Drives Interactivity

The key to an exceptional eLearning experience is getting the learners to the keyboard and the screen. Interactivity helps mitigate multitasking and keeps the learner focused on the information being delivered. Developing your courses in Adobe Captivate allows you to add interactive elements like quizzes and branching scenarios where learners can make choices that affect the path of the lesson, providing a more personal training tool. Taking this development one step further, the virtual reality (VR) capability can create an immersive learning environment with a plethora of VR interactions that course designers can implement. Finally, adding responsive design to the courses ensures they look amazing. The content will adapt to various screen sizes, so the experience is optimal whether the learner is on a laptop, tablet, or phone. For more hands-on training, the software simulation element allows for creating tutorial-type content that learners can then emulate in a virtual mock-up environment to learn the skills demonstrated. Once the content is built it can be published directly to Adobe Learning Manager (ALM), Adobe’s LMS, for delivery and tracking. Driving interactivity captures the learners’ attention and thus leads to better information retention.

Next-Generation Virtual Classrooms Leveraging AI and Apps

Whether artificial intelligence (AI) is good or bad can be debated, but there is no doubt that it is here, and it will only get faster, more accurate, and more capable. Adobe Connect has an app called “Chat Plus” that allows you to access AI in the chat during virtual classes. This allows hosts and presenters to instantly access information that may take several clicks to find in a search engine. Generative AI algorithms can help create new ways to spice up the virtual content through AI tools such as text (ChatGPT, Gemini, Sonnet), images (Adobe Firefly, Midjourney, DALL-E), and audio (Suno, Donna, AIVA). Text can be used to generate session outlines, quiz questions, polls, and slide structures. Images are great for virtual room backgrounds, slide deck visuals, and whiteboard exercises. Audio can be used as lobby background music, quiz music, or translated recordings. By combining these AI features with applications from the Adobe Connect App Store, you create a fully immersive learning experience that goes way beyond screen sharing and whiteboarding. Mixing up media types when delivering virtual classroom training keeps the learner engaged and entertained.

Managing the Blended Learning Classroom

As organizations work on balancing in-office vs. remote workers, the blended learning experience for training is becoming the norm. Blended learning can present numerous challenges, like tracking attendance, utilizing breakout rooms, or taking quizzes. However, it can also provide opportunities, like having content that is always available via recordings, addressing learners who learn better synchronously vs. asynchronously or vice-versa, and cost-effectively training a globally dispersed audience. When you combine the power of Adobe Connect (Virtual Classrooms) and Adobe Learning Manager (Adobe’s LMS), there is now a single hub for all synchronous AND asynchronous learning. Seamless data exchange between the products allows for more accurate reporting to better measure the training’s effectiveness. A unified user experience for instructors and learners means that managing, scheduling, and accessing the blended learning courses can all be done in a straightforward easy-to-use platform.

The Love/Hate Relationship with a Learning Management System (LMS)

The complexity involved with setting up an LMS and managing it can be overwhelming. Adobe Learning Manager was designed specifically for enterprise delivery of courses in an easy-to-manage platform, with Admins and Learners in mind. The idea was to simplify the process with personalized learning paths, comprehensive learning tools, social learning, gamification, mobile accessibility, and certification/badging. Each learner has a dashboard to track their progress and see recommended courses. A calendar with automated emails and system notifications helps learners manage their schedules, and a home page with announcements provides an easy way to share information. Gamification and social learning elements can be enabled to foster an engaging eLearning ecosystem, and connection to other eLearning tools allows it to serve as a one-stop shop for all learner training. With ALM, automated smart workflows for learning plans, content reusability, and detailed reporting help take the complexity out of managing an organization’s training program.

Additionally, if you or anyone you know would like to dive deeper into Adobe’s digital learning applications and how they can be applied to create exceptional hybrid learning experiences, watch the on-demand recordings from our 8-part webinar series, Advancing Unified Learning Environments. Adobe’s digital learning experts will guide you through building an all-in-one learning environment, designing captivating training content, managing content and learners, and amplifying your message through engaging live virtual instruction and social learning experiences.

Access our on-demand recordings and presentation resources.

Democratizing AI: How Pre-trained Models Plus RAG Can Empower State and Local Agencies

Smaller state agencies need out-of-the-box options that solve immediate needs without a lot of funding or skilled machine learning expertise. Combining RAG with pre-trained LLMs and the agency’s own data accelerates development of AI capabilities and speeds time to value.

In my role at HPE over the last two years, I’ve had meetings with government agencies, defense departments, and research institutions around the world about AI. We’ve discussed everything from how to identify the right use cases for AI, to ethical concerns, to getting a handle on the wild, wild west of AI projects across their organizations.

Some of these larger public sector organizations and government agencies have received funding from sources like the U.S. National Science Foundation, U.S. Defense Advanced Research Projects Agency (DARPA), the European Commission’s EuroHPC Joint Undertaking (EuroHPC JU), or the European Defense Fund, which has allowed them to develop AI centers of excellence and build end-to-end AI solutions. They have far-reaching goals — goals such as building the first large language model (LLM) for their native language, becoming the first sovereign, stable, secure AI service provider in their region, building the world’s most sustainable AI supercomputer, or becoming the world leader for trustworthy and responsible AI.

But it takes a lot of resources to train an AI model. The infrastructure needed to train a foundational model may include thousands of GPU-accelerated nodes in high performance clusters. Data scientists and machine learning (ML) engineers are also needed to source and prepare datasets, execute training, and manage deployment.

That’s why many agencies are looking for out-of-the-box options that bring rapid capabilities for solving immediate challenges. Many of these are state and local agencies and higher education institutions. They don’t have the same level of requirements, funding, or expertise to build their own LLMs.

So does that mean the door to powerful AI models is closed to smaller state and local agencies?

No — not if you can gain an understanding of the available pre-trained models that can generate value with AI immediately. There is so much that can be accomplished without ever training a model yourself.

Inference is AI in Action

What exactly is inference? It’s the use of a previously trained AI model such as an LLM to make predictions or decisions based on new, previously unseen data.

Sound complicated? It’s just a fancy way of saying that you’re using an existing model to generate outputs.

In contrast with model training, which involves learning from a dataset to create the model, inference is using that model in a real-world application. Inferencing with pre-trained models reduces both funding requirements as well as the amount of expertise needed to deploy and monitor these models in production.
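
As a minimal illustration of inference, the snippet below runs an off-the-shelf summarization model from the open source Hugging Face Transformers library on a new piece of text. The model choice and the sample service request are assumptions for the example, not an HPE product or a recommended model.

```python
from transformers import pipeline  # open source inference with a pre-trained model

# Illustrative only: the model and the sample text are assumptions for this sketch.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

request = (
    "Resident reports a broken streetlight at the corner of 5th and Main. "
    "The light has been out for two weeks, the intersection is dark after "
    "sunset, and several neighbors have complained about safety."
)

# Inference: no training happens here; we only run the existing model on new data.
summary = summarizer(request, max_length=30, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```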

The pre-trained model market has been steadily growing, as has the number of cloud, SaaS, and open source inference options available. OpenAI’s GPT-4o, Anthropic’s Claude, Google’s Gemini, and Mistral AI are among the most popular LLMs used for text and image generation. They’re just some of the thousands of models available through libraries like NVIDIA NGC and HuggingFace.

And just last month in Las Vegas, HPE also announced its new NVIDIA AI Computing by HPE portfolio of co-developed solutions. These solutions include HPE’s Machine Learning Inference Software (MLIS), which makes it easy to deploy pre-trained models anywhere, including inside your firewall.

Pre-trained Models with Your Data

The advantages of running a pre-trained model with the right platform seem pretty clear — you get the capabilities without the costs of training. However, it’s important to note that a pre-trained LLM excels in general language understanding and generation but is trained on data other than your own. This is great for use cases where broad knowledge is sufficient and the ability to generate coherent, contextually appropriate text is essential.

So what do you do if you need to generate more specific and up-to-date outputs? There is another machine learning (ML) technique called retrieval augmented generation (RAG) which combines the pre-trained LLM with an additional data source (such as your own knowledge base). RAG combines LLM capabilities with a real-time search or retrieval of relevant documents from your source. The resulting system works like an LLM that’s been trained on your data, but with even more accuracy. RAG is particularly useful for tasks requiring specific domain knowledge or recent data.
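
The sketch below shows the retrieval half of a minimal RAG flow: a handful of snippets stand in for an agency knowledge base, an open source sentence-transformers model embeds them, and the most relevant snippets are stitched into a prompt for whichever pre-trained LLM the agency already uses. The documents, model name, and prompt format are assumptions for the example, not a specific HPE implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative knowledge base: a few snippets standing in for an agency's documents.
docs = [
    "Building permit applications are reviewed within 10 business days.",
    "Burn permits are suspended countywide during red-flag warnings.",
    "Dog licenses must be renewed annually by March 31.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question (cosine similarity)."""
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long does a building permit review take?"
context = "\n".join(retrieve(question))

# The augmented prompt would then be sent to whichever pre-trained LLM the agency uses.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```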

Improving Outcomes for State Agencies

Getting started with AI models begins with understanding which problem you want to solve and whether it is most efficiently and effectively solved with AI. Here are some ways different kinds of organizations can leverage pre-trained LLMs:

Law enforcement agencies can use pre-trained models for incident reporting and documentation, to analyze crime data for predictive policing, or to analyze audio and video transcription for evidence management. They can improve community engagement through sentiment analysis and reduce administrative burdens through automated report generation.

Conversational AI can also make many types of citizen services more efficient and user-friendly — from permit applications to public query engines for local government agencies. And LLMs can automate document processing, reducing manual tasks for government workers and improving speed and accessibility of services to citizens.

LLMs can enhance the education experience for students and reduce the burden on teachers. AI-powered virtual assistants can provide tutoring and study support to students outside of school hours and assist researchers in conducting literature reviews by summarizing academic papers or extracting information.

As you consider leveraging pre-trained LLMs, think about the unique problems your agency or institution faces and how this approach could quickly solve those challenges without the need for extensive expertise or the burden and cost of training a model from scratch.

Final Thoughts

As the world and society evolve, the relationships between citizens and their governments, and between students and their teachers, will evolve too. In fact, they already are. Taking advantage of pre-trained models to solve long-standing automation issues or cumbersome documentation processes can give your organization the catalyst it needs to modernize to meet these new dynamics.

AI is being democratized by a growing number of pre-trained LLMs that are available off the shelf. And you don’t need to have complex data science skills to leverage them, just the right tools.

The door to AI is open for state and local agencies, regardless of size or sophistication. A part of my job is to understand the challenges and goals of public sector organizations of all sizes when it comes to AI.

To learn more about HPE Private Cloud AI, visit the Private Cloud AI solutions overview page and contact the HPE team for questions and comments.

This post originally appeared on HPE.com and is re-published with permission.