Strengthening Cybersecurity in the Age of Low-Code and AI: Addressing Emerging Risks

As new technologies like low-code/no-code development and generative AI (GenAI) revolutionize how we build and interact with software, they also bring new security challenges, especially for the public sector. Protecting sensitive information and online accounts is more critical than ever as cybercriminals look to exploit gaps in these emerging systems. Robust security and threat visibility are now essential, particularly as traditional safeguards become less effective against evolving threats.


Low-Code Development Exposes New Risks

One of the unintended consequences of our shift to a low-code/no-code development paradigm is the delegation of complex development tasks to Large Language Models (LLMs) and GenAI systems, often bypassing seasoned developers and architects. This opens new opportunities for cybercriminals. These systems excel at functional requirements—‘Build me a website that accepts customer checkout requests’—but they rarely infer non-functional needs, like security, unless explicitly instructed.
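To make that gap concrete, here is a minimal sketch, assuming a Flask and SQLite stack (the endpoint, table and limits are illustrative, not any particular generated output), of a checkout handler in which the commented lines are exactly the non-functional safeguards a purely functional prompt tends to leave out:

```python
# A hedged illustration: the security lines below (validation, bounds checks,
# parameterized SQL) rarely appear unless the prompt asks for them explicitly.
import sqlite3
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.route("/checkout", methods=["POST"])
def checkout():
    data = request.get_json(silent=True)
    if data is None:
        abort(400, description="Malformed JSON body")        # input validation
    try:
        sku = str(data["sku"])
        quantity = int(data["quantity"])
    except (KeyError, TypeError, ValueError):
        abort(400, description="Missing or invalid fields")  # schema checking
    if not 0 < quantity <= 100:
        abort(400, description="Quantity out of range")      # business-rule limit

    conn = sqlite3.connect("store.db")  # assumes an existing orders table
    try:
        # Parameterized query, not string concatenation (blocks SQL injection)
        conn.execute("INSERT INTO orders (sku, quantity) VALUES (?, ?)",
                     (sku, quantity))
        conn.commit()
    finally:
        conn.close()
    return jsonify({"status": "accepted"})
```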

In traditional software development, security considerations are often implicit, stemming from the experience of developers and architects who’ve spent years learning from real-world failures. GenAI, however, lacks this depth of experience and focuses narrowly on the task at hand. The result? Incomplete or inadequate security measures in software developed through these systems. As organizations lean more heavily on GenAI, we risk creating an insecure software ecosystem ripe for exploitation by threat actors.


The Proliferation of Knowledge-Based Verification Attacks

We’re on the brink of a surge in automated attacks exploiting vulnerabilities in Knowledge-Based Verification (KBV) systems. Account creation and password-reset flows often rely on KBV—such as answering questions about your mother’s maiden name or the street you grew up on. Large-scale data breaches, like the one that exposed millions of Social Security numbers last year, are eroding the effectiveness of this approach, because that information is increasingly accessible to malicious actors.


As these personal details become more widely available through data breaches and online marketplaces, attackers can easily bypass KBV systems. Worse yet, threat actors can now leverage LLMs to develop sophisticated tools to mine personal data at scale and orchestrate automated attacks against these KBV systems. Organizations face an urgent challenge: how to protect accounts in a world where traditional KBV methods are no longer secure or reliable while still offering users a legitimate path to create an account or regain access when needed.
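One compensating control is to make automated guessing expensive. Below is a minimal sketch, with an in-memory store and illustrative thresholds, of throttling failed KBV attempts; a production system would back this with a shared store and route locked-out users to stronger, possession-based verification rather than more knowledge questions:

```python
# A hedged sketch of throttling repeated KBV failures so scripted,
# LLM-assisted guessing becomes expensive. Thresholds are illustrative.
import time
from collections import defaultdict

MAX_ATTEMPTS = 5          # failed answers allowed per window
WINDOW_SECONDS = 15 * 60  # sliding 15-minute window

_failures = defaultdict(list)  # account_id -> timestamps of failed attempts

def record_failure(account_id: str) -> None:
    _failures[account_id].append(time.monotonic())

def kbv_allowed(account_id: str) -> bool:
    """Return False once an account exceeds its failure budget."""
    now = time.monotonic()
    recent = [t for t in _failures[account_id] if now - t < WINDOW_SECONDS]
    _failures[account_id] = recent
    return len(recent) < MAX_ATTEMPTS

# Callers would check kbv_allowed() before presenting another KBV question,
# and send locked-out users to a stronger channel (e.g., document- or
# possession-based verification) instead of more knowledge questions.
```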


LLM Safeguards Can Be Overridden or Bypassed by Running Models Locally

With the proliferation of local LLM instances and tools like Ollama, we’ll see safeguards embedded in commercial LLMs eroded or bypassed entirely. Running models locally can allow threat actors to fine-tune them, removing restrictions on malicious activity and enabling custom models optimized for cybercrime. This creates a new frontier for scaled attacks that are faster, more targeted, and harder to detect until it’s too late.

Imagine a threat actor fine-tuning a model to craft phishing campaigns, identify vulnerabilities in software, or automate account takeovers. The ability to localize and modify these models fundamentally shifts the balance, empowering attackers with tools tailored to their malicious intent. The guardrails built into commercial LLMs are no match for this growing trend, amplifying the need for robust detection and defense strategies at every level.

As the public sector continues to adopt innovative technologies, staying ahead of emerging cyber threats is crucial. The increasing sophistication of attacks, such as those targeting KBV systems and leveraging GenAI, highlights the need for stronger protections. By prioritizing comprehensive security measures and threat detection, organizations can mitigate the risks of these evolving vulnerabilities and safeguard their sensitive data and online accounts against malicious actors. It is essential to build and maintain resilient security strategies to ensure the integrity of digital infrastructures in this rapidly changing environment.


To learn more about how HUMAN Security helps the public sector protect citizen accounts, sensitive information, and critical infrastructure, click here.


Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, including HUMAN Security, we deliver solutions for Geospatial, Cybersecurity, MultiCloud, DevSecOps, Artificial Intelligence, Customer Experience and Engagement, Open Source and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Explore the Carahsoft Blog to learn more about the latest trends in Government technology markets and solutions, as well as Carahsoft’s ecosystem of partner thought-leaders.

Exploring the Future of Healthcare with Generative AI

Artificial intelligence (AI) is an active field of research and development with numerous applications. Generative AI, a newer technique, focuses on creating content, learning from large datasets to generate new text, images and other outputs. In 2024, many healthcare organizations embraced generative AI, particularly for building chatbots. Chatbots, which facilitate human-computer interactions, have existed for a while, but generative AI now enables more natural, conversational exchanges that closely mimic human interactions. Generative AI is not a short-term investment or a passing trend; it is a decade-long effort that will continue to evolve as more organizations adopt it.

Leveraging Generative AI

When implementing generative AI, healthcare organizations should consider areas to invest in, such as employee productivity or supporting healthcare providers in patient care.

Key factors to consider when leveraging generative AI:

  1. Use case identification: Identify a challenge that generative AI can solve, but do not assume it will address all problems. Evaluate varying levels of burden reduction across use cases to determine its value.
  2. Data: Ensure enough data is available for generative AI to provide better services. Identify inefficiencies in manual tasks and ensure data compliance, as AI results depend on learning from data.
  3. Responsible AI: Verify that the solution follows responsible AI guidelines and Federal recommendations. Focus on accuracy, addressing hallucinations, in which the model returns incorrect information such as responses that are grammatically correct but nonsensical or outdated.
  4. Total cost of ownership: Generative AI is expensive, especially regarding hardware consumption. Consider if the same problem can be solved with more optimized models, reducing the need for costly hardware.

Harnessing LLMs for Healthcare


Natural language processing (NLP) has advanced significantly in recent decades, heavily relying on AI to process language. Machine learning, a core concept of AI, enables computers to learn from data using algorithms and draw independent conclusions. Large language models (LLMs) combine NLP, generative AI and machine learning to generate text from vast language datasets. LLMs support various areas in healthcare, including operational efficiency, patient care, clinical decision support and patient engagement post-discharge. AI is particularly helpful in processing large amounts of structured and unstructured data, which often goes unused.

When implementing AI in healthcare, responsible AI and data compliance are crucial. Robustness refers to how well models handle common errors like typos in healthcare documentation, ensuring they can accurately interpret how providers write and speak.

Fairness, especially in addressing biases related to age, origin or ethnicity, is also critical. Any AI model must avoid discrimination; for instance, if a model’s accuracy for female patients is lower than for males, the bias must be addressed. Coverage ensures the model understands key concepts even when phrasing changes.

Data leakage is another concern. If training data is poorly partitioned, it can lead to overfitting, where the model “learns” answers instead of predicting outcomes from historical data. Leakage can also expose personal information during training, raising privacy issues.
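A common guard against this kind of leakage is to partition data by patient rather than by record. The sketch below, using scikit-learn’s GroupShuffleSplit on synthetic stand-in data, keeps every patient’s notes on one side of the split:

```python
# A minimal sketch of group-aware partitioning: records from the same
# patient never appear in both train and test sets, removing one common
# source of leakage. The data here is random stand-in data.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.random.rand(1000, 20)                   # one row per clinical note
y = np.random.randint(0, 2, size=1000)         # labels
patient_ids = np.random.randint(0, 200, 1000)  # 200 distinct patients

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient contributes records to both sides of the split.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
```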

LLMs are often expensive, but healthcare-specific models outperform general-purpose ones in efficiency and optimization. For example, healthcare-specific models have shown better results than GPT-3.5 and GPT-4 in tasks like ICD-10 extraction and de-identification. Each model offers different accuracy and performance depending on the use case. Organizations must also decide whether a fine-tuned, task-specific model or a general-purpose model applied zero-shot is more suitable.
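For a sense of what the de-identification task involves, here is a rule-based baseline; this is not how a healthcare-specific LLM works, and the patterns are illustrative and far from exhaustive, but it shows the input-output contract such models are evaluated on:

```python
# A rule-based de-identification baseline. Real clinical de-identification
# must also catch names, addresses, and free-text identifiers that regexes
# miss -- which is where trained models earn their keep.
import re

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN":   re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE),
}

def deidentify(text: str) -> str:
    """Replace recognizable identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

note = "Pt seen 03/14/2024, MRN 00123456, callback 555-867-5309."
print(deidentify(note))
# -> "Pt seen <DATE>, <MRN>, callback <PHONE>."
```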

Buy Versus Build

When it comes to the “buy versus build” decision, the advantage of buying is the decreased time to production compared to building from scratch. Leveraging a task-specific medical LLM that a provider has already developed costs a healthcare organization about 10 times less than building its own solution. While some staff will still be needed for DevOps to manage, maintain and deploy the infrastructure, overall staffing requirements are much lower than if building from the ground up.

Even after launching, staffing requirements are not expected to decrease. LLMs continuously evolve, requiring updates and feature enhancements. While in production, software maintenance and support costs are significantly lower—about 20 times less—than trying to train and maintain a model independently. Many organizations that build their healthcare model quickly realize training is extremely costly in terms of hardware, software and staffing.

Optimizing the Future of Healthcare

When deciding on healthcare AI solutions, especially with the rise of generative AI, every healthcare organization should assess where to begin by identifying their pain points. They must ensure they have the data required to train AI models to provide accurate insights. Healthcare AI is not just about choosing software solutions; it is about considering the total cost of ownership for both software and hardware. While hardware costs are expected to decrease, running LLMs remains a costly endeavor. If organizations can use more optimized machine learning models for specific healthcare purposes instead of LLMs, it is worth considering from a cost perspective.

Learn how to implement secure, efficient and compliant AI solutions while reducing costs and improving accuracy in healthcare applications in John Snow Labs’ webinar “De-clutter the World of Generative AI in Healthcare.”

Discover how John Snow Labs’ Medical Chatbot can transform healthcare by providing real-time, accurate and compliant information to improve patient care and streamline operations.

Generative AI: Improving Efficiency for SLED Agencies

Today, users engage with generative AI like a personal assistant, granting it access to their personal calendars and assigning it tasks such as making dinner reservations. At work, employees turn to AI to expedite difficult or repetitive tasks. By educating employees on the security ramifications of generative AI, and by properly implementing it into their agency, State and Local Government and Education Market (SLED) decision makers can accelerate and improve their day-to-day processes.

Updated Security Parameters

When it comes to sensitive data, agencies and individuals should always maintain broad vigilance. With generative AI, agencies need to consider who has access to that information and which adversaries may attempt to exploit it.


Employees should be trained to spot red flags and use AI safely. With the increase in deepfakes, such as voice masking or impersonation, employees need to be able to spot suspicious phone calls and videos. With proper training to detect and report these instances, employees can help prevent hacking attempts. It is difficult to prevent employees from using generative AI entirely, even in scenarios where sensitive data is present. Rather than blocking the tools, agencies should switch to sanctioned vendors, which gives them access to fully tracked logs. It is critical to prevent sensitive information from passing into public AI, where it will be shared with others.

By design, AI is a black box. While agencies and users cannot know what goes on between input and output, they should only trust generative AI packages that have dependable service hosts. Agencies, especially SLED agencies that handle sensitive information, need assurance that their data will remain contained by reliable parent companies. By negotiating through contract vehicles, agencies can maintain visibility over the flow of data by learning whether their information is being retained and for how long.

Saving Time with Generative AI

Some of the first generative AI models were built for machine translation services such as Google Translate. Many services, such as Zoom, employ generative AI plugins that transcribe language in real time for the appropriate audience. These models initially generated very literal translations; however, intent and context are critical in communication. Users often turn to third-party generative AI models to translate emails or web pages, trusting their ability to understand and mirror context and intent more than the built-in translation features many legacy software products offer.

Generative AI can help with drafting emails, broadcasting information, meeting deadlines and responding to agents, ultimately expediting processes. This can be especially helpful for overworked translators: while generative AI completes the initial translations, the workers can focus on reviewing them, expediting and perfecting the process. While there will always be a need for human involvement from a promotional, proofreading and comprehension perspective, generative AI can speed up communication.

Generative AI can reduce the number of steps users take. By leading users from step A to step C, bypassing the difficult or time-consuming step B, generative AI keeps users on track. And for models trained on a SLED agency’s own data, users can always reference internal documents if questions arise. This cuts down on busy work, reducing time spent finding information. Generative AI can also expedite the synthesis of search data. In the past, search engines could only locate documents for agencies. Now, tools searching SLED records can not only find the document itself but also find the information within the document and analyze that information before returning it to the user.
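As a sketch of the retrieval half of that pipeline, the toy function below scores passages by keyword overlap and surfaces the most relevant one; a generative model would then analyze or summarize the surfaced text for the user. The records and scoring are illustrative, not any vendor’s method:

```python
# Deliberately naive retrieval: score each passage by how many query terms
# it shares, then return the best match for downstream summarization.
def best_passage(query: str, passages: list[str]) -> str:
    terms = set(query.lower().split())
    def score(passage: str) -> int:
        return len(terms & set(passage.lower().split()))
    return max(passages, key=score)

records = [
    "Permit 4412 was approved by the county board on June 3.",
    "Budget line items for fiscal year 2023 road maintenance.",
    "Meeting minutes: zoning variance request for parcel 17-B.",
]
print(best_passage("when was permit 4412 approved", records))
# -> "Permit 4412 was approved by the county board on June 3."
```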

By accelerating the day-to-day tasks of employees, generative AI frees up creative minds to complete more vital, thorough and intricate projects, improving utility.

AI has been integral to Broadcom’s product solutions in user and enterprise IT. When properly implemented, generative AI can enhance technology, cybersecurity, analytics and productivity. To learn more about how Broadcom can help implement secure generative AI in SLED spaces, view Broadcom’s SLED focused cybersecurity solutions.

EdTech Talks: Modernizing Education with Artificial Intelligence and Machine Learning

Schools must embrace change alongside their growing generations to equip students for the future. Artificial intelligence (AI) and machine learning (ML) are two evolving, expansive technologies that are creating a monumental impact in the private and Public Sector, with education institutions being no exception. At Carahsoft’s annual EdTech Talks Summit, education leaders explored how AI and ML are changing the way teachers instruct, the way students learn and the way administrators approach technology in schools.

As a baseline, when considering AI for K-12 and higher education, administrators should follow several guiding principles for responsible and trustworthy use of AI.

  • Human-centricity: Promote human well-being, individuality and equity
  • Inclusivity: Ensure accessibility and diverse perspectives
  • Accountability: Proactively identify and mitigate adverse impacts
  • Transparency: Instruct students and teachers on proper usage, including potential risks and how decisions are made
  • Robustness: Operate reliably and safely while enabling mechanisms that assess and manage potential risks
  • Privacy and security: Respect the privacy of data subjects

Generative AI in Education

Generative AI is still fairly new to the education space, and educators fall on both sides of the spectrum of acceptance: some prefer to ban it from their schools while others are open to embracing the up-and-coming technology, not only for use cases in the classroom but also to prepare students for the future workforce.

For example, one of the first technologies educators may be inclined to use when adopting AI in the classroom is detection tools. Dr. Anand Rao, Professor of Communications and Chair of the Department of Communications and Digital Studies at the University of Mary Washington in Virginia, recommends against this implementation because it could negatively affect vulnerable students. AI detection is not 100% accurate in every instance. For some students, English may not be their first language, and a detection tool could identify their work as AI-generated because it may be more formulaic. While detection tools can be used in a positive way to ensure honesty is upheld in students’ work, teachers and professors should use their discretion when interpreting the tools’ results.

AI literacy is one of the most important principles for instructors to explore, deliberate and establish guidelines for. Since generative AI platforms such as ChatGPT and other tools like detection programs are still maturing, students and faculty should go through a test period to learn how they work and decide whether they are comfortable using them. As a next step, IT teams must be prepared to begin implementation and consider cybersecurity in that process.

Analytics and Data in AI

Education data grows exponentially with each new school year; however, collecting, evaluating and taking action based on the insights of that data is a long yet vital process. Instructors and administrators must leverage platforms that can help automate and analyze new and archived data to make the most informed decisions for their schools using the AI analytics lifecycle. This includes managing data efficiently, interpreting observations made about data and finally, creating a plan to incorporate constructive action to address needs discovered via the data. Using this strategy, schools can be better prepared to tackle real world questions and scenarios and provide students and teachers with the tools and processes they need to be successful.

This year’s EdTech Talks Summit aimed to educate academic IT decision makers and end users about the current challenges and solutions surrounding student growth and development, security, AI and ML, and the cost-saving, modernization benefits of today’s leading EdTech solutions. The Education sector faces new challenges every school year, and it is imperative now more than ever that the IT industry and Government work together to provide the safest and most successful learning environments for all students.

Visit the EdTech Talks Conference Resource Center to view panel discussions and other innovative insights surrounding security, AI and student success from Carahsoft and our partners.

 

About Carahsoft in the Education Market  

Carahsoft Technology Corp. is The Trusted Education IT Solutions Provider™.  

Together with our technology manufacturers and reseller partners, we are committed to providing IT products, services and training to support Education organizations.  

Carahsoft is a leading IT distributor and top-performing E&I Cooperative Services, Golden State Technology Solutions, Internet2, NJSBA, OMNIA Partners and The Quilt contract holder, enhancing student learning and enabling faculty to meet the needs of Higher Education institutions.  

To learn more about Carahsoft’s Education Solutions, please visit us at http://www.carahsoft.com/education

To learn more about Carahsoft’s AI Solutions, please visit us at https://www.carahsoft.com/solve/ai-machine-learning

The 12 Artificial Intelligence Events for Government in 2024

Last year set a landmark standard for innovation in artificial intelligence (AI). Federal, State, and Local Governments and Federal Systems Integrators are eager to learn how they can implement AI technology within their agencies. With the recent Presidential Executive Order on AI, many Public Sector-focused events in 2024 will explore AI modernizations, from accelerated computing in the cloud to the data center, secure generative AI, cybersecurity, workforce planning and more.

We have compiled the top AI events for Government for 2024 that you will not want to miss.

1. AI for Government Summit

May 2, 2024, Reston, VA | In-Person Event

The AI for Government Summit is a half-day event designed to bring together Government officials, AI experts and industry leaders to explore the transformative potential of AI in the public sector. As Governments worldwide increasingly adopt AI technologies to enhance efficiency, improve services and address complex challenges, this summit will serve as a platform for collaboration, discussion and sharing knowledge on the latest advancements and best practices in AI deployment within Government organizations.

Sessions to look out for: Cybersecurity & AI – Safeguarding the Government and Generative AI Government Use Case Panel 

Carahsoft is proud to host this inaugural event alongside FedInsider. Join us and over 100 of our AI and machine learning technology and solution providers as they speak to AI adoption in the Public Sector and how they are using AI to solve our government’s most critical challenges. Attendees will also hear from top government decision-makers as they share unique insights into their current AI projects. 

2. NVIDIA GTC 

March 18 – 21, 2024, San Jose, CA | Hybrid Event

Come connect with a dream team of industry luminaries, developers, researchers, and business strategists helping shape what’s next in AI and accelerated computing. From the highly anticipated keynote by NVIDIA CEO Jensen Huang to over 600 inspiring sessions, 200+ exhibits, and tons of unique networking events, GTC delivers something for every technical level and interest area. Whether you join us in person or virtually, you are in for an incredible experience at the conference for the era of AI.

Sessions to look out for: What’s Next in Generative AI and Robotics in the Age of Generative AI 

Carahsoft serves as NVIDIA’s Master Aggregator working with resellers, systems integrators, and consultants. Our team provides NVIDIA products, services, and training through hundreds of contract vehicles.

Carahsoft is proud to be the host of the GTC Public Sector Reception on Tuesday, March 19th.  

Please visit Carahsoft and our partners at the following booths:

  • Government IT Solutions: Carahsoft (#1726), Government Acquisitions (#1820), World Wide Technology (#929)
  • AI/ML & Data Analytics: Anaconda (#1701), Dataiku (#1704), Datadog (#1033), DataRobot (#1603), Deepgram (#1719), Domino Data Labs (#1612), Gretel.AI (G130), H2O.AI (G124), HEAVY.AI (#1803), Kinetica (I132), Lilt (I123), Primer.AI (I126), Red Hat (#1605), Run:AI (#1408), Snowflake (#930), Weights & Biases (#1505 & G115)
  • AI Infrastructure: Dell (#1216), DDN (#1521), Edge Impulse (#434), Lambda Data Lab (#616), Lenovo (#1740), Liqid (#1525), Pure Storage (#1529), Rescale (#1804), Rendered.AI (#330), Supermicro (#1016), Weka (#1517)
  • Industry Leaders: AWS (#708), Google Cloud (#808), HPE (#408), Hitachi Vantara (#308), IBM (#1324), Microsoft (#1108), VAST Data (#1424), VMware (#1604)

3. 5th Annual Artificial Intelligence Summit  

March 21, 2024, Falls Church, VA | In-Person Event  

Join the Potomac Officers Club’s 5th Annual AI Summit, where federal leaders and industry experts converge to explore the transformative power of artificial intelligence. Discover innovative AI advancements, engage in dynamic discussions, and forge strategic collaborations with key partners at this annual gathering of the movers and shakers in the AI field. Hosted by Executive Mosaic, this summit will be held in Falls Church, Virginia.  

Sessions to look out for: Leveraging Collaboration to Accelerate AI Adoption in the DoD and Operationalizing AI in Government: Getting Things Done with Automation  

Carahsoft is the master aggregator for Percipient AI, a Silver Sponsor, and Primer AI, the Platinum Sponsor. Mark Brunner, President of Federal at Primer AI, will also be speaking at the event. 

4. INSA Spring Symposium: How AI is Transforming the IC

April 4, 2024, Arlington, VA | In-Person Event

Join 300+ intelligence and national security professionals at INSA’s Spring Symposium, How Artificial Intelligence is Transforming the IC, on Thursday, April 4, from 8:00 am-4:30 pm at the INSA/NRECA Conference Center in Arlington, VA. Key leaders from government, academia, and industry will discuss cutting-edge AI innovations transforming intelligence analysis, top priorities and concerns from government stakeholders, developments in ethics and oversight, challenges and opportunities facing the public and private sector and more!

Session to look out for: AI Ready? Challenges from a Data-Centric Viewpoint

Meet with Carahsoft partners AWS, Google Cloud, Intel, and Primer.

5. Google Next ‘24  

April 9 – 11, Las Vegas, NV | In-Person Event  

Explore new horizons in AI at Google Cloud Next ’24 in Las Vegas, April 9–11 at Mandalay Bay Convention Center. Dive into AI use cases, learn how to stay ahead of cyberthreats with frontline intelligence and AI-powered security, and discover how to thrive in a new era of AI. Plus, see the latest in AI, productivity and collaboration, and security from Google Public Sector.  

Carahsoft will be a sponsor of Google Next ‘24 with a significant public sector presence and plans to host a reception as well. 

6. SC24  

November 17 – 22, 2024, Atlanta, GA | Hybrid Event  

Supercomputing (SC) is the longest-running and largest high-performance computing conference. SC is an unparalleled mix of thousands of scientists, engineers, researchers, educators, programmers and developers. Hosted by the Association for Computing Machinery and IEEE Computer Society, SC24 will be held in Atlanta, Georgia.   

Carahsoft is proud to attend SC24 for a fourth year as the master aggregator serving the public sector. Carahsoft will be hosting an extensive partner pavilion showcasing daily demos of our technology and solution partners, demonstrating use-cases in AI and HPC intended for higher-ed organizations, research institutions, government agencies, and more.  

Join us at our public sector reception for a night of networking with leading decision-makers and solution experts on November 20. 

7. Elastic Public Sector Summit ‘24  

March 13, 2024, Pentagon City, VA | In-Person Event  

Join top Federal program executives and IT leaders to learn firsthand how advances in data management, search and analytics capabilities are helping agencies turn data into mission value faster and more productively for citizens and Government employees. Learn how agencies are leveraging these capabilities for cybersecurity, operational resilience, and preparing for the new era of generative AI. FedScoop, Elastic and Carahsoft will co-host this summit in Pentagon City, Virginia.   

As a top-level sponsor of Elastic’s Public Sector Summit, Carahsoft will host a pavilion on the exhibit floor that features Elastic’s foremost technology partners for the hundreds of projected government attendees.

8. CDAO Government

September 17 – 19, 2024, Washington DC | In-Person Event  

This event brings together the latest technological advancements and practical examples to apply key data-driven strategies to solve challenges in Government and greater society. Join a unique mix of academia, industry and Government thought leaders at the forefront of research and explore real-world case studies to discover the value of data and analytics. Located in Washington, D.C., CDAO Government will be hosted by Corinium Intelligence.   

Carahsoft was proud to be a Premier Sponsor at the 2023 CDAO Government, alongside many of our vendor partners, including Alation, Alteryx, Cloudera, Coursera, Databricks, DataRobot, Elastic, HP, Immuta, Informatica, Primer AI, Progress|MarkLogic, Qlik, Snowflake and Tyler Technologies. 

Carahsoft looks forward to participating as a leading sponsor again at the 2024 CDAO Government.  

9. OODACON

November 5 – 6, Reston, VA | In-Person Event 

The world is at a transition point where technology is enabling rapid changes that can drive both positive and negative outcomes for humanity. It is also empowering many bad actors and posing new threats. The essence of OODAcon lies in its capacity to forge a robust community of leaders, experts and practitioners who serve as a collective force propelling us toward a brighter future.  

Join us at the Carahsoft Conference and Collaboration Center to discuss how disruptive technology can solve the most pressing issues of today. 

10. AWS Public Sector Summit 

June 26-27, 2024, Washington DC | In-Person Event 

Join Carahsoft and our partners for two days of innovation, collaboration and global representation. Designed to unite the global cloud computing community, AWS Summits educate customers about AWS products and services, providing the skills they need to build, deploy and operate their infrastructure and applications. 

As a top-level sponsor of AWS’ Public Sector Summit, Carahsoft will host a pavilion on the exhibit floor that features AWS’ foremost technology partners for the thousands of projected government attendees. 

Learn More About Previously Held Events

11. CDAO Advantage DoD24 Defense Data & AI Symposium  

Carahsoft was at CDAO’s inaugural Advantage DoD 2024: Defense Data & AI Symposium from February 20th to 22nd at the Washington Hilton in Washington, DC. The symposium provided a platform for over 1000 government officials, industry leaders, academia, and partners to converge and explore the latest advancements in data, analytics, and artificial intelligence in support of the U.S. Department of Defense mission. Carahsoft had a small tabletop partner pavilion, featuring our vendor partners Alteryx, DataRobot, Collibra, Elastic, Databricks, PTFS, EDB, Weights & Biases, and Clarifai.

Throughout the symposium, attendees from diverse backgrounds, including technical programmers, policymakers, and human resources professionals, gained valuable insights into emerging technologies and best practices for integrating data-driven strategies into organizational frameworks. Attendees also enjoyed two networking receptions hosted by Booz Allen Hamilton and C3.ai.

The agenda featured compelling speaking sessions including topics such as:

  1. Task Force Lima – The Way Forward (Goals and Progress)
  2. LLMs and Cybersecurity: Practical Examples and a Look Ahead
  3. DoD GenAI Use Cases and Acceptability Criteria

12. Using Generative AI & Machine Learning in the Enterprise  

This intimate one-day 500-person conference curated data science sessions to bring industry leaders and specialists face-to-face to educate one another on innovative solutions in generative AI, machine learning, predictive analytics, and best practices. Attendees saw a mix of use-cases, technical talks, and workshops, and walked away with actionable insights from those working on the frontlines of machine learning in the enterprise. Hosted by Data Science Salon, the event was held in Austin, Texas.

Carahsoft partners NVIDIA and John Snow Labs, two leading AI and machine learning solution providers, were in attendance. Carahsoft serves as the master aggregator for both NVIDIA and John Snow Labs, providing government agencies with solutions that fulfill mission needs from trustworthy technology and industry partners.

While the landscape of government events has always been in flux, the pace of change in 2024 feels downright dizzying. From navigating hybrid gatherings to crafting data-driven experiences, the pressure is on to connect, inform, and engage. This is where the power of AI steps in, not as a silver bullet, but as a toolbox brimming with innovative solutions. Carahsoft’s curated list of Top 12 AI for Government Events is just the starting point. So, do not let the future intimidate you; embrace it. Dive into the possibilities, explore these AI tools, and get ready to redefine what a government event can be. Your citizens—and your data—will thank you.  

To learn more or get involved in any of the above events please contact us at AITeam@carahsoft.com. For more information on Carahsoft and our industry leading AI technology partners’ events, visit our AI solutions portfolio and events page. 

Building a Foundation for an AI Future

It might seem like agencies are hesitant to adopt artificial intelligence. But really, it is quite the opposite. As Lori Wade, the Intelligence Community’s chief data officer, put it: “It is no longer just about the volume of data, it is about who can collect, access, exploit and gain actionable insight the fastest.” The realization is clear: Humans alone cannot keep pace. They need AI so they can make decisions based on the most relevant and most current information — and make those decisions in a timely manner. It is really as simple as that. Download the guide, “Building the Foundation for Your AI Future,” to pick up pointers on data management and AI, plus take a glimpse at the latest technology developments, tips for best practices and an explanation of the early value that AI is delivering to agencies across government. 

 

How to Revolutionize Government Translation with Generative AI

“In situations where accurate and timely translations are crucial, the shortage of qualified and vetted linguists poses significant challenges. Equally, non-linguist analysts are not equipped with secure, at-desk tools to translate foreign language material at the speed of relevance. For example, during the ongoing war in Ukraine, there has been a scarcity of linguists available to provide real-time updates on the ground. This shortage not only has affected the ability to gather vital intelligence but also hindered the timely dissemination of information to national security and defense agencies in the U.S. and abroad.”

Read more insights from Jesse Rosenbaum, Vice President of Business Development and National Security at Lilt. 

 

How Graph Databases Drive a Paradigm Shift in Data Platform Technology  

“Federal agencies are awash in data. With recent modernization efforts, including the wide-scale adoption of cloud platforms and applications, it is easier than ever for agencies to receive streaming data on everything from logistics to finances to cybersecurity. But that volume of data requires new solutions to process and analyze it. Older methods like SQL and NoSQL simply are not up to the task of analyzing all of the connections between the government’s many massive databases. That is where the new graph paradigm of data platform technology comes in.”

Read more insights from Michael Moore, Principal for Partner Solutions and Technology at Neo4j. 

 

How Agencies Can Upskill in AI to Achieve a Data Mesh Model  

“Data mesh behavior actually goes a step further. AI has become so easy to use, business owners can actually join in the development alongside the data scientists. Therein lies the challenge: Upskilling subject matter experts across an entire organization is a big lift. The way it works best is to start with a center of excellence, a small group of people who begin working with business owners across the enterprise, office by office. They can then prove the value and evangelize it, and then the agency can move to a hub-and-spoke model, where the data scientists are co-developing alongside business owners. As successes pile up, the data scientists can take a step back and allow frontline workers to do the development, governing the new data products on their own.”

Read more insights from Doug Bryan, Field Chief Data Officer at Dataiku. 

 

How Agencies Can Build a Data Foundation for Generative AI  

“Generative artificial intelligence tools are making waves in the technology world, most famously ChatGPT. Although the code of these tools is significant, their real power stems from the data they are trained on. Gathering and correctly formatting the data, then transforming it to yield accurate predictions, often represents the most challenging aspect of developing these tools. Federal agencies that want to start leveraging generative AI already have massive amounts of data on which to train the technology. But to successfully implement these tools, they need to ensure the quality of their data before trusting any decisions they might make.”

Read more insights from Nasheb Ismaily, Principal Solutions Engineer at Cloudera. 

 

How to Democratize Data as a Catalyst for Effective Decision-Making  

“One of the key best practices in the Office of Management and Budget’s Federal Data Strategy calls for using data to guide decision-making. But that is easier said than done when the ability to analyze the data, much less access it, is limited to an agency’s often overworked and understaffed data science specialists. But now that every line of federal business has their own data silo and a mandate to use that data to guide decisions, agencies need a way to democratize access to that data and empower every federal employee to become an analyst.”

Read more insights from Kevin Woo, Director of Federal Sales at Alteryx. 

 

Download the full Expert Edition for more insights from these artificial intelligence leaders, additional government interviews, historical perspectives and industry research. 

Generative AI, DevSecOps and Cybersecurity Highlighted for the Air Force and Space Force at DAFITC 2023

Thousands of Space Force and Air Force personnel and industry experts convened to discuss the most current and significant threats confronting global networks and national defense at the 2023 Department of the Air Force Information Technology and Cyberpower Education & Training (DAFITC) Event. Throughout the many educational sessions, thought leaders presented a myriad of topics such as artificial intelligence (AI), DevSecOps solutions and cybersecurity strategies to collaborate on the advancement of public safety.

Leveraging Generative AI in the DoD

At the event, experts outlined three distinct use cases for simplified generative artificial intelligence in military training.

  • Text to Text: This type of generative AI takes input text and outputs written content in a different format. Text to Text is associated with tasks such as content creation, summarization, evaluation, prediction and coding.
  • Text to Audio: Text to Audio AI can enhance accessibility and inclusion by creating audio content from written materials to support e-learning and education and facilitate language translation.
  • Text to Video: Text to Video AI is primarily geared toward generating video content from a script to aid the military with language learning and training initiatives.

Dr. Lynne Graves, representative of the Department of the Air Force Chief Data and Artificial Intelligence Office (CDAO), provided attendees with a brief timeline of how the USAF will fully adopt artificial intelligence. The overarching aim for AI integration is to make it an integral part of everyday training, exercises and operations within the Department of Defense (DoD).

  • In FY23, the DoD is focusing on pipeline assessment. Using red teaming, in which ethical hackers run simulations to identify weaknesses in the system, internal military personnel work to improve their infrastructure and mitigate vulnerabilities in the different stages of the pipeline.
  • In FY24, the emphasis will be on the Red Force Migration policy, which involves developing, funding and scaling the necessary strategies.
  • In FY25, the goal is for the department to become AI-ready. This entails preparing for AI adoption at all agency levels, establishing a standard model card that explains the context for the model’s intended use and other important information (a minimal sketch of such a card follows this list), creating a comprehensive repository of data and implementing tools for extensive testing, evaluation and verification.
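As referenced above, here is a minimal sketch of the metadata a standard model card might capture; the field names and values are illustrative, not an official DoD or CDAO schema:

```python
# An illustrative model card as a plain data structure. Every value below
# is hypothetical and serves only to show the kind of context captured.
model_card = {
    "model_name": "readiness-forecast-v2",
    "intended_use": "Forecasting maintenance backlogs for training exercises",
    "out_of_scope_uses": ["targeting decisions", "personnel evaluations"],
    "training_data": "FY20-FY23 maintenance logs, de-identified",
    "evaluation": {"metric": "mean absolute error", "holdout": "FY23 Q4"},
    "known_limitations": [
        "degrades on equipment types absent from training data",
    ],
    "point_of_contact": "owning-program-office@example.mil",
}
```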

USSF Supra Coders Utilize DevSecOps for Innovation

The current operations of United States Space Force (USSF) Supra Coders involve a range of activities that combine modeling, simulation and expertise in replicating threats. These operations are conducted globally, and currently include orbit-related activities, replication of DA ASAT (Direct Ascent Anti-Satellite) capabilities and the reproduction of adversarial Space Domain Awareness (SDA).

The USSF Supra Coders have encountered limitations with software solutions, including restrictions tied to standalone systems, licensing structures with associated costs and limited adaptability to meet the specific needs of aggressors and USSF requirements. DevSecOps presents a multifaceted strategy for mitigating the identified capability gaps noted by the USSF Supra Coders. It can help create more effective and efficient software solutions through seamless integration of security protocols, streamlining system integration processes, optimizing costs and enhancing customizability.

Cybersecurity Within the Space Force

Cybersecurity is a shared responsibility across the DoD but is especially relevant for the U.S. Space Force. As a newly established branch of the military, the Space Force is still developing its cyber strategies. Because its capabilities are reached entirely through virtual links, the USSF must prioritize secure practices from the outset and make informed decisions to protect its networks and data.

Currently, the Space Force is engaged in the initial phases of pre-mission analysis for its cyber component, which serves as a critical element for establishing and maintaining infrastructure through the integration of command and control (C2). These cyber capabilities encounter a series of complex challenges, which necessitate a multifaceted approach including the following solutions:

  • Enforcing Consistent Cybersecurity Compliance
  • Developing Secure Methods to Safely Retire Old Technology
  • Enhancing Cryptography Visibility
  • Understanding Security Certificate Complexity
  • Identifying Vulnerabilities and Mitigating Unknown Cyber Risks

While the Space Force faces a uniquely heightened imperative to bolster its cybersecurity capabilities, given its inherent reliance on information technology and networks in the space domain, the entire community must collaborate effectively to achieve military leaders’ targeted cybersecurity capabilities by the 2027 goal.

The integration of generative AI in military training, innovations through DevSecOps by the USSF Supra Coders and cybersecurity initiatives of the Space Force collectively highlight the evolving landscape of advanced technologies within the Department of Defense. Technology providers can come alongside the military to support these efforts with new solutions that enhance the DoD’s capabilities and security.

 

Visit Carahsoft’s Department of Defense market and DevSecOps vertical solutions portfolios to learn more about DAFITC 2023 and how Carahsoft can support your organization in these critical areas. 

*The information contained in this blog has been written based off the thought-leadership discussions presented by speakers at DAFITC 2023.*

Security Protections to Maximize the Utility of Generative AI

Since the introduction of ChatGPT, artificial intelligence (AI) has expanded exponentially. While machine learning has introduced many merits, it also leads to security concerns that can be alleviated through several key strategies.

The Benefits and Risks of Generative AI

The primary focus of AI is to use data and computations to aid in decision-making. Generative AI can create text responses, videos, images, code, 3D products and more. AI as a Service, the cloud-based offering of AI, helps experts get work done more efficiently by advancing infrastructure at a quicker pace. At the same time, AI is commonly used by the general public as a toy, since its responses can sometimes be entertaining. The comfort users have with AI, combined with the wide range of inputs, introduces risks that can proliferate exponentially.

There are several key concerns for Government agencies when utilizing generative AI:

  • Copyright Complications – AI content comes from many different sources, and that content may be copyrighted. It is difficult to know who owns the words, images or source code that is generated, as the AI’s algorithm is based on derivative information. The data could be open sourced or proprietary information. To combat this, users should modify rather than copy any information gained from AI.
  • Abuse by Attackers – Bad actors can utilize AI to execute more effective and efficient attacks. While AI is not yet self-sufficient, inexperienced attackers can use AI to make phishing attacks more convincing, personal and effective.
  • Sensitive Data Loss – Users have, either intentionally or unintentionally, input sensitive data or confidential information into generative AI systems. It is easy to disclose sensitive information in AI prompts, as users may dissociate the risk from the non-human machine.

The many capabilities of AI entice employees to utilize it to support their daily tasks. However, when this includes introducing sensitive information, such as meeting audio for transcription or unique program code, security concerns ensue. Once data is in the AI’s system, it is nearly impossible to have it removed.

To protect themselves from security and copyright issues with AI, several large communications companies and school districts have blocked ChatGPT. However, this still carries risk. Employees or students will find ways around security walls to use AI. Instead of blocking apps, organizations should create a specific policy around generative AI that is communicated to everyone in the company.

Combatting AI Risks

One such policy approach is utilizing a Data Loss Prevention (DLP) solution. The DLP’s purpose is to detect and prevent unauthorized data transmission, and its capabilities can be applied to AI tools to mitigate these concerns. Its security parameters work through three main steps:

  1. Discover – DLPs can detect where data is stored and report on its location to ensure proper storage and accessibility based on its classification.
  2. Monitor – Agencies can oversee data usage to verify that it is being used appropriately.
  3. Protect – By educating employees and enforcing data-loss policies, DLPs can deter hackers from leaking or stealing data.

DLP endpoints can reside on laptops or desktops and provide full security coverage by monitoring data uploads, blocking data copied to removable media, blocking print and fax options and covering cloud-sync applications. For maximum security, agencies should utilize DLPs that cover all types of data storage—data at rest, data in use and data in motion. A unified policy based on detection and response to data leaks will prevent users from misapplying AI and provide balance for secure operation.
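As a concrete illustration of the Monitor step, the sketch below screens outbound text for sensitive patterns before it reaches a generative AI prompt; commercial DLP products use far richer classifiers and policies, and these patterns are illustrative only:

```python
# A toy DLP-style gate: detect sensitive patterns in a prompt and block it
# before it leaves the endpoint. Patterns and policy are illustrative.
import re

RULES = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "cui_marking": re.compile(r"\bCUI\b|\bCONTROLLED\b", re.IGNORECASE),
}

def violations(text: str) -> list[str]:
    """Return the names of every rule the text trips."""
    return [name for name, rx in RULES.items() if rx.search(text)]

def guard_prompt(prompt: str) -> str:
    hits = violations(prompt)
    if hits:
        # Block (or redact) instead of forwarding to the AI service,
        # and log the event for the agency's DLP review queue.
        raise ValueError(f"Prompt blocked by DLP policy: {hits}")
    return prompt

guard_prompt("Summarize these meeting notes for me.")   # passes
# guard_prompt("Transcribe: SSN 123-45-6789 ...")       # raises ValueError
```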

While agencies want to stay competitive and benefit from AI, they must also recognize and take steps to reduce the risks involved. Through educating users about the pros and cons of AI and implementing a DLP to prevent accidental data leakages, agencies can achieve their intended results.

 

Broadcom is a global infrastructure technology leader that aims to enhance excellence in data innovation and collaboration. To learn more about data protection considerations for generative AI, view Broadcom’s webinar on security and AI.

Empowering Public Sector Technical Teams With Generative AI in a Secure Collaboration Platform

Recent advances in generative artificial intelligence (AI) – with its seemingly limitless potential use cases – have captured the public imagination. And they’re just as compelling to government agencies and the military. Organizations across the public and private sectors are racing to identify the most effective applications of the technology and to implement robust and secure solutions enabled by generative AI.

For instance, generative AI can be a powerful assistant to technical and operational teams such as those involved in application development and incident response. The technology can help teams gain real-time insights, bring to light solutions to unexpected problems, and help make fast, data-driven decisions.

It’s with those advantages in mind that Mattermost partnered with Ask Sage to integrate the Ask Sage GPT solution with the Mattermost secure collaboration platform. The result is secure, AI-enhanced collaboration for technical teams in the U.S. public sector.

Real-time Insights, Natural-language Format

Mattermost is a secure, workflow-centric collaboration platform for technical and operational teams that need to meet nation-state-level security and trust requirements. Available self-hosted or in the cloud, Mattermost integrates team messaging, audio and screen share, technical tools, workflow automation, and project management in an open-source solution.


Ask Sage is a GPT-powered platform provider that specializes in enabling secure access to generative AI capabilities for both government and commercial teams. With a wide range of use cases, including summarization, coding, code review, code improvement, RFP writing, response and evaluation, and report writing, Ask Sage is built on cutting-edge AI technologies such as Azure OpenAI GPT, Cohere, Google Bard and various open-source LLMs. The solution can ingest custom datasets, tap into APIs and connect to data lakes for real-time data and insights in a natural-language format.

Ask Sage can quickly and automatically process large amounts of structured and unstructured data – including government-related data such as laws, Federal Acquisition Regulation (FAR), Defense Federal Acquisition Regulation Supplement (DFARS), DoD Controlled Unclassified Information (CUI), and DoD policy and governance content. Outputs include summaries, translations, sentiment analysis, deep insights, and coding.

Integration of Ask Sage with Mattermost provides technical teams with secure, real-time access to generative AI to enhance collaboration, operational productivity, and decision quality. Government and contractor teams can now securely leverage the power of OpenAI and collaborate within a single, seamless interface.

Accelerating Processes and Improving Outcomes

With this strategic integration, Mattermost equips technical teams to leverage generative AI to accelerate processes, increase output, and improve outcomes. It’s ideal for government teams that write code, manage RFPs, analyze large data sets, or develop and translate intelligence reports.

Ask Sage offers rapid data analysis and summarization to help teams gain new insights as circumstances evolve. Team members spend less time and effort on manual research and analysis, giving them more time to focus on higher-priority decision-making and strategic tasks.

Users can improve the accuracy and depth of Ask Sage results by uploading relevant data, which is labeled by classification level, encrypted, and separated from the OpenAI models. Once uploaded, the data can be accessed only by authorized users through granular access controls within Mattermost.

Collaboration Purpose-built for Public Sector

Mattermost is well-suited to technical public sector teams, because it’s available as an on-prem, self-hosted deployment. That means teams can collaborate securely with lower risk of compromise. It’s also an open-source solution, so organizations can tailor security settings to protect information at impact levels up to IL6 for DoD Secret data. That’s protection that general-purpose, cloud-based productivity and instant-message tools can’t match.

The platform allows teams to create as many topic- or project-specific communication channels as they need. These channels allow users to centralize conversations, data, and tools – including Ask Sage – in the right context. That keeps team members focused and productive, without the need to continually context-switch.

Another useful Mattermost feature is built-in, customizable playbooks – essentially digital checklists – that help team members consistently take the right actions at the right times. Mattermost playbooks can now include generative AI to further automate and accelerate project workflows and incident response.

Leveraging Mattermost’s secure collaboration platform combined with Ask Sage’s generative AI capabilities can revolutionize the way government teams work together, manage technical projects, and respond to mission-critical situations. As interest in OpenAI GPT and similar platforms grows, this strategic integration is a gamechanger in enabling U.S. government and military organizations to securely benefit from generative AI.

Speak with a member of our team today and learn more about Mattermost at www.mattermost.com.

5 Essential Applications of AI Technology in Education

Growing up, students sitting through math class often heard teachers say that they should not rely on a calculator to do their math for them; after all, they would never have a calculator in their pockets. Today, that statement could not be farther from the truth. Now, many students have an entire computer in their pockets, with a calculator just the click of a button away. Artificial intelligence (AI) has grown exponentially within the last few decades, and students and educators alike must embrace the latest in AI and education technology to keep pace.

In all learning environments, students and teachers rely on modern technologies to make their experiences as informative, productive and efficient as possible. In recent years, hybrid learning and collaborative digital spaces became essential components of education for both K-12 and higher education organizations. With this development, education technology has evolved and expanded to include new and more advanced AI systems inside and outside the classroom.

The needs of students are always changing, and educators must constantly adapt to new ways of teaching and learn the technologies and platforms that can assist with their daily lessons. With the implementation of AI and the numerous benefits of digital learning, all students and instructors can achieve a more holistic and innovative education. These five topics demonstrate how AI is an essential tool in the learning process for various types of learners across K-12 and higher education.

  • Communication

Innovative trends in education technology have made it possible for students and staff to stay connected, whether through remote online learning or collaborative learning in the classroom. AI tools like SMS bots, predictive technology and ChatGPT can assist students in tasks such as navigating their school’s learning platforms, researching and preparing information for assignments and getting real-time answers to their questions. AI can also help teachers and professors orchestrate discussion points between students and guide next steps within small group collaborative projects.[1]

  • Automation

For teachers, implementing AI can help automate repetitive daily tasks like grading tests and quizzes, and catching minor mistakes within written essays. This way they have more freedom and time to focus on in-depth feedback, creating comprehensive lesson plans and spending one-on-one time with their students. Additionally, AI tools can give students instant feedback on their work, allowing them to be more independent in identifying inaccuracies and recognizing successful projects.[2]

  • Immersive Learning

Augmented reality (AR) and virtual reality (VR) are becoming increasingly popular in students’ everyday lives, so using these technologies as learning tools is a familiar and compelling way for them to gain valuable experiences in the classroom. Immersive technologies can simulate real-world scenarios for students to gain hands-on experience with low risk, like medical simulations and technical experiments. They can also allow students to break the barrier between their physical space and complex concepts, like observing the planets up close or enlarging and examining something microscopic.[3] Not only do AR and VR create expansive opportunities for students to view and understand concepts in new and captivating ways, but they also create an additional, interactive and collaborative avenue of learning for students who may not be as responsive to traditional tools like textbooks and study guides.[4]

  • Data-Driven Results

Throughout a student’s education, data is continually collected to better understand and predict their developing needs and most effective learning strategies. AI technologies can quickly and automatically analyze and report on this data, allowing teachers and professors to evaluate trends in an individual student’s or an entire class’s performance. Empowered with this knowledge, educators can tailor their lesson plans and take a more proactive approach to supporting students’ needs, ultimately increasing academic improvement for all.[5]

  • Personalized Learning

Students’ learning styles can vary depending on many factors. For example, some students learn best through more visual and interactive experiences, while others may learn best through memorization and flashcards. Analyzing data collected by AI can help teachers be more informed and prepared for different kinds of learners. By applying the insights gathered from AI algorithms, educators can create personalized tracks for individual students, including adjusting the types of content, accommodating their comfort level, tailoring to their pace of learning and gauging their comprehension of learning objectives.[6] Additionally, AI technologies can help teachers plan, schedule and produce suggested lesson ideas more efficiently so they can target instruction and reduce the time it takes to create activities that best support each student.[5]

As AI becomes more common in education, maintaining academic integrity and validity within assignments of any kind will remain top of mind for educators. While earlier AI systems were designed to help students achieve academic success, newer AI systems are intended to empower teachers to optimize the use of artificial intelligence for students and encourage positive, ethical engagement with AI technologies.[7] Fostering trust among educators to cultivate the most prosperous learning environment through the implementation of AI can further personal, social and educational growth for all students.

Explore Carahsoft’s education technology solutions to learn how your organization can work together with our top innovative EdTech vendors to bridge the digital divide and meet the demands of modern education.

 

Resources:

[1] Office of Ed Tech. “AI and the Future of Teaching and Learning: New Interactions, New Choices.” Medium, https://medium.com/ai-and-the-future-of-teaching-and-learning/ai-and-the-future-of-teaching-and-learning-new-interactions-new-choices-c726bcf03012

[2] Shonubi, Olufemi. “Council Post: AI in the Classroom: Pros, Cons and the Role of Edtech Companies.” Forbes, https://www.forbes.com/sites/theyec/2023/02/21/ai-in-the-classroom-pros-cons-and-the-role-of-edtech-companies/?sh=2cb4a227feb4

[3] Dick, Ellysse. “The Promise of Immersive Learning: Augmented and Virtual Reality’s Potential in Education.” Information Technology and Innovation Foundation. https://itif.org/publications/2021/08/30/promise-immersive-learning-augmented-and-virtual-reality-potential/

[4] Dani, Vishal. “How Augmented Reality Creates Interactive and Engaging Classrooms.” Kitaboo, https://kitaboo.com/augmented-reality-creates-interactive-and-engaging-classrooms/

[5] Gururaj, Tejasri. “10 Examples of Artificial Intelligence Improving Education.” Interesting Engineering, https://interestingengineering.com/innovation/examples-how-artificial-intelligence-improving-education

[6] Dani, Vishal. “9 Trends in Education Technology That Will Have a Major Impact.” Kitaboo, https://kitaboo.com/trends-in-education-technology/

[7] Office of Ed Tech. “AI and the Future of Teaching and Learning: Engaging Educators.” Medium, https://medium.com/ai-and-the-future-of-teaching-and-learning/ai-and-the-future-of-teaching-and-learning-engaging-educators-141e90c5e29f