Generative AI: Improving Efficiency for SLED Agencies

In their personal lives, users now engage with generative AI like a personal assistant, granting it access to their calendars and assigning it tasks such as making dinner reservations. At work, employees turn to AI to expedite difficult or repetitive tasks. By educating employees on the security ramifications of generative AI, and by implementing it properly, State and Local Government and Education Market (SLED) decision makers can accelerate and improve their agencies' day-to-day processes.

Updated Security Parameters

When it comes to sensitive data, agencies and individuals should always remain vigilant. With generative AI, agencies need to consider who has access to that data and which adversaries might exploit it.


Employees should be trained to spot red flags and use AI safely. With the rise of deepfakes, such as voice masking or impersonation, employees need to be able to recognize suspicious phone calls and videos. With proper training to detect and report these instances, employees can help prevent hacking attempts. Because it is difficult to stop employees from using generative AI entirely, even in scenarios where sensitive data is present, agencies should switch to sanctioned vendors that provide fully tracked logs. It is critical to prevent sensitive information from passing into public AI tools, where it can be exposed to others.
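One way to keep sensitive information from passing into public AI tools is to screen prompts before they leave the agency. The sketch below is a minimal, hypothetical pre-submission filter; the patterns and placeholder labels are illustrative only, not any vendor's actual rules.

```python
import re

# Hypothetical patterns for obvious sensitive data; a real deployment
# would use a vetted, far more complete rule set.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact_prompt(text: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with labeled placeholders and
    return the cleaned text plus the labels that triggered."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED-{label}]", text)
    return text, hits

cleaned, flags = redact_prompt("Contact jane.doe@agency.gov, SSN 123-45-6789.")
```

The triggered labels (`flags`) could also feed the tracked logs a sanctioned vendor provides, so reviewers see which policies fired without seeing the redacted data itself.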

By design, AI is a black box. While agencies and users cannot know what happens between input and output, they should only trust generative AI packages with dependable service hosts. Agencies, especially SLED agencies that handle sensitive information, need guarantees that their data will remain contained by reliable parent companies. By negotiating through contract vehicles, agencies can maintain visibility into the flow of data, learning whether their information is retained and for how long.

Saving Time with Generative AI

Some of the first generative AI models were built for machine translation services such as Google Translate. Many services, such as Zoom, offer generative AI plugins that transcribe and translate speech in real time for the appropriate audience. These models initially produced largely literal translations, but intent and context are critical in communication. Users often turn to third-party generative AI models to translate emails or web pages, trusting their ability to understand and mirror context and intent more than the built-in translation features many legacy applications offer.

Generative AI can help with drafting emails, broadcasting information, meeting deadlines and responding to agents, ultimately expediting processes. This can be especially helpful for overworked translators: while generative AI completes the initial translations, workers can focus on reviewing them, expediting and perfecting the process. There will always be a need for human involvement in promotion, proofreading and contextual understanding, but generative AI can speed up communication.

Generative AI can reduce the number of steps users take. By leading users from step A to step C, bypassing the difficult or time-consuming step B, generative AI keeps users on track. And for models trained on a SLED agency’s own data, users can always reference internal documents if questions arise. This reduces busy work and the time spent finding information. Generative AI can also expedite the synthesis of search results. In the past, search engines could only locate documents for agencies. Now, agencies searching SLED records can not only find the document itself but also find the information within it and analyze that information before returning it to the user.
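The shift from locating documents to surfacing the information inside them can be illustrated with a minimal sketch. The record names and contents below are invented, and a real system would use a trained retrieval model rather than simple keyword matching.

```python
# Minimal sketch: return matching passages from within records,
# not just the records themselves. Data is illustrative only.
def search_passages(documents: dict[str, str], query: str) -> list[tuple[str, str]]:
    """Return (document, sentence) pairs whose sentence mentions the query."""
    results = []
    for name, text in documents.items():
        for sentence in text.split(". "):
            if query.lower() in sentence.lower():
                results.append((name, sentence.strip()))
    return results

records = {
    "permit-2023.txt": "Permit issued on 2023-04-01. Fee paid in full. Renewal due 2024-04-01.",
    "permit-2022.txt": "Permit expired. No renewal filed.",
}
hits = search_passages(records, "renewal")
```

A generative model would go one step further and synthesize the returned passages into a single answer, which is the analysis step described above.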

By accelerating the day-to-day tasks of employees, generative AI frees up creative minds to complete more vital, thorough and intricate projects, improving utility.

AI has been integral to Broadcom’s product solutions in user and enterprise IT. When properly implemented, generative AI can enhance technology, cybersecurity, analytics and productivity. To learn more about how Broadcom can help implement secure generative AI in SLED spaces, view Broadcom’s SLED-focused cybersecurity solutions.

Security Protections to Maximize the Utility of Generative AI

Since the introduction of ChatGPT, artificial intelligence (AI) has expanded exponentially. While machine learning has brought many benefits, it also raises security concerns that can be alleviated through several key strategies.

The Benefits and Risks of Generative AI

The primary focus of AI is to use data and computation to aid decision-making. Generative AI can create text responses, videos, images, code, 3D products and more. AI as a Service, the cloud-based delivery of AI capabilities, helps experts get work done more efficiently by advancing infrastructure at a quicker pace. At the same time, AI is commonly used by the general public as a toy, since its responses can be entertaining. Users’ comfort with AI, combined with the wide range of possible inputs, introduces risks that can proliferate exponentially.

There are several key concerns for Government agencies when utilizing generative AI:

  • Copyright Complications – AI-generated content draws on many different sources, some of which may be copyrighted. It is difficult to know who owns the words, images or source code that is generated, as the AI’s output is derived from existing information, which may be open source or proprietary. To mitigate this, users should modify rather than copy any information obtained from AI.
  • Abuse by Attackers – Bad actors can utilize AI to execute more effective and efficient attacks. While AI is not yet self-sufficient, inexperienced attackers can use AI to make phishing attacks more convincing, personal and effective.
  • Sensitive Data Loss – Users have, intentionally or unintentionally, input sensitive data or confidential information into generative AI systems. Users find it easier to disclose sensitive information in AI prompts because they dissociate the risk from a non-human machine.

The many capabilities of AI entice employees to use it in their daily tasks. However, when this involves introducing sensitive information, such as meeting audio for transcription or proprietary source code, security concerns follow. Once data is in an AI system, it is nearly impossible to have it removed.

To protect themselves from security and copyright issues with AI, several large communications companies and school districts have blocked ChatGPT. However, this still carries risk: employees or students will find ways around security walls to use AI. Instead of blocking apps, organizations should create a specific generative AI policy and communicate it to everyone in the organization.

Combatting AI Risks

One such approach is to deploy a Data Loss Prevention (DLP) solution. A DLP detects and prevents unauthorized data transmission, and its capabilities can be applied to AI tools to mitigate these concerns. Its security parameters work through three main steps:

  1. Discover – DLPs can detect where data is stored and report on its location to ensure proper storage and accessibility based on its classification.
  2. Monitor – Agencies can oversee data usage to verify that it is being used appropriately.
  3. Protect – By educating employees and enforcing data-loss policies, DLPs help stop data from being leaked or stolen.

DLP endpoints can reside on laptops or desktops and provide full security coverage by monitoring data uploads, blocking data copied to removable media, blocking print and fax options and covering cloud-sync applications. For maximum security, agencies should choose DLPs that cover all states of data: data at rest, data in use and data in motion. A unified policy based on detecting and responding to data leaks will prevent users from misapplying AI while balancing usability with secure operation.
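The discover/monitor/protect flow can be sketched as a minimal outbound check. The policy names, patterns and channel labels below are illustrative only, not any actual DLP product's rules.

```python
import re

# Illustrative policies a DLP rule set might contain; real products
# ship far richer classifiers than these two regexes.
POLICIES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential-marking": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_outbound(text: str, channel: str) -> dict:
    """Monitor one outbound transmission and decide whether to block it."""
    violations = [name for name, rx in POLICIES.items() if rx.search(text)]
    return {
        "channel": channel,        # e.g. "ai-prompt", "usb", "print"
        "violations": violations,  # which policies matched (discover/monitor)
        "action": "block" if violations else "allow",  # protect
    }

decision = check_outbound("CONFIDENTIAL: budget draft attached", "ai-prompt")
```

The returned record doubles as an audit entry, which is how a unified detect-and-respond policy keeps visibility over every channel, including AI prompts.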

While agencies want to stay competitive and benefit from AI, they must also recognize and reduce the risks involved. By educating users about the pros and cons of AI and implementing a DLP to prevent accidental data leakage, agencies can achieve their intended results.


Broadcom is a global infrastructure technology leader that aims to enhance excellence in data innovation and collaboration. To learn more about data protection considerations for generative AI, view Broadcom’s webinar on security and AI.