Generative AI: Improving Efficiency for SLED Agencies

Many users now engage with generative AI as a personal assistant, granting it access to their calendars and assigning it tasks such as making dinner reservations. On the professional level, employees turn to AI to expedite difficult or repetitive tasks. By educating employees on the security ramifications of generative AI and properly implementing it in their agencies, State and Local Government and Education (SLED) decision makers can accelerate and improve day-to-day processes.

Updated Security Parameters

When it comes to sensitive data, agencies and individuals should always remain vigilant. With generative AI, agencies need to consider who has access to that information and which adversaries may exploit it.

Employees should be trained to spot red flags and use AI safely. With the rise of deepfakes, such as voice masking and impersonation, employees need to be able to recognize suspicious phone calls and videos. With proper training to detect and report these instances, employees can help prevent hacking attempts. It is difficult to stop employees from using generative AI altogether, even in scenarios where sensitive data is present. Instead, agencies should switch to sanctioned vendors that provide fully tracked logs. It is critical to keep sensitive information from passing into public AI services, where it may be shared with others.
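To illustrate what fully tracked logs can look like in practice, below is a minimal Python sketch of an audit wrapper that records each prompt before it leaves the agency. The function name, log format and injected send_fn are hypothetical assumptions for illustration, not any specific vendor's API.

    import json
    import logging
    from datetime import datetime, timezone

    # Hypothetical audit wrapper: log every prompt sent to a sanctioned
    # generative AI vendor, so the agency keeps a record of what data
    # left its boundary and when.
    logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

    def query_sanctioned_model(prompt: str, user_id: str, send_fn) -> str:
        """Forward a prompt via send_fn (the vendor call), logging both sides."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user_id,
            "prompt": prompt,
        }
        logging.info(json.dumps(entry))    # outbound prompt recorded first
        response = send_fn(prompt)         # the actual vendor call is injected
        logging.info(json.dumps({**entry, "response_chars": len(response)}))
        return response

Because the vendor call is passed in as send_fn, the same audit layer can sit in front of any sanctioned service without changes.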

By design, AI is a black box. While agencies and users cannot know what happens between input and output, they should only trust generative AI offerings from dependable service hosts. Agencies, especially SLED agencies that handle sensitive information, need assurance that their data will remain contained by reliable providers. By negotiating through contract vehicles, agencies can maintain visibility into the flow of their data, including whether their information is retained and for how long.

Saving Time with Generative AI

Some of the first generative AI models were built for machine translation services such as Google Translate. Many services, such as Zoom, employ generative AI as plugins that transcribe and translate language in real time for the appropriate audience. These models initially generated very literal translations; however, intent and context are critical in communication. Users often turn to third-party generative AI models to translate emails or web pages, trusting their ability to understand and mirror context and intent more than the built-in translation features of many legacy applications.

Generative AI can help with drafting emails, broadcasting information, meeting deadlines and responding to inquiries, ultimately expediting processes. This can be especially helpful for overworked translators: while generative AI completes the initial translations, staff can focus on reviewing them, speeding up and refining the process. There will always be a need for human involvement for promotion, proofreading and comprehension, but generative AI can accelerate communication.

Generative AI can also reduce the number of steps users take. By leading users from step A to step C, bypassing a difficult or time-consuming step B, generative AI keeps users on track. For models trained on a SLED agency's own data, users can always reference internal documents if questions arise. This scales back busywork and reduces time spent finding information. Generative AI can likewise expedite the synthesis of search results. In the past, search engines could only locate documents. Now, agencies going through SLED records can not only find the document itself but also find the information within it and analyze that information before returning it to the user.
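As a rough illustration of finding information within documents rather than just finding the documents themselves, the Python sketch below scores each paragraph of a record set by term overlap with the query and returns the best-matching passage. A production system would use embeddings and a generative model to synthesize the answer; simple term overlap is a stand-in assumption here, and the sample records are invented.

    # Minimal sketch: return the most relevant passage inside a record set,
    # not just the matching file. Plain term overlap stands in for the
    # embedding search a real deployment would use.
    def best_passage(records: dict[str, str], query: str) -> tuple[str, str]:
        terms = set(query.lower().split())
        best = ("", "", 0)
        for doc_id, text in records.items():
            for para in text.split("\n\n"):    # score each paragraph
                overlap = len(terms & set(para.lower().split()))
                if overlap > best[2]:
                    best = (doc_id, para, overlap)
        return best[0], best[1]

    records = {
        "permit-2024-001": "Application received in March.\n\n"
                           "Fee schedule: $120 for residential permits.",
        "permit-2024-002": "Renewal request.\n\nInspection scheduled for June.",
    }
    print(best_passage(records, "residential permit fee"))
    # ('permit-2024-001', 'Fee schedule: $120 for residential permits.')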

By accelerating employees' day-to-day tasks, generative AI frees up creative minds to complete more vital, thorough and intricate projects, improving overall utility.

AI has been integral to Broadcom's product solutions in user and enterprise IT. When properly implemented, generative AI can enhance technology, cybersecurity, analytics and productivity. To learn more about how Broadcom can help implement secure generative AI in SLED spaces, view Broadcom's SLED-focused cybersecurity solutions.

Security Protections to Maximize the Utility of Generative AI

Since the introduction of ChatGPT, artificial intelligence (AI) has expanded exponentially. While machine learning offers many benefits, it also raises security concerns that can be alleviated through several key strategies.

The Benefits and Risks of Generative AI

The primary focus of AI is to use data and computation to aid in decision-making. Generative AI can create text responses, videos, images, code, 3D products and more. AI as a Service, a cloud-based offering of AI, helps experts get work done more efficiently by advancing infrastructure at a quicker pace. At the same time, AI is also commonly used by the general public as a toy, since its responses can be entertaining. Users' comfort with AI and the wide range of inputs they provide introduce risks, and those risks can proliferate exponentially.

There are several key concerns for Government agencies when utilizing generative AI:

  • Copyright Complications – AI-generated content draws on many different sources, some of which may be copyrighted. It is difficult to know who owns the words, images or source code that is generated, as the AI's output is based on derivative information, which could be open source or proprietary. To mitigate this, users should modify rather than copy any information obtained from AI.
  • Abuse by Attackers – Bad actors can utilize AI to execute more effective and efficient attacks. While AI is not yet self-sufficient, inexperienced attackers can use AI to make phishing attacks more convincing, personal and effective.
  • Sensitive Data Loss – Users have, either intentionally or unintentionally, input sensitive data or confidential information into generative AI systems. Users find it easier to disclose sensitive information in AI prompts than to a person, as they may dissociate the risk from a non-human machine.

The many capabilities of AI entice employees to use it in support of their daily tasks. However, when this involves introducing sensitive information, such as meeting audio for transcription or proprietary program code, security concerns ensue. Once data is in an AI system, it is nearly impossible to have it removed.

To protect themselves from security and copyright issues with AI, several large communications companies and school districts have blocked ChatGPT. However, this still carries risk: employees and students will find ways around security walls to use AI. Instead of blocking apps, organizations should create a specific policy around generative AI and communicate it to everyone in the organization.

Combatting AI Risks

One such policy approach is deploying a Data Loss Prevention (DLP) solution. A DLP's purpose is to detect and prevent unauthorized data transmission, and its capabilities can be applied to AI tools to mitigate these concerns. It works through three main steps:

  1. Discover – DLPs can detect where data is stored and report on its location to ensure proper storage and accessibility based on its classification.
  2. Monitor – Agencies can oversee data usage to verify that it is being used appropriately.
  3. Protect – By educating employees and enforcing data-loss policies, DLPs can deter hackers from leaking or stealing data.

DLP endpoints can reside on laptops or desktops and provide full security coverage by monitoring data uploads, blocking data copied to removable media, blocking print and fax options and covering cloud-sync applications. For maximum security, agencies should utilize DLP solutions that cover all types of data storage: data at rest, data in use and data in motion. A unified policy based on detecting and responding to data leaks will keep users from misapplying AI while maintaining secure operation.
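As a simplified illustration of the Protect step applied to generative AI, the Python sketch below screens outbound prompts for patterns that resemble sensitive data before they reach a public model. The two regular expressions and the block-on-match policy are illustrative assumptions, far simpler than the detectors a real DLP product ships with.

    import re

    # Illustrative DLP-style detectors; real products ship far richer ones.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
        """Return (allowed, matched_labels) for an outbound AI prompt."""
        hits = [label for label, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
        return (not hits, hits)

    allowed, hits = screen_prompt("Summarize case 123-45-6789 for the hearing.")
    print(allowed, hits)   # False ['ssn'] -- blocked before leaving the agency

In a deployment, a blocked prompt would feed the monitoring and education steps above: the request is held, the user is notified and the event is logged for review.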

While agencies want to stay competitive and benefit from AI, they must also recognize and take steps to reduce the risks involved. By educating users about the pros and cons of AI and implementing a DLP solution to prevent accidental data leaks, agencies can achieve their intended results.

 

Broadcom is a global infrastructure technology leader that aims to enhance excellence in data innovation and collaboration. To learn more about data protection considerations for generative AI, view Broadcom’s webinar on security and AI.

Best of What’s New In Data, Identity and Privacy

Last year, state lawmakers across the nation introduced hundreds of privacy bills. One of the most prominent pieces of legislation — the California Consumer Privacy Act (CCPA) — took effect in January, marking the first of potentially many state-level attempts to emulate the European Union’s groundbreaking General Data Protection Regulation (GDPR), which gave EU residents more control over how organizations use their personal information. All of this points to a dramatic shift in how state and local government agencies must manage and protect data. Fortunately, technology tools available to help the public sector address privacy challenges are growing smarter and more sophisticated. Learn the latest insights from industry thought leaders in Data, Identity and Privacy in Carahsoft’s Innovation in Government® report.

Protecting the Data That Matters Most

“Organizations should avoid the temptation to skip requirements and get things out there quickly. This crisis forced organizations to establish work-from-home policies overnight. Work-from-home technologies — whether employee-owned or government issued — must incorporate the organization’s security processes and policies around sensitive data. Government-issued laptops should have remote access capability to keep OS and security product patches up to date, ensure VPN connections are working and generally maintain security standards. It’s also important to conduct and continually reinforce security awareness training focused specifically on working at home or remotely. Then, make the new normal as simple as possible; have everything in place for users to just basically turn on their laptop and log into the system.”

Read more insights from Dell Technologies’ Chief Strategy and Innovation Officer of State and Local Government, Tony Encinias.

 

Simple, Smart and Fast: Search-Driven Analytics for Data Privacy and Compliance  

“Clearly defined use cases are critical. What questions do agencies need to answer to fulfill their mission, and what data do they need to obtain those answers? Once you find that data, how do you store it, and how do you track compliance requirements on that data? How do you enable data sharing and transparency without interfering with privacy and security? Another critical piece is the criteria and best practices used for tool selection. Can you get to granular levels of data and customize security clearances down to the role level or column level so you can govern who’s seeing what without having to create duplicate data lakes for each department? That can create a lot of economies of scale and enable organizations to more easily and confidently share data across agencies.”

Read more insights from ThoughtSpot’s Senior Director of Global Public Sector and Industry Alliances, Helen Xing.

 

Using a Data-Centric Approach to Reduce Risk and Manage Disruption  

“AI and ML have a lot of potential to streamline privacy and compliance, but they also come with certain risks. For example, AI/ML require systems to be trained. If systems are trained inadequately or with inaccurate data, the result may be poor decisions that ultimately cause more damage than good. This is why, as discussions about the use of AI and ML continue, we expect to see more emphasis on accountable development and usage. In practice, this means having requirements around transparency of AI usage, decisions and data quality, as well as robustness in terms of AI security and resilience.”

Read more insights from Broadcom’s Global CTO and Chief Architect for Symantec Enterprise Division, Paul Agbabian.

 

Leading Through Change  

“People have been self-servicing analytical needs for years because they need to answer their own questions rapidly. But are people asking the right questions and are they doing all that in the most efficient digital forms? Proficiency is one of the core capabilities defined in the Tableau Blueprint, which is a prescriptive, proven methodology for becoming a more data driven organization. Proficiency speaks to the need to educate people to see and understand data for decision-making. That includes educating them on how to work with data, measuring the value that they derive from their use of data, and institutionalizing best practices that drive behavior change and informed decision-making.”

Read more insights from Tableau’s Senior Manager of Customer Success, Jeremy Blaney.

Download the full Innovation in Government® report for more insights from these Government Data, Identity and Privacy thought leaders and additional industry research from GovTech.