Since the introduction of ChatGPT, the use of artificial intelligence (AI) has expanded rapidly. While generative AI offers many benefits, it also raises security concerns that can be alleviated through several key strategies.
The Benefits and Risks of Generative AI
The primary focus of AI is to use data and computation to aid decision-making. Generative AI can create text responses, videos, images, code, 3D products and more. AI as a Service, a cloud-based model for delivering AI capabilities, helps experts work more efficiently by advancing infrastructure at a quicker pace. At the same time, the general public often treats AI as a toy, since its responses can be entertaining. That comfort with AI, combined with the wide range of inputs users supply, introduces risks that can proliferate quickly.
There are several key concerns for government agencies utilizing generative AI:
- Copyright Complications – AI content is drawn from many different sources, and that content may be copyrighted. It is difficult to know who owns the words, images or source code that is generated, as the AI's output is derived from its training data, which could be open-source or proprietary information. To mitigate this, users should modify rather than copy any information obtained from AI.
- Abuse by Attackers – Bad actors can utilize AI to execute more effective and efficient attacks. While AI is not yet self-sufficient, inexperienced attackers can use AI to make phishing attacks more convincing, personal and effective.
- Sensitive Data Loss – Users have, either intentionally or unintentionally, input sensitive data or confidential information into generative AI systems. Users may find it easier to disclose sensitive information in AI prompts because they dissociate the risk from a non-human machine.
The many capabilities of AI entice employees to use it to support their daily tasks. However, when this involves introducing sensitive information, such as meeting audio for transcription or proprietary source code, security concerns follow. Once data is in the AI's system, it is nearly impossible to have it removed.
To protect themselves from security and copyright issues with AI, several large communications companies and school districts have blocked ChatGPT. However, this still carries risk: employees or students will find ways around security controls to use AI. Instead of blocking apps, organizations should create a specific policy around generative AI that is communicated to everyone in the organization.
Combatting AI Risks
One way to enforce such a policy is to deploy a Data Loss Prevention (DLP) solution. A DLP's purpose is to detect and prevent unauthorized data transmission, and its capabilities can be applied to AI tools to mitigate these concerns. It works through three main steps:
- Discover – DLPs can detect where data is stored and report on its location to ensure proper storage and accessibility based on its classification.
- Monitor – Agencies can oversee data usage to verify that it is being used appropriately.
- Protect – By enforcing data-loss policies and educating employees, DLPs help prevent data from being leaked or stolen.
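The detection step above can be illustrated with a minimal sketch: scanning outbound text, such as an AI prompt, for sensitive patterns before it leaves the endpoint. The pattern names and regular expressions here are hypothetical examples for illustration only; a real DLP product ships with far more extensive detectors and classification rules.

```python
import re

# Hypothetical detectors for illustration; real DLP engines use many more
# detectors plus content classification, not just regular expressions.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of detectors that match, i.e. sensitive data
    that should not leave the endpoint in an AI prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def allow_prompt(text: str) -> bool:
    """Block (return False) any prompt containing sensitive data."""
    return not scan_prompt(text)
```

In practice a match would trigger the policy response the agency has configured, such as blocking the upload, redacting the match, or alerting an administrator.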
DLP endpoint agents can reside on laptops or desktops and provide full security coverage by monitoring data uploads, blocking data copied to removable media, blocking print and fax options, and covering cloud-sync applications. For maximum security, agencies should utilize DLPs that cover all three data states—data at rest, data in use and data in motion. A unified policy based on detecting and responding to data leaks will prevent users from misapplying AI while balancing usability with secure operation.
While agencies want to stay competitive and benefit from AI, they must also recognize and take steps to reduce the risks involved. By educating users about the pros and cons of AI and implementing a DLP to prevent accidental data leakage, agencies can achieve the benefits without the exposure.
Broadcom is a global infrastructure technology leader that aims to enhance excellence in data innovation and collaboration. To learn more about data protection considerations for generative AI, view Broadcom’s webinar on security and AI.