AI security risks: A CISO perspective

NOTE: If your organization is working to identify these risks as part of AI Policy development, please feel free to leverage any or all of this text.

Artificial intelligence (AI) is a rapidly developing technology with the potential to revolutionize many industries. However, as with any new technology, there are also security risks associated with its use.

A corporate security policy for the use of AI is essential to protect the company's data and reputation. Such policies should cover both traditional AI and the emerging Generative AI. Generative AI is a powerful technology that can be used to create a wide variety of content, including text, images, and videos.

From a CISO perspective, there are 10 core risks that should be considered in the development of company security policies for the use of AI systems.

1. Uncontrolled usage: Unrestricted and unregulated use of any technology within an organization generally represents an unacceptable risk. Typically, the installation and use of new applications is tightly controlled, as is the onboarding and contracting with new vendors/suppliers. Further, the more powerful and impactful the application, tool, or service, the greater the risk, and the more stringent the controls. Traditional AI as well as the emerging Generative AI tools are among the most impactful systems that any organization is likely to leverage over the next several years and may represent some of the organization’s greatest competitive advantages. However, the unrestricted and unregulated use of this technology within the business by employees also risks catastrophic results.

2. Data security and regulatory compliance: AI systems often require access to sensitive data, such as personal information or intellectual property. This data must be protected at all times to prevent unauthorized access, disclosure, or modification. Many regulations, such as the General Data Protection Regulation (GDPR), require companies to take steps to protect the security of personal data. Therefore, AI data access, data transport technology, and data storage (training data and knowledge base) must be considered and included in the scope of all existing data classification and protection requirements.
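
As a minimal illustration of keeping sensitive data out of external AI services, the sketch below redacts common PII patterns before a prompt leaves the organization's boundary. The patterns and placeholder format are assumptions for the example; a production control would rely on vetted DLP or PII-detection tooling rather than hand-written regexes.

```python
import re

# Illustrative PII patterns only; a production control would rely on vetted
# DLP / PII-detection tooling rather than hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected PII with a typed placeholder before the prompt
    leaves the organization's boundary."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

# Redact before calling any external generative AI endpoint.
safe_prompt = redact_pii("Email jane.doe@example.com, SSN 123-45-6789.")
print(safe_prompt)  # -> Email [REDACTED-EMAIL], SSN [REDACTED-SSN].
```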

3. Output reliability and over-reliance on AI: There is a clear risk that businesses could become too reliant on AI systems, marginalizing human judgement and intuition. This could lead to poor decisions if the AI system's outputs are treated as infallible. Three key reasons that business decisions must not become overly reliant on AI are Explainability, Judgement, and Fabrication.

Explainability: Generative AI may produce results that are not “explainable”. Making recommendations or taking actions based on unexplainable results carries an inherent risk of error. This is compounded during any potential litigation in that the organization may be unable to provide a rational justification for such recommendations or actions.

Judgement: Generative AI cannot make judgements based on real-world experience or new information. These systems simply make a statistical best guess about what follows in a sequence of text to emulate a “human-like” response. There is no retained consciousness that learns from experience across a complex set of unconnected, yet interrelated, systems.
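
As a toy illustration of this “statistical best guess” behavior, the sketch below picks the most probable continuation from an invented distribution. The token probabilities are assumptions for the example, not output from any real model.

```python
# Toy next-token prediction: a generative model assigns a probability to each
# candidate continuation and the sampler chooses from that distribution.
# These probabilities are invented for illustration; a real model scores
# tens of thousands of candidate tokens.
next_token_probs = {
    "Paris": 0.62,   # statistically likely continuation
    "London": 0.21,
    "Berlin": 0.11,
    "Narnia": 0.06,  # implausible, but never strictly impossible
}

prompt = "The capital of France is"
best_guess = max(next_token_probs, key=next_token_probs.get)
print(f"{prompt} {best_guess}")  # -> The capital of France is Paris
```

The choice reflects statistical frequency, not understanding; given a prompt that is poorly represented in the training data, the “best guess” can be confidently wrong.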

Fabrication: Generative AI may produce results containing fabricated information. This is known as AI “hallucination” and can result from prompting the system for information that is non-existent or outside of the training data range. Generative AI may also output results that are logically flawed, mathematically incorrect, or contain misinformation. If actions are taken based on such erroneous information, those actions could result in significant damage to the organization or a customer.

4. Output classification and controls: In cases where Generative AI systems are trained on or have access to sensitive data, the output of these systems may contain data that is classified beyond “Public” and requires data tagging and specific data protection controls. Such systems may therefore produce results that expose confidential or sensitive information regarding the organization, its customers, or individuals. Further, due to the nature of Generative AI, these exposures cannot be predicted. The inability to foresee when sensitive data may be generated, to automatically recognize and tag such data upon generation, and to automatically contain it with appropriate controls represents a potentially significant risk.
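
One possible mitigation is an automated output gate that scans generated text for sensitive markers and applies the most restrictive matching classification before release. The sketch below is a simplified example under stated assumptions: the patterns, labels, and the “Project Atlas” codename are hypothetical, and a real deployment would integrate the organization's DLP tooling.

```python
import re

# Simplified output gate: scan generated text for sensitive markers and attach
# the most restrictive matching classification before release. The patterns,
# labels, and "Project Atlas" codename are hypothetical examples.
SENSITIVE_MARKERS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "Confidential"),   # SSN-like pattern
    (re.compile(r"(?i)\binternal use only\b"), "Internal"),
    (re.compile(r"(?i)\bproject\s+atlas\b"), "Restricted"),   # hypothetical codename
]

# Classification levels ordered from least to most restrictive.
LEVELS = ["Public", "Internal", "Confidential", "Restricted"]

def classify_output(text: str) -> str:
    """Return the most restrictive label triggered by the text,
    defaulting to 'Public' when nothing matches."""
    label = "Public"
    for pattern, marker_label in SENSITIVE_MARKERS:
        if pattern.search(text) and LEVELS.index(marker_label) > LEVELS.index(label):
            label = marker_label
    return label

generated = "Summary: Project Atlas launch slips to Q3."
label = classify_output(generated)
if label != "Public":
    print(f"Quarantine for review: output tagged {label}")  # tagged Restricted
```

Because sensitive output cannot be fully enumerated in advance, a gate like this reduces rather than eliminates the risk.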

5. Cyberattacks: AI systems can be targeted by cyberattacks, such as malware or denial-of-service attacks. Given the pace of development, and the fact that many of the tools are released in “experimental” or “research” versions, it is unlikely that they will have the vulnerability and penetration testing maturity expected in typical enterprise software. Further, there is insufficient historical data to estimate the degree and criticality of the potential unknown vulnerabilities of this class of tools or the environments in which they reside.

6. Harmful output potential: Generative AI may produce content (text, images, video) that is harmful, offensive, sexually explicit, or misleading, or that advocates undesirable behavior. Risks associated with such content include employee safety, employee retention, employee litigation, regulatory fines, customer litigation, and damage to the company’s reputation.
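
A common mitigating control is a pre-release moderation gate. The sketch below assumes an organization-maintained denylist (the terms shown are placeholders); real deployments typically layer a dedicated content-moderation model or API on top of, or instead of, keyword checks.

```python
# Minimal pre-release moderation gate, assuming an organization-maintained
# denylist. The terms below are placeholders; production systems typically
# layer a dedicated content-moderation model or API on top of keyword checks.
DENYLIST = {"slur_example", "explicit_example"}

def safe_to_release(generated_text: str) -> bool:
    """Return True only if no denylisted term appears; otherwise the output
    should be routed to human review instead of being published."""
    lowered = generated_text.lower()
    return not any(term in lowered for term in DENYLIST)

if not safe_to_release("Draft marketing copy mentioning slur_example..."):
    print("Blocked: routing output to the human review queue")
```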

7. Output bias: AI systems can reproduce and even amplify biases present in the data they are trained on. This could result in unfair or unethical outcomes, such as discrimination in hiring, lending, or customer service scenarios, and in turn harm the organization’s reputation.
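
One way to make this risk measurable is a periodic disparate-impact check on the system's decisions, such as the widely used “four-fifths rule” sketched below. The group names and counts are invented for the example.

```python
# Sketch of a disparate-impact check on AI-driven decisions using the common
# "four-fifths rule": flag any group whose selection rate falls below 80% of
# the highest group's rate. Group names and counts are invented for the example.
decisions = {
    # group: (selected, total) -- e.g., resumes advanced by an AI screener
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: selected / total for group, (selected, total) in decisions.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    status = "OK" if ratio >= 0.8 else "FLAG: possible disparate impact"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

A check like this does not prove fairness, but it turns an abstract reputational risk into a monitorable metric.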

8. Ownership and copyright ambiguity: There is an ongoing legal debate around ownership of output from generative AI when that output is a derivative of existing works. Moreover, it has already been observed that systems trained on data collected from the Internet have the potential to be exposed to copyrighted material that has been pirated and unlawfully posted publicly. The lack of clarity on the ownership of AI-generated output, and on how copyright law treats such inadvertent exposure, creates non-trivial business risk.

9. Training and awareness: Without mandatory training specific to the use of traditional and generative AI tools, employees are likely to “experiment” with such tools in ways that place the organization at risk. Note that this is not due to any malicious intent on the part of employees, but simply due to the newness of the technology and the lack of awareness of the potential hazards. For example, since many people may accept without question that the output from an AI system is accurate and complete, they may use this output in making impactful business decisions.

10. Rapid rate of change: Generative AI is evolving rapidly, and the list of what it can and cannot do is likely to change monthly or more frequently. The organization may be blindsided by new or changed capabilities, and the impacts of those changes are unpredictable. This means that new and unforeseen risks may be introduced with little or no warning.

To learn more about how to mitigate the risks of traditional and generative AI, speak with an expert.

