
Three Tips for Compliance and Ethics Officers to Integrate AI in their Codes of Conduct

Insights shared by Lauren Kornutick, Director Analyst, Gartner



Overview


Compliance leaders are facing heightened expectations to provide employees with clear guidance on the responsible use of artificial intelligence (AI). Most recently, the U.S. Department of Justice warned that it will take a hard line against the misuse of AI and will consider a company’s AI risk management as part of its overall compliance efforts.


For businesses that fall within the scope of new global regulations and government orders, such as the European Union’s AI Act, the U.S. AI Executive Order and New York City’s AI bias law, an inability to manage AI-related risks, or a lack of awareness of them, could jeopardize their compliance standing.


We spoke with Lauren Kornutick, Director Analyst in the Gartner Legal and Compliance practice, to discuss how chief compliance officers should prioritize and update their AI risk management programs and communicate AI guidelines to employees across their organizations.


Journalists who would like to speak with Lauren regarding this topic should contact Heather Sabharwal. Members of the media can reference this material in articles with proper attribution to Gartner.


Q: Why should compliance leaders consider adding AI guidance to their codes of conduct?


A: Incorporating AI guidance into an organization’s code of conduct is crucial. The code acts as a comprehensive resource for employees seeking corporate direction and for external stakeholders monitoring a firm’s governance.


The reason for compliance leaders to consider adding AI guidance to their code is threefold:


  • Prevalent use of AI. The average employee now has access to AI, and without guardrails, they may expose sensitive data, make biased decisions, or use the technology to draft misleading or deceptive communications with customers.

  • Increased regulatory scrutiny. With warnings from the U.S. Department of Justice against AI-facilitated misconduct, as well as new global regulations and government orders, appearing oblivious to these compliance obligations is not an option.

  • Growing stakeholder demand for transparency. Investors, suppliers, customers and other external stakeholders demand to know more about the guardrails being placed around companies’ use of AI.


Q: As corporate compliance leaders look to incorporate guidance on the responsible use of AI into their codes of conduct, how should they get started?


A: Issuing and updating guidance can seem daunting, so we have three tips for corporate chief compliance officers to consider when getting started.


1) Integrate AI content based on your current code structure and risk assessment. Compliance leaders should use this as an opportunity to uphold a corporate value, tying the ethical use of AI to a company-level principle; this sends a strong message to the workforce.


Leaders can also frame the guidance in the context of an existing risk. Companies with limited AI use cases may see the risk manifest in one particular area; where varied AI use cases raise more complex issues, a dedicated section in the code of conduct can provide context and clarity.


2) Give employees practical guidance and examples of expected conduct. Explain why AI matters to the business, such as how it enables new solutions or faster service, which raises the stakes for the responsible and ethical use of AI.


Guidance should also include examples of role-specific responsibilities, such as for staff who design, deploy or test AI as part of their remit, or for company executives, who may benefit from a stand-alone, public-facing AI code that outlines their duties to teams, vendors and business processes. The code of conduct should serve as a summary of expectations, with links to the relevant policies or documents that detail AI-related topics.


3) Do not overstate your AI risk controls, and avoid inconsistency. The AI section in the code should align with any lower-level guidance already issued, such as a GenAI use policy if the company has one. Compliance leaders should also be mindful of statements about their risk controls. To avoid making claims that cannot be backed up, they should work with their partners, including IT, data privacy and enterprise risk management, to confirm that the relevant processes are in place and followed in practice before highlighting them in the code.


 

“Incorporating AI guidance into an organization’s code of conduct is crucial.”

 

Q: How can legal and compliance leaders take an active role in assessing and mitigating the risks of generative AI (GenAI)?


A: Evolving regulations across different jurisdictions require a proactive management framework to avoid reputational, regulatory, and financial damage.


The first step is to identify the risks associated with each GenAI solution and map the risks to the mitigation plans and controls.
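To make the mapping concrete, a risk register of this kind can be kept as simple structured data that links each GenAI solution’s risks to their mitigations and controls. The sketch below is a minimal illustration in Python, not anything prescribed in the interview; the solutions, risks, mitigations, controls and owners are all hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIRisk:
    """One register entry: a GenAI solution's risk mapped to mitigations and controls."""
    solution: str                                    # the GenAI solution at issue
    risk: str                                        # short description of the risk
    mitigations: list = field(default_factory=list)  # planned mitigation steps
    controls: list = field(default_factory=list)     # controls that enforce them
    owner: str = "unassigned"                        # accountable function

# Hypothetical entries for illustration only.
register = [
    GenAIRisk(
        solution="customer-support chatbot",
        risk="sensitive customer data sent to a third-party model",
        mitigations=["redact PII before prompts leave the network"],
        controls=["DLP scanning on outbound API traffic"],
        owner="data privacy",
    ),
    GenAIRisk(
        solution="marketing copy generator",
        risk="misleading or unsubstantiated product claims",
        mitigations=["human review before publication"],
        controls=[],  # no control mapped yet -- should be flagged
        owner="compliance",
    ),
]

# Surface any risk that has no control mapped to it.
for entry in register:
    if not entry.controls:
        print(f"UNMITIGATED: {entry.solution}: {entry.risk}")
```

A register in this shape also makes gaps visible: any entry without a mapped control is, by definition, an unmitigated risk.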


Next, create a cross-functional team to identify and mitigate the risks associated with GenAI solutions. Team members should include key subject matter experts from legal, compliance, privacy, risk, audit, and IT security. The team should facilitate the deployment of GenAI within the organization while also addressing the actual and residual risks tied to each solution’s specific use case and deployment model.


The team should then test and monitor GenAI solutions at multiple stages: during vendor selection, before launch and throughout their use.


Finally, after identifying the technology components that support trust, risk and security in GenAI applications, models and other AI entities, set up proofs of concept to test emerging GenAI products that can augment traditional security controls. Apply them to production applications once they perform as required.
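As an example of what such a proof of concept might look like at its simplest, the sketch below screens generated text for sensitive-data patterns before it reaches a user, augmenting rather than replacing traditional controls. Everything in it is an assumption made for illustration: the regex patterns, the `generate` stub and the blocking behavior are hypothetical and do not describe any specific product.

```python
import re

# Hypothetical sensitive-data patterns a proof-of-concept filter might screen for.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_output(text):
    """Return the names of any sensitive-data patterns found in generated text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def generate(prompt):
    # Stand-in for a call to whatever GenAI product is under evaluation.
    return "Contact me at jane.doe@example.com for the full report."

draft = generate("Summarize the quarterly report.")
hits = screen_output(draft)
if hits:
    print(f"Blocked: generated output matched sensitive patterns {hits}")
else:
    print(draft)
```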
