
Why Organizations Need an Artificial Intelligence Policy (Part 1)

Graphic depicting a boardroom developing an artificial intelligence policy

Estimated reading time: 8 minutes

Given some of the challenges of artificial intelligence (AI) right now, it might be tempting to say that AI isn’t the silver bullet everyone expected it to be. Personally, I think we’re still very early in the AI adoption curve, so organizations need to continue to pay attention to what’s developing and run experiments to see how it works.

In the past, we have talked about the need for organizations to develop an AI strategy. Today I want to talk about developing an internal AI policy. I had the opportunity to hear our friend Carrie Cherveny speak at SHRM’s 2024 Annual Conference on “Getting Smart About AI,” which was very informative. So I asked Carrie if we could talk about developing an AI policy, and thankfully she said yes.

Having an AI policy is a fundamental step in being “ready” for AI in your workplace. An AI policy is now as essential as, say, your anti-harassment or Family and Medical Leave Act (FMLA) policy.

Carrie Cherveny is chief compliance officer and senior vice president of strategic solutions at HUB International. In her role, Carrie works with clients to develop strategies that ensure compliance and mitigate risk when it comes to benefits and employment practices. As always, remember that her comments should not be construed as legal advice or as pertaining to specific factual situations. If you have detailed questions, you can direct them to your friendly neighborhood employment lawyer.

Carrie, thanks for being here. Why should organizations consider having an internal AI policy (in addition to an AI strategy)?


(Cherveny) AI is everywhere these days. Have you seen the Olympics? It seemed like more than half of the ads were for AI platforms. On June 10, 2024, Apple announced the upcoming launch of Apple Intelligence, its new artificial intelligence technology that will be integrated into the release of iOS 18. According to Apple’s press release, “It uses the power of Apple silicon to understand and create language and images, take action within apps, and draw on personal context to simplify and accelerate everyday tasks.” Ready or not, AI is here. Having an AI policy in place is a fundamental step toward being “ready” for AI in your workplace. An AI policy is now as essential as, say, your anti-harassment or Family and Medical Leave Act (FMLA) policies.

Employers have a number of decisions to make: whether to allow the use of AI in the workplace and whether to limit AI to a specific platform. They must also identify which departments and roles are permitted to use AI and which are not. Well-crafted policies are designed to address these questions and more.

When it comes to policy development, HR departments often take the lead. Who should be involved in helping to develop AI policy?

(Cherveny) AI has the potential to impact every corner of your organization. This means your organization’s AI policy should be multifaceted and span multiple disciplines. Organizations should establish an AI committee and include at least the following:

  • Legal/Corporate Lawyer
  • Human Resources
  • Finance/Accounting
  • Operations

Other members of the subject matter expert (SME) committee will depend on the nature of the business. For example, a healthcare company would likely include its Health Insurance Portability and Accountability Act (HIPAA) Privacy Officer. A financial services company might include its compliance department along with a data privacy officer. Employers with unionized employees may want to include a union representative.

Once we have identified who should be involved in developing an AI policy, is there a framework they can follow to identify key areas of concern?

(Cherveny) Not only should the AI committee work together to develop comprehensive policies, but the committee should also be tasked with vetting the AI tools. For example, a committee should develop a robust discovery process to better understand the vendor’s reputation, how it handles the information fed into its system, and its data security and cybersecurity measures.

The organization must establish comprehensive, clear and unambiguous “rules of the road” for the use of AI in the workplace, including, for example:

  • Prohibited uses of AI. Consider the types of data that employees might never enter into an AI platform, such as personally identifiable information (PII), protected health information (PHI), and confidential business data (financial data, methodologies, trade secrets, confidential attorney-client information, etc.).
  • Permitted applications of AI. When is an employee allowed to use AI in performing their job? For example, AI can create efficiencies in general research, creating or identifying sample documents, composing written documents, or serving as a job aid (such as skill development, learning a new system, or learning a tool within a system, like Excel pivot tables).
  • Required safety measures. Should employees be required to “fact-check” data or findings obtained through AI? We’ve all read about the lawyers who submitted briefs to the courts filled with fictitious cases and citations. Employees should be required to fact-check AI findings against reliable sources to ensure they are accurate and credible. Some AI platforms, such as Microsoft CoPilot, provide citations and sources for their findings. However, even when the AI provides its sources, the end user should fact-check sources outside of the AI references to ensure the work is complete, thorough, and accurate.
  • Required notices and disclosures. Do your employees need to disclose when they’re using AI? For example, a new law in New York State requires users to disclose when they’re using AI. Notice and disclosures are quickly becoming a best practice in AI policy requirements. Employers may consider requiring employees to disclose the purpose or reason for using the AI, identify the platform(s) used, and provide a summary of the results included in the work product.
  • Mandatory citation and quotations. Are employees required to identify the specific AI tools they relied on in developing their work product? This is a bit like a notice or disclosure policy, but requires the employee to specifically identify the AI tool or platform, not just the use of AI.

Since we’re in an AI conversation, I asked Microsoft CoPilot for a “sample AI use policy for employers in the workplace.” Here’s what it came up with:

Policy for the use of AI tools (template created by Microsoft CoPilot)

Policy brief and objective: Our company policy for the use of AI tools outlines the responsible and safe use of AI (artificial intelligence) tools in the workplace. This is especially important when processing sensitive data, including company and customer data.

Scope: AI tools are revolutionizing our work by automating tasks, improving decision-making, and providing insights. However, they also introduce challenges related to information security and data protection.

Goal: The purpose of this policy is to ensure that all employees use AI tools in a safe, responsible, and confidential manner, evaluating security risks and protecting confidential data.

Policy Statement: We recognize the potential risks that AI tools pose to our business and customers. We are committed to maintaining the confidentiality, integrity, and availability of all company and customer data.

Recommended security practices: Employees are expected to follow these best practices when using AI tools:

  • Evaluation of AI tools: Before using an AI tool, evaluate its security features, terms of service, and privacy policy. Check the reputation of the tool developer and any third-party services involved.
  • Protection of confidential data: Do not upload or share any confidential, proprietary or regulated information without prior approval from the appropriate department.
  • Access control: Do not grant access to AI tools outside the company without proper approval and compliance with security requirements.
  • Use of reputable AI tools: Only use trusted AI tools that meet our security and data protection standards.

Compliance: All employees must adhere to this policy as part of their terms of employment. Any violation may result in disciplinary action, up to and including termination of employment.

Consider this template as a starting point and customize it based on your specific needs and legal requirements. It is also advisable to seek legal advice to ensure compliance with all applicable laws and regulations. Remember that an effective policy is clear, comprehensive, and enforceable.


I want to thank Carrie for sharing her knowledge with us. And I love that she added the example of an AI policy template to get us thinking! If you want to learn more, check out this archived webinar from HUB International on “Humanizing HR in the Age of AI: Embracing the Technology Revolution.”

Once organizations decide they need to create an AI policy, the challenge begins in determining what to include in the policy. Carrie touched on some initial considerations here, but we’ll dive deeper into the components of an AI policy in our next article. Stay tuned!

Image created by DALL-E demonstrating the importance of human oversight in AI

The post Why Organizations Need an Artificial Intelligence Policy (Part 1) appeared first on hr bartender.
