Everyone’s racing to use AI – but few have paused long enough to ask, “Do we have rules for this?” This session walks HR and business leaders through how to create a responsible, legally sound and people-centered AI policy that keeps innovation flowing without landing on the EEOC’s radar.
Participants will be able to:
- Identify where AI is being used in their HR systems.
- Explain the legal and ethical risks (bias, discrimination, data privacy, transparency) that policies must address.
- Draft the essential components of an AI policy, including scope, accountability, human oversight, and data use.
- Vet AI vendors and tools for compliance and ethical risk factors.
- Develop an implementation and training plan to keep their AI policy active and effective.
Course outline:
- What AI really is (and what it’s not) in HR settings.
- The new risk landscape: EEOC, ADA, DOL, and state-level AI laws.
- Why “human-in-the-loop” oversight is essential for compliance.
- Core elements of a responsible AI policy.
- How to address data privacy and employee monitoring concerns.
- How to roll out and maintain your policy through training and audits.
- HR’s evolving role as the ethical gatekeeper of technology.
Part 1: Understanding the Risk (0–15 minutes)
- Overview of AI’s growth in HR and common use cases.
- Legal exposure from recent cases (Workday, EEOC ADA guidance).
- How regulators define “responsible AI.”
Part 2: Building the Policy (15–30 minutes)
- Sections every AI policy should include: scope, definitions, oversight, data governance, and transparency.
- How to align with company values and existing compliance frameworks.
- Interactive activity: Spot the missing safeguards in a sample policy.
Part 3: Bringing It to Life (30–45 minutes)
- How to train staff, monitor vendors, and review tools annually.
- Checklist: 3 steps to get started this quarter.
- Wrap-up: balancing innovation and accountability.
Because every company is already using AI — whether they realize it or not. And without a policy, HR is flying blind.
This session will help HR leaders get ahead of AI risk before regulators, employees, or vendors force their hand. You’ll leave with practical templates, real-world examples, and the confidence to lead AI conversations inside your organization — not just react to them.
Register at "Your Company is Already Using AI. Where's Your Policy?"


I know a lawyer (a profession that has made terrible use of AI in recent years). His firm's original policy on AI was "Don't. Ever." But they've recently revised it to "You can only use specific AIs that are approved by management." None are approved, and none are likely to be.
Their concern wasn't that AIs tend to hallucinate case law that doesn't exist. Any competent lawyer who isn't horribly overworked will verify everything coming out of the AI before even thinking about submitting it. Their concern was that pretty much every AI submits your queries to "the cloud," where they are added to a database and used to formulate answers to future queries. And in their case, a query would almost always include client information they are legally required to keep confidential. Only a model that runs 100% in-house on their own hardware would be allowed, and so far, they haven't found one that is actually useful.