
Employee communications to drive AI success

As AI finds its way into nearly every facet of life, companies are striving to increase AI adoption among their employees to enhance efficiency and optimize results. Yet heightened use of these tools also introduces risks, including loss of confidentiality over sensitive data, production of inaccurate documents, poor decision-making, and violation of legal and contractual obligations.

Clear internal communication of AI policies can mitigate these risks – just as poor communication can heighten them. When employees aren’t sure which tools are approved, what data is safe to input, or what use cases are allowed, organizations face increased exposure to cybersecurity threats and compliance failures, disappointing adoption rates, and a lack of return on investment. Unclear guidance can also lead employees to use confidential client or customer information with AI in violation of privacy policies or contractual requirements.

To avoid these pitfalls and the reputational damage they can cause, organizations should consider building out employee communications programs that include these three best practices:

  1. Define a strategic framework


    A clear strategy should be the foundation for effective communication. Companies should know: What are the goals of our AI adoption? Is AI intended to accelerate growth initiatives, drive efficiencies, improve customer service, or advance other key objectives? Will the use of AI be governed by a centralized function across the enterprise, or is its application – and enforcement of use restrictions – dispersed and flexible? What tools will employees have access to, and what are concrete examples of how they should, and should not, use them?


    The answers will help the company frame communications in a way that supports its goals. For example, if a core goal of AI implementation is to boost productivity, it may be prudent to spend more time assuaging employee fears around AI, building trust in its reliability, and directly speaking to potential objections than listing technical details of the platform.


  2. Craft effective messaging


    Messaging should feel authentic to the company. If a core company value is fostering growth and advancement for employees, explain how using cutting-edge technology is a chance for upskilling, or how it can free up time for additional learning and development. This helps to build trust in leadership and in the tools themselves.


    Demystifying AI also goes a long way in improving employee perceptions. Distilling technical details into clear, actionable language helps employees understand what AI can offer them and sets expectations for secure use. Avoid acronyms and, whenever possible, provide examples of practical use cases that connect to employees’ day-to-day work.


    Most importantly, communications should be two-way. Building feedback loops through anonymous surveys and dedicated Q&A sessions, and empowering middle managers with strategies for talking to their teams, together create a collaborative environment that keeps leadership ahead of issues, ensures gaps in the policy are rapidly identified and remedied, and sees that employee needs are met.


  3. Develop multiple touchpoints


    An internal responsible use policy buried on the company intranet probably won't cut it, especially given the risks posed by AI. A multi-phase, multi-channel approach that leverages commonly used channels like Slack, employee newsletters, and town hall meetings can help to ensure that employees don’t accidentally miss essential requirements of the AI policies and procedures. It also gives employees a choice of where and how they’d like to engage. Finally, different channels lend themselves to different types and styles of information. Blending formats gives the company the chance to share all the relevant information (including things like monthly tips and FAQs) in bite-size chunks without bogging employees down so much that they overlook it altogether.

When AI is not properly introduced and explained to a workforce, companies risk confusion and operational inefficiencies, and in some cases, legal liability and reputational harm. To ensure use of AI is compliant, responsible, strategically aligned, and organizationally successful, leaders should carefully communicate to, and engage with, their internal stakeholders — creating one in-sync team that works together toward a common goal.


About FGS Global

FGS Global is the world’s leading stakeholder strategy firm, with approximately 1,400 professionals around the world advising clients as they navigate critical issues and reputational challenges — including the evolving landscape of artificial intelligence. FGS offers board-level and C-suite counsel in all aspects of stakeholder strategy, spanning corporate reputation, crisis management and public affairs, and is also the leading force in transaction and financial communications worldwide.

FGS is consistently ranked a Band 1 PR firm for Crisis & Risk Management and for Litigation Support by Chambers and Partners. In 2024, for the second consecutive year, Mergermarket ranked FGS the #1 global M&A PR firm by deal count and value.


About Debevoise & Plimpton

Debevoise’s Chambers-ranked artificial intelligence (AI) practice is the country’s premier AI practice for financial services and insurance clients. It pairs the leaders of our Chambers Band 1 Securities Enforcement team, our Band 1 Insurance Regulatory practice, and our Band 1 Trademark and Copyright practice with a market-leading AI team led by Avi Gesser, who is ranked Chambers Band 1 for AI and has more than a decade of experience advising financial services clients on AI issues.

Debevoise’s AI practice is currently advising approximately 150 clients—including 100 financial services and insurance clients—on a broad range of AI issues, including: responding to AI regulatory exams; negotiation of complex AI data licensing agreements; AI deal diligence; board oversight responsibilities for AI risk; AI vendor risk management; AI policies and governance; AI data privacy and scraping issues; risk assessments for core AI use cases; managing bias risk for AI insurance underwriting and consumer credit; defending against deepfakes and other AI-enabled cybersecurity attacks; deploying customer-facing chatbots; implementing AI webmeeting tools (such as Zoom AI Companion and Teams Copilot for meeting summaries, transcriptions, and recordings); and training and deploying foundation models.
