It won’t be long before AI gives us new tools not just to speed up our communications but also to personalize and tailor them to individuals where once we could only speak to groups. How will you decide whether to use these tools, and if so, how much? What rules should guide your company?
Technology allows for new ways of communicating but also brings new risks. We have all seen companies get into trouble after rolling out layoffs or major strategic changes to employees over video calls that were recorded and leaked to the public.
What would your employees say if they learned a layoff announcement was generated by a machine? What would your customers say if they learned you were generating content using AI models optimized for particular demographic or partisan subgroups?
Adopting these systems isn’t just a technology question for the CTO, or an efficiency and profitability question for the CFO. These are reputational questions that fall squarely on our shoulders. What are your values? How do they inform the choices you will make around AI? What is the process you will go through to make those choices – and then to defend them if you are ever challenged in the future?
We’re certainly thinking a lot about this in our work at FGS. As a core principle, AI should be used to inform and enhance our work, never to replace human analysis, judgment and review. We expect employees to adhere to the following requirements:
Confidentiality: Employees are strictly prohibited from entering confidential or sensitive client or firm information into ChatGPT or any other generative AI technology.
Accuracy: Everyone must independently fact-check anything produced by ChatGPT or any other generative AI technology, as these tools frequently produce errors and inaccuracies.
Transparency: All staff must disclose to account leads the use of any generative AI technology in the creation of a specific work product.