Managing reputation in the age of AI: key insights from expert panel
FGS Global hosted a panel discussion in London on managing reputation in the age of AI, highlighting the potential paradigm shift that AI could bring to various industries.
To learn more about what was discussed, read our Insights piece following the event.
The EU AI Act: The race to prevent human extinction
On Wednesday 14 June 2023, the European Parliament adopted its negotiating position on the EU AI Act, a set of new rules intended to ensure safer use of AI by both providers and users and to mitigate the risks posed by AI systems.
In May 2023, several leading researchers and tech executives, including DeepMind chief executive Demis Hassabis, signed an open letter outlining the potential risks of artificial intelligence. The joint statement, which urged that mitigating these risks be made a global priority, warned of threats to human rights and health and, most alarmingly, to the survival of humanity itself.
The EU AI Act classifies AI systems by risk level, ranging from “limited” to “unacceptable”, and imposes severe fines and bans accordingly. These penalties will greatly affect technologies such as facial recognition, deepfake videos, and AI chatbots.
MEPs aim to reach an agreement with the Council on the final form of the EU AI Act by the end of 2023.
Google and OpenAI limit AI chatbots in Hong Kong
US tech companies have limited some of their offerings in Hong Kong amid growing fears that China’s influence will undermine the city’s ability to maintain an open internet, according to media reports.
Google and OpenAI have limited access to their AI chatbots in Hong Kong amid fears that expansion in the city could expose the companies to liability under a Chinese national security law that criminalizes criticism of the government. Disney has also chosen not to offer, on its streaming service in Hong Kong, two episodes of “The Simpsons” that reference criticism of the Chinese government.
Meanwhile, Apple has updated its privacy policy to disclose that it warns users in Hong Kong about malicious links using a tool from China-based Tencent; that tool has temporarily blocked access to Western sites such as Mastodon, Coinbase and GitLab.
The Ogilvy campaign: Advertising agencies should disclose the use of AI-generated content
Last week, the agency Ogilvy called on advertisers and social media platforms to require brands to disclose the use of AI in their campaigns. Ogilvy’s global head of influence, Rahul Titus, proposed an AI Accountability Act under which brands would be required to label their AI-generated content with the hashtag #PoweredbyAI or a dedicated watermark.
Titus’ proposal is driven by the increasing presence of AI in marketing campaigns. The agency fears that AI could undermine content authenticity and erode consumer trust in influencer marketing.
Earlier this week, Ogilvy mandated that its clients inform users when content has been generated by AI. Titus stressed that this is an effort to work with, rather than against, AI to establish an ethical and transparent framework for the industry.