{"id":2926,"date":"2024-10-08T19:44:03","date_gmt":"2024-10-08T19:44:03","guid":{"rendered":"https:\/\/usatrustedlawyers.com\/blog\/risks-realities-and-lessons-for-businesses\/"},"modified":"2024-10-08T19:44:03","modified_gmt":"2024-10-08T19:44:03","slug":"risks-realities-and-lessons-for-businesses","status":"publish","type":"post","link":"https:\/\/usatrustedlawyers.com\/blog\/risks-realities-and-lessons-for-businesses\/","title":{"rendered":"Risks, Realities, and Lessons for Businesses"},"content":{"rendered":"\n<div>\n<div>\n<p>In 2024, a year proclaimed as the &#8220;Year of the Election,&#8221; voters in countries representing over half the world&#8217;s population headed to the polls. This massive electoral wave coincided with the rising prominence of Generative AI (GenAI), sparking debates about its potential impact on election integrity and public perception. Businesses, just like political players, face a new landscape in which GenAI can be both a risk and an opportunity.<\/p>\n<\/div>\n<div>\n<p>GenAI&#8217;s ability to produce highly sophisticated and convincing content at a fraction of the previous cost has raised fears that it could amplify misinformation. The dissemination of fake audio, images and text could reshape how voters perceive candidates and parties. Businesses, too, face challenges in managing their reputations and navigating this new terrain of manipulated content.<\/p>\n<\/div>\n<div>\n<h2>The Explosion of GenAI in 2024<\/h2>\n<p>Conversations about GenAI across eight major social media platforms, online messaging forums and blog sites surged by 452% in the first eight months of 2024 compared with the same period in 2023, according to data from\u00a0<a href=\"https:\/\/www.brandwatch.com\/\" target=\"_blank\" rel=\"noopener nofollow\">Brandwatch<\/a>. 
Many expected 2024 to be the year that deepfakes and other GenAI-driven misinformation would wreak havoc in global elections.<\/p>\n<\/div>\n<div>\n<p>However, reality proved more nuanced than these initial concerns. While deepfake videos and images did gain some traction, the more conventional forms of AI-generated content, such as text and audio, appear to have posed the greater challenge: they were harder to detect, more believable, and cheaper to produce than deepfake images and videos.<\/p>\n<\/div>\n<div>\n<h2>The &#8216;Liar&#8217;s Dividend&#8217; and the Challenge for Truth<\/h2>\n<p>One of the most significant concerns to emerge with GenAI is what has been termed the &#8220;Liar&#8217;s Dividend&#8221;: as belief in the widespread prevalence of fake content grows, it becomes increasingly difficult to convince people of the truth.<\/p>\n<\/div>\n<div>\n<p>It is a &#8220;dividend&#8221; for liars because it allows people to deny things that have really happened, explaining away genuine evidence as fabricated content. Worryingly, in politically polarized countries like the United States, the Liar&#8217;s Dividend could make it even harder for politicians and their supporters to agree on basic facts.<\/p>\n<\/div>\n<div>\n<p>For businesses, this phenomenon poses serious risks. If a company faces accusations, even presenting real evidence to refute them might not be enough to convince the public that the claims are false. 
As people become more skeptical of all content, it becomes harder for companies to manage their reputations effectively.<\/p>\n<\/div>\n<div>\n<h2>What Have We Learned So Far?<\/h2>\n<p>Despite early concerns, 2024 has not yet seen the dramatic escalation of GenAI manipulation in elections that many feared. Several factors have contributed to this:<\/p>\n<ul>\n<li><strong>Public Awareness:<\/strong>\u00a0The public&#8217;s ability to detect and call out GenAI-generated content has improved significantly. Regulators, fact-checking organizations, and mainstream media have been proactive in flagging misleading content, reducing its impact.<\/li>\n<li><strong>Regulatory Readiness:<\/strong>\u00a0Many countries have introduced regulations to address the misuse of GenAI in elections. Media outlets and social media platforms have also adopted stricter policies to combat misinformation, reducing the spread of AI-manipulated content.<\/li>\n<li><strong>Quality Limitations:<\/strong>\u00a0The production quality of much GenAI-generated content has fallen short of what many commentators feared. This has made it easier to identify and call out fake content before it can go viral.<\/li>\n<\/ul>\n<p>However, there have still been notable instances of GenAI manipulation during the 2024 election cycle:<\/p>\n<ul>\n<li><strong>France:<\/strong>\u00a0Deepfake videos of Marine Le Pen and her niece Marion Mar\u00e9chal circulated on social media, prompting significant public debate before being revealed as fake.<\/li>\n<li><strong>India:<\/strong>\u00a0GenAI-generated content was used to stir sectarian tensions and undermine the integrity of the electoral process.<\/li>\n<li><strong>United States:<\/strong>\u00a0GenAI was used to create fake audio clips mimicking Joe Biden and Kamala Harris, causing confusion among voters. 
One political consultant involved in a GenAI-based robocall scheme now faces criminal charges.<\/li>\n<\/ul>\n<h2>Exploiting Misinformation<\/h2>\n<p>For businesses, the lessons from political GenAI misuse are clear: the &#8220;Liar&#8217;s Dividend&#8221; is a real threat, and companies must be prepared to counter misinformation and protect their reputations. As more people become aware of how easily content can be manipulated, they may grow increasingly skeptical of everything they see and hear. For businesses, this makes managing crises, responding to accusations, and protecting brand credibility even more challenging.<\/p>\n<\/div>\n<div>\n<p>At the same time, proving a negative (that something did not happen) has always been difficult. In a world where GenAI can be used to create false evidence, this challenge is magnified. Companies need to anticipate it by building robust crisis management plans and communication strategies.<\/p>\n<\/div>\n<div>\n<h2>Positive Uses of GenAI<\/h2>\n<p>While much of the discussion around GenAI focuses on its negative aspects, there are positive applications as well, especially in political campaigns, which offer lessons for businesses:<\/p>\n<ul>\n<li><strong>South Korea:<\/strong>\u00a0AI avatars were used in political campaigns to engage younger voters, showcasing the technology&#8217;s potential for personalized and innovative voter interaction.<\/li>\n<li><strong>India:<\/strong>\u00a0Deepfake videos of deceased politicians, authorized by their respective parties, were used to connect with voters across generations, demonstrating a creative way to use GenAI in a positive light.<\/li>\n<li><strong>Pakistan:<\/strong>\u00a0The Pakistan Tehreek-e-Insaf (PTI) party, led by jailed former Prime Minister Imran Khan, effectively used an AI-generated victory speech after its surprising electoral win. 
The video received millions of views and resonated with voters, demonstrating GenAI&#8217;s ability to amplify campaign messages in powerful ways.<\/li>\n<\/ul>\n<h2>Looking Ahead: GenAI&#8217;s Role in Crisis Management<\/h2>\n<p>For businesses, the key takeaway from the 2024 election cycle is the importance of planning for the risks posed by GenAI. While the technology has not yet fundamentally reshaped the information environment, its potential to do so remains. Companies must be proactive in addressing the risks posed by AI-generated misinformation and in developing strategies to separate truth from falsehood.<\/p>\n<\/div>\n<div>\n<p>At the same time, businesses should also explore the positive uses of GenAI to engage with their audiences in creative ways, much as political campaigns have done. As the technology evolves, firms that can harness its potential while mitigating its risks will be better positioned to navigate the complexities of the modern information landscape.<\/p>\n<\/div>\n<div>\n<p><strong>Joshua Tucker\u00a0<\/strong>is a Senior Geopolitical Risk Advisor at Kroll, leveraging over 20 years of experience in comparative politics with a focus on mass politics, including elections, voting, partisan attachment, public opinion formation, and political protest. He is a Professor of Politics at New York University (NYU), where he is also an affiliated Professor of Russian and Slavic Studies and Data Science. 
He directs the Jordan Center for the Advanced Study of Russia and co-directs the Center for Social Media and Politics at NYU. His current research explores the intersection of social media and politics, covering topics such as partisan echo chambers, online hate speech, disinformation, false news, propaganda, the effects of social media on political knowledge and polarization, online networks and protest, the impact of social media algorithms, authoritarian regimes&#8217; responses to online opposition, and Russian bots and trolls.<\/p>\n<\/div>\n<div>\n<p><strong>George Vlasto<\/strong>\u00a0is the Head of Trust and Safety at Resolver, a Kroll business. Resolver works with some of the world&#8217;s leading social media companies, Generative AI model-makers and global businesses to identify and mitigate harmful content online. George leverages a 15-year career as a diplomat for the UK government, working in a range of locations around the world, to bring a global perspective to the subject of online harms. He has deep knowledge of online and offline risk intelligence and extensive experience in bringing insight from these domains together to understand the real-world impact for businesses, online platforms and society.<\/p>\n<\/div>\n<div>\n<p><span style=\"color: #0000ff;\"><em><strong>This article appeared in\u00a0<a style=\"color: #0000ff;\" href=\"http:\/\/www.lawjournalnewsletters.com\/cybersecurity-law-and-strategy\/\" target=\"_blank\" rel=\"noopener nofollow\">Cybersecurity Law &amp; Strategy<\/a>, an ALM publication for privacy and security professionals, Chief Information Security Officers, Chief Information Officers, Chief Technology Officers, Corporate Counsel, Internet and Tech Practitioners, In-House Counsel. 
Visit\u00a0the <a style=\"color: #0000ff;\" href=\"http:\/\/www.lawjournalnewsletters.com\/cybersecurity-law-and-strategy\/\" target=\"_blank\" rel=\"noopener nofollow\">website<\/a> to learn more.<\/strong><\/em><\/span><\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In 2024, a year proclaimed as the &#8220;Year of the Election,&#8221; voters in countries representing over half the world&#8217;s population headed to the polls. This massive electoral wave [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2927,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[3747,3746,3745,2740],"class_list":["post-2926","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-lawyers","tag-businesses","tag-lessons","tag-realities","tag-risks"],"_links":{"self":[{"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/posts\/2926","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/comments?post=2926"}],"version-history":[{"count":0,"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/posts\/2926\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/media\/2927"}],"wp:attachment":[{"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/media?parent=2926"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/categories?post=2926"},{"taxonomy":"post_tag","embeddable":true,"hr
ef":"https:\/\/usatrustedlawyers.com\/blog\/wp-json\/wp\/v2\/tags?post=2926"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}