Europe investigates Big Tech’s use of generative AI

European Union Initiates Probe into Big Tech’s Use of Artificial Intelligence and Deepfakes: Implications for the Upcoming EU Parliamentary Elections

The European Union (EU) has announced the launch of an in-depth investigation into the use of artificial intelligence (AI) and deepfakes by major tech companies, including Meta, Microsoft, Snap, TikTok, and X. The probe aims to examine how these tech giants plan to manage the risks associated with generative AI and prevent its misuse to spread false information or manipulate voters during the upcoming EU parliamentary elections.

The European Commission has expressed concern about the potential chaos and disruption that generative AI could cause in the run-up to the EU parliamentary elections. Online platforms are required to answer questions about their readiness and the measures they have taken to prevent AI tools from spreading election misinformation by April 5, 2024.

Regulators at the European Commission are particularly interested in how tech companies are addressing the risks of ‘hallucinations’, in which AI produces false information, as well as the spread of deepfakes and automated manipulation that can mislead voters. The investigation also covers generative AI’s impact on user privacy, intellectual property, civil rights, children’s safety, and mental health.

Tech companies have until April 26 to respond to the remaining concerns. The information they provide could shape the election security guidelines for tech platforms that the European Commission plans to finalize by March 27, 2024.

The investigation also builds on an ongoing probe into Elon Musk’s social media company X, which began in the opening days of the Israel-Hamas conflict last year. Among EU regulators’ grievances is the ability to manipulate the platform’s services through automated means, which includes generative AI. According to a commission official, the two investigations are linked.

In late February, X CEO Linda Yaccarino met with Thierry Breton, a top EU digital regulator, to discuss these concerns. The European Commission’s goal is not only to gain insight into how companies are approaching deepfakes, but also to put them on notice that AI-related failures could result in fines or other penalties under the Digital Services Act.

The EU’s investigation marks a significant step toward transparency and accountability in major tech companies’ use of AI and deepfakes. It underscores the importance of addressing the risks these technologies pose, especially during critical political events such as elections.