Zoom’s Data Usage Sparks EU Legal Questions over AI Training

Videoconferencing platform Zoom faces potential legal challenges in Europe over using customer data for training artificial intelligence models.

Three years after settling with the Federal Trade Commission (FTC) in the US for deceptive marketing related to security claims, Zoom is embroiled in another controversy.

A clause added to its terms and conditions in March 2023 came under scrutiny, as it appeared to allow Zoom to utilize customer data for training AI models without an opt-out option.

Although Zoom claimed that the “no opt-out” clause applied only to “service generated data,” the public reaction was one of outrage.

Critics argued that Zoom’s customers shouldn’t have their interactions repurposed to train AI models that could make their jobs redundant in the future.

The controversy centers on clauses 10.2 through 10.4 of Zoom’s terms and conditions, which raised concerns about the consent required to process customer content for AI model training.

Zoom’s Controversial Response

Zoom’s response to the situation, an attempt to clarify its position, was met with further criticism. The company’s communication did not provide clear answers and raised suspicions that Zoom might be hiding something.

The main issue sparking controversy involves important privacy laws in the European Union (EU). They include the General Data Protection Regulation (GDPR) and the ePrivacy Directive.

Experts argue that Zoom may need to adhere to EU data protection laws.

These laws mandate that personal data processing requires clear and explicit user consent. The ePrivacy Directive also ensures that interception or surveillance of communications is prohibited unless user consent is given.

Zoom seems to rely on a US approach that doesn’t distinguish between personal and non-personal data. Zoom’s presentation of consent options, like pre-checked boxes, and its bundling of different data processing purposes may not align with EU standards. Moreover, Zoom’s claim that metadata can be used without consent contradicts EU law.

Performance of a Contract

Despite claims that Zoom relies on customer consent as a legal basis for AI data mining, legal experts suggest that the company is actually relying on the “performance of a contract”. This legal basis might not cover non-essential processing like AI training.

With no apparent lead supervisory authority in the EU and the potential for multiple data protection authorities (DPAs) to investigate, Zoom faces increased regulatory risk.

This situation parallels the recent case of OpenAI’s chatbot service, ChatGPT, which had to switch to the legal basis of “legitimate interests” to process data for AI model training.

Zoom competes in a rapidly changing market, striving to match AI giants like OpenAI. However, its controversial data usage practices might result in increased regulatory scrutiny.

The situation emphasizes the importance of aligning data usage practices with EU privacy laws. It also highlights the necessity of giving users genuine choices regarding consent for data processing.

In a nutshell, Zoom’s legal tangle over its data usage for training AI models highlights the complexities of navigating privacy laws across different regions. This situation underscores the need to act with transparency and compliance while respecting user rights.
