Mal McCallion

Zoom loop

Updated: Dec 11, 2023


For someone who literally has "Your happiness is my happiness" as the headline in his LinkedIn profile, Zoom CEO Eric Yuan must have been feeling very sad indeed this week.


Many of his 800m users certainly were. They had discovered, via cunning privacy sleuths, that Zoom had updated its Terms & Conditions (T&Cs) back in March. These changes explicitly allowed it to harvest users' data (read: transcripts of meetings, attendee information, company records, videos, everything) to train AI.


Pretty big move.


But then came The Worming.


First, an out-of-the-box PR play, doubling down on the new T&Cs. "Nothing to see here," they (might as well have) intoned, "You gave consent by joining one of our meetings, remember?"


That didn't work. Voices grew louder, noting that many mental and physical health consultations are forced to happen over Zoom. Vulnerable people's fragmenting lives being harvested and feasted upon by voracious Large Language Models (LLMs).


It's just not a great look.


With the share price being hammered downwards by nearly 10%, Yuan's gang finally got around to the reverse-ferret. Damned systems (not people)! Always messing themselves up. But sorry-if-you-were-offended, shucks, even big businesses make mistakes every once in a while (or twice, or three times, etc).


This won't be the last time that an organisation tangles itself up in PR tripwires over its AI policies. Right now, in an online meeting which is probably not hosted by Zoom, besuited legal eagles are trying to work out whether they ought to ride out the fact that they've already included these clauses in their T&Cs - or whether they ought to soften them before anyone really notices.


Whilst this is a short-term victory for privacy campaigners, the fact that it's taken five months to come to light shows the David v Goliath fight that's happening here. I am not a doomster either - I genuinely believe that AI is going to solve some huge problems for humanity - but it's all about the ethical transparency, people.


Are we all really going to have to read through never-ending T&Cs before we get to use our daily apps? Well, here's an idea. Why not use AI to spot AI-related data changes in our favourite platforms' terms?


My recommendation - get Bing Chat to read through them and highlight anywhere that AI is being used on your data. You may decide to use the platform anyway - but at least you know what's happening with all your highly personal videos and texts. And for more sensitive chats (video and otherwise) in future, you may choose a medium that's committed to ensuring it can't use your data in such ways.
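If you'd rather not paste an entire T&C document into a chatbot, here's a minimal, low-tech sketch of the same idea: a short script that flags any clause mentioning AI or model training, so you only have to read (or ask a chatbot about) those paragraphs. It's an illustration only - the filename and keyword list below are my own assumptions, not anything specific to Zoom or any other platform.

# Illustrative sketch only: flag T&C paragraphs that mention AI or model training.
# The filename and keyword list are assumptions for the example, not a real product.

import re

AI_KEYWORDS = [
    "artificial intelligence", "machine learning", "AI",
    "model training", "train models", "service generated data",
]

def flag_ai_clauses(text: str) -> list[str]:
    """Return each paragraph of the T&Cs that mentions an AI-related term."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    flagged = []
    for para in paragraphs:
        # Word-boundary, case-insensitive match so "AI" doesn't fire on "email" etc.
        if any(re.search(rf"\b{re.escape(kw)}\b", para, re.IGNORECASE) for kw in AI_KEYWORDS):
            flagged.append(para)
    return flagged

if __name__ == "__main__":
    # Hypothetical file: a saved copy of a platform's terms and conditions.
    with open("terms_and_conditions.txt", encoding="utf-8") as f:
        for clause in flag_ai_clauses(f.read()):
            print("FLAGGED:", clause[:200], "...\n")

A chatbot will do a far better job of explaining what those flagged clauses actually mean - this just narrows down what you need to ask it about.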


I wonder whether an increase in the happiness of people using non-Zoom services will truly add to the happiness of Zoom's CEO?



Made with TRUST_AI - see the Charter: https://www.modelprop.co.uk/trust-ai
