To twist a line from a classic film, the greatest trick that big tech platforms ever pulled was convincing people that their rights were a premium feature.
So much data about what we do or buy or where we go online has been recorded and used for so long that it’s absolutely the norm, every second that you’re in front of a screen.
Most people have no idea to what degree their information and actions are not their own unless something truly freaky happens. And yet, a lot of people still find conspiracy theories more plausible than what Occam’s Razor would suggest. My phone is spying on me! …Or maybe our every move online is just already being tracked, analyzed, aggregated and more.
Actually, when I write it out, it does kinda sound like the same thing. Only difference is the initial mechanism being hardware vs. software.
Unlike in many other parts of the world, where your consent is required before companies can track you and “process” your data (collect, store, analyze, share, sell…), the U.S. has gone with data privacy laws where you opt out instead of opting in. There’s no comprehensive federal data privacy law there, just several state-level ones at this point.
Now, you may be thinking, “Madame, I am a Canadian and these American shenanigans do not affect me.” Yeah, well, I’d be willing to bet you interact with American companies many times during the day. And since U.S. privacy laws are state-level, compliance only applies to residents of those states. Sooo
The Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s data privacy law, is more than 20 years old. An attempt to update it, Bill C-11, died with the last federal election. Now they’re wrangling over Bill C-27, its successor. That’s been going on for the better part of a year.
Oh, and there are provincial and even municipal laws that go into the privacy mix here, too. Quebec has some interesting ones.
The problem with patchworks, particularly legal ones, is that there tend to be more holes – loopholes if you will – that can be manipulated. For example…
In some states, organizations can collect, process or sell your data unless you tell them not to. Or they can collect it, share, profile you with it, etc. at will, but you can tell them they’re not allowed to sell it. In some states you can tell them you don’t want your data used for profiling. Or targeted advertising. Or “automated decision-making,” which will increasingly include the use of AI or machine learning.
Sometimes data controllers or processors only need your consent if the data they’re collecting is “sensitive”: information about your health, political affiliations, racial background, sexual orientation or other things that could be used to harm you if it got into the wrong hands. You know, like this. Or this. Sometimes they don’t even need your consent to collect that, but they have to notify you and enable you to opt out.
Here’s the thing, though: That’s what some laws say. How the real world works (still) isn’t so cut-and-dried or compliant. Yet, anyway. Did I mention that the tech companies these laws are most looking to regulate are very involved in the legislative process around data privacy? Hell, they’ve offered to help write the laws.
Welcome, Mr. Fox. We think you’ll make a fine manager for our hen house.
We are increasingly inseparable from our phones, and apps remain particularly eyebrow-raising for their lack of privacy compliance – even in the European Union, where they’ve been cracking down on this sort of thing for years already. We’re not just talking TikTok, which a bunch of governments, including Canada’s, have recently banned from government-issued devices.
The default way that the internet has developed is that we do not have a right to our own data. To privacy. Imagine if people acted in the real world like companies do online. Some dude could stand peering into your windows and watch anyone in your house, at any time. He could go through your trash. He could put a tracker on your car. He could do all these things unless you went and engaged with him and explicitly asked him not to.
That would be completely legal. Or, depending where you live, he could Peeping Tom all he wanted, whether you consented or not. He’d just have to ask for your consent before he sold the information he learned while spying on you. Sounds insane, doesn’t it?
Even when we do get asked for our consent, there are commonly used tricks and manipulations online to get us to do what companies want. They’re called nudges or dark patterns. They are increasingly frowned upon by data protection authorities and are illegal in some places, but don’t expect to see them eradicated any time soon.
Ever gone to a website where a giant “privacy wall” prevents you from accessing anything until you click a button agreeing to let them track you? Ever gone to a website that pops up a banner asking you to agree to be tracked, but there’s only an “accept” button? (The “deny” option may exist in that banner somewhere, but it’s likely on a separate layer, hard to find, and not in a big, clear button format.) Those are some examples of dark patterns. Even when companies are respecting our privacy… are they really?
Inextricably linked to our privacy is our security. Let’s face it: the less security we have, the less privacy we have. Just ask current and former employees of Indigo, whose personal information may have been acquired in a recent ransomware attack. Social platforms seem to be going down the road of making security a premium feature. And, by extension, our privacy, since if your account gets hacked, there goes your privacy…
Twitter has made SMS two-factor authentication (2FA) available only to paid accounts. Its documentation claims the change was because that form of 2FA has been abused by “bad actors.” Good guys pay CDN$105/year, I guess. There are still two other ways to use 2FA, but one isn’t commonly used, at least among less-technical people, and the other has worked inconsistently for some time. Is it likely that further privacy and security functions will become paid-only? It’s certainly possible.
Meta, parent company of Facebook and Instagram, is also apparently testing paid verification. CEO Mark Zuckerberg noted in the announcement, “This new feature is about increasing authenticity and security across our services.” Among other features, you get “extra impersonation protection against accounts claiming to be you.” But only for US$11.99/month on the web or US$14.99/month on mobile.
Let us not forget that these companies still make billions of dollars a month, primarily from advertising shown to users – us. And the platforms themselves are not the product. We are. We generate all the content. We are what draws our friends, family and followers there, providing more eyeballs for the platforms to show more ads.
We are expected to enable these platforms to make money with our data that they collect and “process,” with the ads they show us, and now by paying them for “premium” features to protect our privacy and security. For those familiar with the term “chutzpah,” it has two meanings, and this is a glaring example of both.
Sure, some will say, you always have the option not to play. Delete your social accounts and apps and go “touch grass.” And yes, the mainstreaming of the internet, and especially social platforms, did put a lot of people’s data at risk and nuke a lot of our former privacy. Back in the day the biggest threat to your privacy was if your phone was a party line.
But telling people not to play is victim-blaming. It’s normalizing giant companies’ rights to our lives, identities and information. Those platforms could just as easily have been built not to track us. Oh, by the way, you also get tracked when you leave a social platform or site and go somewhere else. Look up “third-party cookies.”
We have gotten so used to not paying for things online – well, we pay with data rather than money – that the idea of platforms that make money in other ways seems quite niche and downright quaint. Imagine if, when Twitter or Facebook launched, you’d had to agree to pay $5/month to use them. Would they have fizzled?
I don’t think there’s any question that the reason these platforms got as big and ingrained in our lives as they have is because the barrier to entry was so low. Just make an account and off you go! Don’t pay now OR later! Look, there are all your friends! Grow your business! For now, anyway… And maybe don’t read our Terms & Conditions too closely.
It’s a lot harder, more time-consuming, more resource-intensive and frustrating to fix something that was built wrong than to do it right the first time. Just ask anyone who’s tackled home renovations. While in a lot of ways that horse is long out of the barn – you’re never going to perfectly retrofit privacy online, no matter how much you legislate it – there is some hope.
New companies are starting all the time. While they might not have the gazillions in revenue or government clout or data-hoovering capabilities on the astronomical scale that tech giants do, they can do it right the first time. They can choose another revenue model. They can build “privacy first,” where that is a key consideration for everything from the ground up. They can build their business around data privacy compliance and, hell, just respect for their users, visitors or customers.
Would it be easy to make that the mainstream? Is it likely that it will happen any time soon? No. But if we are the product that these platforms have come to rely on so much, if their gazillions in revenue depend on us, then we have a lot more power than we realize to demand change. We just have to do something about it.
If you don’t believe that, then I guess that’s another successful trick they’ve pulled.
M-Theory is an opinion column by Melanie Baker. Opinions expressed are those of the author and do not necessarily reflect the views of Communitech. Melle can be reached on Twitter at @melle or by email at email@example.com.