I’ve complained about eroding online privacy a fair bit. But I was recently reminded that privacy is just one part of the bigger picture, thanks to this piece that asks legitimate questions about platforms’ responsibilities to keep users safe.

The article included the phrase “duty of care,” which really got me thinking. It’s such a communal – and, at this point, foreign – notion, conjuring up images of communities and caretakers, rather than the more realistic view of users of online platforms as resources to be strip-mined.

Some would argue that we should be responsible for keeping ourselves safe online. And yes, there is a baseline level of behaviour we should all make habitual. But humans aren’t computers running the same programs over and over. We can be messy, illogical, and downright dumb.

Plus, so much of what can violate our trust and safety online is far beyond our control, and far more often than not, that isn’t accidental. Arguments that we should protect ourselves online bear a nasty resemblance to sentiments dictating how women should conduct themselves to avoid sexual assault (and they sidestep the real issues in much the same way).

This idea ignores the realities of abuse of power and the responsibilities of abusers – individual or corporate. It puts the onus of self-protection on those victimized or at risk.

From the OneZero article referenced above, this section gives us an idea of the rules governing the big tech platforms (it’s U.S. law, which is what U.S.-based companies are supposed to answer to):

“The regulation at the center of [a decision finding company/app Grindr not responsible for the complainant’s safety] was Section 230 of the Communications Decency Act of 1996. Section 230 protects tech platforms by insulating them from the actions of their users. It’s why Twitter, YouTube, and other service providers can’t be held accountable for libel or malice spread by accounts they host. The legislation holds that such apps merely provide a platform, and that companies can’t then be held responsible for what their users do with it.”

Doesn’t really scream “duty of care,” does it?

We’ve gotten to the point where we don’t even really expect tech companies to take duty of care seriously. We’ve moved on to requesting that governments enact regulations to require it. Wouldn’t be hard to argue that governments aren’t the ideal people for the job, though, either ...

It can sometimes seem like we, as users, are our own worst enemies when it comes to our online security. I chalk that up mainly to lack of knowledge and a dangerous intolerance for inconvenience.

For the former, basically there’s just too much to know, and we’re not going to learn or remember it if we don’t need it often. Consider: once you’ve set up a social media account, email hosting, or your own website, how often do you go back and check the settings?

For the latter, we’ve been very thoroughly trained for it. Whip out your phone to order anything, and get it delivered by a drone because 24-hour delivery is too slow. How many people would eschew two-factor authentication for logins because it slows down access?

The big platforms could have been built with frictionless privacy and security. But that was never central to their design, and good luck with that retrofit. And so the way we access these tools that shape our lives remains archaic and easily circumvented or exploited.

Vulnerable groups need the most protections to live their lives and do their work online. But rarely do they get that from the mainstream platform providers. More secure tools tend to come from private, and less easily accessed, sources.

Of course, there are ways of enforcing more duty of care toward users that would (ironically?) risk making things worse. From the OneZero article:

“Some technology experts and digital rights activists argue that holding companies accountable for users’ posts could spark an increase in digital surveillance or automated censorship filters, because companies would be compelled to monitor users’ content more heavily. This kind of automated content moderation has been alleged to disproportionately target LGBT+ users, such as the apparent censorship of queer YouTube creators, and the deletion of trans Facebook users who choose to register under their chosen names.”

Given companies’ established record of failing to protect people, tracking us even more doesn’t seem like the best way to make us safer. It just makes the potential target bigger …

Large organizations, be they companies or governments, also don’t have a great track record of consulting (or at least really listening to) many of those most affected when efforts are made to tackle safety and security.

As an example, the implementation of the FOSTA-SESTA laws in the U.S., meant to fight human sex trafficking, has made many sex workers much less safe. And since those laws affect websites, the effects are felt globally, well beyond U.S. borders. Same goes for the big tech platforms, which are U.S.-based but have international user bases.

Opponents note that, among other things, the FOSTA-SESTA laws could be used to increase censorship. Not to mention that the risks and liabilities these platforms face under these laws – and thus that users face – aren’t entirely clear, which could result in broad and/or arbitrary interpretations and enforcement.

Combine that with the aforementioned Section 230 of the Communications Decency Act absolving tech companies from responsibility for their users’ safety, and it doesn’t really paint a picture where anyone is looking out for us online. End-user licence agreements might as well say, “Good luck with that.”

Perhaps to work toward a tech sphere that takes duty of care seriously, we could borrow from social work, with the concept of harm reduction. This model looks at people and circumstances as they are now, rather than idealized scenarios of where we want them to be. How are people living, what are they doing, what are their immediate needs, etc.

It’s about keeping people safe today, and working towards making things better and easier to maintain over time, at whatever pace or in whatever ways are realistic.

This wouldn’t be an overnight revolution in tech, but it would be a significant start to a shift in how the platforms are forced to consider their customers. It would require accepting that not everyone will use things as they’re told to, or with equal care and vigilance for their privacy and security at all times. But even so, that doesn’t mean we and our data don’t deserve to be safe.

It would require commitments to working with how we actually live and what we want to do online. It wouldn’t require users to jump through hoops trying to retrofit their own security after the latest breach. Or require those already put through hell by malicious use of these platforms to try to force corporate decency upon tech companies through legal means.

If corporate personhood exists as a legal notion, then so too should corporate humanity.

M-Theory is an opinion column by Melanie Baker. Opinions expressed are those of the author and do not necessarily reflect the views of Communitech. Melle can be reached on Twitter at @melle or by email at me@melle.ca.