Facebook manages to achieve the near impossible. It often appears technically and politically incompetent, yet remains highly profitable.

Its ability to track, monitor and profile us, our friends, families and interests is well understood. Leading privacy campaigners have long warned us of its underhand behaviour and have documented its failings over many years. More recently there has also been increased public awareness of social media’s negative impact on political events, even if much of that influence remains opaque and unaccountable.

All of which adds up to an unsavoury mix – manipulating our timelines, pumping out fake news, and exposing us to undesirable political propaganda that chips away at democracy.

However, Facebook is far from unique. Similar concerns apply to many of the other big technology platforms too. So here’s a brief flavour of some of the issues currently troubling me – from the trivial to the far more significant.

Usability

The biggest usability bugbear of many regular Facebook users is the lack of a chronological timeline. Instead of seeing all your friends’ posts in the order they were posted, Facebook decides what to show you, what to hide, and which posts to prioritise over others. The result? You miss lots of interesting posts from your friends. When you do eventually unearth a friend’s post and click on it to read it in full, on returning to your timeline you find it’s been randomly shuffled yet again courtesy of the mysterious Facebook croupier.
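To make the complaint concrete, here’s a minimal sketch of the difference between the chronological feed users ask for and an engagement-ranked feed of the kind Facebook appears to serve. The scoring here is entirely hypothetical – Facebook’s real ranking model is proprietary – but it illustrates why a ranked feed can bury posts a user would actually want to see.

```python
# Illustrative only: contrast a chronological feed with an opaque,
# engagement-ranked one. The 'predicted_engagement' score is a made-up
# stand-in for whatever a real ranking model might compute.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_engagement: float  # hypothetical score; not a real Facebook metric


def chronological_feed(posts: list[Post]) -> list[Post]:
    """What many users say they want: newest first, nothing hidden or reordered."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)


def ranked_feed(posts: list[Post]) -> list[Post]:
    """What an engagement-optimised feed does: reorder (and effectively bury)
    posts according to a score the user never sees."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```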

If it cared about its users, Facebook would let us choose how we want to see posts, not insert itself in the middle as an amateur and irrational censor. It’s hard to see how Facebook can argue it’s not a publisher when it performs precisely that role, deciding what to publish and what to hide, constantly manipulating what we get to see in order to maximise its revenues.

On the face of it, Facebook ignoring its users doesn’t make sense. And if this bizarre lack of usability is indicative of what Facebook claims to be leading-edge artificial intelligence (AI), it just exposes the reality of the poor state of generalised AI beneath the hype. However, I say ‘on the face of it’ deliberately, because the other option is that annoying users is an intentional Facebook strategy. It helps keep users online longer than they would otherwise be if they could easily see everything in one place. After all, the more time users spend having to click and delve into each of their friends’ timelines to find the posts that Facebook hides, the more chance there is to spam them with ads. And – probably more importantly – the more chance there is to learn and monetise yet more information about us, including which friends, Facebook groups and interests we interact with most.

Technology

Is Facebook just inept at software development? Or is its tantrum-inducing ‘hide and seek’ software part of a strategy to ensure users spend more time on Facebook than they want to? 

On the face of it, the software is surprisingly bug-ridden and poor – features that don’t work and ‘artificial intelligence’ (sic) that makes some very weird choices about what we get to see and what we don’t.

One obvious example of irritating software design is the nearly-impossible-to-click-successfully widgets intended to let you remove annoying adverts, memories or friend suggestions. Even if you manage to click on the minuscule widget rather than the advert itself, more often than not it doesn’t work, throwing an error along the lines of ‘I’m sorry Dave, I’m afraid I can’t do that’ and preventing you from removing it.

But is this poor software – or deliberate design? Given the amount of money presumably spent by Facebook on software development, and the talent pool its healthy finances allow it to call upon, it must be deliberate. No-one would ship such flakey, irritating software by accident.

Other examples of clunky software include the random suggestions of ‘friends you may know’. These are not so much ‘intelligent’ suggestions derived from advanced data analytics as the product of simple Boolean logic along the lines of ‘if your friend knows this person, maybe you know them too’. Er, yeah, right. Programming 101 stuff. For all the talk of Facebook using an almost mystical social graph, observed behaviour suggests a much more rudimentary set of algorithms working mechanically in the background.
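For what it’s worth, the naive friend-of-a-friend logic described above can be written in a handful of lines. This is purely an illustration of that simple approach – it is not Facebook’s actual code, which isn’t public:

```python
# A naive 'people you may know' sketch of the friend-of-a-friend logic the
# post suspects is at work. Purely illustrative.
from collections import Counter


def suggest_friends(user: str, friends: dict[str, set[str]], limit: int = 5) -> list[str]:
    """Suggest the people who share the most mutual friends with `user`."""
    mutual_counts: Counter[str] = Counter()
    for friend in friends.get(user, set()):
        for candidate in friends.get(friend, set()):
            # Skip the user themselves and people they already know.
            if candidate != user and candidate not in friends.get(user, set()):
                mutual_counts[candidate] += 1
    return [person for person, _ in mutual_counts.most_common(limit)]


friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave", "erin"},
}
print(suggest_friends("alice", friends))  # ['dave', 'erin']
```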

The growing number of hacks and misuses that have been publicly emerging suggest that Facebook’s software engineering has serious quality problems. Letting a single organisation hold, manipulate and share such a rich and extensive set of personal data was never going to be a good idea or end well.

The lack of any transparency into the black-box operations of software developed and deployed by companies such as Facebook means we never know why they make the often bizarre suggestions they do, why they target and manipulate us the way they do, or what they do with all the data – biographic, biometric, behavioural – they gather. It raises many concerns about users’ privacy and security.

Privacy and security

Facebook has experienced a wide range of privacy issues. Users have little control or consent over many of the ways their data is accessed and used – despite what Mark Zuckerberg, Facebook’s chief executive, may have assured US lawmakers earlier this year when he claimed that users “have complete control” over everything they share on Facebook. The New York Times for example reports that:

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews … Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants. New York Times, December 18 2018.

Facebook constantly nags users to sync their contacts to see who else they might know on its platform. Anyone who falls for this trick and opens up their contact book breaches all of their friends’ privacy, exposing their email addresses, phone numbers and presumably anything else Facebook might happen to hoover up and choose to exploit. Facebook presumably mines that personal data to better understand who knows whom, mapping networks of friends and family (including those not even on Facebook) in order to improve its revenue opportunities. And all of this happens without friends or family members having any say in this abuse of their personal data. So much for ‘complete control’ over our data and the idea of ‘consent’.

Another privacy concern is Facebook’s constant recommendation of other users you may know, most of whom usually turn out to be complete strangers. It seems to insert these suggestions into users’ timelines on the spurious basis that you might know a friend of a friend. And what happens when that randomly selected victim doesn’t have their privacy locked down (which often seems to be the case)? There’s something particularly creepy about Facebook inviting us to peer into a complete stranger’s timeline, an unwanted and unseen voyeur into all of their personal photos and posts. That’s when it isn’t already busy leaking some of this personal information anyway.

Anyone who uses Facebook’s login ID and password to access other online services unwittingly enables Facebook to cultivate even more information about them. Every time you use your Facebook login ID to access other services, Facebook gathers additional useful data about your interests, the sites you visit and how frequently you use them.
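Conceptually, the reason a ‘Log in with Facebook’ button hands Facebook this visibility is simple: every login flow routes through the identity provider, which can record which service you used and when. The sketch below is a generic illustration of that pattern, not Facebook’s actual implementation – the function and field names are invented for the example:

```python
# Generic single sign-on sketch, not Facebook's implementation: the identity
# provider handles every 'log in with X' request, so it can record which
# third-party service you used and when.
from datetime import datetime, timezone

login_events: list[dict] = []   # everything the identity provider accumulates


def authorise(user_id: str, relying_party: str) -> str:
    """Simulate the provider issuing a login token to a third-party site."""
    login_events.append({
        "user": user_id,
        "service": relying_party,            # e.g. a music app, a news site
        "at": datetime.now(timezone.utc),    # when you logged in
    })
    return f"token-for-{relying_party}"      # stand-in for a real OAuth token


authorise("user123", "music-streaming-app")
authorise("user123", "news-site")
# The provider now knows which services user123 uses, and how often.
```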

Using the same digital login everywhere is also poor security, as the recent breach of Facebook’s login system demonstrated: a single breach can compromise every online service that relies on that login, which is exactly why best practice warns against reusing the same username and password everywhere. Don’t do it. Using Facebook as your single digital identity also makes it hard to leave Facebook. Once you’ve built a dependency across multiple online services that all rely on your Facebook login, it’s going to be difficult to migrate to something else.

Another bad privacy habit? Users tagging their friends in posts and photos. Doing so not only breaches friends’ privacy without their consent but also helps Facebook build up ever more comprehensive facial recognition data and a more detailed understanding of the relationships between people – who meets whom, who is connected to whom, and so on.

Facebook’s reach shouldn’t be underestimated, particularly given that it continues to track you even when you’re not logged into its services, harvesting data about users wherever they go online. The Facebook pixel enables it to quietly track your online behaviour and interests in order to spam you with more ‘appropriate’ (i.e. profitable) ads.
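For readers wondering how a tracking pixel can follow them around the web, here’s a rough, generic sketch of the mechanism (not Facebook’s actual code): third-party sites embed a tiny image or script served from the tracker’s domain, and every page view triggers a request that carries the visited page and the tracker’s cookie – enough to stitch together a cross-site browsing profile.

```python
# Minimal, generic tracking-pixel server sketch (illustrative only).
# Pages on other sites embed something like:
#   <img src="http://localhost:8000/pixel.gif?url=PAGE_URL" width="1" height="1">
# Each page view then sends the tracker the visited URL plus any cookie it set.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# A standard 43-byte transparent 1x1 GIF to return to the browser.
PIXEL = bytes.fromhex(
    "47494638396101000100800000000000ffffff21f90401000000002c00000000010001000002024401003b"
)


class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        # The tracker logs which page was visited and which user (cookie) visited it.
        print("page:", params.get("url"), "cookie:", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```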

The way we behave on Facebook often doesn’t do much to help us either. Take part in one of the frequent memes that sweep through Facebook – posting your favourite books, films, etc. while tagging a series of friends to take up the same challenge – and you help Facebook learn more about your interests, relationships and networks. And never do the quizzes, most of which seem to involve tricking people into revealing the passwords or ‘shared secrets’ used to secure their online accounts – all those questions about your favourite pet, first school, favourite teacher or colour and so on are a hacker’s goldmine.

Politics

We know remarkably little about the inferences Facebook makes based on the things we look at and interact with online. After all, how does it distinguish between someone looking at extreme right- or left-wing posts to educate themselves about the poisonous ‘thinking’ and content those groups are spewing out, and someone who looks at them because they support them? We risk being tagged and associated with interests and assumptions that are entirely incorrect. It’s impossible for any algorithm, regardless of how ‘intelligent’ someone may falsely claim it to be, to infer why we look at some of the content we explore when we’re online.

Being aware of this intrusive, tireless surveillance of our online behaviour also risks increasing self-censorship and hastening a decline in the robust debates that are an essential characteristic of a free and open society. We’re likely to stop looking at more diverse content, or to stop posting more challenging information (such as material that counters fake news), if we fear such behaviour could invite vicious online abuse in retaliation, or be used to misrepresent or discredit us in some way, personally or professionally. This in turn makes it more likely that we end up living in ever more insular and narrow group-think bubbles of like-minded people, rather than encountering, challenging and exploring alternative political, social and economic perspectives as a natural and essential part of a healthy, robust and sustainable democracy.

The old Onion joke from 2011 – that the CIA’s invention of Facebook has saved the government millions of dollars – has always been a bit too near the mark to be funny. The Onion’s video includes the following spoof ‘Congressional testimony’:

After years of secretly monitoring the public, we were astounded so many people would willingly publicise where they live, their religious and political views, an alphabetised list of all their friends, personal email addresses, phone numbers, hundreds of photos of themselves. And even status updates about what they were doing moment to moment. It is truly a dream come true for the CIA.

Although perhaps it’s less the CIA, and more Putin and the Kremlin, that appear to be exploiting Facebook and other social media platforms these days. Facebook profiles were harvested by Cambridge Analytica to enable interference in political elections. Only recently the Washington Post also reported:

… how Russian teams ranged nimbly across social media platforms in a shrewd online influence operation aimed squarely at American voters. The effort started earlier than commonly understood and lasted longer while relying on the strengths of different sites to manipulate distinct slices of the electorate, according to a pair of comprehensive new reports prepared for the Senate Intelligence Committee and released Monday.

Russian disinformation teams targeted Robert S. Mueller III, says report prepared for Senate. Washington Post, December 17 2018.

In the days when television and radio broadcasts and newspapers were the main media for reaching voters, we all saw the same adverts. We could all read and access the same articles and messages being broadcast. Today, that is no longer true. Advertising has become highly personalised and micro-targeted. We have no idea what ads someone else may be seeing when they look at the very same online sites and content that we also view. They may be spun a very different perspective of the world and of what is happening than the one we receive. But we won’t even know about it, or have the opportunity to take on and challenge the potentially biased and manipulative views they may be receiving.

It’s impossible for us to fully understand the impact that this constant drip of highly targeted and personalised advertising – and outright propaganda – may be having on our politics and society. Based on the in-depth understanding that Facebook and other social networks have of our interests, behaviour and prejudices, advertisers (including political players and hostile foreign states) can play highly manipulative and divisive psychological games of the kind that make the tactics Vance Packard wrote about in the late 1950s in The Hidden Persuaders look positively amateurish.

Social media platforms have become significant, if unwitting, players in highly effective disinformation and disruption campaigns in Western democracies, and in the spread of more insidious hate speech both at home and further afield. All the personal data that we ‘volunteer’ is an essential ingredient in this destructive process.

Our use of these platforms is far from being ‘free’. It has potentially significant and far-reaching consequences.

Media

Facebook has sucked significant revenue out of our mainstream media, leaving them under-resourced and often battling to survive. This is doubly ironic given that this is the very time when we need trusted, well-resourced and professional media – the type that’s able to invest resources and time in investigative journalism, helping to research and expose the fake stories and distorted news endemic on a whole range of platforms, from Facebook to YouTube.

The sources we could once have turned to as trusted and authoritative are themselves disappearing or under threat in part because advertising platforms like Facebook have redirected their revenues. This is a vicious circle we need to break by actively supporting credible media outlets such as The Washington Post, The Guardian, The Times, Private Eye, The Atlantic and others who still invest in the painstaking work of proper journalism and investigation – whilst we also ensure regulators and governments crack down on social media platforms’ abuses of truth, trust and personal data.

Taxes

Social media companies are not only depriving much of our traditional media of its revenue, they’re also taking vital money from our democracies. They use a whole range of exploitative financial techniques to avoid paying taxes at the point of sale, where those taxes should be due. Whilst this is morally reprehensible, it’s not illegal.

Ultimately it’s our governments that must shoulder responsibility for this. As in so many areas, they have been slow to understand the digital age – both its upsides, and its downsides. It’s long been an option to impose a tax at the point where a sales transaction happens rather than where a company chooses to record the sale. The social, economic and political costs of such lost revenue are significant, but at least there are apparently belated efforts under way both in the UK and more broadly to remedy the historic failures of our governments.

Once it is being properly collected, some of this tax revenue must be used to improve the resources available to regulate these platform players – ensuring transparency and countering the growing threat they pose to democracy.

So what now?

Well, the most obvious thing would be to ditch Facebook and other social media. Easier said than done of course. If you’re not ready to take that step yet, digital rights campaigner and former Chief Executive of Big Brother Watch Renate Samson provides some practical steps to better protect your online privacy and security.

I wrote back in 2015 about the need for platform co-operatives as an alternative to the big commercial platform players, an idea that subsequently made its way into Labour Party proposals. It remains an aspiration, but who knows? Maybe one day it will happen. In the meantime, there are alternatives such as Mastodon (a decentralised, open-source social network) to explore.

While watchdogs may be closing in on Silicon Valley giants as monopoly concerns mount, it’s all about a decade too late. For far too long there has been poor transparency and inadequate external, independent oversight of these new tech platforms. It’s encouraging to see the likes of the Digital, Culture, Media and Sport (DCMS) select committee taking much firmer action, including calls for tighter regulation. The penalties now available under the General Data Protection Regulation (GDPR) may also start to have a beneficial impact in driving improvements, with Facebook potentially facing a billion-dollar fine for data breaches.

This blog is only a brief personal snapshot of just some of the issues that need fixing. We need far more democratic investigation, transparency and accountability of these platform players given the potentially significant impact they are having on shaping our lives and indeed our wider society. While it may well be true that Facebook has never deserved our trust, it’s also time our governments and regulators caught up.

And so too should we, as the users of these platforms. It’s our personal data that fuels and sustains them after all.

Footnote

For those of you thinking “Why pick on Facebook? Why not Twitter? Why not YouTube? Why not x, or y, or z?” – fair questions, in a ‘what-about-ery’ sort of way.

Many of the criticisms above apply equally to them all.

However, debating which platform or company to prioritise for criticism is a pointless distraction. It displaces the more important questions we should be asking: why haven’t governments and regulators responded to these many, many issues long before now? And how do we now ensure that effective accountability and transparency are put into place?
