There is much discussion currently about the responsibility of the major media platforms for the social acceptability of the content they allow to be posted. Social evils such as trolling, incitement to self-harm and suicide, pornographic imagery, and sophisticated political manipulation – as was alleged in the last US presidential election – have been propagated.
On another front, criticism has been levelled at social media for their sinister role in collecting personal data. This extends not merely to consumer preferences for product advertising purposes, but also to the capture and commercial exploitation of visual characteristics, such as facial expressions, that can reveal the likely intentions and emotional reactions of the unsuspecting user.
This aspect has been well explored in a recently published book, ‘The Age of Surveillance Capitalism’, by Shoshana Zuboff, Professor of Business Administration at Harvard Business School. Her lecture can be heard here.
In her book, the American author makes the surprising assertion that this data-collection activity is actually more profitable to the corporations concerned than the products or services they market. That these activities of the social media providers are largely untaxed is another, but not altogether unrelated, issue.
Responsibility for “moderation”
Until recently, social media platforms such as Facebook have maintained that they do not act as publishers of the material they carry and therefore have no responsibility, and no legal liability, to ‘moderate’ the content (i.e. to oversee it to ensure its social acceptability).
More recently, however, messages deemed offensive have been taken down and access denied – even to the sitting US president, Donald Trump, who has been effectively banned from Twitter. This censorship is practically tantamount to Twitter waiving its legal defence against claims of unlawful publication. The knives are out!
As often happens in a period of rapid social change, the advance of technology has outrun the development of ethical considerations and restraint. It can also be said that technology has outrun itself and is out of control. How can so many millions of tweets and comments be adequately scrutinised and sanitised daily?
Just another spam call!
This was on my mind this morning when the phone rang early, before breakfast. It was a spam call purportedly from Amazon: “Your account has been debited with £120; if you have not ordered recently, please press 1.” My 1471 was met by the predictable response: “We do not have the caller’s number.” Amongst the millions of hourly calls, does BT have the technology to block access to all unidentified callers who seek to use its communication system?
Now that the banks reimburse victims of fraud, should BT compensate its customers for the inconvenience, and often financial damage, suffered? Its standard response of offering a call-minder facility appears inadequate and unnecessarily cumbersome. More useful is BT’s caller ID facility, now available free of charge, which allows withheld-number calls to be recognised in advance.
It must be hoped that technology will catch up to circumvent anti-social behaviour. The law-makers can only wield the stick if democratically persuaded to do so; it is for the information providers and enablers themselves to develop the appropriate technology to cure or combat these social evils.
Some progress has indeed been made. Child safety charities have said that up to five million child abuse images a month are passed to law enforcement agencies by social media firms. However, it has been reported (Daily Telegraph January 12) that, according to the Five Eyes intelligence network, EU online privacy laws, which are still enshrined in UK legislation, could hamper or prevent police detection.
The work of detection is onerous and costly, and there is a strong case that social media firms should be tasked with bearing or sharing this cost. What must not be forgotten, though, is that freedom of speech is a right treasured by our western democracies. The European preference for regulation differs from that of the United States. In Germany, for example, the Network Enforcement Act requires social media firms to remove potentially illegal material within 24 hours of being notified, or face fines of up to 50 million euros (£44.9m). This might prove the better way forward.
Image Credits: Dee Alsey, Shutterstock, Kenneth Bird.