Datafication is about the way we use data as a resource: data becomes the object of capitalisation, both politically and for profit.
In the transition to future online social interactions in Web 3.0, regulation of datafication is a central challenge. Regulation must balance efficiency, security, protection, innovation, economic growth, ethics and freedom of expression.
The article uncovers some critical markers in this balancing act.
PREDICTING WEB 3.0
Recent changes in laws of the internet show that the era of self-regulation for tech giants and social media companies is coming to an end – possibly forecasting the terminal trajectory of uncensored Web 2.0 social media platforms as we know them.
Web 2.0 is mostly characterised by providing us with a variety of online platforms offering a wealth of possibilities for two-way, many-to-many, public/quasi-public interaction, progressing from the one-way communication model of Web 1.0.
It is precisely this user interactivity that puts the ‘social’ into ‘social media’. At the same time, the unrestricted user interaction of Web 2.0 accentuates the dysfunctional nature of the online social environment. It is also the obvious reason for moving on to Web 3.0.
Several sources agree that the introduction of Web 3.0 signals a change in how websites are created and, crucially, how people interact with them. Web 3.0 is the new ‘buzz’ in advertising, content management and social media, forecasting the emergence of new business models and technologies. IoT, 5G, AI and SEO-Semantics are some of the highlighted concepts characterising the future buzz; however, predicting the exact contours of Web 3.0 is anyone’s guess. The basic ‘What’, ‘When’, ‘Where’ and ‘Why’ still remain unanswered.
SOMETHING’S GOTTA GIVE
Yet one thing is certain. Something’s gotta give, particularly regarding how future regulatory trade-offs are balanced between harm prevention and the preservation of fundamental rights.
Introducing Web 2.0 without a valid regulatory architecture has proven to be a blunder, opening a number of cyber-gaps that evidently rock the boat. We struggle to define who is steering, who is rowing, and where responsibility for delivering security in online social configurations is anchored. Making the same mistake upon entering the era of Web 3.0 will capsize the boat for sure.
While political visions for exponential digital growth have focused on either technology or people, governance in Web 3.0 must deal with technology and people.
The regulatory architecture that must emerge with Web 3.0 – imperatively responding to the tensions and dilemmas of Web 2.0 – will have consequences for the character of social controls and modes of policing cybercrime over many years to come.
Judging by the emerging Web 3.0 technologies and practices, the future internet must balance protecting society from harm with supporting innovation, the digital economy and freedom of speech.
The following seeks to map some critical milestones in this balancing act.
WEB 2.0 – FAILURE TO REGULATE
Today’s online social platforms and practices have revealed a range of harm-related issues with an increasingly negative impact.
Some issues are defined in law and subject to prosecution. Such acts include spreading terrorist content, child sex abuse, revenge porn, hate crimes, harassment, stalking, and the sale of illegal goods. But social media are also rife with harmful behaviour that has a less obvious legal definition. Such acts include cyber-bullying, trolling, distribution of fake news, disinformation, and material that advocates self-harm or suicide.
Together, this plethora of deviance and crime impacts individual users, children as well as adults, and has widespread harmful consequences for social groups, minorities, business, national security and political stability.
The perception of sequential failures by both industry and states to effectively address online crime, harassment, electoral manipulation by foreign agents, and misuse of personal data promotes a discursive shift.
Interactions between humans and digital technology influence our cognitive biases and disrupt our ability to make rational choices.
The failures of the original model, based upon self-regulation and voluntary content control by the media companies themselves, call into question the value of the mechanisms through which social media are regulated and the way harmful content and user behaviour are policed.
Taming social media is the subject of repeated calls to action, demanding a standardised approach across all platforms and tech-pushers to prevent providers from making so many important decisions on their own.
Time is up for providers of Web 2.0 online social networks. They have failed to police themselves, and we have all paid the price for the omnipresent failure to regulate. Voluntary actions from the industry to tackle online victimisation from harm, deviance and crime have been too little, too late.
Justified by the necessity of managing increasingly alarming levels of victimisation, the rebellious, unruly character of contemporary online social configurations is at the centre of a clash between two camps: rights to freedom of speech and demands for social controls to mitigate and reduce deviance and crime on social media.
At the core of the debate is the essential obligation to strike an appropriate balance between delivery of security to keep users safe and at the same time preserving the open, free-spirited nature of the internet.
This ambiguous dispute presents daunting political and practical challenges: how to effectively monitor and manage unprecedented flows of distorted mass communication, unregulated uploads of harmful content, and predatory social exploits in real time on a global scale.
Recent trends suggest that balancing freedom of speech and social controls becomes muddled by national security concerns, encouraging securitization of the internet as a ‘critical’ function of society.
Entering into Web 3.0 includes a latent requirement for taming the ‘Wild West Web’.
SECURITIZATION OF THE WILD WEST WEB
Cybersecurity has attained societal salience, increasingly entering political agendas and rapidly saturating national security concerns. The initial successful dramatizations of national security threats against critical infrastructures have firmly placed the concept of cybersecurity within the military complex.
In a Danish strategic perspective, this observation is supported by the fact that responsibility for cybersecurity is primarily rooted in the Defence Intelligence Service and the Center for Cyber Security.
The development of internet regulation has featured an “arms-length” relationship between state agencies and other actors that are involved in managing and “policing” online activity on a daily basis.
However, recent controversies have caught the attention of policymakers. Manipulation of social media is highlighted by issues ranging from the misuse of users’ Facebook data by the political consulting firm Cambridge Analytica to the alleged online manipulation of the 2016 US presidential election by Russian hackers and ‘trolls’.
Accordingly, national security concerns have developed to include non-military spheres of the digital domain, particularly towards dilemmas and tensions caused by disruptive digital influences to the core processes of democracy that come embedded within the open online social environment.
Data generated by users on social media has become a potent strategic resource.
In terms of policy, this expansion of national security concerns extends the perception of cybersecurity to include the concept of ‘functional security’, which strives to ensure the security of the critical functions of society. Subsequently, non-military functions of the digital domain become subject to securitization.
Securitization of digitization in general – and social media in particular – provides political motivation for taking initial steps in a bid to initiate more rigorous and mandatory state-sanctioned measures.
It is safe to say that social media has obtained societal salience.
Regulatory tools available to the nation-state comprise measures such as direct institutional oversight, imposing corporate accountability on social media providers, and the potential use of punitive and criminal-law sanctions against companies to compel remedial action.
In the case of state-sanctioned control of social media, regulation is mainly understood as detecting, identifying, blocking, removing and reporting offensive or illegal content. Such regulatory tools place a legal duty of care on providers of social networks to remove offensive or illegal content.
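The duty-of-care sequence described above can be illustrated as a simple decision pipeline. This is a minimal sketch only, not any provider's actual system: the category names and keyword markers are hypothetical placeholders standing in for the far more complex classifiers and legal definitions real platforms would apply.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()               # no intervention
    BLOCK = auto()               # harmful but not clearly illegal: block
    REMOVE_AND_REPORT = auto()   # illegal: remove and report onward

# Hypothetical keyword markers standing in for real content classifiers.
ILLEGAL_MARKERS = {"terrorist-content", "csam"}
HARMFUL_MARKERS = {"harassment", "self-harm"}

@dataclass
class ModerationLog:
    """Records content reported onward, as a duty of care might require."""
    reported: list = field(default_factory=list)

def moderate(post: str, log: ModerationLog) -> Action:
    """Detect -> identify -> block/remove -> report, per the duty-of-care model."""
    # Detect and identify: tag the post against both marker sets.
    tags = {w for w in post.lower().split() if w in ILLEGAL_MARKERS | HARMFUL_MARKERS}
    if tags & ILLEGAL_MARKERS:
        log.reported.append(post)        # report illegal content onward
        return Action.REMOVE_AND_REPORT  # and remove it
    if tags & HARMFUL_MARKERS:
        return Action.BLOCK              # block harmful-but-legal content
    return Action.ALLOW
```

The point of the sketch is the ordering: illegal content triggers both removal and reporting, while the legally murkier categories of harm are handled by blocking alone, which is exactly where the balancing act between safety and free expression plays out.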
Recent developments suggest that securitization of social media will inspire far more rigorous state-sanctioned regulatory regimes in the future; however, conflicting aims of making the internet both the safest place for online social interaction, and the best place to start a digital business, could be mutually incompatible.
To be continued….