
‘Gas the Jews’, ‘Hitler Did no Wrong’ and ‘Pussy_Slayer’ are usernames in one of the most popular global computer gaming communities on the internet. Would you be concerned if your child were surfing cyberspace using such a derogatory alias?

Perhaps this is a rhetorical question; however, the internet is overloaded with easily accessible inflammatory, sexist or outright disturbing content.

It is safe to say that interactions between man and machine can bring out the worst in all of us. The following is an attempt to answer one simple question: Why?

Taking its point of departure in a broader criminological approach with embedded psychological and psychosocial aspects, the following aims at explaining the flux in paradigms that guide political and public attitudes towards the regulation, policing, and control of social media.

The piece concludes that we need a new interpretation of ‘cognitive security’, one that points towards protection against the exploitation of online psychosocial enablers that create cognitive bias in groups or entire populations.


On 23 March 2016, Microsoft released their experimental artificial intelligence bot, TAY, on Twitter. TAY was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. At launch TAY replied to other Twitter users. Microsoft shut down TAY after only 16 hours of operation. In that short time the AI bot released 96,000 tweets, some with very disturbing or inflammatory content. TAY mimicked the racist and sexually charged tweets of other users, because Microsoft had not given the bot an understanding of inappropriate behaviour. Critics branded TAY artificial intelligence at its worst. TAY may be just the beginning of a greater AI debacle, but it also reveals the anti-normative behaviour humans exercise in the cyber realm.


The internet has been available to the general public for a mere 20 years or so. Digital advances are undisputed. Furtherance and ubiquitous embracement of the technology on a global scale appear unstoppable, but we still need to address the mysteries of the ‘man vs. machine’ interface. Preferably in a manner that minimizes the opportunities for deviant behaviour and reduces exposure to risk emanating from the obviously disruptive criminogenic attributes embedded within the technology supporting online social interaction.

In the exponential trajectory towards singularity, we have become aware of accelerating shifts in paradigms, not only in technology, but increasingly in the way humans and technology interact. Technological innovations in the rebelliously unruly online social realm occur at such a rapid pace that the delivery of justice, policing, protection, security, privacy, accountability, transparency, and democratic control struggles to keep up.


The socio-technical model claims that a gap emerges in the mismatch between what technology is capable of and what social controls are supposed to do.

Thus, the term ‘cybergap’ refers to the virtual distance between social norms and technology. Cybergaps become evident in the attempt to advance controls that regulate the constant flux of challenges provoked by the turbulent process of technological innovation and the pervasive digitization of vital core functions of society – a process that affects the way humans interact and express themselves in the hyper-connected digitized reality of online social interaction.

This revelation is pushing digital technology from being a discipline almost entirely seated within computer science or engineering, into other academic fields concerned with the messy realities of human social life. The sociological approach to digitization reveals a number of shifts in paradigms that are essential to our understanding of society, democracy, and human online behaviour.

Responding to crime and deviance in online social configurations is not solved by introducing a technological quick-fix, but suggests that we need to inject some soul into the equation. Disruptive digital technology is a powerful game changer that comes without social norms, ethical principles or moral consciousness.

Criminological incursion into online social interactions requires the added advancement of sociological and psychological analysis to determine modes of governance, protection and regulation that are capable of addressing socio-technological challenges conceived in online human exchanges of information.


Admittedly, ‘deindividuation’ is not a buzzword that you hear every day in debates on cybersecurity, cybercrime or human-computer interaction in online social domains. The concept has not penetrated the technologically inspired debate on digital exchanges.

Notwithstanding the somewhat anecdotal definition of the term ‘Buzzword’, deindividuation could easily be introduced into the buzz surrounding the exploration of persistent criminogenic attributes and anti-normative psychosocial enablers that come with exchanges of information throughout the online domain – and particularly in relation to human-computer interaction on social media platforms.

Yet, deindividuation is probably the most common psychosocial feature fueling other buzzwords, such as ‘Flaming’, ‘Hate speech’, ‘Doxxing’, ‘Cyber-bullying’ and ‘Shitstorms’ – terms that denote various forms of abusive conduct on online social platforms.

Flaming is the online act of posting insults, often laced with profanity or other offensive language, on social networking sites. Many attribute the emergence of flaming to the anonymity that internet forums provide. The term has been almost entirely superseded by ‘trolling’.

Lack of social cues, less accountability than in face-to-face communications, and remote textual mediation are considered the main contributors to offensive behaviour on online social platforms. While offensive online behaviour may be common, expected or even applauded in some forums, it can have dramatic, adverse effects in others. Hateful or insulting behaviour can have a lasting impact on some internet communities, where division or even dissolution may occur.

Research into Computer Mediated Communication has spent considerable time and effort describing and predicting engagement in uncivil and offensive online communication. Specifically, the literature describes aggressive insulting behaviour, characterized as intimidating verbal outbursts: the uninhibited expression of hostility, insults, ridicule, and antagonistic comments directed towards a person or organization in the context of online social interactions.


However, neither online crime nor deviant sociopathic behaviour on social media is fully settled in criminological theories on human-computer interactions in the online social realm.

Yet, some types of offensive behaviour are defined in law and subject to prosecution. Such criminal offenses include spreading terrorist content, child sex abuse, revenge porn, hate crimes, harassment, stalking, and the sale of illegal goods. But social media are also rife with harmful behaviour that has a less obvious legal definition. Such acts include cyber-bullying, trolling, distribution of fake news, disinformation, and material that advocates self-harm or suicide.

Together, this plethora of deviance and crime impacts individual users – children as well as adults – and also has widespread harmful consequences for social groups, minorities, businesses, national security and democratic political stability.

Deliberate flaming, as opposed to flaming as a result of emotional outbursts, has been almost entirely superseded by the term ‘trolling’. However, the intentional, premeditated and targeted manipulation presumed in contemporary interpretations of ‘trolling’ falls outside the scope of the present discussion of deindividuation; further deliberations on trolling as a means of mass persuasion should be sought elsewhere.

In this context, the present piece focuses on the mechanics involved in situations where respectable and law-abiding individuals engage in anti-social behaviour when joining and acting within particular online configurations.


Offensive online social interaction comes in a wide array of ‘shades of grey’. Some criminologists argue that crime and deviance in online social configurations are not qualitatively different from crimes committed in meatspace. From this point of view, online criminal acts can be analyzed and explained by established theories of causality and motivation that explain why some people commit crime in meatspace. In this interpretation, ‘cybercrime’ is just well-known criminal acts carried out with new methods and technologies. It is ‘new wine in old bottles’.

Other criminologists conclude that although some of the common core concepts can actually be used to explain crime in the digital domain, there are fundamental differences between cyberspace and meatspace. These differences limit the applicability of established theories. This claim provides qualified support to the argument that the digital domain actually represents the emergence of a new and distinctive social environment characterized by its own rules, roles and responsibilities, constraints, opportunities and interactive standards. This alternative social space gives birth to new forms of anti-normative, deviant or criminal behaviour, which require a completely different theoretical interpretation and approach.

In general, the traditional theoretical approach to crime and causality in meatspace is based on the assumption that motivated offenders commit crime as a result of a rational choice. This presumption also applies in most forms of criminal activity occurring in the digital domain.

Following this argumentation, the online environment provides features that support motivated criminal intent and reinforce the offender’s ability to commit crimes, stealthily, anonymously, undiscovered, and with very little risk of prosecution. Criminogenic attributes make the digital domain the ideal setting for committing ‘the perfect crime’.

The novel socio-interactional features of the online environment primarily comprise the collapse of spatial-temporal barriers, many-to-many connectivity, and the anonymity and plasticity of online identity, which together drive new forms and patterns of illegitimate online activity. These features signal a departure from the socio-interactional organization of crime and deviance in meatspace, posing a fundamental challenge to theorizing online social interactions within the traditional criminological conceptual framework.

In addition, anti-normative human-computer interaction in the unruly online social environment appears to be favored by latent psychosocial enablers that reduce the rationality of choices made by both offenders and victims of cybercrime in online social interactions. Such psychosocial enablers blur the demarcation between deliberate and unintentional online behaviour. Some offenders may unwittingly violate others, while some victims are coerced into unintentionally inviting, partaking in, or committing offenses.

Thus, the concept of deindividuation holds both belligerent and benign characteristics. Although flaming and hate speech are often objectionable, they may be a form of normative behaviour that expresses the social identity of a certain group of users. The use of offensive or anti-normative language can be a type of bonding in certain groups, where racist, sexist, defamatory or outright hateful statements are accepted and serve to identify group members or solidify membership of the group.

Psychosocial enablers cloud legal, ethical, moral and politically correct practices in online social interactions, leaving both children and adults at risk. Deindividuation is recognized as one of the main psychosocial enablers, and anonymity in particular reinforces the deindividuation of individuals in online social domains.


Deindividuation as a concept is mainly derived from social psychology studies of groups or crowds, describing an individual’s need to feel a sense of belonging to a certain group. Theories on deindividuation are not settled, however, and scientific incursions into the phenomenon still hold very diverse definitions of the concept. In a group context, individuals exposed to deindividuation may exhibit criminal or deviant behavioral patterns that would be considered inconceivable when they act as individuals outside the group.

Most theories on deindividuation are in conceptual debt to Gustave Le Bon’s piece Psychologie des Foules (Psychology of Crowds), published in 1895. As Cannavale, Scarr, and Pepitone (1970) acknowledge, most contemporary interpretations of deindividuation find a direct lineage from Le Bon’s concept of ‘submergence’. Le Bon’s observations surfaced during a period in which the French state appeared particularly vulnerable to mass agitation – in particular to a rising tide of syndicalist and socialist protests. For Le Bon, becoming submerged in a mob leads individuals to lose both external and internal behavioral constraints. Le Bon’s central argument is that the sense of power derived from strength in numbers leads individuals to express instincts that would otherwise be kept under restraint.


It was Zimbardo, however, who provided the impetus for a flourishing of deindividuation research by offering a more exact conceptual specification. It is important to note that Zimbardo’s underlying concerns matched those of Le Bon. Zimbardo (1969) proposed a model in which a series of antecedent variables leads to a state of deindividuation. These variables include anonymity, arousal, sensory overload, novel or unstructured situations, involvement in the act, and the taking of consciousness-altering substances such as alcohol or drugs.

Some theorists argue that the source of deindividuation is rooted in the loss of self-awareness and self-regulation. A segment of research argues that the state of deindividuation is promoted when individuals become anonymous and realize that there are no consequences for engaging in anti-normative, deviant or criminal behaviour. Early research observes that when a state of deindividuation is reached, individuals express normally inhibited behavioral patterns and gravitate towards groups that enable and applaud such uninhibited behaviour.

Somewhat anecdotal evidence suggests that the sense of belonging to a group or crowd engaged in anti-social or deviant behaviour promotes deindividuation, causing a loss of the rational or intellectual notions that normally restrain individuals from taking part in extreme group behaviour. This loss of restraint or inhibition often causes an uncontrollable spread of primitive and aggressive emotions, invoking instinctual urges resembling those of uncivilized savages.

Studies suggest that the more identifiable individuals feel, the more likely their behaviour is to be inhibited by the restraints of social conformity. Humans will by nature seek out other individuals or groups with which they sense a strong common interest. Deindividuation appears to occur when individuals can identify with a group but feel indistinguishable within it.


From this argument it follows that individuals who share common interests in a group are assured that they will not be personally linked to deviations from social norms and will not be held accountable for the consequences of the deviation. Under such circumstances individuals will conform to the activities and statements of the group.

In this context, low identifiability promotes deindividuation and often results in impulsive, unrestrained anti-normative urges. A host of research supports the hypothesis that a feeling of anonymity promotes aggression to a much larger extent than is seen in individuals who are clearly identifiable.

Observations indicate that deindividuated individuals tend to become influenced by emotions and stimuli that cause unregulated or anti-normative behaviour. It can be deduced that a deindividuated individual is blocked from self-awareness as a separate individual and becomes incapable of monitoring their own behaviour.


Now take all of the above meatspace social psychology studies into a setting of human-computer interaction. You have undoubtedly registered the anti-normative behaviour of deindividuated individuals on almost any social media platform of the day.

The online social environment provides a free, easily accessible global playground for children and adults. The playground is jam-packed with opportunities to get hurt or to harm others – intentionally or unintentionally. Nevertheless, children and adults use the playground in the confidence that providers deliver a safe, healthy and secure product. Providers make fortunes monitoring the playground and selling data on user behaviour to other companies. But providers do not supervise the playground to an extent that prevents users from coming to harm during play.

In principle, all users are responsible for promoting safety, security, trust, transparency and accountability in social media. These responsibilities, however, are shared between users and the industry that provides the platforms for online social interaction. The private entities that provide social media and digital communities – e.g. Facebook, Twitter, Instagram, Steam, Google – share responsibility for ensuring that users are not at risk of suffering harm or of harming others.

The failure to regulate social media impacts individual users – children as well as adults – and has widespread harmful consequences for social groups, minorities, businesses, national security and democratic political stability.

The narrative of social media involves mounting levels of insecurity, vulnerability and victimization that prompt growing distrust in the online social environment. The root cause of this mistrust can be found in a disruptive cocktail of legal and regulatory complexities, criminogenic attributes, and uncharted psychological and psychosocial influences that are indigenous to the interaction between technology and humans.

This observation paired with studies of human interaction in cyberspace suggests that attributes embedded within the digital environment are decisively conducive to uninhibited aggression, radicalization, protofascism, predatory exploitation, and other forms of anti-normative behaviour.


The acronym SCAREM – Stealth, Challenge, Anonymity, Reconnaissance, Escape, and Multiplicity – categorizes the decisive criminogenic attributes that are conducive to crime perpetration in virtual reality (Newman and Clarke, 2003).

The criminogenic attributes contained in SCAREM are further compounded by the novel socio-interactional features found in online environments: the dissolution of spatial-temporal barriers, the suspension of trust-risk relations, and the psychological effect of deindividuation. All of these promote anti-normative behaviour in cyberspace.

Interestingly, despite these fundamentally conducive attributes of digital technology, we are attempting to regulate such unwanted behavioral patterns through the introduction of social norms and laws on crime and deviance that originate in observations of human interaction in meatspace.

Measured by the vast number of examples of deindividuation in cyberspace, this ongoing translation of meatspace social restraints into digital reality appears unsuccessful, or at least negotiable. The electronic superhighway we all travel every day at breakneck speed appears devoid of proper traffic codes.


A 2003 study by Christina Demetriou and Andrew Silke at the University of Leicester introduced the fake ‘Cyber Magpie’ website – a ‘honeypot’ homepage set up by the researchers – to measure the behavioral patterns of users visiting the site.

The Cyber Magpie contained both legal material and links to illegal material. Although the website carried a written legal disclaimer, the study provided evidence that 56% of visitors who accessed the website for legal and legitimate purposes ended up accessing illegal or pornographic material.

In conclusion, the study revealed that users were prepared to engage in illegal activities, most likely out of misplaced trust in the seemingly legitimate legal framework created by the researchers.

On the other hand, even when users recognize acts as illegal, they may consider such acts trivial or unimportant. Particularly in a digital environment, where users are exposed to perceived anonymity or deindividuation, criminal and deviant behaviour becomes both common and accepted as the norm. This type of scientific ‘internet sting’ suggests that users are exposed to similar temptations to commit crime or deviance through the sophisticated ‘entrapment’ delivered by social media companies that provide unregulated digital exchange platforms.


Today’s online social platforms and practices have revealed a range of harm-related issues that have an increasingly negative impact on democracy, civil liberties and security. The conflicting aims of making the internet both the safest place for online social interaction and the best place to promote freedom, prosperity, and well-being could prove mutually incompatible. We struggle to define who is steering, who is rowing, and where responsibility for the delivery of security in online social configurations is anchored.

Anonymity is recognized as one of the most significant causes of deindividuation, creating disinhibition. Anonymity also reinforces the criminal’s ability to commit crimes undetected and with very little risk of being prosecuted. Anonymity or assumed identity is also a decisive factor in behavioral design, ‘nudging’, marketing, spin, targeted political manipulation or misinformation.

Exploiting online psychosocial mechanisms is also an important element of new types of warfare such as ‘cognitive infiltration’ – a form of covert coercive persuasion.

Paradoxically, anonymity is also a strong argument for retaining the right to privacy in cyberspace. The goals of making the internet both the safest place for online social interaction and the best place for the right to privacy can prove mutually incompatible. Something’s gotta give.


Cyberspace requires users to navigate a highly sophisticated and insufficiently regulated legal environment. In this online environment, children and juveniles in particular are exposed to the risk of becoming either victims or perpetrators of crime or deviance, thereby unintentionally causing harm to themselves or others. Prevention and protection of users through responsible parenting and education is an obvious – but inadequate – approach. The responsibility for protection and safety in cyberspace is shared among users, providers, policing authorities and government. Shared accountability is the ultimate prerequisite for creating a resilient digital environment.

The online social domain provides a rebelliously anarchistic platform for anti-normative social interaction. In a criminological perspective, cyberspace provides the ideal setting for opportunities to commit ‘the perfect crime’. A gap appears to form between the progressively belligerent sociological interpretation of digital disruption and the benevolent framing of disruption as digital systems innovation translated into a common good.

This belligerent vs. benevolent dichotomy calls for governance modalities that ensure the delivery of security, trust, ethics and privacy in online social configurations. The holistic properties encompassed by the term ‘cognitive security’ appear to be a valid bid for a dynamic first line of defense protecting the rights of both the individual user and complex online social systems.


Interactions between humans and digital technology influence our cognitive biases and disturb our ability to make rational choices. Digital technology influences the choices and decisions you make – most often without you realizing it or having the opportunity to protect yourself against it.

These largely hidden influences require a new kind of ‘cognitive security’, a reinterpretation of protection that focuses on regulating and policing human online interactions, thus maintaining the integrity of the online social environment and helping users defend themselves against risks permeating exponential digital progress.

While the somewhat colloquial term ‘cyber and information security’ refers to the protection and regulation of critically interdependent digital systems and physical infrastructures, ‘cognitive security’ points towards protection against the exploitation of cognitive bias in a group or an entire population.

Cognitive security emerges from a discourse on new types of social engineering that enable social influence and the deceptive manipulation of human behaviour. Exponential growth in online social platforms increasingly causes social disruption and victimization.


Cannavale, F. J., Scarr, H. A., & Pepitone, A. (1970). Deindividuation in the small group: Further evidence. Journal of Personality and Social Psychology, 16, 141-7.

Demetriou, C., & Silke, A. (2003). A criminological internet ‘sting’. British Journal of Criminology, 43, 213-222.

Kiesler, S., Siegel, J., & McGuire, T. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39, 1123-34.

Le Bon, G. (1895, trans. 1947). The Crowd: A study of the popular mind. London: Ernest Benn.

Newman, G. R., & Clarke, R. V. (2003). Superhighway Robbery: Preventing E-commerce Crime. Cullompton: Willan.

Siegel, J., Dubrovsky, V., Kiesler, S., & McGuire, T. (1986). Group processes in computer-mediated communication. Organizational Behaviour and Human Decision Processes, 37, 157-87.

Zimbardo, P. G. (1969). The human choice: Individuation, reason, and order versus deindividuation, impulse and chaos. In W. J. Arnold & D. Levine (Eds), Nebraska Symposium on Motivation. Lincoln, NB: University of Nebraska Press.
