Abstract – Digital technology and the exponential growth in digital performance appear to be the underlying factor exacerbating convergence across distinct theoretical frameworks.

Convergence of previously distinct frameworks appears to be the common enabler of complex disruptions that emphasize gaps in the virtual distance between social norms and technology.

Cybergaps invoke the necessity for identifying red flags that require focus towards defining societal resilience in response to social repercussions of exponential digital progress.

The main aspiration of ‘Red Flags in Cybergaps’ is to communicate a thesis informing governance of digital disruption, resilience, and accountability in response to vulnerabilities caused by exponential digital progress. The following introduces a research model that presents a matrix for a new regulatory architecture, which promotes the concept of cognitive security.


    Our understanding of ubiquitous digitization – as a result of enhanced digital performance – seems to be at a crossroads.

    The socio-technical model claims that a gap emerges in the mismatch between what technology is capable of and what social controls are supposed to do. Thus, the term ‘cybergap’ refers to the virtual distance between social norms and technology.

    Cybergaps become evident in the attempt to advance controls that regulate the constant flux of challenges provoked by the turbulent process of technological innovation and pervasive digitization of vital core functions of society.

    Technological innovations in the rebelliously unruly digital realm occur at such rapid pace that delivery of security, privacy, accountability, transparency, sustainability and democratic control stumbles to catch up. This mismatch between social controls and technological innovation presents a number of challenges related to digital disruption of the very socio-economic fabric of democracy and society as we know it. These challenges represent a number of red flags that we need to address.

The Locomotive Act or Red Flag Act (1865-1896). First traffic code for self-propelled automobiles.

    In a historical context, we are familiar with gaps dividing governance and disruptive technologies. Gaps become evident during the early arrival of new disruptive technologies, particularly when such emerging technologies are predominantly perceived as a threat to the paradigms of the prevailing sociological life script.


Within dystopian narratives, disruptive technologies are assigned destructive capabilities and governance takes on ‘draconian’ characteristics. Retrospectively, ‘draconian’ controls are launched as a result of deliberations that are often rooted in fear: fear of the technology itself, or fear that the technology will shatter the prevailing world order. It is possible to posit that the pervasive introduction of digital technology disturbs the middle-class life script. The perception of disruptive features embedded in the digital domain guides governance of social controls that address the consequences of disruptive influences on the prevailing societal equilibrium.

    Digital disruption holds both terrifying and fascinating characteristics (Rosenstand, 2018). Bridging the gap between technological determinism and rampaging cyberangst appears to gain priority. When governance is confronted by disruptive technologies, the character of social controls depends on the political framing of digital disruption. In this emerging political framing, digital disruption is ascribed either benevolent or belligerent attributes.

    A cybergap forms between a belligerent perception of digital performance and a benevolent framing of digital progress as accelerating systems innovation translated into a common good. This belligerent vs. benevolent dichotomy calls for both theoretical and pragmatic incursions into contemporary governance of digital performance, with particular focus on the social repercussions that follow in the wake of the anticipated future exponential growth in technological progress (Kurzweil, 2005).

    ‘State-of-the-Art’ scientific incursions into the consequences of exponential growth in digital performance are dispersed across an intricate multidisciplinary field of research. In a global perspective new titles on the subject are published by the hour.

    In theoretical terms this expansive magnitude in scholarly awareness is scattered across a multitude of discrete conceptual frameworks. Although these frameworks are at times overlapping, they are rarely informed by each other.

However, considered in empirical terms, the increase in scholarly awareness could imply that we have reached that particular stage in exponential growth of digital capabilities, where we begin to perceive the accelerating exponential trend in digital performance. In Kurzweil’s words, we have reached ‘the knee of the curve’ (Kurzweil, 2005).

    In the exponential trajectory towards singularity, we have become aware of accelerating shifts in paradigms, not only in technology, but increasingly in the way humans and technology interact.

     This revelation is pushing digital technology from being a discipline almost entirely seated within computer science or engineering, into other academic fields concerned with the messy realities of human social life. The sociological approach to digitization reveals a number of shifts in paradigms, which become a compilation of red flags in cybergaps.


    In general, social sciences commonly describe the mechanics accelerating exponential digital performance in terms of ferocious speed, unlimited global instantaneous connectivity, networked interdependence, and unrestrained interactive socio-technological innovation. These mechanics invoke a complex plethora of fluctuations in paradigms that are essential to our understanding of society.

    A review of recent scientific literature on the social repercussions of digital progress exposes a methodological trend in which scholars explore distinctive dichotomous themes in an attempt to embrace and explain the complexities involved.

    Subsequently, a range of authors claim that exponential growth in digital performance blurs the distinctive conceptual demarcations of fundamental normative constructs, such as virtual vs. real (Yould, 2003); old vs. new (Grabosky, 2001); analogue vs. digital (Vang, 2018); national security vs. computer security (Nissenbaum, 2005); state vs. private (Krahmann, 2008); territorial vs. borderless (Everard, 2000); crime vs. deviance (Yar, 2005); national security vs. corporate risk (Petersen, 2008); anonymity vs. accountability (Capeller, 2001); biological vs. technological (Kurzweil, 2005); risk vs. resilience (Dahlberg, 2017); disruption vs. destruction (Hein & Honoré, 2016); and terror vs. fascination (Rosenstand, 2018).

    This wealth of diverse themes represents a vaguely articulated general view of the socio-technological impact caused by digital progress (Whitworth & de Moor, 2009).

    In this multitude of opposing conceptual themes, one common approach suggests that developing events manifest themselves as conditional spaces between extreme dichotomous constructs, situated along a continuum of common traits that bind these seemingly discrete factors together (Yould, 2003; Everard, 2000).

Jerry Everard (2000) emphasizes how digital progress challenges a multitude of boundaries, most importantly the boundaries of identity, primarily the identity of the nation-state.

    Employing a Foucauldian ‘archaeological approach’, Everard in particular classifies the blurring of boundaries between warfare, criminal activity and plain system failure as the most difficult issue for the nation-state to address (Everard, 2000:103). According to Yould (2003), ‘it appears that IT may be the common underlying factor upon which all security sectors converge’ (Yould, 2003:78).


    Convergence appears to be a descriptive common denominator for occurring paradigmatic shifts caused by digital progress.

    Singularity primarily signifies the culmination of a merger of biology and technology – a dissolution of the demarcations between man and machine. A merger of a range of distinct dichotomous constructs is an unavoidable logical consequence of reaching the Singularity; however, at this particular point in time, arriving at this crossroads – or ‘knee of the curve’ – science observes a number of trends that appear to suggest an emerging convergence among a variety of paradigms previously dispersed across discrete frameworks.

    Recent studies suggest that the complexities invoked by exponential digital progress can be disentangled by bonding traits together across dichotomous themes in a convergence that brings about new types of identity that strengthen shared accountability, coordination, and collaboration among both public and private actors (Matz, 2011a).

    Exponential digital progress causes a disruptive merger of battle space, humanitarian space and cyberspace. In theoretical terms, this disruptive merger requires an understanding of exponential technological innovation in relation to multidisciplinary convergence across diverse boundaries in discussions on national security, crime perpetration and corporate risk management.

    Delivery of security in digital domains demands political understanding of this decisive convergence, which affects relations and creates new types of interdependent identities across a changing security policy field traversing defense, law enforcement and regulated market economics.

    One could argue that convergence of technology and discrete physical or social domains is a precondition for digital disruption, and that the only viable mitigation is to become resilient to the changes caused by this convergence. Nevertheless, disruption and resilience are not interpreted as antonyms. In the context of the following discourse, the concept of resilience is defined as a response to vulnerabilities exacerbated by, or originating in, either belligerent or benevolent perceptions of digital disruption.

   Digital disruption and resilience are therefore, in this essay, considered key representations of common traits – or ‘buzzwords’ – mentioned across all conceptual theoretical frameworks exploring the impact of digital progress; however, digital disruption and resilience are interpreted differently and given diverse weight in discussions on the impact of digital technology in national security (Shea, 2016); crime perpetration (Brenner, 2010); and corporate risk (Rosenstand, 2018; Christensen, 2018).

Convergence impedes a clear demarcation of boundaries between management of disruptions, resilience and accountability across the spheres of national security discourse, digital crime perpetration in law enforcement discourse, and corporate risk governance.

    Convergence of issues common to defense, police and civil society is dubbed ‘The Security-Crime-Risk Continuum’: a theoretical conceptual construct based on the claim that the robustness of any digital community must be provided through cross-sectoral regulation of military intervention, law enforcement, and corporate risk management.

    The concept inspires a holistic strategic approach that is a prerequisite for providing a regulatory framework to maintain accountability, robustness and sustainable digital disruption in core societal functions. Delivery of security under exponential digital progress requires governance of the many vulnerabilities, uncertainties and complexities that converge in the protection of critical infrastructures, the regulation of online social platforms, and the navigation of relations between public and private partners.

    Adverse social repercussions of exponential digital progress are not mitigated solely by an agent of the state, a single corporation or a capable guardian. The ‘security-crime-risk continuum’ posits that the responsibilities for establishing functional societal resilience are shared across the entire cast of actors, the ultimate objective being a restoration of trust and accountability. The absence of accountability encourages manipulation of human exchanges of information (Capeller, 2001:233). Trust and accountability depend on public-private partnerships to provide reliable infrastructures and regulations for policing online social platforms.

    Public and private actors share responsibilities for establishing regulatory social controls, which ensure transparency and proportionality are maintained in the development of new socio-technological mitigation instrumentation. The concept of shared responsibilities remains the pivotal starting point for response mechanisms rooted in national security concerns, crime control, and corporate risk governance. The concept of shared responsibility becomes a prerequisite in policy-making and pragmatic implementation of resilience strategies.

    Responsibilities cannot be shared without the explicit notion of shared accountability. Accountability in digital progress is shared horizontally, from individual users through organizations to government, and vertically, from the local community to the international landscape.

    The future regulatory architecture must reflect the convergence occurring in the ‘security-crime-risk continuum’. The goal is to provide direction in regulatory mechanisms to bridge cybergaps between social controls and technology. Collaboration should focus on how to regulate the exponential convergence of cybersecurity, cybercrime, and cyberrisks.


    Interactions between humans and digital technology influence our cognitive biases and disturb our ability to make rational choices. Digital technology influences the choices and decisions you make, most often without you realizing it or having the opportunity to protect yourself against it.

    These largely hidden influences require a new kind of ‘cognitive security’, a reinterpretation of protection that focuses on regulating and policing human online interactions, thus maintaining the integrity of the online social environment and helping users defend themselves against risks permeating exponential digital progress.

    While the somewhat colloquial term ‘cyber- and information security’ refers to the protection and regulation of critically interdependent digital systems and physical infrastructures, ‘cognitive security’ points towards protection against the exploitation of cognitive biases in a group or an entire population.

    Cognitive security emerges from a discourse on new types of social engineering that enable social influence and deceptive manipulation of human behaviour. Exponential growth in online social platforms increasingly causes social disruption and victimization.

    The following will elaborate on social repercussions and red flags emerging as a result of exponential digital progress across market economics, defense and crime.


    Convergence is apparent in paradigmatic shifts occurring in contemporary economics. The fundamental presumption of the emerging 4th Industrial Revolution is that current transformations represent a shift in the way we produce, consume and socially interact. These transformations are driven by rapid convergence of three previously distinct domains: the physical, the digital and human social relations (Davies, 2015).

    Confronted with the amalgamation of human, social, and technological spheres, neglecting the gaps that separate governance and digital progress could set the prevailing industrial world order on a terminal trajectory. Continued digital growth signals a new political reality, which requires an understanding of the social repercussions rooted in the convergence of humans and technology. Exponential growth in science, technology, engineering, and mathematics involves an obligation to bridge existing gaps in legislation, competencies, creativity, and cognitive agility.

    In evolving market economics, the convergence of man and machine is pivotal for understanding digital disruption. In general, the concept of digital disruption is defined as a product of exponential growth in digital performance; however, in the specific setting of market economics, digital disruption is defined as ‘events where the introduction of new technology changes the market’ (Rosenstand, 2018:15).

    Yet, the politicization, securitization, and commercialization of data is difficult to understand or theorize. We have not fully mapped the social consequences of digitization. Technological developments occur too fast. The technology is non-transparent. Practice is veiled in secrecy and anonymity. Data creates invasive profit accumulation through ‘surveillance capitalism’. Psychosocial and behavioral psychological influences are insidious and difficult to detect over time. Digitization creates cognitive bias and shifts the individual’s judgement in making rational choices. Users are the victims of cunning rhetorical misinformation and deliberate political or cultural deception.

    Any digital human-computer interaction produces data. Over time, more and more data is generated about the individual, but personal data is controlled by others. In the digital domain, individuals are not in control of their own digital data.

    In totalitarian regimes, the state basically takes centralized ownership of all data.

    In liberal democracies, data is a commodity. The product is harvested and refined by the technology that engulfs everyone. The product is traded, mostly to third parties, by those who own the technology.

    Based on the acceptance that digital technology always produces objective, indisputable facts, a belief arises that harvesting and processing data as an inexhaustible resource can predict the future behavioral patterns of individuals and consumers.

    Analysis of data can identify effective psychosocial or behavioral psychological stimuli that lead to predictions, which can be translated into political influence or profits.

    The commodification of data spawns a powerful economic model based on digital disruption. This new type of wealth accumulation capitalizes on every human-computer interaction.

    Identity, privacy and digital behaviour merely become crops in digital farming, where data is exploited as a free raw material. Digital exponential growth is the fundamental paradigm of a new type of autonomous data analysis, employing artificial intelligence that preys on the digital behaviour of users, who are largely kept ignorant or unaware of its processes and social consequences.

    Data as a commodity accumulates not only assets and capital, but more importantly rights. Exponential digital progress in datafication appears to function without meaningful mechanisms of consent, neither in terms of the traditional practice of supply and demand, nor in terms of democratic control or regulation.   

    This new redistribution of rights without meaningful mechanisms of consent creates cognitive dissonance. In order to access the technology, one is forced to choose between two evils. One is required to sign away the proprietary rights to one's own data to obtain an instant reward in the present that could shape one's life in the future.

    The prevailing paradigm of data analysis is basically a formidable tool for modifying human behaviour. It is used in both totalitarian regimes and in liberal democracies.


    Digital progress opens up a variety of cybergaps or ‘legal gray zones’, which refer to coercive and aggressive activities in an uncertain borderland between belligerent vs. benevolent perceptions of digital disruption.

    Cybergaps exist in that particular unruly sliver of the digital realm, which is situated beyond the threshold of applied governance that define responsible corporate accountability.

    Digital disruption, understood as technological events that change markets, appears to flourish in cybergaps outside contemporary legal interpretation. In this context, digital disruption exploits and thrives on regulatory shortcomings that blur the demarcation between legitimacy and crime.

    One example is the injunctions, verdicts and subsequent ban of Uber for operating in contravention of Danish national legislation, and the prosecution of Uber drivers for tax evasion.

    Yet, in other jurisdictions Uber is repeatedly applauded as an example of successful pragmatic implementation of a benevolent interpretation of digital disruption.


    It is worth noting two fundamentally different philosophies on strategic governance of emerging new technologies. One way is to think of law on technology, whether domestic or international, as an inhibitive constraint. This approach posits that regulatory strategies restrict abilities to develop new technology and cause delay in the implementation of new technological innovations. In this view regulation takes on restrictive ‘draconian’ characteristics that impede progress or growth.

    However, vulnerabilities caused by emerging digital technologies could be resolved without draconian legal inhibitions, giving way to another approach that views regulation not as draconian but as imposing ‘benign’ restraints. In this interpretation, strategy is not purely a restrictive constraint, but empowers us to do things we could never do without the legitimacy found in benign approaches to digital governance. This view promotes a culture of compliance and argues for enhanced legitimacy in digital disruption through adherence to common strategic decisions.

    The ‘draconian vs. benign’ dichotomy is an essential point of departure, or a significant red flag, in a market-oriented cybergap that evolves in the debate on defining both domestic and international resilience against vulnerabilities found in digital technologies. Information is viewed as a commodity; personal data becomes the going currency in the digital domain. However, a quick glance into any crystal ball will reveal one weighty prediction: governance of human-computer interaction in the gap between man and machine will become the primary challenge of tomorrow.

    The prevailing judicial and legislative gaps in the regulation of human-computer interaction give rise to discussions on defining the boundaries of regulatory instrumentation and government intervention, with particular focus on internet accessibility and the role of social media in democratic processes. The most recent topic of contention in the gap that dominates emerging digital market economics appears to be the harvesting of algorithmic psychometrics and the application of demographic, behavioral, and psychological data in the digital economy (Matz, Kosinski, Gosling, Popov, & Stillwell, 2015).

    Government regulatory practices such as logging, filtering or blocking online content, and denying internet access have been introduced by authoritarian regimes; however, liberal democracies and supra-national authorities are also drifting towards regulation of the internet and social media platforms. In a corporate perspective, regulatory initiatives such as the GDPR present a number of challenges.

    In current developments of digital market economics this gap appears to represent growing contention between state hierarchy vs. market anarchy, leaving innovations in digital performance relatively unregulated. In the relations between state and market, compliance and industrial standards of quality management appear to become key components in legislative and regulatory efforts to define political responsibility and shared accountability in public-private partnerships (Matz, 2011a).

    Shared public-private responsibilities in the evolving digital market economy are conditioned by neo-liberal policies introducing private ownership into societal core functions, creating a diversification of roles and responsibilities. Labeling this fragmentation of roles and responsibilities ‘privatization’ does not fully reflect the nuances and complexities of contemporary challenges to governance and corporate strategic management, in which infrastructural societal core functions are partially delegated or shared on a case-by-case basis, bearing little resemblance to the full transfer of property-related ownership and responsibilities typically found in the traditional concept of industrial privatization (Bailes, 2006).

    In delivery of resilience against vulnerabilities caused by digital disruptive progress, defining the fabric of public-private relations becomes essential, particularly in questions related to protection of critical infrastructures. This causes a merger of military and non-military spheres of interest.


    In military terminology, digital technology becomes “Weapons of Mass Disruption” (Yould, 2003:84-8; Nissenbaum, 2005:67), replacing earlier cataclysmic digital threat imagery, such as the “Electronic Pearl Harbor” (Bendrath, 2003).

    The interconnected nature of information and communication networks makes critical infrastructures vulnerable to the risk of cascading failures with widespread cross-sector consequences.

Exponential innovation in digital technology remains one of the most salient contemporary concerns as a source of national security threats.

    Cybersecurity has gained societal salience, increasingly entering political agendas and rapidly saturating national security concerns. The initial successful dramatizations of national security threats against critical infrastructures have firmly placed the concept of cybersecurity within the military complex.

    In contemporary conflicts digital technology is increasingly deployed as new types of remotely controlled weapon systems or robotic capabilities with the purpose of creating political, strategic, operational or tactical effects in support of policy objectives (Kaldor, 2011).

    When taking advantage of digital technology to create ‘military’ effects, the weaponization of the term ‘digital disruption’ amalgamates into the concept of ‘Hybrid Warfare’: a political strategic vocabulary involving conventional weaponry augmented by information operations with the purpose to inform, influence, intimidate, deceive, deter, and disrupt targets – or rather target audiences (Nissen, 2015).

    However, recent developments surrounding the US Presidential Election in 2016, and concerns related to the Cambridge Analytica misuse of data, suggest that attributes found in digital technology provide fundamental capabilities to build ‘Weapons of Mass Persuasion’ (Hwang, 2015), marking a new trend in the weaponization of social media platforms and the exploitation of personal data.

    Future conflicts are about control of populations and the political decision-making process rather than about control over territory or the pursuit of geopolitical interests. Weaponization of digital technologies makes warfare remote and risk-free to the aggressor. Belligerent disruption of critical infrastructures or manipulation of social media, targeting the adversaries’ population, could cause nations to ‘self-combust’.

    Accordingly, national security concerns have developed to include non-military spheres of the digital domain, particularly towards dilemmas and tensions caused by disruptive digital influences on the core processes of democracy that come embedded within the open online social environment.

    Data generated by users on social media has become a potent strategic military resource.

    In terms of policy this expansion of national security concerns extends the perception of cybersecurity to include the concept of ‘functional security’, which strives to ensure the security of the critical functions of society. Subsequently, non-military functions of the digital domain become subject of securitization.

    Securitization of digitization in general – and social media in particular – provides political motivation for taking initial steps in a bid to initiate more rigorous and mandatory state-sanctioned measures.

    It is safe to say that social media has obtained societal salience.  

    The fact that victims of digital aggression do not resist – or perhaps are unaware of the aggression taking place – appears to be an essential characteristic embedded in the digitization of modern information warfare (Darczewska, 2014).

    The psychological characteristics of modern inter-state information warfare are also applicable in a setting of ‘cognitive infiltration’ through informational and reputational influences intended to affect a state's own electorate (Sunstein & Vermeule, 2008).

    Even liberal democratic governments appear increasingly willing to employ strategies to propagate government talking points and disrupt online dissent among their own electorates. Such manipulation of online behaviour involves the deployment of distributed algorithms facilitating online persona management: fake online personas, ‘sockpuppets’, ‘astroturfing bots’, and ‘shills’, all referring to digital opportunities in the so-called ‘Information Battle Space’.

    Militaries and other public-private actors continue to refine traditional public relations methods with the latest findings in quantitative social science. The result will be a form of tactical persuasion that is unique in its quantitative approach, its potential level of targeted precision, and its subtlety. Just as the software and hardware of the internet have been militarized by the imperatives of a mostly secret “cyberwar”, so too are online social spaces being weaponized in new and mostly hidden ways.

    These operations are difficult for most web users to detect and even harder to counter. In contrast to the discipline of computer security, which focuses on the integrity of machines, the weaponization of digital social interactions represents a legislative gap that requires the advance of a new kind of “cognitive” security, focused on maintaining the integrity of the online social environment and assisting citizens in protecting themselves (Hwang, 2015).

    In an interpretation of Kurzweil’s merger of man and machine, digital progress seems to invoke a paradigmatic shift in warfare, where opposing conventional armed forces are augmented – or replaced entirely – by employing new ways of combining military strategy, psychology and social sciences into computer science and technology, suggesting alternate approaches to the traditional narrative of ‘psychology of human computer interaction’.

    Apart from this obvious belligerent weaponization of digital technology, the benevolent use of digital capabilities gives military commanders options to avert the use of kinetic force.

    The digital domain represents a unique medium through which to operate against a wide array of targets free from the physical constraints of geography and territorial boundaries in ways that can effectively contribute to securing an enemy’s submission.

    Stuxnet, Petya and WannaCry are well-known variants of malware, which in sophistication of design and widespread impact are suspected of originating from agents of states or state-sponsored agents.

    These types of malware cause belligerent disruptions that raise legitimate humanitarian concerns. In these types of weaponization, the cardinal legal principles of warfare are questioned. This relates particularly to the principles of proportionality – the obligation to minimize collateral damage – and distinction – the requirement to distinguish between combatants and non-combatants.

    Because these attacks have occurred outside the context of armed conflict, the lex specialis of International Humanitarian Law did not apply, raising separate questions as to the applicability and efficacy of the lex generalis of the Law of State responsibility.

    Disentangling the belligerent vs. benevolent perception of digital disruption in hybrid warfare appears to be further complicated by the fact that this type of state-sanctioned digitized aggression remains unregulated in International Humanitarian Law.

    This deficiency represents a major cybergap in relation to our perception of war vs. peace, and raises a distinct red flag in rewriting laws on armed conflict.

    This vulnerable legal exposure in contemporary warfare gives weight to the emerging concept of resilience as a decisive strategic paradigm (NATO, 2016). The necessity to adapt resilience into the military vocabulary is further compounded by the fact that the assets comprising the digital domain are largely owned and operated by private entities outside the military chain of command. In particular, the concept of resilience necessitates the introduction of notions of shared responsibility and shared accountability across public and private spheres (Matz, 2011b).

    This observation paired with studies of human-computer interaction in digital domains suggests that attributes embedded in digital technology are conducive to new types of clandestine warfare.

    Attributes rooted in digital technology, such as stealth, anonymity, undetected reconnaissance, and easy escape from retribution, facilitate the possibility of claiming ‘plausible deniability’ in modern warfare (Lanoszka, 2016).

    The features intrinsic to digital technology become decisive force multipliers not only in hybrid warfare, but also in crime perpetration.


    In criminology, taking its point of departure in ‘rational-choice’ theory, the acronym SCAREM categorizes Stealth, Challenge, Anonymity, Reconnaissance, Escape, and Multiplicity as decisive criminogenic attributes that are conducive to crime perpetration in virtual reality (Newman & Clarke, 2003). The criminogenic attributes found in digital technology make the virtual environment an ideal setting for committing ‘the perfect crime’.

    As the number of users rises worldwide, and an increasing proportion of ‘everyday social life’ is staged in digital virtual space, the risk of becoming a victim of various forms of virtual crime and deviance increases. The threat imagery in criminology is constructed around the notion that ubiquitous digital opportunity is a decisive force multiplier of crime and deviance – affecting victims in both virtual reality and in real time.

    In Routine Activity Theory the basic prerequisite for committing crime in meat-space is a convergence of three factors: a motivated offender, an attractive target, and the absence of capable guardians.

    However, crime in digital domains challenges purported criminological theories on crime causation and perpetration. The presence of a multitude of victims congregating in easily penetrable virtual locations, and a cast of anonymous motivated offenders – ranging from juvenile delinquents to terrorists and state-sponsored cyber-aggressors – operating in the blatant absence of capable guardians, constitutes a significant gap between the digital opportunities to commit crime and the constitutional restraints that prevent crime perpetration in digital domains.
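The three-factor convergence of Routine Activity Theory can be expressed as a simple predicate – a minimal sketch, not an operational model – which makes visible how the routine absence of capable guardians in digital domains leaves the convergence condition satisfied by default:

```python
def crime_opportunity(motivated_offender: bool,
                      attractive_target: bool,
                      capable_guardian: bool) -> bool:
    """Routine Activity Theory (Cohen & Felson, 1979): a crime
    opportunity arises only when a motivated offender and an
    attractive target converge in the absence of a capable guardian."""
    return motivated_offender and attractive_target and not capable_guardian

# Meat-space: a present guardian breaks the convergence.
print(crime_opportunity(True, True, capable_guardian=True))   # False
# Digital domains: capable guardians are largely absent.
print(crime_opportunity(True, True, capable_guardian=False))  # True
```

The sketch also shows where the theory strains online: the first two terms are abundantly supplied by digital domains, while the third is structurally missing.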

    Collectively, the criminogenic attributes in digital domains give rise to a number of red flags in terms of judicial, jurisdictional, causational and preventative challenges. These discrepancies lead to a discussion of whether the broad range of anti-normative, deviant or criminal behavior occurring in the digital realm requires a re-interpretation of established theories on crime.

    Some sources argue that crimes in cyberspace are not qualitatively different from crimes in meat-space. From this point of view, actions in cyberspace can therefore be analyzed and explained using established theories of crime causation in meat-space. In this depiction, cybercrime consists merely of well-known criminal acts carried out using new methods and technologies – ‘old wine in new bottles’ (Grabosky, 2001).

    Others conclude that although some of the common core theoretical concepts on crime in meat-space can be used for explaining crime in the digital domain, fundamental differences still exist between virtual and real. These differences limit the applicability of established theories.

    This claim provides qualified support for the suggestion that the digital domain actually represents the emergence of a new and distinctive social environment characterized by its own rules, roles, limits, opportunities and interactive norms. This alternative social space gives rise to new forms of anti-normative, deviant or criminal behavior that require a completely different theoretical interpretation and approach (Capeller, 2001).

    These theoretical contradictions have characterized the criminological debate on crime in digital domains over the past three decades. Nevertheless, revised theories on crime in digital domains are indeed in conceptual debt to established criminology.

    According to theories of differential association, crime is committed when definitions favorable to committing crime exceed those unfavorable to crime. It is safe to posit that the criminogenic attributes (SCAREM) in digital domains certainly favor the motivated offender.

    Humans are basically rational actors who, in the context of situational deliberations, assess and weigh up potential costs against gains (Cohen & Felson, 1979).

    The basic theoretical assumption in Rational Choice Theory is that individuals who commit crime seek through their actions an advantageous result, such as gains in the form of money, sex or excitement (Cornish & Clarke, 1987).

    When a motivated offender weighs a potential gain from an action against the risk of being discovered and punished, the evaluation also includes an assessment of the skills, tools and situational circumstances needed to successfully complete the action. The individual elements of this balancing need not carry the same weight. A high probability of being discovered may, for example, do more to make the perpetrator abandon his conduct than the prospect of harsh penalties (Clarke, 1997).
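The weighing described above can be sketched as a toy expected-utility calculation. All numbers and the `detection_weight` parameter are purely illustrative assumptions introduced here, not empirical values from the rational-choice literature:

```python
def offender_calculus(gain: float, p_detection: float,
                      penalty: float, detection_weight: float = 2.0) -> float:
    """Toy cost-benefit model in the spirit of Rational Choice Theory.

    detection_weight reflects the observation that the probability of
    being discovered may weigh more heavily with the offender than the
    severity of the penalty; its value here is an assumption.
    """
    expected_gain = (1 - p_detection) * gain
    expected_cost = detection_weight * p_detection * penalty
    return expected_gain - expected_cost  # act only if positive

# High detection probability deters even when the penalty is modest:
print(offender_calculus(gain=1000, p_detection=0.8, penalty=500))    # -600.0
# Low detection risk favors the offender despite a harsh penalty:
print(offender_calculus(gain=1000, p_detection=0.05, penalty=5000))  # 450.0
```

The second case mirrors the digital domain, where SCAREM attributes push the perceived probability of detection toward zero.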

    This theoretical approach to crime perpetration and causation is based on the assumption that motivated offenders commit crime in meat-space as a result of making a rational choice.

    This assumption also holds for most types of criminal activity observed in digital domains. However, digital technology appears to carry psychological effects that seem to reduce the rationality of the choices victims make.

    This latter observation is further compounded by complex challenges of novel socio-interactional features found in digital environments. Human-computer interaction presents uncharted psychological effects, such as the dissolution of spatial-temporal barriers (Yar, 2005), shifting trust-risk relations (Capeller, 2001), and the perceived anonymity of individual users, which carries the psychological online ‘disinhibition effect’ (Suler, 2004) and deindividuation, causing anti-normative behaviour (Demetriou & Silke, 2003).

    The inherent criminogenic attributes in digital technology are decisively conducive to a range of belligerent psycho-social disruptions, causing uninhibited aggression, radicalization, protofascism, predatory exploitation, impersonations, crime, deviance and other forms of anti-normative behaviour.  It becomes possible to posit that it is the novel socio-interactional features of the cyberspace environment – primarily the collapse of spatial–temporal barriers, many-to-many connectivity, and the anonymity and plasticity of online identity – that enable new forms and patterns of illicit activity. It is in this alleged discontinuity from the socio-interactional organization of ‘terrestrial crimes’ in meat-space that the criminological challenge of theoretical explanations of cybercrime is held to reside.

One can conclude that the digital domain provides a rebelliously anarchistic platform for anti-normative human social interaction, and that the failure to regulate the Wild West Web represents a significant cybergap that needs to be addressed.

The following is an attempt to provide a matrix for addressing the cybergaps across the convergence of distinct theoretical frameworks mentioned above. 


    Read from left to right, SCAREM@PPP.CIP mimics an e-mail address. The title imitates a DNS format comprising an addressee (SCAREM), a host (PPP), and a domain (CIP).

    But SCAREM@PPP.CIP is not a valid e-mail address. This reading is intended to make the subject accessible and communicate policy analysis to the techno-inclined audience, including those script-kiddies raised in the virtual reality of their mother’s basement.

    However, when read from right to left, SCAREM@PPP.CIP reflects a condensation of three main themes: CIP, PPP, and SCAREM, which are highlighted as a merger of fundamental vulnerabilities among a plethora of multidisciplinary complexities populating the digital domain.
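The wordplay above can be sketched in a few lines of Python. The parsing mirrors how a mail client splits an address into an addressee and a domain (though .CIP is, of course, not a real top-level domain), and the right-to-left reading yields the three themes in order:

```python
# The title parsed as if it were an e-mail address.
title = "SCAREM@PPP.CIP"

addressee, _, host_domain = title.partition("@")  # "SCAREM" / "PPP.CIP"
host, _, domain = host_domain.partition(".")      # "PPP" / "CIP"

# Left to right: the mock address components.
print(addressee, host, domain)                    # SCAREM PPP CIP

# Right to left: the essay's three themes in order of scope.
print(list(reversed([addressee, host, domain])))  # ['CIP', 'PPP', 'SCAREM']
```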

    Three core vulnerabilities are identified as being exacerbated by exponential growth in digital performance: Critical Infrastructure Protection (CIP), understood as the primary precondition for societal resilience and survival of the nation-state; the emerging importance of defining shared responsibilities across Public-Private Partnerships (PPP), understood in terms of cross-sectorial collaboration, coordination, and shared accountability; and finally the inherent attributes in digital technology that are decisively conducive to vulnerabilities in human-computer interaction, as expressed through the acronym SCAREM: Stealth, Challenge, Anonymity, Reconnaissance, Escape, Multiplicity.

    These attributes appear to affect the psychological landscape of human-computer interaction found across national security concerns, crime control and corporate risk management in the turbulent digital domain.

    Bridging gaps in the conceptual convergence of battlespace, humanitarian space and cyberspace is pivotal for political understanding of the need to condense the complexities of digitization into applicable strategies that address vulnerabilities generated by digital growth.

    Vulnerabilities related to digitization create a shift in traditional governmental roles and responsibilities. Digitization entails, for example, paradigmatic shifts between ‘military vs. police’ and ‘national security vs. crime’. In addition, there is movement in the traditional perceptions of boundaries between authorities and private entities: a shift in the relationship between ‘state vs. private’, and ‘military vs. non-military’.

    Digitization challenges a series of traditional key paradigms, suggesting that digitization should be managed from a more holistic perspective. The digital domain dissolves traditional organizational and professional boundaries. It is an emerging political reality we must learn to navigate in new networks of cooperation.

    Providing robustness to counter vulnerabilities in the digital domain is first and foremost a cross-sectorial task. A robust response to digital vulnerabilities therefore takes its point of departure in cross-sectorial cohesion. The cross-sectorial division of responsibilities in national security policy appears to be a precondition for successful implementation of resilience. It requires cross-sectorial efforts to defend the nation’s digital infrastructure. It requires cooperation between defense, police and civil society.

In this particular context, the convergence of roles and responsibilities shared across military, police and civil society is introduced as ‘The Security-Crime-Risk Continuum’: a theoretical conceptual construct based on the thesis that robustness of any digital community is achieved through multi-disciplinary and cross-sectorial interaction. The thesis implies a holistic, comprehensive approach that is considered a prerequisite for providing informed modalities of jurisprudence and governance of security in exponential digital progress.

    The continuum ascertains three conceptual frameworks essential to governance in exponential digital progress: ‘security’, understood within the frame of security studies as national security in terms of protecting the sovereignty of the nation-state; ‘crime’, understood in the frame of criminology as national juridical systems and law enforcement; and finally ‘risk’, understood in the framework of corporate risk management practices in the emerging market economics of the 4th Industrial Revolution.

    The ‘Security-Crime-Risk Continuum’ is the conceptual theoretical framework and the primary baseline thesis underpinning the intended exploration of the virtual gaps between social controls and technological innovation in the convergence of military, law enforcement and corporate spheres.

    Combining SCAREM@PPP.CIP and the Security-Crime-Risk Continuum into a research model provides a framework that allows for analysis of the cybergaps forming in the convergence of national security, crime, and corporate risk. In short, this model will provide an analysis of CIP, PPP, and SCAREM in relation to the Security-Crime-Risk Continuum.
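One way to picture the proposed research model is as a grid crossing the three vulnerabilities against the three frameworks of the continuum. The cell labels below are placeholders for the analysis each intersection calls for, not findings:

```python
# Sketch of the research matrix: vulnerabilities (rows) crossed with
# the Security-Crime-Risk Continuum (columns). Each cell marks one
# intersection where cybergaps are to be analyzed.
vulnerabilities = ["CIP", "PPP", "SCAREM"]
continuum = ["security", "crime", "risk"]

matrix = {
    v: {c: f"cybergaps of {v} under the lens of {c}" for c in continuum}
    for v in vulnerabilities
}

# Example cell: criminogenic attributes viewed through criminology.
print(matrix["SCAREM"]["crime"])  # cybergaps of SCAREM under the lens of crime
```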

    The purpose of this exercise is to provide research that aids policy and strategy development by examining the repercussions of introducing digital technology into societal core functions.

    In this reading the emerging matrix – or analytical model – aspires to present a roadmap for future management of vulnerabilities in exponential digital progress.

    The issues inserted in this initial draft model are not exhaustive, but rather representative of the range of complexities and interdependencies we need to address in an attempt to bridge virtual gaps between social controls and technology; however, the model offers a framework that grants an overview for further meticulous discussion.


    In sum, taking its point of departure in a discussion of belligerent vs. benevolent perceptions of exponential digital progress, a number of paradigmatic shifts are identified as fundamental game changers causing a blur in conceptual boundaries, accelerating a merger of battlespace, humanitarian space and cyberspace.

    The initial literature review recognizes common traits, which become particularly relevant in a discourse that accentuates the importance of digital performance as the root source of convergence across emerging trends in warfare, crime perpetration and commodification of data in the digital domain.

    Digital technology and the exponential growth in digital performance appear to be the underlying factor exacerbating convergence across theoretical frameworks of security studies, criminology and market economics. The merger of previously distinct frameworks appears to be the common enabler of complex ‘cybergaps’, describing a mismatch between what technology is capable of and what social controls are supposed to do. Cybergaps call upon the necessity for defining resilience in response to social repercussions of exponential digital progress across the traditional mindset of military, police and civil society.

    Digital governance is most commonly articulated within the conceptual framework of cybersecurity; however, the intangibly anecdotal theoretical construct of ‘cybersecurity’ is not necessarily applicable in terms of responsibility, accountability, safety, social security, trust, reliability, ethics, privacy, freedom, transparency, and democratic control.

    Nevertheless, cybersecurity encompasses a range of conceptual connotations across most branches of theoretical discourse in relation to vulnerabilities rooted in digitization. In the current political reality, exponential integration of digital performance into societal core processes impacts most discourse in meat-space.

    This raises the question of whether the conceptual construct of cybersecurity is applicable as a common denominator for bridging gaps across emerging complexities in the digital realm. The construct of cybersecurity is subjected to dismantling across a range of sources.

    Conceptual interpretations of digital disruption and resilience transcend all theoretical frameworks; however, defining accountability, in terms of pervasive legislative and regulatory efforts, also remains a common trait to be determined in relation to the emerging convergence.

    The conceptual connotations of digital disruption, resilience, and accountability are considered key representations of common traits that are acknowledged across all conceptual theoretical frameworks that are employed in this analysis of the impact of digital progress; however, digital disruption, resilience, and accountability are interpreted differently and given diverse weight in discussions on the impact of digital technology in relation to national security, crime perpetration, and corporate risk management.

    Analysis of diverging definitions of digital disruption, resilience, and accountability is arguably a key enabler for bridging cybergaps, and is carried forward as a core principle in a matrix that identifies cybergaps and relates regulatory interdependencies in response to convergence among theoretical frameworks.

    Initially the essay identifies three core vulnerabilities exacerbated by digital performance: critical infrastructure protection (CIP), understood as the primary denominator for societal survival; the emerging importance of defining shared responsibilities across public-private partnerships (PPP), understood in terms of collaboration, coordination, and shared accountability; and finally the inherent attributes in digital technology that are decisively conducive to vulnerabilities in human-computer interaction (SCAREM).

    These attributes appear to affect the psychological landscape across national security concerns, crime control and market economics in the evolving digital domain.

In conclusion, the emerging matrix of cybergaps provides a point of departure for further deliberations that identify the complexities and interdependencies required to establish a regulatory architecture that addresses a number of red flags exacerbated by exponential growth in digital performance.


Bailes, A. (2006). Private Sector, Public Security. In A. Bryden & M. Caparini (Eds.), Private Actors and Security Governance. Berlin: Lit Verlag.

Bendrath, R. (2003). The American Cyber-Angst and the Real World. Any Link? In R. Latham, Bombs and Bandwidth, the Emerging Relationship between Information Technology and Security. New York: The New Press.

Brenner, S. (2007). At Light Speed: Attribution and Response to Cybercrime/terrorism/warfare. Journal of Criminal Law and Criminology, Vol.97(2) , pp. 379-475.

Brenner, S. (2010). Cybercrime: Criminal Threats from Cyberspace. Santa Barbara, CA: Praeger.

Capeller, W. (2001). Not such a Neat Net: Some Comments on Virtual Criminality. Social & Legal Studies Vol. 10:2 , pp. 229-242.

Christensen, K. K. (2018). Corporate Zones of Cyber Security. Ph. D. Thesis, Department of Political Science, University of Copenhagen: Academic Books.

Clarke, R. R. (1997). Situational Crime Prevention: Successful Case Studies. New York: Harrow and Heston.

Cohen, L., & Felson, M. (1979). Social Change and Crime Rate Trends: A Routine Activity Approach. American Sociological Review no. 44 , pp. 588-608.

Cornish, D., & Clarke, R. (1987). Understanding Crime Displacement: An Application of Rational Choice Theory. Criminology 25, 4 , pp. 933-947.

Dahlberg, R. (2017). From Risk to Resilience, Challenging Predictability in Contemporary Disaster and Emergency Management Thinking. Ph. D. Thesis Copenhagen: Danish Defence Academy.

Darczewska, J. (2014). The Anatomy of Russian Information Warfare. Ośrodek Studiów Wschodnich, No. 42 .

Davies, N. (2015). 5 ways to Understand the 4th Industrial Revolution. Retrieved April 3, 2016, from World Economic Forum, Davos:

Demetriou, C., & Silke, A. (2003). A Criminological Internet “Sting”: Experimental Evidence of Illegal and Deviant Visits to a Website Trap. British Journal Of Criminology no. 43 , pp. 213-222.

Everard, J. (2000). Virtual States: The Internet and the Boundaries of the Nation State. London: Routledge.

Grabosky, P. N. (2001). Virtual Criminality: Old Wine in New Bottles? Social & Legal Studies Vol. 10:2, pp. 243-249.

Hein, T., & Honoré, T. (2016). Disrupt eller dø: En guide til din digitale ledelsesudfordring [Disrupt or Die: A guide for your digital leadership challenge]. Copenhagen: People’s Press.

Hwang, T. (2015). Weapons of Mass Persuasion. Retrieved May 2, 2018, from Motherboard Vice:

Kaldor, M. (2011). New and Old Wars – Organised Violence in a Global Era. Cambridge, UK: Polity Press.

Krahmann, E. (2010). States, Citizens, and the Privatization of Security. Cambridge: Cambridge University Press.

Krahmann, E. (2008). The Rise of Non-State Actors in Security Governance. In P. Kenneth, Governance, Globalization, and Public Policy. Cheltenham: Edward Elgar Publishing Ltd.

Kurzweil, R. (2005). The Singularity is Near: When Humans Transcend Biology. US: Penguin, Viking.

Lanoszka, A. (2016). Russian Hybrid Warfare and Extended Deterrence in Eastern Europe. International Affairs 92-1 , pp. 175-195.

Matz, S. C., Kosinski, M., Gosling, S. D., Popov, V., & Stillwell, D. (2015). Facebook as a Research Tool for the Social Sciences: Opportunities, Challenges, Ethical Considerations, and Practical Guidelines. American Psychological Association Vol. 70, no. 6 , pp. 543-556.

Matz, S. (2011a). Shared Accountability: Security, Crime and Risk in Cyber-based Critical Infrastructures. 53rd Annual Seminar of Scandinavian Research Council for Criminology (pp. 115-132). Stockholm: Scandinavian Research Council for Criminology.

Matz, S. (2011b). Public-Private Resilience: State vs. Private Conceptions of Security Risk Management in Cyber-based Critical Infrastructures. European Intelligence and Security Informatics Conference. Athens: IEEE Conference papers.

NATO. (2016). Commitment to Enhance Resilience, Press Release. Retrieved October 20, 2017, from NATO:

Newman, G. R., & Clarke, R. V. (2003). Superhighway Robbery: Preventing E-commerce Crime. Cullompton: Willan.

Nissen, T. E. (2015), #TheWeaponizationOfSocialMedia-@Characteristics_of_Contemporary_Conflicts. Copenhagen: Royal Danish Defence Academy.

Nissenbaum, H. (2005). Where Computer Security Meets National Security. Ethics and Information Technology, Vol. 10 , pp. 61-73.

Petersen, K. L. (2008). Risk, Responsibility, and Roles Redefined: Is Counterterrorism a Corporate Responsibility? Cambridge Review of International Affairs, Vol. 31:3 , pp. 403-420.

Rosenstand, C. (2018). Digital Disruption. Aalborg: Aalborg University Press.

Shea, J. (2016). Resilience: A Core Element of Collective Defence. Retrieved December 7, 2016, from Nato Review Magazine, 1st June 2016:

Suler, J. (2004). The Online Disinhibition Effect. Cyberpsychology & Behavior, Vol. 7, No. 3, pp. 321-326.

Sunstein, C. R., & Vermeule, A. (2008). Conspiracy Theories. Retrieved April 28, 2018, from Social Science Research Network, 4th June 2008:

Vang, M. R. (2018). We are Facing a Paradigm Shift. Altinget Digital 31 January 2018 . Copenhagen, Denmark: Altinget.

Whitworth, B., & de Moor, A. (2009). Handbook of Research on Socio-Technical Design and Social Networking Systems. Haag: IGI Global.

Yar, M. (2005). The Novelty of ‘Cybercrime’: An Assessment in Light of Routine Activity Theory. European Journal of Criminology Vol. 2 , pp. 407-427.

Yould, R. E. (2003). Beyond the American Fortress: Understanding Homeland Security in the Information Age. In R. Latham (Ed.), Bombs and Bandwidth: The Emerging Relationship between Information Technology and Security. New York: The New Press.

Zuboff, S. (2016). The Secrets of Surveillance Capitalism. Frankfurter Allgemeine, 03.03.2016.
