In his book ‘Sapiens: A Brief History of Humankind’, Yuval Noah Harari advances the sobering thesis that, within the next 100 years, the most significant revolution will concern the human condition itself (das Menschsein selbst). This is a crucial thesis, for despite all historical changes in recent millennia, the human condition has served as a constant. We did not change. That is, we had the same bodies and more or less identical physical and cognitive capacities. Harari argues that this constant is bound to change, and he points to biotechnology, human-computer interfaces, artificial intelligence, or a combination of these as manifestations of an imminent shift toward a transhumanist society.

He bemoans – and I agree – that technological ambitions are often presented as remedies for physical or psychiatric pathologies, even though scientists are often not fully aware of the wider social implications these might have, especially when the boundary between therapeutic and enhancement applications is blurred. In a recent interview in the Süddeutsche Zeitung, Harari appears alarmed at how limited some scientists’ awareness is of the practical implications of their work, despite their formidable technical knowledge. Interestingly, he makes that statement in the context of the EU-funded Human Brain Project.

This provocative remark struck a chord with me, especially as I have already expressed similar views on the perils (rather than promises) of what is now described as organisational neuroscience. In fact, there is no other research topic I am currently involved in that alerts me more to emerging threats to the individual autonomy, free agency, and physical/psychological integrity of those participating in this type of research, especially when it is geared toward enhancement rather than therapeutic purposes.

To exemplify, consider the work by Dara Hallinan and colleagues (link can be found here). If they are correct in their assertion that neuro-data (i.e., data collected via neuro-imaging techniques such as EEG, fMRI, or CT) constitute a novel form of data whose unique characteristics are only insufficiently accounted for in existing EU data protection frameworks (and probably – by extension – in the ethics frameworks governing scientific studies too), then there is reason to be both cautious and sceptical about their increasing application at work and beyond (e.g., in the form of brain-computer interfaces). In light of the potential lack of appropriate data protection governing the collection, analysis and storage of (especially commercially used) neuro-data, the emancipatory imbalance between the data processor (i.e., public or private research institutions) and research participants is likely to widen further. As Harari notes, “we lose control” in the context of bio-technological advances.

Thus, it seems plausible to suggest that this loss of control is associated with the speed of technological advances, which outpaces relevant legislation. The case of biotechnology (including neuroscience) is highly germane in this respect. For Harari, however, governments and legislators seemingly do not even notice these developments (i.e., how technological advances – current or forthcoming – necessitate the introduction of new, or the amendment of existing, legislation). Instead, large corporations (e.g., Google) pursue visions of technological progress, and of how it can (putatively) facilitate our daily lives, with unrelenting zeal.

There are some scientists who, by way of engaged and thoughtful critique, have tried to close the gap between technological advances and our ethical maturity. But these are minority voices. Many others, by contrast, do not perceive any ethical issues at stake in the question of whether collecting neuro-data is any more sensitive than collecting conventional types of data. And how could they? Within the paradigmatic tenets of positivism, there is no scope for ethics and moral values as steers of scientific inquiry that might question the very rationale of pursuing certain studies a priori. Consistent with this, a staunch advocate of organisational neuroscience once wrote to me that this type of data is no more sensitive than other types of data, such as sexual orientation or drug usage; any conclusion to the contrary was considered “premature”.

So what does the relevant literature suggest here? To begin with, the ESRC Framework for Research Ethics has a different view on this point, explicitly stating that:

“Research organisations are responsible for determining when research is considered to involve more than a minimum risk. We would consider that the following research would normally be considered as involving more than minimal risk and therefore most likely to require a full ethics review” (p. 8, italics added for emphasis).

A subsequent point elaborates:

    “Research involving access to records of personal or sensitive confidential information, including genetic or other biological information, concerning identifiable individuals” (p. 9).

More pertinently perhaps, consider the recent work by Hallinan et al., who argue that neuro-data is more sensitive compared to conventional types of data due to the inherent difficulties in severing the link between the data and the individual associated with it. They note:

   “When data cannot any longer be related to a civil identity, they will be regarded as ‘anonymous’ [in the respective EU Directive]. This concept will not function in the same way in relation to neurodata. While the link neurodata has to an identifiable individual’s civil identity can be severed (in that one could destroy the name at the top of a neurodata file leaving only the raw data), the data still remains a unique representation of that specific individual. Accordingly, neurodata cannot be referred to as ‘anonymised’ in the same way as, for example, travel information, could” (p. 66).

I share these concerns, even though arguments presented in previous publications of mine led some colleagues to conclude that I am “straying into science ideology”, which is an all-too-obvious yet unimaginative attempt at rhetorical bullying. It is also very similar to the arguments rehearsed in the context of ‘dustbowl empiricism’ and its implicit appeals to rationality and evidence (see Roy Suddaby’s last editorial in AMR, entitled “Why Theory?”). Yet, as I have shown here, it is the current advocacy of organisational neuroscience that represents the dominant ideological movement. As a reminder, ideologies are characterised as systems of values, ideas, and beliefs that seek to legitimise extant hierarchies and power relations and to preserve group identities. Power is defined as access to resources (in this instance, journal space), and a close look at the articles published in top-tier US management journals quickly reveals a publication bias in favour of neuroscience as a discipline that can (putatively) enrich management theory and practice. Yiannis Gabriel once astutely observed that “what gets published and what gets rejected … are barely concealed exercises in power and resistance … what gets published is one of the most political processes” in today’s academia (p. 761).

Now consider this. I once provided a count of 15 to 0 in favour of advocates (as shown here), which might have changed slightly depending on which publications are counted. Still, previous experience with two top-tier US journals also suggests that there was limited interest in providing publication space for dissenting views. In one case, the editor did not bother sharing the comments of one reviewer, while another journal rejected a paper based upon a single review. And of course, the damage of publication bias becomes most visible if we take into account the reduced likelihood that published findings reflect a true (that is, non-null) effect. This has been forcefully demonstrated in the context of neuro-data in a recent meta-analysis by Button and her colleagues. Pointing to the ethical problems this creates, the authors conclude that “unreliable research is inefficient and wasteful” (p. 365).

As I write these lines, I am reminded of Christian Jarrett and his recent book “Great Myths of the Brain”. Prudent in expression, he suggests that we should “seek out the truth for its own sake, not in the service of some other agenda” (p. 5, italics added). When I see articles advocating organisational neuroscience being fast-tracked in the most prestigious journals despite years-long backlogs of accepted articles, I find it hard to believe that publication bias does not play a role. Articles presenting positive and clean results, or those addressing novel and attractive topics, are more likely to be published, not least because they help boost a journal’s impact factor. All this makes me wonder about the very essence of, and motivation behind, our profession in this day and age.

If Harari is correct in predicting the transhumanist society, the current publication bias in relation to organisational neuroscience can potentially expedite its arrival. Will a transhumanist society give rise to more individual and collective happiness? Any possible response is likely to be contentious. However, if our task as social scientists is to examine how we as individuals behave in and influence the world around us, then thorough and informed ethical critiques and discussions – in which we, as social scientists, exercise strong influence on the future emergence and manifestations of the transhumanist society – are of utmost importance.

© Dirk Lindebaum. All rights reserved.

NOTE: Please consult the ‘publication’ section of this website to review the relevant articles I have been involved in on the topic of organisational neuroscience.
