Social science and the emerging transhumanistic society

By Dirk Lindebaum | February 26, 2015

In his book ‘Sapiens: A Brief History of Humankind’, Yuval Noah Harari proffers the sobering thesis that, in the next 100 years, the most significant revolution will concern the human condition itself (das Menschsein selbst). This is a crucial thesis, for despite all historical changes in recent millennia, the human condition has served as a constant. We did not change. That is, we had the same bodies and more or less identical physical and cognitive capacities. Harari argues that this constant is bound to change, and he refers to biotechnology, human-computer interfaces, artificial intelligence, or a combination of these as manifestations of that imminent change toward a transhumanistic society.

He bemoans – and I agree – that technological ambitions are often presented in terms of remedies for physical or psychiatric pathologies, even though scientists are often not fully aware of the wider social implications this might have, especially if the boundary between therapeutic and enhancement applications is blurred. In a recent interview in the Sueddeutsche Zeitung, Harari appears alarmed at the limited awareness some scientists have of the practical implications of their work, despite their formidable technical knowledge. Interestingly, he makes that statement in the context of the EU-funded Human Brain Project.

This provocative remark struck a chord with me, especially as I have expressed similar views before on the perils (rather than promises) of what is now described as organisational neuroscience. In fact, no other research topic I am currently involved in alerts me more to emerging threats to the individual autonomy, determinism, and physical/psychological integrity of those individuals participating in this type of research, especially when it is geared toward enhancement rather than therapeutic purposes.

To exemplify, consider the work by Dara Hallinan and colleagues (link can be found here). If they are correct in their assertion that neuro-data (i.e., data collected via neuro-imaging techniques such as EEG, fMRI, or CT) constitute a form of novel data whose unique characteristics are only insufficiently accounted for in existing EU data protection frameworks (and probably – by extension – ethics frameworks governing scientific studies too), then there is reason to be both cautious and sceptical about their increasing application at work and beyond (e.g., in the form of brain-computer interfaces). In light of the potential lack of appropriate data protection governing the collection, analysis and storage of (especially commercially used) neuro-data, the emancipatory imbalance between the data processor (i.e., public or private research institutions) and research participants is likely to deteriorate further. As Harari notes, “we lose control” in the context of bio-technological advances.

Thus, it seems plausible to suggest that the loss of control is associated with the speed of technological advances, which outpaces relevant legislation. The case of biotechnology (including neuroscience) is highly germane in this respect. For Harari, however, governments and legislators seemingly do not even notice these developments (i.e., how technological advances – current or forthcoming – necessitate the introduction of new, or the amendment of existing, legislation). Instead, large corporations (e.g., Google) pursue with unrelenting zeal their visions of technological progress and of how it can (putatively) help facilitate our daily lives.

There are some scientists who, by way of engaged and thoughtful critique, have tried to close the gap between technological advances and our ethical maturity. But these are minority voices. Many others, by contrast, do not perceive any ethical issues at stake, nor regard the collection of neuro-data as any more sensitive than that of conventional types of data. And how could they? Within the paradigmatic tenets of positivism, there is no scope for ethics and moral values as steers of scientific inquiry that might question the very rationale of pursuing certain studies a priori. Consistent with this, a staunch advocate of organisational neuroscience once wrote to me that this type of data is no more sensitive than other types of data, such as sexual orientation or drug usage. Indeed, any conclusion to the contrary was considered “premature”.

So what does the relevant literature suggest here? To begin with, the ESRC Framework for Research Ethics has a different view on this point, explicitly stating that:

“Research organisations are responsible for determining when research is considered to involve more than a minimum risk. We would consider that the following research would normally be considered as involving more than minimal risk and therefore most likely to require a full ethics review” (p. 8, italics added for emphasis).

A subsequent point elaborates:

“Research involving access to records of personal or sensitive confidential information, including genetic or other biological information, concerning identifiable individuals” (p. 9).

More pertinently perhaps, consider the recent work by Hallinan et al., who argue that neuro-data is more sensitive compared to conventional types of data due to the inherent difficulties in severing the link between the data and the individual associated with it. They note:

“When data cannot any longer be related to a civil identity, they will be regarded as ‘anonymous’ [in the respective EU Directive]. This concept will not function in the same way in relation to neurodata. While the link neurodata has to an identifiable individual’s civil identity can be severed (in that one could destroy the name at the top of a neurodata file leaving only the raw data), the data still remains a unique representation of that specific individual. Accordingly, neurodata cannot be referred to as ‘anonymised’ in the same way as, for example, travel information, could” (p. 66).

I share these concerns, even though arguments presented in previous publications of mine led some colleagues to conclude that I am “straying into science ideology”, which is an all-too-obvious yet unimaginative attempt at rhetorical bullying. It is also very similar to the arguments rehearsed in the context of ‘dustbowl empiricism’ and its implicit appeals to rationality and evidence (see Roy Suddaby’s last editorial in AMR, entitled “Why Theory?”). Yet, as I have shown here, it is the current advocacy of organisational neuroscience that represents the dominant ideological movement. As a reminder, ideologies are characterised as a system of values, ideas, and beliefs that seeks to legitimize extant hierarchies and power relations and preserve group identities. Power is defined as access to resources (in this instance, journal space), and a close look at the articles published in top-tier US management journals quickly reveals a publication bias in favour of neuroscience as a discipline that can (putatively) enrich management theory and practice. Yiannis Gabriel once astutely observed that “what gets published and what gets rejected … are barely concealed exercises in power and resistance … what gets published is one of the most political processes” in today’s academia (p. 761).

Now consider this. I once provided a count of 15 to 0 in favour of advocates (as shown here), which might have changed slightly depending on which publications are counted. Still, previous experience with two top-tier US journals also suggests that there was limited interest in providing publication space for dissenting views. In one case, the editor did not bother sharing the comments of one reviewer, while another journal rejected a paper on the basis of a single review. And of course, the damage of publication bias becomes most visible if we take into account the reduced likelihood that the findings reflect a true (that is, non-null) effect. This has been forcefully demonstrated in the context of neuro-data in a recent meta-analysis by Button and her colleagues. Pointing to the ethical problems this creates, the authors conclude that “unreliable research is inefficient and wasteful” (p. 365).

As I write these lines, I am reminded of Christian Jarrett and his recent book “Great Myths of the Brain”. Prudent in expression, he suggests that we should “seek out the truth for its own sake, not in the service of some other agenda” (p. 5, italics added). When I see articles advocating organisational neuroscience being fast-tracked in the most prestigious journals despite years-long backlogs of articles, I find it hard to believe that publication bias does not play a role. Articles presenting positive and clean results, or those that address novel and attractive topics, are more likely to get published, not least because they help boost the journal’s impact factor. All this makes me wonder about the very essence of, and motivation behind, our profession in this day and age.

If Harari is correct in predicting the transhumanistic society, the current publication bias in relation to organisational neuroscience can potentially expedite its arrival. Will a transhumanistic society give rise to more individual and collective happiness? Any possible response is likely to be contentious. However, if our task as social scientists is to examine how we as individuals behave in and influence the world around us, then thorough and informed ethical critiques and discussions in which we – as social scientists – exercise strong influence on the future emergence and manifestations of the transhumanistic society are of utmost importance.

© Dirk Lindebaum. All rights reserved.

NOTE: Please consult the ‘publication’ section of this website to review the relevant articles I have been involved in on the topic of organisational neuroscience.


What neuroscience can(not) bring to the world of business

By Dirk Lindebaum| April 4, 2016

Through the clinical application of neuroscience, researchers are now in a much better position to understand and treat brain-related illnesses. However, hardly a day passes without neuroscience being hailed in the media as a panacea for a variety of problems afflicting many non-clinical disciplines. Follow your imagination, and add the prefix ‘Neuro’ to any kind of existing discipline, and – lo and behold – a brand new discipline emerges, whether it is Neuro-strategy, Neuro-leadership, Neuro-marketing, Neuro-economics, or Neuro-education to name only a few. The volume and speed with which studies in these domains are published is staggering.

Why does the prefix ‘neuro’ have such an enticing effect on business researchers and practitioners? Part of the answer is the assumption that neuroscientific data, such as functional magnetic resonance imaging (or fMRI), offer researchers the confidence to rely upon ‘hard’ and ‘objective’ data. The data are allegedly more rigorous because neuroimaging techniques do not rely upon individuals’ self-reports; instead, they measure all brain processes, whether conscious or unconscious. In other words, neuroimaging techniques would seem a better choice as unconscious processes can be made visible in ways that traditional questionnaire surveys cannot. The prefix ‘neuro’, then, seems to serve one key purpose, namely, that of legitimization.

The consequence of this is – and the argument is exploited by advocates along these lines – that we should be able to offer better predictions about important outcomes at work, especially but not exclusively the selection and development of effective leaders. According to some news articles, prospective leaders can enjoy an intensive brain-training retreat for a mere $15,000, and the whole ‘brain fitness’ market is expected to grow to $6 billion by 2020.

But more serious neuroscientists warn that the science and data that should justify that kind of growth are not at all that rigorous and valid. So why should business researchers and practitioners prick up their ears at this? The answer is as simple as it is of intrinsic importance in the world of business research and practice: when we believe that we are making the best selection decisions possible based upon neuroscientific data when, in fact, we are not, then we create the bad consequences for business practice that I and others have written and warned about.

To put this more practically, imagine your company is interested in hiring a leader without latent trait aggression, simply because the previous incumbent displayed this trait and caused considerable damage at work (e.g., increased staff turnover and stress-related sick notes). So you enlist the support of neuroimaging techniques to really make sure this does not happen again. You have, after all, bought into the argument that neuroscience can make better predictions (and you cannot be blamed for it, for in times of financial crises or stiff competition, that is a natural response). It turns out, however, that the person hired in the end displays the same kind of traits as the previous incumbent. Hence, considerable resources have been wasted on data that – at least for now – are unlikely to deliver what advocates and business coaches promise.

The recent ruling by the Federal Trade Commission (FTC) in the US against ‘Lumosity’ firmly underlines this point. The firm has to pay $2 million to settle FTC deceptive advertising charges for its “Brain Training” programme. This product was claimed to boost performance in everyday life and safeguard against cognitive decline. Consequently, it is crucial to recognise the ethical dimensions here, as unreliable research is frankly useless and wasteful.

But there are further nuances to the ethical concerns associated with using neuroscience in business settings. Foremost here is that neuroscience is increasingly applied beyond its original and important clinical purpose; what seemingly matters now are cosmetic applications in the pursuit of cognitive enhancement by way of drugs or non-invasive stimulation techniques. For example, proverbial ‘arms races’ are ongoing amongst student populations in the UK and elsewhere, motivated by the fear of unemployment and the desire to obtain a better degree classification in order to boost job prospects, even though the long-term health consequences are not well-established. Besides, past studies have shown that the effects of these drugs are only small to moderate, implying that consumers may not experience the kind of cognitive enhancement they are led to believe is possible.

In the world of business, a similar development can be observed. For instance, studies profess that electroencephalography (or EEG, a brain recording technique picking up electric signals), coupled with neurofeedback sessions, can aid in selecting or developing inspirational leaders. So let us assume you are one of the individuals identified with a ‘brain profile deficiency’, because you happen to have an anger management problem. You are then subjected to a series of neurofeedback sessions, following which it is concluded that you are now an inspirational leader.

Putting the invalidity of the conclusion aside (due to issues in the design of the study, which are explained here), it raises ethical questions if that kind of personality change can allegedly be induced at all (e.g., how would family and friends, who presumably like you for who you are, react to it?), and worse still if the effects of that change serve the company’s interests first, and yours second at best.

In sum, the injunction ‘buyer beware’ must be heeded if we are not to fall prey to dubious claims about the potential of neuroscience outside its core clinical application.

Note: This blog of mine originally appeared on the LSE Business Review website on the 4th of April 2016.

The Right Way to Get Angry at Work

By Deanna Geddes & Dirk Lindebaum | April 19, 2016

Anger has a place at work, Deanna Geddes and Dirk Lindebaum argue, as long as it meets three conditions.

Workplace anger often gets a bad rap. It conjures up images of hotheaded bosses who intimidate their employees, or creative types with big egos who can’t stand being critiqued. From multiple news outlets, we witness disgruntled coworkers “going postal” and resorting to violence after becoming enraged at something or somebody. Or perhaps we think of the everyday hassles that stir up resentment at work, from coworkers not meeting our expectations to passive-aggressive emails.

But not all anger is created equal. Some types of anger—particularly “moral anger,” a concept we delineate in a recent paper published in the Journal of Organizational Behavior—may be crucial to the health of our institutions.

Moral anger is triggered not because of something done (or not done) to us, but because of wrongdoings and ethical violations perpetrated against others. We get angry witnessing unfair or degrading acts against a colleague, biased and insensitive management practices, or even seriously flawed company policies and harmful products. Moral anger is an intense emotional state stemming from the violation of a recognizable moral standard that impacts others more than oneself. Importantly, it also prompts us to act to improve the situation, even in the face of significant personal risk.

When the term “moral” is applied to anger, it necessarily implies positive intentions: We seek to uphold moral standards, ensure fair treatment, or protect those who are vulnerable. As such, moral anger differs from other forms of anger in three major ways:

1. Universal standards are violated. Moral anger is triggered by the violation of universal standards of justice, ethics, or human dignity, rather than individualized, idiosyncratic preferences, values, or ideology. For example, in contrast to getting angry about a perceived personal insult by a coworker or boss, moral anger is more likely to emerge when an employee witnesses a coworker unfairly accused or punished when he was not responsible. Worse still would be a situation of managers bullying or intimidating their employee, demonstrating blatant abuse of their position of power. Universal standards are core to the moral fabric of society; society in general and healthy individuals in particular tend to be equally outraged at their violation.

2. We feel concern for others. Moral anger reflects an altruistic orientation, where one’s focus is on the needs and rights of others, more than our own. Rather than be upset because of our personal disadvantage or inconvenience, our angst stems from desiring to help others, especially those who are particularly vulnerable. I may be upset because you yelled at me, but getting angry—although justified—benefits primarily my own ego and pride. Becoming angry while witnessing another’s berating, however, would more likely demonstrate moral anger. Selfishness or self-centeredness is antithetical to moral anger.

3. We take corrective action. Moral anger prompts us to do something to help improve the situation. We recognize, of course, that anger is an emotion, not an action. But if anger doesn’t prompt some action meant to correct the situation, then it lacks the “moral” moniker. Feeling indignant about someone’s mistreatment but doing nothing to help, because “it’s none of my business” or “someone else can/should/will handle this situation,” is not moral anger. To refer back to the previous examples, to qualify as moral anger, the colleague witnessing the unfair reprimand must also support or defend the embattled coworker. Only then can we speak of the corrective power of moral anger.

Whistleblowing often involves moral anger—for example, in the celebrated case portrayed in the 1999 film The Insider. In 1996, Jeff Wigand (then Vice President of Research and Development at Brown and Williamson Tobacco and responsible for developing reduced-harm cigarettes) blew the whistle that his company intentionally increased the amount of nicotine in their cigarettes, thereby enhancing their addictive nature. After initially voicing his objections inside the company, he reported being harassed and receiving anonymous death threats.

In a 1996 interview with Mike Wallace of 60 Minutes, Wigand stated that he “got angry” about the company’s decision to abandon the safer cigarette and ultimately found the courage to speak up. However, his whistleblowing brought significant danger to him and his family at the same time that it exposed unethical practices by U.S. tobacco companies. In the end, Wigand’s actions potentially benefitted millions of people, who would have been exposed to this especially harmful but legal product.

Perhaps more mundane and less dramatic—although no less valuable—are personal experiences when our own anger prompts us to speak up on behalf of colleagues experiencing injustice, intimidation, bullying, discrimination, cruelty, or disrespect. Moral anger is a type of “energy” we can use to overcome what for many of us is an inclination to stay silent or uninvolved in conflict. As political philosopher Edmund Burke reminds us, “The only thing necessary for the triumph of evil is for good men (or women) to do nothing.”

The benefits of encouraging employees to speak up when they witness threatening behaviors, unfair accusations, or coercive tactics are plentiful. Minimally, challenging these moral violations in the workplace exposes the acts and perpetrators. Allowing space for the expression of moral anger at work can promote environments of support, compassion, courage, integrity, and fairness. Managers who allow or even encourage employees to express moral anger can learn not only what is happening but also—and perhaps more importantly—what needs to change in their realms of influence. Displays of anger at work signal something is wrong that needs management’s immediate attention and careful response.

Are we saying we want more angry workplaces? Yes and no. Self-serving expressions of anger should always be kept to a minimum. However, if we witness something that is so wrong that it makes us angry, say something…do something. Organizations and their individual members benefit when morally-wrong behaviors are addressed and eliminated.

Are you potentially sticking your neck out by expressing moral anger? Yes. Is the risk worth taking, given the chance to benefit someone else? Yes! Moral anger shows we care about others, not just ourselves. At the same time, it can help correct many of the woes we experience at work and in society. As one of us argues in a new essay, a world without anger would be a world without “critical corrections,” and that’s a world we don’t want to live in—no matter how peaceful and harmonious it appears.