Technology, which has made our lives immeasurably better (we live longer and healthier lives because of it), is now one of the greatest threats facing humankind. In brief, technology is more pervasive and invasive than ever before. It encompasses and affects every aspect of our lives. From chip implants that threaten to alter our brains and bodies irreparably, to artificial intelligence and robots that threaten to relegate humans to permanent subservience, to driverless cars that threaten to cause millions to lose their livelihoods, and not least of all their dignity, technology is out of control.
For this reason alone, it’s imperative that we research how technology can be used for the betterment of humankind. I cannot think of anything more relevant and more appropriate for RRBM. Based on my philosophical background, I outline below what such research looks like.
As a case in point, consider the fact that MIT engineers have recently developed gold-leaf-like tattoos that can be placed directly on our bodies[i]. Their prime purpose is to allow us to “communicate seamlessly” with all of our marvelous devices. Needless to say, no thought was given to the fact that young people already sleep with their cell phones directly under their pillows lest they miss an important call or text during the night. Now their skin will be buzzing all night long as well. Say goodbye to the benefits of restorative sleep. Technology is thereby disrupting us as much as, if not more than, the world around us.
Because they are “used” (“abused and misused” is more accurate) in ways not thought of, or not given serious consideration, by their creators, all technologies come with significant downsides and negative consequences. Since this is universally the case, why are technologists generally unable and unwilling to take corrective actions to counteract the negative effects before they produce irreparable harm? For instance, prior to its launch, why didn’t Facebook assemble teams of kids, parents, and psychologists to think through potential problems? If it had, I am convinced that cyberbullying would have emerged as a serious issue. Further, when the problem became readily apparent, why didn’t Facebook act immediately to thwart cyberbullying? Why did it take so long to act responsibly?
In the same vein, Facebook should have anticipated that foreign powers would use it to spread disinformation and misinformation to influence our elections, and that it would provide a platform for hate speech. Only after its failure to act responsibly became abundantly clear, once again, was it forced to take action. In short, by steadfastly opposing all reasonable regulation from the very beginning, social media companies have brought the need for more severe regulation on themselves. Tech companies not only need to be studied but also held to account.
As we move forward, the stakes are even greater. Artificial intelligence and robots threaten the fundamental role of humans as never before. Indeed, every day, more and more jobs are at risk of being taken over by robots that supposedly can do everything cheaper and faster than humans can. The question “Will robots serve humans, or will those of us who are still around serve robots?” is no longer the stuff of science fiction. The status of humans is more problematic than ever.
Based on my research, one of the biggest reasons why technologists and tech companies are both unable and unwilling to contemplate the negative aspects and harmful consequences of their marvelous creations is that they are prisoners of an underlying, largely taken-for-granted belief system that directs them to see only the positive. Indeed, they rhapsodize over the positive aspects of their wondrous inventions to the near, if not total, exclusion of anything negative. This tendency is in fact one of the prime components of that belief system: technologists need only concern themselves with the positive aspects of their creations and should not waste their time thinking about anything negative. Yes, this was true of earlier technological revolutions as well, but none of them approached the current one in the scale of its impacts.
I call this belief system “The Technological Mindset.” Until we acknowledge and deal adequately with it, we will continue to suffer the ill effects of technology. In my forthcoming book, Technology Run Amok: Crisis Management in the Digital Age, I discuss in depth how to counter it.
For instance, before any new technology is unleashed, a serious audit of its social impacts, both negative and positive, needs to be conducted by panels made up of technologists, parents, social scientists, teachers, children, and others. A technology should be adopted if and only if it continues to pass the most severe Social Impact Assessments we can muster. In other words, the burden is placed squarely on technologists to justify their creations and to ensure that the negative impacts are not only given serious thought but are, to the best of our ability, kept under control. Furthermore, as with clinical trials for new drugs, such assessments cannot be left to voluntary compliance. They have to be mandatory.
Ideally, such an assessment would be an integral part of the development of every new technology from as early on as possible. The cost of assessment would thereby be part of the cost of development. If history is any guide, as costly as such assessments are, they cost less than cleaning up after the fact. Indeed, the field of crisis management has shown repeatedly that organizations that are prepared for crises not only experience fewer of them but are substantially more profitable. In the same way, Social Impact Assessments are not only the ethically right thing to do; they are good for business and for society as a whole.
One of the key components of “The Technological Mindset” is the unbridled assertion that technology is the solution to all our problems, including those caused by technology itself. As a result, any damage and disruption to humankind is justified. If substantial numbers of people are inconvenienced, or worse, lose their jobs, then that’s the cost of progress, for the advance of technology is inevitable. What’s more, the sooner we’re all replaced by robots that can do everything cheaper and faster, supposedly the better off we’ll all be (that is, those of us who are still around to serve the robots). Humans are thus devalued as never before.
Contrary to what its proponents claim, the unrestrained advance of technology is not inevitable. In short, we need to change direction. Unless the underlying belief system driving technology is confronted, and ultimately changed, nothing substantial will change. Technology will just lurch from one crisis to the next. Indeed, we are already witnessing the beginnings of a backlash against technology as a whole. Dealing with it is a vital area of research.
In sum, I can think of nothing more important than contributing to the ethical management of technology. We need to study the social, political, and economic consequences of the decisions made by technologists and technology companies.
Ian I. Mitroff is one of the principal founders of the modern field of crisis management. He is an Affiliate of the Center for Catastrophic Risk Management at UC Berkeley and Professor Emeritus at USC.
He is a Fellow of the American Psychological Association, the American Association for the Advancement of Science, and the American Academy of Science.
This blog is based on a forthcoming book, Technology Run Amok: Crisis Management in the Digital Age, Palgrave Macmillan, 2018.