
Rethink Technology: A Personal Odyssey. ~Preface~

© 2018, Tom Mahon 

Copyrights on the illustrations remain with the original copyright holders.

This is my introduction to an upcoming book, drawing on my 40-year love/hate relationship with “high tech” in Silicon Valley. For the past century, the natural and life sciences have shown us the inter-connections evident throughout the universe: the Great Web of Being. Unfortunately, our technologies are still largely based on a wedge shape, separating us from nature, each other, and even ourselves: the Great Chain of Being. The rush to create a virtual, artificial, digital wall to further separate us from nature will only make things worse. If technology is how we leverage natural forces to our benefit, then it’s time to rethink the entire technology enterprise. Because what’s coming in the next decade will redefine the human being. And we don’t even have an agreed-upon definition of what it means to be human now.


We’ve arranged a global civilization in which most crucial elements profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a recipe for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces. — Carl Sagan, The Demon-Haunted World (1995)

Homo habilis (tool-maker). One million years ago

I’ll add another ingredient to Carl Sagan’s concern. About 90 percent of all the scientists and technologists who ever lived are working today to create greater capabilities. But about 90 percent of history’s greatest wisdom teachers who urged us to act responsibly have been dead for over 1,500 years.

Homo singularity (on the verge of what?). Ten years from now

Although some echoes remain in our time: The day will come when, after harnessing space, winds, the tide and gravitation, we shall harness the energies of love. And, on that day, for the second time in the history of the world, humanity will have discovered fire. — Pierre Teilhard de Chardin, S.J.

* * * *

Homo caritas? One million years from now.

Some years ago, I spoke at a conference co-sponsored by the Artificial Intelligence Lab at MIT and the Boston Theological Institute, a consortium of divinity schools including Harvard’s.

The speaker ahead of me at MIT that morning was a prominent astrophysicist whose talk echoed the conclusion in Nobel laureate Steven Weinberg’s Dreams of a Final Theory: The Search for the Fundamental Laws of Nature: “The more the universe seems comprehensible, the more it also seems pointless.”

I have a different take on the subject. If the universe is pointless, then why are there conscious beings like us to observe the pointlessness? There’s no point in that.

As we exchanged places at the podium, I asked the previous speaker if that was a wedding band he was wearing. 

“Yes, it is.”

“And you have children?” I asked. 

“And you love them?” 

“Of course, I love my wife and children,” he replied. 

“They bring joy into your life?” 

“They certainly do.” 

“Good. And yet you find the universe pointless? How is that?”

“I’m aware of that apparent disconnect,” he said. “And someday I plan to give it more thought.”

Sooner, rather than later, is a good time to start. 

With our advanced technologies we are drowning in data and information but, lacking any underlying notion of meaning or value in all that data, we rush ahead to acquire even more data for no other purpose than to acquire more data more quickly.

An electronic signal can circle the earth almost eight times a second, while the fastest human takes almost four minutes to run a mile. We can’t keep up with our own ingenuity anymore.
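A rough back-of-the-envelope check, assuming the signal travels at essentially the speed of light around the Earth’s circumference:

\[
\frac{299{,}792\ \text{km/s (speed of light)}}{40{,}075\ \text{km (Earth's circumference)}} \approx 7.5\ \text{circuits per second}
\]

A four-minute mile, by contrast, works out to about 6.7 meters per second, some forty-five million times slower.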

So, the current thinking goes, let’s acknowledge that and merge with our technology and become a race of super-intelligent beings so we can go even faster.

There has even been the suggestion that with our enhanced intelligence we may, in time, become the operating system of the cosmos. (Have these thought-leaders ever stepped out of their techno-bubble and seen what we’ve made of our own home planet first?)

At the current pace, we will see more profound changes in the next decade than we have seen in the past century.

Work is well underway to develop mind-machine meshes, 3D printing of biological organisms, designer babies, and the ability to upload our memories into machines and thus attain immortality. (Which is something only the very wealthy will be able to afford, and they may want to be sure their heirs don’t have access to the power cord.)

We may even create intelligent, self-generating machines that won’t need us anymore, although why invest in such a scheme when you won’t be here to cash in?

Nobody knows what’s possible in the next decade, which makes it hard to prepare for what’s to come. But if we continue on the current trajectory, focused only on innovation for its own sake and new investment opportunities, we will look less like gods and more like lemmings heading for the cliff. 

We design all kinds of means to, but barely give a thought to what for? Or how come?

Yes, we can create a virtual reality. But will we — can we — imbue it with any sense of virtue? And if so, what is virtue under these circumstances? Is there a place for authentic integrity (AI) in artificial intelligence (AI)? 

We are progressing at breakneck speed, with little thought to where or why, creating nearly unbearable stress on ourselves, our communities and our planet.

And here’s the crowning paradox. Health care is finally beginning to embrace digital technology in the hopes of controlling the rising cost of such care. But to the extent that digital-induced stress in the workplace, the classroom and the highway causes even more dis-ease, what will be gained?

It’s clear by now that the winners of the digital revolution are a small coterie of investors and entrepreneurs with unprecedented wealth, power and influence. But with the exception of a few celebrity CEOs, we don’t even know their names. Yet they can count the hairs on our heads.

Carl Sagan wisely called for a public better informed about these matters. That is necessary, but not sufficient, to our situation now.

After nearly a half-century writing about technology, most of that time observing and participating in the digital revolution in Silicon Valley, I’ve seen enough to convince me that we have to rethink why we even do science and technology at all.

Developing science without con-science, and technical capability with no sense of moral responsibility, we built and deployed tens of thousands of nuclear warheads on intercontinental missiles with precise targeting abilities. And to pay for this, we left the public sphere — public health, education, infrastructure — in a shambles. This was not an accident.

We have assumed godlike power, without a godlike sense of justice or mercy, and with no reference to limits or direction.

And so (gulp), it’s time for a paradigm shift.

For the Scientific Revolution to happen in the 16th Century, and the Industrial/Technical Revolution in the 18th, it was necessary to break free from Church control of thought and speech, based on ancient scrolls. Society had to move past the horrors of the Inquisition, and the bloodbaths of the Reformation, to secure freedom of expression and scientific inquiry.

But to facilitate this, the scientists and churchmen agreed to split the uni-verse in two, creating a bi-verse in the human imagination. The natural and the supernatural were disconnected, each into a self-contained silo. Going forward, scientists and technologists would deal only with the natural and the physical. And the clergy with the supernatural and metaphysical.

Since then, scientists-as-scientists cannot express awe in the face of their discoveries, nor are they allowed to question the ‘why’ of things, such as why there’s existence at all. 

And engineers-as-engineers are not encouraged to question the ends to which their products are used by clients or sponsors or governments, only whether they meet tech-specs, technical specifications.

The absurd reduction of all this was the defense used by Nazi genocidists at Nuremberg: “We were only following orders!” Incredibly, after all the atrocities of the last century, this is still a bona fide defense.

Since then we’ve made amazing advances in many fields, but at the same time questions of right and wrong have become increasingly irrelevant. To the point that now even the notion of right/wrong itself is questioned. When all values are accepted to be of equal value, then is anything of any value?

This decoupling of technical capability from moral or social responsibility is not sustainable. Even now, so much of our technology goes to repairing the unintended consequences of earlier technology, such as the damage to the environment from auto exhaust.

And our coming capabilities will let us reshape the direction of human evolution which, going forward, will likely be guided more by “intelligent design,” than traditional natural selection.

Nowhere has this bifurcation been more evident than in the practice of medicine, although that’s starting to change now. The physician’s training is so science-oriented — learning about antibiotics and bone scans and MRIs — that there is little time to study the ultimate meaning and purpose of life. For that we have the clergy, who use terms like transcendence, salvation and enlightenment. And the physician and the clergyman can’t coordinate their treatment because they don’t have a common language.

Years ago, I ghostwrote a nationally syndicated medical advice column for The New York Times. After months of using the local university bio-med library for my research, I told the head librarian how impressed I was by all the books, journals and monographs at my disposal describing how to heal.

But where, I asked her, are the books that explore why we have healing arts? Where are the books that test the assumption that life has value and should be prolonged? She said such questions are not part of a physician’s training. I asked why not. She asked me if I wanted my library privileges revoked. 

I said no, and let it go. Still, it’s interesting that we call a physician doctor — meaning teacher, not healer — yet the training excludes questions of ultimate meaning: does life have inherent value worth preserving? Certainly “primitive” cultures thought so, because the shaman and the healer were one and the same.

It was experiences like that which made me more and more aware of the cognitive dissonance we all live with now. 

(And I really hate cognitive dissonance. I went to a Catholic high school where ROTC was part of the core curriculum. For one hour every day in ROTC, we studied how to kill people. In the next hour in Religion class we were taught it is wrong to kill people. Four years of that every day, and I could feel my brain turning into oatmeal. I determined then I would never again find myself with one foot in a boat headed upstream, and one foot in a boat headed downstream. For eventually, such a person, or such a society, will be ripped in half.) 

So where do I get off suggesting we rethink the whole notion of technology before we take the big leap from the natural into a brave new digital world from which there may be no do-overs or Plan B’s?

I was hooked on the promise of the new digital technologies in the early 1970s, after seeing what word processing allowed. I’d written several novels on a typewriter by then, when a major re-write meant re-typing an entire 300-page manuscript. Word processing was a godsend for a writer. And I figured that more of this digital legerdemain was bound to be even better.

(Moment of Zen: we marvel at the engineering skill behind word processing. But consider the skill it takes to put a strip of lead in the middle of a wooden stick and make something as basic as a pencil. That’s high-tech low-tech.)

I grew up in the Midwest in the 1950s and ’60s, and in 1970 I moved to England to study at the London Film School. Upon my return to the U.S., I became a documentary filmmaker in St. Paul-Minneapolis.

In 1974, the economy had a hiccup, something called ‘stagflation,’ and that meant that corporations that had put up six-figure budgets to make public-affairs and social documentaries were now putting up five-figure budgets to make industrial films.

That spring, Univac, a major computer firm in the Twin Cities, invited me to bid on making a film for them.

At our first meeting, the Senior Vice President of QA explained to me that the film’s purpose “was to show our key chip vendors who supply our DRAMs and SRAMs the importance of Six Sigma and ISO 9000 compliance so as to increase the QA of deliverables.” Or words to that effect.

I had no idea what he was talking about, and worse, no idea where any of that connected to anything I’d been exposed to up to then: at home, in class, at church, in the community. It’s not that he or the company rejected the value of history or literature or ideas. But such things were just irrelevant to any discussion of Quality Assurance in the production of Dynamic Random-Access Memories.

I told the executive that he should find a filmmaker who understood his business. “There aren’t any,” he said. “But look, you know your business, and we know ours, and we’ll make this work.”

And to help make it work, he opened the company up to me, so I could explore all the areas of its business in order to reflect them properly in the film. I saw early research Univac was doing then on what are now consumer products: computer-generated music, art, voice recognition, speech synthesis, mapping, machine intelligence, A.I. software…

And to round out my education, and learn something about the semiconductor business, Univac sent me to Intel, a young semiconductor firm in Santa Clara, California, for another week of on-the-job training. By the end of that summer, I probably had as good an education in computers (Univac) and semiconductors (Intel) as a layperson could get.

My timing was impeccable. Intel had recently introduced the first microprocessor, ‘the computer on a chip,’ which is arguably the most significant invention since the printing press. As with Gutenberg’s device, the importance is not so much the thing itself, but rather the myriad applications it enables that touch every aspect of life.

The electronic digital revolution was ramping up in 1974, and it was going to need a lot of people who could take technical specifications and turn them into words or images understandable to investors, employees, consumers, government regulators, etc.

That summer, as President Nixon faced impeachment, I learned about mega and giga; micro and nano. 

Like many college students in the 1960s, I had a liberal arts bias against computers. In those days of campus protests, one popular poster read, “Do not bend, fold, spindle or mutilate the student body”: a swipe at the instructions on computer punch cards.

So I was very impressed when my contact at Univac told me that his firm’s computers would ultimately make health care more affordable, air travel safer, and education more broadly available. 

And I was even more impressed when I read a new book that summer titled Zen and the Art of Motorcycle Maintenance: An Inquiry into Values. It turned out the author, Robert M. Pirsig, was also a tech writer then at Univac.

Something else boosted my confidence in the good intentions of the industry. In those early years, I got assignments from various firms to interview their founders, early industry pioneers who were going into retirement back then: William Hewlett, William Shockley, Fred Terman, Presper Eckert, and others whose names are barely known to the general public. 

These men were keenly aware of the larger world they lived in, and the impact of their work outside of the labs. During World War II and the Cold War that followed, they saw how their technical advances helped the Allies win the great moral crusade against the Nazis, and checkmate the Soviets.

On the quality and reliability of their work hung the future of human civilization. Their daily reality was not virtual or artificial; they did not design games to keep kids indoors when they should be outside horsing around; and they certainly would never waste their time designing microprocessor-enabled tennis shoes.

Unfortunately, all these early encounters I had with electronic engineers and computer scientists in the mid-1970s set up expectations that were never fulfilled. The hoped-for improvements in health care, education and air travel, anticipated by that Univac executive, never materialized. In fact, it’s those very services — and so many others that benefit the common good — that are now in such disrepair.

A few years later I moved to the San Francisco Bay Area, to work in what was then starting to be called Silicon Valley. And, to my surprise and disappointment, in all the intervening decades that I participated in, and helped evangelize, the digital revolution, I never again heard the words meaning or values. 

Instead of discovering resources to help me find some connection between my humanist background and my technical workplace, I saw firsthand how the frenzy to keep up with Gordon Moore’s 1965 Law of increasing technical capability was costing us the ability to make thoughtful decisions or consider the consequences of our efforts.

And this was especially true in the new high-tech office parks springing up in the Santa Clara Valley, which until recently had been called “the valley of the heart’s delight.” Self-imposed time-to-market deadlines encouraged hostility between co-workers (called “constructive confrontation”), while free, on-site gourmet dinners let engineers eat well and be productive late into the evening, not distracted by going home to the family. It all got to be too much. It was not uncommon to pass a conference room and hear a man sitting in the dark, sobbing at his treatment by another employee. And then there were the suicides….

Since the Internet went mainstream in the early 1990s, the ecosystem has gone into hyperdrive. The possibility of unimaginable fortunes creates unimaginable blather-filled nonsense. I’ve seen business pitches that, boiled down to the core message, said:

BUSINESS/SOCIAL MISSION STATEMENT OF NEWCO, INC. 

· Make the world a better place; 

· Make several billion dollars doing so; 

· Not necessarily in that order.

There is way too much drinking the Kool-Aid now. The old-time engineers I’d met, in their white short-sleeve shirts and pocket protectors, were sticklers for clarity and precision. They would gag today on words like “awesome,” “passionate,” and “thought leadership.” In their estimation, curing cancer would be awesome, but showing up for a meeting on time is “meeting expectations,” not awesomeness.

I’ve listened to many new business pitches in recent years and am gobsmacked at how many solutions are out there in search of real problems to solve. Do we really need to make surveillance more omnipresent, or drone weapons more precise?

So much innovation and investment is squandered on the trivial and the deadly, when there are so many real human problems that are not being addressed and/or don’t lend themselves to digital solutions. (And, to be clear, I know there is much good, necessary, beneficial work being done with digital technology. Unfortunately, it usually doesn’t get the same exposure as the spectacular and the flameouts.)

Some things still confound me about the public’s rhapsodic embrace of the digital lifestyle:

o The tech industry is not ‘clean.’ Much of the groundwater in Santa Clara County, once a flourishing agricultural area, is laced with carcinogenic plumes from chemicals deposited years before. There are no ugly smokestacks; just ugly drains.

o The industry did a superb job making the consumer feel stupid, as in Windows for Dummies. The consumer is not the dummy when he or she is required to click Start to shut down.

o The working conditions in some overseas manufacturing facilities are worse than any of the dark, satanic mills that Blake described.

o Human-made problems — war, famine, climate change — are so depressing and intractable now, the digital industry has come up with a way to enable guilt-free distraction: by creating virtual, augmented and artificial digital worlds so we don’t have to see what we’ve done to the natural world. (Conveniently forgetting that Mother Nature always bats last.) 

o Thirty years ago, the Internet was a curiosity. Today it’s the central nervous system of human civilization on which so much depends. And if it went down for even a day, which is not inconceivable, the global dislocation would be catastrophic. At a minimum, anyone whose home or business depends on constant online capability should keep a week’s supply of food, water, cash and other supplies on hand. Just in case…. (This is not a radical idea. It is the same advice offered to people who live in earthquake zones, like the San Francisco Bay Area.) 

o And the rush to hook everything to the Internet of Things (IoT) is reckless in the extreme. Let me count the ways. a) Adding “intelligence” to a toaster does not make the bread taste any better. b) Anything linked to the Web is hackable. c) The amount of energy required will be beyond staggering. 

From a business standpoint, the IoT is the ultimate bonanza. Sell an electronic chip to put on everything in the world to make it “smart.” I get that. Mucho moola.

But the underlying assumption is that by accumulating enough data we can manage the planet better than nature can. So, I come back to my original point: we are drowning in data, but starved for knowledge or wisdom or just plain, old-fashioned horse sense or humility. Maybe my upbringing in the Midwest helped me see that more clearly.

What I want to do in this book is to reconsider and rethink the very notion of technology itself, and not just digital technology.

It wasn’t always like this. During the Classical Age, 2,500 years ago, people undertook “science” to know the truths of nature. And they undertook “technology” to make beautiful objects and to do good deeds. 

Certainly, nature was also manipulated for more deadly goals then, like building engines of war or keeping subject people in chains. But at least in theory, science and technology were understood then as the means to attain the ends of truth, beauty and goodness: the holy trinity of the ancient world. We’ve not only lost that, we’re not even aware there was such a thing to lose.

This book is for people who wish to re-envision and re-engineer tools that serve human and humane ends, to bring moderation and compassion back into their individual and communal lives.

Based on what I was taught as a kid in school and in church, at home and in the community, I believe the proper end of technology — in fact, the proper end of all human enterprise — is to help us be happy and composed in the truest sense of those words. And the wisest men and women in history agree the path to true happiness is living a life of moderation, empathy and kindness.

To the extent current technology does that, it’s a godsend. But to the extent that isn’t happening, it’s time for a grassroots effort to begin the process. It will certainly not be mandated from the top down. The industry now claims to thrive on innovation, but such innovation is limited to maintaining the forward momentum of the status quo: invasive, persuasive and pervasive. 

In broad brush, the material here considers three areas:

Part One is a brief history of technology, as humanity learned over time how to leverage our muscles, senses, brains, and consciousness to achieve greater and greater results. In ancient times the stated goal was to achieve self-knowledge. Now it’s often just to make a killing soon and cash out young. 

Part Two examines what the new sciences have revealed. Until the 20th Century, we thought of the spectrum of knowledge as a straight line: extending from physics at one end, to chemistry, biology, medicine, psychology, the social sciences, the law, the liberal arts, the fine arts, philosophy, and ultimately to theology at the other end.

But increasingly the natural and life sciences have revealed that the spectrum of knowledge is not a straight line, but is curved. It bends back on itself. 

Such that if you study physics deeply enough, you end up confronting the basic question of theology: Why? Likewise, if you study theology far enough now, you will come to the mystical quantum realm where the accepted realities of daily life are upended.

And that place where physics and metaphysics meet, where the natural and supernatural are one, is consciousness. 

In the last century the sciences have revealed that we live in a uni-verse; one existence. And possibly a conscious existence, at that. It is not a bi-verse divided between the physical and metaphysical; the natural and supernatural. Those archaic divisions are the products of our primitive minds, not things-as-they-are. It is time for them to go.

As we increasingly understand, the new scientific discoveries show that every thing is connected to everything, and that the cosmos may in fact be holographic — with each part containing the whole — or perhaps one single waveform. If that is so, then the crippling, old, destructive mindsets and lifestyles based on greed, gain and self-aggrandizement may at last be annihilated, in favor of something more connected and compassionate.

Part Three reconsiders how these new insights from science demand from us a rethinking of how the design and use of our tools advance or frustrate our sense of being at home in the universe. 

If these new notions in science are repeatedly confirmed over time, then we need to reimagine the tools we use that are based on the principles of nature revealed by science.

So many of our tools are wedge-shaped: spears, axes, arrows, bullets, missiles. Likewise, so many of our gods are also wedge-shaped: “You are my chosen ones, and all others are inferior apostates.” And in large measure our understanding of gods and tools reinforce each other: us vs. them. And even so much of our communications technology now serves only to separate us by facilitating the spread of false news and incendiary posts, pitting one against another for the benefit of a third party.

Much of the “developed world” looks to a millennia-old book called the Bible for moral guidance. And the Bible tells us right up front that for using their God-given big brains to satisfy their curiosity, God cast Adam and Eve out of Eden to suffer in a world of exile, banishment and woe. And when the people of Babel used their engineering skills to build a tower to reach God, He cursed them with the confusion of language and civil discord.

Based on these stories we have spent much of our history making tools to subjugate and dominate the frightening world of nature, and gain power and authority over those who are different from us. 

If that was in “the good book,” I don’t want to see the “bad book.”

Thankfully, we are getting better at hearing Nature shout back to us: there is no Great Chain of Being, with each link having the power of life and death over the links below. Rather, we exist in a Great Web of Being; each node essential to the whole.

Now we need to do a better job of incorporating those insights into the tools we design and use, so that they manifest the interconnection between ourselves, each other and nature.

Time is running out for us and our descendants to re-incorporate composure and compassion into our tools and technologies, and to reconnect tech-knowledge with self-knowledge — technical capability with moral and social responsibility — to engage in what the Austrian poet Rainer Maria Rilke called “the great work of the world.”

And thus, fulfill the commandment of the Talmud, echoed in many other wisdom traditions: “Do not be daunted by the enormity of the world’s grief. Do justice, now. Love mercy, now. Walk humbly, now. You are not obligated to complete the work, but neither are you free to abandon it.”

Source: Medium