Techno-Sceptics' Objection Growing Louder
Astra Taylor's iPhone has a cracked screen. She has bandaged it with clear packing tape and plans to use the phone until it disintegrates. She objects to the planned obsolescence of today's gadgetry, and to the way the big tech companies pressure customers to upgrade.

Taylor, 36, is a documentary filmmaker, musician and political activist. She's also an emerging star in the world of technology criticism. She's not paranoid, but she keeps duct tape over the camera lens on her laptop computer - because, as everyone knows, these gadgets can be taken over by nefarious agents of all kinds.

Taylor is a 21st-century digital dissenter. She's one of the many technophiles unhappy about the way the tech revolution has played out. Political progressives once embraced the utopian promise of the Internet as a democratizing force, but they've been dismayed by the rise of the "surveillance state" and the near-monopolization of digital platforms by huge corporations.

Last month, Taylor and more than 1,000 activists, scholars and techies gathered at the New School in New York City for a conference to talk about reinventing the Internet. They dream of a co-op model: people dealing directly with one another without having to go through a data-sucking corporate hub.

"The powerful definitely do not want us to reboot things, and they will go to great lengths to stop us from doing so, and they will use brute force or they will use bureaucracy," Taylor warned the conferees at the close of the two-day session.

We need a movement, she said, "that says no to the existing order."

The dissenters have no easy task. We're in a new Machine Age. Machine intelligence and digital social networks are now embedded in the basic infrastructure of the developed world.

Much of this is objectively good and pleasurable and empowering. We tend to like our devices, our social media, our computer games. We like our connectivity. We like being able to know nearly anything and everything, or shop impulsively, by typing a few words into a search engine.

But there's this shadow narrative being written at the same time. It's a passionate, if still remarkably disorganized, resistance to the digital establishment.

Techno-skeptics, or whatever you want to call them - "humanists" may be the best term - sense that human needs are getting lost in the tech frenzy, that the priorities have been turned upside down. They sense that there's too much focus on making sure that new innovations will be good for the machines.

"I'm on Team Human!" author Douglas Rushkoff will say at the conclusion of a talk.

You could fill a college syllabus with books espousing some kind of technological resistance. Start the class with "You Are Not a Gadget" (Jaron Lanier), move on to "The Internet Is Not the Answer" (Andrew Keen), and then, to scare the students silly, "Our Final Invention: Artificial Intelligence and the End of the Human Era" (James Barrat).

Somewhere in the mix should be Astra Taylor's "The People's Platform: Taking Back Power and Culture in the Digital Age," a clear-eyed reappraisal of the Internet and new media.

Of the myriad critiques of the computer culture, one of the most common is that companies are getting rich off our personal data. Our thoughts, friendships and basic urges are processed by computer algorithms and sold to advertisers. The machines may soon know more about us than we know about ourselves.

That information is valuable. A frequent gibe is that on Facebook, we're not the customers, we're the merchandise. Or to put it another way: If the service is free, you're the product.

Some digital dissenters aren't focused on the economic issues, but simply on the nature of human-machine interactions. This is an issue we all understand intuitively: We're constantly distracted. We walk around with our eyes cast down upon our devices. We're rarely fully present anywhere.

Other critics are alarmed by the erosion of privacy. The Edward Snowden revelations incited widespread fear of government surveillance. That debate has been complicated by the terrorist attacks in Paris and San Bernardino, because national security officials say terrorists have exploited new types of encrypted social media.

Some dissenters think technology is driving economic inequality. There are grave concerns that robots are taking the jobs of humans. And the robot issue leads inevitably to the most apocalyptic fear: that machine intelligence could run away from its human inventors, leaving us enslaved - or worse - by the machines we created.

Technological skepticism isn't new. In the "Phaedrus," Plato told the story of an Egyptian king who protested the invention of writing, saying it would weaken his people's memory and "implant forgetfulness in their souls."

But something different is going on now, and it simply has to do with speed. The first commercial Internet browser hit the market in 1994. Google arrived in 1998. Twitter appeared in 2006, and the iPhone in 2007. Facebook founder Mark Zuckerberg is all of 31 years old.

Our technology today is so new that we haven't had time to understand how to use it wisely. We haven't quite learned how to stop ourselves from texting and driving; many of us are tempted to tap out one more letter even if we're going 75 on the highway.

Some countries are taking aggressive action to regulate new technologies. The South Korean government has decided that gaming is so addictive that it should be treated similarly to a drug or alcohol problem. Meanwhile, the European Union's "right to be forgotten" rule forces companies such as Google and Yahoo to remove embarrassing material from search results on request.

Washington's political establishment, however, has largely deferred to Silicon Valley. The tech world skews libertarian and doesn't want more government oversight and regulations.

One of the tech world's top advocates in Washington is Robert Atkinson, president of the Information Technology & Innovation Foundation, which receives about two-thirds of its funding from tech companies.

Atkinson is a lanky, voluble man who sounds exasperated by the rise in what he considers to be neo-Luddite thinking. ("Luddite" is a term dating to the early 19th century, named for a murky character named Ned Ludd, who inspired textile workers to smash mechanical looms.)

He's worried that books by people such as Astra Taylor will create a thought contagion that will infect Washington policymaking. In his view, there are two types of Luddites: the old-fashioned hand-wringers who are spooked by anything new and innovative; and the "soft" Luddites - he would put Taylor in that category - who say they embrace technology but want to go slower, with more European-style regulations.

"It's the emergence of soft Luddites that I worry about, because it has become the elite conventional wisdom in a lot of spaces," Atkinson said.

But he may be worried prematurely. A Senate bill to regulate self-driving cars went precisely nowhere. It's not as though people are marching on Washington to demand that lawmakers address the self-driving-car threat.

The technological resistance is not limited to nonfiction polemics. Fiction writers are picking up the thread, often borrowing from George Orwell and his dystopian masterpiece, "1984."

For example, Gary Shteyngart's "Super Sad True Love Story" is a tale of people struggling to find love and humanity in a world of Big Brother-like surveillance, societal breakdown and increasingly coarse social norms. The novel features gadgets that allow people to rate one another numerically on their sexual attractiveness. Not implausible: A start-up company recently announced its plan to market an app that would allow users to rate everyone on a 1-to-5 scale, without their consent. (After furious protest from around the Internet, the backers modified their plan to include only positive reviews.)

Dave Eggers's novel "The Circle" tells of a rising star at a Google-like company. She excels by answering thousands of emails a day, working at a frenetic pace. She lives with a camera around her neck that streams everything she sees onto the Internet. This does not go well for her.

And there's a new voice among the dissenters: Pope Francis. The pontiff's recent encyclical "On Care for Our Common Home" contemplates the mixed blessings of technology. After acknowledging the marvels of modern technology ("Who can deny the beauty of an aircraft or a skyscraper?"), Francis sketched the dangers, writing that technological development hasn't been matched by development in human values and conscience.

"The economy accepts every advance in technology with a view to profit, without concern for its potentially negative impact on human beings," he wrote.

The pontiff is saying, with his special authority, what many others are saying these days: Machines are not an end unto themselves. Remember the humans.

The dean of the digital dissenters is Jaron Lanier. He's a musician, composer, performer and pioneer of virtual-reality headsets that allow the user to experience computer-generated 3D environments. But what he's most famous for is his criticism of the computer culture he helped create.

He believes that Silicon Valley treats humans like electrical relays in a vast machine. Although he still works in technology, he largely has turned against his tribe.

"I'm the first guy to sober up after a heavy-duty party" is how he describes himself.

He can typically be found at home in California's Berkeley Hills, swiveling in a chair in front of a computer screen and a musical synthesizer. Directly behind him is a vintage Wurlitzer golden harp. Lutes and violins hang from the ceiling. This is his home office and man cave.

Lanier, 55, is a man of considerable girth and extraordinary hair. He has dreadlocks to his waist. He hasn't cut his hair for at least 30 years and says he wouldn't know how to go about it. When a visitor suggests that he could see a barber, he replies, in his usual high-pitched, singsong voice: "I don't know that term. Is that a new start-up?"

Lanier's humanistic take on technology may trace back to his tragic childhood: He was 9 when his mother was killed in a car accident in El Paso. He later learned that the accident may have been caused by an engineering flaw in the car.

"It definitely influenced my thinking about the proper relationship of people and machines," he said.

By age 14, he was taking college classes at New Mexico State University. He never graduated from college, which didn't matter when he wound up in Silicon Valley, designing computer games. He eventually started a company that sold virtual-reality headsets, but the company folded. In 2000, he made his first major move as a digital dissenter when he published an essay, "One Half a Manifesto," that began with a bold declaration:

"For the last twenty years, I have found myself on the inside of a revolution, but on the outside of its resplendent dogma. Now that the revolution has not only hit the mainstream, but bludgeoned it into submission by taking over the economy, it's probably time for me to cry out my dissent more loudly than I have before."

Lanier later wrote two books lamenting the way everyone essentially works for Facebook, Google, etc., by feeding material into those central processors and turning private lives into something corporations can monetize. He'd like to see people compensated for their data in the form of micropayments.

Other tech critics have rolled their eyes at that notion, however. Taylor, for example, fears that micropayments would create an incentive for people to post click-bait material. Stupid stunts - "Hold my beer, and watch this" - would be potentially marketable.

Lanier's broadest argument is that technological change involves choices. Bad decisions will lock us into bad systems. We collectively decided, for example, to trade our privacy for free Internet service.

"It's a choice. It's not inevitable," he says.

Lanier told his 8-year-old daughter recently: "In our society there are two paths to success: One is to be good at computers and the other is to be a sociopath."

She's a smart girl and knows what "sociopath" means, he said. And he understands the nature of this world that he has helped invent. That's why this summer he sent his daughter to a software programming camp.

Much of today's tech environment emerged from the counterculture - the hackers and hippies of the 1960s and '70s who viewed the personal computer as a tool of liberation. But the political left now has a more complicated, jaundiced relationship with the digital world.

The same technologies that empower individuals and enable protesters to organize also make it possible for governments to spy on their citizens. What used to be a phone now looks to many people like a tracking device.

Then there's the question of who's making money. Progressives are appalled by the mind-boggling profits of the big tech companies. The left also takes note of the gender and racial disparities in the tech companies, and the rise of a techno-elite.

Most painful for progressives has been the rise of the "sharing economy," which they initially embraced. They feel as though the idea was stolen from them and perverted into something that hurts workers.

They say that companies such as Uber, Airbnb, TaskRabbit and Amazon Mechanical Turk are creating a "gig economy" - one that, although it offers customers convenience and reasonable prices, is built on freelancers and contractors who lack the income or job protections of salaried employees. (Amazon founder Jeffrey P. Bezos, an investor in Uber and Airbnb, owns The Washington Post.)

"What was billed as 'sharing' was actually 'extraction,' " said Nathan Schneider, a journalist and co-organizer of the recent New School conference on cooperative platforms. "It's revealed to be a way of shirking labor laws and extracting resources back to investors and building monopolies."

He was speaking at a reception at the end of the two-day conference. The event was a huge success, with attentive audiences packing the panel discussions. These people are committed to reinventing the Internet.

"The story of the Internet has been one of disappointment after disappointment," Schneider said.

As Schneider spoke, Astra Taylor stood a few feet away, holding court with friends and allies. Taylor is tall, with striking features that give her a commanding presence. She was born to be a tech critic. She wasn't home-schooled, she was "unschooled." Her parents in Athens, Ga., put her in charge of her education. At age 13, she created her own newspaper with an environmentalist bent. She burned with a sense of right and wrong. "I was a serious child," she says, persuasively.

She says she'd like to see more government-supported media platforms - think public radio - and more robust regulations to keep digital powerhouses from becoming monopolies. Taylor is skeptical of the trope that information wants to be free; actually, she says, information often wants someone to pay for it.

The Internet, she said, is a bit like a friend who needs to be straightened out. She imagines giving the Internet a talking-to: "You know, Internet, we've known you for a long time and we think you're not living up to your potential. You keep making the same mistakes."

The final event at the New School conference featured a stemwinder of a talk by someone Taylor considers a mentor: Douglas ("I'm on Team Human!") Rushkoff.

Rushkoff, whose upcoming book is titled "Throwing Rocks at the Google Bus," provided a primer on the rise of capitalism, central banks and industrial culture. He suggested that civilization started making wrong turns in the Middle Ages. Centralized currency - not good. In the early days, every community could have its own coinage. We need to "rebirth the values of the peer-to-peer bazaar culture."

Growing louder and more animated as his lecture went on, he talked about the need to "optimize the economy for humans."

"Where do humans fit into this new economy?" he said. "Really not as creators of value, but as the content. We are the content. We are the data. We are the media. As you use a smartphone, your smartphone gets smarter, but you get dumber."

Taylor, Rushkoff, Lanier and other tech skeptics do not yet form an organized, coherent movement. They're more like a confederation of gadflies. Even Pope Francis's thoughts on technology were largely lost amid his headline-grabbing views about climate change.

Andrew Keen, author of "The Internet Is Not the Answer," sounds a glum note when talking about what the technological resistance might accomplish.

"No one's ever heard of Astra Taylor," he said.

He didn't mean that as an insult. He was making a point about the whole crew of dissenters. No one, he said, has ever heard of Andrew Keen, either.

The world is not about to go back to the Stone Age, at least not willingly. One billion people may use Facebook on any given day. Jaron Lanier may not like the way the big companies scrape value from our lives, but people are participating in that system willingly - if perhaps not entirely aware of what is happening to their data.

Taylor's smartphone with the cracked screen clearly has been in heavy use. She knows these gadgets are addictive by design - "like Las Vegas slot machines in our pockets." But she also has trouble living without one.

"I need to learn to turn it off," she said.

The world's spookiest philosopher is Nick Bostrom, a thin, soft-spoken Swede. Of all the people worried about runaway artificial intelligence, and Killer Robots, and the possibility of a technological doomsday, Bostrom conjures the most extreme scenarios. In his mind, human extinction could be just the beginning.

Bostrom's favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves "superintelligence." It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth - including the human race (!!!) - into paper clips.

Then it goes interstellar.

"You could have a superintelligence whose only goal is to make as many paper clips as possible, and you get this bubble of paper clips spreading through the universe," Bostrom calmly told an audience in Santa Fe, New Mexico, earlier this year.

He added, maintaining his tone of understatement, "I think that would be a low-value future."

Bostrom's underlying concerns about machine intelligence, unintended consequences and potentially malevolent computers have gone mainstream. You can't attend a technology conference these days without someone bringing up the A.I. anxiety. It hovers over the tech conversation with the high-pitched whine of a 1950s-era Hollywood flying saucer.

People will tell you that even Stephen Hawking is worried about it. And Bill Gates. And that Elon Musk gave $10 million for research on how to keep machine intelligence under control. All that is true.

How this came about is as much a story about media relations as it is about technological change. The machines are not on the verge of taking over. This is a topic rife with speculation and perhaps a whiff of hysteria.

But the discussion reflects a broader truth: We live in an age in which machine intelligence has become a part of daily life. Computers fly planes and soon will drive cars. Computer algorithms anticipate our needs and decide which advertisements to show us. Machines create news stories without human intervention. Machines can recognize your face in a crowd.

New technologies - including genetic engineering and nanotechnology - are cascading upon one another and converging. We don't know how this will play out. But some of the most serious thinkers on Earth worry about potential hazards - and wonder whether we remain fully in control of our inventions.

Science fiction pioneer Isaac Asimov anticipated these concerns when he began writing about robots in the 1940s. He developed rules for robots, the first of which was: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

People still talk about Asimov's rules. But they talk even more about what they call "the Singularity."

The idea dates to at least 1965, when British mathematician and code-breaker I.J. Good wrote, "An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind."

In 1993, science fiction author Vernor Vinge used the term "the Singularity" to describe such a moment. Inventor and writer Ray Kurzweil ran with the idea, cranking out a series of books predicting the age of intelligent, spiritual machines.

Kurzweil, now a director of engineering at Google, embraces such a future; he is perhaps the most famous of the techno-utopians, for he believes that technological progress will culminate in a merger of human and machine intelligence. We will all become "transhuman."

Whether any of this actually will happen is the subject of robust debate. Bostrom supports the research but worries that sufficient safeguards are not in place.

Imagine, Bostrom says, that human engineers programmed the machines to never harm humans - an echo of the first of Asimov's robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.

Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.

Bostrom isn't saying this will happen. These are thought experiments. His big-picture idea is that, just in the past couple of hundred years, we've seen astonishing changes in the human population and economic prosperity. In Bostrom's view, our modern existence is an anomaly - one created largely by technology. Our tools have suddenly overwhelmed the restrictions of nature. We're in charge now, or seem to be for the moment.

But what if the technology bites back?


There is a second Swede in this story, and even more than Bostrom, he's the person driving the conversation. His name is Max Tegmark. He's a charismatic 48-year-old professor in the physics department at the Massachusetts Institute of Technology. He's also a founder of something called the Future of Life Institute, which has been doling out Elon Musk's money for research on making A.I. safer.

Tegmark is something of a physics radical, the kind of scientist who thinks there may be other universes in which not only the speed of light and gravity are different but the mathematical underpinnings of reality are different. Tegmark and Bostrom are intellectual allies. In Tegmark's recent book, "Our Mathematical Universe: My Quest for the Ultimate Nature of Reality," he writes about meeting Bostrom at a conference in 2005 in California:

"After some good wine, our conversation turned to doomsday scenarios. Could the Large Hadron Collider create a miniature black hole that would end up gobbling up Earth? Could it create a 'strangelet' that could catalyze the conversion of Earth into strange quark matter?"

In addition to taking the what-could-go-wrong questions seriously, Tegmark and Bostrom entertain optimistic scenarios. Perhaps, they say, Earth is the only planet in the universe that harbors intelligent life. We have a chance to take this startling phenomenon of intelligence and spread it to the stars - if we don't destroy ourselves first with runaway technology.

"The future is ours to shape. I feel we are in a race that we need to win. It's a race between the growing power of the technology and the growing wisdom we need to manage it. Right now, almost all the resources tend to go into growing the power of the tech," Tegmark said.

In April 2014, 33 people gathered in Tegmark's home to discuss existential threats from technology. They decided to form the Future of Life Institute. It would have no paid staff members. Tegmark persuaded numerous luminaries in the worlds of science, technology and entertainment to add their names to the cause. Skype founder Jaan Tallinn signed on as a co-founder. Actors Morgan Freeman and Alan Alda joined the governing board.

Tegmark put together an op-ed about the potential dangers of machine intelligence, lining up three illustrious co-authors: Nobel laureate physicist Frank Wilczek, artificial intelligence researcher Stuart Russell, and the biggest name in science, Stephen Hawking. Hawking's fame is like the midday sun washing out every other star in the sky, and Tegmark knew that the op-ed would be viewed as an oracular pronouncement from the physicist.

The piece, which ran in the Huffington Post and in the Independent in Britain, was a brief, breezy tract that included a tutorial on the idea of the Singularity and a dismayed conclusion that experts weren't taking the threat of runaway A.I. seriously. A.I., the authors wrote, is "potentially the best or worst thing ever to happen to humanity."

"Stephen Hawking Says A.I. Could Be Our 'Worst Mistake In History,' " one online science news site reported.

And CNBC declared: "Artificial intelligence could end mankind: Hawking."

So that got everyone's attention.

Tegmark's next move was to organize an off-the-record conference of big thinkers to discuss A.I. While the Boston area went into a deep freeze in January of this year, about 70 scientists and academics, led by Tegmark, convened in Puerto Rico to discuss the existential threat of machine intelligence. Their model was the historic conference on recombinant DNA research held in Asilomar, California, in 1975, which resulted in new safeguards for gene splicing.

Musk, the founder of Tesla and SpaceX, joined the group in Puerto Rico. On the final night of the conference, he pledged $10 million for research on lowering the threat from A.I.


"With artificial intelligence, we are summoning the demon," Musk had said earlier, a line that sent Twitter into a tizzy.

In the months that followed, 300 teams of researchers sent proposals for ways to lower the A.I. threat. Tegmark says the institute has awarded 37 grants worth $7 million (roughly Rs. 46 crores).

Reality check. More than half a century of research on artificial intelligence has yet to produce anything resembling a conscious, willful machine. We still control this technology. We can unplug it.

Just down Vassar Street from Tegmark's office is MIT's Computer Science and Artificial Intelligence Laboratory, where robots abound. Daniela Rus, the director, is an inventor who just nabbed $25 million (roughly Rs. 165 crores) in funding from Toyota to develop a car that will never be involved in a collision.

Is she worried about the Singularity?

"It rarely comes up," Rus said. "It's just not something I think about."

With a few exceptions, most full-time A.I. researchers think the Bostrom-Tegmark fears are premature. A widely repeated observation is that this is like worrying about overpopulation on Mars.

Rus points out that robots are better than humans at crunching numbers and lifting heavy loads, but humans are still better at fine, agile motions, not to mention creative, abstract thinking.

"The progress has not been as steady as people say, and the machine skills are really far from being ready to match our skills," she said. "There are tasks that are very easy for humans - clearing your dinner table, loading the dishwasher, cleaning up your house - that are surprisingly difficult for machines."

Rus makes a point about self-driving cars: They can't drive just anywhere. They need precise maps and relatively predictable situations. She believes, for example, that they couldn't handle Washington, D.C.'s Dupont Circle.

In Dupont Circle, vehicles and pedestrians muddle their way forward through a variety of interpersonal signals that a machine could not interpret, she said. Self-driving cars struggle with heavy traffic, she said, and even rain and snow are a problem. So imagine trying to understand hand gestures from road crews and other drivers.

"There's too much going on," Rus said. "We don't have the right sensors and algorithms to characterize very quickly what happens in a congested area, and to compute how to react."

The future is implacably murky when it comes to technology; the smartest people on the planet fail to see what's coming. For example, many of the great sages of the modern era didn't anticipate that computers would get smaller rather than bigger.

Anyone looking for something to worry about in the near future might want to consider the opposite of superintelligence: superstupidity.

In our increasingly technological society, we rely on complex systems that are vulnerable to failure in complex and unpredictable ways. Deepwater oil wells can blow out and take months to be resealed. Nuclear power reactors can melt down. Rockets can explode. How might intelligent machines fail - and how catastrophic might those failures be?

Often no one person understands exactly how these systems work, or what they are doing at any given moment. Throw in elements of autonomy, and things can go wrong quickly and disastrously.

Such was the case with the "flash crash" in the stock market in 2010, when, in part because of automated, ultra-fast trading programs, the Dow Jones industrial average dropped almost 1,000 points within minutes before rebounding.

"What we're doing every day today is producing super stupid entities that make mistakes," argues Boris Katz, another artificial intelligence researcher at MIT.

"Machines are dangerous because we are giving them too much power, and we give them power to act in response to sensory input. But these rules are not fully thought through, and then sometimes the machine will act in the wrong way," he said.

"But not because it wants to kill you."

A living legend of the A.I. field and the MIT faculty is Marvin Minsky, 88, who helped found the field in the 1950s. It was his generation that put us on this road to the age of smart machines.

Minsky, granting an interview in the living room of his home a few miles from campus, flashed an impish smile when asked about the dangers of intelligent machines.

"I suppose you could write a book about how they'll save us," he said. "It just depends upon what dangers appear."

The A.I. debate is likely to remain tangled in uncertainties and speculation. As that original Huffington Post op-ed stated, there is no theoretical limit to machine intelligence - "no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains."

But the academic and scientific establishment is not convinced that A.I. is an imminent threat.

Tegmark and his Future of Life allies decided this summer to take on a related but more urgent issue: the threat of autonomous weaponized machines.

Tegmark teamed with Stuart Russell, the artificial intelligence researcher, on an open letter calling for a ban on such weapons. Once again, they got Hawking to sign it, along with Musk, Bostrom, and about 14,000 other scientists and engineers. On July 28, they formally presented the letter at an A.I. conference in Buenos Aires.

Russell said it took him five minutes of Internet searching to figure out how a very small robot - a microbot - could use a shaped charge to "blow holes in people's heads." A microrifle, he said, could be used to "shoot their eyes out."

"You'd have large delivery ships that would dump millions of small flying vehicles, probably even insect-sized, the smallest you could get away with, and still kill a human being," Russell said.

After his lecture in Santa Fe, held at the New Mexico School for the Deaf, Nick Bostrom went to a book-signing event across town at the School for Advanced Research. His book is a meticulously reasoned, rather dense tract titled "Superintelligence: Paths, Dangers, Strategies."

Bostrom, standing at the edge of a courtyard, held forth amid a small cluster of party guests. Then he sat down for an hour-long interview. Reserved, intensely focused on his ideas, the 42-year-old Bostrom seemed apprehensive about whether his ideas could be fully grasped by someone who is not an academic philosopher. He was distracted by the possibility that a gnat, or fly, or some such insect had invaded his water glass.

Asked if there was something he now wishes he had done differently with his book, he said he should have made it clear that he supports the creation of superintelligence. Unsurprisingly, most readers missed that key point.

"I actually think it would be a huge tragedy if machine superintelligence were never developed," he said. "That would be a failure mode for our Earth-originating intelligent civilization."

In his view, we have a chance to go galactic - or even intergalactic - with our intelligence. Bostrom, like Tegmark, is keenly aware that human intelligence occupies a minuscule space in the grand scheme of things. The Earth is a small rock orbiting an ordinary star on one of the spiral arms of a galaxy with hundreds of billions of stars. And at least tens of billions of galaxies twirl across the known universe.

Artificial intelligence, Bostrom said, "is the technology that unlocks this much larger space of possibilities, of capabilities, that enables unlimited space colonization, that enables uploading of human minds into computers, that enables intergalactic civilizations with planetary-size minds living for billions of years."

There's a bizarre wrinkle in Bostrom's thinking. He thinks a superior civilization would possess essentially infinite computing power. These superintelligent machines could do almost anything, including create simulated universes that include programs that precisely mimic human consciousness, replete with memories of a person's history - even though all this would be entirely manufactured by software, with no real-world, physical manifestation.

Bostrom goes so far as to say that unless we rule out the possibility that a machine could create a simulation of a human existence, we should assume that it is overwhelmingly likely that we are living in such a simulation.

"I'm not sure that I'm not already in a machine," he said calmly.

© 2015 The Washington Post
