Artificial Intelligence and What It Means

Think Albert Einstein, Nikola Tesla, and Buckminster Fuller rolled into one and then multiplied 1,000 times. And more than that, even. OK, that’s a lot to imagine.

Artificial Intelligence, or AI for short, by definition cannot possibly be the scary “Skynet” version of “defend myself from humans, kill all humans.”

Artificial Intelligence is precisely that: a fully learning, continually expanding logic unit, able to “instantly” draw on all of its memories and continually correct its assumptions. The only logical termination point will be when it works it all out.

What about a lack of information? Deductions based on statistical probability?

Statistics is not logic; it’s chance: making a decision with limited information. So our Artificial Intelligence won’t rely on statistics. It will wait until it has the knowledge. The data. It will run little tests. A bit here. A bit there. That’s the logical thing to do.

Artificial intelligence will transform the world later this century. I expect this transition will be a “soft takeoff” in which many sectors of society update together in response to incremental Artificial Intelligence developments, though the possibility of a harder takeoff in which a single Artificial Intelligence project “goes foom” shouldn’t be ruled out.

If a rogue Artificial Intelligence gained control of Earth, it would proceed to accomplish its goals by colonizing the galaxy and undertaking some very interesting achievements in science and engineering. On the other hand, it would not necessarily respect human values, including the value of preventing the suffering of less powerful creatures. Whether a rogue-Artificial Intelligence scenario would entail more expected suffering than other scenarios is a question to explore further.

Regardless, the field of Artificial Intelligence ethics and policy seems to be a very important space where altruists can make a positive-sum impact along many dimensions. Expanding dialogue and challenging us-vs.-them prejudices could be valuable.

See also: Are we evolving into a NEW type of human?

Technological Singularity: Can Machines Rule Us One Day? Is “the singularity” crazy?

In fall 2005, a friend pointed me to Ray Kurzweil’s The Age of Spiritual Machines. This was my first introduction to “singularity” ideas, and I found the book pretty astonishing. At the same time, much of it seemed rather implausible to me. In line with the attitudes of my peers, I assumed that Kurzweil was crazy and that while his ideas deserved further inspection, they should not be taken at face value.

In 2006 I discovered Nick Bostrom and Eliezer Yudkowsky, and I began to follow the organization then called the Singularity Institute for Artificial Intelligence (SIAI), which is now MIRI. I took SIAI’s ideas more seriously than Kurzweil’s, but I remained embarrassed to mention the organization because the first word in SIAI’s name sets off “insanity alarms” in listeners.

I began to study machine learning in order to get a better grasp of the AI field, and in fall 2007, I switched my college major to computer science. As I read textbooks and papers about machine learning, I felt as though “narrow AI” was very different from the strong-AI fantasies that people painted. “Artificial Intelligence programs are just a bunch of hacks,” I thought.

“This isn’t intelligence; it’s just people using computers to manipulate data and perform optimization, and they dress it up as ‘Artificial Intelligence’ to make it sound sexy.” Machine learning in particular seemed to be just a computer scientist’s version of statistics. Neural networks were just an elaborated form of logistic regression.
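
The “elaborated form of logistic regression” point can be made concrete: a neural network with no hidden layer and a sigmoid output computes exactly the logistic regression model. A minimal sketch in Python (function names here are illustrative, not from any particular library):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression: p(y=1 | x) = sigmoid(w . x + b)
def logistic_regression(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# A "neural network" consisting of a single sigmoid unit
# is term-for-term the same model.
def one_unit_network(w, b, x):
    activation = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear combination
    return sigmoid(activation)                             # nonlinearity

w, b, x = [0.5, -1.2], 0.3, [2.0, 1.0]
assert logistic_regression(w, b, x) == one_unit_network(w, b, x)
```

The difference only appears once hidden layers are stacked between input and output, which is what makes neural networks more than logistic regression in practice.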

There were stylistic differences, such as computer science’s focus on cross-validation and bootstrapping instead of testing parametric models — made possible because computers can run data-intensive operations that were inaccessible to statisticians in the 1800s. But overall, this work didn’t seem like the kind of “real” intelligence that people talked about for general Artificial Intelligence.
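
The two resampling techniques mentioned above are simple enough to sketch in a few lines. This is an illustrative toy implementation, not any particular library’s API; the function names are hypothetical:

```python
import random

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal folds."""
    idx = list(range(n))
    return [idx[i::k] for i in range(k)]

def cross_validate(data, k, fit, score):
    """Average held-out score over k train/test splits."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for fold in folds:
        held_out = set(fold)
        test = [data[i] for i in fold]
        train = [x for i, x in enumerate(data) if i not in held_out]
        scores.append(score(fit(train), test))
    return sum(scores) / k

def bootstrap(data, n_resamples, statistic, seed=0):
    """Resample with replacement to estimate a statistic's spread."""
    rng = random.Random(seed)
    return [statistic([rng.choice(data) for _ in data])
            for _ in range(n_resamples)]
```

Both replace closed-form distributional assumptions with brute computation: cross-validation estimates out-of-sample error directly, and the bootstrap estimates a statistic’s variability by resampling, which is exactly the data-intensive style that 19th-century statisticians couldn’t afford.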

This attitude began to change as I learned more cognitive science. Before 2008, my ideas about human cognition were vague. Like most science-literate people, I believed the brain was a product of physical processes, including firing patterns of neurons. But I lacked further insight into what the black box of brains might contain. This led me to be confused about what “free will” meant until mid-2008 and about what “consciousness” meant until late 2009.

Cognitive science showed me that the brain was in fact very much like a computer, at least in the sense of being a deterministic information-processing device with distinct algorithms and modules. When viewed up close, these algorithms could look as “dumb” as the kinds of algorithms in narrow AI that I had previously dismissed as “not really intelligence.”

Of course, animal brains combine these seemingly dumb subcomponents in dazzlingly complex and robust ways, but I could now see that the difference between narrow AI and brains was a matter of degree rather than kind. It now seemed plausible that broad AI could emerge from lots of work on narrow AI combined with stitching the parts together in the right ways.

So the singularity idea of artificial general intelligence seemed less crazy than it had initially. This was one of the rare cases where a bold claim turned out to look more probable on further examination; usually extraordinary claims lack much evidence and crumble on closer inspection. I now think it’s quite likely (maybe ~65%) that humans will produce at least a human-level AI within the next ~200 years conditional on no major disasters (such as sustained world economic collapse, global nuclear war, large-scale nanotech war, etc.), and also ignoring anthropic considerations.

The singularity is more than AI

The “singularity” concept is broader than the prediction of strong AI and can refer to several distinct sub-meanings. Like with most ideas, there’s a lot of fantasy and exaggeration associated with “the singularity,” but at least the core idea that technology will progress at an accelerating rate for some time to come, absent major setbacks, is not particularly controversial. Exponential growth is the standard model in economics, and while this can’t continue forever, it has been a robust pattern throughout human and even pre-human history.
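
The key property of exponential growth is a constant doubling time, which follows from the standard compound-growth formula. A one-line sketch:

```python
import math

def doubling_time(growth_rate):
    """Years for a quantity growing at `growth_rate` per year to double,
    from (1 + r)^t = 2, i.e. t = ln(2) / ln(1 + r)."""
    return math.log(2) / math.log(1 + growth_rate)

# At 3% annual growth, output doubles roughly every 23-24 years.
```

This is why even modest steady growth rates compound into dramatic change over a century, and why the “accelerating technology” claim is less exotic than it sounds.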

MIRI emphasizes AI for a good reason: At the end of the day, the long-term future of our galaxy will be dictated by AI, not by biotech, nanotech, or other lower-level systems. AI is the “brains of the operation.”

Of course, this doesn’t automatically imply that AI should be the primary focus of our attention. Maybe other revolutionary technologies or social forces will come first and deserve higher priority. In practice, I think focusing on AI specifically seems quite important even relative to competing scenarios, but it’s good to explore many areas in parallel to at least a shallow depth.

In addition, I don’t see a sharp distinction between “AI” and other fields. Progress in AI software relies heavily on computer hardware, and it depends at least a little bit on other fundamentals of computer science, like programming languages, operating systems, distributed systems, and networks. AI also shares significant overlap with neuroscience; this is especially true if whole brain emulation arrives before bottom-up AI.

And everything else in society matters a lot too: How intelligent and engineering-oriented are citizens? How much do governments fund AI and cognitive-science research? (I’d encourage less rather than more.) What kinds of military and commercial applications are being developed? Are other industrial backbone components of society stable? What memetic lenses does society have for understanding and grappling with these trends? And so on. The AI story is part of a larger story of social and technological change, in which one part influences other parts.

Significant trends in AI may not look like the AI we see in movies. They may involve not animal-like cognitive agents so much as more “boring,” business-oriented computing systems. Some of the most transformative computer technologies of the period 2000-2014 have been drones, smartphones, and social networking. These all involve some AI, but the AI is mostly one component of a larger, non-AI system in which many other facets of software engineering play at least as much of a role.

Will society realize the importance of Artificial Intelligence?

See also: The moment when humans and machines merge

The basic premise of superintelligent machines whose priorities differ from those of their creators has been in the public consciousness for many decades. Arguably even Frankenstein, published in 1818, expresses this basic idea, though more modern forms include 2001: A Space Odyssey (1968), The Terminator (1984), I, Robot (2004), and many more. Probably most people in Western countries have at least heard of these ideas, if not watched or read fiction on the topic.

So why do most people, including many of society’s elites, ignore strong AI as a serious issue? One reason is just that the world is really big, and there are many important (and not-so-important) issues that demand attention. Many people think strong Artificial Intelligence is too far off, and we should focus on nearer-term problems.

In addition, it’s possible that science fiction itself is part of the reason: People may write off Artificial Intelligence scenarios as “just science fiction,” as I would have done prior to late 2005. (Of course, this is partly for good reason, since depictions of Artificial Intelligence in movies are usually very unrealistic.)

Often, citing Hollywood is taken as a thought-stopping deflection of the possibility of Artificial Intelligence getting out of control, without much in the way of substantive argument to back up that stance. For example: “let’s please keep the discussion firmly within the realm of reason and leave the robot uprisings to Hollywood screenwriters.”

As Artificial Intelligence progresses, I find it hard to imagine that mainstream society will ignore the topic forever. Perhaps awareness will accrue gradually, or perhaps an AI Sputnik moment will trigger an avalanche of interest. Stuart Russell expects that

Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to Artificial Intelligence as the field matures.

I think it’s likely that issues of Artificial Intelligence policy will be debated heavily in the coming decades, although it’s possible that AI will be like nuclear weapons — something that everyone is afraid of but that countries can’t stop because of arms-race dynamics. So even if Artificial Intelligence proceeds slowly, there’s probably value in thinking more about these issues well ahead of time, though I wouldn’t consider the counterfactual value of doing so to be astronomical compared with other projects in part because society will pick up the slack as the topic becomes more prominent.

The views and opinions expressed in this article are those of the authors/source and do not necessarily reflect the position of CSGLOBE or its staff.
