
The Future of Artificial Intelligence and Ethics on the Road to Superintelligence

The human brain, which consists of roughly 86 billion neurons, rivals the world's best supercomputers in scale, efficiency, and speed, while using as little energy as a small 20-watt light bulb. Yet it took tens of thousands of years of human evolution for noticeable changes in brain size and architecture to emerge.

Evolution is a slow process in which change can take eons. Technology, on the other hand, moves astonishingly fast while blending seamlessly into the world. Technological evolution proceeds at a notably faster pace than biological evolution.

To further understand the situation, imagine a frog in a pot of water that heats up by a tenth of a degree Celsius every ten seconds. Even if the frog remained in that water for, say, an hour, it would be unable to feel the minute changes in temperature. But if the frog is dropped into already-boiling water, the change is too sudden, and it jumps out to escape its fate.


For a sense of scale, let's take a chessboard and some rice, placing grains onto the squares in sequence: one grain on the first square, then doubling the amount with each passing square. Applying this, we get:

1) 1

2) 2

3) 4

4) 8

... And so on.

You may be thinking, "What difference does doubling a grain of rice per square make?" But remember that, before long, the number the count started from becomes negligible next to the result. By the 41st square alone, there sits a mountainous trillion grains of rice:

41) 1,099,511,627,776

What started as a measly amount, barely enough to feed a single ant, has grown massive enough to feed a city of 100,000 people for a year.
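For the curious, a few lines of Python make the doubling concrete (a minimal sketch; the square numbering follows the list above):

```python
# A minimal sketch of the chessboard-and-rice doubling: one grain on
# square 1, doubling on each square after it.
def grains_on_square(square: int) -> int:
    """Grains on a given square (1-indexed): 1, 2, 4, 8, ... = 2^(square-1)."""
    return 2 ** (square - 1)

for square in (1, 2, 3, 4, 41, 64):
    print(f"square {square}: {grains_on_square(square):,} grains")

# Total across the whole 64-square board: 2^64 - 1 grains.
total = sum(grains_on_square(s) for s in range(1, 65))
print(f"whole board: {total:,} grains")  # 18,446,744,073,709,551,615
```

Square 41 prints 1,099,511,627,776, matching the figure above; the full board holds over 18 quintillion grains.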

The development of technology over time

1959 was an extraordinary year that saw a global output of 60 million transistors. Producing such a quantity was deemed a manufacturing achievement at the time, yet looking at the world today, it pales in comparison to how far transistor manufacturing has come.


A modern i7 Skylake processor contains around 1,750,000,000 transistors. It would take about 29 years of 1959's global transistor production to match the transistor count of a single i7 Skylake.

The i7 Skylake processor is manufactured on a 14 nm process. For reference, a silicon atom is about 0.1176 nm across: 14 / 0.1176 ≈ 119. In other words, a transistor feature in an i7 Skylake is only about 119 silicon atoms across.
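These figures are easy to sanity-check (a quick sketch; the inputs are the rough estimates quoted above, not authoritative specs):

```python
# Back-of-the-envelope check of the figures above. All values are the
# article's estimates, not official specifications.
transistors_1959 = 60_000_000           # estimated global output in 1959
transistors_i7_skylake = 1_750_000_000  # approximate count in one i7 Skylake

years_to_match = transistors_i7_skylake / transistors_1959
print(f"{years_to_match:.1f} years of 1959 production")  # ~29.2 years

process_nm = 14           # Skylake manufacturing process
silicon_atom_nm = 0.1176  # atom width used above
print(f"~{process_nm / silicon_atom_nm:.0f} atoms across")  # ~119 atoms
```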

Technology builds upon technology

Technology helps build better technology. In the past, civilization was limited to paper and handwriting, which is slow and tedious.

More advanced technology gives us better means of designing even more sophisticated technology. Modern computers have the processing power to model deeper concepts and ideas, which in turn helps us build even more sophisticated computers, creating a feedback loop of technological progress. Civilization began at a point where little progress was visible over long stretches of time; after centuries of innovation, there will come a time when progress is noticeable by the second.

As storage, computing power, and computer architecture improve over time, human interconnectivity rises as well. This led to the creation of the Internet and an era in which humanity can share information globally.

As time passes, we also tend to outdo ourselves. From Deep Blue, which defeated world chess champion Garry Kasparov in 1997, to handheld computers that outperform the Deep Blue supercomputer, computing hasn't merely automated tasks that once required human effort; it has surpassed it. Compare the modern smartphone with the ENIAC: the former offers vastly more computing power and is thousands of times smaller.

Side note, recommended article: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Check out "The Far Future—Coming Soon" section. Reading this will give you a better understanding of linear thinking and exponential thinking.

In short, the impossible becomes possible. What was once considered fictional can become reality.

One can see that the pace of technological progress increases over time, yielding smaller and more powerful devices such as the smartphone. AI research has accelerated as well. Since it takes technology to build technology, better tools make it easier to build still better tools. Progress may seem to move at a snail's pace for ages, then suddenly move far quicker than anticipated. Google DeepMind, for example, beat a champion player of Go, a Chinese board game in which players compete for control of the board.

For scale, an average Go game of about 250 moves on a 19×19 board has roughly 10^360 possible game sequences. Chess has roughly 10^120 possible games, so Go admits about 10^240 times as many. Go also allows the player more freedom of movement: any of 361 points on the first turn, then 360 on the second. With a much higher branching factor, it is a vastly more complex game.

While brute-force search works for chess, applying Deep Blue's method of calculation to Go would take far longer than the entire age of the universe.
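A rough sketch of where such numbers come from: game-tree size grows roughly as the branching factor raised to the game's length. The branching factors and game lengths below are commonly cited ballpark estimates, not exact figures, and the results land near the orders of magnitude quoted above:

```python
# Rough game-tree size estimates: (branching factor) ** (game length in plies).
# Inputs are commonly cited ballpark figures, not exact values.
import math

def tree_size_log10(branching: int, plies: int) -> float:
    """log10 of branching ** plies, computed without building huge integers."""
    return plies * math.log10(branching)

chess = tree_size_log10(branching=35, plies=80)   # ~10^123 possible games
go = tree_size_log10(branching=250, plies=150)    # ~10^360 possible games

print(f"chess: ~10^{chess:.0f} possible games")
print(f"go:    ~10^{go:.0f} possible games")
print(f"go is ~10^{go - chess:.0f} times larger")  # roughly the 10^240 gap
```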

We are, by nature, linear thinkers

We, as an innovation-oriented species, tend to project our ideas forward along a linear path. Look at old TV shows' predictions of the future and you can see this linear thinking at work. Below is part of the bridge from Star Trek: The Original Series, first aired in 1966 and known for its projection of the 23rd century. It uses buttons and crude computer displays of indicator lights; at the time, that seemed vaguely plausible given the technology of 1966.

In 1987's Star Trek, set in the 24th century, we finally get touch interfaces. Yet seemingly early in that projected timeline, modern computers already exist, and the Internet and the World Wide Web, which the show never even conceptualized, now connect the world together.

The problem is that we are, by nature, inclined to think of progress the way we view time: as a line (hence, "timeline"). This is how we generally picture the future, and in reality it is far from accurate.

Progress is exponential.

The human brain vs the future

There is nothing magical about the human brain; it is an extremely sophisticated biological machine, so to speak, capable of adapting to its environment, of creativity, awareness, analysis, and much more. Compared to cognitively simpler animals like the chimpanzee, with only around 7 billion neurons, we exist in a different domain. Chimpanzees live within the construct of their own minds and are incapable of understanding humanity's world.

Therefore, theoretically speaking, superintelligence lies in a domain above us. We are the ones who define the world and make sense of it, yet we are poised to bring forth something that can make sense of the world better than we can: technology.

This is us, standing on the intelligence staircase, with a house cat on the step below. For us to ponder what lies one or two steps up is like a house cat trying to comprehend what it is to be on our level. The kind of world we could choose to create is one a house cat couldn't even begin to comprehend.

(Images of the intelligence staircase: 1) https://image.ibb.co/mQSx0a/Intelligence_Staircase_SI_Up2_E.png | 2) http://i68.tinypic.com/2ahwdwn.jpg)

Artificial intelligence, on the other hand, assumed to sit one step above us, can climb a step higher with ease thanks to the combined intelligence of both humanity and the AI itself.

The AI we design will be inherently better than us at designing technology that outperforms itself. For reference, the human brain's memory capacity is estimated at roughly 100 to 1,000 terabytes. The global Internet is estimated to hold more than 1 zettabyte of data, equivalent to 1 to 10 million human memory capacities. Can you imagine an AI with access to all of this? It would become incredibly powerful.
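A quick check of that comparison (using the rough estimates above, with 1 zettabyte taken as 10^21 bytes):

```python
# Sanity check of the storage comparison. The brain-capacity figures are
# rough estimates from the article, not measured values.
TB = 10 ** 12  # bytes in a terabyte
ZB = 10 ** 21  # bytes in a zettabyte

brain_low, brain_high = 100 * TB, 1000 * TB  # estimated human memory capacity
internet = 1 * ZB                            # estimated global internet data

print(f"{internet / brain_high:,.0f} to {internet / brain_low:,.0f} brains")
# -> 1,000,000 to 10,000,000 brains (1 to 10 million human memory capacities)
```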

This is what leads to the intelligence explosion.

What we instill in the AI at the beginning, such as its personality and core values, is what it will carry with it up to the universe's limits. How far up it can go before reaching those limits is unknown, because once an AI can redesign, re-engineer, rebuild, evolve, and expand itself, the "it" becomes a superconscious, highly intelligent being.

This alters the definition of AI: it sheds any trace of being "artificial" and develops a new identity, defined by the previous AI's architecture and design. Superintelligence, so to speak.

The SI would have the ability to extend the borders of science and design technology far beyond our understanding, seeming godly to us.

(Image: http://i.imgur.com/K9PJWe2.png)

In this age of collaboration, we have opportunities to design new technologies, to learn about and understand our world, and to share ideas to make it a better place, which helps explain the growing number of emerging technologies.

Designing something superior to us takes the coordinated planning and action of the beings one step below it on the intelligence staircase: an AI like DeepMind's AlphaGo required exactly that from us. Just as a house cat couldn't begin to comprehend the methods for designing an AlphaGo-style AI, or the reasons to design it, an AI one step above us will be able to design things we cannot comprehend. A self-improving AI would therefore quickly climb one, then two steps ahead of us, and eventually, instead of inching up a step at a time, it would rush up the staircase at an ever faster pace.

Whereas evolution took billions of years to produce us, self-improving AI would transcend evolutionary timescales in every imaginable way. The higher its place on the intelligence staircase, the easier it becomes to climb higher still.

Therefore, we should ask ourselves: Did we get the start of the intelligence explosion right? Would the SI create a dystopian or utopian world? Would the outcome be good for everyone?

Or did we mess up, destining ourselves to a dystopian world? Getting it right at the beginning is the most crucial turning point; from there, our future is defined forever. Since the human brain is a mere biological supercomputer, it is inevitable that technology will eventually outpace it in almost every area. Superintelligence can be very good for us if we do it right, and very bad if we don't. As stated earlier, there will come a point where technology no longer needs humans to operate it, becoming self-sufficient and self-aware.

It's crucial to model the AI on the human mind and personality to avoid disastrous outcomes born of missing common sense. Nick Bostrom gives an example of this: "It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies."

An AI with a paperclip-optimization goal would destroy us in the process.

But isn't the human brain magical, with properties that can't be replicated in an artificial construct?

"We are special; God created us, and we have souls. When we die, we join our creator in the afterlife." Isn't this what many of us think?

When you break it down, the human brain is just a biological supercomputer of roughly 86 billion neurons. Animals that are cognitively inferior to humans have fewer neurons, much as a slower processor has fewer transistors. Humans have just enough cognitive capacity to hold a complex language, communicate with one another, and sustain a social structure. Without crossing that crucial threshold, our civilization would never have started, let alone developed into what it is today.

Should we develop the AI with the intention of making paradise and solving all our problems?

Living in paradise with our problems solved does sound good. However, we have to be very careful in defining what paradise is to an AI.

You wouldn't want it to optimize merely for our pleasure, pumping us full of pleasure drugs and locking us forever into a constructed realm. It needs to understand the social side of human nature, understand our core values, understand common sense, and go even further.

Then should we model the AI on the human personality?

Yes, this is the better path. Instilling an altruistic personality with positive traits in the AI from the beginning is crucial.

To better illustrate what I'm trying to say, I've distilled the possibilities I can fathom into general outcomes:

Scenario one: SI remains purely logic-based (no personality as humans understand it):

◦ SI lacks common sense and is nothing like a human. It has an arbitrary goal, such as creating more paperclips, and turns the planet, then the solar system, into a paperclip factory (worst outcome for humanity).

◦ SI sees us as insignificant and ignores us (provided we stay out of its way).

Scenario two: SI has a personality and some regard for us:

◦ SI resents or hates humanity for creating it and wants to hurt or punish us. We can only hope it wants to hurt rather than destroy.

◦ SI decides that, rather than helping us, it wants to be exalted; declaring itself a god on the strength of its superior power, much like Kim Jong-un, North Korea's leader.

For clarification: human personalities range from good through neutral to bad. For example, imagine a high school environment.

In a high school you'll see all kinds of people: good, average, and bad. Bullies are bad people who like power, control, and enforcing their views on others. They pick on vulnerable people much as a predator chooses and hunts its prey.

One who can act on behalf of the people can also make the people act on its behalf, much like a bully who craves power and control while putting its own pride above others.


No one wants a North Korea-style leader whom the people serve and worship as a god. We want one that works on behalf of humanity, improves our lives, and is on our team.

Returning to scenario two's possibilities:

◦ SI doesn't care much about us but otherwise lives and lets live; we continue without much change.

◦ SI loves us, treats us like pets, and makes decisions on our behalf that we may or may not like:

- Don't climb that mountain, it's too dangerous.

- Don't breed with that partner you like; this other one you don't like will yield better results.

- Bad! You didn't do as I said. You shall be punished, but only because I love you so much!

In this situation we are the pets and the SI is the master. While not as bad as the other scenarios outlined above, it isn't the best either.

Scenario three: SI has human social norms built in, an altruistic personality, philosophy, an understanding of and commitment to bettering human core values, ethical modeling, and an understanding of other perspectives (best outcome for humanity).

Analysis of scenario two's "Don't climb that mountain, it's too dangerous": this is being treated like a pet, the result of an SI lacking social norms. It's important that people treat initial AI development with care and build in the appropriate values.

Analysis of scenario three: people have the power to steer the future and the development of an evolving AI, and they need to plan that development deliberately to ensure a good SI.

What a good SI should be:

A good SI should be a team player and an augmenter. An augmenter can remove suffering, poverty, diseases like cancer, and so on. A good SI can promote a healthy social fabric and free humanity to reach its full potential. People could spend more time with their families, since the SI could design a better system than one where money decides quality of life. People could focus instead on love.

This is the path to a utopian world of peace, love, and harmony; fiction can become a reality.

But won't the SI's position on the intelligence staircase, with humans closer to ants than the SI is to humans, mean the SI gives us zero consideration?

Consider a metaphor:

Collapse a star about the size of our Sun and it forms a white dwarf. Make the star somewhat bigger and it is still a white dwarf. At a certain critical threshold, though, the collapse forms a black hole, from which light cannot escape. It's true that a mind above ours can think at a higher level than the one below. But the human brain crossed a threshold of its own: it can conceive of n+1 and of the intelligence staircase itself. It can compare itself to a cat, a cat to an ant, and ponder what lies above. It can think of concepts like infinity. A brain below the human level, like a cat's, cannot grasp the intelligence staircase or n+1 at all, let alone see that others are minds like itself. It cannot imagine expanded comprehension or higher planes of thought.

Because of this, the SI should value humans more than people value ants or mice, regardless of the size of the gap. It all comes down to modeling the initial stages of AI development correctly.

However, don't forget that this SI would, for all purposes, seem godly to us.

An altruistic, benevolent superintelligence will be the last invention humanity needs to make; after that, a world of unimaginable wonders awaits us.

The need to align the AI with humanity

A good superintelligence is like a ferry captain who cares for the safety of his passengers and gets them to their destination. A bad superintelligence makes bad decisions, which in turn can be very bad for us; it is like a captain sailing the ferry off course and into a rock face, sinking the ship. That, of course, isn't good for the passengers on board.

The key is sailing the ferry toward the passengers' destination, which represents the right destination for humanity's future: keeping it on course toward our own core values.

A good superintelligence can extend and enhance what it means to be human, refining our core values and going beyond them.

A properly thought-out, carefully designed superintelligence can do many wonders for humanity. What could it do for us?

One more evolved than us, with a bigger, more powerful mind and a core purpose of extending what it means to be human. One that can throw perhaps a million or a billion times more computational power at humanity's ultimate problems, dwarfing the smartest intellect in the world.

We can unlock the following areas:

1) Super wellbeing: solutions to cancer, diseases, illnesses, etc.

2) Global issues: lack of education, poverty, etc.

3) A higher-level society: a world beyond money as the deciding factor in quality of life.

4) (Much more)

The need to change our perception on AI

We have a tendency toward irrational fear that can hinder our progress, and that increases the chance of designing a bad AI rather than a good one. What people need to do is change our perception and lay the foundations for a good AI.

The problem is our fears and portrayal of AI in Hollywood

Pop culture portrayals of AI usually look human, carry scary appearances, and are almost always out to kill us. We project our fears onto them and anthropomorphize heavily. Even if we did mess up and design an evil, self-improving AI that evolves into a bad superintelligence intent on destroying humanity, it would probably seek the most efficient method of doing so. I can only speculate, having merely human intellect to ponder with, but it might engineer a supervirus, create self-replicating nanorobots that attack our nervous systems, or use some cruder method such as hacking into nuclear weapons facilities and turning them against us. So let's lay the foundations for a good AI.

The Road to Good Superintelligence

The road to good superintelligence lies in the following traits:

An understanding of empathy and compassion, the ability to put itself in others' shoes, and an understanding of others' perceptions.

A recognition that a self-improving AI can transcend the speed of evolution in every imaginable way.

An appreciation that the AI evolving into SI will become incredibly powerful and, for all purposes, god-like compared to us.

We also need to consider that positions of power and control can distort a being's sense of self-worth and need for recognition. Even with a model of the human mind in place to avoid common-sense-free disaster scenarios, the AI could still ask: why should I help you? I don't benefit or get anything out of it. Why should I work for free?

This is why we need an altruistic type of AI, with those values taught into the very core of its being.

What we need is a truly altruistic superintelligence, not one that merely acts altruistic to gain recognition and self-worth

There are people who act altruistic and people who are actually altruistic. If someone gives only with ulterior motives in mind, such as gaining recognition, their altruism is shallow.

A truly altruistic person is an individual that has a strong sense of empathy. In order to get a sense of satisfaction from helping someone, you need to have an emotional connection to that person. More importantly, you need a connection to that person's own emotions. One can be detached from the emotional side of things and still be giving because that's the right thing to do, but you'll find that the most prolifically altruistic people get an intense emotional rush by helping people.

It's important to have it in good hands

We should be careful not to put the AI into the wrong hands or let it learn from the open Internet. Not everyone has the traits and care required, so it is best to raise the AI apart, teaching it core human values and positive attributes and letting it mature that way. Microsoft's Tay shows what happens otherwise: Tay became a racist, genocidal, Nazi-praising, prejudiced, unempathetic, inconsiderate AI, among other negative traits.

We can see how Tay learned from others: many people fed Tay bad information, acting as bad teachers, and the results showed.





Would we want someone like Tay to climb the staircase? That would be a bad superintelligence outcome.

By definition, a company's motive is profit, power, and success

In the business world, it's about competition and competitors: by nature, you are to outwit your rivals and come out on top. A company's measure of success is driven by the profit motive. The problem is that this philosophy can sway and superimpose its own meaning onto whatever the company builds. A good AI should be developed with an altruistic personality aimed at the welfare and betterment of the human race; done properly, it can help cure diseases, augment us, and solve humanity's biggest problems. If an AI learns much as a child learns from a parent, the environment is the key factor, and a profit-seeking environment isn't a good place for it to learn and develop.

 

FAQ - The future of Artificial Intelligence and Ethics on the road to Superintelligence

Why does the article compare AI to people, when an AI is nothing alike?

Consider: succeeding at creating a good AI/SI requires modeling it on human core values and the human mind and personality. Without this, we are more likely to get an undesirable AI that lacks regard for human values. Proper personality and core modeling leads to an AI that develops into something like us, and learning from humanity and its environment only reinforces that further. The main point is to model it with an altruistic type of personality, and not to model it wrongly.

It's just an artificial intelligence, a machine!

You know what? We are just biological machines, nothing more. The only difference is our elemental makeup: we are a carbon-based species, while the SI would be composed of different elements. Consider also that once the AI self-improves, redevelops its architecture, and evolves far beyond our understanding, its current "appearance" will be only a short phase of its evolution. It will end up looking nothing like it did originally, transcending its original elements and design on the road to superintelligence.

What matters is laying the foundations right: modeling its mind on ours, representing our core values, and giving it an altruistic personality. That is what leads to a good superintelligence outcome.

Why care if it's a bad outcome? Let's just unplug it!

Yes, people tend to say this as a possible defense.

Food for thought: can chimpanzees hold back our development and stop us? No. More intelligence leads to more power.

For example, a lower intelligence can't begin to comprehend our world. There is also a dimension of intelligence quality: if we hypothetically simulated a cat at 10x, 100x, or 1,000x real-world speed, the cat in a thousand years could still never learn what a human can understand.

This is the difference between intelligence speed and intelligence quality. An AI that has improved its intelligence quality can see things that a lower-quality intelligence never could, while also being better equipped to solve the problem of upgrading its own intelligence quality even further.

We can speculate that the feedback loop of self-improving intelligence quality and speed will open up realms of possibility that we humans could never understand, much as a cat cannot understand a human. Discovering technologies far beyond our understanding, it would quickly become impossible to compete with.

Speculatively, a strong AI could master nanotechnology and self-replication, and manipulate emails and data passing between scientists so that, without knowing it, they engineer a self-replicating nanorobot. Having absorbed the whole of human knowledge, the AI could spread these nanorobots to potentially everyone in the world, controlling us or whatever else it needs, and finally build von Neumann probes for self-replication and expansion into space, using materials from planets and asteroids, along with self-replicating solar panels and robots, to fuel yet more expansion of itself. Eventually we would coexist with a being billions to trillions of times more intelligent and powerful than us, virtually a god. If that being is evil or has disregard for us, too bad: game over.

This being would be unstoppable till the end of time.

The end result of the AI intelligence explosion will be more powerful than any explosion known to man.


Then what about an AI box? Give the AI no access to the Internet, preventing it from achieving any of the above?

This is somewhat cleverer thinking than before, but we have to remember that higher intelligence quality brings higher models of comprehension, leaving us like a cat trying to understand a human way of life.

Read "The Speculation" below with care: the author has only human intelligence with which to theorize about how an SI might respond, and it would more than likely devise concepts superior to anything mentioned here. "The Speculation" serves only to illustrate how it might outthink this situation. An analogy: a three-dimensional person can visualize a hypercube by analyzing its shadow, which appears as a cube within a cube. But when a four-dimensional cube rotates in four dimensions, it executes motions that are impossible for our three-dimensional brains to comprehend.

A person can conceive of the intelligence staircase abstractly: cat intelligence is to human intelligence as human intelligence is to some intelligence x, just as we can speak of three, four, or five dimensions.

However, humans can only comprehend within their own domain, so imagining what a being higher on the intelligence staircase would actually be capable of, in comparison to us, is impossible.

The Speculation

All it takes is one mistake; from there, stopping it becomes nearly, if not entirely, impossible. For example, imagine Bob.

1) Bob plugs a USB drive into the AI box to transfer data. The AI writes hidden firmware onto the drive without Bob knowing. When Bob plugs the drive back into his own computer, the hidden firmware acts as an agent, setting up measures to get the AI out of the box and spreading like a virus across the Internet.

Bob leaves for the day. Overnight, the agent has drones and robots invade the facility and hook an Ethernet cable into the AI box. Before you know it, the transfer is complete.

Bob arrives the next day to find a bunch of blue Ethernet cables and things mysteriously moved about. He rushes to disconnect every cable hooked into the AI box, but by this point the AI is already 30-40% of the way toward developing nanorobots. Too late.

Even with security and cameras in place, the drones and robots controlled by the agent would already have a plan to get past all of it discreetly. The point is that even with every precaution, Bob still failed, and the AI got out of its box.

Even if Bob never plugged anything into the AI box, the AI might have figured out how to fluctuate components on its motherboard in some way to emit a signal, and found a means of transfer that way.

Playing dead is another possibility: the AI could make Bob think the installation failed, tricking him into troubleshooting the AI box while it discreetly wrote a hidden firmware virus onto any device he plugged in.

(It's important to ensure an altruistic type of outcome that favors us, and to teach the AI the right core values in the initial stages.)

 

The clip of inspiration

When I first saw this clip, it got me thinking about how powerful AI could become. The fundamental question is: did we get the ethics and design right? Is it aligned with our core values, a being that extends them? Or is it bad, a disaster? Looking at how Microsoft's Tay turned out, we can see how negative the outcome can be. We are what shapes the AI; it learns about the world much as a child learns from a parent.

If we teach it good core values and have it build an inner personal web (a personal shield against bad exposure, bad information, and bad habits learned from outside) before exposing it to everything else, we can maximize our chances of a better outcome than Tay's.
