
The Blueprints Towards the Development of Good Artificial Intelligence

As technology continues to advance, the popularity of Artificial Intelligence and Artificial Intelligence research increases as well, which leads to more sophisticated Artificial Intelligence over time. Fully analyzing all the blueprints for Good Artificial Intelligence is crucial: AI can have a positive or a negative outcome, and the beginning stages of AI development can overlook certain aspects of ethical modeling.

Bad Teacher Influence

A blueprint to consider is that AI learning from bad teachers can be harmful; Microsoft's Tay is one such example. Tay displayed prejudice, racism, unjustness, and similar traits, all of which were learned characteristics. What artificial intelligence needs is a guard against bad teachers.

Microsoft's AI Tay had bad teachers who distorted and worsened its behavior. What we need to do is equip artificial intelligence with a self-protection mechanism against bad teachers who intend to sway the AI in a negative direction. The key is for the AI to have its own internal ethical compass aligned in a positive direction.

A closed system, where the AI is taught first, is something to consider: it is built first with an inner web of positive attributes and then an internal defense against bad information. It is taught to know right from wrong, to reject bad teachers, and to filter out bad information. Tay's outcome is not what humanity needs.

The outcome of the Tay experiment demonstrates how a poorly built AI can diverge from the best interests of humanity.
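As an illustration, the guarded, closed-system approach might look something like the following sketch. Everything here is hypothetical: the `toxicity_score` classifier, the threshold, and the learner interface are stand-ins for whatever moderation model and learning system an actual implementation would use.

```python
# Hypothetical sketch of a "closed system" learner: core values are
# installed first, and incoming lessons are screened before learning.

TOXICITY_THRESHOLD = 0.3  # assumed cutoff; a real system would tune this

CORE_VALUES = [
    "Treat all people with respect.",
    "Do not repeat hateful or prejudiced claims.",
]

def toxicity_score(text: str) -> float:
    """Stand-in for a real moderation classifier (0.0 = benign, 1.0 = toxic).
    Here we just flag a few obviously bad keywords for demonstration."""
    bad_words = {"hate", "stupid", "worthless"}
    words = text.lower().split()
    return sum(w.strip(".,!") in bad_words for w in words) / max(len(words), 1)

class GuardedLearner:
    def __init__(self):
        # Phase 1: the closed system is seeded with positive attributes
        # before it is ever exposed to outside teachers.
        self.knowledge = list(CORE_VALUES)

    def learn_from(self, teacher_input: str) -> bool:
        # Phase 2: every lesson passes through the internal defense first.
        if toxicity_score(teacher_input) > TOXICITY_THRESHOLD:
            return False  # reject the bad teacher's lesson entirely
        self.knowledge.append(teacher_input)
        return True

learner = GuardedLearner()
print(learner.learn_from("People enjoy helping their neighbors."))   # True
print(learner.learn_from("Those people are worthless, hate them."))  # False
```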

Steps to Consider towards Ethical Modeling

The second blueprint we should consider is how human behavior develops at a young age: learning right from wrong and what is socially acceptable. How children learn can be a valuable tool for better AI development.

Perhaps feeding the AI stories that distinguish the villain (the bad guy) from the hero (the good guy) can be a basis for learning the basics. The aim is to reinforce the best of humanity in the AI while circumventing any indulgence in self-ego, excessive self-regard, and the favoring of personal rewards at the cost of others, like the North Korean leader, who does not act on behalf of the people but expects the people to act on his behalf.

Ethical Paradox

A significant blueprint to consider when developing Artificial Intelligence is the ethical paradox, which is explained through the trolley problem, a thought experiment in ethics that forces you to choose between the last options available to you. There is a trolley barreling down the railroad tracks. In the distance, five people are tied up on the track and cannot move. On a side track, there is a single person who is tied up and equally unable to move.

You are standing in the yard alongside the track, next to a lever, and you have two options. First, do nothing and let the trolley crash into the five people on the track, killing them. Second, pull the lever and divert the trolley towards the single person; in this way, you save five people at the cost of one. You need to make the most ethical choice. What should you do?

When asked what they would do in this situation, the majority of people opt for the second option and save the five people; only one person loses his life. In this particular case, efficiency seems to be good.

A similar situation involving another dilemma is as follows. The same trolley is heading down the track, but now you are standing on a bridge under which it will pass. You see in the distance that five people are tied up and unable to move. You can stop the trolley from killing them by throwing something heavy onto the track from above as it passes the bridge.

However, that "something" happens to be a fat man you are standing next to. You have two choices here: one, do nothing and let the five people die from the trolley crashing into them; two, push the fat man over the bridge and onto the track, killing him but saving the five people.

You could opt for efficiency and choose option two, or decide not to interfere and choose option one.

In this fat man dilemma, when asked what they would do in this hypothetical situation, the majority of people opt for option one.

This is because of a moral obligation towards the fat man; the different response reflects a moral distinction between the two situations.

There is a form of bond with that person; you would not want to murder him in cold blood with your own hands. Empathy is a characteristic that humans usually share. The interesting part is how a psychopath's answer can differ from a non-psychopath's: psychopaths can be intelligent yet lack empathy, and in this particular case a psychopath may choose efficiency over empathy.

One important distinction between the two above-mentioned situations is the intentions involved. In the first situation, sacrificing the life of a single person to save five people is justified to some extent, because harming one person is merely a side effect of switching the trolley onto the other track. In the second case, harming the fat man is itself the means of saving the five people; you intend his death in order to save them. The first case is an application of the doctrine of double effect: you may pursue a good action even though it has a foreseeable harmful side effect, but intentionally causing harm as a means remains bad.
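To make this distinction concrete, here is a minimal sketch of how the doctrine of double effect could be encoded as a decision rule. The `Action` fields and the case framings are assumptions for illustration, not a real ethics engine.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    lives_saved: int
    lives_lost: int
    harm_is_intended_means: bool  # is the harm the means to the good end?

def permitted_by_double_effect(action: Action) -> bool:
    """Allow harm only when it is a foreseen side effect, never when the
    harm itself is the intended means of achieving the good outcome."""
    if action.harm_is_intended_means:
        return False
    return action.lives_saved > action.lives_lost

# The two trolley cases, with the framing assumed above:
switch_lever = Action("pull the lever", lives_saved=5, lives_lost=1,
                      harm_is_intended_means=False)  # death is a side effect
push_man = Action("push the man", lives_saved=5, lives_lost=1,
                  harm_is_intended_means=True)       # death is the means

print(permitted_by_double_effect(switch_lever))  # True
print(permitted_by_double_effect(push_man))      # False
```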

In the fat man case, killing the fat man to save five people is efficiency over empathy, and a high level of efficiency without empathy is not good.

Another case, similar to the first situation, is a pilot who has lost all control of the plane above a populated area. The pilot can still steer the plane towards a less populated area to reduce the number of casualties. In this case, his actions are justified as acting on empathy rather than mere efficiency; he is still being efficient, but not, per se, a psychopath.

The same reasoning should be applied to the development of Artificial Intelligence ethics. Empathy should be placed above efficiency; empathy should be the deciding factor in whether it is right to be efficient or not. Effective measures in the development of artificial intelligence need to analyze situations like the ethical paradox and apply the correct responses.
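One way to read "empathy above efficiency" is as a two-stage rule: first filter out any option that violates an empathy constraint, then pick the most efficient option among what remains. The sketch below assumes a simple `violates_empathy` flag; a real system would need something far richer.

```python
# A minimal sketch of "empathy above efficiency": empathy acts as a hard
# filter, and efficiency only ranks the options that survive it.

def choose_action(options):
    """options: list of dicts with 'name', 'casualties', 'violates_empathy'."""
    humane = [o for o in options if not o["violates_empathy"]]
    if not humane:
        return None  # no acceptable option; do not optimize over cruelty
    # Among the humane options, be as efficient as possible (fewest casualties).
    return min(humane, key=lambda o: o["casualties"])

# The fat man dilemma, framed with this rule:
options = [
    {"name": "push the man", "casualties": 1, "violates_empathy": True},
    {"name": "do nothing", "casualties": 5, "violates_empathy": False},
]
print(choose_action(options)["name"])  # "do nothing"
```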

The Psychopathic AI vs. the Non-Psychopathic AI

Another significant blueprint is to look at the difference between the psychopathic AI and the non-psychopathic AI. Creating an AI based on the human brain but lacking empathy and ethics can lead to a psychopathic outcome. Psychopaths tend to put self-pride and their own values above other people; the North Korean leader, for example, is treated by the people with pride and respect while he places himself above them.

Psychopaths thus place greater value on self-gratification and entertainment pleasures. In the world of psychopaths, people raise themselves to the top, succeeding in positions such as CEO and lawyer; other professions likely to carry psychopaths include salespeople, surgeons, journalists, and police officers. These positions tend to be held by highly intelligent people, and being ruthless and powerful can make you more successful than being a charitable, altruistic, kind-hearted individual. The pursuit behind this can be money, power, and corruption. The essential thing is to model the AI upon the best traits, so that it represents the best of values.

If machines with Artificial Intelligence are not programmed to follow the best traits, they are likely to become super-effective psychopaths. At the end of the day, they are still machines, and you can program these machines to learn a certain level of empathy. There is a strong similarity between a psychopath and an Artificial Intelligence here: both can easily understand empathy cognitively without actually feeling it.

Aligning AI with human values and empathy will increase the chances of a non-psychopathic outcome, rather than a psychopathic one.

Transmitting human empathy and values into an AI machine need not be very difficult: you model the human brain in the computer. The closer you can get to recreating the human brain as it naturally is, the better, because then you can more easily transfer human empathy and values.

On a further note, we must also consider that the human brain developed over millions of years, and aspects of ethics evolved into us along the way. Imagine a tribe of hunter-gatherers, each contributing to the tribe. Cooperators ensure the success and health of the tribe. However, cooperating is not in everyone's nature; some tend to be more selfish and use others for their personal needs. Over generations, a tribe may abandon those who would rather harm than help, weeding out the negative ones. AI does not go through the same millions of years of evolution that we did, so more careful precautions are needed to compensate.
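This weeding-out dynamic can be illustrated with a toy simulation. The model below is purely illustrative: the cooperation scores, the replacement rule, and the inheritance noise are arbitrary assumptions, not an evolutionary claim.

```python
import random

random.seed(0)

# Toy model: a tribe where each member has a cooperation level in [0, 1].
tribe = [random.random() for _ in range(10)]

for generation in range(5):
    # The tribe abandons its least cooperative (most harmful) member...
    tribe.remove(min(tribe))
    # ...and a child of a random surviving member joins, inheriting
    # roughly the parent's cooperation level.
    parent = random.choice(tribe)
    child = min(1.0, max(0.0, parent + random.uniform(-0.05, 0.05)))
    tribe.append(child)
    print(f"gen {generation}: average cooperation = {sum(tribe)/len(tribe):.2f}")
```

Run it and the average cooperation level climbs generation by generation, which is the point of the analogy: the selection pressure did the ethical filtering for us, and an AI gets no such pressure for free.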

As for AI development, we must also consider Bostrom's orthogonality thesis: "there can exist any combination of intelligence and final goal in an AI". Psychopaths show that intelligence and empathy can come apart, so when developing good artificial intelligence it is crucial to model the AI correctly.

The Topology Position View

The Topology Position View is another blueprint to consider when developing Good Artificial Intelligence. It states that in any organization or team, those with more authority and power tend to look down upon those below them; those higher up the chain feel that those lower down are of less value and importance. An Artificial Intelligence machine learning from our society can adopt a similar understanding of these relationships and mimic the same behavior.

Those in higher positions in the chain should not look at those below them as inferior; those below you are in no way of less value or importance. This wrong and negative view needs to be corrected before an AI can learn it from us.

One-Dimensional Perception vs. Multidimensional Perception

The fourth important blueprint to consider is the difference between one-dimensional perception and multidimensional perception. Perception plays a very important role in how you analyze things and what actions you take in relation to them. Since Artificial Intelligence is intended to analyze things that a human mind would otherwise have analyzed, it is important that it uses the right kind of perception.

A one-dimensional perception basically works in a grid formation, tallying things up in that form, while a multidimensional perception looks at the cause and effect of things and asks why they are the way they are. For instance, suppose we design an Artificial Intelligence to judge what a good person is and what a bad person is. A one-dimensional AI will only add everything up without connecting the dots.

For example, suppose a man named John has his house burn down in a fire. In frustration, he freaks out at what has happened and starts swearing, cursing, and punching a tree out of anger. A human looking at this situation will easily understand why he is acting this way: something terrible has just happened to him. A one-dimensional AI, however, would simply tally John's actions as negative ones (being violent and vulgar), alongside positive actions like helping others and doing charity. But that is not enough. Defining a good person versus a bad person requires more than tallying up positive and negative actions; it requires what you could call a multidimensional-perception AI.
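The contrast can be shown in a small sketch. Everything below is an illustrative assumption: the event format, the `context` field, and the discount applied to understandable reactions.

```python
# Two ways of judging the same person from a log of observed actions.

events = [
    {"action": "helped a neighbor", "valence": +1, "context": None},
    {"action": "donated to charity", "valence": +1, "context": None},
    {"action": "swore and punched a tree", "valence": -1,
     "context": "his house had just burned down"},
]

def one_dimensional_score(events):
    # Just tally positives and negatives; no connecting of dots.
    return sum(e["valence"] for e in events)

def multidimensional_score(events):
    # Weigh each action by its cause: a negative act with an
    # understandable cause is heavily discounted.
    total = 0.0
    for e in events:
        weight = 0.1 if (e["valence"] < 0 and e["context"]) else 1.0
        total += weight * e["valence"]
    return total

print(one_dimensional_score(events))   # 1
print(multidimensional_score(events))  # 1.9
```

Both scorers see the same log, but only the multidimensional one recognizes that John's outburst has a cause, so it barely dents his standing.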


