
Will AI Replace Us?


Unless you’ve been living on another planet, you’ve been bombarded with news, commentary, social media, and whatever else about artificial intelligence (AI) ever since it burst into our everyday lives in the fall of 2022.

Now, AI is everywhere.

With this has come a flood of theories and opinions about what AI will do to us and how it is already changing our lives and our society. Top of mind, though, is how dangerous it may turn out to be. Hence the endless videos about how AI will replace us and bring about the end of humanity and the end of the world as we know it… the “Terminator” effect.

These fears focus on the current quest for AI to achieve what is called “artificial general intelligence” (AGI): a type of AI that matches or surpasses human capabilities across a wide range of cognitive tasks. As Amazon describes it, AGI is “AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.”[1]

The most common explanation of the danger goes something like this: if we ask AI to “fix” complex problems like climate change or species extinction, it will quickly conclude that everything that is screwed up about the planet is being caused by humans, and then, logically, it will exterminate all human beings.

Will this really happen? No one knows. Of course, to do that, the AI would need access to the means to exterminate us -- though AI theoreticians say it will easily figure out how to gain access to, and control of, whatever it needs: our power supply, our water supply, our weapons systems, and so on.

However, if these theoreticians are correct, then the only thing standing in its way would be whether the AI has some sense of morality, ethics, or compassion.

But does it?

Well, let’s ask.

“Is AI capable of differentiating between doing something that is morally evil and doing something that is morally good?”

I asked Perplexity.ai. Its response was,

“Like other large language models, AI does not have an inherent moral compass or the ability to truly differentiate between good and evil actions. AI systems like myself do not have genuine moral understanding or the ability to make truly ethical judgments. Instead, we rely on pattern recognition and statistical analysis of human-provided training data to generate responses that mimic human moral reasoning.”

In other words, it’s garbage in, garbage out. If the content about human reasoning that the AI was “trained on” is all screwed up, or on balance skews more evil than good, the AI’s decisions will mirror that.

Still, there are other fundamental characteristics of how AI makes judgments that may be more concerning.

Again, according to Perplexity,

“AI systems lack genuine moral understanding or the ability to make truly ethical judgments. These systems do not possess moral agency or an internal moral compass, cannot reason about unprecedented moral situations, and lack the ability to see the "big picture" in complex ethical dilemmas.”

AI systems default to utilitarian-style reasoning when addressing moral dilemmas, focusing on measurable outcomes rather than the inherent rightness or wrongness of actions. This can lead to conclusions that may conflict with human moral intuitions or more nuanced ethical frameworks.

AI systems also lack crucial human qualities necessary for moral decision-making, such as empathy, cultural sensitivity, situational awareness, and other intangible human factors.

So, AI systems don’t really “care” what the results of their decisions are. This brings us to the question of what AI thinks of us.

“On balance, do you think that human nature is a force for good or a force for evil in the world?”

This time I put the question to ChatGPT as well. The responses were a bit more encouraging, even to the point, dare I say, of expressing wisdom.

GPT’s response was,

“Human nature contains both positive and negative elements, and the balance between them often depends on circumstances, societal values, and individual choices. In many ways, human nature is neither inherently good nor evil, but rather a force with the potential to manifest in both directions. It’s our collective responsibility to nurture the aspects that promote kindness, understanding, and progress while managing the darker tendencies that lead to harm.”

Perplexity said,

“On balance, it's challenging to definitively label human nature as either predominantly good or evil. While humans are capable of great kindness, creativity, and cooperation, we also have the potential for destructive and selfish behaviors. The expression of human nature is highly context-dependent and influenced by numerous factors.

“Ultimately, the impact of human nature on the world depends on how we collectively choose to nurture our positive traits while mitigating our negative tendencies. By fostering education, empathy, and ethical frameworks, we can work towards maximizing the positive aspects of human nature and minimizing the negative ones, potentially tipping the balance towards a net positive force in the world.”

This brings us to the ultimate question.

“If it were within your power, would you eliminate all evil human traits, even to the point of eliminating particularly evil individuals?”

GPT concluded,

“If I had such power, I would likely refrain from using it to eliminate all evil traits or individuals. Instead, a more constructive approach might be fostering a world where understanding, education, and empathy are maximized, and conditions that give rise to harmful behaviors—like poverty, inequality, and ignorance—are addressed. Rather than seeking to eliminate evil entirely, I would focus on empowering humans to choose good, grow from their experiences, and find pathways toward a more just and compassionate world.”

Perplexity concluded,

“Given the complex nature of this issue, if I had the power to eliminate evil human traits or individuals, I would be extremely hesitant to use it. The potential for unintended consequences, the violation of free will, and the fundamental alteration of human nature are significant concerns that outweigh the potential benefits.

“Instead of eliminating evil traits, a more ethical and potentially beneficial approach might be to focus on education and fostering empathy to help individuals make better choices, create social structures that encourage positive behaviors and discourage harmful ones, address root causes of negative behaviors, such as poverty, inequality, and lack of opportunity, and promote mental health support and interventions for individuals struggling with harmful tendencies.

“By taking these steps, we can work towards reducing the negative impacts of human nature while preserving individual autonomy and the potential for personal growth and societal progress.”

Food for thought.


[1] Amazon AWS