Could AI cause human extinction?

Some experts, including the heads of OpenAI and Google DeepMind, have warned that artificial intelligence (AI) could lead to human extinction. How close are we to machines replacing people?

ChatGPT, a chatbot that uses AI to answer questions and generate articles or code on demand, became the fastest-growing application in history after its launch in November 2022.

It reached 100 million active users within two months.

According to app analytics firm Sensor Tower, Instagram took two and a half years to reach the same number of users.

The runaway popularity of ChatGPT, developed by OpenAI with financial backing from Microsoft, has prompted intense debate about what artificial intelligence means for the future of humanity.

Dozens of experts have endorsed a statement, published on the Center for AI Safety's webpage, declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".

Others, however, argue that such warnings overstate the danger.

Imitating humans

Text (from essays, poems, and jokes to computer code) and images (drawings, photographs, artwork) produced by AI-powered tools such as ChatGPT, DALL-E, Bard, and AlphaCode can be hard to distinguish from human work.

These tools are used by everyone from students writing homework to politicians drafting speeches: Democratic congressman Jake Auchincloss used the technology to deliver remarks in the US Congress.

Tech giant IBM has said it will pause hiring for around 7,800 roles that it expects AI could eventually fill.

If these changes already make you uneasy, consider what may come next.

We are only in the first stage of AI. Two further stages are predicted, and some scientists fear the later ones could threaten human existence.

Here are the three stages, explained.

1. Artificial Narrow Intelligence (ANI)

Artificial narrow intelligence, or ANI, focuses on a single task, performing repetitive work within a defined scope.

Such a system typically trains itself on data drawn from various sources, including the internet, but only on data from one particular field.

For example, an AI chess program can beat the world chess champion, but it can do nothing else.

Smartphones are full of apps that use this technology, from GPS-based maps to music and video services that learn your tastes and recommend relevant content.

Even sophisticated systems such as driverless cars and ChatGPT are forms of narrow AI. They cannot act outside their prescribed roles, so they cannot make decisions for themselves. The sketch below makes that narrowness concrete.
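To illustrate just how narrow such a system is, here is a minimal, purely hypothetical sketch in Python of the single-task logic behind a music recommender. The songs, genres, and listening history are invented for illustration, and real recommenders are vastly more sophisticated; the point is only that the program learns one thing and can do one thing.

```python
from collections import Counter

# A toy "narrow AI": it learns exactly one thing (which genres a
# listener plays most often) and can do exactly one thing (rank songs
# by genre overlap). All data here is invented for illustration.

listening_history = ["rock", "rock", "jazz", "rock", "pop"]

catalogue = {
    "Song A": "rock",
    "Song B": "classical",
    "Song C": "jazz",
}

def recommend(history, songs):
    # "Training": count how often each genre appears in the history.
    genre_counts = Counter(history)
    # "Inference": rank candidate songs by how well their genre
    # matches what the listener already plays.
    return sorted(songs, key=lambda song: genre_counts[songs[song]], reverse=True)

print(recommend(listening_history, catalogue))
# ['Song A', 'Song C', 'Song B'] -- rock ranks first, because that is
# all this system knows. Ask it to play chess and it has no answer.
```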

However, some experts believe that systems programmed to learn automatically, such as ChatGPT or AutoGPT, may be stepping stones towards the next stage of development.

2. Artificial General Intelligence (AGI)

Artificial general intelligence will be reached when a machine can perform any intellectual task at a human level.

It is also called “Strong AI”.

A six-month pause

In March 2023, more than 1,000 technology experts called on "all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4", the latest version of the model behind ChatGPT.

"AI that competes with humans could pose a profound risk to society and humanity," said Apple co-founder Steve Ozniak, along with other tech advocates including Tesla and SpaceX owner Elon Musk.

Musk was one of OpenAI's co-founders, but he later resigned from its board after disagreements with the company's leadership.

In the letter, published by the nonprofit Future of Life Institute, the experts said that if companies did not quickly pause their projects, "governments should step in" and impose a moratorium.

Such a pause, they argued, would allow safety measures to be designed and implemented.

'As smart as it is stupid'

Carissa Véliz of Oxford University's Institute for Ethics in AI signed that letter.

But she considered the Center for AI Safety's subsequent statement, with its warning of human extinction, a step too far, and declined to sign it.

"The kind of AI we're building now is as sharp as it is stupid," she told the BBC's Andrew Webb.

"If anyone has used ChatGPT or other AIs they will know the limitations of such technology."

She is also concerned that AI could produce large amounts of false information.

"There is an election in the US in 2024 and other important networks, including Twitter, are removing their AI policy and security teams. I am very concerned about this."


The US government has also acknowledged the potential dangers.

"AI is one of the most powerful technologies of our time. But before we can fully exploit the opportunities it offers, we must mitigate its risks," the US President's Office said in a May 4 statement.

The US Congress summoned OpenAI CEO Sam Altman to testify about ChatGPT.

During the Senate hearing, Altman said it was "absolutely imperative" that the government regulate the "increasingly powerful" AI industry.

Carlos Ignacio Gutiérrez, a public policy researcher at the Future of Life Institute, told the BBC that one of the great challenges of AI is that "there is no body of experts to decide how to regulate it", along the lines of the Intergovernmental Panel on Climate Change (IPCC).

That challenge brings us to the third and final stage of AI.

3. Artificial Super Intelligence (ASI)

The theory of the third stage runs like this: once AI reaches the second stage (AGI), it will rapidly advance to artificial super intelligence (ASI), the point at which machine intelligence surpasses human intelligence.

Oxford University philosopher and AI expert Nick Bostrom describes ASI as an intelligence that "outperforms the human brain in practically every domain, including scientific creativity, general intelligence, and social skills."

"People have to study for years to become engineers, nurses or lawyers. AGI, by contrast, could improve itself continuously, in far less time than it takes us," explains Gutiérrez.

Science Fiction

The idea recalls the plot of the film 'The Terminator'.

In that story, the machines start a nuclear war with the aim of wiping out mankind.

Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that the disaster scenarios depicted in science fiction are unrealistic.

"Current AI is nowhere near capable enough for such risks to materialise," he said. "As a result, the debate has distracted attention from the harms AI may cause in the near term."

There is much debate about whether machines can ever achieve the kind of broad, human-level intelligence we possess, particularly when it comes to emotional intelligence.

This concern is especially high among those who strongly believe that we are close to achieving AGI.

In a recent interview with the BBC, Geoffrey Hinton, a leading figure in the field of artificial intelligence and known as the "Godfather of Artificial Intelligence", warned that we may be 'closer to that tipping point'.

Hinton is a pioneer of teaching machines to learn from experience, the approach known as deep learning.

"Right now, machines are not smarter than us, as far as I can tell. But I think they soon may be," said the 75-year-old, who recently retired from Google.

In a statement to The New York Times announcing his departure from Google, Hinton said he now regrets some of his work, because he fears "bad actors" will use AI for "bad things".

During an interview with the BBC, he described a "nightmare scenario".

"Supposing a malevolent individual like [Russian President Vladimir Putin] gives robots the ability to generate their own subsidiary goals, it could have unintended consequences."

He warned that the machines might eventually "create sub-goals like: I need to get more power", posing an "existential risk".

But Hinton said he thinks the benefits of AI outweigh the risks "in the short term."

"So I don't think we should stop developing this technology," he added.

Extinction or Immortality

The British physicist Stephen Hawking issued a stark warning of his own.

"The development of full artificial intelligence could mean the end of the human race," he said in an interview with the BBC in 2014, four years before his death.

A machine with that level of intelligence "would take off on its own, and redesign itself at an ever-increasing rate", he said.

Among AI's most enthusiastic futurists is inventor and author Ray Kurzweil.

An AI researcher at Google, he is a co-founder of Singularity University in Silicon Valley.

Nanobots and immortality

Kurzweil believes humans will use highly intelligent AI to overcome our biological limitations.

In 2015, he predicted that humans would achieve immortality by 2030, thanks to nanobots.

These tiny robots would work inside our bodies, repairing damage and curing disease on their own.

AI governance

Gutiérrez agrees that the key is to build a system of AI governance.

"Imagine a future where an agency has so much information about every person and their habits that it can control them in ways we don't even know about," he said.

"War between humans and robots is not the worst-case scenario. Rather, the worst-case scenario is that we don't feel in control."

"Because we are living in harmony with entities far more intelligent than ourselves."
