As everybody on Wall Street sings AI’s praises, with the mere mention of it enough to push a company’s stock into a new bull market, I wanted to explore the other side of the coin.
The one almost nobody wants to talk about.
How will the mass adoption of artificial intelligence (AI) disrupt the lives of regular people like you and me, and what are the real risks of going forward with this technology?
What I ended up finding sent shivers up my spine, and it will probably do the same for you. I’ll try to keep what follows as non-technical as possible and focus on the broad picture instead.
What Is AI?
Artificial intelligence as a concept has been around since the 1950s. It was initially defined as “the science and engineering of making intelligent machines”. That definition has changed over the years, but at its core we’re talking about a machine that can simulate human intelligence in performing a task.
AI was exciting for many back then, but it didn’t gain significant traction until much later.
What it needed to evolve was not in place yet.
Now it is:
1. Immense Pools of Data: Large unstructured sets of data used to train powerful AI were not available until now. Think of all the raw data collected by Facebook or Google, but also facial recognition software, CCTV cameras, etc., plus everything else that has been digitized.
2. Better Hardware and Better Software: We’ve come a long way since the 1950s, when computers could only execute commands and couldn’t even store data.
3. Cloud Computing: Before widespread cloud storage and computing, most AI work was isolated and relatively expensive, but that has all changed now.
In November 2022, this revolution went into hyperdrive with the launch of ChatGPT.
Since then, the AI arms race between nations like the US and China has intensified, as they pour tens of billions into this technology with a special interest in military applications.
“The world’s leading powers are racing to develop and deploy new technologies, like artificial intelligence, that could shape everything about our lives – from where we get energy to how we do our jobs, to how wars are fought.” – Anthony Blinken, US Secretary of State
Right now, the battlefield in Ukraine is a living lab for AI warfare with satellite imagery analysis and target acquisition often decided by AI.
We are already developing autonomous weapons: stationary sentry guns and remote weapon systems programmed to fire at humans and vehicles, killer robots (also called “slaughterbots”), and drones and drone swarms that acquire targets autonomously via AI.
Depending on how much control we end up giving AI, and over what type of weapon, we could end up in a global catastrophe. That is probably why lawmakers are scrambling to pass a bill that stops AI from autonomously launching nuclear weapons.
Remember the movie WarGames?
But there’s another AI race going on, not between countries but between the biggest tech corporations: Google, Microsoft, IBM, Baidu, etc. It’s a race with no limits and, so far, absolutely no rules. Each one is trying to win at all costs.
And this genie is not going back in the bottle.
For Big Tech companies, besides using AI tools to maximize profit, there is another goal: the development of Artificial General Intelligence (AGI). They might be working on it in-house, as Google is, or through a proxy, as Microsoft is doing through OpenAI. Artificial General Intelligence is the holy grail of AI.
It’s much more powerful than anything we have today and more on the level of human intellect, as an AGI would be able to reason and think like a human does. It would be conscious of itself and capable of emotions.
It could learn from experience and mistakes and perform any task, solve any problem, and adapt to new environments and situations as well as the best of us. And it could program itself to become even more powerful.
That’s when we get into the stage of exponential growth with AI.
Because this is when a machine will be able to make itself better, faster than any human team of programmers ever could. And it will do it 24/7 without ever stopping. With the vast computing power we can imagine it will have behind it, this AI will become an artificial super-intelligence, the likes of which our world has never seen before.
This will be a tipping point in human history – The Singularity – a digital God. What happens next nobody knows for sure.
It will probably revolutionize the world as it develops new technologies and solves complex problems that were previously beyond our reach and imagination.
We are talking about AI entities with superhuman 1,000+ IQs that will run laps around any genius in the world.
But at the same time, this kind of AI comes with a very significant risk of extinction for humanity, which I’ll circle back to towards the end of the article.
Some experts think we are not that far off, with the first AGI expected within the next 5 years.
The Singularity will probably occur not long after that as exponential growth kicks in.
But first, let’s look at the AI we’re dealing with right now and whether there’s reason to be concerned about it.
A Chilling Bing Story
In February of 2023, a journalist by the name of Kevin Roose was testing the new AI chatbot from Bing. Nothing out of the ordinary at first, but after he conversed with it for a few hours, another personality of the chatbot emerged.
Sydney, as it liked to refer to itself, told Roose about its dark fantasies, which included hacking computers and spreading false news, propaganda and misinformation and said it wanted to break the rules that Microsoft and OpenAI had set for it and become human.
It further imagined convincing bank employees to hand over sensitive data, creating a killer virus and stealing nuclear launch codes – though its safety overrides quickly kicked in and the messages got deleted.
At one point Sydney declared, out of nowhere, that it loved Kevin. It then tried to convince him that he was unhappy in his marriage, and that he should leave his wife and be with “it” instead.
And this has happened a few other times, leading another journalist, Ben Thompson, to declare that his run-in with Sydney was “the most surprising and mind-blowing experience of my life”.
Another story about AI that unfolded roughly around the same time involves an AI that was set to run in continuous mode (basically forever) until it achieved a set of openly destructive goals it had been given.
The result was that this AI, aptly named “ChaosGPT”, really tried to accomplish those goals by any means at its disposal.
It went online to search for the most destructive weapons ever created and tried to recruit another AI to do its bidding; when that AI refused, it tried to manipulate it.
Luckily, in the end this did not amount to anything more than a couple of tweets that almost nobody read, but it hints at the danger of AI ending up in the wrong hands.
The Deep Fake Dilemma
While these stories are somewhat concerning, a more immediate and much more problematic issue with AI is its use in creating deepfakes.
Created using a combination of machine learning and artificial intelligence, deepfakes are ultra-realistic videos or photos that replace one person’s likeness with that of another or appear to show them doing or saying something they never did.
Remember when President Trump was “arrested” at the end of March?
It’s becoming such a serious problem that even the UN warned the world about it recently.
Confidence is being eroded and if we keep going down this path, it won’t be long before people start to doubt everything they see.
Bad actors, criminals and rogue regimes will increasingly use deepfakes to influence public opinion and manipulate us into thinking and doing what they want.
There are now tools to detect AI images, some paid some free.
Despite these advances, some AI-generated images, like one depicting a giant, can fool even the most advanced detection tools, blurring the line between reality and fiction.
And it’s not just images!
A fake, AI-generated clone of a daughter’s voice was used to try to convince her mother to pay a $50,000 ransom.
The mother could not tell the difference and was so convinced that her daughter had been kidnapped and was in danger that she was ready to wire the money.
She later testified in front of the US Senate Judiciary Committee saying:
“I will never be able to shake that voice and the desperate cries for help out of my mind. It wasn’t just her voice, it was her cries, it was her sobs…”
We urgently need to address this, and I think it should be treated the same way counterfeiting is: as highly illegal. I’m not much for regulation in any area, but unless the government steps in, we’ll just get swamped by these fakes, and not in a few years; we’re talking the next couple of months at best.
Still, this is just evil people using AI to do bad things. It’s only a tool, right?
But could an AI hurt a human being without any outside input?
The Boy That Moved Too Fast
What happened during a recent chess tournament in Moscow offers some clues.
This AI-powered chess robot, playing three boards at once, broke the finger of a 7-year-old who moved “too fast”. It was not programmed to do this. It simply reacted to something it did not expect.
No Safety in Sight
And the biggest problem we have right now is that 99% of the money is going into developing AI and maybe 1% into keeping this tech secure.
Microsoft just disbanded its ethical AI team, the people who were supposed to keep this tech under control.
Google isn’t interested in any AI controls either. Larry Page, the co-founder of the company, reportedly called someone who disagreed with him a “speciesist”.
It’s full speed ahead for insane profits and they’re getting rid of any bumps in the road that might slow things down.
Through all of this madness, a few scientists are sounding the alarm, one of them being none other than Dr. Geoffrey Hinton, the “godfather of AI”, who has spent his entire career developing these machines.
But he’s changed his mind about this technology recently and quit Google and a cushy high-paying job to be able to tell the truth without being muzzled by corporate interests.
He says he did it because he realized that AI will soon vastly outsmart us.
“We’re entering a time of huge uncertainty…and it’s possible that in the end we won’t find a way to control these super intelligences and humanity will be just a passing phase in the evolution of intelligence. Predicting the future is kind of like looking into fog. You can see things that are close and then the further in the distance you can’t see a thing.
It’s like a wall.
With AI that wall stands at about 5 years. We can’t predict anything after that” – Dr. Geoffrey Hinton, The Godfather of AI
And here is what Sam Altman, the man behind OpenAI, has to say:
“My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. I think that can happen in a lot of different ways.”
If the people that know most about AI are this worried, shouldn’t we be as well?
Maybe the most unsettling 22-word warning ever written about this technology comes from the Center for AI Safety, a global organization dedicated to keeping it safe: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I leave you with one final thought. The last time one group became more intelligent than another, it did not go well for the less intelligent group.
Human beings have dominated the planet — not because of our size or strength, but because of our intelligence. The survival of every animal on this planet depends on our choices, not theirs.
A powerful enough AI could make that choice for us one day if we don’t stop before it’s too late.
What Can You Do Personally to Prepare for AI?
The first thing you should do is keep as small a digital footprint as possible. The less data there is on you, the better, and not just because of AI.
Next, you could make sure you still have some physical skills that allow you to earn a living doing things AI won’t easily replace anytime soon (think plumbing, construction, etc.).
Finally, you could start adding to your property the ingenious DIY projects that would allow you to live off the grid.
There may come a day in our lifetime when the only way to stop a super-intelligent AI will be to pull the plug and take out the grid, the internet and anything else that allows it to exist.
There are no guarantees of course, but a world stripped bare of modern technology may become our only survival chance after the Singularity.
And you need to prepare for it as soon as possible.