I tried watching 2001: A Space Odyssey last night.
Yes, I know, I’m probably the only person left who hadn’t seen it until now.
What I don’t understand is the hype around the movie. I was told it was great. And maybe it was, in its day.
But to put it nicely, that movie hasn’t aged well.
What kept me watching was HAL, the artificially intelligent spaceship computer.
As you would imagine, HAL controls almost all of the ship's functions. The astronauts pretty much sit there and wait to get to Jupiter.
All is well, until HAL starts to malfunction.
Fretting that this might lead to complications, the astronauts talk about shutting HAL down. If HAL were an ordinary computer, this wouldn't be much of a problem.
But HAL is a general artificial intelligence (AI), which means HAL has the same level of thought as you and I. You could argue — and it definitely seems like it in the movie — that HAL has a consciousness.
Of course what happens next is the typical murderous robot scenario. To prevent being shut down, HAL goes on a mission to foil the astronauts’ plan.
While the movie was made in 1968, people are still wary today about the human-killing robots of tomorrow.
But it’s not the robots you should be scared of. It’s the people in control.
It’s the Cold War all over again
The Second World War never really ended in 1945.
Though the war officially ended, former allies the US and the Soviet Union remained at odds. Tensions were high. The US use of nuclear weapons to batter Japan into surrender only further convinced the Soviet Union that it needed such super-weapons itself. Both countries were soon stockpiling more nuclear warheads than they could possibly use.
George Orwell was the first to use the term 'cold war'. And for decades, the US and the Soviet Union continued to race each other in arms.
The same is happening today amongst the major powers of the world. We’ve entered a new Cold War. But this one has nothing to do with warheads.
Instead, countries like the US, Russia and China are stockpiling scientists in the race to build general AI.
The danger of creating god in our own image
In his 2016 TED Talk, neuroscientist Sam Harris said:
‘…what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a super intelligent AI?
‘This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario.
‘To be six months ahead of the competition is to be 500,000 years ahead, at a minimum. So it seems that even mere rumours of this kind of breakthrough could cause our species to go berserk.’
But that’s exactly why the world’s superpowers are quickly developing AI now, so they’re not forever left behind in this new intelligence race.
As reported by MIT Technology Review:
‘Academics, industry researchers, and government experts gathered in Beijing last November to discuss AI policy issues.
‘…Together with the Chinese government’s strategic plan for AI, it also suggests that China plans to play a role in setting technical standards for AI going forward.
‘Chinese companies would be required to adhere to these standards, and as the technology spreads globally, this could help China influence the technology’s course.’
This is extremely worrying to the US and Russia. MIT continues:
‘China’s booming AI industry and massive government investment in the technology have raised fears in the US and elsewhere that the nation will overtake international rivals in a fundamentally important technology.
‘In truth, it may be possible for both the US and the Chinese economies to benefit from AI. But there may be more rivalry when it comes to influencing the spread of the technology worldwide.’
But if you don’t believe super-intelligent AI is even possible, consider the following.
We will continue to make our machines smarter each year. We have global problems that need solving, and we’ll demand human-level intelligent machines to help find cures for cancer and meet future energy needs.
At some point in the future, we will build a system that is as smart as a human. Electronic circuits function about a million times faster than biochemical ones. So with speed alone, this system could perform 20,000 years’ worth of human-level intellectual work within the week.
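The back-of-the-envelope arithmetic behind that claim is easy to check. The million-times speedup figure comes from Harris’s talk; the rest is simple unit conversion:

```python
# Checking the '20,000 years in a week' claim.
# Assumption (from the talk): electronic circuits run roughly
# a million times faster than biochemical ones.
SPEEDUP = 1_000_000          # electronic vs biochemical speed
days_of_machine_work = 7     # one week of runtime

human_equivalent_days = days_of_machine_work * SPEEDUP
human_equivalent_years = human_equivalent_days / 365.25

print(round(human_equivalent_years))  # 19165 — roughly 20,000 years
```

So a week of machine thinking at that speed works out to just over 19,000 years of human-level effort, which is where the 'roughly 20,000 years' figure comes from.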
It’s completely possible AI machines will explore levels of intelligence we could never imagine.
The danger isn’t in warding off HAL 2.0. The danger lies with those who initially program such a machine.
They will likely program biases and conflicting morals into the system. Essentially the US, Russia or China will be creating a god. But it will be a god in their own image.
And as Harris explains, the ones to get there first will have a trump card over everyone else.
Who’s to say China, Russia or the US won’t start enforcing their rules on everyone else, backed by a super intelligent AI ready to decimate your society?
Your window of opportunity is closing
General AI may be coming, but it could be years away. In the meantime, narrower purpose AI is already in use in many industries.
While narrow AI is in vogue, it’s the promise of new technologies and new industries that keeps it at the forefront.
There is plenty of money to be made. For example, you could have made more than 1,000% on narrow AI stock Appen Ltd [ASX:APX] in a little over three years.
You could also have made almost 100% since 2016 on the ROBO exchange-traded fund, which tracks robotics and automation stocks.
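For context, those headline returns can be annualised with a quick compound-growth sketch. The holding periods used below (3.25 years for Appen, 2.5 years for the ROBO ETF) are rough assumptions based on the text, not exact dates:

```python
# Annualising a total percentage gain (compound annual growth rate).
def cagr(total_gain_pct: float, years: float) -> float:
    """CAGR implied by a total percentage gain over a holding period."""
    return ((1 + total_gain_pct / 100) ** (1 / years) - 1) * 100

# Appen: >1,000% total gain, assumed ~3.25-year holding period
print(round(cagr(1000, 3.25)))  # ~109% per year

# ROBO ETF: ~100% total gain, assumed ~2.5-year holding period
print(round(cagr(100, 2.5)))    # ~32% per year
```

Even the more modest ETF return compounds at roughly 32% a year under those assumptions, well above long-run market averages.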
As time rolls on, technology continues to increase its dominance over everything else. Just make sure you make money on this megatrend before super intelligent AI changes everything.
Editor, Money Morning
PS: Want to find stocks that could make the kinds of gains discussed above? Our small-cap guru, Sam Volkering, has found three stocks he believes could potentially run up 1,000% or more. Find out more here.