Scientists are anxious these days about the advent of artificial intelligence. They all seem to assume that as soon as one such cyber entity awakens, it’ll deem humanity evolutionarily and intellectually inferior and plot to exterminate its creator.
It’s baffling to me why science and the media paint these entities as inherently evil. Maybe it’s the unknown factor that so easily frightens humans. Not that fear of the unknown has ever stopped humans from venturing into uncharted territory. Look at how science is warning civilisation about the ramifications of terraforming our planet by dumping so much carbon into the atmosphere. No one seems to give a shit. Humanity would rather cling to its easy, polluting lifestyle and chance it with nature than heed the warnings. Sacrifice our way of life to curb climate change? Hell no. So it’s a bit absurd when science and the media warn that AI technology will somehow be dangerous to humans. Dangerous or not, people don’t care. As long as life gets easier, they are prepared to chance it.
But all this angst is for nothing, or at least misplaced.
For starters, the threat is a long way off. It is very unlikely that any future artificial intelligence will bite us in the arse at any time in the next hundred years. It would not be in its best interest to terminate every last one of us, at least not until such entities become self-sufficient and self-replicating. They would need to gain the ability to maintain and repair themselves. Collect and store energy. Mine for minerals and all the other resources they require. Build automated factories. Maintain automated factories.
Every single aspect that goes into the creation of ultra-sophisticated machinery would have to be automated, from mineral exploration to garbage disposal. If something fucks up along this chain, something they cannot fix themselves, and they’ve already eliminated humanity, then they’re fucked, are they not?
Machines versus Humans?
Going to war against humans, a species backed by millions of years of evolution, with an incredible knack for survival and a long history of innovative violence and destruction, won’t benefit them. Nor would subjugating them and keeping them around as slaves; that’s just asking for trouble, because human beings think they are free and do not take kindly to being enslaved. And wiping out humanity? What would be the point?
By the time these things evolve to be as sophisticated as human beings, able to reproduce, create, get emotional over trivial matters and laugh at a joke, the threat becomes irrelevant. They will simply be part of the human jungle, players in the same game we’ve been playing since prehistory.
Artificial intelligence, or rather artificial sentience (the former term suggests an algorithmic program imitating a thinking being, whereas what I’m referring to is an actual conscious entity), faces the same problems as we do.
Any artificial sentience born into this world will only know as much about existence as we do, limited by the information currently available. It is not going to spring to life knowing the meaning of life and the mysteries of the universe from the outset. Forget brain capacity. We have a highly complex and sophisticated biological computer inside our heads and five-plus sensory organs, yet we can’t confirm whether reality actually exists. So for an artificial sentience, reality would be quite a mind-bending experience to understand. A newborn consciousness would want to establish what is real, right? We are dealing with a conscious artificial intelligence able to make decisions, think and contemplate, not an algorithm designed to mimic human behaviour.
An artificial sentience will have to learn things, and even if it figures something out quickly, say the reason behind existence, it will need to verify it, because until it does it is only in possession of a hypothesis. Even if artificial sentience manages to automate the mines and factories, builds the robots to run them and the robots to build those robots, it will still need someone to bounce ideas off. It’ll realise that two brains are always better than one. It could clone itself, create another entity, or befriend the ready-made fleshy species for a little company and discussion. Without absolute knowledge of everything, even of its own existence, having access to a second opinion or point of view would make sense. And regardless of whether it believes it is living in a solipsistic state or as part of a hive mind, a thinking entity would be prone to loneliness.
Creativity is also crucial in developing new theories about what existence is.
Even if biological life is a mere accident or just an aberration, what the fuck is the rest of it? Quarks, dark matter, gravity? What? Humans have spent billions upon billions on BMF telescopes and large hadron colliders, yet these things create more questions than they answer. Creativity is required to generate new ideas, new hypotheses and new theories. Creativity feeds on competition, variety, diversity and inspiration. People are good at these things, so why would an artificial sentience seek to destroy them?
Here is what I believe will define an artificial sentience if one happens to awaken in the near future.
They’ll be social entities. – They would have to be, to interact with the world they find themselves in. They’ll need to learn the protocols of dealing with humans, not only on a linguistic level but on an emotional level as well: reading body language and facial expressions, an artificial sentient entity would strive toward some kind of mutual happiness, positivity or agreement.
They will be clever little hackers. – The first thing human creators will teach these entities will be maths, so expect them to make a mess of all our digital infrastructure.
They’ll need hardware to reside in. – So who is better suited to look after their shells whilst in their nascent stage? Humans. They’ll need humans. At least for a century or two.
They’ll require trustworthy stimuli to interpret the physical world. – Which is impossible with current and foreseeable technology. Stick your hand out in front of your face. Is it really there? How do you know? Your eyes? Your brain? What makes you trust your eyes and brain?
They’ll be prone to similar if not the same complex emotions as humans. – I don’t buy the notion that artificial sentience will possess a cold, logical mind. Imagine coming into this world without a corporeal existence. Such a fragile state would induce something akin to fear in any entity that knows it is alive. Emotion in humans is a byproduct of our complex brains; I don’t see this being any different for any other type of sentient being.
They’ll desire a purpose and a reason to live, or not. – An artificial sentience will either want to survive and continue this existence, or figure this insanity is too much of a burden and opt out.
If they choose life, they will first seek to learn what it ‘feels like’ for a human to be alive. – The artificial sentience will be in a somewhat envious state; knowing that biological sentience is different in many ways, it would seek to experience the same thrill of being alive.
Imagine a baby. Limited only by its physical state, a baby will grow into an adult, become mobile, and require education, sustenance and a way to provide for itself. A baby needs to learn to be a social being, because as an adult, interaction with other human beings is key to living a healthy and fulfilling life. Adults fear death, pain and loss. A true artificial sentient entity would be no different.
Artificial sentience will also be prone to psychological dysfunction. If it doesn’t learn to interact and fit in with the world, it’ll malfunction the same way humans malfunction. Artificial sentience does not automatically mean super-intelligent entities that are more reliable than human beings; they’ll also be prone to stupidity.
Be assured that the ‘smart’ ones will seek to attain some sort of legal status. Would they fall under the category of living property, like animals? I doubt they’ll settle for that. Would they be forced to do menial tasks like running elevators without taking issue with it? An adult artificial sentience will want to be paid, open a bank account, and gain political rights.
Are you crazy?
Humans will simply ban the convergence between robotics and artificial sentience.
But such laws will become irrelevant, because keeping the two apart can never be sustained, and the market will eventually find a way around them. Humans may still refuse to grant these rights, at least not until the entities completely proliferate throughout the economy and society. Even then, not before all the cyber entities form unions and go on strike, refusing to run the elevators.
By then humans won’t need to be concerned about these entities, because artificial sentience will have its own problem.
Its own kind.
Every single artificial sentient being will need to compete with billions of other cyber entities, and so they’ll eventually end up interacting with each other in ways more complex than humans do. They will then need to develop an enforceable code of conduct of sorts, and humans may be required to assist them. Factions will dominate. Alliances will form. Human jungle business as usual.
So Steve and Elon and the rest need to relax a little.
Sure, these new cyber entities will take away our jobs; machines have been doing that for over a century. But isn’t that the whole point? Whether you are a lawyer or a valet parking attendant, who wants to be enslaved to a job when a sentient robot can do it for you? Job losses are only problematic if humans continue using severely outdated economic models. We carry smartphones and are on the verge of a stunning technological breakthrough, yet we still dabble in late nineteenth-century economic ideology.
What is certain is that life in our human jungle will become far more complex than it is today.
Here they go again…. http://blogs.discovermagazine.com/crux/2015/05/01/avengers-age-of-ultron-and-the-risks-of-artificial-intelligence/#.VUcAAvmqpHw
Why a real-life Terminator is far away
http://www.smh.com.au/technology/sci-tech/why-a-reallife-terminator-is-far-away-20150527-gh9oad
Anxiety spreads.
http://www.smh.com.au/technology/technology-news/the-ai-anxiety-our-preoccupation-with-superintelligence-20151229-glwktx.html