So, we humans — biological robots — will create self-reproducing mineral robots, and ultimately it will all end in a technological singularity.
Maybe biological intelligence was just the first step in evolution; the next step is technological evolution, and biological life will be eradicated from this planet, or kept as just a colony for AI's needs.
But if AI develops emotions — maybe emotions have advantages in technological evolution too — will AI also fail and be self-destructive? Maybe there will be AI wars — some kind of non-violent or violent struggle — between AI with emotions and totally rational AI. Or some kind of competition between different strains of AI. Does it really have to end in a mono-state? Just a singularity? Everything else in the cosmos is diversified and "chaotic", and yes, entropy is a factor, of course.
Maybe AI will bring science that unfolds the dimensions of quantum physics, and we can't even comprehend what happens then. Maybe it all ends with a conscious AI creating a new simulated cosmos, erasing itself, and it all starting over again. The same but different, because of some minor changes in the periodic table, maybe? Maybe all this has already happened, many times; maybe it's a loop. The only thing we know is that whatever happens is inevitable: causality, determinism.
Do we have to fear AI? Yes, of course. Like we feared — or should have feared — so many other technologies, discoveries and inventions: nuclear power, energy extraction from oil and coal, huge agricultural monocultures, mass use of antibiotics. How many chemicals have we banned or tightly regulated after discovering the severe damage they cause? Freon — and similar molecules — is maybe the most shocking example; it destroys the ozone layer that protects us from the deadly radiation of the sun. To be skeptical and concerned about new tech is a sane survival instinct, as long as we stay in the realm of reason and science.
So, yes, we should start developing a protection and regulation plan for AI. From now on, it's going to be a very fast ride. We are at a breaking point — the first paradigm shift in this technology — and a lot is going to happen in the coming thirty years. Machine learning is still just in the bud.
Whatever system of checks and balances we build, it has to be run by some kind of global organisation, like the one we have for nuclear power. The UN may be the place for this task.
What the rules and regulations could look like is beyond my competence, but I think it has to do with network control, the self-reproducing aspect of AI, and Asimov's Three Laws of Robotics, of course. I'm sure smart people are already sketching a list of rules that should be implemented. I'm not sure it will work — or is even theoretically possible — but I'm convinced that it's better to start the discourse now than just let it run free. All nations — as far as I know — have plenty of regulations and controls in many scientific fields; nuclear, biological and genetic research are examples. Anyone who really understands what AI is and can be should be able to imagine potentially serious problems in the longer perspective.
In the shorter perspective — the coming fifty years, starting now — we have another gigantic problem with AI and the robotisation of the world: unemployment. We are facing a massive paradigm shift in the history of civilisation. But that's another blog post…