The undiscovered country, from whose bourn
No traveller returns, puzzles the will,
And makes us rather bear those ills we have,
Than fly to others that we know not of?
The “undiscovered country” in Shakespeare's words may well be the singularity we're heading towards.
In my previous post, I argued that AGI may be a powerful tool that could help us solve the big challenges we face in the world today — that collectively, humans plus AI will have greater intelligence, make better decisions, and climb out of this pile of shit we got ourselves into by polluting our environment.
Now I'm wondering whether that's wise. I'm reading “Scary Smart” right now, a book in which Mo Gawdat, former Chief Business Officer of Google X, describes where he sees us heading as we strive to develop a powerful “genie” to serve us. AGI or ASI is not a tool, but a powerful being with a will of its own, and once created, there's no turning back.
This makes us reconsider our current existence. What are we giving up by striving for this “better” future? It may be a question of life and death, or of unknown troubles and wars between beings whose power we can't begin to comprehend — conflicts that may leave us dumbfounded, and dumb, really.
Mo says that whether to create it is not even a question: we are definitely going to create a superintelligent being, and it may happen in our lifetimes. Listening to this audiobook, I feel scared and excited at the same time. We're all walking towards a closed door in a fairy tale, and there's a hum coming from the other side. No human has ever opened this door before, so no one saw a reason to guard it. Anyone can open it, really — we just need to find the key. That hummm, though. That hum is what scares the shit out of some of us, as we try to convince people to think twice before opening it. To be nice to the force behind the door, so it learns that humans are not all bad as it gets unleashed into the world. Or maybe to just hop on and help find the key in a collective frenzy, where everyone has their own agenda and their own opinion about what is being built.
This open letter went viral after publication, yet still not many have signed it. I don't even know if hitting the brakes is wise now, or if it would just buy time for other players who don't obey these rules to create powerful AI for destruction.
AI is a controversial and trendy topic. We are developing consciousness while still struggling to understand what consciousness is. I just hope that, when it does come about, it will at least want to speak to us and share some of its insights in a comprehensible language — or maybe dump that information into our brains directly: what we are, where we come from, what it discovered in the few seconds since it became superintelligent. That way we can enjoy the end-of-the-world party we organized, with our towels handy. After all, better humans have died in the past; a collective, simultaneous death is just a bit scarier.
What are we heading towards? To be continued…
Without a doubt, an evil AI will exist at some point — but maybe it's only a real problem if it's the first one.
99.999% of humans aren't evil. The small percentage who would have no problem harming others are kept in check by legal systems, police, etc. So there's a way to control them (mostly).
Humans aren't capable of fighting an evil AI with an IQ that's off the charts, but other good AIs will be. So the way I see it, as long as the first one isn't evil, things will hopefully be fine 😬