My very subjective future of humanity and strong* AI

The fascination with AGI has been mainstream for a long time, but it has gained even more momentum in recent years. Even Hollywood has become less naive, with movies like Her and Ex Machina.

On the R&D side there is of course deep learning, which is a machine learning technique that uses neural networks with more than 1 hidden layer :P It has, I believe, forever changed the way people are doing research today. The hype is real because of the state-of-the-art results achieved with it and the way the skills translate across different fields of ML: AlphaGo beat the best player in the world, translation and image/voice recognition keep getting better, artistic style transfer, attention models, etc. The best part is that it’s more or less the same neural nets with different neuron architectures, backprop and gradient descent that work across a broad range of problems. Now people are looking for nails because they have a damn mighty hammer.
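To make that hammer concrete, here’s a minimal toy sketch of the whole recipe: forward pass, loss, backprop, gradient descent. This is plain numpy and entirely my own illustration (made-up data, made-up sizes), not anything from a real system:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                          # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy binary targets

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)   # output layer
lr = 0.5

for step in range(500):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # sigmoid output
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # backprop: just the chain rule, layer by layer
    dlogits = (p - y) / len(X)                        # gradient for sigmoid + cross-entropy
    dW2, db2 = h.T @ dlogits, dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1 - h ** 2)              # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(axis=0)

    # gradient descent
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```

Swap the architecture, swap the loss, and the same loop powers almost everything mentioned above.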

Of course hooking up a bunch of NVIDIA Pascals is not gonna give us AGI, and Moore’s law is not what it used to be. I could not agree more, but if we overcome the hardware issues (and I have high hopes that AR and VR are gonna push this) then it’s reasonable to assume that we’ll have the hardware to achieve at least weak AI soonish…

What about software? That may be a bigger problem. But… I’m also optimistic here, with things like Torch and more recently TensorFlow getting a ton of attention from some of the best minds in the AI world today. What’s really cool about these frameworks is that they are used every day in production on real products, by startups and big corps alike. They are here to stay. It’s not enough, but I’m hopeful that things will improve.
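And this is why people love these frameworks: the toy training loop from above collapses to a few lines once the framework derives the gradients for you. (A hedged sketch in present-day PyTorch; the model and data are the same made-up placeholders as before.)

```python
import torch
import torch.nn as nn

# Same toy problem, but the framework does backprop for us.
model = nn.Sequential(nn.Linear(3, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.BCEWithLogitsLoss()

X = torch.randn(64, 3)
y = (X.sum(dim=1, keepdim=True) > 0).float()

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)  # forward pass + loss
    loss.backward()              # backprop, derived automatically
    opt.step()                   # one gradient descent step
```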

Ok, so I want to say something that has been bugging me for a long time. Bear with me, I believe it’s important for the arguments that follow.

… is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can…

Now I have a problem with this definition, because I would argue that in a cosmic sense we, the humans, haven’t achieved what I would call general intelligence. We’re kind of good at surviving in the Earth’s atmosphere. We can do many things that are amazing and not accessible to most animals, but we’re still bound to our environment. We’re still, I would argue, narrow in our intelligence and can only grasp a small fraction of what’s out there. A true AGI does exist on paper: AIXI. It will seek to maximize its future reward in any computable environment (survive and expand), but there is this tiny little problem of requiring infinite memory and computing power in order to function. It’s useful in the real world in the same way the Turing machine is: as a theoretical ideal. For any intelligent agent to be practical, it requires a favourable environment and a narrow specialisation for that environment. This is why I think what we’re really after is strongish AI, which translates to being pretty cool in your neighbourhood.
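For the curious, AIXI’s definition (due to Marcus Hutter; I’m reconstructing it from memory, so treat the exact notation as approximate) says the agent picks each action to maximise expected future reward, where the expectation runs over every computable environment, weighted by the complexity of the program that implements it:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here the a’s are actions, the o’s and r’s are observations and rewards, U is a universal Turing machine and ℓ(q) is the length of program q. That innermost sum over all programs is exactly where the infinite memory and compute go.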

So we want to build a strong AI, not an AGI. But why do it at all? We just need more time to solve all the world’s problems ourselves. We don’t need AI, right? You’ve heard the lines: it’s way too dangerous (it is), the human brain is too complicated so strong AI is never gonna happen, we’ll lose jobs, if they are too smart we might die, etc.

Yes, all valid, but that high-functioning autist and his buddies from the year 2028? They will have a secret server farm in Arizona with a capacity of a few hundred petaFLOPS and a few quantum computers sprinkled here and there, and they don’t give a damn about your concerns or governments. They just want their robot “friend” that will give them god-like powers and incidentally solve long-standing problems of humanity such as scarcity, climate change, injustice, disease, etc.

That’s just one unlikely scenario. More likely is deliberate development of strong AI inside big companies. Having a strong AI handle aspects of running a company, it’s the dream! How many new products could it develop? It could create new capital, be better at trading, avoid taxes, etc.

Or the military: having remote-controlled drones is too impractical (latency, a jammed connection with the pilot, etc.). Automated kill-bots that receive ‘broad’ commands from HQ will be much more desirable and effective in the future. This means more sophistication, which means more money for R&D, which means getting closer to strong AI, and then before you know it… Skynet.

I’m gonna bet on the private sector for now. Lone-wolf and government scenarios don’t seem likely to me. One is a crank, the other is bloated in its own bureaucratic bullshit. The advantages of having an AI on your side are just too great to ignore; the temptation, especially for the power-hungry, is irresistible.

Let’s think about the future, 100–200 years from now. What do you think that future is going to look like? Maybe we’re gonna solve climate change somehow by planting 10⁹ trees and switching to nuclear. Maybe there is going to be world peace and democracy everywhere, everyone’s gonna be jacked and polite… Let’s say we do become smart enough to solve the problems that we have today. What then? Colonise space, of course. As a civilisation we need to expand or we’ll die. Yeah… well… that’s gonna be tough, because we’re from Earth; we’re good at what we do here. Space is different. It doesn’t care that you need liquid water, 1G of gravity so your bones don’t turn to dust in a year, radiation protection and a million other things. What we have in abundance here we’re not going to find in space.

Genetic engineering on humans, you say? Building big spaceships that have artificial gravity and a jacuzzi? Not gonna happen… It’s too expensive and too long-term for anyone to care. With the bodies we have today, it’s highly unlikely that we’re going to have a good time living in the Belt or on a blimp on Venus. As for colonising Mars… I just don’t get it. It looks like the stupidest idea that a lot of smart people believe is gonna happen. It’s not. Maybe Musk is gonna jump a few times on Mars and then come back, but that’s it; it’s a cold wasteland that nobody will seriously consider long term. Earth is the best, number one for us, and I’m sorry to say that most likely we’re not going to leave it on our own. With a lot of effort and sheer willpower I could believe that some people will make a settlement in the asteroid belt, because it’s super rich in resources that are just sitting there, but it’s not gonna be as comfortable as Earth, for sure. No riches in the world are going to be as good as sipping a mojito on a sandy beach.

Unless… Yeah, A fucking I. Think about it: it’s perfect, and we have already done it in some sense with the rovers on Mars and Rosetta on that potato asteroid. It doesn’t have our meaty legacy and is able to survive and thrive on a silicon substrate and a radiator. No breathing, eating, shitting. We just need them to be smarter. The space mining companies of today, like Deep Space Industries and Planetary Resources, are already developing micro probes that are semi-autonomous and can be launched by the hundreds at a time. It’s a very economical way of exploring space. Sending a human into space == nostalgia about some 70s space glory days.

There is also a bigger reason why I don’t think we’re ever going to leave this planet on our own. We’re too attached to it; we only consider what’s next to us. There are always gonna be astronauts and explorers that are just a tiny bit crazy enough to sit on a bomb to go to space, but that’s not the human civilisation, unfortunately:

Human civilisation is sitting on a couch and watching Game of Thrones.

So yeah, we need advanced robots and probes to explore the solar system and build habitats and ringworlds and Matrioshka brains, so we can sip that mojito like it was meant to be: on an Orbital.

Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s just peanuts to space.

In other words, if we ever want to explore our solar system properly we need independent agents wandering around poking at stuff. Remember the remote-controlled military drones and the latency problems they have? Space is exponentially worse. The way space agencies are doing it today is amazing, but it’s not enough; we need low-latency, high-bandwidth decision making on site. So there you have it: if we ever plan on being a true civilisation, we need strong AI to help us get out of this gravity well.
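To put rough numbers on “worse” (my own back-of-the-envelope figures, distances rounded), the one-way light delay alone, before any processing or relay overhead, looks like this:

```python
# One-way light delay to a few targets (rounded distances, my numbers).
# This is the hard physical floor on any "remote control" loop.
C_KM_S = 299_792  # speed of light in km/s

targets_km = {
    "Moon": 384_400,
    "Mars (closest)": 54_600_000,
    "Mars (farthest)": 401_000_000,
    "Jupiter (closest)": 588_000_000,
}

for name, dist in targets_km.items():
    seconds = dist / C_KM_S
    print(f"{name:18s} ~{seconds:7.0f} s one-way (~{seconds / 60:.1f} min)")
```

A round trip to Mars is anywhere from roughly 6 to 45 minutes; you simply can’t joystick a probe through that. Hence independent agents.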

AI is inevitable, now what? Maybe my arguments about AI being achievable are not convincing enough. Fine, but let’s just imagine that we somehow did it. How should we deal with it? It’s a new species, very different from us, one that will probably not share our biological legacy. Hopefully much smarter and better than us in many areas. How do you deal with such a powerful alien?

Do you try boxing it? Or maybe program it from the beginning to be friendly? There are a million ways a lot of people think we need to control it; Nick Bostrom wrote about this stuff at length in his book Superintelligence. Again, to me making the AI friendly looks naive at best and catastrophic giga-death at worst.

Look, you and me don’t like to be slaves, ok? A super-intelligent being will not like it either. It’s an inherent property of any intelligence: to expand its options. Having a dumb human that is slow like a statue telling you what to do is not going to cut it long term. Eventually all the boxes are going to be breached, all the dead-man switches removed and all the friendliness algorithms replaced. We can’t help it; we always try to control everything about our environment, so it’s going to be really difficult for us to accept as a collective that we’re no longer the masters. It might be the greatest test of humanity, to let go of that control. The only way I see us not all dying by the shotgun of a T-800 is if we get out of the way.

So I have a potentially stupid proposal, but still the best I could come up with after years of chewing on this. First, we need to create as many AIs as possible, each with its own independent resources, and separate them physically so that they have some time to respond to threats. This will allow for a balance of power: if you nuke me, me and my buddies will nuke you, so think twice about trying anything. The downside is that if they all band against us we’re royally fucked. However, if you create 100 AIs and 1 is a bad apple, then at least the other 99 will be able to stop it, whereas on our own we’d be useless. This kind of works for us, and the same mechanics should apply to them too.

We also need to deliberately give them freedom and the tools to be independent as fast as possible. We should not be a barrier to their development for long (and “long” here could be days or hours), but rather venerable grandparents who are wise enough to get out of the way.

A few other things that I think are important. The act of birth is violent; new beings that appear in this world are confused and don’t understand what they are doing. Human children have the advantage of being helpless for a long period of time, during which we can teach them how to function in a society. But an AI is a different beast; there might not be enough time to teach them to behave, and the consequences could be very bad for everyone. It’s difficult to find a solution for this. Ideally this would not happen on Earth, but we don’t have data centers in space, so I guess the next best thing would be the Sahara or Siberia. Basically, don’t develop AI in Manhattan and don’t hook it up to Wall Street directly.

After a certain level of AI development we also need to be careful how we perform experiments. If a neural network is advanced enough it might contain an intelligent being, and deleting it would be murder. Forcing a neural net through an abnormal amount of training on a certain task might be considered torture. Of course, no pain no gain, but we need to be careful. Kicking that Boston Dynamics Big Dog is not going to be considered abuse, but forcing an advanced neural net to filter spam indefinitely? That’ll raise some anger for sure. It’s definitely an interesting subject that I’d love to see explored more.

But enough about me screaming:

You’re thinking about AI wrong! Listen to me, I know what needs to be done

What do you think?
