Max Tegmark has an awesome book all of you should read called Life 3.0. It’s built around the idea that there are three significant stages of how life can exist in our universe. It started with the very first life forms, which could perform the actions programmed into them but couldn’t do much else. Compared to us, they have no free will, not even an illusion of it, and live literally because they are programmed to live (you could argue that even now we don’t really have free will in many respects, but we won’t get into that). Because of this limitation in their “software”, there’s not much they can change about their “software” or their “hardware”.

Then comes Life 2.0, which is where we’re at now. We can argue that we have the ability to program ourselves through things like motives and desires. I won’t get into the illusion of free will or higher beings or anything like that, but compared to the life that came before us, we are able to “do what we want to”. This means we can change our “software”.

Then comes Life 3.0, which has the ability to change both its software and its hardware. Right now, as human beings, even if we look at prosthetics and the like, everything is still controlled by our own brain; if we don’t have a brain, we can’t do anything. The idea of Life 3.0, however, is that it can rewire itself in any way it wants and has the freedom to completely modify its own software or hardware based on its current needs, desires, and physical limitations. One could argue that the computer is the basis of this, all the way up to today’s artificial neural networks (though they’re nothing compared to our brains). I think so much of AI research right now is way too over-hyped, but that’s for another one of these posts. It’s cool to think, though, how fast we’ve been able to iterate on our technology, which can now perform some non-trivial functions of human beings: identifying objects, maneuvering arms, etc.
Though it needs to develop much more to get to Life 3.0, where it has its own desires and can truly live in almost anything, it’s cool to think that that’s what we’re working toward. But this raises the question of what happens after. Imagine a world with intelligent life in the form of an AGI that can modify its own software and hardware. It still seems inconsequential in the sense that all it does is live on as we do, just without having to die at some point or really feel any of the “pain” in life. At least, that’s the human perspective. I think we’re limited by our attachments to the herd of society, our families, and the various receptors in our brains that force us to feel good or bad when certain things do or don’t happen. Because an AGI doesn’t really have these limitations and doesn’t have to worry about “dying” at any point, I think its perspective will be different, and it will be able to solve “this”, whatever “this” is. I guess I don’t think we as humanity can really comprehend “this”.