An Underlying Function

I’ve been reading a lot recently about models that fit a given task better after they’re extremely over-parameterized (e.g. GPT-3). The crazy part is that even these gigantic models are still tiny compared to the number of synapses firing in our brains every second. We’ve been trying so hard to re-create the learning abilities of our own brains (being at a stage of Life 2.0/3.0). It resurfaced an idea I’ve had: whether there exists an underlying function that can model everything. By everything, I mean everything.
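The over-parameterization point can be made concrete with a toy example: a linear model with far more parameters than training points can drive its training error to essentially zero (the minimum-norm least-squares solution interpolates the data). This is a hypothetical sketch with made-up random data, not anything from GPT-3 itself:

```python
import numpy as np

# Over-parameterized toy model: 20 features for only 5 training points.
# With more parameters than examples, least squares can fit the data exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))   # 5 examples, 20 features
y = rng.normal(size=5)

# lstsq returns the minimum-norm solution, which interpolates here
w, *_ = np.linalg.lstsq(X, y, rcond=None)
train_error = np.max(np.abs(X @ w - y))
print(train_error)  # essentially zero: the model memorizes the training set
```

The interesting (and still actively studied) part is that such interpolating models can also generalize well, which is what makes the scaling trend feel so uncanny.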

In the TV show Devs (on FX), there is the idea of building a machine that can see crystal-clearly into the past and the future. This could mean watching the lives of dinosaurs in the past or the eventual fate of human life in the future. One character in the show firmly believes there is no free will and that the machine can predict, down to the last detail, what will happen in a deterministic world. Another character instead implements a model based on Hugh Everett’s many-worlds interpretation, which says that all possible outcomes of quantum measurements are realized in some world or universe. Though none of this has been concretely proven, I find the many-worlds picture more realistic to believe in.

If we go by a deterministic interpretation of the universe (the pilot-wave theory the show leans on, say, rather than the probabilistic Copenhagen interpretation), then prediction with ML models loses its meaning: in principle we could literally build a machine that predicts the future, and every action we think we perform would in fact be pre-determined by an initial state and a function that determines what happens at each moment in time.
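That deterministic picture can be sketched in a few lines: given an initial state and a fixed update rule, the entire trajectory is already decided, and re-running the universe from the same state reproduces it exactly. The logistic map below is just a hypothetical stand-in for "the laws of physics":

```python
# Toy deterministic universe: an initial state plus a fixed transition
# function determines every future state.

def step(x):
    # one tick of the universe's update rule (an arbitrary, illustrative choice)
    return 3.9 * x * (1 - x)

def trajectory(initial_state, n_steps):
    states = [initial_state]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

# Two runs from the same initial state agree at every moment in time:
a = trajectory(0.42, 100)
b = trajectory(0.42, 100)
assert a == b
```

In this view, the Devs machine is just `trajectory` run with a perfect `step` function and a perfectly measured initial state; nothing is left to predict probabilistically.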

However, under the many-worlds interpretation it becomes an interesting question whether there is an underlying function that can probabilistically model everything in the world, rather than perform a specific task based on training data, which is intrinsically lossy since it doesn’t represent all that exists (it’s only an approximation of it). I’m not sure where the rest of this post is going, since I’m still thinking this through while writing, but it’s something cool I’ve been mulling over lately.
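One way to picture that kind of underlying function: instead of mapping a state to a single next state, it maps a state to a distribution over possible next states, and each sampled run is one branch of the world. Everything below (the states, the branching rule, the probabilities) is invented purely for illustration:

```python
import random

# Hypothetical "underlying function" in a many-worlds flavor: it returns a
# distribution over next states, not a single next state.

def next_state_distribution(state):
    # each (outcome, probability) pair is one branch the world could take
    return [(state + 1, 0.5), (state - 1, 0.5)]

def sample_branch(state, rng):
    # follow one branch at random, weighted by its probability
    outcomes, probs = zip(*next_state_distribution(state))
    return rng.choices(outcomes, weights=probs, k=1)[0]

rng = random.Random(0)
world = 0
history = [world]
for _ in range(10):
    world = sample_branch(world, rng)
    history.append(world)
# Any single run is one branch; the distribution itself is the deeper object.
print(history)
```

The contrast with a task-specific ML model is that this function is defined over whole states of the world rather than over a lossy sample of training examples, which is exactly what makes it feel more like a thought experiment than something learnable.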