
Universality, Progress, and Happiness

Juan Vera

April 2025

Abstract

These are a core set of personal insights and additions derived from a varying set of sources, a key one being David Deutsch's worldview. I've had these in my mind for about a year, and they have been really useful to me, but for once, I got them out in writing. I find that most of these ideas are transferable to nearly anything.
These are simple ideas expressed in a non-simple way, so for a TLDR: happiness (fulfillment) is derived from the state of continually solving meaningful problems.

Written on 4 hours of sleep, so bear with me.

Universality

Anything that exists can be computed, because everything that exists is a physical process. Otherwise, the thing wouldn't exist.

If you consider the Church–Turing–Deutsch principle (built upon the Church–Turing thesis), it simply states that a universal (quantum or classical) computer can simulate every physical process.

A quantum computer can compute anything a classical computer can, given that Newtonian physics can be described by quantum mechanics.

Given any complex physical phenomenon, take consciousness or black holes, it can be reduced to a set of fundamental, describable laws of the universe, \ell.

Now, consider the definition of a Turing Machine,

M = (Q, \Sigma, \Gamma, \delta, q_0, q_{\text{accept}}, q_{\text{reject}})

where:

  • Q is a finite set of states.
  • \Sigma is the finite input alphabet, not containing the blank symbol \sqcup.
  • \Gamma is the finite tape alphabet, where \Sigma \subseteq \Gamma and \sqcup \in \Gamma.
  • \delta \colon Q \times \Gamma \to Q \times \Gamma \times \{L, R\} is the transition function.
    • It tells the machine, given a current state and tape symbol, what state to move to, what symbol to write, and whether to move left or right.
  • q_0 \in Q is the start state.
  • q_{\text{accept}} \in Q is the accept state.
  • q_{\text{reject}} \in Q is the reject state, and q_{\text{reject}} \ne q_{\text{accept}}.

The machine begins in the start state q_0, reads the symbol \sigma \in \Gamma under the tape head, and based on the current state q \in Q and \sigma, the transition function \delta(q, \sigma) determines the next state q', the symbol to write on the tape, and the direction to move the head, left or right.

This process repeats until the machine enters a state q_n = q_{\text{accept}} or q_n = q_{\text{reject}}, at which point the Turing machine halts its computation.
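That read–write–move loop can be sketched in a few lines of Python. The simulator below follows the definition of M above; the bit-flipping machine and its \delta table are my own toy example, not anything from the text:

```python
def run_turing_machine(delta, q0, q_accept, q_reject, tape, blank="_"):
    """Repeatedly apply delta(q, sigma) -> (q', sigma', L/R) until a halting state."""
    tape = list(tape)
    head, q = 0, q0
    while q not in (q_accept, q_reject):
        if head == len(tape):          # extend the tape with blanks on demand
            tape.append(blank)
        sigma = tape[head]
        q, tape[head], move = delta[(q, sigma)]
        head += 1 if move == "R" else -1
        head = max(head, 0)            # convention: the head stays put at the left edge
    return q == q_accept, "".join(tape).rstrip(blank)

# delta for a toy machine that inverts every bit, then accepts at the first blank.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("qa", "_", "R"),
}

accepted, out = run_turing_machine(delta, "q0", "qa", "qr", "1011")
print(accepted, out)  # True 0100
```

The halting condition in the while-loop is exactly the q_n = q_{\text{accept}} or q_{\text{reject}} case described above.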

Assume the trajectory of a computation can be defined as \tau; then the Turing machine is capable of computing any possible \tau that is computable.

Given that any complex phenomenon can be reduced to a describable, and equivalently computable ( given that it's describable with respect to our means of doing so, which rely on computation ), law of the universe, \ell, then a Turing machine can describe any complex phenomenon as \tau ( again, given that it's computable, if you accept the reductionist point of view ).

As an aside, if any complex phenomenon can be reduced to some fundamental process, how is it that some form of perception or qualia is emergent from physical processes? (the hard problem of consciousness)

Then, it's justifiable that a Turing machine is capable of computing any process that relies on classical physics.

The Turing machine can then be extended to a quantum Turing machine, which is capable of describing any quantum process, and which can also model any classical process.

It can then be seen that, given an objective \mathcal{O}, it can be reached by a Turing machine through some computable \tau, a set of computations that accurately model the complex physical processes required to get to \mathcal{O}.

Given that any complex physical process can be reduced to some \ell, then any \mathcal{O} can be achieved by a Turing machine.

Curious: given the universal approximation theorem, where neural networks can approximate any continuous function given enough neurons, what are the empirical limits of a neural network in approximating any \ell?

We humans can build computers that approximate Turing completeness, and respectively any complex process to get to \mathcal{O}, with varying degrees of accuracy, limited by the memory of the computer and its computational power (\rightarrow numerical precision).

We already simulate physics and the weather, and are attempting to simulate the brain, limited only by the capability of our computers.

Now assume a function \mathcal{F}_\theta \rightarrow \mathcal{O}, where \theta is the set of priors that the function relies on for its computation, which we model with a computer.

Where \mathcal{O} is the set of possible outputs of \mathcal{F}_\theta.

It can be said that \mathcal{F}_\theta will not be capable of modelling any process that goes beyond the bounds of \theta or equivalently \mathcal{O}.

Equivalently, programmed computers, the program being \theta and the computer being \mathcal{F}, aren't capable of computing past their program (for obvious reasons).
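A trivial sketch of that bound: below, \mathcal{F} is a computer whose entire program is a hypothetical table of priors \theta. Its possible outputs \mathcal{O} are fixed by \theta, and no input can make it produce anything outside that set.

```python
theta = {"thirsty": "drink water", "tired": "sleep"}  # the program, theta
O = set(theta.values())                               # the output set it is bound to

def F(state):
    """The computer: it can only look up what its program already contains."""
    return theta.get(state)  # None for anything outside theta's bounds

print(F("thirsty"))                   # drink water
print(F("a genuinely novel problem")) # None -- no computing past the program
```

However clever the lookup or interpolation scheme, every output of F is already implicit in theta; that is the sense in which the function can't model processes beyond its priors.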

Nor are neural networks. As an aside, this is why LLMs are doomed to fail to reach AGI, where AGI is defined as an entity that is able to perform at the human level - including the creation of knowledge (I notice I tend to share a similar perspective with Yann LeCun, if that's a good reference). I promise I'm not a doomer; this is just what I see as truth, and I think it's counterproductive to ignore truth if you want to make progress.

Humans, on the other hand, are not only able to compute based on our priors, but are able to create novel ideas, to solve novel problems, based on bold conjectures followed by criticism (real-world feedback).

Humans are explainers that are able to solve novel problems, the root cause being creativity.

Where creativity is defined as the capacity to generate new knowledge and ideas that aren't directly derivable from existing knowledge. While a computer is capable of induction, the creation of explanatory knowledge isn't purely Bayesian.

Given that humans can create explanations (not just derive or simulate them), and given that these explanations can be critically tested and improved over time, humans qualify as universal explainers — systems capable, in principle, of understanding any phenomenon that is explainable by physical law.

While Turing-complete computers can simulate any computable process, they cannot originate new explanatory knowledge. The creative leap still comes from a human (or a system that emulates the human mode of conjecture + criticism).

Therefore, it's not just our ability to simulate complex systems that makes us universal explainers — it's our capacity to generate new explanations and refine them through reasoned criticism. That’s what unlocks the potential to eventually explain everything that is explainable.

And consequently, any problem is soluble.

Progress

Assume progress is defined as solving problems that get in the way of an implicitly or explicitly defined objective through the application or creation of knowledge.

Or purely the creation of knowledge, as there wouldn't be a problem if one had the knowledge \mathcal{K}_1 that let them apply \mathcal{K}_2 to solve a problem.

Then if we had no knowledge, there would be no progress, as there would be no truthful solution to a given problem.

Then a lack of progress, stagnated by problems, is purely due to a lack of knowledge.

Given that humans are universal explainers, we're capable of acquiring any form of explainable knowledge, again not only through a set of computers but through creative conjectures that are molded into rigorous explanations by active criticism from the real world.

Therefore the foundation for making effective and rapid progress is increasing the rate at which we acquire knowledge, which is a function of the rate of criticism and of the error-correction done in response to it.

Consequently, progress for humanity is simply the process of acquiring new knowledge at the bleeding edges of some domain that can provide leverage for humans to solve more problems.

If we purely relied on induction, or on reasoning from empirical data, to make progress, the discrete step-by-step nature would never have allowed us to make bold leaps.

Einstein's theory of general relativity wouldn't have come to be if Einstein had decided to start off from Newton's theory of gravity, nor would Newton have made the conjecture that gravity is the force that governs the orbit of the moon.

The key property that humans are universal explainers indicates that anything is soluble through computation and understanding, given a truth-seeking explainer, even if the initial explanation or conjecture of a problem or solution doesn't approximate the underlying reality well enough,

As all explanations are approximations

as error correction by seeking truth through iterative feedback will lead to a conjecture that resolves the problem.
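That conjecture-and-criticism loop can be caricatured in code. Below, each iterate is a conjecture, the residual is the criticism from "reality", and the correction rule refines the conjecture; Newton's method for \sqrt{2} is my own toy stand-in, not a claim about how discovery actually works.

```python
def solve(criticize, conjecture, correct, tol=1e-10, max_iter=100):
    """Iterate conjecture -> criticism -> error-correction until good enough."""
    for _ in range(max_iter):
        err = criticize(conjecture)        # feedback from "reality"
        if abs(err) < tol:                 # the conjecture resolves the problem
            return conjecture
        conjecture = correct(conjecture, err)
    return conjecture

# Problem: find x with x^2 = 2. Newton's step serves as the correction rule.
root = solve(criticize=lambda x: x * x - 2,
             conjecture=1.0,                       # a bold initial guess
             correct=lambda x, e: x - e / (2 * x))
print(root)  # ~= 1.41421356 (sqrt(2))
```

The point of the sketch is only the shape of the loop: even a poor initial conjecture converges on the truth once criticism and correction are applied repeatedly.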

Happiness

I'm defining happiness as fulfillment received from solving meaningful problems.

Life, at both the micro and macro scale, is all problem solving - solving problems with respect to some implicitly or explicitly defined objective.

If observed very closely, every action taken is to solve a problem, whether the problem is clearly defined to you or not.

If you're thirsty, you solve that by getting water.
If you're unsatisfied with what you're working on, you solve that by finding something more fulfilling.
You even blink to solve the problem of dry eyes.

In general, the problem-solving is done to bridge the gap between how things are and what we want them to be.

Assume problems increase the probability of unhappiness, while solving problems increases the probability of happiness ( with varying degrees of magnitude, as not all problems are huge problems ).

Then, the larger the \Delta between the current state and the desired state, the higher the probability of unhappiness ( dissatisfaction ) with respect to the current state.

Now, assume joy is the temporary result we get from solving a problem, and define peace as the state in which there are no problems.

Our neural reward systems — that is, the dopamine pathways — fire each time we solve a problem or discover something new, not when everything is perfectly calm.

Given that all our problems are soluble, whether in \text{P}-time or \text{NP}-time, as humans are universal explainers, there's nothing stopping us from eventually solving any problem, with some level of joy in the process, to get to the state of peace that one desires.

Peace is really freedom from undesired problems.

In general, you solve problems by active feedback, understanding, and error-correction (computation).

Active feedback is downstream of desiring criticism, understanding is downstream of being innately curious about underlying phenomena, and error-correction is downstream of having enough agency to act on the criticism and understanding.

The former two make up a desire for truth rather than for non-rigorous explanations, which is really all that matters to get what one desires.

If truth is defined purely as causality, or equivalently the "physics" behind a given situation, understanding what happens causally ( the truth ) is more important than maintaining a (false) perception of what one thinks might happen.

Be satisfied with nothing (within reason) but getting as close as possible to the truth, as that satisfaction will solve other dissatisfactions.

The latter is simply the capacity to act effectively in a given situation with given resources to get what you desire.

Now assume that one does have the desire for truth and the agency to act upon it, hence they solve their problems quite effectively and are in a state of peace.

If the state of peace is satisfactory, then all is resolved, but more likely than not the state of peace is temporary and some dissatisfaction will come - we're wired to do things, not to sit around in a state of complete contentment.

But assume problems increase the probability of unhappiness; then going back to facing problems and solving them is a cycle of dissatisfaction, temporary joy, and temporary peace.

Then increasing the probability of happiness comes down to working on meaningful problems, which are fulfilling by the nature of their construction rather than by the outcome they aim to achieve.