NotBru / mind_serialization_project


Bru's Mind Serialization Project

Lots of people, me included, would love to believe our ideas are consistent, definite, and unbiased. However, that's hardly the case, for many reasons.

Still, some of us do make an effort to reduce those (hopefully) undesirable qualities. To this end, I decided to try to slowly write down my worldview, including morals and whatnot, and, while I'm at it, maybe throw in some ideas, insights, or questions that may appeal to someone. That's the purpose of this repo.

This README shall contain a list of assertions and, when needed, a quantifier of my degree of belief in them. I shall make an effort to be explicit about dependencies, so that it is clear whenever I think a certain position can be deduced from previous ones.

Welcome to Bru's Mind Serialization Project! I hope we all have a safe trip. Feel free to skip to the beliefs section.

Structure of this repo

Bru's Mind Serialization Project
├─ Structure
├─ Contributing
├─ Definition of done
└─ Beliefs

Contributing

Definition of done

A properly defined moral, as a set of consistent assertions and beliefs with the possibility of partial conviction, finds its most natural expression in the language of probability theory. HOWEVER, even granting that this language suffices, I just can't quite tell how sure I am of everything. Sure, I can distinguish absolute certainty from almost absolute, moderate certainty, and absolute uncertainty. But how am I to distinguish a 68% certainty from a 68.1% certainty?

The process of structuring this as a Bayesian probability distribution is just so hard, both conceptually (like, do I even know how to properly assign my degrees of belief? and can I formally divide the assertions in terms of the fundamental ones?) and in practice (do I have the time and will?). Sooo, my definition of done is that it reproduces my moral judgement making to a good degree.
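As a rough illustration of that compromise, the coarse confidence buckets could be sketched in code — a hypothetical structure, not anything this repo actually implements — together with one natural rule for dependencies: a derived belief is never held more confidently than its weakest dependency.

```python
from dataclasses import dataclass, field

# Coarse confidence buckets instead of exact probabilities, matching the four
# levels distinguished above. Everything here is an illustrative sketch.
ORDER = ["absolute uncertainty", "moderate certainty",
         "almost absolute", "absolute certainty"]

@dataclass
class Belief:
    statement: str
    confidence: str                                  # one of ORDER
    depends_on: list = field(default_factory=list)   # names of parent beliefs

beliefs = {
    "axiom1": Belief("There is a working mechanism to the universe",
                     "almost absolute"),
    "consequence1": Belief("Human behaviour is outside of our control",
                           "almost absolute", depends_on=["axiom1"]),
}

def effective_confidence(name):
    """A derived belief can't outrank its weakest dependency."""
    b = beliefs[name]
    levels = [ORDER.index(b.confidence)]
    levels += [ORDER.index(effective_confidence(p)) for p in b.depends_on]
    return ORDER[min(levels)]
```

Even this crude scheme already captures the two things the README promises: a quantifier of belief and explicit dependencies.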

And most importantly, I'm doing this for fun and to explore the limits of my own rationality. So... it needn't be completed.

Beliefs

1. Axiom 1: There is a working mechanism to the universe that is unchangeable by any of its parts.

A good example of such a situation would be a universe completely defined by a Newtonian model. It would be defined by Newton's laws, say, “Force equals mass times acceleration, ...”, plus a set of interaction terms defining the forces applied to each of its particles. It would thus follow, by definition, that none of its particles would have a say in what really happens to them.
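The Newtonian toy picture can be made concrete in a few lines (names are made up for illustration): fix the law and the initial state, and the whole history follows, identically on every run.

```python
def simulate(position, velocity, mass, force, dt, steps):
    """Euler-integrate a single 1D particle under a constant force."""
    trajectory = [position]
    for _ in range(steps):
        acceleration = force / mass   # Newton's second law, rearranged: a = F/m
        velocity += acceleration * dt
        position += velocity * dt
        trajectory.append(position)
    return trajectory

# Same laws, same initial conditions: the particle gets no say in the outcome.
run1 = simulate(0.0, 0.0, 2.0, 4.0, 0.1, 100)
run2 = simulate(0.0, 0.0, 2.0, 4.0, 0.1, 100)
assert run1 == run2
```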

Now, the universe sure as hell doesn't follow Newtonian mechanics. But we do have pretty successful models of reality. And however faulty they may be (at unfathomably high resolution), that doesn't change the fact that they do limit what we can say happens. We can't say a planet isn't orbiting the sun. We can't say dopamine doesn't do what we know it does.

2. Consequence 1: Human behaviour is, fundamentally, outside of our control.

From axiom 1

Whatever the rules may be, electrons do as electrons do, and there's no sense in claiming otherwise. And we humans aren't an exception. We may be complex, hard to model or predict, but we aren't above it all, and there's always a mechanism behind us doing as we do.

Jeremy

This is a very graphic example of what I mean by “we aren't in control of ourselves”. I'm gonna assume a materialistic worldview for this example, since it's the simplest, but I'll go on to highlight that this worldview doesn't really matter a lot for the point, since even a worldview in which people have souls is, to a good degree, consistent with axiom 1.

Picture a particular human; let's call them Jeremy. It all began with the conjunction of a sperm and an ovum, about which Jeremy had no say, since they didn't exist to begin with. This conjunction takes DNA from somewhere, which contains the instructions to make “a Jeremy”. It starts doing its thing in the mother's womb, at first simply replicating, and then growing and growing. Jeremy is simply a passive actor here; they can't really do anything, nor have any opinion on what's happening. Factors such as their mother's nutrition slowly define the way Jeremy develops.

Once Jeremy's out of the womb, they begin making choices, yeah. But there is an underlying mechanism to those choices. Namely, Jeremy has a brain, which was formed entirely without their intervention. So at the very beginning of their existence, nothing about Jeremy was under their control. Both their environment and their very own identity precede and shape their will.

It is the case that Jeremy will then go on and grow, making decisions as they do, but all those decisions are defined by some underlying mechanism. At any given time, a decision happens in the brain, which has been shaped by its past and its environment, and all decisions in the past have happened in the brain, which has been shaped by... until ultimately, it can all be traced back to a beginning and an environment on which Jeremy has had no opinion whatsoever.

Identity

But the assumption of a materialistic worldview isn't all that important really. If you want a system with identity, it has to behave as it behaves. Jeremy is Jeremy because they behave like Jeremy.

Let's assume, instead of a materialistic worldview, that there are greater agents at play. Suppose Jeremy has a so-called free will that's ruled by something “greater” than the laws of physics.

This means that, whenever Jeremy is to make a decision, it is not only their brain and the physical environment that come into play: a primordial identity Jeremy is imbued with affects their decision, allowing them to surpass the materialistic limitations.

But then again, that's Jeremy we're talking about. You may not be able to predict the outcome of this decision by looking at their brain, but the decision is Jeremy's. If the decision were random, independent of Jeremy's identity, you wouldn't call it free will, would you? So, after all, Jeremy can't act against their identity.

The worldview has simply gotten more complex (because of this soul agent) and potentially less predictable (were there unpredictable forces at play), but Jeremy is in no control.

Either Jeremy is Jeremy, and this defines a mechanism by which Jeremy abides, or Jeremy is a random force of the aether.

3. Axiom 2: The universe can be explained in terms of subsystems, their self interaction and their mutual interaction

Let's take for example a universe with just a planet and an orbiting moon. This universe can obviously be divided into two mutually exclusive subsystems, the aforementioned ones. We can describe the dynamics of the whole universe as terms describing how the planet behaves by itself, how it interacts with the moon, and how the moon behaves by itself.

This axiom states that, whenever we divide the universe into mutually exclusive parts, we can describe the whole dynamics in terms of how each part interacts with itself and with each of the other parts.

Although not stated in the title, it is also implicit that whenever two systems are alike, their self-interaction dynamics are alike. And whenever two pairs of systems are alike, their interaction dynamics are alike too.

Say, a perfect copy of something is gonna act all the same, and so forth...
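In code, the decomposition described above might look something like this (a sketch with made-up names, using the planet/moon toy universe): the total rate of change of each subsystem splits into one self-interaction term plus one term per pair.

```python
def total_dynamics(state, self_terms, pair_terms):
    """state: {name: value}. Returns {name: d(value)/dt}, split as
    self-interaction plus pairwise-interaction contributions."""
    derivative = {}
    for name, value in state.items():
        d = self_terms[name](value)                    # how the part behaves by itself
        for other, coupling in pair_terms.get(name, {}).items():
            d += coupling(value, state[other])         # how this pair interacts
        derivative[name] = d
    return derivative

# Toy universe: two bodies with identical self-dynamics (alike systems, alike
# self-interaction) coupled by a symmetric attraction-like term.
state = {"planet": 10.0, "moon": 2.0}
self_terms = {"planet": lambda x: -0.1 * x, "moon": lambda x: -0.1 * x}
pair_terms = {
    "planet": {"moon": lambda x, y: 0.01 * (y - x)},
    "moon": {"planet": lambda x, y: 0.01 * (y - x)},
}
derivative = total_dynamics(state, self_terms, pair_terms)
```

Note how the "alike systems, alike dynamics" clause shows up naturally: the two self-interaction functions are literally the same rule applied to different parts.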

A more complete axiom would state “core theory is sufficient to properly describe all of human experience”, and since that theory incorporates the axiom currently described, this one would follow. But for now I'd rather have the broadest possible axioms come first.

4. Consequence 2: There is a sense in which we do have control over ourselves

From axiom 2

Even though consequence 1 clearly states that we are governed by rules that are above us, human experience features a notion of control. This comes from the fact that, since we experience the universe from within “the shell”, we don't actually know nor perceive the universe in its entirety. We see people (subsystems) and their interactions with each other and with the environment.

In this interaction, we can see people's decisions making changes both to their environment and to themselves. It's not like we can claim “aah, it's just the universe's doing” and excuse our behaviours or let our brains and bodies rot. It's a choice, it's a perfectly valid choice, but from this standpoint, we're to a certain degree in control of said choice.

So, the illusion (and useful heuristic) of having control over ourselves arises when we place ourselves in our own shoes: one of those many subsystems, which experiences the world with some parts of the brain and makes decisions with other parts, the two being connected.

5. Definition 1: Human morals

We define a person's morals as a mechanism through which they determine the “desirability” of decisions and actions. We expect this desirability factor to correlate with the person's own decisions, though it needn't be this way.

Do note that, as a consequence of this very definition, morals are subjective. The way people define their morals is completely personal. Some people get together and agree on a common moral ground, which may then serve to coordinate efforts to enforce them through actual actions.
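The definition can be sketched in a couple of lines (purely illustrative names and scores): each person's morals are just their own function from actions to desirability, and nothing forces two such functions to agree.

```python
# Two hypothetical moral functions: action -> desirability score in [-1, 1].
def morals_alice(action):
    return {"help": 1.0, "steal": -1.0, "shave": 0.5}.get(action, 0.0)

def morals_bob(action):
    return {"help": 0.8, "steal": -0.6, "shave": -0.5}.get(action, 0.0)

# Subjectivity: the very same action scores differently under different morals.
assert morals_alice("shave") != morals_bob("shave")

# A "common moral ground" would be some agreed-upon function, e.g. an average:
def morals_shared(action):
    return (morals_alice(action) + morals_bob(action)) / 2
```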

You know how we used to think slavery was OK? And left-handed people were evil or something? Let that sink in for a minute. Fucking left-handedness. People judge men who wear dresses or who like dicks. They judge non-binary people for not falling into a dumb category we've made up or for, god forbid, trying to have their identity respected...

We're judging one another constantly, oftentimes very harshly, so this fact needs to be emphasized: morals only have “true substance” to the degree that they are enforced by individuals, as physical (and social) beings.

This subjectivity is actually another reason for this project. The axiomatization of my morals began as a very informal process when I decided I wanted a more robust approach to judgement. Throughout our whole history we find so many examples of morals that, across cultures and times, clash with each other. But I believe that, as more basic needs got solved, people started to have time to think, and to care for each other. The driver of this is mere human empathy, hardcoded in our DNA through years of evolution.

And, as subjective a reason as “Iunno, it's just in my DNA” is, we can more or less take it as a guarantee. Unless we accidentally create a society of psychopaths, we can more or less foresee a future where basic needs get solved more and more (or we all sink to the bottom of perdition). So I myself can take human empathy as a non-aesthetic driver, in the sense that it doesn't change a lot and isn't quite subject to historical accidents, in contrast with, I don't know, shaving?

By reducing my morals to the smallest and broadest axiom set I can, my goal is to:

  1. Respect other people's liberty, in contrast with having narrow morals that forbid other people things.
  2. Resort to moral judgements that more or less align with the hypothetical morals of a healthy society that has had time to care for sentient thingies.