I’ll make the claim right at the start: all propositions about motivation, thoughts, or thinking are either false, nonsense, or incomprehensible in the strict sense, and at most vaguely true in a loose or generalized sense.
For a while now I’ve been bothered by philosophical arguments that rely on a structure of agents and propositions. For example, while reading about Reasons Internalism in relation to Bernard Williams’s “Internal and External Reasons”, all of the various theses are described in terms of agents performing certain actions based on reasons expressible in propositional form. Judith wants to go to the opera to see Rigoletto, because she is a fan of both Victor Hugo and Verdi, and is interested in the intersection of narrative between this work and, say, The Hunchback of Notre-Dame. This constitutes some reason for her action, either normative or motivational; which one doesn’t matter for the purposes of this discussion. Her husband Oliver does not want to go, because he thinks opera is boring. This constitutes his side of the motivational or normative reasons. Why these agents hold these beliefs is also mostly irrelevant here. I suppose Williams would say that they are motivated by these reasons due to their respective motivational sets’ compatibility with them, thus constituting an internal reason, and so on.
But actual human behaviour or motivation is not expressible in propositional form. (In fact, most of reality is not expressible in propositional form, but that’s a somewhat different topic.) First, I should say what I mean by propositions. A proposition is a logical statement that can be either true or false. It is the essence of an utterance, or the meaning of a sentence. “X is Y” is a proposition, and likewise “If X, then Y”, and likewise more complex ideas such as “For all X, for some Y, if X is P, then X is Y”, etc. are propositions. This is all well and good.
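To make the notion of propositional form concrete: a quantified schema of the last sort can be checked mechanically over a finite domain. The predicates below are arbitrary toy choices of mine, not anything from the philosophical literature; the point is only that a proposition, in this strict sense, is a crisp object that evaluates to true or false.

```python
# Toy illustration of evaluating a quantified proposition of the form
# "for all x, there is some y such that if P(x), then R(x, y)"
# over a finite domain. The predicates P and R are arbitrary stand-ins.

domain = range(10)

def P(x):
    """Toy one-place predicate: x is even."""
    return x % 2 == 0

def R(x, y):
    """Toy two-place relation: x is less than y."""
    return x < y

# The material conditional "if P(x) then R(x, y)" is (not P(x)) or R(x, y).
claim = all(any((not P(x)) or R(x, y) for y in domain) for x in domain)
print(claim)  # the schema, under these toy predicates, is simply true or false
```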
What is not well and good is the fact that nothing within the complex neural networks that make up our deliberations corresponds to any such propositions, such that if “reasons”, either internal or external, were to be expressed in propositional form, they would always be false or nonsense. Judith does not want to go to the opera because X. She wants to go because her 86 billion neurons are electro-chemically charged and arranged in just such a way as to make this behaviour most likely. Nothing about the state of her brain can truly be expressed in a simple proposition of the English language, or of some language of simple logic.
It should be noted that this is not a claim about epistemology, or what we can know about something. It is just false to stipulate that the state of Judith’s brain can be expressed truly in a simple proposition of the above sort. Why? Because nothing in her brain would constitute a truth-maker of any such proposition. And because any language we use to express propositions (including formal logical languages) is too coarse-grained to carve the world in the detail necessary to express some truth. And even if it could, the resultant propositions would be incomprehensible, due to their enormous complexity. In fact, any true propositions would necessarily be more complex than the thing they are about (because the rules of the expressing language add more information to the system). So any true proposition about Judith’s motivation would have to be at least as complex as her brain, and probably much more complex than this, due to the causal relationship her brain has with almost everything else in the universe.
This reasoning is motivated by what we learn from deep neural networks in the world of computer science. Even with artificial neural networks, which are millions of times less complex than the human brain, it is often impossible to express what exactly these computer agents are learning. To try to express accurately why an agent made some choice rather than another would require an explanation more complex than the neural network. Or, one could simply hand over the neural network as the explanation itself. Why did this chatbot choose to say the sentence it chose rather than another one? I don’t know; here’s the model (the trained network), that’s why.
Obviously, we make summary statements and approximations all the time. For example, we might say about the chatbot that, perhaps, within the training data it found relevant statistical relationships between various words and thus it is likely to respond in this way. But in the case of computer science we understand these statements to be approximations, and often educated guesses. Not because we don’t know something, but because the nature of the deep neural network is far too complex to express in any more useful way. And since neural networks consist only of their architecture and a set of matrices with various weights and biases, we cannot point to anything more useful than the network itself. Nothing within the network says “if X then Y”. It is simply a complex web of multiplications and weights that takes inputs and spits out some output based on these millions of multiplications.
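This can be made vivid with a toy sketch. Below is a minimal two-layer feed-forward network in plain Python; the weights are random stand-ins for what training would produce. Inspecting it, there is nowhere an “if X then Y” rule to point to, only weighted sums passed through a nonlinearity.

```python
import math
import random

random.seed(0)  # reproducible toy weights

def matvec(W, x):
    """Multiply a weight matrix (a list of rows) by an input vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def forward(x, W1, b1, W2, b2):
    """A tiny two-layer network: weighted sums and a tanh nonlinearity.
    Nothing in here is a readable rule; it is just arithmetic."""
    hidden = [math.tanh(v + b) for v, b in zip(matvec(W1, x), b1)]
    return [v + b for v, b in zip(matvec(W2, hidden), b2)]

# Random weights standing in for a trained model (hypothetical values).
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

y = forward([0.5, -1.0, 2.0], W1, b1, W2, b2)
```

Asked why this network produced `y` rather than some other output, the most accurate answer available is to hand over `W1`, `b1`, `W2`, and `b2` themselves; any shorter answer is an approximation.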
If we ask why Judith wants to go to the opera, any proposition we come up with becomes equally silly, given the computer scientist’s point of reference. It is simply false to say that “She wants to go because she likes Victor Hugo”, because nothing in her brain corresponds to any such statement. As an approximation we might guess about what the state of her brain is likely to produce, given further approximations and assumptions about Judith as a person, but these are educated guesses at best, not the proper bearers of truth values.
There is a lot more to be said here, but for now I should summarize and say only that any discussion about internal or external reasons, or normative motivations to act in some way or other, becomes an exercise in futility, given the non-propositional nature of the decision-making process in the brain. The state of the brain is the agent’s reason to act or think a certain way, not some expressible proposition like “because I think opera is boring”.