Many articles present logic in decision making as an “alternative” to the way decisions are typically made. This article’s premise is that, upon reflection, logic is everyone’s preferred method of decision making – even illogical arguments use logical-sounding statements to appear more valid. Unfortunately, many of us do not apply logic consistently in real life, resulting in many poorly made decisions.
Common fallacies in decision making:
Some decision making can be difficult. In real life, most decisions are made in the absence of complete information. Often, multiple choices are presented, but it isn’t clear whether the best decision is choice #1, choice #2, or some other choice not considered at all.
Often, shortcuts are taken. The average person, who is not necessarily fully versed in the decision making criteria, will make decisions based on unrelated factors, such as: What do other people think? While this strategy has the advantage of being easier than “thinking”, and also being easier to deliver, it often fails to result in the best decision being made.
To understand why, and how to make better decisions, consider that the “Truth” of something is not affected by anything but its actual truth. Let’s start with a very simple example just to make the point: What is the “true” value of “1+1”? As you read the below, replace “1+1” with any question you might have on your mind, such as: “Is capitalism better than communism?”, “Are GMO products safe?”, “Should I have children?”, “Is there a God?”.
So: Suppose that “1+1” equals 2. (Note that if you are not certain of this, then this exercise won’t work very well, so suspend your disbelief for the time being, the reasoning behind this will be explained later.)
Who says it:
If someone you trust says that “1+1” equals 3, would that change things? What if a serial-rapist-murderer-pedophile said that 1+1=2, but the Pope, the president or prime minister of your country, your favourite celebrity, your parents, and last year’s Nobel prize winners all said that 1+1=3 – would that change things? Or would the pedophile be the most reliable source in that group? What if a survey was taken that showed that more than 50% of individuals surveyed believed that 1+1=3? What if everyone surveyed unanimously agreed? What if this belief had stood for thousands of years? Would that change anything?
Perhaps you think it is ridiculous that so many people would be so deluded for such a long time, and hence the question is unrealistic, and therefore pointless. But let’s keep in mind that up until the 17th century, most of the world was convinced that the Earth was the centre of the universe (geocentrism). Did that make it true? Up until the 20th century, most of the world was certain that human activity was far too minor to have global effects on the planet, such as ozone depletion or global warming. Did that make it true? Is the fact that the majority of people everywhere and throughout history believe in some deity good enough reason to be certain of the existence of one?
Here are some common shortcuts used to determine truth: “According to Dr. So-and-so of the University of Such-and-such, …”, “4 out of 5 dentists agree, …”, “Everyone I know says, …”, “The democratic vote is, …”, “It is well known that, …”. Try inserting “1+1=3” in place of the ellipses (…) – how do any of these statements affect the actual truth?
How many times:
Advertisers know that repetition works too. But really, does the number of times something is said change its truth? What if you hear something from multiple different independent sources? What if you hear it regularly? What if it is embedded in your culture? If you grew up believing something, does that affect its truth? If a high-budget lobby group started a massive marketing campaign to assert that 1+1=3, how long would it be before the majority of the audience believed it?
This might sound ridiculous as well, but once again, let’s keep in mind that there was a time when radioactive substances were sold as health products, communism was considered evil, and homosexuality was viewed as deviant and dangerous. Better decision making will come from first recognizing that saying something multiple times doesn’t make it any more or less true, and conversely, hearing it multiple times makes no difference either.
You may recall being in situations where an argument escalated to raised voices, insistent repetition, off-topic distraction, emotional blackmail, and similar tactics. Perhaps you’ve attempted those tactics yourself in the past. Now give it some thought: If a statement is made louder, is it any more or less true? What if it is repeated using different words? What if the person says “trust me” before making the statement, or “I’ll bet with you”, or “how could you possibly think otherwise”? If it doesn’t affect the truth of the statement, then why do people so often resort to these tactics?
The influence of such tactics may be subconscious. For example, you might be skeptical about something you heard from a friend. You might be less skeptical after hearing it on the news, and if you read it in a book, well, that might be the clincher. Good salesmen know that face-to-face meetings are far more effective than email – is the same person more “trustworthy” or “credible” depending on their proximity? Is what they say about the products or services provided by the company they represent more or less true if you meet them in person?
Try running through the above exercise again, this time replacing “1+1” with one of the example questions mentioned earlier, or with your own question, or any other issue that was raised by an article you read, a story you heard, or a movie you watched.
So are you saying that other people are useless for decision making?
First of all, if you’ve been paying attention, then you already know that it doesn’t matter what I say. It doesn’t matter who I am, how I say it, or how often. If you want to know the answer to any question – truly – then there is no shortcut: You will have to answer the question yourself, by “thinking”. Sorry.
Secondly, if you’ve learned anything new from reading this, or noticed something you hadn’t before, or just become more conscious of your sources of information and decision making process, then you already see how other people can be of benefit for decision making. Discussing issues with other people is immensely useful for suggesting options you may not have considered, calling attention to logical flaws in your thinking, and providing guidance with collecting relevant information.
Finally, no single human being can in a single life-time learn everything there is to learn, do everything there is to do, or even think through everything there is to think through to make the enormous number of decisions each is faced with. To live the lifestyle we are accustomed to, we rely on others to handle at least some portions of our lives (eg, research, invention, technology, manufacture, legislation, etc), and by implication, to make reasonable decisions on our behalf.
Alright, if I can’t use any of the above shortcuts, then how do I make decisions?
If you’ve given some thought to the many questions listed above, and are fairly confident about the answers (eg, Does repetition change truth? No!), then you might ask yourself – how am I making *that* decision? Am I relying on the opinions of others? Could the opinions of others change my mind about this? If the answer to both those questions is again, No!, then it’s time to recognize and give credit to your built-in decision-making ability.
The fundamental concept behind “Logic” and logical thinking in general, is that it is universal and innate. Human beings are born with the capability to think logically, and to see its unchangeable universal truth. The axioms of logic are the basis of the type of decision making described above – the kind that no outside forces such as the opinions of others could ever possibly change. Indeed, nothing – including what you are now reading, or even your own apprehension, doubt, or rejection of the principles of logic – can change it. Unfortunately, humans are also born with the ability to think illogically, so the distinction between logic and non-logic is important.
Let’s go through a simple example that will also help to illustrate 2 important concepts of logic: What is “1+1”, and why are we all so certain that it equals 2? You might think that this is a given, that is, we have defined things that way. This would be an example of “a priori” knowledge. We often talk about “definitions” or “axioms” (things we take for granted), and “deductive reasoning” (things derived from such knowledge), as being related to this.
However, a common error is to take things for granted (assume true) that aren’t so. The value of “1+1” is not “a priori” knowledge. To understand why, consider an empty mailbox. Open it to make sure that it’s completely empty, and then close it up. Now, insert an envelope into it through the mail slot. Now insert another envelope. At this point, it is technically unknown how many envelopes are in the mailbox, and we won’t know for sure until we open it up. If we were to open it and find 3 envelopes inside, then we would have to conclude that 1+1=3 (at least some of the time). But when we open it up, we find 2 envelopes. This exercise is repeated throughout our lives (unintentionally and subconsciously) whenever we work with multiple objects (eg, 2 apples in a basket, 2 cars in a driveway, etc). After many years of experience, with results so consistent as to hardly ever suggest exception, we can be quite certain that 1+1=2. So actually, this is an example of “inductive reasoning”, or “empirical” knowledge (things we learn from experience).
Many facts historically taken for granted have been shown to be “empirical” rather than “a priori”, and in some cases simply incorrect. Sometimes we take things as self-evident because we lack the imagination needed to see how we could determine them empirically. For example: “Does God exist?”, or “How did the universe begin?”. Must the answers to these questions be axiomatic? Is there really no way to determine the answers empirically? Or is that just short-sighted thinking and a lack of imagination? Up until fairly recently, it was assumed that science would never be able to explain the connection between the mind and the body, the origins of living species, the existence of altruism and morality, or cognition. Scientists are tackling the question of the origins of the universe without fear.
The logic/math/science trinity:
Logic, math, and science, can be seen as 3 different views of the same concept, each focusing on a different area or part of the same thing. It is important to recognize this only in order to understand that when we talk about logic, we are also talking about math and science. Here is an example:
Science primarily focuses on inductive reasoning: If all swans that we’ve ever seen are white, then we hypothesize that all swans are white. Logic primarily focuses on deductive reasoning: If all swans are white, and this is a swan, then it must therefore be white. These seem like quite different concepts at first blush. However, logical arguments can get quite complex, and use many abstract concepts. As such, any logical proof is up for scrutiny, and anyone reviewing the proof may find an error in it, and thereby correct it, improve it, or disprove it. Of course, the disproof itself is subject to the same type of review by others! Scientific theories are equally open to scrutiny. Were someone to find a black swan, we might be tempted (based on deductive reasoning) to dismiss the original theory, or at least adjust it to “most swans are white”. However, this reaction must itself be subject to scrutiny too: It may turn out that the specimen was not a swan, but a very swan-like duck, or it may be the case that the swan suffered from a disease that coated its feathers in black scales, but underneath, the swan is as white as its kin. This process demonstrates the scientific method.
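The inductive and deductive steps above can be sketched in a few lines of code. This is a toy illustration only – the swan observations and function names are invented for the example, not drawn from any real dataset.

```python
# Toy illustration of the swan example: inductive generalization
# from observed cases, then deductive application to a new case.
# The observations below are hypothetical.

observed_swans = ["white", "white", "white", "white"]

# Inductive step (science): generalize from every observed case.
hypothesis_all_white = all(color == "white" for color in observed_swans)

# Deductive step (logic): apply the generalization to a new case.
def predict_color(is_swan):
    """If the hypothesis holds and this is a swan, deduce its color."""
    if hypothesis_all_white and is_swan:
        return "white"
    return None  # the hypothesis says nothing about non-swans

print(predict_color(True))  # -> white

# A single counterexample forces revision of the hypothesis:
observed_swans.append("black")
hypothesis_all_white = all(color == "white" for color in observed_swans)
print(hypothesis_all_white)  # -> False
```

Note how fragile the inductive step is: one black swan (assuming it really is a swan, and really is black) overturns the generalization, exactly as described above.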
Math certainly demonstrates the same properties: If we find, through testing, that prime numbers greater than 2 are odd, then we might propose a conjecture that all prime numbers greater than 2 are odd. Then someone might come up with a deductive proof, turning the conjecture into a theorem and greatly increasing our confidence in it. However, the proof may be overturned if someone finds a flaw in it, or if a supercomputer finds a number that contradicts expectations. Granted, this is highly unlikely to happen given how simple this particular proof is (how few steps removed from the definitions involved), but for much more complex proofs, the chances increase.
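Both halves of this process can be made concrete. The inductive half is a finite empirical check; the deductive half is the short proof noted in the closing comment. The range limit of 10,000 is an arbitrary choice for the sketch.

```python
# Inductively testing the conjecture "all primes greater than 2 are odd"
# over a finite range, as a sketch of how testing builds confidence.

def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Empirical check: look for an even prime greater than 2 below 10,000.
counterexamples = [n for n in range(3, 10_000) if is_prime(n) and n % 2 == 0]
print(counterexamples)  # -> []

# The deductive proof is much stronger than any finite check: any even
# number greater than 2 is divisible by 2, hence composite, hence not prime.
```

No amount of testing covers the infinitely many primes the conjecture speaks about, which is exactly why the deductive proof adds so much confidence.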
An important consequence of this is that the term “proof” appears to be misused in everyday language: Proof is incorrectly seen as truth. If you hear “it has been scientifically proven”, it is interpreted as “it is true”. However, what it really means is that the person saying it doesn’t understand how science works. The scientific method uses inductive reasoning, whereas “proof” is a term describing deductive reasoning. Science doesn’t ever “prove” anything – it only suggests things based on available evidence. What’s more, even a real (deductive) proof is subject to scrutiny, and should not be interpreted (strictly speaking) as “truth”, although for day-to-day decision making purposes, it usually suffices.
Properties of logic:
Some of the differences between logic and non-logic (eg, pseudo-science, religion, politics), include:
– There is a right and there is a wrong. There is no “matter of opinion” (although there can be a “matter of preference”, such as emotions).
– Logic is universal: 2 people using the same information on 2 different sides of the globe will come to the same conclusion every time.
– A single contradiction makes everything equally possible and impossible, true and untrue, simultaneously. This would be a pointless outcome, and so it is crucially important that contradictions are recognized and resolved.
– Everything in logic is subject to scrutiny (including its axioms!).
– All the points above about how truth is not determined.
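The third point above is the classical principle of explosion (ex falso quodlibet). A minimal derivation, in standard notation, shows why one contradiction poisons everything:

```latex
\begin{align*}
&1.\; P \land \neg P && \text{(assumed contradiction)}\\
&2.\; P              && \text{(from 1)}\\
&3.\; P \lor Q       && \text{(disjunction introduction, from 2)}\\
&4.\; \neg P         && \text{(from 1)}\\
&5.\; Q              && \text{(disjunctive syllogism, from 3 and 4)}
\end{align*}
```

Since $Q$ here is arbitrary, a single accepted contradiction lets us “prove” any statement whatsoever – which is why contradictions must be resolved rather than tolerated.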
So for example: God either exists, or not. Different people may have different beliefs, but they can’t all be right simultaneously. What’s more, so long as some properties of God are predicted, it ought to be possible to find evidence one way or the other. So far, many of the predictions made based on the word of God (eg, geocentrism, creationism, young Earth, static Earth, infinite resources, historical accuracy, non-determinism, etc) have not favoured this theory, making this concept effectively useless for day-to-day decision making. What’s more, retractions in the position of religious authorities make any remaining predictions elusive. In this sense, God has become synonymous with “stuff we don’t know”. However, a scientist’s interpretation of “stuff we don’t know” is: “Stuff we should try to find out”, whereas the religious point of view tends to be “Stuff we can never know, so let’s not bother trying.” This seems a shame.
Some language analogues / synonyms:
– Definition, tautology, axiom, a priori, self-evident, postulate
– Deduction, argument, proof, disproof, inference, syllogism
– Inductive, empirical, hypothesis, theorem, theory, law, conjecture
– Fact, assumption, evidence, information, premise, proposition, predicate
How would logic and science have impacted the change from geocentrism to the relative world view more common today? Even without the scientific method, once evidence (such as that gathered by Galileo Galilei) became available, the accepted viewpoint began to change. So if this happened anyway, how would logic and science have improved things? Well, prior to the relative viewpoint becoming dominant, there was a geocentric viewpoint. With the scientific method in mind, we might acknowledge that geocentrism was reasonably consistent with the evidence available at the time. However, it would also be acknowledged that a non-geocentric viewpoint may fit the data just as well. This is a remarkably different approach, with significant consequences. Prior to Galileo, there was already mounting evidence against geocentrism, but it was largely dismissed because geocentrism was not viewed as being up for scrutiny. What’s more, when Galileo did suggest a model that better fit the data, he was punished. In the modern approach, with the scientific method more fully developed, scientists are encouraged to question and look for flaws in existing theories. A scientist suggesting an alternate theory would certainly be reviewed, probably scrutinized, and perhaps even criticized, but not punished. As such, while the end result is the same – we now have a non-geocentric consensus – the speed at which such changes come about has increased enormously.
This wonderfully successful methodology has ultimately led to a significant increase in scientific knowledge and technological advancement, but for many, it hasn’t made its way effectively into daily life. While we value science and technology for how they have improved our standard of living, we rarely think to apply these successful principles to our own day-to-day decision making. What beliefs do you hold that you have no evidence to support? What facts do you take for granted based on external sources of information? What opinions do you express that you have not opened up for scrutiny? What evidence could be presented that would cause you to change your mind? What efforts do you make to fill in these gaps with deductive reasoning and empirical observations?
Many people hold strong views about political, economic, moral, and philosophical ideas. How can logic and science help resolve these views? Clearly not everyone can be right; at least some must be wrong. How do we figure out who? Based on the degrees they hold? How much conviction they speak with? Their resume, perhaps? A scientist would put each viewpoint to the test! Let’s have a control group take one position, and an experimental group take another, and see who comes out ahead. Well, these “groups” would need to be entire countries, and the experiment would need to run for quite a long time. Costs would be outrageous, and there is always the risk of irreversible damage (environmental crisis, climate change, nuclear war). So this is where creativity comes in: What can we learn from smaller groups? What can we extrapolate from past experience? A relatively new tool available to researchers is the virtual simulation. If we can design a realistic simulation of the way a population behaves, then we can experiment with different models much more cheaply, easily, and safely. The bottom line for anyone expressing a strong opinion on such complex issues is that at best, they are basing those opinions on relatively small, poorly tested, fragmented, and questionable evidence – how certain they are of their opinion does not affect the truth.
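The simulation idea can be sketched in miniature. Everything here is invented for illustration – the two “policies” and their payoff distributions are arbitrary stand-ins, not a model of any real system – but the structure (run many randomized trials under each option, then compare outcomes) is the core of the approach.

```python
import random

# A deliberately toy sketch of policy comparison by simulation:
# run many randomized trials of a made-up payoff model instead of
# experimenting on real populations. All numbers are hypothetical.

def run_trial(policy, rng):
    # Hypothetical trade-off: policy "A" has a lower mean outcome
    # but far less variance than policy "B".
    if policy == "A":
        return rng.gauss(1.0, 0.5)
    return rng.gauss(1.1, 2.0)

def average_outcome(policy, trials=10_000, seed=0):
    """Monte Carlo estimate of the mean outcome under one policy."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return sum(run_trial(policy, rng) for _ in range(trials)) / trials

print(average_outcome("A"))
print(average_outcome("B"))
```

Even this toy version makes the key point: the comparison is cheap, repeatable, and safe, and its conclusions are only as good as the model’s assumptions – which are themselves open to scrutiny.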
Perhaps for now, the most important change we can make to our thinking and decision making is simply recognizing that not only are we limited in terms of the available evidence to support our own strong opinions, but that the same applies to others, including those we trust and look up to. Perhaps just opening up our strongest, most closed-minded views to scrutiny through evidence, realizing that the strong, closed-minded ideas of others should be open to the same type of scrutiny, and thinking about what type of evidence could call these opinions into question, would be an enormous step forward. The next step might be recognizing that many of these questions are not intangible – they can, and should, be resolved through observation, evidence, and always allowing for the possibility of counter-evidence. Then we can move on to gathering the information needed to make better decisions in an objective manner. Unfortunately, truly good decision making requires expertise, at least in the area of decision making itself. This is non-trivial: logical analysis, the scientific method, and the philosophy of math are university-level subjects. But there is no shortcut to thinking things through logically yourself.