Arguments, Cognition, and Science. André C. R. Martins



or an influx of foreign investment money. It is also possible that each of those factors contributed, along with other possibilities. Claiming knowledge in that situation is tricky. An economist might claim she knows which explanation is the correct one or how much each factor contributed, but different economists might provide distinct causes for the improvement. There is no way for a nonexpert to identify who might be correct. Even if you are an economist yourself, you might trust your own evaluation more, but there is really no sound argument for why your estimate is actually better than the opinions of your best peers.

      The Ptolemaic situation is also different. We know now that the Ptolemaic system is not the best description of the movement of the planets. The justification is, in that sense, wrong. The planets do not orbit the earth; they orbit the sun. And their orbits are not made of a composition of circles and smaller circles. Newtonian mechanics provides a much better description, but even Newtonian mechanics is not really correct. General relativity corrections are needed to account for some discrepancies. And yet, using Ptolemaic methods, when they had been observed to describe the movements well, might not have been a bad idea. At the time of the example, it was known that the methods provided correct predictions. If that was your best evaluation of the world, it would make sense to use it. You might even have doubts about whether the Ptolemaic description is correct, but those previous observations might justify your trust in the results. It might make some sense to say you know the planets will be there, even if you are not certain that Ptolemy was right. Maybe the term knowing is too strong, but you did have a reliable prediction.

      There is something special about the scenario with the child and prime numbers. In every other situation, you might have tried to claim you knew the answer, but, if pressed, you would be forced to admit there was a chance you could be wrong. More than that, even if you collected more data and the data agreed with you, some doubt could remain. You might find the best economic or astronomical explanation for your problem, but there is always the possibility that a new and better theory might show up in the future. In the mobile phone case, even if your partner says you had forgotten the phone, there is still a remote possibility that they are lying. Maybe things did happen as you thought. The amount of remaining doubt might be very small, but it is always possible to imagine a very unlikely, but not impossible, case that would contradict your current so-called knowledge.

      That is not the same with prime numbers. We know the child in the example was wrong. There is no doubt that 37 is prime, and there is no doubt that the rule the kid used is wrong. We know how to determine with certainty whether a number is prime, and ending in 7 simply does not work. In the general case, given any number n, if you correctly prove that n is prime, everyone will agree with your demonstration, and no doubt will remain about it—at least, none aside from the possibility that every human shares the same mental disease that makes us certain about mathematics and logic when that certainty is not warranted. But, in that case, all our reasoning would be unreliable. And if our reasoning is unreliable, it makes no sense to reason about that possibility, as our reasoning would fail anyway. Collective mental disease scenarios aside, mathematical and deductive logical demonstrations do not share the doubt that remains in the other three scenarios.
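The certainty described here can be made concrete. A minimal sketch in Python (the function name and the counterexample 27 are illustrative choices, not taken from the book) checks primality by trial division, confirming both that 37 is prime and that the child's "ends in 7" rule fails:

```python
def is_prime(n: int) -> bool:
    """Deterministic primality test by trial division.

    Checks every candidate divisor i with i * i <= n; if none
    divides n, then n has no nontrivial factors and is prime.
    """
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(37))  # True: 37 has no divisor between 2 and 6
print(is_prime(27))  # False: 27 ends in 7 but equals 3 * 3 * 3
```

Unlike the empirical scenarios, this procedure leaves no residual doubt: for any given n, the test either exhibits a divisor or exhausts every possible one.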

      It seems knowing could mean different things when applied to the real world and when we are talking about demonstrations. For mathematics, justified true belief might make sense. The fact that 37 is a prime number is a belief I can have. It is also true, and I can prove it to be correct beyond any doubts. I just need to assume the facts we know about numbers and the definition of a prime number, and the justification is as perfect as any justification can be.

      The same is not true for the real-world scenarios. There might be situations when we are close to being certain. When that happens, we may have trouble finding reasonable scenarios where our opinion might be wrong, but it is always possible to come up with a very implausible but not impossible paranoid case. That failing, we can always pull a brain-in-a-vat or a Descartes demon situation.

      That difference poses a few important questions. One of them is whether there are methods that could allow us to be as sure about the real world as we are about the fact that 37 is prime. If they exist, which methods are those? If they do not exist, are there alternatives that might still allow us to claim some amount of knowledge? Is knowledge of the real world possible at all, or are we stuck in a deep skeptical scenario? Is there a middle path? We might need to give up some of our expectations about knowledge if we want to reason in a competent way.

      The other central question in this book is also related to knowing. Remember, I was looking for a definition of knowing that felt right, that is, a definition that matched what we really mean when we say we know something. As a first guess, justified true belief seemed to fit that role—but it was not enough. The justification of any opinion could be wrong. Too often, there might be serious problems in assessing whether some ideas are actually true, even when they are justified.

      But I wanted the definition to feel right, that is, it should be supported by our common sense. We do tend to trust our common sense. It helps us in our daily lives, in our everyday decision making. People with no common sense are considered impaired. In principle, it makes sense to expect that common sense could at least help us in our inquiries. But that raises this question: Is our common sense, or even our careful reasoning, reliable? Are there circumstances in which we could expect both of them to fail?

      If our reasoning is not perfect, it makes sense to look for methods that can protect us against our mistakes. Logic and scientific methodologies might play a role in that. But, if we want to keep improving, we should always be wary of our limitations. It is quite possible that some of our deepest biases might still influence our scientific conclusions and practices in ways we have not noticed yet. Our methods need not only to be correct; they might also need to be tailored to help us avoid our own biases. To do that, it makes sense to check what the most recent experiments have been telling us about our reasoning skills. Our next stop is to review a few things about our cognitive skills.

      Reference

      Gettier, E. 1963. “Is Justified True Belief Knowledge?” Analysis 23(6), 121–23.

       Individual Reasoning

      Our reasoning skills are not perfect—and we have known that for a long time. Maybe we have even been aware of it since we started arguing with one another. Noticing the mistakes of others is something we do quite well, particularly when we disagree with them. Our own failures are harder to accept, but very few sane people (if any) would dare to call their own reasoning skills infallible.

      Whenever the first actual debates about correct reasoning and argumentation may have started, it is clear that the problem of building solid arguments was already well known in ancient Greece. By that time, there were already proposals for solving the problem. Aristotle’s Organon (Aristotle 2007) provides very clear rules for demonstrations. It tells us how, if we accept some ideas, we must also accept some conclusions that become unavoidable. But it also shows there are situations where we cannot reach any conclusions with certainty. Aristotle’s goal was, indeed, determining when we can say without doubt that one conclusion follows from a set of premises—and from there, determining what is true.

      Yet, while we have been able to identify many reasoning mistakes for thousands of years, we still commit them. We even make very trivial mistakes. Some of the cases in Aristotelian logic are quite simple to understand and learn. With little training, it should be possible to avoid those mistakes, but generation after generation has fallen prey to those well-mapped cases. Nowadays, large lists of fallacies are easily available to everyone, in books and on websites. It would make sense to assume people should want to be competent at reasoning, but it is not hard to find intelligent people making simple mistakes. That is far more puzzling than we usually think. If reasoning correctly is an advantage, intelligent people should try harder to correct their own mistakes. While some of us do train our minds to avoid simple logical errors, in general, that is not what we observe.

      As we keep committing the same mistakes, a couple of questions come to mind: Are we that dumb? Are most of us incapable of logical reasoning? Calling humans rational beings might be an exaggeration,
