
On Clarity, 2

11/2/2018

In my last post, I introduced a principle, Clarity, which is that we ought not to have unclear beliefs. I then tried to define when a belief is unclear. Roughly, what I said there was that a person S's belief is unclear if S doesn't know what it rationalizes or what it is rationalized by. In this post, I'm going to make that definition a little clearer, and then get a little further in actually justifying Clarity. (This post is readable without reading the first one.) I'll argue that unclear beliefs generate avoidable epistemic problems; since they are avoidable, we ought to avoid them; so we ought not to have unclear beliefs.
Let's begin by putting a belief on the table. In the previous post, I had Sperber's example of something Lacanians claim to believe, that the unconscious is structured like a language. I want to switch the example now to get it a little closer to the cases that I really have in mind. So let it be that a given politician lied in saying X on a given occasion. I'll call this belief P.

A person who believes P can do so without knowing what lying is in anything like the detail philosophers try to establish. Or rather they can do this without having any explicit knowledge of that sort; likely they don't even need to know it implicitly, either, at least if Burge is roughly right. 

One controversy about lying is whether it requires an intention to deceive. Option 1 (O1): Lying is asserting some proposition p while believing ~p. Option 2 (O2): Lying is asserting some proposition p while believing ~p with the intention of deceiving the addressee into believing p. This doesn't exhaust the options, but for simplicity's sake, suppose our agent, S, thinks one and only one of these must be right. And suppose further that it's O1. 

First claim: there's a kind of content such that to believe P just is to believe that the given politician asserted X while believing ~X. It is natural to think, e.g., that to believe that vixens can be cute just is to believe that female foxes can be cute. That is, suppose a proposition p involves (in the way characteristic of concepts and propositions---I take no stand!) concepts c1, ..., cn, and that a1, ..., an are true, informative, meaning-preserving analyses of c1, ..., cn. Then there's a kind of content such that to believe p just is to believe p', where p' is p except that c1, ..., cn are "replaced" with a1, ..., an. (Easy example: <vixens can be cute> and <female foxes can be cute>.) Some philosophers seem to deny that there is this kind of content (in fact, Burge again, in a different paper). But that requires denying the intuitive "just-is" statement I began this paragraph with, and it means that the kind of clarifications that, e.g., historians of philosophy give of historical philosophers' beliefs are somehow misguided or a priori wrong. So I'm assuming there is this sort of content.

If there is this kind of content, then we are at last ready to display an epistemic problem with having unclear beliefs. Call P1 what P would be if the concept LIE were replaced with the analysis in O1, and P2 what it would be if LIE were replaced with the analysis in O2. So in fact P = P1. S, recall, did not know whether P = P1 or P = P2. This can cause problems, though. For different sorts of evidence rationalize P1 and P2, and P1 and P2 rationalize other propositions differently. (Consider, e.g., the proposition that the given politician had an intention to deceive about X's subject-matter.) Suppose there's some proposition q such that S's credence function Cr is like this: Cr(q|P1) ≠ Cr(q|P2). (This means S's belief is unclear; on a more precise understanding of that notion than in the previous post, I will say that a belief is unclear when there's a proposition that plays q's role in the person's conditional credences for the given belief.) And now suppose that S knows she believes P. Well, should she conditionalize? Maybe; but she doesn't actually know whether P = P1 or P = P2. If she were to conditionalize, she'd be ignoring her ignorance! So perhaps she should "split the difference": she should adopt (Cr(q|P1) + Cr(q|P2)) / 2 (or some weighted version thereof). But then she fails to conditionalize, and that's bad. This is an epistemic problem!
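To make the worry concrete, here is a toy illustration; the particular numbers are mine and purely for illustration, not anything established above. Let q be the proposition that the politician intended to deceive about X's subject-matter, and suppose Cr(q|P1) = 0.3 while Cr(q|P2) = 0.9 (P2 builds an intention to deceive right into the lie, so it supports q more strongly). If S conditionalizes as though she knew P = P1, she adopts Cr(q) = 0.3; if she splits the difference instead, she adopts (0.3 + 0.9) / 2 = 0.6. Since in fact P = P1, the second policy leaves her 0.3 away from the credence conditionalization recommends, while the first policy simply ignores her ignorance about which analysis of LIE is correct. Either way, something has gone epistemically wrong.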

This problem itself looks a lot like others that people have discussed in the literature. For example, what do we do when we know that p but it's highly probable on our evidence that we don't know that p? (See Williamson.) Or what do we do when our belief is good---properly supported by the evidence---but we have reason to think it's not? (See, e.g., Lasonen-Aarnio, among others.) Roughly the same options! Internalists say, roughly, we ought to split the difference, or something in that direction. Externalists say we ought to stick with our knowledge, or evidentially supported belief, or whatever. Call these mismatch problems. I don't have anything new to contribute to debates about what agents should do in mismatch problems, at least not directly. It's a tough issue.

I do want to point out a couple of things, though. First, agents fall into the mismatch problems I just pointed to without making any kind of rational mistake or being otherwise rationally imperfect. With higher-order evidence, for example, we can just be in a case where we have reason to think we're hypoxic and therefore reasoning badly, even when we're not hypoxic and are in fact reasoning well. Call this kind of case a rationally inevitable mismatch problem.

Second, I claim that someone who is in a mismatch problem is in a pro tanto epistemically bad position. Either they violate plausible epistemic norms like conditionalization, or they fail to take account of the fact that they are likely in a bad epistemic position---a fact that ought to be taken into account, if one can. Perhaps there's a right answer about what to do in these situations, and one is epistemically blameless if one does it. That is, perhaps these are not rational dilemmas (contra, perhaps, people like Christensen). But it's a bad thing overall to be in a situation like this; it's epistemically unfortunate.

Finally, and I'll end just by making this point in outline: in the mismatch case with P that I presented here, S could rationally avoid the problem. That is, she could learn which of O1 and O2 is true. If she were to do that, she would be able to conditionalize normally without failing to take into account any of her ignorance or epistemic imperfections. That is, she would not be subject to mismatch problems concerning lying. And it is philosophy that would have allowed her to do it. Philosophy helps us avoid avoidable mismatch problems, at least those concerning important ethical and otherwise fundamental concepts. So, in order to avoid these epistemically unfortunate situations, agents ought to learn philosophy.

That is, at least in outline, what I will argue in the next post. Here, though, I simply wanted to establish that these sorts of mismatch problems exist, that they are (like all mismatch situations) epistemically unfortunate, and that they are avoidable; therefore, we have reason to avoid them, at least when we can. I'll explore this last point in the next post.