Should one act only upon what one knows?

Searching. Mary is in the park, looking for Fred. She recognizes Fred's friend Ted some distance away on the left. Knowing that Fred is often in the park with Ted, she turns that way.
Waiting. Alexandre is waiting for Veronique in a cafe. He's been waiting for several hours now, and is doubtful that Veronique will ever show up. Nevertheless, he thinks it is worth waiting some more.

Mary and Alexandre are acting rationally here, even though Mary does not know that Fred is to the left, and Alexandre does not know that Veronique will ever show up. Even if it turns out that both were wrong, I wouldn't blame them for their decisions.

Situations like this occur all the time. When we make a decision, we often don't know which course of action will be best. All we can reasonably do is consider the available evidence and then go for an option with reasonably high chance of success and relatively low costs. (By 'chance', I mean subjective credence, of course: we can't go by objective chance unless we happen to know what it is, which usually we don't.)

If anything like this is true, then knowledge cannot be the norm of action. That is, it cannot be true that one should act upon p only if one knows that p, as John Hawthorne and Jason Stanley have recently suggested.

One argument in favour of the knowledge constraint is that it explains why we are less likely to make knowledge attributions when the stakes are high than when they are low (see the introduction to Stanley's Knowledge and Practical Interests):

Bank: Hannah stops at her bank on a Friday afternoon, intending to deposit a cheque. She sees a long queue and decides to come back the following day, remembering that the bank was open on the previous Saturday.

We're more inclined to say that Hannah knows that the bank will be open the following day when little is at stake for her than when her whole life depends on depositing the cheque before Monday. This corresponds to what is rational for her to do: if little is at stake, she should come back tomorrow; if a lot is at stake, she should line up in the queue today.

This judgment about rationality is explained by decision theory, without any mention of knowledge: if Hannah is somewhat confident that the bank will be open tomorrow and depositing the cheque today is bothersome, she ought to do it tomorrow unless the cost of not doing it on either day would be too high.
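To make the comparison explicit (a toy model with labels of my own, not anything from Stanley's discussion): let c be Hannah's credence that the bank will be open on Saturday, B > 0 the cost of queuing today, and L > 0 the loss if the cheque is not deposited before Monday. Then, roughly,

\[ EU(\text{come back tomorrow}) = -(1-c)\,L, \qquad EU(\text{queue today}) = -B, \]

so coming back tomorrow is rational just in case \((1-c)\,L < B\). Holding c fixed, raising the stakes L eventually tips the balance towards queuing today -- without any mention of knowledge.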

Nevertheless, the question remains why our judgments about knowledge always follow our judgments about what it is rational to do. The answer is that they don't. Jonathan Schaffer argues in "The irrelevance of the subject" that in cases like Bank, our willingness to attribute knowledge doesn't really go with what is at stake (and thereby with what is rational to do), but rather with the salience of certain doubts. My intuitions are less clear than his, but even if our knowledge attributions did go with what is rational to do in a few Bank cases, that would not be enough to establish the knowledge constraint.

As knowledge is not the same thing as sufficient credence, it is easy to construct cases where an action is rational according to decision theory but not according to the knowledge constraint, and vice versa. The two stories with which I began are of this kind: in Searching and Waiting, the agents act rationally despite not possessing the relevant knowledge. Mary doesn't know whether Fred is over there; Alexandre doesn't know whether Veronique will show up. (They probably do know that there is a certain subjective probability for Fred to be over there and for Veronique to show up, but the proposed knowledge constraint is that for acting upon p, you should know that p, not merely know that your credence in p is sufficiently high.) So in these cases, my intuitions follow the decision theoretic predictions and not the knowledge constraint.

The other direction, knowledge with insufficient credence, is a bit more unusual.

APPD: Fred and Ted are offered a bet for 1 cent that pays infinite happiness if the Anarchist Pogo Party of Germany wins the next federal elections. 1 cent is worth almost nothing to them, and the bet has no other subjective cost. Both Fred and Ted reasonably give a credence of around 0.0001 to the APPD winning. Fred accepts the bet; Ted rejects it.

It seems to me that Ted's action was irrational. Moreover, there is no context in which it is true to say that the action was rational. But there are contexts in which it is true to say that Ted knows that the APPD will not win the next elections. Therefore, it is possible to act upon something that counts as knowledge (that the APPD will not win), but where doing so is irrational. Again, decision theory, but not the knowledge constraint, gives the intuitively right result.
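The decision theoretic verdict on APPD can be spelled out. (To avoid infinite utilities, I'll gloss "infinite happiness" as an arbitrarily large finite utility H; the gloss is mine, not part of the example.) With \(\varepsilon\) the tiny utility of keeping the cent,

\[ EU(\text{accept}) = 0.0001\,H - \varepsilon, \qquad EU(\text{reject}) = 0, \]

so accepting is rational whenever \(H > 10000\,\varepsilon\), which holds for any remotely large H. Ted's rejection comes out irrational however small his credence, as long as it is positive.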

It's true that the decision theoretic standard says nothing about how the credence comes about. And we do blame people if they act upon irrational credences. But in this case, we should really blame them for the credences, not for the actions. Often, that's exactly what we do, when we say that they shouldn't have been so confident, that there was no evidence, etc.

NB: the decision theoretic standard for rational action is not high credence. There are clear cases where one should not act upon p even though one has high credence in p, and cases where one should act upon p even though one has low credence in p. What is the rational thing to do depends not only on credence, but also on what is at stake -- more generally, on the potential benefits and costs.
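In the simplest case -- acting upon p brings a benefit B if p is true and a cost C if p is false (a schematic rendering of my own) -- one should act upon p just in case

\[ c(p)\,B > (1 - c(p))\,C, \quad\text{i.e.}\quad c(p) > \frac{C}{B + C}. \]

The threshold moves with the stakes: if C is much larger than B, even a credence of 0.99 can be insufficient; if B is much larger than C, as in APPD, a credence of 0.0001 can suffice.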

Comments

# on 15 April 2007, 17:33

Hi wo,

I wonder if one might insist that in both Searching and Waiting, the respective characters should act only on what they know about the relevant objective chances. True, as you point out, we usually don't know the relevant chances, but then again, some norms could just be very demanding. (It might help if sometimes, one only needs to know whether the relevant chances are high or low, never mind their exact values.)

I also wonder if there's a real clash between decision theory, and the claim that knowledge is the norm of action (KNA). There is an important sense in which Sue should not eat the poisonous apple, even though she's starving, and her evidence indicates that it's edible. Alas, she eats it. Decision theory says she's not irrational; KNA says her action has fallen short of its expected norm. But the two seem compatible, insofar as the first is concerned with what is rational for Sue to do, based on her available evidence, and the second with what she has objectively good reason to do (given she doesn't want to die!).

Finally, I wonder what is lost if advocates of KNA make the lesser claim that justified true belief is the norm of action.

# on 15 April 2007, 17:57

I think this point is somewhat redundant in light of the previous comment, but it seems to me that one obstacle you face in arguing against the knowledge account in this manner is that we have to assume that some form of, say, decision-theoretic consequentialism doesn't determine what the right choices are in any given setting. If the right choice is determined by the evidence, and the subject we intuitively take to be doing the right thing knows the evidence (though not the objective outcomes), it's not clear that the intuition counts against the knowledge account.

Isn't there an easier way to knock down the knowledge account?
(1) The conditions that make it such that a belief is Gettiered do not necessarily make it such that it is wrong to act upon a consideration.
(2) The conditions that make it such that a belief is Gettiered ensure that the belief doesn't constitute knowledge.
(3) According to the knowledge account, if some consideration you believe is not known to be true, you shouldn't act upon the basis of that consideration.
(C) The knowledge account needs to be revised.

# on 16 April 2007, 04:46

Thanks!

Weng Hong: The main problem with the demanding KNA norm is that we are bound to constantly violate it, and the norm tells us nothing about what to do instead. But intuitively, it is very clear in most of these cases which option is better and which worse. (The other problem is that even when it is applicable, the KNA norm sometimes gives the wrong advice, as in the APPD case.)

You're right that there is a sense in which Sue did not have an objectively good reason to eat the apple. But I find it odd to turn that into a 'norm': there's nothing blameworthy about Sue's action. In fact, whenever the decision theoretic norms and KNA give different advice, I would blame people for following KNA -- as when Mary doesn't go to the left and when Ted rejects the bet.

Clayton: I think your first point depends on what it means to "act upon p". I took it that in Searching, Mary acts upon Fred being somewhere to the left (which she doesn't know). One could say instead that she acts upon the evidence of Ted being to the left and Fred often being where Ted is (which she does know). But this only shifts the problem: suppose the person she took for Ted is somebody else who happens to look exactly like Ted. Her decision would still have been right, but again would not be based on knowledge.

You would have to shift the knowledge acted upon even further, ultimately to her immediate sensory evidence: at least she knows that it looks to her as if Ted is over there, and this is what she acts upon. But now the norm doesn't look very interesting any more: if one always knows one's immediate evidence, and one should act upon one's evidence, then one should act upon what one knows, alright; but the evidence is what counts here, not the knowledge.

Moreover, evidence doesn't determine beliefs: Mary could have strange background assumptions that lead her to infer from her evidence that the person over there is not Ted. That is, to make a decision, she needs to figure out where Fred is most likely to be, but her sense-data (what she knows) entail nothing to that end.

Re your simpler argument, I like that as well. But one could respond by simply weakening KNA to JTBNA, as Weng Hong suggests. I think the KNA claim is more substantively wrong.

# on 16 April 2007, 06:52

Hi wo,

I guess it partly boils down to what is involved in violating a norm. Do defenders of KNA really claim that one should always obey KNA on pain of irrationality? That sounds really implausible to me. It seems more plausible to hold that KNA is an (epistemic?) ideal towards which we should strive: falling short of the ideal makes us imperfect, but we shouldn't be blamed for it. Sue is not blameworthy for eating the poisonous apple, but insofar as her actions should be based on her knowledge, she is not ideal. Nonetheless, there may be cases in which knowledge can be easily acquired, and in such cases, one may be blamed for violating KNA.

Also, if there is more than one norm governing action, then although it would be good if we obey all of them, there might be occasions when we obey one only by violating another.

In response to APPD, I think a defender of KNA might say that if Ted is irrational, it is because he is not acting according to what he knows about the objective chance of APPD winning, namely, that however slight, it is non-zero.

(By the way, I should mention that I'm just trying to see how someone sympathetic to KNA might respond to your arguments - I actually agree with the gist of what you said.)

# on 18 April 2007, 00:20

Wo,

Hawthorne and I have been working on a paper, "Knowledge and Action", for a couple of years, and it is now finished. It responds to the points you raise here and others. It has been sitting at a journal for a few months, and we've been waiting ages for the response. I don't have the time now to spell everything out, but I can make a few quick comments here.

First, however, I didn't understand your cases. What are the other options Alexandre has in WAITING? I just don't know what to say about the case until I know this. It seems to me that if it is doubtful that Veronique will ever show up, Alexandre isn't rational to wait, and you don't need the knowledge account to explain that; decision theory will do. Anyway.

I also didn't understand SEARCHING. Mary knows that it's sufficiently likely that Fred is in the park to warrant the effort of turning her head. What exactly is the problem? (see my PPR reply to Neta, on my website, and the discussion of knowledge of chances in my reply to Schiffer, also there).

Let's make a distinction between complying with a norm, and excusably violating a norm. Where one has every reason to think that one knows P but does not, acting on P is quite excusable. But that is no objection to the norm that one ought to act on P only if one knows P. On the contrary, the need for an excuse in the case is explained by that norm. Maybe all we can do is try to comply with a norm; that doesn't mean that we've successfully complied with the norm. It just means that our action is understandable, or excusable. As you point out, we don't blame people for acting on what they don't know. But their actions are excusable only if they acted on what they thought they knew. We need the knowledge norm of action to explain the pattern of excuses we allow.

As to whether decision theory explains what the knowledge norm explains: I was particularly puzzled here. Decision theory doesn't at all explain the fact that ordinary humans use the term "know" in the appraisal of action. That's the fact that I was most eager to explain in the book. Decision theory has no explanation whatever of that phenomenon, which you yourself fully admit ("This judgment about rationality is explained by decision theory, without any mention of knowledge"). I'm trying to explain why people use the concept of knowledge in these circumstances. Maybe you don't think people *should* use the concept of knowledge to appraise actions, but that's a different point. I'm trying to defend the norms we actually have.

I also didn't understand APPD, and I think maybe you are not seeing the relation between the knowledge norm and decision theory. I don't see them as conflicting, but as addressing different issues. Decision theory tells us what we ought to do -- we ought to do X iff doing X maximizes expected utility. The knowledge norm concerns something different; when we act, we act for reasons. What epistemic relation do I have to bear to a proposition for it to be a good reason for acting? This is what the knowledge norm is addressing.

I can do the right thing, but for the wrong reasons (e.g. if I do what I ought to do according to decision theory, but my reasons for acting are not known by me). In APPD, Ted ought to accept the bet; that's fully consistent with the knowledge norm for action. The knowledge norm is just a necessary condition for an action to be fully rational.

You also endorse Schaffer's paper. I haven't published a reply to Schaffer's paper, but I suspect I will in the near future. Schaffer's intuitions in High & Fast and Low & Slow are exactly predicted by the version of interest-relative invariantism I defend in Chapter 5 of my book. The reason is that in High & Fast, the proposition that the bank will be open is not a serious practical question, and in Low & Slow, the proposition that the bank will be open is a serious practical question. So Schaffer is wrong that these intuitions show that salience, rather than stakes, is guiding our intuitions. The theory I develop in Chapter 5, blindly applied to these examples, straightforwardly explains exactly the intuitions Schaffer has. Schaffer is just mistaken about how the stakes function.

# on 18 April 2007, 01:54

I should explain the point about Schaffer's paper more explicitly. Here are Schaffer's two examples that are supposed to show that Stakes do not guide our intuitions:

Low & Slow: On Friday afternoon, Sam is driving past the bank with his paycheck in his pocket. The lines are long. Sam would prefer to deposit his check before Monday, but he has no pressing need to deposit the check. He has little at stake. Sam remembers that the bank was open last Saturday, so he figures that the bank will be open this Saturday. He is right—the bank will be open.
As Sam is about to drive on, his car dies, right beside the bank. Now he has an hour to kill before the tow truck comes. He could easily deposit his check, or at least look at the hours posted on the door to confirm that the bank will be open this Saturday. But instead Sam just dozes in the backseat. So, does Sam know that the bank will be open this Saturday?

High & Fast: On Friday afternoon, Sam is driving past the bank with his paycheck in his pocket. The lines are long. Sam would prefer to deposit his check before Monday, and indeed has pressing financial obligations that require a deposit before Monday. His entire financial future is at stake. Sam remembers that the bank was open last Saturday, so he figures that the bank will be open this Saturday. He is right – the bank will be open.
As Sam is about to stop to double-check the bank hours, he remembers that he promised to buy a present for his wife. She will be furious if he forgets – his whole relationship is at stake. The stores are about to close. Sam must choose. So Sam makes a split-second decision to drive past the bank and pick up a present for his wife instead, thinking that, after all, the bank will be open this Saturday. So, does Sam know that the bank will be open this Saturday?

According to Schaffer, Low & Slow Sam does not know that the bank will be open, but High & Fast Sam does know that the bank will be open. This is a problem for IRI, if IRI predicts, as Schaffer claims, the opposite intuitions.

In Knowledge and Practical Interests, I give several characterizations of what it is for a proposition to be a "serious practical question", and on these characterizations, the proposition that the bank will be open on Saturday does not, for Low & Slow Sam, in fact meet the definition of a practically irrelevant proposition. A Low Stakes situation is, by definition, one in which the proposition putatively known is practically irrelevant. But in Low & Slow, the proposition that the bank will be open is not practically irrelevant for Sam.

Here are two definitions I give in K&PI of when a proposition is practically irrelevant for a person at a time (pp. 94-5). According to one, a proposition is practically irrelevant at a time t for a subject S if and only if its truth or falsity would not affect the preference ordering of the actions available to S at t. According to another, a proposition is practically irrelevant at t for X if and only if, where a1, …, an are the actions at X's disposal at t, the differences between the warranted expected utilities of a1, …, an relative to the nearest states of the world in which p are not meaningfully different from the differences between the warranted expected utilities of a1, …, an relative to the nearest states of the world in which ~p. According to both definitions of practical irrelevance, the proposition that the bank will be open on Saturday is practically relevant for Low & Slow Sam, and so Low & Slow Sam is not in a Low Stakes situation.
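Schematically (an illustrative rendering, not a quotation from the book): where a1, …, an are the actions at X's disposal at t, and \(EU_p\) (resp. \(EU_{\neg p}\)) is warranted expected utility computed relative to the nearest states of the world in which p (resp. ~p), the second definition says that p is practically irrelevant at t for X iff

\[ \bigl| EU_p(a_i) - EU_p(a_j) \bigr| \approx \bigl| EU_{\neg p}(a_i) - EU_{\neg p}(a_j) \bigr| \quad \text{for all } i, j. \]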

The options at Sam's disposal are to go check the bank hours, or to laze about in his car. Given that he has nothing to do at that time, the proposition that the bank will be open on Saturday is not practically irrelevant. Consider, for instance, the action of lazing about in his car and then going to the bank tomorrow, versus the action of taking the easy opportunity now to see whether the bank will be open tomorrow. If the bank will be open tomorrow, then the expected utility of waiting around in his car is slightly higher; if the bank will not be open tomorrow, then given the cost of driving to the bank tomorrow, the expected utility of checking now (given the easy opportunity) is much higher than the expected utility of lazing about in his car. It follows, contra Schaffer, that Low & Slow is in fact not (by definition) a Low Stakes situation.
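With made-up numbers (purely for illustration): suppose checking the posted hours now costs Sam 1 unit of utility, while a wasted drive tomorrow costs 10. Where p is the proposition that the bank will be open,

\[ EU_p(\text{laze}) - EU_p(\text{check}) = 1, \qquad EU_{\neg p}(\text{check}) - EU_{\neg p}(\text{laze}) = 9. \]

The utility differences between Sam's options thus depend markedly on whether p, so p fails both definitions of practical irrelevance.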

Schaffer's High & Fast also poses no problems for IRI. By the same definitions on which the proposition that the bank will be open is not practically irrelevant for Low & Slow Sam, that proposition is practically irrelevant for High & Fast Sam. This may at first seem unintuitive, but a moment's reflection shows that it is the correct consequence. Whether a proposition is practically relevant depends upon one's practical situation. If one is in a dire practical situation, relatively important propositions become practically irrelevant. For example, if an ax-murderer attacks me now, the proposition that the bank will be open on Saturday suddenly becomes practically irrelevant for me, even if I may be evicted from my apartment if I don't deposit my check by Monday.

# on 18 April 2007, 02:13

As I said, your APPD case (if one shares your intuitions) doesn't raise a problem for the principle that one ought to act only upon what one knows, which is the principle I defend in my book. This principle is a claim about a necessary condition for acting, and your challenge is to a sufficient condition for acting. That is, your APPD case does raise a problem for the other direction of the biconditional, which is the sufficient condition for acting:

If one knows that p, then it is alright to act on p.

As I said, I don't defend this claim anywhere in my book or in anything I've published so far. But Hawthorne is attracted to this principle, and in our joint paper we defend it. I think there is much to be said for it, but I haven't yet completely embraced it. Be that as it may, I don't in the end think that APPD raises a devastating problem for it. We discuss cases like APPD in our joint paper, and give a particular account of it. But you're just going to have to wait for the paper to appear, since I have to go prepare my classes now!

# on 18 April 2007, 03:48

Many thanks Jason, that's very helpful. I was somewhat unsure that I understood your position correctly, and still am a little unsure. Guess I'll wait for that paper.

Just quickly on my examples. The point in Waiting and Searching is that the agents rationally act upon something they don't know, viz. upon their tentative beliefs that Fred is to the left and that Veronique will still show up. Alexandre has better things to do than sitting in the cafe, but he desperately wants to meet Veronique, so it's worth waiting even though it's doubtful she will show up. And as I said in the post and in the reply to Clayton, Mary does *not* know that it's likely that Fred is to the left, nor does Alexandre know that it's sufficiently likely that Veronique will show up -- unless by that you mean only that they know about their sufficient subjective credence.

I see your point about ordinary appraisal and blaming. But I wonder what the data here really shows. Aren't there cases where we praise people for acting upon false beliefs for which they had very good evidence? Think of a judge in court, or, to a lesser degree, Alexandre and Mary: "what she did was absolutely right; after all she had very good reason to believe that Fred was over there (even though the chance of him being there was 0 because he was dead by then)". When we judge actions, we usually also judge the credences on which the actions are based. Thus norms about credence -- that such-and-such things should only be believed upon evidence of a relevant kind -- enter into judgments about actions. Given that, I don't find it surprising that in some cases, we do use "knowledge" in appraisals: that's a handy way of conveying that the subject had good evidence for the belief they acted upon.

That said, I have to admit that I find some "reasons" talk puzzling, especially in philosophy, where many say that a good reason has to be true. Supposing that one should do p for reason q only if q is a good reason in this sense for doing p, something like the knowledge norm might follow. But I'm not sure this is a folksy notion of "good reason", and anyway I find it hard to press all actions into the "did p for reason q" scheme, without turning q into a proposition about the agent's subjective evidence or credence (thereby making the knowledge requirement uninteresting): what is the reason for which Alexandre keeps waiting?

# on 18 April 2007, 04:07

Wo,

I have to disagree strongly with your claim that there are "cases where we praise people for acting upon false beliefs for which they had very good evidence". I don't think we ever do that. What we do is *excuse* the judge who sentenced the innocent man to death on the basis of good but misleading evidence. We don't praise him. I worry you're letting theory guide your intuitions.

# on 18 April 2007, 04:23

Wo and Jason,

I think it is a mistake to endorse the inference:
PP: If A is praiseworthy in light of X-ing, A's X-ing was permissible.

I might be non-culpably mistaken in thinking that X-ing is right when Y-ing is. I might know that X-ing comes at a significant cost to me and do it from the best of motives. It seems I should be praised, but I've acted impermissibly.

I thought that to understand the difference between an excusable action and a justified action, we talk about actions that involve an agent acting against an undefeated reason (without the agent being aware of this) and those that involve the agent acting without acting against an undefeated reason. Actions in the first category are merely excusable; actions in the second are justified, permissible, right, etc. If that is the way to understand the justification/excuse distinction, there is nothing in the notion of excusable action that precludes the possibility that the agent who excusably fails to do what he should nonetheless acts in a praiseworthy fashion.

[So far as I can tell, this line is the natural one to take in order to deal with certain problems concerning moral luck. Second-order moral judgments having to do with praise and blame are true in virtue of facts accessible to the subject; first-order moral judgments having to do with permissibility, justification, etc. need not be true simply in virtue of the facts that determine the truth or falsity of second-order moral judgments.]

Oh, and wo,

I've been working on developing something along the lines of the JTBNA and JTBNB for a while now in a couple of papers. It's a lovely view. Hopefully, it is a view that will someday see the light of day.

# on 18 April 2007, 04:45

Jason, I'm sorry for being unclear on this point. When I speculated that *there are cases* where we praise people for acting upon false beliefs for which they had very good evidence, I meant to make an existential claim. I didn't want to say, as you took me to say, that *whenever* people act upon false beliefs for which they had very good evidence, we are willing to praise them for the action. Of course we're not willing to praise people for actions with terrible consequences, as in your example. When in other cases we say things like "what she did was absolutely right" etc. and reward judges, police officers and firemen for generally acting in accordance with the available evidence (even if that often turns out to have been misleading), what looks like an appraisal to me might look like an excuse to you. I'm not sure how to settle this disagreement.

Clayton, I think you have a good point here. I was indeed assuming a close link between praise-/blameworthiness and following/violating norms. I gotta think more about this. What is JTBNB?

# on 18 April 2007, 04:59

Wo,

Right, I understood. But I think the case I described is just a dramatic way of making the point that we don't praise people for acting for good but misleading reasons; we think their actions are excusable, not praiseworthy. Suppose I've got excellent but misleading reasons for thinking that the cake is done -- say, the timer went off, but it is broken. On that basis, I take the cake out of the oven, even though it isn't done yet. Surely, I don't get congratulated. I shouldn't have taken the cake out unless I knew it was done. That's a perfectly ordinary thing to say; in fact, it's how we describe the situation. If you don't recognize that way of speaking as natural, you've been blinded by theory. We should try to explore the consequences of taking this perfectly ordinary way of speaking at face value.

# on 18 April 2007, 05:00

The extra B is for belief. I defended something like the JTB accounts for belief and assertion in the dissertation (a belief, on this account, satisfies the relevant theoretical norms, is permissibly held, etc., when it is faultlessly held and is faithful to the world).

# on 21 April 2007, 21:00

Why is it rational to expect to see a friend in the park if the friend is not in sight? Friends stay close to each other. The same goes for waiting more than a few hours for a friend: there's no reason to think that waiting another few hours will have the desired result.

# on 23 April 2007, 07:11

OK, I should have given more background to these examples. There are lots of bushes in that park, and lots of people -- so Fred might be close to Ted, but out of sight. And Alexandre doesn't have a date with Veronique; he merely has some reason to believe she will come to the cafe, and he desperately wants to meet her. (For the full background story, see the movie "The Double Life of Veronique", where Alexandre actually waits for 48 hours.)

On second thought, I think there are many dimensions along which we evaluate actions:

1. Outcome: if I randomly kick a stone and thereby open a door to a treasure or kill a dangerous enemy of humankind, I might be praised and rewarded. If the stone kills a cute bunny or causes an avalanche killing innocent people, I'll more likely be punished.

2. Decision theoretic rationality: if in a complex situation with lots of options and insufficient evidence I pick out the option with the highest expected utility, I can be said to have made the right decision. And I can be criticized if I make a different choice.

3. Reasonable credences: if as a judge or fireman or scientist I carefully consider all the available evidence and come to a reasonable conclusion upon which I then act, say in publishing a book called Philosophiae Naturalis Principia Mathematica, I may be praised and rewarded for my actions and conclusions (even if they ultimately turn out to be wrong). And I may be criticized for acting upon beliefs that were not supported by my evidence.

4. Truth: if I truly believe that P and that, given P, doing Q will have some desired result, then doing Q will lead to the desired result. Whereas if my beliefs aren't true, my action will probably not have that result. In general, acting upon true beliefs means acting successfully -- no matter whether the belief is based on evidence. In many contexts (most clearly in quiz shows and exams), success is all that matters, and then I can be said to have done the right thing or made the right decision.

The list could probably be continued. Anyway, I don't think it is helpful to isolate any one of those dimensions, or any combination of them, and call it 'the' norm of action, unless that is meant to be stipulative. And of course the dimensions interfere with one another. We don't praise people for actions that lead to half-baked cakes or innocent people dying (bad outcome), even if the actions are based on knowledge.

I'm inclined to say that there is no separate norm of knowledge because a) the above conditions even in combination do not entail knowledge, but b) in cases where these conditions are satisfied without the agent possessing the relevant knowledge, it seems quite wrong to criticize them. Example: I've got very good reasons for believing that the cake is done, say, because I've just heard the timer alarm going off in the kitchen. Moreover, my belief is true: the cake really is done. But in fact, the timer doesn't work any more and the sound I heard was caused by a very rare bird outside my kitchen window. So my belief doesn't amount to knowledge. Nevertheless, I go and take the cake out of the oven. Would I be criticized because I shouldn't have taken the cake out unless I knew it was done?

# on 25 April 2007, 15:18

Wo,

I don't at all disagree with most of what you're saying. But it doesn't conflict with the view I take that there is a knowledge norm of action (though for the full story, you'll have to see the paper with Hawthorne). Briefly, here is why. Someone can do the right thing, but for the wrong reasons. To take an example from our paper, suppose it is the right thing for a surgeon to perform a very risky operation that puts the patient's life at risk; there is no better option. But if the surgeon performs the risky operation based on her belief that the patient will survive, then she has done so for the wrong reason. It is this phenomenon we are trying to capture by the knowledge norm.

Most of your points have to do with *doing the right thing*. You're emphasizing that the knowledge norm doesn't explain when someone does the right thing, regardless of their reason for acting. You are right that the knowledge norm is not the full story about doing the right thing. I never claimed it was. In the paper, we adopt a version of objective Bayesianism, where probabilities are epistemic. This is the story relevant for doing the right thing. The knowledge norm comes in when we want to explain what it is to do the right thing for the right reasons.

As for your final example: yes, you have violated a norm, and I think it is clear that you have. But since we all want some cake, we probably aren't going to spend any time criticizing you. Our mouths will be too full of cake!
