Imagine you and I are walking down a long path. You are ahead, but we can communicate on the phone. If you say, "there are strawberries here" and I trust you, I should not come to believe that there are strawberries where I am, but that there are strawberries wherever you are. If I also know that you are 2 km ahead, I should come to believe that there are strawberries 2 km down the path. But what's the general rule for deferring to somebody with self-locating beliefs?
Let's start with two extreme cases. First, suppose I know nothing at all about my own location and how it relates to yours -- not even that I am not identical to you. The assumption that you believe there are strawberries "here" (as you put it) then only tells me that there are strawberries somewhere in the universe. If you are moderately rational, then this is an uncentred belief you have as well. So in this case, deferring to you amounts to deferring to your uncentred beliefs; your centred (self-locating) beliefs can be ignored. That is, I treat you as an expert just in case, for any uncentred proposition A,\[ Cr_I(A / Cr_U(A)=x) = x. \]
If I have further uncentred evidence that you may lack, I need to conditionalize your credence on that evidence, as usual: \[ Cr_I(A / Cr_U(A / E)=x) = x. \]
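In code, this first deference principle is just: adopt the expert's conditional credence. Here is a minimal sketch (my own toy model; the worlds, propositions and numbers are invented for illustration):

```python
from fractions import Fraction

# Toy model (illustrative, not from the post): worlds are labels,
# uncentred propositions are sets of worlds, and a credence function
# is a dict mapping worlds to probabilities.

def cr(credence, A):
    """Credence in proposition A (a set of worlds)."""
    return sum(x for w, x in credence.items() if w in A)

def cr_given(credence, A, E):
    """Credence in A conditional on evidence E."""
    return cr(credence, A & E) / cr(credence, E)

# The expert's uncentred credences over three worlds:
expert = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}

A = {"w1", "w2"}   # some uncentred proposition
E = {"w2", "w3"}   # my extra uncentred evidence, which the expert lacks

# Deferring: my credence in A is the expert's credence in A given E.
my_cr_A = cr_given(expert, A, E)   # = (1/4) / (1/2) = 1/2
```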
Now for the other extreme: I am certain about my location relative to yours. This is information you may or may not have, and it may be relevant to other things you believe. For example, if I know that I am 1 km behind you but you are uncertain whether I am 1 km or 2 km behind, then conditional on your beliefs I should not be uncertain whether I am 1 km or 2 km behind.
So I should consider your beliefs conditional on the extra information I have. But the information is centred, so conditionalizing your credence on it directly yields the wrong results: I'm not interested in your credence conditional on the (absurd) hypothesis that you are 1 km behind yourself. We need to adjust the content.
For any centred proposition A, let [+n]A be the proposition that A is true n km down the path. My evidence E tells me that E is true here, but to consider your beliefs in light of that evidence, I should consider your credence conditional not on E but on [-1]E: the hypothesis that E is true 1 km behind you.
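To make the shift operator concrete, here is a toy encoding (my own sketch, not from the post): a centred world is a pair (w, p) of a possible world and the agent's position on the path, and a centred proposition is a set of such pairs.

```python
# [+n]A is true at (w, p) iff A is true at (w, p + n), so shifting a
# proposition moves each of its centred worlds n km back.

def shift(A, n):
    """[+n]A: true at (w, p) iff A is true at (w, p + n)."""
    return {(w, p - n) for (w, p) in A}

# Example: in world "b2" the strawberries are at km 2.
strawberries_here = {("b2", 2)}          # true only where the berries are

two_km_ahead = shift(strawberries_here, 2)    # [+2]: berries 2 km ahead
one_km_behind = shift(strawberries_here, -1)  # [-1]: berries 1 km behind
```

Check: `two_km_ahead` is true only at ("b2", 0), i.e. at the position from which the berries lie 2 km down the path.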
We need the opposite adjustment for the content of your self-locating beliefs: knowing that you are 1 km ahead of me, and assuming you are certain that there are strawberries around you, I should become certain that there are strawberries 1 km ahead.
So we arrive at the following rule for cases where my evidence E entails that you are n km ahead: for any centred or uncentred proposition A,\[ Cr_I([+n]A / Cr_U(A/[-n]E)=x) = x. \]
Equivalently,\[ Cr_I(A / Cr_U([-n]A/[-n]E)=x) = x. \]
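A worked toy example of the known-offset rule (my own model; the worlds and numbers are invented for illustration). You are at km 3 and uncertain where the berries are; I am 1 km behind you and see none here; deferring via shifted evidence tells me the berries are 1 km ahead of me:

```python
from fractions import Fraction

# Sketch of the rule Cr_I([+n]A) = Cr_U(A / [-n]E) when my evidence E
# entails that you are n km ahead of me. Centred worlds are (world,
# position) pairs; a credence function maps centred worlds to probs.

def shift(A, n):
    """[+n]A: true at (w, p) iff A is true at (w, p + n)."""
    return {(w, p - n) for (w, p) in A}

def cr_given(f, A, E):
    """f's credence in A conditional on E."""
    num = sum(x for cw, x in f.items() if cw in A and cw in E)
    den = sum(x for cw, x in f.items() if cw in E)
    return num / den

positions = range(0, 6)
berries = {"b2": 2, "b3": 3}             # world -> strawberry location
worlds = list(berries)

strawberries_here = {(w, p) for w in worlds for p in positions
                     if berries[w] == p}

# You are at km 3 (and sure of it), unsure whether the berries are at
# km 2 or km 3:
you = {("b2", 3): Fraction(1, 2), ("b3", 3): Fraction(1, 2)}

# I am n = 1 km behind you, and my evidence E is that there are no
# strawberries here (at my position):
n = 1
E = {(w, p) for w in worlds for p in positions if berries[w] != p}

# My credence in [+1]"strawberries here", i.e. berries 1 km ahead of me,
# is your credence in "strawberries here" conditional on [-1]E:
my_cr = cr_given(you, strawberries_here, shift(E, -n))
# Conditional on no berries 1 km behind you, they must be at km 3,
# i.e. right where you are -- so my_cr comes out 1.
```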
What about in-between cases, where my evidence is not completely silent on matters of self-location, but also doesn't fully settle our relative location? (Every real-life case falls into this category.)
Well, we can apply the previous rule to possible extensions of my evidence that would settle our relative location. To spell this out, let \(Cr_U=f\) be the proposition that f is your credence function, and let D=n be the proposition that you are n km ahead of me. By the law of total probability,\[ Cr_I(A / Cr_U=f) = \sum_n Cr_I(A / Cr_U=f \land D=n)\, Cr_I(D=n / Cr_U=f). \]
When computing \( Cr_I(A / Cr_U=f \land D=n)\) it's important that I conditionalize your credence function not only on my (shifted) evidence E but also on the (shifted) assumption that D=n. So, slightly generalizing the previous rule:\[ Cr_I(A / Cr_U=f \land D=n \land E) = f([-n]A / [-n](D=n) \land [-n]E). \]
Plugging this into the law of total probability, we get the general rule we were looking for:\[ Cr_I(A / Cr_U=f) = \sum_n f([-n]A / [-n](D=n) \land [-n]E)\, Cr_I(D=n / Cr_U=f). \]
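The general rule can also be put in code. In this sketch (my own toy model; the worlds and numbers are invented), a world records both your position and the strawberries' position, you are certain the berries are where you are but unsure where that is, and I am unsure whether you are 1 or 2 km ahead:

```python
from fractions import Fraction

# Centred worlds are (w, p) pairs, where w = (your position, berry
# position) and p is the thinker's own position.

def shift(A, n):
    """[+n]A: true at (w, p) iff A is true at (w, p + n)."""
    return {(w, p - n) for (w, p) in A}

def cr_given(f, A, E):
    num = sum(x for cw, x in f.items() if cw in A and cw in E)
    den = sum(x for cw, x in f.items() if cw in E)
    return num / den

positions = range(0, 8)
worlds = [(you, you) for you in positions]   # berries are where you are

def cw_all():
    return {(w, p) for w in worlds for p in positions}

# Your credence f: the berries are here; you are at km 3 or km 4 (50/50):
f = {((3, 3), 3): Fraction(1, 2), ((4, 4), 4): Fraction(1, 2)}

def D(n):
    """D=n: you are n km ahead of me (centred on me)."""
    return {(w, p) for (w, p) in cw_all() if w[0] == p + n}

E = cw_all()                                       # my evidence: trivial
A = {(w, p) for (w, p) in cw_all() if w[1] == p + 2}  # berries 2 km ahead

cr_D = {1: Fraction(1, 2), 2: Fraction(1, 2)}      # my credences Cr_I(D=n)

# General rule: Cr_I(A) = sum_n f([-n]A / [-n](D=n) & [-n]E) * Cr_I(D=n)
my_cr = sum(
    cr_given(f, shift(A, -n), shift(D(n), -n) & shift(E, -n)) * cr_D[n]
    for n in cr_D
)
# my_cr == 1/2: the berries are 2 km ahead of me iff you are 2 km ahead.
```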
This still isn't entirely general, because it reduces the question of our relative location to the question of how many kilometres you are ahead of me on some path. The fully general rule requires generalizing the [-n] operator and the distance propositions D=n.