I want to explore a principle I will call Clarity. My point will not be to defend it, exactly, but rather to clarify it and put it to work in an argument that seems to me very interesting. Here's a rough formulation:
Clarity. All agents are rationally required not to believe any proposition that is unclear to them.
This formulation leaves a lot open (and thus might itself count as unclear right now), but I want to focus on what 'unclear' might mean. To do that, we need to see why a principle like Clarity might be intuitive in the first place; that will provide some discipline on what 'unclear' can mean.