


Legal Notes


Developing Legal Arguments


Aleksander Peczenik

A Coherence Theory of Juristic Knowledge*

1. The Problem: Is Knowledge of the Morally Justified Interpretation of Law Possible?

Moral values play a great role in legal argumentation and decision-making. To be sure, both are based on such institutional sources as statutes, precedents, legislative history, etc. Yet even justice is obviously relevant in legal reasoning. Another interesting fact is the existence of so-called legal dogmatics (Rechtswissenschaft, Rechtsdogmatik, “science of law”, legal doctrine). Legal doctrine is a good example of a practice of argumentation that pursues knowledge of the law, yet in many cases leads to a change of the law.

No wonder that generations of Legal Realists (in Sweden, Hägerström and the Uppsala School) repeatedly “deconstructed” legal argumentation as a mere façade concealing the fact that lawyers follow their feelings and emotions.

Now, cannot one claim – contrary to the Legal Realists and quite close to the “spirit” of Natural Law theories – that norm-expressive statements and value statements in legal argumentation can be well grounded and thus are not a mere expression of feelings? Consequently, cannot one argue that legal dogmatics, evaluative and yet presenting itself as a kind of science, gives us knowledge of the morally justified interpretation of law?


2. Weighing in the Law

The key topic in this context must be weighing and balancing. For both legal rules and juristic opinions are almost always justifiable with recourse to a weighing of various reasons. The reasons to be weighed are mostly values and principles. But I think that any rule can be weighed against other reasons (cf. Peczenik 1989, 80 ff. and Verheij 1996, 48 ff.).

For example, statutory interpretation is an important field of legal argumentation where a weighing of reasons, including moral ones, is inevitable. Various reasons and methods, such as literal interpretation, analogy, systemic interpretation, historical interpretation and goal interpretation, support statutory interpretation in hard cases. The choice between the alternatives depends on a weighing and balancing of various legal arguments (Peczenik 1995, 376).

What is weighed is prima facie. A prima facie rule thus does not determine definitive duties; the latter must result from a weighing and balancing of all morally and legally relevant values, principles and rules in the particular case (cf. Peczenik 1995, 444 ff. and 484 ff.).


3. Cumulating of Arguments and Chains of Arguments

Different ways of explicating weighing are attractive for different purposes. Let me speak about only one, in my opinion crucial from an epistemological point of view.

Thus, weighing can be conceptualized as aggregation of arguments and construction of chains of arguments. As soon as one states that one thing weighs more than another, the question arises: “Why?” Then one needs another argument and another act of weighing. Briefly: x may weigh more than y in isolation but, in a certain situation, z can occur and reverse the order: x weighs less than y+z. Then x+q perhaps weighs more than y+z. In this way, the weights can be aggregated (cf. Peczenik 1997).
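The arithmetic of this aggregation can be sketched in a few lines of Python. The numeric weights below are invented purely for illustration, and simple summation is only one crude way of modeling aggregation, not Peczenik’s own formalism:

```python
# A toy model of aggregating argument weights: co-directional reasons
# are (very crudely) assumed to add up. All numbers are illustrative.

def aggregate(reasons):
    """Ceteris paribus, reasons pulling in the same direction add up."""
    return sum(reasons.values())

x, y, z, q = 5, 3, 4, 3

# x weighs more than y in isolation...
assert x > y
# ...but z can occur and reverse the order: x weighs less than y+z...
assert x < aggregate({"y": y, "z": z})
# ...and then x+q weighs more than y+z.
assert aggregate({"x": x, "q": q}) > aggregate({"y": y, "z": z})
```

The model is deliberately naive: it captures only the reversal effect described in the text, not the mutual dependence of weighings discussed later.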

To be sure, not all reasons may be cumulated. But reasons proffered in legal argumentation cumulate often enough to make cumulation an interesting rule of thumb: ceteris paribus, two reasons pulling in the same direction are jointly stronger than each of them alone.

Moreover, cumulating of reasons may be structured as a chain of arguments. Intuitively, a chain of arguments gives us more information than mere cumulating would. The chain “x because of q because of r” is stronger than the cumulation “x and q and r”. This idea of cumulating-and-chain must not be confused with chain-without-cumulating. A well-known example of the latter is hearsay evidence in legal process. Charles says that p because John told him that p, and John told him so because he heard it from Peter. If Charles knows nothing more than what he has heard from John, and John nothing more than what he heard from Peter, the weight of Charles’s assertion is obviously much less than that of Peter’s. Such a chain weakens – not strengthens – evidence. But justificatory chains are different. Each reason in the chain already has a weight, and this weight is increased when the reasons that support it are added. Ceteris paribus, the longer the chain of thus cumulated arguments supporting a statement, the greater the weight of the statement (cf. Peczenik 1996, 150 ff.).
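The contrast between the two kinds of chains can be sketched as follows. The multiplicative decay factor for hearsay and the additive model for justificatory chains are my own illustrative assumptions, not anything proposed in the text:

```python
# Toy contrast: a hearsay chain weakens with length, a justificatory
# chain of cumulated reasons strengthens. Numbers are illustrative.

def hearsay_weight(base, links, loss=0.7):
    """Each transmission link preserves only a fraction of the evidence:
    the longer the hearsay chain, the weaker the final assertion."""
    return base * loss ** links

def justificatory_weight(weights):
    """Each reason already has a weight; the weights of supporting
    reasons are added, so a longer chain strengthens the statement."""
    return sum(weights)

peter = 1.0
charles = hearsay_weight(peter, links=2)         # Peter -> John -> Charles
assert charles < peter                           # hearsay chain weakens

direct = justificatory_weight([0.5])             # "x" alone
chained = justificatory_weight([0.5, 0.3, 0.2])  # "x because of q because of r"
assert chained > direct                          # justificatory chain strengthens
```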

But this theory should not be misunderstood. The assertion “each reason already has a weight” does not mean that the weight is a kind of foundation that would be impossible to doubt. Each reason already has a weight due to another chain, for example “x because of a because of b”.


4. Foundationalism and Coherentism

Such a chain of arguments has no obvious end. To be sure, foundationalists claim that all knowledge ultimately rests on evident foundations, such as empirical data (cf., e.g., Chisholm 1966, 30 ff.). However, foundationalism has been rebutted: the alleged foundations are not certain. Its main competitor is coherentism. Roughly speaking, whatever is justifiable is justifiable on the basis of the background system of beliefs (or, in Keith Lehrer’s terminology, of acceptances and preferences; Lehrer 1997, 3).


5. The Network of Big Circles

The most profound problem of coherentist justification is its circularity. If nothing is an unshakable foundation of knowledge and everything may be doubted, I need reasons for reasons for reasons… etc. To avoid an infinite regress, a coherentist must accept circularity. Indeed, a coherent system of acceptances and preferences is like a network of argumentative circles, mostly quite big ones. Metaphorically, a chain of arguments sooner or later bites its own tail, and thus may be represented as a circle. In such a chain, p1 supports p2, p2 supports p3, etc., and pn supports p1. “Support” is only explicable as reasonable support: p2 follows from p1 together with another premise, say r1. This premise r1 is reasonable, which implies that it is a member of another such circle.
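The picture of a network of circles can be sketched as a directed support graph. The propositions and the depth-first search below are merely an illustrative rendering of the metaphor, not part of the original argument:

```python
# Toy rendering of the "network of big circles": support relations form
# a directed graph, and chains of justification close into cycles.

def in_cycle(node, graph):
    """Depth-first search: does some support chain starting at node
    eventually lead back to it?"""
    stack, seen = list(graph.get(node, [])), set()
    while stack:
        cur = stack.pop()
        if cur == node:
            return True
        if cur not in seen:
            seen.add(cur)
            stack.extend(graph.get(cur, []))
    return False

support = {
    "p1": ["p2"], "p2": ["p3"], "p3": ["p1"],  # p1 supports p2, ..., pn supports p1
    "r1": ["p2", "s1"],                        # r1 is the extra premise for p1 -> p2 ...
    "s1": ["s2"], "s2": ["r1"],                # ... and a member of another circle
}

assert in_cycle("p1", support)   # the chain of arguments bites its own tail
assert in_cycle("r1", support)   # the auxiliary premise belongs to its own circle
```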


6. No Privileged Position for a Skeptic

But cannot a coherent system of acceptances and preferences be false, “isolated from the world”? To understand coherentism, one must keep in mind that neither skepticism in general nor this isolation objection in particular has a privileged status compared with other beliefs. It is merely a competitor of other beliefs. If someone says that my personally justified, coherent system of acceptances and preferences is not “objectively” justified, he has to win the competition with my system (cf. Lehrer 1990, 176 ff.). Consequently, if I want to argue that I am justified in accepting or preferring x, I must appeal to my system of acceptances and preferences at that time. And if the skeptic wants to convince me that I am wrong, an appeal to my acceptance system at that time is again all he can do. If what the skeptic accepts is less reasonable than what I accept, he loses. The loss means that his objection is defeated. Lehrer has taken reasonableness as a primitive concept (id. 1990, 127).


7. Weighing More = Being More Reasonable Than

The expression “weighs more than”, at least in juristic contexts, works analogously to Lehrer’s expression “is more reasonable than”. Thus, one may reformulate the concept of winning the justification game in terms of “weighing more”: if what a person accepts or prefers weighs more than the skeptical objection, then the person wins the justification game against the skeptic.


8. Law and Morality

Back to the law. It is plausible to say that a reasonable legal argumentation is a special case of a reasonable moral argumentation. (This thesis is stronger than Alexy’s view that legal argumentation is a special case of practical argumentation; cf. Alexy 1989, 212 ff.).

Both moral substantive reasons and legal authority reasons, based on such sources of the law as statutes, precedents, preparatory legislative materials etc., are relevant in both moral and legal reasoning. Various sources of the law have, however, a privileged position in legal reasoning:

  • Their weight in the law is so great that only particularly strong substantive reasons are sufficient to override them.
  • They are apt to explain legal decisions. It is thus quite natural to say, “A was sentenced to prison because the statute required it”.
  • They are also necessary from the legal point of view. Without sources of the law, a conclusion has no legal support.

Notice that this theory does imply that a good lawyer weighs and balances the law against his “purely” moral convictions (those resulting from a moral deliberation which does not take the law into account). This, however, does not make him a morally bad man. For what a good man is can only be told when one takes into account the role in which the “good man” acts. If he acts as a lawyer, the law morally binds him.

To put the same thing in different words: The fact that Peter is a lawyer is a morally relevant reason for Peter to make a different weighing of other moral reasons than he would make, if he were not a lawyer.


9. Knowledge of Morally Justified Interpretation of Law

Coherentism is applicable to all knowledge, even in the natural sciences. Yet the question remains: in what sense, if any, can (moral and legal) evaluations give us knowledge? To say that a theoretical proposition gives us knowledge may be thought of as the same as to say that it is true. Can a legal interpretative statement – supported by a weighing of moral arguments – be true, even if it is justifiable only by a set of premises containing evaluations?

One way of answering our question involves a theory which is cognitivist as regards prima facie norm- and value statements and, at the same time, noncognitivist as regards all-things-considered norm- and value statements. The former are true if they correspond to the cultural heritage of the society. The latter may be more or less reasonable in the light of the acceptance- and preference-system of an individual, but they are not true in the ontological sense.

In view of such a theory, knowledge of prima facie values is possible, whereas a well-argued belief concerning an all-things-considered value merely expresses something essentially similar to knowledge, not knowledge in the literal sense.

But how can I be sure that this theory is adequate? After all, many well-known arguments can be proffered both against noncognitivism and against the grounding of prima facie values on cultural heritage. Of course, I cannot be sure. Rather, the theory appears to be more reasonable than its competitors. This estimate of reasonability is defeasible. In other words, prima facie moral and legal statements are prima facie truth-evaluable in the light of cultural heritage whereas all-things-considered moral and legal statements are prima facie not truth-evaluable.

Notice an important point. The idea of prima facie, originally introduced in the context of moral and legal theory, is used here at an abstract philosophical level. Not only the law and morality are defeasible; even basic philosophical ideas are. Coherentism, as I understand it, is a general theory of defeasibility.


10. Should the Law Be Coherent?

It is important to notice that the postulate that the law should be as coherent as possible does not follow logically from the epistemological coherence theory. Different values may have different weight in different parts of the law. Yet even a well-known anti-coherentist, Joseph Raz (1994, 315), has admitted “that the application of each of the distinct values ought to be consistently pursued, and this generates pockets of coherence”.

In my opinion, the plurality of pockets of coherence corresponds to the plurality of social roles. As already stated, legal argumentation is a special case of moral argumentation. What is special is that a person performing legal argumentation enters a particular social role, the role of a lawyer and – within it – a particular role of a judge, attorney, constructive legal researcher (a “dogmatician”), critic of the law, etc. Each role has moral consequences. Within each role, the lawyer must create a pocket of coherence. Sometimes a given part of the law is to be made more and more coherent with its moral base. At other times, various parts of the law are to be made coherent with each other, at the expense of a (partial) separation of law and morals. A lawyer must attempt to create a pocket of coherence, but only his own choice of role, explicit or implicit in his practice, determines which one.


11. A Reflective Question: Why Should a Law Theorist Rely on Coherentist Epistemology?

The reader may now ask: why do we lawyers need all this talk about coherence, when the author himself admits that the best law is not always the most coherent one? The answer to this question is surprisingly simple. Recourse to coherence is inevitable, if not at the lawyers’ level, then at another, higher level. For if anyone tells us that the best law in a particular situation is not the most coherent one, she must coherently (!) argue that this is the case. Let us call such a view the Legal Anti-Coherentist Thesis (LACT). The LACT is in precisely the same position as Lehrer’s skeptic. Paraphrasing Lehrer, let me state the following: the LACT has no privileged status compared with other beliefs. It is merely a competitor of other beliefs. It has to win the competition with the view that the best law is the most coherent one. If what the coherentist accepts is more reasonable for her to accept than the LACT, then the coherentist wins.

The ultimate basis of all theories about blessings of incoherence must be composed of coherent argumentation. Coherentism is self-supporting. Anti-coherentism is self-destroying. One consequence of this observation is that the discussion of coherence is useful not only for the traditional Legal Dogmatics but also for more critical orientations of legal research. Whatever you criticize, you must criticize in a reasonable, that is, coherent manner. Thus, only coherence at a higher level of argumentation may justify incoherence at a lower level.

No living person is a Hercules who can efficiently put all her beliefs into a coherent system. This is only the goal of knowledge and a goal of morality, unreachable but irresistible. A human being is rather like Sisyphus, pursuing unreachable goals such as reason, truth, justice and coherence.


Robert Alexy, Theory of Legal Argumentation, Oxford 1989: Clarendon Press.

R.M. Chisholm, Theory of Knowledge, Englewood Cliffs 1966: Prentice-Hall.

Keith Lehrer, Self-Trust. A Study of Reason, Knowledge, and Autonomy, Oxford 1997: Clarendon Press.

Keith Lehrer, Theory of Knowledge, London 1990: Routledge.

A. Peczenik, On Law and Reason, Dordrecht/ Boston/ London 1989: Kluwer Academic Publishers.

A. Peczenik, Vad är rätt, Stockholm 1995: Norstedts.

A. Peczenik, Juridiska avvägningar, Festskrift till Strömholm, Uppsala 1997: Iustus förlag.

A. Peczenik, Jumps and Logic in the Law, Artificial Intelligence and Law, Vol. 4 (1996), No. 3–4.

Joseph Raz, Ethics in the Public Domain. Essays in the Morality of Law and Politics, Oxford 1994: Clarendon Press.






Aleksander Peczenik

Second Thoughts on Coherence and Juristic Knowledge*

1. Introduction: What all this is about

Jurisprudence (legal theory) is a strange discipline, at the intersection of traditional legal research, some sociology and almost all branches of philosophy, such as ontology, epistemology, moral philosophy, philosophy of science, logic and philosophy of language. A law theorist must know all of these. Since nobody can have robust knowledge of so broad a field, doing research in jurisprudence is almost a mission impossible. The scholar must pick up fragments of different disciplines and put them together to serve new purposes. This is a high-risk project, and there is not much to win. Yet this is my project. To use Popper’s famous words, it must remain an “unended quest”. But there are some moments when the quest appears meaningful. This seminar was such a moment.

The topic of the seminar was “a coherence theory of juristic knowledge”. Let me begin with a clarification. Jan Wolenski incorporates the idea of coherence into the so-called classical definition of knowledge. “According to this definition, knowledge consists in true justified belief. More explicitly, the phrase ‘X knows that A’… is equivalent to the conjunction of the three following conditions: (a) X believes that A, (b) A is true, and (c) X’s belief is (correctly) justified… Where can coherence enter into the classical definition of knowledge? The condition (c) is the only suitable place. Thus, we can combine the classical theory of knowledge with a coherence theory of justification.” I agree. And I wonder whether Aulis Aarnio is not too radical when concluding “that the legal ‘truth’ is a much smoother matter than correspondence. It is coherence.” I would rather say: legal truth is correspondence, legal justification is coherence. Legal justification does not rest on evident foundations. In the law, whatever is justifiable is justifiable on the basis of the background system of beliefs.


2. Pre-existing prima facie values

Wlodek Rabinowicz raises the question of cognitivism: “Peczenik proposes a theory for norm- and value judgments that is partly cognitivistic and partly non-cognitivistic. Prima facie statements of this kind are supposed to be true or false in the standard sense. They are true if they, as he puts it, ‘correspond to the cultural heritage of the society’. On the other hand, the all-things-considered normative statements, such as ‘Taking everything into consideration, I ought to do X’, lack truth value; they lack the objectivity of the former statements. Thus, ethical cognitivism for the prima facie is combined with non-cognitivism for the all-things-considered. This is an interesting combination, but it raises some difficulties. In particular, I am not sure how Aleksander interprets prima facie norm- and value judgments. Are they simply reports about our cultural heritage? That is, does he accept some form of naturalistic analysis of the prima facie? Or does he want to argue for a non-naturalistic form of ethical cognitivism? Does he want to say that prima facie statements are claims about objective values rather than about some sociological facts, that such statements are irreducible to factual statements?”

Now, I find non-naturalism more plausible. Value judgments – according to their meaning – motivate action; a mere description of facts does not. Surely, cultural heritage – a fact – motivates us to a particular kind of action, namely to weighing and balancing. But this motivation does not follow logically from the description of cultural heritage. Rather, cultural heritage “triggers” motivation. This is a causal connection, not a logical necessity.

No doubt, this position is in need of elaboration. It assumes that cultural heritage indicates moral values but is not identical with them. It also assumes a distinction between what belongs to the meaning of propositions (judgments, sentences) and what merely is their regular cause. But it says nothing profound about what moral values are. Nor does it say anything profound about what the meaning of propositions is. In brief, one must state clearly what this theory attempts to say and what it does not. It says something about how we can find moral values, not about what moral values are. Thus, the theory is philosophically unfinished. For the excuse, see section 1 above.

This unfinished theory does not entail any profound moral philosophy. But it seems to be compatible with some philosophies. One of these is preference utilitarianism. The leading preference utilitarian, R.M. Hare, has elaborated the following theory of two levels of moral thinking. The critical level includes a complete knowledge of other people’s preferences in all thinkable cases, together with a weighing and balancing of these preferences. On the basis of this knowledge, one can compute what action maximizes the aggregate of preferences, according to the formula: the number of preferences (regardless of whose) multiplied by the strength of the respective preferences. Only an “archangel” could perform such a task. One ought to follow a rule that thus maximizes the fulfilment of preferences. The opposite of the “archangel”, a “prole”, lacks the ability to think “critically”. He must stay at the intuitive level, that is, merely follow his own moral intuitions and some established moral principles. The archangel could show that some intuitions and principles more or less correspond to the calculus of preferences. The prole does not know it but still acts rightly. Ordinary people are neither archangels nor proles but rather a mixture of both. They have some moral intuitions, follow some principles and have some ability to check whether these correspond to what other people wish (Hare, R.M. 1981. Moral Thinking. Oxford: Clarendon Press, 44 ff.).
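Hare’s “archangel” calculus, as paraphrased above, can be sketched as a toy computation. The actions, preference strengths and the simple summation below are invented for illustration only and are not Hare’s own notation:

```python
# Toy sketch of the archangel's calculus: the right action maximizes
# the aggregate of preference-satisfaction, summed over everyone's
# preferences regardless of whose they are. Numbers are illustrative.

preferences = {
    # hypothetical action -> strengths of the preferences (anyone's) it fulfils
    "keep_promise":  [9, 7, 4],
    "break_promise": [8],
}

def aggregate(strengths):
    # listing each preference individually, "number multiplied by strength"
    # amounts to summing the strengths of the fulfilled preferences
    return sum(strengths)

best = max(preferences, key=lambda action: aggregate(preferences[action]))
assert best == "keep_promise"   # 20 > 8
```

Only an archangel could fill in the real table of everyone’s preferences; the point of the sketch is merely the shape of the computation.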

The comparison with Hare speaks for the non-naturalist position. According to Hare, the prima facie moral judgments made on the intuitive level are just as prescriptive (= non-cognitive) as the critical all-things-considered judgments. Let me repeat: cultural heritage “triggers” the uttering of prima facie judgments on the intuitive level, but these moral judgments are something more than a description of cultural heritage. For example, the fact that many other people before me endorsed the principle “pacta sunt servanda” triggers my moral judgment that promises ought to be kept, but this moral judgment is not the same as a description of what other people endorse.

My point in this context is that cultural heritage gives us evidence of principles that both proles and ordinary people take into account in moral deliberation.

Moreover, this evidence would perhaps matter even if one adopted another profound moral theory. One alternative would be: the moral is what a morally sensitive person in our culture has a disposition to endorse. But the point is the same, namely that we tend to disagree about what precisely such a person would endorse, except as regards very abstract and vague prima facie values. Can we know these, then? Indirectly, by relying on the cultural heritage, namely by reading books showing principles and value judgments persistently coming back over the centuries.


3. Why the abstract basis, not the particular one?

Wlodek continues: “Second, one wonders why all-things-considered statements are supposed to be less objective than the prima facie ones. At least some all-things-considered normative statements seem to be as objective as you can get in these matters. That, say, Hitler and Stalin, all things considered, did wrong, that they committed heinous crimes, seems to be a claim that is at least as established, given the facts, as the claim that, say, prima facie no man should profit from his crimes or that, say, inequality is prima facie something bad. So, where is the difference?”

I think there are two important differences. The first is that abstract prima facie statements are more plausible end-points of moral justification than particular moral judgments are. The second difference is that prima facie statements are explicitly defeasible, justified unless outweighed by counter-arguments, whereas all-things-considered statements are, by definition, irreversible. These two differences together make the theory plausible. For example, the all-things-considered statement “The things done to Bukharin at his Moscow trial in the thirties (detailed description follows) were evil”, albeit emotionally automatic, needs to be justified upon request. When confronted with the description of what happened then, I feel that it was evil. But if anyone asks me why I feel so, I can give him reasons, either general principles or something following from general principles. If I say “this was evil but I cannot tell you why”, I must be an idiot. Not so in the case of abstract prima facie principles. Surely, no end-point of justification is logically necessary. Yet I may say without further moral justification: “Killing people is prima facie evil, morally acceptable only on the basis of outweighing reasons”. If I say it and cannot give a further moral justification (except the philosophical talk about “prima facie” etc.), I am not automatically an idiot. (Or so I think.) Moreover, reading history, one gets the impression that very few people were prepared to say generally “Killing people for no reason at all is OK”. If I did say something like this, I would be an idiot, indeed.

In brief, it is an anomaly to sincerely refuse to argue for particular all-things-considered statements. On the other hand, it is an anomaly to totally deny fundamental, abstract – and vague – prima facie values.

Here is a starting point of a possible moral theory. Must I have such a theory, elaborated in detail? It would be nice, but it may be too much to require from a law theorist.


4. Does the size matter?

Coherence is a complex subject. Wlodek Rabinowicz asks an important question regarding the circularity of coherentist justification: “When one discusses coherentism, one often makes the observation that coherentist justification is essentially circular: According to a coherentist, there are no fundamental or basic beliefs: every belief requires other beliefs for its justification. Therefore, if we are to avoid an infinite regress in justification, then – as Peczenik puts it – we must accept that ‘a chain of arguments, sooner or later, bites its own tail, and thus may be represented as a circle’. Coherentists tend to say that such circles in argumentation need not be vicious provided that the circles are sufficiently big. A big circle is better than a small circle. Why should it be so? Does the size really matter? I would like to suggest that what is important is not so much the size of a circle but rather the complexity of its structure. Higher complexity of an appropriate kind gives extra safety, makes the circle more robust, less vulnerable to destruction… To put it metaphorically: nets are safer than chains.”

First of all, I am grateful for the clarification as regards chains and nets. Indeed, nets are safer than chains. But in my opinion, the size matters too. Let me quote Robert Alexy: “a coherent set of propositions should comprise as many and as different propositions as possible… The idea of coherence includes the ideal of an all-embracing theory” (see further on in Alexy’s paper). After all, a science fiction novel can give us a beautiful net. A whole world has once been described in the three volumes on Heliconia. Yet it is all fiction. On the other hand, what if I wake each morning and perceive myself as living in the Heliconia world, if I do not meet psychiatrists who try to convince me that I am crazy, if I perceive myself doing business with Heliconia figures and freezing in the Heliconia winter, if all this goes on and the Earth becomes a distant memory? If the net is big enough, Heliconia ceases to be mere fiction and converts into a virtual reality. More: if the net is big enough – embracing almost all the information I get – how could I tell the virtual reality from reality unqualified?

Yet one must be careful with the words “big enough”. Wlodek Rabinowicz has made the following important point: it is not so much the absolute size that matters (there is no difference between three volumes on Heliconia and one volume on Heliconia) as the relative size, or embracingness: the circles or nets that don’t leave much outside their area are better than those that do. This is a matter of the coherence of the whole system of beliefs. It doesn’t help if some part of that system is internally coherent when it is deeply incoherent with the rest. I agree entirely. The talk about the size of the net, or its “comprehensiveness”, indicates that not much is left outside of it. (By the way, this is why I find Keith Lehrer’s theory of coherence better than other theories, but this problem is too technical to be discussed here.)


5. Two kinds of defeasibility

Wlodek asks: “Can Rules Be Weighed? According to Peczenik, the process of weighing reasons is central for justification. Ideally, every reason can and should be weighed against other reasons. Applied to law, legal judgments are arrived at by such a process of weighing, where what is weighed are not just values and principles but also legal rules.” He contrasts my position with Dworkin’s, characterized as follows: “There is no room for weighing a valid rule against other considerations. Certainly, the interpretation of a rule might involve some process of weighing… But when a particular interpretation has been determined, there is no room anymore for weighing the rule against other considerations, according to Dworkin. If it is valid, the answer it supplies must be accepted. It would be interesting to know why Peczenik rejects this position.”

Now, I do not reject the difference between rules and principles. But I do say that both rules and principles are defeasible, and that the defeat is a result of weighing. What, then, is the difference between rules and principles in my re-formulation? All use of principles in legal reasoning is for weighing. A lawyer is not supposed to just follow a principle. He is supposed to confront it and to weigh it against other principles relevant to the case. In this sense, the use of principles is never a matter of routine. Principles are always used as defeasible. In this sense, one may still talk about the everyday defeasibility of principles. On the other hand, the use of rules varies between routine cases and “hard” cases. In the segment of legal thinking which I would call “routine legal thinking”, a lawyer does not weigh rules but takes them for granted. In another segment of legal thinking, which I would call “hard-case legal thinking”, both rules and principles are defeasible on the basis of weighing. The everyday use of rules is not to weigh them. The weighing of rules is not an everyday defeasibility but a hard-case defeasibility. For a lawyer has a good reason to ask questions about the weight of rules only when these are very objectionable.
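The distinction between the two segments of legal thinking can be rendered schematically. The function and numeric weights below are my own toy model, not Peczenik’s formalism:

```python
# Toy model: in routine cases a valid rule is simply applied and never
# weighed; in hard cases the rule is weighed and may be defeated.
# The weights are purely illustrative.

def apply_law(rule_weight, counter_weight, hard_case):
    if not hard_case:
        return "follow rule"            # routine thinking: rule taken for granted
    if counter_weight > rule_weight:    # hard-case defeasibility through weighing
        return "rule defeated"
    return "follow rule"

assert apply_law(5, 9, hard_case=False) == "follow rule"   # not even weighed
assert apply_law(5, 9, hard_case=True) == "rule defeated"  # outweighed in a hard case
assert apply_law(5, 2, hard_case=True) == "follow rule"    # weighed, but survives
```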

Wlodek continues: “Is the explanation (of Peczenik’s views) that the law for him is never the end of the matter? That even unambiguous legal rules must still be balanced by the lawyer against other considerations such as morality or efficiency?” To answer, let me paraphrase: the explanation is that the law for me is not always the end of the matter. Even unambiguous legal rules must sometimes (albeit seldom) still be balanced by the lawyer against other considerations such as morality or efficiency.

Then he continues: “Or would he (Peczenik) say that a legal rule should also be balanced against other considerations internal to the law itself?” This is a difficult question, because some considerations of morality and efficiency are internal to the law in the following sense. First, lex iniustissima non est lex. A “legal” system worse than Pol Pot’s would not be law but legis corruptio. Second, a “legal” system that is not on average efficient would not be law either, as Kelsen and others pointed out. But the borderlines are not precise. For example, who knows how much morality is internal to the law itself, in the sense that without this minimum of morality the law ceases to be law? In this connection, one may re-read Alexy’s remarks on criteria-less criteria.


6. Prima facie and pro tanto

Then, Rabinowicz turns attention to Kagan who “distinguishes between “prima facie” reasons (reasons “at first sight”) and reasons “pro tanto” (“insofar”). Kagan suggests that certain considerations may appear to be reasons for a decision or a judgment at first sight, so to speak, but then turn out to be irrelevant when other aspects of the situation have been taken into consideration…  A prima facie reason can be undercut, so to speak, by other aspects of the situation, and then drop out of sight altogether. It is different with pro tanto reasons…. Such reasons are never undercut, even though they may be outweighed in some cases by reasons to the contrary, if the latter are stronger…  The idea of weighing reasons seems natural for the pro tanto reasons, but it is not appropriate for the prima facie reasons that aren’t pro tanto”.

Now, this distinction is very useful. Let me add something. Pro tanto is the weighing-and-coming-back defeasibility, typical for morality. Prima facie is the ordinary defeasibility, occurring everywhere. And “hard-case” legal reasoning is a segment very similar to morality. This means that I would like to re-write everything I have published and make clear in which contexts I use the words “prima facie” to designate “pro tanto”. Sorry for the reader.

Rabinowicz continues: “a given factor may make very different contributions to the value of the whole depending on which other factors it is combined with”. Indeed. I have said many times that one weighing is dependent on other weighings. (Remember that coherence is a net in which parts are inter-dependent).

But now, there comes something much worse: “The so-called ‘ethical particularists’, such as Jonathan Dancy, David McNaughton and John McDowell, have in fact went as far as to question whether there are any valid moral principles that can specify pro tanto reasons: Dancy seems to suggest that any reason can be undercut or at least change its weight when it appears in new configurations, in combination with other factors. I don’t think that there is a reason to go that far, but it seems fair to say that Peczenik’s reliance on the weighing model for reasons may be inappropriate in many contexts”. Here, I wonder what “undercutting” means. Can a moral principle “be undercut, so to speak, by other aspects of the situation, and then drop out of sight altogether” (see above)? I would rather say that this is unusual. The same moral principles have come back again and again since antiquity. Their everyday defeasibility is pro tanto. But what if a principle really drops out of sight altogether? This may happen, though I would draw an analogy between this and a scientific revolution or a paradigm shift in Kuhn’s much-abused language. Is the weighing model inappropriate for this – say – holiday defeasibility? Yes and no. No, because the reason for dropping out of sight is that the principle in question becomes hopelessly outweighed. What else can the reason be? But yes, because once the principle is outweighed to such a degree that it drops out of sight, it will never be weighed again. Or almost never. Defeated physical theories never come back, but defeated moralities? Who knows?

Yet, Wlodek meant something else here. For a particularist, any principle (and indeed, any reason) can be fully undercut and drop out of sight altogether in a particular situation, and not from now onwards. It is not a question of some paradigm change, but rather a conviction that the principle in question is not relevant in this situation. Is the weighing model inappropriate for this particularist defeasibility? Again, yes and no. Yes, because the conviction that the principle in question is irrelevant in this situation is in fact often intuitive, not justified by weighing of other reasons. No, if one asks whether this conviction can be justified. A coherentist must say that it can. Moreover, the reason for dropping out of sight is that the principle in question becomes hopelessly outweighed. Let me repeat: What else can the reason be?

When a particularist insists that the principle drops out of sight intuitively, and that this dropping out must be stated without further justification, he is a foundationalist. The dropping out is for him obvious, precisely as the alleged foundations of knowledge are for a foundationalist. One may call him a negative foundationalist, since what is evident for him is the opinion that the principle in question is not relevant.

Surely, everything can be defeated – not only moral principles but all abstract beliefs, even scientific ones. Is this a reason to adopt negative foundationalism and to stop trying to justify abstract beliefs? In morality? Always? In science, too? Even worse, are only abstract beliefs defeasible, while concrete ones are absolutely sure? Why? And what would the consequences be? Absolute reliance on particular intuitions? What if my intuitions are different from yours? Shall we fight? See also section 3 above.


7. Cumulating by chaining?

Rabinowicz asks for clarification: “When Peczenik discusses weighing, he takes up the well-known phenomenon of cumulating of reasons… Peczenik suggests that ”cumulating of reasons may be structured as a chain of arguments”. When we adduce q as a reason for x and r as a reason for q, we have what he calls ”cumulating-and-chain”. As he argues, the chain of reasons is cumulative if each reason in the chain comes with an independent weight. It must get this weight from other supportive reasons that do not belong to the chain in question. Alexander seems to think that this form of cumulating reasons by chaining them is in some sense more preferable (”more informative”) to what he calls ”mere cumulating” - when q and r are adduced as two separate reasons for x. I must say I don’t quite understand why he takes this view. A clarification would be helpful”.

What I mean is that coherence theory is something else than the theory of evidence. In the theory of evidence, the probability of a cumulation of P(a), P(b), …, P(n) is 1 - [P(~a)×P(~b)×...×P(~n)], which entails that the cumulative probability is greater than that of each component. The probability of a chain is, on the other hand, P(a)×P(b)×...×P(n), and must therefore be less than the probability of each component. An important observation here is, however, that the probabilistic theory of evidence does not pay attention to defeasibility. More precisely: when asserting that the probability of an event a is x, one has already taken defeasibility into account. In other words, the probability is the measure of defeasibility of the claim. The theory of evidence therefore excludes further consideration of defeasibility. Speaking in terms of weight instead of probability, one may say the same, as follows: once the reasons are equipped with weight, regardless of where it came from, the problem of cumulation is divorced from the problem of defeasibility.
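The contrast between the two formulas can be checked numerically. Here is a minimal sketch in Python; the component probabilities 0.7, 0.6 and 0.5 are arbitrary illustrative values, not taken from the text:

```python
# Probabilities of three independent reasons (arbitrary illustrative values).
ps = [0.7, 0.6, 0.5]

# Cumulation: the conclusion is supported if at least one reason holds.
# P = 1 - [P(~a) * P(~b) * ... * P(~n)]
p_none = 1.0
for p in ps:
    p_none *= (1.0 - p)
p_cumulation = 1.0 - p_none

# Chain: the conclusion is supported only if every link holds.
# P = P(a) * P(b) * ... * P(n)
p_chain = 1.0
for p in ps:
    p_chain *= p

# Cumulation exceeds every component; a chain falls below every component.
assert p_cumulation > max(ps)
assert p_chain < min(ps)
print(round(p_cumulation, 3), round(p_chain, 3))  # 0.94 0.21
```

The numbers confirm the text: cumulating independent reasons drives the probability up (0.94, above the strongest component), while chaining drives it down (0.21, below the weakest link).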

But this is not what I had in mind. Instead, my point is a straight application of coherentism. In the coherence theory, assertions are defeasible. Therefore, coherence theory cannot follow the logic of the theory of evidence. Rather, it tells us something about initial probabilities. Chaining makes the initial probabilities higher. To say the same in the terminology I prefer here: one reason, not supported by anything else, is unsafe. When chained to other reasons, it becomes safer. Nets are safer than chains (see above). And big nets are safer than small ones. Chaining reduces the risk of defeat. And the only way for a reason to get any weight at all is to get it from nets.

For example, distinguish:

1.      x is a bad man because he did not help his son who was mobbed at work (reason 1) and x is a bad man because he left his old and ill parents alone (reason 2); nothing is said about why.

2.      x is a bad man because he did not help his son who was mobbed at work (reason 1) and refusal to help a close relative in need is a morally bad thing (reason 2*).

Here, chaining seems to be a stronger combination of reasons than cumulation. Reason 2* - to which reason 1 is chained - is a deeper reason that supports reason 1, and makes it stronger (weightier). On the other hand, reason 2 does not provide such a deeper support.

In this example, the weight of a reason increases when it is chained to another reason. A mere cumulation cannot give this result.
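The two cases can be caricatured in a toy numerical model. The propagation rule (a reason inherits half of the weight of a deeper reason supporting it) and all the numbers are invented purely for illustration; nothing in the text commits Peczenik to them:

```python
# Toy model of reason weights. The propagation rule and all numbers
# are invented assumptions, used only to illustrate the contrast
# between mere cumulation and chaining.

SUPPORT_FACTOR = 0.5  # fraction of a supporter's weight that transfers

def effective_weight(base_weight, supporter_weight=0.0):
    """Weight of a reason, boosted if a deeper reason supports it."""
    return base_weight + SUPPORT_FACTOR * supporter_weight

# Case 1 (mere cumulation): reason 1 and reason 2 stand side by side;
# neither supports the other, so each keeps its base weight.
w_reason1 = effective_weight(0.4)  # "did not help his mobbed son"
w_reason2 = effective_weight(0.4)  # "left his old and ill parents alone"

# Case 2 (chaining): reason 2* ("refusal to help a close relative in
# need is morally bad") is a deeper reason to which reason 1 is chained.
w_reason2_star = 0.6
w_reason1_chained = effective_weight(0.4, supporter_weight=w_reason2_star)

# Chaining increases the weight of reason 1; mere cumulation does not.
assert w_reason1_chained > w_reason1
assert w_reason2 == w_reason1  # cumulation leaves each weight unchanged
```

Under these invented numbers, reason 1 alone has weight 0.4 in both cases, but chaining it to the deeper reason 2* raises it to 0.7 — mirroring the claim that a deeper supporting reason makes the supported reason weightier.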

This is my answer, but I am far from happy with it. A lot must be added.


8. Active weighing?

Returning to coherence, I would like to comment on an important point made by Robert Alexy: “Raz says that there is a pervasive incommensurability among values. This is true, if one understands commensurability as some kind of common property which allows to determine the preferability of values in an objective way. Weighing would then have a somehow passive nature. She who weighs only expresses what is already implicit in the common properties of the conflicting values or principles. But that is not the only way of conceiving weighing, and it is surely not the way weighing is performed in legal systems. Weighing, there, has an active character. Conflicting values or principles are related to each other by creating and fixing concrete preference relations between them. Perhaps, one could say that they are made commensurable by adding an evaluation. This is the way, legal systems can solve the problems of value pluralism… The mere fact that the creation of a coherent system presupposes evaluation, which is not already entailed in the system, seems to show that coherence is - contrary to what has been said above - no genuine super criterion. It appears that there must be another criterion, so that coherence cannot be conceived as the super criterion any longer. Indeed, another criterion exists, but this does not deprive coherence of its character as a genuine super criterion. This other criterion is the procedure of rational discourse”.

    In my paraphrase, this view is equivalent to the conjunction of two theses:

1.      The end-point of legal justification is not the coherence of enacted law “as it is” but coherence of a broader – mixed - system, embracing, inter alia, the enacted law, moral convictions represented in the society and the subjects’ (the justifying jurist’s) own moral and other beliefs. What is incommensurable in the narrower system of enacted law is commensurable – and justifiably weighed and balanced – within the broader system.

2.      Rational discourse is, according to Robert, the procedure that justifies this mixed system. She who weighs does not only express what is already implicit in the common properties of the pre-existing – and conflicting - values or principles. Conflicting values or principles are related to each other by creating and fixing concrete preference relations emerging from – and justified by - the rational discourse-procedure.

I agree with the first thesis, but what about the second? I would rather modify this theory in the following way:

1.      The end-point of legal justification is not the coherence of enacted law “as it is” but coherence of a broader – mixed - system, embracing, inter alia, the enacted law, moral convictions represented in the society, the subjects’ (the justifying jurist’s) own moral and other beliefs, and the philosophical theses, such as Alexy’s discourse theory. What is incommensurable in the narrower system of enacted law is commensurable – and justifiably weighed and balanced – within the broader system.

2.      Conflicting values or principles are related to each other by creating and fixing concrete preference relations emerging from addition of deeper, underlying reasons, and reasons for reasons, in a coherent net. The rules of rational discourse and the procedures – actual or hypothetical - emerging from the implementation of these rules have no privileged position. They are just a component of the belief system of the subject.

3.      There are many ways to fix and arrange coherent justification of this kind. Among other things, there are many possible outcomes of rational discourse. The individual subject intuitively, unjustifiably, picks up one of many.

4.      What is the point of a coherent value system, then? There are at least two points. Both are very general hypotheses about contingent facts. The first is that we humans simply have a disposition to reason, a desire (“passion”) for reason. The second is that individuals who try to arrange their respective belief systems as coherently as possible agree with each other more often and in a more stable manner than those who do not try. Practical reason promotes consensus and peace.

Item 3 is related to what Wolenski implies when he says that there are no criteria for picking up one model instead of another. This is the main reason why I assume non-cognitivism as regards weighing.


9. Coherence in spite of non-cognitivism?

I thus believe that the all-things-considered normative statements, such as ”Taking everything into consideration, I ought to do X”, lack truth value. But Wlodek asks the question whether this non-cognitivist position does not remove the ground for claims to coherence. “Suppose we discover that our system of beliefs is internally incoherent; or suppose we acquire a new belief that does not cohere with what we have believed before. It is here that the principle of conservatism comes in: A smaller modification is to be preferred to a larger one. Thus, conservatism is a principle of minimal change. Peczenik accepts this principle… but does not explain why it should be accepted”. One explanation is: “insofar as my aim is truth, the whole truth and nothing but the truth, giving...up [my beliefs] is a real loss from the ex ante point of view. Since I should minimize the losses, the principle of minimal change is vindicated. This explanation works well as long as we restrict ourselves to belief systems, in which beliefs are supposed to be carriers of truth-value. However, on Peczenik’s view, our all-things-considered normative judgments are not cognitive in their nature: he denies that they have a truth-value. How can one in such cases defend the principle of conservatism?”

I find this objection the most difficult to answer. Let me consider five answers, here ranked on a scale from the strongest and most controversial to the safest and weakest.

1.      The first answer is cognitivist in the strong ontological sense. Human beings de facto try to arrange moral beliefs according to the principle of conservatism (coherence) and thus they produce knowledge of the world. Knowledge of the world is thus conceived as independent of ourselves (that is, independent of our situation, theories, concepts, beliefs, etc.). In order to be a cognitivist in this strong sense about all-things-considered moral values, we have to embrace a kind of Platonism, namely the idea that these values are among the facts of the world.

2.      The second answer is cognitivist in the anti-ontological sense. Human beings de facto try to arrange moral beliefs according to the principle of conservatism (coherence) and thus they produce knowledge of all-things-considered moral values, duties, rights etc., but the “external” ontological question, whether these belong to the world or not, is meaningless. Thus, Ronald Dworkin (Objectivity and Truth: You'd Better Believe It, Philosophy & Public Affairs 25, no. 2, Spring 1996) thinks that the only plausible reading of "It is a moral fact that genocide is wicked" simply repeats that genocide is wicked. (This view is a strange reversal of Hägerström’s dictum that we can speak meaningfully only about morality, not in morality. Dworkin’s opinion seems to be that we can meaningfully speak in morality, but we cannot meaningfully ask “external” philosophical questions about morality.)

3.      The third answer is cognitivist in the formal sense. Let me mention two versions of it, without any attempt to determine how they are related to each other. (a) Human beings de facto try to arrange moral beliefs according to the principle of conservatism (coherence) and thus they produce (or approximate) truth in the minimalist sense. For example, according to Giorgio Volpe (A minimalist solution to Jörgensen’s dilemma, in press in Ratio Juris), our notion of truth is implicitly defined by the instances of the equivalence schema "The proposition that p is true if and only if p." Thus, saying that all-things-considered moral sentences are capable of expressing (true or false) propositions in the minimalist sense does not commit one to say that the "facts" that make such propositions either true or false are similar to the "facts" that make propositions about the structural properties of physical things either true or false. The facts that make such propositions either true or false may very well be reducible to, or supervene on, the emotions, attitudes, feelings or desires of people. (Volpe’s view is based on the “minimalist” truth theory by Paul Horwich, with roots in Frege’s and Tarski’s writings.) (b) Human beings de facto try to arrange moral beliefs according to the principle of conservatism (coherence) and thus they produce (or approximate) truth about a formal object (model). According to Jan Wolenski, we can identify the object of knowledge with a semantic model, that is, the set of all truths in the language. The set is produced by the logical improvement of natural language. “Having any consistent set of sentences we can describe its model just by taking into account the semantical properties of words. The model which comes from purely linguistic considerations is called the formal object”. And he continues: “Now the piece of the real world is understood as the material object. Thus, any consistent fiction has a formal (intentional) object, but it has no material object. On the other hand, we assume, at least if we are realists, that any piece of knowledge has its formal as well as the material object… If we assume that our language, eventually improved by mathematics and science, is a proper device for speaking, truly or not, about the world, then we automatically decide that formal objects constructed on the basis of everyday speech are good candidates for representing the reality. But I (Wolenski) must admit that it is rather a confession of faith than a fully justified statement”.

4.      Human beings de facto try to arrange moral beliefs according to the principle of conservatism (coherence), and it is correct to do so.

5.      Human beings de facto try to arrange moral beliefs according to the principle of conservatism (coherence). They do it because they typically possess a desire (a “passion”) to do so. Thus, whoever arranges their moral beliefs in this way typically meets understanding and acceptance from others.

I am sceptical as regards answer 1, because it is difficult to find anything in the world that corresponds to all-things-considered moral statements. As regards answer 2, the problem is as follows. Dworkin concludes that we cannot climb outside of morality to judge it from some external Archimedean (that is, ontological) tribunal. But in my opinion, we both can and must do precisely this. I am sceptical as regards Dworkin’s view, because it contradicts a long philosophical tradition. Many philosophers have been seriously engaged in ontological research, and I simply cannot believe that all of them talked nonsense. Answers 3a and 3b evade the ontological question as well. No doubt, our language is such that we often speak about moral objects in the formal sense, and claim the truth of such sentences in the minimalist sense. But the ontological question is whether this objectivist language deceives us or not. No formalism can help us to either ask or answer this question.

Yet, Wolenski writes: “The sceptic can of course always argue that our language deceives us. We have no chance to prove that it does not. However, successes in predictions, filling gaps in our cognition of the past, efficient explanations, possibilities in criticism of opinions, and many other epistemic actions rather support the view that we are able to reach the real world by selecting a semantical model than to undermine it. Hence, I regard a combination of a semantic account of knowledge with a coherence theory of justification as a promising base of epistemology”. This is all right as regards truth concerning natural objects, such as mountains, cows and chairs, but can we say the same about values? Let us try: success in predictions, success in filling gaps in our cognition of values of the past, efficient explanations of values, possibilities in criticism of opinions concerning values, and many other epistemic actions rather support the view that we are able to reach objective values by selecting a semantical model than to undermine it. Is this OK? No, it is not. Surely, it seems plausible, but it does not solve the mystery of platonic values hovering somewhere in the air around chairs and cows.

As regards answer 4, I agree with Robert Alexy that the law raises the claim to correctness. The problem is that the idea of correctness is far less clear than the idea of truth as correspondence to the facts.

At the end of the day, the only honest answer to Wlodek’s question seems to be the weak answer 5, combined with secondary points inspired by answers 3 and 4. Human beings de facto try to arrange moral beliefs according to the principle of conservatism (coherence). They do it because they typically possess a desire (a “passion”) to do so. Thus, whoever arranges their moral beliefs in this way typically meets understanding and acceptance from others, who thus think that his or her conclusions are in some sense correct. In consequence, epistemic conservatism (coherence) assures success in predictions of future valuations, success in filling gaps in our cognition of past valuations, efficient explanations of valuations, and possibilities in criticism of opinions concerning values.

Wlodek has read this and answered, as follows: Yes; but why do we desire to make as small changes as possible? This doesn’t seem to be a ”bare” desire, such as the desire I have for a cigarette, but a desire for a reason. And if the reason is that we believe that we will lose less truth that way, then this answer is open to a non-cognitivist only if he accepts some ”error theory” à la Mackie.

Yet, I think that the error theory is too strong. It claims that people normally assume that value judgments are truth-evaluable, but they are not. In brief: it claims that people normally assume an error. As a conservative (in many respects!), I hesitate to accept such a theory. I would demand stronger arguments for an error theory. But why not assume a more modest attitude in this respect? We desire to make as small changes as possible. Indeed, this does not seem to be a ”bare” desire, such as the desire I have for a cigarette, but a desire for a reason. But I do not know what this reason is. Perhaps a reason can be found in the future, perhaps not. Perhaps the passion for reason is an unexplainable fact. Or is it the ultimate foundation, to be assumed without justification? A human being is a person demanding reasons, but we cannot give a reason why she is like this.

It seems that we have here a choice between two unpleasant alternatives. The first is that all-things-considered moral judgments tell us the truth, but we do not know what this truth is about. The second is that we have a passion for reason, for a reason, but we do not know what that reason is.

This is the limit of what I can think in this context.




Ronald Dworkin, “Objectivity and Truth: You’d Better Believe It”, Philosophy & Public Affairs 25, no. 2 (Spring 1996).

Cf. R.M. Hare, Moral Thinking (Oxford: Clarendon Press, 1981).

Giorgio Volpe, “A Minimalist Solution to Jörgensen’s Dilemma”, in press in Ratio Juris.



* I am indebted to Lars Lindahl for valuable comments to this paper.

* I am indebted to Wlodek Rabinowicz and Christoffer Wong for valuable comments to this paper.

