A fruitful idea about morality?
By TIBOR R. MACHAN
Freedom News Service
As someone who teaches ethics and morality in colleges and universities, I have noticed that most of my students hold one of two conflicting positions about the subject. They see it either as a one-size-fits-all system of guidelines, wherein everyone has to act the same way or else be a bad person, or as purely subjective, wherein nothing is either right or wrong and it’s all a matter of one’s opinion.
And this is understandable. If there are right answers to questions about how we should conduct ourselves, it seems to many that those answers must apply to us all, equally. Otherwise how could they be right? So they are pulled toward what is often called moral absolutism. But it also seems quite reasonable that certain answers as to how one ought to act do not apply to other, very different people.
How can both of these valid insights be satisfied?
One possibility is that a sound, correct morality offers perhaps just one or two very basic principles, ones that are broad enough to apply to everyone, simply in virtue of being human. But this morality would also recognize that different individuals need different guiding principles, given their special situations, including their unique individuality.
We have this, for example, in medicine or nutrition. There are basic principles or guidelines in these areas, but when they are applied to people, accommodations must be made to the individuals in question -- are they men or women, young or old, tall or short, of a certain metabolism or another, allergic to this or that? So, while the basics of medicine and nutrition are taught pretty much the same everywhere, when they are applied things begin to vary quite a bit.
In morality or ethics, also, we may well have certain very basic principles that we all need to heed and practice -- such as "Think things through before you act," or "Be honest with yourself" or "Don’t deceive anyone," "Do unto others as you would have them do unto you," or "Pursue excellence in life." (I leave aside now which might actually be the one or few sound and universal guiding principles -- that takes a lot of figuring out.) But as applied to particular, individual persons, the specific guidance that would emerge from such basic principles will not be the same from one person to the next.
Yet something very important about both the concerns expressed by my students and many others would be satisfied in so understanding morality: There would indeed be something quite absolute or invariable about how we ought to act, yet this wouldn’t amount to an artificial detailed one-size-fits-all code.
Indeed, the idea would help with many things that concern us all: a just legal system would not have many general laws, only a few, because citizens are quite different from one another and have just a few things in common as citizens. The marketplace would make sense, what with all its highly varied goods and services aiming to fit different customers and using the varied talents of producers. Even art might benefit from this outlook: We all tend to think, I believe, that some things really are artistic while others lack this quality; yet we also realize that different people, with various special attributes, backgrounds, and so forth, will appreciate different works of art. Instead of thinking that everyone who responds to something other than what inspires oneself is artistically blind, varied works will be seen as having artistic value to different sorts of people, and varied talents will produce varied yet still artistically excellent works. Yet there will still remain plenty of room for concluding that some creations do not cut it at all.
Anyway, there isn’t much hope of settling big issues like this in a brief discussion, but perhaps some hints toward a sound approach could at least be suggested. Very formidable thinkers throughout human history have grappled with these matters, and studying their reflections would be a prerequisite for making headway. What I have tried to do here, however, is no more than sketch out some promising ideas.
Tibor Machan is a professor of business ethics and Western Civilization at Chapman University in Orange, Calif., author of "A Primer on Ethics" and co-author of "A Primer on Business Ethics." E-mail him at Machan@chapman.edu
Thursday, May 15, 2003
Ethics and its Controversial Assumptions
Tibor R. Machan
Whether ethics even exists is often in dispute. For example, many believe ethics and science to be incompatible. A good many social scientists and psychologists think that people cannot be morally blamed for what they do; instead their conduct is explainable by various causes. Ordinary folks, too, are at times convinced that people act badly only because something made them do so, some event in their upbringing or some factor of the culture to which they belong. (We tend to explain our own bad conduct more readily but blame other people for theirs!)
So whether ethics is a bona fide part of human life is not self-evident, nor obvious. In order for there to be ethics, some other facts must already obtain. Consider, for example, the claim that "Judy should not lie," or "The president of the United States of America ought to try to restrain his powers over the lives of people." This assumes that Judy and the president have a choice about what they will do. "Ought" implies "can." This means that if one ought to do something, it must be that one can either do or not do it. It also assumes that Judy, the president and anyone who would look into the matter can identify certain standards of conduct to be used in figuring out what we should or should not do.
First, then, ethics requires that we can exercise some genuine choices, that we have the capacity to initiate some actions. If this were impossible, the idea that we should (or should not) act in such and such a fashion would have no application in human life. Second, ethics requires that some principles that apply to conduct be identifiable or objective. Unless we all can learn those very general principles, ethics has no place in our lives. It consists, after all, of such principles of conduct that pertain to all human beings. Moral or ethical principles pertain to action, how we should conduct ourselves, on what basis we should choose or select what we will do. To succeed at living a human life which is morally good, some principles would have to be followed. "Morally good" here means: being excellent as a human being in one's life but as a matter of choice, of one's own initiative, not accidentally. So being tall or talented or beautiful are not aspects of moral excellence, whereas being honest, courageous and prudent would be, if ethics is indeed a bona fide dimension of human living.
If we could not exercise genuine choices, morality would be impossible, since no one could help what he or she is doing. It would all be a matter of good and bad things simply happening, as indeed they often do at the hands of nature, as it were: when tornadoes or earthquakes or diseases strike. Que será, será, period.
If we could not identify moral principles, we could never make a sensible selection from among alternative courses of conduct. Depending on what we aim for, we can identify the principles that will enable us to reach our goal. This is clearly evident in such fields as medicine, engineering and business. Thus, it seems that this second requirement of ethics, that we can identify principles of conduct, might be satisfied. But we will need to explore that further to be able to tell.
Let me explore briefly whether these two assumptions are reasonable or merely prejudice or myth, as some folks believe -- folks who would consign ethics to the dustbin of pre-science, akin to demonology or witchcraft.
The answers I will reach cannot be considered conclusive -- there isn't enough time and space to carry out a full investigation. But we will have a chance to look at the major points for and against the assumptions. Without some idea about whether they are true, ethics itself is left unsupported -- it could just as easily be in the class of the occult, such as astrology or palmistry.
1. Free Will?
The first matter to address is whether we have free will -- not necessarily all of us, all of the time but, rather, as a rule, normally. In other words, are human beings, as they have appeared throughout history in their innumerable diverse circumstances (though not when incapacitated or significantly damaged) capable of bringing about, of their own initiative, the behavior in which they engage?
Against Free Will
Nature's Laws versus Free Will. First, one of the major objections against free will is that nature is governed by a set of laws, mainly the laws of physics. The argument here is that all material substances are controlled by these laws and we human beings are basically complicated versions of material substances. Therefore, whatever governs material substance in the universe must also govern human life.
Social science, for one, which studies human beings in their social relations, looks into some of the causes that produce our behavior. So does neuroscience, a sub-discipline of biology, in its study of our individual brains-minds. In each case what is studied are the causes of behavior. So, the only difference between the rest of nature and ourselves, as far as these branches of science are concerned, is that we are more complicated, not that we are not governed by the same principles or laws of nature.
Most definitely, it is argued, no such thing as an original cause is evident in the rest of nature, something that would have to be possible for free will to exist. As one advocate of determinism puts it, "[T]he best response to the demand for an explanation of the relation between an originator and decisions is that an explanation cannot be given. We have to regard this relation as primitive or unanalyzable."1 In other words, originating or initiating some action seems nothing more than a myth or an unexplainable fact, for which no evidence or argument can be given.
The determinist claims that all our actions, including decisions, are more sensibly taken to be effects of some prior events. It is the determinist's view that everything we do is the effect of some set of causal circumstances. This makes better sense, say the determinists, than leaving things unexplained, mysterious.
Affirming Initiative: Now, in response one might argue that nature exhibits innumerable different domains, distinct not only in their complexity but also in the kinds of beings they include. There are, to be sure, many domains where we find the familiar cause and effect situation clearly evident -- for example, on the billiard table, in geological movements, and in the motion of the planets. But there are areas where something else appears to be going on. For example, is the cause of a musical composition, the composer, itself some effect of a prior cause, so that the composer makes no original contribution?2
So, "causal" reasoning does not necessarily rule out that there might be something in nature that exhibits agent or original causation, the phenomenon whereby a thing causes some of its own behavior. Causal interactions depend on the nature of the beings that interact, what they are. So one cannot rule out, a priori (before investigation), that some beings could have the capacity to act on their own initiative.
Thus it seems that there might be in nature a form of existence that exhibits free will. Whether there is or is not is something to be discovered, not ruled out by a narrow world view or metaphysics that restricts everything to being just one kind of thing so that everything has just one kind of causal characteristics. Nature appears to be composed of many types and kinds of things and thus does not have to exclude free will.
So, free will seems to be possible, even in a world of causality. Whether free will actually exists we'll examine shortly.
We Cannot Know of Free Will. Now, another reason why some think that free will is not possible is that the dominant mode of studying, inspecting or examining nature is what we call "empiricism." In other words, many believe that the only way we know about nature is by observing it with our various sensory organs. But since the sensory organs do not give us direct evidence of such a thing as free will, the argument goes, there really isn't any such thing: no observable evidence for free will exists, therefore free will does not exist.
We Can Know Free Will. But the doctrine that empiricism captures all forms of knowing is wrong -- we know many things not simply through observation but through a combination of observation, inferences, and theory construction. (Consider, even the purported knowledge that empiricism is our form of knowledge is not "known" empirically!)
For one, many features of the universe, including criminal guilt, are detected without eyewitnesses but by way of theories which serve the purpose of best explaining what we do have before us to observe. This is true, also, even in the natural sciences. Many of the complex phenomena or facts in biology, astrophysics, subatomic physics, botany, chemistry -- not to mention psychology -- consist not of what we see or detect by observation but are inferred by way of a theory. The theory that explains things best -- most completely and most consistently -- is the best answer to the question as to what is going on.
Free will may well turn out to be in this category. In other words, free will may not be something that we can see directly, but what best explains what we do see in human life. This may include, for example, the many mistakes that human beings make in contrast to the few mistakes that other animals make. We also notice that human beings do all kinds of odd things that cannot be accounted for in terms of mechanical causation, the type associated with physics. We can examine a person's background and find that some people with bad childhoods turn out to be decent, while others become crooks. Free will, then, amounts to a very helpful explanation. For now all we need to consider is that this may well be so, and if empiricism does not allow for it, so much the worse for empiricism. One could know something because it explains something else better than any alternative. And that is not strict empirical knowledge.
Free Will Is Weird? Another matter that very often counts against free will is that the rest of (even living) beings in nature do not exhibit it. Dogs, cats, lizards, fish, frogs, etc., have no free will, and therefore it appears arbitrary to impute it to human beings. Why should we be free to do things when the rest of nature lacks any such capacity? It would be an impossible aberration. Some opponents of the free will idea, such as the behaviorist psychologist B. F. Skinner, have stressed this objection (for example, in the book Beyond Freedom and Dignity [Bantam Books, 1972]).
Free Will is Natural. The answer here is similar to what I gave earlier. To wit, there is enough variety in nature -- some things swim, some fly, some just lie there, some breathe, some grow, while others do not; so there is plenty of evidence of plurality of types and kinds of things in nature. Discovering that something has free will could be yet another addition to all the varieties of nature. Determinism seems to depend upon adherence to a very specific ontology, in terms of which everything must be a given kind of thing, one that can only move when prompted by something else, and this is not something that can be shown to hold universally so as to preclude free will.
Does God Allow Free Will? There is also the theological argument to the effect that if God knows everything, he/she knows the future, so what we do is unalterable. If someone knows that some future event will occur, e.g., that Halley's Comet will come nearest to earth at some given time in the future, then whatever is involved in that event cannot have a choice about it. So if God knows that you will have three children, then you have no genuine choice about that matter. It has to turn out that way.
God's "knowledge" is Mysterious. But God's knowledge is not likely to be the kind human beings have; indeed, it is a mystery just what it is. So nothing much can be inferred from it. It is a mistake to confuse what would follow from a human being's knowing the future with what would follow from God's "knowledge" of the future. The latter is entirely different from the former, and so the implications wouldn't be the same either.
For Free Will
Let's now consider whether free will actually does exist. I'll offer four arguments in support of an affirmative answer. (They are not uniquely my arguments but ones that have been proposed throughout the philosophical community.) Thus far we have only considered whether free will is possible. But does it exist? The following points support that contention.
Are We Determined to be Determinists -- or not? There is an argument against determinism to the effect that, if we are fully determined in what we think, believe, and do, then the belief that determinism is true is also a result of this determinism. But the same holds for the belief that determinism is false. There is nothing you can do about whatever you believe -- you had to believe it. There is no way to take an independent stance and consider the arguments in an unprejudiced manner because all the various forces making us assimilate the evidence either cause us to believe or disbelieve in determinism. One either turns out to be a determinist or not and in neither case can we appraise the issue objectively because we are predetermined to have a view on the matter one way or the other, ad infinitum.
But then, paradoxically, we'll never be able to resolve this debate, since there is no way of obtaining an objective assessment. Indeed, the very idea of philosophical, scientific or judicial objectivity, as well as of ever coming to know anything, has to do with being free. Thus, if we're engaged in this enterprise of learning about truth and distinguishing it from falsehood, we are committed to the idea that human beings have some measure of mental freedom. This view was put forward by Immanuel Kant, the important 18th century German philosopher, as well as by Nathaniel Branden, a psychologist who defends free will in his book The Psychology of Self-Esteem (Bantam Books, 1969).
Should We Become Determinists? There's another dilemma of determinism. It starts with noting that the determinist wants us to believe in determinism. In fact, he believes we ought to do so rather than believe in "the illusion of free will". But, as the saying goes in philosophy, "ought" implies "can". That is, if one ought to believe in or do something, this implies that one has a free choice in the matter; it implies that it is up to us whether we will hold determinism or free will as the better doctrine. That, in turn, assumes that we are free.
In other words, even arguing for determinism assumes that we are not determined to believe in free will or in determinism, but that it is a matter of our making certain choices about arguments, evidence, and thinking itself. We run across this paradox when we find people who blame us for not accepting the view that people's fate is not in their hands and so we should not blame them. Blaming some while denying that anyone should be blamed is a paradox, one which troubles the deterministic position. In one book defending determinism, the author ends by posing the following question: "If ['Left Wing politics is less given to attitudes and policies which have something of the assumption of Free Will in them'], should one part of the response ... be a move to the Left in politics? I leave you with that bracing question."3 Yet can this be a genuine question, if the answer is predetermined and one either will or will not move Left or Right and has no choice in the matter? It cannot, which is the idea advanced by Joseph M. Boyle, G. Grisez and O. Tollefsen in their book Free Choice (University of Notre Dame Press, 1976).
We Often Know We Are Free. In many contexts of our lives introspective knowledge is taken very seriously. When you go to a doctor and he asks you, "Are you in pain?" and you say, "Yes," and he says "Where is the pain?" and you say, "It's in my knee," the doctor doesn't say, "Why, you can't know, this is not public evidence, I will now get verifiable, direct evidence where you hurt." In fact your evidence is very good evidence. Witnesses at trials give such evidence as they report about what they have seen. This invokes, in a certain respect, introspective evidence: "This indeed is what I have seen or heard." It involves reference to something we recall from memory and is thus within us, not evident to others without our reports. Even in the various sciences people report on what they've read on surveys or seen on gauges or instruments or studies. Thus they are giving us introspective evidence.
Introspection is one source of evidence that we take as reasonably reliable. So what should we make of the fact that a lot of people do believe and say things like, "Damn it, I didn't make the right choice," or "I neglected to do something." They report to us, furthermore, that they have made various choices, decisions, etc., or that they intended this or that but not another thing. They, furthermore, often blame themselves for not having done something, thus implying that they know that they made a choice (for which they are taking responsibility).
In short, there is abundant evidence from people all around us of their experience of the existence of their own free choices. This cannot just be ruled out, since it would also undercut much else we take very seriously, indeed treat as decisive, coming from such sources.
Science Discovers Free Will. Finally, there is also the evidence of the fact that we do seem to have the capacity for self-monitoring. The human brain has the kind of structure that allows us, so to speak, to govern ourselves. We can inspect our lives, we can detect where we're going, and we can, therefore, change course. The human brain itself makes all this possible. The brain, because of its structure, can monitor itself -- that is, its higher regions can influence the rest -- and as a result we can decide whether to continue in a certain pattern or to change that pattern and go in a different direction. This is how we change habits, restrain impulses, control our temper, "watch what we eat," alter our developed motor skills in, say, how we play the piano, or even change our established opinions. That is the sort of free will that is demonstrable.
At least some scientists, for example Roger W. Sperry -- in his book Science and Moral Priority4 and in numerous more technical articles -- maintain that there's evidence for free will in this sense. This view depends on a number of points I have already mentioned. It assumes, for example, that there can be different kinds of causes in nature and, also, that the functioning of the brain as a complex neurophysiological system could manifest self-causation. An organism with our kind of brain could cause some mental functions to occur via what Sperry calls a process of "downward causation." (Sperry argues that there is some evidence of such causation even apart from how the human organism's higher mental activities occur, for example, in the way water freezes.)
Now the sort of thing Sperry thinks possible seems evident in our lives. We make plans and then, upon reconsideration (which at times takes but a fraction of a second) revise them. We explore alternatives and decide to follow one of these. We change a course of conduct we have embarked upon, or continue with it. We resist temptations, act despite the desire to do something else, and gradually build up good habits which, at first, were difficult to exhibit. In other words, there is a locus of individual self-responsibility or initiative -- or, to use Ted Honderich's term, "origination" -- that is evident in the way in which we look upon ourselves, and the way in which we in fact behave.
Some Cautionary Points. There clearly are cases of conduct in which some persons behave as they do because they were determined to do so by certain identifiable forces beyond their control. A brain tumor, a severe childhood trauma or some other intrusive force sometimes incapacitates people. This is evident in those occasional cases when a person who engaged in criminal behavior is shown to have had no control over what he or she did. Someone who actually had no capacity to control his or her behavior, could not control his or her own thinking or judgment and was, thus, moved by something other than his own will, cannot be said to possess a bona fide free will.
Compatibilism. Those who deny that we have free will would seem to be unable to make sense of our distinction between cases in which one controls one's behavior and those in which one is being moved by forces over which he or she has no control. When we face the latter sort of case, we still admit that the behavior could be good or bad but we deny that it is morally and legally significant -- it is more along lines of acts of nature or God by being out of the agent's control. This is also why philosophers who discuss ethics but deny free will have trouble distinguishing between morality and value theory -- e.g., some utilitarians, Marxists. Morality concerns how we ought to act (or the rightness of conduct), whereas value theory deals with what is good and bad and why. It is possible to address the latter field without taking a side on the free will issue. But that is not so with the former.
Some, though, will defend the view that even if we have no ultimate control over our actions -- even if our behavior, the judgments which we make, or our character is controlled by forces such as the environment or our genetic make-up -- we may still speak of ethics or morality. They are called compatibilists. They would mean by the term "ethics" or "morality" something different from what the terms would mean if we did possess free will, however. Ethics would concern good behavior, conduct in conformity with standards of right regardless of how it came about that one conformed or did not conform to those standards. Ethics, in line with the compatibilist position, might concern itself with values and how to secure them, without implying that one could of one’s own initiative exert control over whether these values would be achieved. Accordingly, then, without personal responsibility or agency, where one is the cause of what one does, whereby one initiates or originates one's significant actions, ethics would amount to something drastically different from what we usually mean by the term.
Given the way compatibilists conceive of their doctrine—namely, that it is possible to have both a causal explanation of human conduct as well as its being the agent’s responsibility to have engaged in that conduct—the theory does succeed better than all others, including the one dubbed agent causal or libertarian free will. But the compatibilist doctrine does not in fact allow for personal moral responsibility, however much its proponents so insist. No bona fide, ultimate personal responsibility can be attached to behavior that is, so to speak, softly determined. If one’s character has been molded so that one will be honest, generous, and so forth, and then one is indeed honest, generous and so forth, one cannot reasonably be said then to be responsible for being honest, generous and so forth. It is whatever molded one’s character that would explain one’s honesty, generosity and so forth. Any kind of moral pride or credit would amount to an illusion, not something well deserved.
Is Free Will Well Founded?
So these several reasons provide a kind of argumentative collage in support of the free will position. Can anyone do better with this issue? I don't know. I think it's best to ask only for what is the best of the various competing theories. Are human beings doing what they do solely as the consequences of forces acting on them? Or do they have the capacity to take charge of their lives, often neglect to do so properly or effectively, make stupid choices? Which supposition best explains the human world and its complexities around us?
I think the free will view makes much better sense. It explains, much better than do deterministic theories, how it is possible that human life involves such an array of possibilities, accomplishments as well as defeats, joys as well as sorrows, creation as well as destruction. It explains, also, why in human life there is so much change -- in language, custom, style, art, and science. Unlike other living beings, for which what is possible is pretty much fixed by instincts and reflexes -- even if some extraordinary behavior may be elicited, by way of extensive prodding in laboratories or, at times, in the face of unusual natural developments -- people initiate much of what they do, for better and for worse. From their most distinctive capacity of forming ideas and theories, to those of artistic and athletic inventiveness, human beings remake the world without, so to speak, having to do so! This, moreover, can make good sense if we understand them to have the distinctive capacity for initiating their own conduct rather than relying on mere stimulation and reaction. It also poses for them certain unique challenges, not the least of which is that they cannot reasonably expect any formula or system to predictably manage the future of human affairs, as some of the social sciences seem to hope. Social engineering is, thus, not a genuine prospect for solving human problems -- only education and individual initiative can do that.
Yet, it should be noted that free will does not contradict social science if the latter is not conceived in strict deterministic terms and the former is understood to allow for long range commitments, chosen policies, strategies, institutional involvements, etc.
Human beings make choices, some of which, however, commit them to a course of long range behavior which can be studied in terms of its impact on various features of the social world. People choose to enter schools, careers, relationships, to form institutions, to carry out plans, etc., and often their choices justify expecting them to stick with a reasonably predictable course of conduct.
In economics, for example, one may be studying the market place as an arena wherein human beings make various free choices concerning how they will be earning a living, what they will be producing and consuming, how they will be marketing their products, bargaining for prices, wages, and benefits, etc. The discipline examines the various permutations and consequences of these choices, as well as various regularities that are evident in the overall sphere of their activities.
Nonetheless, people are free to do what they do as commercial agents in various ways, embark upon their tasks more or less intensely at various periods of their lives, for various reasons of their own or because of circumstances they face. None of this needs to be determined and all these actions are open to moral evaluation.5
Yet this does not take away a good deal of the orderliness, even predictability, of people's economic activities, provided one does not expect them to behave like Halley's comet or a subatomic particle, according to impersonal laws or random forces. If social scientists appreciate that human beings have free will, they do not necessarily give up being scientific about human life; quite the contrary, they could be closer to dealing scientifically with it.
2. Moral Skepticism
We now turn to the second assumption and briefly discuss the pros and cons.
Let's once again recall what's at stake: is there any basis for our ethical or moral judgments? When a politician is denounced, a newspaper criticized for its practices, or a teacher (or even a textbook author) praised or blamed for his or her product, can any of this be made out? Is it possible to justify such judgments or claims? When one claims that one's parents have mistreated one or that one's physician has engaged in malpractice, is this just hot air or the expression of displeasure? Is it that we simply know this without any justification, without any basis for that knowledge? Is ethics, perhaps, some kind of realm where we can be right without any justification?6 Or perhaps there are standards we can identify that can help us show that what we claim is true?
Our discussion, here, will again certainly not exhaust the topic. There is much more to be considered in a thorough study, but what we will do should help lay the foundation and give a clue as to what the debate involves.
Against Morality
Moral Diversity vs. Objectivity. There are too many moral opinions, so how can there be one, true moral standard for all? Clearly, across the globe and throughout human history great diversity exists and has existed concerning what is supposed to be right and wrong in human conduct. Indeed, apparently decent and intelligent people differ very seriously on the topic. Surely that suggests very strongly that no common, objective standard is available as to how we ought to act. It is mostly cultural anthropologists who advance this view -- e.g., Ruth Benedict, in her famous book Patterns of Culture.
No Evidence of the Senses Supports Moral Claims. Moral judgments are not verifiable by observation, as are many other judgments we make. We can pretty much decide what color hat one is wearing, how many people are sitting in a classroom, where China's borders are, how bright the sun is at noon, and other subjects we want to know about, by means of the diligent use of our sensory organs. Yet no such use is going to enable us to decide whether we ought to tell the truth, write a letter to mother, help the poor, avoid pornography or ban abortions. Accordingly, moral disputes appear to be impossible to settle. This is an argument stressed by members of the philosophical school of logical positivism -- e.g., A. J. Ayer, Language, Truth and Logic (Dover, 1936).7
The Gap Between "Is" and "Ought." No judgment of what is the case can support a conclusion of what one ought to do -- the "is/ought" gap argument of the philosopher David Hume (1711-1776). The rules of sound reasoning, good judgment, require that when one draws a conclusion from premises, the terms that are present in this conclusion also appear in the premises. Yet if one begins an argument with claims about this or that being so and so, there is no "ought" or "should" present, whereas in a conclusion having moral import it is just those terms that would have to appear. Clearly, then, such moral conclusions cannot be derived from non-moral premises.
Morality is Against Nature. Nothing else in nature is subject to moral judgment or evaluation, so applying moral judgment or evaluation to human beings is odd, arbitrary, unjustified. Consider anything -- rocks, trees, birds, fish or whatever -- and there is no place for praising and blaming in our understanding of these things. So bringing morality into the picture when we consider human affairs is arbitrary, out of the blue, unjustified. John Mackie argues, for example, that moral values, if they existed, would be "entities or qualities or relations of a very strange sort, utterly different from anything else in the universe."8
For Morality
Diversity is More Apparent than Real. (A) Moral opinions tend to differ about details, not basics. (B) Some persons have a vested interest in obscuring moral standards lest they be found guilty of moral wrongdoing or evil. (C) Some persons are professional "devil's advocates" and propagate skepticism because they are testing, questioning, making sure (even if they do not act as if they were skeptics, e.g., toward their children, friends, or political representatives).
Perceptual Knowledge is Not All. In complicated areas observations do not suffice to verify judgments -- e.g., in astrophysics, particle physics, psychology, crime detection, etc. Moral judgments may require verification by way of a fairly complex theory or definition of, e.g., what "good" or "morally right" means. (Moral theories propose such theories and definitions.)
How not to Deduce but to Derive Ought from Is. (A) Hume was arguing against those who believed that moral conclusions can be deduced from premises stating various facts. But not all arguments consist of deductions, a formal statement linking premises to conclusions by nothing other than its logical structure and the essential meaning of the terms. Thus nothing strictly new is ever established by way of deduction, nothing that isn't true implicitly already. There is, however, reasoning that's not deductive but, roughly, inferential. Based on our observations, reflections, economical theorizing, and the like, we forge or develop an understanding of the world. When detectives explain a crime, they do not deduce -- contrary to Sherlock Holmes -- but infer who did it. Scientists work from evidence to conclusions in other than a strictly deductive fashion. They reach their understanding of what is what by developing valid, well-founded concepts and theories that best explain what they see and have previously learned about. Indeed, most often we are concerned to establish definitions, which are not the product of deduction but of generalization, abstraction, the formation of ideas.
Accordingly, (B) the premises of moral arguments could include theories or definitions as to what "good" or "ought to" mean and thus give support to particular moral judgments. For example, "the will of God is Good," "Good is what everyone ought to do," thus, "the will of God is what everyone ought to obey." Or, "Goodness is Living (for human beings)," "Living (for human beings) is furthered by thinking," thus, "Goodness is furthered by thinking." These definitions or theories cannot be just dismissed. There is a possibility that one of them captures accurately what the relevant terms mean and from this we could infer moral conclusions.9
Nature is Diverse Enough to Allow for Major Differences. As in the case of the free will hypothesis, there is nothing odd about something new emerging in nature that does invite judgments. Mary Midgley puts forth a very interesting idea, in The Ethical Primate (Routledge, 1994), to the effect that human beings are precisely distinctive in nature by having a moral nature, an ethical dimension to their lives. Indeed, this view was advanced by Aristotle, as well. This is evident enough when we consider how really extraordinary human life is -- what other aspect of nature gives us board games, museums, symphony music, philosophy or the novel?
The Best Theory is As True as Can Be
When we put all of this together, what is at issue is whether we get a more sensible understanding of the complexities of human life than otherwise -- do we get a better understanding, for example, of why social engineering and government regulation and regimentation do not work, why there are so many individual and cultural differences, why people can be wrong, why they can disagree with each other, etc.? It may be because they are free to do so, because they are not set in some pattern the way cats and dogs and orangutans and birds tend to be. In principle, all of the behavior of these creatures around us can be predicted because they are not creative in the sense of originating new ideas and behavior, although we do not always know enough about the constitution of these beings and how it would interact with their environment to actually predict what they will do. Human beings produce new ideas, and these can introduce new kinds of behavior in familiar situations. This, in part, is what is meant by the observation that different people often interpret their experiences differently. Yet we can make some predictions about what people will do, because they often do make up their minds in a given fashion and stick to their decisions over time. This is what we mean when we note that people make commitments, possess integrity, etc. So we can estimate what they are going to do. But even then we do not make certain predictions, only statistically significant ones. Clearly, very often people change their minds and surprise us. Furthermore, if we go to different cultures, they'll surprise us even more. This complexity, diversity, and individuation among human beings is better explained if human beings are free than if they are determined.
That is, at least, what is required for ethics to be a bona fide, genuine subject matter of concern.
Endnotes:
1 Ted Honderich, How Free Are You? The Determinism Problem (Oxford, England: Oxford University Press, 1993), pp. 42-3.
2 Even in physical reality, as in the freezing of water, the causal relationship isn't exactly what it is in other domains. The freezing occurs by way of what has been called "downward causation," instead of the more familiar "action-reaction" causation.
3 Ibid., p. 129.
4 New York, NY: Columbia University Press, 1983.
5 For more on this, see Tibor R. Machan, ed., Commerce and Morality (Lanham, MD: Rowman and Littlefield, 1988), especially "Ethics and its Uses."
6 This view is advanced in the name of Ludwig Wittgenstein by Johnston, op. cit., Wittgenstein and Moral Philosophy. But see, in contrast, Julius Kovesi, Moral Notions (London, England: Routledge & Kegan Paul, 1967), who also approaches ethics from Wittgenstein's teachings.
7 A very helpful discussion of the type of argument Ayer advances can be found in Laurie Calhoun, "Scientistic Confusion and Metaethical Relativism," Ethica, Vol. 7, No. 2 (1995), pp. 53-72. See, also, Renford Bambrough, Moral Scepticism and Moral Knowledge (Atlantic Highlands, NJ: Humanities Press, Inc., 1979).
8 J. L. Mackie, Ethics (Baltimore, MD: Penguin Books, 1977), p. 38.
9 For more along these lines, see W. D. Falk, Ought, Reasons, and Morality (Ithaca, NY: Cornell University Press, 1986), especially "Goading and Guiding" and "Hume on Is and Ought."
Tibor R. Machan
Whether ethics even exists is often in dispute. For example, many believe ethics and science to be incompatible. A good many social scientists and psychologists think that people cannot be morally blamed for what they do; instead their conduct is explainable by various causes. Ordinary folks, too, are at times convinced that people act badly only because something made them do so, some event in their upbringing or some factor of the culture to which they belong. (We tend to explain our own bad conduct more readily but blame other people for theirs!)
So whether ethics is a bona fide part of human life is not self-evident, nor obvious. In order for there to be ethics, some other facts must already obtain. Consider, for example, the claim that "Judy should not lie," or "The president of the United States of America ought to try to restrain his powers over the lives of people." This assumes that Judy and the president have a choice about what they will do. "Ought" implies "can." This means that if one ought to do something, it must be that one can either do or not do it. It also assumes that Judy, the president and anyone who would look into the matter can identify certain standards of conduct to be used in figuring out what we should or should not do.
First, then, ethics requires that we can exercise some genuine choices, that we have the capacity to initiate some actions. If this were impossible, the idea that we should (or should not) act in such and such a fashion would have no application in human life. Second, ethics requires that some principles that apply to conduct be identifiable or objective. Unless we all can learn those very general principles, ethics has no place in our lives. It consists, after all, of such principles of conduct as pertain to all human beings. Moral or ethical principles pertain to action, how we should conduct ourselves, on what basis we should choose or select what we will do. To succeed at living a human life which is morally good, some principles would have to be followed. "Morally good" here means: being excellent as a human being in one's life but as a matter of choice, of one's own initiative, not accidentally. So being tall, talented or beautiful is not an aspect of moral excellence, whereas being honest, courageous and prudent would be, if ethics is indeed a bona fide dimension of human living.
If we could not exercise genuine choices, morality would be impossible since no one could help what he or she is doing. It would all be a matter of good and bad things simply happening, as indeed they often do at the hands of nature, as it were: when tornadoes or earthquakes or diseases strike. Que será, será, period.
If we could not identify moral principles, we could never make a sensible selection from among alternative courses of conduct. Depending on what we aim for, we can identify the principles that will enable us to reach our goal. This is clearly evident in such fields as medicine, engineering and business. Thus, it seems that this second requirement of ethics, that we can identify principles of conduct, might be satisfied. But we will need to explore that further to be able to tell.
Let me explore briefly whether these two assumptions are reasonable or merely prejudice and myth, as some folks believe -- folks who would consign ethics to the dustbin of pre-science, akin to demonology or witchcraft.
The answers I will reach cannot be considered conclusive -- there isn't enough time and space to carry out a full investigation. But we will have a chance to look at the major points for and against the assumptions. Without some idea about whether they are true, ethics itself is left unsupported -- it could just as easily be in the class of the occult, such as astrology or palmistry.
1. Free Will?
The first matter to address is whether we have free will -- not necessarily all of us, all of the time but, rather, as a rule, normally. In other words, are human beings, as they have appeared throughout history in their innumerable diverse circumstances (though not when incapacitated or significantly damaged) capable of bringing about, of their own initiative, the behavior in which they engage?
Against Free Will
Nature's Laws versus Free Will. First, one of the major objections against free will is that nature is governed by a set of laws, mainly the laws of physics. The argument here is that all material substances are controlled by these laws and we human beings are basically complicated versions of material substances. Therefore, whatever governs material substance in the universe must also govern human life.
Social science, for one, which studies human beings in their social relations, looks into some of the causes that produce our behavior. So does neuroscience, a sub-discipline of biology, in its study of our individual brains and minds. In each case what is studied are the causes of behavior. So the only difference between the rest of nature and ourselves, as far as these branches of science are concerned, is that we are more complicated, not that we are not governed by the same principles or laws of nature.
Most definitely, it is argued, no such thing as an original cause is evident in the rest of nature, something that would have to be possible for free will to exist. As one advocate of determinism puts it, "[T]he best response to the demand for an explanation of the relation between an originator and decisions is that an explanation cannot be given. We have to regard this relation as primitive or unanalyzable."1 In other words, originating or initiating some action seems nothing more than a myth or an unexplainable fact, for which no evidence or argument can be given.
The determinist claims that all our actions, including decisions, are more sensibly taken to be effects of some prior events. It is the determinist's view that everything we do is the effect of some set of causal circumstances. This makes better sense, say the determinists, than leaving things unexplained, mysterious.
Affirming Initiative. Now, in response one might argue that nature exhibits innumerable different domains, distinct not only in their complexity but also in the kinds of beings they include. There are, to be sure, many domains where we find the familiar cause-and-effect situation clearly evident -- for example, on the billiard table, in geological movements, and in the motion of the planets. But there are areas where something else appears to be going on. For example, is the cause of a musical composition, the composer, itself some effect of a prior cause, so that the composer makes no original contribution?2
So, "causal" reasoning does not necessarily rule out that there might be something in nature that exhibits agent or original causation, the phenomenon whereby a thing causes some of its own behavior. Causal interactions depend on the nature of the beings that interact, what they are. So one cannot rule out, a priori (before investigation), that some beings could have the capacity to act on their own initiative.
Thus it seems that there might be in nature a form of existence that exhibits free will. Whether there is or is not is something to be discovered, not ruled out by a narrow world view or metaphysics that restricts everything to being just one kind of thing so that everything has just one kind of causal characteristics. Nature appears to be composed of many types and kinds of things and thus does not have to exclude free will.
So, free will seems to be possible, even in a world of causality. Whether free will actually exists we'll examine shortly.
We Cannot Know of Free Will. Now, another reason why some think that free will is not possible is that the dominant mode of studying, inspecting or examining nature is what we call "empiricism." In other words, many believe that the only way we know about nature is by observing it with our various sensory organs. But since the sensory organs do not give us direct evidence of such a thing as free will, there really isn't any such thing. Since no observable evidence for free will exists, therefore free will does not exist.
We Can Know Free Will. But the doctrine that empiricism captures all forms of knowing is wrong -- we know many things not simply through observation but through a combination of observation, inferences, and theory construction. (Consider, even the purported knowledge that empiricism is our form of knowledge is not "known" empirically!)
For one, many features of the universe, including criminal guilt, are detected without eyewitnesses but by way of theories which serve the purpose of best explaining what we do have before us to observe. This is true, also, even in the natural sciences. Many of the complex phenomena or facts in biology, astrophysics, subatomic physics, botany, chemistry -- not to mention psychology -- consist not of what we see or detect by observation but are inferred by way of a theory. The theory that explains things best -- most completely and most consistently -- is the best answer to the question as to what is going on.
Free will may well turn out to be in this category. In other words, free will may not be something that we can see directly, but what best explains what we do see in human life. This may include, for example, the many mistakes that human beings make in contrast to the few mistakes that other animals make. We also notice that human beings do all kinds of odd things that cannot be accounted for in terms of mechanical causation, the type associated with physics. We can examine a person's background and find that some people with bad childhoods turn out to be decent, while others become crooks. Free will, then, amounts to a very helpful explanation. For now all we need to consider is that this may well be so, and if empiricism does not allow for it, so much the worse for empiricism. One could know something because it explains something else better than any alternative does. And that is not strict empirical knowledge.
Free Will Is Weird? Another matter that very often counts against free will is that the rest of (even living) beings in nature do not exhibit it. Dogs, cats, lizards, fish, frogs, etc., have no free will, and therefore it appears arbitrary to impute it to human beings. Why should we be free to do things when the rest of nature lacks any such capacity? It would be an impossible aberration. Some opponents of the free will idea, such as the behaviorist psychologist B. F. Skinner, have stressed this objection (for example, in the book Beyond Freedom and Dignity [Bantam Books, 1972]).
Free Will is Natural. The answer here is similar to what I gave earlier. To wit, there is enough variety in nature -- some things swim, some fly, some just lie there, some breathe, some grow, while others do not; so there is plenty of evidence of plurality of types and kinds of things in nature. Discovering that something has free will could be yet another addition to all the varieties of nature. Determinism seems to depend upon adherence to a very specific ontology, in terms of which everything must be a given kind of thing, one that can only move when prompted by something else, and this is not something that can be shown to hold universally so as to preclude free will.
Does God Allow Free Will? There is also the theological argument to the effect that if God knows everything, he/she knows the future, so what we do is unalterable. If someone knows that some future event will occur, e.g., that Halley's Comet will come nearest to earth at some given time in the future, then whatever is involved in that event cannot have a choice about it. So if God knows that you will have three children, then you have no genuine choice about that matter. It has to turn out that way.
God's "knowledge" is Mysterious. But God's knowledge is not likely to be the kind human beings have; indeed, it is a mystery just what it is. So nothing much can be inferred from it. It is a mistake to confuse what would follow from a human being's knowing the future with what would follow from God's "knowledge" of the future. The latter is entirely different from the former, and so the implications wouldn't be the same either.
For Free Will
Let's now consider whether free will actually does exist. I'll offer four arguments in support of an affirmative answer. (They are not uniquely my arguments but ones that have been proposed throughout the philosophical community.) Thus far we have only considered whether free will is possible. But does it exist? The following points support that contention.
Are We Determined to be Determinists -- or not? There is an argument against determinism to the effect that, if we are fully determined in what we think, believe, and do, then the belief that determinism is true is also a result of this determinism. But the same holds for the belief that determinism is false. There is nothing you can do about whatever you believe -- you had to believe it. There is no way to take an independent stance and consider the arguments in an unprejudiced manner because all the various forces making us assimilate the evidence either cause us to believe or disbelieve in determinism. One either turns out to be a determinist or not and in neither case can we appraise the issue objectively because we are predetermined to have a view on the matter one way or the other, ad infinitum.
But then, paradoxically, we'll never be able to resolve this debate, since there is no way of obtaining an objective assessment. Indeed, the very idea of philosophical, scientific or judicial objectivity, as well as of ever coming to know anything, has to do with being free. Thus, if we're engaged in this enterprise of learning about truth and distinguishing it from falsehood, we are committed to the idea that human beings have some measure of mental freedom. This view was put forward by Immanuel Kant, the important 18th century German philosopher, as well as by Nathaniel Branden, a psychologist who defends free will in his book The Psychology of Self-Esteem (Bantam Books, 1969).
Should We Become Determinists? There's another dilemma of determinism. It starts with noting that the determinist wants us to believe in determinism. In fact, he believes we ought to do so rather than believe in "the illusion of free will". But, as the saying goes in philosophy, "ought" implies "can". That is, if one ought to believe in or do something, this implies that one has a free choice in the matter; it implies that it is up to us whether we will hold determinism or free will as the better doctrine. That, in turn, assumes that we are free.
In other words, even arguing for determinism assumes that we are not simply determined to believe in free will or in determinism, but that it is a matter of our making certain choices about arguments, evidence, and thinking itself. We run across this paradox when we find people who blame us for not accepting the view that people's fate is not in their hands and that we therefore should not blame them. Blaming some while denying that anyone should be blamed is a paradox, one which troubles a deterministic position. In one book defending determinism, the author ends by posing the following question: "If ['Left Wing politics is less given to attitudes and policies which have something of the assumption of Free Will in them'], should one part of the response ... be a move to the Left in politics? I leave you with that bracing question."3 Yet can this be a genuine question, if the answer is predetermined and one either will or will not move Left or Right and has no choice in the matter? It cannot, which is the idea advanced by Joseph M. Boyle, G. Grisez and O. Tollefsen in their book Free Choice (University of Notre Dame Press, 1976).
We Often Know We Are Free. In many contexts of our lives introspective knowledge is taken very seriously. When you go to a doctor and he asks you, "Are you in pain?" and you say, "Yes," and he says "Where is the pain?" and you say, "It's in my knee," the doctor doesn't say, "Why, you can't know, this is not public evidence, I will now get verifiable, direct evidence where you hurt." In fact your evidence is very good evidence. Witnesses at trials give such evidence as they report about what they have seen. This invokes, in a certain respect, introspective evidence: "This indeed is what I have seen or heard." It involves reference to something we recall from memory and is thus within us, not evident to others without our reports. Even in the various sciences people report on what they've read on surveys or seen on gauges or instruments or studies. Thus they are giving us introspective evidence.
Introspection is one source of evidence that we take as reasonably reliable. So what should we make of the fact that a lot of people do believe and say things like, "Damn it, I didn't make the right choice," or "I neglected to do something." They report to us, furthermore, that they have made various choices, decisions, etc., or that they intended this or that but not another thing. They, furthermore, often blame themselves for not having done something, thus implying that they know that they made a choice (for which they are taking responsibility).
In short, there is abundant evidence from people all around us of their experience of the existence of their own free choices. This cannot just be ruled out, since it would also undercut much else we take very seriously, indeed treat as decisive, coming from such sources.
Science Discovers Free Will. Finally, there is also the evidence of the fact that we do seem to have the capacity for self-monitoring. The human brain has the kind of structure that allows us, so to speak, to govern ourselves. We can inspect our lives, we can detect where we're going, and we can, therefore, change course. The human brain itself makes all this possible. The brain, because of its structure, can monitor itself -- that is, its higher regions can influence the rest -- and as a result we can decide whether to continue in a certain pattern or to change that pattern and go in a different direction. This is how we change habits, restrain impulses, control our temper, "watch what we eat," alter our developed motor skills in, say, how we play the piano, or even change our established opinions. That is the sort of free will that is demonstrable.
At least some scientists, for example Roger W. Sperry -- in his book Science and Moral Priority4 and in numerous more technical articles -- maintain that there's evidence for free will in this sense. This view depends on a number of points I have already mentioned. It assumes, for example, that there can be different kinds of causes in nature and, also, that the functioning of the brain as a complex neurophysiological system could manifest self-causation. An organism with our kind of brain could cause some mental functions to occur via what Sperry calls a process of "downward causation." (Sperry argues that there is some evidence of such causation even apart from how the human organism's higher mental activities occur, for example, in the way water freezes.)
Now the sort of thing Sperry thinks possible seems evident in our lives. We make plans and then, upon reconsideration (which at times takes but a fraction of a second) revise them. We explore alternatives and decide to follow one of these. We change a course of conduct we have embarked upon, or continue with it. We resist temptations, act despite the desire to do something else, and gradually build up good habits which, at first, were difficult to exhibit. In other words, there is a locus of individual self-responsibility or initiative -- or, to use Ted Honderich's term, "origination" -- that is evident in the way in which we look upon ourselves, and the way in which we in fact behave.
Some Cautionary Points. There clearly are cases of conduct in which some persons behave as they do because they were determined to do so by certain identifiable forces beyond their control. A brain tumor, a severe childhood trauma or some other intrusive force sometimes incapacitates people. This is evident in those occasional cases when a person who engaged in criminal behavior is shown to have had no control over what he or she did. Someone who actually had no capacity to control his or her behavior, could not control his or her own thinking or judgment and was, thus, moved by something other than his own will, cannot be said to possess a bona fide free will.
Compatibilism. Those who deny that we have free will would seem to be unable to make sense of our distinction between cases in which one controls one's behavior and those in which one is being moved by forces over which he or she has no control. When we face the latter sort of case, we still admit that the behavior could be good or bad, but we deny that it is morally and legally significant -- it is more along the lines of acts of nature or God, being out of the agent's control. This is also why philosophers who discuss ethics but deny free will have trouble distinguishing between morality and value theory -- e.g., some utilitarians, Marxists. Morality concerns how we ought to act (or the rightness of conduct), whereas value theory deals with what is good and bad and why. It is possible to address the latter field without taking a side on the free will issue. But that is not so with the former.
Some, though, will defend the view that even if we have no ultimate control over our actions -- even if our behavior, the judgments which we make, or our character is controlled by forces such as the environment or our genetic make-up -- we may still speak of ethics or morality. They are called compatibilists. They would mean by the term "ethics" or "morality" something different from what the terms would mean if we did possess free will, however. Ethics would concern good behavior, conduct in conformity with standards of right regardless of how it came about that one conformed or did not conform to those standards. Ethics, in line with the compatibilist position, might concern itself with values and how to secure them, without implying that one could of one’s own initiative exert control over whether these values would be achieved. Accordingly, then, without personal responsibility or agency, where one is the cause of what one does, whereby one initiates or originates one's significant actions, ethics would amount to something drastically different from what we usually mean by the term.
Given the way compatibilists conceive of their doctrine—namely, that it is possible to have both a causal explanation of human conduct and the agent's responsibility for having engaged in that conduct—the theory does succeed better than all others, including the one dubbed agent-causal or libertarian free will. But the compatibilist doctrine does not in fact allow for personal moral responsibility, however much its proponents insist that it does. No bona fide, ultimate personal responsibility can be attached to behavior that is, so to speak, softly determined. If one's character has been molded so that one will be honest, generous, and so forth, and then one is indeed honest, generous and so forth, one cannot reasonably be said to be responsible for being honest, generous and so forth. It is whatever molded one's character that would explain one's honesty, generosity and so forth. Any kind of moral pride or credit would amount to an illusion, not something well deserved.
Is Free Will Well Founded?
So these several reasons provide a kind of argumentative collage in support of the free will position. Can anyone do better with this issue? I don't know. I think it's best to ask only for what is the best of the various competing theories. Are human beings doing what they do solely as the consequence of forces acting on them? Or do they have the capacity to take charge of their lives, even if they often neglect to do so properly or effectively and make stupid choices? Which supposition best explains the human world and its complexities?
I think the free will view makes much better sense. It explains, much better than do deterministic theories, how it is possible that human life involves such an array of possibilities, accomplishments as well as defeats, joys as well as sorrows, creation as well as destruction. It explains, also, why in human life there is so much change -- in language, custom, style, art, and science. Unlike other living beings, for which what is possible is pretty much fixed by instincts and reflexes -- even if some extraordinary behavior may be elicited, by way of extensive prodding in laboratories or, at times, in the face of unusual natural developments -- people initiate much of what they do, for better and for worse. From their most distinctive capacity of forming ideas and theories, to those of artistic and athletic inventiveness, human beings remake the world without, so to speak, having to do so! This, moreover, can make good sense if we understand them to have the distinctive capacity for initiating their own conduct rather than relying on mere stimulation and reaction. It also poses for them certain unique challenges, not the least of which is that they cannot reasonably expect any formula or system to predictably manage the future of human affairs, as some of the social sciences seem to hope it will. Social engineering is, thus, not a genuine prospect for solving human problems -- only education and individual initiative can do that.
Yet, it should be noted that free will does not contradict social science if the latter is not conceived in strictly deterministic terms and the former is understood to allow for long-range commitments, chosen policies, strategies, institutional involvements, etc.
Human beings make choices, some of which, however, commit them to a course of long-range behavior that can be studied in terms of its impact on various features of the social world. People choose to enter schools, careers, relationships, to form institutions, to carry out plans, etc., and often their choices justify expecting them to stick with a reasonably predictable course of conduct.
In economics, for example, one may study the marketplace as an arena wherein human beings make various free choices concerning how they will earn a living, what they will produce and consume, how they will market their products, bargain for prices, wages, and benefits, etc. The discipline examines the various permutations and consequences of these choices, as well as the various regularities that are evident in the overall sphere of their activities.
Nonetheless, people are free to do what they do as commercial agents in various ways, to embark upon their tasks more or less intensely at various periods of their lives, for various reasons of their own or because of circumstances they face. None of this needs to be determined, and all these actions are open to moral evaluation.5
Yet this does not take away a good deal of orderliness, even predictability, from people's economic activities, provided one does not expect them to behave like Halley's comet or a subatomic particle, according to impersonal laws or random forces. If social scientists appreciate that human beings have free will, they do not necessarily give up being scientific about human life -- quite the contrary. They could, instead, be closer to dealing scientifically with human life.
2. Moral Skepticism
We now turn to the second assumption and briefly discuss the pros and cons.
Let's once again recall what's at stake: is there any basis for our ethical or moral judgments? When a politician is denounced, a newspaper criticized for its practices, or a teacher (or even a textbook author) praised or blamed for his or her product, can any of this be made out? Is it possible to justify such judgments or claims? When one claims that one's parents have mistreated one or that one's physician has engaged in malpractice, is this just hot air or the expression of displeasure? Is it that we simply know this without any justification, without any basis for that knowledge? Is ethics, perhaps, some kind of realm where we can be right without any justification?6 Or perhaps there are standards we can identify that can help us show that what we claim is true?
Our discussion, here, will again certainly not exhaust the topic. There is much more to be considered in a thorough study, but what we will do should help lay the foundation and give a clue as to what the debate involves.
Against Morality
Moral Diversity vs. Objectivity. There are too many moral opinions, so how can there be one true moral standard for all? Clearly, across the globe and throughout human history great diversity exists and has existed concerning what is supposed to be right and wrong in human conduct. Indeed, apparently decent and intelligent people differ very seriously on the topic. Surely that suggests very strongly that no common, objective standard is available as to how we ought to act. It is mostly cultural anthropologists who advance this view -- e.g., Ruth Benedict, in her famous book Patterns of Culture.
No Evidence of the Senses supports Moral Claims. Moral judgments are not verifiable by observation, as are many other judgments we make. We can pretty much decide what color hat one is wearing, how many people are sitting in a classroom, where China's borders are, how bright the sun is at noon, and other subjects we want to know about, by means of the diligent use of our sensory organs. Yet, no such use is going to enable us to decide whether we ought to tell the truth, write a letter to mother, help the poor, avoid pornography or ban abortions. Accordingly, moral disputes appear to be impossible to settle. This is an argument stressed by members of the philosophical school of logical positivism -- e.g., A. J. Ayer, Language, Truth and Logic (Dover, 1936).7
The Gap Between "Is" and "Ought." No judgment of what is the case can support a conclusion of what one ought to do -- the "is/ought" gap argument of the philosopher David Hume (1711-1776). The rules of sound reasoning, good judgment, require that when one draws a conclusion from premises, the terms that are present in this conclusion also appear in the premises. Yet if one begins an argument with claims about this or that being so and so, there is no "ought" or "should" present, whereas in a conclusion having moral import it is just those terms that would have to appear. Clearly, then, such moral conclusions cannot be derived from non-moral premises.
Morality is Against Nature. Nothing else in nature is subject to moral judgment or evaluation, so applying moral judgment or evaluation to human beings is odd, arbitrary, unjustified. Consider anything -- rocks, trees, birds, fish or whatever -- and there is no place for praising and blaming in our understanding of these things. So bringing morality into the picture when we consider human affairs is arbitrary, out of the blue, unjustified. John Mackie argues, for example, that moral values, if they existed, would be "entities or qualities or relations of a very strange sort, utterly different from anything else in the universe."8
For Morality
Diversity is More Apparent than Real. (A) Moral opinions tend to differ about details, not basics. (B) Some persons have a vested interest in obscuring moral standards lest they be found guilty of moral wrongdoing or evil. (C) Some persons are professional "devil's advocates" and propagate skepticism because they are testing, questioning, making sure (even if they do not act as if they were skeptics, e.g., toward their children, friends, political representatives).
Perceptual Knowledge is Not All. In complicated areas observations do not suffice to verify judgment -- e.g., in astrophysics, particle physics, psychology, crime detection, etc. Moral judgments may require verification by way of a fairly complex theory or definition of, e.g., what "good" or "morally right" means. (Moral theories propose such theories and definitions.)
How not to Deduce but to Derive Ought from Is. (A) Hume was arguing against those who believed that moral conclusions can be deduced from premises stating various facts. But not all arguments consist of deductions, a formal statement linking premises to conclusions by nothing other than its logical structure and the essential meaning of the terms. Thus nothing strictly new is ever established by way of deduction, nothing that isn't true implicitly already. There is, however, reasoning that's not deductive but, roughly, inferential. Based on our observations, reflections, economical theorizing, and the like, we forge or develop an understanding of the world. When detectives explain a crime, they do not deduce -- contrary to Sherlock Holmes -- but infer who did it. Scientists work from evidence to conclusions in other than a strictly deductive fashion. They reach their understanding of what is what by developing valid, well founded concepts and theories that best explain what they see and have previously learned about. Indeed, most often we are concerned to establish definitions which are not the product of deduction but generalization, abstraction, the formation of ideas.
Accordingly, (B) the premises of moral arguments could include theories or definitions as to what "good" or "ought to" mean and thus give support to particular moral judgments. For example, "the will of God is Good," "Good is what everyone ought to do," thus, "the will of God is what everyone ought to obey." Or, "Goodness is Living (for human beings)," "Living (for human beings) is furthered by thinking," thus, "Goodness is furthered by thinking." These definitions or theories cannot be just dismissed. There is a possibility that one of them captures accurately what the relevant terms mean and from this we could infer moral conclusions.9
Nature is Diverse Enough to Allow for Major Differences. As in the case of the free will hypothesis, there is nothing odd about something new emerging in nature that does invite judgments. Mary Midgley puts forth a very interesting idea, in The Ethical Primate (Routledge, 1994), to the effect that human beings are precisely distinctive in nature by having a moral nature, an ethical dimension to their lives. Indeed, this view was advanced by Aristotle as well. This is evident enough when we consider how really extraordinary human life is -- what other aspect of nature gives us board games, museums, symphony music, philosophy or the novel?
The Best Theory is As True as Can Be
When we put all of this together, what is at issue is whether we get a more sensible understanding of the complexities of human life than otherwise -- do we get a better understanding, for example, of why social engineering and government regulation and regimentation do not work, why there are so many individual and cultural differences, why people can be wrong, why they can disagree with each other, etc.? It may be because they are free to do so, because they are not set in some pattern the way cats and dogs and orangutans and birds tend to be. In principle, all of the behavior of these creatures around us can be predicted because they are not creative in the sense that they originate new ideas and behavior, although we do not always know enough about the constitution of these beings and how it would interact with their environment to actually predict what they will do. Human beings produce new ideas, and these can introduce new kinds of behavior in familiar situations. This, in part, is what is meant by the fact that different people often interpret their experiences differently. Yet we can make some predictions about what people will do, because they often do make up their minds in a given fashion and stick to their decisions over time. This is what we mean when we note that people make commitments, possess integrity, etc. So we can estimate what they are going to do. But even then we do not make certain predictions, only statistically significant ones. Clearly, very often people change their minds and surprise us. Furthermore, if we go to different cultures, they'll surprise us even more. This complexity, diversity, and individuation among human beings is better explained if human beings are free than if they are determined.
That is, at least, what is required for ethics to be a bona fide, genuine subject matter of concern.
Endnotes:
1 Ted Honderich, How Free Are You? The Determinism Problem (Oxford, England: Oxford University Press, 1993), pp. 42-3.
2 Even in physical reality, as in the freezing of water, the causal relationship isn't exactly what it is in other domains. The freezing occurs by way of what has been called "downward causation," instead of the more familiar "action-reaction" causation.
3 Ibid., p. 129.
4 New York, NY: Columbia University Press, 1983.
5 For more on this, see Tibor R. Machan, ed., Commerce and Morality (Lanham, MD: Rowman and Littlefield, 1988), especially "Ethics and its Uses."
6 This view is advanced in the name of Ludwig Wittgenstein by Johnston, op. cit., Wittgenstein and Moral Philosophy. But see, in contrast, Julius Kovesi, Moral Notions (London, England: Routledge & Kegan Paul, 1967), who also approaches ethics from Wittgenstein's teachings.
7 A very helpful discussion of the type of argument Ayer advances can be found in Laurie Calhoun, "Scientistic Confusion and Metaethical Relativism," Ethica, Vol. 7, No. 2 (1995), pp. 53-72. See, also, Renford Bambrough, Moral Scepticism and Moral Knowledge (Atlantic Highlands, NJ: Humanities Press, Inc., 1979).
8 J. L. Mackie, Ethics (Baltimore, MD: Penguin Books, 1977), p. 38.
9 For more along these lines, see W. D. Falk, Ought, Reasons, and Morality (Ithaca, NY: Cornell University Press, 1986), especially "Goading and Guiding" and "Hume on Is and Ought."
Friday, May 09, 2003
The Tax Anomaly
Tibor R. Machan
Since the American Revolution, when monarchy was rejected on this continent and sovereignty was finally legally established for individual human beings, not governments, there has been a problem with taxation. The institution is an anomaly, plain and simple, in a genuinely free society. In such a society one has unalienable—meaning, never justifiably violable—rights to life, liberty and the pursuit of happiness, among other rights. But instead of transforming public finance from a coercive to a voluntary system, the framers left intact taxation, albeit changed so that at least there’d be representation along with it.
Those who kept loving government more than individual sovereignty have made use of this anomalous feature of our legal system to expand the state. It is quite natural that this should have occurred—whenever one compromises a principle, eventually the compromise devours the principle altogether. (This is why ethics counsels even against little white lies—it corrupts character.)
By now the tax system in the USA doesn’t even adhere to the principle, “No taxation without representation.” (It was the famous pre-Revolutionary patriot James Otis who said, “Taxation without representation is tyranny.”) Government actually taxes members of future generations, ones certainly not represented in Congress. And taxes are imposed on travelers all over the place by politicians who do not represent them. What is far worse, but to be expected, given the logic of such processes, is that instead of confining taxation to financing the only proper function of government, which is “to secure [our] rights,” taxation is now used to fund every project in society that the human imagination can conceive.
But, isn’t it the case that, to quote Justice Oliver Wendell Holmes, Jr., “Taxation is the price we pay for civilization”? That is a ruse! It comes from one of America’s legal giants who had no sympathy at all for limited government, quite the opposite.
In fact, taxation is extortion. The government tells us, “You may work for a living only if you hand over roughly forty percent of your earnings to us to fund goals we have decided need funding.” This is not what citizens of a free society deserve from their agents, ones who are entrusted with protecting not attacking their rights.
But, didn’t “we” enter into a social compact that resulted in the tax system we have? No we didn’t, not if we indeed have unalienable rights—no contract can give up anyone’s rights. I certainly may not contract so that you lose your rights. A contract can only be entered into voluntarily—unwilling third parties may not be conscripted to it. If, as in the case of the USA, the society is grounded on unalienable individual rights, the only way government can come about is through “the consent of the governed.” And while this had been understood too loosely in the past, even by the American founders, its meaning is clear: you and I must consent to be governed.
Now we do consent to being governed if we remain within the legal jurisdiction of a certain sphere, but only to the extent that is just—it is the just powers of government only to which we can consent, and to tax isn’t one of the just powers of government. To be properly funded, some other, but in any case voluntary, means must be found. Since, however, this is a very novel idea—about as novel even in the USA as free markets are in the former Soviet bloc countries or freedom of religion in Iran—studies as to how to bring it off are in short supply. (Remember, most universities are tax funded, so they aren’t likely to encourage alternative ways of funding government!)
Still, there has been some progress in the study of funding government without any coercive means. One method proposed is to charge for all contracts which are, ultimately, backed by the courts. Sure, one can just shake a hand and proceed, but this isn’t likely when multimillions are at stake and legal recourse is wanted in case of some kind of mishap. There is also the possibility of funding government via lotteries. And at the beginning, governments could make a bundle and fund plenty of their proper undertakings by selling off all the properties that they should not own in the first place.
No, I am no expert in the field of public finance for a government of a free society. Still, I can say confidently that if the idea were not dismissed so readily by those who just love to tax their fellows for projects of their own, human beings could put their minds to the task profitably enough and find a way to eliminate this anomaly from our midst.
"Looting" in Iraq
By TIBOR R. MACHAN
Freedom News Service
This idea that Iraqis are criminals for looting is full of problems. To begin with, can you loot from a dictator? What about from a dictator who isn’t even in power any longer? After the dictatorship collapses, whose stuff is being looted anyway? To whom does the money in those government banks belong? To whom do the artifacts in museums belong?
Well, they belong to no one — or to everyone. If they belong to no one, then there is no looting going on, only some grabbing of stuff that’s lying about, left there by, well, the original looters who accumulated this stuff with the money they stole from the people who are supposedly doing the most recent looting.
If the stuff belongs to everyone, it’s like public property and everyone who is part of the public has a claim to it. Sure, in modern societies, public stuff is usually controlled by the government. The government, in turn, hands it out to people who have jumped through various bureaucratic hoops to obtain it. But when the bureaucracy itself is in shambles, the public stuff is obtained mostly by random, disorderly pickings.
Think of it for a moment: The Iraqi "looters" now dubbed criminals, sometimes even being shot for engaging in "looting," had been systematically looted for decades. So, now that they realized that their original looters -- dictator Saddam Hussein and his gang -- are no longer in power, they decided to go scavenger hunting.
Sure, the American officials in charge there now may have wanted to have it all left to be sorted out by them or their newly appointed bureaucrats. But that may not, and indeed need not, be much of an incentive for all the previously victimized Iraqis to abstain from grabbing some valuables while the picking was possible.
In short, maybe what we had and still have in Iraq in the aftermath of the dictatorship’s demise is a version of the tragedy of the commons. That tragedy goes on in most countries, only in more orderly fashion. All those lobbyists flocking to centers of government, holding out their hands for what the public servants might hand to them from the loot they have taken from nature or the pockets of the citizenry, are but organized looters, actually. And as it has been so well shown by economists and others, from the time of Aristotle to Garrett Hardin, when stuff is owned in common, there is a problem environmentalists should be very concerned about. As Aristotle put it, "that which is common to the greatest number has the least care bestowed upon it. Every one thinks chiefly of his own, hardly at all of the common interest; and only when he is himself concerned as an individual. For besides other considerations, everybody is more inclined to neglect the duty which he expects another to fulfill; as in families many attendants are often less useful than a few."
What might help in restoring a civilized approach to the treatment of valuables in Iraq? It is something with which they aren’t very familiar, namely the institution of private property rights. But since such an idea has been taken to encourage greed, avarice and similar sins, rather than to do what it actually does, namely promote responsibility and care for things, it isn’t likely that Iraqis will see much of it very soon. Not even American officials who may try to assist the Iraqis in restoring order have much of a clue about the merits of full privatization.
So, perhaps the "looting" Iraqis ought to be viewed with more understanding and even compassion. The main difference between their looting and that done by others around the globe is but a detail: the rest follow some sham rules, while the Iraqis made no pretense at having any such rules where public "ownership" is concerned.
Tibor Machan is a professor of business ethics and Western Civilization at Chapman University in Orange, Calif., and co-author of "A Primer on Business Ethics." He advises Freedom Communications, parent company of this newspaper. E-mail him at Machan@chapman.edu
Friday, May 02, 2003
SARS, Quarantine and Liberty
Tibor R. Machan
Let us assume here that SARS is a contagious disease that can be identified as such by doctors, including ones screening people who arrive from foreign shores. Would it be proper for the government to quarantine such folks?
If the disease is a serious health hazard, quarantine by legal authorities would be proper. Why?
It is the proper task of government to secure citizens’ rights. If someone with a contagious disease chooses to mingle with others who aren’t aware of this person’s disease, this person is very likely about to inflict a serious health hazard on these innocent citizens who haven’t chosen to mingle with the diseased person. So, the authorities entrusted with the job of securing our rights then have the responsibility to keep such people out of circulation.
Of course, whether SARS is such a serious disease is not something I know for sure and so I must put the matter in hypothetical form. If the disease is serious—not merely someone with a bad sneeze who may transmit a slight cold to others with whom contact will be unavoidable—then if this person intends to mingle with others, this person will be intent on embarking on criminal behavior—on assaulting others with his or her disease. No one has the right to do that to other persons who haven’t been forewarned and who have no choice about remaining in the vicinity of the diseased person.
None of this deals fully with the SARS phenomenon. There are, to the best of my knowledge, many others who carry contagious diseases other than SARS. Thousands of persons with, for example, influenza travel freely about the globe without anyone going into panic about it. And, yes, influenza can kill! Arguably, then, SARS is something of a media-driven scare, not a truly serious hazard compared to others afoot in various parts of the world.
The phenomenon reminds me of the time when thousands of people canceled trips to Europe after the USA bombed Libya back in April 1986 and there was fear of terrorism because of the bombing. One clever economist did some calculations and found that by remaining home, the chances for serious injury and even death for those who canceled their trips increased because of traffic hazards they would face when driving around on US soil. In contrast, flying to Europe and taking a train or a tour bus to various parts would have meant minimal danger to the tourists. No one, to my knowledge, has done a follow-up study on just how many of those who stayed away from Europe met with traffic mishaps. But the initial calculations by the economist seemed right.
In the present SARS scare thousands of people are forgoing vacations in China, Hong Kong, Toronto and other places where SARS has made its appearance. Given the relatively small numbers of those who have been felled by SARS, and given the statistical probability of meeting with traffic mishaps, it seems to me clear enough that this media-driven and highly selective scare is once again leading to some unrecorded disasters.
Not much can be done about it, of course. When people get scared, however unreasonable their fear may well be, they will take measures to protect themselves. However, their protection may lead to worse things than what they feared in the first place.
It would be nice, under the circumstances, if the media—primarily news organizations—would report the comparative hazards stemming from SARS versus those from other diseases and from the protective measures people are taking to avoid SARS. It should not be necessary for ordinary citizens, who rely so much on news organizations to inform them about what’s what, to become experts in this area. They should, instead, enjoy the services of their news reporters who should, in turn, dig deeper than superficial data that provides little more than grounds for panic rather than information that can help us make intelligent decisions. (For some official, government-provided information on SARS, visit http://www.cdc.gov/ncidod/sars/faq.htm.)
Rush Limbaugh's Fallacy
Tibor R. Machan
Every time I drive to school, I listen to Rush Limbaugh for about five minutes. It is cultural anthropology for me more than an interest in Rush's latest ridicule of Daschle & Co., although I sympathize with that.
Last time I tuned in he was trying very hard to explain away the fact that no weapons of mass destruction had yet been found in Iraq. His take is that there were other good reasons to go to war there, namely, the alleged connection between Hussein's regime and Al Qaeda, the terrorist network, and Hussein's sadistic dictatorship. So, Rush reasons, never mind the WMD--we had the authority to do the war even without them.
Why would Limbaugh try so hard now to rationalize the war with Iraq? Well, for one, he seems to be loyal to a fault to George W. Bush and wants to make sure the guy continues in office. Without the WMD, Bush may well be beaten up in the upcoming race for taking America to war without good reason. The sad fact for Rush is that those other reasons he gives for why Bush was justified in going to war are not good reasons, actually.
It is not the business of the United States Armed Forces to engage in retaliatory armed conflict against some country that is merely speculatively connected with the perpetrators of the September 11, 2001, terrorist attacks in the USA. Some kind of procedure is needed whereby within the rule of law the connection is firmly established and Iraq's regime is shown beyond a reasonable doubt to have supported Al Qaeda's terrorism. Just to point a finger and say, they had something to do with this simply isn't enough, not in a civilized society. Indeed, the thing that is supposed to differentiate terrorists from civilized warriors is that the former care nothing about the niceties of the rule of law--due process, burden of proof, rules of evidence and such.
What then about Saddam being a vicious dictator? Trouble is there are many such around the globe, have always been, and while everyone is authorized to try to assassinate these guys, morally speaking, armies of various countries owe it to their citizens to stay on their post and stand ready to defend them. They have no moral authority to gallivant about the globe and purge it of dictators--they already have a job of defending the rights of their citizens.
So Limbaugh's efforts to bail out Bush just won't wash. Bush knew better, too, which is why he insisted that the war with Iraq was first and foremost about the weapons. Why was that important?
Because in a free country the military must act defensively, never aggressively. Preemptive strikes are justified only if there is serious, demonstrably high probability that another country will attack. This is akin to the idea in the criminal law that if one acts against another because the other is about to act against oneself, it is excusable; otherwise it is aggression, nothing less. Even in the cases where battered woman's or wife's syndrome is invoked, the idea is that the man was certainly going to attack the woman, so she could only escape the attack by acting first.
Of course, the US military has acted in the past without the justification needed for preemptive attack. But those cases are far more difficult to square with the basic American idea that self-defense is the only justification for using force against other people. Humanitarianism is usually given as the justification for such cases of military interference. Does it suffice as such? May a country's military invade another country when that country's rulers oppress the bulk of the people there?
This is a big question and only a little space is left to deal with it. Suffice it to say that citizens of other countries could, as volunteers, be justified in coming to the aid of the oppressed but the armies are not since, well, their job is to defend their own citizens.
Wrong Take on Basic Human Rights
Tibor R. Machan
The University of California Press is one of the more prestigious university book publishers, so for one to get a manuscript accepted and published there, one must jump through many hoops. Manuscripts are usually sent out to peer scholars; if they like it, the editors take the MS to a board that authorizes issuing a contract for the book.
In certain fields of study this means pretty much that only books with a certain point of view will get the nod. Those of us looking for publishers can, thus, get a pretty good clue about whether we have a chance for one or another publisher by just looking at what they have published recently.
In light of this I should definitely not try to get the UC Press to take a look at any of my manuscripts that I would like to have published by a prestigious press, no sir. Why?
One of the books UC Press has just published is “Pathologies of Power: Health, Human Rights, and the New War on the Poor.” The author, Paul Farmer, is praised by Tracy Kidder, author of The Soul of a New Machine, for having produced “An eloquent plea for…human rights that would not neglect the most basic rights of all: food, shelter and health….” The foreword to the book is written by one of my favorite intellectual adversaries, Nobel Laureate Amartya Sen of Trinity College, Cambridge UK (soon to move to Harvard University to hold a most prestigious chair). All these have lined up giving this book their blessing. And that is too bad.
People wonder why the West—especially America and Great Britain—is seen in such a bad light by prominent folks around the globe! One clear reason is in evidence in the praise given to Paul Farmer’s book by Tracy Kidder and those who did the peer reviews and decided to publish the book—they seem not to have a clear idea about what is a fundamental, basic human right. One cannot repeat this often enough: no one has a basic right to food, shelter and health.
Just think of it for a moment. To have a basic right means all others are obligated to make sure it is not violated. With bona fide basic rights, such as those to one’s life and liberty, this poses no problem because all others need do to respect them is abstain from intruding on one’s life and liberty. You respect my right to my life by not killing me; the right to my liberty by not assaulting or kidnapping me; my right to private property by not robbing or stealing from me. You need do nothing for me, only abstain from becoming an intruder. Such basic rights have, thus, been dubbed “negative” rights.
Compare this with respecting a basic right to “food, shelter and health.” To do this one must actually work for others. The legal protection of these rights means, plain and simple, forced labor! If I have a right to food, those making food must provide me with food without compensation, just as I do not have to pay you if you do not murder, assault or rob me. So, such so called basic rights mean nothing less than the conscription of those who provide the goods to which we allegedly have basic rights (or the forcible taking of wealth from others so as to pay for the services). They have, thus, been named “positive” rights, requiring positive actions from others at the point of a gun.
Despite the evident forced labor implications of the “positive” rights idea, this kind of book gets to be published by a major American press, supported and endorsed by famous people. There should then be no great wonder about why the Western tradition of liberalism—the idea, as per the US Declaration of Independence, that public policy must aim first and foremost toward the securing of our rights to life, liberty and the pursuit of happiness—is so poorly defended by the intellectual community. Securing such rights is in direct conflict with securing those that author Paul Farmer insists are our basic human rights.
It is a constant source of puzzlement to me that major intellectuals in Western universities and their presses have so little understanding of or appreciation for what it is that makes the Western World the envy of the rest, namely, its more or less strict protection of the right to individual liberty. Once such a right is secured, the securing of food, shelter and health care becomes a task for us all, a task that seems to be carried out much more successfully in the largely liberal West than where those other alleged rights, requiring governments to regiment people to respect them, are supposed to be held as basic.
Monday, April 28, 2003
Why Islamists Detest America
Tibor R. Machan
Over the last several months there’s been a lot of consternation about why so many Muslims detest America. Why do they find the system of political economy associated with the USA so objectionable?
Put bluntly, their charge that America’s culture is “materialistic” is largely true, if by this they mean that people in America pay a good deal of attention to how well they can live, how much joy life can bring them—including when they go shopping.
Not that Americans do not believe in God or don’t embrace some religious faith, but they do not do so with the kind of utter and blind devotion leaders of the Islamic faith demand of Muslims. For most of these leaders the only government that is legitimate is one that demands of and forces its citizens to fully adhere to the Koran as they interpret it. Nothing else will do, and when America associates politically or economically with countries where this goes on or where Muslim leaders want it to go on, the leaders believe it corrupts those societies and leads them astray from the Koran, which for them is a disaster. So, they hate the country from which such influences emanate.
America, in contrast, rests on a classical liberal political tradition in which tolerance reigns supreme as a principle of human relationships. John Locke, the grandfather of the American system of government, was also preoccupied with figuring out how government and church should be related. Out of his and some others’ reflections the American founders took away a liberal theory of government, one that opposes any union of state and church, especially at the federal level but by now also in every state. This has spawned a great many religious denominations in the USA—one needs only to look at all the different churches in one’s own neighborhood to appreciate this.
Yet, Americans tend, in the main, to confine their religiosity to Sundays or the Sabbath while during the rest of the week they go about their personal and professional lives pretty much with little deep concern for how these square with their faiths. Just compare the amount of public prayer Muslims practice to that of Americans!
Moreover, Christianity has by now made relative peace with commerce and the “materialism”—I’d prefer calling it “naturalism”—Muslim leaders find so detestable. Christians see human beings as having a divided self, composed of spirit and of matter (soul and body), with both due some measure of care in one’s life. The two sides do not always interact happily, of course, but that hasn’t led to any great changes in American and other Western cultures.
Yes, commerce is often derided by writers, priests, ministers, intellectuals and the rest but this is often recognized as somewhat paradoxical if not altogether inconsistent—after all, most of those doing the deriding tend to be quite happy with the measure of material well being they have managed to achieve and few if any have taken any serious vows of poverty.
Finally, it is undeniable that a vigorous commercial culture tends to be directed to living well here on earth rather than to preparing for everlasting salvation. We may not be able to take it with us, but we do like it a lot—namely, material wealth—while we are dwelling here on earth. And that probably does distract many of us from focusing on what religious leaders consider our spiritual needs and obligations.
The question is whether the Muslim leaders are right: Is this freedom we enjoy in America and the West good for us all or are we becoming decadent, shallow and faithless as we enjoy our lives here on earth? I believe that without addressing this question we will always be vulnerable to the harangue of Muslim leaders (as well as others) and will keep being detested by many Muslim faithful across the globe. And some of this detestation will be deadly at times.
But then perhaps that is to be expected when one holds up as an ideal a sort of human life in which men and women are free and able not only to choose to do what is right but also what is wrong. Perhaps we ought to be more confident and firm in our belief that this is indeed how human beings ought to live. We ought also to stand up firmly in support of the system of politics and law that vigorously protects such a way of life. We should not hesitate to resist the aggression of those who find this so contemptible. They, after all, are quite mistaken in attempting to enforce by law the good life they demand of their faithful—simply no good can come from enforced goodness.
Indeed, if it is such a good life, why do they need all these laws to make people follow its principles?
Tibor R. Machan
Over the last several months there’s been a lot of consternation about why so many Muslims detest America. Why do they find the system of political economy associated with the USA so objectionable?
Put bluntly, their charge that America’s culture is “materialistic” is largely true, if by this they mean that people in America pay a good deal of attention to how well they can live, how much joy life can bring them—including when they go shopping.
Not that Americans do not believe in God or don’t embrace some religious faith but they do not do so with the kind of utter and blind devotion leaders of the Islamic faith demand of Muslims. For a most of these leaders the only government that is legitimate is one that demands of and forces its citizens to fully adhere to the Koran as they interpret it. Nothing else will do and when America associates politically or economically with countries where this goes on or where Muslim leaders want it to go on, the leaders believe it corrupts those societies, leads them astray from the Koran, which for them is a disaster. So, they hate the country from which such influences emanate.
America, in contrast, rests on a classical liberal political tradition in which tolerance reigns supreme as a principle of human relationships. John Locke, the grandfather of the American system of government, was also preoccupied with figuring out how government and church should be related. Out of his and some others’ reflections the American founders took away a liberal theory of government, one that opposes any union of state and church, especially at the federal level but by now also in every state. This has spawned a great many religious denominations in the USA—one needs only to look at all the different churches in one’s own neighborhood to appreciate this.
Yet, Americans tend, in the main, to confine their religiosity to Sundays or the Sabbath while during the rest of the week they go about their personal and professional lives pretty much with little deep concern for how these square with their faiths. Just compare the amount of public prayer Muslims practice to that of Americans!
Moreover, Christianity has by now made relative peace with commerce and the “materialism”—I’d prefer calling it “naturalism”—Muslim leaders find so detestable. Christians see human beings as having a divided self, composed of spirit and of matter (soul and body), with both due some measure of care in one’s life. The two sides do not always interact happily, of course, but that hasn’t led to any great changes in American and other Western cultures.
Yes, commerce is often derided by writers, priests, ministers, intellectuals and the rest, but this is often recognized as somewhat paradoxical if not altogether inconsistent—after all, most of those doing the deriding tend to be quite happy with the measure of material well-being they have managed to achieve, and few if any have taken serious vows of poverty.
Finally, it is undeniable that a vigorous commercial culture tends to be directed to living well here on earth rather than to preparing for everlasting salvation. We may not be able to take it with us, but we do like it a lot—namely, material wealth—while we are dwelling here on earth. And that probably does distract many of us from focusing on what religious leaders consider our spiritual needs and obligations.
The question is whether the Muslim leaders are right: Is this freedom we enjoy in America and the West good for us all or are we becoming decadent, shallow and faithless as we enjoy our lives here on earth? I believe that without addressing this question we will always be vulnerable to the harangue of Muslim leaders (as well as others) and will keep being detested by many Muslim faithful across the globe. And some of this detestation will be deadly at times.
But then perhaps that is to be expected when one holds up as an ideal a sort of human life in which men and women are free and able not only to choose to do what is right but also what is wrong. Perhaps we ought to be more confident and firm in our belief that this is indeed how human beings ought to live. We ought also to stand up firmly in support of the system of politics and law that vigorously protects such a way of life. We should not hesitate to resist the aggression of those who find this so contemptible. They, after all, are quite mistaken in attempting to enforce by law the good life they demand of their faithful—simply no good can come from enforced goodness.
Indeed, if it is such a good life, why do they need all these laws to make people follow its principles?
The Trap of Humanitarian Wars
Tibor R. Machan
In moral philosophy altruism (or humanitarianism) has two versions. Under one, everyone must think of and work for others first and what counts for this is up to the beneficiaries. In short, your help is what they consider to be help, not something objective one can know without their input. Under the other, one must still think of and work for others first but what counts for this is something knowable by anyone and could even conflict with what beneficiaries would like to have done for them. The first is subjective, the second objective altruism or humanitarianism.
In connection with domestic public policies one can see the distinction when government gives cash to welfare recipients, so they can get what they want for it, versus when it gives them cheese or food stamps, insisting the poor get what is really good for them whether they like it or not. Both run risks—the first may amount to throwing money away since the poor might squander it, the second may offend by being paternalistic.
When governments go to war for the sake of helping people in foreign countries, it is always a puzzle whether they ought to follow the subjective or the objective humanitarian policy. Should they just do for those who are in dire straits what they would like to have done for them, or should they provide what will actually do them some good? The former approach trusts the people, rightly or wrongly, to know what will benefit them; the latter trusts the invading forces to know this. This is a paradox of humanitarianism—to do good for others, they sometimes need to be treated as children and have this good imposed on them. Otherwise all the help may be for nothing because those receiving it will squander it.
Many in Iraq, for example, seem now to be happy to have gotten rid of Saddam Hussein’s dictatorship, but this doesn’t mean they want what the American leaders believe would be best for them, namely, a liberal democratic regime. Rather, massive rallies have been held insisting that Iraq should become an Islamic country, run by Muslim clerics and other leaders. While this may indeed be more popular there than Saddam Hussein had been, it would be pretty harsh on the many minorities whose members do not embrace the Islamic faith, or not, at least, the version favored by the majority.
The impending democracy in Iraq would then most likely be illiberal, not liberal. That is to say, those who do not share the faith of the majority would not have constitutional protection against being bullied by the majority. It’d be as if, say, the Jehovah’s Witnesses or some other evangelical faith became the majority in America and could impose its religious practices on everyone else. Instead, as things stand, they must try to persuade people, and if they are sent on their way, they must leave.
In fact, in a just society it would never be tolerated to have morality or religion forcibly imposed, apart from the minimum protection of everyone’s basic rights. That much is required so that everyone has the chance to choose whether to do this or that, including whether to embrace this or that faith. The rest is entirely a matter of voluntary choice, otherwise it doesn’t count for much at all. Doing what is right, following a religion, because of threats from others, especially government, doesn’t count as doing what is right or following a religion at all.
Humanitarian or altruistic intervention is thus paradoxical. It aims to do good for others, especially political good, but then it must treat these others as if they were children who couldn’t be trusted with deciding how they should act. Yet, if a country’s leaders have decided to tax their own people billions and billions so as to provide real help to the people of other countries, and those people don’t want this help but want to do what is politically wrong, how is one to proceed?
Perhaps the lesson to be gleaned here is that humanitarian wars are wrong, period. The billions of dollars citizens of one country pay to keep a standing military should not be wasted on tasks that are hopeless. Americans should not be required to make the effort to help people who may not even want our help, or only want it to do something not much better than that from which they got liberated.
It isn’t as if Iraqis were incapable of taking part in a liberal democratic political order, but the large majority of them may not want to do so, even if that’s wrong. American government officials should make up their minds—will they fight humanitarian wars that get them into the mess of having to impose the right system on unwilling people abroad, or will they confine themselves to fighting to defend the people they are supposed to serve?
If the latter, then the only thing that made the war in Iraq just is that Saddam Hussein was very likely to unleash weapons of mass destruction against US citizens and their allies. OK, so he cannot do this any longer. Thus now the US military needs to leave and not play daddy or nanny to the Iraqis.
Why we are so Different
Tibor R. Machan
When I speak of America’s culture and political system, I have in mind what distinguishes these from the rest of the world’s and from much of human history’s cultures. There is, of course, a lot here that is no different from everywhere else, some great, some OK, and some pretty bad.
But what America has more of than most other places is human liberty. Sure, not all have it in sufficient abundance. Other countries actually have more in certain areas—e.g., in much of Europe you are free to smoke and use drugs, and to leave stores open late at night. All in all, however, there is much more freedom in America than elsewhere.
This is vital because freedom is a prerequisite of morality, of acting ethically—people aren’t morally good when they are forced to behave well, however eager some are to make us all good. It is simply an impossible task.
Also, freedom is necessary for our individuality to flourish. In many societies and periods of history the reigning idea is “one size fits all.” Even the greatest thinkers have made this terrible mistake of thinking that one kind of life is best—even healthy—for everyone. It is from this that we got communism, fascism, totalitarianism and other regimes where the objective has been and is to make everyone conform to one vision of human excellence. But no such vision can possibly work because we are unique in the living world in being essentially individuals. Yes, we are social beings, too, but this side of us may not violate our individuality if our human nature is to be respected, honored.
What I am saying here is actually not tough to prove. Just look around you and notice how many decent people are quite different. Some are adventurous, some not; some are loners, some are gregarious; some are introverted and some extroverted—the list could go on and on. Our goals, talents, tastes, and personalities are highly varied, yet oh so human. This is what individualism acknowledges—that we matter as individuals, not as parts of some greater whole. No one can be replaced as the individual who he or she is, and we all know this at least implicitly.
Now in America this is more or less consistently understood. And the price we pay for it is that we realize that what others do, for better or for worse, is something over which they are to have the final say however much it may displease the rest of us. The great cost of individualism is also its great benefit: an enormous variety of ways to live both well and badly.
In America this idea is pretty much accepted, at least at the gut level, even while many people bellyache about it endlessly. All sorts of pressure groups want to have everyone conform to their agendas, to their priorities, yet even as they do this they pretty much accept individualism in many areas of their lives. Such are the contradictions of our culture.
Those of other cultures, however, tend to be more severe. In most places the individualist idea hasn’t sunk in, despite the evidence for it all around. The major source of all the diversity across the globe is nothing other than that people are individuals, apart from whatever else they may be. They have given rise to innumerable varieties of practices, traditions, philosophies, religions, styles of art, special sciences, and customs of food and dress.
What makes America quite irksome to many is that it was designed to accommodate a great deal of human variety; so, it cannot in all honesty offer any kind of utopian, one-size-fits-all vision of social life. With all this variety there is little hope for getting all people to march to the same drummer, to follow the lead of just one guru—or even just one variety of fitness trainer.
And that cannot but annoy those around the globe who want to continue to rule people along such lines.
SARS, Quarantine and Liberty
Tibor R. Machan
Let us assume here that SARS is a contagious disease that can be identified as such by doctors, including ones screening people who arrive from foreign shores. Would it be proper for the government to quarantine such folks?
If the disease is a serious health hazard, quarantine by legal authorities would be proper. Why?
It is the proper task of government to secure citizens’ rights. If someone with a contagious disease chooses to mingle with others who aren’t aware of this person’s disease, this person is very likely about to inflict a serious health hazard on these innocent citizens who haven’t chosen to mingle with the diseased person. So, the authorities entrusted with the job of securing our rights then have the responsibility to keep such people out of circulation.
Of course, whether SARS is such a serious disease is not something I know for sure and so I must put the matter in hypothetical form. If the disease is serious—not merely someone with a bad sneeze who may transmit a slight cold to others with whom contact will be unavoidable—then if this person intends to mingle with others, this person will be intent on embarking on criminal behavior—on assaulting others with his or her disease. No one has the right to do that to other persons who haven’t been forewarned and who have no choice about remaining in the vicinity of the diseased person.
None of this deals fully with the SARS phenomenon. There are, to the best of my knowledge, many others who carry contagious diseases other than SARS. Thousands of persons with, for example, influenza travel freely about the globe without anyone going into a panic about it. And, yes, influenza can kill! Arguably, then, SARS is something of a media-driven scare, not a truly serious hazard compared to others afoot in various parts of the world.
The phenomenon reminds me of the time when thousands of people canceled trips to Europe after the USA bombed Libya back in April 1986 and there was fear of terrorism because of the bombing. One clever economist did some calculations and found that by remaining home, the chances for serious injury and even death for those who canceled their trips increased because of traffic hazards they would face when driving around on US soil. In contrast, flying to Europe and taking a train or a tour bus to various parts would have meant minimal danger to the tourists. No one, to my knowledge, has done a follow-up study on just how many of those who stayed away from Europe met with traffic mishaps. But the initial calculations by the economist seemed right.
In the present SARS scare thousands of people are forgoing vacations in China, Hong Kong, Toronto and other places where SARS has made its appearance. Given the relatively small numbers of those who have been felled by SARS, and given the statistical probability of meeting with traffic mishaps, it seems to me clear enough that this media-driven and highly selective scare is once again leading to some unrecorded disasters.
Not much can be done about it, of course. When people get scared, however unreasonable their fear may well be, they will take measures to protect themselves. However, their protection may lead to worse things than what they feared in the first place.
It would be nice, under the circumstances, if the media—primarily news organizations—would report the comparative hazards stemming from SARS versus from other diseases and from the protective measures people are taking to avoid SARS. It should not be necessary for ordinary citizens, who rely so much on news organizations to inform them about what’s what, to become experts in this area. They should, instead, enjoy the services of their news reporters who should, in turn, dig deeper than superficial data that provides little more than grounds for panic rather than information that can help us make intelligent decisions. (For some official, government provided information on SARS, visit http://www.cdc.gov/ncidod/sars/faq.htm.)
Sunday, March 30, 2003
Revisiting Liberation
Tibor R. Machan
One side says, "It’s Liberation," the other says, "It’s Imperialism." Well, mightn't it be both?
During the heyday of the Soviet Union, its armies were always going about liberating places and people. When Nicaragua was run by the tyrannical regime taking its orders from the USSR, back in the ’80s, its leaders spoke incessantly about liberating the people there even when this involved forcibly imposing on them innumerable measures they resisted.
Even in ordinary human relationships, say between friends, it is often thought that imposing certain strictures on someone really frees the person, so all the complaints are misplaced. Just think of the policy of intervention recommended to friends and family of drug abusers! You coerce so as to set free! Or so the story is told, and when it comes to the War in Iraq this can cause confusion for people.
The fact is that although within a given context the term “freedom” or “liberty” can be clear enough, there are several general definitions of it that actually conflict. In one sense, for example, the intervention by friends of a drug abuser amounts to depriving the latter of liberty. That is the liberty we have in mind meaning acting on one’s own judgment, following one’s own choices, determining one’s own actions whatever they may be. Those doing the intervention are, so understood, depriving someone of liberty, of his or her freedom. But if one focuses on the goal of the intervention, well the story changes because forcing someone to stop abusing drugs can free that person to do many far better things.
And if one thinks that millions of people are like the drug abuser, carrying on with a way of life that hinders true progress, true flourishing, then perhaps one believes, also, that they need the kind of liberation that will enable them to do what they should, what will benefit them. That is just how the Soviets saw it when they “liberated” the Czechs and Hungarians and all the rest by invading their countries and occupying and nearly micromanaging them. They were freeing the people of their ignorant way of life. The same goes for the leaders of Nicaragua.
So, then, what is one to think about the liberation of Iraq? It’s a mixed bag, that one.
On the one hand the rhetoric is about the freedom that involves getting rid of other people trying to run one’s life. This is what George Bush is saying when he refers to how after the war the people of Iraq will be free. The USA will have liberated them from the clutches of Saddam Hussein. On the other hand, though, many think the USA wants to control Iraq, run it to conform to its own priorities (such as the production of cheap oil), in which case the liberation is akin to the sort the Soviets perfected. Even many Americans, such as entertainer Bill Maher, tend to believe that people in Iraq just aren’t up to running their own lives, incapable of democratic self-government. Their culture hasn’t prepared them for this; their religion is too much of a yoke around their necks. Thus, maybe unintentionally, they support intervention-type liberation and support those who think Iraqis need Americans and Brits teaching them proper politics.
It is important to know which sort of liberation is in fact going on in Iraq. And that’s not easy to do, since when a policy is as controversial as this one, those doing the arguing tend to load their terms and not always let us in on just what they mean by them. Those opposed to USA policy in Iraq have a stake in characterizing it as interventionist liberation, those for it just the opposite. And some obfuscate matters unintentionally.
We are left with the task of scrutinizing not just their terms but, often, their motives, which are awfully difficult to know for sure.
Ads in Movies
Tibor R. Machan
During the last few years I’ve gone to rather few movie theaters to see a movie, watching instead the offerings of network and cable television when I had in mind to relax a bit, to escape. Certainly it’s cost-effective to do this and, in any case, I can wait. But lately a friend and I have returned to the cinema. This reminded me why I had decided to stay away for so long and why I grew reluctant to see movies in theaters.
There are now five or six advertisements before the previews even begin, if you get there on time; basically, it is like watching network TV.
Only network TV costs nothing. You “pay” by watching the ads, although of course at home one can get other things done during the commercials. I recall back in 1969, when I visited London, watching a bit of the BBC and noticing why I didn’t mind commercials so much—they made time for a visit to, well, you know what; or for a bit of cleaning in the kitchen or taking out the trash. Sitting and watching uninterrupted “commercial-free” TV for three hours wasn’t so good for my restless spirit.
Now there is TiVo, which allows one to pause programs, to backtrack if one has missed a word or two of the dialogue, and to fast-forward if one is bored with some section of a program.
So with commercials blasting at one in theaters, it is no wonder that I hardly ever see them more than 20 percent filled with customers. If it were not for special reasons, I would never go—who wants to pay serious bucks for a movie only to have ads blaring from the screen? And these are huge ads, and escaping them means retreating to the lobby.
I have friends among academic economists who, I am sure, will inform me that, “This must be the most efficient way to manage movie theaters because, well, that is how they are being managed.” (The Nobel Laureate economist, the late George Stigler, used to admit that this view was the result of the widespread belief among economists that everything that people do is indeed the most efficient way of doing it; so, we do live in the best of all possible worlds, after all, just as the German philosopher G. W. Leibniz had believed back in the 17th century!)
Actually, this idea of efficiency is what we call tautological—it is a redundancy, because nothing could be other than efficient under it. If people are lazy, then laziness is important to them; if they waste time, ditto for that; and if they commit crimes, well, then that’s what’s efficient for them. Nothing happens unless it is efficient—or so my economist friends will contend.
Not really. People forget to think things through, fail to pay attention, and miss some much better, even more efficient ways of doing things. And I suspect that’s what is going on here, with movie theaters trying the double-jeopardy option: get people to pay and also make them pay by having to watch ads. Maybe it worked for them for a while, but now it looks to me as though people are getting wise to the double-dipping strategy and are refusing to play along.
I suspect that if they dropped those ads, they would get more loyalty from their customers. But then I haven’t done a formal study, so perhaps I am quite wrong here. Still, the notion is plausible enough, and perhaps some movie managers will consider it, and then we may gradually get rid of this annoying feature of going out to see a flick now and then, namely, being hit up with advertisements despite having actually paid to see the movie. Just perhaps!
Thursday, March 27, 2003
War is the Norm
By TIBOR R. MACHAN
Freedom News Service
Most Americans haven't known war, which is very much to the good. I envy them. I envy my own children who have lived in a relatively free and largely peaceful society. I so much wish they could continue to do so.
Yet, it behooves me to remind folks that this isn't by any means the norm. Human history has been replete with armed conflict. Even as recently as the middle of the 20th century, there was a terrible war and it only seems different for most Americans because even when they lost loved ones in the Pacific or European theater, nothing much happened to disturb domestic tranquility. The skies of American cities didn't rain bombs; American homes, be they apartments or houses, weren't turned into rubble; blood didn't flow on American streets, and no one needed to tell children not to pick up anything because it might be a booby trap.
No, this doesn't mean that many Americans didn't experience the consequences of war. But not quite the way the rest of the world did in nearly every epoch. I, for example, was born six months prior to the outbreak of World War II and spent my first five years in a city, Budapest, completely besieged by ferocious armed conflict. Hungary was an ally of Hitler, and the Soviets - and in some rare cases even the Americans - waged a just war against the country, good and hard.
I remember night after night having to rush to the basement of my mother
Friday, March 07, 2003
I found this observation by Susan Haack quite sound and apt. TRM
Many times in the last year we have heard: "Our values are under threat." They are; and we should--we must--defend them. But not because they are ours; for that really would be a regression to the dark side of human nature. If we take this thought to heart, we shall not, as we should not, fear that in defending them we may be guilty of a kind of cultural imperialism. And we will appreciate that, in the deepest sense, the values at stake are not "ours"--not peculiarly American, English, French, or even Western, but human: values, that is, with the capacity to enhance human flourishing, and to appeal emotionally to humans everywhere. [Professor Susan Haack (U. of Miami), "9/11/02," Free Inquiry, Winter 2002/03, p. 12.]
Inheritance, Capitalism & Freedom
Tibor R. Machan
Few things invigorate critics of free market capitalism as much as inherited wealth. Quite a few defenders of this system tend to stress its supposed reward of hard work, ingenuity, industriousness, thrift and diligence – all virtues one can hardly argue with and which, if one practices them, seem to justify holding on to the often resulting wealth. So, critics focus on inheritance, a species of good luck, which those who benefit from it cannot easily be said to deserve.
And it is true enough – notoriously, many beneficiaries of inherited wealth seem to be quite undeserving. They often waste away their inheritance rather rapidly; if not, they do nothing much creative or productive with it; often they spend it on projects that turn out to be out-and-out hostile to the very system that made the wealth possible in the first place (just take notice of the many rich kids of industrialists who decided to fund collectivist think tanks, magazines, and activism). So, if this result can be associated with free market capitalism, how could any right-minded person defend the system?
The guilt-by-association ploy does seem to work, because even among the most sophisticated critics of capitalism the ultimate ammunition is the view that even those who practice diligence, thrift, industry and other virtues merely inherited their traits of character and thus do not deserve the rewards after all, contrary to what common sense would suggest.
Now there is a very serious confusion afoot in all this, and once it is noted it should disabuse critics of the idea that inherited wealth and its misuses amount to any liability for freedom and capitalism.
To begin with, we do in fact inherit many of our assets and do not earn them – our good looks and health, if we have them; our talents; even much of what constitutes our personality, something that often helps us make our way to a certain measure of success in our lives. And in each of these cases we can either build on what we have inherited or waste it away good and hard. But none of that makes these assets anyone else’s to take away from us! That would be enslavement, actually, or at least expropriation.
The point is that we all come into life with some assets and some liabilities. That we do or do not have these is something over which we have no control. However, once we find ourselves with them, they are up to us to handle. Inherited wealth is among such assets, yes, and how we make use of it will be our test of character (which, contrary to what some claim, is not inherited but the result of cultivation, attention, self-discipline and thus very much the source of just deserts). So are talent, and beauty and good health, as well as their opposites.
Now, where the free society, with its corresponding free market economy, comes into the picture is in enabling us to handle these to the best of our ability and willingness. It is only in a free society that the moral fiber of human beings can be effectuated, made to count for something. So, yes, your parents left you with something very valuable – an estate, a business, a bunch of stocks and bonds or cash. But whether you do right or wrong by these is in your hands in a bona fide free society. And that is true even if you were just born pretty or witty or otherwise appealing to the rest of us, so that we will throw money at you to gain your services on magazine covers or in comedy clubs. There is, in other words, no end of uneven starts in life – that’s why it is utterly silly to complain that life’s unfair. That is just the way it is, much like the weather.
Luck, in the way of good looks, talents, inheritance and the rest is one of the factors with which life confronts us and we then are tested by how we handle it all. One may hope that those who botch up their good fortune will learn and if not will suffer properly. It would be wrong, however, to sic the government on those who were chosen either by their ancestors or genes or some other factors not under their control to benefit at the starting point.
One more point about inheritance. Unlike good looks and health, inheritance often comes with conditions. You get to enjoy your parents’ estate provided you carry out some of their wishes – support wildlife preservation or the local little league or a fine political cause. One sign of a lack of good grace is when those who inherit wealth with such strings initially accept it all but then try to weasel out of the commitment and treat it as if they had earned it on their own, free and clear. That again is a character tester – and again one can only hope that others will make careful note of it and the deed will go unrewarded in the end.
To some extent all assets that aren’t earned come with certain provisos. Someone with a great voice might like to have inherited the physique of an athlete, instead. And those with such a fate may wish to fight it, too. Alas, they will usually fail – had they only been content with what they were born with, their happiness, besides their good fortune, might also have been enhanced.
“It’s worth it to me”
[March 2, 2003]
Tibor R. Machan
My gym accommodates a whole lot of different people with umpteen different goals, judging at least by what they look like and do while spending time there. On a recent visit, after my perfunctory – though helpful enough – workout (a row, a bike ride, a walk-run and a swim, as well as some pulling and pushing and the rest, all with great reluctance), I was about to shower and head home when an incredibly well-built guy started to look himself over once or twice in front of the mirror right by my locker. I stood by silently as the fellow admired himself, probably checking for the latest improvements on his finely sculptured black physique, but then I decided to ask, “Is it all worth it?” After just a slight pause I got the reply, “It is to me.”
This short, pithy, and simple response brought much delight to me, I must admit. It was a perfect way to give notice that some things can be of value to one person that would not be to another, even without it being the case that it is only of value because one says so.
One of the most difficult things human beings have puzzled about throughout the history of recorded thought is whether values are subjective or objective. Is something valuable simply because someone so regards it, or is it valuable for good reasons? Philosophers have gone back and forth about this forever and are continuing to do so – any introductory philosophy, ethics or aesthetics course can tell us that much.
One appeal of the subjectivist position is that it makes room for a lot of different values, for a lot of different folks in a lot of different times and places; the liability of the view is that no one can ever tell whether anything is valuable except from someone’s claiming that it is, and conflicting opinions abound with no hope of ever being settled peacefully, through rational discussion.
In turn, the appeal of the objectivist position is that there can be reasons for making value judgments, good ones and not so good, and decisions could be reached among people who disagree by considering those reasons. The problem is that objectivism has often been seen as implying that what one person finds good, another must as well. But that is too often implausible.
The problem has been, I figure, that things were thought to be valuable to humanity at large, not to individual human beings. Individuals, however, possess features they share with all other humans, some with a large group, some with a small group, some with just one or two others, and perhaps some with no one else. Now the corresponding values are all objective – one can be wrong about them, but not because they are values for just some people or even for one person.
Just think of clothing or medical care: there are general things about each that are good for us all, and then things get more and more complicated, so that some items of clothing may suit none but one person, just as some medical treatments work only for one patient and for no one else. It is not a matter of opinion, though, whether they are suitable or proper, even if they are a matter of individual traits and attributes.
My fellow gym member probably has a very different life from mine, with different talents and attributes that can be factors in deciding how he should live, what goals are proper for him to pursue. I do not know him at all, so I wouldn’t know. But it felt very good to encounter someone who had confidence in doing what he was doing even when it was clear that it wasn’t at all what most of the rest of us were doing, even there at my gym.
He seemed to know who he was and what his values were, and that is inspiring.