Monday, November 30, 2009

Two quick notes on epistemology/philosophy of mind

1. Steven Landsburg has come out with a book that seems rather promising. There is a blog too.

There is this one post in which he takes Richard Dawkins to task over the latter's insistence that evolution by itself destroys the idea of a God. I don't care much about the post itself, but one paragraph caught my eye.
That, however, is just wrong. It is not true that all complex things emerge by gradual degrees from simpler beginnings. In fact, the most complex thing I’m aware of is the system of natural numbers (0,1,2,3, and all the rest of them) together with the laws of arithmetic. That system did not emerge, by gradual degrees, from simpler beginnings.

If you doubt the complexity of the natural numbers, take note that you can use just a small part of them to encode the entire human genome. That makes the natural numbers more complex than human life. Unless, of course, human beings contain an uncodable essence, like an immortal soul—but I’m guessing that’s not the road Dawkins wants to take. (emphasis mine)

Has there ever, ever been a case where this phrase has been more applicable? A small part of the set of natural numbers can be used to represent the genome, not to encode it in the sense in which gene encoding is commonly understood. Protein molecules cannot be created out of natural numbers.

Landsburg's definition of the genome's complexity in terms of encoding it is essentially an information-theoretic one. The main idea here is that of Kolmogorov complexity. It is the right approach, of course, but it is essential to remember just what the base information is. The genome itself is not a finite, non-random sequence of A, C, T and G; only its representation is.
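To make concrete what "encoding" means here (this is my own illustration, not Landsburg's), note that any finite string over the four bases corresponds to exactly one natural number and back, simply by reading it as a base-4 numeral. That is all the claim amounts to: a correspondence between symbol strings and numbers, which says nothing about the physical molecule.

```python
# Sketch: a bijection between finite ACGT strings and natural numbers.
# This encodes only the *representation* of a genome, not the genome itself.

BASES = "ACGT"

def dna_to_number(seq: str) -> int:
    """Read an ACGT string as a base-4 numeral, with a leading 1
    so that leading 'A's (digit 0) are not lost."""
    n = 1
    for base in seq:
        n = n * 4 + BASES.index(base)
    return n

def number_to_dna(n: int) -> str:
    """Invert the encoding: peel off base-4 digits down to the sentinel 1."""
    digits = []
    while n > 1:
        digits.append(BASES[n % 4])
        n //= 4
    return "".join(reversed(digits))

print(dna_to_number("GATTACA"))                   # 25540
print(number_to_dna(dna_to_number("GATTACA")))    # GATTACA
```

The leading-1 sentinel is just one way to make the map invertible; any prefix-free scheme would do.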

Of course, there is a more fundamental, though more subtle, issue. Kolmogorov complexity presupposes arithmetic, which presupposes the set of natural numbers. To talk of the complexity of the set of natural numbers itself is therefore completely meaningless. It is like saying that the alphabet of the English language is more complex than any arbitrarily complex philosophical idea (e.g. 'the map is not the territory') because the idea can be expressed using a small, finite and non-random subset of the alphabet.

Does this mean that I'm opposing Godelian platonism by comparing a mathematical set to a human construct like an alphabet? Not really. I hold an information-theoretic view of the universe. Math is the closest we get to a system that captures the fundamental notion of information.

2. On EconLog, Bryan Caplan responds to Robin Hanson in the latest Caplan-Hanson debate (on philosophy of mind, this time). He links to an old paper of his on why he is a dualist (mind and matter/body are different; mental phenomena cannot be reduced to physical ones) and not a monist/reductionist (all things are made of matter; there is no mind beyond the body; mental phenomena are completely reducible to physical ones). He refers to John Searle approvingly before critiquing him. That's for later though.

What is striking is that Caplan claims to be neither a substance dualist (the mind is a substance separate from the body) nor a property dualist (the mind is an emergent property of the brain that is not reducible to anything simpler). Substance dualism is rather naive and uninformed by anatomy and neuroscience. Property dualism, however, is an extremely attractive position to adopt if you're going to be a dualist. Why then does Caplan reject it? He explains -

"Neither is my view property dualism; for the essence of a property is that it could not even be conceived as existing apart from something else. For example, "whiteness" could not even be imagined to exist all by itself; the reason is that it is a property, not an independent thing. But we can conceive of the mind all by itself; hence it is not a property. "
It is an old paper, so maybe Caplan has not reviewed it in quite some time, but it is rather unbelievable that he ever used, or convinced himself of, the trickery of 'whiteness' (as opposed to white) and the ambiguity of 'conceive' to dismiss property dualism.
I can absolutely conceive of 'white' by itself. You can argue that this is false, and that my conception of white will be a white wall, a white board, white cloth, brilliant sunlight or something physical that possesses the property of 'whiteness'. You will be right. But then, you can't genuinely believe that you can conceive of a mind independently. You can only conceive of mental phenomena - desire, choice, anger, pain. And even then, you can only conceive of how you and other sentient beings perceive these phenomena, or of how they are transient properties of sentient beings. And there's no reason to assume that something physical (the brain) cannot possess the properties implied by mental phenomena. Assuming that it cannot is, in fact, the very premise of substance dualism.
Which is to say, Caplan rules property dualism out either by naivety or by assumption, in very simple and direct language that makes the mistake rather obvious.

Wednesday, November 04, 2009

On Rand and Ethics

At Marginal Revolution, Alex Tabarrok has a post on Ayn Rand's relevance and her philosophy, prompted by the recent flurry of activity around the release of two new biographies of her. He links to almost five-year-old pieces on her by himself, Tyler Cowen and Bryan Caplan.

In his 2005 post, Tabarrok argues that Rand is a lucid, prominent and the first modern proponent of virtue ethics. Now I had no clue what virtue ethics was, so I decided to look it up. The Wikipedia entry for virtue ethics says this:
Virtue theory is an approach to ethics which emphasizes the character of the moral agent, rather than rules or consequences, as the key element of ethical thinking. This contrasts with consequentialism, which holds that the consequences of a particular act form the basis for any valid moral judgment about that action, and deontology, which derives rightness or wrongness from the character of the act itself rather than the outcomes
Here is the Wikipedia article, which I believe is rather muddled. A clearer and crisper explanation of the differences between the three main strands of normative ethics (systems of distinguishing right from wrong: deontology, consequentialism and virtue ethics) can be found in this sentence from the Wikipedia article on deontological ethics:
Deontological ethics or deontology (from Greek δέον, deon, "obligation, duty"; and -λογία, -logia) is an approach to ethics that determines goodness or rightness from examining acts, rather than the consequences of the act as in consequentialism, or the intentions of the person doing the act as in virtue ethics.
This is a very good summary of the critical difference, and it enables further analysis. A lot of people will consider an 'It was never my intention to hurt you, I just ended up doing something stupid' explanation as legitimate if it comes from a romantic partner or a child. It is difficult for a legal system to adopt such a view in a case of, say, drunken driving. (Most legal systems actually hold such a view partially - we will come to that a little later.) To the extent that you can only infer a person's motives and moral character from their actions, it seems that either virtue ethics is completely subsumed under deontology or the distinction is too subtle for the gross sieve of my comprehension. In any case, though intention and action can be separated, repeated action is the surest indicator of intention that there is - you may not forgive the same mistake thrice even in a partner or a child.

I have never found deontology particularly illuminating beyond the insight that every system of normative ethics will require certain axioms. Deontology's way of distinguishing right from wrong is to check whether the act conforms to a certain set of pre-decided rules, the moral axioms. It is easy to see that if you allow these rules to be long enough, complex enough and democratic enough, deontology collapses into consequentialism. Kant, for example, is led to conclude that lying is always bad only because he insists on working with the simple and singular predicate of 'lying'. If you allow for more complex and more numerous predicates, it is easy to accommodate lying in those situations where it creates more favourable social outcomes, without explicitly mentioning that you are trying to improve the net social outcome. If you do not allow for such predicates, you are asking reality to conform to your thoughts, which is not a useful way to approach a theory of practical reason. Ironically, with simpler and fewer predicates, a functioning large-scale deontological system will necessitate a much larger, more confused and less universal set of moral axioms.

Drunken driving and injury caused under the influence of alcohol is an interesting illustration. The chain of reasoning that outlaws drunken driving appears to me to be this:

1) It is wrong to cause death or injury to other people.
2) Drunkenness leads to significantly reduced motor control, which is very likely to cause uncontrolled driving.
3) Uncontrolled driving is likely to cause death or injury to other people on the road for no fault of theirs.
4) Hence, drunken driving should be illegal.

A 'pure' deontological framework will find 'injury caused' a difficult concept to punish, as it is a consequence and not an act itself. Thus, in a deontological framework, all instances of drunken driving will appear to you as equal evils. You will not be able to engage in the kind of probabilistic reasoning I mention above, as it is fundamentally consequentialist, resting on the one simple moral axiom of 'don't cause death or injury to other people'.
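The contrast can be put in a toy sketch (entirely my own, with made-up numbers): the deontological judge only looks up whether the act violates a fixed rule, while the consequentialist judge weighs expected harm, and so can distinguish degrees of risk that the rule-based view treats identically.

```python
# Toy contrast between the two frameworks discussed above.
# The rules and numbers are hypothetical illustrations only.

RULES = {"drunken driving": "forbidden"}

def deontological_verdict(act: str) -> str:
    """Every instance of a forbidden act is an equal evil; nothing else matters."""
    return RULES.get(act, "permitted")

def consequentialist_verdict(p_injury: float, harm: float,
                             threshold: float = 1.0) -> str:
    """Weigh expected harm: probability of injury times its severity."""
    return "forbidden" if p_injury * harm > threshold else "permitted"

print(deontological_verdict("drunken driving"))   # forbidden, regardless of degree
print(consequentialist_verdict(0.30, 10))         # forbidden (high expected harm)
print(consequentialist_verdict(0.05, 10))         # permitted (low expected harm)
```

The point is only structural: the first function cannot even express the distinction the second one draws.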

In fact, since the intention/moral character and the consequence of an act are mutually exclusive and completely characterize the act, consequentialism and virtue ethics are better compared with each other than either with deontology. Most functioning legal systems require the establishment of both actus reus (that the act happened) and mens rea (the intention behind the act) to establish a crime. A distinction is made between drunken driving that caused injury and drunken driving that didn't, and between murder (intended, planned homicide) and the common 'culpable homicide not amounting to murder' used if a car runs over and kills someone when the driver was inebriated. Thus, we see a combination of consequentialism and virtue ethics pervading most modern legal systems.

As Tabarrok says, Rand's ideal of the virtuous man is rational, independent and productive. The insertion of rationality is a brilliant stroke to achieve consistency within virtue ethics without sacrificing common wisdom, for it removes the dichotomy between evil acts and evil intents. Competence as morality brings morality closer to a folk reality. If you cause me hurt by being stupid rather than malicious, you are still in the wrong. This makes it possible to scale Rand's system up to the social level - otherwise it would remain stunted as a set of personal ethics (e.g. "yes, Salman Khan ran over and caused the death of n pavement dwellers, but he didn't want to do that any more than you or I did, so I can still think he is not a bad person and like him").

Till this point, Rand seems not only correct but also very original. However, while the moral axioms of rationality, independence and productivity are in consonance when evaluating the individual, they are less helpful in guiding a social framework. First, notice that of the three, only independence truly scales up. As Caplan points out, Rand thought that a large number of people were fundamentally irrational and led themselves to near-slavery (by supporting socialism when a 'superior, more rational, more moral' brain could see that it was doomed). She surely wouldn't want these people to be punished for being thus incompetent and immoral (beyond the market forces). It's virtuous to be rational and productive, acceptable to not be so competent, and evil to impinge upon somebody else's attempt to be so. And one doesn't have the moral obligation to try and make someone else rational and productive. How different is this from the concept of negative liberty, an idea that has had currency at least since the time of Thomas Hobbes?

Therefore, I believe Tabarrok missed the point when he asserted that Rand's moral defence of capitalism on the basis of selfishness is spot on, for a much more satisfying defence is the simple and older idea of negative liberty. Rand was also neither unique nor the first in requiring that there be a moral defence of capitalism - this had already been done explicitly by Frank Knight, and implicitly by others who identified as classical liberals. Tabarrok was also grossly unjust to every single political philosopher of modern times when he said that "the modern literature lags behind Rand in connecting ethics and politics". All political philosophers are first moral philosophers. It is impossible to create a coherent governance framework without first privileging certain social conditions and outcomes as moral 'goods'. Political philosophers don't make their ethics explicit because they are usually building upon or deconstructing the politics (and hence the ethics) of older political philosophers. Isn't the Hobbesian Leviathan predicated upon the assumption that human life should be neither nasty nor brutish nor short?

It is because of all this that I tend to judge Rand's moral philosophy rather unfavourably. Her insistence on selfishness and egoism, as opposed to mere negative liberty, opens holes in what is otherwise a sound philosophy. Her insistence on evaluating the individual and then creating a society that best aids the most perfect individual lays her philosophy open to the charge of ignoring social reality and of not being universal enough to merit discussion as a political system. If you respect the political equivalence of all individuals (everyone must have the same rights as I do), it is disingenuous and contradictory to arrive at that set of rights based upon your own preferences and ideals without even considering the preferences and ideals of those whom you want to grant those rights. It is difficult to see what Rand had to offer to any scaled-up political discourse that Thomas Hobbes, John Locke or John Stuart Mill did not.

And it is the intuition that democratic rights be democratic in nature that makes consequentialist and utilitarian ideas so attractive. You don't need a personal ideal of the perfect man as independent, productive and rational to arrive at a political system. You only need the intuition that the average man desires to be free, to be materially prosperous and to be able to make better decisions, to arrive at the notion of a social utility function. It is only a small step from there to the idea that a political economy and a legal system that make decisions maximizing social utility constitute the ideal system. And a few more small steps of mathematical reasoning will take you to Pareto optimality and Kaldor-Hicks efficiency.
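One of those small steps can be sketched directly (my own illustration, with hypothetical utility numbers): given each outcome's utility for each person, an outcome is Pareto-optimal if no other outcome makes someone better off without making anyone worse off.

```python
# Sketch of Pareto optimality over a finite set of outcomes.
# Each outcome is a tuple of utilities, one entry per person.

def dominates(a, b):
    """True if utility vector a Pareto-dominates b: at least as good
    for everyone, strictly better for someone."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_optimal(outcomes):
    """Return the outcomes not dominated by any other outcome."""
    return [o for i, o in enumerate(outcomes)
            if not any(dominates(other, o)
                       for j, other in enumerate(outcomes) if j != i)]

# Hypothetical utilities for three policies and two citizens:
policies = [(3, 1), (2, 2), (1, 1)]
print(pareto_optimal(policies))  # [(3, 1), (2, 2)] - (1, 1) is dominated by (2, 2)
```

Note that Pareto optimality alone cannot choose between (3, 1) and (2, 2); that is exactly where notions like Kaldor-Hicks efficiency, or an explicit social utility function, come in.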

Of course, the version of utility and efficiency that we have right now is very broad - it includes the human preference for liberty and even the human aversion to inequity. Therefore, modern political and economic philosophy explicitly recognises the trade-offs between efficiency, liberty and equality. As an illiberal society is also usually an inefficient society, and as the preference for equality can be accommodated to a broad extent in macro aggregates (the Gini coefficient, Human Development Indices), it is eminently possible to zero in on a political system that comes quite close to maximizing social utility - the modern liberal democracy, capitalism with a few rules. The debate then becomes about how many of those rules there should be and what those rules should be. Rand doesn't have much to add there.

Of course, in the last two paragraphs I have casually slipped in the idea of positive liberty through the efficiency and equality arguments. You can disagree. However, if you must try to create a grand theory to back your belief in laissez-faire, you are much better off looking towards negative liberty and the classical liberals than towards selfishness and Ayn Rand.

p.s. On epistemology, though, Rand is surprisingly good. From Caplan's post:
"We know that we know nothing," they chatter, blanking out the fact that they are claiming knowledge – "There are no absolutes," they chatter, blanking out the fact that they are uttering an absolute – "You cannot prove that you exist or that you're conscious," they chatter, blanking out the fact that proof presupposes existence, consciousness and a complex chain of knowledge: the existence of something to know, of a consciousness able to know it, and of a knowledge that has learned to distinguish between such concepts as the proved and the unproved.
If you forgive the subtle fallacy of recursion without a base case in the first two sentences (it is perfectly logical, for example, to say that you believe in tolerance and hence are intolerant of intolerance), this is a brilliant paragraph. The "proof presupposes..." part is sheer genius in its clarity and delivers a short yet fatal blow to a lot of Cartesian and Humean skepticism.

p.p.s. My bias towards efficiency and utility, while sympathizing at the same time with negative liberty and the human aversion to inequity, may seem contradictory. It is. I have the broad framework clear, and this framework is essentially that of John Stuart Mill. Within it, I prefer to think about the specific tradeoffs on a case-by-case basis, using inductive logic. If you insist that a social system be logically consistent and yet be able to handle all 'truths' that intuition and reality will throw at it, you have missed Godel's fundamental insight.