Friday 3 July 2009

How might we be mistaken about right and wrong?

I take it that ethical reasoning involves reasoning. This claim doesn't look contentious, but it has implications that do seem surprising, at least to me.

When we reason about a subject we sometimes miscalculate. A mistake in our reasoning leads us to a conclusion that does not follow from our assumptions.

Not only do we miscalculate, we are often led astray by "common-sense intuition." When one studies economics, say, some part of one's education consists of replacing one's everyday intuition with one more appropriate to the subject at hand. Like other subjects, economics has its catalogue of "classical fallacies" – questions to which the right answer turned out to be surprising.

So, too, may we miscalculate in our ethical reasoning. In other words, some of our ethical views may be demonstrably wrong.

Now that does seem strange. If I am told that a view I hold in economics is wrong; well, economics is tricky. I can see how my intuition might be mistaken. For example, it's on the surface plausible that by limiting the number of hours per week that each person can work we can thereby increase the number of jobs available. But, in fact, that is a fallacy – the "lump of labour fallacy." Fine, I'm a grownup, I can change my mind.

But ethical views are stickier. Pick any ethical view you hold strongly – say, that infanticide is morally wrong – and imagine being told that you were wrong in holding this view [1]. Your first rejoinder is not likely to be, "Huh. Well, that was surprising." More likely it will be, "Well, that's just your opinion, and you're clearly bonkers." But if our ethical views are to have any generality at all – to be, in short, useful – not all of our views can be axiomatic: some of them must be deductions from other, more fundamental views. (Otherwise, how would we reason about new circumstances?)

That being so, some of our ethical beliefs are likely to be wrong – in the sense that we have made a mistake in thinking through the issues. It would be shocking if they were not. And it might be worth thinking about how they could be wrong and what we should do about it.

A few words about what I don't mean here. Reasonable people can and do disagree about facts that are relevant to ethical reasoning. For example, you may believe that earthworms feel pain; I may believe that they don't. Whether earthworms do or do not feel pain is (presumably!) a matter of fact which must be determined empirically.

And reasonable people can and do disagree on what ethical principles are to be taken simply as given. You may take it as a principle that certain states of affairs are good in themselves, whereas I may start from the assumption that a state of affairs is good solely to the extent that it increases happiness. We cannot decide empirically between these two viewpoints: we can only choose one or the other and reason therefrom.

But it does seem strange to think that we might be in error about our ethical views because of a calculational mistake.

Well, fine; how then might we be wrong? What would such a mistake look like?

Here's a little argument about a certain ethical issue. I'm hoping that you'll disagree intuitively with the conclusion but agree with each step in the argument. Then we'd know what it looks like to be wrong about some ethical view.

In the UK it used to be the case that sperm donation was anonymous – donors could not later be traced by their biological children. More recently, the law changed so that donation is no longer anonymous. The argument was made that people whose fathers were sperm donors had a right to know the identity of their biological father. They had this right because they had a wish to understand "where they came from," to have a "sense of identity," and because this wish had sufficient force to be a right. Here, for example, is Stephen Ladyman, health minister at the time of the change in the law, quoted by the BBC:
"We think it is right that donor conceived people should be able to have information should they want it about their genetic origins and that is why we have changed the law on donor anonymity."
On the face of it, there's a lot to be said for this argument. People do indeed express a wish to know the identity of their biological fathers because they wish to know "where I came from." And that wish to understand one's identity seems pretty understandable and sincere. And sincere wishes should presumably be at least considered in one's ethical reasoning.

Now, it could be argued that the right of the father to remain anonymous must also be considered. It might be argued that, conversely, it is good, all things considered, for the offspring to be aware of health problems related to their genetic makeup. And it might be argued that one could learn a lot about oneself by learning about one's genetic relatives, and that that's a good thing too. These are all relevant points. But a distinction appears to exist between knowing information about one's genetic father and knowing the identity of one's genetic father. And I'd like to focus here just on the expressed wish of the offspring to know "where I came from" by way of learning the identity of the father. Whatever you think about the broader arguments, it seems hard to deny that this wish is something that should, at least, be taken into account in the moral calculus of the problem.

Yet I'd like to try to deny just that. I claim that this wish should indeed be discounted, for very strong reasons. (If you are the child of a sperm donor, bear with me here. I might, after all, be wrong.)

I claim that the wish should be discounted because there is nothing that is the object of the wish. It is as if I had said: "I have a right to know the identity of my biological father because I wish to know pink banana elephants." I may well sincerely believe that I wish to know pink banana elephants; but since that wish is meaningless it cannot be part of a valid argument whose conclusion is that I have a right to know the identity of my biological father.

Well, now I owe you an argument that there is nothing that is knowing "where I come from". Here is that argument (although you're not going to like it). Suppose that for the purposes of artificial insemination some sperm were artificially created, by generating a random sequence of DNA (within certain parameters to ensure that the result is human). Such a thing, I presume, is possible in principle, if not yet in practice. For a person, X, created from this sperm it seems clear that there would be no such thing as "their biological father." There was no father, just a made-up sequence of DNA.

But now, suppose a search was undertaken of the entire population and it transpired that, just by chance, a certain man, Y, had a genome identical to the randomly generated one. (Unlikely, I grant you; but, again, not impossible.) Is there, now, a right for X to know the identity of Y on the grounds that Y is "where X came from"? Surely there is not. Y had nothing whatsoever to do with the "origins" of X. If there is any meaning to the phrase "wherever they came from" it cannot apply to Y.

But this situation is functionally identical to the situation in which the sperm was originally donated by Y. I conclude that there cannot be a meaning to "where X came from" in that case, either. Mere identity of genome does not constitute a causal path from one person to another. One might think that it does, but this is simply mistaken.

------

Two concluding thoughts. First, just to be clear: it seems to me there are lots of good reasons to study the relations, if any, between a person's genotype and phenotype, and to make that information available. If having a certain gene suggests earlier testing for heart disease, then presumably it is a good thing to know that fact. I'm not suggesting that information about one's "genetic neighbours" is not a meaningful thing. I am suggesting that there is no such thing as "where I am from," at least not simply in virtue of genetic similarity.

Second, and finally, there's the question of what, if anything, to do about all this. I think one's reaction to the argument above is likely to be to pick holes in it, especially if one's intuition is that the conclusion is wrong. This being philosophy, not physics, there's no experiment one can do to settle the matter, so argument is all there is. Typically, one proceeds by inventing, ad hoc, a new moral principle that pushes the argument in one's preferred direction. (That's unfair, of course. What it feels like one is doing is uncovering a moral principle that had previously not been made explicit.)

But it seems to me that this reaction is different to what one's reaction would be if one were morally neutral about the conclusion. Generally, it's a good thing to minimise the number of moral principles. As an experiment, I think it's worthwhile running with arguments that lead to strange conclusions, at least to see where they go.

------

[1] See, eg, Peter Singer, Practical Ethics



Thursday 16 April 2009

A Policeman's Lot Is Not A Happy One

The question, of course, is how to be happy. It strikes me that it's a good start to have a job the performance of which can be objectively assessed and in which one is skilled in the art. Plumbers and hairdressers are happy, apparently, and I imagine this is why. Leak: fixed. Hair: cut. Job: done.

If you can't get such a job, then at least try to arrange that whatever job you have wants of you whatever it is that you're good at. (Like Inspector Morse, for example.)

Most of us have jobs that are annoying and stressful, and I will now tell you why. They are annoying and stressful because they don't want from you the thing you're good at: they want the best you can do given the time and money available. So – every day – you're making trade-offs, compromising your best designs, failing to live up to your potential. That's why teachers are fed up. Also: programmers, designers, architects, and you.

It turns out that there's a way around this. Here's what you do. You arrange for there to be created a single measure of performance that takes into account both the quality of the thing you do and what it cost to get it. Then you make your job maximising that number. You're welcome. So ... are you happy yet? It's funny, but it does seem hard to make this work. Part of the problem is that, typically, you can't get the costs in the same units as the benefits. That's why education is so hard: the cost of running a school is measured in pounds sterling but the benefits are measured in -- what? -- educated students? Hard to compare.

Even when you can do it, it's hard to be happy at it. If you're the sort of person who is happy running a business solely to maximise profit then power to you, it's good for us all that you exist. I don't know why I'm no good at that, but I ain't.

I'm guessing you are not happy yet, either. If so, spare a thought for those who must have the worst job, happiness-wise: the police. (I mean, the police whose job is to stop bad stuff happening, not the police whose job is to catch the bad guys post--bad-guy activity, eg, Inspector Morse.) Not only do you have to make trade-offs; not only are they in the wrong units; but you can't even measure how successful you were. How to measure burglaries prevented? Jewels not taken? Your entire professional life must be full of people complaining that you failed to do your job and cost them money to boot. "I was going to be robbed today, but thanks to the efforts of the Metropolitan Police, I wasn't. Good job, lads." Not likely.

----

This is an argument, by the way, that we should not, repeat not, give away our civil liberties in order to make the police's job easier. Some jobs just are difficult, like bringing up coal from underground and developing a grand unified theory of everything. Sometimes people say they'd like their job to be easier but, actually, that's not what they mean. What they mean is that they'd like their efforts to produce results, and those results to be recognised. (Or, if you're a coal miner, possibly that you'd like not to get a lung disease.)

The job of the police is difficult -- necessarily difficult -- and you can't make it easier unless you change the job. The job of the police is to minimise bad stuff in general. You could ask them to prevent specific bad stuff (eg, riots) by creating other bad stuff (arresting protesters). But if you don't want any bad stuff; well, that's just hard.

But you can make the police happier. Just figure out how to quantify their success at preventing riots. At any rate, I think it behooves us not to confuse the two: an easier job is not the same thing as a measurable one.

Tuesday 24 March 2009

How to make spam irrelevant

Apparently 90% of all emails are spam. Here's a proposal:
  1. Invent digital cash
  2. Automatically discard any email unless it has 1p attached
Charging for email has been proposed before. I presume the reason it hasn't taken off is that we haven't solved step 1. (At least, not in a way that has been widely adopted.) Typically, it's instead suggested that the ISP should charge for emails. And no-one likes that idea.

It may be that step 1 is insoluble. In that case, this is not a good proposal. Perhaps the title of this post should be "What's the first thing we should do if we ever invent digital cash?"

The nice thing about this proposal is that it's fairly robust. If 1p doesn't do it, charge 10p. If your friends balk at sending you 10p, ask them to return the 10p you sent them. Or whitelist them.
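Put together, the receiving side of the proposal is tiny. A minimal sketch, assuming step 1 is solved and that payments attached to emails arrive verified (the names `should_keep`, `MIN_PENCE`, and the whitelist are made up for illustration):

```python
# Hypothetical receiving-side filter, assuming digital cash exists
# and payments attached to emails can be verified.

MIN_PENCE = 1                          # raise to 10 if 1p doesn't do it
WHITELIST = {"friend@example.com"}     # senders exempt from the charge

def should_keep(sender, attached_pence):
    """Keep an email only if it carries the minimum payment
    or the sender is whitelisted; discard everything else."""
    if sender in WHITELIST:
        return True
    return attached_pence >= MIN_PENCE
```

Raising the charge or whitelisting a friend is a one-line change, which is the robustness claimed above.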

Here's an example criticism of the approach:
One suggestion of actually charging everyone a penny per email is rife with unsolvable issues: who controls the money? Who determines "exempt" status for non-profits? How do poor people or poor countries pay a penny per email? These problems are politically insurmountable.
(That's from Mike Adams at spamdon'tbuyit.org).

Those are good questions, but calling them unsolvable seems a bit harsh. Here is an attempt at solutions: The money is controlled by the issuing bank. (Figuring out how the issuing bank issues the cash is part of the problem of step 1, so I'm avoiding this question to a large extent.) Non-profits should not be exempt since, presumably, we don't want non-profits sending spam. Poor people will have the same problems finding 1p to send you an email as they do getting the internet connection in the first place. If you think that those problems are a moral issue – and they might be – please send the poor a lot of emails. (That is flippant but intended to be serious.)

Tuesday 17 March 2009

Musical instruments

You may not know this, but there are two kinds of musical instruments in the world. They can be distinguished by immersing them in helium gas. Type A instruments do not change pitch in helium; Type B instruments do.

If you've ever made your voice sound funny by inhaling the helium from a party balloon, you'll know that the human voice is Type B. Quick puzzle: What's the cause of the distinction?
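Here's a numerical hint, using only textbook gas physics (nothing specific to any particular instrument): the speed of sound in an ideal gas depends on its molar mass, and a resonating column of gas sounds at a pitch proportional to that speed.

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 293.0   # room temperature, K

def sound_speed(gamma, molar_mass):
    """Speed of sound in an ideal gas: v = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / molar_mass)

v_air = sound_speed(1.4, 0.029)       # mostly diatomic, ~29 g/mol -> ~343 m/s
v_helium = sound_speed(5 / 3, 0.004)  # monatomic, ~4 g/mol -> ~1,008 m/s

# The pitch of a resonating gas column scales with the sound speed,
# so anything whose pitch comes from a vibrating column of gas
# shifts by this factor -- roughly a factor of three.
ratio = v_helium / v_air
```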


Saturday 7 February 2009

Pound cost averaging (Part II)

Pound cost averaging crops up a lot as an investment strategy.

Most proponents tout pound cost averaging as a way of "beating the market," but, to the extent that there's a meaningful comparison to be made, I claim it does not beat the market. That's not to say that it's a silly strategy; after all, "invest a little bit each month" is quite a sensible strategy if what you want to do is save money. Furthermore, there's a sense in which pound cost averaging does no worse than the market, either. In fact, there's a very sensible sense in which pound cost averaging is exactly the same as the market. I thought it would be interesting to work through John Kay's example to see what happens; it turns out that there are a lot of essential details to be filled in.

In Kay's suggested strategy, we invest £100 each year in some particular stock (the single stock being a proxy for whatever equity investment opportunity is open to us). The stock is assumed to have a value in each year of 100p, 50p, 100p, 50p, 100p, ...

After 10 years the accumulated shares will either be worth £1,500 or £750, depending on the year in which they are sold. Therefore the return generated by this strategy is either 50% or −25%. (I am, throughout, ignoring the time value of money).
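The arithmetic is easy to check directly; this transcribes the example as given (ten £100 purchases at the alternating prices):

```python
# Share price in pence in each of the ten years: 100, 50, 100, 50, ...
prices_pence = [100, 50] * 5
annual_investment = 100          # pounds invested each year

# £100 at 100p buys 100 shares; £100 at 50p buys 200 shares.
shares = sum(annual_investment * 100 / p for p in prices_pence)
invested = annual_investment * len(prices_pence)      # £1,000 in total

value_sold_at_100p = shares * 100 / 100   # £1,500 -> a +50% return
value_sold_at_50p = shares * 50 / 100     # £750   -> a -25% return
```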

Now, if we are to decide whether or not that strategy "beats the market," presumably we will need a "market strategy" with which to compare it. What would such a strategy look like? Presumably it would roughly be "just invest everything you have and don't try anything clever." Suppose that every year we have only £100 to invest. In that case, investing everything you have would mean investing £100 per year; in other words, the "market strategy" just is pound cost averaging.

Obviously, in this sense, pound cost averaging is neither better nor worse than the market, but it's rather a trivial sense. 

Perhaps, however, we have choices to make. A simple model that captures at least some of those choices is to imagine that we have been given the £1,000 up front and have to decide how to invest it over the next ten years. In this model, we will need to assume, in addition, that we can also save money at zero risk; in other words, that whatever we don't invest this year can be carried forward to next year. 

In this version, it seems reasonable to define the "market strategy" as: invest the £1,000 straight away and wait ten years. Now we'll get back either £2,000 or £1,000 or £500 depending on the year in which the stock is bought, and the year in which it's sold. So the return generated by this strategy is either 100% or 0% or −50%.
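This, too, is easy to check; enumerating the four combinations of buying price and selling price gives exactly three distinct outcomes:

```python
lump = 1000   # pounds, invested all at once

# Enumerate every combination of buying-year and selling-year price.
outcomes = set()
for buy_pence in (100, 50):
    shares = lump * 100 / buy_pence
    for sell_pence in (100, 50):
        value = shares * sell_pence / 100   # pounds returned
        outcomes.add(value / lump - 1)      # net return

# outcomes is {1.0, 0.0, -0.5}: a return of +100%, 0%, or -50%.
```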

Which strategy is better? It's hard to tell, actually, because they're clearly not directly comparable. It's not like one guarantees a 100% return and the other guarantees only a 50% return. Instead, there's some uncertainty in both.

At this point, you're probably thinking, "Why all this caveat venditor nonsense? It's obvious which year to sell your stock in: the year in which it's worth 100p a share. For that matter, it's obvious which year to buy in: the year in which it's worth 50p a share. So why wouldn't you just do that?"

This, I think, is a major problem with Kay's example. He chose a certain set of prices for the share to illustrate a point. But presumably he didn't mean to suggest that these prices would be known in advance. No-one knows what the share prices will be in advance. And presumably he didn't mean to suggest that pound-cost averaging is only a good idea for this specific set of share prices; he is claiming that it's a good idea in general. A more useful model would specify the probability that the share price would increase or decrease, each year, by a certain amount. 

But the end result would be similar to where we are: Both the "invest it all at once" strategy and the strategy of pound cost averaging produce a range of possible answers. The variation in that range is lower under pound cost averaging but the average return is also lower. (There's a question of how to compute the average, of course.) A strategy of "save everything" would be the extreme of no variation, but no return either.
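To see this concretely, here is a toy simulation. The price model is my own assumption, not Kay's: each year the price doubles or halves with equal probability. The point is only the qualitative comparison of the spreads and means of the two strategies.

```python
import random
import statistics

def simulate(years=10, trials=10_000, seed=0):
    """Compare lump-sum investing with pound cost averaging under a
    toy model in which the price doubles or halves each year."""
    rng = random.Random(seed)
    lump_values, pca_values = [], []
    for _ in range(trials):
        price, prices = 1.0, []
        for _ in range(years):
            price *= 2.0 if rng.random() < 0.5 else 0.5
            prices.append(price)
        final = prices[-1]
        # Lump sum: £1,000 of shares bought at the initial price of 1.0.
        lump_values.append(1000 * final)
        # Pound cost averaging: £100 of shares at each year's price.
        shares = sum(100 / p for p in prices)
        pca_values.append(shares * final)
    return lump_values, pca_values

lump_values, pca_values = simulate()
mean_lump = statistics.mean(lump_values)
mean_pca = statistics.mean(pca_values)
spread_lump = statistics.pstdev(lump_values)
spread_pca = statistics.pstdev(pca_values)
```

In this model pound cost averaging does show a smaller spread of outcomes than investing all at once, and a smaller mean, which is just the trade-off described above.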

Here's the point: By varying the split between what you invest and what you save, you get to trade off risk and return: none of either for pure savings; lots of both for pure investing. Conceptually, any point along this trade-off line could be considered to be a "market strategy." Pound-cost averaging just puts you at one point on this trade-off, somewhere between the two extremes. But if that's what you want, you can get the same effect simply by saving some and investing some. There's nothing particularly clever about pound cost averaging; and it certainly doesn't beat the market.