Asimov’s Corollary

Note: I’m going to risk a bit of copyright infringement today, which is generally unlike me.  However, I am not gaining anything from this – and in fact, I hope I’m giving some publicity to the estate of Isaac Asimov rather than diminishing the value of his works.  If the publishers would like to make another run of this particular book, I’d be more than happy to take this down from the Internet.

Isaac Asimov has always been one of my favorite authors.  I’ve never been certain how serious he was about doing research science as a career, but I’ve always been inspired by the idea of a scientist becoming an author to publicize the value of science.  There’s something noble about it.  Like me, though, Asimov may have had a passion for science but struggled in the lab.  As he phrased it,  “Whatever gifts I have, none of them includes deftness in experimental work.” (The Magic Isle, 1977)  I’ve always just said that glassware and I don’t get along, but I’m pretty sure we’re on the same page.

In any case, Asimov worked in an age before computers, so my solution of moving into bioinformatics wasn’t an avenue open to him – and unfortunately, neither was blogging. He was, however, a prolific writer in the age before the Internet, and his style lends itself beautifully to the blogging format.  Although his pieces are slightly longer than the average blog post, they have a format that has inspired me – and which I have always tried to emulate.  No surprise there, though: Asimov has been a hero of mine almost since I was able to read.

Frankly, if he had lived during the age of the Internet, I think he’d not only have had a blog, but he’d have been one of the most prolific bloggers around. In tribute to him, I’d like to republish one of his essays – it’s eerie how well it transcends the generations.  It’s one of his most timeless pieces, in my humble opinion. Asimov, like me, always starts his pieces with some preamble or a short story.  The one used here is a bit dated, but please stick with it; the rest of the piece makes up for it, charting out a manifesto for skeptics, for good science and for rational thought. Without further ado, I’m going to republish the full piece here.  Please keep in mind that he wrote this 35 years ago – before I was born, before atheism was acceptable, and while skepticism was just taking off.

Isaac Asimov, you are my hero.

The following piece is reprinted without permission from chapter seventeen of Isaac Asimov’s book Quasar, Quasar, Burning Bright, originally printed in the February 1977 issue of The Magazine of Fantasy and Science Fiction.  (I believe neither the book nor the magazine is in print anywhere – and if they are, I highly recommend you pick up a copy just to have this essay on hand.)  Mistakes in the text below are undoubtedly mine.

I have just come back from Rensselaerville, New York, where, for the fifth year, I have led a four-day seminar on some futuristic topics. (This time it was on the colonization of space.)  Some seventy to eighty people attended, almost all of them interested in science fiction and all of them eager to apply their imaginations to the posing of problems and the suggesting of solutions.

The seminar only runs from a Sunday to a Thursday, but by Thursday there is a mass heartbreak at the thought of leaving and vast promises (usually kept) to return the next year.

This year we managed to persuade Ben Bova (editor of Analog) and his charming wife, Barbara, to attend.  They threw themselves into the sessions with a will and were beloved by all.

Finally came the end, at Thursday noon, and, as is customary on these occasions, I was given a fancy pseudo-plaque testifying to my good nature and to my suave approach towards members of the opposite sex. [footnote: See my book The Sensuous Dirty Old Man, (Walker 1971).]

A charming young woman, not quite five feet tall, made the presentation and in simple gratitude, I  placed my arm about her waist.  Owing to her unusually short height, however, I didn’t manage to get low enough and the result brought laughter from the audience.

Trying to dismiss this embarrassing faux pas (though I must admit that neither of us budged), I said “I’m sorry, folks.  That’s just the Asimov grip.”

And from the audience Ben Bova (who, it seems appropriate to say in this particular connection, is my bosom buddy) called out, “Is that anything like the swine flu?”

I was wiped out, and what does one do when one has been wiped out by a beloved pal? Why, one turns about and proceeds to try to wipe out some other beloved pal – in this case, my English colleague Arthur C. Clarke.

In Arthur’s book Profiles of the Future (Harper & Row, 1962), he advances what he himself calls “Clarke’s Law.”  It goes as follows:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right.  When he states that something is impossible, he is very probably wrong.”

Arthur goes on to explain what he means by “elderly.” He says: “In physics, mathematics, and astronautics it means over thirty;  in other disciplines, senile decay is sometimes postponed to the forties.”

Arthur goes on to give examples of “distinguished but elderly scientists” who have pished and tut-tutted all sorts of things that have come to pass almost immediately.  The distinguished Briton Ernest Rutherford pooh-poohed the possibility of nuclear power, the distinguished American Vannevar Bush bah-humbugged intercontinental ballistic missiles, and so on.

But naturally when I read a paragraph like that, knowing Arthur as I do, I begin to wonder if, among all the others, he is thinking of me.

After all, I’m a scientist.  I am not exactly a “distinguished” one but nonscientists have gotten the notion somewhere that I am, and I am far too polite a person to subject them to the pain of disillusionment, so I don’t deny it.  And then, finally, I am a little over thirty and have been a little over thirty for a long time, so I qualify as elderly by Arthur’s definition. (So does he, by the way, for he is – ha, ha – three years older than I am.)

Well, then, as a distinguished but elderly scientist, have I been going around stating that something is impossible or, in any case, that that something bears no relationship to reality?  Heavens, yes! In fact, I am rarely content to say something is “wrong” and let it go at that.  I make free use of terms and phrases like “nonsense,” “claptrap,” “stupid folly,” “sheer idiocy,” and many other bits of gentle and loving language.

Among currently popular aberrations, I have belabored without stint Velikovskianism, astrology, flying saucers and so on.

While I haven’t yet had occasion to treat these matters in detail, I also consider the views of the Swiss Erich von Däniken on “ancient astronauts” to be utter hogwash; I take a similar attitude to the widely held conviction (reported, but not to my knowledge subscribed to, by Charles Berlitz in The Bermuda Triangle) that the “Bermuda Triangle” is the hunting ground of some alien intelligence.

Doesn’t Clarke’s Law make me uneasy, then?  Don’t I feel as though I am sure to be quoted extensively, and with derision, in some book written a century hence by some successor to Arthur?

No, I don’t.  Although I accept Clarke’s Law and think Arthur is right in his suspicion that the forward-looking pioneers of today are the backward-yearning conservatives of tomorrow, [Footnote: Heck, Einstein himself found he could not accept the uncertainty principle and, in consequence, spent the last thirty years of his life as a living monument and nothing more.  Physics went on without him.] I have no worries about myself.  I am very selective about the scientific heresies I denounce, for I am guided by what I call Asimov’s Corollary to Clarke’s Law.  Here is Asimov’s Corollary:

When, however, the lay public rallies round an idea that is denounced by distinguished but elderly scientists and supports that idea with great fervor and emotion – the distinguished but elderly scientists are then, after all, probably right.

But why should this be?  Why should I, who am not an elitist, but an old-fashioned liberal and an egalitarian (see “Thinking About Thinking” in The Planet That Wasn’t, Doubleday, 1976), thus proclaim the infallibility of the majority, holding it to be infallibly wrong?

The answer is that human beings have the habit (a bad one, perhaps, but an unavoidable one) of being human; which is to say that they believe in that which comforts them.

For instance, there are a great many inconveniences and disadvantages to the Universe as it exists.  As examples: you cannot live forever, you can’t get something for nothing, you can’t play with knives without cutting yourself, you can’t win every time, and so on and so on (see “Knock Plastic,” in Science, Numbers and I, Doubleday 1968).

Naturally, then, anything which promises to remove these inconveniences and disadvantages will be eagerly believed.  The inconveniences and disadvantages remain, of course, but what of that?

To take the greatest, most universal, and most unavoidable inconvenience, consider death.  Tell people that death does not exist and they will believe you and sob with gratitude at the good news.  Take a census and find out how many human beings believe in life after death, in heaven, in the doctrines of spiritualism, in the transmigration of souls. I am quite confident you will find a healthy majority, even an overwhelming one, in favor of side-stepping death by believing in its nonexistence through one strategy or another.

Yet as far as I know, there is not one piece of evidence ever advanced that would offer any hope that death is anything other than the permanent dissolution of the personality and that beyond it, as far as individual consciousness is concerned, there is nothing.

If you want to argue the point, present the evidence.  I must warn you, though, that there are some arguments I won’t accept.

I won’t accept any argument from authority. (“The Bible says so.”)

I won’t accept any argument from internal conviction. (“I have faith it’s so.”)

I won’t accept any argument from personal abuse. (“What are you, an atheist?”)

I won’t accept any argument from irrelevance. (“Do you think you have been put on this Earth just to exist for a moment of time?”)

I won’t accept any argument from anecdote. (“My cousin has a friend who went to a medium and talked to her dead husband.”)

And when all that (and other varieties of nonevidence) are eliminated, there turns out to be nothing. [Footnote: Lately, there have been detailed reports about what people are supposed to have seen during “clinical death.” – I don’t believe a word of it.]

Then why do people believe? Because they want to.  Because the mass desire to believe creates a social pressure that is difficult (and, in most times and places, dangerous) to face down.  Because few people have had the chance of being educated into the understanding of what is meant by evidence or into the techniques of arguing rationally.

But mostly because they want to.  And that is why a manufacturer of toothpaste finds it insufficient to tell you that it will clean your teeth almost as well as the bare brush will.  Instead he makes it clear to you, more or less by indirection, that his particular brand will get you a very desirable sex partner.  People, wanting sex somewhat more intensely than they want clean teeth, will be the readier to believe.

Then, too, people generally love to believe the dramatic, and incredibility is no bar to the belief but is, rather, a positive help.

Surely we all know this in an age when whole nations can be made to believe in any particular bit of foolishness that suits their rulers and can be made willing to die for it, too. (This age differs from previous ages in this, however, only in that the improvement of communications makes it possible to spread folly with much greater speed and efficiency.)

Considering their love of the dramatic, is it any surprise that millions are willing to believe, on mere say-so and nothing more, that alien spaceships are buzzing around the Earth and that there is a vast conspiracy of silence on the part of the government and scientists to hide that fact? No one has ever explained what government and scientists hope to gain by such a conspiracy or how it can be maintained, when every other secret is exposed at once in all its details – but what of that? People are always willing to believe in any conspiracy on any subject.

People are also willing and eager to believe in such dramatic matters as the supposed ability to carry on intelligent conversations with plants, the supposed mysterious force that is gobbling up ships and planes in a particular part of the ocean, the supposed penchant of Earth and Mars to play Ping-Pong with Venus and the supposed accurate description of the result in the Book of Exodus, the supposed excitement of visits from extraterrestrial astronauts in prehistoric times and their donation to us of our arts, techniques and even some of our genes.

To make matters still more exciting, people like to feel themselves to be rebels against some powerful repressive force – as long as they are sure it is quite safe.  To rebel against a powerful political, economic, religious, or social establishment is very dangerous and very few people dare do it, except, sometimes, as an anonymous part of a mob.  To rebel against the “scientific establishment,” however, is the easiest thing in the world, and anyone can do it and feel enormously brave, without risking as much as a hangnail.

[Footnote: A reader once wrote me to say that the scientific establishment could keep you from getting grants, promotions, and prestige, could destroy your career, and so on.  That’s true enough.  Of course, that’s not as bad as burning you at the stake or throwing you in a concentration camp, which is what a real establishment could and would do, but even depriving you of an appointment is rotten. However, that works only if you are a scientist.  If you are a nonscientist, the scientific establishment can do nothing more than make faces at you.]

Thus, the vast majority, who believe in astrology and think that the planets have nothing better to do than form a code that will tell them whether tomorrow is a good day to close a business deal or not, become all the more excited and enthusiastic about the bilge when a group of astronomers denounce it.

Again, when a few astronomers denounced the Russian-born American Immanuel Velikovsky, they lent the man (and, by reflection, his followers) an aura of the martyr, which he (and they) assiduously cultivate, though no martyr in the world has ever been harmed so little or helped so much by the denunciations.

I used to think, indeed, that it was entirely the scientific denunciations that had put Velikovsky over the top and that had the American astronomer Harlow Shapley only had the sang froid to ignore the Velikovskian folly, it would quickly have died a natural death.

I no longer think so.  I now have greater faith in the bottomless bag of credulity that human beings carry on their back.  After all, consider Von Däniken and his ancient astronauts.  Von Däniken’s books are even less sensible than Velikovsky’s and are written far more poorly, [Footnote: Velikovsky, to do him justice, is a fascinating writer and has an aura of scholarliness that Von Däniken utterly lacks.] and yet he does well.  What’s more, no scientist, as far as I know, has deigned to take notice of Von Däniken. Perhaps they felt such notice would do him too much honor and would but do for him what it had done for Velikovsky.

So Von Däniken has been ignored – and, despite that, is even more successful than Velikovsky is, attracts more interest, and makes more money.

You see, then, how I choose my “impossibles.” I decide that certain heresies are ridiculous and unworthy of any credit not so much because the world of science says,  “It is not so!” but because the world of nonscience says, “It is,” so enthusiastically.  It is not so much that I have confidence in scientists being right as that I have so much confidence in nonscientists being wrong.


I admit, by the way, that my confidence in scientists being right is somewhat weak.  Scientists have been wrong, even egregiously wrong, many times. There have been heretics who have flouted the scientific establishment and have been persecuted therefor (as far as the scientific establishment is able to persecute), and, in the end, it has been the heretic who has proved right.  This has happened not only once, I repeat, but many times.

Yet that doesn’t shake the confidence with which I denounce those heresies I do denounce, for in the cases in which heretics have won out, the public has, almost always, not been involved.

When something new in science is introduced, when it shakes the structure, when it must in the end be accepted, it is usually something that excites scientists, sure enough, but does not excite the general public – except perhaps to get them to yell for the blood of the heretic.

Consider Galileo, to begin with, since he is the patron saint (poor man!) of all self-pitying crackpots. To be sure, he was not persecuted primarily by scientists for his scientific “errors,” but by theologians for his very real heresies (and they were real enough by seventeenth-century standards).

Well, do you suppose the general public supported Galileo? Of course not. There was no outcry in his favor. There was no great movement in favor of the Earth going round the Sun.  There were no “sun-is-center” movements denouncing the authorities and accusing them of a conspiracy to hide the truth.  If Galileo had been burned at the stake, as Giordano Bruno had been a generation earlier, the action would probably have proved popular with those parts of the public that took the pains to notice it in the first place.

Or consider the most astonishing case of scientific heresy since Galileo – the matter of the English naturalist Charles Robert Darwin.  Darwin collected the evidence in favor of the evolution of species by natural selection and did it carefully and painstakingly over the decades, then published a meticulously reasoned book that established the fact of evolution to the point where no rational biologist can deny it [Footnote: Please don’t write me to tell me that there are creationists who call themselves biologists. Anyone can call himself a biologist.] even though there are arguments over the details of the mechanism.

Well, then, do you suppose the general public came to the support of Darwin and his dramatic theory? They certainly knew about it.  His theory made as much of a splash in his day as Velikovsky did a century later.  It was certainly dramatic – imagine species developing by sheer random mutation and selection, and human beings developing from apelike creatures! Nothing any science fiction writer ever dreamed up was as shatteringly astonishing as that to the people who from earliest childhood had taken it for established and absolute truth that God had created all the species ready-made in the space of a few days and that man in particular was created in the divine image.

Do you suppose the general public supported Darwin and waxed enthusiastic about him and made him rich and renowned and denounced the scientific establishment for persecuting him? You know they didn’t.  What support Darwin did get was from scientists. (The support any rational scientist gets is from scientists, though usually from only a minority of them at first.)

In fact, not only was the general public against Darwin then, they are against Darwin now.  It is my suspicion that if a vote were taken in the United States right now on the question of whether man was created all at once out of the dirt or through the subtle mechanisms of mutation and natural selection over millions of years, there would be a large majority who would vote for dirt.

There are other cases, less famous, where the general public didn’t join the persecutors only because they never heard there was an argument.

In the 1830s the greatest chemist alive was the Swede Jöns Jakob Berzelius.  Berzelius had a theory of the structure of organic compounds which was based on the evidence available at that time.  The French chemist Auguste Laurent collected additional evidence that showed that Berzelius’ theory was inadequate.  He himself suggested an alternate theory of his own which was more nearly correct and which, in its essentials, is still in force now.

Berzelius, who was in his old age and very conservative, was unable to accept the new theory.  He retaliated furiously and none of the established chemists of the day had the nerve to stand up against the great Swede.

Laurent stuck to his guns and continued to accumulate evidence.  For this he was rewarded by being barred from the more famous laboratories and being forced to remain in the provinces.  He is supposed to have contracted tuberculosis as a result of working in poorly heated laboratories and he died in 1853 at the age of forty-six.

With both Laurent and Berzelius dead, Laurent’s new theory began to gain ground.  In fact, one French chemist who had originally supported Laurent but had backed away in the face of Berzelius’ displeasure now accepted it again and actually tried to make it appear that it was his own theory. (Scientists are human, too.)

That’s not even a record for sadness.  The German physicist Julius Robert Mayer, for his championship of the law of conservation of energy in the 1840s, was driven to madness. Ludwig Boltzmann, the Austrian physicist, for his work on the kinetic theory of gases in the late nineteenth century, was driven to suicide.  The work of both is now accepted and praised beyond measure.

But what did the public have to do with all these cases? Why, nothing. They never heard of them.  They never cared. It didn’t touch any of their great concerns.  In fact, if I wanted to be completely cynical, I would say that the heretics were in this case right and that the public, somehow sensing this, yawned.

This sort of thing goes on in the twentieth century, too. In 1912 a German geologist, Alfred Lothar Wegener, presented to the world his views on continental drift.  He thought the continents all formed a single lump of land to begin with and that this lump, which he called “Pangaea,” had split up and that the various portions had drifted apart.  He suggested that the land floated on the soft, semisolid underlying rock and that the continental pieces drifted apart as they floated.

Unfortunately, the evidence seemed to suggest that the underlying rock was far too stiff for continents to drift through and Wegener’s notions were dismissed and even hooted at. For half a century the few people who supported Wegener’s notions had difficulty in getting academic appointments.

Then, after World War II, new techniques of exploration of the sea bottom uncovered the global rift, the phenomenon of sea-floor spreading, the existence of crustal plates, and it became obvious that the Earth’s crust was a group of large pieces that were constantly on the move and that the continents were carried with the pieces. Continental drift, or “plate tectonics,” as it is more properly called, became the cornerstone of geology.

I personally witnessed this turnabout.  In the first two editions of my Guide to Science (Basic Books, 1960, 1965), I mentioned continental drift but dismissed it haughtily in a paragraph.  In the third edition (1972) I devoted several pages to it and admitted having been wrong to dismiss it so readily. (This is no disgrace, actually. If you follow the evidence you must change as additional evidence arrives and invalidates earlier conclusions. It is those who support ideas for emotional reasons only who can’t change.  Additional evidence has no effect on emotion.)

If Wegener had not been a true scientist, he could have made himself famous and wealthy.  All he had to do was take the concept of continental drift and bring it down to earth by having it explain the miracles of the Bible. The splitting of Pangaea might have been the cause, or the result, of Noah’s Flood.  The formation of the Great African Rift might have drowned Sodom.  The Israelites crossed the Red Sea because it was only a half mile wide in those days.  If he had said all that, the book would have been eaten up and he could have retired on his royalties.

In fact, if any reader wants to do this now, he can still get rich. Anyone pointing out this article as the inspirer of the book will be disregarded by the mass of “true believers,” I assure you.

So here’s a new version of Asimov’s Corollary, which you can use as your guide in deciding what to believe and what to dismiss:

If a scientific heresy is ignored or denounced by the general public, there is a chance it may be right. If a scientific heresy is emotionally supported by the general public, it is almost certainly wrong.

You’ll notice that in my two versions of Asimov’s Corollary I was careful to hedge a bit.  In the first I say that scientists are “probably right.” In the second I say that the public is “almost certainly wrong.” I am not absolute. I hint at exceptions.

Alas, not only are people human; not only are scientists human; but I’m human too.  I want the Universe to be as I want it to be and that means completely logical. I want silly, emotional judgments to be always wrong.

Unfortunately, I can’t have the Universe the way I want it, and one of the things that makes me a rational being is that I know this.

Somewhere in history there are bound to be cases in which science said “No” and the general public, for utterly emotional reasons, said “Yes” and in which it was the general public that was right. I thought about it and came up with an example in half a minute.

In 1798 the English physician Edward Jenner, guided by old wives’ tales based on the kind of anecdotal evidence I despise, tested to see whether the mild disease of cow-pox did indeed confer immunity upon humans from the deadly and dreaded disease of smallpox. (He wasn’t content with the anecdotal evidence, you understand; he experimented.) Jenner found the old wives were correct and he established the technique of vaccination.

The medical establishment of the day reacted to the new technique with the greatest suspicion. Had it been left to them, the technique might well have been buried.

However, popular acceptance of vaccination was immediate and overwhelming.  The technique spread to all parts of Europe. The British royal family was vaccinated; the British Parliament voted Jenner ten thousand pounds. In fact, Jenner was given semidivine status.

There’s no problem in seeing why. Smallpox was an unbelievably frightening disease, for when it did not kill, it permanently disfigured.  The general public therefore was almost hysterical with desire for the truth of the suggestion that the disease could be avoided by the mere prick of a needle.

And in this case, the public was right! The Universe was as they wanted it to be. Within eighteen months after the introduction of vaccination, for instance, the number of deaths from smallpox in England was reduced to one third of what it had been.

So there are indeed exceptions. The popular fancy is sometimes right.

But not often, and I must warn you that I lose no sleep over the possibility that any of the popular enthusiasms of today are liable to turn out to be scientifically correct. Not an hour of sleep do I lose; not a minute.

BlueSeq Knowledgebase

Remember BlueSeq?  The company I gave a hard time after their presentation at Copenhagenomics?  Turns out they have some cool stuff up on the web.  Here’s a comparison of sequencing technologies that they’ve posted.  Looks like they’ve put together quite a decent set of resources.  I haven’t finished exploring it yet, but it looks quite useful.

Via CLC bio blog – Post: Goldmine of unbiased expert knowledge on next generation sequencing.

Nature Comment : The case for locus-specific databases

There’s an interesting comment available in Nature today (EDIT: it came out last month, though I only found it today).  Unfortunately, it’s by subscription only, but let me save you the hassle of downloading it, if you don’t already have a subscription.  It’s not what I thought it was.

The entire piece fails to make the case for locus-specific databases, but instead conflates locus-specific with “high-resolution”, and then proceeds to tell us why we need high resolution data.  The argument can roughly be summarized as:

  • OMIM and databases like it are great, but don’t list all known variations
  • Next-gen sequencing gives us the ability to see the genome in high resolution
  • You can only get high-resolution data by managing data in a locus-specific manner
  • Therefore, we should support locus-specific databases

Unfortunately, point number three is actually wrong.  It’s just that our public databases haven’t yet transitioned to a high-resolution format.  (i.e., we have an internal database that stores data in a genome-wide manner at high resolution…  the data is, alas, not public.)

Thus, on that premise, I don’t think we should be supporting locus-specific databases specifically – indeed, I would say that the support they need is to become amalgamated into a single genome-wide database at high resolution.
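To make the point concrete, here’s a minimal sketch (the table layout, gene names and coordinates are hypothetical, purely for illustration): a “locus-specific database” can be nothing more than a range query over a single genome-wide variant store, so high resolution doesn’t require per-locus silos.

```python
# Sketch: one genome-wide variant table; "locus-specific" views are just filters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE variant ("
    " chrom TEXT, pos INTEGER, ref TEXT, alt TEXT, annotation TEXT)"
)
conn.executemany(
    "INSERT INTO variant VALUES (?, ?, ?, ?, ?)",
    [
        # Illustrative rows only - not real coordinates or annotations.
        ("chr7", 117559593, "G", "A", "CFTR missense"),
        ("chr17", 43045712, "C", "T", "BRCA1 nonsense"),
    ],
)

def locus_view(chrom, start, end):
    """A per-locus 'database' derived on demand from the genome-wide store."""
    cur = conn.execute(
        "SELECT chrom, pos, ref, alt, annotation FROM variant"
        " WHERE chrom = ? AND pos BETWEEN ? AND ?",
        (chrom, start, end),
    )
    return cur.fetchall()

# Querying one gene's region loses nothing relative to a dedicated
# locus-specific database...
print(locus_view("chr7", 117400000, 117700000))
# ...while genome-wide queries remain possible against the same table.
print(conn.execute("SELECT COUNT(*) FROM variant").fetchone()[0])
```

A dedicated per-locus database gives you only the `locus_view` half; the genome-wide table gives you that plus every cross-locus question you might want to ask later.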

You wouldn’t expect major gains in understanding of car mechanics if you, by analogy, insisted that all parts should be studied independently at high resolution.  Sure you might improve your understanding of each part, and how it works alone, but the real gains come from understanding the whole system.  You might not actually need certain parts, and sometimes you need to understand how two parts work together.  It’s only by studying the whole system that you begin to see the big picture.

IMHO, locus-specific databases are blinders that we adopt in the name of needing higher resolution, which is more of a comment on the current state of biology.  In fact, the argument can really be made that we don’t need locus-specific databases – we need better bioinformatics!

Google+ goes to battle…

After playing with Google+ for part of a day, I have a few comments to make.  Some are in relation to bioinformatics, others are just general comments.

My first comment is probably the least useful:  Google, why the hell did you make me wait 3 days to get into Google+, only to then let EVERYONE into it 3 hours later after activating my invite?  Seriously, you could have told me that I was wasting my time when I was chasing one down.

Ok, that’s out of my system now. So on to the more interesting things.  First, this isn’t Google’s first shot at the social media field.  We all remember “The Wave”.  It was Google’s “Microsoft Moment”, that is to say, their time to release something that was more hype than real product.  Fortunately, Google stepped back from the brink and started over – so with that in mind, I think Google deserves a lot of credit for not pulling a Microsoft. (In my dictionary, pulling a Microsoft is blowing a lot of money on a bunch of ads for products that really suck, but will get critical mass by sheer advertising.  e.g. Bing. Cloud. Need I say more?)

Ok, what did Google get right?  Well, first, it seems that they’ve been reading the Diaspora mailing list, or at least paying attention.  The whole environment looks exactly like Diaspora to me.  It’s clean, it’s simple, and unlike Facebook, is built around communities that don’t have to overlap!  With Facebook, everyone belongs to a single group, while Diaspora brought in the concept of groups, so that you can segment your circles.  Clicking and dragging people into those groups was what convinced me that Diaspora would be a Facebook killer.

Instead, Google+ has leapfrogged and beaten Diaspora.  And rightly so – Diaspora had its faults, but this isn’t the right place for me to get into that.  As far as I can tell, everything I wanted from Diaspora has found its way into Google+ with one exception: you can’t host your own data.  Although, really, if there’s one company out there that has done a good job of managing user data (albeit one that has stumbled a few times), it’s Google.  The “Don’t be evil” motto has taken a few beatings, but it’s still a good start.

[By the way, Diaspora fans, the code was open source, so if you’re upset that Google replicated the look and feel, you have to remember that that is the purpose of open source: to foster good development ideas. ]

So, where does this leave things?

First, I think Google has a winner here.  The big question is, unlike Wave, can it reach critical mass?  I think the answer to that is a profound yes.  A lot of the trend setters are moving here from Facebook, which means others will follow.  More importantly, however, I think getting security right from the start will be one of the big draws for Google.  They don’t need to convince your grandmother to leave Facebook – they just need you to switch, and your grandmother will eventually be dragged along because she’ll want to see your pictures.  (And yes, Picasa is about to be deluged with new accounts.)

More importantly, all those kids who want to post naked pictures of themselves dancing on cars during riots are going to move over pretty damn quickly.  Whether that’s a good thing or not, I think EVERYONE learned something from the aftermath of the Vancouver riots.

So great, but how will this be useful to the rest of us?  Actually, I’ve heard that Google+ is going to be the Twitter killer – and I can see that, but I don’t see it as the main purpose.  Frankly, the real value is in the harmonization of services.  Google has, hands down, been one of the best Software as a Service (SaaS) providers around, in my humble opinion.  When your Google+ account talks to your email, documents and images – and lets you have intuitive, fine-grained control over who sees what – I think people will find it dramatically more useful than any of the competition.  Twitter will either have to find a way to integrate into Google+ or figure out how to implement communities of their own.  It may be a subtle change, but it’s a sea change in how people interact on the web.

For those of you who are bioinformaticians, you won’t be able to take Google+ lightly either.  Already, I’ve found some of my favourite scientist twitterers on Google+, and some have started posting things.  Once people get the hang of the groups, it won’t be long before we see people following industry groups, science networks and celebrities.  (Hell, even PZ Myers has an account already.)

The more I think about it, the more I see its potential as a good collaboration tool as well.  Let me give an example.  If group management can be made into something like a mailing list (i.e., opt-in with a moderator), a PI could create a “My Lab” group that only his own students and group members are allowed to join – a great way to communicate group announcements.  It doesn’t spill information out to people who aren’t interested, and other people can’t get into those communications unless someone intentionally “re-tweets” the content.  Merge this with Google Calendar, and you have an instant scheduling system as well.
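
To make the opt-in, moderated group idea concrete, here’s a minimal sketch of the data structure I have in mind.  To be clear, Google+ exposes no such API – every name here is hypothetical, and this is just the access model, not an implementation:

```python
class ModeratedGroup:
    """Opt-in group: anyone may request to join, but only the
    moderator (e.g. the PI) approves members, and posts are
    visible to members only -- nothing spills out."""

    def __init__(self, name, moderator):
        self.name = name
        self.moderator = moderator
        self.members = {moderator}
        self.pending = set()
        self.posts = []

    def request_join(self, user):
        # Opt-in: joining starts as a request, not membership.
        self.pending.add(user)

    def approve(self, approver, user):
        if approver != self.moderator:
            raise PermissionError("only the moderator can approve members")
        self.pending.discard(user)
        self.members.add(user)

    def post(self, author, message):
        if author not in self.members:
            raise PermissionError("only members can post")
        self.posts.append((author, message))

    def visible_to(self, user):
        # Content stays inside the group unless a member re-shares it.
        return self.posts if user in self.members else []


# Hypothetical usage: a PI running a lab announcements group.
lab = ModeratedGroup("My Lab", moderator="pi")
lab.request_join("student")
lab.approve("pi", "student")
lab.post("pi", "Group meeting moved to Friday")
```

The point of the sketch is the asymmetry: anyone can ask in, but only the moderator lets them in, and non-members see nothing.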

What does Google get out of this?  Well, think targeted Google ads.  As long as you’re logged in, Google will know everything about you that you’ve ever thought about.  And is that a bad thing?  Well, only if you’re Microsoft and want to complain about Google’s absolute monopoly of the online advertisement market.  You know what, Microsoft?  Better them than you.  (And hey, if Google ads do help me find a good canoe when I’m in the market for one, who’s going to complain?)

Why should one become an academic?

You know what?  No one ever bothers to sell the academic path.  In all the time I’ve been in school, and even during my time in industry, no one has ever tried to tell me why I should want to become an academic.

There are a hell of a lot of blogs saying why one should abandon the path to academia, but not a single one that I could find saying “hey everyone, this is why I think academia is great”.  It’s as though everyone is born wanting to be an academic, and you only have to hear the other side to be convinced away from the natural academic leanings.

Of course, there’s a huge amount of competition for academic positions, so it isn’t exactly like people want to encourage incoming students to go down that path.  All that I see in searching the web is the balanced approach about weighing the two options – and that even assumes that all academia is the same, and all industry is the same.  (A blatant lie, if I ever heard one!)

Anyhow, the best I could do in putting together my list of why one should go into academia is in the following set of links.

If there are any academics out there who want to sell the academic path, this would be a great topic for future posts.  I’d love to read it.

As best I can glean, the only reasons for it are “better working hours, once you become tenured” and “you can be your own boss”.  Seriously, there must be more to it than that!  Anyone?

Is blogging revolutionizing science communication?

There’s been a lot of talk recently about blogging changing the nature of science communication, and I think it’s completely missing the mark.  And, given that I see this really often, I thought I’d comment on it quickly.  (aka, this is a short and not particularly well researched post… but deal with it.  I’m on “vacation” this week.)

Two of the articles/posts that are still on my desktop (that discuss this topic, albeit in the context of changing the presentation of science, not really in science communication) are:

But I’ve come across a ton of them, and they all say (emphatically) that blogging has changed the way we communicate in science.  Well, yes and no.

Yes, it has changed the way scientists communicate among themselves.  I don’t run to the journal stacks anymore when I want to know what’s going on in someone’s lab; I run to the lab blog.  Or I check the Twitter feed… or I’ll look for someone else blogging about the research.  You learn a lot that way, and it is actually representative of what’s going on in the world – and of the researcher’s opinions on a much broader set of topics.  That is to say, it’s not a static picture of what small set of experiments worked in the lab in 1997.

On the other hand, I don’t think that there are nearly enough bloggers making science accessible for lay people.  We haven’t made science more easily understood by those outside of our fields – we’ve just made it easier for scientists inside our own field to find and compare information.

I know there are a few good blogs out there trying to make research easier to understand, but they are few and far between.  I, personally, haven’t written an article trying to explain what I do for a non-scientist in well over a year.

So, yes, blogging has changed science communication, but as far as I can tell, we’ve only changed it for the scientists.

CPHx: Morten Rasmussen, National High-Throughput Sequencing Centre, sponsored by Illumina – Exploring ancient human genomes

Exploring ancient human genomes
Morten Rasmussen, National High-Throughput Sequencing Centre, sponsored by Illumina


Why study ancient DNA?  By studying modern species, we can only add leaves to the end of the phylogenetic tree – we can’t study the nodes, or the extinct branches. [my interpretation.]

How do you get ancient DNA? Bones and Teeth, mainly.  Coprolites are now used as well, and soft tissue, if available.  Ice and sediments can also be used in some cases.

Characteristics: The colder and dryer the environment, the better the DNA preservation.  Age is also a factor – the older the DNA, the less likely it is to have survived.  More than 1 million years is the limit, even if conditions were optimal.

Goldilocks principle.  There is a sensitivity limit – you need enough DNA.  Some of it is too fragmented – you need strands of sufficient length.  You also need to worry about modern DNA contamination – mostly microbial.  Thus, within those constraints, you need to work carefully.

Next-gen sequencing does bring some advantages, though – no need for internal primers, size constraints are ok, etc.

DNA barcodes are frequently used to look at biodiversity.  Align the sequences to look for conserved regions surrounding a variable region – allowing primers to be designed for either end of the variable region.  If sequences are identical, you can’t distinguish the origin of the DNA.  [obviously a different type of bar-coding than what we usually discuss in NGS.]
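
The barcoding idea above – conserved flanks surrounding a variable region, where the flanks become primer sites – can be sketched on a toy alignment.  This is a deliberate simplification (real primer design also considers melting temperature, degeneracy, primer length and so on), and the function names are my own:

```python
def column_conservation(aligned):
    """Fraction of sequences sharing the most common base at each
    column of an alignment (all sequences the same length)."""
    n = len(aligned)
    cons = []
    for col in zip(*aligned):
        counts = {}
        for base in col:
            counts[base] = counts.get(base, 0) + 1
        cons.append(max(counts.values()) / n)
    return cons


def conserved_flanks(aligned, threshold=1.0):
    """Return (left, right) half-open column ranges of fully
    conserved flanks around the central variable region --
    candidate primer sites for a barcode assay."""
    cons = column_conservation(aligned)
    left = 0
    while left < len(cons) and cons[left] >= threshold:
        left += 1
    right = len(cons)
    while right > left and cons[right - 1] >= threshold:
        right -= 1
    return (0, left), (right, len(cons))


# Toy alignment: identical flanks, variable middle.
seqs = ["ACGTTAGGA", "ACGTCCGGA", "ACGTGAGGA"]
# Left flank = columns 0-3 ("ACGT"), right flank = columns 6-8 ("GGA").
```

The middle columns are what distinguish origins – which is why, as noted above, identical sequences leave you unable to tell where the DNA came from.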

Ice core genetics.  Willerslev et al, Science (2007).  Interesting results found in the “silty” ice, which included DNA from warmer climate plants.

Late survival of mammoth and horse… similar techniques can be applied to soil cores as to ice cores.

Paleogenomics.  DNA is often highly fragmented and full of bacterial contamination.  A big part of this is finding the right sample – e.g., look in Greenland, where the cold will have preserved samples well.  A hair sample was found, which was eventually moved to Denmark.

The big issue of contamination still has to be dealt with, however.  Fortunately, DNA is held inside the hair, so washing the hair with bleach removes most surface contaminants without harming the DNA sample.  This gives good results – vastly better than bone samples, which can’t use that method.  (84% of reads in this case were Homo sapiens, versus 1% recovery for Neanderthal bone.)

DNA damage:  Damage was expected in the ancient DNA, as previously observed, but the bioinformaticians did not see significant damage.  It turns out that Pfu was used in the protocol this round, and Pfu does not amplify templates containing uracil.  This has the unexpected side effect of “removing” the damage.

A standard pipeline was used, mapping to hg18.  Only 46% of reads mapped, because only uniquely mapped reads were used for the analysis.  Multi-mapped reads were discarded, and clonal reads were “collapsed”.  Still, 2.4 billion base pairs were covered: 79% of hg18, at 20X depth.
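
A toy version of the read filtering described above – keep only uniquely mapped reads, then collapse clonal duplicates.  I’m assuming the common convention that “clonal” means identical chromosome, start position and strand; the talk didn’t spell out the actual criteria, and a real pipeline would operate on BAM records, not dicts:

```python
def filter_reads(reads):
    """reads: list of dicts with 'name', 'chrom', 'pos', 'strand'
    and 'n_hits' (how many places the read mapped)."""
    # 1. Discard multi-mapped reads: keep only unique alignments.
    unique = [r for r in reads if r["n_hits"] == 1]

    # 2. Collapse clonal (PCR-duplicate) reads: keep the first read
    #    seen for each (chrom, pos, strand) key.
    seen, kept = set(), []
    for r in unique:
        key = (r["chrom"], r["pos"], r["strand"])
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept


reads = [
    {"name": "r1", "chrom": "chr1", "pos": 100, "strand": "+", "n_hits": 1},
    {"name": "r2", "chrom": "chr1", "pos": 100, "strand": "+", "n_hits": 1},  # clonal dup of r1
    {"name": "r3", "chrom": "chr1", "pos": 150, "strand": "+", "n_hits": 5},  # multi-mapped
    {"name": "r4", "chrom": "chr1", "pos": 200, "strand": "-", "n_hits": 1},
]
```

Both filters throw away data, which is exactly why the mapped fraction quoted above ends up looking low.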

Inference about phenotypic traits:

  • dark eyes
  • brown hair
  • dry earwax
  • tendency to go bald

Of course, many of those could have been predicted anyhow, but nice to confirm.

Compared to other populations with SNP chip data.  Confirmed that the ancient Greenland DNA places the sequenced individual near the Chukchis and Koryaks (populations from northern Siberia).  That’s good, because it also rules out contamination from the people who did the sequencing (Europeans).  Thus, this individual was probably from an earlier migration than the current Greenlanders, consistent with known data about migrations to the region.

What does the future hold:

  • More ancient genomes
  • Targeted sequencing for larger samples.

Why targeted sequencing of ancient DNA?  If you capture the most important bits of DNA, you would generate more interesting data with less effort, giving the same results.


CPHx: Daniel MacArthur, Wellcome Trust Sanger Institute & Wired Science – Functional annotation of “healthy” genomes: implications for clinical application.

Functional annotation of “healthy” genomes: implications for clinical application.
Daniel MacArthur, Wellcome Trust Sanger Institute & Wired Science


The sequence-function intersection.

What we need are tools and resources for researchers and clinicians to merge information together to utilize this data.  Many things need to be done, including improving annotations, fixing the human reference sequence and improved databases of variation and disease mutations.

Data sets used: a single high-quality individual genome – an anonymous European from the HapMap project, one of the most highly sequenced individuals in the world.

Also working on a pilot study with 1000 genomes, 179 individuals from 4 populations.

Focusing on loss-of-function (LOF) variants: SNPs introducing stop codons, variants disrupting splice sites, large deletions and frame-shift mutations.  Expected to be enriched for deleterious mutations.  They have been found in ALL published genomes – all genomes are “dysfunctional”, and some genomes are more dysfunctional than others…  However, it might be an enrichment of sequencing errors.

Functional sites are typically under selective pressure, leading to less variation.  The more likely something is to be functional, the more likely an apparent variant there is to be an error.  [I didn’t express it well, but the noise has a greater influence on highly conserved regions with low variation than on regions with higher variation.]

Hunting mistakes

  1. Sequencing errors – these get easier to find as time goes by and the technology improves.
  2. Reference or annotation artefacts – e.g., a false intron in the annotation of a gene.
  3. Variants unlikely to cause true loss of function – e.g., truncation of the last amino acid of a protein.

Loss-of-function filtering is done with experimental genotyping, manual annotation and informatic filtering.  Finally, after all that filtering, you get down to the “true LOF variants”.

Example: 600 raw variants become 200 when filtered on any transcript, down to 130 when filtered on all transcripts.
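
The filtering funnel can be sketched as an ordered chain of predicates, tallying survivors at each stage.  The predicates below are invented placeholders for the three mistake categories listed above – the talk didn’t give the exact criteria:

```python
def lof_filter_funnel(variants, filters):
    """Apply named filters in order, recording how many candidate
    variants survive each stage -- the raw -> 'true LOF' funnel."""
    counts = [("raw", len(variants))]
    for name, keep in filters:
        variants = [v for v in variants if keep(v)]
        counts.append((name, len(variants)))
    return variants, counts


# Hypothetical predicates standing in for the real filtering criteria:
FILTERS = [
    ("not a sequencing error", lambda v: not v["sequencing_error"]),
    ("not an annotation artefact", lambda v: not v["annotation_artefact"]),
    ("truly disruptive", lambda v: v["truncates_protein"]),
]
```

The per-stage counts are the useful part: they make funnels like “600 → 200 → 130” easy to report and compare between filtering strategies.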

Homozygous loss-of-function variants were observed in the high-quality genome, covering a range of genes.  The real LOF variants tend to be rare and enriched for mildly deleterious effects.

LOF variants affect RNA expression.  Variants predicted to undergo nonsense mediated decay are less frequent. [I may have made a mistake here.]

LOF variants can be used to inform clinical outcomes.  You can distinguish LOF-tolerant genes from recessive disease genes.  ROC AUC = 0.81 (a reasonably modest but predictive model).  This is being applied to disease studies at Sanger.
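
For reference, a ROC AUC like the one quoted can be computed from classifier scores and true labels with no library at all, via the pairwise (Mann–Whitney) formulation: the AUC equals the probability that a randomly chosen positive outscores a randomly chosen negative.  This is a generic sketch, not the method used in the talk:

```python
def roc_auc(labels, scores):
    """labels: 1 for positive (e.g. recessive disease gene), 0 for
    negative; scores: classifier output, higher = more positive.
    Tied scores contribute 0.5, per the standard definition."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 is chance; 1.0 is perfect separation – so 0.81 sits in the genuinely-predictive-but-imperfect range, as the speaker said.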


  • More LOF variants for better classification
  • Improve upstream processes
  • Improve human ref seq
  • Use catalogs of LOF tolerant genes for better disease gene prediction

CPHx: Kevin Davies, Bio-IT World – The $1,000 genome, the $1,000,000 interpretation

“The $1,000 genome, the $1,000,000 interpretation”
Kevin Davies, Bio-IT World


Taking notes on a talk by a journalist is pretty much a bad idea.   Frankly, it would be akin to reducing a work of art to a mere grunt.  The jokes, nuances and elegance would all be lost – and if I were somehow able to do a good job, it would have the nasty side effect of putting Kevin out of work when everyone spends their time reading my blog instead of inviting him to speak himself – or worse, instead of reading his book.  (Alas, I haven’t read it myself, either.)

However, in the vein of letting people know what’s happening here, Kevin has taken the opportunity to review some of the early history of next-gen sequencing.  It’s splashed with all sorts of wonderful artefacts that represent the milestones: the first Solexa genome sequenced (a phage), James Watson’s genome, the first prescription for human sequencing, etc.

More importantly, the talk also wandered into some of the more useful applications and work done on building the genomic revolution for personalized medicine.  (You might consider checking for one great example.  Pulitzer prize winning journalism, we’re told.)  Kevin managed to cover plenty of ways in which the new technologies have been applied to human health and disease – as well as to discover common human traits like freckling, hair curl and yes, even Asparagus anosmia!

Finally, the talk headed towards some of the sequencing centres and technologies we’ve seen here, including Complete Genomics, PacBio and a brief sojourn past Oxford Nanopore.  Some of my favourite technologies – and endlessly interesting topics for discussion over beer.  And naturally, as every conversation on next-gen sequencing must do, Kevin reminds us that the cost of the human genome has dropped from millions of dollars for the first set, down to the sub $10,000 specials.  Genomes for all!



CPHx: Anne Palser, Wellcome Trust Sanger Inst., Sponsored by Agilent Technologies – Whole genome sequencing of human herpesviruses

Whole genome sequencing of human herpesviruses
Anne Palser, Wellcome Trust Sanger Inst., Sponsored by Agilent Technologies


Herpesvirus review: dsDNA, enveloped viruses.  3 major classes: alpha, beta, gamma.

Diseases include Kaposi’s sarcoma (KSHV – 140kb genome) and Burkitt’s lymphoma (EBV – 170kb genome).

Viruses are hard to isolate for sequencing.  In some clinical samples, not all cells are infected, and when you sequence the samples, you get more human DNA than virus.  Little is known about genome diversity; all sequences come from cell lines and tumours.  There is no wild-type full genome sequence.

Target enrichment method used to try to enrich for virus DNA.

Cell line samples were used: 5 primary effusion lymphoma cell lines (3 have EBV, all 5 have KSHV) and 2 Burkitt lymphoma cell lines (EBV).

Custom baits were designed using 120-mers, with each base covered by 5 probes for KSHV.  Similar designs were made for EBV1 and EBV2.  [skipping some details of how this was done.]

Flow chart for “SureSelect target enrichment system capture process” from illustration.

Multiplexed 6 samples per lane.  Sequenced on an Illumina GAII.

Walk through analysis pipeline.  Bowtie and Samtools used at final stages.

Specific capture of virus DNA.

  • KSHV.  77-91% reads map to reference sequence.  Capture looked good.
  • EBV: 52-82% mapping to ref.

Coverage looks good, and high for most of the genome.   Typical for viral sequencing.

SNPs relative to the reference sequence: 500-700 for KSHV, 2-2.5k for EBV.  Nice Circos-like figure showing the distribution.
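
At its simplest, counting SNPs against a reference is a per-position comparison of the consensus with the reference sequence.  The actual pipeline here used Bowtie and Samtools, which also handle quality scores and indels; this toy version (my own, not theirs) ignores both and just counts substitutions:

```python
def count_snps(reference, consensus):
    """Count positions where the consensus differs from the
    reference.  Sequences must be pre-aligned to equal length
    (no indels); 'N' (no-call) positions are skipped."""
    if len(reference) != len(consensus):
        raise ValueError("sequences must be aligned to equal length")
    return sum(
        1
        for r, c in zip(reference.upper(), consensus.upper())
        if c != "N" and r != c
    )


count_snps("ACGTACGT", "ACGAACNT")  # one substitution; the N is skipped
```

Scaled up to a 140-170kb viral genome, tallies in the hundreds (KSHV) to thousands (EBV) fall out of exactly this kind of comparison.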


  • Custom SureSelect capture to isolate virus DNA from human DNA is successful.
  • Full genome sequences of the viruses were obtained.
  • Analysing SNPs and the minority species present.
  • Currently looking at saliva samples, aiming to estimate genomic diversity.
  • Looking at clinical pathologies.
  • High throughput, cost effective, and applicable as a method to analyse other pathogen sequences.