Is blogging revolutionizing science communication?

There’s been a lot of talk recently about blogging changing the nature of science communication, and I think most of it is missing the mark.  Since I see this claim really often, I thought I’d comment on it quickly.  (In other words, this is a short and not particularly well-researched post… but deal with it.  I’m on “vacation” this week.)

Two of the articles/posts that are still on my desktop (that discuss this topic, albeit in the context of changing the presentation of science, not really in science communication) are:

But I’ve come across a ton of them, and they all say (emphatically) that blogging has changed the way we communicate in science.  Well, yes and no.

Yes, it has changed the way scientists communicate among themselves.  I don’t run to the journal stacks anymore when I want to know what’s going on in someone’s lab; I run to the lab blog, or I check the Twitter feed, or I look for someone else blogging about the research.  You learn a lot that way, and it’s actually representative of what’s going on in the world – and of the researcher’s opinions on a much broader set of topics.  That is to say, it’s not a static picture of the small set of experiments that worked in the lab in 1997.

On the other hand, I don’t think there are nearly enough bloggers making science accessible to lay people.  We haven’t made science more easily understood by those outside of our fields – we’ve just made it easier for scientists inside our own fields to find and compare information.

I know there are a few good blogs out there trying to make research easier to understand, but they are few and far between.  I, personally, haven’t written an article trying to explain what I do for a non-scientist in well over a year.

So, yes, blogging has changed science communication, but as far as I can tell, we’ve only changed it for the scientists.

What is the purpose of a Post-doc?

No one has ever sat down with me and explained the purpose of each degree.  I’ve figured out what bachelor’s and master’s degrees are good for (basic training and specialized training, respectively), and the purpose of a PhD was spelled out to me when I was in industry (showing you can take responsibility for your own research).  However, no one has ever explained to me why I should want to do a post-doc.

As far as I can tell, the best excuse for a post-doc is to learn a skill that you missed during your PhD – or, failing that, to pump out a few more publications if you didn’t get enough during your PhD.

But why would anyone feel the need to spend another year or two… or five (or ten!) doing a post-doc just to learn a set of skills?  I just don’t see that being a great reason.

I’ve also heard that post-docs are often done to get into a good lab, so that you can add to your network of connections, or perhaps put a good PI’s name on your resume.

I’m not sure I buy either of those, however.  In this day and age, you can make connections through many other methods; working for someone isn’t the only way to develop a network.  And really, I would like to think that your future career is decided on more than just having a good PI’s name on your resume.  Finally, the better the PI, the less likely they are to have the time to invest in any one of their post-docs.

Last time I checked, post-docs aren’t even treated like staff at universities: they don’t get benefits, they don’t get paid like the highly trained researchers they are, and they don’t get an actual degree out of it.

So… why do we have post-docs?  And why should I consider doing one?  As a keystone of the academic world, the position might have some merit, but is that really the only reason someone should consider it?

Looking for advice on moving to Europe

My wife and I have been seriously contemplating the future.  With the figurative grad-school light at the end of the tunnel now visible, if not quite in focus, we’ve been seriously considering an opportunity to move to northern Europe.  (Yes, I’m being as generic as possible.)  However, neither one of us has lived outside of North America, and we’ve only visited Europe a couple of times on vacation.  That makes it pretty hard for us to critically evaluate the opportunity.

Thus, crowd-sourcing!  I was wondering if anyone had any advice to share on what we can do to make the move successfully – things we should or shouldn’t do – or simply whether people think it’s a great idea or a bad one.  Really, we’re trying to cast the net as wide as possible for whatever advice people can give us, because it’s really hard to make a decision like this without talking to people who have done it.

Some of the outstanding questions we have:

  • How did you find the language learning curve when moving to a non-English-speaking country?
  • How much of your stuff did you take with you?  What did you do with the stuff you left behind?
  • How did you find solutions to the “2-body” problem?
  • How long does the culture shock last?
  • Was it a hassle bringing pets?
  • What are the big “gotchas” that you didn’t see coming?
  • How long did it take to organize your move?  How hard was it?
  • Would you suggest that other people do it?
  • How long did you stay? (yes, not leaving ever is also an acceptable answer.)

And, of course, are we even asking the right questions?

Any advice you can give would be helpful for us, and of course, for other people who are faced with this decision in the future.

Thanks!

Phoenix, Arizona

I’ve only been in Phoenix for a few hours, but I’ve had a chance to form a few impressions that I thought I’d share.  Unfortunately, I didn’t bring my camera as I didn’t think I’d have the opportunity to take any pictures, so unlike my visit to Copenhagen, my description will have to be entirely verbal.   In any case, I’ve had a couple of unexpected hours to wander the streets and learn a few things.  None of them are complaints – but they’re all things that really stood out for me this evening.

First, you don’t sweat in Phoenix.  Sure, your body might try, but the dry desert air sucks the moisture off of you so fast that it doesn’t have the opportunity to accumulate.  In fact, it sucks the moisture out of your pores, sinuses and throat too.  Oddly enough, you don’t really notice until you walk past a restaurant that is sprinkling cold water onto its patio as if its guests were ferns.  At that point, you notice how dry everything else is – and realize you should have packed an extra bottle of water when you went out walking.

Next, the vegetation in Phoenix’s downtown core is out of this world.  Nothing here grows in soil – anything that isn’t paved is covered in crushed red rock, out of which spiky cacti, succulents and whip-like grasses form tufts of green (or yellow) that look positively Martian.  I did find a strip of what was probably grass, once upon a time, but even that was growing (or had been trying to grow) out of a patch of crushed red rock.

The city is scattered with art, perhaps to make up for the sparseness of the landscape.  My favorite looked to be a five-story-tall net and metal “thing” suspended above a parking lot, sort of reminiscent of what you’d get if you crossed a jellyfish’s dome with a Möbius strip, flipped it inside out – and then made it out of enough mesh to shield a small African nation from mosquitoes.

The buildings are tall, straight… and brown.  Actually, everything is tall, straight and brown – or some shade between beige and red.  Coming from Vancouver’s green glass landscape, the red somewhat sears the eyeballs.  Everything from the soil to the bike racks to the entire face of a 30-story building is painted or designed in a desert palette.  You really can’t forget you’re in a desert when you’re in Phoenix, even if the constant barrage of cacti weren’t enough.  (For the record, I’ve always been a big fan of cacti, ever since I was a child, so this really isn’t a complaint!)  Even the architecture reminds me of cacti: either tall and narrow, or squat and boxy.  I suspect desert architects are inspired by the limited vegetation.

The days are short in the summer.  This is not a bad thing, really – the sun is seriously bright and hot in Phoenix, and I probably got more sun in 3 minutes here than I did all of last year.  However, I was surprised to see darkness descend at 7:30pm.  Summer days in Vancouver, for comparison, stay bright at least another 3 hours.

Finally, downtown Phoenix is pretty empty on a Sunday night.  That probably doesn’t surprise anyone, however.  I’m used to seeing tons of people out and about in the summertime, enjoying patios and the good weather.  I suppose when your good weather never ends, there’s just that much less pressure to make the best of it.

All in all, it’s really a pretty place – and I’m glad I’ve had the chance to wander around a bit.  And, for the record, they do really good calzones here. (=

Great advice from a master bioinformatician.

For the record, Ewan Birney’s post on “5 statistical things I wished I had been taught 20 years ago” is pure genius.  Anyone who’s designing a bioinformatics program should absolutely take it to heart.

Although I think R is pure Evil, he’s even right on that point: being a bioinformatician is way easier if you know how to use it.  I curse the “Statistics for Biologists” course I took nearly 15 years ago for having been a useless collection of crap.  If they’d covered anything on Ewan’s list, I’d have been a better bioinformatician from the start – and I’d probably have paid a lot more attention to the course.

Cause and Effect, MBA Style.

There’s some discussion going around about the value of an MBA degree – a topic I have an opinion on, although strictly speaking, my opinion on the topic really isn’t important here.  What is important is that those who believe that the top schools give you value for your money in MBA land have published an article on the topic.

MBA Pay: The $3.6 Million dollar degree

What is somewhat confusing to me is that the authors of the article assume that going to the best school is responsible for the high salary.   It may not be a bad assumption, really, but I don’t think it has been demonstrated.

First, the “best” schools are the ones that can be most selective about the students they accept – and they have the highest bars to entry (cost, connections, etc.) that any student who wants to attend has to pass.  Personally, I think this really indicates that:

  1. You are selecting students who already have great networks
  2. You are selecting students who are already skilled in many of the positive attributes of good managers. (Great communicators, clear thinkers, etc)

Thus, it shouldn’t be a surprise that these students go on to command high salaries and are able to get great jobs.  Simply attending an MBA program may add value in terms of fulfilling the qualifications required for some positions, but does it really matter where you go?  Do you learn different things at different schools?  Or is it simply that the top X% of students go to the best schools, and those individuals are the most highly sought after regardless, because of the skills they bring in with them?

Anyhow, I wonder what would happen if the employers didn’t know the names of the schools, or if the salaries for the dropouts from each school were to be compared.

It makes a great story that the name on the degree gets you top dollar, but I would love to see that demonstrated beyond a simple correlation.

BlueSEQ revisited

On the first day of the Copenhagenomics 2011 conference, I took notes on a presentation made by Peter Jabbour of BlueSEQ, in which I interlaced some comments of my own.  I was particularly disappointed in the presentation, which completely failed, in my opinion, to demonstrate the value of the company.  This prompted BlueSEQ marketer Shawn Baker to post a reply that addressed some of my points but failed to get to the heart of the matter.  However, I had the opportunity to speak with BlueSEQ CEO Michael Heltzen on Friday morning, and he set me straight on several facts.  Given what I’d learned, I thought it was important to take the time to revisit what I had said about BlueSEQ.

I understand some people thought my criticism of BlueSEQ was targeted.  Let me set the record straight: of all the companies that presented or attended at Copenhagenomics 2011, the only one I have any relationship with at all is CLC bio, and that relationship is, to this point, entirely informal.  Any criticisms I have made about BlueSEQ, or any other company, are simply my own opinion based on the information presented – and for the record, I do have a little experience with business models.

In this case, the presentation led me to believe there were a lot of holes in the BlueSEQ business model.  Fortunately, CEO Michael Heltzen was kind enough to patiently answer my questions and explain the business model to me, which has prompted me to change my opinion.

In case you haven’t heard of BlueSEQ, they’re an organization that serves to match users that have unmet sequencing needs (“users”) with groups that have surplus sequencing capacity (“providers”). This is a simplified version of what they do, at least – and was the focus of their presentation at Copenhagenomics 2011.

During the presentation, BlueSEQ set themselves up as a young company that “went live” only recently.  While there’s nothing wrong with that, I have spent time as an entrepreneur and am aware that young companies have a tendency to be a little overly optimistic about their markets and their potential for finding customers.  Although BlueSEQ did boast about a hundred users having signed up for their services, I listened carefully and didn’t hear anything about providers having signed up as well.  That set off red flags for me.  BlueSEQ CEO Michael Heltzen patiently explained to me that they do, in fact, have 25 providers already signed up – a very impressive number for just over a month of operations.

Having paying clients – providers, in this case – is 90% of the battle for any match-making company, and knowing that there are groups paying for BlueSEQ’s services should be music to the ears of any potential investors.  That, on its own, provided some significant validation of the company’s business model for me: if people are already paying for the service, then clearly there is value in it.

And speaking of paying, the presentation did not explain what it was that providers were paying for.  A 10% service fee – charged to providers – was mentioned during the presentation, which seems a little high for nothing more than a service linking buyers with sellers.  I heard the same comment from other people who saw the presentation and voiced their concern (albeit more quietly than I did) that it was a bit disproportionate.  However, again, BlueSEQ’s Michael Heltzen provided the explanation: BlueSEQ doesn’t just match sequencing providers with users – they provide a complete front-office service, not only promoting the sequencing centre’s business by matching it with users, but also handling the initial steps of any inquiries and working with the user to sort out the wet and dry lab requirements of any potential sequencing project.  With that explanation, the value of BlueSEQ’s services becomes apparent.

Many groups with excess sequencing capacity may find themselves in a position where they have the ability to provide sequencing services, but not the facilities to handle customer requests or promote themselves to find the users who could take advantage of the sequencing services.  Enter BlueSEQ.

This explanation, diametrically opposite to the “web portal” model described during the presentation, suddenly shows where an entrepreneurial group can build a concrete business.  The analogy used during the BlueSEQ presentation – a web portal where people buy airline tickets by comparing prices online – was a poor choice, completely diminishing the value that BlueSEQ provides by interpreting, analyzing and, in part, educating the sequencing users.  What a service that could be!

With good experimental design being one of the most difficult parts of science, BlueSEQ is in fact sitting in the wonderful position of being the early entry into a completely new business model.  They are able to transform the disjointed requests of novice users into complete experimental plans, and then match those experiments with labs that have the experience and capacity to perform them well.  The user gains by getting competitive quotes and help in setting up the product they want, while the provider gains by being able to focus on the service they provide, without the complexities of dealing with customers who may not know what they want or need.

Pure genius.

Of course, there are still pitfalls ahead with this type of business model.  There really is no bar to entry for other competitors, other than the experience of the current team.  (I’m sure it’s extensive, but there are others out there who could do the same.)  There is also no real guarantee that what they are doing will be cost-effective in the long run.  As sequencing becomes cheaper and cheaper, it might come to the point where it is more cost-efficient to turn to a professional sequencing company like Complete Genomics, which does provide a full service, than to a portal and matchmaking service like BlueSEQ.  Of course, I’m sure BlueSEQ has put more thought into those concerns than I have – and solving them will be up to them.

As I said last time, and I meant it quite sincerely: Good luck to the business.  I’ll be looking forward to hearing their presentations in the future – and I hope they have only good things to report.

Dueling Databases of Human Variation

When I got in to work this morning, I was greeted by an email from 23andMe’s PR company, saying they have “built one of the world’s largest databases of individual genetic information.”  Normally, I wouldn’t even bat an eye at a claim like that.  I’m pretty sure it is a big database of variation…  but I thought I should throw down the gauntlet and give 23andMe a run for their money.  (-:

The timing couldn’t be better for me.  My own database actually ran out of auto-increment IDs this week: we surpassed 2^31 SNPs entered into the db and had to upgrade the key field from int to bigint.  (Some variant calls have been deleted and replaced as variant callers have improved, so we actually have only 1.2 billion variations recorded against the hg18 version of the human genome – and a few hundred million more than that for hg19.)  So, I thought I might have a bit of a claim to having one of the largest databases of human variation as well.  Of course, comparing databases really depends on the metric being used, but hey, there’s some academic value in trying anyhow.
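
For anyone curious about the mechanics of that overflow, here’s a minimal sketch of the arithmetic involved.  (The table and column names in the comment are made up for illustration – the real schema isn’t shown here.)

    # A signed 32-bit key (e.g. MySQL INT) runs out just past 2.1 billion rows.
    INT_MAX = 2**31 - 1      # 2,147,483,647
    BIGINT_MAX = 2**63 - 1   # ~9.2 x 10^18, plenty of headroom

    rows_inserted = 2**31    # roughly where our auto-increment counter gave out

    print(rows_inserted > INT_MAX)            # True: the int key overflows
    print(f"{BIGINT_MAX - rows_inserted:,}")  # remaining bigint headroom

    # The fix is a one-line schema change; in MySQL it would look something
    # like this (hypothetical table/column names):
    #   ALTER TABLE variant_calls MODIFY id BIGINT NOT NULL AUTO_INCREMENT;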

In the first corner, my database stores information from 2200+ samples (cancer and non-cancer tissue), genome-wide (or transcriptome-wide, depending on the source of the information), giving us a wide sampling of data, including variations unique to individuals as well as common polymorphisms.  In the other corner, 23andMe has sampled a much greater number of individuals (100,000) using a SNP chip, meaning that they’re only able to sample a small amount of the variation in an individual – about three-hundredths of one percent of the total amount of DNA in each individual.

(According to this page, they look at only 1 million possible SNPs, out of the 3 billion bases at which single nucleotide variations can be found – although arguments can be made about the importance of that specific fraction of the genome.)

The nature of the data being stored is pretty important, however.  For many studies, the number of people sampled has a greater impact on the statistics than the number of sites studied, and since those are mainly the kinds of studies 23andMe is doing, clearly their database is more useful in that regard.  In contrast, my database stores data from both cancer and non-cancer samples, which allows us to make sense of variations observed in specific types of cancers – and because cancer-derived variations are less predictable (i.e., not at the same 1M SNP positions each time) than run-of-the-mill-standard-human-variation-type SNPs, the SNP-chip technology 23andMe used would have been entirely inappropriate for the cancer research we do.

Unfortunately, that means comparing the two databases is completely impossible – they have different purposes, different data and probably different designs.  They have a database of 100k individuals covering 1 million sites, whereas my database has 2k+ individuals covering closer to 3 billion base pairs.  So yeah, apples and oranges.

(In practice, however, we don’t see variations at all 3 billion base pairs, so that metric is somewhat skewed itself.  The number is closer to 100 million bp – still a fraction of the genome nearly 100 times larger than what 23andMe is actually sampling.)

But I’d still be interested in knowing the absolute number of variations they’ve observed… a metric upon which we could hold this epic battle for “largest database of human variations.”  At best, 23andMe’s database holds 10^11 variations (1×10^6 SNPs × 1×10^5 people), and that only if every single variant were found in every single person – a rather unlikely case.  With my database currently at 1.2×10^9 variations, I think we’ve got some pretty even odds here.
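
For the curious, the back-of-the-envelope arithmetic behind those numbers looks like this – just a sketch of the figures quoted above, nothing more:

    # Upper bound on 23andMe's database: every chip site variant in every person.
    snp_chip_sites = 1_000_000        # ~1M SNP positions per individual
    individuals = 100_000             # ~100k people genotyped
    upper_bound_23andme = snp_chip_sites * individuals   # 1e11

    my_db_variants = 1_200_000_000    # ~1.2 billion variant calls against hg18

    print(f"{upper_bound_23andme:.1e}")                    # 1.0e+11
    print(f"{upper_bound_23andme / my_db_variants:.0f}x")  # ~83x, at the very best

    # And the genome fractions mentioned earlier:
    genome_bp = 3_000_000_000
    print(f"{snp_chip_sites / genome_bp:.2%}")   # 0.03%: SNP chip sites
    print(f"{100_000_000 / genome_bp:.2%}")      # 3.33%: positions where variants
                                                 # are actually observed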

Really, despite the joking about comparing database sizes, the real prize would be the fantastic opportunity to learn something interesting by merging the two databases, which could teach us something both about cancer and about the frequencies of variations in the human population.

Alas, that is pretty much certain to never happen.  I doubt 23andMe will make their database public – and our organization never will either.  Beyond the ethical issues of making that type of information public, there are pretty good reasons why this data can only be shared with collaborators – and in measured doses at that.  That’s another topic for another day, which I won’t go into here.

For now, 23andMe and I will just have to settle for both having “one of the world’s largest databases of individual genetic information.”  The battle royale for the title will have to wait for another day… and who knows what other behemoths are lurking in other research labs around the world.

On the other hand, the irony of a graduate student challenging 23andMe for the title of largest database of human variation really does make my day. (=

[Note: I should mention that when I say that I have a database of human variation, the database was my creation but the data belongs to the Genome Sciences Centre – and credit should be given to all of those who did the biology and bench work, performed the sequencing, ran the bioinformatics pipelines and assisted in populating the database.]

11 tips for blogging talks and conferences

While I’m still not quite recovered from the jet-lag of the flight home, I thought I’d take a quick shot at answering a question I was asked frequently last week: “How do you blog a scientific conference?”  Here are some of the key points, in case anyone else has any interest in trying.

  1. Focus! The hardest thing about blogging a conference is the amount of attention it takes.  If you are easily distracted, you’ll miss things – and it can be really hard to get back into a talk once you’ve missed a couple of key points.  Checking your email, twitter or surfing the web are all bad ideas.
  2. Listen! The speaker is really the best source for getting the key points.  If they’re doing a good job, then you don’t even need to see the slides – they’ll summarize the main points and make your job easy.
  3. Know your limits. If you don’t understand something, you’re not going to be able to summarize and explain it.  Frankly, product talks are pretty much impossible to blog – just point to the catalog.
  4. Read the slides.  A really bad speaker can make it hard to blog their talk, but fortunately, that’s what slides are for: summarizing the presenter’s points.  If you can’t follow along with what they’re saying, you can always interpret the slides for yourself.
  5. Know what to omit.  A really good speaker can be incredibly distracting, wandering away from the main point of the talk to tell stories or insert asides.  You don’t need to write down everything, especially if you can’t reproduce it well.  Capturing a speaker’s jokes can be next to impossible.
  6. Think! It may sound odd, but the process of taking notes is really about deciding what you think is important.  You have to carefully interpret what the speaker is saying and decide what it is that you feel is central to the arguments.  Blindly copying things down frequently fails to tell the story well.
  7. Don’t guess! It’s easy to miss something (and yes, you will miss things), but how you handle the things you miss is important.  If you can’t remember a number or an exact phrasing, just summarize it – if you guess at the value or quote someone incorrectly, it can really upset both the speaker whose work you’ve misrepresented and the audience, who may rely on what you’ve told them.  If you’re not sure on a point, be clear about that as well.  It’s better to err on the side of caution.
  8. Keep your thoughts separate. This can be challenging.  With all that’s going on, it’s easy to mix up your opinions with the speaker’s points, since your notes are really just your interpretation of what you’re hearing.  However, to preserve the integrity of the speaker’s points, you need to ensure that the two don’t get confused.  I use a system of brackets to do so (for instance, enclosing my own comments in square brackets), but any other clearly marked system will work as well.
  9. Type fast! This should be obvious.  The faster you can type, the more complete your notes will be.  Conference blogging is not for slow typists.
  10. Use the right tools. I blog directly in my blog’s editor, but you can use any other system that works for you.  The most important thing is to make sure you have autosave on, and that it works well.  There’s nothing worse than losing something you’ve written – especially since you can’t ask a speaker to do their first 10 slides over if something goes wrong.
  11. Practice! This isn’t a skill you develop overnight – the more you do this, the easier it becomes.  Start with a single talk and learn from your mistakes.

So, there you have it.  The top 11 tips I’d give for anyone who would like to blog a talk – or even a whole conference.   And, of course, don’t forget to enjoy the talks.  If you’re not getting something out of listening to someone else speaking, why are you taking notes on it? (-:

Copenhagenomics 2011, in review

It’s early Saturday morning in Copenhagen and Copenhagenomics 2011 is done.  I was going to say that the sun has set on it, but the city is far enough north that the sun really doesn’t do much more than sink a bit below the horizon at night.  That said, the bright summer sunshine has me up early – and ready to write out a few thoughts about the conference.

[Yes, for what it’s worth, I was invited to blog the conference so I may not be completely impartial in my evaluation, but I think my comments also reflect the general consensus of the other attendees I spoke to as well.  Dissenters are welcome to comment below.]

First, I have to say that I think it was an unqualified success.  Any comments I might have can’t possibly amount to more than suggestions for next year.  The conference successfully brought together a lot of European bioinformaticians and biologists and provided a forum in which some great science could be shown off.

The choice of venue was inspired and the execution was flawless, despite a few last minute cancellations.  These things happen, and the conference rolled on without a pause.  Even the food was good (I didn’t even hear Sverker, a vegetarian Swede, complain much on that count) and the weather cooperated, clearing up after the first morning.

As well, the conference organizers’ enlightened blogging and twittering policy was nothing short of brilliant, as it provided ways for people to engage in the conversation without being here first hand.  Of course, notes and tweets can only give you so much of the flavour – so those who did attend had the benefits of the networking sessions and the friendly discussions over coffee and meals.  The online presence of the conference seemed disproportionately high for such a young venue and the chat on the #CPHx hashtag was lively.  I was impressed.

With all that said, there were things that could be suggested for next year.  Personally, I would have liked to have seen a poster session as part of the conference.  It would have been a great opportunity to showcase next-gen and bioinformatics work from across Europe.  I know the science must be there, hiding in the woodwork somewhere, but it didn’t have the opportunity to shine as brightly as it might have.  A poster session would also have served to bring out more graduate students, who made up a small proportion of the attendees (as far as I could tell).  Next year, I imagine this conference will be an ideal place for European companies and labs to recruit young scientists – and encouraging more graduate students to attend by submitting posters and abstracts would be a great way to facilitate that.

Another element that seemed slightly off to me was the vendors.  They certainly had a presence and made themselves noticed, but the booths at the back of the room might not have been the best way for companies to showcase their contributions.  That said, I suspect that Copenhagenomics will have outgrown this particular venue by next year anyhow, and that it won’t be a concern moving forward.

While I’m on the subject of vendors, what happened to European companies like Oxford Nanopore, or the usual editor or two from Nature?  Were some UK attendees scared off by the name of the conference?  I’m just putting it out there – it’s entirely possible that I simply failed to bump into their reps.

In any case, the main focus of the conference, the science, was excellent.  There were a few fantastic highlights for me.  Dr. John Quackenbush’s talk challenged everyone to seriously re-consider how we make sense of our data – and more importantly, the biology it represents.  Dr. Elizabeth Murchison’s talk on transmissible cancers was excellent as well and became a topic of much conversation.  Heck, three of my fellow twitter-ers were there and each one did a great job with their respective talks. (@rforsberg, @dgmacarthur and @bioinfo)

In summary, I think the conference came off about as smoothly as any I’ve seen before – and better than most.  If I were given the opportunity, this would be a conference I’d pick to come back to again. Congratulations to the organizers and the speakers!