American Hospitals

This is probably not an informative post for most people who’ve visited my blog, but I thought I’d share a perspective.

Last week, I signed up for a health care plan, and discovered that the plan I’d signed up for was offering free flu shots.  Not being one to pass up an offer like that, I traipsed down to the local hospital’s paediatric division to get my daughter ready for the flu season, with a scheduled stop at the adult clinic just down the street on the way home.

Upon arrival, it turned out that the whole family could get our shots at once, saving us a trip across the park to the adult shot clinic – a nice bonus for us.  Anyhow, once the forms were filled out, and the (now expected) confusion about the existence of people without social security numbers was sorted out, the deed was done. (And, I might add that the woman who did it was exceptional – I barely noticed the shot, and my 2-year-old daughter looked at the woman and said “Ow…” before promptly forgetting all about it and enjoying the quickly offered princess sticker.  “Princess Sticker!!!”)

In any case, the real story is what happened after – although it was as much a non-event as the actual shot.  We walked back home, taking a short cut through one of the hospital’s other buildings.  It was new, it was shiny and it was pimped out.  It looked like the set of Grey’s Anatomy, or the set of a Hollywood action movie that will shortly be blown into a million pieces by several action heroes.  I half expected the counters to glint and glitter like a cleaning product commercial.

But, it was also, in a way, surreal.  That hospital doesn’t exist to cure people, or to serve as a place of healing – or even to do research.  Unlike a Canadian hospital, which is the bulk of my experience with hospitals (although I did visit Danish hospitals disproportionately more than you might think for the length of time I was there), the whole building, its contents and its staff are all there to turn a profit.

It’s not a tangible difference, but it makes you think about the built-in drug stores and cafeterias and posters advertising drugs in a slightly different light.

Why are they promoting that drug?  Would that security guard kick me out if he knew I didn’t have my ID card yet?  Is that doctor running down the hall just trying to cram in as many patients as possible?

It’s strange, because superficially, the hospital isn’t any different than a Canadian hospital (other than being newer than any I’ve ever visited, and the ever-present posters advertising drugs, of course), and yet its function is different.  It’s roughly the difference between visiting a community centre and a country club.  In any other country in the western world, a hospital is open to all members of the community, whereas the hospitals here require a membership.  It’s just hard not to see it through the Canadian lens, which tells us it’s one of those things Americans “just can’t seem to get right.” Well, that’s the Canadian narrative – whether it’s right or wrong.

Anyhow, a hospital is a hospital: the net product of the hospital is keeping people healthy.  Whether it’s for profit or government run, it does the same things and works the same way.

At the end of the day, I can’t say anything other than that the experience was pleasant, and this is the first year that I’ve gotten a flu shot and didn’t get sick immediately afterwards.  So really, all in all, I guess you get what you pay for…  It’s just a new experience to see such a direct connection between the money and the services.

I just have to wonder how Americans see Canadian hospitals. (-:

Ikea furniture and bioinformatics.

I’ll just come out and say it:  I love building Ikea furniture.  I know that sounds strange, but it truly amuses me and makes me happy.  I could probably do it every day for a year and be content.

I realized, while putting together a beautiful wooden FÖRHÖJA kitchen cart, that there is a good reason for it: because it’s the exact opposite of everything I do in my work.  Don’t get me wrong – I love my work, but sometimes you just need to step away from what you do and switch things up.

When you build Ikea furniture, you know exactly what the end result will be.  You know what it will look like, you’ve seen an example in the showroom and you know all of the pieces that will go into putting it together.  Beyond that, you know that all the pieces you need will be in the box, and you know that someone, probably in Sweden, has taken the time to make sure that all of the pieces fit together and that it is not only possible to build whatever it is you’re assembling, but that you probably won’t damage your knuckles putting it together because something just isn’t quite aligned correctly.

Bioinformatics is nearly always the opposite.  You don’t know what the end result will be, you probably will hit at least three things no one else has ever tried, and you may or may not achieve a result that resembles what you expected.  Research and development are often fraught with traps that can snare even the best scientists.

But getting back to my epiphany, I realized that now and then, it’s really nice to know what the outcome of a project should be, and that you will be successful at it, before you start it.  Sometimes it’s just comforting to know that everything will fit together, right out of the box.

I’m looking forward to putting together a dresser tomorrow.

Replacing science publications in the 21st century

Yasset Perez-Riverol asked me to take a look at a post he wrote: a commentary on an article titled Beyond the Paper.  In fact, I suggest reading the original paper, as well as taking a look at Yasset’s wonderful summary image that’s being passed around.  There’s some merit to both of them in elucidating where the field is going, as well as how to capture the different forms of communication and the tools available to do so.

My first thought after reading both articles was “Wow… I’m not doing enough to engage in social media.”  And while that may be true, I’m not sure how many people have the time to do all of those things and still accomplish any real research.

Fortunately, as a bioinformatician, there are moments when you’ve sent all your jobs off and can take a blogging break.  (Come on statistics… find something good in this data set for me!)  And it doesn’t hurt when Lex Nederbragt asks your opinion, either.

However, I think there’s more to my initial reaction than just a glib feeling of under-accomplishment.  We really do need to consider streamlining the publication process, particularly for fast moving fields.  Whereas the blog and the paper above show how the current process can make use of social media, I’d rather take the opposite tack: how can social media replace the current process?  Instead of a slow, grinding peer-review process, a more technologically oriented one might replace a lot of the tools we currently have built ourselves around.  Let me take you on a little thought experiment, and please consider that I’m going to use my own field as an example, but I can see how it would apply to others as well. Imagine a multi-layered peer review process that goes like this:

  1. Alice has been working with a large data set that needs analysis.  Her first step is to put the raw data into an embargoed data repository.  She will have access to the data, perhaps even through the cloud, but now she has a backup copy, and one that can be released when she’s ready to share her data.  (A smart repository would release the data after 10 years, published or not, so that it can be used by others.)
  2. After a few months, she has a bunch of scripts that have cleaned up the data (normalization, trimming, whatever), yielding a nice clean data set.  These scripts end up in a source code repository, for instance github.
  3. Alice then creates a tool that allows her to find the best “hits” in her data set.  Not surprisingly, this goes to github as well.
  4. However, there’s also a metadata set – all of the commands she has run through steps two and three.  This could become her electronic notebook, and if Alice is good, she could use this as her methods section: it’s a clear, concise list of commands needed to take her raw data to her best hits.
  5. Alice takes her best hits to her supervisor Bob to check over them.  Bob thinks this is worthy of dissemination – and decides they should draft a blog post, with links to the data (as an attached file, along with the file’s hash – see the sketch after this list), the github code and the electronic notebook.
  6. When Bob and Alice are happy with their draft, they publish it – and announce their blog post to a “publisher”, who lists their post as an “unreviewed” publication on their web page.  The data in the embargoed repository is now released to the public so that they can see and process it as well.
  7. Chris, Diane and Elaine notice the post on the “unreviewed” list, probably via an RSS feed or by visiting the “publisher’s” page and see that it is of interest to them.  They take the time to read and comment on the post, making a few suggestions to the authors.
  8. The authors make note of the comments and take the time to refine their scripts, which shows up on github, and add a few paragraphs to their blog post – perhaps citing a few missed blogs elsewhere.
  9. Alice and Bob think that the feedback they’ve gotten back has been helpful, and they inform the publisher, who takes a few minutes to check that they have had comments and have addressed the comments, and consequently they move the post from the “unreviewed” list to the “reviewed” list.  Of course, checks such as ensuring that no data is supplied in the dreaded PDF format are performed!
  10. The publisher also keeps a copy of the text/links/figures of the blog post, so that a snapshot of the post exists. If future disputes over the reviewed status of the paper occur, or if the author’s blog disappears, the publisher can repost the blog. (If the publisher was smart, they’d have provided the host for the blog post right from the start, instead of having to duplicate someone’s blog, otherwise.)
  11. The publisher then sends out tweets with hashtags appropriate to the subject matter (perhaps even the key words attached to the article), and Alice’s and Bob’s peers are notified of the “reviewed” status of their blog post.  Chris, Diane and Elaine are given credit for having made contributions towards the review of the paper.
  12. Alice and Bob interact with the other reviewers via comments and tweets, for which links are kept from the article (trackbacks and pings).  Authors from other fields can point out errors or other papers of interest in the comments below.
  13. Google notes all of this interaction, and updates the scholar page for Alice and Bob, noting the interactions, and number of tweets in which the blog post is mentioned.   This is held up next to some nice stats about the number of posts that Alice and Bob have authored, and the impact of their blogging – and of course – the number of posts that achieve the “peer reviewed” status.
  14. Reviews or longer comments can be done on other blog pages, which are then collected by the publisher and indexed on the “reviews” list, cross-linked from the original post.
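
As a concrete illustration of step 5, here is a minimal sketch (in Python) of how the hash accompanying a data file might be computed.  The file name is purely hypothetical, and SHA-256 is just one reasonable choice of checksum:

    import hashlib

    def file_sha256(path, chunk_size=65536):
        """Compute the SHA-256 checksum of a file, reading in chunks
        so that large data sets never have to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical data file; the resulting hex string goes into the blog post.
    print(file_sha256("alices_best_hits.tsv"))

Anyone who later downloads the released data can recompute the checksum the same way and confirm that it matches the one published in the post.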

Look – science just left the hands of the vested interests, and jumped back into the hands of the scientists!

Frankly, I don’t see it as being entirely far-fetched.  The biggest issue is going to be harmonizing a publisher’s blog with a personal blog – which means that personal blogs will probably shrink pretty rapidly, or they’ll move towards consortia of “publishing” groups.

To be clear, the publisher, in this case, doesn’t have to be related whatsoever to the current publishers – they’ll make their money off of targeted ads, subscriptions to premium services (advanced notice of papers? better searches for relevant posts?), and their reputation will encourage others to join.  Better blogging tools and integration will be the grounds on which the services compete, and more engagement in social media will benefit everyone.  Finally, because the bar for new publishers to enter the field will be relatively low, new players simply have to out-compete the old publishers to establish a good, profitable foothold.

In any case – this appears to be just a fantasy, but I can see it play out successfully for those who have the time/vision/skills to grow a blogging network into something much more professional.  Anyone feel like doing this?

Feel free to comment below – although, alas, I don’t think your comments will ever make this publication count as “peer reviewed”, no matter how many of my peers review it. :(

Womanspace – last lap.

I wrote a comment on Ed Rybicki’s blog, which is still awaiting moderation.  I’m not going to repeat what I said there, but I realized I had more to say than what I’d already written.  Specifically, I have much more to say about a comment he wrote on this article:

PS: “why publish something that you don’t believe in is another story” – no, it’s just that science fiction allows one to explore EVERYTHING, including what you don’t believe in.

Ed makes a great point – science fiction is exactly the right vehicle for exploring things that you don’t believe in.  Indeed, it’s been used exactly that way since the genre was invented.  You could say that Gulliver’s Travels was a fantastic use of early science fiction, exploring a universe that mocked all sorts of contemporary idiocy that the author (Swift) disagreed with.

So, yes, I see Ed’s point – and he has a good one.  However, I’m going to have to disagree with Ed on the broader picture.  Science Fiction is perfect for exploring issues that you don’t believe in precisely because you can apply them to similar or parallel situations where they demonstrate their flaws.

For instance, if you want to write about how terrible apartheid is, you don’t set a science fiction novel in South Africa in the 1990s; you set it on another planet where two civilizations clash – and you can explore the themes away from the flashpoint issues that are rife in the real world conflict. (Orson Scott Card explores a lot of issues of this type in his novels.)

The issue with Ed’s article – and there are plenty of them to choose from – is that he chose to engage with the lowest form of science fiction: inclusion of some “vaguely science-like device” that casts no great insight into anything.  Science fiction, as a vehicle, is all about where you take it.

The premise would be equally offensive if he had picked a race (“Filipinos only get by because they have access to another dimension to compensate for their height”), a religion (“Christians use another dimension to hide from criticism leveled at their holy book”), or an age (“Anyone who can hold a job after the age of 65 is clearly doing so because they’re able to access another dimension”).

Ed could have made much better use of the vehicle he chose to drive.  He could have invented an alien species in which only one gender has access to a dimension, he could have used the alternate dimension to enable women to do things men can’t (and no, I don’t buy that men can’t shop efficiently) or he could have used his device to pick apart injustices that women face in competing with men.

Instead of using his idea to explore the societal consequences of the plot device, he uses it to reinforce a stereotype.

That, to me, is not a good use of science fiction.  And the blame doesn’t just go to the author – it goes to the editors.  As a long time reader of science fiction, I can tell when a story doesn’t work and when it fails to achieve its desired effect.  This story neither worked, nor caused anyone to question their own values.  (It does, however, make me wonder about the editor’s judgment in choosing to print it, as well as the author’s judgment in allowing it to be printed in a high profile forum.)

So, let me be clear – I despise the use of the stereotypes about women that Ed chose to explore.  The belief that exploring gender issues this way is any less sensitive than exploring race, religion or age is ridiculous – and shows a measure of bad judgment.

Having come up with a great tool (alternate dimensions) for making a comment on society (women and men aren’t treated equally), he completely missed the opportunity to use the venue (science fiction) to set the story in a world where he could have explored the issue and shown us something new.  In essence, he threw away a golden opportunity to cause his audience to ask deep questions and take another look at the issue from a fresh perspective – exactly what science fiction is all about.

Ed’s not a villain – but he’s not a great science fiction writer either.

Blogging about your own work.

Ok, so my titles aren’t nearly as inspired as Cath’s are.  This week hasn’t exactly been encouraging for puns, unless you consider massacring Danish pronunciation a very complex linguistic joke.

Actually, I only have a glimmer of an idea tonight – but I’m writing because I need something to do to keep me up for an hour or so.  Sorry for the bad pun, but the clock *is* ticking and it’s only three weeks till I’m supposed to be in Denmark – and I still don’t have movers.  It’s driving me completely around the bend.  So, as a therapeutic device, I’m going to write my glimmer of an idea.  Please don’t be too harsh on it.

The idea for the post came from reading Jacquelyn Gill’s blog post, “Why did I start blogging?”  (By the way, please go vote for her to win CollegeScholarships.org Blogging Scholarship. She clearly deserves it!) Her post isn’t quite related, but at the same time, it is – you can go read it to see why, if you’re interested.

One of the things I struggled with for the past two years has been blogging about my own work. Of course, I interpret this as blogging about what you’re currently working on, not the stuff you finished months ago, which is always fair game.  (Blogging your own publications always struck me as blatantly endorsing yourself – something only politicians should need to do.)

Anyhow, blogging your own current work, showing the bumps and warts of science is something I love to do, and as a scientist, something I want to do as often as I can.  However, there are several problems with it.  It tends to tip your hand to the whole world about what you’re working on and that can have some disastrous consequences.

First, if you’re in a medical field, it can be difficult to talk about cases you’re working on, if there’s any form of patient confidentiality.  Many of the projects I’ve been involved in have required me to maintain complete silence about the nature of the project.  Blog + confidentiality = Instant ethics issues, methinks.

Second, if you’re working on a manuscript, presumably you’re going to have to keep everything you do quiet.  Heck, I’ve got a paper in the works for which the journal sent instructions that require absolute silence on whether it’s even been accepted or not, let alone any communication about the topic.  If I say any more about this, I’ll either jeopardize the publication or wind up in jail.  (Have I already said too much?)

Third, if you aren’t working on a manuscript, you’re either not an academic, or you’re working on an open science project.  I was fortunate enough that my own project was open, allowing me to talk about my ChIP-Seq work for the first three years of my PhD – but alas, that work never culminated in a second paper.  That’s another rant for another day.

That leaves scientists in the awkward position that they either:

  1. blog about someone else’s work – as if they were journalists, describing their own fields,
  2. blog about their own work in vague terms so that their competition doesn’t scoop them,
  3. blog about work they’ve already published, or
  4. blog about the unimportant stuff – or the stuff that they don’t plan to publish.

I can think of one exception: Rosie Redfield, who does a good job of writing about what she’s working on, although her recent work has all been about rehashing and verifying (or, more accurately, failing to verify) someone else’s results.  (Yes, I’m referring to the arsenic bacteria fiasco.)  I have to admit, I don’t follow any other bloggers who discuss their own data in public, but I’m sure there must be some out there…

Still, if this is a problem for academic bloggers, industrial bloggers face an even harder battle to discuss their own data.  I can think of Derek Lowe over at In The Pipeline as a great example of a blogger from industry.  I used to read his blog daily, and back when I was an avid reader, I seem to recall my favorite posts of his were from the lab – but were all about the strange mishaps and challenges faced by chemists, drawn mostly from the past.  Absolutely none of his current work was discussed, unless it ended in spectacular failure. (Those were good stories too…)

So, I often find myself wondering, when I hear people say that scientists should blog more about their own work, who exactly do they expect to follow that advice?  (By the way, it’s something that pops up in conversation frequently, although I couldn’t think of a blog entry that makes that case specifically, off hand.  If you need a citation, you’ll just have to settle for “personal correspondence.”  Sorry.)

Are there a group of scientists who are willing to blog their own work at the expense of getting publications or being fired from their jobs?  Somehow, I have yet to meet this clique – although if I did, I’d have a lot of questions.  And I can’t imagine they’d be in a position to do this for very long.  You don’t get grants renewed without publications – and you wouldn’t have a workplace for very long either, if you kept blogging the secret sauce recipe.

Maybe, however, this is why some scientists choose to leave the lab bench to pick up the mantle of journalism.  Cue Ed Yong, for instance.  So, the solution isn’t that we need more scientists blogging about their own work, but that we need more scientists to leave science to blog about other people’s work…. or perhaps we should just ask them to stay in science and blog about other people’s work already.

Ahem.  Status quo wins again!

Letting the Cat out of the Bag.

It is finally official – I’ll be leaving Canada and going to Europe (Denmark) in December – joining the team at CLC bio in just over a month.  You’ll have to excuse my holding off on letting everyone know.  Of course, things have been in the works for some time now, but the last few pieces have only clicked into place this week.  And, of course, one doesn’t want to jump the gun by announcing these things before everything is in place.

Of course, this doesn’t mean I’ve finished my PhD yet.  There are still a few more hurdles – my thesis has to go through my committee and the external examiner, and I still need to officially defend it – but it was looking like the soonest that could happen would be February, and with everything going on, my wife and I decided it would be better to just start the process of settling in to Denmark as soon as possible.

So, consequently, if you read my blog, you’ll probably hear a little bit more about some topics that are currently on my mind: learning Danish (lære Dansk), traveling, maybe some cultural collisions (Danish people don’t have closets?)  and possibly some photography, depending on how busy I am.  (Yes, now that I’m not actively writing my thesis for 6-8 hours a day, I seem to have more time.)

But don’t worry – in the next month, I still have a few things I want to blog about, and likely a few papers to review.  Even though I’m leaving Grad School, I’m not leaving science behind.

To be candid, I’m looking forward to starting up at CLC partly because of the job, which already sounds pretty awesome, and partly because of the people.  I’ve met some of the people I’ll be working with – albeit briefly – and I’m excited to have the chance to work with them.  I can honestly say that they’re one of the nicest groups of people I’ve ever met.  Must be something in the water. (-;

Anyhow, to complete the circular nature of this post (like all good fugues – which is the way to write a good post, particularly if you’ve read Gödel, Escher, Bach – if that’s not getting way too involved), I have one last point to clarify.  As foreshadowed lightly by the title of this post, yes, my pets will be coming with me – and undoubtedly my cat will be thrilled to be let out of the bag once we’ve arrived in Denmark… so the moving process will be bookended, effectively, by letting cats (figurative and literal) out of their respective bags.

(Photo: Ollie – my wife says we have the same nose.)

Where’s the collaboration?

I had another topic queued up this morning, but an email from my sister-in-law reminded me of a more pressing beef: Lack of collaboration in the sciences. And, of course, I have no statistics to back this up, so I’m going to put this out there and see if anyone has anything to comment on the topic.

My contention is that the current methods for funding scientists are the culprit in driving less efficient science, mixed with a healthy dose of Zero Sum Game thinking.

First, my biggest pet peeve is that scientists – and bioinformaticians in particular – spend a lot of time reinventing the wheel.  How many SNP callers are currently available?  How many ChIP-Seq packages?  How many aligners?  And, more importantly, how can you tell one from the other?  (How many of the hundreds of SNP callers have you actually used?)

It’s a pretty annoying aspect of bioinformatics that people seem to feel the need to start from scratch on a new project every time they say “I could tweak a parameter in this alignment algorithm…” – and then off they go, writing aligner #23,483,337 from scratch instead of modifying an existing one.  At some point, we’ll have more aligners than genomes!  (Ok, that’s shameless hyperbole.)

But, the point stands.  Bioinformaticians create a plethora of software packages that solve problems that are not entirely new.  While I’m not saying that bioinformaticians are working on solved problems, I am asserting that the creation of novel software packages is an inefficient way to tackle problems that someone else has already invested time and money into building software for.  But I’ll come back to that in a minute.

But why is the default behavior to write your own package instead of building on top of an existing one?  Well, that’s clear: Publications.  In science, the method of determining your progress is how many journal publications you have, skewed by some “impact factor” for how impressive the name of the journal is.  The problem is that this is a terrible metric to judge progress and contribution.  Solving a difficult problem in an existing piece of software doesn’t merit a publication, but wasting 4 months to rewrite a piece of software DOES.

The science community in general, and the funding community more specifically, will reward you for doing wasteful work instead of focusing your energies where they’re needed.  This tends to squash software collaborations before they can take off, simply by encouraging a proliferation of useless software that is rewarded because it’s novel.

There are examples of bioinformatics packages where collaboration is a bit more encouraged – and those provide models for more efficient ways of doing research.  For instance, in the molecular dynamics community, CHARMM and Amber are the two software frameworks around which most people have gathered.  Grad students don’t start their degree by being told to re-write one or the other package, but are instead told to learn one and then add modules to it.  Eventually the modules are released along with a publication describing them.  (Or left to rot on a dingy hard drive somewhere if they’re not useful.)  Publications come from the work done and the algorithm modifications being explained.  That, to me, seems like a better model – and means everyone doesn’t start from scratch.

If you’re wondering where I’m going with this, it’s not towards the Microsoft model where everyone does bioinformatics in Excel, using Microsoft generated code.

Instead, I’d like to propose a coordinated bioinformatics code-base.  Not a single package, but a unified set of hooks instead.  Imagine one code base, where you could write a module and add it to a big GitHub repository of bioinformatics code – and re-use a common (well debugged) core set of functions that handle many of the common pieces.  You could swap out aligner implementations and have modular common output formats.  You could build a ChIP-Seq engine, and use modular functions for FDR calculations, replacing them as needed.  Imagine you could collaborate on code design with someone else – and when you’re done, you get a proper paper on the algorithm, not an application note announcing yet another package.
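
To make the idea of a unified set of hooks a little more concrete, here’s a minimal sketch in Python – every name in it is hypothetical, not part of any real framework – of a common aligner interface plus a registry, so that implementations become swappable modules:

    from abc import ABC, abstractmethod

    # The "common core": a minimal interface plus a registry of implementations.
    class Aligner(ABC):
        @abstractmethod
        def align(self, read: str, reference: str) -> int:
            """Return the best alignment position of read in reference."""

    ALIGNERS = {}

    def register_aligner(name):
        """Decorator that adds an Aligner implementation to the registry."""
        def wrapper(cls):
            ALIGNERS[name] = cls
            return cls
        return wrapper

    # A contributed module: naive exact matching, standing in for a real algorithm.
    @register_aligner("naive")
    class NaiveAligner(Aligner):
        def align(self, read, reference):
            return reference.find(read)

    # Downstream tools pick an implementation by name, so swapping aligners
    # is a one-line change rather than a rewrite.
    aligner = ALIGNERS["naive"]()
    print(aligner.align("GATTACA", "TTGATTACATT"))  # -> 2

The same pattern would work for a ChIP-Seq engine, an FDR calculator or an output formatter: the core defines the interface and the registry, and contributed modules just plug in.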

(We have been better in the past couple of years with tool sets like SAMtools, but that deals with a single common file format.  Imagine if it also allowed for much bigger projects, like providing core functions for RNA-Seq or CNV analysis…  but I digress.)

Even better, if we all developed around a single set of common hooks, you can imagine that, at the end of the day (once you’ve submitted your repository to the main trunk), someone like the Galaxy team would simply vacuum up your modules and instantly make your code available to every bioinformatician and biologist out there.  Instant usability!

While this model of bioinformatics development would take a small team of core maintainers for the common core and hooks, much the same way Linux has Linus Torvalds working on the kernel, it would also cut down severely on code duplication, on bugs in bioinformatics code, and on the plethora of software packages that never get used.

I don’t think this is an unachievable goal, either for the DIY bioinformatics community, the Open Source bioinformatics community or the academic bioinformatics community.  Indeed, if all three of those decided to work together, it could be a very powerful movement.  Moreover, corporate bioinformatics could be a strong player in it, providing support and development for users, much the way corporate Linux players have done for the past two decades.

What is needed, however, is buy-in from some influential people, and some influential labs.  Putting aside their own home grown software and investing in a common core is probably a challenging concept, but it could be done – and the rewards would be dramatic.

Finally, coming back to the funding issue: agencies funding bioinformatics work would also save a lot of money by investing in this type of framework.  It would ensure that more time is spent on useful coding and on publications that describe algorithms, and that higher quality code is produced at the end of the day.  The big difference is that they’d have to start accepting that bioinformatics papers shouldn’t be about “new software”, but about “new statistics”, “new algorithms” and “new methods” – which may require a paradigm change in the way we evaluate bioinformatics funding.

Anyhow, I can always dream.

Notes: Yes, there are software frameworks out there that could be used to get the ball rolling.  I know Galaxy has some fantastic tools, but (if I’m not mistaken) it doesn’t provide a common framework for coding – only for interacting with the software.  I’m also aware that CHARMM and Amber have problems – mainly because they were developed by competing labs that failed to become entirely inclusive of the community, or to invest substantially in maintaining the infrastructure in a clean way.  Finally, yes, the licensing of this code would determine the extent of corporate participation, but the GPL provides at least one successful example of this working.

Biopartnering North and a short break

First off, if anyone is going to BioPartnering North 2010 this week in Vancouver, I’ll be there, and would be very happy to talk genomics/biotech and business with you. I was lucky enough to have been found worthy of one of the coveted BIOTECanada bursaries to attend the event, and I plan to get as much out of it as I can. I’ll be at the reception tonight, and undoubtedly I’ll be around throughout the next few days. (And, if you were wondering, I won’t be blogging any talks from BPN.)

Second, I’m pretty sure everyone has noticed that my blogging output has dropped significantly since December, for which there are several good reasons. The first is that I’ve been quite busy. My personal life is now occupied by event planning, while my work life has been dominated by several major projects, which I will undoubtedly be “ranting” about in posts in the near future.

However (and thirdly), the other reason I’ve not been blogging much is that I also had a conversation with a colleague in December about effective communications. He suggested I read a book on “Non-violent communication.” I’m working my way through it slowly, and have taken a few suggestions to heart. It’s always possible to become a better communicator and, to that end, I’m on a small hiatus while I re-evaluate my use of language. It won’t last long – I like having a blog and I’m already itching to write a few more posts, but it’s an opportunity to do some personal development.

How to be a better Programmer: Tactics.

I’m a bit too busy for a long post, but a link was circulating around the office that I thought was worth passing on to any bioinformaticians out there.

http://dlowe-wfh.blogspot.com/2007/06/tactics-tactics-tactics.html

The article above is on how to be a better programmer – and I wholeheartedly agree with what the author proposed, with one caveat that I’ll get to in a minute. The point of the article is that learning to see the big picture (not specific skills) will make you a better programmer. In fact, this is the same advice Sun Tzu gives in “The Art of War”, where understanding the terrain, the enemy, etc. are the tools you need to be a better general. [This would be in contrast to learning how to wield each weapon, which would only make you a better warrior.] Frankly, it’s good advice, and it leads you down the path towards good planning and clear thinking – the keys to success in most fields.

The caveat, however, is that there are times in your life where this is the wrong approach: i.e., grad school. As a grad student, your goal isn’t to be great at everything you touch – it’s to specialize in some small corner of one field, and tactics are no help here. If grad school existed for ninjas, the average student would walk out being the best (pick one of: poisoner/dart thrower/wall climber/etc.) in the world – and likely knowing little or nothing about how to be a real ninja beyond what they learned in their ninja undergrad. Tactics are never a bad investment, but they aren’t always what is being asked of you.

Anyhow, I plan to take the advice in the article and to keep studying the tactics of bioinformatics in my spare time, even though my daily work is more on the details and implementation side of it. There are a few links in the comments of the original article to sites the author believes are good comp-sci tactics… I’ll definitely be looking into those tonight. Besides, when it comes down to it, the tactics are really the fun parts of the problems, although there is also something to be said for getting your code working correctly and efficiently…. which I’d better get back to. (=

Happy coding!

DTC SNPs… no more risk factors!

I’ve been reading Daniel’s blog again. Whenever I end up commenting on things I don’t understand well, that’s usually why. Still, it’s always food for thought.

First of all, has anyone quantified the actual error rate on these tests? We know they have all sorts of mistakes going on. (This one was recently in the news, and yes, unlike Wikipedia, Daniel is a valid reference source for anything genomics related.) I’ll come back to this point in a minute.

As I understand it, the risk factor is an adjustment applied to the general population’s likelihood of a disease, used to characterize the risk that an individual will suffer from it.

So, as I interpret it, you take whatever your likelihood of having the disease was and multiply it by the risk factor.  For instance, with a disease like Jervell and Lange-Nielsen Syndrome, 6 of every 1 million people suffer from its effects (although this is a bad example, since you would have discovered it in childhood – but ignoring that for the moment, we can assume another rare disease with a similar rate).  If our DTC test shows a 1.17 risk factor because we have a particular SNP, we multiply the base rate by 1.17:

6/1,000,000 × 1.17 ≈ 7/1,000,000

If I’ve understood it all correctly, that means you’ve gone from knowing you have a 0.0006% chance to being certain you have a 0.0007% chance of suffering from your selected disease. (What a great way to spend your money!)

But let’s not stop there.  Let’s ask what the error rate on actually calling that SNP is.  From my own experience in SNP validation, I’d guess that the validation rate is close to 80-90%.  Let’s even be generous and take the high end.  Thus:

You’ve gone from being 100% sure you’ve got a 0.0006% chance of having a disease to being 90% sure you have a 0.0007% chance of having a disease, and 10% sure you’ve still got a 0.0006% chance of having the disease.

Wow, I’m feeling enlightened.

Let’s do the same for something like celiac disease, which is estimated to strike 1 in 250 people, but is only diagnosed in 1 in 4,700 people in the U.S.A. – and let’s be generous and assume that the SNP in your DTC test has a 1.1 risk factor.  (Celiac disease is far from rare, I might add.)

As a member of the average U.S. population, you had a 0.4% chance of having the disease, but a 0.02% chance of being diagnosed with it.  That’s a pretty big disparity, so maybe there’s a good reason to have this test done.  As a Canadian, the odds are somewhat different, but let’s carry on with the calculations anyhow.

Let’s say you do the test and find out you have a 1.1 times risk factor for the disease.  omg scary!

Wait, let’s not freak out yet.  That sounds bad, but we haven’t finished the calculations.

Your test has the SNP… 1.1 × 1/250 = 0.44% likelihood that you have the disease.  Because celiac disease requires a biopsy to definitively diagnose it (and treatment does not start till you’ve done the diagnosis), would you run out and submit yourself to a biopsy on a 0.44% chance you have a disease?  Probably not, unless you have some other knowledge that you’re likely to have this disease already.

Then, we factor in the 90% likelihood of getting the SNP call correct: you have a 90% likelihood of having a 0.44% chance of having the disease, and a 10% likelihood of having a 0.4% chance of having the disease.
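
For the curious, here’s a minimal Python sketch of that arithmetic, using the celiac numbers above (the 90% call accuracy is, as stated, just my generous guess):

    def posterior_risk(base_rate, risk_factor, call_accuracy):
        """Expected disease risk once you admit the SNP call may be wrong:
        a weighted average of the adjusted and unadjusted risks."""
        adjusted = base_rate * risk_factor
        return call_accuracy * adjusted + (1 - call_accuracy) * base_rate

    base_rate = 1 / 250  # celiac disease: 0.40% of the population
    print(f"{base_rate * 1.1:.2%}")                      # 0.44% if the call is right
    print(f"{posterior_risk(base_rate, 1.1, 0.9):.2%}")  # ~0.44% overall

Either way, the needle barely moves – which is rather the point.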

Ok, I’d be done panicking about now.  And we’ve only considered two simple things here.  Let’s add one more, just for fun.

Let’s pretend that an unknown environmental stressor is actually involved in triggering the condition, which would explain why the odds are somewhat different in Canada.  Since we know nothing about that environmental trigger, we can’t even project the odds of coming into contact with it.  Who knows what effect it has in combination with the SNP you know about?

By now, I can’t help thinking that all of this is just a wild goose chase.

So, when people start talking about how you have to take your DTC results to a genetic counsellor or to your MD, I really have to wonder.  I can’t help but think that unless you have a very good reason to suspect a disease, or some form of a priori knowledge, this whole thing is generally a waste.  Your genetic counsellor will probably just laugh at you, and your MD will order a lot of unnecessary tests – which of those sounds productive?

Let me make a proposal (and I’m happy to hear dissent): risk factors are great – but they are absolutely useless when it comes to discussing how genetic factors affect you.  Let’s leave the risk factors to the people writing the studies and ask the DTC companies to make a statement: what are your odds of being affected by a given condition?  And, if you can’t make a helpful prediction (aka a diagnostic test), maybe you shouldn’t be selling it as a test.