blogging as practice for thesis writing…

I’ve been telling all of the students around me that they should try their hand at blogging – in fact, I’ve been telling everyone around me that blogging is something you should try if you get the chance. Not everyone has a lot to say, but it’s a great habit to be in. It’s a great way to practice organized writing (without 140-character limits), putting your views out where others can see them, defending your ideas and, of course, organizing your thoughts. In other words, it’s a microcosm of the final 6-9 months of your graduate studies.

Indeed, I had no idea how much that advice was actually worth until I sat down yesterday afternoon and started writing out some parts of my thesis. (Yes, I’m now actually writing my thesis!) The brilliant thing was that, after spending so much time writing on my blog, this thesis seems SO much easier than the ones I’ve done in the past. (This is actually my 4th thesis.) The text just flowed, which makes the whole process rather fun. (Yes, thesis and fun in one sentence!)

When I sat down and organized my thoughts on a subject, things just came together and I knew what I wanted to say and how to say it, which I can only ascribe to the endless practice of writing on my blog. Yes, the style is a bit different and less reflective, but other than minor stylistic changes, the process is almost identical.

So, for anyone who’s going to eventually have to write a thesis, I’ll make a suggestion: start practicing for it by starting a blog, even if no one reads it. Keeping your writing skills in shape is invaluable.

The Death of Microarrays, revisited

Nearly a year ago, back when I was still over on the Nature Blogs site, I decided to announce the obvious: microarrays are dead.  Of course, I meant that in the research setting, not that we should all throw out everything that’s microarray related.  In the long term, microarrays are simply going to be pushed further and further into niche applications.  I think I was pretty verbose on that point – there will always be niches for certain technologies, and microarrays will always reign supreme in diagnostics and mail-order SNP panels.   My opinion hasn’t really changed.

However, I did have an opportunity today to talk with people who work on microarrays, and one of my internet friends reminds me, every time one of her microarray projects works, that I’ve already declared them dead – so I figure the topic is worth revisiting.

The real catalyst for this blog entry, however, came from an article I read on GenomeWeb.  Unfortunately, I don’t have a premium account and the article is not freely available… which raises the question of how I read it in the first place.  I haven’t the faintest idea. Twitter link?

In any case, the major point of the article was that the death of arrays has been greatly exaggerated and that there are still experiments that do better with arrays than with next-gen sequencing.

Well… yeah.  No one claimed that there weren’t applications.  My internet friend’s arrays are helping her understand horse colour patterns, and I know that diagnostics can be done much more efficiently using arrays.  Clearly, those are niche applications where microarrays have the edge – and where next-gen sequencing is overkill in cost, in the bioinformatics required AND in the amount of information gathered.

Unfortunately, in claiming that the death of microarrays is premature, the GenomeWeb article basically cites those niche applications rather than demonstrating a resurgence of microarrays into cutting-edge science.  I don’t find it particularly convincing, really.

So, here’s my challenge: if you’d like to announce that microarrays aren’t dead, you’ll have to show their use outside of the niche applications.  To be clear, let’s enumerate the niches where microarrays will flourish:

  1. Large sample sets with small numbers of genes.  If you want expression levels of transcribed genes across a large number of patients, microarrays are probably much cheaper – and likely to remain cheaper – as long as you don’t mind gene-level resolution.
  2. Diagnostics:  You only want information on an exact set of traits.  Extraneous information is actually a hindrance, rather than a benefit.
  3. Personal medicine: well, this isn’t really any different from diagnostics (number 2), except that the information is probably going direct to the consumer.
  4. Experiments that would have been cutting edge on Drosophila in the ’80s or ’90s.  Not all organisms have been well studied.  Horse colouring, for instance, is just one of those things that hasn’t been explored in great detail and is now the topic of research.  Again, you don’t need the depth of next-gen sequencing to study a simple genomic set of traits.

So, did you see the theme?  Simple experiments, nothing too in-depth and nothing where you’re fishing for something new and unexpected.  While you CAN do experimental work with microarrays, I just don’t buy that cutting edge work will happen on that platform anymore.  That’s not to say that there’s nothing left of value (there CLEARLY is), but those aren’t going to be studies that give you new mechanisms or huge insight into a well studied organism.

Microarrays have, from my perspective, passed beyond the realm of cutting-edge genomics into the toolbox of “oldschool” applications.  Again, that’s not an insult to microarrays, and oldschool doesn’t mean useless. (For instance, see my posts on Complete Genomics – they are truly the oldschool of sequencing, and they’re doing some fantastic things with it.)

So, in conclusion, I’m going to stand by my original post and reiterate that microarrays are dead – at least as far as cutting-edge personal medicine and research are concerned. But hey, that doesn’t mean you have to throw them all out the window.  They’ll still be around, hiding in the quiet corners, pumping out volumes of data… just more slowly than sequencing.  I just don’t expect them to jump out and surprise me with a resurgence that displaces next-gen technologies, which are only going to keep pushing microarrays further into the shadows.

Complete Genomics User Conference 2011

I’ve just heard that Complete Genomics has a user conference going on from June 15th-17th in San Francisco.  Unfortunately, I’m not going to be there, as I have other travel plans that week, but I’d be very interested in knowing if any other bloggers/twitterers are going.  Sounds like an interesting time!

http://www.completegenomics.com/userconf2011/

In typical Complete Genomics style, they’ve even been so kind as to give you the Top 10 reasons to attend.  I think number 10 is pretty convincing, no?

 

Fastest rejection letter ever.

It’s a little early for me to be looking for a job – I’m still working on my thesis and clearly won’t be done before the fall, but hey, it doesn’t hurt to be prepared, right?

So, last night I came across a job posting on LinkedIn, posted by a “Staffing Consultant” looking for a scientist for a big company in the States.  They want 6 years of next-gen sequencing experience (despite the fact that the field is only 4 years old) and someone with previous experience in a management role.  I thought 4 years of next-gen sequencing experience might be close enough, and I do have management experience from before returning to school – so hey, why not apply, right?

Unfortunately, my ego might disagree after getting a quick reply.  (By quick, I mean less than 12 hours, which is a pretty impressive turnaround time for any job application.) The reply, however, was somewhat harsh.  I’ve paraphrased it, but this is the gist:

“Thanks for applying for this position.  We’re only looking at candidates with a PhD, work experience and management experience. We have a bunch of Post-doc positions on our web page – you should check them out and apply for something that fits your current level of expertise/education.”

Zing!

On that note, perhaps I was a bit ambitious, but hey, nothing ventured, nothing gained.  (-:

Cancer as a network disease

A lot of my work these days is in trying to make sense of a set of cancer cell lines I’m working on, and it’s a hard project.  Every time I think I’m making some headway, I find myself running up against a brick wall – mostly because I keep returning to the same old worn-out linear cancer signaling pathway models that biochemists like to toss about.

If anyone remembers the biochemical pathway chart you used to be able to buy at the university chem stores (I had one as a wall hanging all through undergrad), we tend to perceive biochemistry in linear terms.  One substrate is acted upon by one enzyme, whose product is then picked up by another enzyme, which acts upon it in turn, ad nauseam.  This is the model by which the electron transport chain works, as does the synthesis of most common metabolites, and it is the default model to which I find myself returning when I think about cellular functions.

Unfortunately, biology rarely picks a method because it’s convenient to the biologist.  Once you leave cellular respiration and metabolite synthesis and move on to signaling, nearly all of it, as far as I can tell, works along a network model.  Each signaling protein accepts multiple inputs and is likely able to signal to multiple other proteins, propagating signals in many directions.  My colleague referred to it as a “hairball diagram” this afternoon, which is pretty accurate.  It’s hard to know which connections do what and whether you’ve even managed to include all of them in your diagram. (I won’t even delve into the question of how many of the ones in the literature are real.)

To me, it rather feels like we’re entering an era in which systems biology will be the overwhelming force driving deep insight.  Unfortunately, our knowledge of systems biology in the human cell is pretty poor – we have pathway diagrams that detail sub-systems, but they are next to impossible to link together. (I’ve spent a few days trying, but there are likely people better at this than I am.)

Thus, every time I use a pathway diagram, I find myself looking at the “choke points” in the diagram – the proteins through which everything seems to converge.  A few classic examples in cancer are AKT, p53, MYC and the MAPKs.  However, the more closely I look into these systems, the more I realize that these choke points are not really the focal points in cancer.  After all, if they were, we’d simply have to come up with drugs that target these particular proteins and voilà – cancer would be cured.
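To make the “choke point” idea concrete, here’s a minimal sketch – purely illustrative, not anything from my actual analysis – of one way you might quantify it: build a toy signaling graph and rank nodes by betweenness centrality, a crude proxy for how much signal has to pass through each protein. The edges and gene names below are made up for the example, and it assumes the networkx Python library is available.

```python
# Illustrative sketch only: rank "choke points" in a toy signaling hairball.
# Edges are invented for the example, not a curated pathway.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("EGFR", "RAS"), ("RAS", "MAPK"), ("RAS", "PI3K"),
    ("PI3K", "AKT"), ("AKT", "MYC"), ("MAPK", "MYC"),
    ("AKT", "p53"), ("p53", "apoptosis"), ("MYC", "proliferation"),
])

# Betweenness centrality: how often a node lies on shortest paths between
# other nodes. High scores flag candidate choke points.
for node, score in sorted(nx.betweenness_centrality(g).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node:15s}{score:.3f}")
```

Of course, a high centrality score only tells you where signal converges on paper – not whether that node is actually the right place to intervene, which is rather the point of the paragraph above.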

Instead, it appears that cancers use much more subtle methods to effect changes in the cell: modifying a signaling receptor, which turns on a set of transcription factors that up-regulate proto-oncogenes and down-regulate tumour suppressors, in turn shifting the incoming signals to reinforce the same pathway…

I don’t know what the minimum number of changes required is, but if a virus can do it with only a few proteins (EBV uses no more than 3, for instance), then why should a cell require more than that to get started?

Of course, this is further complicated by the fact that, in a network model, there are even more ways to create that driving mutation.  Tweak a signaling protein here, a receptor there… and in no time at all, you can drive the cell into an oncogenic pattern.

However, there’s one saving grace that I can see: each type of cell expresses a different set of proteins, which affects the processes available to activate cancers.  For instance, inherited mutations in RB generally cause cancers of the eye, inherited BRCA mutations generally cause cancers of the breast, and certain translocations are associated with blood cancers.  Presumably this is because the internal programs of these cells are predisposed to disruption by these particular pathways, whereas other cell types are generally not susceptible because they lack expression of the relevant genes.

Unfortunately, the only way we’re going to make sense of these patterns is to assemble the interaction networks of human cells in a tissue-specific manner.  It won’t be enough to know where the SNVs are in a cell type, or even which proteins are on or off (although that is always handy to know).  Instead, we will eventually have to map out the complete pathway – and then be capable of simulating how all of these interactions disrupt cellular processes in a cell-type-specific manner.  We have a long way to go yet.

Fortunately, I think tools for this are becoming available rapidly.  Articles like this one give me hope for the development of methods of exposing all sorts of fundamental relationships in situ.

Anyhow, I know where this is taking us.  Sometime in the next decade, there will need to be a massive bioinformatics project that incorporates all of the information above: sequencing for variants, indels and structural variations, copy number variations and loss of heterozygosity, epigenetics to discover the binding sites of every single transcription factor, and one hell of a network to tie it all together. Oh, and that project will have to take all sorts of random bits of information into account, such as the theory that cancer is just a p53 aggregation disease (which, by the way, I’m really not convinced of anyhow, since many cancers do not have p53 mutations).  The big question for me is whether this will all happen as one project, or whether science will struggle through a whole lot of smaller projects.  (AKA the human genome project big-science model vs. the organized chaos of the academic model.)  Wouldn’t that be fun to organize?

In the meantime, getting a handle on the big picture will remain a vague dream at best, and I tend to think cancer will be a tough nut to crack.  Like my own work, for the time being, it will be limited to one pathway at a time.

That doesn’t mean there isn’t hope for a cure – I just mean that we’re at a pivotal time in cancer research.  We now know enough to know what we don’t know and we can start filling in the gaps. But, if we thought next gen sequencing was a deluge of data, the next round of cancer research is going to start to amaze even the physicists.

I think we’re finally ready to enter the realms of real big biology data, real systems biology and a sudden acceleration in our understanding of cancer.

As we say in Canada… “GAME ON!”

My lesson learned.

One shouldn’t often engage in a war of words with people who comment on blogs on the Internet – it’s rarely productive.  In this case, though, there are a few points I can clarify by responding to a comment with a particularly ugly tone, especially given that it was written by someone with an illustrious career in a field related to my own.  They’ve held the positions of chair and vice chair of multiple departments, have been a professor since before I was born and have hundreds of publications…  And yet, this individual chose to send me a message under the identity “fuckhead” – accusing me of intimidating a junior grad student.  Instead of using his real name, I’ll use the moniker “fuckhead” that he chose for himself, and I’ll post fuckhead’s comments below, interspersed with my replies.

I do need to acknowledge that my tone in my “Advice to graduate students” was somewhat condescending, due to some rather unfortunate word choices on my part.  I have since edited the post for tone, but not for content.  That said, fuckhead’s comment (and unfortunate choice of moniker) was still inappropriate and deserves a reply – and yes, it is a great cathartic release to reply to a negative comment once in a while.

“Coming on the heels of the previous “why I have not graduated yet” post, this is telling”.

Oddly enough, the point of my post on why I haven’t graduated yet was that I’m unable to find any clear signals in the noise of my data set, while the point of my advice to other graduate students was about respecting your colleagues, even if that wasn’t necessarily obvious in the first released version of the post.  Putting the two together might allow for a few interesting conclusions – although I would suggest they are not the ones suggested in the comment.

“It’s one thing to intimidate a student out of the way (rather simple, actually, if you have the slightest clue what you are doing), but what you espouse here will poison you. So you’ve been working on the topic for a while, and some meatball comes along and asks you about the topic, and your reaction is ‘get bent’? What will you tell people when they ask what you spent X years of your life on in graduate school? ‘Get bent’?”

It should be clear right away that fuckhead really doesn’t know me well.  I run a blog to share information and help other people, I have more than 200 posts (answering questions) on seqanswers.com, all of the code I’ve written towards my thesis has been available freely on source forge for 3 years, I’ve dedicated countless hours to helping other bioinformaticians online and always make time to help out my fellow students. (My resume is online somewhere, if you want more than that.) I probably have told a few people to “get bent”, as it were, but in this case, I most certainly didn’t.

It’s rather telling to me that fuckhead didn’t take the time to find out who I am before jumping to the conclusion that I’m obstructive and surly towards my colleagues.  I can be gruff when people don’t take the time to think through their questions, but I always take the time to listen to my colleagues and help them find the information they need.  If my tone is a bit gruff sometimes, we all have off days – and it’s an inherent danger of insufficient blog editing as well.

“Do you think that you are going to cure cancer, or are you trying to make a little tiny dent in the vast universe of ignorance that surrounds humankind?”

Wow… leading question.  While I do joke that my job is to “cure cancer”, I’m fully aware that expectations for graduate students are low and making a “little tiny dent in the vast universe of ignorance” is where the bar is normally set.  In case it wasn’t clear, no one expects me to cure cancer while working on my doctorate.

That said, who’s to say that I – or the incoming graduate student – can’t be the one who finds an important cure? Why hobble myself by agreeing to do no more than meet your base expectations?  Fuckhead doesn’t say why he thinks I shouldn’t have big goals – or why he thinks I’m incapable of meeting them.  Nor does he explicitly state his underlying assumption, which is clear enough.  To paraphrase: “You’re just a lowly graduate student, and thus you aren’t the one doing the important work.”

For the record, I like to think big – and I like to achieve my goals.

“And if you are after the latter, why not start off with the ignorant student that approached you?”

Ironically, given that fuckhead’s main point is that he thinks I’m intimidating junior colleagues, his tone is oddly lacking in self-reflection.  The implication that I haven’t already helped the graduate student is plain – and plainly wrong.  However, that is between myself and the student, and we are in the process of establishing a better relationship built on stronger co-operation, where my time is respected and the student’s needs are better met.  After all, that is the goal: by getting the student to ask more focused questions, he’ll get better answers.

Further, given that I am fuckhead’s junior colleague, I have to ask why he chose to respond to my post with such venom.  He could have taken the time to set me straight by leading me to see his point, rather than writing a biting comment that chastises me for being rude to those who have less experience than I.

Irony, anyone?

“If you have lots of good ideas, some dolt stealing one of them won’t hurt you.”

I’m not afraid of people stealing my work, but one should recall the context of my comments.  Frankly, I am a strong believer in open source and collaborative work and if you want to see the code I’m working on this week, all you have to do is download my work from source forge.

Unfortunately, in academia, one generally doesn’t release data until it’s published – that is the default position – and one I have openly questioned in the past.  But, if I want someone’s unpublished results, I go to them with the respect for the work that went into it.  It is as simple as that.

Besides, as someone with an entrepreneurial past, I’m well aware of the value of ideas. One does not disclose the “secret sauce” to competitors without an NDA (non-disclosure agreement), but when it comes to investors, you have to respect their time and effort and be aware that your idea has no value until you’ve done something with it – and even then, it’s still not the idea itself that has value.

However, the proof is in the pudding, as they say.  If I were afraid of people stealing my ideas, would I be blogging them?

“If you don’t have lots of good ideas, how the hell will you survive on your own as a researcher?”

I haven’t the foggiest clue.  I’ve never found myself lacking in ideas, although I’m shying away from the academic career path for this very reason: I know the value of sharing ideas and of working in a group to combine and improve them.  Unfortunately, I don’t see that kind of environment being created in academia, where professors competing for a small pool of grant money hoard their findings so that others will be less effective in competing with them.

If there’s one thing that I hate, it is wasting time reinventing the wheel.  Unfortunately, that appears to be an inherent part of the academic process.  (I’m not talking about independently confirming results, which is an inherent and important part of the scientific method.)

To wrap things up: yes, I’ll go quietly back into my little bubble of the universe, in which I will battle the raging sea of ignorance around me, but I can’t promise that I’ll stay there.  However, even as I fade back into obscurity, I do plan to learn from my mistakes and to let others learn from them as well.

The hard lesson I learned today was to watch my own tone when communicating on the Internet, to keep myself from unintentionally sounding arrogant and condescending.  I’d be happy to pass the same lesson on to you, fuckhead.

Advice for new graduate students

[Edit: Someone I respect took the time to suggest that my tone was condescending in this post – and I can see some of it, upon a second reading.  Thus, I have edited the post for tone, and so that it expresses my opinion more accurately.]

I had a talk this morning with a graduate student who recently joined my lab, sparked by an email sent to me last night – one of many I had received from this student with a similar tone.

As one of the “senior” students in this group, I figured it was my responsibility to give the student some advice on communicating more effectively with colleagues.  The student has agreed to let me repost their email so that others could learn from it.  I’ve made a few changes, but for the most part, it’s reproduced as sent.

Hi Anthony,

Thanks for all your help.

My PhD project will definitely involve [discipline of science]. Therefore, I would like to read up as much as possible on it so that I can comfortably discuss and defend my work in the future.

I understand that your dissertation will heavily involve [discipline of science] as well. I am wondering whether you could send me the list of references you included in your dissertation that are relevant to [discipline of science].

I am been trying to cover some of the key papers, especially those describing [topic]. But there seems to be too many papers out there on [discipline of science]..

It would be great if you could help me by providing the reference list.

Thanks!

My reaction to the email was to be rather upset by it.  Complying with the request would require several days’ worth of work, interrupting other major projects with their own deadlines.  The tone of the email did not acknowledge the disruption, the quantity of work necessary or the effort that has gone into my work to date.  Admittedly, references aren’t a hardship to share once they’re already organized.  Alas, mine aren’t.

I don’t expect everyone to be humble as an incoming graduate student anyway – I certainly wasn’t!  However, it is important to recognize the value of other people’s time and effort and to do your best to match their efforts.  With that in mind, here are the problems I saw in the request, along with some advice for avoiding them:

  1. Requesting years’ worth of work from other people.  Everyone in the lab has been working for years, devoting hours/days/weeks to building the reference lists, figures, charts, code and other data that solve problems specific to their own topics of interest.  The goal of your PhD is for you to identify the problems involved in your project and then make your own inroads into solving them.  Where you have a common goal, offer to collaborate, so that you can contribute to the process.  Teamwork is important, and establishing yourself as part of an existing team should be done with respect for the effort that’s already been invested in the team’s project.
  2. Asking someone else to do work for you without repayment.  I realize that graduate students aren’t among the most highly paid professionals.  In fact, we’re among the most poorly compensated in terms of training/reward.  However, that doesn’t mean that grad students’ time is without value.  Every hour a student spends not working on their own project is an hour by which they’ll have to postpone their own defense.  That doesn’t mean you shouldn’t ask, but you should be focused in what you do ask, and respect that people have other tasks on their desks.
  3. Expecting another student to share your objectives.  We’re all in the same lab, but my goal is to make myself comfortable in discussing and defending my own thesis – that is, in essence, the purpose of a defense.  If I’d already managed that, I’d have defended and gone on to other things!  (-:  Graduate students should help each other out in preparing for their defenses, but ultimately it is your responsibility to prepare yourself.  Furthermore, what prepares one student for their defense isn’t likely to be the same as what prepares another, even if you are working on the same project.  By all means, work together in preparing, but don’t expect others to do your preparation for you.
  4. Learn to be your own fisherman. Everyone knows the proverb: “Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.”  When asking for help from others, don’t ask for their fish – ask them to teach you to fish!  This applies doubly for grad students since your goal should be to learn new techniques – not to get others to work for you. (That’s a separate degree called an MBA.)  Instead of asking others for their work, ask them to teach you to replicate it with your own data.  You’ll probably even learn enough to contribute back or help another student along down the road.
  5. Make the same effort to solve your problem that you expect from the person you’ve asked.  It seems simple, but in over 15 years of helping people on the web, I’m constantly surprised by the number of times people will ask someone volunteering their time to do more work than they’ve put in themselves.  This should be one of those things everyone stops to ask themselves before getting help: “Have I worked hard enough on this to merit asking for someone else’s help?”  If all you want is a quick one-word answer someone else should know offhand, then the bar is VERY low – perhaps a couple of seconds searching on the web.  If you’re asking someone to take you through the whole process of designing a building, then you’d better have spent the time to read a few books on architecture and engineering.
  6. Be specific! When you’ve taken the time to ask a question – ask one that’s bite sized.  “Where’s a good place to look for advice on magnetization?” is vastly better than “Explain how physics works for me!”

All in all, it boils down to respect.  Respect the people around you, respect their time and respect their objectives. If you do, you’ll find your time in grad school is much more efficient and you’ll gain a lot more respect in return.

Why I haven’t graduated yet and some corroborating evidence – 50 breast cancers sequenced.

Judging a cancer by its cover – er, by its tissue of origin – may be the wrong approach.  It’s not a publication yet, as far as I can tell, but summaries are flying around about a talk presented at AACR 2011 on Saturday, in which 50 breast cancer genomes were analyzed:

Ellis et al. Breast cancer genome. Presented Saturday, April 2, 2011, at the 102nd Annual Meeting of the American Association for Cancer Research in Orlando, Fla.

I’ll refer you to a summary here, in which some of the results are discussed.  [Note: I haven’t seen the talk myself, but have read several summaries of it.] Essentially, after sequencing 50 breast cancer genomes – and 50 matched normal genomes from the same individuals – they found nothing of consequence.  Everyone knows TP53 and signaling pathways are involved in cancer, and those were the most significant hits.

“To get through this experiment and find only three additional gene mutations at the 10 percent recurrence level was a bit of a shock,” Ellis says.

My own research project is similar in the sense that it’s a collection of breast cancer and matched normal samples, but using cell lines instead of primary tissues.  Unfortunately, I’ve also found a lot of nothing.  There are a couple of genes that no one has noticed before that might turn into something – or might not.  In essence, I’ve been scooped with negative results.

I’ve been working on similar data sets for the whole of my PhD, so it’s at least nice to know that my failures aren’t entirely my fault. This is a particularly difficult set of genomes to work on, and so my inability to find anything may not be because I’m a terrible researcher. (That isn’t ruled out by this either, I might add.)  We originally started with a set of breast cancer cell lines spanning 3 different types of cancer.  The quality of the sequencing was poor (36bp reads, for those of you who are interested) and we found nothing of interest.  When we re-did the sequencing, we moved to a set of cell lines from a single type of breast cancer, with the expectation that it would lead us towards better targets.  My committee is adamant that I be able to show some results from this experiment before graduating, which should explain why I’m still here.

Every week, I poke through the data in a new way, looking for a new pattern or a new gene, and I’m struck by the absolute independence of each cancer cell line.  The fact that two cell lines originated in the same tissue and share some morphological characteristics says very little to me about how they work. After all, cancer is a disease in which cells forget their origins and become, well… cancerous.

Unfortunately, that doesn’t bode well for research projects in breast cancer.  No matter how many variants I filter through, at the end of the day someone is going to have to figure out how all of the proteins in the body interact in order for us to get a handle on how to interrupt cancer-specific processes.  The (highly overstated) announcement of p53’s tendency to mis-fold and aggregate is just one example of these mechanisms – and only a first step towards understanding cancer. (I also have no doubt that you can make any protein mis-fold and aggregate if you make the right changes.)  The pathway-driven approach to understanding cancer is much more likely to yield tangible results than the genome-based approach.
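For anyone curious what that variant filtering amounts to, here’s a hedged sketch of the simplest version of it – not my actual pipeline, and the data structure is hypothetical: given which genes carry somatic variants in which cell lines, keep only the genes mutated above some recurrence threshold, like the 10 percent level mentioned in the Ellis summary.

```python
# Illustrative recurrence filter: which genes are mutated in >= 10% of samples?
# The sample-to-gene mapping below is invented for the example.
from collections import Counter

mutated_genes_by_sample = {          # sample -> set of genes with somatic variants
    "line_01": {"TP53", "PIK3CA"},
    "line_02": {"TP53", "GATA3"},
    "line_03": {"PIK3CA", "MAP3K1"},
    # ... one entry per cell line or tumour/normal pair
}

n_samples = len(mutated_genes_by_sample)
counts = Counter(g for genes in mutated_genes_by_sample.values() for g in genes)

recurrence_cutoff = 0.10
recurrent = {gene: n / n_samples for gene, n in counts.items()
             if n / n_samples >= recurrence_cutoff}
print(recurrent)
```

The frustrating part, as described above, is that on data sets like these almost nothing survives even a modest cutoff beyond the usual suspects like TP53.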

I’m not going to say that GWAS is dead, because it really isn’t.  It’s just not the right model for every disease – but I would say that Ellis makes a good point:

“You may find the rare breast cancer patient whose tumor has a mutation that’s more commonly found in leukemia, for example. So you might give that breast cancer patient a leukemia drug,” Ellis says.

I’d love to get my hands on the data from the 50 breast cancers, merge it with my database and see what features those cancers do share with leukemias.  Perhaps that would shed some light on the situation.  In the end, cancer research is going to be more about identifying targets than about understanding cancer’s (lack of) common genes.

ridiculous email.

Sometimes it’s fun to write ridiculous emails:

Good morning 1st floor!

You may notice that *all* items in both of the fridges and the freezer have been marked with a yellow sticky piece of paper. This yellow mark symbolizes your refrigerated item’s impending doom.

If there is anything in the freezer that still has this mark on it by Thursday afternoon, it will be sacrificed to the gods of bioinformatics in the hopes of better results and faster processing times for the GSC. (The sacrifice may or may not involve diabolical rituals and a RIP talk.)

Fortunately for your refrigerated items, they may be spared simply by removing the yellow tag.

Unlike last year, I will not be lenient in sparing “fresh looking” items with the yellow tag… as I found some yellow tags from last year in the freezer. (anyone want a frozen dinner?)

With humour,

Anthony

I have to admit, I’ve never threatened doom on slightly chilled and expired food items before.  Thursday afternoon should be rather entertaining, I would think.

Teens and risk taking… a path to learning.

I read an article on the web the other day describing how teenagers weight risk and reward differently than either young children or adults, due to a chemical change that emphasizes the benefits of the rewards without fully processing the risks.

The idea is that the changes in the adolescent brain magnify the imagined reward for achieving goals, but fail to equally magnify the negative impulse associated with the potential outcomes of failure. (I suggest reading the linked article for a better explanation.)

Having once been a teenager myself, this makes some sense to me in terms of how I learned to use computers. A large part of the advantage of learning computers as a child is the lack of fear of “doing something wrong.” If I didn’t know what I was doing, I would just try a bunch of things till something worked, never worrying about the consequences of making a mess of the computer.  I have often taught people who came to computers late in their lives, and the one feature that always comes to the forefront is their (justified) fear of making a mess of their computer.

In fact, that was the greatest difference between my father and me in terms of learning curve: when encountering an obstacle, my father would stop as though hitting a brick wall until he could find someone to guide him to a solution, while I’d throw myself at it till I found a hole through it, or a way around it. (Rewriting DOS config files, editing registries and modifying IRQ settings on add-on boards were not for the faint of heart in the early ’90s.)

As someone now in my 30s, I can see the value of both approaches. My father never did mess up the computer, and managed to get the vast majority of things working. On the other hand, I learned dramatically faster, but did manage to make a few messes – all of which I eventually cleaned up (learning how to fix computers in the process). In fact, learning how to fix your mistakes is often more painful than making them in the first place, so my father’s method was clearly superior in terms of sheer pain avoidance (negative reinforcement, essentially).

However, in the long run, I think there’s something to be said for the teen’s approach: you can move much more agilely (is that a word?) if you throw yourself at problems with the full expectation that you’ll just learn how to solve them in the end.  One can’t be a successful researcher if fear of the unknown is what drives you.  And, if you never venture out into the fringes of the field, you won’t make the great discoveries.  Imagine if Columbus hadn’t been willing to test his theories about the circumference of the earth (which were wrong, by the way) – and no, he wasn’t out to prove the earth was round; even the ancient Greeks knew that.

Incidentally, fear of making a mess of my computer was also the driving fear when I first started learning Linux.  Back in the days before good package management, I was always afraid of installing software because I never knew where to put it.  Even worse was the possibility of doing something that would cause an unrecoverable partition or damage hardware – both of which were actual possibilities in those days if you used the wrong settings in your config files.  However, with the risk/reward ratio tilted firmly towards the benefit of getting a working system, I managed to learn enough to dull that fear.  Good package management also meant that I didn’t have to worry about making a mess of the software while installing things, but that’s another story.

Anyhow, I’m not sure what this says about communicating with teenagers, but it does reinforce the idea that older researchers (myself included) have to lose some of their fear of failure – or fear of insufficient reward – to keep themselves competitive.

Perhaps this explains why older labs depend upon younger post-docs and grad students to conduct research… and the academic cycle continues.