New spam for breakfast

I received an interesting piece of spam this morning. It came among the usual flurry of easily filtered spam, which mainly consists of people doing SEO (search engine optimization) to push their sites to the top of the search engine results. That is to say, it mostly consists of a bogus comment and a link to something like an online pharmacy.

This morning, the link was a surprise…  check it out:

Author : Jaelyn (IP: 173.230.129.176 , li169-176.members.linode.com)
E-mail : www.droman827@misterpaws.net
URL    : http://www.bing.com/
Whois  : http://whois.arin.net/rest/ip/173.230.129.176
Comment: I'm not easily irpemssed but you've done it with that posting.

You’ll notice the usual typo in the comment, which is supposed to help it get past the filters, which it did in this case, but more surprisingly, the IP actually traces back to someone’s web page – just a random blog. My guess is that the computer hosting that blog has a virus which is pumping out the spam.

The most unusual thing about this is what it’s promoting:

  1. The web site being promoted is bing.com.
  2. If a virus is doing the promoting, it’s almost guaranteed to be running on a Microsoft computer.

If I were into conspiracy theories, I’d wonder if Microsoft has now taken to paying virus creators to promote its web site using viruses that target Microsoft computers.

Yeeeesh.  Even Microsoft couldn’t sink that low…  but really, I would like to know who is behind this campaign.  Promoting bing through spam comments is pretty despicable – but, then again, not something I’d put beyond Microsoft.

Womanspace – last lap.

I wrote a comment on Ed Rybicki’s blog, which is still awaiting moderation.  I’m not going to repeat what I said there, but I realized I had more to say than what I’d already written.  Specifically, I have much more to say about a comment he wrote on this article:

PS: “why publish something that you don’t believe in is another story” – no, it’s just that science fiction allows one to explore EVERYTHING, including what you don’t believe in.

Ed makes a great point – science fiction is exactly the right vehicle for exploring things that you don’t believe in.  Indeed, it’s been used exactly that way since the genre was invented.  You could say that Gulliver’s Travels was a fantastic use of early science fiction, exploring a universe that mocked all sorts of contemporary idiocy that the author (Swift) disagreed with.

So, yes, I see Ed’s point – and it’s a good one.  However, I’m going to have to disagree with Ed on the broader picture.  Science fiction is perfect for exploring issues that you don’t believe in precisely because you can transplant them into similar or parallel situations where their flaws are laid bare.

For instance, if you want to write about how terrible apartheid is, you don’t set a science fiction novel in South Africa in the 1990s; you set it on another planet where two civilizations clash – and you can explore the themes away from the flashpoint issues that are rife in the real-world conflict. (Orson Scott Card explores a lot of issues of this type in his novels.)

The issue with Ed’s article – and there are plenty to choose from – is that he chose to engage with the lowest form of science fiction: inclusion of some “vaguely science-like device” that casts no great insight into anything.  Science fiction, as a vehicle, is all about where you take it.

The premise would be equally offensive if he had picked a race (“Filipinos only get by because they have access to another dimension to compensate for their height”), a religion (“Christians use another dimension to hide from criticism leveled at their holy book”), or an age (“Anyone who can hold a job after the age of 65 is clearly doing so because they’re able to access another dimension”).

Ed could have made much better use of the vehicle he chose to drive.  He could have invented an alien species in which only one gender has access to a dimension, he could have used the alternate dimension to enable women to do things men can’t (and no, I don’t buy that men can’t shop efficiently) or he could have used his device to pick apart injustices that women face in competing with men.

Instead of using his idea to explore the societal consequences of the plot device, he uses it to reinforce a stereotype.

That, to me, is not a good use of science fiction.  And the blame doesn’t just go to the author – it goes to the editors.  As a long-time reader of science fiction, I can tell when a story doesn’t work and when it fails to achieve its desired effect.  This story neither worked nor caused anyone to question their own values.  (It does, however, make me wonder about the editor’s judgment in choosing to print it, as well as the author’s judgment in allowing it to be printed in a high-profile forum.)

So, let me be clear – I despise the use of the stereotypes about women that Ed chose to explore. The belief that exploring gender issues this way is any less sensitive than exploring race, religion or age is ridiculous – and shows a measure of bad judgement.

Having come up with a great tool (alternate dimensions) for making a comment on society (women and men aren’t treated equally), he completely missed the opportunity to use the venue (science fiction) to set the story in a world where he could have explored the issue and shown us something new.  In essence, he threw away a golden opportunity to cause his audience to ask deep questions and take another look at the issue from a fresh perspective – exactly what science fiction is all about.

Ed’s not a villain – but he’s not a great science fiction writer either.

Where’s the collaboration?

I had another topic queued up this morning, but an email from my sister-in-law reminded me of a more pressing beef: the lack of collaboration in the sciences. And, of course, I have no statistics to back this up, so I’m going to put this out there and see if anyone has anything to say on the topic.

My contention is that the current method of funding scientists is the culprit driving less efficient science, mixed with a healthy dose of zero-sum-game thinking.

First, my biggest pet peeve is that scientists – and bioinformaticians in particular – spend a lot of time reinventing the wheel.  How many SNP callers are currently available?  How many ChIP-Seq packages? How many aligners?  And, more importantly, how can you tell one from the other?  (How many of the hundreds of SNP callers have you actually used?)

It’s a pretty annoying aspect of bioinformatics that people seem to feel the need to start from scratch on a new project every time they say “I could tweak a parameter in this alignment algorithm…” – and then off they go, writing aligner #23,483,337 from scratch instead of modifying an existing one.  At some point, we’ll have more aligners than genomes!  (Ok, that’s shameless hyperbole.)

But the point stands.  Bioinformaticians create a plethora of software that solves problems that are not entirely new.  While I’m not saying that bioinformaticians are working on solved problems, I am asserting that the creation of novel software packages is an inefficient way to tackle problems that someone else has already invested time and money into building software for. But I’ll come back to that in a minute.

But why is the default behavior to write your own package instead of building on top of an existing one?  Well, that’s clear: Publications.  In science, the method of determining your progress is how many journal publications you have, skewed by some “impact factor” for how impressive the name of the journal is.  The problem is that this is a terrible metric to judge progress and contribution.  Solving a difficult problem in an existing piece of software doesn’t merit a publication, but wasting 4 months to rewrite a piece of software DOES.

The science community, in general, and the funding community more specifically, will reward you for doing wasteful work instead of focusing your energies where it’s needed. This tends to squash software collaborations before they can take off simply by encouraging a proliferation of useless software that is rewarded because it’s novel.

There are examples of bioinformatics packages where collaboration is a bit more encouraged – and those provide models for more efficient ways of doing research.  For instance, in the molecular dynamics community, Charmm and Amber are the two software frameworks around which most people have gathered. Grad students don’t start their degree by being told to rewrite one package or the other, but are instead told to learn one and then add modules to it.  Eventually the modules are released along with a publication describing the model.  (Or left to rot on a dingy hard drive somewhere if they’re not useful.)   Publications come from the work done and the algorithm modifications being explained.  That, to me, seems like a better model – and means everyone doesn’t start from scratch.

If you’re wondering where I’m going with this, it’s not towards the Microsoft model where everyone does bioinformatics in Excel, using Microsoft generated code.

Instead, I’d like to propose a coordinated bioinformatics code base.  Not a single package, but a unified set of hooks.  Imagine one code base, where you could write a module and add it to a big public repository of bioinformatics code – and re-use a common (well-debugged) core set of functions that handle many of the common pieces.  You could swap out aligner implementations and have modular common output formats.  You could build a ChIP-Seq engine, and use modular functions for FDR calculations, replacing them as needed.  Imagine you could collaborate on code design with someone else – and when you’re done, you get a proper paper on the algorithm, not an application note announcing yet another package.
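To make the idea concrete, here’s a minimal sketch of what such a hook system could look like. Every name here is hypothetical – it only illustrates how swappable modules registered against a common core might work:

```python
# Hypothetical sketch of a shared bioinformatics core with swappable modules.
# None of these names are real packages -- they only illustrate the hook idea.

registry = {}

def register(kind, name):
    """Decorator: add an implementation to the shared registry."""
    def wrap(cls):
        registry[(kind, name)] = cls
        return cls
    return wrap

@register("aligner", "naive-exact")
class NaiveExactAligner:
    def align(self, read, reference):
        # A real, community-maintained implementation would live here;
        # this stand-in just reports the first exact match position.
        return {"read": read, "position": reference.find(read)}

@register("fdr", "benjamini-hochberg")
class BenjaminiHochbergFDR:
    def correct(self, pvalues):
        """Benjamini-Hochberg step-up adjustment of a list of p-values."""
        n = len(pvalues)
        order = sorted(range(n), key=lambda i: pvalues[i])
        adjusted = [0.0] * n
        prev = 1.0
        for rank in range(n, 0, -1):
            i = order[rank - 1]
            prev = min(prev, pvalues[i] * n / rank)
            adjusted[i] = prev
        return adjusted

def get(kind, name):
    """Fetch a registered implementation -- callers never care which one."""
    return registry[(kind, name)]()

# A pipeline can now swap components without rewriting anything:
aligner = get("aligner", "naive-exact")
print(aligner.align("ACGT", "TTACGTTT"))   # matches at position 2
fdr = get("fdr", "benjamini-hochberg")
print(fdr.correct([0.01, 0.04, 0.03]))
```

Swapping in a different aligner would then be a one-line change to the `get()` call – the rest of the pipeline, and everyone else’s modules, stay untouched.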

(We have been better in the past couple years with tool sets like SAMTools, but that deals with a single common file format.  Imagine if that also allowed for much bigger projects like providing core functions for RNA-Seq or CNV analysis…  but I digress.)

Even better, if we all developed around a single set of common hooks, you can imagine that, at the end of the day (once you’ve submitted your repository to the main trunk), someone like the Galaxy team would simply vacuum up your modules and instantly make your code available to every bioinformatician and biologist out there.  Instant usability!

While this model of bioinformatics development would take a small team of core maintainers for the common core and hooks, much the same way Linux has Linus Torvalds working on the Kernel, it would also cut down severely on code duplication, bugs in bioinformatics code and the plethora of software packages that never get used.

I don’t think this is an unachievable goal, either for the DIY bioinformatics community, the Open Source bioinformatics community or the academic bioinformatics community.  Indeed, if all three of those decided to work together, it could be a very powerful movement.  Moreso, corporate bioinformatics could be a strong player in it, providing support and development for users, much the way corporate Linux players have done for the past two decades.

What is needed, however, is buy-in from some influential people, and some influential labs.  Putting aside their own home grown software and investing in a common core is probably a challenging concept, but it could be done – and the rewards would be dramatic.

Finally, coming back to the funding issue.  Agencies funding bioinformatics work would also save a lot of money by investing in this type of framework.  It would ensure that more time is spent on useful coding, that publications do more to describe algorithms, and that higher quality code is produced at the end of the day.  The big difference is that they’d have to start accepting that bioinformatics papers shouldn’t be about “new software”, but about “new statistics”, “new algorithms” and “new methods” – which may require a paradigm change in the way we evaluate bioinformatics funding.

Anyhow, I can always dream.

Notes: Yes, there are software frameworks out there that could be used to get the ball rolling.  I know Galaxy has some fantastic tools, but (if I’m not mistaken) it doesn’t provide a common framework for coding – only for interacting with the software.  I’m also aware that Charmm and Amber have problems – mainly because they were developed by competing labs that failed to become entirely inclusive of the community, or to invest substantially in maintaining the infrastructure in a clean way.  Finally, yes, the licensing of this code would determine the extent of corporate participation, but the GPL provides at least one successful example of this working.

Speaking English?

I have days where I wonder what language comes out of my mouth, or if I’m actually having conversations with people that make sense to anyone.

Due to unusual circumstances (Translation to English: my lunch was forcibly ejected from the fridge at work, which was incompatible with the survival of the glass-based container it was residing in at the time of the incident), I had to go out to get lunch. In the name of getting back to work quickly, as Thursdays are short days for me, I went to Wendy’s. This is a reasonable approximation of the conversation I had with one of the employees.

Employee: “What kind of dressing for your salad?”

Me: “Honey-dijon, please.”

Employee: “What kind of dressing do you want?”

Me: “Honey-dijon.”

Employee: “dressing.”

Me: “Honey-dee-john”

Employee: “What kind of dressing for your salad?”

Me: “Honey-dijahn. It says honey-dijon on the board, it’s a dressing, right?”

Employee: “You have the salad with your meal?”

Me: “yes..”

Employee: “You want the Honey Mustard?”

Me: “Yes.”

Sometimes I just don’t get fast food joints – they make me wonder if I have Asperger’s syndrome. After that conversation, I wasn’t even going to touch the issue that my “Sprite, no ice” had more ice than Sprite.

Time to publish?

Although not quite the first time I’ve been told that I’m slacking in my life, I got the lecture from my supervisor yesterday. To paraphrase: “You’re sitting on data. Publish it now!”

I guess there’s a spectrum of people out there in research: those who publish fast and furious, and those who publish slowly and painstakingly. I’m on the far end of that spectrum: I really like to make sure that the data I have is really right before pushing it out the door.

This particular data set was collected about a year and a half ago, back when 36-bp Illumina reads were all the rage, so yes, I’ve been sitting on it for a long time. However, if you read my notebooks, there’s a clear evolution. Even in my database, the table names are marked as “tbl_run5_”, so you can get an idea of how many times I’ve done this analysis. (I didn’t start with the database on the first pass.)

At this point, and as of late last week (aka, Thursday), I’m finally convinced that my analysis of the data is reproducible, reliable and accurate – and I’m thrilled to get it down in a paper. I just look back at the lab book full of markings and have to wonder what would have happened if I’d published earlier… bleh!

So, this is my own personal dilemma: how do you publish as quickly as possible without opening the door to making mistakes? I always err on the side of caution, but judging by what makes it out to publication, maybe that’s not the right path. I’ve heard stories of people sending results they knew to be incorrect to the reviewers, on the assumption that they could fix things up by the time the reviewers came back with comments.

Balancing publication quality, quantity and speed is probably another of those necessary skills that grad students will just magically pick up somewhere along the way towards getting a PhD. (Others on the list include teaching a class of 300 undergraduates, getting and keeping grants, and starting up your own lab.)

I think I’m going to spend a few minutes this afternoon (between writing paragraphs, maybe?) looking for a good grad school HOWTO. The few I’ve come across haven’t dealt with this particular subject, but I’m sure it’s out there somewhere.

3 year post doc? I hope not!

I started replying to a comment left on my blog the other day and then realized it warranted a little more than just a footnote on my last entry.

This comment was left by “Mikael”:

[…] you can still do a post-doc even if you don’t think you’ll continue in academia. I’ve noticed many life science companies (especially big pharmas) consider it a big plus if you’ve done say 3 years of post-doc.

I definitely agree that it’s worth doing a post-doc, even if you decide you don’t want to go on through the academic pathway. I’m beginning to think that the best time to make that decision (ivory tower vs indentured slavery) may actually be during your post-doc, since that will be the closest you come to being a professor before making the decision. As a graduate student, I’m not sure I am fully aware of the risks and rewards of the academic lifestyle. (I haven’t yet taken a course on the subject, and one only gets so much of an idea through exposure to professors.)

However, at this point, I can’t stand the idea of doing a 3-year post-doc. After 6 years of undergrad, 2.5 years of a masters, 3 years of (co-)running my own company, and about 3.5 years of doing a PhD by the time I’m done… well, 3 more years of school is about as appealing as going back to the wet lab. (No, glassware and I don’t really get along.)

If I’m going to do a post-doc (and I probably will), it will be a short and sweet one – no more than a year and a half at the longest. I have friends who are stuck in 4-5 year post-docs and have heard of people doing 10-year post-docs. I know what a post-doc that long means: “Not a good career-building move.” If you’re not getting publications out quickly in your post-doc, I can imagine it won’t reflect well on your C.V., destroying your chances of moving into the limited number of faculty positions – and wreaking havoc on your chances of getting grants.

Still, it’s more about what you’re doing than how long you’re doing it. I’d consider a longer post-doc if it’s in a great lab with the possibility of many good publications. If there’s one thing I’ve learned from discussions with collaborators and friends who are years ahead of me, it’s that getting into a lab where publications aren’t forthcoming – and where you’re not happy – can burn you out of science quickly.

Given that I’ve spent this long as a science student (and it’s probably far too late for me to change my mind on becoming a professional musician or photographer), I want to make sure that I end up somewhere where I’m happy with the work and can make reasonable progress: this is a search that I’m taking pretty seriously.

[And, just for the record, if a company needs me to do 3 years of post-doc at this point, I have to wonder just who it is I’m competing with for that job – and what it is that they think you learn in your 2nd and 3rd years as a post-doc.]

With that in mind, I’m also going to put my (somewhat redacted) resume up on the web in the next few days. It might be a little early – but as I said, I’m taking this seriously.

In the meantime, since I want to actually graduate soon, I’d better go see if my analyses were successful. (=

Risk Factors #2 and thanks to Dr. Steve.

Gotta love drive-by comments from Dr. Steve:

I don’t have time to go into all of the many errors in this post, but for a start, the odds ratio associated with the celiac snp is about 5-10X *per allele* (about 50X for a homozygote). This HLA allele accounts for about 90% of celiac disease and its mode of action is well understood.

I understand this is just a blog and you are not supposed to be an expert, but you should do some basic reading on genetics before posting misinformation. Or better yet, leave this stuff to Daniel MacArthur.

While even the 6th grade bullies I knew could give Dr Steve a lesson in making friends, I may as well at least clarify my point.

To start with, Dr. Steve made one good point. I didn’t know what the risk factor for a single given Celiac SNP is – and thanks to Dr. Steve’s incredibly educational message – I still don’t. I simply made up a risk factor, which was probably the source of Dr. Steve’s confusion. (I did say I made it up in the post, but apparently hypothetical situations aren’t within Dr. Steve’s repertoire.)

But let’s revisit how risk factors work, as I understand them. If someone would like to tell me how I’m wrong, I’ll accept that criticism. Telling me I’m wrong without saying why is useless, so don’t bother.

A risk factor is a multiplicative factor that indicates your risk of expressing a given phenotype relative to the general population. If you have a risk factor of 5, and the phenotype appears in 10% of the general population, that means you have a 50% chance of expressing the phenotype. (0.10 × 5 = 0.5, which is 50%.)

In more complex cases, say with two independent SNPs, each with an independent risk factor, you multiply the set of risk factors by the probability of the phenotype appearing in the general population. (You don’t need me to do the math for you, do you?)
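Spelled out as code, the arithmetic really is that simple – a minimal sketch using the same made-up numbers as the hypothetical above:

```python
# Toy risk-factor arithmetic: multiply the background (population) rate
# of the phenotype by each independent SNP's risk factor.
# All numbers here are hypothetical, exactly as in the post.

def phenotype_risk(background_rate, risk_factors):
    """Background phenotype rate times the product of independent risk factors."""
    risk = background_rate
    for rf in risk_factors:
        risk *= rf
    return risk

# One SNP with a risk factor of 5 on a 10% background rate:
print(phenotype_risk(0.10, [5]))       # -> 0.5, i.e. a 50% chance

# Two independent SNPs (made-up risk factors 5 and 1.2), same background:
print(phenotype_risk(0.10, [5, 1.2]))  # -> 0.6, i.e. a 60% chance
```

Note that without the `background_rate` argument the function couldn’t return anything meaningful – which is exactly the point about risk factors being useless without the background risk.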

My rather long-winded point was that discussing risk factors without discussing the background rate of the disease in the population is pointless – unless you know that the risk factor leads to a diagnostic test and predicts, in a demonstrated, statistically significant manner, that the information is actionable.

Where I went out on a limb was in discussing other unknowns: error rates in the test, and possible other factors. Perhaps Dr. Steve knows he has a 0% error rate in his DTC SNP calls, or assumes as much – I am just not ready to make that assumption. Another thing Dr. Steve may have objected to was my point about extraneous environmental factors, which may be included in the risk factor, although I just passed over it in my previous post without much discussion.

(I would love to hear how a SNP risk factor for something like Parkinson’s disease would be modulated by Vitamin D levels depending on your latitude. It can’t possibly be built into a DTC report. Oh, wait, this is hypothetical again – Sorry Dr. Steve!)

My main point from the previous post was that I have a difficult time accepting that genomics consultants consider a “risk factor” a useful piece of genomic information in the absence of an accompanying “expected background phenotypic risk.” A risk factor is simply a modulator of that risk, and if you talk about a risk factor you absolutely need to know what the background risk is.

Ok, I’m done rehashing my point from the previous post, and that takes me to my point for today:

Dr. Steve, telling people who have an interest in DTC genomics to stay out of the conversation in favor of the experts is shooting yourself in the foot. Whether it’s me or someone else, we’re going to ask the questions, and telling us to shut up isn’t going to get the questions answered. If I’m asking these questions, and contrary to your condescending comment I do have a genomics background, people without a genomics background will be asking them as well.

So I’d like to conclude with a piece of advice for you: maybe you should leave the discussion to Daniel MacArthur too – he’s doing a much better job of spreading information than you are, and he does it without gratuitously insulting people.

And I thought Doctors were taught to have a good bedside manner.

10 minutes in a room with Microsoft

As the title suggests, I spent 10 minutes in a room with reps from Microsoft. It counts as probably the 2nd least productive time span in my life – second only to the hour I spent at lunch while the Microsoft reps told us why they were visiting.

So, you’d think this would be educational, but in reality, it was rather insulting.

Wisdom presented by Microsoft during the first hour included the fact that Silverlight is cross platform, Microsoft is a major supporter of interoperability and that bioinformaticians need a better platform to replace bio{java|perl|python|etc} in .net.

My brain was actively leaking out of my ear.

My supervisor told me to be nice and courteous – and I was, but sometimes it can be hard.

The 30-minute meeting was supposed to be an opportunity for Microsoft to learn what my code does, and to help them plan out their future bioinformatics tool kit. Instead, they showed up with 8 minutes remaining in the half hour, during which another grad student and I were expected to explain our theses and still allow for 4 minutes of questions. (Have you ever tried to explain two thesis projects in 4 minutes?)

The Microsoft reps were all kind and listened to our spiel, and then engaged in a round-table question and discussion. What I learned during the process was interesting:

  • Microsoft people aren’t even allowed to look at GPL software – legally, they’re forbidden.
  • Microsoft developers also have no qualms about telling other developers “we’ll just read your paper and re-implement the whole thing.”

And finally,

  • Microsoft reps just don’t get biology development: the questions they asked all skirted around the idea that they already knew what was best for developers doing bioinformatics work.

Either they know something I don’t know, or they assumed they did. I can live with that part, though – they probably know lots of things I don’t know. Particularly, I’m sure they know lots about coding for biology applications that require no new code development work.

So, in conclusion, all I have to say is that I’m very glad I only published a bioinformatics note instead of a full description of my algorithms (They’re available for EVERYONE – except Microsoft – to read in the source code anyhow) and that I produce my work under the GPL. While I never expected to have to defend my code from Microsoft, today’s meeting really made me feel good about the path I’ve charted for developing code towards my PhD.

Microsoft, if you’re listening, any one of us here at the GSC could tell you why the biology application development you’re doing is ridiculous. It’s not that I think you should stop working on it – but you should really get to know the users (not customers) and developers out there doing the real work. And yes, the ones doing the innovative and groundbreaking code are mainly working with the GPL. You can’t keep your head in the sand forever.

I hate Facebook – part 2

I wasn’t going to elaborate on yesterday’s rant about hating facebook, but several people made comments, which got me thinking even more.

My main point yesterday was that I hate Facebook because its protocols aren’t open, and it is consequently a “walled garden” approach to social networking. (Here’s another great rant on the subject.) That’s not to say that you can’t work with it – there are plugins for Pidgin that let you chat over the Facebook protocol, and there are clients (as was pointed out to me) that will integrate your IMs with Facebook chat on Windows. But that wasn’t my point anyway.

My point is that Facebook keeps creating its own separate protocols, each independent of the ones that came before. In contrast to a service like Twitter, where the underlying format is XML and thus easily manipulated, using Facebook requires you to work within their universe of standards. (I’m not the first person to come up with this – Google will find you lots of examples of other people blogging the same thing.)

On the whole, that’s not necessarily a bad thing, but common, reusable standards are what drive progress.

For instance, without a common HTML standard, the web would not have flourished – we’d have many independent webs. If AOL had their way, they’d still have you dialing up into their own proprietary Internet.

Without a common electricity format, we’d have to pick the appropriate set of appliances for our homes with independent plugs – buying a hair dryer would be infinitely more painful than it would need to be.

Without a common word processing format, we’d suffer every time we tried to send a document to someone who isn’t using the same word processor as we do. (Oh wait, that’s actually Microsoft’s game – they refuse to properly support the one common document format everyone else uses.)

So, when it comes to Facebook, my hate is this – if they used a simple RSS feed for the wall, I could have used that instead of Twitter on my site. If they used the standard Jabber protocol for their chat, I could have merged it with my Google chat account. And then there’s their private message system… well, that’s just email, but not accessible by IMAP or POP.
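To show just how low the barrier is once a format is open, here’s a sketch that pulls wall-style entries out of an RSS 2.0 feed using nothing but the Python standard library. The feed content is made up, of course – it stands in for the wall feed Facebook doesn’t offer:

```python
# Parse a minimal RSS 2.0 feed with only the standard library -- no
# vendor-specific SDK required. The feed below is a made-up stand-in.
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>My Wall</title>
    <item><title>Status update #1</title><pubDate>Mon, 24 Oct 2011 10:00:00 GMT</pubDate></item>
    <item><title>Status update #2</title><pubDate>Tue, 25 Oct 2011 09:30:00 GMT</pubDate></item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
for item in root.iter("item"):
    # Each wall entry is just a title and a timestamp -- trivially remixable
    # into any other site, desktop applet, or aggregator.
    print(item.findtext("title"), "--", item.findtext("pubDate"))
```

A dozen lines, and the data is free to be remixed anywhere – that is the whole argument for open formats in a nutshell.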

What they’ve done is try to resurrect a business model that the web-unsavvy keep trying. In the short term, it’s pure money. You drive people into it because everyone is using it. The innovative concept makes its adoption rapid and ubiquitous – but then you fall into the trap. The second generation of sites use open standards, and that allows FAR more cool things to be accomplished.

Examples of companies trying the walled garden approach on the net:

AOL and their independent internet, accessible only to AOL subscribers. Current Status: Laughable

Microsoft’s Hotmail, where hotmail users can’t export their email to migrate away. Current Status: GMail fodder.

Yahoo’s communities. Current Status: irrelevant.

Wall Street Journal’s new site. Current Status: ridiculed by people younger than 45.

Apple’s i(phone/pod/tunes/etc). Current Status: frequently hacked, forced to accept the de facto .mp3 format. (No Ogg yet…)

Ok, that’s enough examples. All I have to say is that when Google (or anyone else) gets around to building a social networking site that’s open and easy to play with, it won’t be long before Facebook collapses.

The moral of the story? Don’t invest too much in your facebook profile – it’ll be obsolete in a few years.

I hate facebook

I have a short rant to end the day, brought on by my ever-increasing tie-in between the web and my desktop (now KDE 4.3):

I hate facebook.

It’s not that I hate it the way I hate Myspace, which I hate because it’s so easy to make horribly annoying web pages. It’s not even that I hate it the way I hate Microsoft, which I hate because their business engages in unethical practices.

I hate it because it’s a walled garden. Not that I have a problem with walled gardens in principle, but it’s just so inaccessible – which is exactly what the Facebook owners want. If you can only get at Facebook through the Facebook interface, you have to see their ads, which makes them money if you ever get sucked into them. (You now have to manually opt out of having your picture used in ads shown to your friends… it’s a new option for your profile in your security settings, if you don’t believe me.)

Seriously, the whole Facebook wall can be recreated with Twitter, the photo albums with Flickr, the private messages with Gmail… and all of it can be tied together in one place. Frankly, I suspect that’s what Google’s “Wave” will be.

If I could integrate my twitter account with my wall on facebook, that would be seriously useful – but why should I invest the energy to update my status twice? Why should I have to maintain my own web page AND the profile on facebook…

Yes, it’s a minor rant, but I just wanted to put that out there. Facebook is a great idea and a leader of its genre, but in the end, it’s going to die if its community starts drifting towards equivalent services that are more easily integrated into the desktop. I can now update Twitter using an applet on my desktop – but Facebook still requires a login so that I can see their ads.

Anyhow, If you don’t believe me about where this is all going, wait to see what Google Wave and Chrome do for you. I’m willing to bet desktop publishing will have a whole new meaning, and on-line communities will be a part of your computer experience even before you open your browser window.

For a taste of what’s now on my desktop, check out the OpenDesktop, Remember the Milk and microblog ( or even Choqok) plasmoids.