The three ages of CEGMA: thoughts on the slow burning popularity of some bioinformatics tools

The past

CEGMA is a bioinformatics tool that was originally developed to annotate a small set of genes in novel genome sequences that lack any annotation. The logic is that if you can annotate at least a small number of genes, with some confidence about their gene structure, you can then use them as a training set for an ab initio gene finder that goes on to annotate the rest of the genome.

This tool was developed in 2005 and it took rejections from two different journals before the paper was finally published in 2007. We soon realized that the set of highly conserved eukaryotic genes that CEGMA used could also be adapted to assess the completeness of genome assemblies. Strictly speaking, we can use CEGMA to assess the completeness of the 'gene space' of genomes. Another publication followed in 2009, but CEGMA still didn't gain much attention.

CEGMA was then used as one of the assessment tools in the 2011 Assemblathon competition, and then again for the Assemblathon 2 contest. It's possible that these publications led to an explosion in the popularity of CEGMA. Alternatively, it may have become more popular just because more and more people have started to sequence genomes, and there is a growing need for tools to assess whether genome assemblies are any good.

The following graph shows the increase in citations to our two CEGMA papers since they were first published. I think it is unusual to see this sort of delayed growth in citations to a paper. The citations accrued so far in 2014 suggest that this year's count will be double that of 2013.

[Figure: citations per year to the two CEGMA papers]

The present

All of this is both good and bad. It is always good to see a bioinformatics tool actively being used, and it is always nice to have your papers cited. However, it's bad because the principal developer left our group many years ago, and I have been left to support CEGMA without sufficient time or resources to do so. I will be honest and say that it can be a real pain just to get CEGMA installed (especially on some flavors of Linux). You need to separately install NCBI BLAST+, HMMER, geneid, and genewise, and you can't just use any version of these tools either. Because of these installation problems, I recently started inviting people to submit jobs to us, which I then run locally on their behalf.
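
As an illustration of the first hurdle, here is a minimal Python sketch that just checks whether CEGMA's external dependencies can be found on your PATH. The specific binary names are my assumptions based on standard installs of each package, and it makes no attempt to check versions, which, as noted above, also matter.

    # Minimal sketch: check that CEGMA's external dependencies are on the
    # PATH before attempting a run. Binary names are assumptions based on
    # standard installs of each package; versions are not checked.
    import shutil

    REQUIRED = {
        "NCBI BLAST+": "tblastn",
        "HMMER": "hmmsearch",
        "geneid": "geneid",
        "genewise": "genewise",
    }

    missing = [pkg for pkg, exe in REQUIRED.items() if shutil.which(exe) is None]
    if missing:
        print("Missing dependencies:", ", ".join(missing))
    else:
        print("All external dependencies found.")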

These submission requests made me realize that many people are using CEGMA to assess the quality of transcriptomes as well as genomes. This is not something we ever advocated, but it seems to work. These submissions have also let me take a look at whether CEGMA is performing as expected with respect to the N50 lengths of the genomes/transcriptomes being assessed (I can't use NG50, which I would prefer, because I don't know the expected genome size for these submissions).
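
For anyone unfamiliar with the distinction, here is a minimal Python sketch of the two statistics, using toy contig lengths: N50 is computed relative to the total assembly size, whereas NG50 is computed relative to the expected genome size, which is exactly the number I don't have for most submissions.

    # Minimal sketch of the N50 vs NG50 distinction. N50 is the length L
    # such that sequences of length >= L cover at least half of the total
    # assembly; NG50 uses half of the *expected genome size* instead.
    def n50(lengths, genome_size=None):
        target = (genome_size or sum(lengths)) / 2
        running = 0
        for length in sorted(lengths, reverse=True):
            running += length
            if running >= target:
                return length
        return None  # the assembly covers less than half the genome size

    contigs = [800, 600, 400, 200, 100]    # toy example, total = 2100
    print(n50(contigs))                    # N50 = 600 (800 + 600 >= 1050)
    print(n50(contigs, genome_size=3000))  # NG50 = 400 (needs >= 1500)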

[Figure: CEGMA results plotted against N50 length of the submitted genomes/transcriptomes]

Generally speaking, if your assembly contains longer sequences, then you have more chance of those sequences containing some of the 248 core genes that CEGMA searches for. This is not exactly rocket science, but I still find it surprising — not to mention worrying — that there are a lot of extremely low quality genome assemblies out there, which might not be useful for anything.

The future

We are currently preparing a grant that would hopefully allow us to give CEGMA a much needed overhaul. We have insufficient resources to properly support the current version of CEGMA, but we have many good ideas for how we could improve it. Most notably, we would want to make new sets of core genes based on modern resources such as the eggNOG database. The core genes that CEGMA uses were determined from an analysis of the KOGs database, which is now over a decade old! A lot has changed in the world of genomics since then.

The problem that arises when Google Scholar indexes papers published to pre-print servers

The Assemblathon 2 paper, on which I was lead author, was ultimately published with the online journal Gigascience. However, like an increasing number of papers, it was first released to the arXiv.org pre-print server.

If you are a user of the very useful Google Scholar service and you have also published a paper such that it appears in two places, then you may have run into the same problems that I have. Namely, Google Scholar appears to only track citations to the first place where the paper was published.

It should be said that it is great that Google tracks citations to these pre-print articles at all, though see another post of mine that illustrates just how powerful (and somewhat creepy) Google Scholar's indexing is. However, most people would expect that when a paper is formally published, Google Scholar should track citations to that version as well, preferably separately from the pre-print version of the article.

For a long time with the Assemblathon 2 paper, Google Scholar only seemed to show citations to the pre-print version of the paper, even when I knew that others were citing the Gigascience version. So I contacted Google about this, and after a bit of a wait, I heard back from them:

Hi Keith,

It still get indexed though the information is not yet shown:

http://scholar.google.com/scholar?q=http%3A%2F%2Fwww.gigasciencejournal.com%2Fcontent%2F2%2F1%2F10+&btnG=

If one version (the arXiv one in this case) was discovered before the last major index update, the information for the other versions found after the major update would not appear before the next major update.

Their answer still raises some issues, and I'm waiting to hear back on my follow-up question: how often does the index get updated? Checking Google Scholar today, it initially appears as if they are still only tracking the pre-print version of our paper:

[Screenshot: Google Scholar entry for the Assemblathon 2 paper, 2014-01-27]

However, after checking the individual citations, I see that 9 out of the 10 most recent ones cite the Gigascience version of the paper. So in conclusion:

  1. Google Scholar will start to track the formally published version of a paper even when the paper first appeared on a pre-print server.
  2. Somewhat annoyingly, they do not separate out the citations and so one Google Scholar entry ends up tracking two versions of a paper.
  3. The Google Scholar entry that is tracking the combined citations only lists the pre-print server in the 'Journal' name field; you have to check individual citations to see if they are citing the formal version of the publication.
  4. Google Scholar has a 'major' indexing cycle and you may have to wait for the latest version of the index to be updated before you see any changes.

JABBA, ORCA, and more bad bioinformatics acronyms

JABBA awards — Just Another Bogus Bioinformatics Acronym — are my attempt to poke a little bit of fun at the crazy (and often nonsensical) acronyms and initialisms that are sometimes used in bioinformatics and genomics. When I first started handing out these awards in June 2013, I didn't realize that I was not alone in drawing attention to these unusual epithets.

http://orcacronyms.blogspot.com

ORCA, the Organization for Really Contrived Acronyms, is a fun blog set up by an old colleague of mine, Richard Edwards. ORCA sets out to highlight strange acronyms across many different disciplines, whereas my JABBA awards focus on bioinformatics. Occasionally there is some overlap, and so I will point you to the latest ORCA post, which details a particularly strange initialism for a bioinformatics database:

ADAN - prediction of protein-protein interAction of moDular domAiNs

Be sure to read Richard's thoughts on this name, and check out some of the other great ORCA posts, including one of my favorites (GARFIELD).

ACGT: a new home for my science-related blog posts

Over the last year I've increasingly found myself blogging about science — and about genomics and bioinformatics in particular — on my main website (keithbradnam.com). This has led to a very disjointed blog portfolio: posts about my disdain for contrived bioinformatics acronyms would sit alongside pictures of my bacon extravaganza.

No longer will this be the case. ACGT will be the new home for all of my scientific contemplations. So what is ACGT all about? Maybe you are wondering 'Are Completed Genomes True?', or maybe you are just on the lookout for someone 'Assessing Computational Genomics Tools'. If so, then ACGT may be a home for such things (as well as 'Arbitrary, Contrived Genome Tittle-Tattle', perhaps).

I've imported all of the relevant posts from my main blog (I'll leave the originals in place for now), and hopefully all of the links work. Please let me know if this is not the case. Now that I have a new home for my scientific musings — particularly those relating to bioinformatics — I hope this will encourage me to write more. See you around!

Keith Bradnam

Paper review: anybody who works in bioinformatics and/or genomics should read this paper!

I rarely blog about specific papers but felt moved to write about a new paper by Jonathan Mudge, Adam Frankish, and Jennifer Harrow who work in the Vertebrate Annotation group at the Wellcome Trust Sanger Institute.

Their paper, now out in Genome Research, is titled: Functional transcriptomics in the post-ENCODE era.

They brilliantly, and comprehensively, list the various ways in which gene architecture — and by extension gene annotation — is incredibly complex and far from a solved problem. However, they also provide an exhaustive description of all the various experimental technologies that are starting to shine a lot more light on this, at times, dimly lit field of genomics.

In their summary, they state:

Modern genomics (and indeed medicine) demands to understand the entirety of the genome and transcriptome right now

I'd go so far as to say that many people in genomics assume that genomes and transcriptomes are already understood. I often feel that too many people enter this field with the false belief that many genomes are complete and that we know about all of the genes in these genomes. Jonathan Mudge et al. start this paper by firmly pointing out that even the simple question of 'what is a gene?' is something that we are far from certain about.

Reading this paper, I was impressed by how comprehensively they have reviewed the relevant literature, pulling in numerous examples that indicate just how complex genes are, and which show that we need to move away from the very protein-centric world view that has dominated much of the history of this field.

LncRNAs, microRNAs, and piwi-interacting RNAs are three categories of RNA that you probably wouldn't find mentioned anywhere in textbooks from a decade ago, but which now — along with 'traditional' non-coding RNAs such as rRNAs, tRNAs, snoRNAs, etc. — probably outnumber protein-coding genes in the human genome. Many parts of this paper tackle the issue of transcriptional complexity, particularly trying to address the all-important question: how much of this is functional?

I found that so many parts of this paper touched on previous, current, and possible future projects in our lab. Producing an accurate catalog of genes, understanding alternative splicing, examining the relationship between mRNA and protein abundances, looking for conservation of signals between species...these are all topics that are near and dear to people in our lab.

Even if you have no interest in the importance of gene annotation — and shame on you if that is how you feel — this paper also serves as a fantastic catalog of the latest experimental techniques that can be used to capture and study genes (e.g. CAGE, ribosome profiling, polyA-seq, etc.).

If you have ever worked with a set of genes from a well curated organism, spare a thought for the huge amount of work that goes into trying to provide those annotations and keep them up to date. I'll leave you with the last couple of sentences from the paper...please repeat this every morning as your new mantra:

Finally, no one knows what proportion of the transcriptome is functional at the present time; therefore, the appropriate scientific position to take is to be open-minded. We thus do not claim that the annotation of the human genome is close to completion. If anything, it seems as if the hard work is just beginning.

More JABBA awards for inventive bioinformatics acronyms

A quick set of new JABBA award recipients. Once again these are drawn from the journal Bioinformatics.

  1. NetWeAvers: an R package for integrative biological network analysis with mass spectrometry data - the mixed capitalization of this software tool is a little hard on the eye. But more importantly, a Google search for 'netweavers' returns lots of links about something entirely different: NetWeavers (and NetWeaving) is already a recognized term in another field.
  2. GIM3E: condition-specific models of cellular metabolism developed from metabolomics and expression data - the '3' in this algorithm's name is deliberately written in superscript by the authors. This implies 'cubed', but I think it really refers to three 'M'-related words, because the full name of the algorithm is 'Gene Inactivation Moderated by Metabolism, Metabolomics and Expression'. GIM3E is not something that is particularly easy to say quickly, though it is much more Google friendly than NetWeavers.
  3. INSECT: IN-silico SEarch for Co-occurring Transcription factors - turning an acronym into the name of a plant or animal is quite common in bioinformatics. A couple of examples are worth mentioning: there is the MOUSE resource (Mitochondria and Other Useful SEquences) and also something called HAMSTeRS (the Haemophilia A Mutation, Structure, Test and Resource Site). The main problem with acronyms like these is that they can be too hard to find using online search tools (try Googling for hamster resources). A secondary issue is that the name just doesn't really connect to what the resource/database/algorithm is about. The INSECT database contains information about 14 different species, only one of which is an insect.

I'll no doubt be posting again the next time I come across some more dubious acronyms.

Top twitter talent: UC Davis genome scientists lead the way

The Next Gen Seq website has just published its 2013 list of the Top N Genome Scientists to Follow on Twitter. Over 10% of this international list of scientists are staff or faculty here at UC Davis, which says a lot about the quality of genomics talent on campus.

It is also worth mentioning that there are many other people at UC Davis who work in genomics and bioinformatics and who use twitter to effectively communicate their research and engage with the community. E.g.

  • @dr_bik - Holly Bik (Postdoc in Jon Eisen's lab)
  • @ryneches - Russel Neches (Grad student in Jon Eisen's lab)
  • @theladybeck - Kristen Beck (Grad student in Ian Korf's lab)
  • @sudogenes - Gina Turco (Grad student in Siobhan Brady's lab...and winner of best twitter account name)

Great to see UC Davis recognized like this.

 

Update

Updated at 9:09 am to reflect that Next Gen Seq have now added Vince Buffalo to the list (he was apparently meant to be on the list anyway).

Another winner of the JABBA award for horrible bioinformatics acronyms

It's time to hand out another JABBA (Just Another Bogus Bioinformatics Acronym) award. Joining the recent recipients is a tool described in the latest issue of the Bioinformatics journal.

I don't have any problem with the acronym itself, and this is not a tool that randomly adds or removes letters from the full name to produce the acronym. So what is my problem? Well, the tool — which calculates a score to assess the local quality of a protein structure — is called the Local Distance Difference Test. And the acronym? Oh, the acronym is just 'lDDT', with a lower-case 'L'.

Now, this might not be so bad were it not for the fact that the fonts used by the Bioinformatics journal (in both the HTML and PDF versions), as well as by the author's own website, make this lower-case 'L' look like the letter I or the number 1.

From the HTML

[Screenshot: 'lDDT' as rendered in the journal's HTML version]

From the PDF

[Screenshot: 'lDDT' as rendered in the journal's PDF version]

From the author's website

[Screenshot: 'lDDT' as rendered on the author's website]

I can't help but imagine that people will only ever read this as IDDT and not LDDT...which of course doesn't bode well if someone ends up Googling for this tool at a later date. Compare a search for LDDT (which finds the correct tool) with a search for IDDT (which doesn't):

[Screenshots: Google search results for LDDT and for IDDT]

Congratulations on being the recipient of another JABBA award!

What's in a name? Better vocabularies = better bioinformatics?

At about 7:00 this morning I was somewhat relieved because my scheduled lab talk had been postponed (my boss was not around), though we were still having the lab meeting anyway.

At about 8:00 this morning, I stumbled across this blog post by @biomickwatson on twitter. I really enjoyed the post and thought I would mention it in the lab meeting. Suddenly, though, that prompted me to think about some other topics relating to Mick's blog post.

Before I knew it, I had made about 30 slides and ended up speaking for most of the lab meeting. I thought I'd add some notes and post the talk on SlideShare.

[Embedded slides: 'What's in a name? Better vocabularies = better bioinformatics?' by Keith Bradnam]

I get very frustrated by people who rely heavily on GO term analysis without having a good understanding of what Gene Ontology terms are, or how they get assigned to database objects. There are too many published analyses which treat an enrichment of a particular GO term as a reliable indicator that there is a difference between datasets X and Y. Do they ever check to see how those GO terms were assigned? No.
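
To illustrate the kind of sanity check I mean, here is a minimal Python sketch that tallies the evidence codes in a standard GAF 2.x annotation file (tab-separated, with the evidence code in column 7); the file name is hypothetical. A large fraction of IEA (Inferred from Electronic Annotation) terms would mean that most of the annotations were never manually reviewed.

    # Minimal sketch: tally GO evidence codes from a GAF 2.x file so you
    # can see *how* terms were assigned before trusting an enrichment
    # result. Comment lines in a GAF file start with '!'.
    from collections import Counter

    def evidence_code_counts(gaf_path):
        counts = Counter()
        with open(gaf_path) as gaf:
            for line in gaf:
                if line.startswith("!"):
                    continue
                fields = line.rstrip("\n").split("\t")
                if len(fields) > 6:
                    counts[fields[6]] += 1  # column 7 = evidence code
        return counts

    counts = evidence_code_counts("my_species.gaf")  # hypothetical file
    total = sum(counts.values())
    for code, n in counts.most_common():
        print(f"{code}\t{n}\t{n / total:.1%}")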

New recipient of the Just Another Bogus Bioinformatics Acronym (JABBA) award

It was only a few weeks ago that I gave out the last JABBA award. One of the recipients that time was a database — featuring excessive use of mixed-case characters — called 'mpMoRFsDB'.

Well it seems that if you work on 'MoRFs' (Molecular Recognition Features) then you must love coming up with fun acronyms. This week in BMC Bioinformatics we have another MoRFs related tool that is worthy of a JABBA award:

The oh-so-catchy 'MFSPSSMpred' (Masked, Filtered and Smoothed Position-Specific Scoring Matrix-based Predictor) is the kind of name that requires you to first sit down and take a deep breath before attempting to pronounce it. Just imagine having to tell someone about this tool:

"Hi Keith, can you recommend any bioinformatics tools for identifying MoRFs?"

"Why certainly, have you tried em-eff-ess-pee-ess-ess-em-pred?"

Congratulations MFSPSSMpred, you join the ranks of former JABBA winners.