Unpronounceable bioinformatics database names

First, a quick reminder that an acronym is something that is meant to be pronounced as an entire word (e.g. NATO, AIDS, etc.). Sometimes these end up becoming regular, non-capitalized words (e.g. radar, laser).

In contrast, an initialism is something where the component letters are read out individually (e.g. BBC, CPU). In bioinformatics, there are also names which are part acronym and part initialism (e.g. GWAS…which I have only ever heard pronounced as gee-was).

Most initialisms that we use in everyday life tend to be short (2–4 letters) because this makes them easier to read and to pronounce. As you move past 4 letters, you run the risk of making your initialism unpronounceable and unmemorable.

So here are some recently published bioinformatics tools with names that are a bit cumbersome to repeat. For each one I include how someone might try to pronounce it. Try repeating these names quickly and, for an added test, see how many of them you can remember 5 minutes after reading this:

5 characters

6 characters

7 characters

And the winner goes to…

Conclusions

If you want people to actually use your bioinformatics tools, then you should aim to give them names that are memorable and pronounceable.

More bioinformatics link rot: where is EUROCarbDB?

Update 2015-01-19 15.19: I contacted the corresponding author about this and now the EurocarbDB link in the original paper works.

First published online a few months ago in the journal Bioinformatics (September 12th, 2014):

The name of this resource is not the snappiest out there ("Oh, you're interested in resources for glycomics, have you tried EuroCarbDB-open parentheses-cee-cee-ar-cee-close parentheses?"), but leaving that aside, the paper lists the following URLs as part of the abstract:

Availability and implementation: The installation with the glycan standards is available at http://glycomics.ccrc.uga.edu/eurocarb/. The source code of the project is available at https://code.google.com/p/ucdb/.

The first link says that the server is down. The parent page (http://glycomics.ccrc.uga.edu/) seems to make no mention at all of this resource (at least none that I can find). Following the second link in the abstract, I found the following text:

An incubator project for the future direction of the EUROCarbDB project. More to follow.... This new project is in it's infancy - please use the original EUROCarbDB site. A new project will be hosted at UniCarb-DB (http://www.unicarb-db.org to reflect the continued work of the developers

I followed the first of these links to the 'original' EUROCarbDB site. This Google Code page in turn told me that the online version of EuroCarbDB is hosted by the European Bioinformatics Institute (EBI).

Following the link for the online version of EUROCarbDB takes me to what seems to be a closed down site at the EBI titled 'What happened to the EuroCarbDB website?' which has this to say:

The pilot project ended in 2009 but efforts to obtain renewed funding have unfortunately not been successful. The EuroCarbDB website was hosted by the Protein Data Bank in Europe at EMBL-EBI but has now been discontinued

So that's all very helpful then.

Are there too many biological databases?

The annual 'Database' issue of Nucleic Acids Research (N.A.R.) was recently published. It contains a mammoth set of 172 papers that describe 56 new biological databases as well as updates to 115 others. I've already briefly commented on one of these papers, and expect that I'll be nominating several others for JABBA awards.

In this post I just wanted to comment on the seemingly inexorable growth of these computational resources. There are databases for just about everything these days. Different species, different diseases, different types of sequence, different biological mechanisms…every possible biological topic has a relevant database, and some topics have several.

It is increasingly hard to even stay on top of just how many databases are out there. Wikipedia has a listing of biological databases as well as a category for biological databases, but both of these barely scratch the surface.

So maybe one might turn to 'DBD': a Database of Biological Databases, or even MetaBase, which also describes itself as a 'Database of Biological Databases' (please don't start thinking about creating 'DBDBBDB': A Database of Databases of Biological Databases!).

However, the home pages of these two sites were last updated in 2008 and 2011 respectively, perfectly reflecting one of the problems in the world of biological databases…they often don't get removed when they go out of date. In a past life, I was a developer of several databases at something called UK CropNet. Curation of these databases, particularly the Arabidopsis Genome Resource, effectively stopped when I left the job in 2001, but the databases were only taken offline in 2013!!!

So old, out-of-date databases are part of the problem, but the other issue is that there seem to be some independent databases that — in an ideal world — should really be merged with similar databases. For example, there is a database called BeetleBase that describes its remit as follows:

BeetleBase is a comprehensive sequence database and important community resource for Tribolium genetics, genomics and developmental biology.

This database has been around since at least 2007 though I'm not entirely sure if it is still being actively developed. However, I was still surprised to see this paper as part of the N.A.R. Database issue:

iBeetle-Base has seemingly been developed by a separate group of people from those behind BeetleBase. Is it helpful to the wider community to have two databases like this, with confusingly similar names? It's possible that the iBeetle-Base people tried reaching out to the BeetleBase folks to include their data in the pre-existing database, but were rebuffed or found out that BeetleBase is no longer a going concern. Who knows, but it just seems a shame to have so much genomics information for a species split across multiple databases.

I'm not sure what could, or should, be done to tackle these issues. Should we discourage new databases if there are already existing resources that cover much of the subject matter? Should we require the people who run databases to 'wind up' the resources in a better way when funding runs out (i.e. retire databases or make it abundantly clear that a resource is no longer being updated)? Is it even possible to set some minimum standards for database usage that must be met in order for subsequent 'update papers' to get published (i.e. 'X' DB accesses per month)?

diArk – the database for eukaryotic genome and transcriptome assemblies in 2014

A new paper in Nucleic Acids Research describes a database that I was not aware of. The abstract features an eye-catching, not to mention ambitious, claim (the emphasis is mine):

The database…has been developed with the aim to provide access to all available assembled genomes and transcriptomes.

The diArk database currently features data on 2,771 species. There are many options for filtering your search queries, including filtering by 'sequencing type' and by completion status. So when I search for 'completed' genome sequencing projects, it reports that there are 3,626 projects corresponding to 1,848 species. The FAQ has this to say regarding 'completeness':

The term completeness is intended to describe the coverage of the genome and the chance to find all homologs of the gene of interest.

I was a bit put off by the interface to this database. As far as I can tell, diArk mostly contains links to other resources (rather than hosting any sequence information). There are lots of very small icons everywhere which are hard to understand unless you mouse over each one. When I went to the page for Caenorhabditis elegans, I was struck by the confusing nature of just posting links to every C. elegans resource on the web. There are 12 'Project' links listed. Which one gives you access to the latest version of the genome sequence?

diArk summary of Caenorhabditis elegans data

As a final comment, I noticed that the latest entry on the diArk news page is from September 2011, which is a bit worrying (has nothing newsworthy happened in the last 3 years?).

Red flag alert for a bogus bioinformatics acronym

The first JABBA award of 2015 goes to a paper that was published at the end of 2014 (thanks to twitter user @chenghlee for bringing this to my attention). The paper, published in BMC Medical Genomics, has a succinct title that contains a very bogus name:

The title doesn't explicitly reveal the source of the acronym 'FLAGS', but you can probably take a guess. From the abstract:

We termed these genes FLAGS for FrequentLy mutAted GeneS

This gets a JABBA award because a majority (3 out of 5) of the letters in 'FLAGS' are not from the initial letters of words.

101 questions with a bioinformatician #21: Stephen Turner

This post is part of a series that interviews some notable bioinformaticians to get their views on various aspects of bioinformatics research. Hopefully these answers will prove useful to others in the field, especially to those who are just starting their bioinformatics careers.


Stephen Turner is Director of the Bioinformatics Core and Assistant Professor of Public Health Sciences at the University of Virginia School of Medicine.

His blog, Getting Genetics Done, should be required reading for anyone who wishes to get lots of practical, hands-on, advice about doing bioinformatics. This is especially so if you want to know more about R (he has 140 posts on the topic!). He has a great overview about the goal of the blog:

Many resources offer a 10,000-foot view of the current trends in the field, reviews of various technologies, and guidelines on how to effectively design, analyze, and interpret experiments in human genetics and bioinformatics research. By comparison very few resources focus on the mundane, yet critical know-how for those on the ground actually doing the science (i.e. grad students, postdocs, analysts, and junior faculty). Getting Genetics Done aims to fill that gap by featuring software, code snippets, literature of interest, workflow philosophy, and anything else that can boost productivity and simplify getting things done in human genetics research.

You can find out more about Stephen by visiting his aforementioned blog, or by following him on twitter (@genetics_blog). And now, on to the 101 questions...



001. What's something that you enjoy about current bioinformatics research?

I'm faculty in Public Health but my primary position is directing our Bioinformatics Core. That means I get to work on all kinds of projects with a very diverse set of collaborators. Monday I might be assembling plant genomes for a collaborator in the biology department, Tuesday I might be working on RNA-seq in patient kidney biopsies with a urologist in the hospital, the next day I might be figuring out how to best approach hybrid assembly with Nanopore and short read sequencing for a plasmid genome. Every day is something different, and the job never gets boring.



010. What's something that you don't enjoy about current bioinformatics research?

Same answer as 001: working on all kinds of projects with a very diverse set of collaborators.

Seriously, as fun as this can be, I often have to sacrifice depth of expertise for breadth. And I think most other bioinformaticians who exist for collaboration have to do the same. I have to be an expert in data analysis and study design of hundreds of different *-seq assays. I can't spend two months working on hybrid assembly with Nanopore and short read sequencing for one collaborator when I have a PAR-CLIP project, an exome variant-calling/annotation project, a 16S microbial profiling project, and a breakpoint mapping project with other collaborators, all needing the same level of attention to detail.



011. If you could go back in time and visit yourself as an 18 year old, what single piece of advice would you give yourself to help your future bioinformatics career?

Take some programming classes in college, and try contributing to an open-source project.

I, like many other bioinformaticians, am a self-taught programmer. I cut my teeth on Perl years ago before Python was so popular, and have picked up a handful of other generic programming languages and numerical/statistical computing languages since then. But I'm not a software engineer, and at this point I'll only be able to polish my software development practices so much. Sure, most of my code is version controlled, and I know very well how to modularize code with functions, but there's much more to writing and contributing to good software than this. Good science increasingly relies on great software, and not just in genomics. More formal training would have been nice to have.



100. What's your all-time favorite piece of bioinformatics software, and why?

It's not one piece of software, but the Bioconductor community in general is just awesome. Pick any of the applications I mentioned in questions 001 and 010, and there's probably a Bioconductor package to help you with it. Most packages have great documentation, and reliance on a common set of data structures really simplifies things. The mailing list is responsive, and you don't have to have the same thick skin necessary to email R-help.

If I had to nail it down to just one single application, I'm going to have to be unoriginal and go with BEDTools. Way back when, I used to load genomic intervals into MySQL database tables and write impossibly complex (and slow) queries to do very simple BEDTools-y kinds of operations. Just when you think you have a one-of-a-kind "genome arithmetic" problem that no one has ever seen before, you'll often find that you're not so special after all and there's a BEDTools subcommand or recipe that gets you exactly what you need.



101. IUPAC describes a set of 18 single-character nucleotide codes that can represent a DNA base: which one best reflects your personality, and why?

Besides knowing the ins and outs of many different kinds of NGS studies, what makes a bioinformatician a great scientist is being really good at lots of things at once: a skilled programmer, a skeptical statistician, an influential writer, a perceptive reader, a captivating speaker, a convincing salesman, a careful financial planner, a creative graphic designer, a thoughtful experimentalist, and a friendly colleague. I'm certainly not all of these things, but I'm still going to go with N.

University of Spin: every British university is ranked #1 for research

The UK government published the latest Research Excellence Framework (REF) results today. One goal of this exercise is to make it easier for everyone to see who is winning and losing at academic research [1]. The Times Higher Education website has produced a Table of Excellence showing the overall rankings.

The underlying results are broken down by subject area, measured using three different criteria (‘Output’, ‘Impact’, and ‘Environment’), each of which is further broken down into four main grades (1* through to 4*). All of which means that everyone has something to cheer about.

If you looked at the #REF2014 hashtag on twitter today, you might conclude that everyone is a winner. I’ve gathered together some of these tweets in the Storify below, but also check out the tweets at the end which offer further comment regarding all of this spinning:


  1. In reality, these results will be used to distribute future research funding to universities.

Comparisons of computational methods for differential alternative splicing detection using RNA-seq in plant systems

Marc Robinson-Rechavi (@marc_rr) tweeted about this great new paper in BMC Bioinformatics by Ruolin Liu, Ann Loraine, and Julie Dickerson. From the abstract:

The goal of this paper is to benchmark existing computational differential splicing (or transcription) detection methods so that biologists can choose the most suitable tools to accomplish their goals.

As with so many other areas of bioinformatics, there are many methods available for detecting alternative splicing, and it is far from clear which — if any — is the best. This paper attempts to compare eight of them, and the abstract contains a sobering conclusion:

No single method performs the best in all situations

Figure 5 from the paper is especially depressing. It looks at the overlap of differentially spliced genes as detected by five different methods. There are zero differentially spliced genes that all methods agreed on.

Liu et al. BMC Bioinformatics 2014 15:364   doi:10.1186/s12859-014-0364-4
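
Out of curiosity, this kind of agreement check is easy to reproduce for your own data. Here is a minimal Python sketch that intersects gene lists produced by different methods; the file names are invented placeholders (one gene identifier per line), and this is not the analysis pipeline used in the paper.

# Hypothetical sketch: how many genes do all methods agree on?
# The file names are invented placeholders; each file is assumed to hold
# one gene identifier per line.
method_files = ["method1_genes.txt", "method2_genes.txt", "method3_genes.txt",
                "method4_genes.txt", "method5_genes.txt"]

gene_sets = []
for path in method_files:
    with open(path) as fh:
        gene_sets.append({line.strip() for line in fh if line.strip()})

shared = set.intersection(*gene_sets)
print(f"Genes called differentially spliced by all {len(gene_sets)} methods: {len(shared)}")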

Understanding MAPQ scores in SAM files: does 37 = 42?

The official specification for the Sequence Alignment Map (SAM) format outlines what is stored in each column of this tab-separated value file format. The fifth column of a SAM file stores MAPping Quality (MAPQ) values. From the SAM specification:

MAPQ: MAPping Quality. It equals −10 log10 Pr{mapping position is wrong}, rounded to the nearest integer. A value 255 indicates that the mapping quality is not available.

So if you happened to know that the probability of correctly mapping some random read was 0.99, then the MAPQ score should be 20 (i.e. −10 × log10(0.01)). If the probability of a correct match increased to 0.999, the MAPQ score would increase to 30. So the upper bound of a MAPQ score depends on the level of precision of your probability (though elsewhere in the SAM spec, it defines an upper limit of 255 for this value). Conversely, as the probability of a correct match tends towards zero, so does the MAPQ score.
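
To make that arithmetic concrete, here is a minimal Python sketch of the conversion. It only implements the formula quoted from the SAM spec above; it says nothing about how any particular aligner actually estimates the probability.

import math

def mapq_from_error_prob(p_wrong):
    """MAPQ = -10 * log10(Pr{mapping position is wrong}), rounded to nearest integer."""
    if p_wrong <= 0:
        raise ValueError("probability of a wrong mapping must be greater than zero")
    return round(-10 * math.log10(p_wrong))

# Pr{wrong} = 0.01  (99% chance the mapping is correct)   -> MAPQ 20
# Pr{wrong} = 0.001 (99.9% chance the mapping is correct) -> MAPQ 30
for p in (0.01, 0.001, 0.0001):
    print(p, mapq_from_error_prob(p))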

So I'm sure that the first thing that everyone does after generating a SAM file is to assess the spread of MAPQ scores in your dataset. Right? Anyone?

< sound of crickets >

Okay, so maybe you don't do this. Maybe you don't really care, and you are happy to trust the default output of whatever short read alignment program that you used to generate your SAM file. Why should it matter? Will these scores really vary all that much?
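
If you did want to have a look, here is one way you might tally the scores. This is a minimal sketch that assumes an uncompressed SAM file given as the first command-line argument; for a BAM file you would first convert it to SAM text (e.g. with samtools view) and feed that in instead.

import sys
from collections import Counter

# Tally MAPQ values (5th tab-separated field) from an uncompressed SAM file.
# Usage: python mapq_tally.py alignments.sam
counts = Counter()
with open(sys.argv[1]) as sam:
    for line in sam:
        if line.startswith('@'):   # skip header lines
            continue
        counts[int(line.split('\t')[4])] += 1

total = sum(counts.values())
for mapq in sorted(counts):
    print(f"{mapq}\t{counts[mapq]}\t{100 * counts[mapq] / total:.2f}%")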

Here is a frequency distribution of MAPQ scores from two mapping experiments. The bottom panel zooms in to more clearly show the distribution of low frequency MAPQ scores:

Distribution of MAPQ scores from two experiments: bottom panel shows a zoomed-in view of MAPQ scores with frequencies < 1%.

What might we conclude from this? There seem to be some clear differences between the two experiments. The most frequent MAPQ score in the first experiment is 42, followed by 1. In the second experiment, scores only reach a maximum value of 37, and scores of 0 are the second most frequent value.

These two experiments reflect some real world data. Experiment 1 is based on data from mouse, and experiment 2 uses data from Arabidopsis thaliana. However, that is probably not why the distributions are different. The mouse data is based on unpaired Illumina reads from a DNase-Seq experiment, whereas the A. thaliana data is from paired Illumina reads from whole genome sequencing. However, that still probably isn't the reason for the differences.

The reason for the different distributions is that experiment 1 used Bowtie 2 to map the reads whereas experiment 2 used BWA. It turns out that different mapping programs calculate MAPQ scores in different ways and you shouldn't really compare these values unless they came from the same program.

The maximum MAPQ value that Bowtie 2 generates is 42 (though it doesn't say this anywhere in the documentation). In contrast, the maximum MAPQ value that BWA will generate is 37 (though once again, you — frustratingly — won't find this information in the manual).

The data for Experiment 1 is based on a sample of over 25 million mapped reads. However, you never see MAPQ scores of 9, 10, or 20, something that presumably reflects some aspect of how Bowtie 2 calculates these scores.

In the absence of any helpful information in the manuals of these two popular aligners, others have tried doing their own experimentation to work out what the values correspond to. Dave Tang has a useful post on Mapping Qualities on his Musings from a PhD Candidate blog. There are also lots of posts about mapping quality on the SEQanswers site (e.g. see here, here or here). However, the prize for the most detailed investigation of MAPQ scores — from Bowtie 2 at least — goes to John Urban, who has written a fantastic post on his Biofinysics blog:

So in conclusion, there are 3 important take-home messages:

  1. MAPQ scores vary between different programs and you should not directly compare results from, say, Bowtie 2 and BWA.
  2. You should look at your MAPQ scores and potentially filter out the really bad alignments (see the sketch after this list).
  3. Bioinformatics software documentation can often omit some really important details (see also my last blog post on this subject).
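
Regarding point 2, here is a minimal sketch of what such a filter might look like for an uncompressed SAM file. The threshold of 30 is purely illustrative, and remember from point 1 that a sensible cutoff depends on which aligner produced the scores.

import sys

# Minimal sketch: keep alignments with MAPQ >= THRESHOLD from an uncompressed SAM file.
# Header lines (starting with '@') are passed through unchanged.
# THRESHOLD = 30 is an illustrative value, not a recommendation.
THRESHOLD = 30

with open(sys.argv[1]) as sam:
    for line in sam:
        if line.startswith('@') or int(line.split('\t')[4]) >= THRESHOLD:
            sys.stdout.write(line)

In practice, samtools view with its -q option will do the same filtering job directly on BAM files.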