Comparisons of computational methods for differential alternative splicing detection using RNA-seq in plant systems

Marc Robinson-Rechavi (@marc_rr) tweeted about this great new paper in BMC Bioinformatics by Ruolin Liu, Ann Loraine, and Julie Dickerson. From the abstract:

The goal of this paper is to benchmark existing computational differential splicing (or transcription) detection methods so that biologists can choose the most suitable tools to accomplish their goals.

Like so many other areas of bioinformatics, there are many methods available for detecting alternative splicing, and it is far from clear which — if any — is the best. This paper attempts to compare eight of them, and the abstract contains a sobering conclusion:

No single method performs the best in all situations

Figure 5 from the paper is especially depressing. It looks at the overlap of differentially spliced genes as detected by five different methods. There are zero differentially spliced genes that all five methods agree on.

Liu et al. (2014) BMC Bioinformatics 15:364. doi:10.1186/s12859-014-0364-4

Understanding MAPQ scores in SAM files: does 37 = 42?

The official specification for the Sequence Alignment Map (SAM) format outlines what is stored in each column of this tab-separated value file format. The fifth column of a SAM file stores MAPping Quality (MAPQ) values. From the SAM specification:

MAPQ: MAPping Quality. It equals −10 log10 Pr{mapping position is wrong}, rounded to the nearest integer. A value 255 indicates that the mapping quality is not available.

So if you happened to know that the probability of correctly mapping some random read was 0.99, then the MAPQ score should be 20 (i.e. −10 × log10(1 − 0.99) = −10 × log10(0.01) = 20). If the probability of a correct match increased to 0.999, the MAPQ score would increase to 30. So the upper bound of a MAPQ score depends on the level of precision of your probability (though elsewhere the SAM spec defines an upper limit of 255 for this value). Conversely, as the probability of a correct match tends towards zero, so does the MAPQ score.
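To make the arithmetic concrete, here is a minimal Python sketch of the conversion (my own illustration of the formula from the spec, not code from any particular aligner):

```python
import math

def mapq_from_p_correct(p_correct):
    """Convert a probability that a mapping is correct into a MAPQ score."""
    p_wrong = 1 - p_correct                  # Pr{mapping position is wrong}
    return round(-10 * math.log10(p_wrong))

for p in (0.9, 0.99, 0.999, 0.9999):
    print(f"P(correct) = {p} -> MAPQ = {mapq_from_p_correct(p)}")

# P(correct) = 0.9 -> MAPQ = 10
# P(correct) = 0.99 -> MAPQ = 20
# P(correct) = 0.999 -> MAPQ = 30
# P(correct) = 0.9999 -> MAPQ = 40
```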

So I'm sure that the first thing everyone does after generating a SAM file is to assess the spread of MAPQ scores in their dataset. Right? Anyone?

< sound of crickets >

Okay, so maybe you don't do this. Maybe you don't really care, and you are happy to trust the default output of whatever short read alignment program that you used to generate your SAM file. Why should it matter? Will these scores really vary all that much?

Here is a frequency distribution of MAPQ scores from two mapping experiments. The bottom panel zooms in to more clearly show the distribution of low frequency MAPQ scores:

[Figure: Distribution of MAPQ scores from two experiments; the bottom panel shows a zoomed-in view of MAPQ scores with frequencies < 1%.]

What might we conclude from this? There seem to be some clear differences between the two experiments. The most frequent MAPQ score in the first experiment is 42, followed by 1. In the second experiment, scores only reach a maximum value of 37, and scores of 0 are the second most frequent value.

These two experiments reflect some real-world data. Experiment 1 is based on data from mouse, and experiment 2 uses data from Arabidopsis thaliana. However, that is probably not why the distributions are different. The mouse data is based on unpaired Illumina reads from a DNase-Seq experiment, whereas the A. thaliana data is from paired Illumina reads from whole-genome sequencing. However, that still probably isn't the reason for the differences.

The reason for the different distributions is that experiment 1 used Bowtie 2 to map the reads whereas experiment 2 used BWA. It turns out that different mapping programs calculate MAPQ scores in different ways and you shouldn't really compare these values unless they came from the same program.

The maximum MAPQ value that Bowtie 2 generates is 42 (though it doesn't say this anywhere in the documentation). In contrast, the maximum MAPQ value that BWA will generate is 37 (though once again, you — frustratingly — won't find this information in the manual).

The data for Experiment 1 is based on a sample of over 25 million mapped reads. However, you never see MAPQ scores of 9, 10, or 20, something that presumably reflects some aspect of how Bowtie 2 calculates these scores.
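If you want to check the spread of MAPQ scores in your own data, a minimal sketch along these lines would do it (plain Python over a SAM text file; for a BAM file you would first convert it with samtools view):

```python
from collections import Counter
import sys

# Tally MAPQ values (column 5 of a SAM file), skipping '@' header lines.
counts = Counter()
with open(sys.argv[1]) as sam:
    for line in sam:
        if line.startswith("@") or not line.strip():
            continue
        counts[int(line.split("\t")[4])] += 1

total = sum(counts.values())
for mapq in sorted(counts):
    print(f"MAPQ {mapq}: {counts[mapq]} alignments ({100 * counts[mapq] / total:.2f}%)")
```

You would run it as, say, python mapq_tally.py aln.sam (the script and file names are hypothetical, of course).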

In the absence of any helpful information in the manuals of these two popular aligners, others have done their own experimentation to work out what the values correspond to. Dave Tang has a useful post on Mapping Qualities on his Musings from a PhD Candidate blog. There are also lots of posts about mapping quality on the SEQanswers site (e.g. see here, here or here). However, the prize for the most detailed investigation of MAPQ scores — from Bowtie 2 at least — goes to John Urban, who has written a fantastic post on his Biofinysics blog.

So in conclusion, there are three important take-home messages:

  1. MAPQ scores vary between different programs and you should not directly compare results from, say, Bowtie 2 and BWA.
  2. You should look at your MAPQ scores and potentially filter out the really bad alignments (see the sketch after this list).
  3. Bioinformatics software documentation can often omit some really important details (see also my last blog post on this subject).
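On point 2: samtools view -q will drop alignments below a given MAPQ threshold for you, but the idea is simple enough to sketch in a few lines of Python (the threshold of 30 is just an arbitrary example; choose one that makes sense for your aligner):

```python
import sys

MIN_MAPQ = 30  # arbitrary example threshold; pick one that suits your aligner

# Read SAM on stdin, write SAM on stdout, dropping low-MAPQ alignments.
# Header lines (starting with '@') are always kept.
for line in sys.stdin:
    if line.startswith("@") or int(line.split("\t")[4]) >= MIN_MAPQ:
        sys.stdout.write(line)
```

Something like python filter_mapq.py < aln.sam > filtered.sam would then do the job (again, script and file names are hypothetical).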

More thoughts on the documentation for the SAM file format

I recently wrote about how bioinformatics documentation is not always very user-friendly. I made some references to the opaque nature of the official SAM Format Specification (specifically, the lack of clarity in explaining what is stored in column 2 of a SAM file). After writing this post, I raised an issue on the relevant SAMtools GitHub repository:

I feel that many people have trouble understanding what is meant by bitwise FLAG values. The documentation is very technical and not very transparent to people who may be new to bioinformatics.

Many people might be turning to the documentation after looking at their SAM output file. Maybe they see that their output file has a range of integer values in column 2 and are puzzled by the explanation in the documentation (this is very likely if you have no familiarity with bit patterns).

I think this section would be greatly helped by the following:

  1. A reminder that the SAM file itself stores an integer value
  2. An explicit description that a bitwise value of zero means that your read has mapped to the forward strand of the reference
  3. Some specific examples that explain what various integer values correspond to.
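To illustrate the kind of specific example I had in mind for point 3, here is a minimal Python sketch that decodes a FLAG integer (the bit meanings come from the SAM spec; the code itself is just my own illustration):

```python
# Bit meanings for the SAM FLAG field (column 2), per the SAM specification.
FLAG_BITS = [
    (0x1,   "read is paired"),
    (0x2,   "read mapped in proper pair"),
    (0x4,   "read unmapped"),
    (0x8,   "mate unmapped"),
    (0x10,  "read maps to reverse strand"),
    (0x20,  "mate maps to reverse strand"),
    (0x40,  "first read in pair"),
    (0x80,  "second read in pair"),
    (0x100, "secondary alignment"),
    (0x200, "read fails quality checks"),
    (0x400, "PCR or optical duplicate"),
    (0x800, "supplementary alignment"),
]

def decode_flag(flag):
    """Return a description of each bit that is set in a SAM FLAG value."""
    return [meaning for bit, meaning in FLAG_BITS if flag & bit]

print(decode_flag(0))   # [] -- an unpaired read mapped to the forward strand
print(decode_flag(99))  # paired, proper pair, mate on reverse strand, first in pair
```

A FLAG of 0 sets no bits at all, which is exactly the 'read mapped to the forward strand' case from point 2 above.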

Most of the responses to this were of the form 'well, people can go elsewhere if they want to find better documentation that explains this'. E.g.:

"There is space on SEQwiki for user created format information."

I accept that the official specification for a file format is not necessarily the same thing as a user guide, but people presumably arrive at the official SAM documentation when searching for help with this kind of thing. E.g. if I search for 'sam format documentation' or 'sam file format', the top hit is the aforementioned SAM Format Specification.

So the advice seems to be that you don't need to bother making your bioinformatics documentation easy to understand because someone else might come along and do this for you.

Kablammo: an interactive, web-based BLAST results visualizer

Another great name for a piece of bioinformatics software! This tool has just been published in the journal Bioinformatics by Jeff Wintersinger and James Wasmuth. From the abstract:

Kablammo is a web-based application that produces interactive, vector-based visualizations of sequence alignments generated by BLAST. These visualizations can illustrate many features, including shared protein domains, chromosome structural modifications, and genome misassembly.

101 questions with a bioinformatician #20: Roy Chaudhuri

This post is part of a series that interviews some notable bioinformaticians to get their views on various aspects of bioinformatics research. Hopefully these answers will prove useful to others in the field, especially to those who are just starting their bioinformatics careers.


Roy Chaudhuri is a Lecturer in Bioinformatics in the Department of Molecular Biology and Biotechnology at the University of Sheffield, and is part of the Sheffield Bioinformatics Hub. Roy's expertise concerns the comparative genomics and phylogenetics of bacterial pathogens, and in a previous life he helped set up the coliBASE and xBASE databases. In a previous-previous life he was also a pioneering website designer (I shouldn't judge: people in glass houses… and all that).

He claims that his current duties involve "research, teaching, publishing, and trying to convince people to give me money". If you would like to give Roy money (perhaps a £1 donation towards his Eccles Cake fund?), you can get in contact with him via the Sheffield Bioinformatics Hub website. You can also find out more about Roy by following him on Twitter (@RoyChaudhuri)…but be warned, he is a non-stop tweeter! And now, on to the 101 questions...



001. What's something that you enjoy about current bioinformatics research?

I like that after 16 years as a bioinformatician, I'm still learning new things every day, and that there's no shortage of cool datasets and interesting problems to keep me busy. I also like how far it's possible to get by knowing a little bit of biology and a little Perl.

010. What's something that you don't enjoy about current bioinformatics research?

I worry that too much community effort has been devoted to dealing with problems that are specific to short-read data. I'd like to think that in five years' time sequencing will just work, and we will be able to devote our time to dealing with biological quirks rather than technical ones. I'm pretty sure I said the same thing five years ago, though.



011. If you could go back in time and visit yourself as an 18-year-old, what single piece of advice would you give yourself to help your future bioinformatics career?

Most of my advice wouldn't be work-related, but I'd certainly mention that the clock starts ticking on potential fellowship opportunities as soon as you get your PhD. I definitely missed the starting gun on that one.



100. What's your all-time favorite piece of bioinformatics software, and why?

I'll go for Prokka, because it does an astonishingly good job at annotating bacterial genomes (better than many manual attempts...), and because Torsten wrote the book (well, blog post) on creating usable command-line bioinformatics tools. I particularly like that it checks for its dependencies at the start, rather than choking half-way through, and that it sometimes finishes with a quote from the Hitchhiker's Guide to the Galaxy.

Other than that, I'm a big fan of MUMmer, and I'm always impressed by how many different things it's possible to achieve by stringing two or three SAMtools commands together. If non-bioinformatics-specific software counts, then I'd also mention GNU Parallel, Perl and UNIX itself.



101. IUPAC describes a set of 18 single-character nucleotide codes that can represent a DNA base: which one best reflects your personality, and why?

M, because it's Not K.

Searching for sausage rolls: using Google Scholar to look at the popularity of British culinary delights

Sometimes it can be fun to search Google Scholar for words or phrases that you might not expect to ever appear in the title of an academic article. So last night, I conducted an important scientific study and looked at the popularity of various quintessential items of British cuisine:

Updated 2014-12-10: now includes 'Spotted Dick', thanks to reader @MattBashton.

B-O-G-U-S

There's a lot of it about at the moment…

  • MAGeCK: Model-based Analysis of Genome-wide CRISPR/Cas9 Knockout

Slightly bogus.

  • PrEMeR-CG: Probabilistic Extension of Methylated Reads at CpG resolution

Somewhat bogus.

  • ODIN: One-stage DIffereNtial peak caller

Bogus.

  • Gustaf: Generic mUlti-SpliT Alignment Finder

Definitely bogus.

  • iPro54-PseKNC: Predictor for identifying sigma-54 promoters in prokaryote with pseudo k-tuple nucleotide composition

I'm not sure if 'PseKNC' is meant to be pronounced 'sequence'. I'm also not sure if the 'Pro' refers to 'prokaryote' or 'promoters'. Likewise, I'm not sure if the 'i' is for 'identifying' or is just an Apple-style brand prefix. Finally, I'm not even sure if this is meant to be an acronym at all. But it looks bogus to me.

Tales of drafty genomes: part 1 — The Human Genome

One of my recent blog posts discussed this new paper in PLOS Computational Biology:

There has also been a lot of chatter on Twitter about this paper, including an exchange that I was involved in yesterday.


The issue of what is or isn’t a draft genome — and whether this even matters — is something on which I have much to say. It’s worth mentioning that there are a lot of draft genomes out there: Google Scholar reports that there are 1,440 articles that mention the phrase ‘draft genome’ in their title [1]. In the first part of this series, I’ll take a look at one of the most well-studied genome sequences in existence…the human genome.


The most famous example of a draft genome is probably the ‘working draft’ of the human genome that was announced — with much fanfare — in July 2000 [2]. At this time, the assembly was reported as consisting of “overlapping fragments covering 97 percent of the human genome”. By the time the working draft was formally published in Nature in January 2001, the assembly was reported as covering “about 94% of the human genome” (incidentally, this Nature paper seems to be the first published use of the N50 statistic).

On April 14, 2003, the National Human Genome Research Institute and the Department of Energy announced the “successful completion of the Human Genome Project” (emphasis mine). This was followed by the October 2004 Nature paper that discussed the ongoing work in finishing the euchromatic portion of the human genome [3]. Now the genome was being referred to as ‘near-complete’, and if you focus on the euchromatic portion, it was indeed about 99% complete. However, if you look at the genome as a whole, it was still only 93.5% complete [4].

Of course, the work to correctly sequence, assemble, and annotate the human genome has never stopped, and probably won’t stop for some time yet. As of October 14, 2014, the latest version of the human genome reference sequence is GRCh38.p1 [5], lovingly maintained by the Genome Reference Consortium (GRC). The size of the human genome has increased just a little bit compared to the earlier publications from a decade ago [6], but there are still several things that we don’t know about this ‘complete/near-complete/finished’ genome. Unknown bases still account for 5% of the total size (that’s over 150 million bp). Furthermore, there are almost 11 million bp of unplaced scaffolds that are still waiting to be given a (chromosomal) home. Finally, there remain 875 gaps in the genome (526 spanned gaps and 349 unspanned gaps [7]).

If we leave aside other problematic issues in deciding what a reference genome actually is, and what it should contain [8], we can ask a simple question: is the current human genome a draft genome? Clearly, I think everyone would say ‘no’. But what if I asked: is the current human genome complete? I’m curious how many people would say ‘yes’ and how many people would ask me to first define ‘complete’.

Here are some results for how many hits you get when Googling for the following phrases:

Scientists and journalists don’t help the situation, being perhaps a little too eager to overhype the state of completion of the human genome [9]. In conclusion, the human genome is no longer a draft genome, but it is still just a little bit drafty. More on this topic of drafty genomes in part 2!


  1. There are 1,570 if you don’t require the words ‘draft’ and ‘genome’ to be together in the article title.  ↩

  2. The phrase ‘working draft’ had been in use since at least late 1998.  ↩

  3. There is also the entire batch of chromosome-specific papers published between 2001 and 2006.  ↩

  4. This percentage is based on the following line in the paper: “The euchromatic genome is thus ~2.88 Gb and the overall human genome is ~3.08 Gb”  ↩

  5. This is the first patch update to version 38 of the reference sequence.  ↩

  6. There are 3,212,670,709 bp in the latest assembly  ↩

  7. The GRC defines the two categories as follows:

    Spanned gaps are found within scaffolds and there is some evidence suggesting linkage between the two sequences flanking the gap. Unspanned gaps are found between scaffolds and there is no evidence of linkage.  ↩

  8. Remember, human genomes are diploid and not only vary between individuals but can also vary from cell-to-cell. The idea of a ‘reference’ sequence is therefore a nebulous one. How much known variation do you try to represent (the GRC represents many alternative loci)? How should a reference sequence represent things like ribosomal DNA arrays or other tandem repeats?  ↩

  9. Jonathan Eisen wrote a great blog post on this: Some history of hype regarding the human genome project and genomics  ↩

Recent changes to the ACGT blog

There are probably some published rules for how to write blog posts, and I imagine that one of those rules might be 'don't blog about your blog'. Oh well…

Over the last few months I've been making lots of tweaks to this site. Most of them have been subtle changes in order to make the pages less cluttered and more aesthetically pleasing — e.g. did you notice that I moved the 'About Blog Contact' menu bar ~20 pixels closer to the top of the page as it was just a little bit too lonely where it used to be? Aside from these minor cosmetic alterations, I've also made some more substantial changes:

  • You can no longer see tags for individual blog posts (but I have kept the tag cloud on the About page so you can still find all posts that share the same tag).
  • On the same About page, I added an option to let people subscribe to the blog via email (maximum one email per day).
  • Just about every post listed on the main page of this site used to have a 'Read More' link which you needed to click in order to read the entire article. I'm now restricting this only to my longer posts.
  • I've disabled comments from the blog; partly because I wasn't getting a lot of them but mostly because of the many excellent reasons stated on the @avoidcomments Twitter account.
  • I've started including occasional 'link blog' posts. These are short posts — often just a paragraph or two — that typically comment on a paper or on another blog post. The titles of link blog posts are themselves links to the source paper/blog post, and include a little arrow to indicate this. E.g. see here, here, or here.

As a final note, I'd like to thank everyone who has tweeted or otherwise spread the word about this blog. Aside from churning out more posts about bioinformatics tools with bogus names, I do want to make a bit more of an effort to write about some more substantive issues in bioinformatics. Stay tuned!