DOGMA: a new tool for assessing the quality of proteomes and transcriptomes

A new tool, recently published in Nucleic Acids Research, caught my eye this week:

The tool, by a team from the University of Münster, uses protein domains and domain arrangements to assess the 'completeness' of a proteome or transcriptome. From the abstract…

Even in the era of next generation sequencing, in which bioinformatics tools abound, annotating transcriptomes and proteomes remains a challenge. This can have major implications for the reliability of studies based on these datasets. Therefore, quality assessment represents a crucial step prior to downstream analyses on novel transcriptomes and proteomes. DOGMA allows such a quality assessment to be carried out. The data of interest are evaluated based on a comparison with a core set of conserved protein domains and domain arrangements. Depending on the studied species, DOGMA offers precomputed core sets for different phylogenetic clades

Unlike CEGMA and BUSCO, which run against unannotated assemblies, DOGMA first requires a set of gene annotations. The paper focuses on the web server version of DOGMA but you can also access the source code online.

It's good to see that other groups are continuing to look at new ways of assessing the quality of large genome/transcriptome/proteome datasets.

What's in a name?

Initially, I thought the name was just a word that both echoed 'CEGMA' and reinforced the central dogma of molecular biology. Hooray, I thought: a bioinformatics tool that just has a regular word as a name, without relying on contrived acronyms.

Then I saw the website…

  • DOGMA: DOmain-based General Measure for transcriptome and proteome quality Assessment

This is even more tenuous than the older, unrelated, version of DOGMA:

  • DOGMA: Dual Organellar GenoMe Annotator

Transcriptional noise, isoform prediction, and the utility of mass spec data in gene annotation

The human genome may be 80% functional or 8.2% functional. Maybe it's 93.7% functional or only 6.1%. I guess that all we know for sure is that it is not 0% functional (although my genome on a Monday morning may provide evidence to the contrary).

Transcript data can be used to ascribe some sort of functionality to a genome and, in an ideal world, we would sequence full-length cDNAs for every gene. But in the less-than-ideal world we often end up sequencing lots of small bits of RNA using an ever-changing set of technologies. ESTs, SAGE, CAGE, RACE, MPSS, and RNA-Seq have all been used to provide evidence for where genes are and how highly they are being expressed.

Having some transcriptional evidence is (usually) better than not having any transcriptional evidence, but it doesn't necessarily imply functionality. A protein-coding gene that is transcribed may not be translated. Transcript data is used in gene annotation to add new genes, especially in the case of a first-pass annotation of a new genome. But in established genomes, it is probably used more to annotate transcript isoforms (e.g. splice variants). This can lead to a problem for the end users of such data…how to tell if all isoforms are equally likely?

Consider the transcript data for the rpl-22 gene in Caenorhabditis elegans. This gene has two annotated splice variants and there is indeed EST evidence for both variants, but it is a little bit unbalanced:

This gene encodes the large ribosomal subunit protein…a pretty essential protein! Notice how the secondary isoform (shown on top) a) encodes a much shorter protein and b) has very little transcript evidence. In my mind, this secondary isoform is the result of 'transcriptional noise'. Maybe a couple of lucky ESTs captured the transcript on its way to destruction via nonsense-mediated decay? It seems highly unlikely that this secondary transcript gives rise to a functional protein, though someone new to viewing data like this might initially consider each isoform equally valid.

If we turn on some additional data tracks to look at protein homology to human (shown in orange) and mass spectrometry data from C. elegans (shown in red), it becomes clear that all of the evidence really points towards just one functional isoform:

Indeed, mass spec data has the potential to really clean up a lot of noisy gene annotations. In light of this, I was very happy to see this new paper published in the Journal of Proteome Research (where talented up-and-coming scientists publish!):

Pooling data from 8 mass spec analyses of human data, the authors attempted to see how much protein support there was for the different annotated isoforms of the human genome. They could reliably map peptides to about two-thirds of the protein-coding genes from the GENCODE 20 gene set (Ensembl 76). What did they find?

We identified alternative splice isoforms for just 246 human genes; this clearly suggests that the vast majority of genes express a single main protein isoform.

They also found that the mass spec data was not always in agreement with the dominant isoforms that can be predicted from RNA-Seq data:

…part of the reason for this will be that more RNAseq reads map to longer sequences, it does suggest that either transcript expression is very different from protein expression for many genes or that transcript reconstruction methods may not be interpreting the RNAseq reads correctly.

The headline conclusion that mass spec evidence only supports alternate isoforms for 1.2% of human genes is thought provoking. It suggests to me that we should be careful about relying too heavily on gene annotations that describe large numbers of isoforms mostly on the basis of transcript data. Paraphrasing George Orwell:

All isoforms are equal, but some isoforms are more equal than others

Community annotation — by any name — still isn’t a part of the research process. It should be

In order for community annotation efforts to succeed, they need to become part of the established research process: mine annotations, generate hypotheses, do experiments, write manuscripts, submit annotations. Rinse and repeat.

A thoughtful post by Todd Harris on his blog which lists some suggestions for how to fix the failure of community annotation projects.

I particularly like Todd's 3rd suggestion:

We need to recognize the efforts of people who do [community annotation]. This system must have professional currency to it, akin to writing a review paper, and should be citable…

3 word summary of new PLOS Computational Biology paper: genome assemblies suck

Okay, I guess a more accurate four word summary would be 'genome assemblies sometimes suck'.

The paper that I'm referring to, by Denton et al, looks at the problem of fragmentation (of genes) in many draft genome assemblies:

As they note in their introduction:

Our results suggest that low-quality assemblies can result in huge numbers of both added and missing genes, and that most of the additional genes are due to genome fragmentation (“cleaved” gene models).

One section of this paper looks at the quality of different versions of the chicken genome and CEGMA is one of the tools they use in this analysis. I am a co-author of CEGMA, and reading this paper brought back some memories of when we were also looking at similar issues.

In our 2nd CEGMA paper we tried to find out why 36 core genes were not present in the v2.1 version of the chicken genome (6.6x coverage). Turns out that there were ESTs available for 29 of those genes, indicating that they are not absent from the genome, just from the genome assembly. This led us to find pieces of these missing genes in the unanchored set of sequences that were included as part of the set of genome sequences (these often appear as a 'ChrUn' FASTA file in genome sequence releases).

Something else that we reported on in our CEGMA paper is that, sometimes, a newer version of a genome assembly can actually be worse than what it replaces (at least in terms of genic content). Version 1.95 of the Ciona intestinalis genome contained several core genes that subsequently disappeared in the v2.0 release.

In conclusion — and echoing some of the findings of this new paper by Denton et al.:

  1. Many genomes are poorly assembled
  2. Many genomes are poorly annotated (often a result of the poor assembly)
  3. Newer versions of genome assemblies are not always better

Trying to download the cow genome: where's the beef?

Cows on the UC Davis campus, photo by Keith Bradnam


I want to download the cow genome. A trip to bovinegenome.org seemed like a good place to start. However, the home page of this site is quick to reveal that there is not only the Btau_4.6 assembly, released by the Baylor College of Medicine Human Genome Sequencing Center, but also an alternative UMD3.1 assembly from the Center for Bioinformatics and Computational Biology (CBCB) at the University of Maryland.

In many areas of life, choice is a good thing. But in genomics, trying to decide which of two independent genome resources to use, well, that can be a bit of a headache. The bovinegenome.org site tells me that 'Btau_4.6' was from November 2011. I have to go to the CBCB site to find out that UMD3.1 was made available in December 2009.

Bovinegenome.org helpfully provides a link to the UMD3.1 assembly. Not so helpfully, the link doesn't work. At this point I decide to download the 'Btau_4.6' assembly and associated annotation. The Btau downloads page provides a link to the Assembly 4.0 dataset (in GFF3 format) under a heading of 'Datasets for Bovine Official Gene Set version 2'. So I have to assume that the '4.0' really refers to the latest 4.6 version of the assembly.

The resulting Assembly 4.0 page has links for each chromosome (available as GFF3 or gzipped GFF3). These links include a 'ChrUn' entry. You will see this in most modern genome assemblies as there is typically a lot of sequence (and resulting annotation) that is 'unplaced' in the genome, i.e. it cannot be mapped to a known chromosome. In the case of cow, ChrUn is the second largest annotation file.

This page also has four more download links for what bovinegenome.org refers to as:

  • 'Reference Sequences'
  • 'CDS Target'
  • 'Metazoa Swissprot Target'
  • 'EST Target'

From the Assembly 4.0 downloads page (which may or may not mean version 4.6) 

No README file or other explanation is provided for what these files contain and if you try to download the GFF3 links (rather than the gzipped versions), you will find that these links don't work. The 'CDS Target' link lets you download a file named 'incomplete_CDS_target.gff3'. I do not know whether the 'incomplete' part refers to individual CDSs (missing stop codons?) or the set of CDSs as a whole. I also do not know what the 'target' part refers to.

The chromosome-specific GFF3 files only contain coordinates of gene features, which utilize several different GFF source fields:

  • NCBI_transcript
  • annotation
  • bovine_complete_cds_gmap_perfect
  • ensembl
  • fgenesh
  • fgeneshpp
  • geneid
  • metazoa_swissprot
  • sgp

Again, no README file is provided to make sense of these multiple sources of gene information. In contrast to the chromosome-specific files, the 'reference_sequences.gff3' and 'incomplete_CDS_target.gff3' files contain both coordinate information and embedded sequence information in FASTA format (this is allowed in the GFF3 spec).
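With no README to consult, one quick way to see how much each annotation pipeline contributes is to tally the GFF source column yourself. Here is a minimal sketch; the function name is illustrative and 'chr1.gff3' is a stand-in for whichever chromosome file you downloaded:

```python
from collections import Counter

def tally_gff3_sources(path):
    """Count how many features each GFF3 'source' (column 2) contributes.

    Stops at a ##FASTA directive, since some of these files embed
    sequence data after the feature table.
    """
    counts = Counter()
    with open(path) as gff:
        for line in gff:
            if line.startswith("##FASTA"):
                break  # embedded FASTA section follows; no more features
            if line.startswith("#") or not line.strip():
                continue  # skip comments, directives, and blank lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 3:
                counts[fields[1]] += 1
    return counts

# Hypothetical usage:
# for source, n in tally_gff3_sources("chr1.gff3").most_common():
#     print(f"{source}\t{n}")
```

A table like this makes it obvious at a glance which of the nine sources dominate, even without documentation.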

If you look at the coordinate information in the reference sequence file, you see multiple chromosome entries per biological chromosome. E.g.

From the reference_sequences.gff3 file

I do not know what to make of these.

If you want to get the sequences of all the genes, you have to go to yet another download page at bovinegenome.org where you can download FASTA files for CDS and proteins of either the Full Bovine Official Gene Set v2 or the Subset of Bovine Official Gene Set v2 derived from manual annotations. What exactly is the process that generated the 'manual annotations'? I do not know. And bovinegenome.org does not say.

So maybe the issue here is that I should get the information straight from the horse's — or should that be cow's? — mouth. I head over to the Bovine Genome Project page at BCM HGSC. This reveals that the latest version of the genome assembly is 'Btau_4.6.1' from July 2012! This would seem to supersede the 2011 'Btau_4.6' assembly (which is not even listed on the BCM HGSC page), but I guess I'll have to download it to be sure. The link on the above BCM HGSC page takes me to an FTP folder which is confusingly named 'Btau20101111' (they started the 2012 assembly in 2010?).

So it seems obvious that all I need to do now is to: check whether the bovinegenome.org 'Assembly 4.0' dataset is really listing files from the 'Btau_4.6' release; find out whether those files are different from the BCM HGSC 'Btau_4.6.1' release; understand what the differences are between the nine different sets of gene annotation; decipher what the 'CDS target' (a.k.a. 'incomplete_CDS_target') file contains; understand why there are 14,195 different 'chromosome' features in the genome GFF file that describes 30 chromosomes; and finally compare and contrast all of these sequences and annotations with the alternative UMD3.1 assembly.

Bioinformatics is fun!

 

Update 1 — 2014-03-14 14:58: The BCM HGSC has an extremely detailed README file that explains the (slight) differences between Btau_4.6 and Btau_4.6.1 (as well as explaining many other things about how they put the genome together). I like README files!

Update 2 — 2014-03-14 15:18: Of course I could always go the Ensembl route and use their version of the cow genome sequence with their annotations. However, the Ensembl 75 version of the cow genome has been built using the older UMD3.1 assembly, whereas an older Ensembl 63 version of the cow genome that is based on the Btau_4.0 assembly is also available (life is never simple!). One advantage to working with Ensembl is that they are very consistent with their annotation and nomenclature.

Update 3 — 2014-03-14 15:35: I now at least realize why there are 14,195 'chromosome' features in the main GFF file provided by bovinegenome.org. They are treating each placed scaffold as its own 'chromosome' feature. This is particularly confusing because a GFF feature field of 'chromosome' is usually taken to refer to an entire chromosome. GFF feature names should be taken from Sequence Ontology terms (e.g. SO:0000340 = chromosome), so in this case the SO term chromosome_part would be more fitting (and less confusing).
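A surprise like this surfaces quickly if you simply count how many 'chromosome'-typed features a file contains and compare that against the number of real chromosomes. The sketch below is illustrative (the function name and the default of 30 cow chromosomes are mine, not part of any bovinegenome.org tooling):

```python
def count_chromosome_features(path, expected=30):
    """Count GFF features typed 'chromosome' (SO:0000340) and warn when
    the count exceeds the expected number of biological chromosomes,
    which usually means scaffolds have been given the wrong SO term."""
    n = 0
    with open(path) as gff:
        for line in gff:
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 3 and fields[2] == "chromosome":
                n += 1
    if n > expected:
        print(f"warning: {n} 'chromosome' features for {expected} chromosomes; "
              "scaffolds may be mistyped (chromosome_part would fit better)")
    return n
```

Run against the bovinegenome.org file, a check like this would have flagged the 14,195 features immediately.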

Update 4 — 2014-03-17 11:16 am: I have received some very useful feedback since writing this blog post:

  • I was kindly pointed towards the NCBI Genome Remapping Service by @deannachurch (also see her comment below), which would let me project annotation data from one coordinate system to another. This could be handy if I end up using different cow genomes.
  • A colleague of mine pointed me to a great 2009 article by @deannachurch and LaDeana Hillier that talks about how the simultaneous publication of two cow genomes (from the same underlying data) raises issues about how we deal with publication and curation of such data sets.
  • Several people contacted me to give their opinion as to the relative merits of the UMD and the BCM HGSC assemblies.
  • Several people also echoed my sentiments about working with the same cow data, but by using Ensembl's architecture instead.
  • I was contacted by someone from bovinegenome.org who said that they would fix the issues that I raised. This was after I contacted them to tell them I had blogged about the problems that I had encountered. 

The last point is perhaps the most important one which I would like to dwell on for a moment longer. If you are a scientist and experience problems when trying to get hold of data from a website (or FTP site) don't sit in silence and do nothing!

If nobody tells a webmaster that there is an issue with getting hold of data, then the problem will never be fixed. Sometimes files go missing from FTP sites, or web links break, or simple typos make things unclear... the curators involved may never know unless it is pointed out to them.

Paper review: anybody who works in bioinformatics and/or genomics should read this paper!

I rarely blog about specific papers but felt moved to write about a new paper by Jonathan Mudge, Adam Frankish, and Jennifer Harrow who work in the Vertebrate Annotation group at the Wellcome Trust Sanger Institute.

Their paper, now out in Genome Research, is titled: Functional transcriptomics in the post-ENCODE era.

They brilliantly, and comprehensively, list the various ways in which gene architecture — and by extension gene annotation — is incredibly complex and far from a solved problem. However, they also provide an exhaustive description of all the various experimental technologies that are starting to shine a lot more light on this, at times, dimly lit field of genomics.

In their summary, they state:

Modern genomics (and indeed medicine) demands to understand the entirety of the genome and transcriptome right now

I'd go so far as to say that many people in genomics assume that genomes and transcriptomes are already understood. I often feel that too many people enter this field with false beliefs that many genomes are complete and that we know about all of the genes in these genomes. Jonathan Mudge et al. start this paper by firmly pointing out that even the simple question of 'what is a gene?' is something that we are far from certain about.

Reading this paper, I was impressed by how comprehensively they have reviewed the relevant literature, pulling in numerous examples that indicate just how complex genes are, and which show that we need to move away from the very protein-centric world view that has dominated much of the history of this field.

LncRNAs, microRNAs, and piwi-interacting RNAs are three categories of RNA that you probably wouldn't find mentioned anywhere in text books from a decade ago, but which now — along with 'traditional' non-coding RNAs such as rRNAs, tRNAs, snoRNAs etc. — probably outnumber protein-coding genes in the human genome. Many parts of this paper tackle the issue of transcriptional complexity, particularly trying to address the all-important question: how much of this is functional?

I found that so many parts of this paper touched on previous, current, and possible future projects in our lab. Producing an accurate catalog of genes, understanding alternative splicing, examining the relationship between mRNA and protein abundances, looking for conservation of signals between species...these are all topics that are near and dear to people in our lab.

Even if you have no interest in the importance of gene annotation — and shame on you if that is how you feel — this paper also serves as a fantastic catalog of the latest experimental techniques that can be used to capture and study genes (e.g. CAGE, ribosome profiling, polyA-seq etc).

If you have ever worked with a set of genes from a well curated organism, spare a thought for the huge amount of work that goes into trying to provide those annotations and keep them up to date. I'll leave you with the last couple of sentences from the paper...please repeat this every morning as your new mantra:

Finally, no one knows what proportion of the transcriptome is functional at the present time; therefore, the appropriate scientific position to take is to be open-minded. We thus do not claim that the annotation of the human genome is close to completion. If anything, it seems as if the hard work is just beginning.

The joys of dealing with well-annotated 'reference' genomes

Important update included at bottom of this post

Arabidopsis thaliana has one of the best curated eukaryotic genome sequences so you might expect working with Arabidopsis data to be a joy? Well, not always. Consider gene AT1G79920. This gene has two coding variants (AT1G79920.1 and AT1G79920.2) which only seem to differ in their 5' UTR regions:

[Image: screenshot of the two annotated coding variants of AT1G79920]

The primary source for Arabidopsis genome information is The Arabidopsis Information Resource (TAIR) but another site called Phytozome has started collating all plant-related genome data (often pulling it from sites like TAIR).

Both sites allow you to download coding sequences for each gene from their FTP sites, or view the same sequence information on their web sites. You could also download GFF files and, using the coordinate information, extract the coding sequences from the whole genome sequence.
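That GFF-plus-genome approach can be sketched in a few lines of Python. The names here are illustrative, the genome is assumed to be already parsed into a dict of sequences, and minus-strand transcripts (which would also need reverse-complementing) are omitted for brevity:

```python
from collections import defaultdict
import re

def extract_cds(genome, gff_path):
    """Build coding sequences by concatenating CDS segments per transcript.

    genome: dict mapping seqid -> sequence string (e.g. parsed from FASTA).
    Transcripts are grouped by the Parent attribute of each CDS feature.
    """
    segments = defaultdict(list)
    with open(gff_path) as gff:
        for line in gff:
            if line.startswith("#") or not line.strip():
                continue
            f = line.rstrip("\n").split("\t")
            if len(f) < 9 or f[2] != "CDS":
                continue
            m = re.search(r"Parent=([^;]+)", f[8])
            if m:
                segments[m.group(1)].append((int(f[3]), int(f[4]), f[0]))
    cds = {}
    for tid, segs in segments.items():
        segs.sort()  # order CDS segments along the chromosome
        # GFF is 1-based and inclusive; Python slices are 0-based, half-open
        cds[tid] = "".join(genome[seqid][start - 1:end]
                           for start, end, seqid in segs)
    return cds
```

The 1-based/0-based coordinate conversion is exactly the sort of detail where these pipelines tend to go wrong, which is part of why cross-checking FTP files against GFF-derived sequences is worthwhile.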

A simple sanity check when working with coding sequences is that the length of a coding sequence should be divisible by 3; otherwise there might be a frameshift. There seems to be an issue with AT1G79920 in that it sometimes has a frameshift. I say sometimes because it depends on where you look at the data.
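That divisible-by-3 check is easy to script against a GFF file by summing CDS segment lengths per transcript. A minimal sketch, assuming CDS features carry a Parent attribute (the function name is mine, not from TAIR or Phytozome):

```python
from collections import defaultdict
import re

def find_frameshifted_cds(gff_path):
    """Sum CDS segment lengths per transcript (grouped by the Parent
    attribute) and return those whose total is not divisible by 3."""
    totals = defaultdict(int)
    with open(gff_path) as gff:
        for line in gff:
            if line.startswith("#") or not line.strip():
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 9 or fields[2] != "CDS":
                continue
            start, end = int(fields[3]), int(fields[4])
            match = re.search(r"Parent=([^;]+)", fields[8])
            parent = match.group(1) if match else "unknown"
            totals[parent] += end - start + 1  # GFF is 1-based, inclusive
    return {tid: n for tid, n in totals.items() if n % 3 != 0}
```

A few seconds of this kind of checking is how the 25 problem genes mentioned below were spotted in the first place.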

Here is a sequence alignment for exons 5 and 6 of both coding variants of this gene. The sequence identifiers include a 'P' or 'T' to indicate whether the information came from Phytozome or TAIR, and they also denote whether it was taken from the web or FTP site. I have also inserted 'gtag' into one sequence to illustrate where the intron between these exons would occur.

[Image: alignment of exons 5 and 6 from both coding variants, comparing Phytozome and TAIR web/FTP versions]

The first boxed rectangle highlights a base that looks like a SNP, but these are coding variants with UTR variation, so the underlying genome sequence in the coding regions should be identical.

The second box is perhaps more disturbing. The web site versions of exon 7 all have a 1 bp deletion which would lead to a frameshift. I guess this error started at TAIR and propagated to Phytozome, but the fact that both sites also have a correct version available on their FTP site is confusing and troubling.

My boss first discovered this by looking at the GFF files for the Arabidopsis genome and this was one of 25 genes with a 'not-divisible-by-3' length error. So it pays to always check — and double check — your data.

Time to send an email to TAIR and Phytozome to report this.

Update

I heard back from TAIR and Phytozome and it seems that there are a small number of likely genome sequence errors in the latest (TAIR10) release of the A. thaliana genome. When these affect genes and would introduce a frameshift error, TAIR make manual edits to the resulting files of CDSs, transcripts, and proteins. They do this when there is clear homology from genes in other species that suggests the change should be made.

So if you work with downloaded FASTA files from the FTP site, you won't see these errors. If you work from GFF files (which is presumably what some of their web displays do), you'll run into these issues. There is a small file (TAIR10sequenceedits.txt) included as part of the TAIR10 release which documents these changes.

Thanks for the speedy and helpful responses from both sites. Perhaps I should retitle this post The Joys of Dealing with Fast and Knowledgeable Genome Curators?