The problem with posters at academic conferences

I recently attended the Genome Science: Biology, Technology, and Bioinformatics meeting in the UK, where I presented a poster. As I was walking around looking at other people's posters, I was reminded of a common problem with many academic posters. Here are some pseudo-anonymous examples to show what I mean:

The problem here is not with the total amount of text — though that can sometimes be an issue — but with the width of the text. These posters are 84 cm (33 inches) wide, and it is not ideal to create text blocks that span the entire width of the poster. The reasons behind this are the same reasons why you never see newspapers display text like this…we are not very good at reading information in this manner.

To quote from Lynch & Horton's Web Style Guide, specifically the section on Page Width and Line Length:

The ideal line length for text layout is based on the physiology of the human eye. The area of the retina used for tasks requiring high visual acuity is called the macula. The macula is small, typically less than 15 percent of the area of the retina. At normal reading distances the arc of the visual field covered by the macula is only a few inches wide—about the width of a well-designed column of text, or about twelve words per line. Research shows that reading slows as line lengths begin to exceed the ideal width, because the reader then needs to use the muscles of the eye or neck to track from the end of one line to the beginning of the next line. If the eye must traverse great distances on a page, the reader must hunt for the beginning of the next line.

In contrast to the above examples, there were a couple of posters at the #UKGS2014 meeting that I thought were beautifully displayed. Bright, colorful, clearly laid out, not too much text, and good use of big fonts. Congratulations to Warry Owen et al. and Karim Gharbi et al. for your poster presentation prowess!

When is a citation not a citation?

Today I received a notification from Google Scholar that one of my papers had been cited. I often have a quick look at such papers to see how our work is being referenced. The article in question was from the Proceedings of the 3rd Annual Symposium on Biological Data Visualization: Data Analysis and Redesign Contests:

FixingTIM: interactive exploration of sequence and structural data to identify functional mutations in protein families

The paper describes a tool that helps "identify protein mutations across a family of structural models and to help discover the effect of these mutations on protein function". I was a bit surprised by this because this isn't a topic that I've published on. So I looked to see what paper of mine was being cited and how it was being cited. Here is the relevant sentence from the background section of the paper:

To improve the exploration process, many efforts have been made, from folding the sequences through classification [1,2], to tools for 3D view exploration [3] and to web-based applications which present large amounts of information to the users [4].

Citation number 2 is the paper on which I am a co-author:

  • Chen N, Harris TW, Antoshechkin I, Bastiani C, Bieri T, Blasiar D, Bradnam K, Canaran P, Chan J, Chen C, Chen WJ, Cunningham F, Davis P, Kenny E, Kishore R, Lawson D, Lee R, Muller H, Nakamura C, Pai S, Ozersky P, Petcherski A, Rogers A, Sabo A, Schwarz EM, Van Auken K, Wang Q, Durbin R, Spieth J, Sternberg PW, Stein LD: Wormbase: A comprehensive data resource for Caenorhabditis biology and genomics. Nucleic Acids Res 2005, 33(1):383-389.

The cited paper simply describes the WormBase database and includes only a passing reference to the fact that WormBase contains some links to protein structures (when known), but that's about it. The WormBase paper doesn't mention 'folding' or 'classification' anywhere, which makes it seem a really odd choice of paper to be cited. It makes me wonder how many other papers end up gaining seemingly spurious citations like this one.

Thoughts on the supply of bioinformatics services and training in the UK

I am currently at the 2014 UK Genome Sciences meeting (hashtag #UKGS2014). It has been a long time since I have been at a UK science conference and it has been good to meet old colleagues and acquaintances who I have known from various stages of my career.

From informal chats with various people, it seems that UK universities are tackling their bioinformatics needs in different ways. Some have specialized facilities that try to meet the bioinformatics needs of local users (and potentially of those further afield). E.g. the University of Surrey has a Bioinformatics Core Facility, Newcastle University has a Bioinformatics Support Unit, and here at Oxford there is the Computational Biology Research Group.

These examples represent core facilities with dedicated staff. An alternative approach is to bring together — physically or virtually — existing bioinformatics talent, with a view that they will be able to help others. This is the strategy taken by the new Bioinformatics Hub at the University of Sheffield, which brings together six talented folk who are based in different departments. The success of strategies like this may heavily depend on having enough skilled bioinformatics faculty who also have enough time to help others.

Other universities seem to lack any central pooling of bioinformatics expertise, and instead rely on people doing bioinformatics themselves or outsourcing it to places like TGAC. The former approach (doing it yourself) will be fine for some people, particularly those who are comfortable learning new computational skills themselves, but this will not be a good fit for everyone. 

If you are not outsourcing your bioinformatics and you don't have the necessary skills yourself, then the other approach is to attend one or more training courses. Three places that seem to be leading the field for bioinformatics training are TGAC, CGAT, and Edinburgh Genomics…and all three have a heavy presence at this conference.

Depending on your definition, bioinformatics has been around — as either a recognized skill set, or a field of study — since the early 1990s. The number of people who might consider themselves a bioinformatician has probably grown exponentially since then. Likewise, the demand for skilled bioinformaticians, or for facilities that offer bioinformatics services and training, continues to grow. Clearly, there are different ways of meeting this demand.

The current diversity of approaches to bioinformatics services and training is presumably a reflection of the local supply of, and demand for, such services. If you are about to join a new university, and if you plan on needing some bioinformatics help at some point, it may be useful to first find out more about that university's bioinformatics strategy.

My poster for the UK Genome Sciences meeting is about a new version of our IMEter software

One of the many projects I am involved with looks at intron-mediated enhancement (IME) of gene expression. Our collaboration with Alan Rose at UC Davis has been a fruitful one, and has led to the development of computational tools that can predict how much an intron might enhance expression.

The initial version of what we called 'the IMEter' was published in 2008 and an improved v2.0 version was published in 2011. The online version of this software only lets you test Arabidopsis introns…not so useful when there are now so many different sequenced plant genomes.

We addressed this limitation in a new — as yet unpublished — v2.1 version which is available online. IMEter v2.1 can now test the expression enhancing ability of introns from 34 different plant species.

The new IMEter is the subject of my poster at the forthcoming UK Genome Sciences meeting in Oxford. The poster, available below via Figshare, explains a little more about how the new version of the IMEter came about. It also discusses some of the problems that arise when trying to adapt a software tool from working with one very well-annotated genome to working with many different genomes of varying quality.

5 things to consider when publishing links to academic websites

Preamble

One of the reasons I've been somewhat quiet on this blog recently is because I've been involved with a big push to finish the new Genome Center website. This has been in development for a long time and provides a much-needed update to the previous website, which was really showing its age. Compare and contrast:

The old Genome Center website…what's with all that whitespace in the middle?

The new Genome Center website, less than 24 hours old at the time of writing.

This type of redesign is a once-in-a-decade event, and provides the opportunity not just to add new features (e.g. proper RSS news feed, twitter account, YouTube channel, responsive website design etc.), but also to clean up a lot of legacy material (e.g. webpages for people who left the Genome Center many years ago).

This cleanup prompted me to check Google Scholar to see if there are any published papers that include links to Genome Center websites. This includes links to the main site and also to all of the many subdomains that exist (for different labs, core facilities, etc.). It's pretty easy to search Google Scholar for the core part of a URL, e.g. genomecenter.ucdavis.edu, and I would encourage anyone else who is looking after an aging academic website to do the same.

Here are some of the key things that I noticed:

  1. Most mentions of Genome Center URLs are to resources from Peggy Farnham's lab. Although Peggy left UC Davis several years ago, her (very old, and out of date) lab page still exists (http://farnham.genomecenter.ucdavis.edu).
  2. Many people link to Craig Benham's work using http://genomecenter.ucdavis.edu/benham/. This redirects to Craig's own lab site (http://benham.genomecenter.ucdavis.edu), but the redirect doesn't quite work when people have linked to a specific tool (e.g. http://genomecenter.ucdavis.edu/benham/sidd). This redirects to http://benham.genomecenter.ucdavis.edu/sidd which then produces a 404 error (page not found).
  3. There are many papers that link to resources from Jonathan Eisen's group and these papers all point to various pages on a domain that is either down or no longer in existence (http://bobcat.genomecenter.ucdavis.edu).

There is an issue here of just how long one should try to keep links active and working. In the case of Peggy Farnham, she no longer works at UC Davis, so would it be okay to redirect all of her web traffic to her new website? I plan to do this but will let Peggy know so that she can perhaps arrange to copy some of the existing material over to her new site.

In the case of Craig's lab, maybe he should add his own redirects for tools that now have new URLs. It would also help to have a dedicated 404 page that points to the likely target pages that people are looking for (a completely blank 'not found' page is rarely helpful).

In the case of Jonathan's lab, there is a bigger problem: all of the papers are tied to a very specific domain name (which itself has no obvious naming connection to his lab). You can always name a new machine 'bobcat', but maybe there are better things we should be doing to avoid these situations arising in the first place…

5 things to consider when publishing links to academic websites

  1. Don't do it! Use resources like Figshare, GitHub, or Dryad if at all possible. Of course, this might not be possible if you are publishing some sort of online software tool.
  2. If you have to link to a lab webpage, consider spending $10 a year or so and buying your own domain name that you can take with you if you ever move anywhere else in future. I bought http://korflab.com for my boss, and I see that Peggy Farnham is now using http://farnhamlab.com.
  3. If you can't, or don't want to, buy your own domain name, try using a generic lab domain name and not a machine-specific domain name. E.g. our lab's website is on a machine called 'raiden' and can be accessed at http://raiden.genomecenter.ucdavis.edu. But we only ever use the domain name http://korflab.ucdavis.edu which allows us to use a different machine as the server without breaking any links.
  4. If you must link to a specific machine, try avoiding URLs that get too complex. E.g. http://supersciencelab.ucdavis.edu/Tools/Foo/v1/foo_v1.cgi. The more complex the URL, the more likely it will break in future. Instead, link to your top level domain (http://supersciencelab.ucdavis.edu) and provide clear links on that page on how to find things.
  5. Any time you publish a link to a URL, make sure you keep a record of this in a simple text file somewhere. This might really help if/when you decide to redesign your website 5 years from now and want to know whether you might be breaking any pre-existing links.
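The record-keeping in point 5 pays off if you combine it with an occasional automated link check. Here is a minimal sketch of what that could look like, using only the Python standard library (the function name and the example URL are my own hypothetical choices, not anything from the post above):

```python
import urllib.request
import urllib.error

def check_links(urls, timeout=10):
    """Return a dict mapping each URL to its HTTP status code,
    or to None if the site was unreachable (e.g. DNS failure)."""
    results = {}
    for url in urls:
        # A HEAD request is enough to test whether the page resolves
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status
        except urllib.error.HTTPError as err:
            results[url] = err.code  # e.g. 404 for a broken deep link
        except (urllib.error.URLError, OSError):
            results[url] = None  # domain gone, server down, or timeout
    return results
```

Run against the URLs recorded in your simple text file, anything returning 404 or None is a link you have broken for somebody's published paper.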

Random capitalization strikes again, or am I only dreaming?

A paper in BMC Bioinformatics describes a new piece of software:

morFeus: a web-based program to detect remotely conserved orthologs using symmetrical best hits and orthology network scoring

Naturally, my first instinct was to check whether this was a name worthy of a JABBA award, but morFeus does not appear to be an acronym or initialism. I say that because, although the name morFeus appears 116 times in the manuscript, no explanation is ever given as to why the software has that name.

My first thought was that maybe it is a reference to Morpheus, the Greek god of dreams, or maybe to the character of Morpheus from The Matrix. I don't really care about why it is called morFeus — a name that my spell checker keeps correcting to morgues — but it is another example of the, seemingly random, capitalization of bioinformatics tools.

When I visited the web server for the morFeus tool, I did notice something in small print at the bottom of the page:

  • morFeus stands for meta-analysis based orthology finder using symmetrical best hits

This is something that also appears as a keyword in the manuscript, but it is not entirely obvious whether this really is meant to be an initialism, or why the F is capitalized. I'm completely stuMped.

101 questions with a bioinformatician #13: Michael Schatz

This post is part of a series that interviews some notable bioinformaticians to get their views on various aspects of bioinformatics research. Hopefully these answers will prove useful to others in the field, especially to those who are just starting their bioinformatics careers.


Mike Schatz is an Assistant Professor of Quantitative Biology at Cold Spring Harbor Laboratory. Prior to getting into the world of genomics and bioinformatics, Mike worked for a startup company that specialized in network security (working on encryption software for online banking, amongst other things):

It was unplanned serendipity, but code breaking turned out to be perfect training for genomics, and the startup turned out to be perfect training to become a PI. 

His research focuses on the development of scalable algorithms and systems to analyze biological sequence data, concentrating on the alignment, assembly, and analysis of high-throughput DNA sequencing reads. If you visit his lab research page, you will see an impressive list of software tools that he has helped develop.

Aside from his contributions to genomics, I am perhaps more impressed that Mike has made available slides from all of his major research presentations going back to 2005 (over 80 talks). I wish more scientists were as dedicated at sharing talks like this. You can find out more about Mike from his lab website or by following him on twitter (@mike_schatz). And now, on to the 101 questions...

001. What's something that you enjoy about current bioinformatics research?

What brought me into the field was the opportunity to apply my training and experience in computer science to really meaningful problems in biology and medicine. I’m fascinated by the deep connections between how computers and software are organized and operate compared to how cells and genomes are replicated, transcribed, and evolve.

Right now is by far the most fantastic time to be in a field that is driven by rapid improvements to the biotechnology. How amazing that just 15 or 20 years ago it would have been cheaper and easier to land a team on the moon than to sequence their genomes, but now we do it on a routine basis!

This growth has fundamentally and forever changed the types of questions that we can even ask. The really exciting and scary point is we are still at the very beginning, and are still feeling around in the dark. I recently gave a talk about how long we should expect to wait until we have sequenced one billion genomes (hint: it is a lot sooner than you might expect).

010. What's something that you *don't* enjoy about current bioinformatics research?

The FASTQ “file format”. Do we really need the read identifier listed twice (sometimes), newlines within a single record, and an unspecified encoding scheme for quality values that changes every so often depending on when the software was run?

I cringe every time I have to teach it to a new student. There is no rationale to it and it's so obviously flawed. It just feels dirty to teach it. I like to think that in 10 or 100 years this will all be sorted out, but today, this and so many other poorly designed systems are entrenched in our day-to-day lives. It is a constant, if dull, irritation that makes everything slow to change, and brittle to use.
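The quirks Mike complains about (the read ID optionally repeated on the '+' line, records wrapped across multiple lines, and ambiguous quality encodings) are exactly what make FASTQ painful to parse. As a rough illustration, here is a minimal sketch of a tolerant parser, with a common heuristic for guessing the quality offset; the function names are mine, not from any standard library, and this is not production code:

```python
def parse_fastq(lines):
    """Yield (read_id, sequence, quality) tuples from FASTQ lines.
    Tolerates wrapped sequence/quality lines and an optional repeat
    of the read ID on the '+' separator line."""
    it = iter(lines)
    for line in it:
        header = line.rstrip()
        if not header.startswith('@'):
            continue  # skip stray blank or malformed lines
        seq = []
        for line in it:
            line = line.rstrip()
            if line.startswith('+'):
                break  # separator line; any repeated ID here is ignored
            seq.append(line)
        seq = ''.join(seq)
        # Quality must be read by length, not by sentinel characters,
        # because '@' and '+' are themselves legal quality characters
        qual = []
        while len(''.join(qual)) < len(seq):
            qual.append(next(it).rstrip())
        yield header[1:], seq, ''.join(qual)

def guess_phred_offset(quality_strings):
    """Heuristic: characters below '@' (ASCII 64) only occur in Phred+33."""
    chars = set().union(*map(set, quality_strings))
    return 33 if min(chars) < '@' else 64
```

Note that even the offset guess is only a heuristic: a high-quality Phred+33 file with no low-quality bases is indistinguishable from Phred+64, which is rather the point of the complaint.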

011. If you could go back in time and visit yourself as an 18 year old, what single piece of advice would you give yourself to help your future bioinformatics career?

Take more probability and statistics. So much of my life now is spent looking for patterns in enormously large and complex data that the only hope is through statistical analysis. I used to stay up late reading algorithms textbooks, but now this is where I spend my free time.

The one really successful tip I’ve learned is that, even though my intuition for probability is poor, I can often work backwards using a simulator. I’ll write a little code so I can look at what happens to the distribution if this rate goes up, or if the genome was twice as complex. I then use that to guide me to the analytical form. I always understand an algorithm better if I implement it from scratch, and I think that this is an extension of that concept.
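As a toy illustration of the simulate-first approach Mike describes, the sketch below drops reads uniformly at random onto a genome and reports the resulting coverage depth; watching how the depth distribution responds as you change the rate parameters is how the analytical (Lander-Waterman / Poisson) form starts to suggest itself. This is my own hypothetical example, not code from Mike's lab:

```python
import random

def simulate_coverage(genome_size, num_reads, read_len, seed=0):
    """Place reads uniformly at random on a genome and return
    the per-base coverage depth as a list of integers."""
    random.seed(seed)
    depth = [0] * genome_size
    for _ in range(num_reads):
        start = random.randrange(genome_size - read_len + 1)
        for pos in range(start, start + read_len):
            depth[pos] += 1
    return depth

depth = simulate_coverage(genome_size=10_000, num_reads=1_000, read_len=100)
mean_depth = sum(depth) / len(depth)
# expected mean coverage = num_reads * read_len / genome_size = 10x
```

Doubling num_reads, halving read_len, and re-plotting the depth histogram each time is usually enough to guide you back to the closed-form answer.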

100. What's your all-time favorite piece of bioinformatics software, and why?

Do I have to pick just one? Ben Langmead blew my mind when he taught me about the FM-index. A very close second was the genome assembler Art Delcher wrote in about 50 lines of awk. More recently my lab went over the SGA algorithm from Simpson and Durbin in great detail. All of these have beauty in their simplicity and elegance — like a great work of art everything locks together perfectly in step.

101. IUPAC describes a set of 18 single-character nucleotide codes that can represent a DNA base: which one best reflects your personality?

S – It is the strongest code, of course! ;)

Is there ever a valid reason for storing bioinformatics data in a Microsoft Word document?

Short answer

No.

Long answer

Noooooooooo!!!

Background

Yesterday I finished reviewing a paper. My review was generally very positive and I enjoyed reading the manuscript. The authors linked to some supplementary files that were available on another website. As I'm the type of reviewer that likes to look at every file that is part of a submission, I logged on to the website to see what files were there.

The first file that was listed had a 'docx' extension. Someone might argue that if this file contained a textual description of how the other files were being generated, then maybe there is nothing wrong with somebody using Microsoft Word. I would disagree. Any sort of documentation should ideally be in plain text, and maybe PDF as an alternative.

In any case, I opened the file to see what we were dealing with. The file contained a list of several thousand gene identifiers, one identifier per line. There was nothing else in the thirty-six page file.
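The contrast is stark: a list of identifiers saved as plain text is directly usable by every tool in the chain, whereas the same list in a .docx needs a dedicated library just to recover the text. A minimal sketch (the file name is hypothetical):

```python
def read_identifiers(path):
    """Read one identifier per line from a plain-text file,
    skipping blank lines. No special library required."""
    with open(path) as handle:
        return [line.strip() for line in handle if line.strip()]

# e.g. gene_ids = read_identifiers("gene_identifiers.txt")
```

That is the entirety of the code needed when the data lives in plain text, which is rather the point.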

This is not an acceptable practice! Use of Microsoft Word to store bioinformatics data will only ever result in unhappiness, frustration, and anger. And we all know what anger leads to…

Supplemental madness: on the hunt for 'Figure S1'

I've just been looking at this new paper by Vanneste et al. in Genome Research:

Analysis of 41 plant genomes supports a wave of successful genome duplications in association with the Cretaceous–Paleogene boundary

I was curious as to where their 41 plant genomes came from, so I jumped to the Methods section to see: 

No surprise there, this is exactly the sort of thing you expect to find in the supplementary material of a paper. So I followed the link to the supplementary material only to see this:

So the 'Supplemental Material' contains 'Supplemental Information' and the — recursively named — 'Supplemental Material'. So where do you think Supplemental Table S1 is? Well it turns out that this table is in the Supplemental Material PDF. But when looking at both of these files, I noticed something odd. Here is Figure S1 from the Supplemental Information:

And here is part of another Figure S1 from the Supplemental Material file:

You will notice that the former figure S1 (in the Supplemental Information) is actually called a Supporting Figure. I guess this helps distinguish it from the completely-different-and-in-no-way-to-be-confused Supplementary Figure S1.

This would possibly make some sort of sense if the main body of the paper distinguished between the two different types of Figure S1. Except the paper mentions 'Supplemental Figure S1' twice (not even 'Supplementary Figure S1') and doesn't mention Supporting Figure S1 at all (or any supporting figures for that matter)!

What does all of this mean? It means that Supplementary Material is a bit like the glove compartment in your car: a great place to stick all sorts of stuff that will possibly never be seen again. Maybe we need better reviewer guidelines to stop this sort of confusion happening? 

The Assemblathon Gives Back (a bit like The Empire Strikes Back, but with fewer lightsabers)

So we won an award for Open Data. Aside from a nice-looking slab of glass that is weighty enough to hold down all of the papers that someone with a low K-index has published, the award also comes with a cash prize.

Naturally, my first instinct was to find the nearest sculptor and request that they chisel a 20 foot recreation of my brain out of Swedish green marble. However, this prize has been — somewhat annoyingly — awarded to all of the Assemblathon 2 co-authors.

While we could split the cash prize 92 ways, this would probably only leave us with enough money to buy a packet of pork scratchings each (which is not such a bad thing if you are fan of salty, fatty, porcine goodness).

Instead we decided — and by 'we', I'm really talking about 'me' — to give that money back to the community. Not literally of course…though the idea of throwing a wad of cash into the air at an ISMB meeting is appealing.

Rather, we have worked with the fine folks at BioMed Central (that's BMC to those of us in the know), to pay for two waivers that will cover the cost of Article Processing Charges (that's APCs to those of us in the know). We decided that these will be awarded to papers in a few select categories relating to 'omics' assembly, Assemblathon-like contests, and things to do with 'Open Data' (sadly, papers that relate to 'pork scratchings' are not eligible).

We are calling this event the Assemblathon 'Publish For Free' Contest (that's APFFC to those of us in the know), and you can read all of the boring details and contest rules on the Assemblathon website.