Beautiful logo redesign as part of the rebranding of Crossref

Crossref — the non-profit organization that helps make academic content easier to find, link, cite and assess — has today announced a rebranding. They will be announcing new names and new logos for all of their products, and the Crossref logo itself gains a beautiful new design. So we say 'goodbye' to this:

 

And 'hello' to this lovely logo:

 

The explanation for why they wanted to change the logo makes a lot of sense to me:

We needed an icon to give more flexibility across the web that a word mark cannot do alone. The icon is made up of two interlinked angle brackets familiar to those who work with metadata, and can also act as arrows depicting Metadata In and Metadata Out, two themes under which our services can generally be grouped.

As part of this rebranding, they are formalizing a change from CrossRef to Crossref (with lower case 'R'). Someone had a fun job updating their Wikipedia page:

Wikipedia edit history: CrossRef > Crossref.

Assemble a genome and evaluate the result [Link]

There is a new page on the bioboxes site (such a great name!) which details how bioboxes can be used to assemble a genome and then evaluate the results:

A common task in genomics is to assemble a FASTQ file of reads into a genome assembly and followed by evaluating the quality of this assembly. This recipe will explore using bioboxes to do this task.

A third Assemblathon contest came very close to launching earlier this year…except that it didn't — maybe this will be the subject of a future blog post! — and we planned to make biobox containers a requisite part of submitting assemblies. If Assemblathon 3 ever gets off the ground I feel happier knowing that the bioboxes team is doing so much great work that will make running such a contest easier to manage.

Time to toggle the JABBA-award status of this bioinformatics software name

Give me a B.
Give me a O.
Give me a G.
Give me a U.
Give me a S.

What have you got?

Another BOGUS bioinformatics acronym! This time it is courtesy of the journal BMC Bioinformatics:

I think you can already see why this one is going to win a JABBA award. The name 'TOGGLE' derives from TOolbox for Generic nGs anaLysEs. Using the same strategy, they could have also gone for BOGGLE, BONNY, or even BORINGLY.

How to ask for bioinformatics help online

Part two of a two-part series.

In part one I covered where to ask for bioinformatics help. Now it is time to turn to the issue of how you should go about asking for help. Hat tip to reader Venu Thatikonda (@nerd_yie) for pointing me to this 2011 PLOS Computational Biology article that covers similar ground to this blog post. Here are my five main suggestions, with the last one further broken down into nine different tips:

  1. Be polite. Posting a question to an online forum does not mean that you deserve to be answered. If people do answer, they are usually doing so by giving up their own free time to try to help you. Don't berate people for their answers, or insult them in any way.
  2. Be relevant. Choose the right forum in which to ask your question. Sites like SEQanswers have different forums that discuss particular topics, so don't post your PacBio question in the Ion Torrent forum.
  3. Be aware of the rules. Most online forums will have some rules, guidelines, and/or an FAQ which covers general posting etiquette and other things that you should know. It is a good idea to check this before posting on a site for the first time.
  4. Be clever. Search the forum before asking your question; there is a good chance that your question has already been asked (and answered) by others.
  5. Be helpful. The biggest thing you can probably do in order to get a useful answer to a question is to provide as many useful details as possible. These include:
    1. Type of operating system and version number, e.g. Mac OS X 10.10.5.
    2. Version number/name of software tool(s) you are using, e.g. NCBI BLAST+ v2.2.26, Perl v5.18.2 etc. A good bioinformatics or Unix tool will have a -v, -V, or --version command-line option that will give you this information.
    3. Any error message that you saw. Report the full error message exactly as it appeared.
    4. Where possible, provide steps that would let someone else reproduce the problem (assuming it is reproducible).
    5. Outline the steps that you have tried, if any, to fix the problem. Don't wait for someone to suggest 'quit and restart your terminal' before you reply 'Already tried that'.
    6. A description of what you were expecting to happen. Some perceived errors are not actually errors at all (the software was doing exactly what was asked of it, though this may not be what the user was expecting).
    7. Any other information that could help someone troubleshoot your problem, e.g. a listing of your Unix terminal before and/or after you ran a command which caused a problem.
    8. A snippet of your data that would allow others to reproduce the problem. You may not be able to upload data to the website in question, but small data snippets could be shared via a Dropbox or Google Drive link, or on sites like GitHub Gist.
    9. Attach a screenshot that illustrates the problem. Many forum sites allow you to add image files to a post.
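Several of the details above (operating system, tool versions, what is actually on your PATH) can be gathered automatically before you post. Here is a minimal Python sketch of that idea; the default tool names are purely illustrative, and it assumes only that a tool answers to a --version flag, as discussed in tip 2:

```python
import platform
import shutil
import subprocess
import sys


def environment_report(tools=("samtools", "bowtie2")):
    """Collect system details worth pasting into a help-forum post.

    The default tool names are illustrative; list whatever you are running.
    """
    lines = [
        f"OS: {platform.platform()}",
        f"Python: {sys.version.split()[0]}",
    ]
    for tool in tools:
        if shutil.which(tool) is None:
            lines.append(f"{tool}: not found on PATH")
            continue
        try:
            # Most well-behaved tools support -v, -V, or --version (tip 2)
            result = subprocess.run([tool, "--version"], capture_output=True,
                                    text=True, timeout=10)
            first = (result.stdout or result.stderr).strip().splitlines()[0]
            lines.append(f"{tool}: {first}")
        except (OSError, subprocess.TimeoutExpired, IndexError):
            lines.append(f"{tool}: could not determine version")
    return "\n".join(lines)


print(environment_report())
```

Pasting the output of something like this at the top of a forum post covers tips 1 and 2 in one go, and saves a round of "which version are you running?" replies.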

Any other suggestions?

 

Updates

2015-11-08 09.44: Added link to PLOS Computational Biology article

Gender ratio of speakers at today's Festival of Genomics California conference

The Festival of Genomics California conference starts today. From the speaker lineup I count 132 speakers, with a gender ratio of 72.7% men to 27.3% women. This is a good ratio compared to many (most?) genomics conferences — see Jonathan Eisen's many excellent posts on this subject — and it exceeds the background level of women in senior roles in genome institutes around the world (a figure I previously calculated as 23.6%).

However, it was because the ratio of women speakers fell below my self-imposed target of 33.3% that I declined Front Line Genomics' kind offer of a speaking position and requested that they instead offer my slot to a woman.

I think Front Line Genomics are ahead of many conference organizers in addressing gender bias, and I look forward to seeing the final lineup at their upcoming Festival of Genomics London conference.

This post is to serve as a reminder that we, as a community, still need to do much better at addressing gender bias in our field, and that men can actively help this process by refusing to speak or present at conferences which show extreme bias. Preferably, I would like others to adopt my 33.3% target as a minimum ratio that we should be aiming for (this applies both ways, though there doesn't seem to be much likelihood of men feeling underrepresented any time soon).

A timely call to overhaul how scientists publish supplementary material [Link]

Great new editorial piece in BMC Bioinformatics by Mihai Pop and Steven Salzberg that tackles a subject that people probably don't think about too much:

They highlight some of the problems that arise from the growing trend in some journals to publish very short articles that are accompanied by extremely lengthy supplementary material. They single out a few particularly lop-sided papers — including a 6-page article that has 165 pages of supplementary material — and make some solid observations about why this facet of publishing has become a problem. Perhaps most importantly, citations that are buried in supplementary material do not get tracked by citation indices.

They conclude the paper with a proposal:

The ubiquitous use of electronic media in modern scientific publishing provides an opportunity for the better integration of supplementary material with the primary article. Specifically, we propose that supplementary items, irrespective of format, be directly hyper-linked from the text itself. Such references should be to specific sections of the supplementary material rather than the full supplementary text.

Yes, yes, a thousand times yes!

Where to ask for bioinformatics help online

Part one of a two-part series. In part two I tackle the issue of how to ask for help online.

You have many options when seeking bioinformatics help online. Here are eleven possible places to ask for help, loosely arranged by their usefulness (as perceived by me):

  1. SEQanswers — the most popular online forum devoted to bioinformatics?
  2. Biostars — another very popular forum.
  3. Mailing lists — many useful bioinformatics tools have their own mailing lists where you can ask questions and get help from the developers or from other users, e.g. SAMtools and Bioconductor. Also note that resources such as Ensembl have their own mailing lists for developers.
  4. Google Discussion Groups — as well as having very general discussion groups, e.g. Bioinformatics, there are also groups like Tuxedo Tool Users…the perfect place to ask your TopHat or Cufflinks question.
  5. Stack Overflow — more suited for questions related to programming languages or Unix/Linux.
  6. Google — I'm including this here because I have solved countless bioinformatics problems just by searching Google with an error message.
  7. Reddit — try asking in r/bioinformatics or r/genome.
  8. Twitter — this may be more useful if you have enough followers who know something about bioinformatics, but it is potentially a good place to ask a question, though not a great forum for long questions (or replies). Try using the hashtag #askabioinformatician (this was @sjcockell's idea).
  9. Voat — like Reddit's younger, hipster nephew. However, the bioinformatics 'subverse' is not very active.
  10. ResearchGate — you may know it better as 'that site that sends me email every day', but some people do use it to ask questions about science. Surprisingly, it has 15 different categories relating to bioinformatics.
  11. LinkedIn — Another generator of too many emails, but they do have discussion groups for bioinformatics geeks and NGS.

Other suggestions welcome.

 

Updates

2015-11-02 09.53: Added twitter at the suggestion of Stephen Turner (@nextgenseek).

A rare example of a simple, fun, non-bogus name for a bioinformatics tool

Recently published in the journal Genome Biology, we have:

I like this name a lot because it is:

  • Memorable
  • Pronounceable
  • Simple, but also clever (combining elements of HiC and 5C)
  • Fun (a play on 'high five')
  • Not an acronym (so not a bogus acronym either)
  • Unique (can't find any other tools with this name)
  • Relevant (the short name has a connection to the data that the tool works with).

Maybe I need to start designing some sort of 'Anti-JABBA' award?

10 years of Open Access at the Wellcome Trust in 10 numbers [Link]

A great summary of how the Wellcome Trust has helped drive big changes in open access publishing. Of the ten numbers that the post uses to summarise the last decade, this one surprised me the most:

20% – the volume of UK-funded research which is freely available at the time of publication
A recent study commissioned by Universities UK found that 20% of articles authored by UK researchers and published in the last two years were freely accessible upon publication. This figure increases to 24% within six months of publication, and 32% within 12 months.

If you had asked me to guess what this number would be, I think I would have been far too optimistic. Even the figure of 32% of articles being free within 12 months seems lower than I would imagine. Lots of progress still to be made!

Teaser: a solution for our read mapping dilemma?

A paper recently published in Genome Biology by Smolka et al. may offer some help to the problem of choosing which read mapping program to use in order to align a set of sequencing reads to a genome:

The paper starts by neatly summarising the problem:

Recent and ongoing advances in sequencing technologies and applications lead to a rapid growth of methods that align next generation sequencing reads to a reference genome (read mapping). By mid 2015, nearly 100 different mappers are available, although not all are equally suited for a given application or dataset.

The program Teaser attempts to automate the benchmarking of not just different mappers, but also (some of) the different parameters that are available to these programs. The latter problem should not be underestimated. The Bowtie 2 documentation, for example, describes almost 100 different command-line options, many of which control how Bowtie 2 runs and/or what output it generates.

Teaser uses small sets of simulated read data, leading to very quick run times (< 30 minutes for many comparisons), but you can also supply real data to it. By default, Teaser will test the performance of five read mapping programs: BWA, BWA-MEM, BWA-SW, Bowtie2, and NextGenMap.

Impressively, you can run Teaser on the web as well as a standalone program. The web output includes results displayed graphically for many different test datasets (x-axis):

The paper concludes by asking the community to submit optimal parameter combinations to the Teaser GitHub repository:

Teaser is easy to use and at the same time extendable to other methods and parameters combinations. Future work will include the incorporation of benchmarking RNA-Seq mappers and variant calling methods. We furthermore encourage the scientific community to contribute the optimal parameter combinations they detected to our github repository (available at github.com/Cibiv/Teaser) for their particular organism of interest. This will help others to quickly select the optimal combination of mapper and parameter values using Teaser.

I can't wait for the companion program Firecat!

 

Updates

2015-10-26 11.05: Updated to remove specific references to software versions of mapping tools.


Help us do science! I’ve teamed up with researcher Paige Brown Jarreau to create a survey of ACGT readers. By participating, you’ll be helping me improve ACGT and contributing to the SCIENCE of blog readership. You will also get FREE science art from Paige's Photography for participating, as well as a chance to win a t-shirt and other perks! It should only take 10–15 minutes to complete.

You can find the survey here: http://bit.ly/mysciblogreaders

ORCID: binding the (academic) galaxy together

Adapted from picture by flickr user Jim & Rachel McArthur

I am a supporter of ORCID's goals to help establish unique identifiers for researchers. Such identifiers can then be used to connect a researcher with all of the inputs and outputs of their career. Most fundamentally, these inputs and outputs are grants and papers, but there is the potential for ORCID identifiers to link a person to much more, e.g. the organisations that they work for, manuscript reviews, code repositories, published slides, even blog posts.

For ORCID to succeed it has to be global and connect all parts of the academic network, a network that spans national boundaries. On this point, I am very impressed by the effort that ORCID makes in ensuring that their excellent outreach materials are not only available in English. As shown below, ORCID's 'Distinguish yourself' flyer is available in 9 different languages. Other material is also available in Russian, Greek, Turkish, and Danish. If your desired language is not available, they welcome volunteers to help translate their message into more languages. Email community@orcid.org if you want to help.

Welcome to the JABBA menagerie: a collection of animal-themed, bogus bioinformatics names…that have nothing to do with animals!

Bioinformaticians make the worst zookeepers:

[An A-to-Z gallery of animal-themed, bogus bioinformatics software names appeared here.]

Other suggestions welcome! Only requirements are that:

  1. The name is bogus, i.e. not a straightforward acronym and worthy of a JABBA award
  2. The acronym is named after an animal (or animal grouping)
  3. The software/tool has nothing to do with the animal in question

Great Scott! Five fun facts about DNA sequencing from 1985

As everyone is celebrating a certain 2015-themed calendar event today, I thought we could instead go back to the future past of DNA sequencing.

 

1.

Thirty years ago there were no automated sequencing machines. However, Sanger sequencing technology could still provide longer reads than most of Illumina's machines today, e.g. from this paper (A rapid procedure for DNA sequencing using transposon-promoted deletions in Escherichia coli):

The length of the sequence that could be read from each gel in a single run varied from 175 to 200 nt.

 

2.

The idea of sequencing nuclear genomes was still largely a pipe dream, but smaller genomes were tractable. 1985 saw the addition of the Xenopus laevis mitochondrial genome to the tiny collection of organelle genome sequences. Figure 3 of this paper displayed the full sequence, spread over six pages that looked like this:

Including long DNA sequences in journal articles was a surprisingly common practice at this time.

 

3.

There were two releases of GenBank in 1985. The second release saw the database grow to an astounding set of 5,700 sequences, totalling 5,204,420 bp. For comparison, this year also saw the release of the Commodore 128 home computer which came with 128 KB of RAM. The first 3.5" hard drives were only a couple of years old, and could store 10 MB (so capable of storing the DNA sequences in GenBank, but possibly not the associated annotation).

 

4.

The SEQ-ED program was published, allowing the handling of 'long DNA sequences' that were 'up to 200 Kbp'.

 

5.

Somewhat amazingly, people were writing bioinformatics software for Apple computers. The journal CABIOS included this paper:

But how did people distribute software in the days when there was no GitHub, SourceForge, or indeed…no world wide web?

For both code and source of PEGASE, please send two blank 5" diskettes and indicate precisely your system configuration (there is a slight difference between the Apple II+ and the Apple IIe version which depends on the availability of lower case characters).

Dovetail takes flight [Link]

If you ever want to know about the latest developments in sequencing, you owe it to yourself to follow Keith Robison's blog. In his latest post he talks about the launch of the new de novo assembly service from Dovetail Genomics. Keith concludes:

Personally, a pure service offering is very attractive, since that means not having to find internal resources to learn the new technology and then execute on it. I checked with Dovetail, and while I don't have $40K burning a hole in my pocket, if I did I could grab something out of the garden or from the local seafood market, I really could have a complex genome scaffold of my very own in about two months. That's an exciting vision, and perhaps will be a major force in the sunsetting of science's tolerance for highly fragmented draft genomes.

Readers may also enjoy Bio-IT World's report on this new Dovetail service.

Another survey on bioinformatics practices

I recently wrote about the bioinformatics survey that Nick Loman and Tom Connor published. Well, if people are interested, there is another bioinformatics survey happening, organised by Elia Brodsky (@EliaBrodsky).

Elia works at Pine Biotech and he says that the results of the survey will be publicized on their website.

You can take the survey here and you can read more details about it on Elia's LinkedIn post: Bioinformatics - useful or just frustrating?

Another hard-to-pronounce bioinformatics software name

This was from a few months ago, published in the journal Nucleic Acids Research:

So how do you pronounce 'FunFHMMer'? I can imagine several possibilities:

  1. Fun-eff-aitch-em-em-er
  2. Fun-eff-aitch-em-mer
  3. Fun-eff-hammer
  4. Fünf-hammer

Reading the manuscript suggests that 'FunF' stems from 'FunFam(s)' which in turn is derived from 'functional families'. This would suggest that options 1 or 3 above might be the correct way to pronounce this software's name.

The fully expanded description of this web server's name becomes a bit of a mouthful:

Class Architecture Topology Homologous Superfamily Functional Families Hidden Markov Model (maker?)

We asked 272 bioinformaticians…name something that makes you angry: more reflections on the poor state of software documentation.

I'd like to share the details of a recent survey conducted by Nick Loman and Thomas Connor that tried to understand current issues with bioinformatics practice and training.

The survey was announced on twitter and attracted almost 300 responses. Nick and Tom have kindly placed the results of the survey on Figshare so that others can play with the data (it seems fitting to talk about this today as it is International Open Access Week):

When you ask a bunch of bioinformaticians the question 'What things most frustrate you or limit your ability to carry out bioinformatics analysis?' you can be sure that you will attract some passionate, and often amusing, answers (I particularly liked someone's response to this question: "Not enough Heng Li").

I was struck by how many people raised the issue of poor, incomplete, or otherwise terrible software documentation as a problem (there were at least 42 responses that mentioned this). The availability of 'good documentation' was also listed as the 2nd most important factor when choosing software to use.

I recently wrote about whether this problem is something that really needs to be dealt with by journals and by the review process. It shouldn't be enough that software is available and that it works, we should have some minimal expectation for what documentation should accompany bioinformatics software.

Keith's 10 point checklist for reviewing software

If you are ever in a position to review a software-based manuscript, please check for the following:

  1. Is there a plain text README file that accompanies the software and which explains what the program does and who created it?
  2. Is there a comprehensive manual available somewhere that describes what every option of the program does?
  3. Is there a clear version number or release date for the software?
  4. Does the software provide clear installation instructions (where relevant) that actually work?
  5. Is the software accompanied by an appropriate license?
  6. For command-line programs, does the program give some sensible output when no arguments are provided?
  7. For command-line programs, does the program give some sensible output when -h and/or --help is specified (see this old post of mine for more on this topic)?
  8. For command-line programs, does the built-in help/documentation agree with the external documentation (text/PDF), i.e. do they both list the same features/options?
  9. For script-based software (Perl, Python, etc.), does the code contain a reasonable level of comments that allow someone with relevant coding experience to understand what the major sections of the program are trying to do?
  10. Is there a contact email address (or link to support web page) provided so that a user can ask questions and get more help?

I'm not expecting every piece of bioinformatics software to tick all 10 of these boxes, but most of these are relatively low-hanging fruit. If you are not prepared to provide useful documentation for your software, then you should also be prepared for people to choose not to use your software, and for reviewers to reject your manuscript!
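To make points 3, 6, and 7 of the checklist concrete, here is a minimal Python/argparse sketch of a well-behaved command-line tool. The tool name ('mytool') and its FASTA-counting job are hypothetical, purely for illustration:

```python
import argparse
import sys

VERSION = "1.0.0"  # point 3: a clear version number


def build_parser():
    # argparse gives us -h/--help for free (point 7)
    parser = argparse.ArgumentParser(
        prog="mytool",  # hypothetical tool name
        description="Count the sequences in a FASTA file.",
        epilog="Questions? See the contact details in the README (point 10).",
    )
    parser.add_argument("fasta", nargs="?", help="input FASTA file")
    # point 7: -v/--version behaves the way users expect
    parser.add_argument("-v", "--version", action="version",
                        version=f"%(prog)s {VERSION}")
    return parser


def main(argv=None):
    parser = build_parser()
    args = parser.parse_args(argv)
    if args.fasta is None:
        # point 6: with no arguments, print usage rather than a stack trace
        parser.print_usage()
        return 1
    with open(args.fasta) as handle:
        print(sum(1 for line in handle if line.startswith(">")))
    return 0


# Entry point when run as a script:
# sys.exit(main())
```

Running this sketch with no arguments prints a usage line and exits with a non-zero status, and `-v` or `--version` reports the release; those two behaviours alone would satisfy three items on the checklist.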

Your help needed: readers of ACGT can take part in a scientific study and win prizes

I’ve teamed up with researcher Paige Brown Jarreau (@fromthelabbench on twitter) to create a survey of ACGT readers, the results of which will be combined with feedback from readers of other science blogs.

Paige is a postdoctoral researcher at the Manship School of Mass Communication, Louisiana State University and her research focuses on the intersection of science communication, journalism, and new media. She also writes on her popular From the Lab Bench blog.

By participating in this 10–15 minute survey, you’ll be helping me improve ACGT, but more importantly you will be contributing to our understanding of science blog readership. You will also get FREE science art from Paige's Photography for participating, as well as a chance to win a t-shirt and a $50 Amazon gift card!

Click on the following link to take the survey: http://bit.ly/mysciblogreaders

Thanks!

Keith

P.S. Even if you don't take part in the survey, you should still check out Paige's amazing photography, her picture of a Western lowland gorilla is stunning.