A (still in progress) best practices workflow for reference-guided genome assembly

A couple of years ago I led a paper in PLoS ONE outlining a reference-guided genome assembly approach that, more or less, simply maps reads from a target species to a high-quality guide genome and exports a consensus, producing a genome sequence for the target species of interest. As the paper pointed out, this has been applied between strains of a single species, but hasn’t really been tried across larger evolutionary distances. We found that, with the 3-5x genome coverage we had, this approach produced a better genome assembly, based on several metrics, than simply performing de novo assembly, despite mapping to a guide genome that was quite distantly related in both cases. It helped that we were working with bird genomes, which are the best-case scenario for this type of approach in vertebrates. Obviously, we were assuming that enough synteny exists to prevent the method from producing spurious assemblies, and that assumption may or may not hold up as evidence emerges (such evidence hasn’t yet emerged across broader taxonomic scales). Despite this and other shortcomings, I hold out some hope that this type of assembly approach will become more common as more high-quality reference genomes from across the tree of life become available. Moreover, I think that even in cases of high sequencing coverage, where a de novo assembly is the norm, this type of approach could complement existing assembly methods to produce a better overall assembly.

Not many have taken notice of this work (at least based on citations), though I recently had an inquiry from someone trying to implement it. I’m also coming full circle and playing around with this approach again in some of the work I am currently doing. Admittedly, at the time I was quite new to graduate school and to handling and analyzing whole-genome sequencing data, and being more seasoned now, I thought I would update my approach to one that makes more sense and uses some different tools. The basic pipeline from before (i.e., the publication) was to use CLC Genomics Workbench to do the following:

  1. Quality trim raw Illumina reads
  2. Map quality-trimmed reads to a guide genome
  3. Export the consensus sequence for mapped regions, based on the level of read coverage

When someone asked for guidance with their own data, I was quick to provide what I think is a more sound (and certainly more “open”) pipeline, one made possible by everything I’ve learned since publishing the original paper. It goes as follows, with rough command-line sketches for each step after the list:

  1. Quality trim raw Illumina reads: One can use almost anything here, but I’m currently partial to Trimmomatic.
  2. Map quality-trimmed reads to a guide genome: I like BWA the most for this, but any mapper could work. This step can obviously vary a lot depending on the evolutionary distance between the target and the guide, and one has to balance high-quality mapping against sufficient coverage. I’m hoping to do a follow-up post about this.
  3. Call any SNPs or INDELs: The two leaders here are SAMtools and GATK, both of which output a VCF file of variants. The former is more streamlined and quicker to implement, while the Best Practices approach of the latter is more refined (e.g., INDEL realignment), which might help in this type of application. I’m planning to write another post outlining the GATK approach in the context of reference-guided assembly.
  4. Filter genotype calls by quality: Again, one could use SAMtools or GATK. One can filter on many things, and the filtering we performed before was based on coverage, but I would now recommend filtering on metrics like mapping quality and genotype probability/quality scores, since these essentially encompass sequencing depth. This produces a refined VCF file containing the most confident variant calls.
  5. Export a consensus sequence: SAMtools or GATK will take the filtered VCF and the guide genome and produce a consensus sequence in which every position without a genotype call in the VCF is output as an ‘N’ ambiguity. This preserves some level of scaffolding, which is useful, though it is probably prudent to break up longer stretches of Ns, as they aren’t founded on any empirical data from the target species and may incorrectly link contigs.
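
To make these steps more concrete, here are rough command-line sketches for each one. All file names (reads_R1.fastq.gz, guide.fasta, and so on) are placeholders, and the parameter values are generic starting points rather than anything I have benchmarked for this application. For step 1, a paired-end Trimmomatic run might look like this:

```bash
# Step 1: quality trim raw Illumina reads with Trimmomatic (paired-end mode).
# The adapter file and trimming thresholds below are the stock examples from the
# Trimmomatic documentation, not tuned values.
java -jar trimmomatic-0.39.jar PE -phred33 \
  reads_R1.fastq.gz reads_R2.fastq.gz \
  trimmed_R1_paired.fastq.gz trimmed_R1_unpaired.fastq.gz \
  trimmed_R2_paired.fastq.gz trimmed_R2_unpaired.fastq.gz \
  ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 \
  LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36
```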
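
For step 2, a minimal BWA-MEM sketch, assuming paired-end reads and a guide genome in guide.fasta; the mapping parameters (left at their defaults here) are exactly where the target-to-guide divergence would need to be accounted for:

```bash
# Step 2: index the guide genome, map the quality-trimmed read pairs,
# then coordinate-sort and index the alignments.
bwa index guide.fasta
bwa mem -t 8 guide.fasta trimmed_R1_paired.fastq.gz trimmed_R2_paired.fastq.gz |
  samtools sort -o aln.sorted.bam -
samtools index aln.sorted.bam
```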
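
For steps 3 and 4, here is the SAMtools/BCFtools route (I’ll leave GATK for the follow-up post). This sketch assumes a recent BCFtools, where the pileup step has moved from samtools mpileup to bcftools mpileup, and the QUAL and MQ cutoffs are arbitrary placeholders that would need tuning:

```bash
# Step 3: call SNPs and INDELs against the guide genome.
samtools faidx guide.fasta
bcftools mpileup -f guide.fasta aln.sorted.bam |
  bcftools call -mv -Oz -o raw_calls.vcf.gz

# Step 4: retain only confident calls, filtering on site quality and mapping
# quality rather than raw depth alone (thresholds here are placeholders).
bcftools filter -e 'QUAL < 20 || MQ < 30' -Oz -o filtered_calls.vcf.gz raw_calls.vcf.gz
bcftools index filtered_calls.vcf.gz
```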
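
For step 5, one way to end up with ‘N’ wherever the target reads provide no support is to build a zero-coverage mask with bedtools and pass it to bcftools consensus (GATK’s FastaAlternateReferenceMaker is an alternative route). The masking step is my own addition here, not something from the original paper:

```bash
# Step 5: export a consensus by applying the filtered calls to the guide genome,
# masking positions with zero read coverage as 'N'.
bedtools genomecov -ibam aln.sorted.bam -bga | awk '$4 == 0' | cut -f1-3 > zero_coverage.bed
bcftools consensus -f guide.fasta -m zero_coverage.bed filtered_calls.vcf.gz > consensus.fasta
```

From there, breaking the consensus at long runs of Ns (per the caveat in step 5) is a simple post-processing step on the FASTA.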

Overall, I think this approach does a much better job: it filters genotype calls using probabilities, which scale well with genome sequencing coverage, rather than relying simply on coverage cutoffs, a much more arbitrary filtering scheme. I’m aiming to write some follow-up posts discussing mapping parameter optimization and the details of mapping, variant calling, and variant filtering. I’m also hoping to implement this more refined approach in some upcoming work that will make its way into a manuscript.