Published in

Oxford University Press, Nucleic Acids Research, 40(5), p. e35, 2011

DOI: 10.1093/nar/gkr1221

Maximizing signal-to-noise ratio in the random mutation capture assay

This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

The 'Random Mutation Capture' assay allows sensitive quantitation of DNA mutations at extremely low mutation frequencies. The method is based on PCR detection of mutations that render the mutated target sequence resistant to restriction enzyme digestion. The original protocol prescribes an end-point dilution to about 0.1 mutant DNA molecules per PCR well, so that the mutation burden can be calculated simply by counting the number of amplified PCR wells. However, the statistical aspects associated with the single-molecule nature of this protocol, and of several other molecular approaches relying on a binary (on/off) output, can significantly affect quantification accuracy, an issue that has so far been ignored. The present work proposes a design of experiments (DoE) using statistical modeling and Monte Carlo simulations to obtain a statistically optimal sampling protocol, one that minimizes the coefficient of variation of the measurement estimates. Here, the DoE prescribed a dilution of about 1.6 mutant molecules per well. Theoretical results and experimental validation revealed up to a 10-fold improvement in the information obtained per PCR well, i.e. the optimal protocol achieves the same coefficient of variation using one-tenth the number of wells required by the original assay. This optimization applies equally to any method that relies on binary detection of a small number of template molecules.
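
As an illustration of the statistical reasoning summarized in the abstract, the short Python sketch below (not taken from the paper; the 96-well plate size, trial count and tested dilutions are illustrative assumptions) simulates Poisson seeding of mutant molecules into wells, scores a well as positive if it holds at least one mutant, and applies the standard single-hit estimator lambda_hat = -ln(1 - k/n), where k of n wells amplify. Running it shows the per-plate coefficient of variation falling from the original 0.1-molecules-per-well regime to a minimum near 1.6 molecules per well.

import numpy as np

rng = np.random.default_rng(0)

def simulate_cv(lam, n_wells=96, n_trials=20000):
    """Monte Carlo estimate of the coefficient of variation (CV) of the
    single-hit estimate of mutants per well, at mean occupancy `lam`."""
    # Each well receives a Poisson(lam) number of mutant molecules;
    # a well scores positive (amplifies) if it holds at least one mutant.
    counts = rng.poisson(lam, size=(n_trials, n_wells))
    positive = (counts > 0).sum(axis=1)
    # Discard saturated plates (all wells positive), where the estimator diverges.
    positive = positive[positive < n_wells]
    frac = positive / n_wells
    lam_hat = -np.log(1.0 - frac)          # maximum-likelihood estimate of lam
    return lam_hat.std() / lam_hat.mean()  # coefficient of variation

if __name__ == "__main__":
    for lam in (0.1, 0.5, 1.0, 1.6, 2.5, 4.0):
        print(f"mean mutants/well = {lam:4.1f}  CV = {simulate_cv(lam):.3f}")

Analytically, a delta-method approximation gives CV(lambda_hat) ~ sqrt(e^lambda - 1) / (lambda * sqrt(n)), which is minimized at lambda of roughly 1.59, consistent with the ~1.6 molecules-per-well dilution quoted above; the paper's own DoE and simulation details may of course differ from this sketch.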