Published in

Wiley, Proteomics, 23(7-8), 2023

DOI: 10.1002/pmic.202200041

A transformer architecture for retention time prediction in liquid chromatography mass spectrometry‐based proteomics

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving restricted
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

Accurate retention time (RT) prediction is important for spectral library-based analysis in data-independent acquisition mass spectrometry-based proteomics. Deep learning approaches have demonstrated superior performance over traditional machine learning methods for this purpose. The transformer architecture is a recent development in deep learning that delivers state-of-the-art performance in many fields, such as natural language processing, computer vision, and biology. We assess the performance of the transformer architecture for RT prediction using datasets from five deep learning models: Prosit, DeepDIA, AutoRT, DeepPhospho, and AlphaPeptDeep. The experimental results on holdout datasets and independent datasets exhibit state-of-the-art performance of the transformer architecture. The software and evaluation datasets are publicly available for future development in the field.
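
The abstract describes applying a transformer to peptide sequences to regress a scalar retention time. Since the paper itself is unavailable here, the following PyTorch sketch only illustrates that general idea, not the authors' actual model: the RTTransformer name, all hyperparameters, the learned positional encoding, and the mean-pooling regression head are assumptions for illustration.

import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAD_IDX = 0
VOCAB = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 reserved for padding

class RTTransformer(nn.Module):
    """Minimal transformer encoder regressing retention time from a peptide.
    Illustrative only; hyperparameters are not taken from the paper."""

    def __init__(self, d_model=128, nhead=8, num_layers=4, max_len=64):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB) + 1, d_model, padding_idx=PAD_IDX)
        self.pos = nn.Embedding(max_len, d_model)  # learned positional encoding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=4 * d_model,
            batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)  # one scalar RT per peptide

    def forward(self, tokens):  # tokens: (batch, seq_len) int64, 0 = padding
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        pad_mask = tokens == PAD_IDX
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        # Mean-pool over non-padding positions, then regress RT.
        keep = (~pad_mask).unsqueeze(-1).float()
        pooled = (x * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        return self.head(pooled).squeeze(-1)

def encode(peptide, max_len=64):
    """Map an amino-acid string to a fixed-length padded token tensor."""
    ids = [VOCAB[aa] for aa in peptide][:max_len]
    return torch.tensor(ids + [PAD_IDX] * (max_len - len(ids)))

model = RTTransformer()
batch = torch.stack([encode("ACDEFGHIK"), encode("LMNPQRSTVWY")])
print(model(batch).shape)  # torch.Size([2]) -- one predicted RT per peptide

In practice such a model would be trained with a regression loss (e.g., mean absolute error against observed RTs) and would also need to represent modified residues such as phosphorylation, which the datasets named in the abstract (e.g., DeepPhospho) include; this sketch omits both for brevity.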