COVID-19 and impact on peer review
Because of the significant disruption caused by the COVID-19 pandemic, we are aware that many researchers will have difficulty meeting the timelines that normally apply to our peer review process. Please do let us know if you need additional time. Our systems will continue to send reminders based on the original timelines, but we intend to be highly flexible during this period.
Featured Collection: Validation and transparency for AI-based diagnosis and prognosis in healthcare
AI has led to a surge in the development of diagnostic and prognostic models in healthcare. Rigorous validation is essential to ensure that AI-based prognosis and diagnosis can be used safely and accurately in clinical practice. Transparency is crucial to gain trust in these algorithms and facilitate accountability. To address this, we invite contributions to our Collection focused on the validation and transparency of AI-based diagnosis and prognosis.
Featured Collection: Methodology and reporting of diagnostic and prognostic research
To celebrate the launch of Diagnostic and Prognostic Research in 2017, the Editors-in-Chief, Gary Collins, Nancy R Cook and Karel GM Moons, created a collection of articles addressing different aspects of diagnostic and prognostic research methodology and reporting. Catch up on the articles published so far; more will be added in due course.
Articles
- A scoping review of machine learning models to predict risk of falls in elders, without using sensor data
- Can we develop real-world prognostic models using observational healthcare data? Large-scale experiment to investigate model sensitivity to database and phenotypes
- Clinical prognostic models for sarcomas: a systematic review and critical appraisal of development and validation studies
- Guide to evaluating performance of prediction models for recurrent clinical events
- A simple, step-by-step guide to interpreting decision curve analysis
- State of the art in selection of variables and functional forms in multivariable analysis—outstanding issues
- Elaborating on the assessment of the risk of bias in prognostic studies in pain rehabilitation using QUIPS—aspects of interrater agreement
- Evaluating the impact of prediction models: lessons learned, challenges, and recommendations
- Methodological standards for the development and evaluation of clinical prediction rules: a review of the literature
Thank you to our peer reviewers
The editors and staff of Diagnostic and Prognostic Research would like to warmly thank our peer reviewers whose comments have helped to shape the journal.
Aims and scope
Diagnostic and Prognostic Research encompasses studies on the evaluation of medical tests, markers, prediction models, decision tools and apps. The journal provides a platform for disseminating empirical primary studies and systematic reviews (including meta-analyses), as well as methodology articles, protocols and commentaries addressing diagnostic and prognostic studies. The journal ensures that the results of all well-conducted diagnostic and prognostic research are published, regardless of their outcome.
Featured supplement: Methods for Evaluation of medical prediction Models, Tests And Biomarkers (MEMTAB) 2018 Symposium
The conference abstracts for this international symposium, which aimed to tackle the methodological and practical complexities facing the diagnostic, prognostic, screening and monitoring fields today, are published in Diagnostic and Prognostic Research.
About the Editors
Gary Collins, Editor-in-Chief
Gary Collins is Professor of Medical Statistics at the Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, and a fellow at the UK EQUATOR Centre. Professor Collins holds a PhD in Mathematical Statistics from the University of Exeter (2000). He has long-standing research interests in studies of prognosis, particularly in aspects of model validation. Alongside Professor Karel Moons, Professor Collins led the TRIPOD initiative to develop reporting guidelines for prediction model studies and the CHARMS checklist for systematic reviews of prediction model studies, and he is part of the PROBAST working group developing a risk of bias tool for prediction model studies.
Nancy R Cook, Editor-in-Chief
Nancy Cook is a biostatistician and Professor in the Department of Medicine at Brigham and Women’s Hospital and Harvard Medical School, and Professor of Epidemiology at the Harvard T.H. Chan School of Public Health. Dr Cook received her ScD in Biostatistics from the Harvard School of Public Health and is involved in the design, conduct, and analysis of several large randomized trials as well as observational studies. Dr Cook is also interested in developing risk prediction scores using clinical and genetic biomarkers. She helped develop the Reynolds Risk Score for cardiovascular disease as well as reclassification methods for comparing and evaluating risk prediction models.
Karel G M Moons, Editor-in-Chief
Karel Moons is Professor of Clinical Epidemiology at the Julius Center for Health Sciences and Primary Care. Professor Moons is Director of Research on the management team of the Julius Center and heads the research programme ‘Methodology’. Since 2005 he has held an Adjunct Professorship at Vanderbilt University, Nashville, USA. His experience covers the full range of clinical study design and data analysis, ranging from diagnostic test evaluation and prognostic (bio)marker studies to therapeutic trials, etiologic studies and meta-epidemiological studies. His main focus is the methodology of diagnostic and prognostic research, both primary and meta-analytical. His major expertise is in testing and introducing innovations in the design and analysis of studies for the development, validation and implementation of diagnostic and prognostic prediction models.
Annual Journal Metrics
Speed 2024
- Submission to first editorial decision (median days): 15
- Submission to acceptance (median days): 198

Usage 2024
- Downloads: 164,938
- Altmetric mentions: 52