PgmNr 353: Validation of scoring metrics to guide the classification of constitutional copy number variants.
Authors:
E. Riggs 1; E. Andersen 2; A. Cherry 3; S. Kantarci 4; H. Kearney 5; A. Patel 6; G. Raca 7; D. Ritter 8; S. South 9; E. Thorland 5; D. Pineda-Alvarez 10; S. Aradhya 3,10; C. Martin 1
1) Autism & Developmental Medicine Institute, Geisinger, Lewisburg, PA; 2) ARUP Laboratories, University of Utah, Salt Lake City, UT; 3) Department of Pathology, Stanford Health Care, Stanford, CA; 4) Cytogenetics and Genomics, Quest Diagnostics Nichols Institute, San Juan Capistrano, CA; 5) Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN; 6) Lineagen, Salt Lake City, UT; 7) Children's Hospital of Los Angeles, Los Angeles, CA; 8) Texas Children's Cancer Center, Baylor College of Medicine, Houston, TX; 9) AncestryDNA, Lehi, UT; 10) Invitae, San Francisco, CA
The American College of Medical Genetics and Genomics (ACMG) and the NIH-funded Clinical Genome Resource (ClinGen) are updating the technical standards for the classification and reporting of constitutional copy number variants (CNVs). This update will include points-based scoring metrics designed to guide users through the process of evaluating evidence and assigning classifications (e.g., pathogenic, uncertain significance) for both copy number losses and gains. These metrics were developed through an iterative process drawing on expert opinion about the sources and relative strengths of various types of evidence. Through discussion and case examples, the committee assigned relative weights to each evidence type, including: the presence of known dosage-sensitive genes; overlap with CNVs reported in clinically affected individuals and in individuals from the general population; case-control studies; segregation data; de novo occurrences; and the number of protein-coding genes included in the CNV.
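To make the points-based idea concrete, the sketch below sums illustrative point values for the evidence categories named above and maps the total to a classification. All evidence labels, point values, and thresholds here are hypothetical assumptions for illustration only; they are not the committee's actual weights.

```python
from typing import Dict

# Illustrative point values per evidence type (assumed, not the official weights).
EVIDENCE_POINTS: Dict[str, float] = {
    "known_dosage_sensitive_gene": 1.0,
    "overlap_affected_individuals": 0.45,
    "overlap_general_population": -1.0,
    "case_control_support": 0.45,
    "segregation_in_family": 0.15,
    "de_novo_occurrence": 0.45,
    "many_protein_coding_genes": 0.25,
}

def score_cnv(observed_evidence: Dict[str, int]) -> float:
    """Sum points over observed evidence: count of each observation times its weight."""
    return sum(EVIDENCE_POINTS.get(ev, 0.0) * n for ev, n in observed_evidence.items())

def classify(total: float) -> str:
    """Map a total score onto assumed (hypothetical) classification thresholds."""
    if total >= 0.99:
        return "pathogenic"
    if total >= 0.90:
        return "likely pathogenic"
    if total > -0.90:
        return "uncertain significance"
    if total > -0.99:
        return "likely benign"
    return "benign"

# Example: a deletion containing a known dosage-sensitive gene.
print(classify(score_cnv({"known_dosage_sensitive_gene": 1})))  # pathogenic
```

A fixed, shared point scale like this is what allows different reviewers evaluating the same evidence to arrive at the same classification.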
The scoring metrics were refined through multiple rounds of internal and external testing. A total of 114 CNVs (58 deletions, 56 duplications), previously observed and reported by clinical laboratories, were evaluated by committee members and external reviewers using the scoring metrics. A subset of 47 of these CNVs (26 deletions, 21 duplications) was also evaluated using current classification methods as a baseline for comparison. The testing process aimed to answer three questions: 1) how often reviewers match the original clinical laboratory classification; 2) how often reviewers evaluating the same CNV reach the same classification (i.e., concordance); and 3) how appropriate reviewers considered the classifications assigned using the scoring metrics. Overall, the rate at which reviewers arrived at classifications concordant with the original clinical laboratory increased from 70.2% at baseline to 79.1% using the scoring metrics, and conflicting classifications that could impact medical management decreased from 39.2% to 23.4%. Classifications calculated using the metrics were judged appropriate by reviewers 89.3% of the time. With increased education, familiarity, and experience, we expect steady improvements in inter-laboratory concordance. We will continue to study trends in inter-laboratory concordance using these metrics, as well as usability and user experience, and plan to use this information to guide future improvements to the scoring metrics.
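The concordance rates reported above reduce to a simple fraction: the share of reviewer calls that match the original laboratory classification. The sketch below shows that computation on made-up example pairs, not the study's actual review records.

```python
from typing import List, Tuple

def percent_concordant(pairs: List[Tuple[str, str]]) -> float:
    """Percent of (reviewer, laboratory) classification pairs that agree."""
    matches = sum(1 for reviewer, lab in pairs if reviewer == lab)
    return 100.0 * matches / len(pairs)

# Hypothetical review records: (reviewer classification, laboratory classification).
reviews = [
    ("pathogenic", "pathogenic"),
    ("uncertain significance", "pathogenic"),
    ("benign", "benign"),
    ("likely benign", "benign"),
]
print(round(percent_concordant(reviews), 1))  # 50.0
```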