Test Research versus Diagnostic Research: Clinical Application and Interpretation

Short Communication

Austin J Public Health Epidemiol. 2022; 9(3): 1128.


Sabour S1,2*

¹Safety Promotions and Injury Prevention Research Centre, Shahid Beheshti University of Medical Sciences, Iran

²Department of Clinical Epidemiology, School of Public Health and Safety, Shahid Beheshti University of Medical Sciences, Iran

*Corresponding author: Siamak Sabour, Department of Clinical Epidemiology, School of Health and Safety, Shahid Beheshti University of Medical Sciences, Chamran Highway, Velenjak, Daneshjoo Blvd, Tehran, I.R. Iran

Received: August 17, 2022; Accepted: September 15, 2022; Published: September 22, 2022

Short Communication

Many published diagnostic studies are better characterized as test research than as diagnostic research [1]. Often these studies include a group of patients with the target disease and a group of patients without the disease, in both of whom the results of the index test are measured. There is a difference between test research and diagnostic research. The objective of test research is to assess whether a single diagnostic test (the index test) can adequately indicate the presence or absence of a particular disease; the aim of diagnostic research, by contrast, is to determine whether the index test appreciably adds to the diagnostic information that is readily available in clinical care [2,3]. Thus, the authors must include all tests that are used to detect the disease and then estimate the added value of the index test compared with the other tests. Notwithstanding its limitations, test research, which focuses on estimating the accuracy of a single test, may offer relevant information. Most notably, it is helpful in the developmental phase of a new diagnostic test, when the accuracy of the test is as yet unknown. Furthermore, test research can be valuable in the context of screening for a particular disorder in asymptomatic individuals, where no test results other than the single screening test are considered [2].

Typically, the results of the index test are categorized as positive or negative, and the study results are summarized in a 2×2 table. The table allows calculation of the four classic measures used to estimate diagnostic accuracy in test research: Positive Predictive Value (PPV), Negative Predictive Value (NPV), sensitivity, and specificity. Sensitivity and specificity are not clinically useful in diagnostic studies, and PPV and NPV are influenced by the prevalence of the outcome [2]. In addition to these indices, other accuracy measures, including the Likelihood Ratio (LR) of a positive test (the probability of a positive test in the diseased divided by the probability of a positive test in the non-diseased) and the likelihood ratio of a negative test (the probability of a negative test in the diseased divided by the probability of a negative test in the non-diseased), should be calculated [2-4]. If the index test results are not dichotomous but measured on a continuous scale, Receiver Operating Characteristic (ROC) curves can be produced from the sensitivity and specificity at the different cut-off values of the diagnostic test under evaluation.
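As a minimal sketch, the accuracy measures described above can be computed directly from the four cells of a 2×2 table, and a cut-off sweep over a continuous test illustrates how ROC points arise. All counts and test values below are hypothetical and do not come from any study cited in this article:

```python
# Hypothetical 2x2 table for an index test (illustration only)
TP, FP, FN, TN = 90, 30, 10, 170

sensitivity = TP / (TP + FN)              # P(test+ | diseased)
specificity = TN / (TN + FP)              # P(test- | non-diseased)
ppv = TP / (TP + FP)                      # P(diseased | test+); prevalence-dependent
npv = TN / (TN + FN)                      # P(non-diseased | test-); prevalence-dependent
lr_pos = sensitivity / (1 - specificity)  # likelihood ratio of a positive test
lr_neg = (1 - sensitivity) / specificity  # likelihood ratio of a negative test

print(f"Sensitivity {sensitivity:.2f}, Specificity {specificity:.2f}")
print(f"PPV {ppv:.2f}, NPV {npv:.2f}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}")

# For a continuous index test, each cut-off yields one
# (sensitivity, 1 - specificity) point of the ROC curve:
diseased = [2.1, 3.4, 3.9, 4.8, 5.5]
non_diseased = [1.0, 1.8, 2.5, 2.9, 3.6]
for cut in (2.0, 3.0, 4.0):
    sens = sum(x >= cut for x in diseased) / len(diseased)
    spec = sum(x < cut for x in non_diseased) / len(non_diseased)
    print(f"cut-off {cut}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Note that in this example LR+ = 0.90 / 0.15 = 6.0 and LR- = 0.10 / 0.85 ≈ 0.12; unlike PPV and NPV, the likelihood ratios do not depend on the prevalence of disease in the study sample.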

Test research often deviates from the main principle of clinically relevant diagnostic research in that clinical practice is not followed, first and foremost because the diagnostic process by definition involves multiple tests and a natural hierarchy of diagnostic testing. Moreover, test research often does not include representatives of the relevant patient domain, that is, patients presenting with symptoms and signs suggestive of the target disease. Rather, a group of patients with evident disease is selected and compared with a group of non-diseased patients, sometimes even healthy individuals who are obviously not suspected of the disease under study. Such selection of study subjects, however, will lead to biased estimates of the test's performance [2-4].

Diagnostic knowledge is not provided by answering the question, "How good is this test?" Diagnostic knowledge is the information needed to answer the question, "What is the probability of the presence or absence of a specific disease given these test results?" [2]. Knowledge produced by diagnostic research needs to be incorporated into a knowledge base that guides daily medical care. No doubt, however, both the validity and the reliability of the study findings play a crucial role in their potential for implementation [2,5,6]. Validity refers to the lack of bias (i.e., lack of systematic error) in the results. Study findings are valid when the quantification of the determinant(s)-outcome relationship is true. The essence of scientific research, in contrast to other forms of systematic data gathering, is that its results can be generalized [2,7,10]. The type of knowledge provided by clinical epidemiologic research is inferential, probabilistic knowledge. Scientific knowledge contrasts with factual knowledge in that it is not time- and place-specific: it is true for any patient or group of patients as long as the findings on which the knowledge is based permit scientific generalization to those patients [8-10].

Conflicts of Interest

No potential conflict of interest relevant to this article was reported.

References

  1. Tian A, Lin R, Yu J, Zhang F, Zheng Q, Yuan X, et al. The differential diagnostic value of dual-phase 18F-DCFPyL PET/CT in prostate carcinoma. Prostate Cancer Prostatic Dis. 2022: 1-8.
  2. Grobbee DE, Hoes AW. Clinical epidemiology: principles, methods, and applications for clinical research: Jones & Bartlett Publishers; 2014.
  3. Sabour S. A Common Mistake in Assessing the Diagnostic Value of a Test: Failure to Account for Statistical and Methodologic Issues. J Nucl Med. 2017; 58: 1182-1183.
  4. Whiting P, Rutjes AW, Reitsma JB, Glas AS, Bossuyt PM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med. 2004; 140: 189-202.
  5. Sabour S, Ghassemi F. Accuracy, validity, and reliability of the infrared optical head tracker (IOHT). Invest Ophthalmol Vis Sci. 2012; 53: 4776.
  6. Spering C, Brauns SD, Lefering R, Bouillon B, Dobroniak CC, Füzesi L, et al. Diagnostic value of chest radiography in the early management of severely injured patients with mediastinal vascular injury. Eur J Trauma Emerg Surg. 2022: 1-9.
  7. Sabour S. Validity and reliability of the 13C-methionine breath test for the detection of moderate hyperhomocysteinemia in Mexican adults; statistical issues in validity and reliability analysis. Clin Chem Lab Med. 2014; 52: e295-6.
  8. Sabour S. Reliability of a new modified tear breakup time method: methodological and statistical issues. Graefes Arch Clin Exp Ophthalmol. 2016; 254: 595-6.
  9. Sabour S. Reproducibility of dynamic Scheimpflug-based pneumotonometer and its correlation with a dynamic bidirectional pneumotonometry device: methodological issues. Cornea. 2015; 34: e14-5.
  10. Sabour S. Reproducibility of semi-automatic coronary plaque quantification in coronary CT angiography with sub-mSv radiation dose; common mistakes. J Cardiovasc Comput Tomogr. 2016; 10: e21-2.


Citation: Sabour S. Test Research versus Diagnostic Research: Clinical Application and Interpretation. Austin J Public Health Epidemiol. 2022; 9(3): 1128.
