A Soft Computing Approach to Acute Coronary Syndrome Risk Evaluation

Research Article

Austin J Clin Cardiolog. 2016; 3(1): 1044.


Vicente H1,2, Martins MR3, Mendes T1, Vilhena J1, Grañeda JM4, Gusmão R4 and Neves J2*

¹Departamento de Química, Escola de Ciências e Tecnologia, Universidade de Évora, Portugal

²Departamento de Informática, Universidade do Minho, Portugal

³Departamento de Química, Escola de Ciências e Tecnologia, Laboratório Hercules, Universidade de Évora, Portugal

⁴Serviço de Patologia Clínica do Hospital do Espírito Santo de Évora EPE, Portugal

*Corresponding author: José Neves, Departamento de Informática, Universidade do Minho, Portugal

Received: December 15, 2015; Accepted: May 16, 2016; Published: May 18, 2016


Abstract

Acute Coronary Syndrome (ACS) affects a broad and heterogeneous population and poses a serious diagnosis and risk stratification problem. Although different tools, such as biomarkers, are available for the diagnosis and prognosis of ACS, they must first be evaluated and validated in different scenarios and patient cohorts. Besides ensuring that a diagnosis is correct, attention must also be paid to ensuring that therapies are applied correctly and safely. Indeed, this work focuses on the development of a diagnosis decision support system, in terms of its knowledge representation and reasoning mechanisms, given here as a formal framework based on Logic Programming, complemented by a problem-solving methodology anchored on Artificial Neural Networks. On the one hand, it caters for the evaluation of ACS predisposing risk and the respective Degree-of-Confidence in such an assessment. On the other hand, it may be seen as a major development in the use of Multi-Valued Logics to understand phenomena and one's behavior. Undeniably, the proposed model improves the diagnosis process, properly classifying the patients that present the pathology (sensitivity ranging from 89.7% to 90.9%) as well as the absence of ACS (specificity ranging from 88.4% to 90.2%).

Keywords: Artificial neural networks; Acute coronary syndrome; Acute myocardial infarction; Cardiovascular disease risk assessment; Knowledge representation and reasoning; Logic programming


Abbreviations

ACS: Acute Coronary Syndrome; AMI: Acute Myocardial Infarction; ANNs: Artificial Neural Networks; AUC: Area Under the Curve; BMI: Body Mass Index; CK-MB: Creatine Kinase MB; cTnI: cardiac Troponin I; DoC: Degree-of-Confidence; ECG: Electrocardiogram; ELP: Extended Logic Program; FN: False Negative; FP: False Positive; HESE: Hospital do Espírito Santo de Évora; HRP: Horseradish Peroxidase; LP: Logic Programming; LSH: Life Style Habits; NPV: Negative Predictive Value; NSTE: Non-ST-segment Elevation; NSTEMI: Non-ST-segment Elevation Myocardial Infarction; PPV: Positive Predictive Value; QoI: Quality-of-Information; RCM: Related Clinic Manifestations; ROC: Receiver Operating Characteristic; STE: ST-segment Elevation; STEMI: ST-segment Elevation Myocardial Infarction; TN: True Negative; TP: True Positive


Introduction

Acute Coronary Syndrome (ACS) stands for a complex and serious medical disorder presented by any group of individuals with clinical symptoms compatible with acute myocardial ischemia, including unstable angina, Non-ST-segment Elevation Myocardial Infarction (NSTEMI) and ST-segment Elevation Myocardial Infarction (STEMI). These high-risk coronary manifestations are associated with high mortality and morbidity, have a heterogeneous etiology, and are characterized by an imbalance between the requirement and the availability of oxygen in the myocardium [1].

In Europe, cardiovascular diseases are responsible for more than 2 million deaths per year, representing about 50% of all deaths and 23% of morbidity cases [2,3]. This may be related to factors associated with the processes of industrialization going on in many countries, which affect lifestyles and promote sedentarism; together with less healthy diets, sometimes based on processed foods, these factors may contribute to the prevalence of hypertension and to increased serum levels of cholesterol and glucose.

Despite modern treatment, the number of deaths due to Acute Myocardial Infarction (AMI) and of readmissions of patients with ACS remains high [2,4]. Patients with chest pain represent a very substantial proportion of all acute medical hospitalizations in Europe. Under a thorough medical profile indicative of ischemia, the Electrocardiogram (ECG) is a priority after hospital admission. Patients are often grouped in two categories, namely patients with acute chest pain and persistent ST-segment Elevation (STE), and patients with acute chest pain but without persistent ST-segment elevation, i.e., Non-ST-segment Elevation (NSTE) [3,4]. Besides the higher hospital mortality in STE-ACS patients, the annual incidence of NSTE-ACS is higher than that of STE-ACS [5,6]. Furthermore, long-term follow-up showed increased death rates in NSTE-ACS patients, with more co-morbidity, mainly diabetes mellitus, chronic renal diseases and anemia [2,7]. Premature mortality is increased in individuals susceptible to accelerated atherogenesis caused by the accumulation of other risk factors, namely age over 65 years, hypertension, obesity, lipid disorders and tobacco habits. Some studies reported an association of the incidence of coronary artery disease and recurrent AMI with the presence of persistently increased titers of anti-PhosphoLipid antibodies, such as anti-Cardiolipin antibodies, lupus anticoagulant and anti-β2 GlycoProtein I [8].

The diagnosis of ACS relies, besides the clinical symptoms and ECG findings, primarily on biomarker levels [9,10]. Markers of myocardial necrosis such as the cardiac troponins (I or T), the isoenzyme of Creatine Kinase MB (CK-MB), whose primary source is the myocardium, and myoglobin reflect different pathophysiological aspects of necrosis, and are the gold standard in the detection of ACS [9,11]. According to the European Society of Cardiology, the troponins (I, T) have high sensitivity to myocardial cellular damage and play a central role in the diagnosis process, establishing and stratifying risk and making it possible to distinguish between NSTEMI and unstable angina [2,12]. Nevertheless, CK-MB is useful in association with cardiac troponin in order to discard false-positive diagnoses, namely in pulmonary embolism, renal failure and inflammatory diseases, such as myocarditis or pericarditis [12,13].

Solving problems related to ACS risk requires a proactive strategy. However, what was stated above shows that ACS assessment is correlated with many variables and requires a multidisciplinary approach. Thus, it is difficult to assess ACS risk, since one must consider different conditions with intricate relations among them, where the available data may be incomplete, contradictory and/or unknown. This work focuses on the development of a hybrid problem-solving methodology, aiming at the elaboration of a clinical decision support system to predict ACS risk from a historical dataset, which combines a Logic Programming (LP) based approach to knowledge representation and reasoning [14,15] with a computational framework based on Artificial Neural Networks (ANNs) [16,17]. ANNs were selected due to their dynamic characteristics like adaptability, robustness and flexibility. Indeed, this approach goes in depth into aspects like the attributes' Quality-of-Information (QoI) [18] and Degree-of-Confidence (DoC) [19,20], which make it possible to handle unknown, incomplete or even contradictory data or knowledge.

Materials and Methods

Study protocol

The patients were screened in the Emergency Department of the Hospital do Espírito Santo de Évora (HESE), Portugal, and followed up in the Laboratory of Clinical Pathology of this healthcare unit, with the quantification of cardiac biomarkers. The study protocol was approved by the Ethics Committee of HESE. Demographic data, clinical history, complementary diagnostic tests and the final diagnosis were obtained by accessing the HESE Information System. Patients were fasting and blood was sampled without a tourniquet to prevent haemolysis, platelet aggregation and tissue factor release [21]. Plasma specimens were prepared using lithium heparin or sodium citrate as anticoagulant. All samples obtained were centrifuged at 2000 g for 15 minutes at room temperature and the plasma was separated for analysis.

Measurement of cardiac biomarkers

The quantitative measurement of cardiac Troponin I (cTnI), the Creatine Kinase isoenzyme CK-MB and Myoglobin in human serum and plasma (heparin and EDTA) was performed by chemiluminescent assay using an automated VITROS 5600® (Ortho Clinical Diagnostics). For the evaluation of cTnI, CK-MB or Myoglobin an immunometric immunoassay technique is used, which involves the simultaneous reaction of the cTnI, CK-MB or Myoglobin present in the sample with a specific mouse monoclonal biotinylated antibody and a Horseradish Peroxidase (HRP)-labeled antibody conjugate. The bound HRP conjugate is measured by a luminescent reaction. The HRP in the bound conjugate catalyzes the oxidation of a luminol derivative, producing light. The electron transfer agent (a substituted acetanilide) increases the level of light produced and prolongs its emission. The light signals are read by the system. The amount of HRP conjugate bound is directly proportional to the concentration of cTnI, CK-MB or Myoglobin present.

Knowledge representation and reasoning

In several situations it is necessary to handle estimated values, probabilistic measures, or degrees of uncertainty. In other words, decision makers are often faced with information with some degree of incompleteness, nebulousness or even contradictoriness. The notion of uncertainty is broader than error or accuracy and includes these more restricted concepts. The accuracy concept is related to the closeness of measurements or computations to their "true" value (or a value held as "truth"), while uncertainty can be considered as any aspect of the data that results in less than perfect knowledge about the phenomena being studied [22]. Furthermore, knowledge and belief are generally incomplete, contradictory, or even error sensitive, making it desirable to use formal tools to deal with the problems that arise from the use of partial, contradictory, ambiguous, imperfect, nebulous, or missing information [15,22]. Some non-classical techniques have been presented where uncertainty is handled through Probability Theory [23], Fuzzy Set Theory [24], or Similarities [25]. Other approaches to knowledge representation and reasoning have been proposed using the Logic Programming (LP) paradigm, namely in the areas of Model Theory [26,27] and Proof Theory [14,15]. The proof-theoretical approach, in terms of an extension to the LP language for knowledge representation and reasoning, is followed in the present work. An Extended Logic Program (ELP) is a finite set of clauses in the form:

{
  p ← p1, …, pn, not q1, …, not qm
  ? (p1, …, pn, not q1, …, not qm) (n, m ≥ 0)
  exceptionp1 … exceptionpj (j ≤ m, n)
} :: scoringvalue

where "?" is a domain atom denoting falsity, and the pi, qj, and p are classical ground literals, i.e., either positive atoms or atoms preceded by the classical negation sign ¬ [14]. Under this formalism, every program is associated with a set of abducibles [26,27], given here in the form of exceptions to the extensions of the predicates that make up the program. The term scoringvalue stands for the relative weight of the extension of a specific predicate with respect to the extensions of the peer ones that make up the overall program. The objective is to build a quantification process of QoI and DoC [18-20]. The Quality-of-Information (QoIi) with respect to the extension of a predicatei is given by a truth value in the interval [0, 1]. Thus, QoIi = 1 when the information is known (positive) or false (negative), and QoIi = 0 if the information is unknown. Finally, for situations where the extension of predicatei is unknown but can be taken from a set of terms, QoIi ∈ ]0, 1[ [18]. DoCi stands for an assessment of attributei with respect to the terms that make up the extension of predicatei, i.e., it denotes a measure of one's confidence that the attribute value fits into a given interval, whose boundaries are evaluated in a way that takes its domain into consideration [19].

Therefore, the universe of discourse is engendered according to the information presented in the extensions of a given set of predicates, by productions of the type:

predicatei − ⋃1≤j≤m clausej(x1, …, xn) :: QoIi :: DoCi          (1)

where ⋃ and m stand, respectively, for set union and the cardinality of the extension of predicatei.
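As a minimal illustration, the QoI quantification described above can be sketched in Python. The function name and the 1/Card instantiation for the set-of-terms case are assumptions for the example (the text only requires a value strictly between 0 and 1 in that case):

```python
def qoi(value=None, candidate_set=None):
    """Quality-of-Information of one attribute of a predicate's extension.

    value: the attribute value when it is known, else None.
    candidate_set: the finite set of admissible terms, when the exact
    value is unknown but restricted to that set.
    """
    if value is not None:
        return 1.0                  # known (positive) or false (negative)
    if candidate_set:
        # Unknown but taken from a set of terms: QoI strictly between 0 and 1.
        # 1/Card is an assumed instantiation, not fixed by the text.
        return 1.0 / len(candidate_set)
    return 0.0                      # totally unknown information

print(qoi(value=37.2))                                           # known value
print(qoi(candidate_set={"low", "moderate", "high", "severe"}))  # set of terms
print(qoi())                                                     # unknown
```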

Artificial neural networks

ANNs are computational tools that attempt to simulate the architecture and internal operational features of the human brain and nervous system. ANNs can be defined as a connected structure of basic computational units, called artificial neurons or nodes, with learning capabilities. The multilayer perceptron, in which neurons are grouped in layers and only forward connections exist, is the most popular ANN architecture and is often used for prediction as well as for classification. Indeed, it provides a powerful base learner, with advantages such as nonlinear mapping and noise tolerance, increasingly used in Data Mining due to its good behavior in terms of predictive knowledge [17].

In order to ensure the statistical significance of the attained results, 30 (thirty) runs were performed for each test. In each simulation, the available data was randomly divided into two mutually exclusive partitions: the training set, with 67% of the available data, used during the modeling phase, and the test set, with the remaining 33%, used after training to evaluate the model's performance and to validate it. The back-propagation algorithm was used in the learning process of the ANN. The identity function was used as the output function in the pre-processing layer; in the other layers the sigmoid function was used.
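The training protocol above (random 67%/33% split, back-propagation, sigmoid units) can be sketched with a toy one-hidden-layer perceptron. The synthetic data, layer sizes, learning rate and epoch count below are illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for the patient records
# (assumption: the real attributes are not reproduced here).
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(Xtr, ytr, hidden=6, epochs=2000, lr=0.5):
    """One-hidden-layer perceptron trained with back-propagation:
    identity on the input layer, sigmoid in the other layers."""
    W1 = rng.normal(scale=0.5, size=(Xtr.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        h = sigmoid(Xtr @ W1)            # hidden-layer activations
        out = sigmoid(h @ W2)[:, 0]      # network output
        # Back-propagate the squared error through both sigmoid layers
        d_out = ((out - ytr) * out * (1 - out))[:, None]
        W1 -= lr * Xtr.T @ ((d_out @ W2.T) * h * (1 - h)) / len(ytr)
        W2 -= lr * h.T @ d_out / len(ytr)
    return W1, W2

def predict(W1, W2, Xte):
    return (sigmoid(sigmoid(Xte @ W1) @ W2)[:, 0] > 0.5).astype(float)

# 67% / 33% random split into training and test sets, as in the protocol
idx = rng.permutation(len(y))
cut = int(0.67 * len(y))
tr, te = idx[:cut], idx[cut:]
W1, W2 = train_mlp(X[tr], y[tr])
acc = np.mean(predict(W1, W2, X[te]) == y[te])
print(f"test accuracy = {acc:.2f}")
```

In the study this loop would be repeated 30 times with fresh random splits, averaging the performance measures across runs.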

Assessment of cardiac biomarkers and models

The performance assessment of each cardiac biomarker and model is carried out based on the coincidence matrix, created by matching the predicted and target values (Table 1). Based on the coincidence matrix it is possible to compute the sensitivity, specificity, Positive Predictive Value (PPV) and Negative Predictive Value (NPV) of the classifier:
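In terms of the True Positive (TP), False Negative (FN), True Negative (TN) and False Positive (FP) counts taken from the coincidence matrix, these four measures can be sketched as follows (the function name is an assumption for the example):

```python
def classifier_metrics(tp, fn, tn, fp):
    """Standard performance measures from the coincidence (confusion)
    matrix counts: True/False Positives and True/False Negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of diseased patients detected
        "specificity": tn / (tn + fp),  # fraction of healthy patients cleared
        "PPV": tp / (tp + fp),          # reliability of a positive result
        "NPV": tn / (tn + fn),          # reliability of a negative result
    }

# Illustrative counts only, not the study's data
print(classifier_metrics(tp=89, fn=10, tn=90, fp=11))
```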