Empirical analyses and simulations showed that different statistical and machine learning methods had different performance for predicting blood pressure – Scientific Reports

We conducted a set of empirical analyses to compare the performance of different statistical and machine learning approaches in two different disease groups: hospitalized patients with acute myocardial infarction (AMI) and hospitalized patients with congestive heart failure (CHF). In each patient group we examined the ability of the different methods to predict a patient’s systolic blood pressure at hospital discharge. Model performance was evaluated using independent validation samples.

Data sources

We used data from a study that collected data on patients hospitalized with acute myocardial infarction (AMI) or congestive heart failure (CHF) over two different time periods5. We considered each disease (AMI versus CHF) separately. For patients with AMI, the derivation sample consisted of 8,145 patients who were discharged alive from hospital between April 1, 1999 and March 31, 2001, while the validation sample consisted of 4,444 patients who were discharged alive from hospital between April 1, 2004 and March 31, 2005. For patients with CHF, the derivation sample consisted of 7,156 patients who were discharged alive from hospital between April 1, 1999 and March 31, 2001, while the validation sample consisted of 6,818 patients who were discharged alive from hospital between April 1, 2004 and March 31, 2005. Thus, the derivation and validation samples came from different time periods. Data on patient demographics, vital signs, physical examination at presentation, medical history, and laboratory test results were collected for these samples. For the current study, the outcome was a continuous variable: the patient’s systolic blood pressure at the time of hospital discharge.

We considered 33 candidate predictor variables in the AMI sample and 28 candidate predictor variables in the CHF sample (see Table 1 (AMI sample) and Table 2 (CHF sample) for a list of these variables). These variables comprise demographic characteristics, presentation characteristics, vital signs at hospital presentation, conventional cardiac risk factors, comorbid conditions, laboratory tests, electrocardiogram results, and signs and symptoms6,7,8. The baseline characteristics of the two derivation samples and the two validation samples are reported in Table 1 (AMI sample) and Table 2 (CHF sample). Differences in covariates between the derivation and validation samples were tested using a t-test for continuous covariates and a chi-square test for binary variables.

Table 1 Baseline characteristics of patients in the AMI derivation and validation samples.
Table 2 Baseline characteristics of patients in the CHF derivation and validation samples.
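
A minimal sketch of the covariate comparisons described above, assuming derivation and validation data frames named deriv and valid, a continuous covariate age, and a binary covariate diabetes (all of these names are illustrative, not taken from the original analysis):

```r
# deriv / valid: derivation and validation data frames (illustrative names)

# Continuous covariate (e.g., age): two-sample t-test
t.test(deriv$age, valid$age)

# Binary covariate (e.g., diabetes coded 0/1): chi-square test on the 2 x 2 table
sample_label <- c(rep("derivation", nrow(deriv)), rep("validation", nrow(valid)))
diabetes_all <- c(deriv$diabetes, valid$diabetes)
chisq.test(table(sample_label, diabetes_all))
```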

Use of data in this project is authorized under Section 45 of the Ontario Personal Health Information Protection Act (PHIPA) and does not require review by the Research Ethics Board. All research was conducted in accordance with relevant guidelines and regulations.

Methods for predicting systolic blood pressure at hospital discharge

We examined six different methods for predicting systolic blood pressure at the time of hospital discharge: conventional linear regression estimated using ordinary least squares (OLS), random forests for regression, boosted trees, artificial neural networks, ridge regression, and the lasso. Readers are referred elsewhere for details on these methods9,10,11,12,13,14. The empirical analyses described in this section parallel analyses performed in a previous study7 that focused on predicting the probability of a binary outcome. All methods considered all of the variables listed in Tables 1 and 2 as candidate predictor variables. When using OLS regression to predict discharge blood pressure, the regression model included all variables as main effects. The relationship between discharge blood pressure and each continuous variable was modeled using restricted cubic splines15. These six learning methods were chosen for two reasons. First, five of the six (all except neural networks) were included in a recent study comparing the relative performance of different learning methods for predicting binary outcomes7. Second, many of these approaches have been used in the cardiology literature to predict patient outcomes4,16. Our study can thus be considered a neutral comparison study, in which we compare existing approaches rather than promote a new method17.
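
As a minimal sketch of this OLS specification using the rms package (described further below), assuming a derivation data frame deriv with the outcome sbp_discharge, continuous predictors such as age and heart_rate, and binary predictors such as sex and diabetes (all of these names are illustrative):

```r
library(rms)

# Illustrative OLS model: main effects for all predictors, with restricted cubic
# splines (here 3 knots) for the continuous predictors.
# deriv, sbp_discharge, age, heart_rate, sex, diabetes are assumed/illustrative names.
dd <- datadist(deriv); options(datadist = "dd")

ols_fit <- ols(sbp_discharge ~ rcs(age, 3) + rcs(heart_rate, 3) + sex + diabetes,
               data = deriv)
```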

For each disease condition, hyperparameter tuning was performed in the derivation sample. For both ridge and lasso regression, the tuning parameter λ was estimated using the cv.glmnet function from the glmnet package. This function uses tenfold cross-validation in the derivation sample to determine the optimal value of λ. Hyperparameters were tuned for boosted trees, random forests, neural networks, and OLS regression using a user-written grid search18. The grid had one dimension for OLS regression (number of knots for the restricted cubic splines) and two dimensions for neural networks (number of neurons in the single hidden layer and the weight decay parameter), boosted trees (interaction depth and shrinkage or learning rate), and random forests (number of candidate variables sampled at each split and minimum terminal node size). For a given point on this grid (e.g., a given number of sampled candidate variables and minimum terminal node size for random forests), the derivation sample was randomly divided into ten groups of approximately equal size. The selected model, with its hyperparameters set to those of the grid point, was fit to nine of the groups. The fitted model was then applied to the remaining group, and the predicted discharge blood pressure was obtained for each subject in that group. The accuracy of the predictions was measured using R2. This cross-validation was repeated ten times, with each of the ten groups used once to validate the predictions. The R2 values were then averaged across all ten iterations of this procedure. The grid point that yielded the highest average R2 was selected for all subsequent applications of that method. For the neural networks, we allowed a single hidden layer, as it has been suggested that this is sufficient for many practical applications19 (page 158).
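
A minimal sketch of this grid search, using random forests as the example learner and assuming a derivation data frame deriv with outcome sbp_discharge (the grid values and variable names below are illustrative, not those of the original analysis):

```r
library(randomForest)

# Illustrative two-dimensional grid for random forests
grid <- expand.grid(mtry = c(4, 6, 8), nodesize = c(10, 20, 30))

set.seed(1)
folds <- sample(rep(1:10, length.out = nrow(deriv)))  # ten roughly equal groups

mean_r2 <- sapply(seq_len(nrow(grid)), function(g) {
  r2 <- numeric(10)
  for (k in 1:10) {
    fit <- randomForest(sbp_discharge ~ ., data = deriv[folds != k, ],
                        ntree = 500,
                        mtry = grid$mtry[g],
                        nodesize = grid$nodesize[g])
    pred  <- predict(fit, newdata = deriv[folds == k, ])
    r2[k] <- cor(deriv$sbp_discharge[folds == k], pred)^2  # R-squared in held-out fold
  }
  mean(r2)  # average R-squared over the ten folds
})

best <- grid[which.max(mean_r2), ]  # grid point with the highest average R-squared
```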

In the AMI sample, the grid searches yielded the following hyperparameter values: boosted trees (interaction depth: 4; shrinkage/learning rate: 0.065), random forests (number of sampled candidate variables: 6; minimum terminal node size: 20), OLS regression (number of knots: 3), neural networks (5 neurons in the hidden layer, from a grid search that considered 2 to 15 neurons in increments of 1; weight decay parameter: 0.05), lasso (λ = 0.08596), ridge regression (λ = 0.56553).

In the CHF sample, the grid searches yielded the following hyperparameter values: boosted trees (interaction depth: 4; shrinkage/learning rate: 0.065), random forests (number of sampled candidate variables: 8; minimum terminal node size: 20), OLS regression (number of knots: 5), neural networks (6 neurons in the hidden layer, from a grid search that considered 2 to 15 neurons in increments of 1; weight decay parameter: 0), lasso (λ = 0.03323), ridge regression (λ = 0.96881).

Using the hyperparameters obtained above, each model was fit to the derivation sample (patients admitted to hospital between 1999 and 2001), and predictions were then obtained for each subject in the validation sample (patients admitted to hospital between 2004 and 2005). The accuracy of the predictions was assessed using three metrics: R2, mean squared error (MSE), and mean absolute error (MAE)20. R2 was calculated as the square of the Pearson correlation coefficient between the observed and predicted discharge blood pressure, while MSE and MAE were estimated as \(\frac{1}{N}\sum_{i=1}^{N}{(Y_{i}-\hat{Y}_{i})}^{2}\) and \(\frac{1}{N}\sum_{i=1}^{N}|Y_{i}-\hat{Y}_{i}|\), respectively, where \(Y_{i}\) denotes the observed blood pressure and \(\hat{Y}_{i}\) denotes the predicted blood pressure for subject \(i\).
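
A minimal sketch of these three validation metrics, assuming vectors y_obs (observed discharge systolic blood pressure in the validation sample) and y_pred (a model’s predictions); both names are illustrative:

```r
# y_obs: observed discharge systolic blood pressure; y_pred: model predictions
r2  <- cor(y_obs, y_pred)^2        # squared Pearson correlation
mse <- mean((y_obs - y_pred)^2)    # mean squared error
mae <- mean(abs(y_obs - y_pred))   # mean absolute error
```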

For all methods, we used implementations available in the R statistical software (R version 3.6.1, R Foundation for Statistical Computing, Vienna, Austria). For random forests, we used the randomForest function from the randomForest package (version 4.6-14). The number of trees (500) was the default in this implementation. For boosted trees, we used the gbm function from the gbm package (version 2.5.1). The number of trees (100) was the default in this implementation. We used the ols and rcs functions from the rms package (version 5.1-3.1) to estimate OLS regression models incorporating restricted cubic splines. Feed-forward (or multilayer perceptron) neural networks with a single hidden layer were fit using the nnet package (version 7.3-12) with a linear activation function. Ridge and lasso regression were implemented using the cv.glmnet function (for estimating the λ parameter via tenfold cross-validation) and the glmnet function from the glmnet package (version 2.0-18).
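
A minimal sketch of fitting the remaining learners with these packages and obtaining validation-sample predictions, assuming derivation and validation data frames deriv and valid with outcome sbp_discharge and using the AMI hyperparameters reported above (the data frame and variable names are illustrative; the OLS/spline model is sketched earlier):

```r
library(randomForest); library(gbm); library(nnet); library(glmnet)

# deriv / valid: derivation and validation data frames (illustrative names);
# sbp_discharge is the continuous outcome, all other columns are predictors.

# Random forests: mtry = 6, nodesize = 20 (AMI tuning); 500 trees by default
rf_fit <- randomForest(sbp_discharge ~ ., data = deriv, mtry = 6, nodesize = 20)

# Boosted trees: interaction depth 4, shrinkage 0.065; 100 trees by default
gbm_fit <- gbm(sbp_discharge ~ ., data = deriv, distribution = "gaussian",
               interaction.depth = 4, shrinkage = 0.065)

# Neural network: single hidden layer with 5 neurons, weight decay 0.05,
# linear output activation for a continuous outcome
nn_fit <- nnet(sbp_discharge ~ ., data = deriv, size = 5, decay = 0.05,
               linout = TRUE, maxit = 1000)

# Lasso (alpha = 1) and ridge (alpha = 0): lambda chosen by tenfold cross-validation
x_deriv  <- model.matrix(sbp_discharge ~ ., data = deriv)[, -1]
lasso_cv <- cv.glmnet(x_deriv, deriv$sbp_discharge, alpha = 1)
ridge_cv <- cv.glmnet(x_deriv, deriv$sbp_discharge, alpha = 0)

# Predictions in the validation sample
x_valid    <- model.matrix(sbp_discharge ~ ., data = valid)[, -1]
pred_rf    <- predict(rf_fit, newdata = valid)
pred_gbm   <- predict(gbm_fit, newdata = valid, n.trees = 100)
pred_nn    <- as.vector(predict(nn_fit, newdata = valid))
pred_lasso <- as.vector(predict(lasso_cv, newx = x_valid, s = "lambda.min"))
pred_ridge <- as.vector(predict(ridge_cv, newx = x_valid, s = "lambda.min"))
```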

Results of the empirical analyses

Figure 1 summarizes the performance of the six different methods for predicting discharge blood pressure in the validation sample (patients hospitalized between 2004 and 2005). In the AMI sample, boosted trees produced the predictions with the highest R2 (0.17); however, the differences among five of the six methods were minimal (range: 0.163 to 0.17). Note that R2 is reported as the proportion of the variance in discharge blood pressure explained by the model. OLS regression produced predictions with the lowest MSE, while both OLS regression and boosted trees produced predictions with the lowest MAE. As with R2, MSE and MAE differed little across five of the six methods. The performance of the neural network differed from that of the other five methods on all three performance measures.

Figure 1

Performance in the validation sample (case study).

In the CHF sample, random forests yielded the predictions with the highest R2 (23.7%); however, the differences among five of the six methods were again negligible (range: 22.2 to 23.7%). Random forests produced the predictions with the lowest MSE, while boosted trees produced the predictions with the lowest MAE. As with R2, MAE differed little across five of the six methods (range: 15.0 to 15.2). As in the AMI sample, the neural network performed substantially worse than the other five methods on all three measures.

When comparing the three approaches based on a linear model, neither of the two penalized approaches (lasso and ridge regression) had an advantage over conventional OLS regression in either disease sample. In both diseases, lasso and ridge regression had very similar performance to each other.

In conclusion, in these empirical analyses, a tree-based machine learning method (either boosted trees or random forests) tended to produce predictions with the greatest accuracy in the validation samples. However, the differences among five of the six methods were minimal. Neural networks produced predictions that were substantially worse than those of the other five methods.
