
4.2 Preliminary Experiments

4.2.1 Centralized Learning

The first experiment performed during the project concerned centralized learning. We wished to obtain information about how a basic, centralized model would perform on the selected dataset described in Section 3.1.2. This information serves as a baseline for comparing centralized learning with federated learning in terms of model performance. In order to provide a more holistic overview, the experiment was divided into two parts: the first part involved training the artificial neural network using centralized learning, while the second part involved training the convolutional neural network using centralized learning. The experiments utilized the training configuration described in Table 3.

Training Configuration

Table 3: Training configuration for the centralized learning experiment with the ANN model.

4.2.1.1 Centralized Learning with ANN

During the first part of the experiment, the artificial neural network was trained using centralized learning. We chose to start with the less complex model, since it would be useful to observe how well a model with fewer parameters performs under centralized learning, and it would also provide a good point of comparison for the federated learning experiments. This section presents the results achieved when the ANN model was trained with centralized learning.
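The thesis does not reproduce the training code, but the essence of centralized learning is that a single model is fit on the full, pooled dataset in one training loop. The following is a minimal numpy sketch of that idea; the linear softmax model, toy data, and hyperparameters are illustrative stand-ins, not the ANN or dataset from Section 3:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_centralized(X, y, n_classes, epochs=50, lr=0.5):
    """Fit a single linear softmax classifier on the full, pooled dataset.

    Centralized learning: all samples reside on one node, and every
    gradient step sees the entire training set.
    """
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)                        # forward pass
        W -= lr * X.T @ (P - Y) / len(X)          # full-batch gradient step
    return W

# Toy two-class data standing in for the heartbeat dataset.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
W = train_centralized(X, y, n_classes=2)
acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In the federated setting discussed later, the same gradient step would instead run on each client's local shard, with only model updates shared.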

Metrics

Test Accuracy: 92.8%

Training Accuracy: 98.5%

Test Loss: 0.26

Training Loss: 0.04

Training Time: 227 s

Table 4: Accuracy, loss and training time for the centralized learning experiment with the ANN model.

The table shows a well-performing model with a relatively short training time.

Classification Report

Class               Precision   Recall   F1-Score   Support
Normal                   0.99     0.93       0.96     18118
Supra Ventricular        0.61     0.78       0.69       556
Ventricular              0.78     0.94       0.86      1448
Fusion                   0.28     0.90       0.43       162
Unknown                  0.79     0.99       0.88      1608

Table 5: Precision, recall, F1-score and support values for the centralized learning experiment with the ANN model. The F1-scores in the table are relatively high for most classes, indicating both good precision and good recall. However, one can observe that the model performed worse on the Fusion and the Supra Ventricular classes compared to the remaining classes, with the low Fusion precision (0.28) pulling its F1-score down to 0.43.
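The per-class precision, recall, and F1-score values reported in Table 5 can all be derived from a confusion matrix. As a hedged illustration of how these metrics relate, here is a small numpy sketch; the 3x3 matrix below is made up for the example and is not the thesis data:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall and F1 per class from a confusion matrix.

    cm[i, j] = number of samples with true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)          # TP / (TP + FP), per column
    recall = tp / cm.sum(axis=1)             # TP / (TP + FN), per row
    f1 = 2 * precision * recall / (precision + recall)
    support = cm.sum(axis=1).astype(int)     # number of true samples per class
    return precision, recall, f1, support

# Illustrative 3-class matrix (not the thesis data).
cm = [[50, 3, 2],
      [4, 40, 1],
      [2, 2, 46]]
p, r, f1, s = per_class_metrics(cm)
for name, pi, ri, fi, si in zip("ABC", p, r, f1, s):
    print(f"{name}: precision={pi:.2f} recall={ri:.2f} f1={fi:.2f} support={si}")
```

Because F1 is the harmonic mean of precision and recall, a single weak component (such as the Fusion precision of 0.28 in Table 5) dominates the class's F1-score.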

            N     S     V     F     U   (predicted label)
   N      0.93  0.01  0.02  0.02  0.02
   S      0.16  0.78  0.03  0.01  0.02
   V      0.02  0.00  0.94  0.03  0.01
   F      0.05  0.01  0.04  0.90  0.00
   U      0.00  0.00  0.01  0.00  0.99
(true label on rows; values are row-normalized)

Figure 28: Confusion matrix for the centralized learning experiment with the ANN model. The confusion matrix shows a clear diagonal, indicating that the model had a high true positive and true negative rate.
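The values in Figure 28 are row-normalized, so each row sums to 1 and the diagonal entry is that class's recall. A minimal sketch of how such a matrix is computed (the labels here are made up, not the ECG data):

```python
import numpy as np

def row_normalized_confusion(y_true, y_pred, n_classes):
    """Confusion matrix with each row scaled to sum to 1.

    Row i then reads as: given true class i, the fraction of samples
    predicted as each class -- the diagonal is the per-class recall.
    """
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm / cm.sum(axis=1, keepdims=True)

# Tiny illustrative example with three classes.
y_true = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 0, 2, 2, 2]
cm = row_normalized_confusion(y_true, y_pred, n_classes=3)
print(cm.round(2))
```

Row normalization is what makes the off-diagonal entries directly interpretable as false-negative rates, e.g. the 0.16 in row S of Figure 28 is the fraction of Supra Ventricular beats predicted as Normal.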

[Figure 29 plot: Training Accuracy and Validation Accuracy per Epoch]

Figure 29: Graph illustrating the training and validation accuracy of the centralized learning experiment with the ANN model. From this graph one can observe that the validation accuracy and the training accuracy converge. This indicates that the model did not overfit on the training data.

[Figure 30 plot: Training Loss and Validation Loss per Epoch]

Figure 30: The graph illustrates the training and validation loss of the centralized learning experiment with the ANN model. From this graph one can observe that the validation loss and the training loss converge, indicating that the model did not overfit on the training data.

4.2.1.2 Centralized Learning with CNN

During the second part of the experiment, the convolutional neural network was trained using centralized learning. This model is more complex and comprises more trainable parameters. We wanted the second part of the experiment to convey how well the CNN would perform using centralized learning. This section presents the results achieved when the CNN model was trained with centralized learning.

Metrics

Test Accuracy: 97.1%

Training Accuracy: 99.3%

Test Loss: 0.14

Training Loss: 0.02

Training Time: 471 s

Table 6: Accuracy, loss and training time for the centralized learning experiment with the CNN model.

The table shows a well-performing model with a higher test accuracy and a lower test loss than the ANN model described in Table 4. However, the training time for this experiment was more than twice that of the ANN model.

Classification Report

Class               Precision   Recall   F1-Score   Support
Normal                   0.99     0.98       0.98     18118
Supra Ventricular        0.67     0.76       0.72       556
Ventricular              0.94     0.92       0.93      1448
Fusion                   0.72     0.70       0.71       162
Unknown                  0.97     0.99       0.98      1608

Table 7: Precision, recall, F1-score and support values for the centralized learning experiment with the CNN model. The F1-scores shown in the table describe a model that performed well on every class, and every class scored higher than with the ANN model described in Table 5, meaning both precision and recall are high. Similarly to the ANN model, this table illustrates that the model performed worse on the Fusion and the Supra Ventricular classes compared to the remaining three classes.

            N     S     V     F     U   (predicted label)
   N      0.98  0.01  0.00  0.00  0.00
   S      0.22  0.76  0.01  0.00  0.00
   V      0.05  0.01  0.92  0.01  0.01
   F      0.25  0.00  0.04  0.70  0.01
   U      0.01  0.00  0.00  0.00  0.99
(true label on rows; values are row-normalized)

Figure 31: Confusion matrix for the centralized learning experiment with the CNN model. The confusion matrix illustrates a clear diagonal, indicating that the model had a high true positive and true negative rate. Compared to the confusion matrix for the ANN model illustrated in Figure 28, there are more false negatives for the Supra Ventricular and the Fusion classes: a larger fraction of those beats is misclassified as Normal (0.22 and 0.25, respectively).

[Figure 32 plot: Training Accuracy and Validation Accuracy per Epoch]

Figure 32: Graph illustrating the training and validation accuracy of the centralized learning experiment with the CNN model. From this graph one can observe that the validation accuracy and the training accuracy converge. This indicates that the model did not overfit on the training data.

[Figure 33 plot: Training Loss and Validation Loss per Epoch]

Figure 33: The graph illustrates the training and validation loss of the centralized learning experiment with the CNN model. From this graph one can observe that the validation loss and the training loss converge. This indicates that the model did not overfit on the training data.