Published online: 2019-06-14
DOI: 10.24075/brsmu.2019.039
Generalized chronic periodontitis (GCP) is a widespread disease. It has a serious negative impact on the quality of a patient’s life, posing a challenge to dentists all over the world. At present, standard therapy regimens for GCP adopted in the Russian Federation do not account for the state of the mucosal barrier, which is determined by a number of factors, including the levels of secretory immunoglobulin A (sIgA). In our study, we attempted to assess the functional state of the mucosal barrier in patients with GCP and to provide a rationale for immunotherapy aimed at restoring the effective barrier function of the oral mucosa. Concentrations of sIgA, which served as an indicator of the mucosal barrier state, were measured by ELISA. We found that patients with GCP had significantly lower sIgA concentrations in the oral fluid than healthy individuals. Although therapy did raise sIgA levels, they remained much lower after treatment than in healthy volunteers (54.6 ± 30.5 µg/ml vs 151.2 ± 105.2 µg/ml). Increased permeability of the mucosal barrier allowed sIgA to leak into the peripheral blood serum, where its concentration grew from 0.21 ± 0.28 µg/ml to 0.35 ± 0.47 µg/ml over the course of treatment, suggesting damage to mucosal integrity. This should be accounted for when treating patients with GCP.
Published online: 2019-05-29
DOI: 10.24075/brsmu.2019.037
The existing emotion recognition techniques based on the analysis of the tone of voice or facial expressions do not possess sufficient specificity and accuracy. Both can be significantly improved by employing physiological signals that escape the filters of human consciousness. The aim of this work was to carry out an EEG-based binary classification of emotional valence using a convolutional neural network and to compare its performance to that of a random forest algorithm. A healthy 30-year-old male was recruited for the experiment, which comprised 10 two-hour sessions of watching videos the participant had selected according to his personal preferences. During the sessions, an electroencephalogram was recorded. The signal was then cleared of artifacts, segmented and fed to the model. Using the neural network, we achieved an F1 score of 87%, significantly higher than the F1 score of the random forest model (67%). The results of our experiment suggest that convolutional neural networks in general and the proposed architecture in particular hold great promise for emotion recognition based on electrophysiological signals. Further refinement of the proposed approach may involve optimizing the network architecture to include more classes of emotions and improving the network’s generalization capacity when working with a large number of participants.
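The two classifiers above are compared by their F1 scores (87% for the CNN vs 67% for the random forest). As a minimal illustration of the metric itself, the sketch below computes a binary F1 score in plain Python; the labels and predictions are invented toy data, since the study's EEG segments, models, and outputs are not reproduced here.

```python
def f1_score(y_true, y_pred):
    """Binary F1 = 2 * precision * recall / (precision + recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 8 EEG segments labeled by emotional valence
# (1 = positive, 0 = negative); predictions are hypothetical.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
print(f1_score(y_true, y_pred))  # tp=3, fp=1, fn=1 -> 0.75
```

F1 is preferred over plain accuracy here because it balances precision and recall, which matters when the positive and negative valence segments are not equally represented.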