Ekambaram Dilliraj1
Ponnusamy Vijayakumar2,*
1 (Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur-Chennai, Tamilnadu-603203, India, de0642@srmist.edu.in)
2 (Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur-Chennai, Tamilnadu-603203, India, vijayakp@srmist.edu.in)
Copyright © The Institute of Electronics and Information Engineers (IEIE)
Keywords
Virtual, augmented, and mixed reality (VR, AR, and MR); machine learning (ML); deep learning (DL); rehabilitation; physiotherapy; movement recognition
1. Introduction
A physiotherapist can train patients in various rehabilitation strategies by teaching them to use image-processing techniques with Extended Reality (XR), allowing doctors to monitor a patient's progress from anywhere in the world [36]. Research into advanced strategies for patient care, home rehabilitation, and recuperation in immersive environments is progressing worldwide. Against this background, we list the essential rehabilitation training required for various post-stroke, scoliosis, musculoskeletal, and post-surgery disorders, and we examine their impact around the world.
Every year, more than 13.7 million people worldwide suffer a stroke, and roughly one in four people over the age of 25 will experience one in their lifetime. A cerebrovascular accident, or stroke, is the second most common cause of death in the world [21]. Loss of motor control, paralysis, or severe back pain are all possible outcomes, necessitating the services of a physiotherapist to help patients reclaim their independence [15]. However, traditional rehabilitation follows a routine and slow-paced process, and patients tend to feel bored and lose interest in attending the training [18]. Scoliosis is a deformity of the spine involving vertebral rotation and twisting. Adolescent idiopathic scoliosis (AIS) affects 2-3% of the population. Scoliosis affects both physical and mental health. Electromyography (EMG) has been used in many studies to measure paraspinal muscle activity in scoliosis patients [1,39,40].
Musculoskeletal dysfunction is a costly concern for workers worldwide. Between 2014 and 2018, 4.2 million Americans and 6.6 million Britons suffered from this type of disorder, most of them of working age and requiring prompt physiotherapy for rehabilitation [22,23]. Post-surgery monitoring reduces fatalities: physicians watch human movements, emotions, and minor motions of body components. Human-computer interaction (HCI) reduces the complexity of patient care [4]. In patient-care systems, doctors analyze facial expressions and EEG datasets for emotions such as anger, happiness, sadness, and surprise [2,5]. Subsection 1.1 describes the typical analyses involved in physiotherapy.
AI improves medical delivery, decision-making, and patient involvement. Narrow artificial intelligence (nAI) is the present application of AI in society; it contrasts with general AI, which simulates human-level intelligence across various areas [25]. Building an effective learning model is challenging and entails addressing data sparsity, missing and discarded values, sensor mis-calibrations, and noisy segments [26]. Data confidentiality is another challenge, especially with cloud platforms and the Internet of Things. To protect the privacy of assessment-system users, data transmission between platforms should be secure.
1.1 AI-assisted Physiotherapy Analysis
In this section, we present the various AI-assisted rehabilitation analysis strategies proposed by researchers around the globe, covering the most common processes used in physiotherapy, as shown in Fig. 1.
Emotion Detection: The system for analyzing sentiment is divided into four distinct components. The first component extracts the facial region from the input image using a tree-structured part model. The second component carries out a statistical and deep-learning-based feature analysis of the extracted face region, aimed at finding patterns that are useful and distinctive. The third component rates the intensity of pain expressed on the patient's face (no pain, low pain, or high pain) using prediction models based on the statistical and deep feature analyses. In the fourth step, the results of the statistical and deep-feature analyses are integrated to enhance the performance of the suggested approach [5,32].
Fig. 1. Overview of Various AI-assisted Physiotherapy Rehabilitation Strategies.
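The fusion step described above can be illustrated with a minimal sketch (not the authors' implementation): one classifier is trained on hand-crafted statistical features, another on precomputed deep features, and their class probabilities over the three pain levels are averaged. The feature matrices and labels below are synthetic placeholders.

```python
# Minimal sketch of statistical + deep feature fusion for pain-level
# prediction (no-pain / low-pain / high-pain). Feature matrices are
# hypothetical placeholders standing in for real extracted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples = 300
X_stat = rng.normal(size=(n_samples, 20))    # statistical facial features
X_deep = rng.normal(size=(n_samples, 128))   # deep CNN features (precomputed)
y = rng.integers(0, 3, size=n_samples)       # 0 = no pain, 1 = low, 2 = high

idx_train, idx_test = train_test_split(np.arange(n_samples), test_size=0.3,
                                       random_state=0, stratify=y)

# Branch 1: classical model on statistical features.
clf_stat = RandomForestClassifier(n_estimators=200, random_state=0)
clf_stat.fit(X_stat[idx_train], y[idx_train])

# Branch 2: linear model on deep features.
clf_deep = LogisticRegression(max_iter=1000)
clf_deep.fit(X_deep[idx_train], y[idx_train])

# Late fusion: average the predicted class probabilities of both branches.
proba = 0.5 * clf_stat.predict_proba(X_stat[idx_test]) \
      + 0.5 * clf_deep.predict_proba(X_deep[idx_test])
y_pred = proba.argmax(axis=1)
print("fused accuracy:", accuracy_score(y[idx_test], y_pred))
```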
The discrete wavelet transform (DWT) has been used to analyze EEG-based emotion: important features, including modified wavelet energy parameters, are retrieved from the wavelet coefficients. Central nervous system signals are more accurate than other modalities. Multimodality in emotion research is key to effective HCI, because the human response to events is multimodal [28]. A recent study fused voice, physiological signs [27], facial expressions, bodily motions, and user input (from smartphone/keyboard strokes) into a multimodal emotion classification system. Among the various modalities for assessing feelings, the electroencephalogram (EEG), which captures electrical brain activity, has produced persuasive results over the last 10 years. Emotion assessment from EEGs could help patients recover from specific illnesses [2].
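As a minimal sketch of the DWT feature-extraction idea, assuming the PyWavelets library, a synthetic single-channel EEG segment, and an illustrative wavelet and decomposition depth (not those of the cited studies), relative wavelet energies per sub-band can be computed as follows.

```python
# Sketch: wavelet-energy features from one EEG channel via the DWT.
import numpy as np
import pywt

fs = 128                                    # sampling rate (Hz), assumed
t = np.arange(0, 4, 1.0 / fs)               # 4-second synthetic segment
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# 4-level decomposition with a Daubechies-4 wavelet: [cA4, cD4, cD3, cD2, cD1]
coeffs = pywt.wavedec(eeg, "db4", level=4)

# Relative wavelet energy per sub-band, a common emotion-recognition feature.
energies = np.array([np.sum(c ** 2) for c in coeffs])
rel_energy = energies / energies.sum()
for name, e in zip(["A4", "D4", "D3", "D2", "D1"], rel_energy):
    print(f"{name}: {e:.3f}")
```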
Recognition of movement: Recognizing human motion is a major research topic in computer vision and artificial intelligence. It serves a vast and vital range of purposes in visual surveillance, multi-purpose entertainment, motion security, and high-level HCI [20]. For patients with frozen shoulder, improvement in the shoulder's range of motion (ROM) is measured at the clinic and during active recovery appointments, and multiple visits to monitor progress are frequently inconvenient for these patients [13].
Another model, a real-time camera tracking strategy based on mixed (hybrid) feature points, was presented with a focus on stable execution and robustness. Feature extraction and refinement are combined to build hybrid features, and these hybrid features, which include lines as scene features, are brought together for real-time estimation of the camera parameters. To meet the processing constraints of mobile terminals, an image-feature improvement technique that takes the findings of scene-structure analysis into account has been developed [17]. On the other hand, conventional surveillance systems for preventing accidents and other incidents miss 95% of events after 22 minutes when a single person screens most closed-circuit television (CCTV) feeds [7].
Clinical Evaluation Imitating: To plan and design a method capable of managing and evaluating the lower limbs, EMG signals are used in a form suitable for a rehabilitation robot to handle. A genetic algorithm with a support vector machine (SVM) for regression, the deep-learning-based Visual Geometry Group (VGG16) architecture, an AI-based image-recognition sensor, and a virtual reality recovery framework using a Kinect motion-capture sensor are techniques used for clinical evaluation by physicians in the present scenario [16,11,13,18].
The following are some methods by which doctors assess patients' progress after a stroke. Patients needing post-stroke rehabilitation (PSR) must engage in pose assessment using an eight-section brocade exercise, 10 rehabilitation exercises, and physiotherapy techniques for extension, flexion, and rotation. A physiotherapist will use various methods, such as nerve re-education, task training, muscle strengthening, and other assistive procedures, to re-establish the movements needed in daily life. Having a physiotherapist directly supervise every physiotherapy activity performed by a patient is laborious, repetitive, and costly. Using an RGB-Depth camera, an automated framework can be developed to detect and recognize upper-limb exercises [14,15,1]. A random forest classifier demonstrated 77% accuracy in that setting, whereas the SVM classifier achieved 85% accuracy; the SVM classifier is appropriate for this task because of its superior performance, as shown in [8].
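To make the upper-limb exercise recognition idea concrete, the following hedged sketch computes an elbow-flexion angle from three 3D joint positions of the kind an RGB-Depth skeleton tracker provides; the coordinates are made-up values and the formula is generic, not the cited systems' code.

```python
# Sketch: elbow-flexion angle from 3D shoulder/elbow/wrist keypoints.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical keypoints in metres from a depth camera's skeleton output.
shoulder = np.array([0.00, 1.40, 2.00])
elbow = np.array([0.05, 1.15, 2.00])
wrist = np.array([0.25, 1.05, 1.95])

print("elbow flexion angle:", round(joint_angle(shoulder, elbow, wrist), 1), "deg")
```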
2. Literature Review
Studies on reality-based image processing with machine learning (ML) and deep learning
(DL) algorithms; parameters monitored during rehabilitation training; the participants,
sample datasets, methods utilized, and the accuracy achieved in the reviewed research
articles are presented in Table 1 [37]. Subsection 2.1 describes the ML algorithms used for recuperative training in the reviewed articles.
A literature search was conducted using the IEEE, PubMed, Scopus, ScienceDirect, MDPI, and Physiotherapy Reviews databases. The included works explored various facets of wearable and related technologies developed to aid recovery from, and prevention of, stroke and other medical conditions. Different permutations of keywords from titles and abstracts, as well as their synonyms, were used in each database. To assess current trends, we looked at articles published between 2018 and 2022.
Table 1. Interpretation of the Literature.

Ref. | Algorithm Used | Parameters Measured | Results Achieved
[12] | CNN | Physical and facial emotions: four emotions were identified (happiness, sadness, surprise, and anger). | 86.2% accuracy.
[5] | CNN | Human sentiment databases with detection of pain levels in participants. | UNBC-McMaster shoulder pain database accuracy was 83.71%, and D2 database accuracy was 75.67%.
[29] | Bidirectional Encoder Representations from Transformers (BERT) model | Twitter natural language data. | F1-score of 89% for four different emotion classes of Tweets.
[30] | Deep learning with a Self-Explaining Neural Network (SENN) model | Global vectors for word representation (GloVe) dataset. | The accuracy achieved was 98.8%.
[8] | Random forest and SVM | Intervals of inactivity (sitting) and mobility; electrodes were placed on the upper trapezius, rectus abdominis, external oblique (thoracic), and erector spinae (lumbar) muscles. | An SVM classifier using the eight most critical features showed good accuracy (85%).
[14] | Spatial Transformer Network (STN) with an attention-based multi-scale CNN | Pose assessment with eight-section brocade exercises for different body parts like the left upper arm, left forearm, left thigh, left calf, trunk, right upper arm, right forearm, right thigh, and right calf. | ST-AMCNN reached average accuracy of 70.02% across different human body poses.
[15] | Three-layer convolutional neural network (CNN) with long short-term memory (LSTM) | Rehabilitation exercises: (a) neck extension, (b) neck rotation, (c) trunk side view, (d) trunk front view, (e), (f) elbow-joint extension (front and side view), (g) foot dorsiflexion, (h) foot plantar flexion, (i), (j) knee-joint extension and flexion, (k) trunk extension, and (l) wrist flexion. | The accuracy achieved by this framework was 91.3%.
[1] | CNN-GRU | Left and right shoulder flexion, abduction, elbow flexion, median rotation, and internal shoulder rotation. | The model accuracy was 100%.
[16] | CNN with VGG16 | Ultrasound and photoacoustic imaging. | 0.86 was the highest accuracy achieved for the 3-class problem.
[4] | Principal component analysis network (PCANet) | Human-computer interaction for health monitoring. | 96.9% accuracy using PCANet-3 with a running time of 3411.23 s.
[3] | YOLOv4 network-based deep learning | CT images for intracranial hemorrhage (ICH). | The proposed approach's overall precision, recall, and F1-score were 94%, 92%, and 93%, respectively.
[2] | Hierarchical RNN and DL through CNN | EEG images. | Saliency fusion produced a mean of 74.42% with an SD of 4.76.
[11] | Support vector regression (SVR) with a genetic algorithm | Lower-limb muscles. | 98.67% accuracy for lower-limb muscle-force estimation.
[20] | Gaussian algorithm | Publicly available gait dataset. | 95% accuracy for human motion tracking.
[13] | CNN | Shoulder motion. | R² value was 99.79%.
[10] | CNN-LSTM with multiple models | Publicly available fall-detection dataset. | The expected average overlap was 0.167 higher than for other architectures.
[19] | Oriented FAST and Rotated BRIEF (ORB) descriptors with the Kanade-Lucas-Tomasi (KLT) algorithm | Feature-point tracking of a human. | 94% tracking accuracy was achieved.
[18] | VR with Kinect sensors | Upper limbs (elbow and shoulder flexion and extension, abduction and adduction). | The similarity between real-time and virtual environments was less than 0.4 curve points.
[7] | CNN with an AlexNet structure | NIST database. | Fall-detection accuracy of 93.54%.
[17] | Block Orthogonal Matching Pursuit (BOMP) algorithm | Human action in pattern-based recognition. | Maximum of 92% accuracy.
2.1 Machine Learning
SVMs were the most common type of classifier. They were used mainly for classification problems in action recognition and for regression problems in clinical evaluations where participants are given a clinical score [21,38]. In general, several different hyperplanes can separate the data samples from one another. The way the support vector machine selects its hyperplane separates it from other types of classifiers and makes it stand out as a distinct algorithm. The objective of a support vector machine is to identify the margin that provides the most significant separation between the two classes. Consequently, it selects the hyperplane that maximizes the distance to the data points located closest to the separating line [11].
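A minimal scikit-learn sketch of this maximum-margin idea, using synthetic two-class data rather than the reviewed studies' EMG features, is given below.

```python
# Sketch: linear SVM choosing the maximum-margin hyperplane.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=200, centers=2, random_state=0, cluster_std=1.2)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)            # width between the two margins
print("hyperplane normal:", w, "bias:", b)
print("margin width:", margin)
print("number of support vectors:", clf.support_vectors_.shape[0])
```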
Ensemble learning occurs when multiple models, experts, classifiers, etc., work together to solve a computational problem. A random forest (RF) classifies using an ensemble of deep decision trees and combines the trees' results with a majority vote. RF was used less than before because a decision-tree ensemble remains prone to overfitting; it is used when the dataset is large [11,21]. The calculations related to this process use the following formulas.

The Gini index for a node m with k-class classification is calculated as follows:

$GI_{m} = 1 - \sum_{k} p_{k}^{2}$

where the weight of the k-th category is represented by $p_{k}$. The importance of a feature j can then be determined from the change in the Gini index when node m splits on that feature. Assuming that VIM$_{jm}$ denotes the change in the Gini index attributable to feature j at node m, GI$_{m}$ denotes the Gini index before splitting, and GI$_{l}$ and GI$_{r}$ denote the Gini indices of the two new child nodes after splitting, then:

$VIM_{jm} = GI_{m} - GI_{l} - GI_{r}$

If feature j appears at M nodes in decision tree DT$_{i}$, the significance of feature j for that tree is:

$VIM_{ij} = \sum_{m=1}^{M} VIM_{jm}$

Then, the significance of j is:

$VIM_{j} = \sum_{i=1}^{n} VIM_{ij}$

where n is the number of decision trees in the RF.
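The Gini-based importance defined above corresponds to the impurity-based feature importances exposed by common RF implementations; a brief sketch on synthetic data (not the reviewed studies' data) follows.

```python
# Sketch: random forest with Gini-impurity feature importance.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=200, criterion="gini",
                            random_state=0).fit(X, y)

# feature_importances_ averages the Gini-impurity decrease (VIM) of each
# feature over all nodes and trees, normalised to sum to one.
for j, imp in enumerate(rf.feature_importances_):
    print(f"feature {j}: importance {imp:.3f}")
```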
Fig. 2. The architecture of the three-layer CNN-LSTM model [15].
Artificial neural networks (ANNs) were a popular option for evaluating stroke survivors. The majority of these models relied on multilayer perceptron (MLP) architectures, and MLPs achieved outstanding results for action recognition and movement classification. Convolutional neural network (CNN) architectures [21,33] are another example of the ANN technology implemented; they stack multiple layers in a deeply organized way. Compared with shallower neural network models, the deep neural network (DNN) approach makes significant strides as soon as it can exploit more deep layers. The CNN's spatial design and weight-sharing mechanism give it a high degree of distortion resistance, which it uses to deal with image classification and recognition problems. As a two-dimensional vector, input data are easily handled by a CNN. Long short-term memory (LSTM) was used in [15] to categorize data into various groups. Fig. 2 illustrates the architecture of the three-layer CNN-LSTM model utilized in rehabilitation training.
Classification of the sensed data in this system was performed with the CNN-LSTM. The primary objective when training the model is to modify the filter weights so that the predicted trajectory is as similar as possible to the actual one. During training, the network operates in the forward direction to produce the predicted value of the final output [15].
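As a hedged illustration of a CNN-LSTM of the kind shown in Fig. 2, the sketch below stacks three 1-D convolutional layers in front of an LSTM; the layer sizes, input shape, and 12-class output are assumptions for the example, not the configuration reported in [15].

```python
# Sketch: 1-D CNN feeding an LSTM for exercise-sequence classification.
import tensorflow as tf
from tensorflow.keras import layers, models

n_timesteps, n_channels, n_classes = 128, 6, 12   # assumed shapes

model = models.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
    layers.LSTM(64),                                # temporal summary
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```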
K-nearest neighbor (KNN) is an algorithm used in recommendation-engine systems in various applications. The KNN classifier depends on a distance metric and is widely used in practice because it is free from underlying assumptions about the distribution of the dataset [21].
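A minimal KNN sketch, again on placeholder data, shows the distance-based classification with no distributional assumptions.

```python
# Sketch: k-nearest-neighbour classification with a Euclidean distance metric.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X_tr, y_tr)
print("test accuracy:", knn.score(X_te, y_te))
```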
Almost all the articles utilized the following formulas to calculate the accuracy, the regression model's coefficient of determination ($R^{2}$), and the root mean square error (RMSE):

$Accuracy = \dfrac{TP + TN}{TP + TN + FP + FN}$

where TP is true positive, TN is true negative, FP is false positive, and FN is false negative;

$R^{2} = 1 - \dfrac{RSS}{TSS}$

where RSS is the residual sum of squares, and TSS is the total sum of squares; and

$RMSE = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left(d_{i} - f_{i}\right)^{2}}$

where $d_{i}$ is the predicted score, and $f_{i}$ is the actual input from the therapist.
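These three formulas can be computed directly; the short sketch below uses made-up confusion counts and made-up predicted/therapist scores.

```python
# Sketch: accuracy, R^2, and RMSE as defined above.
import numpy as np

# Classification example (made-up confusion counts).
TP, TN, FP, FN = 42, 38, 7, 13
accuracy = (TP + TN) / (TP + TN + FP + FN)

# Regression example: predicted scores d_i vs. therapist scores f_i.
d = np.array([3.1, 4.2, 2.8, 5.0, 3.9])     # predicted
f = np.array([3.0, 4.5, 2.5, 5.2, 4.0])     # actual
rss = np.sum((f - d) ** 2)
tss = np.sum((f - f.mean()) ** 2)
r2 = 1.0 - rss / tss
rmse = np.sqrt(np.mean((d - f) ** 2))

print(f"accuracy={accuracy:.3f}  R^2={r2:.3f}  RMSE={rmse:.3f}")
```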
Fig. 3 shows the outline of the steps used for extended reality-based recuperation analysis
of a patient by the physiotherapist.
Fig. 3. Extended Reality-based Recuperation Analysis of a Patient.
2.2 Progressive Technologies for the Rehabilitation Process
The immersive-environment-based recuperation process gives patients immense enjoyment when participating in the training. Virtual-environment practice focuses on guiding patients to complete the tasks assigned by physicians by interacting with virtual objects that combine motion, trajectory assistance, closed-loop visuals, and more, to assist them in overcoming various paralysis disorders [6].
Video target tracking covers an assortment of interdisciplinary subjects (for example, pattern recognition, image processing, computer graphics, and artificial intelligence). Lately, visual tracking research has made significant progress, and researchers have proposed numerous novel algorithms, such as Block Orthogonal Matching Pursuit (BOMP) [9], the AlexNet-based CNN [7], the Oriented FAST and Rotated BRIEF (ORB) feature descriptor, the Kanade-Lucas-Tomasi (KLT) algorithm [19], and more.
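A hedged OpenCV sketch of the ORB-plus-KLT combination mentioned above detects ORB keypoints in one frame and tracks them into the next with pyramidal Lucas-Kanade optical flow; the two frames here are synthetic stand-ins for consecutive video frames.

```python
# Sketch: ORB keypoint detection + KLT (Lucas-Kanade) tracking between frames.
import numpy as np
import cv2

# Synthetic stand-ins for two consecutive grayscale video frames:
# a bright square shifted by a few pixels between frames.
frame0 = np.zeros((240, 320), dtype=np.uint8)
frame1 = np.zeros((240, 320), dtype=np.uint8)
frame0[100:140, 100:140] = 255
frame1[103:143, 105:145] = 255

orb = cv2.ORB_create(nfeatures=200)
keypoints = orb.detect(frame0, None)
if len(keypoints) == 0:
    raise SystemExit("no ORB keypoints found in the first frame")
p0 = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

# Track the ORB keypoints into the next frame with pyramidal Lucas-Kanade.
p1, status, err = cv2.calcOpticalFlowPyrLK(
    frame0, frame1, p0, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

tracked = status.ravel() == 1
print(f"tracked {tracked.sum()} of {len(p0)} keypoints")
if tracked.any():
    print("mean displacement (px):",
          np.linalg.norm((p1 - p0)[tracked].reshape(-1, 2), axis=1).mean())
```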
Extended reality (XR) is a crucial technology for broader applications in tele-rehabilitation systems, offering benefits such as natural participant selection, reduced recruitment cost, and broadened diversity without introducing significant biases. Thus, an outline of XR researcher and user experiences with remote XR studies can help us see how these apply in practice at the current time, and how they shape the entire field for future improvements [24].
3. Research Findings
In this section, Table 2 lists the reviewed articles' limitations and the future challenges for recuperative training by a physiotherapist.
3.1 Future Research Endeavors
In this section, we discuss future research endeavors for applications like sentiment/emotion analysis, human activity recognition, and clinical evaluation analysis for the recuperative training process. Sentiment/emotion analysis based on text, video, audio, and eye tracking is also an important part of analyzing a person's emotions.
Emotion analysis based on eye tracking is a very complex system. The key eye-tracking
features used to analyze emotions are the distance between the iris and the sclera,
the speed of the eye movements, the diameter of the pupil, EOG signals, the position
of the pupil, the duration of eye fixations, and pupillary responses [31]. Most researchers identified four emotions in the quadrant space. Further research is needed to identify more nuanced emotions beyond the typical six or eight used in emotion-classification studies. Improved accuracy in multi-mode surveillance requires more thorough tracking of the patient's movements. A human monitoring device still needs greater accuracy in order to follow minute changes in the patient's anatomy and to provide a formal evaluation to patients by communicating the report confidentially over the Internet.
Table 2. Key findings and future challenges in recuperative training.

Ref. | Limitations | Challenges
[27] | Tuning parameters like iterations and algorithms increases accuracy. Real dataset collection considers the subject number. | Optimizing label estimation, feature extraction, and data fusion can improve the recognition rate and accuracy.
[8] | The accuracy (85%) of this system in classifying the data is significantly low due to the minimal number of samples. | Optimization is required in building a classifier to improve accuracy.
[14] | Pose-estimation matching may fail due to the overlapping of human body parts. This system provides low accuracy (70.02%) on average when the pose is synchronized with real-time pose-guided matching. | Implementation of video analysis algorithms with this method may improve accuracy.
[15] | This system fails to provide good matching accuracy for complex rehabilitation exercises. | Imparting complex rehabilitation exercise-pose datasets may help to improve the system's accuracy.
[4] | The processing time for interaction between humans and computers is long (3411.23 s). | Concentrating on reducing the processing time could increase the performance of the system.
[3] | Preparation of separate training datasets for the system is required to increase the segmentation success rate. | Designing a system with an ensemble process could improve the computational time of the system.
[12] | This work did not acquire random facial-emotion images; hence, the accuracy of the system (86.2%) was reduced. | There is a need to improve recognition of facial expressions in real time. Other emotions like fear and disgust must be added in future work.
[5] | Feature extraction needs to be done correctly in this system; accuracy (83.71% for the UNBC-McMaster dataset and 75.67% for the D2 database) was significantly low. | Development of an ensemble process is required to improve accuracy.
[11] | Online muscle-force estimation is required to develop a home-based rehabilitation system for patients. | Optimization of the algorithm is needed to develop more precise home rehabilitation for patients.
[20] | This system provides single-motion monitoring results, which may reduce performance. | Enhancement is required to monitor the patient's activity in multi-mode surveillance to increase accuracy.
[13] | Tracking of smaller patient movements using a human tracking system needs greater accuracy. | Improvement is needed for robustness in tracking smaller movements by a patient.
[31] | Only four emotions were analyzed using this framework in quadrant form. | A broader range of human emotions is needed, beyond the standard six or eight recognized in research.
4. Conclusion
Focusing on the implementation of state-of-the-art technology, rehabilitation training for various disorders can be managed by a physiotherapist. The worldwide COVID-19 pandemic required the planning of novel remote-working advances, particularly for recovery therapies [34]. The main aim of this review article is to show the important limitations of, and challenges facing, current rehabilitative training with the new-age technologies proposed by different researchers. Based on the overview of the various research articles, we present the following important challenges in tele-rehabilitation.
· Latency problems during a conversation between therapist and patient.
· Lack of accuracy in finding small deflections in the patient.
· Improvement of depth image analysis algorithms for real-time video monitoring.
· Accurate estimation of augmented exercise poses in real time needs enhancement.
· Development is required to recognize emotions like fear and disgust in patients.
In any case, with the advancement of computing platforms, modern algorithms (specifically deep learning), which require less domain knowledge, are taking over. From the reviewed papers, we identified the difficulties experienced by specialists in the field, which relate to data aspects, recruitment for studies, field complexity, power consumption, and patient acceptance. We also offered some suggestions to help specialists in the field improve their frameworks. This work additionally presents some difficulties and issues for future study. With this survey, we intend to make it easier for academics who are interested in studying emotion recognition with different technologies, human action recognition, and clinical evaluation systems.
REFERENCES
Bijalwan, V., Semwal, V.B., Singh, G. and Mandal, T.K. (2022). HDL-PSR: Modelling
Spatio-Temporal Features Using Hybrid Deep Learning Approach for Post-Stroke Rehabilitation.
Neural Processing Letters.
Delvigne, V., Facchini, A., Wannous, H., Dutoit, T., Ris, L. and Vandeborre, J.-P. (2022). A Saliency based Feature Fusion Model for EEG Emotion Estimation. arXiv:2201.03891 [cs]. [online] Available at: https://arxiv.org/abs/2201.03891 [Accessed 8 Mar. 2022].
Ertuğrul, Ö.F. and Akıl, M.F. (2022). Detecting hemorrhage types and bounding box
of hemorrhage by deep learning. Biomedical Signal Processing and Control, 71, p. 103085.
Gan, S., Zhuang, Q. and Gong, B. (2022). Human-computer interaction-based interface
design of intelligent health detection using PCANet and multi-sensor information fusion.
Computer Methods and Programs in Biomedicine, 216, p. 106637.
Ghosh, A., Umer, S., Khan, M.K., Rout, R.K. and Dhara, B.C. (2022). Smart sentiment
analysis system for pain detection using cutting edge techniques in a smart healthcare
framework. Cluster Computing.
Heyse, J., Carlier, S., Verhelst, E., Vander Linden, C., De Backere, F. and De Turck,
F. (2022). From Patient to Musician: A Multi-Sensory Virtual Reality Rehabilitation
Tool for Spatial Neglect. Applied Sciences, [online] 12(3), p. 1242.
Kim, J.S., Kim, M.-G. and Pan, S.B. (2021). A study on implementation of real-time
intelligent video surveillance system based on embedded module. EURASIP Journal on
Image and Video Processing, 2021(1).
Liang, R., Yip, J., Fan, Y., Cheung, J.P.Y. and To, K.-T.M. (2022). Electromyographic
Analysis of Paraspinal Muscles of Scoliosis Patients Using Machine Learning Approaches.
International Journal of Environmental Research and Public Health, [online] 19(3),
p. 1177.
Ma, W. and Xu, F. (2020). Study on computer vision target tracking algorithm based
on sparse representation. Journal of Real-Time Image Processing, 18(2), pp. 407-418.
Mohamed, N.A., Zulkifley, M.A., Kamari, N.A.M. and Kadim, Z. (2022). Symmetrically
Stacked Long Short-Term Memory Networks for Fall Event Recognition Using Compact Convolutional
Neural Networks-Based Tracker. Symmetry, [online] 14(2), p. 293.
Mokri, C., Bamdad, M. and Abolghasemi, V. (2022). Muscle force estimation from lower
limb EMG signals using novel optimised machine learning techniques. Medical & Biological
Engineering & Computing.
Oh, S. and Kim, D.-K. (2022). Comparative Analysis of Emotion Classification Based
on Facial Expression and Physiological Signals Using Deep Learning. Applied Sciences,
[online] 12(3), p. 1286.
Park, C., An, Y., Yoon, H., Park, I., Kim, K., Kim, C. and Cha, Y. (2022). Comparative
accuracy of a shoulder range motion measurement sensor and Vicon 3D motion capture
for shoulder abduction in frozen shoulder. Technology and Health Care, [online] 30(S1),
pp. 251-257.
Qiu, Y., Wang, J., Jin, Z., Chen, H., Zhang, M. and Guo, L. (2022). Pose-guided matching
based on deep learning for assessing quality of action on rehabilitation training.
Biomedical Signal Processing and Control, 72, p. 103323.
Rahman, Z.U., Ullah, S.I., Salam, A., Rahman, T., Khan, I. and Niazi, B. (2022). Automated
Detection of Rehabilitation Exercise by Stroke Patients Using 3-Layer CNN-LSTM Model.
Journal of Healthcare Engineering, 2022, pp. 1-12.
Schlereth, M., Stromer, D., Breininger, K., Wagner, A., Tan, L., Maier, A. and Knieling, F. (2022). Automatic Classification of Neuromuscular Diseases in Children Using Photoacoustic Imaging. arXiv:2201.11630 [cs, eess]. [online] Available at: https://arxiv.org/abs/2201.11630 [Accessed 8 Mar. 2022].
Sun, W. and Mo, C. (2020). High-speed real-time augmented reality tracking algorithm
model of camera based on mixed feature points. Journal of Real-Time Image Processing.
Xiao, B., Chen, L., Zhang, X., Li, Z., Liu, X., Wu, X. and Hou, W. (2022). Design
of a virtual reality rehabilitation system for upper limbs that inhibits compensatory
movement. Medicine in Novel Technology and Devices, 13, p. 100110.
Yue, S. (2020). Human motion tracking and positioning for augmented reality. Journal of Real-Time Image Processing, 18, pp. 357-368.
Zhang, X., Xu, Z. and Liao, H. (2022). Human motion tracking and 3D motion track detection
technology based on visual information features and machine learning. Neural Computing
and Applications.
Boukhennoufa, I., Zhai, X., Utti, V., Jackson, J. and McDonald-Maier, K.D. (2022).
Wearable sensors and machine learning in post-stroke rehabilitation assessment: A
systematic review. Biomedical Signal Processing and Control, 71, p. 103197.
Costa, F., Janela, D., Molinos, M., Lains, J., Francisco, G.E., Bento, V. and Dias
Correia, F. (2022). Telerehabilitation of acute musculoskeletal multi-disorders: prospective,
single-arm, interventional study. BMC Musculoskeletal Disorders, 23(1).
Igwesi-Chidobe, C.N., Bishop, A., Humphreys, K., Hughes, E., Protheroe, J., Maddison,
J. and Bartlam, B. (2020). Implementing patient direct access to musculoskeletal physiotherapy
in primary care: views of patients, general practitioners, physiotherapists and clinical
commissioners in England. Physiotherapy.
Ratcliffe, J., Soave, F., Bryan-Kinns, N., Tokarchuk, L. and Farkhatdinov, I. (2021).
Extended Reality (XR) Remote Research: a Survey of Drawbacks and Opportunities. Proceedings
of the 2021 CHI Conference on Human Factors in Computing Systems.
Tack, C. (2019). Artificial intelligence and machine learning | applications in musculoskeletal physiotherapy. Musculoskeletal Science and Practice, 39, pp. 164-169.
Li, Y., Chen, R., Niu, X., Zhuang, Y., Gao, Z., Hu, X., & El-Sheimy, N. (2021). Inertial
Sensing Meets Machine Learning: Opportunity or Challenge?. IEEE Transactions on Intelligent
Transportation Systems.
Panicker, S. S., & Gayathri, P. (2019). A survey of machine learning techniques in
physiology based mental stress detection systems. Biocybernetics and Biomedical Engineering,
39(2), 444-469.
Qiu, S., Zhao, H., Jiang, N., Wang, Z., Liu, L., An, Y., & Fortino, G. (2022). Multi-sensor
information fusion based on machine learning for real applications in human activity
recognition: State-of-the-art and research challenges. Information Fusion, 80, 241-265.
Zad, S., Heidari, M., Jones, J.H.J. and Uzuner, O. (2021). Emotion Detection of Textual Data: An Interdisciplinary Survey. 2021 IEEE World AI IoT Congress (AIIoT). doi:10.1109/aiiot52608.2021.9454192.
Nandwani, P., & Verma, R. (2021). A review on sentiment analysis and emotion detection
from text. Social Network Analysis and Mining, 11(1), 1-19.
Lim, J. Z., Mountstephens, J., & Teo, J. (2020). Emotion recognition using eye-tracking:
taxonomy, review and current challenges. Sensors, 20(8), 2384.
Saxena, A., Khanna, A., & Gupta, D. (2020). Emotion recognition and detection methods:
A comprehensive survey. Journal of Artificial Intelligence and Systems, 2(1), 53-79.
Davoli, A., Guerzoni, G., & Vitetta, G. M. (2021). Machine learning and deep learning
techniques for colocated MIMO radars: A tutorial overview. IEEE Access,9,33704-33755.
Ali, O., Ishak, M. K., & Bhatti, M. K. L. (2021). Early COVID-19 symptoms identification
using hybrid unsupervised machine learning techniques. Computers, Materials, and Continua,
747-766.
Peiffer-Smadja, N., Rawson, T. M., Ahmad, R., Buchard, A., Georgiou, P., Lescure,
F. X., ... & Holmes, A. H. (2020). Machine learning for clinical decision support
in infectious diseases: a narrative review of current applications. Clinical Microbiology
and Infection, 26(5), 584-595.
Kelly, D., Hoang, T. N., Reinoso, M., Joukhadar, Z., Clements, T., & Vetere, F. (2018).
Augmented reality learning environment for physiotherapy education. Physical Therapy
Reviews, 23(1), 21-28.
Fahle, S., Prinz, C., & Kuhlenkötter, B. (2020). Systematic review on machine learning
(ML) methods for manufacturing processes-Identifying artificial intelligence (AI)
methods for field application. Procedia CIRP, 93, 413-418.
De Filippis, R., Carbone, E. A., Gaetano, R., Bruni, A., Pugliese, V., Segura-Garcia,
C., & De Fazio, P. (2019). Machine learning techniques in a structural and functional
MRI diagnostic approach in schizophrenia: a systematic review. Neuropsychiatric disease
and treatment, 15, 1605.
Ngiam, K. Y., & Khor, W. (2019). Big data and machine learning algorithms for health-care
delivery. The Lancet Oncology, 20(5), e262-e273.
Angehrn, Z., Haldna, L., Zandvliet, A. S., Gil Berglund, E., Zeeuw, J., Amzal, B.,
... & Heckman, N. M. (2020). Artificial intelligence and machine learning applied
at the point of care. Frontiers in Pharmacology, 11, 759.
Author
Vijayakumar Ponnusamy received his Ph.D. from SRM IST in 2018. His area of research
was Applied Machine Learning in Wireless Communications (i.e., cognitive radio). He
obtained his master's degree in Applied Electronics from the College of Engineering, Guindy,
in 2006. In 2000, he received his B.E. in Electronics and Communication Engineering
from Madras University. He is currently a Professor in the ECE Department, SRM IST,
Chennai, Tamil Nadu, India. He is a certified IoT Specialist and Data Scientist. He
is also a recipient of the NI India Academic Award for Excellence in Research (2015).
His current research interests are machine learning and deep learning, IoT-based intelligent
system design, blockchain technology, and cognitive radio networks. He has authored or coauthored more than 85 papers in international journals and more than 65 papers at international and national conferences. He is a senior member of IEEE.
E. Dilliraj received a B.E. in Electronics and Communication Engineering and a
master’s degree in Embedded System Technologies from Anna University. He worked as
an assistant professor of electronics and communication engineering at Prathyusha
Engineering College. His current interests are image processing, machine learning
algorithms, deep learning, artificial intelligence, computer vision, the IoT, and
embedded systems. He is currently a research scholar at the SRM Institute of Science
and Technology.