IoT-assisted Human Activity Recognition Using Bat Optimization Algorithm with Ensemble Voting Classifier for Disabled Persons (2024)

Author(s): Nabil Almalki 1,2; Mrim M. Alnfiai 1,3; Fahd N. Al-Wesabi 4; Mesfer Alduhayyem 5; Anwer Mustafa Hilal 6; Manar Ahmed Hamza 6

Publication date (Electronic): 13 February 2024

Journal: Journal of Disability Research

Publisher: King Salman Centre for Disability Research

Keywords: human activity recognition, deep learning, hyperparameter tuning, bat algorithm, ensemble classification


        Abstract

        Internet of Things (IoT)-based human action recognition (HAR) has made a significant contribution to scientific studies. Hand gesture recognition, a subsection of HAR, plays a vital role in interacting with deaf people. HAR is the automatic detection of the actions of one or more subjects using a series of observations. Convolutional neural network structures are often utilized for recognizing human activities. With this intention, this study presents a new bat optimization algorithm with an ensemble voting classifier for human activity recognition (BOA-EVCHAR) technique to help disabled persons in the IoT environment. The BOA-EVCHAR technique makes use of the ensemble classification concept to recognize human activities proficiently in the IoT environment. In the presented BOA-EVCHAR approach, data preprocessing is performed at the beginning level. For the identification and classification of human activities, an ensemble of two classifiers, namely long short-term memory (LSTM) and deep belief network (DBN) models, is utilized. Finally, the BOA is used to optimally select the hyperparameter values of the LSTM and DBN models. To elicit the enhanced performance of the BOA-EVCHAR technique, a series of experimental analyses was performed. The extensive results of the BOA-EVCHAR technique show a superior accuracy of 99.31% in the HAR process.

        Main article text

        INTRODUCTION

        As older people suffer from age-related diseases or disorders of body functions, the necessity for smart health assistance systems increases every year (Gupta et al., 2022). Physical observation is a common technique of monitoring geriatric patients, but it can be expensive, necessitates a large number of staff members, and is not feasible in light of the huge aging population of the future (Xu et al., 2023). Many ambient assisted living (AAL) applications, like assistive human–computer interaction technologies, care-providing robots, and video surveillance systems, need human activity detection (Ullah et al., 2019). Although the main users of the AAL system are older people, the idea can also be implemented for physically and mentally impaired people, in addition to individuals suffering from obesity and diabetes, those who require support at home, and persons of any age engaged in personal fitness monitoring (Li et al., 2021). Accordingly, sensor-based realistic monitoring schemes for supporting independent living at home have been the topic of several recent research studies in the human action recognition (HAR) field (Rashid and Louis, 2019). Activity detection is described as the process of inferring from sensor datasets to classify a set of human actions. HAR is a rapidly developing area of study that can offer valuable data on the fitness, health, and well-being of monitored people outside hospital settings (Zhang et al., 2020b). Daily action detection using wearable technology plays a vital role in the domain of ubiquitous healthcare.

        The HAR problem, in machine learning terms, is considered a classification task (Brishtel et al., 2023), in which the method must map each input data point to one of a predefined number of categories (e.g. human actions) (Tang et al., 2021). Classification is a supervised task: the methods learn in the training phase by linking the input dataset to its respective ground-truth label, which denotes the activity that was happening when the data were recorded. While the HAR problem tries to ascertain a perfect segmentation of actions (Li et al., 2019), this can be hard to attain, as humans perform actions fluidly and the segmentation of time series into intervals that denote single actions is not straightforward. Even semantically, it is hard to describe where one action ends and the next commences (Lattanzi et al., 2022). Hence, much work in HAR is devoted to segmenting the measurements into fixed intervals and detecting the action that took place within each interval.

        This study presents a new bat optimization algorithm with an ensemble voting classifier for human activity recognition (BOA-EVCHAR) technique to help disabled persons in the Internet of Things (IoT) environment. The BOA-EVCHAR technique makes use of the ensemble classification concept to recognize human activities proficiently in the IoT environment. In the presented BOA-EVCHAR approach, data preprocessing can be generally performed at the beginning level. For the identification and classification of human activities, an ensemble of two classifiers namely long short-term memory (LSTM) and deep belief network (DBN) models are utilized. Finally, BOA is used to optimally select the hyperparameter values of the LSTM and DBN models. To elicit the enhanced performances of the BOA-EVCHAR technique, a series of experimentation analyses were performed. The key contributions of the paper are given as follows:

        • Develop a BOA-EVCHAR technique, which combines the BOA with an ensemble voting classifier to address HAR challenges in IoT environments, demonstrating innovation in algorithm design.

        • Utilizing the ensemble classification concept, the BOA-EVCHAR technique enhances the accuracy and robustness of human activity recognition by aggregating the outputs of two distinct classifiers, namely LSTM and DBN models, thereby improving the classification performance.

        • Employing the BOA to optimize hyperparameter values for both the LSTM and DBN models, resulting in improved model performance and adaptability to different activity recognition scenarios.

        The remaining sections of the article are arranged as follows: the Related Studies section offers the literature review, and the Proposed Model section presents the proposed method. Then, the Experimental Evaluation section elaborates on the evaluation results, and the Conclusion section summarizes the work.

        RELATED STUDIES

        Soni et al. (2023) proposed a DNN consisting of two complex layers and a BiGRU model. The model can routinely detect and extract activity factors by employing some of the construction parameters. The raw data obtained from mobile phone sensors are passed to a two-layer BiGRU and later to a two-layer convolutional neural network (CNN). In Basly et al. (2022), a deep temporal residual scheme for day-to-day life activity detection is presented, which targets improving spatiotemporal feature representation to enhance HAR performance. A deep RCN is implemented for retaining discriminative visual features related to appearance, and an LSTM-NN captures the long-term temporal evolution of activities. The LSTM exploits the time dependencies that arise during the action, supplementing the features extracted by the RCN with temporal data so that the dynamic HAR problem can be addressed as a sequential labeling task. Pesenti et al. (2023) suggested a DL-based method utilizing inertial sensors to give industrial exoskeletons HAR and adaptive payload compensation. This model also utilized LSTM networks for accomplishing HAR and for classifying the weight of objects lifted up to 15 kg.

        Islam et al. (2023) presented a new DL-based method called STC-NLSTMNet, with the capacity to extract spatial and temporal features simultaneously and routinely detect human actions with greater precision. This model mainly consists of DS-Conv blocks, NLSTM, and FAM. In Zhang et al. (2020a), a plain but efficient SGN for skeleton-based activity detection is suggested; the model utilizes the high-level semantics of joints in the network to improve its feature representation capacity. Mekruksavanich and Jitpattanakul (2021) proposed a generic HAR technique for mobile phone sensor data based on LSTM networks for time-series domains. Four baseline LSTM networks are comparatively investigated to examine the effect of employing diverse types of mobile phone sensor data. Additionally, a fusion LSTM network known as a 4-layer CNN-LSTM is presented to improve detection performance.

        Hamad et al. (2021) suggested a network architecture based on dilated causal convolution and multi-head self-attention that completely dispenses with recurrent structures to make computations efficient and preserve the ordering of time steps. The presented model is evaluated on human actions using smart home binary sensor data and embeddable sensor data. In Jugunta et al. (2023), an innovative approach is introduced for optimal fetal health classification, integrating the bat algorithm with a hybrid XGB-RNN model to dynamically adapt to the complexities of fetal health data, leveraging XGB for feature selection and boosting and RNN for capturing temporal dependencies. Tan et al. (2022) suggest an ensemble learning algorithm for smartphone sensor-based activity recognition, integrating a stacked GRU-CNN and a DNN with an extra feature vector, enhancing classification through model fusion.

        Researchers have proposed diverse deep-learning models for activity recognition. Soni et al. employed a DNN with two complex layers and a BiGRU model for routine activity detection from mobile phone sensor data. Pesenti et al. utilized LSTM networks in a DL-based method for industrial exoskeletons, addressing both HAR and adaptive payload compensation. Islam et al. presented STC-NLSTMNet, a DL-based method combining DS-Conv blocks, NLSTM, and FAM for precise human action detection. Mekruksavanich and Jitpattanakul introduced a generic HAR technique based on LSTM networks, exploring diverse sensor data types, while Hamad et al. proposed a networking construction using dilated causal convolution and multi-head self-attention for effective calculations in HAR from smart home binary sensor data.

        THE PROPOSED MODEL

        In this study, we have concentrated on the development and design of the BOA-EVCHAR technique for the recognition of human activities to help disabled persons in the IoT environment. The BOA-EVCHAR technique makes use of the ensemble classification concept to recognize human activities proficiently in the IoT environment. The BOA-EVCHAR technique comprises a three-stage process, namely, data preprocessing, ensemble classification, and hyperparameter adjustment using BOA. Figure 1 illustrates the overall flow of the BOA-EVCHAR method.
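        The ensemble stage combines the class predictions of the LSTM and DBN models by voting. A minimal soft-voting sketch is given below; it assumes both trained classifiers expose per-class probabilities, and the names prob_lstm/prob_dbn and the equal weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_vote(prob_lstm, prob_dbn, weights=(0.5, 0.5)):
    """Combine per-class probabilities from two classifiers by weighted
    averaging and return the index of the winning class per sample."""
    w1, w2 = weights
    combined = w1 * np.asarray(prob_lstm) + w2 * np.asarray(prob_dbn)
    return combined.argmax(axis=1)

# Toy example: 2 samples, 3 activity classes.
p_lstm = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]
p_dbn  = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
print(soft_vote(p_lstm, p_dbn))  # → [0 2]
```

Hard (majority) voting over predicted labels would be an alternative; soft voting is shown because it uses the full probability outputs of both networks.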

        Figure 1:

        Overall flow of the BOA-EVCHAR approach. Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; LSTM, long short-term memory.

        Data preprocessing

        At the beginning level, data preprocessing is performed. Min–max normalization is utilized to turn the observed data into values between 0 and 1 (Sorguli and Rjoub, 2023). The linear scaling normalization is determined using Equation (1):

        (1) y′_i = (y_i − y_min)/(y_max − y_min),

        where y_i is the actual value from the input database, y′_i is the normalized value scaled to the range [0, 1], and y_max and y_min refer to the maximal and minimal values of the feature.
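        As a concrete illustration of Equation (1), a minimal min–max normalization sketch in NumPy (the function name is illustrative):

```python
import numpy as np

def min_max_normalize(y):
    """Linear scaling of Equation (1): map each feature to [0, 1]."""
    y = np.asarray(y, dtype=float)
    y_min, y_max = y.min(axis=0), y.max(axis=0)
    return (y - y_min) / (y_max - y_min)

print(min_max_normalize([2.0, 4.0, 6.0]))  # → [0.  0.5 1. ]
```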

        Ensemble classification

        For the identification and classification of human activities, an ensemble of two classifiers, namely, LSTM and DBN, is utilized. LSTM is a popular recursive neural network that has had great success in different fields, namely, emotion analysis, image processing, and speech detection (Khataei Maragheh et al., 2022). LSTM has the advantages of low complexity, nonlinear predictability, and faster convergence. The memory cell is the core of LSTM; it includes a gateway system and parameters and is used to decide which data are useless or valuable.

        There are three gates in each cell: the forget, input, and output gates. At each time step, LSTM takes the present input (x_t) and the previous memory cell state (c_(t−1)) as input and evaluates the output (o_t) and the present cell state (c_t); σ indicates the sigmoid function, σ(x) = 1/(1 + e^(−x)).

        (2) i_t = σ_i(x_t W_xi + h_(t−1) W_hi + b_i)

        (3) f_t = σ_f(x_t W_xf + h_(t−1) W_hf + b_f)

        (4) o_t = σ_o(x_t W_xo + h_(t−1) W_ho + b_o)

        (5) c_t = f_t ⊙ c_(t−1) + i_t ⊙ tanh(x_t W_xc + h_(t−1) W_hc + b_c)

        (6) h_t = o_t ⊙ tanh(c_t),

        where i_t is the input gate activation vector, x_t the input vector, o_t the output gate activation vector, f_t the forget gate activation vector, and h_t the hidden layer (HL) output vector, with h_(t−1) ∈ R^(d_h), where d_h is the HL dimension and d the input vector dimension; c_t shows the cell state and c_(t−1) denotes the value of the memory unit at time t−1. The W and b parameters denote the weight matrices and biases.
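        Equations (2)–(6) amount to one forward step of an LSTM cell. A minimal NumPy sketch follows; holding the per-gate weights in dictionaries keyed 'i', 'f', 'o', 'c' is an assumption made for illustration, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step following Equations (2)-(6).
    W maps input, U maps previous hidden state; both keyed by gate."""
    i_t = sigmoid(x_t @ W['i'] + h_prev @ U['i'] + b['i'])   # Eq. (2)
    f_t = sigmoid(x_t @ W['f'] + h_prev @ U['f'] + b['f'])   # Eq. (3)
    o_t = sigmoid(x_t @ W['o'] + h_prev @ U['o'] + b['o'])   # Eq. (4)
    c_t = f_t * c_prev + i_t * np.tanh(x_t @ W['c'] + h_prev @ U['c'] + b['c'])  # Eq. (5)
    h_t = o_t * np.tanh(c_t)                                 # Eq. (6)
    return h_t, c_t

rng = np.random.default_rng(0)
d, d_h = 4, 3  # input and hidden-layer dimensions
W = {k: rng.normal(size=(d, d_h)) for k in 'ifoc'}
U = {k: rng.normal(size=(d_h, d_h)) for k in 'ifoc'}
b = {k: np.zeros(d_h) for k in 'ifoc'}
h, c = lstm_step(rng.normal(size=d), np.zeros(d_h), np.zeros(d_h), W, U, b)
print(h.shape, c.shape)  # (3,) (3,)
```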

        DBN is a generative model with many layers of hidden variables (Sun, 2023). Although it is feasible to construct DBNs with comparatively sparse connections, in the usual model every unit in a layer is connected to every unit in the neighboring layers, with no connections within a layer. The DBN is built by consecutively stacking several restricted Boltzmann machines (RBMs). The learning procedure is separated into two phases: first, each RBM is pretrained layer-wise and unsupervised; afterwards, the whole network is fine-tuned in a supervised way by back-propagation. Given the parameter model θ = (w^R, b_v, b_h), the joint probability distribution P(v, h; θ) over the visible layer (VL) and HL is determined by the energy function E(v, h; θ) as follows:

        (7) P(v, h; θ) = (1/Z) e^(−E(v, h; θ)).

        Among them, Z = Σ_(v,h) e^(−E(v, h; θ)) signifies the normalization factor, and the marginal distribution of the model over v is as follows:

        (8) P(v; θ) = (1/Z) Σ_h e^(−E(v, h; θ)).

        For a Bernoulli (VL)–Bernoulli (HL) distributed RBM, the energy function is expressed as follows:

        (9) E(v, h; θ) = −Σ_(i=1..m) b_(v_i) v_i − Σ_(j=1..n) b_(h_j) h_j − Σ_(i=1..m) Σ_(j=1..n) v_i w^R_ij h_j.

        Particularly, w^R_ij signifies the connection weight of the RBM, and b_(v_i) and b_(h_j) stand for the biases of the VL and HL nodes, correspondingly. Afterwards, the conditional probability distributions are determined as follows:

        (10) P(h_j = 1 | v; θ) = σ(b_(h_j) + Σ_(i=1..m) v_i w^R_ij),

        where σ denotes the sigmoid function, and

        (11) P(v_i = 1 | h; θ) = σ(b_(v_i) + Σ_(j=1..n) w^R_ij h_j).

        By computing the gradient of the log-likelihood function log P(v; θ), the RBM weight update equation is obtained as follows:

        (12) w^R_ij(τ + 1) = w^R_ij(τ) + η Δw^R_ij,

        Δw^R_ij = E_data(v_i h_j) − E_model(v_i h_j).

        In this equation, τ and η stand for the iteration count and the learning rate of the RBM, correspondingly, and E_data(v_i h_j) and E_model(v_i h_j) imply the expectation over the observed data from the training set and the expectation under the distribution defined by the model, correspondingly.
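        The model expectation E_model in Equation (12) is intractable in general; in practice it is commonly approximated by contrastive divergence (CD-1). The paper does not specify its exact training procedure, so the following is a sketch of one weight update under that standard approximation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_v, b_h, eta=0.1, rng=None):
    """One CD-1 approximation to Equation (12): the positive phase uses
    the data, the negative phase a single Gibbs step."""
    rng = rng or np.random.default_rng(0)
    # Positive phase: P(h = 1 | v) from Equation (10).
    p_h0 = sigmoid(b_h + v0 @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct v via Equation (11), then h again.
    p_v1 = sigmoid(b_v + h0 @ W.T)
    p_h1 = sigmoid(b_h + p_v1 @ W)
    # Delta w^R = E_data(v h) - E_model(v h), plugged into Equation (12).
    delta_W = np.outer(v0, p_h0) - np.outer(p_v1, p_h1)
    return W + eta * delta_W

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(6, 4))  # 6 visible units, 4 hidden units
v = rng.integers(0, 2, size=6).astype(float)
W_new = cd1_update(v, W, np.zeros(6), np.zeros(4), rng=rng)
print(W_new.shape)  # (6, 4)
```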

        Hyperparameter tuning using BOA

        In this work, BOA is used to optimally select the hyperparameter values of the LSTM and DBN models. BOA is built on a frequency-tuning model inspired by microbat echolocation (Gaber et al., 2023). The echolocation features of microbats are idealized as three principles in the standard bat algorithm. The simulated bats need the following initialized parameters: the d-dimensional search space, position x_i, velocity v_i, and frequency f_i. The update rules for the new solution x^j_i and velocity v^j_i at each step t are:

        (13) f_i = f_min + (f_max − f_min) β,

        (14) v^j_i(t) = v^j_i(t−1) + [x̂^j − x^j_i(t−1)] f_i,

        (15) x^j_i(t) = x^j_i(t−1) + v^j_i(t),

        where β ∈ [0, 1] represents a uniformly distributed random number. The variable f_i is exploited to change the velocity, and x^j_i(t) defines the value of position component j for bat i at step t according to Equations (13), (14), and (15). The variable x̂^j implies the present global optimum position, which is defined by comparing the solutions of all m bats. Figure 2 depicts the steps involved in BOA.
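        Equations (13)–(15) can be sketched for a whole population of bats as follows (vectorized NumPy; the f_min and f_max values are illustrative choices):

```python
import numpy as np

def bat_move(x, v, x_best, f_min=0.0, f_max=2.0, rng=None):
    """One update of Equations (13)-(15) for a population of bats.
    x, v: (m, d) position and velocity arrays; x_best: (d,) global best."""
    rng = rng or np.random.default_rng(0)
    beta = rng.random((x.shape[0], 1))       # beta ~ U[0, 1], one per bat
    f = f_min + (f_max - f_min) * beta       # Eq. (13): per-bat frequency
    v_new = v + (x_best - x) * f             # Eq. (14)
    x_new = x + v_new                        # Eq. (15)
    return x_new, v_new

rng = np.random.default_rng(2)
x = rng.normal(size=(5, 3))  # 5 bats in a 3-dimensional search space
v = np.zeros((5, 3))
x, v = bat_move(x, v, x_best=np.zeros(3), rng=rng)
print(x.shape)  # (5, 3)
```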

        Figure 2:

        Steps involved in BOA.

        Song and Gorla utilized a random walk for each bat to prevent it from falling into a local extremum and to boost its random search capability. After a solution is chosen among the current optimum positions, the random walk is utilized to generate a novel solution for each bat:

        (16) x_new = x_old + ε Ā(t),

        where ε ∈ [−1, 1] refers to a random number that controls the walk direction and stride, and Ā(t) denotes the average loudness of all bats at step t.

        Besides, the loudness A_i and the pulse rate r_i are updated at every step according to Equations (17) and (18). If the prey is detected, the loudness A_i is normally decreased and the pulse rate r_i is increased.

        (17) A_i(t+1) = α A_i(t),

        (18) r_i(t+1) = r_i(0)[1 − exp(−γ t)].

        The loudness A_i(0) and the pulse rate r_i(0) are generally selected arbitrarily in the first phase of the bat algorithm. Typically, A_i(0) ∈ [1, 2] and r_i(0) ∈ [0, 1] are set.
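        The loudness and pulse-rate updates of Equations (17) and (18) in code (the α and γ values are illustrative choices):

```python
import numpy as np

def update_loudness_pulse(A, r0, t, alpha=0.9, gamma=0.9):
    """Equations (17) and (18): loudness decays geometrically while the
    pulse emission rate rises toward its initial value r0."""
    A_next = alpha * A                         # Eq. (17)
    r_next = r0 * (1.0 - np.exp(-gamma * t))   # Eq. (18)
    return A_next, r_next

A, r0 = 1.5, 0.5  # A_i(0) in [1, 2], r_i(0) in [0, 1]
for t in range(1, 4):
    A, r_t = update_loudness_pulse(A, r0, t)
print(round(A, 4))  # → 1.0935  (1.5 * 0.9**3)
```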

        The BOA system employs a fitness function for accomplishing better classifier efficiency, yielding a positive value that signifies the quality of a candidate solution. Here, the minimized classifier error rate is taken as the fitness function, as defined in Equation (19):

        (19) fitness(x_i) = ClassifierErrorRate(x_i) = (number of misclassified samples / total number of samples) × 100.
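        A direct transcription of the fitness function in Equation (19):

```python
def classifier_error_rate(y_true, y_pred):
    """Equation (19): percentage of misclassified samples."""
    misclassified = sum(t != p for t, p in zip(y_true, y_pred))
    return misclassified / len(y_true) * 100

print(classifier_error_rate([0, 1, 1, 2], [0, 1, 2, 2]))  # → 25.0
```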

        EXPERIMENTAL EVALUATION

        The HAR results of the BOA-EVCHAR technique are assessed on the HAR dataset (HAR dataset, n.d.) comprising 10,299 samples with 6 class labels as mentioned in Table 1.

        Table 1:

        Details of the database.

        Class | No. of samples
        Sitting | 1777
        Standing | 1906
        Lying | 1944
        Walking | 1722
        Walking upstairs | 1544
        Walking downstairs | 1406
        Total number of samples | 10,299

        In Table 2, the HAR results of the BOA-EVCHAR approach under 80:20 of TRP/TSP are given. Figure 3 illustrates the HAR outcomes of the BOA-EVCHAR technique on 80% of TRP. In the sitting class, the BOA-EVCHAR technique attains accuy, precn, specy, Fscore, and MCC of 99.25, 97.22, 99.41, 97.84, and 97.38%, respectively. Also, in the standing class, the BOA-EVCHAR method attains accuy, precn, specy, Fscore, and MCC of 99.27, 98.16, 99.58, 98.03, and 97.58%, correspondingly. In addition, in the lying class, the BOA-EVCHAR algorithm attains accuy, precn, specy, Fscore, and MCC of 98.97, 98.14, 99.58, 97.21, and 96.58%, correspondingly. Next, in the walking class, the BOA-EVCHAR approach attains accuy, precn, specy, Fscore, and MCC of 99.37, 97.77, 99.53, 98.18, and 97.80%, correspondingly. Finally, in the walking upstairs class, the BOA-EVCHAR method attains accuy, precn, specy, Fscore, and MCC of 99.09, 97.61, 99.59, 96.93, and 96.40%, correspondingly.

        Table 2:

        HAR analysis of the BOA-EVCHAR method on 80:20 of TRP/TSP.

        Class | Accuracy | Precision | Specificity | Fscore | MCC
        Training phase (80%)
         Sitting | 99.25 | 97.22 | 99.41 | 97.84 | 97.38
         Standing | 99.27 | 98.16 | 99.58 | 98.03 | 97.58
         Lying | 98.97 | 98.14 | 99.58 | 97.21 | 96.58
         Walking | 99.37 | 97.77 | 99.53 | 98.18 | 97.80
         Walking upstairs | 99.09 | 97.61 | 99.59 | 96.93 | 96.40
         Walking downstairs | 99.47 | 97.16 | 99.55 | 98.03 | 97.72
         Average | 99.24 | 97.68 | 99.54 | 97.70 | 97.24
        Testing phase (20%)
         Sitting | 99.47 | 97.77 | 99.53 | 98.46 | 98.14
         Standing | 99.22 | 97.42 | 99.40 | 97.93 | 97.45
         Lying | 98.83 | 97.75 | 99.46 | 97.02 | 96.30
         Walking | 99.32 | 98.98 | 99.83 | 97.64 | 97.26
         Walking upstairs | 98.79 | 96.46 | 99.37 | 96.00 | 95.29
         Walking downstairs | 99.22 | 96.12 | 99.32 | 97.38 | 96.93
         Average | 99.14 | 97.42 | 99.48 | 97.40 | 96.89

        Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; HAR, human action recognition.

        Figure 3:

        HAR outcome of the BOA-EVCHAR approach on 80% of TRP. Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; HAR, human action recognition.

        Figure 4 shows the HAR outcomes of the BOA-EVCHAR method on 20% of TSP. In the sitting class, the BOA-EVCHAR technique gains accuy, precn, specy, Fscore, and MCC of 99.47, 97.77, 99.53, 98.46, and 98.14%, correspondingly. Also, in the standing class, the BOA-EVCHAR approach attains accuy, precn, specy, Fscore, and MCC of 99.22, 97.42, 99.40, 97.93, and 97.45%, correspondingly. Moreover, in the lying class, the BOA-EVCHAR method achieves accuy, precn, specy, Fscore, and MCC of 98.83, 97.75, 99.46, 97.02, and 96.30%, correspondingly. Next, in the walking class, the BOA-EVCHAR method reaches accuy, precn, specy, Fscore, and MCC of 99.32, 98.98, 99.83, 97.64, and 97.26%, correspondingly. Lastly, in the walking upstairs class, the BOA-EVCHAR method gains accuy, precn, specy, Fscore, and MCC of 98.79, 96.46, 99.37, 96, and 95.29%, correspondingly.

        Figure 4:

        HAR outcome of the BOA-EVCHAR approach on 20% of TSP. Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; HAR, human action recognition.

        In Table 3, the HAR outcomes of the BOA-EVCHAR method under 70:30 of TRP/TSP are given. Figure 5 illustrates the HAR outcomes of the BOA-EVCHAR technique on 70% of TRP. In the sitting class, the BOA-EVCHAR technique attains accuy, precn, specy, Fscore, and MCC of 99.14, 98.06, 99.60, 97.51, and 96.99%, respectively. Also, in the standing class, the BOA-EVCHAR technique gains accuy, precn, specy, Fscore, and MCC of 99.13, 97.64, 99.47, 97.60, and 97.07%, respectively. Moreover, in the lying class, the BOA-EVCHAR technique reaches accuy, precn, specy, Fscore, and MCC of 99.14, 97.50, 99.42, 97.71, and 97.18%, correspondingly. Then, in the walking class, the BOA-EVCHAR technique attains accuy, precn, specy, Fscore, and MCC of 99.11, 97.90, 99.58, 97.33, and 96.80%, respectively. Eventually, in the walking upstairs class, the BOA-EVCHAR technique obtained accuy, precn, specy, Fscore, and MCC of 99.42, 97.30, 99.51, 98.10, and 97.76%, respectively.

        Table 3:

        HAR analysis of the BOA-EVCHAR method on 70:30 of TRP/TSP.

        Class | Accuracy | Precision | Specificity | Fscore | MCC
        Training phase (70%)
         Sitting | 99.14 | 98.06 | 99.60 | 97.51 | 96.99
         Standing | 99.13 | 97.64 | 99.47 | 97.60 | 97.07
         Lying | 99.14 | 97.50 | 99.42 | 97.71 | 97.18
         Walking | 99.11 | 97.90 | 99.58 | 97.33 | 96.80
         Walking upstairs | 99.42 | 97.30 | 99.51 | 98.10 | 97.76
         Walking downstairs | 99.43 | 97.69 | 99.63 | 97.94 | 97.61
         Average | 99.23 | 97.68 | 99.54 | 97.70 | 97.23
        Testing phase (30%)
         Sitting | 99.51 | 99.04 | 99.80 | 98.57 | 98.28
         Standing | 99.29 | 97.18 | 99.32 | 98.16 | 97.72
         Lying | 99.06 | 97.79 | 99.48 | 97.54 | 96.96
         Walking | 99.16 | 97.30 | 99.46 | 97.49 | 96.99
         Walking upstairs | 99.35 | 97.99 | 99.66 | 97.77 | 97.39
         Walking downstairs | 99.48 | 98.54 | 99.78 | 98.06 | 97.76
         Average | 99.31 | 97.97 | 99.58 | 97.93 | 97.52

        Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; HAR, human action recognition.

        Figure 5:

        HAR analysis of the BOA-EVCHAR approach on 70% of TRP. Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; HAR, human action recognition.

        Figure 6 portrays the HAR analysis of the BOA-EVCHAR technique on 30% of TSP. In the sitting class, the BOA-EVCHAR technique gains accuy, precn, specy, Fscore, and MCC of 99.51, 99.04, 99.80, 98.57, and 98.28%, correspondingly. Also, in the standing class, the BOA-EVCHAR technique reaches accuy, precn, specy, Fscore, and MCC of 99.29, 97.18, 99.32, 98.16, and 97.72%, correspondingly. In addition, in the lying class, the BOA-EVCHAR technique acquires accuy, precn, specy, Fscore, and MCC of 99.06, 97.79, 99.48, 97.54, and 96.96%, correspondingly. Then, in the walking class, the BOA-EVCHAR approach attains accuy, precn, specy, Fscore, and MCC of 99.16, 97.30, 99.46, 97.49, and 96.99%, correspondingly. Eventually, in the walking upstairs class, the BOA-EVCHAR technique attains accuy, precn, specy, Fscore, and MCC of 99.35, 97.99, 99.66, 97.77, and 97.39%, correspondingly.

        Figure 6:

        HAR outcome of the BOA-EVCHAR approach on 30% of TSP. Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; HAR, human action recognition.

        Figure 7 inspects the accuracy of the BOA-EVCHAR method in the training and validation on 70:30 of TRP/TSP. The result specified that the BOA-EVCHAR technique reaches greater accuracy values over higher epochs. Also, the higher validation accuracy over training accuracy displays that the BOA-EVCHAR algorithm learns productively on 70:30 of TRP/TSP.

        Figure 7:

        Accuracy curve of the BOA-EVCHAR method on 70:30 of TRP/TSP. Abbreviation: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition.

        The loss analysis of the BOA-EVCHAR approach in training and validation is given on 70:30 of TRP/TSP in Figure 8. The figure indicates that the BOA-EVCHAR method reaches adjacent values of training and validation loss. The BOA-EVCHAR technique learns efficiently on 70:30 of TRP/TSP.

        Figure 8:

        Loss curve of the BOA-EVCHAR approach on 70:30 of TRP/TSP. Abbreviation: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition.

        The comparison results of the BOA-EVCHAR technique on HAR are reported in Table 4 and Figure 9 (Duhayyim, 2023). The results illustrate that the BOA-EVCHAR technique exhibits effective performance under all measures. Based on accuy, an improving accuy of 99.31% is reached by the BOA-EVCHAR technique whereas the IPODTL-HAR, RF, NNN, SVM, ANN, and LSTM models have yielded decreasing accuy of 99.05, 86.18, 87.50, 88.81, 91.83, and 93.97%, respectively. Next, based on precn, an improving precn of 97.97% is reached by the BOA-EVCHAR method whereas the IPODTL-HAR, RF, NNN, SVM, ANN, and LSTM approaches have yielded decreasing precn of 97.04, 82.70, 85.86, 88.86, 88.56, and 91.82%, correspondingly. Also, based on specy, an improving specy of 99.58% is reached by the BOA-EVCHAR method whereas the IPODTL-HAR, RF, NNN, SVM, ANN, and LSTM models have shown decreasing specy of 98.49, 80.96, 82.76, 87.44, 92.20, and 94%, correspondingly. Finally, based on the Fscore, an improving Fscore of 97.93% is reached by the BOA-EVCHAR method whereas the IPODTL-HAR, RF, NNN, SVM, ANN, and LSTM models have exhibited decreasing Fscore of 96.39, 80.94, 83.06, 88.80, 90.85, and 92.50%, correspondingly.

        Table 4:

        Comparative outcome of the BOA-EVCHAR approach with other systems.

        Methodology | Accuracy | Precision | Specificity | Fscore
        BOA-EVCHAR | 99.31 | 97.97 | 99.58 | 97.93
        IPODTL-HAR | 99.05 | 97.04 | 98.49 | 96.39
        RF algorithm | 86.18 | 82.70 | 80.96 | 80.94
        NNN algorithm | 87.50 | 85.86 | 82.76 | 83.06
        SVM algorithm | 88.81 | 88.86 | 87.44 | 88.80
        ANN algorithm | 91.83 | 88.56 | 92.20 | 90.85
        LSTM algorithm | 93.97 | 91.82 | 94.00 | 92.50

        Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; HAR, human action recognition; LSTM, long short-term memory.

        Figure 9:

        Comparative outcome of the BOA-EVCHAR approach with other systems. Abbreviations: BOA-EVCHAR, bat optimization algorithm with an ensemble voting classifier for human activity recognition; LSTM, long short-term memory.

        CONCLUSION

        In this study, we have focused on the design and development of the BOA-EVCHAR technique for the recognition of human activities to help disabled persons in the IoT environment. The BOA-EVCHAR technique makes use of the ensemble classification concept to recognize human activities proficiently in the IoT environment. The BOA-EVCHAR technique comprises a three-stage process, namely, data preprocessing, ensemble classification, and hyperparameter adjustment using BOA. In the presented BOA-EVCHAR technique, data preprocessing is generally performed at the beginning level. For the identification and classification of human activities, an ensemble of two classifiers, namely, LSTM and DBN models, is utilized. Finally, the BOA is used to optimally select the hyperparameter values of the LSTM and DBN models. To elicit the enhanced performances of the BOA-EVCHAR technique, a series of experimentation analyses were performed. The extensive results show the superiority of the BOA-EVCHAR technique on the HAR process. The BOA-EVCHAR method may have limitations in terms of interpretability and explainability of the optimized hyperparameter values selected by the bat optimization algorithm. Future studies on the BOA-EVCHAR method could explore its adaptability to diverse IoT environments, its ability to integrate with edge computing for real-time applications, and further optimization strategies for enhanced performance.

        CONFLICTS OF INTEREST

        The authors declare no conflicts of interest in association with the present study.

        REFERENCES

        1. Basly H, Ouarda W, Sayadi FE, Ouni B, Alimi AM. 2022. DTR-HAR: deep temporal residual representation for human activity recognition. Vis. Comput. Vol. 38:993–1013

        2. Brishtel I, Krauss S, Chamseddine M, Rambach JR, Stricker D. 2023. Driving activity recognition using UWB radar and deep neural networks. Sensors. Vol. 23(2):818

        3. Duhayyim MA. 2023. Parameter-tuned deep learning-enabled activity recognition for disabled people. Comput. Mater. Contin. Vol. 75(3):6587–6603

        4. Gaber T, Awotunde JB, Folorunso SO, Ajagbe SA, Eldesouky E. 2023. Industrial internet of things intrusion detection method using machine learning and optimization techniques. Wirel. Commun. Mob. Comput. Vol. 2023:3939895

        5. Gupta N, Gupta SK, Pathak RK, Jain V, Rashidi P, Suri JS. 2022. Human activity recognition in artificial intelligence framework: a narrative review. Artif. Intell. Rev. Vol. 55(6):4755–4808

        6. Hamad RA, Kimura M, Yang L, Woo WL, Wei B. 2021. Dilated causal convolution with multi-head self attention for sensor human activity recognition. Neural Comput. Appl. Vol. 33:13705–13722

        7. HAR dataset. n.d. Human Activity Recognition Using Smartphones. https://archive.ics.uci.edu/dataset/240/human+activity+recognition+using+smartphones

        8. Islam MS, Jannat MKA, Hossain MN, Kim WS, Lee SW, Yang SH. 2023. STC-NLSTMNet: an improved human activity recognition method using convolutional neural network with NLSTM from WiFi CSI. Sensors. Vol. 23(1):356

        9. Jugunta SB, Rengarajan M, Gadde S, El-Ebiary YAB, Vuyyuru VA, Verma N, et al. 2023. Exploring the insights of Bat Algorithm-Driven XGB-RNN (BARXG) for optimal fetal health classification in pregnancy monitoring. Int. J. Adv. Comput. Sci. Appl. Vol. 14(11):731–741

        10. Khataei Maragheh H, Gharehchopogh FS, Majidzadeh K, Sangar AB. 2022. A new hybrid based on long short-term memory network with spotted hyena optimization algorithm for multi-label text classification. Mathematics. Vol. 10(3):488

        11. Lattanzi E, Donati M, Freschi V. 2022. Exploring artificial neural networks efficiency in tiny wearable devices for human activity recognition. Sensors. Vol. 22(7):2637

        12. Li H, Shrestha A, Heidari H, Le Kernec J, Fioranelli F. 2019. Bi-LSTM network for multimodal continuous human activity recognition and fall detection. IEEE Sens. J. Vol. 20(3):1191–1201

        13. Li B, Cui W, Wang W, Zhang L, Chen Z, Wu M. 2021. Two-stream convolution augmented transformer for human activity recognition. Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35(1):286–293

        14. Mekruksavanich S, Jitpattanakul A. 2021. LSTM networks using smartphone data for sensor-based human activity recognition in smart homes. Sensors. Vol. 21(5):1636

        15. Pesenti M, Invernizzi G, Mazzella J, Bocciolone M, Pedrocchi A, Gandolla M. 2023. IMU-based human activity recognition and payload classification for low-back exoskeletons. Sci. Rep. Vol. 13(1):1184

        16. Rashid KM, Louis J. 2019. Times-series data augmentation and deep learning for construction equipment activity recognition. Adv. Eng. Inform. Vol. 42:100944

        17. Soni V, Jaiswal S, Semwal VB, Roy B, Choubey DK, Mallick DK. 2023. An enhanced deep learning approach for smartphone-based human activity recognition in IoHT. Machine Learning, Image Processing, Network Security and Data Sciences: Select Proceedings of 3rd International Conference on MIND 2021; p. 505–516. Springer Nature Singapore, Singapore

        18. Sorguli S, Rjoub H. 2023. A novel energy accounting model using fuzzy restricted Boltzmann machine—recurrent neural network. Energies. Vol. 16(6):2844

        19. Sun L. 2023. Optimization of physical education course resource allocation model based on deep belief network. Math. Probl. Eng. Vol. 2023:1–8

        20. Tan TH, Wu JY, Liu SH, Gochoo M. 2022. Human activity recognition using an ensemble learning algorithm with smartphone sensor data. Electronics. Vol. 11(3):322

        21. Tang CI, Perez-Pozuelo I, Spathis D, Brage S, Wareham N, Mascolo C. 2021. SelfHAR: improving human activity recognition through self-training with unlabeled data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. Vol. 5:1–30

        22. Ullah M, Ullah H, Khan SD, Cheikh FA. 2019. Stacked LSTM network for human activity recognition using smartphone data. 2019 8th European Workshop on Visual Information Processing (EUVIP); Roma, Italy, 28-31 October 2019; p. 175–180. IEEE, New York

        23. Xu Q, Wei X, Bai R, Li S, Meng Z. 2023. Integration of deep adaptation transfer learning and online sequential extreme learning machine for cross-person and cross-position activity recognition. Expert Syst. Appl. Vol. 212:118807

        24. Zhang P, Lan C, Zeng W, Xing J, Xue J, Zheng N. 2020a. Semantics-guided neural networks for efficient skeleton-based human action recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Long Beach, CA, USA, 15-20 June 2019; p. 1112–1121

        25. Zhang J, Zi L, Hou Y, Wang M, Jiang W, Deng D. 2020b. A deep learning-based approach to enable action recognition for construction equipment. Adv. Civ. Eng. Vol. 2020:1–14

        Author and article information

        Journal

        Journal ID (publisher-id): jdr

        Title: Journal of Disability Research

        Publisher: King Salman Centre for Disability Research (Riyadh, Saudi Arabia)

        Publication date (Electronic): 13 February 2024

        Volume: 3

        Issue: 2

        Electronic Location Identifier: e20240006

        Affiliations

        [1 ] King Salman Center for Disability Research, Riyadh, Saudi Arabia;

        [2 ] Department of Special Education, College of Education, King Saud University, Riyadh 12372, Saudi Arabia ( https://ror.org/02f81g417)

        [3 ] Department of Information Technology, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia ( https://ror.org/014g1a453)

        [4 ] Department of Computer Science, College of Science & Arts, King Khalid University, Abha, Saudi Arabia ( https://ror.org/052kwzs30)

        [5 ] Department of Computer Science, College of Sciences and Humanities–Aflaj, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia ( https://ror.org/04jt46d36)

        [6 ] Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia ( https://ror.org/04jt46d36)

        Author notes

        Correspondence to: Nabil Almalki*, e-mail: nalmalki@ksu.edu.sa

        Author information
        Article

        DOI: 10.57197/JDR-2024-0006

        SO-VID: 4a2ba3f1-fbfb-425d-84aa-762624739265

        Copyright © 2024 The Authors.

        License:

        This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

        History

        Date received : 16 October 2023

        Date revision received : 18 January 2024

        Date accepted : 19 January 2024

        Page count

        Figures: 9, Tables: 4, References: 25, Pages: 10

        Funding

        Funded by: King Salman Center for Disability Research

        Award ID: KSRG-2022-030

        The authors extend their appreciation to the King Salman Center for Disability Research (funder id: http://dx.doi.org/10.13039/501100019345) for funding this work through Research Group no KSRG-2022-030.


        ScienceOpen disciplines: Political science, Special education, Civil law

        Keywords: deep learning,bat algorithm,human activity recognition,hyperparameter tuning,ensemble classification
