1. Introduction
Robot remote control has been studied in various fields, such as space exploration robots [1,2,3], mobile firefighting robots for tasks at fire sites [4,5], military robots used on battlefields [6,7], and rescue robots used at disaster sites [8,9,10]. Additionally, there are surgical robots [11], construction robots [12], and assistive robots that increase the work capacity of people with disabilities [13]. Despite the development of robots with good mobility and work capabilities, it remains difficult for robots to identify interaction scenarios and human intentions.
Therefore, research on human–robot interaction (HRI) technology that recognizes human information, judgments, and expressions has been actively pursued in recent years. HRI technology consists of recognition, judgment, and expression technologies for achieving communication between humans and robots. Various studies have used recognition technology to allow robots to detect human intentions based on face recognition, motion recognition, voice recognition, and hand gesture recognition. Additionally, studies have been conducted on specific HRI applications using facial image recognition [14,15], motion recognition, robot remote control systems based on skeletal data [16], autonomous driving robots based on voice recognition [17], and robotic hand control systems using hand gesture recognition [18].
Hand gestures are categorized as dynamic or static. Static hand gestures are easier to recognize because they involve little to no movement; however, they are limited in expressiveness and are far from natural hand movements. Dynamic hand gestures can capture a wider range of natural movements but are more complex and difficult to classify because their patterns change over the course of the gesture. In studies on controlling industrial robots, hand gesture recognition is mainly used as a recognition technology for communication based on the similarity between robot arms and human arms. Such a control scheme is intuitive and is used in various fields, including medicine, engineering, and rehabilitation. Hand gesture recognition has been applied in studies on controlling a six-degree-of-freedom robot arm [19], virtual reality interaction [20,21], and message notification for patient care systems [22].
Studies on hand gesture recognition have also used electromyography (EMG) signals, which are a type of biosignal. EMG signals are weak electrical signals generated by muscular activity and are widely used in various fields, including robotics and rehabilitation medicine. However, hand gesture patterns must be learned because the EMG signals obtained for the same motion are not always identical. Recently developed AI technology performs excellently on various recognition tasks; popular models include the convolutional neural network (CNN) and the recurrent neural network (RNN). Various studies have learned EMG-based hand gestures using CNNs [19,23], RNNs [24,25], and convolutional recurrent neural networks (CRNNs) [26,27]. A CNN is an artificial neural network inspired by the visual cortex of the brain; CNNs are mainly used for image deep learning and have achieved excellent performance. An RNN is another type of artificial neural network that excels at processing time series data; unlike a CNN, an RNN recursively refers to previous states when computing future states. Long short-term memory (LSTM) was proposed to solve the long-term dependency problem of forgetting earlier information as the number of time steps grows. The gated recurrent unit (GRU) simplifies the LSTM cell and achieves performance similar to that of LSTM at higher speed. A CRNN is a high-performance classification network that combines the feature extraction capabilities of a CNN with the time-series classification capabilities of an RNN. In a CRNN, the input data pass through multiple convolutional layers, activation functions, and pooling layers; the resulting features are flattened and fed into an RNN layer, and the class with the highest probability is the output.
Although there have been many studies on hand gesture classification using deep learning, few studies have utilized a universal deep learning model to classify dynamic hand gestures and control robots. This paper proposes a system for dynamic hand gesture recognition on an embedded board using a deep learning model and the control of an industrial robot using a robot operating system (ROS). The proposed system uses a model that learns EMG signals using a CRNN structure. This model can recognize 10 hand gestures and works for users who have not participated in training. Figure 1 presents the configuration of the proposed system.
The main contribution of this study is a novel classifier for dynamic hand gestures based on EMG and a CRNN. As shown in [28], many studies have classified static hand gestures, but research on classifying dynamic hand gestures using EMG remains scarce. Therefore, we develop a classifier with a CRNN structure trained on EMG data collected from the forearm. To verify its performance, we conducted gesture classification experiments with a group that did not participate in collecting the training data, and the results show high accuracy compared to previous studies. Furthermore, the proposed classifier is implemented on an embedded system in a ROS-based robot control system, enabling effective control of an industrial robot by dynamic hand gestures without requiring an external PC.
The remainder of this paper is organized as follows. Section 2 presents the configuration of the proposed system and the hardware components used in the system. Section 3 describes the proposed edge AI system, including defined hand gestures and control schemes, EMG data collection, and training and testing using CRNN structures. In Section 4, we present hand gesture classification and industrial robot control experiments to demonstrate the effectiveness of the proposed method. Finally, Section 5 concludes this paper.
2. System Configuration
2.1. Robot Arm and Gripper
The industrial robot considered in this study is the UR3, the smallest robot in the Universal Robots CB3 series. The UR3 is a collaborative robot arm with a collision detection function. It consists of a six-axis joint robot arm with joints labeled “base”, “shoulder”, “elbow”, “wrist 1”, “wrist 2”, and “wrist 3”, as shown in Figure 2. As shown in Table 1, the maximum payload and maximum working radius are 3 kg and 500 mm, respectively. As shown in Table 2, the operating range of all joints is ±360°, except for wrist 3, which can rotate infinitely. The Universal Robot ROS Driver was installed to provide a stable interface between the UR robot and ROS.
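To make the interface concrete, the following is a minimal sketch of commanding a small base rotation through the Universal Robot ROS Driver. The controller name scaled_pos_joint_traj_controller is an assumption about the driver configuration, and the hard-coded joint positions are placeholders; in practice, the current positions would be read from the /joint_states topic.

```python
#!/usr/bin/env python
# Minimal sketch: command a small base rotation on the UR3 through the ROS driver.
# The controller name below and the hard-coded joint positions are assumptions;
# in practice the current positions come from the /joint_states topic.
import rospy
import actionlib
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectoryPoint

UR_JOINTS = ["shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
             "wrist_1_joint", "wrist_2_joint", "wrist_3_joint"]

def rotate_base(delta_rad, current_positions, duration=2.0):
    """Send a single trajectory point that rotates the base joint by delta_rad."""
    client = actionlib.SimpleActionClient(
        "scaled_pos_joint_traj_controller/follow_joint_trajectory",
        FollowJointTrajectoryAction)
    client.wait_for_server()

    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = UR_JOINTS
    point = JointTrajectoryPoint()
    target = list(current_positions)
    target[0] += delta_rad                      # index 0 is the base joint
    point.positions = target
    point.time_from_start = rospy.Duration(duration)
    goal.trajectory.points.append(point)

    client.send_goal(goal)
    client.wait_for_result()

if __name__ == "__main__":
    rospy.init_node("ur3_base_rotation_demo")
    rotate_base(0.2, [0.0] * 6)   # placeholder joint state; read /joint_states in practice
```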
As shown in Figure 3, the gripper is a two-finger gripper with a maximum opening of 140 mm. The Robotiq ROS package was installed to control the gripper in the ROS environment.
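For reference, a minimal sketch of commanding the gripper through a Robotiq ROS package is shown below. The package name robotiq_2f_gripper_control, the Robotiq2FGripper_robot_output message, and the Robotiq2FGripperRobotOutput topic are assumptions based on one common version of the Robotiq ROS stack and may not match the exact package used in this work.

```python
#!/usr/bin/env python
# Minimal sketch: open/close the two-finger gripper through a Robotiq ROS package.
# The package, message, and topic names are assumptions and may differ between
# versions of the Robotiq ROS stack; adjust them to the installed package.
import rospy
from robotiq_2f_gripper_control.msg import Robotiq2FGripper_robot_output as GripperCmd

def make_command(position, speed=255, force=150):
    """Build a gripper command; position 0 = fully open, 255 = fully closed."""
    cmd = GripperCmd()
    cmd.rACT = 1          # activate the gripper
    cmd.rGTO = 1          # go to the requested position
    cmd.rPR = position
    cmd.rSP = speed
    cmd.rFR = force
    return cmd

if __name__ == "__main__":
    rospy.init_node("gripper_demo")
    pub = rospy.Publisher("Robotiq2FGripperRobotOutput", GripperCmd, queue_size=1)
    rospy.sleep(1.0)                 # allow the publisher to connect
    pub.publish(make_command(255))   # close; 0 opens, ~128 closes halfway
```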
2.2. Myo Armband Sensor
In this study, we used a commercial EMG sensor called the Myo gesture control armband, shown in Figure 4, which measures the electrical activity of the muscles. The Myo armband is a lightweight, inexpensive, gesture-based human interface device developed by Thalmic Labs, and many EMG studies have used it. It also supports the required ROS package, making it suitable for this study. The device has eight surface EMG electrodes; a nine-axis inertial measurement unit composed of a three-axis accelerometer, three-axis gyroscope, and three-axis magnetometer; and a Bluetooth module for transmitting EMG data. The Myo armband applies a 50 Hz notch filter for denoising. When worn on the forearm, it samples the EMG signals at 200 Hz. The ros_myo ROS package was installed to enable ROS communication of the data obtained from the Myo armband; this package provides the EMG data at a rate of 50 Hz.
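As an illustration of how the EMG stream is consumed on the ROS side, the following is a minimal sketch that buffers one 1 s window (50 samples × 8 channels) from the ros_myo package. The topic name /myo_raw/myo_emg and the EmgArray message's data field are assumptions; the topics actually advertised should be checked against the ros_myo launch configuration in use.

```python
#!/usr/bin/env python
# Minimal sketch: buffer one 1 s gesture window (50 samples x 8 channels) of EMG
# from the ros_myo package. The topic name and the EmgArray "data" field are
# assumptions; check the topics advertised by the ros_myo launch file in use.
import rospy
import numpy as np
from ros_myo.msg import EmgArray

WINDOW = 50          # 50 Hz x 1 s
buffer = []

def emg_callback(msg):
    global buffer
    buffer.append(list(msg.data))                    # eight channel values per sample
    if len(buffer) == WINDOW:
        window = np.array(buffer, dtype=np.float32)  # shape (50, 8), ready for preprocessing
        rospy.loginfo("collected one gesture window with shape %s", window.shape)
        buffer = []                                  # start the next window

if __name__ == "__main__":
    rospy.init_node("emg_window_collector")
    rospy.Subscriber("/myo_raw/myo_emg", EmgArray, emg_callback)
    rospy.spin()
```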
2.3. Embedded Environment
The embedded board used in the proposed system is an NVIDIA Jetson Nano developer kit. The Jetson Nano is designed to support entry-level edge AI applications and devices, and it includes accelerated libraries for deep learning, computer vision, graphics, multimedia, etc. We installed the NVIDIA JetPack 4.6 SDK image, which is based on Ubuntu 18.04, to build accelerated AI applications. For ROS communication, the board was connected to the internet using a wireless LAN (Wi-Fi) adapter. The CRNN model of our hand gesture classifier was trained using Python 3.6 in the TensorFlow GPU 2.4.0 environment.
3. Methods
3.1. Hand Gestures
The proposed system uses EMG signal data generated by hand gestures as inputs. Ten gestures that can intuitively express the movements of the industrial robot and gripper, as well as user intentions, were defined. These gestures are newly defined and separate from the five gesture classes that the Myo armband recognizes by default. Additionally, as shown in Figure 5, the gestures were defined considering how well they reflect human intentions, how distinguishable they are from one another, and how easy they are to perform. The defined hand gestures are “Close”, “Open”, and “Fist” for gripper control; “Right”, “Left”, “Thumb up”, “Thumb down”, “Supination”, and “Pronation” for industrial robot arm control; and “Rest” as the default state for classifying hand gestures. All gestures are performed with the right hand. Each gesture starts from “Rest”, performs the specific motion, and returns to “Rest” within 1 s; for “Rest”, the hand simply remains in the “Rest” position for 1 s.
“Rest” does not control anything on its own but serves as the base state. “Close”, the gesture of holding an object, causes the gripper to close. “Open”, a spreading gesture, causes the gripper to open. “Fist”, the gesture of clenching a fist, causes the gripper to close halfway. “Right”, the gesture of pointing to the right with the index finger, causes the base of the industrial robot to turn clockwise. “Left”, the gesture of pointing to the left with the index finger, causes the base to turn counterclockwise. “Thumb up”, the motion of extending the thumb upward, causes the shoulder to turn clockwise. “Thumb down”, the motion of extending the thumb downward, causes the shoulder to turn counterclockwise. “Supination”, the supination of the forearm, causes the wrist 3 joint to turn clockwise. “Pronation”, the pronation of the forearm, causes the wrist 3 joint to turn counterclockwise.
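A simple way to encode this gesture-to-command mapping in the control node is a lookup table, sketched below. The label strings follow the gesture names above, but the command identifiers and the dispatch function are illustrative placeholders rather than the exact implementation used in the proposed system.

```python
# Illustrative lookup table from classifier output to robot/gripper commands.
# The command identifiers and the dispatch function are hypothetical placeholders.
GESTURE_COMMANDS = {
    "Rest":       (None,       None),   # base state: no command is sent
    "Close":      ("gripper",  "close"),
    "Open":       ("gripper",  "open"),
    "Fist":       ("gripper",  "half_close"),
    "Right":      ("base",     +1),     # +1 = clockwise, -1 = counterclockwise
    "Left":       ("base",     -1),
    "Thumb up":   ("shoulder", +1),
    "Thumb down": ("shoulder", -1),
    "Supination": ("wrist_3",  +1),
    "Pronation":  ("wrist_3",  -1),
}

def dispatch(gesture):
    """Translate a predicted gesture label into a (target, action) command."""
    target, action = GESTURE_COMMANDS.get(gesture, (None, None))
    if target is None:
        return                      # "Rest" or an unknown label: do nothing
    # Here the control node would call the gripper driver or send a joint
    # trajectory goal to the corresponding UR3 joint.
    print("sending {} to {}".format(action, target))
```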
3.2. Data Acquisition
EMG data are weak electrical signals obtained from muscle activity. The EMG data were collected while wearing the Myo armband, as shown in Figure 6. One hand gesture operation (one set) was collected as 400 EMG values, i.e., 50 samples of 8 channels over 1 s. The 400 EMG values were normalized using min-max normalization and then reshaped into a two-dimensional 50 × 8 form for deep learning model training. Figure 7 shows the normalized EMG data collected for each gesture. The differences in EMG magnitude across hand gestures are the main characteristic used to distinguish them. Table 3 presents one set of EMG data. The data were collected from seven subjects (five males and two females) to create a universal classifier that can be used by both men and women. Each subject repeated each of the ten hand gestures 500 times. Of the 500 sets collected per gesture and subject, 400 were used for training and 100 for validation, yielding 2800 training sets per gesture (28,000 in total) and 700 validation sets per gesture (7000 in total).
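A minimal sketch of this preprocessing step is shown below, assuming that min-max normalization is applied over the whole 50 × 8 window (the paper does not state whether normalization is global or per channel) and that a trailing channel axis is added for the two-dimensional convolutions.

```python
import numpy as np

def normalize_window(raw_window):
    """Min-max normalize one gesture window of EMG and shape it for the CRNN.

    raw_window: 400 EMG values (50 samples x 8 channels). Normalization is
    applied over the whole window here; a per-channel variant is also possible.
    """
    x = np.asarray(raw_window, dtype=np.float32).reshape(50, 8)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)   # small epsilon avoids division by zero
    return x[..., np.newaxis]                        # (50, 8, 1) for the 2-D convolutions

# Example with one synthetic window of 400 readings
window = normalize_window(np.random.randint(0, 1024, size=400))
print(window.shape)   # (50, 8, 1)
```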
3.3. Proposed CRNN
A CRNN is a high-performance classification neural network that combines the feature extraction capabilities of a CNN with the time series classification capabilities of an RNN. The proposed CRNN structure is presented in Figure 8. The input data are the normalized EMG data. Each convolutional layer uses 3 × 3 convolution filters, and each pooling layer is a 2 × 2 max-pooling layer. As the layers progress, the number of filters increases to 16, 32, and 64 to extract increasingly complex features. This convolution-pooling structure is repeated three times while keeping the feature map size within a valid range. The extracted feature maps are flattened into a one-dimensional array and fed into the GRU layer. Finally, the data pass through dense layers with a rectified linear unit activation function and a Softmax function, which output probabilities for the 10 classes. Training was performed for 100 epochs, the learning parameters were set to 171 and 326, and the training data were shuffled. Figure 9 presents the training accuracy and loss over the training epochs of the CRNN model. As the CRNN model is trained for 100 epochs, the accuracy increases and the loss decreases; the final training accuracy is 1.0 and the final loss is 0.0247.
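The following Keras sketch follows this description under stated assumptions: the input is one 50 × 8 window with a trailing channel axis, and the GRU width, dense layer width, optimizer, and use of "same" padding are not specified in the paper and are chosen here for illustration only.

```python
from tensorflow.keras import layers, models

def build_crnn(num_classes=10):
    # One 1 s EMG window: 50 samples x 8 channels, with a trailing channel axis.
    inputs = layers.Input(shape=(50, 8, 1))
    x = inputs
    for filters in (16, 32, 64):              # three convolution-pooling blocks
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)    # 50x8 -> 25x4 -> 12x2 -> 6x1
    # Collapse the feature maps into a short sequence for the recurrent layer.
    x = layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)   # (6, 64)
    x = layers.GRU(64)(x)                     # hidden size assumed; not given in the paper
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_crnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```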
There are several methods available for checking the correctness of AI-based solutions. Such methods include formal verification, adversarial training, and abstract interpretation [29,30,31]. We evaluated the performance of our deep learning model using a confusion matrix. A confusion matrix is a measure that facilitates easy evaluation of how often an AI model confuses different classes. The values in a confusion matrix can be used to calculate performance metrics for deep learning model classification evaluation, including accuracy, precision, recall, and F1 score.
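A minimal sketch of deriving these metrics from a confusion matrix is given below, using the row = predicted, column = actual convention adopted in the paper's figures.

```python
import numpy as np

def metrics_from_confusion(cm):
    """Accuracy and per-class precision/recall/F1 from a confusion matrix.

    cm[i, j] counts samples of actual class j predicted as class i, matching the
    row = predicted, column = actual convention used in the paper's figures.
    """
    cm = np.asarray(cm, dtype=np.float64)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=1), 1e-12)   # over each predicted class (row)
    recall = tp / np.maximum(cm.sum(axis=0), 1e-12)      # over each actual class (column)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Example with a small 3-class matrix
acc, prec, rec, f1 = metrics_from_confusion([[48, 1, 0], [2, 47, 3], [0, 2, 47]])
print(acc, prec, rec, f1)
```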
4. Experiments and Results
Our hand gesture classification and industrial robot control experiments were conducted in two ways: one experiment involved the subjects from whom the training data were collected, and the other involved subjects from whom no training data were collected. Both experiments used the system configuration shown in Figure 1 and the hand gesture classifier model trained on the EMG data from the first group of participants.
4.1. Subjects from Whom Data Were Collected
In the first experiment, the subjects from whom training data were collected were tested. Each subject performed each gesture 50 times, so the seven subjects performed a total of 3500 trials. The confusion matrix in Figure 10 presents the classification results for all the subjects in the first experiment. In this confusion matrix, each row and column represent the predicted and actual values, respectively. The individual confusion matrices of the subjects can be found in Figures A1–A7 in Appendix A.
These figures reveal high classification rates for subjects 1 (97.0%), 2 (95.4%), 3 (96.6%), 4 (96.2%), 5 (97.0%), 6 (96.6%), and 7 (97.2%). The highest and lowest classification rates were obtained for subjects 7 (97.2%) and 2 (95.4%), respectively. Overall, the hand gesture classification and industrial robot control experiment yielded a high classification rate of 96.57% for the subjects who participated in learning. The hand gesture with the highest classification rate was “Rest” (100%), while that with the lowest classification rate was “Thumb down” (94%). The hand gesture classifier in the proposed system extracted features accurately, achieved high classification performance, and was robust to variations in the EMG signals.
4.2. Subjects from Whom No Data Were Collected
In the second experiment, the subjects from whom no training data were collected were tested. Each subject performed each gesture 50 times, so the four subjects performed a total of 2000 trials. The confusion matrix in Figure 11 presents the classification results for all the subjects in the second experiment. In this confusion matrix, each row and column represent the predicted and actual values, respectively. The individual confusion matrices of the subjects are presented in Figures A8–A11 in Appendix A.
These figures reveal high classification rates for subjects 8 (95.8%), 9 (93.6%), 10 (95.8%), and 11 (95.2%). The highest classification rates were obtained for subjects 8 and 10 (95.8%), while the lowest classification rate was obtained for subject 9 (93.6%). The hand gesture classification and industrial robot control experiment involving the subjects who did not participate in learning yielded a high classification rate of 95.10%. The hand gestures with the highest and lowest classification rates were “Rest” (100%) and “Pronation” (90%), respectively. The hand gesture classifier used in the proposed system exhibited high performance even for subjects who did not participate in learning, demonstrating that it is a transferable model that generalizes across users, including both men and women.
4.3. Comparisons to Previous Studies
Table 4 compares our results to those of previous studies in terms of inputs, methods, numbers of gestures, numbers of data collection and testing subjects, performance, field of application, and use of edge AI. Images were used as inputs in [32,33], while the other studies used EMG signals. Compared to [19,26,34,35,36], which used EMG signals as inputs, we achieved superior performance. Previous studies [37,38,39,40] used different deep learning models and EMG data to classify hand gestures and tested the universality of their models by evaluating accuracy on a group not involved in data collection; the test accuracies reported in those studies were lower than or comparable to the accuracies achieved in this paper. In this study, an overall accuracy of 96.04% was obtained experimentally for seven participants who participated in learning and four participants who did not. Furthermore, compared to [19,24,26,32,33,34,35,36,40,41], our study contributes more to the development of edge AI systems.
5. Conclusions
We proposed a dynamic hand-gesture-based industrial robot control system using an edge AI platform and a CRNN. The proposed system is an edge AI system that can remotely control industrial robots using hand gestures, regardless of location. The embedded system receives EMG signals collected by a Myo armband, and a CRNN is used as the hand gesture classification model based on the EMG data. The proposed system remotely controls an industrial robot and gripper based on ROS. The performance of the hand gesture classifier was evaluated through two experiments. The results of the first experiment revealed a high classification rate (96.57%) for subjects who participated in learning; the classifier was insensitive to variations in the EMG signals and exhibited robust classification performance. The results of the second experiment revealed a high classification rate (95.10%), even for subjects who did not participate in learning. Therefore, our hand gesture classifier is a transferable model that is applicable to both men and women. However, using only a single EMG sensor limits the number of muscles that can be monitored, and deep learning models have the disadvantage of requiring a large number of parameters, even for a basic structure. In the future, to classify more complex movements, we plan to conduct a study using two Myo armbands on the forearm and upper arm. Additionally, we will apply a network-in-network structure to our deep learning model to make it more lightweight.
Author Contributions: Conceptualization, B.P.; methodology, E.K. and J.S.; software, E.K.; validation, E.K. and Y.K.; data curation, E.K.; writing—original draft preparation, E.K.; writing—review and editing, J.S., Y.K. and B.P.; visualization, E.K.; supervision, B.P.; project administration, B.P.; funding acquisition, B.P. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: Data sharing is not applicable to this article.
Conflicts of Interest: The authors declare no conflict of interest.
Figure 1. Proposed system configuration. EMG data are transmitted from the EMG sensor to the ROS environment on an NVIDIA Jetson Nano. The ROS-based embedded system controls a gripper and UR3 manipulator using the gesture classifier trained via deep learning.
Figure 2. Industrial robot UR3 used to test the proposed system. The manipulator has six degrees of freedom.
Figure 4. Myo gesture control armband EMG sensor used in this study. The wearable device has eight surface EMG channels.
Figure 6. Band-shaped sensor worn on the user’s right forearm. Channel 4 should be aligned horizontally with the back of the user’s hand.
Figure 7. Plots of data collected using the Myo armband. Each gesture is represented by a graph of eight channels. The characteristics of each gesture can be observed in this graph.
Figure 8. Proposed CRNN architecture consisting of three convolutional layers, GRU layers, and two dense layers.
Figure 10. Confusion matrix for the subjects who participated in data collection.
Figure 11. Confusion matrix for the subjects who did not participate in data collection.
Specifications of the UR3 CB-series.
| Technical Specifications of the UR3 | |
|---|---|
| Weight | 11 kg / 24.3 lbs |
| Payload | 3 kg / 6.6 lbs |
| Reach | 500 mm / 19.7 in |
| Repeatability | ±0.1 mm / ±0.0039 in |
| Degrees of freedom | 6 rotating joints |
Working range and speed of the UR3 robot joints.
| Robot Arm Joints | Working Range | Maximum Speed |
|---|---|---|
| Base | ±360° | |
| Shoulder | ±360° | |
| Elbow | ±360° | |
| Wrist 1 | ±360° | |
| Wrist 2 | ±360° | |
| Wrist 3 | Infinite | |
One set of normalized EMG data.
| Sample | Ch.1 | Ch.2 | Ch.3 | Ch.4 | Ch.5 | Ch.6 | Ch.7 | Ch.8 |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.00996 | 0.00996 | 0.07597 | 0.03238 | 0.01308 | 0.00685 | 0.0056 | 0.00623 |
| 2 | 0.01557 | 0.13574 | 0.06476 | 0.02428 | 0.01619 | 0.01308 | 0.04608 | 0.03674 |
| 3 | 0.01494 | 0.18991 | 0.06538 | 0.02304 | 0.01557 | 0.01059 | 0.04483 | 0.02491 |
| 4 | 0.01557 | 0.18493 | 0.06227 | 0.01806 | 0.0137 | 0.00996 | 0.04359 | 0.01681 |
| ∼ | | | | | | | | |
| 48 | 0.02242 | 0.16065 | 0.08219 | 0.02491 | 0.01059 | 0.00934 | 0.01121 | 0.01121 |
| 49 | 0.01743 | 0.16999 | 0.08468 | 0.02242 | 0.00996 | 0.00809 | 0.00934 | 0.00872 |
| 50 | 0.01681 | 0.17933 | 0.09278 | 0.0193 | 0.00934 | 0.00809 | 0.00747 | 0.00809 |
Comparisons to previous studies. (X: Not used, O: Used).
| Reference | Input | Method | Gestures | Subjects (Acquisition/Test) | Performance | Field | Edge AI |
|---|---|---|---|---|---|---|---|
| [ ] | Image | DCNN | 10 | 1/1 | 98% | Gesture recognition | X |
| [ ] | Image | CNN | 10 | 1/1 | 84.5% | Robotic arm control | X |
| [ ] | EMG | CNN | 7 | -/18 (11 men and 7 women) | 93.14% | Robotic arm control | X |
| [ ] | EMG | CRNN | 6 | 6/6 | 92.5% | Gesture recognition | O |
| [ ] | EMG | K-NN | 6 | -/10 | 86.0% | Gesture recognition | O |
| [ ] | EMG | FNN | 6 | 120 (90 men and 30 women)/ | 96.87% | Gesture recognition | X |
| [ ] | EMG | ANN | 6 | 12/12 | 98.7% | Gesture recognition | O |
| [ ] | EMG | RNN | 17 | - | 86.7% | Gesture recognition | X |
| [ ] | EMG | SKNN | 6 | 4/4 | 95.83% | Gesture recognition | X |
| [ ] | EMG | Adaptive | 7 | 18 (12 men and 6 women)/22 | 92.9% | Gesture recognition | X |
| [ ] | EMG | CRNN | 5 | 7/12 | 84.2% | Robot arm control | X |
| [ ] | EMG | MLP | 10 | 20 (15 men and 5 women)/6 | 78.94% | Gesture recognition | X |
| [ ] | EMG | ViT | 23 | 6/7 | 97% | Gesture recognition | X |
| Proposed | EMG | CRNN | 10 | 7 (5 men and 2 women)/11 | 96.04% | Robotic arm control | O |
Appendix A
Figure A1. Confusion matrix of a subject who participated in data collection (subject 1).
Figure A2. Confusion matrix of a subject who participated in data collection (subject 2).
Figure A3. Confusion matrix of a subject who participated in data collection (subject 3).
Figure A4. Confusion matrix of a subject who participated in data collection (subject 4).
Figure A5. Confusion matrix of a subject who participated in data collection (subject 5).
Figure A6. Confusion matrix of a subject who participated in data collection (subject 6).
Figure A7. Confusion matrix of a subject who participated in data collection (subject 7).
Figure A8. Confusion matrix of a subject who did not participate in data collection (subject 8).
Figure A9. Confusion matrix of a subject who did not participate in data collection (subject 9).
Figure A10. Confusion matrix of a subject who did not participate in data collection (subject 10).
Figure A11. Confusion matrix of a subject who did not participate in data collection (subject 11).
References
1. Sutoh, M.; Otsuki, M.; Wakabayashi, S.; Hoshino, T.; Hashimoto, T. The right path: Comprehensive path planning for lunar exploration rovers. IEEE Robot. Autom. Mag.; 2015; 22, pp. 22-33. [DOI: https://dx.doi.org/10.1109/MRA.2014.2381359]
2. Huang, P.; Zhang, F.; Cai, J.; Wang, D.; Meng, Z.; Guo, J. Dexterous tethered space robot: Design, measurement, control, and experiment. IEEE Trans. Aerosp. Electron. Syst.; 2017; 53, pp. 1452-1468. [DOI: https://dx.doi.org/10.1109/TAES.2017.2671558]
3. Hassanalian, M.; Rice, D.; Abdelkefi, A. Evolution of space drones for planetary exploration: A review. Prog. Aerosp. Sci.; 2018; 97, pp. 61-105. [DOI: https://dx.doi.org/10.1016/j.paerosci.2018.01.003]
4. Mittal, S.; Rana, M.K.; Bhardwaj, M.; Mataray, M.; Mittal, S. CeaseFire: The fire fighting robot. Proceedings of the 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), IEEE; Greater Noida, India, 12–13 October 2018; pp. 1143-1146.
5. Kim, J.H.; Starr, J.W.; Lattimer, B.Y. Firefighting robot stereo infrared vision and radar sensor fusion for imaging through smoke. Fire Technol.; 2015; 51, pp. 823-845. [DOI: https://dx.doi.org/10.1007/s10694-014-0413-6]
6. Jentsch, F. Human-Robot Interactions in Future Military Operations; CRC Press: Boca Raton, FL, USA, 2016.
7. Kot, T.; Novák, P. Application of virtual reality in teleoperation of the military mobile robotic system TAROS. Int. J. Adv. Robot. Syst.; 2018; 15, 1729881417751545. [DOI: https://dx.doi.org/10.1177/1729881417751545]
8. Shin, S.; Yoon, D.; Song, H.; Kim, B.; Han, J. Communication system of a segmented rescue robot utilizing socket programming and ROS. Proceedings of the 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), IEEE; Jeju, Republic of Korea, 28 June–1 July 2017; pp. 565-569.
9. Hong, S.; Park, G.; Lee, Y.; Lee, W.; Choi, B.; Sim, O.; Oh, J.H. Development of a tele-operated rescue robot for a disaster response. Int. J. Humanoid Robot.; 2018; 15, 1850008. [DOI: https://dx.doi.org/10.1142/S0219843618500081]
10. Kakiuchi, Y.; Kojima, K.; Kuroiwa, E.; Noda, S.; Murooka, M.; Kumagai, I.; Ueda, R.; Sugai, F.; Nozawa, S.; Okada, K. et al. Development of humanoid robot system for disaster response through team nedo-jsk’s approach to darpa robotics challenge finals. Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), IEEE; Seoul, Republic of Korea, 3–5 November 2015; pp. 805-810.
11. Haidegger, T. Autonomy for surgical robots: Concepts and paradigms. IEEE Trans. Med Robot. Bionics; 2019; 1, pp. 65-76. [DOI: https://dx.doi.org/10.1109/TMRB.2019.2913282]
12. Brosque, C.; Galbally, E.; Khatib, O.; Fischer, M. Human-Robot Collaboration in Construction: Opportunities and Challenges. Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), IEEE; Ankara, Turkey, 26–27 June 2020; pp. 1-8.
13. Sabuj, B.; Islam, M.J.; Rahaman, M.A. Human robot interaction using sensor based hand gestures for assisting disable people. Proceedings of the 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI), IEEE; Dhaka, Bangladesh, 24–25 December 2019; pp. 1-5.
14. Chen, X.; Xu, H.; Wang, L.; Wang, B.; Yang, C. Humanoid Robot Head Interaction Based on Face Recognition. Proceedings of the 2009 Asia-Pacific Conference on Information Processing, IEEE; Shenzhen, China, 18–19 July 2009; Volume 1, pp. 193-196.
15. Li, T.H.S.; Kuo, P.H.; Tsai, T.N.; Luan, P.C. CNN and LSTM based facial expression analysis model for a humanoid robot. IEEE Access; 2019; 7, pp. 93998-94011. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2928364]
16. Sripada, A.; Asokan, H.; Warrier, A.; Kapoor, A.; Gaur, H.; Patel, R.; Sridhar, R. Teleoperation of a humanoid robot with motion imitation and legged locomotion. Proceedings of the 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), IEEE; Singapore, 18–20 July 2018; pp. 375-379.
17. Jung, S.W.; Sung, K.W.; Park, M.Y.; Kang, E.U.; Hwang, W.J.; Won, J.D.; Lee, W.S.; Han, S.H. A study on precise control of autonomous driving robot by voice recognition. Proceedings of the IEEE ISR 2013, IEEE; Atlanta, GA, USA, 15–17 October 2013; pp. 1-3.
18. Gourob, J.H.; Raxit, S.; Hasan, A. A Robotic Hand: Controlled With Vision Based Hand Gesture Recognition System. Proceedings of the 2021 International Conference on Automation, Control and Mechatronics for Industry 4.0 (ACMI), IEEE; Rajshahi, Bangladesh, 8–9 July 2021; pp. 1-4.
19. Allard, U.C.; Nougarou, F.; Fall, C.L.; Giguère, P.; Gosselin, C.; Laviolette, F.; Gosselin, B. A convolutional neural network for robotic arm guidance using sEMG based frequency-features. Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE; Daejeon, Republic of Korea, 9–14 October 2016; pp. 2464-2470.
20. Liu, Y.; Yin, Y.; Zhang, S. Hand gesture recognition based on HU moments in interaction of virtual reality. Proceedings of the 2012 4th International Conference on Intelligent Human-Machine Systems and Cybernetics, IEEE; Nanchang, China, 26–27 August 2012; Volume 1, pp. 145-148.
21. Clark, A.; Moodley, D. A system for a hand gesture-manipulated virtual reality environment. Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists; Johannesburg, South Africa, 26–28 September 2016; pp. 1-10.
22. Ketcham, M.; Inmoonnoy, V. The message notification for patients care system using hand gestures recognition. Proceedings of the 2017 International Conference on Digital Arts, Media and Technology (ICDAMT), IEEE; Chiang Mai, Thailand, 1–4 March 2017; pp. 412-416.
23. Chen, L.; Fu, J.; Wu, Y.; Li, H.; Zheng, B. Hand gesture recognition using compact CNN via surface electromyography signals. Sensors; 2020; 20, 672. [DOI: https://dx.doi.org/10.3390/s20030672]
24. Samadani, A. Gated recurrent neural networks for EMG-based hand gesture classification. A comparative study. Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE; Honolulu, HI, USA, 18–21 July 2018; pp. 1-4.
25. Toro-Ossaba, A.; Jaramillo-Tigreros, J.; Tejada, J.C.; Peña, A.; López-González, A.; Castanho, R.A. LSTM Recurrent Neural Network for Hand Gesture Recognition Using EMG Signals. Appl. Sci.; 2022; 12, 9700. [DOI: https://dx.doi.org/10.3390/app12199700]
26. Jo, Y.U.; Oh, D.C. Real-Time Hand Gesture Classification Using Crnn with Scale Average Wavelet Transform. J. Mech. Med. Biol.; 2020; 20, 2040028. [DOI: https://dx.doi.org/10.1142/S021951942040028X]
27. Hu, Y.; Wong, Y.; Wei, W.; Du, Y.; Kankanhalli, M.; Geng, W. A novel attention-based hybrid CNN-RNN architecture for sEMG-based gesture recognition. PLoS ONE; 2018; 13, e0206049. [DOI: https://dx.doi.org/10.1371/journal.pone.0206049]
28. Jaramillo-Yánez, A.; Benalcázar, M.E.; Mena-Maldonado, E. Real-time hand gesture recognition using surface electromyography and machine learning: A systematic literature review. Sensors; 2020; 20, 2467. [DOI: https://dx.doi.org/10.3390/s20092467] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32349232]
29. Krichen, M.; Mihoub, A.; Alzahrani, M.Y.; Adoni, W.Y.H.; Nahhal, T. Are Formal Methods Applicable To Machine Learning And Artificial Intelligence?. Proceedings of the 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH); Riyadh, Saudi Arabia, 22–24 May 2022; pp. 48-53. [DOI: https://dx.doi.org/10.1109/SMARTTECH54121.2022.00025]
30. Urban, C.; Miné, A. A Review of Formal Methods applied to Machine Learning. arXiv; 2021; [DOI: https://dx.doi.org/10.48550/ARXIV.2104.02466] arXiv: 2104.02466
31. Seshia, S.A.; Sadigh, D.; Sastry, S.S. Toward Verified Artificial Intelligence. Commun. ACM; 2022; 65, pp. 46-55. [DOI: https://dx.doi.org/10.1145/3503914]
32. Ashiquzzaman, A.; Oh, S.; Lee, D.; Lee, J.; Kim, J. Compact Deeplearning Convolutional Neural Network based Hand Gesture Classifier Application for Smart Mobile Edge Computing. Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), IEEE; Fukuoka, Japan, 19–21 February 2020; pp. 119-123.
33. Arenas, J.O.P.; Moreno, R.J.; Beleño, R.D.H. Convolutional neural network with a dag architecture for control of a robotic arm by means of hand gestures. Contemp. Eng. Sci.; 2018; 11, pp. 547-557. [DOI: https://dx.doi.org/10.12988/ces.2018.8241]
34. Benalcázar, M.E.; Jaramillo, A.G.; Zea, A.; Páez, A.; Andaluz, V.H. Hand gesture recognition using machine learning and the Myo armband. Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO), IEEE; Kos, Greece, 28 August–2 September 2017; pp. 1040-1044.
35. Benalcázar, M.E.; Valdivieso Caraguay, Á.L.; Barona López, L.I. A User-Specific Hand Gesture Recognition Model Based on Feed-Forward Neural Networks, EMGs, and Correction of Sensor Orientation. Appl. Sci.; 2020; 10, 8604. [DOI: https://dx.doi.org/10.3390/app10238604]
36. Zhang, Z.; Yang, K.; Qian, J.; Zhang, L. Real-time surface EMG pattern recognition for hand gestures based on an artificial neural network. Sensors; 2019; 19, 3170. [DOI: https://dx.doi.org/10.3390/s19143170]
37. Colli Alfaro, J.G.; Trejos, A.L. User-independent hand gesture recognition classification models using sensor fusion. Sensors; 2022; 22, 1321. [DOI: https://dx.doi.org/10.3390/s22041321]
38. Li, Q.; Langari, R. EMG-based HCI Using CNN-LSTM Neural Network for Dynamic Hand Gestures Recognition. IFAC-PapersOnLine; 2022; 55, pp. 426-431. [DOI: https://dx.doi.org/10.1016/j.ifacol.2022.11.220]
39. Colli-Alfaro, J.G.; Ibrahim, A.; Trejos, A.L. Design of User-Independent Hand Gesture Recognition Using Multilayer Perceptron Networks and Sensor Fusion Techniques. Proceedings of the 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR); Toronto, ON, Canada, 24–28 June 2019; pp. 1103-1108. [DOI: https://dx.doi.org/10.1109/ICORR.2019.8779533]
40. Zhang, Z.; Kan, E.C. Novel Muscle Monitoring by Radiomyography (RMG) and Application to Hand Gesture Recognition. arXiv; 2022; arXiv: 2211.03767
41. Tepe, C.; Erdim, M. Classification of EMG Finger Data Acquired with Myo Armband. Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), IEEE; Ankara, Turkey, 26–27 June 2020; pp. 1-4.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Recently, human–robot interaction technology has been considered as a key solution for smart factories. Surface electromyography signals obtained from hand gestures are often used to enable users to control robots through hand gestures. In this paper, we propose a dynamic hand-gesture-based industrial robot control system using the edge AI platform. The proposed system can perform both robot operating-system-based control and edge AI control through an embedded board without requiring an external personal computer. Systems on a mobile edge AI platform must be lightweight, robust, and fast. In the context of a smart factory, classifying a given hand gesture is important for ensuring correct operation. In this study, we collected electromyography signal data from hand gestures and used them to train a convolutional recurrent neural network. The trained classifier model achieved 96% accuracy for 10 gestures in real time. We also verified the universality of the classifier by testing it on 11 different participants.
1 Department of Electronic Engineering, Kumoh National Institute of Technology, Gumi-si 39177, Republic of Korea
2 Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi-si 39177, Republic of Korea