About the Authors:
Ahmed K. El-Shenawy
* E-mail: [email protected]
Affiliation: Arab Academy for Science, Technology and Maritime Transport, Electric and Control Department, College of Engineering and Technology, Alexandria, Egypt
ORCID http://orcid.org/0000-0003-1333-3659
M. A. ElSaharty
Affiliation: Arab Academy for Science, Technology and Maritime Transport, Electric and Control Department, College of Engineering and Technology, Alexandria, Egypt
Ezz Eldin Zakzouk
Affiliation: Arab Academy for Science, Technology and Maritime Transport, Electric and Control Department, College of Engineering and Technology, Alexandria, Egypt
Introduction
For decades, the medical care community has been working on developing smart environments for special-needs and elderly patients [1] [2] [3]. The autonomous wheel chair is the main element in such an environment and requires high navigational performance to guarantee efficient integration. To date, commercial wheel chairs are only partially autonomous, compensating for the user's physical deficiencies, and the user is always confronted with the problem of local navigation. The chair must therefore provide localization, path planning and position update, themes that have become a major trend in the robotics community over the past few years [4] [5] [6]. In [7], a wheel chair is integrated with a framework that estimates the intention of the user and determines whether the user needs assistance to achieve it. A deictic approach is used in [8] for smart wheelchair driving assistance through doors and passages. All navigation systems are easier to develop with a predefined map of the working environment, although in some environments the maps must be reconstructed and maintained [9]. The problem of position update is very important, as the whole navigation system depends on calculating accurate position coordinates for the chair. Operating in a wireless environment improves the performance of such a chair, since accidents can be detected and the user tracked approximately [10]. Furthermore, efficient routing is one of the current major challenges in wireless sensor networks, where reducing energy consumption and increasing the network lifetime are the main concerns [11] [12] [13]. The camera is considered one of the main sensors for delivering accurate information about the environment; for example, it has been combined with a vision-based posture classification scheme to extract further information about the user when an alert occurs [10].
Kalman filters are among the most common methods for sensor fusion, although they are more often used to filter and fuse parallel images, for example to compare parallel and converged cameras [14]. However, a wireless network and a camera vision sensor are not available in many environments; the wheel chair should therefore rely on its internal sensors alone for its position update. The accurate position of a wheeled mobile base (chair) may be calculated using external sensors, internal sensors or a combination of both. Normally, the wheel chair depends mainly on the internal sensors attached to the platform.
However, the internal sensors are affected by the chair's environment: temperature, humidity and slippage. For example, the shaft encoders do not detect wheel slippage, and the inertial measurement unit (IMU) always has a shifting offset that depends on temperature and humidity. Such an offset creates an accumulated error when the acceleration is integrated to estimate velocity or position. Several approaches have been presented in the literature to minimize or eliminate the effect of this offset [15], but such methods are complicated and not always reliable. The camera tracking system, on the other hand, does not have this disadvantage; therefore, it is used within the wheel chair control structure.
In certain environments the camera tracking system may not be available or feasible. In this paper, a method is developed to fuse the measurements of the internal sensors (IMU and shaft encoders) into position coordinates of the wheel chair whose accuracy is similar to that of a camera tracking system. This is achieved by combining neural networks with analogical gates to fuse the internal sensors' position coordinates.
The work presented here is part of a project funded by the Arab Academy for Science, Technology and Maritime Transport. The motivation of the project is to create a “Smart Environment for Elderly and Disabled Users” (SEED). The wheel chair is the main element of this environment; it must be flexible in maneuvering and must assist the user in navigating through his/her home. Flexibility in maneuvering is primarily achieved through the use of mecanum wheels attached to the mobile base platform to increase the degrees of freedom (DOF) of motion.
The paper starts by describing the chair configuration in section (1). The navigation system is explained in section (2). Section (3) defines analogical gates and their properties, especially the OR analogical gate. The integration of neural networks with analogical gates is presented in section (4). Finally, the performance of the proposed method is illustrated in section (5) with experimental results.
1 Wheel Chair Configuration
Having a chair with flexible motion is one of the main objectives. The chair must therefore have holonomic motion properties, which implies that the number of velocity DOF equals the number of position coordinates. A rigid body has six DOF [16], represented by the displacement axes X, Y and Z and the rotational angles θx, θy and θz. A holonomic wheeled mobile robot (WMR) can drive in three DOF (X, Y and θz, or Φ), whereas a non-holonomic WMR cannot achieve this 3-DOF mobility: it cannot move sideways along the X axis with respect to the platform coordinates [17].
Wheel platforms with holonomic properties have been investigated in several studies [17] [18] and with different configurations [19]. The mecanum wheel is one of the most recommended wheels for holonomic mobility; therefore, it is used for the wheel chair base. The platform configuration used is shown in Fig 1; it has been studied before by several researchers [20] [21] [22].
[Figure omitted. See PDF.]
Fig 1. The Wheeled Chair configuration.
https://doi.org/10.1371/journal.pone.0169036.g001
The hardware setup used in this paper is V.01 of the wheel chair, shown in Fig 2. The chair consists of two main parts: 1) the wheeled platform configuration and 2) the electric hardware setup. The platform has a rectangular shape supported by four mecanum wheels. Each wheel has a radius of 65 mm and can support a maximum weight of 15 kg. The mecanum wheel used in the configuration consists of nine rollers made from Delrin. Wheels with +45° rollers and wheels with −45° rollers are attached on each side of the chair platform.
[Figure omitted. See PDF.]
Fig 2. The Chair Real-Time Setup.
https://doi.org/10.1371/journal.pone.0169036.g002
The platform is equipped with four DC motors to actuate the wheels' angular velocities. The motors operate at 12 V DC with a rated speed of 250 rpm and a rated power of 41 W. Each motor has a planetary steel gearbox and an incremental encoder attached to its shaft. The motors are driven by high-power DC motor drivers that can deliver 10 A continuously and a 15 A peak current for 10 seconds. Eight ultrasonic sensors are used to measure distances to surrounding objects; they support the collision avoidance behavior. Furthermore, a gyro sensor is used for rotational acceleration measurements and for the platform motion control. The main control unit is based on an Arduino Mega 2560 R3.
The omni-directional capabilities of the platform depend on firm contact with the surface. The platform’s parameters are described in Table 1.
[Figure omitted. See PDF.]
Table 1. The platform parameters.
https://doi.org/10.1371/journal.pone.0169036.t001
2 Navigation System
It is assumed that the chair navigates with a known map and knowledge of its local environment, for example a user's apartment. The apartment consists of seven main nodes: 1) bedroom (BR), 2) living room (LR), 3) bathroom (BT), 4) kitchen (KT), 5) reception (RT) and 6-7) hallway (CH1) and (CH2), as shown in Fig 3. These are the basic rooms in a typical apartment, although there can be fewer. The hallway node (CH1) is considered the reference point, which the chair must reach first in order to drive from one room to another.
[Figure omitted. See PDF.]
Fig 3. The apartment node map.
https://doi.org/10.1371/journal.pone.0169036.g003
The navigation system depends mainly on motion control from one node to another and on the position control loop that drives the chair between each two nodes. The bedroom (BR) is considered the base node with coordinates (X, Y) = (0, 0), and the other nodes' coordinates are the displacements from that node.
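The displacement-based node map described above can be sketched as a small lookup table. The coordinates below are hypothetical placeholders (the actual displacements are listed in Table 2), shown only to illustrate how goal coordinates are derived relative to the bedroom base node.

```python
# Hypothetical node coordinates (metres) relative to the bedroom (BR)
# base node at (0, 0); the real displacements are given in Table 2.
NODES = {
    "BR": (0.0, 0.0),   # bedroom (base node)
    "CH1": (1.0, 0.5),  # hallway reference point
    "CH2": (2.0, 0.5),
    "LR": (1.0, 1.5),
    "KT": (2.5, 1.0),
    "RT": (3.0, 0.0),
    "BT": (0.5, 1.5),
}

def displacement(start, goal):
    """Return the (dx, dy) displacement the position controller must cover."""
    (x0, y0), (x1, y1) = NODES[start], NODES[goal]
    return (x1 - x0, y1 - y0)

print(displacement("BR", "KT"))
```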
The control system of the wheel chair is shown in Fig 4. The chair is assumed to be controlled by voice commands and joystick commands. First, some symbols are defined to illustrate the control system and the proposed work:
1. P position coordinates in X, Y, Φ
2. V velocity coordinates in Ẋ, Ẏ, Φ̇
3. A acceleration coordinates in Ẍ, Ÿ, Φ̈
4. VCam velocity coordinates delivered from the camera tracking system
5. PCam position coordinates delivered from the camera tracking system
6. VIMU velocity coordinates delivered from the IMU system
7. PIMU position coordinates delivered from the IMU system
8. VFK velocity coordinates delivered from the forward kinematics
9. PFK position coordinates delivered from the forward kinematics
10. Vref reference velocity coordinates
11. ω angular wheel velocities
[Figure omitted. See PDF.]
Fig 4. The Wheeled Chair Control System.
https://doi.org/10.1371/journal.pone.0169036.g004
The robot coordinates are measured and estimated by three main methods: 1) camera tracking, 2) the IMU and 3) forward kinematics estimation. Firstly, in this case study, the velocity (VCam) and position coordinates (PCam) generated from the camera tracking system are used within the control system. These data are used because they are the most accurate position coordinates, as will be shown in the experimental results section.
Secondly, measurements are taken directly from the IMU mounted on the chair. The IMU measures the linear acceleration and the angular acceleration of the rotational motion using a combination of an accelerometer and a gyroscope. By integration, the robot velocities (VIMU) and position coordinates (PIMU) are calculated.
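The double integration described above also explains the drift problem noted in the introduction. A minimal sketch, with an assumed constant accelerometer offset, shows how even a small bias accumulates into a large position error:

```python
# Sketch of IMU dead reckoning: integrate acceleration twice to obtain
# velocity and then position. A small constant offset (assumed to be
# 0.01 m/s^2 here) illustrates how the bias accumulates over time.
def integrate_imu(accels, dt, offset=0.0):
    v, p = 0.0, 0.0
    for a in accels:
        v += (a + offset) * dt   # velocity from acceleration
        p += v * dt              # position from velocity
    return p

dt = 0.01
true_accel = [0.0] * 1000                   # chair at rest for 10 s
print(integrate_imu(true_accel, dt))        # ideal sensor: no motion
print(integrate_imu(true_accel, dt, 0.01))  # biased sensor: ~0.5 m drift
```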
Thirdly, the robot velocity reference value is the command signal for the wheel chair control system. Shaft encoders on the wheels deliver the wheels' angular velocities ω, which are transformed to chair coordinates by means of the forward kinematics solution described in [23]:

V = Jf ω (1)

where ω is the vector of sensed wheel velocities, Jf is the forward kinematic solution and V is the vector of platform velocities. The actuated inverse solution is

ω = Ja V (2)

where R is the wheel radius and h is the distance from each wheel to the robot coordinate origin. The wheel encoders measure the angular velocity of each wheel individually.
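The transforms of Eqs (1) and (2) can be sketched with the standard mecanum Jacobian. The wheel ordering and sign convention below are assumptions; the authors' exact Jf in [23] may differ, and the wheel-to-centre distance is a made-up value.

```python
import numpy as np

R, h = 0.065, 0.35   # wheel radius [m]; wheel-to-centre distance [m] (assumed)

# Standard mecanum inverse kinematics (Eq 2): wheel speeds from the body
# velocity (vx, vy, wz). Wheel order and signs are an assumed convention.
J_inv = (1.0 / R) * np.array([
    [1, -1, -h],
    [1,  1,  h],
    [1,  1, -h],
    [1, -1,  h],
])

# Forward solution (Eq 1) taken as the pseudo-inverse of the inverse map.
J_fwd = np.linalg.pinv(J_inv)

v = np.array([0.2, 0.0, 0.0])   # drive straight ahead at 0.2 m/s
wheel_speeds = J_inv @ v        # rad/s for each of the four wheels
v_back = J_fwd @ wheel_speeds   # recover the body velocity
print(wheel_speeds, v_back)
```

Because the 4x3 inverse Jacobian has full column rank, the pseudo-inverse recovers the commanded body velocity exactly from consistent wheel speeds.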
The difference between the three systems is elaborated in the following experiment. As mentioned before, the wheel chair is tested within a known environment, and its navigation system was validated in [23]. The node coordinates of the apartment are shown in Table 2.
[Figure omitted. See PDF.]
Table 2. Navigational node coordinates.
https://doi.org/10.1371/journal.pone.0169036.t002
The experiment demonstrates the trajectories of the three position update systems (forward kinematics, IMU and camera tracking). The commands given to the system were: first, reach the kitchen from the bedroom (BR → KT); second, drive to the reception from the kitchen (KT → RT). The control system proposed in [23] generated the sequence of indexed nodes 1, 5, 6, 3 and 7. The position control system drove the chair from one node to another according to its initial and goal coordinates on the (X, Y) axes, while the rotational angle had a reference value of Φ = 0°. The chair trajectory in Fig 5 shows that the proposed system fulfilled the sequence.
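The node-to-node position control can be sketched as a simple proportional loop. The gain, time step and tolerance below are assumptions; the chair's ±0.2 m/s velocity limit (mentioned in section 3) is enforced by saturation.

```python
# Minimal proportional position controller driving the chair from an
# initial point to a goal point. Gain, step and tolerance are assumed.
def drive_to(start, goal, kp=1.0, dt=0.1, v_max=0.2, tol=0.01):
    x, y = start
    gx, gy = goal
    for _ in range(10000):
        ex, ey = gx - x, gy - y
        if (ex**2 + ey**2) ** 0.5 < tol:
            break
        # Proportional command, saturated to the chair's velocity limit.
        vx = max(-v_max, min(v_max, kp * ex))
        vy = max(-v_max, min(v_max, kp * ey))
        x += vx * dt
        y += vy * dt
    return x, y

x, y = drive_to((0.0, 0.0), (2.5, 1.0))
print(round(x, 2), round(y, 2))
```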
[Figure omitted. See PDF.]
Fig 5. Wheel chair trajectories for FK, IMU and camera positioning systems.
https://doi.org/10.1371/journal.pone.0169036.g005
Figs 6 and 7 show the small environment used to test the navigation systems. Fig 6 represents the first motion, driving from the bedroom to CH1, while Fig 7 shows the motion from CH2 to the kitchen.
[Figure omitted. See PDF.]
Fig 6. Driving from BR to Node1.
https://doi.org/10.1371/journal.pone.0169036.g006
[Figure omitted. See PDF.]
Fig 7. Driving from Node 2 to KT.
https://doi.org/10.1371/journal.pone.0169036.g007
The trajectories of the three positioning systems are shown in Fig 5. The camera tracking system clearly gives the most accurate position coordinates, while the others have noticeable errors. To evaluate the performance of the IMU and forward kinematics positioning systems, their resulting trajectories are compared to the camera positioning system. The error of each trajectory is calculated with respect to the camera trajectory as

eFK(i) = ‖PFK(i) − PCam(i)‖ (3)

for the FK positioning system, and for the IMU system:

eIMU(i) = ‖PIMU(i) − PCam(i)‖ (4)
The mean errors of Eqs (3) and (4) are used to show how close each trajectory is to the camera trajectory (where n is the number of points taken on the trajectory). The forward kinematics (FK) resulted in a mean error of 0.381 m, while the IMU system resulted in 0.258 m. Since the apartment dimensions are scaled with a ratio of 1:5, the errors are rescaled with the same ratio to 1.9 m for the FK and 1.3 m for the IMU system.
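The mean-error comparison of Eqs (3) and (4) amounts to averaging point-wise Euclidean distances against the camera trajectory. A minimal sketch, using short made-up trajectories for illustration:

```python
# Mean trajectory error against the camera reference (Eqs 3-4):
# average Euclidean distance between corresponding sample points.
def mean_error(traj, traj_cam):
    n = len(traj_cam)
    return sum(
        ((x - xc) ** 2 + (y - yc) ** 2) ** 0.5
        for (x, y), (xc, yc) in zip(traj, traj_cam)
    ) / n

cam = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
fk = [(0.0, 0.1), (1.0, 0.1), (2.0, 0.1)]   # constant 0.1 m offset
print(mean_error(fk, cam))
```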
These errors are not acceptable to the control system, which is an important reason for adding the camera tracking system to the chair control structure. Alternatively, the next sections propose a novel fusion system based on the integration of analogical gate theory with neural network algorithms, which we call a “neuro-analogical gate” (NAG).
3 Analogical Gates
Analogical gates are divided into two types: symmetric and asymmetric. The symmetric gates perform operations similar to their logic counterparts, such as AND, OR and XOR. In this work, the structure of an OR gate is used; however, some of its features are changed according to the proposed neuro-tuning scheme. The gate combines coordinates on the X and Y axes; these data are combined pair-wise by a binary operation, as shown in Fig 8.
[Figure omitted. See PDF.]
Fig 8. Analogical Gate.
https://doi.org/10.1371/journal.pone.0169036.g008
This means that if ∘ denotes a binary operation, then:

yo = u1 ∘ u2 always exists ⟹ ∘ is well defined

u1, u2, yo ∈ V, V = {v | v ∈ [vmin, vmax], v ∈ ℝ} ⟹ V is closed under ∘

yo = u1 ∘ u2 is unique on U1 × U2, ∀ u1 ∈ U1, u2 ∈ U2, U1, U2 ⊆ V
The analogical gates borrow their names from the analogy to Boolean logic gates on the vertices of first and third quadrant in the input space, as shown in Fig 9.
[Figure omitted. See PDF.]
Fig 9. Input Structure.
https://doi.org/10.1371/journal.pone.0169036.g009
An analogical gate is represented by the binary relation

yo = u1 ∘ u2

on the behavior elements u1 and u2, with u1, u2 ∈ ℝ. For the definition of the analogical gates, the exponential function of Eq (5) is used, as stated in [24]. Consider the formulation of Eq (6) for the OR gate; according to [24], the values a = 1.028 and b = 0.357 make the formula satisfy the OR conditions. However, any change in a and b results in a different system behavior, as shown in Fig 10. The figure shows the surface horizon of the OR gate for three different pairs of a and b values, listed in Table 3.
[Figure omitted. See PDF.]
Fig 10. Surface representing analogical OR gate for different tuning parameter values.
https://doi.org/10.1371/journal.pone.0169036.g010
[Figure omitted. See PDF.]
Table 3. Values of a and b related to Fig 10.
https://doi.org/10.1371/journal.pone.0169036.t003
Different values of a and b change the properties of the OR gate, and the resulting gate surface horizons may be used to fuse different inputs, as shown in Fig 10.
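The exact gate formulation is given in [24]. As an illustration only, the sketch below uses a generic exponential soft-maximum as a stand-in smooth OR; it reproduces the qualitative behaviour (the output tracks the larger input, with a tunable parameter shaping the blend), not the authors' gate.

```python
import math

# Stand-in smooth OR gate: an exponential soft-maximum. This is NOT the
# formulation of Eq (6) in [24]; it only shows how a tunable parameter
# `a` shapes the fusion surface between two inputs.
def soft_or(u1, u2, a=10.0):
    m = max(u1, u2)  # subtract the max for numerical stability
    return m + math.log(math.exp(a * (u1 - m)) + math.exp(a * (u2 - m))) / a

# The output is at least the larger input; a larger `a` hugs
# max(u1, u2) more tightly.
print(soft_or(0.1, 0.2))
print(soft_or(0.1, 0.2, a=100.0))
```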
If the fusion were applied to the FK and IMU trajectories using the position coordinates directly, there would be an infinite number of possible coordinates. However, the chair velocities are limited to ±0.2 m/s in any direction; therefore, the fusion is performed on the chair velocity levels. The velocity vectors VFK and VIMU represent different values of the chair's X or Y velocities.
The values of the parameters a and b tune the gate to match the values of VCam. The experiment shown in section (2) includes almost 1200 samples; 120 of them are considered the main operating points for which a and b are tuned, as shown in Fig 11. Table 4 shows some of the operating points used.
[Figure omitted. See PDF.]
Fig 11. Tuning analogical gate.
https://doi.org/10.1371/journal.pone.0169036.g011
[Figure omitted. See PDF.]
Table 4. Sample of the tuned operating points.
https://doi.org/10.1371/journal.pone.0169036.t004
These operating points are chosen from the different situations found between nodes (5, 6) and (6, 3), representing the horizontal and vertical motion.
The new gate is integrated with neural networks, resulting in the “neuro-analogical gate” (NAG). The variables a and b of the gate are generated from a feed-forward neural network, which fuses the trajectory data through the analogical gate.
4 Neuro-Tuned Gates
The training process used for the multi-layer feed-forward (MLF) neural network is supervised, based on test sets with known inputs and outputs. Each neuron in a particular layer is connected to all neurons in the next layer. The connection between the ith and jth neurons is characterized by the weight coefficient wij, and the ith neuron is biased by ϑi. The output value xi of the ith neuron is defined by

xi = f(ζi) (7)

where

ζi = ϑi + Σj wij xj (8)

n is the number of neurons in the previous layer, and f(ζi) is the transfer function transferring the signal to the ith neuron, as shown in Fig 12. The tan-sigmoid function is used in the proposed network:

f(ζi) = 2 / (1 + e^(−2ζi)) − 1 (9)
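A single neuron of the MLF network can be transcribed directly from these definitions, assuming the common tan-sigmoid form f(ζ) = 2/(1 + e^(−2ζ)) − 1:

```python
import math

# One neuron of the MLF network: weighted sum plus bias (Eq 8),
# passed through the tan-sigmoid transfer function (Eqs 7, 9).
def tansig(z):
    return 2.0 / (1.0 + math.exp(-2.0 * z)) - 1.0

def neuron(inputs, weights, bias):
    z = bias + sum(w * x for w, x in zip(weights, inputs))
    return tansig(z)

print(neuron([0.5, -0.2], [0.8, 0.3], 0.1))
```

Note that this form of the tan-sigmoid is mathematically identical to tanh(ζ), so the output is always bounded in (−1, 1).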
[Figure omitted. See PDF.]
Fig 12. Single-Input Neuron.
https://doi.org/10.1371/journal.pone.0169036.g012
The back-propagation method is a supervised training algorithm commonly used for training MLF networks [25]. Mathematically, training a network means minimizing the objective function

E = ½ Σ (xo − x̂o)² (10)

where xo is the target output value and x̂o is the computed value from the output neurons.
The work introduced in this paper aims for high accuracy with no limit on training time; therefore, using multiple hidden layers is highly recommended [25]. The choice of the number of neurons and layers depends mainly on the total number of hidden nodes of the whole neural network. In addition, decisions on the number of layers and the proportion of neurons between the first and second hidden layers are required. In [26], several methods for determining the number of neurons in the hidden layers are described, such as the Rule of Thumb method, which states the following:
* The number of hidden layer neurons is 2/3 of the size of the input layer [27];
* The number of hidden layer neurons should be less than twice the number of neurons in the input layer;
* The number of hidden neurons should be between the size of the input layer and the size of the output layer.
However, the complexity of the activation function applied to the neurons has an important impact on the network response; therefore, the rule-of-thumb method may not be applicable in some applications. There is also the simple method, which uses the same number of nodes in the input, output and hidden layers [28].
This work uses the most common method, the two-phase method, which depends mainly on trial and error. The data is divided into four groups: two groups are used in the first phase to train the network, and one group is used to test the network in the second phase. The fourth group is used to predict the output values of the trained network. This procedure is repeated for different numbers of hidden-layer neurons to obtain the best network performance [29].
The proposed MLF consists of input and output layers, as shown in Fig 13, in addition to 5 hidden layers. The input layer has 20 neurons and the output layer consists of 2 neurons, while the hidden layers have 10 neurons each (20-10-10-10-10-10-2). The choice of this structure is explained further in section 5.
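The stated topology can be sketched as a forward pass through randomly initialized (untrained) layers; the sketch only demonstrates the layer shapes, not the trained network.

```python
import numpy as np

# Forward pass through the 20-10-10-10-10-10-2 MLF topology with random
# weights, to illustrate the layer shapes only (not the trained network).
rng = np.random.default_rng(0)
sizes = [20, 10, 10, 10, 10, 10, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W, b in layers:
        x = np.tanh(W @ x + b)   # tan-sigmoid activation (Eq 9)
    return x                     # two outputs: the gate parameters a and b

out = forward(rng.standard_normal(20))
print(out.shape)
```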
[Figure omitted. See PDF.]
Fig 13. Overall MLF structure including hidden layers.
https://doi.org/10.1371/journal.pone.0169036.g013
The neural network is trained using VFK and VIMU as input vectors, with the a and b variables as output vectors. The output values of a and b are then used to tune the analogical gate to the values of VCam, as shown in Fig 14.
[Figure omitted. See PDF.]
Fig 14. Neuro-analogical gate (NAG).
https://doi.org/10.1371/journal.pone.0169036.g014
Choosing the proper structure for the neural network depends on many aspects, some of which were described in the previous section. Several structures were tested, and the decision was taken according to the mean square error between the target and the estimated output values, as shown in Table 5.
[Figure omitted. See PDF.]
Table 5. Mean square error for different MLF-NN structures.
https://doi.org/10.1371/journal.pone.0169036.t005
The structure is chosen according to the least mean square error, which is very important in our application, as high accuracy in the chair trajectory is required. The network presented in Fig 13 is trained using the sample data to find the weights and biases of the network. The network is then used within the NAG structure presented in Fig 14 and tested on another 25 samples. Finally, the NAG is used within the wheel chair control structure shown in Fig 15 and applied to the whole experiment.
[Figure omitted. See PDF.]
Fig 15. The wheel chair control structure including the NAG.
https://doi.org/10.1371/journal.pone.0169036.g015
The control structure shown in Fig 15 demonstrates the main objective of the NAG: its inputs are the velocity coordinates estimated from the forward kinematics analysis (VFK) and the velocity coordinates from the IMU readings (VIMU). The output is the fused velocity coordinates, which are fed directly to the odometry algorithm to generate the fused trajectory (PFu).
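The final odometry step, integrating the fused velocity samples into the trajectory PFu, can be sketched as a simple Euler integration; the 10 ms sample period below is an assumption.

```python
# Euler-integrate fused velocity samples (vx, vy) into a position
# trajectory P_Fu; a 10 ms sample period is assumed.
def odometry(v_samples, dt=0.01, x0=0.0, y0=0.0):
    x, y, traj = x0, y0, [(x0, y0)]
    for vx, vy in v_samples:
        x += vx * dt
        y += vy * dt
        traj.append((x, y))
    return traj

# 100 samples of 0.2 m/s forward motion: about 0.2 m travelled in x.
traj = odometry([(0.2, 0.0)] * 100)
print(traj[-1])
```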
5 Experimental Results
This section presents the results of three main experiments illustrating the performance of the NAG. Firstly, the experiment presented in section (2) is considered; Fig 16 shows the trajectory of the camera tracking system and the results of the NAG system.
[Figure omitted. See PDF.]
Fig 16. Wheel chair trajectories resulted from NAG and camera positioning systems.
https://doi.org/10.1371/journal.pone.0169036.g016
The figure presents the camera system trajectory versus the fusion trajectory. The new trajectory is noticeably better aligned with the camera trajectory, with a mean error of 0.39 m. This is a reduction in mean error of about 80% in comparison to the FK system and 70% in comparison to the IMU system.
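The quoted reductions follow directly from the rescaled mean errors, assuming the 0.39 m NAG figure is on the same 1:5-rescaled scale as the 1.9 m (FK) and 1.3 m (IMU) figures:

```python
# Relative reduction of the NAG mean error (0.39 m) against the rescaled
# FK (1.9 m) and IMU (1.3 m) mean errors; values taken from the text.
def reduction(e_ref, e_nag):
    return (e_ref - e_nag) / e_ref * 100.0

print(reduction(1.9, 0.39))   # about 79.5 %, quoted as ~80 % versus FK
print(reduction(1.3, 0.39))   # about 70 % versus the IMU system
```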
The NAG fusion system was also tested on an infinity-shaped trajectory, shown in Fig 17, which presents the trajectories of the FK, IMU and camera positioning systems.
[Figure omitted. See PDF.]
Fig 17. Infinity shape trajectories for FK, IMU and camera positioning systems.
https://doi.org/10.1371/journal.pone.0169036.g017
The figure shows the noticeable errors between the trajectories. After applying the proposed NAG fusion system, the trajectory presented in Fig 18 shows that the NAG-generated trajectory is more accurate than those generated by the FK and IMU positioning systems.
[Figure omitted. See PDF.]
Fig 18. Infinity shape trajectories from NAG and camera positioning systems.
https://doi.org/10.1371/journal.pone.0169036.g018
In Fig 17, six main zones of interest can be observed: zones A, B, C, D, E and F. Zone A is the start and end point, where the chair starts and ends its infinity-shaped trajectory. The data collected from the IMU and FK positioning systems show that their trajectories do not meet at this point, whereas the camera data show the real performance: the chair ends at the same point where it started.
The curve zones (B, D, E and F) show the high divergence of the IMU and FK trajectory readings from the camera tracking system. Such errors cause false readings and affect the performance of the wheel chair navigation system. At the intersection point C, the intersections of the three trajectories are not even close to each other.
After applying the proposed NAG fusion system, the generated trajectory overcame the errors and divergences in zones (B, D and E), as shown in Fig 18. Zone A shows that the start and end points intersect again, with much smaller errors than those of the IMU and FK trajectories. However, zone F shows high errors in comparison to the other zones. This problem may be solved by increasing the training data; however, this would require the chair to retrain the network parameters after each operational trajectory using the camera tracking system.
The efficient performance of the proposed NAG system is elaborated in the following experiment, whose main objective is to illustrate the performance of the chair when an unexpected object is introduced into the environment, as shown in Fig 19. The wheel chair should drive from the starting point to the end point without colliding with any object. Eight ultrasonic sensors are therefore attached to the platform, and the collision avoidance algorithm introduced in [23] is used. The object is one meter wide. As the wheel chair approaches it, the collision avoidance behavior is activated with a priority higher than that of the position control system; after passing the object, position control regains priority and drives the chair towards the end point. The difference in performance between the three positioning systems is illustrated in the figure. The accumulated measurement errors in the IMU and FK systems indicate that the chair drives at a distance of 0.3 m from the object, which is half the width of the wheel chair; according to these measurements, the chair should have collided with the object. In contrast, the trajectory representing the NAG shows its efficient performance in avoiding the object and reaching the end point, while the trajectories of the other systems show distance errors at the end point.
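The priority switching described above can be sketched as a simple behavior arbiter: collision avoidance overrides position control whenever any ultrasonic reading drops below a safety threshold. The 0.3 m threshold below is an assumed value, not taken from the paper.

```python
# Priority-based behavior arbitration: collision avoidance wins whenever
# any of the eight ultrasonic sensors reports an obstacle closer than an
# (assumed) 0.3 m safety threshold; otherwise position control drives.
SAFE_DISTANCE = 0.3  # metres, assumed

def select_behavior(ultrasonic_readings):
    if min(ultrasonic_readings) < SAFE_DISTANCE:
        return "collision_avoidance"
    return "position_control"

print(select_behavior([1.2, 0.9, 0.25, 1.5, 2.0, 1.1, 0.8, 1.7]))
print(select_behavior([1.2] * 8))
```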
[Figure omitted. See PDF.]
Fig 19. NAG performance during collision avoidance behavior.
https://doi.org/10.1371/journal.pone.0169036.g019
6 Conclusion
A novel method was developed to fuse the data of two internal sensors (rotary encoders and an IMU) to match the measurements of an external camera tracking system. This fusion system generates accurate coordinates for a special-needs wheel chair. The proposed method uses neural networks to tune the parameters of an analogical gate that fuses the internal sensors' position data. The new method, the neuro-analogical gate (NAG), was applied to two experimental data sets: the first driving in straight lines within a known apartment environment, and the second driving in an infinity shape. The error of the NAG trajectory relative to the camera system is around 70%-80% smaller than that of the internal sensors. These results demonstrate the efficient performance of the system.
Supporting Information
[Figure omitted. See PDF.]
S1 File. Tuned operating points.
This file includes the operating points used to tune the analogical gates and to train the neural network.
https://doi.org/10.1371/journal.pone.0169036.s001
(XLSX)
Acknowledgments
The authors would like to thank the students of the Electric and Control Department for helping to develop the V.01 prototype of the wheel chair. They would also like to thank the Arab Academy for Science, Technology and Maritime Transport for funding the project and providing laboratory space.
Author Contributions
1. Conceptualization: AKE.
2. Data curation: AKE MAE.
3. Formal analysis: AKE MAE EEz.
4. Funding acquisition: AKE MAE.
5. Investigation: AKE.
6. Methodology: AKE.
7. Project administration: AKE.
8. Resources: AKE MAE.
9. Software: AKE MAE.
10. Supervision: AKE.
11. Validation: AKE MAE.
12. Visualization: AKE MAE.
13. Writing – original draft: AKE.
14. Writing – review & editing: AKE MAE EEz.
Citation: El-Shenawy AK, ElSaharty MA, zakzouk EE (2017) Neuro-Analogical Gate Tuning of Trajectory Data Fusion for a Mecanum-Wheeled Special Needs Chair. PLoS ONE 12(1): e0169036. https://doi.org/10.1371/journal.pone.0169036
1. Basma M, El-Basioni M, Abd El-Kader SM, Eissa H Independent living for persons with disabilities and elderly people using smart home technology. International Journal of Application or Innovation in Engineering and Management. 2014 Apr; 3(4):11–28.
2. Woochun J. A Study on development of improvement guidelines of smart accessibility for the disabled. Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities and Sociology. 2015 Oct; 5(5): 47–58.
3. Masahiro S, Takamasa I, Koji K, Chandraprakash S, Norihiro H. Effectiveness of social behaviors for autonomous wheelchair robot to support elderly people in japan. PLoS ONE.2015 May; 10(5): e0128031.
4. Bonci A, Longhi S, Monteriù A, Vaccarini M. Navigation system for a smart wheelchair. Journal of Zhejiang University SCIENCE. 2005; 6A(2):110–117.
5. Wang S, Huosheng H, McDonald-Maier K Bézier curve based trajectory planning for an intelligent wheelchair to pass a doorway’ Proceeding of International Conference on Control (CONTROL). 2012 Sept;:339–344.
6. Morales Y, Kallakuri N, Miyashita T, Shinozawa K, Hagita N. Semi-Autonomous Wheelchair Navigation: Towards Brain-Controlled Systems. Robot Society of Japan, 2012.
7. Vanhooydonck D, Demeester E, Hüntemann A, Philips J. Adaptable navigational assistance for intelligent wheelchairs by means of an implicit personalized user model. Robotics and Autonomous Systems. 2010 Aug; 58(8): 963–977.
8. Leishman F, Horn O, Bourhis G Smart wheelchair control through a deictic approach Robotics and Autonomous Systems. 2010 Aug; 58(8): 1149–1158.
9. Habert O, Pruski A Cooperative construction and maintenance of maps for autonomous navigation Robotics and Autonomous Systems. 1997 Oct; 21(4): 341–353.
10. Tabar AM, Keshavarz A, Hamid H Smart home care network using sensor fusion and distributed vision-based reasoning. VSSN’06 Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks. 2006 Oct;: 145–154
11. Ahmadi A, Shojafar M, Hajeforosh SF, Dehghan M, Singhal M An efficient routing algorithm to preserve k-coverage in wireless sensor networks. Journal of Supercomputing. 2013 Dec; 68(2):599–623
12. Sun Z, Zhang Y, Nie Y, Wei W, Jaime L. CASMOC: a novel complex alliance strategy with multi-objective optimization of coverage in wireless sensor networks. Wireless Network. 2016 Feb;:1–22
13. Naranjo PGV, Shojafar M, Mostafaei H, Pooranian Z, Baccarelli E P-SEP: a prolong stable election routing algorithm for energy-limited heterogeneous fog-supported wireless sensor networks. The Journal of Supercomputing. 2016 Jun,:1–23.
14. Yang J, Xu R, Lv Z, Song H. Analysis of Camera Arrays Applicable to the Internet of Things. Sensors (Basel). 2016 Mar; 16(3):421, 1–12. pmid:27011189
15. Aznar F, Pujol FA, Pujol M, Rizo R, Pujol M. Learning Probabilistic Features for Robotic Navigation Using Laser Sensors. PLoS One. 2014; 9(11):e112507. pmid:25415377
16. Featherstone R, Orin D. Robot Dynamics: Equations and Algorithms. IEEE International Conference on Robotics and Automation. 2000 Apr:826–834.
17. Holmberg R, Khatib O. Development of a Holonomic Mobile Robot for Mobile Manipulation Tasks. The International Journal of Robotics Research. 2000 Nov; 19(11):1066–1074.
18. El-Shenawy A, Wagner A, Badreddin E. Controlling a Holonomic Mobile Robot With Kinematics Singularities. The 6th World Congress on Intelligent Control and Automation. 2006 Jun:8270–8274.
19. El-Shenawy A, Wellenreuther A, Baumgart A, Badreddin E. Comparing Different Holonomic Mobile Robots. IEEE International Conference on Systems, Man, and Cybernetics. 2007 Oct; Montreal:1584–1589.
20. Diegel O, Badve A, Bright G, Potgeiter J, Tlale S. Improved Mecanum Wheel Design for Omni-directional Robots. Proc. Australian Conference on Robotics and Automation. 2002 Nov; Auckland:27–29.
21. Koestler A, Bräunl T. Mobile Robot Simulation with Realistic Error Models. 2nd International Conference on Autonomous Robots and Agents. 2004 Dec; Palmerston North, New Zealand:46–51.
22. Xu P. Mechatronics Design of a Mecanum Wheeled Mobile Robot. Cutting Edge Robotics. Germany: InTech; 2005.
23. El-Shenawy A, Elsaharty M, Zakzouk E. Navigation and Control of Mecanum Wheeled Chair for Handicaps. International Review of Mechanical Engineering (I.RE.M.E.). 2014 Sep; 8(5):872–883.
24. Badreddin E. Fuzzy relations for behavior-fusion of mobile robots. Proceedings of the IEEE Conference on Robotics and Automation. 1994 May; California:8–13.
25. Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press; 1986:318–362.
26. Karsoliya S. Approximating Number of Hidden layer neurons in Multiple Hidden Layer BPNN Architecture. International Journal of Engineering Trends and Technology. 2012; 3(6):714–717.
27. Boger Z, Guterman H. Knowledge extraction from artificial neural network models. IEEE Systems, Man, and Cybernetics Conference. 1997 Oct; FL:3030–3035.
28. Panchal G, Ganatra A, Kosta YP, Panchal D. Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network. International Journal of Computer Theory and Engineering. 2011 Apr; 3(2):332–337.
29. Panchal F, Panchal M. Approximating Number of Hidden layer neurons in Multiple Hidden Layer BPNN Architecture. International Journal of Computer Science and Mobile Computing. 2014 Nov; 3(11):455–464.
© 2017 El-Shenawy et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Trajectory tracking of mobile wheeled chairs using internal shaft encoders and an inertial measurement unit (IMU) exhibits several complications and accumulated errors due to wheel slippage, offset drift and integration approximations. These errors become apparent when the localization results from such sensors are compared against a camera tracking system. Over long trajectories the errors accumulate into significant deviations, making data from these sensors unreliable for tracking. Meanwhile, an external camera tracking system is not always a feasible solution, depending on the implementation environment. This paper presents a novel sensor fusion method that combines the measurements of internal sensors to accurately predict the location of the wheeled chair in an environment. The method introduces a new analogical OR gate whose parameters are tuned using a multi-layer feedforward neural network, denoted the “Neuro-Analogical Gate” (NAG). The resulting system minimizes the deviation error caused by the sensors, accurately tracking the wheeled chair’s location without requiring an external camera tracking system. The fusion methodology was tested on a prototype Mecanum wheel-based chair, and significant improvements in tracking response, error and performance were observed.
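To make the abstract's idea concrete, the sketch below illustrates one plausible reading of a "neuro-analogical gate": a small feedforward network whose sigmoid output acts as an analogical-OR blending weight between two noisy estimates of the same position coordinate (encoder vs. IMU). The class name, network shape, and random weights are purely illustrative assumptions for exposition; the paper's actual NAG uses parameters tuned via training, which are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NeuroAnalogicalGate:
    """Illustrative sketch only: a tiny feedforward net that maps two
    sensor estimates to a blending weight alpha in (0, 1). The weights
    below are random placeholders, not the paper's tuned parameters."""

    def __init__(self, n_hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, 2))  # input -> hidden
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(1, n_hidden))  # hidden -> gate
        self.b2 = np.zeros(1)

    def gate(self, enc, imu):
        # Hidden layer sees both sensor readings; the scalar output in
        # (0, 1) acts as an analogical-OR weight on the two estimates.
        h = sigmoid(self.W1 @ np.array([enc, imu]) + self.b1)
        alpha = sigmoid(self.W2 @ h + self.b2)[0]
        return alpha * enc + (1.0 - alpha) * imu

nag = NeuroAnalogicalGate()
# Two noisy estimates of the same coordinate; the fused value is a
# convex combination, so it always lies between the two inputs.
fused = nag.gate(enc=1.02, imu=0.95)
```

Because alpha is bounded in (0, 1), the gate can never produce a fused estimate outside the interval spanned by its inputs, which is one way a learned blending stage can suppress drift from either sensor alone.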