1. Introduction
Multi-robot formation control is one of the most important areas of research in Multi-Robot Systems [1,2], due to its broad practical applications such as joint handling [3], cooperative rescue [4], group tracking, and exploration [5]. The goal of multi-robot formation control is to maintain a specified geometrical shape of a group of robots by adjusting the pose (position and orientation) of each robot [6], which generally follows the process of forming, maintaining, and switching a formation. Current research on multi-robot formation mainly focuses on formation control theory [7,8,9], while much of it lacks physical experimental validation of the proposed theories and algorithms because no suitable experimental testbed is available, especially a general and low-cost indoor multi-robot formation experimental platform.
The positioning system is the basis of a multi-robot formation experimental platform, as it determines the position and orientation of the robots in real time. Currently, the indoor positioning systems used in multi-robot formation can be roughly classified into two main categories [10]—relative and absolute positioning systems [11]. In relative positioning systems, the robots' pose is obtained from onboard sensors or from other robots. For example, the pose information has been obtained from encoders mounted on the robots [12], or a laser scanner mounted on a robot has been used to estimate the relative position of other robots [13]. However, considering formation scalability, the overall cost of the system increases as more robots are added, since additional cameras or laser range finders are required. In addition, relative positioning based on onboard sensors accumulates error over time, which distorts the formation.
In absolute positioning systems, the pose of a robot is determined by measuring and recognizing landmarks in the indoor environment. In some research [6,14,15], cameras were mounted on the ceiling or overhead to estimate the position and orientation of robots by measuring the positions of markers on the robots. For example, Guinaldo [16] designed a positioning system with a single camera on the ceiling, and each robot was distinguished by three high-brightness LEDs. However, since cameras are susceptible to lighting conditions and dynamic environments, the image processing involved, including landmark recognition and feature extraction, is not robust enough for positioning. To improve positioning precision, more cameras must be installed. In view of this, Zhang [17] proposed a vision system with 24 OptiTrack cameras to obtain each robot's position information, but this significantly increases the overall cost. To reduce cost, more economical positioning technologies are needed to replace vision systems. In addition, technologies such as Radio-Frequency Identification (RFID) [18], Ultra-Wideband (UWB) [19], and Bluetooth [10] are not suitable for multi-robot formation due to their low precision. Recently, ultrasonic systems have been shown to achieve a better trade-off between cost and precision, which has attracted the attention of many researchers and practitioners [20,21].
Some formation experiments have been implemented on existing robot platforms, such as Koala from K-team [22] or TurtleBot3 from TurtleBot [23], which are often relatively complex and expensive. On the other hand, some commercially available off-the-shelf solutions, like the Create2 robot from iRobot [24] and the LEGO robot used in Reference [17], often lack onboard processing and networking [25]. In more recent work, Kilobots were designed for testing collective algorithms on large groups [26], but they mainly serve swarm robotics research. Few of the existing platforms are specifically designed for multi-robot formation control. For this reason, it is reasonable to design and manufacture our own custom robots. What is more, effective formation and coverage control of mobile robots also requires a reliable and powerful wireless communication infrastructure for exchanging information among the robots [27]. Since high-performance wireless local area network (WLAN) technology is relatively low cost, its use for wireless control of multi-robot systems has become a practical proposition [28].
In this paper, the formation of wheeled mobile robots is selected as the research object, and we propose a general and low-cost multi-robot formation experimental platform to facilitate the experimental validation of theories and methods in multi-robot formation research. Our multi-robot formation platform contains three key parts—the indoor global-positioning system, the multi-robot communication system, and the wheeled mobile robot hardware. The real-time and precise pose of every robot is provided by the indoor global-positioning system, where the position is obtained by the Marvelmind Indoor Navigation System based on ultrasound and the orientation is obtained by the MPU-6050. The mobile robots are built in-house, based on an STM32 embedded microcontroller and stepper motors. In addition, a wireless communication network is established for exchanging information among robots based on the ESP8266 Wi-Fi communication module. The control is distributed in the sense that each robot decides, by itself, when to transmit its state, and the control law is computed locally. Finally, we validate the platform using a leader–follower formation control strategy and complete a series of experiments on formation forming, switching, and maintaining under external disturbance.
The rest of the paper is organized as follows. Section 2 introduces the indoor multi-robot formation platform. Section 3 explains the leader–follower formation control and setup of the experiment platform. Experimental results will be presented and discussed in Section 4. In Section 5, the main contributions of the paper are summarized, and future research directions are highlighted.
2. Indoor Multi-Robot Formation Platform
In this section, we introduce an indoor multi-robot formation platform that consists of common and low-cost components. It can be adopted and replicated by researchers who are interested in multi-robot formation or mobile robots.
2.1. Platform Architecture and Components
Our multi-robot formation platform consists of three components—an indoor global-positioning system, the robot hardware, and the multi-robot communication technology. Figure 1 shows the architecture and the components of the multi-robot formation platform, which are the following:
1. A personal computer (PC) to monitor the indoor global-positioning system and to collect and record the robots' poses.
2. The modem of the Marvelmind Indoor Navigation System, connected to the PC through a Universal Serial Bus (USB) port.
3. A USB server to link the Dashboard with the PC.
4. The Marvelmind Dashboard, the monitoring software of the Marvelmind Indoor Navigation System.
5. A mobile beacon of the Marvelmind Indoor Navigation System mounted on each robot, which receives position information and sends it to the robot's controller.
6. A Wi-Fi communication module ESP8266 installed on each robot to transmit and receive data as a Transmission Control Protocol (TCP) client.
7. The mobile robot, equipped with a microcontroller and other sensors.
8. A Wi-Fi communication module ESP8266 connected to the PC to transmit and receive data as the TCP server.
9. A USB-TTL (Transistor-Transistor Logic) converter to connect the ESP8266 communication module to the PC.
10. The data collection application, developed in LabVIEW and connected to the PC through its I/O API, which collects the robots' poses and records them in data files.
11. The MPU-6050 device, which combines a 3-axis gyroscope and a 3-axis accelerometer on the same silicon die together with an onboard Digital Motion Processor, and which serves as the source of each robot's orientation.
2.2. Indoor Global Positioning System
The indoor global-positioning system for the multi-robot platform contains the Marvelmind Indoor Navigation System and the MPU-6050. The former is an off-the-shelf indoor navigation system based on ultrasound, which can provide high-precision (±2 cm) indoor coordinates for mobile robots [29] and can reach an update rate of 16 Hz. It mainly contains three core components, i.e., the modem (router), the mobile beacons, and the stationary beacons, as shown in Figure 2.
The MPU-6050 devices combine a 3-axis gyroscope and a 3-axis accelerometer on the same silicon die, together with an onboard Digital Motion Processor, which processes complex 6-axis Motion Fusion algorithms. The device can access external magnetometers or other sensors through an auxiliary master I²C bus, allowing it to gather a full set of sensor data without intervention from the system processor. The MPU-6050 is mounted on each mobile robot and provides the robot's orientation.
The real-time and precise pose of every robot is achieved by the indoor global-positioning system, where the position is obtained by the Marvelmind Indoor Navigation System based on the ultrasonic system and the orientation is obtained by MPU-6050.
The modem is the central controller of the indoor global-positioning system: it not only communicates with the stationary beacons but also calculates the positions of the mobile beacons and sends them back to the mobile beacons. Mobile beacons are installed on the mobile robots; they receive position information from the modem and, at the same time, interact with the microcontroller of the mobile robot. Stationary beacons are mounted on the walls and measure the distances to the mobile beacons via ultrasonic pulses (time-of-flight). The position of each mobile robot is calculated from the propagation delays of the ultrasonic signal to a set of stationary beacons using the trilateration method.
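To make the trilateration step concrete, the following Python sketch estimates a mobile beacon's 2D position by linearized least squares from time-of-flight ranges to the stationary beacons. It is only an illustration of the principle: the beacon coordinates, the speed of sound, and the example measurement are assumed values, not parameters of the Marvelmind system, whose internal solver is not described here.

```python
import numpy as np

# Hypothetical stationary-beacon coordinates (m) on the walls of a 5 m x 6 m room.
BEACONS = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 6.0], [0.0, 6.0]])
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C (assumed)

def trilaterate(tof_seconds):
    """Estimate the 2D position of a mobile beacon from ultrasonic time-of-flight
    measurements to the stationary beacons, using linearized least squares."""
    d = SPEED_OF_SOUND * np.asarray(tof_seconds)   # ranges to each beacon
    # Subtracting the first range equation removes the quadratic terms:
    # 2(xi - x0)x + 2(yi - y0)y = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
    A = 2.0 * (BEACONS[1:] - BEACONS[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(BEACONS[1:]**2, axis=1) - np.sum(BEACONS[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # (x, y) in metres

# Example: noise-free ranges consistent with a beacon near (2.0, 3.0)
true_pos = np.array([2.0, 3.0])
tof = np.linalg.norm(BEACONS - true_pos, axis=1) / SPEED_OF_SOUND
print(trilaterate(tof))  # approximately [2.0, 3.0]
```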
Each mobile robot is equipped with a mobile beacon that communicates with the STM32 microcontroller unit (MCU) through a serial port. Through the decoding process, the mobile robot obtains its real-time position.
The other part of the indoor global-positioning system is the MPU-6050 installed on each mobile robot, which provides the robot's orientation in the forward direction. It is an integrated 6-axis motion-tracking device containing a 3-axis gyroscope, a 3-axis accelerometer, and a Digital Motion Processor [30]. The pose of each robot is composed of the position information from the Marvelmind Indoor Navigation System and the orientation information from the MPU-6050. Each mobile robot is equipped with an MPU-6050, and the orientation of the robot is obtained in real time through the I²C bus.
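As a rough illustration of how a heading can be derived from the MPU-6050, the sketch below reads the Z-axis gyroscope over I²C and integrates it into a relative yaw angle. The register addresses and the ±250 °/s sensitivity follow the public MPU-6050 register map, while the I²C bus number and sample period are assumptions; the platform itself runs equivalent C code on the STM32 and may instead rely on the onboard Digital Motion Processor, so this is not the authors' implementation.

```python
import time
from smbus2 import SMBus  # assumes a Linux host with an I2C bus; the STM32 firmware differs

MPU_ADDR = 0x68        # default MPU-6050 I2C address (AD0 pin low)
PWR_MGMT_1 = 0x6B      # power management register
GYRO_ZOUT_H = 0x47     # high byte of the Z-axis gyroscope output
GYRO_SENS = 131.0      # LSB per deg/s at the default +/-250 deg/s full scale

def read_gyro_z(bus):
    """Read the signed 16-bit Z-axis angular rate and convert it to deg/s."""
    hi, lo = bus.read_i2c_block_data(MPU_ADDR, GYRO_ZOUT_H, 2)
    raw = (hi << 8) | lo
    if raw & 0x8000:            # two's complement sign correction
        raw -= 1 << 16
    return raw / GYRO_SENS

def track_yaw(duration_s=5.0, dt=0.02):
    """Integrate the gyro Z rate into a relative yaw angle (drifts over time)."""
    yaw = 0.0
    with SMBus(1) as bus:                              # bus number is an assumption
        bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)   # wake the device from sleep
        t_end = time.time() + duration_s
        while time.time() < t_end:
            yaw += read_gyro_z(bus) * dt
            time.sleep(dt)
    return yaw

if __name__ == "__main__":
    print("relative yaw (deg):", track_yaw())
```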
The coverage area of the indoor global-positioning system is up to 1000 m². Taking the 5 m × 6 m indoor experimental site as an example, the positioning system can track and compute the robots' position information at up to 25 Hz with a differential precision of ±2 cm in robot position and ±1° in robot orientation.
2.3. Multi-Robot Communication and Monitoring System
For the reliable execution of coordination tasks by multiple robots, communication between robots is a key issue, and in order to observe the experimental process, the in-process data should also be recorded. In this paper, a TCP/IP wireless communication infrastructure is selected to support the communication among robots and the data collection during experiments.
As a low-power and highly integrated Wi-Fi module, the ESP8266 incorporates firmware and provides a simple means of wireless communication. The module has complete and self-contained Wi-Fi networking capability and can operate either standalone or as a slave to a host MCU [31]. It is therefore suitable to support communication in our multi-robot formation platform.
Figure 3 illustrates the network architecture, showing the data collection terminal and a number of mobile robots. The Wi-Fi module ESP8266 mounted on each robot communicates with the microcontroller via interrupts over the UART at a speed of 38,400 bps. In addition, another ESP8266 module is connected to the PC as a wireless data collection terminal through the USB-TTL converter.
In order to communicate with each other, the ESP8266 modules need to be connected to the same Wi-Fi network. Within this network, communication among robots is established when one of the robots acts as a TCP server and the others join it as clients. The point-to-point connection between the server robot and the client robots avoids information loss, and the number of server robots can be extended according to task requirements. Further details about this application to leader–follower formation are discussed in the next section.
The ESP8266 supports the TCP/IP protocol and fully complies with the 802.11 b/g/n WLAN MAC protocol. The master host can achieve full-duplex data transmission with up to four slaves. The clock frequency in master mode is up to 80 MHz, and the clock frequency in slave mode is up to 20 MHz.
As shown on the computer screen in Figure 4, a real-time monitoring system has been developed in LabVIEW to record the pose information and monitor each robot's trajectory. The ESP8266 module connects to the PC through the USB-TTL converter, so the PC can obtain the information collected by the ESP8266 module.
The real-time monitoring system reads the information produced during the multi-robot task using LabVIEW's serial-port read function and records each robot's trajectory through an image-drawing VI. To facilitate data analysis and processing, all strings in the buffer are sorted according to the title of each data frame and recorded in a table.
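For illustration, the short Python sketch below mimics what the LabVIEW application does with the incoming buffer: it splits the stream into frames, groups them by the frame title (robot ID), and appends them to per-robot tables. The "ID,x,y,theta" frame format is a hypothetical example, since the actual framing used on the platform is not specified here.

```python
from collections import defaultdict

def sort_frames(buffer_text):
    """Group pose frames by their title (robot ID).

    Assumes a hypothetical frame format "ID,x,y,theta" separated by newlines,
    e.g. "F1,2.84,0.61,1.571"; the real platform's framing may differ.
    """
    tables = defaultdict(list)
    for line in buffer_text.splitlines():
        parts = line.strip().split(",")
        if len(parts) != 4:
            continue                      # skip incomplete or corrupted frames
        robot_id, x, y, theta = parts
        tables[robot_id].append((float(x), float(y), float(theta)))
    return tables

buf = "L,3.26,1.25,1.571\nF1,2.84,0.61,1.571\nF2,3.88,0.56,1.571\n"
for rid, rows in sort_frames(buf).items():
    print(rid, rows)
```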
In the multi-robot formation platform, the communication method is distributed, and the distributed architecture is designed for scalability, even though only a few robots are used to form a team here. Because of the distributed architecture and its communication method, there is no hard limit on the number of follower robots.
2.4. Wheeled Mobile Robot Hardware
In our multi-robot formation platform, for more practicality, lower cost, and better compatibility, we chose to design and manufacture our own custom mobile robots. The mobile robot hardware, including eight components, is shown in Figure 4, and each module installed on the mobile robot is listed in Table 1.
The differential-drive mobile robot serves as the general platform for multi-robot formation control experiments. It is equipped with an STM32 microcontroller unit, a mobile beacon, a six-axis motion-tracking device (MPU-6050), two stepper motors, and power equipment.
The robot is controlled by a low-cost, high-performance STM32 microcontroller unit. Its core is an ARM 32-bit Cortex-M4 CPU with FPU running at 168 MHz, with 1 Mbyte of Flash memory and 192 KB of SRAM [32]. The STM32 is selected as the core control unit of the mobile robot and executes the formation controller algorithm. In our multi-robot formation platform, the control is distributed, so each robot has its own microcontroller. With this distributed approach, the failure of a single robot does not paralyze or confuse the operation of the whole multi-robot system, and there is no central processor whose fault could do so. In addition, it is easy to add or remove robots during operation.
The STM32 provides rich communication interfaces. Through its serial ports, it interacts with the indoor global-positioning system; at the same time, it drives and controls the other onboard devices, such as the communication module and the motion-tracking sensor.
The platform adopts a distributed framework, and each robot is equipped with a microprocessor. Thus, the control is distributed in the sense that each robot makes, by itself, the decision of when to transmit its state, and the control law is computed locally.
The stepper motors mount directly to the bottom circuit board and are controlled with PWM generated by the microcontroller. The 41 mm wheels give the robots a maximum speed of 0.3 m/s, while the minimum controllable speed is around 0.03 m/s.
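As a sketch of how commanded velocities could map to motor commands, the following Python function converts a body linear and angular velocity into left/right wheel speeds via differential-drive kinematics and then into stepper pulse rates. The 41 mm wheel diameter and the velocity limits come from the text; the axle track and the steps-per-revolution (including microstepping) are assumptions for illustration only, and the actual PWM generation runs in C on the STM32.

```python
import math

WHEEL_DIAMETER = 0.041   # m, from the robot description
AXLE_TRACK = 0.16        # m, distance between the two wheels (assumed)
STEPS_PER_REV = 200 * 8  # full steps x microstepping (assumed driver setting)
V_MAX, W_MAX = 0.3, math.pi / 6   # velocity limits used in the experiments

def wheel_step_rates(v, w):
    """Convert body velocities (v in m/s, w in rad/s) into stepper pulse rates (Hz)."""
    v = max(-V_MAX, min(V_MAX, v))        # saturate to the platform limits
    w = max(-W_MAX, min(W_MAX, w))
    v_left = v - w * AXLE_TRACK / 2.0     # differential-drive kinematics
    v_right = v + w * AXLE_TRACK / 2.0
    circumference = math.pi * WHEEL_DIAMETER
    to_steps = STEPS_PER_REV / circumference   # steps per metre of wheel travel
    return v_left * to_steps, v_right * to_steps

# Example: 0.05 m/s straight ahead gives equal pulse rates on both wheels
print(wheel_step_rates(0.05, 0.0))
```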
3. Configure the Platform for the Leader–Follower Formation
To verify the positioning accuracy, the communication stability, and the microprocessor's performance in executing the controller algorithm in a real experimental environment, the multi-robot formation platform is applied to leader–follower formation control.
3.1. Leader–Follower Formation Control
As the typical representation of multi-robot formation, leader–follower formation control aims to maintain a desired separation $l$ and a desired orientation $\varphi$ between the leader and the follower, which is called the $l$–$\varphi$ control strategy described in Reference [33].
Consider a group of $n$ non-holonomic wheeled mobile robots. We denote by $l_{iL}\in\mathbb{R}$ the actual distance between the follower robot $F_i$ and the leader robot $L$, and by $l_{iL}^{d}\in\mathbb{R}$ the desired distance; $\varphi_{iL}\in[-\pi,\pi]$ is the actual bearing angle of the follower measured from the leader's heading direction, and $\varphi_{iL}^{d}\in[-\pi,\pi]$ is the desired bearing angle. The follower robot's desired pose $p_i^{d}=(x_i^{d},y_i^{d},\theta_i^{d})^{T}$ with respect to the leader can be obtained using
$$
\begin{cases}
x_i^{d} = x_L - l_{iL}^{d}\cos\!\left(\theta_L + \varphi_{iL}^{d}\right),\\
y_i^{d} = y_L - l_{iL}^{d}\sin\!\left(\theta_L + \varphi_{iL}^{d}\right),\\
\theta_i^{d} = \theta_L,
\end{cases}
\tag{1}
$$
where $(x_L, y_L, \theta_L)^{T}$ is the pose of the leader robot.
Comparing the follower robot's desired pose $p_i^{d}=(x_i^{d},y_i^{d},\theta_i^{d})^{T}$ with the follower robot's current pose $p_i=(x_i,y_i,\theta_i)^{T}$, the tracking error can be described as
$$
(x_e, y_e, \theta_e)^{T} = \left(x_i - x_i^{d},\; y_i - y_i^{d},\; \theta_i - \theta_i^{d}\right)^{T},
\tag{2}
$$
and the controller for the follower robot is selected from Reference [34]:
$$
\begin{bmatrix} \nu_i \\ \omega_i \end{bmatrix}
=
\begin{bmatrix}
\nu_i^{d}\cos\theta_e + k_1 x_e \\
\omega_i^{d} + k_2\,\nu_i^{d}\, y_e + k_3\,\nu_i^{d}\sin\theta_e
\end{bmatrix},
\tag{3}
$$
where $(\nu_i,\omega_i)^{T}$ are the control inputs (the linear and angular velocities) of follower $F_i$, and $(\nu_i^{d},\omega_i^{d})^{T}$ is the desired velocity of the follower.
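To make the computation concrete, the Python sketch below evaluates Equations (1)–(3) for a single follower: it computes the desired pose from the leader's pose, forms the tracking error, and returns the control inputs. The gains mirror those used in the experiments of Section 4; the error is taken in world coordinates exactly as written in Equation (2) (an implementation may instead express it in the follower's body frame), and the example value of the desired bearing $\varphi_{iL}^{d}$ is an assumption.

```python
import math

def desired_pose(leader_pose, l_d, phi_d):
    """Equation (1): follower's desired pose relative to the leader."""
    xL, yL, thL = leader_pose
    xd = xL - l_d * math.cos(thL + phi_d)
    yd = yL - l_d * math.sin(thL + phi_d)
    return xd, yd, thL

def follower_control(leader_pose, follower_pose, l_d, phi_d,
                     v_d, w_d, k1=1.0, k2=0.6, k3=0.5):
    """Equations (2)-(3): tracking error and follower control inputs (v_i, w_i).

    The error is written in world coordinates exactly as in Eq. (2); a practical
    implementation usually rotates it into the follower's body frame first.
    """
    xd, yd, thd = desired_pose(leader_pose, l_d, phi_d)
    xi, yi, thi = follower_pose
    xe, ye, the = xi - xd, yi - yd, thi - thd          # Eq. (2)
    v = v_d * math.cos(the) + k1 * xe                  # Eq. (3), first row
    w = w_d + k2 * v_d * ye + k3 * v_d * math.sin(the) # Eq. (3), second row
    return v, w

# Example with the triangle-formation setup (l_d = 0.6 m; phi_d here is assumed)
leader = (3.26, 1.25, math.pi / 2)
follower = (2.84, 0.61, math.pi / 2)
print(follower_control(leader, follower, 0.6, -3 * math.pi / 4, v_d=0.05, w_d=0.0))
```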
3.2. Experiment Setup for Leader–Follower Formation Control
Each robot gets its pose in real time via the indoor global-positioning system. Specifically, the actual position of each robot is obtained through the Marvelmind Indoor Navigation System, and the actual orientation is obtained through the MPU-6050.
To form, maintain, and switch a certain geometric formation based on the controller, each follower robot needs the leader's actual pose in addition to its own, so the leader's pose must be transmitted to each follower robot. The communication among robots is established as shown in Figure 5, and the detailed implementation process is as follows (a socket-level sketch is given after the steps):
Step 1. The leader robot and follower robots connect to the same Wi-Fi network.
Step 2. TCP-Server1 and TCP-Server2 are created by the leader robot and the host computer, respectively.
Step 3. Each follower robot, as a TCP client, connects to both TCP-Server1 and TCP-Server2. The pose of the leader robot is transmitted over the Wi-Fi network from TCP-Server1 to each TCP client. Meanwhile, the pose information, including the follower's pose and the leader's pose, is transmitted from each TCP client to TCP-Server2 on the host computer.
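The Python sketch below illustrates the same topology at the socket level: the leader process acts as TCP-Server1 and streams its pose to connected follower clients, and each follower connects as a TCP client and hands the received pose to its local controller. It is only a stand-in for the ESP8266/STM32 implementation; the IP address, port, and one-line "x,y,theta" message format are assumptions.

```python
import socket
import threading
import time

LEADER_ADDR = ("192.168.4.10", 5000)  # hypothetical address/port of TCP-Server1 on the leader

def leader_server(get_pose, period=0.1):
    """TCP-Server1: accept follower clients and stream the leader's pose to them."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LEADER_ADDR)
    srv.listen()
    clients = []

    def accept_loop():
        while True:
            conn, _ = srv.accept()
            clients.append(conn)

    threading.Thread(target=accept_loop, daemon=True).start()
    while True:
        msg = ("%.3f,%.3f,%.3f\n" % get_pose()).encode()  # assumed "x,y,theta" framing
        for c in list(clients):
            try:
                c.sendall(msg)
            except OSError:
                clients.remove(c)  # drop followers that have disconnected
        time.sleep(period)

def follower_client(on_leader_pose):
    """TCP client on a follower: receive the leader's pose and pass it to the controller."""
    with socket.create_connection(LEADER_ADDR) as conn, conn.makefile() as stream:
        for line in stream:
            x, y, theta = map(float, line.strip().split(","))
            on_leader_pose((x, y, theta))
```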
During the controller calculation, each follower robot works out its desired pose according to Equation (1) and then calculates the errors according to Equation (2). Next, through Equation (3), it obtains the control inputs $\nu_i$ and $\omega_i$, i.e., the follower's linear and angular velocities; lastly, the followers move into the desired formation. The controller calculation process of each follower robot is shown in Figure 6.
4. Experiments of Leader–Follower Formation Control
In order to validate the effectiveness and robustness of our multi-robot formation platform, we perform two types of leader–follower formation experiments: trajectory tracking under a deterministic environment and formation maintaining under external disturbance. In our experiments, we select the follower robots' controller from Reference [34], described in detail as Equation (3), with parameters $k_1 = 1$, $k_2 = 0.6$, and $k_3 = 0.5$. Based on this controller setting, Figure 7 shows the real scenario of the leader–follower formation experiments. The process of each experiment is illustrated by snapshots from the video, in which the actual trajectory and time are displayed in the Marvelmind Dashboard. The distance error of each follower robot, $\sqrt{x_e^2 + y_e^2}$, and its angular error, $\theta_e$, are also recorded by the real-time monitoring system.
4.1. The Experiment of Trajectory Tracking of Leader–Follower Formation under Deterministic Environment
In this section, a triangle formation-switching experiment is first presented to show the effectiveness of the proposed platform. Then, a two-robot line formation tracking a circular trajectory and a four-robot diamond formation are also performed to verify the scalability of the proposed platform.
4.1.1. Experiment of Triangle Formation Switching
We use the platform to perform a formation switch from a triangle formation ($l$ = 0.60 m) to another triangle formation ($l$ = 0.30 m) while tracking a straight line. The initial pose of the leader robot L is (3.26 m, 1.25 m, $\pi/2$), with linear velocity $\nu = 0.05$ m/s and angular velocity $\omega = 0$ rad/s. The initial poses of the follower robots F1 and F2 are (2.84 m, 0.61 m, $\pi/2$) and (3.88 m, 0.56 m, $\pi/2$), respectively. In addition, the initial velocities of the three robots are zero. The velocity constraints of all the robots are set as $\nu_{\max} = 0.3$ m/s and $\omega_{\max} = \pi/6$ rad/s.
The experimental process in Figure 8 shows the leader robot leading the two follower robots F1 and F2, forming and maintaining the desired triangle formation. The formation begins to switch into the second triangle formation at T = 43 s, and after 25 s the new desired formation is formed autonomously. From Figure 9a,b, we can see that at T = 43 s the distance and angular errors suddenly increased because of the formation switching, but during T = 43–50 s, under the autonomous action of the controller, the follower robots calculated their new control inputs to form the new desired formation, and the distance error decreased rapidly. Finally, the three robots formed the new triangle formation autonomously. In summary, the leader–follower formation based on our platform can be used to implement and verify theories and methods for multi-robot formation control. The experiment video of the triangle formation switching can be found at Supplementary Materials https://youtu.be/7WtsZoNVp5A.
4.1.2. Experiment on Scalability Formation Control
To validate the adaptability of the multi-robot formation platform to complex formations and its scalability, we also consider a two-robot line formation tracking a circular trajectory and a four-robot diamond formation.
Figure 10a shows the leader robot performing a circular path and the follower robot tracking it to form a line formation. Tracking the circular trajectory is challenging for the follower robot because of the continuously changing orientation. The follower robot tracked the circular trajectory smoothly, which demonstrates the adaptability of our multi-robot platform to complex formations. Figure 10b shows the leader robot guiding the three follower robots F1, F2, and F3 to form and maintain a desired diamond formation. These experiments were performed with different numbers of robots, which validates the scalability of the multi-robot formation platform. The experiment video of the circle trajectory of the leader robot with one follower robot can be found at Supplementary Materials https://youtu.be/4caScl5PF_U. The experiment video of the line trajectory of the leader robot with three follower robots realizing a diamond formation can be found at Supplementary Materials https://youtu.be/NYSVTKw46vU.
4.2. The Experiment of Triangle Formation Maintaining with External Disturbance
Considering the complex external environment in practical applications, introducing external disturbances is an effective way to test the anti-interference capability of the platform and thus verify its robustness. The external disturbances include a lateral disturbance and a longitudinal disturbance. In this experiment, the platform controlled the three robots to maintain a triangle formation under external disturbance while tracking a straight line. The initial pose of the leader robot L was (3.24 m, 2.29 m, $\pi/2$), with linear velocity $\nu = 0.05$ m/s and angular velocity $\omega = 0$ rad/s. The initial poses of the follower robots F1 and F2 were (2.61 m, 1.96 m, $\pi/2$) and (3.83 m, 2.10 m, $\pi/2$), respectively. In addition, the initial velocities of the three robots were zero. The velocity constraints of all the robots were set as $\nu_{\max} = 0.3$ m/s and $\omega_{\max} = \pi/6$ rad/s. The experiment video of the line trajectory of a triangle formation maintained under external disturbance can be found at Supplementary Materials https://youtu.be/HGQjjoYARJc.
Figure 11 illustrates the leader–follower formation process under disturbance, with snapshots collected from the video at different times. The leader robot L led the two follower robots F1 and F2 and maintained the desired triangular formation with l = 0.30 m; at T = 32 s and T = 50 s, the follower robot F1 received two external disturbances—the first was longitudinal and the second was lateral. As shown in Figure 12, when F1 was pushed by the external disturbances, its formation tracking errors changed rapidly compared with those of F2, but they eventually converged nearly to zero and the desired triangle-like formation was recovered. These results demonstrate that our multi-robot platform is fault-tolerant: even though one robot of the system faces external disturbances during the formation process, it can recover the desired formation quickly, and the other robots are not affected. Therefore, our platform shows good robustness under external disturbances.
For the above two types of leader–follower formation experiments, our platform can effectively implement existing theories and methods and shows good scalability and robustness, since real-time and accurate pose information is provided by our indoor global-positioning system. Therefore, our platform is expected to serve as an available testbed to evaluate and verify the feasibility and correctness of theoretical methods in multi-robot formation.
5. Conclusions and Future Works
We have proposed a new multi-robot formation experimental platform based on an indoor global-positioning system. This general and low-cost multi-robot formation platform provides the precise and real-time pose of every robot based on the Marvelmind Indoor Navigation System and the six-axis motion-tracking device MPU-6050; the former provides centimeter-level position precision, and the latter provides the orientation of each robot. In addition, the robots exchange information via the ESP8266 Wi-Fi communication module. We then performed and analyzed a set of leader–follower formation experiments on our platform, including formation forming and switching under a deterministic environment, and a formation-maintaining experiment under external disturbance. The results illustrate that our experimental platform can be applied to formation control successfully and can verify the correctness and effectiveness of theoretical methods for robot motion control in multi-robot formation.
Our general and low-cost multi-robot formation platform, based on the indoor positioning system and the three foundation technologies of actual pose acquisition, indoor global positioning, and multi-robot communication, can be used in the fields of multi-robot coordination, formation control, and search and rescue missions.
In the future, based on the indoor global-positioning system, other sensors will be mounted on the robots to provide pose information jointly by means of multi-sensor information fusion, so as to improve the positioning accuracy of the mobile robots. At the same time, the communication system will be improved to handle individual failures caused by complex and unknown environments. In order to effectively reduce the impact of a single robot's failure on the formation, the existing peer-to-peer communication mode will be upgraded to a broadcast communication mode.
Figure 1. The architecture and the components of the multi-robot formation platform.
Figure 2. The multi-robot indoor global-positioning system based on the trilateral measurement.
Figure 8. Snapshots of the video about line trajectory of a triangle formation switching.
Figure 9. The formation tracking errors of the follower robots F1 and F2 in formation-switching experiment; (a) is the distance error of follower robots; (b) is the angular error of follower robots.
Figure 10. (a) Snapshots of the video about the circle trajectory of the leader robot with one follower robot to realize a line formation. (b) Snapshots of the video about the line trajectory of the leader robot with three follower robots to realize a diamond formation.
Figure 11. Snapshots of the video about line trajectory of a triangle formation maintaining with external disturbance.
Figure 12. The formation tracking errors of the follower robots F1 and F2 in the experiment of triangle formation maintaining with external disturbance; (a) is the distance error of follower robots; (b) is the angular error of follower robots.
Table 1. Modules installed on the mobile robot.

ID | Module | Item | Description
---|---|---|---
- | Dimension | - | L × W × H: 21 cm × 18 cm × 8 cm
1 | Microcontroller | STM32F407ZEG6 | Microcontroller unit
2 | Stepper motor | MG42S1 | Differential drive motors
3 | Communication | ESP8266 | Wireless communication unit
4 | Mobile beacon | Marvelmind HW v4.9 | Provides the position of the robot
5 | Orientation | MPU-6050 | Provides the orientation of the robot
6 | Buck module | DC-DC buck module | Voltage converter, 12 V to 5 V
7 | OLED display | 0.96-inch OLED | Display
8 | Switch | - | Power switch
Supplementary Materials
The experiment video of the triangle formation switching can be found in https://youtu.be/7WtsZoNVp5A. The experiment video of circle trajectory of the leader robot with one follower robot can be found in https://youtu.be/4caScl5PF_U. The experiment video of the line trajectory of leader robot with three follower robots to realize a diamond formation can be found in https://youtu.be/NYSVTKw46vU. The experiment video of line trajectory of a triangle formation maintaining with external disturbance can be found in https://youtu.be/HGQjjoYARJc.
Author Contributions
H.Y. designed the experimental framework and provided experimental and financial support; X.W. designed the leader-follower formation control; X.B. executed the experiment; S.Z. analyzed the data; all the authors wrote the paper.
Funding
This research was funded by the National Natural Science Foundation, China (No.51775435), and the Programme of Introducing Talents of Discipline to Universities (B13044).
Conflicts of Interest
The authors declare no conflict of interest.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
1. Wang, G.; Li, D.; Gan, W.; Jia, P. Study on formation control of multi-robot systems. In Proceedings of the Third International Conference on Intelligent System Design and Engineering Applications, Hong Kong, China, 16-18 January 2013; pp. 1335-1339.
2. Chen, H.; Yang, H.A.; Wang, X.; Zhang, T. Formation control for car-like mobile robots using front-wheel driving and steering. Int. J. Adv. Robot. Syst. 2018, 15, 172988141877822.
3. Alonso-Mora, J.; Baker, S.; Rus, D. Multi-robot formation control and object transport in dynamic environments via constrained optimization. Int. J. Robot. Res. 2017, 36, 1000-1021.
4. Eoh, G.; Jeon, J.D.; Choi, J.S.; Lee, B.H. Multi-robot cooperative formation for overweight object transportation. In Proceedings of the IEEE/SICE International Symposium on System Integration, Kyoto, Japan, 20-22 December 2011; pp. 726-731.
5. Yasuda, Y.; Kubota, N.; Toda, Y. Adaptive formation behaviors of multi-robot for cooperative exploration. In Proceedings of the IEEE International Conference on Fuzzy Systems, Brisbane, Australia, 10-15 June 2012; pp. 1-6.
6. Mariottini, G.L.; Morbidi, F.; Prattichizzo, D.; Valk, N.V.; Michael, N.; Pappas, G.; Daniilidis, K. Vision-based localization for leader-follower formation control. IEEE Trans. Robot. 2009, 25, 1431-1438.
7. Yan, Z.; Xu, D.; Chen, T.; Zhang, W.; Liu, Y. Leader-follower formation control of UUVs with model uncertainties, current disturbances, and unstable communication. Sensors 2018, 18, 662.
8. Qian, D.; Tong, S.; Li, C. Leader-following formation control of multiple robots with uncertainties through sliding mode and nonlinear disturbance observer. ETRI J. 2016, 38, 1008-1018.
9. Poonawala, H.A.; Satici, A.C.; Spong, M.W. Leader-follower formation control of nonholonomic wheeled mobile robots using only position measurements. In Proceedings of the Control Conference, Istanbul, Turkey, 23-26 June 2013; pp. 1-6.
10. Mainetti, L.; Patrono, L.; Sergi, I. A survey on indoor positioning systems. In Proceedings of the International Conference on Software, Telecommunications and Computer Networks, Split, Croatia, 17-19 September 2015; pp. 111-120.
11. Consolini, L.; Morbidi, F.; Prattichizzo, D.; Tosques, M. Leader-follower formation control of nonholonomic mobile robots with input constraints. Automatica 2008, 44, 1343-1349.
12. Rosales, A.; Scaglia, G.; Mut, V.; Sciascio, F.D. Formation control and trajectory tracking of mobile robotic systems-A linear algebra approach. Robotica 2011, 29, 335-349.
13. Huang, J.; Farritor, S.M.; Qadi, A.; Goddard, S. Localization and follow-the-leader control of a heterogeneous group of mobile robots. IEEE/ASME Trans. Mechatron. 2006, 11, 205-215.
14. Xu, D.; Han, L.; Tan, M.; Li, Y.F. Ceiling-based visual positioning for an indoor mobile robot with monocular vision. IEEE Trans. Ind. Electron. 2009, 56, 1617-1628.
15. Nascimento, R.C.A.; Silva, B.M.F. Real-time localization of mobile robots in indoor environments using a ceiling camera structure. In Proceedings of the Robotics Symposium and Competition, Arequipa, Peru, 21-27 October 2014; pp. 61-66.
16. Guinaldo, M.; Fábregas, E.; Farias, G.; Dormido-Canto, S.; Chaos, D.; Sánchez, J.; Dormido, S. A mobile robots experimental environment with event-based wireless communication. Sensors 2013, 13, 9396-9413.
17. Kamel, M.A.; Ghamry, K.A.; Zhang, Y. Real-time fault-tolerant cooperative control of multiple UAVs-UGVs in the presence of actuator faults. In Proceedings of the International Conference on Unmanned Aircraft Systems, Arlington, VA, USA, 7-10 June 2016; pp. 1267-1272.
18. Saab, S.S.; Nakad, Z.S. A standalone RFID indoor positioning system using passive tags. IEEE Trans. Ind. Electron. 2011, 58, 1961-1970.
19. Alarifi, A.; Alsalman, A.M.; Alsaleh, M.; Alnafessah, A.; Alhadhrami, S.; Mai, A.A.; Alkhalifa, H.S. Ultra wideband indoor positioning technologies: Analysis and recent advances. Sensors 2016, 16, 707.
20. Yazici, A.; Yayan, U.; Yücel, H. An ultrasonic based indoor positioning system. In Proceedings of the International Symposium on Innovations in Intelligent Systems and Applications, Istanbul, Turkey, 15-18 June 2011; pp. 585-589.
21. Díaz, E.; Pérez, M.C.; Gualda, D.; Villadangos, J.M.; Ureña, J.; García, J.J. Ultrasonic indoor positioning for smart environments: A mobile application. In Proceedings of the Experiment@international Conference, Faro, Portugal, 6-8 June 2017; pp. 280-285.
22. Koala Robot. Available online: https://www.k-team.com/koala-2-5-new (accessed on 23 June 2018).
23. Turtlebot3 e-manual. Available online: https://www.turtlebot.com (accessed on 23 June 2018).
24. Irobot Create2 Programmable Robot. Available online: https://www.irobot.com (accessed on 23 June 2018).
25. Michael, N.; Fink, J.; Kumar, V. Experimental testbed for large multirobot teams. Robot. Autom. Mag. IEEE 2008, 15, 53-61.
26. Rubenstein, M.; Cornejo, A.; Nagpal, R. Programmable self-assembly in a thousand-robot swarm. Science 2014, 345, 795-799.
27. Bhuiya, A.; Mukherjee, A.; Barai, R.K. Development of Wi-Fi communication module for ATmega microcontroller based mobile robot for cooperative autonomous navigation. In Proceedings of the IEEE Calcutta Conference, Kolkata, India, 2-3 December 2017; pp. 168-172.
28. Winfield, A.F.T.; Holland, O.E. The application of wireless local area network technology to the control of mobile robots. Microprocess. Microsyst. 2000, 23, 597-607.
29. Marvelmind Navigation System Manual (v2018_01_11). Available online: https://marvelmind.com/pics/marvelmind_navigation_system_manual.pdf (accessed on 23 June 2018).
30. MPU-6050 Datasheet. Available online: https://www.invensense.com/products/motion-tracking/6-axis/mpu-6050/MPU-6000-Datasheet1.pdf (accessed on 23 June 2018).
31. ESP8266EX Datasheet. Available online: http://espressif.com/sites/default/files/documentation/0a-esp8266ex_datasheet_en.pdf (accessed on 23 June 2018).
32. STM32F407xx Datasheet. Available online: https://www.st.com/resource/en/datasheet/stm32f405rg.pdf (accessed on 23 June 2018).
33. Desai, J.P.; Ostrowski, J.; Kumar, V. Controlling formations of multiple mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, Leuven, Belgium, 16-20 May 1998; pp. 2864-2869.
34. Kanayama, Y.; Kimura, Y.; Miyazaki, F.; Noguchi, T. A stable tracking control method for an autonomous mobile robot. In Proceedings of the IEEE International Conference on Robotics and Automation, Cincinnati, OH, USA, 13-18 May 1990; Volume 381, pp. 384-389.
1School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an 710072, China
2Beijing Electro-mechanical Engineering Institute, Beijing 100074, China
*Author to whom correspondence should be addressed.