Introduction
In recent years, autonomous robots have been put to practical use in industrial sectors, disaster sites, and medical fields because of their high adaptability to the real world,[1] and various types of robots have been reported, including wheeled,[2–5] humanoid,[6–8] and multilegged robots.[9,10] Autonomous robots are particularly valuable in extreme environments and isolated spaces where human access is not feasible.[11–13]
Additionally, small portable electronic devices such as smartphones and wearables have spread rapidly and grown more capable, and the miniaturization of their internal components has advanced accordingly: microbatteries and micro-supercapacitors only a few micrometers thick have been developed.[14] However, conventional positioning devices (such as ball-screw stages) remain considerably larger and heavier than the objects they handle, raising issues of energy consumption, space efficiency, and versatility. To address these issues, various precision actuation technologies have been developed for positioning devices:[15] electromagnetic,[16,17] electrostatic,[18–20] electrothermal,[21,22] magnetostrictive,[23,24] and piezoelectric[25–28] actuators are in use. Piezoelectric actuators in particular are used in mobile robots and grippers because of their nanometer precision, immunity to electromagnetic interference, and high scalability.[29–36]
However, as shown in Figure 1A, although many miniature robots and grippers have been reported,[1–10,15,26,27,29–42] there are no reports of an autonomous, untethered mobile micromanipulator that integrates these technologies and adapts them to real-world applications. In a prior study, we developed a piezoelectric-actuator-driven mobile robot inspired by the rhinoceros beetle (HB-1: Holonomic Beetle-1) and a mobile micromanipulator equipped with a Z-axis stage and tweezers (HB-2).[36] That study demonstrated the high versatility and utility of these robots, with a resolution of 12 nm and the ability to traverse steps, ditches, and inclined surfaces. However, these devices required wired connections to large external driving systems, the cable length limited their driving range, and cable tension was found to affect directional accuracy. To address these limitations, this study introduces an autonomous, untethered, holonomic mobile micromanipulator powered by piezoelectric actuators (HB-3), as depicted in Figure 1B. This latest version is significantly smaller: its total volume, including the driving system, is 97.9% smaller than that of its predecessors, greatly enhancing portability. The required operational footprint is confined to the robot's own volume, facilitating autonomous, untethered operation, and the compact design allows single-handed installation and immediate precision tasks in isolated or confined spaces. The mechanism is equipped with an internal camera and an integrated driving circuit built around a single-board computer, eliminating the power-supply-cable issues identified in prior research,[36] and performs autonomous, untethered pick-and-place operations using machine-learning-based image recognition of the internal camera images. The specifications of HB-3 and a comparison with untethered and/or autonomous robots are presented in Tables 1 and 2.
It is important to note that the payload capacity of this robot is defined by the maximum weight it can transport while in motion with an object secured by the manipulator, rather than the weight it can sustain when an object is placed on its upper surface. For additional information, please see the Supporting Information and the previous report.[36]
[IMAGE OMITTED. SEE PDF]
Table 1 Specifications of HB-3.
| Size | Weight | Maximum speed | Drivable range | DoF | Gripper payload |
|---|---|---|---|---|---|
| 90 × 116 × 104 [mm3] (without tool/camera) | 515 [g] | 1.8 [mm s−1] | 1600 × 1600 [mm2] (per battery charge) | 4 + 1 (XYZθ + open/close) | 200 [g] |
Table 2 Comparison of untethered and/or autonomous robots.
| Robot type | HB-3 | HB-2[36] | Gripper with HAMR[33] | Tether-less legged robot[29] | Autonomous untethered soft robot[39] |
|---|---|---|---|---|---|
| Principle | Alternating tripod gait | Alternating tripod gait | Legged locomotion | Bounding gait motion | Legged locomotion |
| Actuator | Piezoelectric actuator | Piezoelectric actuator | Piezoelectric actuator | Piezoelectric actuator | Dielectric elastomer actuators |
| Dimensions [mm3] | 90 × 116 × 104 | 90 × 90 × 103 | 45 × 40 × 23 | 90 × 60 × 11 | 40 |
| Weight [g] | 515.0 | 210.0 | 1.5 | 65.7 | 1.0 |
| DoF | 4 (XYZθ) | 4 (XYZθ) | 2 (Xθ) | 2 (Xθ) | 2 (Xθ) |
| Resolution [nm] | 12 | 12 | N/A | N/A | N/A |
| Operating time [s] | 427 | 178 | N/A | N/A | N/A |
| Error rate [%] | 5.6 | 9.0 | N/A | N/A | N/A |
| Tool payload [g] | 200 | N/A | 2.8 | No tool | No tool |
| Max. speed [mm s−1] | 1.8 | 11.6 | N/A | 203.5 | 6 (when wireless) |
| Autonomy | Autonomous (stand-alone) | Autonomous (with external PC) | Manual | Manual | Autonomous (stand-alone) |
| Power supply | LiPo battery | External | External | LiPo battery | LiPo battery |
The autonomous capabilities of the holonomic walking robot, HB-3, demonstrate its ability to operate independently in confined spaces without the need for human assistance. In a feasibility study, we successfully demonstrated the robot's utility by assembling chip components in a visually restricted environment and accurately arranging liquids with a diameter of several hundred micrometers under microscopy. These tasks underscore the practical benefits of HB-3, highlighting its ability to function in environments where human presence is constrained by limited space or harsh conditions. This capability allows tasks typically requiring expensive equipment, such as clean benches or draft chambers, to be carried out more efficiently, making a valuable contribution to various scientific fields.
Results
Autonomous Pick-and-Place Experiment—Setup and Results
To evaluate the performance of the image recognition and millimeter-order object manipulation, we conducted autonomous pick-and-place experiments and measured the positioning accuracy and operation time. The manipulator comprised two linear-motion mechanisms driven by stepping motors (PG15S-J20, MinebeaMitsumi), one for the Z-axis and one for the tweezers. The Z-axis travel and the movable distance between the tweezer tips were each 13 mm.
The experimental setup is shown in Figure 2A. The mobile robot, positioned on a polyoxymethylene resin plate, utilized images captured by its internal camera (720 × 1280 pixels) to operate small tweezers for autonomous pick-and-place tasks. A separate recording camera was mounted above the robot. The target objects were positioned along a circular arc with a 10 mm radius from the tweezer tips at three distinct locations (A, B, and C as shown in Figure 2B). The HB-3 robot successfully executed these autonomous target-object pick-and-place operations. The relevant parameter values are provided in Table 3. To ensure minimal influence from room lighting, the experimental environment was enclosed with blackout curtains and maintained at a consistent brightness level.
[IMAGE OMITTED. SEE PDF]
Table 3 Parameter values for autonomous pick-and-place experiments.
| Distance threshold (allowable error) | Attitude-angle threshold (allowable error) | Driving frequency | Measurement resolution | Z-axis travel of manipulator | Gripper opening width |
|---|---|---|---|---|---|
| 1 [mm] (x: ±0.5 mm; y: ±2.7 mm) | 1 [°] (±7.6 [°]) | 20 [Hz] | 0.008 [mm] | 5 [mm] | 5 [mm] |
The following section presents the results of the positioning accuracy and operation-time measurements conducted during the autonomous pick-and-place experiments.
Figure 2C shows the post-placement positioning errors. The errors from fifteen experiments across initial positions A, B, and C, as well as the average positioning error, are plotted. The graph includes a red rectangle (1.0 × 5.4 mm) centered on the target position; an experiment is considered successful if the target object is placed within this rectangle, which corresponds to the pad size of a typical 3.2 × 2.5 mm chip component. Positioning errors exceeding ±0.5 mm on the x-axis or ±2.7 mm on the y-axis would cause a critical deviation from the landing area that cannot be compensated by the self-aligning behavior of molten solder paste, which arises from surface-energy minimization. Circuits assembled and then reflow-soldered normally function correctly if the chip components are positioned within this red area, as the self-alignment effect pulls them into place. By this criterion, ≈87% of placements (13 of 15) were successful. The average positioning error was −0.08 ± 0.31 mm on the x-axis and 0.16 ± 0.41 mm on the y-axis, equating to ≈5.6% of the object's length. The y-axis values show a positive bias caused by the non-telecentricity of the internal camera lens; they have been adjusted using the correction method discussed later in the text.
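The success criterion above reduces to a simple tolerance check. The sketch below illustrates it; the tolerance values are taken from the text, while the function name is ours:

```python
# Success criterion for a placed chip: the positioning error must fall
# within the red rectangle (1.0 x 5.4 mm) centered on the target,
# i.e., |ex| <= 0.5 mm and |ey| <= 2.7 mm (pad of a 3.2 x 2.5 mm chip).
X_TOL_MM = 0.5
Y_TOL_MM = 2.7

def placement_ok(ex_mm: float, ey_mm: float) -> bool:
    """Return True if the (x, y) placement error is within pad tolerance."""
    return abs(ex_mm) <= X_TOL_MM and abs(ey_mm) <= Y_TOL_MM

# The reported average error (-0.08, 0.16) mm is well inside the pad.
assert placement_ok(-0.08, 0.16)
```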
Angular errors are shown in Figure 2D; the average angular error at the initial positions A, B, and C, as well as the overall average from the 15 experiments, is shown. The red lines in the graph indicate the ±1° threshold. The average angular error over the 15 experiments was −0.4 ± 1.2°. Only the average error at position B exceeded the threshold; therefore, decreasing the error is a subject for future studies.
Figure 2E presents the operation-time breakdown per task. The average total operation time was ≈426 s, which was 2.4 times longer than that of the automatic-placement experiment in a previous study.[36] Operation-time percentages are listed in Table 4, indicating that object-detection time (the time when the HB-3 was not moving) accounted for 68% of the total time.
Table 4 Percentage of time consumed by each operation.
| Task | Percentage |
|---|---|
| Pick and place | 18 |
| Detection (before pickup) | 53 |
| Detection (after pickup) | 15 |
| Detection (total) | 68 |
| Move (before pickup) | 5 |
| Move (after pickup) | 10 |
| Total | 100 |
Remote Manipulation—Chip Mounting in an Isolated Confined Space and Droplet Arrangement
The components of an astable-multivibrator circuit were mounted on an electronic-circuit board to demonstrate the remote manipulation capability of the HB-3 in a confined, isolated space (Figure 3A,B). The circuit comprised four chip resistors, two chip LEDs, two NPN-type bipolar transistors, and two electrolytic capacitors. Table 5 lists component dimensions and masses.
[IMAGE OMITTED. SEE PDF]
Table 5 Component dimensions and mass.
| Components | Dimensions [mm3] | Mass [mg] |
|---|---|---|
| Resistors | 1.6 × 0.8 × 0.45 | 2.3 |
| LEDs | 1.6 × 0.8 × 0.6 | 1.2 |
| Bipolar transistors | 2.9 × 2.5 × 1.1 (SOT-346) | 12.1 |
| Electrolytic capacitors | 6.6 × 7.1 × 5.9 | 295.0 |
Figure 3C shows the sequence of remote control of the HB-3. Ten components were mounted on an electronic board pre-applied with solder paste, followed by manual reflow. We confirmed that the astable-multivibrator circuit operates correctly: its two chip LEDs blink alternately, as shown in Figure 3D. For more details, please refer to the video in the Supporting Information.
Figure 3E,F illustrates the use of the HB-3 robot in tasks requiring high positioning resolution. For this purpose, the manipulator was adapted to function as an injector with an inner diameter of 200 μm, enabling the precise arrangement of droplets several hundred micrometers in diameter on a 500 μm pitch grid. The operation was conducted remotely, with images from side and top cameras displayed on a PC to guide the positioning. The injector piston, connected to a stepper motor via a rack and pinion system, is designed to dispense ink continuously. For additional details, please refer to the video provided in the Supporting Information.
Discussion
Autonomous Pick-and-Place Experiment
In summary, the HB-3 successfully performed an autonomous pick-and-place task involving a chip capacitor, with an average task time of 426.3 s, resulting in positioning errors of −0.08 ± 0.31 mm on the x-axis, 0.16 ± 0.41 mm on the y-axis, and −0.4 ± 1.2° in the angular direction. Notably, at the initial position B, the placement positions exhibited a bias toward the y-axis, and the average angular error exceeded the 1° threshold, with an attitude angle error greater than 1° during the release phase. The total average operation time was ≈2.4 times longer than that recorded in the automatic placement experiment of a previous study.
For the y-axis positioning error, we implemented a correction formula to counteract the effect of lens non-telecentricity. In typical non-telecentric lenses, the principal rays are not parallel to the optical axis, causing three-dimensional (3D) objects at the edges of the field of view to appear skewed, as shown in Figure 4A.
[IMAGE OMITTED. SEE PDF]
In this experiment, the placement accuracy was determined by image recognition using the internal camera. To prevent the tweezers from overlapping the object after placement and obstructing contour recognition, HB-3 was programmed to move backward, positioning the tweezer tips away from the object, before measuring the placement accuracy. This shifted the chip capacitor toward the edge of the field of view, moving its detected geometric center away from the target point, as shown in Figure 4A. To avoid this, the hardware could be modified either to eliminate the need for backward movement or to use a telecentric lens for the internal camera: the camera could be placed below the tweezer tips rather than above them, or the manipulator could be changed from a single- to a double-opening mechanism.
An alternative method compensates for lens characteristics using computational corrections.[43] As depicted in Figure 4B, when the robot retreats by a distance δ (as specified in the experimental parameters), the detected geometric center (red point) shifts upward by v/2 compared to the true centroid (blue point). The correction formula is provided later.
This v/2 shift represents the y-directional offset attributable to the non-telecentric lens. Applying the correction adjusts the y-direction placement error from the initially measured 0.26 ± 0.41 mm to 0.16 ± 0.41 mm. For a detailed explanation of lens telecentricity and the derivation of these correction values, please refer to the Supporting Information.
The angular error observed at initial position B may stem from setting the initial tweezer width narrower than that of the object. As shown in Figure 4C, in several experiments, the left-side tweezer deflected while grasping the object, and the top end rotated upon release. This issue was partly due to the tweezers not being aligned parallel to the object edges. However, given the typical pad size of a 3.2 × 2.5 mm chip component, an angular error of up to 7.6° is considered acceptable. For improved precision in attitude angle, it would be beneficial to incorporate a mechanism that automatically adjusts the initial tweezer width using a limit switch and ensures the tweezers are placed parallel to the object. For a detailed explanation of allowable positioning error, please refer to the Supporting Information.
The primary reason for the task time taking 2.4 times longer than it did in the previous study[36] is the insufficient processing speed of the Raspberry Pi central processing unit (CPU) in HB-3. The previous study used a Core i7-11700 CPU with a 2.5 GHz clock frequency, whereas the Raspberry Pi 4 B has a Cortex-A72 CPU with only a 1.5 GHz clock frequency that is ≈1.7 times slower. To reduce the operation time, a high-performance CPU that can be mounted on a mobile micromanipulator is essential. Another solution is to use an external high-performance computer for object detection, with the mobile micromanipulator transmitting sensor data (such as video images).
The time required to detect the target position for placement (63 s) was two-sevenths of the time needed to detect the target position for pickup (224 s) in Figure 2E. This is because the Hough circle transform (HCT), and not machine learning, was used to detect the target position for placement—resulting in a much shorter detection time. Thus, object detection using HCT algorithms could also provide a solution.
The resolution of these issues can enhance the versatility of HB-3, making it possible to be used for cooperative work by multiple robots[44] and more precise operations with mm-sized objects.
Remote Manipulation—Chip Mounting in an Isolated Confined Space and Droplet Arrangement
In this section, we discuss the results of mounting chip components in a confined, visually isolated space. The experiment produced a functional circuit board, as depicted in Figure 3D. Demonstrating the capability for precision tasks, ink droplets several hundred micrometers in diameter were accurately deposited on a 500 μm pitch grid using an injector. To improve positioning accuracy along the Z-axis for components of various heights, an additional side camera was used. In future iterations, integrating an internal side-view camera alongside the existing top-view camera would broaden applicability in diverse environments. The HB-3 lacked a side camera in this study because of the limited processing capability of its computer: the image-processing demands of two USB cameras exceeded the capacity of the Raspberry Pi. Upgrading to a more powerful computing system would allow both cameras to be included.
Moreover, in the chip-component mounting experiments, the components were remotely mounted onto a board pre-coated with solder paste, and the reflow process was carried out manually. Automating both the application of solder paste and the reflow process would significantly enhance the efficiency and reliability of this procedure.
In the droplet arrangement experiments, we attached the injector and performed precise droplet placement under microscopy for chemical and biomedical applications. This also demonstrated the potential for multi-scale applications across various magnifications enabled by different types of microscopes. Future studies should aim to demonstrate micrometer-scale autonomous micromanipulation using higher magnification microscopy to fully utilize the 12 nm resolution capability of the HB series.
While these improvements represent key areas for future development, we have successfully highlighted the primary contribution of the HB-3 as a breakthrough technology in precise mobile micromanipulation. This technology enables autonomous, untethered, stand-alone operations in confined, isolated spaces.
We anticipate that the HB-3 will be capable of not only mounting chips and arranging droplets but also handling a wide range of applications. For instance, traditional machinery often requires the rearrangement of production lines to incorporate new functions. In contrast, HB-3, with its precise manipulation in confined isolated spaces, can add new functions to existing machinery in hazardous or hard-to-reach locations, reducing the costs associated with rearrangements. HB-3 can also serve as a substitute for draft chambers or clean benches, handling materials hazardous to humans. It can perform a variety of tasks in confined, isolated environments using different tools, such as measurement probes, soldering irons, screwdrivers, and other precision instruments,[45,46] as shown in Figure 4D.
In fields like chemistry and materials science, significant research has been conducted on self-driving labs utilizing robotic arms, which demonstrate the ability of autonomous systems to carry out complex tasks and manage harmful materials. Given that HB-3 is smaller, lighter, and operates independently without external power, operation, or sensing systems, it is expected to be even more beneficial in laboratories where large spaces cannot be allocated.[47]
Experimental Section
XYθ Mobile Robot Realizes Holonomic Motion
The mobile robot is composed of an XYθ stage, disc-shaped inner and outer legs, and Z-axis piezoelectric actuators that connect the inner legs to the XYθ stage (Figure 5A,B). The stage was equipped with piezoelectric actuators (AE0203D18H18DF, TOKIN), the characteristics of which are detailed in Table 6. The displacements of the stacked piezoelectric actuators were enhanced via a displacement magnification mechanism. The XYθ mobile robot achieved precise positioning with three degrees of freedom (X, Y, and θ), facilitated by altering the relative positions of the legs through the XYθ stage and alternating the grounded legs using the Z-axis actuators, as illustrated in Figure 5C. For a detailed analysis and information on input waveforms, please refer to previous studies.[36]
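The leg-alternation principle can be summarized as a step cycle. The sketch below uses a hypothetical stage/leg interface of our own devising; the real robot drives the Z-axis piezoelectric actuators and the XYθ stage with the input waveforms of the previous study:[36]

```python
# Minimal stubs so the gait sketch is runnable; a real implementation
# would command piezo actuators instead. Interface names are ours.
class Legs:
    def __init__(self):
        self.grounded = set()
    def ground(self, leg_set):
        self.grounded.add(leg_set)
    def lift(self, leg_set):
        self.grounded.discard(leg_set)

class Stage:
    """Tracks net body displacement (x, y, theta). Stage motion moves
    the body only while the inner legs, attached to the stage, are
    grounded; otherwise the stage recovers without moving the body."""
    def __init__(self, legs):
        self.legs, self.body = legs, [0.0, 0.0, 0.0]
    def move(self, dx, dy, dtheta):
        if "inner" in self.legs.grounded:
            self.body = [a + b for a, b in zip(self.body, (dx, dy, dtheta))]

def step_cycle(stage, legs, dx, dy, dtheta):
    legs.ground("inner"); legs.lift("outer")
    stage.move(dx, dy, dtheta)        # body advances relative to ground
    legs.ground("outer"); legs.lift("inner")
    stage.move(-dx, -dy, -dtheta)     # stage returns; body holds position
```

Repeating the cycle accumulates displacement in arbitrary (X, Y, θ) combinations, which is what makes the gait holonomic.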
[IMAGE OMITTED. SEE PDF]
Table 6 Specifications of AE0203D18H18DF.
| Maximum stroke (150 Vpp) | Resonance frequency (blocked-free) | Capacitance | Blocked force (150 V) | Operating voltage | Mass | Dimensions |
|---|---|---|---|---|---|---|
| 19.0 [μm] | 76 [kHz] | 0.40 [μF] | 200 [N] | 0–150 [V] | 1.4 [g] | 2 × 3 × 18 [mm3] |
HB-3 Drives Stand-Alone with Integrated Driving Circuit
Figure 6A highlights significant advancements over the previous study.[36] Previously, an external device with a total volume of 51 000 cm3 was necessary for tasks such as control-command generation, waveform generation, and amplification. In this study, we completely reengineered and compacted the system for autonomous operation in isolated and confined spaces. By integrating these functions into a compact circuit board, we reduced the volume to just 1100 cm3 (9.0 × 11.6 × 10.4 cm), a 97.9% decrease in the space required for these operations. Moreover, we enhanced the practicality of the HB series by reducing manufacturing costs through the strategic use of commercially available components.
[IMAGE OMITTED. SEE PDF]
The configuration of the driving circuit for autonomous untethered control is depicted in Figure 6B. This setup includes a power supply circuit, a single-board computer (Raspberry Pi 4 B, Raspberry Pi Foundation), a microcontroller board (Arduino Nano Every, Arduino Foundation), a D/A converter (LTC1660CN, Analog Devices), five piezo drivers (PDu100, Piezo Drive), and two motor drivers (STSPIN220, Pololu). Figure 6C shows the power supply circuit that adapts the input voltage from the Li-Po battery to 5 V using two three-terminal regulators, essential for the driving circuit. The actuator driving circuit (Figure 6D) drives two motors and eight piezoelectric actuators. For an in-depth explanation and technical details, please refer to the circuit schematics and output signal analysis provided in the Supporting Information.
The driving and controlling processes are as follows: 1) to initiate control, the Raspberry Pi is operated remotely over Wi-Fi; 2) commands are sent from the Raspberry Pi to the Arduino via serial communication to generate analog voltages for the PDu100 piezo drivers; 3) the piezo drivers apply the desired amplified voltage to the piezoelectric actuators; and 4) the Arduino also drives the motor drivers based on commands from the Raspberry Pi.
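Step 2 of this chain can be illustrated with a minimal command-framing sketch. The ASCII line protocol, channel naming, and port settings shown here are assumptions for illustration; the actual Pi-to-Arduino protocol is not specified in the text:

```python
# Hypothetical framing of a Pi -> Arduino command that sets one DAC
# channel feeding a PDu100 piezo driver (0-150 V after amplification).
def encode_command(channel: int, voltage: float) -> bytes:
    """Encode 'set DAC channel to voltage' as an ASCII line,
    e.g. encode_command(2, 75.0) -> b'C2:75.00\n'."""
    if not 0.0 <= voltage <= 150.0:
        raise ValueError("PDu100 output range is 0-150 V")
    return f"C{channel}:{voltage:.2f}\n".encode("ascii")

# On the robot this would be written to the Arduino with pyserial,
# e.g. (device path and baud rate are illustrative):
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
#       port.write(encode_command(2, 75.0))
```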
The Position and Orientation Angle Detected by Image Recognition
The purpose of image recognition is to detect the position and orientation angle of the target object and manipulator tip, as observed by the internal camera (Figure 6E).
The process was as follows: 1) Object detection: the internal-camera image of the object was input into a model trained via machine learning, which output a mask image of the detected object as an inference. 2) Position and orientation-angle detection: a) the output mask image was binarized and its contours were extracted; the geometric center was calculated from the contours, and for the tweezers, the midpoint between the two tips was taken as the position coordinate; b) the orientation angle of the target was determined using OpenCV as the angle of the approximated rectangle, and for the tweezers, a region of interest was centered on the position coordinates and the tweezer angle was taken as the average of the angles of the two tips; and c) the measurement resolutions for distance and angle (Δl and Δθ) were calculated from the pixel resolution (σ = 98 μm per pixel) and the number of contour pixels, N, used to determine the geometric center, as follows.
Note that Equation (2) is similar to the standard error because the center position is calculated by averaging the contour positions.
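As a rough reconstruction from the standard-error remark above (our assumption, not the authors' exact equations), the geometric-center resolution can be modeled as Δl ≈ σ/√N, since averaging N contour-pixel positions reduces the per-pixel uncertainty σ by a factor of √N:

```python
import math

SIGMA_UM = 98.0  # pixel resolution of the internal camera [um per pixel]

def distance_resolution_um(n_contour_pixels: int) -> float:
    """Standard-error-style resolution of the geometric center:
    delta_l = sigma / sqrt(N). (Our reconstruction of the omitted
    formula, based on the standard-error remark in the text.)"""
    return SIGMA_UM / math.sqrt(n_contour_pixels)
```

As a consistency check, roughly 150 contour pixels would give ≈8 μm (0.008 mm), matching the measurement resolution listed in Table 3.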
Automatic Pick-and-Place
A flowchart illustrating the autonomous operation during pickup is presented in Figure 6F. During the placement phase, the target position for pickup was substituted with the target position for placement. The detection of the object and tweezers, along with robot navigation, was repeated until the differences between the measured position and angle of the object and the target values fell below the established threshold values.
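The detect-navigate loop of the flowchart can be condensed as follows. This is a sketch with stand-in callables for the robot's recognition and gait routines; only the threshold values are taken from Table 3:

```python
import math

DIST_THRESH_MM = 1.0    # allowable position error (Table 3)
ANGLE_THRESH_DEG = 1.0  # allowable attitude-angle error (Table 3)

def servo_to_target(detect, move, target_xy, target_deg, max_iters=100):
    """Visual-feedback loop: detect the object pose, command a corrective
    motion, and repeat until both errors fall below their thresholds.
    `detect` and `move` are hypothetical stand-ins for the robot's
    image-recognition and navigation routines."""
    for _ in range(max_iters):
        (x, y), deg = detect()                  # image-recognition step
        ex, ey = target_xy[0] - x, target_xy[1] - y
        edeg = target_deg - deg
        if math.hypot(ex, ey) <= DIST_THRESH_MM and \
                abs(edeg) <= ANGLE_THRESH_DEG:
            return True                         # pose reached
        move(ex, ey, edeg)                      # navigate toward target
    return False                                # did not converge
```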
Acknowledgements
This work was supported by the Nakanishi Scholarship Foundation, the NSK Foundation for Advancement of Mechatronics, and the Takahashi Industrial and Economic Research Foundation. R.K., R.M., and C.S. contributed equally to this work.
Conflict of Interest
The authors declare no conflict of interest.
Author Contributions
Ryosuke Kinoshita: conceptualization (lead); data curation (lead); investigation (lead); methodology (lead); and writing—original draft (equal). Rintaro Minegishi: conceptualization (equal); investigation (lead); methodology (lead); visualization (lead); writing—original draft (lead); and writing—review and editing (lead). Chihiro Sekine: conceptualization (lead); data curation (lead); formal analysis (lead); investigation (lead); methodology (lead); software (lead); validation (lead); and writing—review and editing (lead). Yohei Tsukui: writing—review and editing (equal). Yuta Sunohara: writing—review and editing (supporting). Yuko Nishimura: writing—review and editing (supporting). Shogen Sekiguchi: writing—review and editing (supporting). Ohmi Fuchiwaki: funding acquisition (lead); project administration; supervision (lead); and writing—review and editing (equal).
Data Availability Statement
The data that support the findings of this study are available in the Supporting Information of this article.
C. A. Aubin, B. Gorissen, E. Milana, P. R. Buskohl, N. Lazarus, G. A. Slipher, C. Keplinger, J. Bongard, F. Iida, J. A. Lewis, R. F. Shepherd, Nature 2022, 602, 393.
H. Surmann, A. Nüchter, J. Hertzberg, Robot. Auton. Syst. 2003, 45, 181.
J. Li, J. Wang, H. Peng, L. Zhang, Y. Hu, H. Su, Neurocomputing 2020, 410, 342.
H. Peng, X. Chen, Actuators 2021, 10, 184.
N. Pico, H. Jung, J. Medrano, M. Abayebas, D. Y. Kim, J. H. Hwang, H. Moon, J. Mech. Sci. Technol. 2022, 36, 959.
K. Kawaharazuka, K. Tsuzuki, Y. Koga, Y. Omura, T. Makabe, K. Shinjo, M. Onitsuka, Y. Nagamatsu, Y. Asano, K. Okada, K. Kawasaki, M. Inaba, IEEE Robot. Automat. Mag. 2020, 27, 84.
X. Yu, Z. Fan, X. Wang, H. Wan, P. Wang, X. Zeng, F. Jia, Comput. Electr. Eng. 2021, 96, 107459.
C. Fattal, I. Cossin, F. Pain, E. Haize, C. Marissael, S. Schmutz, I. Ocnarescu, Disabil. Rehabil. Assist. Technol. 2022, 17, 418.
A. Bouman, M. F. Ginting, N. Alatur, M. Palieri, D. D. Fan, T. Touma, T. Pailevanian, S. K. Kim, K. Otsu, J. Burdick, A. A. Mohammadi, in 2020 IEEE/RSJ Inter. Conf. on Intelligent Robots and Systems (IROS), 2020, https://doi.org/10.1109/IROS45743.2020.9341361.
C. D. Bellicoso, K. Krämer, M. Stäuble, D. Sako, F. Jenelten, M. Bjelonic, M. Hutter, in 2019 Inter. Conf. on Robotics and Automation (ICRA), 2019, https://doi.org/10.1109/ICRA.2019.8794273.
S. Ma, Y. Chen, S. Yang, S. Liu, L. Tang, B. Li, Y. Li, Cyborg Bionic Syst. 2023, 4, 0067.
X. Li, H. Yu, H. Feng, S. Zhang, Y. Fu, Cyborg Bionic Syst. 2023, 4, 0025.
X. Quan, R. Du, R. Wang, Z. Bing, Q. Shi, Cyborg Bionic Syst. 2024, 4, 0096.
S. Zheng, X. Shi, P. Das, Z. S. Wu, X. Bao, Adv. Mater. 2019, 31, 1900583.
Q. Chang, W. Chen, S. Zhang, J. Deng, Y. Liu, Adv. Intell. Syst. 2024, 6, 2300780.
G. Shan, Y. Li, L. Zhang, Z. Wang, Y. Zhang, J. Qian, Rev. Sci. Instrum. 2015, 86, 101501.
D. Ajinkya, L. Petit, M. Khan, F. Lamarque, C. Prelle, Actuators 2021, 10, 310.
H. Demaghsi, H. Mirzajani, H. B. Ghavifekr, Microsyst. Technol. 2014, 20, 2191.
H. Conrad, H. Schenk, B. Kaiser, S. Langa, M. Gaudet, K. Schimmanz, M. Stolz, M. Lenz, Nat. Commun. 2015, 6, 10078.
A. Albukhari, U. Mescheder, Actuators 2023, 12, 163.
A. Potekhina, C. Wang, Actuators 2019, 8, 69.
M. Sang, G. Liu, S. Liu, Y. Wu, S. Xuan, S. Wang, S. Xuan, W. Jiang, X. Gong, Chem. Eng. J. 2021, 414, 128883.
J. Liu, C. Jiang, H. Xu, Sci. China Technol. Sci. 2012, 55, 1319.
V. Apicella, C. S. Clemente, D. Davino, D. Leone, C. Visone, Actuators 2019, 8, 45.
X. Zhou, S. Wu, X. Wang, Z. Wang, Q. Zhu, J. Sun, P. Huang, X. Wang, W. Huang, Q. Lu, Front. Mech. Eng. 2024, 19, 6.
S. Zhang, Y. Liu, J. Deng, X. Gao, J. Li, W. Wang, M. Xun, X. Ma, Q. Chang, J. Liu, W. Chen, J. Zhao, Nat. Commun. 2023, 14, 500.
R. K. Jain, S. Majumder, B. Ghosh, S. Saha, J. Manuf. Syst. 2015, 35, 76.
G. Yan, J. Braz. Soc. Mech. Sci. Eng. 2021, 43, 387.
H. H. Hariri, G. S. Soh, S. Foong, K. L. Wood, in Proc. of the ASME 2019 Inter. Design Engineering Technical Conf. and Computers and Information in Engineering Conf. 2019, 5B, https://doi.org/10.1115/DETC2019‐97353.
B. Zhu, C. Li, Z. Wu, Y. Li, Sens. Actuators, A 2024, 369, 115154.
J. Li, B. Xu, J. Deng, W. Chen, Y. Liu, Sens. Actuators, A 2024, 365, 114898.
A. Fath, T. Xia, W. Li, Micromachines 2022, 13, 1422.
T. Abondance, K. Jayaram, N. T. Jafferis, J. Shum, R. J. Wood, IEEE Robot. Autom. Lett. 2020, 5, 4407.
J. Brufau, M. Puig‐Vidal, J. Lopez‐Sanchez, J. Samitier, N. Snis, U. Simu, S. Johansson, W. Driesen, J.‐M. Breguet, J. Gao, T. Velten, J. Seyfried, R. Estana, H. Woern, Proceedings of the 2005 IEEE Inter. Conf. on Robotics and Automation, 2005, 844
Y. Liu, J. Li, J. Deng, S. Zhang, W. Chen, H. Xie, J. Zhao, Adv. Intell. Syst. 2021, 3, 2100015.
M. Suzuki, Y. Iida, Y. Tsukui, H. Kusama, R. Kinoshita, E. Kusui, Y. Sunohara, R. Minegishi, Y. Sugiyama, Y. Nishimura, C. Sekine, O. Fuchiwaki, Adv. Intell. Syst. 2024, 6, 2300517.
A. Hsu, W. Chu, C. Cowan, B. McCoy, A. Wong‐Foy, R. Pelrine, J. Lake, J. Ballard, J. Randall, J. Micro‐Bio Robot. 2018, 14, 1.
F. Schmoeckel, S. Fatikow, J. Intell. Mater. Syst. Struct. 2020, 11, 191.
X. Ji, X. Liu, V. Cacucciolo, M. Imboden, Y. Civet, A. E. Haitami, S. Cantin, Y. Perriard, H. Shea, Sci. Robot. 2019, 4, eaaz6451.
H. Lu, Y. Hong, Y. Yang, Z. Yang, Y. Shen, Adv. Sci. 2020, 7, 2000069.
Z. Zhang, X. Wang, J. Liu, C. Dai, Y. Sun, Annu. Rev. Control Robot. Auton. Syst. 2019, 2, 181.
H. Llewellyn‐Evans, C. A. Griffiths, A. Fahmy, Microsyst. Technol. 2020, 26, 1745.
B. Pan, L. Yu, D. Wu, Exp. Mech. 2013, 53, 1719.
M. Rubenstein, A. Cornejo, R. Nagpal, Science 2014, 345, 795.
K. Tanaka, T. Ito, Y. Nishiyama, E. Fukuchi, O. Fuchiwaki, IEEE Robot. Autom. Lett. 2022, 7, 1324.
O. Fuchiwaki, Y. Tanaka, H. Notsu, T. Hyakutake, Microfluid. Nanofluid. 2018, 22, 80.
M. Abolhasani, E. Kumacheva, Nat. Synth. 2023, 2, 483.
© 2025. This work is published under https://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
With increasing miniaturization of portable electronic devices, conventional positioning systems have become inefficient in terms of size and energy consumption. As workspaces shrink, the need for precise autonomous manipulation in confined and isolated environments increases, particularly in areas such as electronics and biological assemblies. Herein, an autonomous, untethered, holonomic, mobile micromanipulator driven by piezoelectric actuators with nanometer resolution is developed. The micromanipulator comprises a driving circuit, single‐board computer, battery, manipulator, and small camera; it is lightweight (515 g), compact (90 × 116 × 104 mm), and untethered, capable of independently operating in confined and dangerous spaces for humans. The micromanipulator can autonomously position a single object with a positioning error of 0.18 ± 0.51 mm and an angular deviation of −0.4° ± 1.2°, utilizing image recognition, machine learning, and visual feedback. Moreover, the micromanipulator is verified to function in extreme conditions that simulate draft chambers and clean benches, where it successfully performs circuit mounting in confined space. The device can arrange droplets under microscopy for applications in chemical and biomedical fields, demonstrating its versatility and precise positioning capabilities. Additionally, its multi‐scale applications are explored from centimeters to sub‐micrometers and adjusted to the magnification capabilities of various microscopes.