Integration of image processing with 6-degrees-of-freedom robotic arm for advanced automation
Paanthong Sroymuk, Paramust Juntarakod, Viroch Sukontanakarn
Field of Mechatronic Engineering, Faculty of Engineering, Rajamangala University of Technology Isan, Khon Kaen, Thailand
Article history: Received Aug 26, 2024; Revised Jan 29, 2025; Accepted Mar 11, 2025
ABSTRACT
This paper presents the design, construction, and development of a 6degrees-of-freedom robotic arm, specifically tailored to the conditions at our university. The arm is powered by stepper motors and controlled via a programmable logic controller, while utilizing image processing data from a Raspberry Pi board. The objective of this research is to study automated pick-and-place operations, specifically targeting the handling of fruits such as oranges and apples. The system integrates advanced motion control techniques with vision-based object recognition to enable precise and reliable manipulation of the fruits. The robotic arm is equipped with an endeffector capable of handling objects with varying shapes and sizes, ensuring safe and efficient grasping and placement. Image processing algorithms are employed to identify and localize the fruits in real time, allowing the robotic arm to perform tasks in dynamic environments with minimal human intervention. Calibration, motion planning, and feedback control strategies are optimized to ensure high accuracy and prevent collisions or damage to the fruits. The system's performance is evaluated through a series of experiments that demonstrate its capability to effectively pick and place oranges and apples, making it a promising solution for applications in agricultural automation and food processing.
Keywords:
6-degrees-of-freedom; Conveyor belt; Image processing; Programmable logic controller; Robotic arm; Stepper motors
1. INTRODUCTION
In modern agriculture and food processing, the need for automation has grown significantly to meet the demands of increased production, efficiency, and quality control. One of the critical tasks in fruit processing is the sorting of fruits, such as apples and oranges [1], based on various factors like size, color, shape, and quality [2]. With the advancement of robotics and computer vision, automated systems are increasingly being deployed to perform these tasks with higher precision and speed. The use of robotics for sorting apples and oranges through image processing involves the integration of robotic arms and cameras equipped with sophisticated image analysis algorithms. These systems capture high-resolution images of the fruits, process the images to extract features such as color, texture, and shape, and then use this data to classify the fruits into different categories. The process is not only more efficient but also improves the accuracy of sorting, as it eliminates human error and variability. The application of computer vision [3]-[5] in robotics enables the system to detect and identify subtle differences between fruits, such as distinguishing ripe from unripe fruits or identifying blemishes and defects that might be invisible to the human eye. Image processing algorithms, such as edge detection, pattern recognition, and machine learning models, are commonly used to achieve this level of detection and classification accuracy.
Robotic arms are already widely used in industry, where they perform complex tasks such as assembly, welding, and painting, thereby improving efficiency and precision. This research aims to contribute to the field of robotic automation in industrial and logistical applications by proposing a mechanical design and manufacturing process for a 6-DOF robotic arm. The goal is to achieve a cost-effective solution without compromising functionality. Therefore, the study of 6-axis robotic arms is both theoretically and practically important, offering new opportunities for innovation and enhancing work processes across various industries. Table 1 shows the research results related to apple and orange detection using image processing methods.
The methods presented in the table include both traditional techniques such as color and shape detection, as well as advanced methods like convolutional neural networks (CNN) and machine learning approaches, which yield higher accuracy in detecting apples and oranges under various conditions. The structure of this paper is as follows: the article begins with an introduction to the robotic arm. Section 2 presents the research methodology, followed by section 3, which discusses the results and evaluation of the experiment involving the robotic arm for fruit sorting. Finally, section 4 provides the conclusion of the study.
2. THE PROPOSED METHOD
In this section, we develop a sorting conveyor machine for apples and oranges, which requires a Raspberry Pi and a USB camera [22], [23]. We use TensorFlow and OpenCV [24] to detect the apples and oranges. A 6-DOF robotic arm connected to the Raspberry Pi then sorts the fruits and moves them into baskets. The methodology is divided into two parts. In the first part, we present the design of the 6-DOF robotic arm and the conveyor belt. In the second part, we employ a machine learning algorithm [25], [26] to identify the apples and oranges. The overall architecture diagram of the control system design is shown in Figure 1.
The Raspberry Pi board instructs the conveyor belt to move the fruits to the detection point, where the camera processes the images using a program designed for this purpose. The processed data is then sent to the robot, which sorts the fruits into the target baskets. A summary flowchart of the system design is shown in Figure 2.
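As a rough illustration of this control flow, the sketch below outlines one possible main loop on the Raspberry Pi. The function names (run_conveyor, detect_fruit, signal_robot) are hypothetical placeholders standing in for the image-processing and PLC-communication routines described in this paper, not the authors' actual code.

```python
# Illustrative main loop for the Raspberry Pi controller (hypothetical sketch).
# run_conveyor(), detect_fruit(), and signal_robot() are placeholders for the
# actual image-processing and PLC-communication routines.
import time
import cv2

def run_conveyor(enable: bool) -> None:
    """Placeholder: ask the PLC to start or stop the conveyor belt."""
    pass

def detect_fruit(frame):
    """Placeholder: return 'apple', 'orange', or None for the given frame."""
    return None

def signal_robot(label: str) -> None:
    """Placeholder: tell the robotic arm which basket routine to run."""
    pass

def main():
    camera = cv2.VideoCapture(0)        # USB camera on the Raspberry Pi
    try:
        while True:
            run_conveyor(True)          # move the next fruit to the detection point
            time.sleep(1.0)             # the paper reports a one-second conveyor pause
            run_conveyor(False)

            ok, frame = camera.read()   # acquire an image at the detection point
            if not ok:
                continue

            label = detect_fruit(frame) # classify the fruit (apple / orange)
            if label is not None:
                signal_robot(label)     # trigger the pick-and-place motion
    finally:
        camera.release()

if __name__ == "__main__":
    main()
```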
3. MATERIAL AND METHOD
3.1. The mechanical design of 6-DOF robotic arm
The construction of a 6-DOF robotic arm mimics the functionality of a human arm and can grasp fruit objects along a predetermined trajectory. The prototype is designed to replicate the robotic arm [27] for educational purposes, as shown in Figure 3. A 6-DOF configuration is chosen to enable the arm to achieve any position and orientation within a given spherical volume. All joints are revolute, each providing one degree of freedom. A 3D model of the robotic arm was developed using CAD software [28], as shown in Figure 3(a). The model incorporates detailed representations of various components, including motors, gearboxes, bearings, timing belts, pulleys, lead screws, nuts, fasteners, robot links, and joints. The mechanical components of the robot can be machined from aluminum. The fully assembled robot arm, which includes a suction cup for handling test blocks, is depicted in Figure 3(b).
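Although the paper does not list the arm's kinematic parameters, the forward kinematics of a six-revolute-joint arm is commonly expressed with Denavit-Hartenberg (DH) transforms. The following minimal sketch uses purely illustrative DH values; the link lengths and offsets are assumptions, not the dimensions of the arm built here.

```python
# Minimal forward-kinematics sketch for a 6-revolute-joint arm using standard
# Denavit-Hartenberg parameters. The (d, a, alpha) values are illustrative
# placeholders, not the dimensions of the arm in this paper.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one DH link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the six link transforms; returns the 4x4 base-to-end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative DH table: (d, a, alpha) per joint, in metres and radians.
DH_TABLE = [
    (0.10, 0.00,  np.pi / 2),
    (0.00, 0.25,  0.0),
    (0.00, 0.05,  np.pi / 2),
    (0.20, 0.00, -np.pi / 2),
    (0.00, 0.00,  np.pi / 2),
    (0.06, 0.00,  0.0),
]

pose = forward_kinematics(np.radians([0, -45, 90, 0, 45, 0]), DH_TABLE)
print("End-effector position:", pose[:3, 3])
```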
3.2. The electrical control design
The concept proposed for plant process control is shown in Figure 4, which consists of an HMI touchscreen, FX3U, Raspberry Pi, USB camera, conveyor system, stepper motor driver, and stepper motor. The HMI is used to control all the motors, and two PLCs manage these drivers, with each PLC controlling three stepper motors. The robot arm in this case study is equipped with its PLC control software [29], [30], which includes an HMI. Modifications were made to the HMI to simplify robot programming for learners. Figure 5 shows the electric control wiring circuit of the robotic arm prototype.
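The paper does not specify how the Raspberry Pi hands the classification result to the FX3U PLCs; one simple possibility is to drive PLC digital inputs from GPIO pins. The snippet below is a hedged sketch of that idea, a concrete version of the signal_robot placeholder used earlier; the pin numbers and pulse width are assumptions, not the actual wiring of the prototype.

```python
# Hypothetical sketch: signalling the classification result to the PLC by
# pulsing Raspberry Pi GPIO pins wired to PLC digital inputs.
# Pin numbers and pulse width are assumptions, not the paper's wiring.
import time
import RPi.GPIO as GPIO

PIN_APPLE = 17    # assumed GPIO pin wired to a PLC input for "apple detected"
PIN_ORANGE = 27   # assumed GPIO pin wired to a PLC input for "orange detected"

GPIO.setmode(GPIO.BCM)
GPIO.setup([PIN_APPLE, PIN_ORANGE], GPIO.OUT, initial=GPIO.LOW)

def signal_robot(label: str, pulse_s: float = 0.2) -> None:
    """Pulse the output matching the detected fruit so the PLC starts its routine."""
    pin = PIN_APPLE if label == "apple" else PIN_ORANGE
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(pulse_s)
    GPIO.output(pin, GPIO.LOW)
```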
3.3. The apple and orange sorting algorithm design
In this section, we use Python programming and the OpenCV platform to implement our project. Python is a high-level programming language, while OpenCV is an open-source computer vision and machine learning software library. It contains algorithms designed to detect, recognize, identify, and classify the fruits in our case study. TensorFlow is an end-to-end open-source platform for building machine learning applications. The procedure includes image acquisition, image pre-processing, image segmentation, feature extraction, classification, and identification, as shown in Figure 7.
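To make the traditional part of this pipeline concrete, the sketch below shows one way to perform the pre-processing, segmentation, and colour-based feature extraction steps with OpenCV. The HSV thresholds are rough assumed values for demonstration; the final classification in this work relies on the trained TensorFlow model rather than these fixed ranges.

```python
# Simplified OpenCV sketch of the pre-processing / segmentation / feature
# extraction steps. The HSV ranges are assumed values for illustration.
import cv2
import numpy as np

# Assumed HSV ranges (OpenCV hue scale is 0-179).
ORANGE_RANGE = (np.array([10, 100, 100]), np.array([25, 255, 255]))
RED_RANGE    = (np.array([0, 100, 100]),  np.array([9, 255, 255]))   # red apples

def segment_fruit(frame_bgr):
    """Return a binary mask and the largest contour of the fruit region."""
    blurred = cv2.GaussianBlur(frame_bgr, (7, 7), 0)          # pre-processing
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    mask = cv2.inRange(hsv, *ORANGE_RANGE) | cv2.inRange(hsv, *RED_RANGE)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    return mask, max(contours, key=cv2.contourArea)

def extract_features(frame_bgr, contour):
    """Simple colour/shape features: mean hue and bounding-box centre."""
    x, y, w, h = cv2.boundingRect(contour)
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    mean_hue = float(roi[:, :, 0].mean())
    centre = (x + w // 2, y + h // 2)     # pixel location used for the pick command
    return mean_hue, centre
```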
4. RESULTS AND DISCUSSION
The experimental setup is shown in Figure 8. The robotic arm sorts and places the apples and oranges into basket 1 or basket 2 based on image detection and the programmed settings. First, we supply electric power to the Raspberry Pi and then run the Python code we created. Next, we position the camera above the fruits on the moving conveyor belt. When a fruit, identified as either an apple or an orange, is detected, the robotic arm moves to drop it into a basket, keeping it separated from the other fruits.
4.1. Apples and oranges detection
In the beginning, we chose fruit images from a database available on GitHub [31] to prepare the data for training the neural network. We then cropped the images to the size of the fruits to achieve some standardization. After acquiring about 200 images of different fruits in various positions, we classified them into two folder directories: testing and training for each fruit. Some images from the folders are shown in Figure 9.
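As a hedged sketch of how such a two-class dataset can be loaded and trained, the code below uses tf.keras.utils.image_dataset_from_directory on a training/testing folder layout with one subfolder per fruit. The directory names, image size, and the small CNN are illustrative assumptions, not the exact network used in this work.

```python
# Hedged sketch: loading the apple/orange folders and training a small CNN.
# Folder names, image size, and architecture are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (128, 128)
BATCH = 16

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/training", image_size=IMG_SIZE, batch_size=BATCH)   # subfolders: apple/, orange/
test_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/testing", image_size=IMG_SIZE, batch_size=BATCH)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),   # apple vs orange
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=10)
model.save("fruit_classifier.keras")
```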
The camera captures images of apples and oranges with high accuracy for identification. Once the detection is complete, the data is used to classify the fruits. In this step, the robotic arm receives commands to grab the detected fruits based on the classification results, as shown in Figure 10.
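A minimal inference step of this kind might look like the sketch below, which loads the trained model, classifies one camera frame, and maps the predicted label to a basket number. The model filename, class order, basket assignment, and confidence threshold are hypothetical and carried over from the earlier sketches.

```python
# Hedged inference sketch: classify one camera frame and choose a basket.
# Model path, class order, basket mapping, and threshold are assumptions.
import cv2
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["apple", "orange"]          # assumed alphabetical class order
BASKET = {"apple": 1, "orange": 2}         # assumed basket assignment

model = tf.keras.models.load_model("fruit_classifier.keras")

def classify_frame(frame_bgr):
    """Return (label, confidence) for a single BGR camera frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (128, 128)).astype(np.float32)
    probs = model.predict(resized[np.newaxis, ...], verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    return label, float(probs.max())

# Example: grab one frame and decide the target basket.
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if ok:
    label, confidence = classify_frame(frame)
    if confidence > 0.8:                   # assumed confidence threshold
        print(f"Detected {label} -> basket {BASKET[label]}")
```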
4.2. Robotic arm motion in sorting apples and oranges
The experiment was conducted to validate the proposed control method for detecting apples and oranges. The process includes the following steps, as shown in Figure 11. The robotic arm moves to pick and place the apple or orange into the designated basket based on location information, which is controlled by adjusting the angles of the six stepper motors. Table 3 displays the angles of each stepper motor during specific motions.
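Since each joint motion is commanded as a stepper-motor angle, the conversion from a target joint angle to a pulse count follows the usual relation steps = (angle / 360) x steps_per_revolution x microstepping x gear_ratio. The sketch below illustrates that arithmetic with assumed motor parameters (1.8-degree steppers, 1/16 microstepping, nominal gear ratios); the actual values for this arm are not given in the paper.

```python
# Illustrative conversion of a target joint angle into stepper pulses.
# Motor resolution, microstepping, and gear ratios are assumed values.
STEPS_PER_REV = 200                     # 1.8 degrees per full step (assumed)
MICROSTEPPING = 16                      # driver microstep setting (assumed)
GEAR_RATIO = [20, 20, 15, 10, 10, 5]    # assumed reduction per joint

def angle_to_steps(joint: int, angle_deg: float) -> int:
    """Number of pulses the PLC must send to move `joint` by `angle_deg`."""
    steps_per_joint_rev = STEPS_PER_REV * MICROSTEPPING * GEAR_RATIO[joint]
    return round(angle_deg / 360.0 * steps_per_joint_rev)

# Example: rotate joint 1 (base) by 45 degrees.
print(angle_to_steps(0, 45.0))          # -> 8000 pulses with the assumed parameters
```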
Figure 11 shows screenshots of the robot's movements while sorting apples and oranges, with each fruit placed into its assigned basket. The operation sequence is as follows: Figure 11(a) the robot in the home position, waiting for the object to stop; Figure 11(b) image acquisition via a camera for environment capture; Figure 11(c) object detection algorithms identifying the apple; Figure 11(d) the robot moving towards the basket based on path planning; Figure 11(e) the robot stopping at the target; Figure 11(f) placing the apple into the basket; Figure 11(g) returning to the home position; Figure 11(h) the conveyor leading the orange to image acquisition; Figure 11(i) the robot picking up the orange; Figure 11(j) moving to the target position; Figure 11(k) placing the orange in the basket; and Figure 11(l) returning to the home position.
The experiment, conducted ten times, measured object detection and classification times on the conveyor belt, with the robotic arm sorting apples and oranges into designated containers. The average sorting time was 1.53 seconds for oranges and 1.67 seconds for apples, including a one-second pause for the conveyor belt. The average time to pick and place the fruits was 13.63 seconds for oranges and 12.83 seconds for apples. The longer time for oranges was due to the greater distance to the orange container. The robotic arm, controlled by a programmable logic controller and integrated with a Raspberry Pi and machine learning, successfully sorted and placed the fruits into their respective containers. Figure 12 shows the number of detected apples and oranges over time as detection signals are sent to the robotic pick-and-place system. Figure 13 compares the robot's time per operating cycle for each type of fruit.
5. CONCLUSION
In this study, we have successfully developed a robotic arm-based intelligent sorting system for apples and oranges, utilizing camera-based image processing for real-time fruit recognition and sorting. The results of the experiments showed significant improvements in sorting accuracy and operational efficiency, proving the effectiveness of the proposed system.
This research applies image processing techniques to a custom-built robot designed for educational purposes. The system utilizes a Raspberry Pi board with an appropriate camera and a program to detect apples and oranges on a conveyor belt driven by a stepper motor, which is controlled by a programmable logic controller. Overall testing showed that the components were affordable, and the integrated system worked reasonably well. However, it does not yet match the capabilities of industrial robots. In the future, the researchers aim to further develop this system to be used in low-cost industrial applications within the country.
The integration of image processing with a 6-DOF robotic arm requires highly sophisticated image recognition algorithms to accurately identify objects, track them in real-time, and process visual data under various lighting and environmental conditions. Ensuring accuracy and robustness of these algorithms can be computationally expensive and time-consuming to develop. Combining image processing hardware (such as cameras or sensors) with robotic hardware involves challenges in terms of communication protocols, software compatibility, and hardware interfacing. Ensuring seamless integration while maintaining performance efficiency across multiple components is often complex. While image processing provides valuable input, its ability to function effectively in unpredictable environments (such as varying lighting, occlusions, or background clutter) may be limited. The robotic arm's ability to adjust and adapt to new or unforeseen conditions can also be constrained by the system's predefined parameters.
Future work can focus on further optimization of the image processing algorithms, expanding the system to handle additional types of fruits, and improving the arm's speed and adaptability in dynamic environments. Overall, this project demonstrates the feasibility of combining robotics with image processing for intelligent, automated fruit sorting, offering valuable insights into the role of automation in modern agriculture.
ACKNOWLEDGEMENTS
The authors would like to thank the Department of Mechatronics, Faculty of Engineering, and RMUTI Khon Kaen for providing the laboratory facilities and experimental lab practices for this research.
FUNDING INFORMATION
This research was supported by Thailand Science Research and Innovation (TSRI) and Rajamangala University of Technology Isan under research proposal ID: 71834.
AUTHOR CONTRIBUTIONS STATEMENT
This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration.
CONFLICT OF INTEREST STATEMENT
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
INFORMED CONSENT
We have obtained informed consent from all individuals included in this study.
ETHICAL APPROVAL
Research related to animal use has complied with all relevant national regulations and institutional policies for the care and use of animals.
DATA AVAILABILITY
The authors confirm that the data supporting the findings of this study are available within the article [and/or its supplementary materials].
REFERENCES
[1] A. K. Pothula, Z. Zhang, and R. Lu, "Evaluation of a new apple in-field sorting system for fruit singulation, rotation and imaging," Computers and Electronics in Agriculture, vol. 208, 2023, doi: 10.1016/j.compag.2023.107789.
[2] A. E. Elwakeel et al., "Designing, Optimizing, and Validating a Low-Cost, Multi-Purpose, Automatic System-Based RGB Color Sensor for Sorting Fruits," Agriculture, vol. 13, no. 9, 2023, doi: 10.3390/agriculture13091824.
[3] N. M. Baneh, H. Navid, J. Kafashan, H. Fouladi, and U. Gonzales-Barrón, "Development and evaluation of a small-scale apple sorting machine equipped with a smart vision system," AgriEngineering, vol. 5, pp. 473-487, 2023, doi: 10.3390/agriengineering5010031.
[4] M. H. Dairath et al., "Computer vision-based prototype robotic picking cum grading system for fruits," Smart Agricultural Technology, vol. 4, 2023, doi: 10.1016/j.atech.2023.100210.
[5] Y. D. Sean, D. D. Smith, V. S. P. Bitra, V. Bera, and Sk. N. Umar, "Development of Computer Vision System for Fruits," Current Journal of Applied Science and Technology, vol. 40, no. 36, pp. 1-11, 2021, doi: 10.9734/cjast/2021/v40i3631576.
[6] S. Guofeng and B. Guangxia, "Arduino-based intelligent handling robot design," Advances in Computer, Signals and Systems, vol. 7, no. 1, pp. 67-74, 2023, doi:10.23977/acss.2023.070109.
[7] H.-W. Lee, "The Study of mechanical arm and intelligent robot," IEEE Access, vol. 8, pp. 119624-119634, 2020, doi: 10.1109/ACCESS.2020.3003807.
[8] P. R. Hingu and D. N. Panchal, "Industrial robot and automation," The International Journal of Engineering and Science (IJES), vol. 10, no. 2, pp. 15-23, 2021, doi: 10.9790/1813-1002011523.
[9] Md. A.-A.-Noman, A. N. Eva, T. B. Yeahyea, and R. Khan, "Computer vision-based robotic arm for object color, shape, and size detection," Journal of Robotics and Control (JRC), vol. 3, no. 2, pp. 180-186, 2022, doi: 10.18196/jrc.v3i2.13906.
[10] S. A. Korchagin et al., "Development of an Optimal Algorithm for Detecting Damaged and Diseased Potato Tubers Moving along a Conveyor Belt Using Computer Vision Systems," Agronomy, vol. 11, no. 10, 2021, doi: 10.3390/agronomy11101980.
[11] M. A. J. Al-Sammarraie et al., "Predicting fruit's sweetness using artificial intelligence-Case study: Orange," Applied Sciences, vol. 12, no. 16, p. 8233, 2022, doi: 10.3390/app12168233.
[12] Q. Zhang and W. H. Su, "Real-Time recognition and localization of apples for robotic picking based on structural light and deep learning," Smart Cities, vol. 6, pp. 3393-3410, 2023, doi: 10.3390/smartcities6060150.
[13] L. Fu, F. Gao, J. Wu, R. Li, M. Karkee, and Q. Zhang, "Application of consumer RGB-D cameras for fruit detection and localization in field: critical review," Computers and Electronics in Agriculture, vol. 177, Oct. 2020, doi: 10.1016/j.compag.2020.105687.
[14] Normalisa, A. Rachmaniar, D. Diana, M. Saefudin, and R. Parulian, "Application of computer vision detection of apples and oranges using python language," Journal of Information System, Informatics and Computing, vol. 6, no. 2, pp. 455-466, 2022.
[15] T. Tung, N. V. Tinh, D. T. P. Thao, and T. V. Minh, "Development of a prototype 6 degree of freedom robot arm," Results in Engineering, vol. 18, 2023, doi: 10.1016/j.rineng.2023.101049.
[16] C.-Y. Liu, J.-J. Liang, T.-H. S. Li, and K.-C. Chang, "Motion Imitation and Augmentation System for a Six Degrees of Freedom Dual-Arm Robot," IEEE Access, vol. 7, pp. 153986-153998, 2019, doi: 10.1109/ACCESS.2019.2949019.
[17] R. Guida, M. C. De Simone, P. Dasic, and D. Guida, "Modeling techniques for kinematic analysis of a six-axis robotic arm," IOP Conf. Series: Materials Science and Engineering, vol. 568, pp. 1-6, 2019, doi: 10.1088/1757-899X/568/1/012115.
[18] P. Chu, Z. Li, K. Lammers, R. Lu, and X. Liu, "Deep learning-based apple detection using a suppression mask R-CNN," Pattern Recognition Letters, vol. 147, pp. 206-211, 2021, doi: 10.1016/j.patrec.2021.04.022.
[19] M. R. O. Espinosa, L. R. Porto, V. S. W. Orlando, A. M. G. Tommaselli, A. P. Dal Poz, and N. N. Imai, "Evaluation of YOLO Efficiency in Automatic Orange Detection in Multi-Exposure Images," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. X-3-2024, pp. 303-308, 2024, doi: 10.5194/isprs-annals-x-3-2024-303-2024.
[20] S. Zeeshan, T. Aized, and F. Riaz, "The Design and Evaluation of an Orange-Fruit Detection Model in a Dynamic Environment Using a Convolutional Neural Network," Sustainability, vol. 15, no. 5, Feb. 28, 2023, doi: 10.3390/su15054329.
[21] Z. Cai, W. Huang, Q. Wang, and J. Li, "Detection of early decayed oranges by structured-illumination reflectance imaging coupling with texture feature classification models," Frontiers in Plant Science, vol. 13, Aug. 10, 2022, doi: 10.3389/fpls.2022.952942.
[22] H. X. Huynh, B. H. Lam, H. V. C. Le, T. T. Le, and N. Duong-Trung, "Design of an IoT ultrasonic-vision based system for automatic fruit sorting utilizing size and color," Internet of Things, vol. 25, 2024, doi: 10.1016/j.iot.2023.101017.
[23] D. Keča, I. Kunović, M. J. Matić, and A. S. Krzic, "Ball Detection Using Deep Learning Implemented on an Educational Robot Based on Raspberry Pi," Sensors, vol. 23, no. 8, 2023, doi: 10.3390/s23084071.
[24] W. Fuertes, K. Hunter, D. S. Benítez, N. Pérez, F. Grijalva and M. Baldeon-Calisto, "Application of Convolutional Neural Networks to Emotion Recognition for Robotic Arm Manipulation," 2023 IEEE Colombian Conference on Applications of Computational Intelligence (ColCACI), Bogotá D.C., Colombia, 2023, pp. 1-6, doi: 10.1109/ColCACI59285.2023.10225880.
[25] D. G. Koç and M. Vatandaş, "Classification of Some Fruits using Image Processing and Machine Learning," Turkish Journal of Agriculture - Food Science and Technology, vol. 9, no. 12, pp. 2189-2196, 2021, doi: 10.24925/turjaf.v9i12.2189-2196.4445.
[26] C. V. Duy, H. L. Duc, P. L. Hoai, and D. D. Anh, "Design and Development of Robot Arm System for Classification and Sorting Using Machine Vision," FME Transactions, vol. 50, pp. 181-192, 2022, doi: 10.5937/fme2201181C.
[27] A. R. Al Tahtawi, M. Agni, and T. D. Hendrawati, "Small-scale Robot Arm Design with Pick and Place Mission Based on Inverse Kinematics," Journal of Robotics and Control (JRC), vol. 2, no. 6, 2021, doi: 10.18196/jrc.26124.
[28] L. Villaverde and D. Maneetham, "Kinematic and Parametric Modeling of 6DOF(Degree-of-Freedom) Industrial Welding Robot Design and Implementation," International Journal of Technology, vol. 15, no. 4, pp. 1056-1070, 2024, doi: 10.14716/ijtech.v15i4.6559.
[29] J. Kwantongon, W. Suamuang, and K. Kamata, "A Teaching Demonstration Set of a 5-DOF Robotic Arm Controlled by PLC," International Journal of Information and Education Technology, vol. 12, no. 12, pp. 1458-1462, 2022, doi: 10.18178/ijiet.2022.12.12.1772.
[30] H. Şahin, R. Güntürkün, and O. Hız, "Design and Application of PLC Controlled Robotic Arm Choosing Objects According to Their Color," Electronic Letters on Science and Engineering, vol. 16, no. 2, pp. 52-62, 2020.
[31] Chtchou, "Fruit Image Database for Machine Learning," GitHub, 2023. [Online]. Available: https://github.com/Chtchou/fruitimage-database. [Accessed: 11-Oct-2024].