Automation, Control and Intelligent Systems
Volume 4, Issue 2, April 2016, Pages: 42-47

Research on Picking Goods in Warehouse Using Grab Picking Robots

Juntao Li1, Xiaoqing Zhao2

1School of Information, Beijing Wuzi University, Beijing, China

2Graduate Department, Beijing Wuzi University, Beijing, China


To cite this article:

Juntao Li, Xiaoqing Zhao. Research on Picking Goods in Warehouse Using Grab Picking Robots. Automation, Control and Intelligent Systems. Vol. 4, No. 2, 2016, pp. 42-47. doi: 10.11648/j.acis.20160402.16

Received: April 12, 2016; Accepted: April 25, 2016; Published: May 30, 2016


Abstract: Warehouse picking robots currently need the help of human vision and hands to quickly identify and fetch the desired products. This paper describes the operation procedure of the grab picking robot: after receiving an order, the robot uses technical means such as cameras and image processing to identify the desired products and a manipulator to grasp them, thereby realizing automated picking in the warehouse.

Keywords: Grab Picking Robot, Warehouse Picking, Image Processing Technology


1. Introduction

With the development of the economy, user demand tends toward small batches and many varieties, and the amount and variety of goods in a distribution center increase dramatically. In distribution center operations, picking accounts for an ever larger proportion of the work, and it is the operation that consumes the most labor cost and time. Picking efficiency directly affects operational efficiency and management benefit, and it is also an important factor affecting the service level of the distribution center. The picking link of a traditional distribution center requires a great deal of manual operation; the staff it needs account for over 50% of total employment in a distribution center, and the labor intensity of this operation mode is high [1].

With the development and application of grab picking robots (as shown in Fig. 1), picking speed is improved and manpower is saved effectively. At present, the use of robots in the logistics industry is mainly concentrated in stacking, unstacking, handling and a few other tasks, but with the development of robot vision technology, robots have begun to be applied to picking as well [2]. By combining a visual identification system based on the two-dimensional plane with three-dimensional laser sensing and a multi-function parallel manipulator, a robot can automatically identify parameters of the goods to be fetched, such as color, size and location, grasp the goods with a suitable manipulator, and place them on the designated carrier or transmission line, thereby realizing automatic commodity picking.

Fig. 1. Grab picking robot.

In November 2015, Fetch and Harvest executives said that at present their robots could only be used as mechanical mules, identifying and fetching the required products with the aid of human vision and dexterous human hands. However, Fetch was developing a robot equipped with cameras and a grab arm that could eventually take goods from the shelves by itself [3].

The operation process of the grab picking robot mainly includes the acquisition of the goods image, the identification of goods characteristics, the extraction of the goods contour, the determination of the goods location, and accurate grabbing. First, the robot collects images of the goods through the camera and identifies their color and texture features to determine whether the goods are the desired ones. Second, it extracts the contour of the goods to further confirm the identification and to adjust its hands according to the contour characteristics. Third, it determines the location of the goods according to the existing landmarks. Finally, it accurately grasps the required goods (as shown in Fig. 2).

Fig. 2. The operation process of grab picking robots.
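As a minimal sketch of this pipeline, the following Python function arranges the four stages in order. All helper names (acquire_image, match_color_texture, extract_contour, locate_by_landmarks, resize_gripper, grasp) are hypothetical placeholders for the techniques detailed in Sections 2-5, passed in so the sketch stays self-contained.

# A hypothetical end-to-end picking loop; each helper passed in stands
# for one stage described in Sections 2-5 of this paper.
def pick_order(order, camera, robot,
               acquire_image, match_color_texture,
               extract_contour, locate_by_landmarks):
    for item in order:
        frame = acquire_image(camera)               # Section 2: capture a frame
        if not match_color_texture(frame, item):    # Section 3: color/texture check
            continue
        contour = extract_contour(frame)            # Section 4: edge and contour
        if contour is None:
            continue
        x, y = locate_by_landmarks(frame)           # Section 5: triangulation
        robot.resize_gripper(contour)               # fit the hand to the outline
        robot.grasp(x, y)                           # accurate grab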

2. The Acquisition of Goods Image

The work of acquiring the goods image is mainly performed by the camera and the image acquisition card. In this paper the camera, rather than a dedicated color identification device, serves as the "eyes" of the goods identification and sorting system and photographs the goods, while the image acquisition card converts the captured image into digital form.

The camera combines a CCD (charge-coupled device) image sensor with a large-scale integrated digital signal circuit, and can produce images of high quality and high resolution. The image acquisition card accepts six-way composite video input, three-way Y/C input, or a combination of both through multiplexer switches; software selects one channel or one set as the current input and feeds it to the digital decoder. The digital decoder converts the color signal into luminance and color-difference signals and outputs them to the A/D converter for digitization. After the image processing algorithm runs, the digital signal is displayed by the VGA card and stored in computer memory over the PCI bus. The working principle of the image acquisition card is shown in Fig. 3.

The software development kit provides a rich set of interface functions, making it convenient to control the acquisition card, collect images, and set its working parameters. By function, the interface functions can be classified into control functions and capture functions. The control functions set the acquisition card's parameters, including data format, source, hue, color saturation, brightness, contrast, display mode, acquisition method, masking and so on; they are usually called at start-up to initialize the card. The capture functions are responsible for displaying the captured data by sending it to the display card, or for processing it by sending it to memory: during program execution they push the image data into memory, where image processing algorithms complete the processing, and the result is shown on the terminal display.

Fig. 3. The working principle of the image acquisition card.
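The SDK described above is vendor-specific, so as a hedged illustration the following Python sketch plays the same two roles with OpenCV's generic VideoCapture interface: the property settings stand in for the control functions, and the frame grab stands in for the capture functions. The device index 0, the parameter values and the output path are assumptions.

import cv2

# Open the first attached camera; device index 0 is an assumption.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("camera not available")

# "Control function" role: set working parameters before capturing.
cap.set(cv2.CAP_PROP_BRIGHTNESS, 128)   # brightness
cap.set(cv2.CAP_PROP_CONTRAST, 32)      # contrast
cap.set(cv2.CAP_PROP_SATURATION, 64)    # color saturation

# "Capture function" role: grab one frame and push it into memory.
ok, frame = cap.read()
cap.release()
if ok:
    cv2.imwrite("goods.png", frame)     # store for later processing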

3. The Identification of Goods Characteristics

To judge whether a cargo item is the required goods, features must first be extracted from the collected image; these mainly include color features and texture features. Because the position and angle of the goods are uncertain and therefore change within the image, the selected features must be invariant to rotation and translation. This paper chooses color features and texture features as the basis for goods identification.

3.1. The Identification of Color Characteristics

Color is an important visual property of an image; the definition of color features is clear and their extraction is relatively easy, so they have been used widely in object recognition, target tracking, image retrieval and other fields [4]. A color feature is a global characteristic, one of the most commonly used in object recognition, and it describes the surface properties of the scene corresponding to the image or image region. Its advantage is that it is unaffected by image rotation and translation and, after further normalization, by changes in image size; its disadvantage is that it does not express the spatial distribution of color.
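The paper does not fix a particular color descriptor; a common rotation- and translation-invariant choice is a normalized color histogram, sketched below with OpenCV. The bin count and the Bhattacharyya comparison threshold are assumptions.

import cv2

def color_histogram(bgr_image, bins=8):
    # 3-D histogram over B, G, R; counting ignores where pixels sit,
    # so the descriptor is invariant to rotation and translation.
    hist = cv2.calcHist([bgr_image], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    # L1-normalize so the descriptor is also invariant to image size.
    return cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)

def is_same_color(img_a, img_b, threshold=0.3):
    # Bhattacharyya distance: 0 = identical histograms, 1 = disjoint.
    d = cv2.compareHist(color_histogram(img_a), color_histogram(img_b),
                        cv2.HISTCMP_BHATTACHARYYA)
    return d < threshold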

3.2. The Identification of Texture Characteristics

Texture features are also a kind of global feature describing the surface properties of the scene corresponding to the image or image region. Unlike color features, texture features are not based on a single pixel; they must be computed statistically over a region containing multiple pixels. In pattern matching this regional character is a great advantage, since matching does not fail because of a local deviation. As statistical characteristics, texture features are often rotation-invariant and have a strong ability to resist noise. Their disadvantage is that the computed texture may deviate considerably when the resolution of the image changes.
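Again the paper leaves the specific texture descriptor open; a standard statistical choice is the gray-level co-occurrence matrix (GLCM), sketched below with scikit-image. Averaging the properties over four directions is an assumption made to approximate rotation invariance.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray_image):
    # Expects an 8-bit grayscale image. Co-occurrence statistics are
    # computed over a region, not a single pixel; four directions are
    # averaged to approximate rotation invariance.
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray_image, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}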

4. The Extraction of Goods Contour

The cargo outline is one of the most basic characteristics of the goods image. It can be extracted by detecting the edges of the goods, which both serves to detect the required goods and realizes their positioning, so that the manipulator can be controlled to move and grab them.

4.1. The Processing of the Image

The goal of image processing is to provide the image characteristic values that discriminate the goods type and to output the results of goods identification. Machine vision is based on the theory of digital image processing, and its core content is image processing and recognition [5]. Image processing in machine vision transforms the images captured by the vision system hardware so that they become suitable for machine recognition. A directly captured original image often contains various kinds of noise and interference, so it must be filtered and enhanced with various transformation methods to improve image quality before features can be extracted and recognized. This is the image processing procedure.

4.1.1. Graying

The images captured by cameras are colored. Although color images contain rich information about the goods, processing them requires a large amount of calculation. In order to meet the real-time requirement of picking, this paper first converts the collected image to grayscale.

The images captured by cameras are in RGB mode; that is, each pixel is a combination of R (red), G (green) and B (blue) values between 0 and 255, and extracting the color components yields the RGB value of each pixel of the input image. R, G and B denote the values of the respective color channels, and I denotes the gray value of the resulting black-and-white image. The main grayscale conversion methods are:

(1) The maximum method: the gray value equals the largest of the three values.

I=Max(R, G, B)               (1)

(2) The average method: the gray value equals the average of the three values.

I=(R+G+B)/3                (2)

(3) The weighted average method: the gray value equals a weighted average of the three values, where W_R, W_G and W_B are the weights of R, G and B, chosen according to color sensitivity.

I = W_R·R + W_G·G + W_B·B            (3)

Because human sensitivity to color ranges from green (highest) through red to blue (lowest), long study has shown that a more reasonable gray image is obtained with the weights:

I=0.3R+0.59G+0.11B           (4)

In this algorithm, we choose a power bank as the target goods and use the weighted average method of Eq. (4) to convert the image to grayscale. The unprocessed photo and the processed result are shown in Fig. 4 and Fig. 5.

Fig. 4. The unprocessed photo.

Fig. 5. The grayscale image.
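As a minimal sketch of Eq. (4), the conversion is a single weighted sum per pixel; the OpenCV built-in shown for comparison uses nearly identical ITU-R BT.601 weights (0.299, 0.587, 0.114). The input path is an assumption.

import cv2
import numpy as np

def to_gray(bgr_image):
    # OpenCV stores channels as B, G, R; apply I = 0.3R + 0.59G + 0.11B.
    b, g, r = bgr_image[:, :, 0], bgr_image[:, :, 1], bgr_image[:, :, 2]
    gray = 0.3 * r + 0.59 * g + 0.11 * b
    return gray.astype(np.uint8)

img = cv2.imread("goods.png")                    # path is an assumption
gray = to_gray(img)
gray_cv = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # library equivalent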

4.1.2. Denoising

The grayscale image is obtained directly from the RGB input, so not only the effective values of the original image but also noise points are carried into it. In order to extract the target from the gray image effectively and reduce the influence of noise, the gray image must be denoised. An appropriate filter must therefore be chosen, one that effectively reduces noise interference while preserving the goods' location information in the image as far as possible. Based on a large number of experiments, the median filter is chosen for filtering [3].

The median filter is a nonlinear digital filtering technique, often used to eliminate noise in images or other signals. Its main idea is to examine the samples of the input signal within an observation window composed of an odd number of samples, sort the values in the window, and output the median, the value in the middle of the sorted window. The median filter preserves image edge details while removing impulse noise and salt-and-pepper noise, so it is used to remove noise in this paper. The specific steps are as follows:

(1) Select a template of a certain size and move it along the row or column direction of the image data;

(2) After each move, sort the pixel gray values within the window;

(3) Replace the gray value of the pixel at the window's center with the sorted median.

In mathematical terms this can be represented as:

g(i, j) = Median{ f(i+k, j+l) | (k, l) ∈ W }            (5)

where i, j are the coordinates of the pixel currently being processed and the range of k, l depends on the shape and size of the template W used.

In this paper we apply a median filter with a 3×3 square template to the goods images; the experimental result is shown in Fig. 6.

Fig. 6. The grayscale photo after a median filter.
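A minimal sketch of the 3×3 median filter of Eq. (5): the explicit loop mirrors steps (1)-(3) above, and cv2.medianBlur is the library equivalent. The input path is an assumption.

import cv2
import numpy as np

def median_filter_3x3(gray):
    # Step (1): slide a 3x3 template over the image (borders left as-is).
    out = gray.copy()
    for i in range(1, gray.shape[0] - 1):
        for j in range(1, gray.shape[1] - 1):
            window = gray[i - 1:i + 2, j - 1:j + 2]
            # Steps (2)-(3): sort the 9 values, keep the median.
            out[i, j] = np.median(window)
    return out

gray = cv2.imread("goods.png", cv2.IMREAD_GRAYSCALE)  # path is an assumption
denoised = median_filter_3x3(gray)
denoised_cv = cv2.medianBlur(gray, 3)                 # library equivalent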

4.2. The Extraction of Goods Contour

The cargo outline is one of the most basic characteristics of the goods image, and it can be extracted by detecting the edges of the goods; this not only serves to detect the required goods but also realizes their positioning, so that the manipulator can be controlled to fetch them. The Sobel operator is easy to implement, has a smoothing effect on noise, is little affected by it, and provides fairly accurate edge direction information. The Canny operator can detect genuine weak edges by using non-maximum suppression and double thresholds. Therefore a combination of the Sobel and Canny operators is used to extract a more accurate edge of the goods. The specific steps are as follows (a code sketch follows the list):

(1) Use the Sobel operator to extract the edges of the gray image A1(I, J), taking twice the mean gradient amplitude as the binarization threshold; the resulting edge image is denoted A2(I, J).

(2) Project A2(I, J) along the horizontal and vertical directions and determine the edge positions of the goods from the peaks formed by the goods' edges.

(3) Remove the image regions outside the goods location to obtain the image A3(I, J) of the goods support region.

(4) Detect the cargo support region in detail to obtain the edge image A4(I, J). First, smooth the image with a 9×9 Gaussian template. Second, compute the gradient amplitude and direction with finite differences of the first-order partial derivatives. Third, apply non-maximum suppression to the gradient amplitude. Finally, detect the image edges by thresholding, taking as the threshold the gradient amplitude for which 30% of the points qualify as edge points.

(5) Connect the discontinuity points of A3(I, J): whenever an edge endpoint is reached, search within the eight-neighborhood of A4(I, J) for an edge that can be connected, until all gaps of A3(I, J) are closed. The resulting goods edge image is written A5(I, J).
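A condensed OpenCV sketch of steps (1)-(4): the Sobel pass with a 2× mean-amplitude threshold locates the goods region, and Canny refines the edges inside it. Cropping via projections and the 30% rule are approximated here (the percentile and the low/high threshold ratio are assumptions), and the sketch assumes some coarse edges are found.

import cv2
import numpy as np

def goods_edges(gray):
    # Step (1): Sobel gradient amplitude, binarized at 2x its mean.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    amp = np.hypot(gx, gy)
    coarse = (amp > 2.0 * amp.mean()).astype(np.uint8)

    # Steps (2)-(3): project the coarse edges to bound the goods region.
    rows = np.where(coarse.sum(axis=1) > 0)[0]
    cols = np.where(coarse.sum(axis=0) > 0)[0]
    roi = gray[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

    # Step (4): 9x9 Gaussian smoothing, then Canny inside the region.
    # The high threshold approximates the paper's 30% rule via a
    # percentile of the gradient amplitude (an assumption).
    smooth = cv2.GaussianBlur(roi, (9, 9), 0)
    high = np.percentile(amp[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1], 70)
    return cv2.Canny(smooth, high / 2.0, high)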

Fig. 7. The edge of goods image.

We have now obtained the goods edge image (as shown in Fig. 7), but in practice the detected edges are mostly single-pixel edges containing breakpoints, and the pattern printed on the goods itself also produces edges; if the binary image is not repaired before contour extraction, a complete goods contour cannot be obtained. We therefore first remove isolated points from the image, and then apply a dilation operation with a 3×3 rectangular structuring element to the cleaned edge image. At this point the processed image no longer contains discontinuity points, so the outer contour can be extracted with an edge tracking algorithm. The principle of edge tracking is to arrange boundary points X0, X1, ... in a certain order so that they form the boundary of the object, terminating the tracking when the point sequence closes into a loop, that is, when a subsequent point coincides with the starting point. The whole algorithm thus has three steps. First, we scan the image line by line to find an edge point; then we look for subsequent points with a window operator; finally, we count the edge pixels contained in each outline, choose a reasonable threshold, and discard outlines whose pixel count falls below it, eliminating the influence of tiny silhouettes and yielding the complete outer contour of the goods (as shown in Fig. 8).

Fig. 8. The complete outer contour of goods.
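A minimal OpenCV sketch of this repair-and-trace stage: dilation with a 3×3 rectangular element closes the breakpoints, findContours plays the role of the edge tracking algorithm, and short contours below a point-count threshold (the value 100 is an assumption) are discarded.

import cv2

def outer_contour(edge_image, min_points=100):
    # Close single-pixel breakpoints with a 3x3 rectangular element.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.dilate(edge_image, kernel)
    # Trace boundaries; RETR_EXTERNAL keeps only outer contours.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    # Discard tiny silhouettes below the point-count threshold,
    # then keep the longest remaining outline as the goods contour.
    contours = [c for c in contours if len(c) >= min_points]
    return max(contours, key=len) if contours else None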

From the extracted contour, the centroid of the goods can be calculated easily and their location determined, which facilitates cargo recognition and grabbing.

5. The Determination of Goods Location and Accurate Grabbing

The intersecting-circles method of triangulation is used to determine the location of the goods to be picked in the warehouse. We establish a rectangular coordinate system with its origin at a corner of the warehouse, determine the absolute coordinates of two landmarks, and measure the visual angle between the two landmarks as seen from the goods. By the principle that inscribed angles subtending the same arc are equal, these data determine a unique circle whose center and radius can be found. When the number of landmarks in the image is N, N(N−1)/2 such circles can be determined. Solving the equations of two circles that share a landmark simultaneously yields two intersection points: one is the coordinates of the shared landmark and the other is the coordinates of the goods.
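A geometric sketch of this method in Python: circle_from_landmarks recovers a circle from two landmark coordinates and the inscribed angle measured at the goods (the inscribed-angle theorem gives R = |P1P2| / (2 sin θ); the side of the chord is ambiguous, so both candidate centers are returned), and circle_intersections solves two circle equations simultaneously. All names and the usage pattern are illustrative assumptions.

import numpy as np

def circle_from_landmarks(p1, p2, theta):
    # Inscribed-angle theorem: a chord of length c seen under inscribed
    # angle theta lies on a circle of radius R = c / (2 sin theta).
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    c = np.linalg.norm(p2 - p1)
    R = c / (2.0 * np.sin(theta))
    mid = (p1 + p2) / 2.0
    n = np.array([p1[1] - p2[1], p2[0] - p1[0]]) / c   # unit normal to chord
    d = np.sqrt(max(R * R - (c / 2.0) ** 2, 0.0))      # center offset from midpoint
    # The side of the chord is ambiguous: return both candidate centers.
    return (mid + d * n, R), (mid - d * n, R)

def circle_intersections(c1, r1, c2, r2):
    # Solve the two circle equations simultaneously (standard geometry).
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                      # no proper intersection
    a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
    h = np.sqrt(max(r1 * r1 - a * a, 0.0))
    base = c1 + a * (c2 - c1) / d
    n = np.array([c1[1] - c2[1], c2[0] - c1[0]]) / d
    return [base + h * n, base - h * n]

# Usage: build circles from landmark pairs (A, B) and (B, C); of the two
# intersection points, the one that is not landmark B is the goods.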

The grab picking robot decides whether to grab an object according to its color, texture and contour features, adjusts the opening of the manipulator according to the contour, and finally moves to the cargo location to perform an accurate grab.

6. Summary and Prospect

The development and use of grab picking robots means that human vision can be replaced by automatic goods identification and human hands by automatic grasping. The grab picking robot collects the goods image through its camera, then identifies characteristics such as color and texture to determine whether the goods are the desired ones. It further processes the collected image to obtain the goods contour, both to confirm the identification and, finally, to resize the manipulator according to the contour for an accurate grasp.

The successful development of the grab picking robot addresses just one link in warehouse operations; to fully realize unmanned warehousing, intelligent storage technologies such as AGVs must also be integrated. Nevertheless, it helps to improve warehouse picking efficiency, save labor costs and increase enterprise competitiveness.

Acknowledgements

The study is supported by the Beijing Intelligent Logistics System Collaborative Innovation Center and the Intelligent Logistics System Beijing Key Laboratory (No. BZ0211).


References

  1. Xiaoguang Zhou, Ximei Zhang, Yukun Liu. A flexible picking system based on mobile robots in distribution centers [J]. Logistics Technology, 2015, 34(4): 238-240.
  2. Jiantao Tian, Pei Su, Na Wang. Research on visual robot goods identification technology [J]. Journal of Beijing Union University, 2014, 28(4): 12-17.
  3. The Industrial Control Network. Amazon's intelligent warehouse makes the robot market more popular [EB/OL]. [2016-4-2]. http://www.gkong.com/item/news/2015/11/85518.html.
  4. Qing Li, Yujin Zhang. Image classification method based on the characteristics of the elements and association rules [J]. Electronic Journals, 2002, 30(9): 1262-1265.
  5. Azriel Rosenfeld. Introduction to machine vision [J]. IEEE Control Systems Magazine, 1985, 5: 14-17.
  6. Kuirong Cong, Jie Han, Faliang Chang. Visual robots extract goods contour and position [J]. Journal of Shandong University (Engineering Science), 2010, 40(1): 15-18.
  7. Baoxia Cui, Chi Zhang, Tingting Luan. Landmark extraction method for panoramic vision robot localization [EB/OL]. 2016 [2016-3-31]. http://www.cnki.net/kcms/detail/21.1189.T.20160302.1642.006.html.
  8. Guannan Xu, Qingguan Xia, Zhonghui Zhang. Analysis of visual robot target detection [J]. Mechanical and Electronic Information, 2015(30): 128-131.
  9. Jintian Huang, Huailin Cui. Research on robot visual positioning based on cameras [J]. Journal of Guangdong Technical Teachers College, 2014, 36(3): 69-72.
  10. Benfa Zhang, Xiangping Meng. Overview of mobile robot localization methods [J]. Shandong Industrial Technology, 2014(22): 250.
  11. Zhiqiang Zhang, Zhiyou Niu, Siming Zhao. Overview of freshwater fish species identification based on machine vision technology [J]. Journal of Agricultural Engineering, 2011(11): 388-392.
