Combination of an Inertial Measurement Unit and a Camera for Determining the Position of an Airplane

Abstract: In this article we propose a method that combines an extended Kalman filter, an inertial navigation system and a camera to determine the location and flight parameters of an aircraft. The camera observations are fused with an inertial measurement unit through the Kalman filter to obtain accurate and timely information about the aircraft's position. The algorithms are simulated in Simulink and implemented in hardware on a dsPIC33F256 chip.


Introduction
Nowadays, inertial navigation systems are widely used to determine the location of vehicles such as aircraft, helicopters and cars [6], [7], [10]. The core of an inertial navigation system is the inertial measurement unit (IMU), which measures linear acceleration and angular velocity. By integrating these signals in real time, an inertial navigation system can estimate the position, velocity and orientation of the vehicle. However, this estimation process cannot be run over long periods because the uncertainty of the computed solution grows over time. The causes of this error are noise and systematic errors in the inertial measurements. Current inertial navigation systems therefore often combine GPS with an extended Kalman filter to minimize the position error. However, when GPS signals are lost the system faces a serious limitation: errors in the navigation state estimate accumulate quickly over time due to the large nonlinear drift in the inertial sensor measurements.
In this paper, an extended Kalman filter is used in combination with an inertial measurement unit and an observation camera to support inertial navigation in aircraft positioning. The proposed method uses a fixed landmark to calculate and predict the position of the aircraft.

Inertial measurement unit
The use of an IMU for position calculation has been discussed in many papers [1][2][3][4][5]. For aircraft, two coordinate systems are commonly used: the body frame (OX_bY_bZ_b), attached to the center of mass of the aircraft, and the earth frame (OX_eY_eZ_e), fixed on the ground. The Euler angles are propagated from the measured body rates by the kinematic equations

dφ/dt = ω_x + (ω_y sin φ + ω_z cos φ) tan θ
dθ/dt = ω_y cos φ − ω_z sin φ
dψ/dt = (ω_y sin φ + ω_z cos φ) / cos θ

where φ, θ and ψ are the Euler angles and ω_x, ω_y, ω_z are the angular rates measured about the three body axes. The rotation matrix C_b^e, built from φ, θ and ψ, is used to convert the measured accelerations from the body frame to the earth frame [5]:

[f_X, f_Y, f_Z]^T = C_b^e [f_xb, f_yb, f_zb]^T

where f_xb, f_yb, f_zb are the measured accelerations and f_X, f_Y, f_Z are the accelerations of the aircraft in the earth frame.
By double-integrating the accelerations of the aircraft in the earth frame we obtain the position of the aircraft. This position will "diverge" over time because of various error sources, such as accelerometer bias and noise.
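As an illustration of the dead reckoning described above (not the authors' implementation), the following sketch integrates the Euler-angle kinematics and double-integrates the rotated specific force; the level-flight input values are hypothetical:

```python
import numpy as np

def euler_rates(phi, theta, w):
    """Euler-angle kinematics: map body rates (wx, wy, wz) to Euler-angle rates."""
    wx, wy, wz = w
    phi_dot   = wx + (wy * np.sin(phi) + wz * np.cos(phi)) * np.tan(theta)
    theta_dot = wy * np.cos(phi) - wz * np.sin(phi)
    psi_dot   = (wy * np.sin(phi) + wz * np.cos(phi)) / np.cos(theta)
    return np.array([phi_dot, theta_dot, psi_dot])

def body_to_earth(phi, theta, psi):
    """ZYX direction cosine matrix from the body frame to the earth frame."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps, sph * sth * cps - cph * sps, cph * sth * cps + sph * sps],
        [cth * sps, sph * sth * sps + cph * cps, cph * sth * sps - sph * cps],
        [-sth,      sph * cth,                   cph * cth]])

# Dead reckoning: integrate angles, rotate the specific force to the earth
# frame, remove gravity, then integrate twice to get position.
dt = 0.01
angles = np.zeros(3)                 # phi, theta, psi
vel = np.zeros(3)
pos = np.zeros(3)
g = np.array([0.0, 0.0, -9.81])
for _ in range(100):                 # 1 s of level flight, 1 m/s^2 forward accel
    w_body = np.zeros(3)                     # gyro output (rad/s)
    f_body = np.array([1.0, 0.0, 9.81])      # accelerometer output (m/s^2)
    angles = angles + euler_rates(angles[0], angles[1], w_body) * dt
    f_earth = body_to_earth(*angles) @ f_body + g
    vel = vel + f_earth * dt
    pos = pos + vel * dt
```

Any bias added to `f_body` in this loop grows quadratically in `pos`, which is the divergence the paper compensates with camera measurements.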
To reduce this error optimally, an absolute position measurement is needed to correct the predictions during the process. We propose a method which uses a camera to image the target, analyzes the target image, and combines the camera, the IMU and the extended Kalman filter to determine the location of the target.

Color space conversion
The first problem in image processing software is converting from one color space to another. The default color space of most cameras is RGB, since conversion between RGB and other color spaces is quick and easy and uses minimal system resources. But RGB is also problematic: its discrete distribution of colors creates many difficulties in color image processing. The solution is the HSL color space, which rearranges the geometry of the RGB space so that it corresponds more closely to human perception than the Cartesian RGB representation, so colors can be classified more simply than in RGB.
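A minimal sketch of the RGB-to-HSL conversion discussed above, using Python's standard `colorsys` module (which returns channels in HLS order); the helper name is our own:

```python
import colorsys

def rgb_to_hsl(r, g, b):
    """Convert an RGB pixel (0-255 per channel) to (hue in degrees,
    saturation, lightness). colorsys works on floats in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, l

# Pure red and dark red share the same hue, which makes color classification
# more robust to illumination changes than thresholding raw RGB values.
bright = rgb_to_hsl(255, 0, 0)   # hue 0.0, full saturation
dark   = rgb_to_hsl(128, 0, 0)   # same hue, lower lightness
```

Thresholding on the hue channel alone is what makes landmark colors separable "in a simple mode" compared with RGB.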

Target color detection
The camera is used to detect the color of the target in order to calculate the distance between the target point and the aircraft through identification and processing of the captured images. The identification and image-processing problem is presented by the authors of [6,8].

The relationship between 3D positions and 2D image positions
The perspective projection model of the camera allows us to deduce a position in 3-dimensional space from a position in the 2-dimensional camera image (figure 1). The model transforms an arbitrary point (A, B, C) to a pixel (b, c) on the image plane by

b = f · B / A,  c = f · C / A

where f is the focal length of the camera. The focal length depends only on the image width in pixels (w) and the field-of-view angle (FOV), both characteristics of the camera, hence:

f = w / (2 tan(FOV / 2))
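A short sketch of the two relations above (the point coordinates and camera parameters are hypothetical examples, not values from the paper):

```python
import math

def focal_length_px(w, fov_rad):
    """Focal length in pixels from image width w and horizontal field of view."""
    return w / (2.0 * math.tan(fov_rad / 2.0))

def project(point, f):
    """Perspective projection of a 3-D point (A, B, C), with A along the
    optical axis, onto image coordinates (b, c)."""
    A, B, C = point
    return f * B / A, f * C / A

f = focal_length_px(640, math.radians(60))     # 640 px wide image, 60 deg FOV
b, c = project((10.0, 1.0, -0.5), f)           # 10 m ahead, 1 m right, 0.5 m down
```

Note that the projection scales inversely with the distance A, which is what later lets the apparent landmark size recover that distance.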

Estimating the location of the object
After the object has been detected, it must be tracked and its position kept in the image frame of reference. Image moments are used to calculate the center, area, axes and other characteristics of the figure. The spatial moment of order (i, j) is determined by

M_ij = Σ_X Σ_Y X^i Y^j I(X, Y)

where M_ij is the spatial moment and I(X, Y) is the pixel intensity at (X, Y). Next, the central moments µ_ij are computed about the central coordinates (Xc, Yc) of the object, calculated using

Xc = M_10 / M_00,  Yc = M_01 / M_00

The positions needed to calculate the bounding box are the largest X, the largest Y, the smallest X and the smallest Y of the object. Based on these positions, one can calculate the exact distance between the camera and the landmark using the equations presented in the sections below.
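A sketch of the moment, centroid and bounding-box computations above on a toy binary mask (the mask is a hypothetical example):

```python
import numpy as np

def moment(img, i, j):
    """Spatial moment M_ij = sum over X, Y of X^i * Y^j * I(X, Y)."""
    Y, X = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum((X ** i) * (Y ** j) * img))

def centroid(img):
    """Object center (Xc, Yc) = (M10 / M00, M01 / M00)."""
    m00 = moment(img, 0, 0)
    return moment(img, 1, 0) / m00, moment(img, 0, 1) / m00

def bounding_box(img):
    """Extreme pixel coordinates (x_min, x_max, y_min, y_max) of a binary mask."""
    ys, xs = np.nonzero(img)
    return xs.min(), xs.max(), ys.min(), ys.max()

# 3x3 bright square in an 8x10 binary mask, centered at pixel (5, 4)
mask = np.zeros((8, 10))
mask[3:6, 4:7] = 1.0
```

In practice the same quantities come from a library routine such as OpenCV's `cv2.moments`, but the definitions are exactly those above.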

Calculation of the camera location
To determine the camera position, assume the target landmark is circular with a diameter of 30 cm. First, the distance between the camera and the landmark is calculated under the assumption that the landmark is at the center of the image and perpendicular to the camera axis. The more complex situations where the landmark is not at the center of the image are studied next.

Calculating the Distance Between the Camera and the Landmark When It Is Centered in and Perpendicular to the Image
When the landmark is located exactly at the center of the image and perpendicular to the camera (Figure 2), a circle with a fixed diameter is used to calculate the distance from the camera. In figure 2, the camera reference system is built from the axes X_c, Y_c and Z_c and is located at the optical center of the camera; θ_hor and θ_ver are the field-of-view specifications of the camera; r is the radius of the landmark; x_T and y_T are the edges of the landmark in the image; res is the number of pixels. From the projection geometry, the distance between the camera and the center of the landmark is

d = 2r · f / ρ

where f = res / (2 tan(θ_hor / 2)) is the focal length in pixels and ρ is the apparent diameter of the landmark in pixels. Figure 3 shows the off-center viewing angle θ_hl of the camera. The angle θ_hl is determined from the horizontal pixel offset Δx of the landmark from the image center:

θ_hl = arctan(Δx / f)
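A sketch of the centered-landmark distance and off-center angle relations, assuming the pinhole model from the projection section (the apparent diameter of 55 px and the camera parameters are hypothetical):

```python
import math

def distance_to_landmark(diameter_m, diameter_px, res, fov_rad):
    """Distance to a circular landmark centered in the image and perpendicular
    to the optical axis, from its apparent diameter in pixels."""
    f = res / (2.0 * math.tan(fov_rad / 2.0))   # focal length in pixels
    return f * diameter_m / diameter_px

def offcenter_angle(x_px, res, fov_rad):
    """Viewing angle of a feature at horizontal pixel x_px, measured from
    the optical axis (the image center)."""
    f = res / (2.0 * math.tan(fov_rad / 2.0))
    return math.atan((x_px - res / 2.0) / f)

# 30 cm landmark appearing 55 px wide in a 640 px image with a 60 deg FOV
d = distance_to_landmark(0.30, 55.0, 640, math.radians(60))
theta_hl = offcenter_angle(320, 640, math.radians(60))   # centered: angle 0
```

The estimate degrades as the landmark tilts away from perpendicular, which is why the off-center case is treated separately below.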

Locating the camera
The distance between the camera and the center of the landmark, and the distance between the camera and the center of the image, are calculated as shown in Figure 4. Here the image center is labeled using the image reference frame with axes X_i, Y_i and Z_i; the position of the marker in the image reference frame is given by x_i, y_i, and the distance between the camera and the center of the image reference frame is denoted d.
To calculate the position of the camera in the earth reference system we use three frames of reference: the reference system image, the camera reference system and the earth reference system ( Figure 5).
The homogeneous transformation matrix between the image reference frame and the camera reference frame is

T_i^c = [ R_i^c  p_i^c ]
        [ 0      1     ]

where R_i^c is the rotation matrix and p_i^c is the position of the image-frame origin in the camera reference frame. Suppose that before the aircraft moves, the camera reference frame is placed at the origin of the earth reference frame and its axes coincide with the earth-frame axes. The position of the landmark before the aircraft moves is then obtained by applying this transformation, and the position of the camera during motion is determined by chaining the transformations between the three frames.
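A sketch of chaining homogeneous transforms across the three frames, under the stated assumption that the camera frame initially coincides with the earth frame (the offset `d` and landmark coordinates are hypothetical):

```python
import numpy as np

def homogeneous(R, p):
    """Build a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Image frame expressed in the camera frame: offset d along the optical axis
d = 3.0
T_cam_img = homogeneous(np.eye(3), [0.0, 0.0, d])

# Camera pose in the earth frame (initially at the earth origin, axes aligned)
T_earth_cam = homogeneous(np.eye(3), [0.0, 0.0, 0.0])

# Landmark position in the earth frame: chain the two transforms
landmark_img = np.array([0.1, -0.2, 0.0, 1.0])   # homogeneous coordinates
landmark_earth = T_earth_cam @ T_cam_img @ landmark_img
```

During flight, `T_earth_cam` is replaced by the current rotation and translation of the camera, and inverting the chain recovers the camera position from the known landmark.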

Inertial Measurement Unit with Kalman Filter
Two types of sensors are used in this work; the test instrument is built around the YG-80 sensor unit. The inertial navigation system is faster than the observation device but accumulates errors over time during motion. The observation device does not drift over time but is not fast enough on its own. A high-accuracy sensor is used as the compass, and a tilt sensor using the IMU accelerometers determines the Euler angles [3,4,10]. The camera is used as the observation equipment to assess the error of the inertial navigation system via the extended Kalman filter (EKF).
The observer based on the extended Kalman filter combines continuous-time and discrete-time filtering. The process model uses a continuous-time model of the helicopter dynamics to predict the state variables, while the measurement model runs at discrete time steps to correct the process-model estimates with the camera measurements. The EKF measurement vector includes the vertical and horizontal Cartesian coordinates (in pixels) of the target center in the camera image and the "square root" of the target area in pixels. By comparing the predicted measurement vector with the actual measurements from the image processor, the extended Kalman filter maintains estimates of the position and orientation of the helicopter.

Process Model
The EKF process model uses a helicopter dynamics model with 15 states: the orientation quaternion, the position and velocity components in the inertial reference frame, and the accelerometer and gyroscope biases. In the prediction phase of the EKF algorithm, two things happen. First, the state estimate is updated using state-vector derivatives obtained directly from the inertial measurements. Second, the covariance matrix P is updated by the Lyapunov equation. The process model can be represented by the following equations [4]:

dx/dt = f(x(t), t)
dP/dt = A P + P A^T + Q

where x is the predicted state, f(x(t), t) is the nonlinear helicopter model, P is the error covariance matrix, A is the matrix representing the linearized helicopter dynamics, and Q is a diagonal matrix representing the process noise present in the system. The values of Q are first set from a rough approximation derived from the assumed model, then adjusted based on data obtained from flight tests. Figure 6 shows the vectors used to describe the EKF measurement model.
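A minimal sketch of the prediction step above, using a toy two-state constant-velocity model in place of the 15-state helicopter dynamics (all numerical values are hypothetical):

```python
import numpy as np

def ekf_predict(x, P, f, A, Q, dt):
    """One Euler step of the EKF prediction:
       dx/dt = f(x),  dP/dt = A P + P A^T + Q."""
    x = x + f(x) * dt
    P = P + (A @ P + P @ A.T + Q) * dt
    return x, P

# Toy stand-in for the helicopter model: state = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
f = lambda x: A @ x                 # linear dynamics for this toy case
Q = np.diag([0.0, 0.01])            # diagonal process-noise matrix

x = np.array([0.0, 1.0])            # start at origin, 1 m/s
P = np.eye(2)
for _ in range(10):                 # propagate 1 s at dt = 0.1
    x, P = ekf_predict(x, P, f, A, Q, 0.1)
```

Between camera frames only this step runs, so P grows monotonically; the measurement update below is what shrinks it again.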

Measurement Model
The helicopter generates the expected value of the measurement vector using the known target position combined with the predicted position and orientation of the helicopter from the process dynamics model. As seen in Figure 6, the vector from the helicopter to the target in the camera reference frame is determined by taking the difference between the known target position and the predicted helicopter position, both represented as components in the inertial reference frame. The result is then converted from the inertial reference frame to the camera reference frame by the corresponding rotation. The earlier description of the image-plane relationship provides the foundation for the measurement model: the camera measurements u and v follow directly from equations (4) and (5). To find the square root of the target area in pixels, α, the actual target area is projected onto the image plane, the square root is taken, and the result is scaled to pixels:

α = f · sqrt(A) / ‖R_p‖

where R_p is the relative position vector of the target from the helicopter, expressed in the camera reference frame, n is the unit normal vector of the target, derived from the known target, A is the area of the target, and f is the focal length of the camera as previously described. In the state-update phase, the Kalman gain is computed. The gain weights the difference between the actual measurements and the values calculated by the measurement model (16), adjusting the process-model estimate by the update equations (17):

K = P C^T (C P C^T + V)^(-1)
x = x(-) + K (z − h(x(-)))
P = (I − K C) P(-)

where K is the Kalman gain, V is a diagonal matrix representing the measurement noise of the camera sensor, C is the Jacobian of the measurement vector with respect to the state vector, denoted (∂Z/∂X), and h(x(-)) is the measurement vector calculated by the measurement model of equation (16).
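A sketch of the update equations above on a toy two-state system that directly observes its first state (the camera measurement model is replaced by this trivial h; all values are hypothetical):

```python
import numpy as np

def ekf_update(x, P, z, h, C, V):
    """EKF measurement update: K = P C^T (C P C^T + V)^-1, then correct the
    state with the innovation z - h(x) and shrink the covariance."""
    S = C @ P @ C.T + V                      # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - h(x))                   # state correction
    P = (np.eye(len(x)) - K @ C) @ P         # covariance reduction
    return x, P

# Toy example: two states, the measurement observes only the first one
x = np.array([0.0, 0.0])
P = np.eye(2)
C = np.array([[1.0, 0.0]])                   # measurement Jacobian
V = np.array([[0.1]])                        # camera measurement noise
h = lambda x: C @ x                          # linear stand-in for the model
x, P = ekf_update(x, P, np.array([1.0]), h, C, V)
```

With P = I and V = 0.1, the gain pulls the first state to 1/1.1 of the measurement and leaves the unobserved state untouched, mirroring how the camera corrects only position and orientation.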
The minus sign in the above equations denotes the predicted value obtained from the process-model equations. The result of equations (17) is used by the process model in the next time step to further propagate the state vector and the system covariance matrix, and the process is repeated. The constant values of the measurement-noise covariance matrix V are preliminary estimates based on the physical characteristics of the camera, such as the focal length and the orientation of the CCD array. These parameters can be adjusted using data obtained from flight tests, but the approximate initial values are acceptable for demonstrating the feasibility of the filter.
As seen in equations (17), the EKF requires the Jacobian matrix C of the measurement vector with respect to the helicopter state in order to fuse the camera information with the inertial data. The camera measurements relate only to the position and orientation, so the partial derivatives with respect to the remaining state variables (velocity, accelerometer and gyro biases) are zero. The linearization of the measurement model with respect to the vehicle position in the inertial reference frame follows from equation (16), and the linearized measurement model with respect to the orientation quaternion is calculated similarly, where R_p relates the vector of the target location from the vehicle center, expressed in the camera reference frame. However, it should be noted that there is a significant time delay in transmitting visual information from the image-processing computer. The time used by the framegrabber to capture images and the computing time for the measurement vector are not small and must be accounted for. To reduce this effect, a simple delay-compensation scheme is used. In the EKF update step, the onboard computer uses aircraft state information stored in memory, shifted back by the estimated delay, to predict the measurement for the update. This prevents the EKF from comparing the predicted current state against a camera snapshot that was captured earlier. The system latency was estimated from flight-test data by finding the delay value that minimizes the least-squares error between the measurements predicted by the model from logged GPS state data and the measurements actually obtained from the camera.
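A sketch of the delay-compensation idea: keep a short history of timestamped states and, when a delayed camera measurement arrives, look up the state that was current at capture time (the class and all values are our own illustration, not the authors' code):

```python
from collections import deque

class StateHistory:
    """Ring buffer of past (timestamp, state) pairs. A delayed camera
    measurement is matched against the state stored at its capture time."""
    def __init__(self, maxlen=200):
        self.buf = deque(maxlen=maxlen)

    def push(self, t, x):
        self.buf.append((t, x))

    def state_at(self, t_capture):
        # Latest stored state not later than the capture time
        best = self.buf[0]
        for t, x in self.buf:
            if t <= t_capture:
                best = (t, x)
        return best

hist = StateHistory()
for k in range(5):
    hist.push(0.1 * k, [float(k)])   # states logged at t = 0.0 ... 0.4 s
t, x = hist.state_at(0.25)           # image captured at t = 0.25 s
```

The EKF innovation is then formed against `x` rather than the current state, after which the correction is propagated forward to the present.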

Model Test
The test model includes: 01 camera and 01 IMU, with the algorithms implemented on a dsPIC33F chip (Figure 7). To simulate, verify and develop the algorithm, the authors used the Matlab-Simulink environment to build offline simulation programs. The sensor data measured during the tests are written to a file, which is then fed into the simulation programs to calculate and compare against standard reference values to assess the reliability of the algorithm. Once the algorithm has been verified offline in software, it is deployed to the dsPIC33F processor chip, also from the Matlab-Simulink environment, to ensure real-time speed and to harden the product. To evaluate the system, the authors conducted experiments: in the laboratory, using the three-degrees-of-freedom 1573 Series "Automatic Positioning and Rate Table" turntable from Ideal Aerosmith (figure 8); and on moving vehicles (cars, planes), as combined tests assessing the accuracy and reliability of the device.

Test Results
With this system, Figure 9 shows the raw data produced by the IMU. These data are used to compare three cases: the desired trajectory, the trajectory when only the camera is used, and the trajectory when the IMU and camera are combined (Figure 10).

Conclusion
In this paper, an extended Kalman filter is used in combination with an inertial measurement unit and an observation camera to support inertial navigation in aircraft positioning. The proposed method uses a fixed landmark to calculate and predict the position of the aircraft. The algorithm combining the camera and the IMU through the extended Kalman filter is simulated in MATLAB/Simulink and implemented on a dsPIC33F256 processor chip to harden the product. The test results show that this method provides more accurate and faster information than using either the camera or the IMU alone. Our product has been tested for accuracy and reliability in simulation and in experiments under different conditions. However, the product still needs further experiments to enhance its versatility, flexibility and stability.