International Journal of Mechanical Engineering and Applications
Volume 3, Issue 3-1, June 2015, Pages: 49-56

Objectives of Meeting Movements - Application for Ship in Maneuvering

Nguyen Xuan Phuong1, Vu Ngoc Bich2

1Faculty of Navigation, Ho Chi Minh City University of Transport, Ho Chi Minh City, Vietnam

2Department of Science Technology – Research and Development, Ho Chi Minh City University of Transport, Ho Chi Minh City, Vietnam

Email address:

(N. X. Phuong)
(Vu N. B.)

To cite this article:

Nguyen Xuan Phuong, Vu Ngoc Bich. Objectives of Meeting Movements - Application for Ship in Maneuvering. International Journal of Mechanical Engineering and Applications. Special Issue: Transportation Engineering Technology — part II. Vol. 3, No. 3-1, 2015, pp. 49-56. doi: 10.11648/j.ijmea.s.2015030301.18


Abstract: This paper addresses the formulation of the problem of optimizing oncoming (meeting) traffic and describes the concept of a control system that implements ship navigation during maneuvers. In nautical practice, a ship encounters special situations such as avoiding collision, keeping the time of arrival at the pilot station, picking up a pilot, berthing on schedule, and sailing in confined waters. To address these situations, the authors present their research on the task of time-optimal interception and on the normal and degenerate problems; they also give remarks on globally optimal and locally optimal control. The result is then applied to ship control in maneuvering.

Keywords: Time-Optimal Interception, Normal and Degenerate Problem, Ship in Maneuvering


1. Introduction

In nautical practice, a ship encounters special situations such as avoiding collision, keeping the time of arrival at the pilot station, picking up a pilot, berthing on schedule, and sailing in confined waters. To address these situations, we formulate the problem of optimizing oncoming (meeting) traffic and describe the concept of a control system that implements ship navigation in maneuvers. Optimization problems in which the transition time from the initial state to a final region is to be minimized are called time-optimal problems. In this section we formulate the time-optimal control problem precisely and consider it on a particular physical example. Most of the section is devoted to a discussion of the problem from a geometric point of view. We show that the time-optimal problem essentially reduces to finding [1,4,11,12]:

1) The first instant at which the set of reachable states meets the target region S;

2) The control that achieves this.

2. The Task of Time-Optimal Interception

The vessel is considered as a dynamic system with state x(t), output y(t), and control u(t), defined by the equations [2, 7, 11, 12]:

dx(t)/dt = f[x(t), t] + B[x(t), t] u(t)                  (2.1)

y(t) = h[x(t), t]                  (2.2)

Let us assume that

(2.3)

and also that

(2.4)

Here f is an n-dimensional vector function, B[x, t] is a matrix function of size n × r, and h is an m-dimensional vector function. We assume that the components of the control vector u(t) are bounded in magnitude by the inequalities [11]

|uj(t)| ≤ 1,   j = 1, 2, …, r                  (2.5)

Let ŷ(t) be a vector with m components, which we will call the desired output, and let e(t) = ŷ(t) − y(t) be the error vector.

Let t0 be the initial time and x(t0) the initial state of the dynamic system.

It is required to find a control that:

1) satisfies the constraints (2.5);

2) drives the system so that at the final time

(2.6)

where E is some subset of R^m;

3) minimizes the transition time T − t0.

If the dynamic system [3, 7] described by (2.1) and (2.2) is completely observable, then to every output y(t) there corresponds a unique state x(t). Hence, the target region S in the state space can be defined by the relation:

(2.7)

We use Pontryagin's minimum principle [4, 13] to obtain a systematic approach to solving time-optimal problems. The results obtained in analytical form can then be used to compute numerical solutions. We consider time-optimal control with respect to a moving target region St. The system is given by

(2.8)

The smooth target set S is defined by the relations:

(2.9)

The components of the control u(t) are bounded in magnitude by:

(2.10)

The functional is defined as:

J(u) = T − t0                  (2.11)

where the final time T is free.

The task is to find a control u(t) that:

- satisfies the constraints (2.10);

- transfers the state x(t0) of system (2.8) into the region S;

- minimizes the functional J(u).

On the basis of the minimum principle [4, 13] one can assert that there exists an (optimal) costate vector p*(t) corresponding to the optimal control u*(t) and the optimal trajectory x*(t). The existence of p*(t) is a necessary condition; its components, together with those of x*(t), must satisfy the canonical equations:

(2.12)
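To make the formulation of this section concrete, the following Python sketch encodes the problem data in a minimal, hypothetical form. The drift f, the control-effectiveness matrix B, the output map h, the desired output, and the membership test for the moving target set are all illustrative stand-ins (a double integrator as a very simplified surge model), not the ship model of the paper.

import numpy as np

def f(x, t):
    # Drift term of (2.1): here a double integrator stands in for simplified surge dynamics.
    return np.array([x[1], 0.0])

def B(x, t):
    # Control-effectiveness matrix of size n x r; here n = 2, r = 1.
    return np.array([[0.0], [1.0]])

def h(x, t):
    # Output map y = h(x, t) of (2.2); here the output is the position component.
    return np.array([x[0]])

def y_desired(t):
    # Desired output: e.g. the (known) position of the point to intercept (assumed motion).
    return np.array([5.0 + 0.5 * t])

def in_target_set(x, t, tol=1e-2):
    # x belongs to the moving target region S(t) when the output error h(x, t) - y_desired(t)
    # lies in a small set E around the origin (here: a ball of radius tol).
    return np.linalg.norm(h(x, t) - y_desired(t)) <= tol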

3. Normal and Degenerate Problem

3.1. The Normal Problem

Suppose [1, 6, 15, 16] that the interval [t0, T*] contains a countable set of points t1j, t2j, t3j, …,

(3.1)

such that

qj*(t) ≠ 0 for all t ∈ [t0, T*] except t = t1j, t2j, t3j, …                  (3.2)

In this case, the time-optimal problem is called normal.

Fig. 3.1 shows a function qj*(t) and the corresponding control uj*(t). The function qj*(t) vanishes only at isolated instants of time, and therefore the time-optimal control uj*(t) is a piecewise-constant function with simple jumps. If all the functions qj*(t), j = 1, 2, …, r, have this property, the problem is normal. It is usually said that the control switches at t = tij; the number of switchings may be any finite number or countably infinite. The control shown in Fig. 3.1 switches four times; consequently, its number of switches is four.

Fig. 3.1. A function qj*(t) that gives a well-defined (piecewise-constant) control uj*(t).
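As a small illustrative check (not part of the original text; the particular switching function cos t, the sampling grid, and the helper name switching_instants are assumptions made only for this sketch), the following Python fragment samples a switching function qj(t), forms the relay control −sign qj(t), and counts its switching instants; a function with four isolated zeros gives a control with four switches, as in Fig. 3.1.

import numpy as np

def switching_instants(q, t_grid):
    # Instants (on the sampling grid) where the relay control -sign(q(t)) switches,
    # i.e. where q(t) changes sign between consecutive samples.
    s = np.sign(q(t_grid))
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    return t_grid[idx]

# A switching function with four isolated zeros on [0, 12]:
t_grid = np.linspace(0.0, 12.0, 2401)
q = lambda t: np.cos(t)                      # zeros at pi/2, 3pi/2, 5pi/2, 7pi/2
print(len(switching_instants(q, t_grid)))    # -> 4 switches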

3.2. Degenerate Problem

Assume [1, 6, 15, 16] that the interval [t0, T*] contains one (or more) subintervals [T1, T2]j such that

qj*(t) = 0 for all t ∈ [T1, T2]j                  (3.3)

Such a problem is called degenerate, and the interval [T1, T2]j (or intervals) is called an interval of degeneracy.

The function qj*(t) shown in Fig. 3.2 is equal to zero for all t in [T1, T2] and therefore corresponds to a degenerate problem. Thus, in the degenerate case there is at least one subinterval of time on which the relation uj*(t) = −sign qj*(t) does not determine the optimal control as a function of x*(t) and p*(t).

Fig. 3.2. A function qj*(t) corresponding to a degenerate optimal control problem.

The last statement does not mean that the optimal control does not exist or cannot be determined. It only means that the necessary condition does not give a definite relation between x*(t), p*(t), u*(t), and t on that interval. Degenerate problems are typical for ships solving meeting (interception) problems.

We now consider the normal time-optimal problem. In this case u*(t) can be eliminated from all the necessary conditions, so the conditions imposed on u*(t) in Step 1 reduce to necessary conditions that no longer involve u*(t). As we will see in Step 3, this fact allows us to find the time-optimal control.

We state two theorems that summarize these ideas.

Theorem 1 (Relay Principle) [1, 11, 12]. Let u*(t) be a time-optimal control for the problem, and let x*(t) and p*(t) be the corresponding state trajectory and costate vector. If the problem is normal, the components u1*(t), u2*(t), …, ur*(t) of the control u*(t) are determined by the relations:

uj*(t) = −sign{qj*(t)},   qj*(t) = [B'[x*(t), t] p*(t)]j,   j = 1, 2, …, r                  (3.4)

for t ∈ [t0, T*]. Equation (3.4) can be written more compactly:

u*(t) = −SIGN{B'[x*(t), t] p*(t)}                  (3.5)

Thus, if the problem is normal, the components of the time-optimal control are piecewise-constant (relay) functions of time. The following theorem can be proved by direct substitution.
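A minimal Python sketch of the relay principle (3.4)-(3.5); the function name and arguments are illustrative, and B is assumed to be supplied as a function of state and time, as in (2.1):

import numpy as np

def relay_control(x, p, t, B):
    # Relay (bang-bang) control of Theorem 1, assuming the problem is normal:
    # q* = B'[x, t] p and u_j* = -sign(q_j*), taken component-wise.
    # (At the isolated instants where q_j* = 0 the sign is left at 0 here.)
    q = B(x, t).T @ p
    return -np.sign(q)

For the double-integrator stand-in sketched after Section 2, this returns −sign(p2(t)) as the single control component.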

Theorem 2 (Necessary Conditions) [1, 11, 12]. Let u*(t) be a time-optimal control for the problem, x*(t) the corresponding time-optimal state trajectory, and p*(t) the corresponding costate vector. Let T* be the minimum time. If the problem is normal, then necessarily:

A) Condition (3.2) is satisfied (i.e. the problem is not degenerate);

B) The state x*(t) and the costate vector p*(t) satisfy the reduced canonical equations:

(3.6)

(3.7)

for k = 1, 2, …, n and t ∈ [t0, T*];

C) The Hamiltonian along the optimal trajectory is determined by the equation

(3.8)

D) At the final time T* the following relation holds:

(3.9)

E) At the initial time

(3.10)

and at the final time T*

(3.11)

(3.12)

We now give a geometric interpretation of Theorem 2.

Assume that n = 3 and r = 2. As shown in Fig. 3.3, the matrix B'[x*(t), t] of size 2 × 3 defines a transformation mapping the 3-dimensional vector p*(t) to the 2-dimensional vector q*(t) = B'[x*(t), t] p*(t).

Fig. 3.3. Geometric interpretation of the fact that the control u*(t) must minimize the scalar product [u*(t), q*(t)].

In order to minimize the scalar product [u*(t), q*(t)], the control vector u*(t) must have maximum magnitude and point opposite to the vector q*(t). Thus, if q*(t) lies in the first quadrant, u*(t) must "rest" on corner A of the constraint square; if q*(t) lies in the second quadrant, u*(t) must point to corner B, and so on.
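The corner argument can be verified by brute force. In the Python sketch below (an illustration only; the particular value of q* and the helper name are assumptions), the scalar product [u, q*] is evaluated at the four corners of the constraint square, and the minimizing corner turns out to be the one opposite to q*:

import numpy as np
from itertools import product

def corner_minimizer(q):
    # For r = 2: among the corners of the constraint square |u1| <= 1, |u2| <= 1,
    # the scalar product [u, q] is minimized by the corner opposite to q, i.e. u = -sign(q).
    corners = [np.array(c) for c in product((-1.0, 1.0), repeat=2)]
    return min(corners, key=lambda u: float(u @ q))

q = np.array([0.7, 0.2])        # q* in the first quadrant ...
print(corner_minimizer(q))      # ... -> [-1. -1.], the opposite corner (corner A)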

The necessary conditions lead to a systematic method for finding the optimal control, which is discussed in detail in the steps below. The relations stated above and the associated optimal values are only necessary conditions: if a control u(t) and the corresponding trajectory fail to satisfy any one of them, it follows that u(t) is not an optimal control.

The steps below establish the relations that must be satisfied by the optimal control u*(t), the states x*(t), the corresponding costate p*(t), and the minimum time T*. The essence of the task is to find the optimal control, and so the question arises: how can all of these theorems be used to find it? The answer is given below. In addition, each step of the argument is given a title, which makes it possible to trace the logical connection between the steps.

Step 1. Formation of the Hamiltonian [6, 16]. We form the Hamiltonian H[x(t), p(t), u(t), t] for the system (2.8) and the functional (2.11).

Using these expressions, the Hamiltonian can be written as

H[x(t), p(t), u(t), t] = 1 + p'(t) f[x(t), t] + p'(t) B[x(t), t] u(t)                  (3.13)

which emphasizes that x(t), p(t), and u(t) are vector functions of time. At this point we do not impose any restrictions on the values of x(t), p(t), u(t), or on t.

Step 2. Minimizing the Hamiltonian [6, 16]. The Hamiltonian H[x(t), p(t), u(t), t] depends on the variables x(t), p(t), u(t), and t. Let us fix x(t), p(t), and t and consider the behavior of the Hamiltonian (which is now a function of u(t) only) as u(t) varies within the constraint set Ω. In particular, we want to find the control at which the Hamiltonian attains its absolute minimum. Therefore, we define the H-minimal control as follows.

Definition 1 (H-minimal control) [16]. An admissible control u0(t) is called H-minimal if it satisfies

H[x(t), p(t), u0(t), t] ≤ H[x(t), p(t), u(t), t]                  (3.14)

for all u(t) ∈ Ω and all x(t), p(t), and t.

Previously it was found that the H-minimal control u0(t) for a Hamiltonian of the form (3.13) is given by the equation:

uj0(t) = −sign{qj(t)} = −sign{[B'[x(t), t] p(t)]j},   j = 1, 2, …, r                  (3.15)

or in vector form,

u0(t) = −SIGN{B'[x(t), t] p(t)}                  (3.16)

Substituting the H-minimal control u0(t) into expression (3.13), we obtain:

(3.17)

Consequently,

(3.18)

The right-hand side of (3.18) is a function only of x(t) and p(t). We define the function H0[x(t), p(t), t] by the relation

(3.19)

These definitions and equations are not yet explicitly linked to optimal trajectories or optimal values.
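The following Python sketch summarizes Steps 1-2 under the assumption that the Hamiltonian has the form (3.13) with the time-optimal functional (2.11); the helper names are illustrative:

import numpy as np

def h_minimal_control(x, p, t, B):
    # H-minimal control for a Hamiltonian of the form (3.13) with |u_j| <= 1:
    # the minimum over the box is attained component-wise at u0 = -sign(B'[x, t] p).
    return -np.sign(B(x, t).T @ p)

def H0(x, p, t, f, B):
    # H evaluated at the H-minimal control.  The control term contributes minus the
    # l1-norm of B'p; the leading 1 is the integrand of the assumed time-optimal
    # functional J = T - t0.
    q = B(x, t).T @ p
    return 1.0 + float(p @ f(x, t)) - float(np.sum(np.abs(q)))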

Step 3. Constraints on x(t) and p(t). We require that the (as yet undetermined) vectors x(t) and p(t) satisfy the differential equations [4]:

(3.20)

(3.21)

or, equivalently, differential equations

(3.22)

(3.23)

for k = 1, 2, …, n.

Note that

(3.24)

and

(3.25)
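As an illustration of how the coupled equations (3.22) and (3.23) can be simulated, the Python sketch below assumes, purely for brevity, linear time-invariant dynamics dx/dt = A x + B u with constant matrices A and B (so that the costate equation reduces to dp/dt = −A'p) and integrates both equations with a simple forward-Euler scheme; the general case treated in this paper is not restricted in this way:

import numpy as np

def integrate_canonical(A, B, x0, p0, t0, T, n_steps=2000):
    # Forward-Euler sketch of the coupled equations of Step 3 for the assumed
    # linear time-invariant case x_dot = A x + B u.  Then q = B' p, the
    # H-minimal control is u0 = -sign(q), and p_dot = -A' p.
    dt = (T - t0) / n_steps
    x = np.array(x0, dtype=float)
    p = np.array(p0, dtype=float)
    ts = np.linspace(t0, T, n_steps + 1)
    xs, ps = [x.copy()], [p.copy()]
    for _ in range(n_steps):
        u = -np.sign(B.T @ p)            # relay / H-minimal control
        x = x + dt * (A @ x + B @ u)     # state equation, cf. (3.22)
        p = p + dt * (-A.T @ p)          # costate equation, cf. (3.23)
        xs.append(x.copy())
        ps.append(p.copy())
    return ts, np.array(xs), np.array(ps)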

Step 4. Simulation. The purpose of this step is to find the time-optimal control u*(t) that transfers the system from a given initial state x(t0) to S. We assume that the problem is normal. We model equations (3.22) and (3.23) on a computer. At the initial time t0 we take the known initial values of the phase coordinates x(t0) as the initial conditions for system (3.22). As initial values of the functions p1(t0), p2(t0), …, pn(t0) we use some guessed (expected) values [6, 16].

Let qj(t), j = 1, 2, …, r, be the functions defined by the relations

(3.26)

Assume that

(3.27)

Equations (3.27), (3.26), and (3.15) imply that each component uj0(t) is equal to 1 or −1. Thus, the solution of equations (3.22) and (3.23) is determined, at least for t close to t0. We denote the solutions of equations (3.22) and (3.23) by

(3.28)

to emphasize their dependence on the known initial state x(t0) and on the guessed initial value p(t0).

The simulation proceeds as follows. Starting from the signals x(t0) and p(t0), at each instant of time we obtain and record the signals:

(3.29)

(3.30)

(3.31)

(3.32)

(3.33)

(3.34)

Using a concrete (for instance, randomly selected) value of p(t0), we ask ourselves the following questions sequentially for each time t in some interval [t0, T]:

Question 1. If qj(t) = 0, is the derivative q′j(t) ≠ 0? If q′j(t) = 0 as well, is q″j(t) ≠ 0? (And so on.) In other words, does each function qj(t) vanish only at isolated instants? If the answer is positive (i.e. "Yes"), we ask the second question. If the answer is negative (i.e. "No"), we change the value of p(t0) and start again with the first question.

Question 2. If the answer to the first question is "Yes", is there a time T for which the following is satisfied:

(3.35)

If the answer to the second question is "No", we change p(t0) and start all over again. If the answer is "Yes", then we ask the third question.

Question 3. If the answer to the second question is "Yes", are there constants e1, e2, …, en−β such that the following relation holds:

(3.36)

If the answer is "No", then we must change p(t0) and start all over again. If the answer is "Yes", then go to question 4.

Question 4. If the answer to the third question is "Yes", are there constants k1, k2, …, kn−β such that the following relation holds:

(3.37)

If the answer is "No", we must change p(t0) and start all over again. If the answer is "Yes", it means that we have found a p(t0) for which the answers to all of Questions 1-4 are positive. In this case we record the accepted p(t0) and begin the experiment anew, until we have found all the vectors p(t0) for which the answers to Questions 1-4 are positive. The logical sequence of questions is shown in Fig. 3.4.
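A highly simplified Python sketch of this search is given below; it reuses the hypothetical integrate_canonical helper from Step 3 and collapses Questions 1-4 into a single reachability test, which is an illustrative shortcut rather than the full procedure described above:

import numpy as np

def search_initial_costates(A, B, x0, t0, t_max, reaches_target,
                            n_trials=200, seed=0):
    # Guess an initial costate p(t0), simulate the canonical system with the relay
    # control, and keep the guesses whose trajectory reaches the target set within
    # [t0, t_max].  Questions 1-4 are collapsed here into `reaches_target`.
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_trials):
        p0 = rng.standard_normal(len(x0))
        ts, xs, _ = integrate_canonical(A, B, x0, p0, t0, t_max)
        hit = next((t for t, x in zip(ts, xs) if reaches_target(x, t)), None)
        if hit is not None:
            accepted.append((p0, hit))   # candidate initial costate and its arrival time
    return accepted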

Step 5. Possible time-optimal controls. We now formalize the results of the modeling carried out in Step 4. We have identified the set of initial values p(t0) that correspond to the given x(t0) and have the property that the answers to all of Questions 1-4 are positive (i.e. "Yes"). This set is clearly a subset of the n-dimensional space Rn. It can be viewed as the "output" of the logical process shown in Fig. 3.4; more precisely, it is defined as follows [1, 4, 16].

Fig. 3.4. Logic diagram of the modeling procedure that can be used for finding the optimal control.

Definition 2. The set of admissible initial costate values is the set of initial values p(t0), corresponding to the given x(t0), with the following properties [4, 11, 16]:

1) For each such p(t0), the corresponding solutions of (3.22) and (3.23), denoted by

(3.38)

satisfy the relation

(3.39)

only on a countable set of points t;

2) There exists a time T (depending on x(t0) and p(t0)) such that constants e1, e2, …, en−β and k1, k2, …, kn−β can be found for which the following relationships hold:

(3.40)

(3.41)

(3.42)

One can return to Theorem 2 and compare relation (3.40) with (3.9), (3.41) with (3.11), and (3.42) with (3.12). Since the functions qj(t) vanish only on a countable set of points t, and since equations (3.22) and (3.23) are similar to (3.6) and (3.7), we obtain the following lemma.

Lemma 1. Each pair of solutions x(t) and p(t) generated by an element of the set of admissible initial costate values satisfies all the necessary conditions of the simplified Theorem 2 [6, 16].

We have shown that the H-minimal control u0(t) (see Definition 1) is given by relation (3.16) for any x(t), p(t), and t. For the solutions generated above we therefore find

(3.43)

Comparing expression (3.43) with (3.16) and taking into account Lemma 1, we obtain the following lemma.

Lemma 2. Each control generated by an element of the set of admissible initial costate values satisfies the necessary conditions of Theorem 1 (the relay principle). Note that [6, 16]

(3.44)

for all t.

We now clarify the meaning of Lemmas 1 and 2 and the usefulness of the necessary conditions for finding the time-optimal control.

To be specific, let us assume that there are three different time-optimal controls transferring the system from the given initial state x(t0) to S. All three controls, by definition, require the same minimum time T*. We denote these (time-optimal) controls by

(3.45)

Carrying out the modeling of Step 4, we determine the set of admissible initial costate values. Suppose that, using expression (3.43), we can find five different controls corresponding to its elements. We denote these controls by

(3.46)

and the corresponding time intervals on which they are defined by

(3.47)

respectively. It can be asserted that three of the five controls (3.46) will be identical to the three time-optimal controls [see (3.45)]. For definiteness, we assume:

(3.48)

The question arises: what is the significance of the two remaining controls? These two controls must be locally optimal. Since the minimum principle provides only local conditions, it cannot distinguish locally optimal controls from globally optimal ones. The only way to determine which of the controls are globally optimal is to compute and compare their times and thus establish that

(3.49)

For this reason, we emphasize that the necessary conditions yield only controls that may be optimal. In the next section we discuss the results obtained above.

In the previous sections, necessary conditions for optimal control were obtained and a systematic method was developed for determining the idealized candidate controls, one of which may be the time-optimal one; it was also established (Theorem 1) that if the problem is normal, then the components of the time-optimal control are piecewise-constant functions of time [1, 4, 11, 16].

Since, for a normal problem, the components of the time-optimal control must be piecewise-constant functions of time, one of the necessary conditions, namely the relay relation (3.5), allows us to restrict the search for the optimal control to the class of piecewise-constant (relay) controls. This is perhaps the most useful result obtained from the minimum principle, while the remaining necessary conditions supply the appropriate boundary and transversality conditions.

It should be noted that the Hamiltonian [6, 16]

(3.50)

and differential equations

(3.51)

are fully determined by the system and the functional, and are thus independent of the boundary conditions and of the region S. In addition, the H-minimal control u0(t) (see Definition 1), defined by the equation [6]

(3.52)

is likewise independent of the imposed boundary conditions. Thus, Steps 1-3 are exactly the same for any time-optimal problem. The necessary conditions on the Hamiltonian and on the costate variable at the final time T*, together with the given initial state and the equations defining the region S, provide enough boundary conditions for solving the system of 2n differential equations.

We have shown a step-by-step process used to determine controls whose resulting trajectories and corresponding costate variables satisfy all the necessary conditions. In order to single out these quantities, we introduce the following definition.

Definition 3 (Extremal quantities). A control is called extremal if it, the corresponding trajectory, and the corresponding costate variable satisfy all the conditions [i.e. equations (3.38) and (3.40)-(3.44)]. The corresponding state trajectory and costate variable are then also called extremal [1, 9, 13].

4. Remarks

In general, there can be many extremal controls. Each extremal control gives a trajectory that may be optimal either locally or globally. Since an extremal control satisfies all the necessary conditions, we can note the following [1, 4, 14].

Remark 1. If the optimal control u*(t) exists and is unique, and there are no other locally optimal controls, then there is only one extremal control, and it is the time-optimal one.

It is clear that the assumption in Remark 1 of the absence of other locally optimal controls makes the minimum principle a necessary and sufficient condition.

Remark 2. If there are m1 different globally optimal controls and m2 controls that are locally optimal but not globally optimal, then there will be m1 + m2 extremal controls in all.

Remark 3. If a globally optimal control does not exist and there are m2 different locally optimal controls, then there are m2 extremal controls.

Therefore, the existence of extremal controls does not imply the existence of a globally optimal control.

Remark 4. If the optimal control exists, it can be found by calculating the time T required by each of the extremal controls and selecting the control that minimizes T.
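A tiny end-to-end illustration of Remark 4 in Python, reusing the hypothetical helpers sketched in Steps 3 and 4 (the double-integrator stand-in, the intercept point x1 = 5, and the tolerance 0.05 are assumptions made only for this example):

import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
x0 = np.array([0.0, 0.0])
reaches = lambda x, t: abs(x[0] - 5.0) <= 0.05   # illustrative fixed intercept point

candidates = search_initial_costates(A, B, x0, t0=0.0, t_max=10.0,
                                     reaches_target=reaches)
if candidates:
    # Remark 4: among the candidate extremal controls, the time-optimal one is
    # the candidate with the smallest arrival time.
    best_p0, T_star = min(candidates, key=lambda c: c[1])
    print("estimated minimum time:", T_star)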

These remarks lead to the conclusion that, when dealing with the time-optimal control problem, we need to know the answers to the following questions:

1) Does a time-optimal control exist?

2) Is the optimal control unique?

3) Is the problem normal?

4) Do the necessary conditions provide additional information for the given system and region S?

Unfortunately, for arbitrary nonlinear systems and regions S, answers to these questions have not yet been obtained. There are, however, a number of results for the class of linear systems. Since this class of systems is extremely important, it deserves separate treatment in order to obtain additional results that are significant from both theoretical and practical points of view.

5. Conclusion

Accordingly, this research has addressed the formulation of the problem of optimizing oncoming (meeting) traffic and has described the concept of a control system that implements ship navigation in maneuvers. In summary, we can conclude the following [5,10,14]:

- The statement of the meeting-of-movements control problem has been substantiated, and a geometric interpretation of the problem of finding time-optimal ship control has been proposed in the form of regions moving in the state space over time.

- The possibilities of the minimum principle for finding optimal controls have been considered, and ways of obtaining numerical solutions have been proposed.

- The reasons for the occurrence of normal and degenerate controls in meeting-of-movements ship control problems have been established [8].


References

  1. Athans M., Falb P. Optimal Control. Moscow: Engineering, 1968, 765 p.
  2. Basin A. M., Moskvin G. I. Coastal Vessel Traffic Control Systems. Moscow: Transport, 1986, 160 p.
  3. Blekhman I. I. Synchronization of Dynamic Systems. Moscow: Science, 1971, 494 p.
  4. Ross I. M. A Primer on Pontryagin's Principle in Optimal Control. Collegiate Publishers, 2009.
  5. Clarke D. The foundations of steering and manoeuvring. Proceedings of the IFAC Conference on Manoeuvring and Control of Marine Craft, IFAC, Girona, Spain, 2003.
  6. Loparev V. K., Markov A. V., Maslov Y. et al. Applied mathematics in engineering and economic calculations: Collection of scientific papers. St. Petersburg, 2001, pp. 58-61.
  7. Kulibanov Y. M. Dynamic model in inverse problems of traffic control. In: Managing Transport Systems: Collection of Scientific Papers. St. Petersburg: SPGUVK, 1995, pp. 90-97.
  8. Inose H., Hamar T. Traffic Control. Moscow: Transport, 1983, 248 p.
  9. Levine W. S. (ed.). The Control Handbook. New York: CRC Press, 1996. ISBN 978-0-8493-8570-4.
  10. Zemlyanovsky D. K. Calculation of maneuvering elements for preventing collisions. Proceedings of the Novosibirsk Institute of Water Transport Engineers, 1960, 46 p.
  11. Croft E. A., Fenton R. G., Benhabib B. Time-optimal interception of objects moving along predictable paths. Proceedings of the IEEE International Symposium on Assembly and Task Planning, 1995, pp. 419-425. ISBN 0-8186-6995-0.
  12. Ik Sang Shin, Sang-Hyun Nam, Roberts R. G., Moon S. B. Minimum-time algorithm for intercepting an object on the conveyor belt by robot. Proceedings of the International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2007), 2007, pp. 362-367.
  13. Sethi S. P., Thompson G. L. Optimal Control Theory: Applications to Management Science and Economics, 2nd ed. Springer, 2000. ISBN 0-387-28092-8.
  14. Geering H. P. Optimal Control with Engineering Applications. Springer, 2007. ISBN 978-3-540-69437-3.
  15. Johnstone P. Notes on Logic and Set Theory. Cambridge University Press, 1987. ISBN 978-0-521-33692-5.
  16. Arfken G. Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, 1985.

Biography

Nguyen Xuan Phuong (born 1967, Hanoi); Marine Master; PhD in Systems Analysis, Control and Information Processing (2011, Russia). He is currently a lecturer in the Faculty of Navigation, Ho Chi Minh City University of Transport (Vietnam). His research interests lie in general linear/nonlinear control theory for maneuvering systems, with applications to guidance, navigation, and control of ocean vehicles.

 

Vu Ngoc Bich (born 1961, Haiphong); PhD in Automated Design Systems (2007, Russia); Associate Professor (2013). He is Director of the Science Technology – Research and Development Department at Ho Chi Minh City University of Transport (HCMUTRANs). His main research areas are ship design and construction, R&D, and education. He is a former Dean of Naval Architecture and Offshore Engineering at HCMUTRANs. He has authored 6 books and 20 scientific papers and presentations at national conferences.
