Application of Dynamic Programming in Agriculture, Economics and Computer Science

Abstract: In this paper we study the dynamic programming problem and introduce its major areas of application. Dynamic programming provides a means for determining optimal long-term crop management plans. However, most applications are analyzed on annual time steps with fixed strategies within the year, effectively ignoring conditional responses during the year. We suggest an alternative approach that captures the strategic responses within a cropping season to random weather variables as they unfold, reflecting farmers' ability to adapt to weather realizations. Multistage decision problems of this kind are numerically challenging. Using analytical results, dynamic programming is able to solve the optimal agricultural production problem, deciding how much to consume and how much to save and store in each period economically. In this study, however, the problem is considered deterministic, with all input parameters constant. The objective is to find a sequence of actions (a so-called policy) that minimizes the total cost over the decision-making horizon. The purpose of this paper is to introduce applications of dynamic programming techniques by way of example. The resulting model formulations demonstrate the applicability of dynamic programming to long-horizon problems.


Introduction
The term dynamic programming was originally used in the 1940s by Richard Ernest Bellman to describe the process of solving problems where one needs to find the best decisions one after another [1,12]. Dynamic programming is used, for instance, to determine an optimal path from a number of alternative paths in order to move from a given initial state to a desired final state.
Dynamic programming is used to solve multistage optimization problems, in which "dynamic" refers to time and "programming" means planning or tabulation [15]. It refers to a computational method involving recurrence relations. The technique was developed by Richard Bellman in the early 1950s. It arose from studying programming problems in which changes over time were important, hence the name "dynamic programming". However, the technique can also be applied when time is not a relevant factor in the problem. The idea is to divide the problem into "stages" in order to perform the optimization recursively. It is also possible to incorporate stochastic elements into the recursion [5]. In this paper, we describe applications of dynamic programming in computer science, economics, agriculture, and related fields; the general approach for solving a problem using dynamic programming is also presented. The aim of [7,8,10] is to understand and contribute to cutting-edge research on optimization problems, such as the classical optimal stopping problem and optimal resource allocation within the framework of approximate dynamic programming, as well as complex decision-making problems under uncertainty such as reliability analysis, resource allocation, biological sequence manipulation, and risk management. Dynamic programming is such a powerful device that it has encouraged tremendous growth in research on sequential decision problems, and research related to dynamic programming has led to fundamental advances in theory, numerical methods, and econometrics [3]. Agriculture is mostly dominated by smallholder farmers. One of their main problems is how to utilize their products most effectively so that they can gain more income. Although these farmers generally have good skill in planting, they face essential decision-making problems about what and when to plant.
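As a minimal illustration of this stage-wise recursion (an example of ours, not from the paper), consider computing Fibonacci numbers: a naive recursion repeats the same subproblems exponentially often, while dynamic programming stores each stage's result once and reuses it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """nth Fibonacci number via memoized recursion (top-down dynamic programming)."""
    if n < 2:
        return n
    # Each subproblem is solved once, cached, and reused at later stages.
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

The same memoization pattern underlies the knapsack and shortest-path examples later in the paper: the recursion is written over stages and states, and each state's optimal value is computed only once.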
One of the main factors affecting these problems is the fluctuation of product prices and planting costs throughout the year. A preliminary study reveals that this fluctuation is seasonal. At present, some farmers choose which plants to grow based on the current market price, or on what they traditionally plant. The obvious flaw of these plans is that the prices after the products are harvested are generally not as good as expected, leading to loss of capital or profit [11]. Let us now define certain terms that appear frequently in this paper.
Stage: A stage signifies a portion of the total problem for which a decision can be taken. At each stage there are a number of alternatives, and the best of these is called the stage decision, which may be optimal for that stage while also contributing to the overall optimal decision policy.
State: The condition of the decision process at a stage is called its state. The variables that specify the condition of the decision process, i.e. describe the status of the system at a particular stage, are called state variables. The number of state variables should be as small as possible, since the larger the number of state variables, the more complicated the decision process becomes.
Policy: A rule that determines the decision at each stage is known as a policy. A policy is optimal if the decision made at each stage is such that the result is optimal over all stages, and not only for the current stage.
Decision: At every stage there can be multiple possible decisions, of which the best one should be taken. The decision taken at each stage should be optimal; this is called the stage decision.
Principle of Optimality: Bellman's principle of optimality states that "An optimal policy (a sequence of decisions) has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision." The properties of solutions of functional equations arising in the dynamic programming of multistage decision processes were introduced and studied in [10]. Dynamic programming is also used to determine a solution to the problem of pricing contingent claims (options) in a financial market. In this situation, there is a price range for the actual market price of the contingent claim [4].
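The principle of optimality can be summarized by the Bellman equation. A generic form for a minimization problem (the notation here is ours, not from the paper: V is the optimal value function, c(s, a) the stage cost of action a in state s, T(s, a) the resulting next state, and A(s) the set of feasible actions) is:

```latex
V(s) \;=\; \min_{a \in A(s)} \Bigl[\, c(s, a) \;+\; V\bigl(T(s, a)\bigr) \,\Bigr]
```

Solving this recursion backward from the final stage yields both the optimal cost and an optimal policy, which is exactly the computational pattern used in the examples below.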
The main result of that work is the determination that the maximum price is the smallest price that allows the seller to hedge completely with a controlled portfolio of the basic securities. A similar result is obtained for the minimum price (which corresponds to the purchase price). In order to "solve" a dynamic program, we seek a state-dependent rule for choosing which action to take (a policy) that minimizes the expected total cost incurred.

Application of Dynamic Programming Problem
There are many areas in which the optimal solution of a problem can be found using dynamic programming, including bioinformatics, control theory, information theory, operations research, agriculture, economics, and many areas of computer science such as artificial intelligence and graphics [7,13].
The versatility of the dynamic programming method is best appreciated through exposure to a wide variety of applications [7,8].

Application of Dynamic Programming Problem in Agriculture
In dynamic programming, the whole time period over which the decisions are made (the planning horizon) is divided into a discrete, finite number of time periods in which decisions are made [16]. Recently, dynamic programming and optimal control techniques have been applied in several studies concerning agro-ecosystem modeling [14].
Farmers are faced with many decision problems in crop and livestock production that are multistage and stochastic. There have been many applications of dynamic programming to such decision problems, many primarily for illustrative purposes. It is argued that the advent of farmer access to computers will lead to on-farm use of dynamic programming. Applications of dynamic programming to forestry, fisheries, and agricultural policy have also been reviewed.
Example: A farmer can carry three kinds of items: tomatoes, potatoes, and oranges, with values 1, 3, and 6 respectively. The farmer assigns the items weights 3, 4, and 5 respectively, and the total carrying capacity is 8. How many of each item should the farmer take to maximize the total value?
Solution: We use the knapsack technique to solve this problem. The mathematical representation is: maximize 1x1 + 3x2 + 6x3 subject to 3x1 + 4x2 + 5x3 ≤ W, where W = 8 is the total weight capacity and xi ≥ 0 is the number of units of item i taken.
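A sketch of the unbounded-knapsack recursion for this example follows (the code is our own illustration; the paper itself gives no implementation). Since any number of each item may be taken, the recursion runs over remaining capacity: best[c] is the maximum value achievable with weight budget c.

```python
def unbounded_knapsack(values, weights, capacity):
    """Unbounded knapsack via DP over capacities: best[c] = max value with budget c."""
    best = [0] * (capacity + 1)
    choice = [None] * (capacity + 1)   # which item was added last, for traceback
    for c in range(1, capacity + 1):
        for i, (v, w) in enumerate(zip(values, weights)):
            if w <= c and best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
                choice[c] = i
    # Trace back how many units of each item the optimum uses.
    counts = [0] * len(values)
    c = capacity
    while c > 0 and choice[c] is not None:
        counts[choice[c]] += 1
        c -= weights[choice[c]]
    return best[capacity], counts

# Tomato, potato, orange: values 1, 3, 6; weights 3, 4, 5; capacity W = 8.
value, counts = unbounded_knapsack([1, 3, 6], [3, 4, 5], 8)
print(value, counts)  # 7 [1, 0, 1]
```

For this data the optimum is one tomato (weight 3, value 1) plus one orange (weight 5, value 6), using the full capacity of 8 for a total value of 7.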

Application of Dynamic Programming in Economics
Dynamic programming is applicable in economics because it can decide how much to consume and how much to save and store in each period. Example: A company's labor force wants to produce three products: bread, injera, and a table. The daily carrying (holding) cost of the manufacturing process, in birr, is 1, 1, and 2 respectively. The ordering (set-up) cost is 5, 7, and 9 respectively. The demand for the products is 5, 2, and 5 units respectively. The unit production cost is 2 birr each for the first 5 units and 4 birr each for additional units. The initial inventory at item 1 is 1 unit. Find the optimal solution of this problem.
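The example can be read as a three-stage production-scheduling problem, with one stage per item in the order given. A minimal backward-recursion sketch under that reading follows; the modeling assumptions here are ours, not the paper's: holding cost is charged on end-of-stage inventory, all demand must be met on time, and final inventory must be zero.

```python
from functools import lru_cache

demand  = [5, 2, 5]   # units required at each stage
setup   = [5, 7, 9]   # ordering (set-up) cost, paid only if anything is produced
holding = [1, 1, 2]   # carrying cost per unit of end-of-stage inventory

def prod_cost(x):
    """2 birr/unit for the first 5 units, 4 birr/unit for each additional unit."""
    return 2 * x if x <= 5 else 2 * 5 + 4 * (x - 5)

@lru_cache(maxsize=None)
def f(stage, inv):
    """Minimum cost from `stage` onward, entering with `inv` units on hand."""
    if stage == len(demand):
        return 0 if inv == 0 else float("inf")   # require zero final inventory
    remaining = sum(demand[stage:])
    best = float("inf")
    for x in range(remaining - inv + 1):          # candidate production amounts
        end_inv = inv + x - demand[stage]
        if end_inv < 0:
            continue                              # demand must be satisfied
        cost = ((setup[stage] if x > 0 else 0) + prod_cost(x)
                + holding[stage] * end_inv + f(stage + 1, end_inv))
        best = min(best, cost)
    return best

print(f(0, 1))  # minimum total cost starting with 1 unit of initial inventory -> 40
```

Under these assumptions the optimum produces 6 units at stage 1 (covering stages 1 and 2) and 5 units at stage 3, skipping the middle set-up; the cheap first-5-units production rate makes it attractive not to overload any single stage.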

Application of Dynamic Programming in Computer Science
Example: Suppose there are ten computers in one classroom and one power control unit. The computers are connected by cables of different lengths, so that each computer is connected directly or indirectly to every other. In the accompanying network diagram (not reproduced here), P denotes the power control and A through J denote computers 1 through 10; a preceding figure similarly labels computers C1 through C5.
Solution: We use the shortest-path technique and analyze the nodes stage by stage, starting from node P. The distances from node P to its three neighbouring nodes are 3, 5, and 4.
Stage 1: The minimum distances from node P to these three nodes are 3, 5, and 4 respectively.
At each subsequent stage, the shortest distance to a node is the minimum, over its feasible predecessors i, of (shortest distance to node i) + (distance from node i to the node). For one such node the candidates are min{3 + 1 = 4, 5 + 7 = 12, 3 + 2 + 7 = 12}, so its shortest distance is 4, reached through the first neighbour. The remaining nodes are analyzed in the same way.
Stage 4: The shortest distance from node P to the final node is 14, i.e. the optimal path runs from P through two intermediate nodes to the final node.
Thus this path gives the shortest cable connection for the classroom.
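Since the network diagram is not reproduced here, the following sketch uses a small hypothetical layered graph to show the backward shortest-path recursion. Only the edge lengths 3, 5, 4 out of P and the final distance 14 are taken from the example; the node names A, B, C, D, T and all remaining edge lengths are illustrative assumptions of ours.

```python
from functools import lru_cache

# Hypothetical cable network: P is the power control, T the terminal computer.
# Edge lengths from P (3, 5, 4) and the final answer (14) follow the example;
# the intermediate nodes and other lengths are illustrative assumptions.
edges = {
    "P": {"A": 3, "B": 5, "C": 4},
    "A": {"D": 1},
    "B": {"D": 7},
    "C": {"D": 9},
    "D": {"T": 10},
    "T": {},
}

@lru_cache(maxsize=None)
def shortest(node):
    """Bellman recursion: (shortest distance, path) from `node` to the terminal T."""
    if node == "T":
        return 0, ("T",)
    # Optimal value at a node = min over successors of edge length + value there.
    return min(
        (length + shortest(nxt)[0], (node,) + shortest(nxt)[1])
        for nxt, length in edges[node].items()
    )

dist, path = shortest("P")
print(dist, "->".join(path))  # 14 P->A->D->T
```

Note how the principle of optimality appears directly in the recursion: the best path from P is built from the best paths of its successors, each computed only once thanks to memoization.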

Conclusion
In this work, an attempt was made to evaluate the relevance of dynamic programming as an optimization tool. The focus was on applying dynamic programming to the optimal allocation of available agricultural production, cable connections in a computer classroom, and an economic production problem, using techniques such as the knapsack method and the shortest-path method. Dynamic programming deals with sequential decision processes, which are models of dynamic systems under the control of a decision maker. At each point in time at which a decision can be made, the decision maker chooses an action from a set of available alternatives, which generally depends on the current state of the system. Large-scale expansion problems and optimal release policies can be treated in the same framework, as can the financing of a number of investment projects within the limits of a target financing program with a sufficiently long realization horizon, a practical example of the method's use in financial management. Dynamic programming is one of the most effective methods for solving such problems.