Optimization of Manufacturing Capabilities Through Systems Reliability Analysis and Redundancy Compliance with Operations Design and Safety Considerations

System and machine reliability is an important consideration in optimizing manufacturing capability; it must be factored into the system design, layout and construction. Consideration has to be given to the reliability factors that influence the required optimization of the system, and to the level of redundancy needed to comply with manufacturing process and safety requirements. These considerations apply when commissioning and operating the system, with specific attention to the associated maintenance requirements. This work reviews these considerations and the effects that redundancy engineering can have upon them, indicating the latest ideas on their implementation and improvement. System availability is of paramount importance in the design of industrial structures: as a system becomes more complicated, the cost of improving its reliability also increases, and redundancy is the main avenue for increasing availability. A principal objective of this research is to establish a system that optimizes manufacturing capabilities through systems reliability analysis and redundancy compliance with operations design and safety considerations in a steel rolling mill. Most power system reliability analyses consider only repairable failures, so a modeling concept for unavailability due to ageing must be developed. A Normal or Weibull distribution is suggested as the means to estimate the failure probability density function due to the ageing process, and a combined model is proposed that includes calculations for both repairable and ageing failures. An example using seven generating units is used to verify the correctness of the constructed model. The results indicate that ageing failures have a significant impact on the unavailability of components, particularly in older systems.


Introduction
The redundancy optimization problem is solved when the design goal is achieved and its effects are reduced through the selection of the discrete and continuous components available. According to [1], redundancy optimization is achieved by examining and analyzing the minimal configuration and maintenance costs of series-parallel multi-state systems under reliability constraints. The maintenance policy specifies the priorities between the system components and the use of a shared maintenance team. The optimization approach developed in [2] is analytical and uses the universal z-transform and Markov chain techniques to develop a heuristic model. In developing a direct optimization method, there is a need to establish a system which supports the entire maintenance structure. A genetic algorithm (GA) based optimization model was proposed by [3] to improve design efficiency while respecting the design constraints. This is carried out through object-oriented programming to develop a knowledge-based system for the design of a series-parallel system, and the resulting program becomes an effective tool for deciding the relevant characteristics of each component. The authors conclude that the proposed system requires further study to optimize the GA parameters, including data entry and statistical analysis from the design knowledge base. According to [4], steel rolling is a manufacturing process in which unscheduled stoppages can critically affect plant availability, productivity and product quality. For many years steel companies have practiced condition-based monitoring in strategically vital areas such as the Hot Strip Mill. These monitoring methods include vibration analysis, oil and wear-debris analysis, and performance measurement using numerous techniques to measure parameters such as electric current and temperature.
The present methods allow maintenance personnel to detect, and often diagnose, pending equipment failure, but they are not able to predict remaining equipment life with any certainty. In [5], a predictive model is proposed which utilizes a Weibull distribution to define the expression modeling the failure intervals. This equation is solved using a Monte Carlo approach, with the time to failure (TTF) being predicted as a cumulative probability distribution. The paper applies condition monitoring measurements in two separate regimes, designated the stable and failure zones. In the stable zone, condition monitoring methods indicate that operation is normal and a reliability monitoring method is used. In the failure zone, the condition monitoring methods identify the existence of a problem, and both reliability and condition monitoring information are combined to predict the remaining machine life. The paper investigated both simulated and case studies and concluded that the prediction model is highly dependent on both the quality and the accuracy of the condition-based measurements.
An important parameter in reliability engineering can be examined using the effects of ageing in a power generating system; failures can be classified as either repairable random failures or non-repairable ageing failures [6]. Reliability analysis in its various forms is a well-established tool used in many industrial applications. It impinges on many aspects of our lives, from everyday issues such as domestic transport through to futuristic concepts such as space travel [7]. The discrepancies in the production process primarily surround the conflicting use of failure rate and force of mortality. A motor can be a system in its own right, but when taken in the context of a manufacturing process which could contain several hundred motors, it would be considered a part. Most statistical systems analysis methods are based on one or more of these processes. There is a wealth of data available on statistical modeling of the reliability of repairable systems. However, these are predominantly biased towards statistical investigations into whether a reliability analysis method exists for a particular system, the relative merits of differing reliability analysis methods when applied to a particular system, and producing either a derivative of current reliability analysis techniques, or a combination of several techniques, in order to create a new one. These investigations have predominantly been performed as academic exercises, and some have contributed towards the statistical understanding of the operational behavior of systems. The General Renewal Process model is an adaptation of the Power Law process which contains an ageing factor [8].

The General Renewal Process addresses the situation where the system falls between the two extremes of repair status, "as good as new" and "as bad as old", by introducing a repair effectiveness factor q ranked between 0 and 1, where 0 corresponds to a homogeneous Poisson process and 1 to a non-homogeneous Poisson process [9]. The ageing factor (virtual age) takes the repair effectiveness q into account by treating it as a factor of the operating time t through the equation

v_i = v_{i-1} + q t_i                                                  (1)

where v_i is the virtual age of the system after the i-th repair and t_i is the i-th time between failures.
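The virtual-age update can be sketched in a few lines of code. This is a hedged illustration of the Kijima Type I rule v_i = v_{i-1} + q·t_i under the stated interpretation of q; the times between failures are hypothetical, not data from the studies cited here.

```python
# Kijima Type I virtual-age update for the General Renewal Process:
# each repair removes a fraction (1 - q) of the age accumulated since
# the previous failure. q = 0 restores the system "as good as new";
# q = 1 leaves it "as bad as old".

def virtual_ages(interarrival_times, q):
    """Return the virtual age of the system after each successive repair."""
    ages = []
    v = 0.0
    for t in interarrival_times:
        v = v + q * t          # v_i = v_{i-1} + q * t_i
        ages.append(v)
    return ages

times = [100.0, 80.0, 60.0]       # hypothetical times between failures (hours)
print(virtual_ages(times, 0.0))   # perfect repair: [0.0, 0.0, 0.0]
print(virtual_ages(times, 1.0))   # minimal repair: [100.0, 180.0, 240.0]
print(virtual_ages(times, 0.5))   # partial repair: [50.0, 90.0, 120.0]
```

The repair effectiveness factor q thus interpolates continuously between the two extreme repair assumptions discussed above.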
A Monte Carlo simulation using the maximum likelihood estimates of the variables is used to derive the instantaneous failure intensity and the corresponding times between failures. The program uses two methods of calculating the virtual age of the system: one in which only the last repair is returned to full operating status, and one in which all previous repairs are returned to full operating status. The first case is considered for all analysis, through the derivation of the partial derivatives of the natural log of the likelihood function and setting them to zero at the maximum [10].
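The Monte Carlo step can be illustrated by inverse-transform sampling of the next time to failure from a power-law (Weibull-type) intensity, conditional on the current virtual age. A sketch under illustrative parameter values — beta, lam and the virtual age v are assumptions here, not the estimates obtained in the studies cited above:

```python
import math
import random

# For a power-law intensity with parameters (beta, lam), the conditional
# survival given virtual age v is exp(-lam * ((v + t)**beta - v**beta)).
# Setting this equal to a uniform draw u and solving for t gives an
# inverse-transform sampler for the next time to failure.

def next_ttf(beta, lam, v, u):
    """Solve exp(-lam * ((v + t)**beta - v**beta)) = u for t."""
    return (v ** beta - math.log(u) / lam) ** (1.0 / beta) - v

random.seed(42)
# u in (0, 1]: using 1 - random() avoids log(0)
draws = [next_ttf(1.5, 0.01, 50.0, 1.0 - random.random()) for _ in range(10000)]
print(round(sum(draws) / len(draws), 1))  # Monte Carlo mean time to next failure
```

Repeating such draws, with the virtual age updated after each simulated repair, yields the simulated failure history from which the instantaneous failure intensity can be estimated.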
The maximum likelihood estimates (MLE) of the three variables, Beta (β), Lambda (λ) and the repair effectiveness factor q (which determines the virtual age), are obtained from the partial derivatives of the likelihood function.
The Chi-squared goodness-of-fit test is a statistical procedure used to identify whether an assumed underlying data distribution is correct. Such tests are predominantly based on one of two basic distribution functions [11]: tests based on the cumulative distribution function are called distance tests, and tests based on the probability density function are known as area tests. The Chi-squared test is an area test; it is suitable for large data sets and follows a well-defined path: assume the data follow a specified distribution (for example the normal), and obtain the distribution parameters, such as the mean and variance. This yields a composite distribution hypothesis, in which more than one element must jointly be true; it is regarded as the null hypothesis, and its negation is called the alternative hypothesis. The hypothesized distribution is tested against the data set, and the null hypothesis is rejected whenever any one or more of its elements is not supported by the data [12]. The statistic that measures the difference between expected and observed values follows a Chi-squared distribution. The procedure is summed up as: divide the data range of X into k sub-intervals, count the number of data points in each sub-interval, and superimpose the PDF of the assumed theoretical distribution. The empirical histogram is then compared with the theoretical PDF to test whether the results agree probabilistically with the distributional assumption; according to [13], if they do not agree, the assumption is most likely incorrect. The Chi-squared statistic is

χ² = Σ_{i=1..k} (O_i − E_i)² / E_i

where E_i is the expected number of data points in cell i, O_i is the observed number of data points in cell i, k is the total number of cells or sub-intervals in the range, n is the sample size used for the test, y = k − 1 − (number of estimated parameters) is the number of degrees of freedom, and χ²_y is the Chi-squared distribution table value with y degrees of freedom.
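The statistic can be computed directly from the binned counts. A minimal numeric sketch with hypothetical observed and expected cell counts (the 5% critical value quoted is the standard table entry for two degrees of freedom):

```python
# Chi-squared goodness-of-fit statistic: chi2 = sum_i (O_i - E_i)**2 / E_i.
# Observed and expected counts are hypothetical, with k = 5 cells and
# p = 2 estimated parameters, so y = k - 1 - p = 2 degrees of freedom.

observed = [18, 25, 21, 19, 17]            # O_i: data points counted per cell
expected = [20.0, 20.0, 20.0, 20.0, 20.0]  # E_i: counts implied by fitted PDF

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))                      # 2.0

df = len(observed) - 1 - 2                 # y = k - 1 - (estimated parameters)
critical_5pct = 5.991                      # Chi-squared table value for y = 2
print(chi2 <= critical_5pct)               # True -> do not reject the null
```

Since the statistic falls below the critical value, the hypothesized distribution would not be rejected at the 5% level in this illustrative case.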
Treating the Y_i values as one group and ordering them from smallest to largest gives the ordered values Z_1, Z_2, …, Z_m. This allows calculation of the parametric Cramér–von Mises statistic

W² = 1/(12m) + Σ_{j=1..m} (Z_j − (2j − 1)/(2m))²
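The statistic is a short computation over the ordered sample. A sketch, assuming (as is usual for this statistic) that the Z values are the CDF-transformed observations, which should look uniform on (0, 1) under the null hypothesis; the sample values are hypothetical:

```python
# Parametric Cramer-von Mises statistic:
# W^2 = 1/(12m) + sum_j (Z_(j) - (2j - 1)/(2m))^2 over the ordered Z's.

def cramer_von_mises(z_values):
    z = sorted(z_values)                   # order from smallest to largest
    m = len(z)
    return 1.0 / (12 * m) + sum(
        (z[j] - (2 * j + 1) / (2.0 * m)) ** 2 for j in range(m)
    )

# hypothetical CDF-transformed sample
print(round(cramer_von_mises([0.1, 0.3, 0.45, 0.62, 0.8]), 4))  # 0.0356
```

Large values of W² relative to the tabulated critical values indicate disagreement between the sample and the hypothesized distribution.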

Research Methodology
In this research work, the three major indices observed are the production time measured in seconds, the number of cobbles formed, and the fineness quality of the rolling mill output measured in mesh.
In general, the quality of the mesh depends on the obstructions caused by cobble formations and the time lapses encountered during the rolling process. It is expected that optimum times are used to enable an effective process, which in turn yields an adequate and standard mesh. The start and end times of production were observed and converted to seconds, the unit of measurement; a start and end time forms a cycle. The number of cobble formations associated with each cycle was recorded, along with the corresponding quality level. A multiple regression approach was then used to estimate the model and its associated parameter estimates. The level of variability and the associated sums of squares were not overlooked; they were analyzed to check the suitability of the model for the optimization process. The uniqueness of this work is that the general model of the two effects of the independent variables was estimated before the individual effects on the dependent variable were considered, to enable efficient optimization of the objective function. The general estimate of the model parameters gave the desired objective function, since it is the basis by which the fine quality is achieved; it was subjected to the production cycles and the number of cobbles formed. At this point, the optimal condition of fineness, when the system's redundancy and manufacturing capabilities are fully utilized, is established.
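The two-predictor regression described above can be sketched by solving the normal equations directly. The data values below are hypothetical placeholders standing in for the mill measurements, and the variable names are assumptions for illustration:

```python
# Fit fineness (mesh) on production time and cobble count,
# y = b0 + b1*x1 + b2*x2, by solving the normal equations (X'X) b = X'y.

def fit_ols(x1, x2, y):
    """Return (b0, b1, b2) for y = b0 + b1*x1 + b2*x2."""
    rows = [[1.0, a, b] for a, b in zip(x1, x2)]
    # Build X'X (3x3) and X'y (3x1)
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, 3):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, 3):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    b = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, 3))) / xtx[r][r]
    return tuple(b)

time_s  = [100, 150, 200, 250, 300]   # hypothetical production time per cycle (s)
cobbles = [0, 1, 1, 2, 3]             # hypothetical cobble formations per cycle
mesh    = [95, 88, 82, 74, 65]        # hypothetical fineness (mesh)
print(tuple(round(v, 3) for v in fit_ols(time_s, cobbles, mesh)))
```

The signs of the fitted coefficients b1 and b2 then carry the interpretation used in the results: a negative coefficient means that increases in that predictor reduce the fineness measure.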
It is worth noting that another important measure of reliability is the mean life, an expression of a component's or system's operating lifespan. Reliability (R) is the probability that a component or system will perform as designed when needed. Like all probabilities, reliability ranges in value from 0 to 1, inclusive. Given the tendency of manufactured devices to fail over time, reliability decreases with time. During the useful life of a component or system, reliability is related to the failure rate by a simple exponential function,

R(t) = e^(−λt)

where R is the reliability as a function of time t and λ is the failure rate [14]. Thus, reliability exhibits the same asymptotic approach to zero over time that we would expect from a first-order decay process, such as a cooling object approaching ambient temperature or a capacitor discharging to zero volts. Obviously, a system designed for high reliability should exhibit a large R value and a small probability of failure on demand (PFD). Just how large R needs to be (how small PFD needs to be) is a function of how critical the component or system is to the fulfillment of our needs, and the degree to which a system must be reliable in order to meet modern expectations is often surprisingly high. A practical example of this equation in use is the following. Suppose the reliability of a system is 99 percent (0.99). This sounds rather good; however, when we calculate how many hours of breakdown would be experienced in a typical year at this level of reliability, the result is rather poor (depending on the standard of expectation). If the reliability value is 0.99, then the unreliability is 0.01. Suppose an industrial manufacturing facility requires steady electric power service all day, every day, for its continuous operation. The facility has back-up diesel generators to supply power during utility outages, but they are budgeted for only 5 hours of back-up generator operation per year.
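The downtime arithmetic behind these statements is short enough to verify numerically. In the last step the failure rate and mission time are illustrative assumptions, not values from the text:

```python
import math

HOURS_PER_YEAR = 8760

# "99 percent reliable" still means a lot of downtime over a year:
R = 0.99
pfd = 1 - R                                  # unreliability
print(round(pfd * HOURS_PER_YEAR, 1))        # 87.6 hours of outage per year

# Reliability needed to keep blackouts to 5 hours per year on average:
R_required = 1 - 5 / HOURS_PER_YEAR
print(round(R_required * 100, 3))            # 99.943 percent

# R(t) = e^(-lambda*t) for an assumed constant failure rate (per hour):
lam, t = 1e-5, 1000.0
print(round(math.exp(-lam * t), 5))          # 0.99005
```

The second result reproduces the 99.943 percent figure used in the power-service example below.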
How reliable the power service needs to be in order to fulfill this facility's operational requirements may be calculated simply by determining the unreliability (PFD) of power corresponding to 5 hours of blackout per year [15]. Thus, the utility electric power service to this manufacturing facility must be 99.943 percent reliable in order to meet the expectation of no more than 5 hours (on average) of back-up generator usage per year. A common order-of-magnitude expression of desired reliability is the number of nines: a reliability value of 99.9 percent is expressed as "three nines" and a reliability value of 99.99 percent as "four nines" [16]. The Weibull cumulative distribution function can be transformed so that it appears in the familiar form of a linear regression model, Y = mX + b, using the transformation

ln(−ln(1 − F(t))) = β ln(t) − β ln(α)

Comparing this with the simple equation for a line, the left-hand side corresponds to Y, ln(t) corresponds to X, β corresponds to m, and −β ln(α) corresponds to b. Thus, when we perform the linear regression, the estimate for the Weibull shape parameter β comes directly from the slope of the line [17], and the estimate for the scale parameter must be calculated as α = e^(−b/β).
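The linearization can be sketched end to end with a median-rank regression. The failure times below are hypothetical, and the use of Benard's median-rank approximation for F at each ordered failure is an assumption of this sketch:

```python
import math

# Linearize the Weibull CDF F(t) = 1 - exp(-(t/alpha)**beta) as
# ln(-ln(1 - F)) = beta*ln(t) - beta*ln(alpha), fit a line by least
# squares, then read beta from the slope and alpha from the intercept.

def weibull_regression(failure_times):
    t = sorted(failure_times)
    n = len(t)
    # Benard's median-rank approximation for F at the i-th ordered failure
    f = [(i + 1 - 0.3) / (n + 0.4) for i in range(n)]
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1.0 - fi)) for fi in f]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    beta = slope                          # shape: slope of the fitted line
    alpha = math.exp(-intercept / beta)   # scale: intercept = -beta*ln(alpha)
    return beta, alpha

beta, alpha = weibull_regression([105.0, 190.0, 280.0, 370.0, 490.0, 610.0])
print(round(beta, 2), round(alpha, 1))
```

A shape estimate below 1 suggests a decreasing failure rate (burn-in) and above 1 an increasing one (wear-out), the interpretation applied to the mill data in the conclusion.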

Results and Discussions
Thirty samples of data were gathered over a period of three months, from August 2015 to October 2015. The mill was monitored until cobble formation occurred. The regression coefficient of the cycles, −0.97412, means that a unit decrease in the time of cobble formation produces on average a 0.97 unit decrease in the meshes; the finer the particles, the smoother and better the product. That is, for ∆X1 = 1 unit, ∆Y = −0.97 units on the scale of the particles, in support of the production processes and the reduction in the number of cobble formations, holding the number of cobbles constant if the production time is fixed. Also, for ∆X2 = 1, there is a ∆Y = −306.80 unit decrease in Y. Critically viewing the observed-versus-predicted plot, the errors are randomly distributed, which is a clear indication that the model was a true representation of the data. The fineness, measured in mesh, remains the dependent variable, while the number of cobble formations and the cycles are the independent variables. The effects of the two independent variables were first considered simultaneously on the mesh level of the grinding processes. The number of cobbles was then held constant to determine the effect of variation in the production time consumed during the production processes. Assuming all things to be normal, there is an adverse effect, arising from the lapses created by the formation of cobbles, on the fineness of the grinding. In that regard, linear models of the number of cobbles and of the cycles were respectively plotted against the fineness measured in mesh.

Figure 2. Estimated Cycles and Cobble Formations against Fineness.
The variations in the estimated models for the three conditions are plotted, and it was observed that the error terms are normally distributed. Since the optimum goal is to minimize the grinding mesh in order to maximize the fineness under the two variable effects of cobbles and production times, the optimization conditions were presented and analyzed. The analysis shows that the production process could be optimized at a ratio of one second to 0.003351 cobbles when all the production processes are adequately fixed. At the point where X1 = 16437.44 and X2 = 55.08737, the rolling mesh decreased by 17673.4.

Conclusion
The shape parameters β1 and β2 indicate whether the rates of cobble formation and of cycles, respectively, are increasing, constant or decreasing. A β1 < 1.0 indicates that the assembly has a decreasing cobble formation rate; this scenario is typical of infant mortality and indicates that the assembly is failing during its burn-in period. A β2 > 1 indicates an increasing cobble formation rate across cycles, which leads to excess time wastage due to the increase in the number of cobbles formed; this is typical of systems with components that are wearing out. Generally, there is an indication that the cobble formation and the undesirable filth formation in the production mesh are a result of fatigue and of sub-assemblies wearing out. The intercept is a measure of the scale, or spread, in the distribution of the data. The ratio of the number of cobble formations to the number of seconds at which the optimal mesh would have been achieved is 0.003351, which implies a loss percentage of 99.97; this represents the system optimization opportunities not utilized.
It was discovered that thermal inconsistencies in the product, such as an unbalanced thermal profile between the operator side and the drive side, a large temperature drop between head and tail, excessive skid marks, or a lower average temperature, can make the rolling process difficult to manage and, in the worst cases, can lead to cobbles. Generation of a cobble leads to the following losses: loss of production due to the loss of about 16437 seconds in removing the cobble; damage to mill equipment such as entry guides and side guards due to pulling and tugging by the crane; threat of burn injury, since the coil remains at a high temperature; noise pollution caused by removal of the strip over the roller table; and loss of good material, resulting in financial loss. There is a loss of 1313.095 mesh in the recovery from each cobble formation.