The SPQR («Semper Paratus ad Qualitatem et Rationem») Principle in Action

We show an application of the SPQR Principle [«Semper Paratus ad Qualitatem et Rationem» ("Always Ready for Quality and Rationality")] as the way to analyse papers and books; it seems that very few people have taken care of the Quality of Methods (Deming, Juran, Gell-Mann, Shewhart, Einstein, Galilei). The case analysed here is an application of Design of Experiments to Large-Scale Metrology and to Control Charts.


Introduction: "The Problem Outline"
Many researchers use citations of papers and books as an index of the Quality of the methods given in those papers and books: according to the author this is a very BAD attitude. On the contrary, they should use the correct (Scientific) way to analyse the data and make decisions about the methods suggested.
Another wrong attitude is found on the web: Open Access Journals are criticised as "means for tricking people" (asking fees for publishing papers). For example, about Science Publishing Group they say either [1] ′′Science Publishing Group is another scam Open Access journal publisher or academic vanity press..... the journals put out by the Science Publishing Group are not read by scientists and have no impact factor.′′ or [2] ′′They will distribute it globally and pretend it is real research, for a fee. It's untrue? And parts are plagiarized? They're fine with that. Welcome to the world of science scams, a fast-growing business that sucks money out of research, undermines genuine scientific knowledge, and provides fake credentials for the desperate.′′ In my opinion, the bad quality of the papers published does not depend on the fees asked by the OA Publishers (OAP), but on the very low quality of the authors and of the Peer Reviewers; the same happens for ′′well-reputed magazines and journals′′ [see the long bibliography of Fausto Galetto].
For that reason, the author stated the SPQR Principle [«Semper Paratus ad Qualitatem et Rationem» (′′Always Ready for Quality and Rationality′′)] as the way to analyse papers and books; it seems that very few people have taken care of the Quality of Methods. To the author's knowledge, they are Deming, Juran, Gell-Mann, Shewhart [3][4][5][6][7][8]. Fausto Galetto would like to know of anybody else who did that... In the last years of his life A. Einstein wrote: «An academic career puts a person in an embarrassing position, asking him to produce a great number of scientific publications; only strong personalities can resist this seduction toward superficiality… I am very grateful to Marcel Grossmann that I had the fortune not to be in this hard position.» It is not surprising that professors, researchers, managers, scholars and students learn wrong ideas in the Quality field, BECAUSE we have a very widespread book with many wrong concepts {e.g., D. C. Montgomery falls into contradiction! He spreads wrong concepts on Quality [9,10]}. Is Wiley & Sons an OAP? The Quality Engineering Group (QEG, comprising several professors) suggests the Montgomery books to students; therefore it is not a surprise that the case we analyse here has various problems [11,12]. On the web you can find: ««Welcome to the website of the Quality Engineering Group»»; QEG members think that Bibliometrics is very important for the quality of papers... (see § 7 and the References, the Galetto papers). You can find the drawbacks of Bibliometrics in the F. Galetto paper [97] ′′Bibliometrics: Help or Hoax for Quality?′′ (where there are some ideas of QEG!!!).
The case we analyse here [32, a QEG book] (is Springer-Verlag London an OAP?) is a very interesting application of DOE (Design Of Experiments) to Large-Scale Metrology settings. It is important for our purpose because in this case we do not have the data, and so we are in the situation in which a reader often finds himself: the conclusions of the authors are given, and the reader must ′′take them or leave them′′, without any possibility of verifying them! It is the same for [33, a QEG paper] (surely IEEE Trans Instrum Meas is not an OAP). The wrong documents [9-31] are not published by OAP (Open Access Publishers): those publishers do not ask fees of the authors, they ask fees of the readers! As for OAP, the Quality of the documents depends on the authors... You can see that in the paper [111] ′′Six Sigma Hoax: The Way Professionals Deceive Science′′.
In order not to be cheated, the only way left to the reader is to use his own intelligence together with the SPQR Principle...
We mainly use excerpts from the book (published in 2011) and from one of the most recent papers the author could see (2010) [32,33].
Reader, be SPQR, «Semper Paratus ad Qualitatem et Rationem», to understand the issue clearly, remembering Deming's and Gell-Mann's ideas and the Quality Tetralogy, which must be in the mind of every Scholar… (see figure 1, given in this introduction because it shows the prerequisite of Quality). The present paper is offered to Managers, to Students (aiming at becoming Future Managers), to Young Researchers (aiming at becoming Scientific Researchers), to Scholars (aiming at learning Scientific ideas), and to Professors who want to learn the BASICS of Decisions based on the Scientific Analysis of problems and solutions, in order to make Quality Decisions in their work of practical Research, Theoretical Research and Management.
It aims at showing in some detail the several aspects related to Management of Quality and Problem Solving, because good methods are crucial for sound decision making. Decision-making concerns everybody, both as maker of decisions (after either a serious or a non-serious analysis) and as sufferer of the decisions of other people (again, after either a serious or a non-serious analysis by them). Often we need data to decide: we analyse them to decide, and we must take into account the consequences of our decisions; unfortunately the data are always affected by variability (they are uncertain to us) and therefore we need to consider the uncertainties in detail and introduce them into the analysis for "decision-making under uncertainty". The worst thing a reader may encounter is not having the data to analyse: this is the case here! The two figures 2 and 3 (the 1st an excerpt, the 2nd by Fausto Galetto) are given to let the reader see the experimental setting for Distributed Large-Scale Dimensional Metrology.
There is a frame like a parallelepiped; at the bottom there is the item to be measured, the measurand; on the top face, the ceiling, there is a set of transceivers (optimised in position and number) that receive and send UltraSound (US) signals to and from a probe; the US signals take a certain time, the TOF (Time Of Flight), used to measure the measurand.

[Figure 1. The Quality Tetralogy.]
In the paper [33] (2010) one finds the figure (our Figure 2, an excerpt from the paper [33]) where one sees the 3 factors used in the DOE: d, the horizontal distance between the network devices (C1, …, Cn) and the probe (the Crickets); θ, the misalignment angle between the normal vector of the network devices (C1, …, Cn) and that of the probe (the Crickets); V, the battery charge of the Crickets (on the probe). In figure 3 you see (a) a network (or ′′constellation′′) of sensing devices, distributed within the working volume; (b) a portable probe to ′′touch′′ the points of interest on the surface of the measured object (the ′′measurand′′), so as to obtain their spatial coordinates. In the book [32] (2011) the 3rd factor (V) is not considered. The purpose of the book [32]...

The Experiments Carried Out, First Part
In their book [32], at § 7.2.2, Description of the Experiments, one finds the experiments carried out to construct the correction model. Network devices were assumed to be parallel to the devices to be localized. In current practice this condition is generally satisfied, because the network devices (C1, …, Cn) are arranged on the ceiling, at the top of the measuring area, and the Crickets to be localized are generally mounted on the portable probe and oriented upwards. This configuration is a practical solution to obtain good coverage and to maximize the measuring volume.
In this configuration, the misalignment angles relate a generic network device (Ci) and the device(s) to be localized to their distance: a. transmitter (T) and receiver (R) are positioned facing each other; b. the distance (d) between transceivers is known and represents the 1st factor of the factorial plan; c. the transmitter face is parallel to the receiver face, but they are not perpendicular to the direction of the distance: a misalignment angle (θ) is introduced and represents the 2nd factor of the factorial plan. The reference point for determining the transceivers' distance and misalignment angle corresponds to the centre of each US (Ultra Sound) transceiver. (Figure 4. Excerpt from the book [32].)

A scholar has three ways to find the value of the parameter λ of the power-transformed response RV Y^λ:

a) If σ_Y = φ(µ) ∝ µ^α and we transform the original response Y to Y^λ, the gradient dY^λ/dY ∝ Y^(λ−1), so that σ_(Y^λ) ∝ (dY^λ/dY)·σ_Y ∝ µ^(λ−1)·µ^α [the gradient being evaluated at µ]; choosing λ + α − 1 = 0, the transformed response RV Y^λ has constant variance. Sometimes we know theoretically the relationship σ_Y = φ(µ) ∝ µ^α and we can take advantage of that; for example, if we know that the exponential distribution is suitable for the data at hand, we know that σ_Y = µ and therefore λ = 0: the transformation of the data is ln(Y), because Y^λ = exp[λ·ln(Y)] = 1 + λ·ln(Y) + [λ·ln(Y)]²/2 + [λ·ln(Y)]³/6 + …, so that (Y^λ − 1)/λ → ln(Y) in the limit λ → 0.

b) If we do not know theoretically the relationship σ_Y = φ(µ) ∝ µ^α, we can take advantage of the data. We need replicated data so that we can compute s_i (estimate of σ_i) and m_i (estimate of µ_i) for every i-th experimental condition: since σ_i ∝ µ_i^α, we have ln(σ_i) = constant + α·ln(µ_i), a straight line with slope α. We compute a, the estimate of α, and we estimate λ = 1 − a. If a = 2 then the transformed response RV would be the reciprocal Y^(−1).

[Figure: probe transceivers.]
The transformation does not by itself assure that the ′′transformed response RV′′ is normally distributed. To get this result we need the 3rd way: c) we postulate that the normal distribution applies to the error of the transformed response RV Y^λ in the linear model (in matrix form) W = Xβ + E (see the ANOVA in any good book), where W = (Y^λ − 1)/λ; the Mean Square Residual (Sum of Squares of Residuals/df) in the ANOVA table, which we get with the Maximum Likelihood Method, depends on λ; let's name it MS_R(λ). We compute a set of values MS_R(λ1), MS_R(λ2), ..., MS_R(λn), and we choose as the estimate of λ the value λ0 providing the minimum MS_R(λ0). Obviously this depends on the data and on the assumed model.
Since the Maximum Likelihood method is NOT an optimisation method, we see that the QEG statement ««... exponential y* = y^λ, where λ is the parameter of the transformation... optimization method for determining the transformation parameter»» is FALSE!!! Let's go back to the QEG data.
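For readers who want to check ways (b) and (c) themselves, here is a minimal sketch in Python (on hypothetical replicated data of ours, NOT the QEG data, which we do not have): it estimates α from the slope of ln(s_i) versus ln(m_i), and then scans a grid of λ values, choosing the one minimising the residual mean square of the transformed response (normalised by the geometric mean, the usual Box-Cox device, so that the MS_R values are comparable across λ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical replicated experiment: 8 conditions, 5 replicates each,
# with sigma proportional to mu (alpha = 1, hence the true lambda = 1 - 1 = 0)
x = np.repeat(np.arange(1, 9), 5).astype(float)   # one quantitative factor
mu = 50.0 * np.exp(0.3 * x)                       # true condition means
y = rng.normal(mu, 0.1 * mu)                      # responses, sd = 0.1*mu

# Way (b): slope of ln(s_i) vs ln(m_i) estimates alpha; lambda_hat = 1 - alpha_hat
levels = np.unique(x)
m = np.array([y[x == v].mean() for v in levels])
s = np.array([y[x == v].std(ddof=1) for v in levels])
alpha_hat = np.polyfit(np.log(m), np.log(s), 1)[0]
print("lambda from the variance-mean slope:", round(1.0 - alpha_hat, 2))

# Way (c): scan lambda and keep the value minimising MS_R of W = X*beta + E
gm = np.exp(np.mean(np.log(y)))                   # geometric mean of the response
X = np.column_stack([np.ones_like(x), x])

def ms_residual(lam):
    # Box-Cox transform normalised by gm so MS_R is comparable across lambdas
    w = gm * np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / (lam * gm**(lam - 1.0))
    beta, *_ = np.linalg.lstsq(X, w, rcond=None)
    r = w - X @ beta
    return r @ r / (len(w) - X.shape[1])

grid = np.linspace(-1.0, 1.0, 41)
lam_best = min(grid, key=ms_residual)
print("lambda minimising MS_R on the grid:", round(lam_best, 2))
```

Both estimates should land near the true λ = 0 of the simulated data; with real data the two ways need not agree exactly.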

The Experiments Carried Out; Comparison with a Previous Experiment
What can a scholar do with the DOE shown in the paper [33] (2010)?
Let's see. There are 105 treatment combinations replicated 5 times (see the table in figure 6).
For each of these combinations, 50 measurements of the TOF are performed, taking the average value.
As QEG did in the book, they say: the QEG statement ««The most common transformation is the exponential y* = y^λ, where λ is the parameter of the transformation. Box and Cox proposed an optimization method for determining the transformation parameter»» is FALSE because the Maximum Likelihood method is used, which is NOT an optimisation method!!! These transformed data (original data or means? QEG say nothing about this...) are the data analysed; the ANOVA of QEG is in their figure 12 (figure 7 in this paper). From the QEG ANOVA table we can derive the following table. ′′With the support of the Minitab Best-Subsets tool, we find that the terms with coefficients K3 and K4 have slightly influential contributions [our figure 8]. In fact, considering several competing multiple regression models of order not larger than two (see Figure 7.15), the model with the three terms (d, θ² and dθ) is the one with the Mallows' Cp (4.3) closest to the number of predictors plus the constant (4). In general, Mallows' Cp is used in statistics to assess the fit of a regression model that has been estimated using ordinary least squares. It is applied in the context of model selection, where a number of predictor variables are available for predicting some outcome, and the goal is to find the best model involving a subset of these predictors. As anticipated, the best model is the one with the Mallows' Cp closest to the number of predictors plus the constant (Mallows 1973). In this specific case, this fact was also confirmed by an initial regression, based on the model in Eq. 7.4, in which the contribution of the terms d, θ² and dθ appeared to be secondary [our figure 9!!!].′′ Notice (and meditate on) the QEG contradiction ««this fact was also confirmed by an initial regression, based on the model in Eq. 7.4 ("large model"), in which the contribution of the terms d, θ² and dθ appeared to be secondary.»»: IF the terms d, θ² and dθ were NOT ′′important′′, i.e. ′′NOT significant′′, they CANNOT become ′′significant′′ in formula 7.5 ("reduced model")!!! We do not have the data... Therefore we must accept (take it or leave it) that ′′As a consequence, terms with coefficients K3 and K4 were removed from the model and a new second-order model, representing a compromise solution between best fitting and reduction of the number of predictors, was constructed using Eq. 7.5 ("reduced model").′′ The two formulae are numbered as QEG did: they are excerpts. Notice (and meditate on) the QEG wrong statement that follows the excerpt (figure 9) ««It is important to note the presence of the last term (K6·dθ), which accounts for the interaction between the two factors.»»: K6·dθ is NOT ′′the interaction between the two factors′′ but ONLY the interaction of the linear effects of d and of θ; in fact, in formula 7.4, K6·dθ accounts for 1 df and NOT for 32 = (5−1)(9−1) df!!! [see figure 10 of the weighted regression] Moreover, the regression coefficients Ki are different in the formulae (7.4) and (7.5) (Figure 7.13). The final regression equation is:

TOF_Error = 84.6 + 0.0207·d + 0.0314·θ² + 0.000336·d·θ    (7.6)

In Eq. 7.6 [figure 10, weighted regression], TOF_Error, d and θ are expressed in µs, mm and degrees (°), respectively. This model can be useful for correcting the systematic error in TOF measurements. Given that the variation in the response standard deviation is not very large, Eq. 7.6 [figure 10, weighted regression] turned out to be not very dissimilar from the result that a simple (non-weighted) linear regression would give.
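To illustrate the mechanics of the selection criterion QEG invoke, here is a minimal all-subsets computation of Mallows' Cp, Cp = SSE_p/σ̂² − n + 2p (with σ̂² taken from the full model), on data simulated from a model of the same form as Eq. 7.6; the design levels and coefficients are assumptions of ours, not the book's data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical design in the two factors of the book: distance d (mm), angle theta (deg)
d = np.repeat([500, 1000, 1500, 2000, 2500, 3000, 3500, 4000], 5).astype(float)
theta = np.tile([0, 15, 30, 45, 60], 8).astype(float)
y = 85 + 0.02 * d + 0.03 * theta**2 + 3e-4 * d * theta + rng.normal(0, 5, d.size)

def sse(cols):
    """Residual sum of squares and parameter count of an OLS fit with intercept."""
    X = np.column_stack([np.ones_like(d)] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r, X.shape[1]

terms = {"d": d, "theta": theta, "theta^2": theta**2, "d*theta": d * theta}
sse_full, p_full = sse(list(terms.values()))
sigma2 = sse_full / (y.size - p_full)          # sigma^2 estimated from the full model

for k in range(1, len(terms) + 1):
    for subset in itertools.combinations(terms, k):
        sse_p, p = sse([terms[t] for t in subset])
        cp = sse_p / sigma2 - y.size + 2 * p
        print(f"{subset}: Cp = {cp:.1f}  (target = p = {p})")
```

A subset whose Cp is close to its own p is deemed adequate; note that the criterion says nothing, by itself, about whether the retained terms have the right functional form.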
The QEG members go on by saying: ′′The regression output is quantitatively examined by an ANOVA (see Figure 7.16) [you see it in "our" Figure 10].′′

Analysis of the Experiments Carried Out, Using SPQR
Let's see, contrary to the QEG findings, the SPQR Principle in action (Figure 11. The SPQR Principle).
Since 37 runs times 50 data (per run) gives a total of 1850 data, from the ANOVA of QEG we deduce that only 1700 data were used: 150 data (3 runs) were discarded. Which ones? They are 3 runs at 60°!!! See figure 12, taken from the book [32].

Those 3 points (runs) would have generated quite different estimates of the regression coefficients and perhaps quite a different formula!!!
We use the QEG regression formula TOF_Error = 84.6 + 0.0207·d + 0.0314·θ² + 0.000336·d·θ (7.6) to generate the data for the 34 runs carried out [comparing them with the points in the graphs] and for the 3 missing ones (we do that to make easy use of the orthogonal contrasts, which give great insight into the matter).
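Since the reader is left with ′′take it or leave it′′, the least he can do is evaluate Eq. 7.6 himself; this minimal sketch reproduces the fitted TOF_Error over the full 8×5 grid of levels (the levels are those we read off the book's graphs), including the runs not reported:

```python
import numpy as np

d_levels = np.array([500, 1000, 1500, 2000, 2500, 3000, 3500, 4000], dtype=float)  # mm
theta_levels = np.array([0, 15, 30, 45, 60], dtype=float)                          # degrees

# QEG formula (7.6): TOF_Error in microseconds
D, T = np.meshgrid(d_levels, theta_levels, indexing="ij")
tof_error = 84.6 + 0.0207 * D + 0.0314 * T**2 + 0.000336 * D * T

# One row per distance, one column per angle, for comparison with the graphs
for i, dv in enumerate(d_levels):
    print(f"d={dv:6.0f}: " + "  ".join(f"{v:7.1f}" for v in tof_error[i]))
```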
Comparing the QEG graph with the data simulated from the QEG formula, it is clear that the important curvature is lost! Moreover it is clear, for people who know a little Mathematics, that there is a linear effect of θ!!! (see the graphs in figure 14). Using the orthogonal contrasts for the linear effect of d (d_Linear), the linear effect of θ (θ_Linear), the quadratic effect of θ [due to θ²] (θ_Quadratic), and the interaction of the linear effect of d with the linear effect of θ (dθ_Linear,Linear), one finds the following ANOVA table, where all the effects are significant at the 0.0001 level: it is clear that the linear effect θ_Linear of the factor θ is highly significant, while it was missed by QEG. A sketch of this contrast decomposition is given below.
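Here is a minimal version of that decomposition (the contrast coefficients are the standard orthogonal-polynomial ones for equally spaced levels; the response is the noise-free grid computed from Eq. 7.6, so the sums of squares are exact, and with real replicated data one would divide each contrast SS by the residual MS to get the F values):

```python
import numpy as np

# Response grid computed from Eq. 7.6, as in the previous sketch
D, T = np.meshgrid(np.array([500, 1000, 1500, 2000, 2500, 3000, 3500, 4000], float),
                   np.array([0, 15, 30, 45, 60], float), indexing="ij")
tof_error = 84.6 + 0.0207 * D + 0.0314 * T**2 + 0.000336 * D * T

# Standard orthogonal polynomial contrasts for equally spaced levels
c_d_lin = np.array([-7, -5, -3, -1, 1, 3, 5, 7], float)   # 8 levels of d
c_t_lin = np.array([-2, -1, 0, 1, 2], float)              # 5 levels of theta
c_t_quad = np.array([2, -1, -2, -1, 2], float)

def contrast_ss(table, row_c, col_c):
    """Sum of squares of a (row x column) contrast on a table of cell values."""
    w = np.outer(row_c, col_c)
    return np.sum(w * table) ** 2 / np.sum(w**2)

ss = {
    "d_Linear":          contrast_ss(tof_error, c_d_lin, np.ones(5)),
    "theta_Linear":      contrast_ss(tof_error, np.ones(8), c_t_lin),
    "theta_Quadratic":   contrast_ss(tof_error, np.ones(8), c_t_quad),
    "d_Lin x theta_Lin": contrast_ss(tof_error, c_d_lin, c_t_lin),
}
for name, value in ss.items():
    print(f"{name}: SS = {value:.1f}")  # theta_Linear is far from negligible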
The coefficients of the regression formula [with the orthogonal polynomials] are in table 5; it is clear that QEG missed the significant linear effect θ_Linear. Now we use the QEG graphs to ′′generate′′ the data of the complete experiment, and we will use the G-Method to find the estimates...
Since we did not have the original data, we decided to ′′use′′ the data ′′recovered′′ from the graphs. Here they are (Table 6; TOF values, by distance d, in mm, and disalignment angle θ, in degrees; a blank means the run is not available):

d (mm)   θ=0°   θ=15°   θ=30°   θ=45°   θ=60°
 500     102    104     128     158     200
1000     117    118     145     172     258
1500     118    123     159     197     310
2000     130    137     175     218
2500     137    147     183     232
3000     142    151     199     238
3500     154    170     218     267
4000     175    210     258

Since we do not have the QEG original data, we can only use the data from the graphs in order to find the value of λ; doing that we find the value −0.54 (rounded to −0.5, the reciprocal square root transformation), as given in the following figure. Two values of λ were computed [see figure 15]: on the left assuming that the data on the graphs were single data (the 34 values of table 6), on the right assuming that the data on the graphs were means of 50 data.
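Anybody can redo the λ estimation from these recovered values; a minimal sketch follows (note that scipy's boxcox maximises the profile log-likelihood of a simple normal sample, not of the regression model, so its estimate need not coincide exactly with the −0.54 quoted above):

```python
import numpy as np
from scipy import stats

# The 34 TOF values recovered from the graphs (table 6); rows: d = 500..4000 mm,
# columns: theta = 0, 15, 30, 45, 60 degrees; None marks the unavailable runs
table = [
    [102, 104, 128, 158, 200],
    [117, 118, 145, 172, 258],
    [118, 123, 159, 197, 310],
    [130, 137, 175, 218, None],
    [137, 147, 183, 232, None],
    [142, 151, 199, 238, None],
    [154, 170, 218, 267, None],
    [175, 210, 258, None, None],
]
y = np.array([v for row in table for v in row if v is not None], dtype=float)

# Maximum-likelihood estimate of the Box-Cox lambda
_, lam = stats.boxcox(y)
print("lambda MLE:", round(lam, 2))
```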
Thus we confirm that the most suitable transformation is the one with λ = −0.5 (it depends on the 34 data from the graphs). IF we had had the original data, we could have done much better...

Analysis of the Experiments Carried Out, Using SPQR and Response Surface Methodology (RSM)
To get a better insight into the results, using only the 34 data from the graphs, we now use the Response Surface Method (RSM), which is nothing other than the G-Method applied to the data of table 6.
Using the orthogonal polynomials in d, for each disalignment angle θ, we find (estimate) the regression coefficients b0, b1, b2: they are uncorrelated because we used the orthogonal polynomials! Therefore we can easily find the Confidence Intervals (with 99% CL). The standard deviation was taken from the QEG ANOVA... Table 10 provides the confidence intervals.
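A sketch of those per-angle fits: for each angle, the quadratic model in d is fitted on an orthogonalised basis (a QR factorisation plays the role of the orthogonal polynomials, so the coefficients are uncorrelated) and 99% intervals are computed; here each fit's own residual standard deviation is used, whereas above we took the standard deviation from the QEG ANOVA:

```python
import numpy as np
from scipy import stats

# TOF data recovered from the graphs (table 6): for each angle, values vs distance
d_all = np.array([500, 1000, 1500, 2000, 2500, 3000, 3500, 4000], dtype=float)
data = {  # angle (degrees) -> TOF at the distances above (trailing runs missing)
    0:  [102, 117, 118, 130, 137, 142, 154, 175],
    15: [104, 118, 123, 137, 147, 151, 170, 210],
    30: [128, 145, 159, 175, 183, 199, 218, 258],
    45: [158, 172, 197, 218, 232, 238, 267],
    60: [200, 258, 310],
}

for angle, values in data.items():
    y = np.asarray(values, dtype=float)
    d = d_all[: y.size]
    df = y.size - 3
    if df <= 0:
        print(f"theta={angle:2d}: too few points for a quadratic fit")
        continue
    # Vandermonde [1, d, d^2] orthogonalised by QR: the coefficients in the
    # Q-basis are uncorrelated, so their confidence intervals are straightforward
    Q, _ = np.linalg.qr(np.column_stack([np.ones_like(d), d, d**2]))
    b = Q.T @ y                          # uncorrelated coefficients b0, b1, b2
    s = np.sqrt(np.sum((y - Q @ b) ** 2) / df)
    half = stats.t.ppf(0.995, df) * s    # 99% CI half-width (same for each b_i)
    print(f"theta={angle:2d}: " + "  ".join(
        f"b{i}={bi:9.2f}±{half:.2f}" for i, bi in enumerate(b)))
```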
Since the regression coefficients b0, b1, b2 depend on the angle, we show their relationship in figures 17, 18, 19. Therefore it is absolutely unwise to pretend that a unique formula, such as (7.6), could give the response variable (the graphs of QEG were very clear on that, but the professors did not realise it!!!).
Minitab was of no help there: GIGO! Making a sound regression of the data from the graphs (using only the 34 collected data), one finds Table 11.
Acting as QEG did, one finds that the Mallows' Cp is 9, and therefore the set of coefficients of Table 8 is quite unsuitable to provide what is needed (as we said before, when we transformed the data)!
The Response Surface is given in the graph of figure 21; at the base you can see the contour lines. The author thinks that the reader of this paper should agree that the QEG statement ′′The model fits well with experimental data.′′ is false.
In the author's opinion it would be better, on the contrary, to put SPQR in action! ′′Quality of Quality Methods is important′′ (F. Galetto), as was appreciated by J. Juran at the Vienna EOQC Conference! To compare the two surfaces we drew the 1st of the following graphs; in the 2nd you see a different view of the same surface; the difference is more evident with the contour lines at the base. It is evident, as it must be, that the two formulae provide different estimates of the TOF_Error; therefore... see table 13.
Since we do not have the original data we cannot compare the two regression models directly.
We can only use the data from the graphs. Making the ANOVA for the models, we get the MS of the Residuals; their ratio is given in Table 13 (comparison of TOF_Error models: QEG versus Fausto Galetto). From Table 13 we see clearly that the two models are significantly different, at the 0.5% significance level. The following figure compares graphically the various RSMs, via the direct regression and the four regressions anti-transformed from the transformed data... Any sensible scholar has to conclude that the QEG equation TOF_Error = 84.6 + 0.0207·d + 0.0314·θ² + 0.000336·d·θ (7.6) is quite unsuitable to provide what is needed...
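The variance-ratio test behind that comparison can be sketched as follows (the MS and df values below are placeholders, NOT those of Table 13, which we cannot reproduce here):

```python
from scipy import stats

# Hypothetical residual mean squares and degrees of freedom of the two
# competing TOF_Error models (assumed values, for illustration only)
ms_qeg, df_qeg = 160.0, 30   # residual MS and df of the QEG model
ms_fg,  df_fg  = 40.0,  25   # residual MS and df of the alternative model

F = ms_qeg / ms_fg
p_value = stats.f.sf(F, df_qeg, df_fg)   # one-sided tail of the F distribution
print(f"F = {F:.2f}, one-sided p = {p_value:.4f}")
# Declare the models significantly different at the 0.5% level if p < 0.005
```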
Therefore the negative considerations [1,2] on the Open Access Publishers are valid also for other publishers: see several F. Galetto papers, e.g., "Comment on: 'New Practical Bayes Estimators for the 2-parameter Weibull Distribution', IEEE Transactions on Reliability, vol. 37, 1988", "(1989) Quality of methods for quality is important, EOQC Conference, Vienna", "(1990) Basic and managerial concerns on Taguchi...

Open Access Versus Non-open Access
We ended the previous section with the statement ′′Therefore the negative considerations [1,2] on the Open Access Publishers are valid also for other publishers: see several F. Galetto papers....′′.
We prove here that Non-Open Access Publishers have the same problems as the OAP: the cause is the incompetence of the authors and of the Peer Reviewers (Referees). All the F. Galetto papers have been proving that for many years (see those in § 7).
Here we consider only two of them; both are related to the Quality Engineering Group of the Turin Politecnico... I invited them many times to be scientific... without success! According to Prof. F. Franceschini [a member of QEG!], papers published in Quality magazines are, by definition, good papers: many times that is not true.
The papers considered were found by chance while looking for other papers for other ideas.
Let's stand back a bit and meditate, starting from a managerial point of view, using published documents (found in magazines used by managers and professionals, and suggested to students), and analysing them using the SPQR Principle.
We start with the paper "Learning curves and p-charts for a preliminary estimation of asymptotic performances of a manufacturing process" [Total Quality Management]. Considering all the samples, one finds the following Control Chart. The QEG member F. Franceschini, cheated by the data and by the graph, decided to interpolate a curve whose equation was p = a/t + c; the coefficients are estimated by the formulae shown in the excerpt. From those, any sensible researcher or scholar (who knows the Basics of Statistics) can compute the Confidence Intervals (CI) of the parameter estimates.
Unfortunately, the QEG member F. Franceschini did not compute them! IF he had computed the CIs (assuming a normal distribution), he would have found that the value 0 belongs to them: therefore, according to Franceschini's own formulae, the parameter estimates are not significantly different from 0!!! Hence, pretending that the formula p = a/t + c provides the asymptotic defectiveness is nonsense: the QEG member F. Franceschini did not realise that... Look at the figure with 40 more samples... which shows the QEG nonsense!!! The author thinks that the reader of this paper should agree that the QEG fellow was wrong! The referee of the paper could not find what students can find. If you look at the future data (given in the Montgomery book) you find different results… [see the previous figure 25] In the author's opinion it would be better, on the contrary, to put SPQR in action! ′′Quality of Quality Methods is important′′ (F. Galetto), as was appreciated by J. Juran at the Vienna EOQC Conference! Since Total Quality Management is surely a journal of a Non-Open Access Publisher, it is clear that the Quality of papers depends on the authors and not on the publishers. A sketch of the CI check follows.
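This is the check Franceschini should have made, sketched here on synthetic data (we do not have his samples): fit p = a/t + c by least squares and see whether 0 falls inside the confidence intervals of the estimates; with trendless data it typically does.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical fraction-nonconforming data over time (NOT the paper's samples):
# 30 samples with no real time trend
rng = np.random.default_rng(3)
t = np.arange(1, 31, dtype=float)
p_obs = np.clip(rng.normal(0.06, 0.02, t.size), 0.0, 1.0)

def model(t, a, c):
    # The curve interpolated in the Franceschini paper
    return a / t + c

(a_hat, c_hat), pcov = optimize.curve_fit(model, t, p_obs)
se_a, se_c = np.sqrt(np.diag(pcov))              # standard errors of the estimates
tcrit = stats.t.ppf(0.975, t.size - 2)           # 95% two-sided critical value
for name, est, se in [("a", a_hat, se_a), ("c", c_hat, se_c)]:
    lo, hi = est - tcrit * se, est + tcrit * se
    verdict = "NOT significant (0 inside CI)" if lo < 0 < hi else "significant"
    print(f"{name} = {est:.4f}, 95% CI [{lo:.4f}, {hi:.4f}] -> {verdict}")
```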
QEG members have been very active on Control Charts; they first invented the "Qualitometro I method (1998) … in order to evaluate and check on-line service quality" because "there is now a strong need for proper evaluation tools" [Franceschini, Romano, Rossetto, 1998]. Later (1999 and 2000) they presented and discussed "a new proposal for data processing that enhances elaboration capabilities of Qualitometro I. This new procedure, named Qualitometro II, is able to manage information given by customers on linguistic scales, without any arbitrary and artificial conversion of collected data. Collecting and treating data by means of the Qualitometro II eases this process, providing a method for performing elaboration closer to customers' fuzzy thoughts. … Qualitometro II method can be interpreted as a Group Decision Support Tool for service quality design/redesign … able to handle information expressed on linguistic scales, without any artificial numeric scalarization." Hence they introduced a "new instrument that can fulfil the formal properties of a linguistic scale and allow for the expression of the variety in the decisional logic of the evaluator. … The fuzzy operator that is used allows for this flexibility in the decision logic." (the underlining is due to F. Galetto). In 2005 QEG members invented the Qualitometro III method, in papers related to ′′Ordered Samples Control Charts for Ordinal Variables′′ (Quality and Reliability Engineering International)... They write: "The paper presents a new method for statistical process control when ordinal variables are involved. This is the case of a quality characteristic evaluated on an ordinal scale. The method allows a statistical analysis without exploiting an arbitrary numerical conversion of scale levels and without using the traditional sample synthesis operators (sample mean and variance). It consists of a different approach based on the use of a new sample scale, obtained by ordering the original variable sample space according to some specific 'dominance criteria' fixed on the basis of the monitored process characteristics. Samples are directly reported on the chart and no distributional shape is assumed for the population (universe) of evaluations". NOTICE (and meditate): it is very interesting to notice that some students of mine, L. Perri (2002), E. Mori (2006) and J. Baucino (2008), found the drawbacks of fuzzy sets in control charts for services and of other Control Charts [in books and papers]: using those rules for analysing the process behaviour, one finds that they provide at least 20% of out-of-control events for random data uniformly distributed on the scale points: such data must be "in control" by definition!!! (F. Galetto 2004, L. Perri 2002). It is clear that there is something wrong in the way fuzzy sets are used in control charts for services. A Monte Carlo sketch of such a check is given below.
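We do not reproduce the exact Qualitometro rules here; the sketch below only shows the kind of Monte Carlo check those students performed: generate ratings uniformly distributed on the ordinal scale (in-control by definition) and measure how often a chart rule signals. The rule used here is a deliberately simple, made-up one, only to illustrate the procedure.

```python
import numpy as np

rng = np.random.default_rng(11)
n_samples, sample_size, levels = 100_000, 5, 5

# In-control stream: ratings uniformly distributed on the ordinal scale 1..5
samples = rng.integers(1, levels + 1, size=(n_samples, sample_size))

# A deliberately simple (made-up) chart rule: signal when the sample median
# sits on an extreme level of the scale
medians = np.median(samples, axis=1)
signal = (medians == 1) | (medians == levels)

# For in-control data this rate should be small; an arbitrary rule can make
# it embarrassingly large
print(f"false out-of-control rate: {signal.mean():.1%}")
```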
There is no space to show how wrong the fuzzy ideas applied to Quality are [see References]. We only mention that those wrong ideas come from Yager (1981), "A new methodology for ordinal multiobjective decision based on fuzzy sets", where he invented a method to avoid the "tyranny of numbers" because "… forcing the decision maker to supply information with greater precision than he is capable of providing. This may lead to incorrect answers…".
S. El-Ferik and M. Ben-Daya wrote [115]: ′′The effect of ageing on the deterioration rate of most repairable systems cannot be ignored. Preventive maintenance (PM) is performed in the hope of restoring fully the performance of these systems. However, in most practical cases, PM activities will be only able to restore part of the performance. Bridging the gap between theory and practice in this area requires realistic modelling of the effect of PM activities on the failure characteristics of maintainable systems. Several sequential PM models have been developed for predetermined PM interval policies, but much less effort has been devoted to age-based ones. The purpose of this paper is to develop an age-based model for imperfect PM. The proposed model incorporates an adjustment factor in the effective age of the system. The system undergoes PM either at failure or after a predetermined time interval, whichever occurs first. After a certain number of such PMs, the system is replaced. The problem is to determine both the optimal number of PMs and the optimal PM schedule that minimize the total long-term expected cost rate. Model analysis relating to the existence and uniqueness of the optimal solutions is provided. Numerical examples are presented to study the sensitivity of the model to different cost function factors and to illustrate the use of the algorithm.′′ Fausto Galetto always tried to teach his students to be Scientific (using their own Intelligence) when dealing with Reliability... always warning them to be very careful in order not to be cheated by incompetent authors allowed to publish papers by incompetent referees. A rule he always told them was: ′′IF a new model does not provide the known results in known Scientific cases, that model is to be considered non-scientific′′ (Relativity Theory provides the Newtonian Theory when the speed of the frames is very low with respect to the light speed c).
The two authors compute wrongly the expected cycle length with their formula (5) [see [115]]. Any scholar, researcher or student can see that (5) is wrong by reading about the Reliability Integral Theory in the books [105][106][107][108]!!! It is clear that (5) is wrong because it provides an expected cycle length bigger than that of a system with complete renewal at every preventive maintenance!!! Many papers and books deal with preventive maintenance ONLY in the STEADY-STATE case (i.e. when the "planning horizon is infinite")! Only Fausto Galetto considered the THEORY of preventive maintenance when the "planning horizon is finite" [105][106][107][108]. The difference in the optimised preventive maintenance interval can be very important, as one can see in the book [108]; there anybody can find the Theory needed to understand the errors, as given in the document [116] Galetto, F., 2017, "Imperfect-age-maintenance_WRONG paper found in Academia.edu", published on Academia.edu.
Any scholar, researcher or student must follow the concepts in figures 26 and 27, IF they want to act with Quality... Any scholar, researcher or student must consider that, IF they want to act with Quality, the Knowledge-Making process and the Knowledge itself need to have Quality, got through Quality Tools and Methods, as depicted in figure 28 (Quality Tools and Quality Methods to avoid Disquality). Unfortunately too many researchers think that citations of papers and books are an index of the Quality of the methods given in those papers and books: according to the author this is a very BAD attitude. On the contrary, they should use the correct (Scientific) way to analyse the data and make decisions about the methods suggested.
Compare the F. Galetto findings with what is found on the web, where Open Access Journals are criticized as "means for tricking people" (asking fees for publishing papers); for example, about Science Publishing Group, they say [1] (see the quotations in the Introduction).

Conclusion (Using SPQR)
We showed that, using Logic, Science and the SPQR Principle, we can understand whether a ′′proposed method′′ is to be used or must be refused.
The author thinks that this is very important for any student, researcher and scholar, especially if they look at figure 1.
While attending (as an ′′intelligent pupil′′) a Post-Graduate University course on DOE (2001), provided by ′′Montgomery fans′′ (someone from QEG was teaching there), Fausto Galetto had the opportunity to experience their incapability of teaching the matter ′′scientifically′′; at that time Fausto Galetto invented the Disquality Vicious Circle ′′Presumption-Ignorance-Presumption-Ignorance′′, because the lecturers were unable to teach ′′scientifically′′... (Figure 27, published in 2008).
He thinks that the readers (Professors, Managers, Researchers, Scholars) must stay with STEM (Science, Technology, Engineering and Mathematics), i.e. LOGIC, to prevent and avoid DISquality! (see the Quality Tetralogy) IF scholars want to make Quality (of papers, of books, of teaching) they must remember Figure 26 (FAUSTA GRATIA for Quality, in order to avoid Disquality) and Figure 27 (the Disquality Vicious Circle).
Since "Quality of Methods for Quality is important" [50] and there are methods misleading (e.g. Taguchi  F2 Variation is in everything and everywhere, all the time. From F1 any scholar must not hide the information about the truth present in the data... From F2 we derive that «"variation" is NOT the enemy of Quality», as several "intelligent (are they ????)" people say! Variation is in every phenomenon and is important: if life was developing for millions of years that was merit of the VARIATION! The sons of relatives have more problems than the sons of NON_relatives… Biodiversity is the foundation of ecosystems to which human well-being is intimately linked.
Every "author's opinion" is based on this long experience in the Quality Field: they are not only opinions, they are hard facts. See the figures and the papers: Fausto Galetto during the "students' defence of their final thesis" (to get their degree in Engineering) used to open the written thesis at a "random" page and to ask the future graduate what he meant with some statements found in there. 90%-98% of the students did not know how to provide any answer to the questions: moreover, 50%-60% said "I copied it from the web!" That was not the biggest problem: it always astonished me the fact that the (Professor) Referee of the thesis did not know the matter/answer himself! These are hard facts, not opinion; the same were for Deming and Gell-Mann…, and Einstein… SPQR was used by Galileo Galilei and by the great scientist Isaac Newton when he said "If I have seen farther than others, it is because I have stood on the shoulders of giants"; the process of Science is such that the discoveries of one people generation serve for the next one, by knowledge accumulation. This is true for any discipline (e.g. Logic, Mathematics, Physics, Probability, Statistics, Medicine, Economics, Reliability…): any building needs sound foundations [fundamental principles F1 and F2].
When using other people's words (like those of Newton, Galilei, Einstein, Deming, Gell-Mann…), Fausto Galetto tries to show that very great scholars have been providing correct hints to the readers, in order to help them increase their knowledge… The Knowledge-Making process and the Knowledge itself must have Quality, got through Quality Tools and Methods; this is depicted in figure 28 (Quality Tools and Quality Methods to avoid Disquality). Figures 26, 27, 28 were completely disregarded by QEG when, building on an idea by Kosmulski, who (2011) proposed to classify a paper as "successful" when it receives more citations than those it makes, they decided (in their paper "An informetric model for the success-index", which appeared in Scientometrics, 2012) to propose classifying a publication as "successful" when it receives more citations than a specific comparison term (CT). In the intention of the QEG authors, CT should be a suitable estimate of the number of citations that a publication, in a certain scientific context and period of time, should potentially achieve. According to this definition, the success-index is the number of successful papers among a group of publications examined, such as those associated with a scientist or a journal. QEG gave particular emphasis to a theoretical sensitivity analysis of the success-index (s-index).
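For completeness, the s-index definition just quoted is simple enough to state in a few lines of code (a sketch of the definition as given above, with made-up numbers):

```python
from typing import Sequence

def success_index(citations: Sequence[int], ct: Sequence[float]) -> int:
    """Number of 'successful' papers: those whose citations exceed their
    comparison term CT (one CT per paper, estimated from context and period)."""
    return sum(c > threshold for c, threshold in zip(citations, ct))

# Example: 5 papers, each with its own comparison term
print(success_index([12, 3, 40, 7, 0], [10.5, 8.0, 20.0, 7.0, 2.0]))  # -> 2
```

Note how entirely the result depends on the choice of the CTs: the index inherits all the arbitrariness of that estimate.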
The F. Galetto paper [97] shows the many drawbacks of this QEG attitude [again QEG!, as we saw before]. This shows that the Open Access Publishers are not the problem: the problems are generated by incompetent authors, even when they go to "good (so-called!) publishers"...
Other cases are found in Research Gate documents. Any sensible Scholar must take into account that the Scientific Attitude provides good results, using the SPQR Principle.
Doing that, any serious scholar can see the drawbacks both of Open Access Publishers (OAP) and of Non-Open Access Publishers (NOAP): the bad quality of the papers published does not depend on the fees asked by the OAP, but on the very low quality of the authors and of the Peer Reviewers; the same happens for ′′well-reputed magazines and journals′′ (NOAP).
We saw that several NOAP journals published papers of the Quality Engineering Group (QEG, comprising several professors who suggest the Montgomery books to students); therefore it is not a surprise that the cases we analysed here have various problems [11,12].