Education Journal
Volume 4, Issue 6, November 2015, Pages: 343-351

Multicriteria Decision Methods as an Alternative for Evaluating the UACh Research System

Sandra Santiago-Rodríguez1, José Luis Romo-Lozano2,*, Marcos Portillo-Vázquez1, Ma. Amparo M. Borja-de la Rosa2

1División de Ciencias Económico-Administrativas, Universidad Autónoma Chapingo, Texcoco, México

2División de Ciencias Forestales, Universidad Autónoma Chapingo, Texcoco, México

Email address:

(J. L. Romo-Lozano)

To cite this article:

Sandra Santiago-Rodríguez, José Luis Romo-Lozano, Marcos Portillo-Vázquez, Ma. Amparo M. Borja-de la Rosa. Multicriteria Decision Methods as an Alternative for Evaluating the UACh Research System. Education Journal. Vol. 4, No. 6, 2015, pp. 343-351. doi: 10.11648/j.edu.20150406.14


Abstract: Research is a core university activity that contributes to the formation of critical thinking by students and teachers and promotes knowledge and scientific development that may help build better societies. The good performance of a university research system depends, among other things, on the ability to properly distribute the limited financial resources allocated to this activity. A common problem in grading the activities usually considered in research is integrating a long list of criteria and sub-criteria. The aim of this study was to determine how financial resources are distributed among all the research centers and institutes at the Universidad Autónoma Chapingo (UACh). Three methods were used for weighting criteria: simple ranking, point distribution and the analytic hierarchy process. The aggregation of the values was carried out using the TOPSIS and weighted sum methods, and the resulting distributions were compared to the traditional way of distributing resources. It was concluded that although the differences were not significant, the TOPSIS method provides a more reliable allocation.

Keywords: Analytic Hierarchy Process, TOPSIS, Budget Allocation


1. Introduction

Research is one of the core activities in the university education process, which, if well established, can make a major contribution to the development of critical thinking skills in students and teachers, promote knowledge and scientific development, and generate behaviors that can help build a better country. Success in research is not only a matter for universities, but for society in general (Kao and Pao, 2009: 261).

Failure to strengthen university research condemns educational institutions to being mere users of the knowledge generated by others, and to merely transferring the knowledge and experience gained in other realities and societies, thereby turning down the opportunity to provide knowledge and solutions rooted in one's own reality (Loret de Mola, 2008). In Mexico, as in most developing economies, scientific research and knowledge generation have rightly been placed as priorities on the agenda of higher-education institutions.

Given its recognized importance and the limited nature of financial resources allocated to it, research has been incorporated into various assessment processes that seek to better distribute financial resources and promote efforts towards achieving priority goals.

This study aimed to determine how financial resources are distributed among all the research centers and institutes at the Universidad Autónoma Chapingo (UACh). Three methods were used for weighting criteria: simple ranking, point distribution and the analytic hierarchy process. In the aggregation phase two methods were used: TOPSIS and the weighted sum method. The study concluded that although the differences were not significant, the TOPSIS method provides a more reliable allocation.

2. Evaluation of University Research

Different approaches have been used to evaluate research systems at universities throughout the world. In Colombia, for example, the government has developed various methods and indices, including the model of management indicators, which is based on viewing the university as an organization or management unit that receives inputs, processes them and delivers products. The results were aimed at stimulating the improvement of the university system of the Republic of Colombia (Ministerio de Educación Nacional, 2013). This model of management indicators is based on analyzing the degree of optimization of inputs received by each university. The indicators of this model are: the capacity index, the index of training results, the research index, the extension indicator and the welfare indicator.

In Spain, there has been an explosion in the number of new bodies and mechanisms for evaluating research or the quality of universities, which has led to the creation of: the Agencia Nacional de Evaluación de la Calidad y Acreditación (ANECA); the Agència de Gestió d'Ajuts Universitaris i de Recerca (AGAUR); the Agència per a la Qualitat del Sistema Universitari de Catalunya (AQU) in Catalonia; the Agencia de Calidad, Acreditación y Prospectiva de las Universidades de Madrid; the Agencia Canaria de Evaluación de la Calidad y Acreditación Universitaria (ACECAU); and similar entities for evaluating and funding research activities in other Autonomous Communities (Sanz, 2005: 2). Within this trend, the research institutes of the Universidad de Zaragoza were evaluated. That study used a meta-evaluation consisting of online surveys addressed to the directors of the institutes as well as to the assessors themselves. The evaluation made it possible to comprehensively assess, for the first time, all the evaluative processes carried out in Aragon on University Research Institutes (IUIs) (Agencia de Calidad y Prospectiva Universitaria, 2014: 5).

The evaluation of research in Argentine universities, with their contexts, cultures and limitations, is another investigation conducted from the standpoint of budget allocation. It holds that wherever economic resources exist there will also be different power groups, which triggers conflicts. The article therefore proposes managing and administering the budget efficiently in order to contribute to building a knowledge-based society. This required evaluating the researchers, by measuring their frequency of publication in indexed journals, and evaluating the projects in terms of their originality and quality (Lattuada, 2010: 158-159).

In evaluating research in Mexico, a distinction is made between two main frameworks: the framework for evaluating graduate programs and the framework for evaluations conducted in the National System of Researchers (SNI). In 1991, Mexico’s National Science and Technology Council (CONACyT) established the Register of Graduate Program Excellence, an initiative that graded the quality of multiple master’s and doctoral programs that had proliferated in previous decades and used it as a mechanism to provide appropriate support to students. Assessments were directed towards research-oriented programs (Canales, 2011: 38). For its part, SNI, in its nearly 30 years of existence, has clearly played an important role in promoting research in Mexico, encouraging, evaluating and grading the performance of affiliated researchers.

3. Basic Concepts in Decision-Making

The great expansion that Multicriteria Decision Making (MCDM) methods have undergone since the 1960s has resulted in a large number of articles and theoretical books (Roy, 2005: 4). In recent years, these methods have been further refined because of the great importance and numerous applications of MCDM in economics. This is one of the most important techniques mentioned in the literature for analyzing decision-making when dealing with multiple goals and a number of conflicting decision criteria.

Due to important developments in the field of MCDM, different authors saw the need to classify these methods in order to be able to appreciate the most salient features thereof in decision-making. The best-known classifications were made according to the characteristics of the information, with one of the most prominent being that of compensatory and non-compensatory information. As a result, different approaches can be found in: (Saaty, 1980), (Zopounidis, et al., 2010: 17), (Figueira, et al., 2005: 5), (Bao, et al., 2012: 109), and (Triantaphyllou, 2000: 3).

Decision-making is a process carried out every day by human beings, and it is also one of the activities of humans that best reflects their level of development and freedom (Moreno & Escobar, 2000: 97). Decision-making inspired many thinkers and great philosophers of the past, such as Aristotle, Plato and Thomas Aquinas to name a few, to reflect on and analyze the ability of humans to decide, and in some manner they claimed it to be the ability that distinguishes humans from animals (Figueira, et al., 2005: xxi). Among the most used concepts in the application of MCDM are:

The decision-making unit refers to the individual or group of individuals who possess qualities of intellect and who are thus assigned the responsibility to make the decision (Romero, 1996: 19).

The analyst is the figure who models the specific situation and who eventually makes recommendations regarding the final choice. The analyst does not express personal opinions, but is limited to recognizing those of the decision-maker and treating them objectively (García, et al., 2009: 11).

The goals represent the aim of improving the attributes considered. The improvement can be interpreted as 'the more of the attribute, the better' or 'the less of the attribute, the better.' The first case corresponds to a maximization process and the second to a minimization process (Romero, 1996: 20).

The decision criteria are the points of view or parameters used to express the preferences of the decision-maker. These are represented by the row vector $C = (C_1, C_2, \dots, C_n)$.

The concept of alternative corresponds to the particular case in which the decision-making unit is in the quandary of having to choose one action from among many. The alternatives represent the different action options available to the decision-maker (Triantaphyllou, 2000: 1). These are represented by the vector $A = (A_1, A_2, \dots, A_m)$.

The attributes refer to the values that the central decision-maker is faced with in a given problem involving multiple choices. One of the conditions of the attributes is that they can be measured, and as a result they can be expressed as a function of the corresponding decision variables (Romero, 1996: 19).

3.1. Weighting Methods

The simple ranking method is the simplest way of weighting variables. It consists of the decision-maker ranking the criteria in order of importance, based on his/her own opinion or personal experience; i.e., given a vector of criteria, the criterion assigned number one is the most important and the nth is the least important (Aznar, et al., 2012: 67). This method also makes it possible to detect inconsistencies in the responses provided by the experts, by comparison with other weighting methods.

The point distribution method involves giving a group of experts the set of criteria to weight. They are then asked to distribute 100 points among the set of criteria considered, and to perform the same distribution among the sub-criteria. The expert acts based on his/her experience and subjective judgments about the importance of each criterion and sub-criterion. In this method the intensity of preferences is measured by the scores awarded on the basis of criteria or sub-criteria positions (OECD, 2008), i.e. the most important criterion is the one that gets the most points, and the least important is the one that gets the fewest points. Because of the simplicity of this method, the weights are obtained directly.
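As a rough illustration of these two weighting schemes, the following Python sketch converts one expert's responses into normalized weights. The data are hypothetical, and the rank-sum formula used for the simple ranking is an assumption, since the text does not specify how ranks are turned into numerical weights; the point-distribution weights follow directly from the 100-point allocation.

```python
import numpy as np

# Hypothetical responses of one expert for three criteria.
ranks = np.array([3, 2, 1])      # simple ranking: 1 = most important
points = np.array([12, 18, 70])  # point distribution over 100 points

# Rank-sum weights (an assumed convention; the paper does not state one):
# a criterion ranked r among n criteria gets weight proportional to n - r + 1.
n = len(ranks)
rank_weights = (n - ranks + 1) / (n - ranks + 1).sum()

# Point-distribution weights are obtained directly by normalizing the points.
point_weights = points / points.sum()

print(rank_weights)   # [0.167 0.333 0.5  ]
print(point_weights)  # [0.12  0.18  0.7  ]
```

Comparing the two vectors offers the quick consistency check mentioned above: a criterion ranked first should also receive the most points.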

The Analytic Hierarchy Process weighting method was proposed by Professor Thomas L. Saaty in the 1970s (Saaty, 1980). This method is based on the idea that the complexity inherent in a decision-making problem with multiple criteria can be resolved by ranking the problems posed, which means that this method of multicriteria decision making is characterized by breaking down and organizing the problem visually in a hierarchical structure.

In the literature one can find various studies of MCDM applied to the evaluation of education (Joo & Alvarado, 2013), and of competitiveness and efficiency of industrial sectors and companies (Berumen & Llamazares, 2007). Methodological manuals have been developed for political-social programs and projects (Pacheco & Contreras, 2008) and natural resource management and decision-making (Mendoza & Martins, 2006). There have also been studies on a simulation-based budget determination procedure for public building construction projects (Yu, et al., 2008) and safety risk assessment using AHP during planning and budgeting of construction projects (Aminbakhsh, et al., 2013).

AHP generates a weight for each evaluation criterion based on the decision-maker's pairwise comparisons of the criteria: the higher the weight, the more important the corresponding criterion. The AHP then combines the criteria weights with the scores of the options, awarding an overall score to each option; the overall score of a particular option is a weighted sum of its scores on each criterion. The preference among elements is determined from judgments of the degree of importance of one element relative to another. To make these comparisons, the Saaty number scale (Table 1) is used to indicate to what degree one element dominates or is more important than another (Zopounidis, et al., 2010: 95).

Table 1. Fundamental scale of Saaty absolute numbers.

Scale | Degree of Importance | Definition
1 | Equal importance | The two activities contribute equally to the goal
3 | Moderate importance | Experience and judgment slightly favor one activity over another
5 | Strong importance | Experience and judgment strongly favor one activity over another
7 | Very strong importance | One activity is strongly favored over another; its dominance is demonstrated in practice
9 | Extreme importance | The evidence favoring one activity over another is of the greatest possible order
2, 4, 6, 8 | Intermediate values between two judgments | Used to express preferences that fall between the values of the above scale
Reciprocals | Reciprocal values | If activity i is assigned one of the above numbers when compared with activity j, then j has the reciprocal value when compared with i

The Analytic Hierarchy Process is founded on four axioms (Moreno, 2002: 32), (Papadopoulos, 2011: 15): 1) reciprocal comparison: the decision-maker must be able to make comparisons and establish the strength of his/her preferences, and the intensity of these preferences must satisfy the reciprocal condition in the evaluation matrix: if element i is preferred x times over element j (a_ij = x), then j is preferred 1/x times over i (a_ji = 1/x); 2) homogeneity: preferences are represented by means of a limited scale, and the elements to be compared are of the same order, magnitude or hierarchical level; 3) independence: when preferences are expressed, it is assumed that the criteria are independent of the properties of the alternatives; and 4) expectations: when a decision is made, the hierarchical structure is assumed to be complete, that is, all alternatives and criteria considered relevant to the resolution of the problem are represented in the hierarchy.
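A minimal Python sketch of the AHP weighting step is given below. The study itself used Expert Choice, so this is only an assumed implementation: it derives the priority vector from the principal eigenvector of a reciprocal comparison matrix and computes Saaty's consistency ratio. The judgment matrix is hypothetical, chosen so that the resulting weights land near the AHP criteria weights reported later in Table 2.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix for the three criteria
# (Institutional Evaluation, Researcher Evaluation, Productivity), built
# from the Saaty scale; these are not the experts' actual judgments.
A = np.array([[1.0, 1/2, 1/6],
              [2.0, 1.0, 1/4],
              [6.0, 4.0, 1.0]])

# Priority vector: principal eigenvector, normalized with the sum method.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # roughly [0.11, 0.19, 0.70]

# Saaty consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index for n = 3, 4, 5
CR = (lambda_max - n) / (n - 1) / RI
print(w, CR)                         # CR below 0.10 is considered permissible
```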

3.2. Aggregation Methods

The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is one of the classical decision-making methods; it was developed in 1981 by Hwang and Yoon (Bao, et al., 2012: 109). The method is based on the concept that the best alternative should have the shortest distance from the ideal positive solution (IPS) and the longest from the ideal negative solution (INS). TOPSIS has mainly been applied to: efficiency maximization in public services (Ercole, et al., 2007); logistics and programming for purchasing multiple products from multiple suppliers (Jolai, et al., 2011); selection of industrial milling machines (Real & Maldonado, 2011); and comparative performance evaluation of organizations (Bai, et al., 2014). TOPSIS has proven to be a very powerful method in computational intelligence systems and industrial engineering (Kahraman, 2012).

The main procedure of the classical TOPSIS method can be described in the following seven steps (Triantaphyllou, 2000: 18).

Identification of the decision matrix

Let $A = \{A_i,\ i = 1, 2, \dots, m\}$ be the set of alternatives and $C = \{C_j,\ j = 1, 2, \dots, n\}$ the set of criteria. The criteria have associated weights, represented by $w = \{w_j,\ j = 1, 2, \dots, n\}$. For each alternative and criterion, the decision-making unit must be able to assign a numerical value $x_{ij}$ $(i = 1, \dots, m;\ j = 1, \dots, n)$ that expresses a judgment of the alternative $A_i$ with respect to the criterion $C_j$, as shown in Figure 1.

Figure 1. Decision matrix.

Normalization of the decision matrix

This step transforms the various dimensions of the attributes into dimensionless values, allowing comparisons among criteria. The normalization used by TOPSIS is calculated with expression (1):

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^{2}}} \qquad (1)$$

Construction of the weighted decision matrix

The weights $w_j$ multiply the corresponding columns of the normalized matrix, generating the weighted matrix with elements

$$v_{ij} = w_j\, r_{ij}, \qquad i = 1, \dots, m;\ j = 1, \dots, n$$

Identification of the ideal positive solution (IPS) and ideal negative solution (INS)

These solutions have the form $A^{+} = \{v_1^{+}, v_2^{+}, \dots, v_n^{+}\}$ for the IPS and $A^{-} = \{v_1^{-}, v_2^{-}, \dots, v_n^{-}\}$ for the INS,

where $v_j^{+} = \max_i v_{ij}$ (benefit criteria) or $\min_i v_{ij}$ (cost criteria), and $v_j^{-} = \min_i v_{ij}$ (benefit criteria) or $\max_i v_{ij}$ (cost criteria).

These alternatives are fictitious, but it is reasonable to assume that for benefit criteria the decision-maker wants the maximum value among all the alternatives, while for cost criteria the decision-making unit prefers the alternative with the minimum value.

Calculation of the distances separating IPS and INS

The distances to the IPS and INS are estimated with expressions (2) and (3):

$$S_i^{+} = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^{+}\right)^{2}}, \qquad i = 1, \dots, m \qquad (2)$$

$$S_i^{-} = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^{-}\right)^{2}}, \qquad i = 1, \dots, m \qquad (3)$$

both of which are Euclidean distances of the general form (4):

$$d(u, v) = \sqrt{\sum_{j=1}^{n} \left(u_j - v_j\right)^{2}} \qquad (4)$$

Estimation of the relative closeness to the ideal solution

The relative closeness of each alternative to the ideal solution, called the proximity index, is estimated from the results of the previous step using expression (5):

$$C_i^{*} = \frac{S_i^{-}}{S_i^{+} + S_i^{-}}, \qquad 0 \le C_i^{*} \le 1 \qquad (5)$$

Prioritization of alternatives

According to the proximity index $C_i^{*}$, the set of alternatives can be ranked from the most preferred to the least preferred among the feasible solutions. The best alternative is therefore the one with the shortest distance to the ideal positive solution, i.e., the one with the largest value of $C_i^{*}$.
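The seven steps above can be condensed into a short Python sketch. This is a minimal illustration with hypothetical data, not the decision matrix issued by the DGIP; numpy is assumed, and all criteria are treated as benefit criteria.

```python
import numpy as np

def topsis(X, w, benefit):
    """Classical TOPSIS sketch. X: m x n decision matrix; w: weights summing
    to 1; benefit: boolean array, True where more of the criterion is better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))                    # step 2: eq. (1)
    V = R * w                                                # step 3: weighted matrix
    ips = np.where(benefit, V.max(axis=0), V.min(axis=0))    # step 4: IPS
    ins = np.where(benefit, V.min(axis=0), V.max(axis=0))    # step 4: INS
    s_plus = np.sqrt(((V - ips) ** 2).sum(axis=1))           # step 5: eq. (2)
    s_minus = np.sqrt(((V - ins) ** 2).sum(axis=1))          # step 5: eq. (3)
    return s_minus / (s_plus + s_minus)                      # step 6: eq. (5)

# Hypothetical data: four alternatives scored on three benefit criteria,
# weighted with the AHP criteria weights reported in Table 2.
X = np.array([[7., 9., 9.],
              [8., 7., 8.],
              [9., 6., 8.],
              [6., 7., 8.]])
w = np.array([0.12, 0.18, 0.70])
c_star = topsis(X, w, np.array([True, True, True]))
print(np.argsort(-c_star))           # step 7: ranking of the alternatives
print(100 * c_star / c_star.sum())   # sum-normalized shares, as in Table 3
```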

The weighted sum method (WSM) is one of the simplest and most applied methods cited in the literature (Triantaphyllou, 2000: 6), and it is also the most widely used in single-dimension problems. If there are m alternatives and n criteria, the best alternative (in the case of maximization) is the one that satisfies expression (6):

$$A_{WSM}^{*} = \max_{i} \sum_{j=1}^{n} x_{ij} w_j, \qquad i = 1, \dots, m \qquad (6)$$

where $A_{WSM}^{*}$ corresponds to the score of the best alternative under the WSM; $n$ is the number of decision criteria; $x_{ij}$ is the value of the i-th alternative in terms of the j-th criterion; and $w_j$ is the weight or degree of importance of the j-th criterion.

The main assumption governing this method is additive utility: the total value of each alternative is equal to the sum of the products $x_{ij} w_j$. In single-dimension cases, all units are the same (e.g., Mexican pesos, meters or seconds), which is why the WSM can be used without complications. Several important studies have used this method, especially in evaluating environmental programs (Morillas, et al., 1997) and in agrarian valuation (Aznar & Guijarro, 2012). The method can generate Pareto-optimal solutions (Mela, et al., 2012) in decision-making in any field of study (Triantaphyllou, 2000), among many other applications.
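Under the same hypothetical data as the TOPSIS sketch above, the WSM reduces to one matrix-vector product; the percentage shares are obtained, as in the paper, by sum-normalizing the scores.

```python
import numpy as np

# Same hypothetical decision matrix and AHP weights as in the TOPSIS sketch.
X = np.array([[7., 9., 9.],
              [8., 7., 8.],
              [9., 6., 8.],
              [6., 7., 8.]])
w = np.array([0.12, 0.18, 0.70])

scores = X @ w                      # eq. (6): weighted sum per alternative
print(scores.argmax())              # index of the best alternative
print(100 * scores / scores.sum())  # percentage distribution, as in Table 3
```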

4. Methodology

The methodology developed in this research consisted of five stages: 1) Survey design; 2) Interview with decision-making experts; 3) Estimation of the criteria and sub-criteria weights using the simple ranking, point distribution and Analytic Hierarchy Process (AHP) methods; 4) Review of inconsistencies; and 5) Aggregation by TOPSIS and WSM procedures.

The information used to obtain the weights was acquired through interviews based on a designed survey. The content of the interview drew on the standards for evaluating productivity and allocating budget to the research centers and institutes issued by the Research and Graduate Studies Office (DGIP) of the Universidad Autónoma Chapingo, and on the requirements of the different methods applied. The instrument was organized around three criteria: Institutional Assessment, Researcher Evaluation and Productivity of the Center or Institute, which were subdivided into four, two and five sub-criteria, respectively, as shown in Figure 2, which presents the hierarchy of the criteria and sub-criteria used.

4.1. Institutional Assessment Subcriteria

Structure.- This classifies the two research entities, which are Centers or Institutes.

Productivity trend.- This involves checking the productivity trend each year: a positive trend indicates an improvement in research quality and quantity, while a negative trend shows a decrease in research at the center or institute.

Holding Seminars.- This refers to holding some scientific event in the year being evaluated, and publishing the results of the event.

Institutional compliance.- This considers those tasks that involve submitting reports and documentation relevant to the evaluation of the Center or Institute.

Figure 2. Hierarchical structure of criteria and sub-criteria used for budget allocation.

Researcher Evaluation Subcriteria.- This category grades the sub-criteria of academic training and SNI membership, in addition to requiring that researchers be full-time members of the Center or Institute at the University and have a project registered with the DGIP.

4.2. Productivity Subcriteria of the Center or Institute

Productivity is defined as the quantitative record of activities related to research carried out by members of the Centers or Institutes and which together comprise the overall productivity of each Center or Institute. In other words, this category reflects the results obtained by researchers through various recognized "research products." It is important to note that due to the characteristics of the decision matrix, it was only possible to find the values recorded in the productivity sub-criteria in an aggregate manner, so that in estimates made at the aggregation stage the value of the criterion that groups them, i.e., the productivity criterion of the center or institute, was used.

To gather the information, a group of experts with the following characteristics was consulted: 1) they have been or are coordinators of research centers and institutes; 2) they have extensive knowledge of research; 3) they hold a Ph.D.; and 4) they have experience in university-level research, among other qualifications.

Once the information was obtained, it was examined for possible inconsistencies in the weighting process, analyzing the responses given under the simple ranking and point distribution methods. The analysis consisted of comparing the order of importance declared in the simple ranking with the point distribution established. Additionally, applying these two methods gave some context to the experts interviewed and helped prepare them for the application of the AHP method, which requires somewhat more effort in the pairwise comparisons.

Excel software was used to estimate the weights of the simple ranking and point distribution methods. The weights derived from the AHP method were estimated with Expert Choice version 11.5.1860 software; the individual weights obtained for the criteria and sub-criteria were aggregated using the geometric mean, and the aggregated eigenvector was normalized with the sum method. The final AHP weights were used in the aggregation by the TOPSIS and WSM procedures.
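The aggregation just described can be sketched in a few lines of Python. The expert weight vectors below are hypothetical, and the final sub-criterion weights illustrate the product structure behind the "Final Weight" columns of Table 2 (criterion weight times local sub-criterion weight); this is an assumed reconstruction, since the study performed these steps in Expert Choice and Excel.

```python
import numpy as np

# Hypothetical criterion weights elicited from three experts (rows).
expert_w = np.array([[0.10, 0.20, 0.70],
                     [0.15, 0.15, 0.70],
                     [0.12, 0.18, 0.70]])

# Group weights: column-wise geometric mean, normalized with the sum method.
gm = expert_w.prod(axis=0) ** (1.0 / expert_w.shape[0])
group_w = gm / gm.sum()                   # roughly [0.12, 0.18, 0.70]

# Final sub-criterion weight = criterion weight * local sub-criterion weight,
# mirroring Table 2 (local weights here are hypothetical and sum to 1).
local_sub = np.array([0.42, 0.25, 0.25, 0.08])   # Institutional Evaluation
final_sub = group_w[0] * local_sub
print(group_w, final_sub)
```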

5. Analysis of Results

In the application of the three weighting methods (Table 2), what stands out is the similarity between the weights obtained for the criteria by the point distribution method and by AHP, both of which differ substantially from the results of the simple ranking method. In this context it can be accepted that the weights from simple ranking are less accurate, because the method is limited to establishing an order of importance of the criteria; it must also be recognized, however, that its use put the evaluators into context and spared them from excessive inconsistencies when weighting with the other two methods. That is, if the weighting expert places a particular criterion first in importance by simple ranking, it is to be expected that when distributing points he/she will make that importance concrete by allocating more points to the criteria rated as more important. The same can be said of the comparison between the point distribution method and AHP. Even though the differences between them are small, in this case the AHP result can be relied on more: the pairwise comparison using the Saaty scale makes it possible to specify the assigned values more precisely, especially with a very manageable number of criteria (3). Similar arguments can be made for the weighting of the sub-criteria. For these reasons, the criteria and sub-criteria weights resulting from AHP were used when applying the aggregation methods.

Table 2. Final weights of the simple ranking, point distribution and AHP methods.

Criteria | Sub-criteria | Simple Ranking: Criterion Weight | Simple Ranking: Final Weight | Point Distribution: Criterion Weight | Point Distribution: Final Weight | AHP: Criterion Weight | AHP: Final Weight
Institutional Evaluation | Productivity Trend | 0.23 | 0.08 | 0.11 | 0.06 | 0.12 | 0.05
Institutional Evaluation | Institutional Compliance | | 0.05 | | 0.02 | | 0.03
Institutional Evaluation | Holding Seminars | | 0.05 | | 0.02 | | 0.03
Institutional Evaluation | Structure | | 0.04 | | 0.02 | | 0.01
Researcher Evaluation | SNI | 0.30 | 0.19 | 0.19 | 0.14 | 0.18 | 0.15
Researcher Evaluation | Academic Training | | 0.11 | | 0.05 | | 0.03
Productivity of the Center or Institute | (graded as a whole) | 0.47 | 0.47 | 0.70 | 0.70 | 0.70 | 0.70
Sum | | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00

The inconsistency values found in the AHP weighting process ranged from 5 to 7%. The literature recommends an inconsistency value of less than 10%, which is considered permissible; a value greater than 10% but less than 20% is considered a major problem. Perfect consistency is difficult to obtain in reality; it suffices that there be enough consistency to maintain coherence among the judgments expressed (Ishizaka & Labib, 2009: 10), (Álvarez, et al., 2010: 593).

In the aggregation phase of the multicriteria process, calculations for both the TOPSIS method and the WSM were made using the decision matrix issued by the university's research office. From the decision matrix, the normalized matrix was constructed and all the steps laid down by the method were followed. C*, which represents the relative closeness to the ideal solution, was calculated by applying equation (5), and the results were then normalized with the sum method in order to estimate the percentages. Finally, the values were ordered from highest to lowest (Table 3). Similarly, the results of the weighted sum method were generated, normalized and expressed as percentages, as shown in Table 3. These results clearly show that both methods produce the same order with minor percentage differences, the greatest difference (1.2%) being for the Horticulture Institute and the second greatest (0.53%) for IISEHMER; for the other institutes the difference is equal to or less than 0.18%.

Table 3. Results of the TOPSIS and Weighted Sum approaches.

Spanish Acronym | Name of Center or Institute | TOPSIS: C* | TOPSIS: Normalized C* (%) | WSM: Weighted Sum | WSM: Normalized Sum (%)
IH | Horticulture Institute | 0.984 | 20.68 | 5829.83 | 19.46
IISEHMER | Socio-environmental, Educational and Humanistic Research Institute for Rural Areas | 0.618 | 12.98 | 3730.07 | 12.45
CIRENAM | Research Center for Natural Resources and the Environment | 0.303 | 6.37 | 1856.58 | 6.19
IIPCA | Institute for Research and Graduate Studies in Animal Science | 0.296 | 6.22 | 1851.22 | 6.17
IDERS | Institute for Rural Development and Sustainability | 0.259 | 5.44 | 1634.76 | 5.45
IIBIODEZA | Institute for Innovation in Biosystems and Sustainable Development in Arid Zones | 0.256 | 5.37 | 1608.74 | 5.37
IIAUIA | Institute of Agricultural Engineering and Integrated Water Use | 0.236 | 4.95 | 1507.49 | 5.03
CISECA | Center for Research and Service in Agricultural Economics and Trade | 0.235 | 4.94 | 1500.64 | 5.00
CIEMA | Center for Research in Economics and Applied Mathematics | 0.213 | 4.47 | 1363.56 | 4.55
CICUBA | Research Center for Basic Crops | 0.202 | 4.24 | 1307.41 | 4.36
IDEA | Institute of Food | 0.192 | 4.04 | 1202.56 | 4.01
CIAAO | Center for Research in Agroecology and Organic Agriculture | 0.177 | 3.71 | 1160.87 | 3.87
CIETBIO | Ethnobiology and Biodiversity Research Center | 0.160 | 3.37 | 1054.76 | 3.52
IIARDER | Research Institute for Regional Agriculture and Rural Development | 0.149 | 3.12 | 932.98 | 3.11
CENIDERCAFE | Research Center for Development of Coffee-growing Regions | 0.137 | 2.88 | 915.86 | 3.05
IIPPF | Institute for Research and Graduate Studies in Plant Protection | 0.112 | 2.35 | 748.75 | 2.49
CIDEAMS | Center for Research, Development and Education in Multifunctional Agriculture | 0.089 | 1.87 | 622.54 | 2.07
CISEF | Research Center for Sustainability of Forest Ecosystems | 0.074 | 1.55 | 511.50 | 1.70
IIPDICEA | Institute for Research and Graduate Studies in the Administrative Economic Sciences Division | 0.065 | 1.36 | 499.64 | 1.66
CIBED | Center for Research in Bioenergy for Sustainable Rural Development | 0.004 | 0.09 | 118.13 | 0.39
Sum | | | 100.00 | | 100.00

In comparing the percentages obtained using the TOPSIS, WSM and traditional distribution methods (Table 4), some small differences can be seen. In the comparison between TOPSIS and the traditional distribution method, the greatest disparity is 3.1%, observed for the Horticulture Institute, whereas the rest are less than 0.88%. Similarly, comparing the weighted sum method with the traditional distribution, the highest difference is 1.88%, which occurs for the same institute (IH), whereas the rest of the differences are less than 0.6%.

It should also be noted that there are some differences between the order obtained by the TOPSIS and weighted sum methods and the order obtained by the traditional distribution. These variations in order, which can be seen in Table 4, involve three Institutes (IDERS, IIBIODEZA and IIAUIA) and one Center (CISECA).

Table 4. Budget allocation in percentage, resulting from the use of TOPSIS, WSM and Traditional methods.

Center or Institute | TOPSIS | Weighted Sum Method | Traditional Budget Allocation Method
IH | 20.68 | 19.46 | 17.58
IISEHMER | 12.98 | 12.45 | 12.16
CIRENAM | 6.37 | 6.20 | 6.08
IIPCA | 6.22 | 6.18 | 5.88
IDERS | 5.44 | 5.46 | 5.40
IIBIODEZA | 5.37 | 5.37 | 5.49
IIAUIA | 4.95 | 5.03 | 4.90
CISECA | 4.94 | 5.01 | 4.96
CIEMA | 4.47 | 4.55 | 4.52
CICUBA | 4.24 | 4.36 | 4.48
IDEA | 4.04 | 4.01 | 4.32
CIAAO | 3.71 | 3.88 | 4.03
CIETBIO | 3.37 | 3.52 | 3.87
IIARDER | 3.12 | 3.11 | 3.47
CENIDERCAFE | 2.88 | 3.06 | 3.28
IIPPF | 2.35 | 2.50 | 2.70
CIDEAMS | 1.87 | 2.08 | 2.48
CISEF | 1.55 | 1.71 | 1.92
IIPDICEA | 1.36 | 1.67 | 1.72
CIBED | 0.09 | 0.39 | 0.76
Sum | 100.00 | 100.00 | 100.00

6. Conclusions

Since the results obtained by using the Weighted Sum and TOPSIS methods in the aggregation stage were very similar, from this application alone it cannot be said which of the two is the more desirable. This closeness in results could be due to the fact that the number of criteria and volume of information used in this study do not represent a highly complex problem. However, with a larger number of criteria, the TOPSIS method may be recommended given its greater degree of structuring. Regarding weighting methods, the AHP method is considered the most desirable because the pairwise comparisons performed using the scale proposed by Saaty contain more information to incorporate intensities in the preferences expressed by the experts.

Even though the results obtained by applying the two aggregation methods differ little from the traditional distribution, we conclude that the use of the TOPSIS or Weighted Sum method ensures a better distribution of resources due to their underlying theoretical foundation.

Multicriteria methods provide the opportunity to compare different weighting and aggregation methods. It would be advisable for future studies to apply other weighting and aggregation procedures and to include robustness tests.


References

  1. Agencia de Calidad y Prospectiva Universitaria. (2014). Evaluación de Institutos Universitarios de Investigación de la Universidad de Zaragoza, Zaragoza, España: Universitaria de Aragón.
  2. Álvarez, M., Arquero, A. & Martínez, E. (2010). Empleo del AHP (Proceso Analítico Jerárquico) incorporado en SIG para definir el emplazamiento óptimo de equipamientos universitarios. Facultad de Informática (U.P.M.), pp. 579-595.
  3. Aminbakhsh, S., Gunduz, M. & Sonmez, R. (2013). Safety risk assessment using Analytic Hierarchy Process (AHP) during planning and budgeting of construction projects. Journal of Safety Research, 46, 99–105.
  4. Aznar, J. & Guijarro, F. (2012). Nuevos métodos de valoración, Modelos Multicriterio. Segunda ed. Valencia España: Universidad Politécnica de Valencia.
  5. Bai, Ch., Dhavale, D. & Sarkis, J. (2014). Integrating Fuzzy C-Means and TOPSIS for performance evaluation: An application and comparative analysis. Expert Systems with Applications, 41, 4186-4196.
  6. Bao, Q., Ruan, D., Shen, Y., Hermans, E. & Janssens, D. (2012). TOPSIS and its Extensions: Applications for Road Safety Performance Evaluation. In: C. Kahraman, ed. Computational Intelligence Systems in Industrial Engineering (pp. 109-132). First edition. Paris, France: Atlantis Press.
  7. Berumen, S. & Llamazares F. (2007). La utilidad de los métodos de decisión multicriterio (como el AHP) en un entorno de competitividad creciente. Red de Revistas Científicas de América Latina y el Caribe, España y Portugal, 20(34), pp. 65-87.
  8. Canales, A. (2011). El dilema de la investigación universitaria. Perfiles Educativos, vol. XXXII. Instituto de Investigaciones sobre la Universidad y la Educación. México. pp. 34-44.
  9. Ercole, R. A., Alberto, C. L. & Carignano, C. (2007). TOPSIS en medición multicriterio de eficiencia. XXX Congreso Argentino de Profesores Universitarios de Costos.
  10. Figueira, J., Greco, S. & Ehrgott, M. (2005). Multiple Criteria Decision Analysis: State of the Art Surveys. First ed. Boston, United States of America: Springer Science Business Media.
  11. García, Ma. del S., Lamata, Ma. T. & Ruiz R. (2009). Métodos para la comparación de alternativas mediante un Sistema de Ayuda a la Decisión (S.A.D.) y "Soft Computing". Primera ed. Cartagena: Universidad Politécnica de Cartagena.
  12. Ishizaka, A. & Labib A. (2009). Analytic Hierarchy Process and Expert Choice: Benefits and Limitations. ORInsight, 22(4), p. 201–220.
  13. Jolai, F., Ahmad Y., Shahanaghi K. & Azari K. (2011). Integrating fuzzy TOPSIS and multi-period goal programming for purchasing multiple products from multiple suppliers. Journal of Purchasing & Supply Management, 17, 42–53.
  14. Joo, J. & Alvarado, V. (2013). Evaluación multicriterio/multiobjetivo aplicada a datos sobre educación: una primera aproximación. Revista Educación y Tecnología, 3, 112-123.
  15. Kahraman, C. (2012). Computational Intelligence Systems in Industrial Engineering. Primera ed. Paris, France: Atlantis Press.
  16. Kao, C. & Pao, H.-L. (2009). An evaluation of research performance in management of 168 universities. Scientometrics, 78(2), 261-277. DOI: 10.1007/s11192-007-1906-6.
  17. Lattuada, M. (2010). La evaluación de la investigación en las universidades argentinas. Contextos, culturas y limitaciones. Revista Iberoamericana de Ciencia, Tecnología y Sociedad, pp. 1-8.
  18. Loret de Mola, V. (2008). La investigación en la universidad peruana: una propuesta de debate. Alternativa Financiera, Universidad de San Martín de Porres, Perú. pp. 119.
  19. Mela, K., Tiainen, T. & Heinisuo, M. (2012). Comparative study of multiple criteria decision making methods for building design. Advanced Engineering Informatics, 26, 716-726.
  20. Mendoza, G. A. & Martins, H. (2006). Multi-criteria decision analysis in natural resource management: A critical review of methods and new modeling paradigms. Forest Ecology and Management, 230, 1-22.
  21. Ministerio de Educación Nacional R. d. C. (2013). Propuesta Metodológica para la Distribución de Recursos, Artículo 87 de la Ley 30 De 1992 Vigencia 2013, Bogotá, Colombia.
  22. Moreno, J. M. (2002). El Proceso Analítico Jerárquico (AHP). Fundamentos, Metodología y Aplicaciones. Revista Electrónica de Comunicaciones y Trabajos de ASEPUMA, 1, 21-53.
  23. Moreno, J. M. & Escobar, Ma. T. (2000). El pesar en el proceso analítico jerárquico 1. REDALYC, 14(1), 95-115.
  24. Morillas, A., Díaz B. & González, J. (1997). Análisis de concordancia comparativa difusa. Propuesta y evaluación mediante un caso práctico. Estadística Española, 142, 67-97.
  25. OECD, (2008). Handbook on Constructing Composite Indicators. First edition. European Commission.
  26. Pacheco, J. F. & Contreras E. (2008). Manual metodológico de evaluación multicriterio para programas y proyectos. Primera ed. Santiago de Chile: Comisión Económica para América Latina y el Caribe (CEPAL).
  27. Papadopoulos, A. (2011). Overview and selection of multi-criteria evaluation methods for mitigation/adaptation policy instruments, Greece: National and Kapodistrian University of Athens.
  28. Real, A. & Maldonado, A. (2011). Selección de fresadoras con TOPSIS usando ponderaciones de AHP. CULCyT, 8, 95-102.
  29. Romero, C. (1996). Análisis de las Decisiones Multicriterio. Primera ed. Madrid: HB&h Dirección de Arte y Edición.
  30. Roy, B. (2005). Paradigms and Challenges. In: J. Figueira, S. Greco & M. Ehrgott, eds. Multiple Criteria Decision Analysis (pp. 4-24). Boston, United States of America: Springer Science.
  31. Saaty, T. L. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.
  32. Sanz, L. (2005). Evaluación de la investigación y sistema de ciencia. Documento de Trabajo 04-07. Unidad de Políticas comparadas del CSIC, Madrid. pp. 1-8.
  33. Triantaphyllou, E. (2000). Multi-Criteria Decision Making Methods: A Comparative Study. First ed. Louisiana, USA: Springer Science Business Media.
  34. Yu, T. L., Wei Ch. W. & Han H. W. (2008). AHP- and simulation-based budget determination procedure for public building construction projects. Automation in Construction, 17, 623–632.
  35. Zopounidis, C., Pardalos, M. & Hearn, D. W. (2010). Handbook of Multicriteria Analysis. First ed. Heidelberg, Germany: Springer.
