American Journal of Theoretical and Applied Statistics
Volume 4, Issue 6, November 2015, Pages: 484-495

Use of Exponential Smoothing Technique in Estimation of Returns in a Financial Portfolio (A Case of the Matatu Public Transport Business in Kenya)

Jumba Minyoso Sandra1, *, Joel Cheruiyot Chelule2, Mungatu Joseph3

Department of Statistics and Actuarial Studies, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya

(J. M. Sandra)

Jumba Minyoso Sandra, Joel Cheruiyot Chelule, Mungatu Joseph. Use of Exponential Smoothing Technique in Estimation of Returns in a Financial Portfolio (A Case of the Matatu Public Transport Business in Kenya). American Journal of Theoretical and Applied Statistics. Vol. 4, No. 6, 2015, pp. 484-495. doi: 10.11648/j.ajtas.20150406.19

Abstract: This study sought to develop consistent estimators for the conditional mean and conditional volatility using the exponential smoothing technique, and to use these estimators to estimate the VaR and ES of a financial asset in a given financial portfolio. In particular, we take the Kenyan Matatu business as our financial portfolio and estimate the ES of the daily returns obtained from Matatus travelling the Nairobi-Eldoret highway, as provided by CLASSIC SACCO. In estimating the conditional mean and conditional volatility of the returns of our portfolio, the study explored the exponential smoothing technique, whereby exponentially decreasing weights were assigned to the returns. The study proved that the estimators for the conditional mean and conditional volatility are consistent, and also that the estimators for the conditional mean and for the conditional volatility when the conditional mean is known are asymptotically normal. Further, the study gives the estimators for the VaR and ES and proves that the VaR estimator is consistent.

Keywords: Expected Shortfall, Exponential Smoothing, Value at Risk, Conditional Mean, Conditional Volatility

1. Introduction

1.1. Background Information

In Kenya, when the demand for public transport began to outstrip its supply shortly after Independence in 1963, the government decided to tolerate and eventually regularize what had previously been informal commercial vehicles commonly known as Matatus (McCormick D. W., 2011). By the mid-2000s, it is estimated that half of Nairobi's population relied on Matatus to meet their public transport needs (Dorothy McCormick, 2013).

The Matatu industry has a direct impact on over 500,000 people who are drivers, touts, employees of PSV SACCOs, or stage attendants (MOA, 2014). As businesses, Matatus are privately owned; they do not enjoy government subsidies and are required to be registered and to pay taxes.

In a legal notice issued on 23rd December 2010 by the Ministry of Transport, all Matatus were required to join SACCOs or limited liability companies by the end of the year 2010. By this time, some of the SACCOs were effectively managing their routes and providing credit to their members for the purchase of vehicles or for repairs, while one had established a fueling station for its members' vehicles (Kibua, 2004), Classic Sacco being one of them. Classic Sacco manages over 170 Matatus plying the Nairobi-Eldoret route. Nairobi, the capital city of Kenya, is approximately 164 miles from Eldoret, making this an approximately five-hour drive when using public transport.

Returns from the Matatu business are a complete and scale-free summary of the investment opportunity in the industry.

The increase of financial risks in financial portfolios, in the Kenyan Matatu industry for example, has underlined the need for better financial risk measures. Matatu business owners deal with the risks associated with changes in prices, which can be summarized by the variances of future returns directly, or by their relationship with relevant covariances in a portfolio context. Forecasts of future standard deviations can provide up-to-date indications of risk, which might be used to avoid unacceptable risks (Aas & Dimakos, 2004). The nature of financial risks in different portfolios keeps changing with time, and the methods used to measure them should therefore adapt accordingly. Furthermore, these methods should be easy to understand even in complex situations. It is in this context that quantitative risk measures have become vital in the management of risks. This study is therefore motivated by the need to know how to estimate market risks in the Matatu industry.

There are many types of risks, i.e., market risks, credit risks and operational risks. In this study, we focus on one type, market risks. Market risks are risks that arise from changes in the prices (or volatilities) of financial assets and liabilities, and are measured by changes in the value of open positions or in earnings. The market risks will be quantified using a statistical tool called ES. But we cannot avoid mentioning VaR when dealing with ES, since VaR precedes ES, both by definition and in computation.

1.2. Statement of the Problem

Matatus account for 80% of the total public transport in the country (Muriungi, 2013). The Matatu business can yield good returns for an investor if well managed. However, the Matatu business experiences many challenges, the main one being keeping track of returns so as to run a profitable business. In most cases, losses occur due to poor monitoring of exposure to market risks by the management. It has been difficult to manage these market risks in the absence of accurate details from regulatory authorities and related institutions. We intend to address this challenge by estimating the ES of the returns as provided by the managing body, Classic SACCO.

2. Literature Review

2.1. Volatility and Value at Risk

The main characteristic of any financial asset is its return. Return is typically considered to be a random variable. Its primary usage is to estimate the value of market risk. Volatility is also a key parameter for pricing financial derivatives. All modern option-pricing techniques rely on a volatility parameter for price evaluation (Ladokhin, 2009). Volatility is also used for risk management applications and in general portfolio management.

The concept of volatility, as used in financial risk management, refers to the spread of all outcomes of an uncertain variable. In finance, we are interested in the outcomes of asset returns (Winands, 2009). Volatility is associated with the sample standard deviation of returns over some period of time, and is a quantified measure of market risk. Volatility is related to risk, but it is not exactly the same: risk is the uncertainty of a negative outcome of some event (e.g. stock returns), while volatility measures the spread of outcomes (Tesarova & Gapko, 2012), which includes positive as well as negative outcomes.

Volatility can be used in some risk management applications, such as Value at Risk (VaR). Accurate estimates of volatility are important for option pricing, portfolio analysis and risk management methodologies, such as value at risk (Madhavan & Yang, 2003). The observation that many financial series exhibit volatility clustering has led to the development of a great many time series methods for volatility forecasting.

As a risk measure, volatility is an important concept in finance and plays a critical role in a wide range of applications such as asset pricing, portfolio management, risk management, and option valuation. (Tian, 2008). Volatility modeling is required for VaR estimation.


Value at risk (VaR) is commonly used in the financial industry to quantify risk in asset portfolios. It has become a standard measure in financial risk management due to its conceptual simplicity, computational facility, and ready applicability (Yoshiba, 2002). Many authors suggest that VaR has several conceptual problems (Artzner P. F., 1997); for example:

VaR measures only percentiles of profit-loss distributions, and thus disregards any loss beyond the VaR level ("tail risk"),

VaR is not coherent since it is not sub-additive.

2.2. Expected Shortfall

Expected shortfall is defined as the conditional expectation of loss given that the loss is beyond the VaR level. ES can be used to alleviate the problems present in VaR (Yoshiba, 2002), since expected shortfall considers the loss beyond the VaR level and has been proven to be sub-additive, which assures its coherence as a risk measure. Expected shortfall has less of a problem in disregarding the fat tails and the tail dependence than VaR does. VaR is not a coherent risk measure because it does not fulfill the axiom of sub-additivity (Tasche, 2001).

2.3. Past Reviews on Exponential Smoothing

The two most popular time series approaches are GARCH models and smoothing methods. In contrast to the statistical rigor of GARCH models, smoothing methods provide a pragmatic, ad hoc approach to volatility forecasting (Taylor, 2008). Exponential smoothing is a popular approach, which has been found to perform well in empirical studies (Boudoukh, 1997). It involves the allocation of exponentially decreasing weights to past squared shocks. Another common application of exponential smoothing is inventory control, where it is used to predict the level of a time series of demand.

An advantage of exponential smoothing is that it uses declining weights, which "captures the cyclical behavior of return volatility." A simple and popular alternative approach to volatility forecasting is to estimate the variance as a simple moving average of past squared shocks. (Boudoukh, 1997) writes that this estimator has two clear weaknesses: firstly, if volatility clusters, there is strong appeal in giving more recent information greater weight, and, secondly, the choice of how many past periods to include in the moving average is arbitrary. Exponential smoothing is also widely used to produce forecasts for the level of a time series (Gardner, 1985). An alternative is to use GARCH, the downside of which is that it adds (some) complexity. Exponential smoothing may be a good compromise between quality and complexity for Value at Risk (pat, 2013).

When its asymptotic properties are established and compared with those for the local linear estimator, the exponential estimator is asymptotically fully adaptive to unknown conditional mean functions. The exponential estimator also shows superior performance when applied to estimate conditional volatility functions, ensuring their non-negativity (Su et al., 2002). (Huy-Nhiem Nguyen, 2010) shows that the noisy nature of realized volatility may benefit from smoothing techniques.

At the same time, exponential smoothing and the exponentially weighted moving average give some of the best results among all volatility forecasting models; these two models have the lowest RMSE on the testing set (Ladokhin, 2009).

2.4. Concepts and Definitions

We consider the problem of forecasting a value each period which will be exceeded with small probability by the current portfolio, the corresponding confidence level θ being the one associated with the Value at Risk (VaR). Let X_t be real-valued and F_{t-1} represent the information available at time t - 1. We will assume that X_t is measurable with respect to F_t. Let the sequence of random variables {X_t}, taking values in R, be stationary. Here, X_t can be considered as the response variable and the past information F_{t-1} of X_t as the predictor variables (or covariates). Specifically, we want to estimate the conditional mean and the conditional volatility of X_t, given the past information F_{t-1}, with the assumption that these functions are completely determined by the past information. So, we have our underlying process of interest of the form, as in Mwita (2003),

X_t = μ(F_{t-1}) + σ(F_{t-1}) ε_t,        (2.1)

where

1) μ(F_{t-1}) is the conditional mean function of X_t given the past information F_{t-1},

2) σ(F_{t-1}) is the conditional volatility function of X_t given the past information F_{t-1}, and

3) ε_t are the standard error terms, which we assume to be independent of F_{t-1} and normally distributed with mean 0 and variance 1.

The conditional θ-quantile of (2.1) given F_{t-1} is then given by

q_θ(X_t | F_{t-1}) = μ(F_{t-1}) + σ(F_{t-1}) q_θ(ε),

or simply

q_θ(X_t | F_{t-1}) = μ_t + σ_t q_θ(ε),

where q_θ(ε) is the θ-quantile of ε_t, μ_t = μ(F_{t-1}) and σ_t = σ(F_{t-1}). Suppose the ε_t are independent and identically distributed (i.i.d.) standard normal random variables; then q_θ(ε) = z_θ, the θ-quantile of the standard normal distribution.

Definition (Risk)

Risk is the dispersion (volatility) of unexpected outcomes, generally in the value of the assets or liabilities of interest. Alternatively, we can also define risk as the quantifiable likelihood of loss or of less-than-expected returns.

Definition (Risk measure)

Consider a set Q of real-valued random variables. A function ρ: Q → R is called a risk measure if it is

(i) monotonous: X, Y ∈ Q, X ≤ Y ⟹ ρ(X) ≤ ρ(Y),

(ii) sub-additive: X, Y, X + Y ∈ Q ⟹ ρ(X + Y) ≤ ρ(X) + ρ(Y),

(iii) positively homogeneous: X ∈ Q, h > 0, hX ∈ Q ⟹ ρ(hX) = hρ(X), and

(iv) translation invariant: X ∈ Q, a ∈ R ⟹ ρ(X + a) = ρ(X) + a.

Definition (Value at Risk)

The Value at Risk, VaR_θ(X_t), based on negative returns or losses at time t given the past information F_{t-1}, is defined by

VaR_θ(X_t) = q_θ(X_t | F_{t-1}),

the conditional θ-quantile of X_t given F_{t-1}. In essence, we find VaR_θ(X_t) such that

P(X_t > VaR_θ(X_t) | F_{t-1}) = 1 - θ.        (2.2)

1 - θ is the probability of extreme losses greater than the VaR_θ(X_t), usually taking the values 5% or 1%, corresponding to one or ten day(s) periods respectively.

Definition (Expected Shortfall)

Suppose X is a random variable denoting the negative returns of a given portfolio on a specified time horizon T, and VaR_θ(X) is the VaR at the θ percent confidence level. The Expected Shortfall is defined by the following equation:

ES_θ(X) = E[X | X > VaR_θ(X)].        (2.3)

2.5. Exponential Smoothing (ES)

Exponential smoothing is a popular scheme used to produce a smoothed time series. This forecasting procedure was first suggested by C. C. Holt in about 1958. By definition, exponential smoothing is simply an adjustment technique which takes the previous period's forecast and adjusts it up or down based on what actually occurred in that period. It accomplishes this by calculating a weighted average of the two values. The formula takes the form

S_t = α X_{t-1} + (1 - α) S_{t-1},

where X_{t-1} is yesterday's actual value; S_{t-1} is the forecasted value; α is the weighting factor or smoothing constant; and t is the current time period.

Whereas in single moving averages, the past observations are weighted equally, exponential smoothing scheme weight past observations using exponentially decreasing weights to forecast future values. In other words, recent observations are given relatively more weight than the older observations.

2.5.1. Single Exponential Smoothing

This smoothing scheme begins by setting S_2 to X_1, where S stands for the smoothed observation or exponentially weighted moving average (EWMA), and X stands for the original observation. The subscripts refer to the time periods 1, 2, ..., n. For t ≥ 3, the smoothed value S_t is found by computing

S_t = α X_{t-1} + (1 - α) S_{t-1},  0 < α ≤ 1, t ≥ 3.        (2.4)

There is no S_1; the smoothed series starts with the smoothed version of the second observation. Equation (2.4) is the basic equation of exponential smoothing and the constant or parameter α is called the smoothing constant. The first forecast is very important, and the smaller the value of α, the more important is the selection of the initial exponentially weighted moving average. The initial smoothed observation plays an important role in computing all the subsequent EWMAs.

The following are some of the methods of initialization: setting S_2 to X_1; setting S_2 to the target of the process; setting S_2 to the average of the first four or five observations.
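The recursion (2.4) with the S_2 = X_1 initialization can be sketched in Python as follows (a minimal sketch; the function name and the 0-based indexing are ours, not the paper's):

```python
def single_exponential_smoothing(x, alpha):
    """Single exponential smoothing, S_t = alpha*X_{t-1} + (1-alpha)*S_{t-1}.

    Uses the initialization S_2 = X_1 described above; returns the
    smoothed values S_2, ..., S_n for an input series X_1, ..., X_n.
    """
    if not 0 < alpha <= 1:
        raise ValueError("smoothing constant alpha must lie in (0, 1]")
    s = [x[0]]                 # S_2 = X_1 (there is no S_1)
    for x_prev in x[1:-1]:     # X_2, ..., X_{n-1}
        s.append(alpha * x_prev + (1 - alpha) * s[-1])
    return s
```

For example, with alpha = 0.5 and the series 10, 20, 30, this gives S_2 = 10 and S_3 = 0.5·20 + 0.5·10 = 15.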

2.5.2. Double Exponential Smoothing

Double exponential smoothing uses two constants and is better at handling trends. Single exponential smoothing does not excel in following the data when there is a trend. This situation can be improved by the introduction of a second equation with a second constant, γ, which must be chosen in conjunction with α. The following are the two equations associated with double exponential smoothing:

S_t = α X_t + (1 - α)(S_{t-1} + b_{t-1}),  0 ≤ α ≤ 1,        (2.5)

b_t = γ(S_t - S_{t-1}) + (1 - γ) b_{t-1},  0 ≤ γ ≤ 1.        (2.6)

Here, we note that the current value of the series is used to calculate its smoothed value replacement in double exponential smoothing. There are several methods to choose the initial values for S_1 and b_1. S_1 is in general set to X_1, while the following are three suggestions for b_1:

b_1 = X_2 - X_1;

b_1 = [(X_2 - X_1) + (X_3 - X_2) + (X_4 - X_3)]/3;

b_1 = (X_n - X_1)/(n - 1).

Smoothing equation (2.5) adjusts S_t directly for the trend of the previous period, b_{t-1}, by adding it to the last smoothed value, S_{t-1}. This helps to eliminate the lag and brings S_t to the appropriate base of the current value. The second smoothing equation (2.6) then updates the trend, which is expressed as the difference between the last two smoothed values. The equation is similar to the basic form of single exponential smoothing, but here applied to the updating of the trend.
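The two recursions (2.5) and (2.6) can be sketched in Python, using S_1 = X_1 and the first of the b_1 suggestions above (a minimal sketch; the function name is ours):

```python
def double_exponential_smoothing(x, alpha, gamma):
    """Double (Holt) exponential smoothing:
    S_t = alpha*X_t + (1 - alpha)*(S_{t-1} + b_{t-1})   # level, eq. (2.5)
    b_t = gamma*(S_t - S_{t-1}) + (1 - gamma)*b_{t-1}   # trend, eq. (2.6)
    Returns the smoothed levels S_1, ..., S_n.
    """
    s, b = x[0], x[1] - x[0]       # S_1 = X_1, b_1 = X_2 - X_1
    smoothed = [s]
    for x_t in x[1:]:
        s_prev = s
        s = alpha * x_t + (1 - alpha) * (s_prev + b)
        b = gamma * (s - s_prev) + (1 - gamma) * b
        smoothed.append(s)
    return smoothed
```

With alpha = gamma = 1 the smoother simply tracks the series, so a linear series such as 1, 2, 3 is reproduced exactly.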

Why exponential smoothing?

Exponential smoothing relies on only two pieces of data (the last period's actual value and the forecast for the same period). Therefore, it minimizes data storage requirements. If we use the moving averages method to forecast, we need to keep M past values; this is cumbersome if there are many items for which forecasting is required. The exponential smoothing technique is very simple in concept, easy to understand, and often used on large-scale statistical forecasting problems because it is both robust and easy to apply.

3. Estimators for the Conditional Mean and Conditional Volatility

3.1. Estimator for the Conditional Mean

Refer to model (2.1). We want to estimate the conditional mean, μ_t, given past information, F_{t-1}. In improving the usual method of estimating, i.e., the sample mean of the n observations, where n is the sample size, we introduced horizons m as the time we look back to estimate the conditional mean of the returns for the day. So as to achieve a better estimate of μ_t, the horizon should be large enough; in our case, we chose to use a horizon of 250 days. We also thought about the influence of the past on today's mean and noted that the behavior of the Matatu returns several days ago should not influence today's mean as much as yesterday's behavior. So we weighted the returns with weights w_i. These weights should decrease exponentially as the returns get older; that is, recent returns are given relatively more weight than older returns. Therefore, the weights are given by

w_i = (1 - λ) λ^(i-1),  i = 1, 2, ..., m,  for some 0 < λ < 1,        (3.1)

where λ is the weighting factor or smoothing constant. It is expected that these weights should sum up to 1, but because we cannot look back over an infinite time span, the w_i may not sum up exactly to 1. But for horizons m large enough, the sum is approximately equal to 1. For example, suppose we look at a horizon of 250 days (as in our case) and a λ of 0.95; the w_i's sum up to about 0.99997. We now define our estimator for the conditional mean. Let

X_{t-m}, ..., X_{t-1} be a sequence of returns, where m is the size of the horizon we take at time t, and let the sample size n be large enough that we can have a number of horizons giving a vector of estimates for the conditional mean. The estimator for the conditional mean is given by

μ̂_t = Σ_{i=1}^m w_i X_{t-i} = Σ_{i=1}^m (1 - λ) λ^(i-1) X_{t-i},        (3.2)

where 0 < λ < 1 is the smoothing constant, t is the current time period, and m is the time horizon.
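Reading the weights as w_i = (1 - λ)λ^(i-1) (our interpretation of (3.1), which makes a 250-day horizon with λ = 0.95 sum to 1 - 0.95^250 ≈ 0.99997, as quoted above), the estimator (3.2) can be sketched in Python:

```python
def smoothing_weights(lam, m):
    # w_i = (1 - lam) * lam**(i - 1), i = 1, ..., m  -- our reading of (3.1)
    return [(1 - lam) * lam ** (i - 1) for i in range(1, m + 1)]

def conditional_mean(returns, lam, m=250):
    """Exponentially weighted conditional mean, equation (3.2):
    the most recent return X_{t-1} receives the largest weight."""
    w = smoothing_weights(lam, m)
    recent = returns[-m:][::-1]    # X_{t-1}, X_{t-2}, ..., X_{t-m}
    return sum(wi * xi for wi, xi in zip(w, recent))
```

For a constant series the estimate is the constant times the weight sum 1 - λ^m, which is very close to the constant itself over a long horizon.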

Definition (Point estimator)

If we have a random sample X_1, X_2, ..., X_n from a density, say f(x; θ), which is known except for θ, then a point estimator of θ is a statistic, say T = t(X_1, ..., X_n), whose value is used as an estimate of θ.

Definition (Convergence)

A sequence of random variables {T_n} is said to converge in probability to a constant c if for every ε > 0,

lim_{n→∞} P(|T_n - c| ≥ ε) = 0,

or alternatively,

lim_{n→∞} P(|T_n - c| < ε) = 1.

Convergence in probability is denoted by T_n →p c.

Definition (Weak Consistency)

Let {T_n} be a sequence of point estimators of τ(θ), where T_n = t(X_1, ..., X_n). The sequence {T_n} is defined to be a weakly consistent sequence of point estimators of τ(θ) if the following conditions are satisfied:

lim_{n→∞} E(T_n) = τ(θ);

lim_{n→∞} Var(T_n) = 0.

Assumptions

(A1)

(A2) E(X_t²) < ∞.

(A3) Cov(X_i, X_j) → 0 as |i - j| → ∞, where X_i represents an observation in one horizon and X_j represents an observation in another horizon.

Some remarks on the assumptions are in order. Assumption (A2) is made to guarantee that all variables in the model have a finite variance, and assumption (A3) says that the correlation between observations diminishes to zero as the distance between i and j becomes large.

Theorem 3.1(Consistency for the conditional mean estimator)

Let X_{t-m}, ..., X_{t-1} be a random sample of returns with mean μ_t and variance σ_t². Then,

μ̂_t = Σ_{i=1}^m (1 - λ) λ^(i-1) X_{t-i}

is a weakly consistent estimator of μ_t.

Proof of theorem 3.1

We show that lim E(μ̂_t) = μ_t and lim Var(μ̂_t) = 0. We proceed as follows:

E(μ̂_t) = E[Σ_{i=1}^m (1 - λ) λ^(i-1) X_{t-i}]        (3.3)

E[(1 - λ) λ^(i-1) X_{t-i}] = (1 - λ) λ^(i-1) μ_t        (3.4)

E(μ̂_t) = μ_t (1 - λ) Σ_{i=1}^m λ^(i-1)        (3.5)

= μ_t (1 - λ^m)        (3.6)

→ μ_t as m → ∞.        (3.7)

Var(μ̂_t) = Var[Σ_{i=1}^m (1 - λ) λ^(i-1) X_{t-i}]        (3.8)

Var[(1 - λ) λ^(i-1) X_{t-i}] = (1 - λ)² λ^(2(i-1)) σ_t²        (3.9)

Var(μ̂_t) = σ_t² (1 - λ)² Σ_{i=1}^m λ^(2(i-1))        (3.10)

= σ_t² (1 - λ)(1 - λ^(2m))/(1 + λ),        (3.11)

which implies that Var(μ̂_t) → 0 as λ → 1.

Remarks

a)  Equation (3.4) is obtained from equation (3.3) by noting that m is large. That is, the time horizon we have selected is large. Therefore, by the central limit theorem, the returns are normally distributed with mean μ_t and variance σ_t², i.e. X_{t-i} ~ N(μ_t, σ_t²). Hence, we have E(X_{t-i}) = μ_t and Var(X_{t-i}) = σ_t². In a similar way, equation (3.9) is obtained from equation (3.8), under assumption (A3).

b)  Equation (3.5) is obtained by summing equation (3.4) over the horizon, m. Similarly, equation (3.10) is obtained from equation (3.9).

c)  Equations (3.6) and (3.11) are obtained by noting that Σ_{i=1}^m λ^(i-1) = (1 - λ^m)/(1 - λ) and Σ_{i=1}^m λ^(2(i-1)) = (1 - λ^(2m))/(1 - λ²).

d)  Equation (3.7) is obtained from equation (3.6) by noting that the weights sum up to 1 as m → ∞.

Definition (Asymptotic normality, see Alexander et al (1974))

A sequence of estimators  of  is defined to be best asymptotically normal (BAN) if and only if the following four conditions are satisfied:

(i)        The distribution of √n (T_n - θ) approaches the normal distribution with mean 0 and variance σ²(θ) as n approaches infinity.

(ii)       For every ε > 0, lim_{n→∞} P(|T_n - θ| > ε) = 0.

(iii)     Let {T_n'} be any other sequence of simple consistent estimators for which the distribution of √n (T_n' - θ) approaches the normal distribution with mean 0 and variance σ'²(θ).

(iv)     σ'²(θ) is not less than σ²(θ) for all θ in any open interval.

Lemma 3.1 (George, G. (1973))

Let {f_n} be a sequence of density functions, and let f be a density function. Let φ_n be the characteristic function corresponding to f_n and φ be the characteristic function corresponding to f. Then,

(i)        If f_n(x) → f(x) for all continuity points x of f, then φ_n(t) → φ(t) for every t.

(ii)       If φ_n(t) converges, as n → ∞, for every t, to a function g(t) which is continuous at t = 0, then g is a characteristic function, and if f is the corresponding density function, then

f_n(x) → f(x)

for all continuity points x of f.

Lemma 3.2 (George, G. (1973))

Let f_n and f be density functions such that f_n(x) → f(x), x ∈ R, and let f be continuous. Then the convergence is uniform in x.

Theorem 3.2 (Asymptotic normality for the conditional mean estimator)

Assume (A1), (A2), (A3) hold. Then, μ̂_t is asymptotically normal; that is, the standardized quantity

(μ̂_t - μ_t)/√Var(μ̂_t)

converges in distribution to N(0, 1) as m → ∞.

Proof of theorem 3.2

We apply the central limit theorem. Let

and . Then, we show that

uniformly in . But before we proceed to the proof proper, it is in order that we make the following comments as in George, G. (1973):

(i)        Let ,  be two sequences of numbers. We say that  is  and we write , if. For example, if

and , then , since . Clearly, if , then

. Therefore, , .

(ii)       If  , then .

We now begin the proof. Let  be the characteristic function of  and  be the characteristic function of ; that is,  ,

By lemma 3.1, it suffices to prove that  . This will imply that  . Then, by lemma 3.2, the convergence will be uniform. We have

where  with . Hence, for simplicity, writing  instead of , when this last expression appears as a subscript, we have

Now consider the Taylor expansion of around zero up to the second order term. Then

Since

we get

Thus

Taking limits as   we have, , which is the characteristic function of . Hence, the proof.

3.2. Estimator for the Conditional Volatility

Refer to model (2.1). We want to estimate the conditional volatility, σ_t, of X_t, given past information, F_{t-1}. As we did in section (3.1), we introduced the horizons m. Again, we thought about the influence of the past on today's volatility and noted that the behavior of the Matatu returns several days ago should not influence today's volatility as much as yesterday's behavior. So we weighted the returns with the same weights, w_i, as in equation (3.1).

We now define our estimator for the conditional volatility. Let X_{t-m}, ..., X_{t-1} be a sequence of returns defined as in section (3.1). We define the estimator for conditional volatility under the following situations:

1.  When the conditional mean, μ_t, is known, the estimator is given by

σ̂_t² = Σ_{i=1}^m (1 - λ) λ^(i-1) (X_{t-i} - μ_t)².        (3.12)

2.  When the conditional mean, μ_t, is unknown, the estimator is given by

σ̂_t² = Σ_{i=1}^m (1 - λ) λ^(i-1) (X_{t-i} - μ̂_t)².        (3.13)

Here, in both cases, λ, m and μ̂_t are as defined in section (3.1).

The justification for squaring the returns is as follows. Refer to our model (2.1), that is, X_t = μ_t + σ_t ε_t. Assuming μ_t = 0, we have X_t = σ_t ε_t. Thus, X_t² = σ_t² ε_t². Taking conditional expectation, we have E(X_t² | F_{t-1}) = σ_t² E(ε_t²) = σ_t².
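The two estimators (3.12) and (3.13) differ only in whether the conditional mean is supplied or plugged in; a minimal Python sketch under the same weight reading as before (function name ours):

```python
def conditional_volatility(returns, lam, m=250, mu=None):
    """Exponentially weighted conditional volatility.

    With mu supplied this is our reading of (3.12) (known conditional
    mean); with mu=None the weighted mean is plugged in, as in (3.13).
    Returns the volatility, i.e. the square root of the weighted variance.
    """
    w = [(1 - lam) * lam ** (i - 1) for i in range(1, m + 1)]
    recent = returns[-m:][::-1]    # X_{t-1}, ..., X_{t-m}
    if mu is None:
        mu = sum(wi * xi for wi, xi in zip(w, recent))
    var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, recent))
    return var ** 0.5
```

On a constant series with the true mean supplied, every squared deviation is zero and the estimate is exactly zero.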

Theorem 3.3 (Consistency for σ̂_t² when μ_t is known)

Let X_{t-m}, ..., X_{t-1} be a sequence of returns with mean μ_t and variance σ_t². Then,

σ̂_t² = Σ_{i=1}^m (1 - λ) λ^(i-1) (X_{t-i} - μ_t)²

is a simple consistent estimator for σ_t².

Proof of theorem 3.3

We show that lim E(σ̂_t²) = σ_t² and lim Var(σ̂_t²) = 0. We proceed as follows:

E(σ̂_t²) = E[Σ_{i=1}^m (1 - λ) λ^(i-1) (X_{t-i} - μ_t)²]        (3.14)

E[(1 - λ) λ^(i-1) (X_{t-i} - μ_t)²] = (1 - λ) λ^(i-1) σ_t²        (3.15)

E(σ̂_t²) = σ_t² (1 - λ) Σ_{i=1}^m λ^(i-1)        (3.16)

= σ_t² (1 - λ^m)        (3.17)

→ σ_t² as m → ∞.        (3.18)

Var(σ̂_t²) = Var[Σ_{i=1}^m (1 - λ) λ^(i-1) (X_{t-i} - μ_t)²]        (3.19)

Var[(1 - λ) λ^(i-1) (X_{t-i} - μ_t)²] = 2(1 - λ)² λ^(2(i-1)) σ_t⁴        (3.20)

Var(σ̂_t²) = 2σ_t⁴ (1 - λ)² Σ_{i=1}^m λ^(2(i-1))        (3.21)

= 2σ_t⁴ (1 - λ)(1 - λ^(2m))/(1 + λ),        (3.22)

which implies that Var(σ̂_t²) → 0 as λ → 1.

Remarks

a)        Equation (3.15) is obtained from equation (3.14) by noting that m is large. That is, the time horizon we have selected is large. Therefore, by the central limit theorem, the returns are normally distributed with mean μ_t and variance σ_t², i.e. X_{t-i} ~ N(μ_t, σ_t²). Hence, we have E[(X_{t-i} - μ_t)²] = σ_t² and Var[(X_{t-i} - μ_t)²] = 2σ_t⁴. In a similar way, equation (3.20) is obtained from equation (3.19), under assumption (A3).

b)        Equation (3.16) is obtained by summing equation (3.15) over the horizon, m. Similarly, equation (3.21) is obtained from equation (3.20).

c)         Equations (3.17) and (3.22) are obtained by noting that Σ_{i=1}^m λ^(i-1) = (1 - λ^m)/(1 - λ) and Σ_{i=1}^m λ^(2(i-1)) = (1 - λ^(2m))/(1 - λ²).

d)        Equation (3.18) is obtained from equation (3.17) by noting that the weights sum up to 1 as m → ∞.

Theorem 3.4 (Asymptotic normality for  when  is known)

Assume (A1), (A2), (A3) hold. Then, σ̂_t² is asymptotically normal; that is, the standardized quantity

(σ̂_t² - σ_t²)/√Var(σ̂_t²)

converges in distribution to N(0, 1) as m → ∞.

Proof of theorem 3.4

We apply the central limit theorem. Let

And  . Then, we show that

uniformly in  . But before we proceed to the proof proper, it is in order that we make the following comments, as in the proof of theorem (3.2):

I.            Let ,  be two sequences of numbers. We say that  is  and we write , if. For example, if

and , then , since . Clearly, if , then

. Therefore, , .

II.            If  , then .

We now begin the proof. Let  be the characteristic function of  and  be the characteristic function of ; that is,  ,

By lemma 3.1, it suffices to prove that  . This will imply that  . Then, by lemma 3.2, the convergence will be uniform. We have

where  with . Hence, for simplicity, writing  instead of  , when this last expression appears as a subscript, we have

Now consider the Taylor expansion of around zero up to the second order term. Then

Since

we get

Thus

Taking limits as   we have, , which is the characteristic function of . Hence, the proof.

Theorem 3.5 (Consistency for σ̂_t² when μ_t is unknown)

Let X_{t-m}, ..., X_{t-1} be a random sample of returns with mean μ_t and variance σ_t². Then,

σ̂_t² = Σ_{i=1}^m (1 - λ) λ^(i-1) (X_{t-i} - μ̂_t)²

is a simple consistent estimator for σ_t².

Proof of theorem 3.5

We show that  and

We proceed as follows:

Σ_{i=1}^m (X_{t-i} - μ̂_t)²/σ_t² has a Chi-square distribution with m - 1 degrees of freedom. Therefore,

E[Σ_{i=1}^m (X_{t-i} - μ̂_t)²/σ_t²] = m - 1

and

Var[Σ_{i=1}^m (X_{t-i} - μ̂_t)²/σ_t²] = 2(m - 1).

Hence,

(3.23)

which implies that σ̂_t² is asymptotically unbiased and its variance tends to zero. Hence, the proof.

3.3. The Smoothing Constant

The choice of the smoothing parameter is very important. However, although the theory of selecting this parameter is expanding, there is as yet no one particular method that is universally accepted as the standard. In any case, the basic principle underlying the choice of a smoothing parameter is that the chosen value should result in the minimum mean square error.

Definition (Mean-Squared Error (MSE))

Let T be an estimator of θ.

MSE(T) = E[(T - θ)²]

is defined to be the mean-squared error of the estimator T.
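The paper does not fix a selection procedure, so as one pragmatic sketch we pick λ from a grid by minimizing the mean squared one-step-ahead forecast error of the weighted mean (everything here, names included, is illustrative):

```python
def choose_lambda(returns, m, grid):
    """Grid search for the smoothing constant: minimize the mean squared
    one-step-ahead error of the exponentially weighted mean forecast."""
    def mu_hat(history, lam):
        w = [(1 - lam) * lam ** (i - 1) for i in range(1, m + 1)]
        recent = history[-m:][::-1]
        return sum(wi * xi for wi, xi in zip(w, recent))

    best_lam, best_mse = None, float("inf")
    for lam in grid:
        errors = [(returns[t] - mu_hat(returns[:t], lam)) ** 2
                  for t in range(m, len(returns))]
        mse = sum(errors) / len(errors)
        if mse < best_mse:
            best_lam, best_mse = lam, mse
    return best_lam
```

On a constant series, the forecast error at each step is the weight shortfall λ^m times the constant, so the smaller λ in the grid wins.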

3.4. Estimators for Value at Risk and Expected Shortfall

3.4.1. Estimator for Value at Risk (VaR)

The Value-at-Risk summarizes the expected maximum loss (or worst loss) over a target horizon at a given confidence level. In our case, we use a target horizon of 250 days and a 99% confidence level. That means the Value-at-Risk we give is the maximum amount of money we expect to lose the next day with 99% confidence; in other words, the probability of losing more than the calculated Value-at-Risk on the next day is less than 1%.

We obtain today's return, X_t, as X_t = -ln(P_t/P_{t-1}), where P_t and P_{t-1} are today's and yesterday's prices respectively. The conditional θ-quantile function for our econometric model (2.1) is given by

q_θ(X_t | F_{t-1}) = μ_t + σ_t q_θ(ε),

while the estimator for the conditional θ-quantile is given by

VaR̂_θ(X_t) = μ̂_t + σ̂_t z_θ,

where μ̂_t and σ̂_t are as defined in equations (3.2) and (3.13) respectively, and z_θ is the θ-quantile of the standard normal distribution.
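Under the normality assumption on the error terms, the VaR estimate is the weighted mean plus the weighted volatility scaled by the standard normal θ-quantile; a minimal Python sketch (function name ours):

```python
from statistics import NormalDist

def var_estimator(mu_hat, sigma_hat, theta=0.99):
    """VaR_theta = mu_hat + sigma_hat * z_theta, where z_theta is the
    standard normal theta-quantile (our reading of the estimator above)."""
    z_theta = NormalDist().inv_cdf(theta)
    return mu_hat + sigma_hat * z_theta
```

With mu_hat = 0 and sigma_hat = 1 at theta = 0.99, this returns z_0.99 ≈ 2.3263.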

Consistency for the estimator for Value at Risk

We show that lim E(VaR̂_θ(X_t)) = VaR_θ(X_t) and lim Var(VaR̂_θ(X_t)) = 0. We proceed as follows:

First, we check the bias

(4.6)

Next, we check the asymptotic variance.

(4.7)

(4.8)

(4.9)

(4.10)

Hence, the proof.

Remarks

a)        Equation (4.4) is obtained from equation (4.13) by using assumption (A3) and noting that

and .

b)        Equation (4.5) is obtained from equation (4.4) by noting that the weights sum up to 1 as .

c)         In equation (4.9),  and  are obtained as in equations (3.11) and (3.24) respectively.

3.4.2. Estimator for the Expected Shortfall

Expected shortfall is the conditional expectation of loss given that the loss is beyond the VaR level, and measures how much one can lose on average in the states beyond the VaR level.

Suppose X is a random variable denoting the negative returns of a given portfolio and VaR_θ(X) is the VaR at the θ percent confidence level. The Expected Shortfall is given by

ES_θ(X) = E[X | X > VaR_θ(X)].

The function for expected shortfall based on our econometric model (2.1) is given by

ES_θ(X_t) = E[X_t | X_t > VaR_θ(X_t), F_{t-1}],

and the estimator for the expected shortfall is given by

ÊS_θ = (1/K) Σ_t X_t 1{X_t > VaR̂_θ(X_t)},

where μ̂_t and σ̂_t are as defined in equations (3.2) and (3.13) respectively, K is the number of negative returns that exceed VaR̂_θ(X_t), and 1{·} is an indicator function equal to 1 if X_t > VaR̂_θ(X_t) and 0 otherwise.
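The ES estimator above is just the average of the losses that exceed the VaR estimate; a minimal Python sketch (function name ours):

```python
def expected_shortfall(neg_returns, var_level):
    """Average of the negative returns (losses) strictly beyond the VaR
    level; K = number of exceedances, the indicator keeps only X_t > VaR."""
    tail = [x for x in neg_returns if x > var_level]
    if not tail:
        return float("nan")        # no observed tail losses
    return sum(tail) / len(tail)   # (1/K) * sum of exceeding losses
```

For example, with losses 1, 2, 3, 10, 12 and a VaR level of 5, only 10 and 12 exceed the level and the estimate is their average, 11.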

4. Results and Discussion

Here, we present an analysis of the average returns from the Matatu business from May 2010 to August 2014. The results are given in the figures below.

Figure 1. Plot of returns from May 2010 to August 2014.

In figure 1, the upper (positive) side shows the negative returns (losses), while the lower (negative) side shows the positive returns (profits). This is because we have used negative log returns. We observe that the returns tend to cluster as the threshold increases, which suggests that the returns are autocorrelated. The clustered returns over time represent clustering of volatilities. This is supported by Engle and Manganelli (2002), who noted that the distribution of returns tends to be autocorrelated. The long spikes on either side indicate extreme returns.

Figure 2. Matatu conditional means plotted against time.

In figure 2, we can observe that the conditional mean tends to be stationary over a long period of time, as it keeps reverting to the long-term value, that is, 0. There are also a few positive and negative "outliers", or extreme means, that seem not to be consistent with the rest of the means. The negative "outliers" suggest that the market conditions were very favorable, and the positive outliers suggest that the market conditions were very unfavorable.

In figure 3, it is evident that volatility varies with time. We can also observe that large volatilities tend to be followed by large volatilities. Similarly, small volatilities tend to be followed by small volatilities. We can also see that there was a shock as indicated by the extremely large volatilities that lasted for a short period (around day 500 to 550). A shock indicates either extreme gain or extreme loss in the market, depending on the conditions.

Figure 3. Matatu Conditional volatilities plotted against time.

Figure 4. Matatu returns plotted together with .

Figure 5. Plot of MATATU returns with  and Expected Shortfall.

In figure 4, we have superimposed the 0.99-conditional quantile on the returns. VaR is always based on negative returns, hence the plot of the quantile on the upper side. From the definition of VaR, the 0.99-conditional quantile means that there is one chance in a hundred, under normal market conditions, of a loss greater than the VaR set by a given portfolio's management occurring on a given day. Therefore, the 0.99-conditional quantile measures the maximum loss a portfolio can incur at the 99% confidence level. It is clear from figure 4 that the 0.99-conditional quantile responds well to the distribution of the returns. We can also see from the figure that a few returns exceed the VaR.

In figure 5, we have superimposed both the VaR and the expected shortfall on the returns. The upper line represents the expected shortfall, and we can observe that it lies slightly above the line representing the VaR. Therefore, it is clear from the figure that VaR tells us the most we can expect to lose if a tail event does not occur, while the expected shortfall tells us what we can expect to lose if a tail event does occur.
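For intuition only, the two quantities can be computed empirically from a sample of losses. This is a minimal sketch based on the sample quantile, not the exponential-smoothing estimators developed in the study.

```python
def empirical_var_es(losses, alpha=0.99):
    """Empirical VaR and ES at level alpha from observed losses
    (negative log returns, so losses are positive).
    VaR is the alpha sample quantile; ES is the average of the losses
    beyond the VaR, so ES always lies at or above VaR."""
    s = sorted(losses)
    var = s[min(int(alpha * len(s)), len(s) - 1)]
    tail = [x for x in s if x > var]
    es = sum(tail) / len(tail) if tail else var
    return var, es
```

Because ES averages only the exceedances, it sits above VaR, exactly as the two superimposed lines do in figure 5.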

In our study, we have analyzed 963 Matatu returns. By the definition of VaR at the 99% confidence level, we expect, under normal market conditions, about (1 − 0.99) × 963 ≈ 10 returns to exceed the VaR. In our results, we obtained 9 exceedances and an expected shortfall of 0.1688. Therefore, our risk measuring method performs well.
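The backtest arithmetic above can be sketched in a few lines; the loss and VaR series in any real run would come from the estimators, and the example inputs here are hypothetical.

```python
def count_exceedances(losses, var_forecasts):
    """Number of days on which the realised loss exceeded the VaR forecast."""
    return sum(1 for x, v in zip(losses, var_forecasts) if x > v)

def expected_exceedances(n, alpha=0.99):
    """Exceedances expected under correct coverage: (1 - alpha) * n."""
    return (1 - alpha) * n

# For the 963 returns in the study we expect about 9.6 exceedances;
# the 9 observed are consistent with correct 99% coverage.
```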

5. Conclusion

In estimating the conditional mean and conditional volatility of the returns of our portfolio, we explored the exponential smoothing technique, whereby we assigned exponentially decreasing weights to the returns. We noted that the exponential smoothing technique is easy to understand and apply. We proved that the estimators for the conditional mean and conditional volatility are consistent, and that these estimators, when the conditional mean is known, are asymptotically normal. Further, we have given the estimators for the VaR and ES and proved that the VaR estimator is consistent.
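The exponentially weighted recursions behind these estimators can be sketched as follows; the smoothing parameter `lam` is fixed, and its value here (0.94) is illustrative rather than the one used in the study.

```python
def ewma_mean_and_vol(returns, lam=0.94):
    """Exponentially weighted estimates of the conditional mean and
    conditional volatility: the newest return gets weight (1 - lam)
    and older returns decay geometrically by lam."""
    mu = returns[0]   # initialise the mean at the first observation
    var = 0.0         # initialise the variance at zero
    for x in returns[1:]:
        mu = lam * mu + (1 - lam) * x
        var = lam * var + (1 - lam) * (x - mu) ** 2
    return mu, var ** 0.5
```

Each update is a convex combination of the previous estimate and the newest observation, which is why the conditional mean in figure 2 keeps reverting toward the long-term level.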

Recommendations

The exponential smoothing technique has a few demerits: it lags, i.e. the forecast falls behind as the trend changes over time; it may fail to account for the dynamic changes at work in the real world; and the forecast may constantly require updating to respond to new information.

In this respect, we note that the exponential smoothing technique may not be the best estimation technique, and that other techniques, such as GARCH estimation, may be explored.

We also suggest that the following work may be done in future as a continuation of this research:

a)  The choice of the optimal smoothing parameter. In our case, we have just fixed its value.

b)  The proof of the asymptotic normality for the conditional volatility estimator when the conditional mean is unknown. Also, the asymptotic normality for the VaR estimator should be shown.

References

1. Aas, K., & Dimakos, X. K. (2004). Statistical modelling of financial time series: An introduction. Norwegian Computing Center, Applied Research and Development.
2. Alexander, M. F. (1974). Introduction to the Theory of Statistics (3rd ed., pp. 241-242, 295-296). McGraw-Hill.
3. Amin, A. N. (2009). Reducing Emissions from Private Cars: Incentive measures for behavioural change. United Nations Environment Programme.
4. André Lucas, X. Z. (2014). Score Driven Exponentially Weighted Moving Averages and Value-at-Risk Forecasting. VU University Amsterdam and Tinbergen Institute.
5. Artzner, P. D. (1997). Thinking coherently. Risk.
6. Artzner, P. F. (1997). Thinking Coherently. Risk, 10(11), 68-71.
7. Baki Billah, M. L. (2006, April–June). Exponential smoothing model selection for forecasting. International Journal of Forecasting, 22(2), 239-247.
8. Boudoukh, J. R. (1997). Investigation of a class of volatility estimators. Journal of Derivatives, 4(Spring), 63-71.
9. Claudio H. da S. Barbedo, G. S. (2005). Evaluation of Foreign Exchange Risk Capital Requirement Models. Brazilian Review of Finance, Vol 3, No 2.
10. Ministry of Communications, Republic of Kenya. (2004). Transformation of Road Transport Report. Nairobi.
11. Dorothy McCormick, W. M. (2012). Paratransit Operations and Regulation in Nairobi: Matatu Business Strategies and the Regulatory Regime. Unpublished.
12. Dorothy McCormick, W. M. (2013). Institutions and Business Strategies of Matatu Operators: A Case Study Report. Nairobi: African Centre of Excellence for Studies in Public and Non-Motorised Transport.
13. Erik Winands, S. L. (2009). Forecasting Volatility in the Stock Market. Amsterdam: VU University.
14. Gardner, E. J. (1985). Exponential smoothing: the state of the art. Journal of Forecasting, 1-28.
15. Gizycki., J. A. (1999). Value at Risk: Volatility and Forecasting of the Variance Covariance Matrix.
16. Huy-Nhiem Nguyen, Q. N. (2010). Exploring the Cost of Forecast Error in Inventory Systems. Fayetteville, AR 72701, U.S.A: Department of Industrial Engineering, University of Arkansas.
17. Kibua, P. O. (2004). Efforts to Improve Road Safety in Kenya: Achievements and Limitations of Reforms in the Matatu Industry. Nairobi: Institute of Policy Analysis and Research (IPAR).
18. Ladokhin, S. (2009). Volatility modeling in financial markets. Amsterdam: VU University.
19. Legal Notices Nos. 161, 8. a. (2003 and 2004). Republic of Kenya.
20. Madhavan, A., & Yang, J. (2003). Practical Risk Analysis for Portfolio Managers and Traders. ITG Inc.
21. McCormick, D. W. (2011). Business Strategies of Matatu Operators in Nairobi: A Scoping Study ACET Project 14 Paratransit operations and regulation in Nairobi. University of Nairobi.
22. McCormick, D. W. (2011). Institutions and business strategies of matatu operators in Nairobi: A scoping study ACET. African Centre of Excellence for Studies of Public and Non-Motorised Tr, Working Paper No. 14-2.
23. MOA. (2014, November 20). Our Profile. Retrieved from the Matatu Owners Association website: http://www.matatuownersassociation.com/index.php/about-us/our-profile
24. Mwita, P. (2003). Semi-Parametric Estimation of Conditional Quantiles for Time Series with Applications in Finance. PhD Thesis, University of Kaiserslautern.
25. Nyasetia, O. B. (2013). The influence of entrepreneurial personality, human capital and entry barriers on performance of entrepreneurs in the informal transport business in Nairobi, Kenya. Unpublished.
26. Pat. (2013, May 28). Value at risk with exponential smoothing. Retrieved from Portfolio Probe: http://www.portfolioprobe.com/2013/05/28/value-at-risk-with-exponential-smoothing/.
27. Poon, S.-H. (2005). A Practical Guide to Forecasting Financial Market Volatility (p. 1). John Wiley & Sons.
28. Rau-Bredow, H. (2002). Value at Risk, Expected Shortfall, and Marginal Risk Contribution. Leo Weismantel Str. 4 D-97074 Würzburg.
29. Roussas, G. G. (1973). A First Course in Mathematical Statistics. Addison-Wesley.
30. S. Srinidhi, A. K. (2013). A conceptual model for demand forecasting and service positioning in the airline industry. Journal of Modelling in Management, 8(1), 123 – 139.
31. S., E. (2015, March 3). Four Principles for Great Sales Forecasts. Retrieved from Forbes: www.forbes.com
32. Sirikhorn Klindokmai, P. N. (2014). Evaluation of forecasting models for air cargo. The International Journal of Logistics Management, 25(3), 635 - 655.
33. Suwanvijit, W. C. (2009). Statistical model for short-term forecasting sparkling beverage sales in Southern Thailand. International Business and Economics Research Journal, 73-81.
34. Tasche, C. A. (2001, May 9). Expected Shortfall: a natural coherent alternative to Value at Risk. Milano, Italy. Retrieved from http://arxiv.org/pdf/cond-mat/0105191.pdf
35. Taylor, J. W. (2008). Using Exponentially Weighted Quantile Regression to Estimate Value at Risk and Expected Shortfall. Journal of Financial Econometrics, 6, 382-406.
36. Tesárová, B. V., & Gapko, P. P. (2012). Value at Risk: GARCH vs. Stochastic Volatility Models: Empirical Study. Prague: Charles University in Prague.
37. Tian, G. J. (2008). Forecasting Volatility Using Long Memory and Comovements: An application to option valuation under SFAS 123R. Forthcoming Journal of Financial and Quantitative Analysis.
38. Winands, S. L. (2009). Forecasting Volatility in the Stock Market. Amsterdam: VU University Amsterdam.
39. Yamai, Y. Y. (2001). On the validity of value-at-risk: comparative analyses with Expected Shortfall. Institute for Monetary and Economic Studies, Bank of Japan.
40. Yoshiba, Y. Y. (2002). On the Validity of Value-at-Risk: Comparative Analyses with Expected Shortfall. Research Division I, Institute for Monetary and Economic Studies, Bank of Japan.
