
Bayesian estimation of NIG-parameters by Markov chain Monte Carlo methods

Jostein Lillestøl

Institute of Finance and Management Science

The Norwegian School of Economics and Business Administration

Bergen, Norway

E-mail: jostein.lillestol@nhh.no

First draft September 16, 1998

Revision of January 15, 2001

Abstract

The Normal Inverse Gaussian (NIG) distribution recently introduced by Barndorff-Nielsen (1997) is a promising alternative for modelling financial data exhibiting skewness and fat tails. In this paper we explore the Bayesian estimation of NIG-parameters by Markov chain Monte Carlo methods.

KEY WORDS: Normal Inverse Gaussian distribution, Bayesian Analysis, Markov Chain Monte Carlo.

This work is partly done at Institut für Statistik und Ökonometrie, Humboldt-Universität zu Berlin, with support from the Ruhrgas Arena program.

1 Introduction

The empirical distributions of stock returns are typically skew and heavy-tailed, and different families of distributions have been proposed to model returns. Most notable is the stable Paretian family with a long history, see McCulloch (1996) and Adler et al. (1998). More recently the class of NIG-distributions, an acronym for Normal Inverse Gaussian, has been proposed by Barndorff-Nielsen (1997). He investigated its properties, the modelling of NIG-processes, the estimation of NIG-parameters and the fit to real financial data, see also Rydberg (1997). The NIG-framework has several desirable properties and opportunities for modelling. A number of problems in finance can be recast within this framework, thus taking skewness and heavy tails into account, as demonstrated by Lillestøl (2000).

In this paper we focus on the estimation of NIG-parameters from the Bayesian viewpoint using Markov chain Monte Carlo methods. We will explore a convenience prior leading to simple updating formulas. This is a conjugate prior when an unobserved variate is included in the parameter set. We will give some examples on estimating real and simulated data, and make comparisons with the maximum likelihood estimates.

2 The NIG-distribution and a convenience prior

The NIG-distribution may be characterized by four parameters μ, δ, α, β, which relate mainly to location, scale, peakedness/heavy tail and skewness respectively. The moment-generating function of a NIG(α, β, δ, μ)-variate X is given by

M_X(t) = E exp(tX) = exp(μt + δ(√(α² − β²) − √(α² − (β + t)²)))

From this it is easily derived (let γ = √(α² − β²))

EX = μ + δβγ⁻¹
var X = δα²γ⁻³
Skewness = 3βα⁻¹(δγ)^(−1/2)
Kurtosis = 3(1 + 4(β/α)²)(δγ)⁻¹
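These formulas are handy both for choosing priors and for the moment (MM) estimates used later. A small helper (Python; not part of the paper, and following the convention above that "Kurtosis" means excess kurtosis):

```python
import numpy as np

def nig_moments(alpha, beta, delta, mu):
    """Mean, variance, skewness and excess kurtosis of NIG(alpha, beta, delta, mu),
    computed straight from the formulas above."""
    gamma = np.sqrt(alpha**2 - beta**2)
    mean = mu + delta * beta / gamma
    var = delta * alpha**2 / gamma**3
    skew = 3 * beta / (alpha * np.sqrt(delta * gamma))
    kurt = 3 * (1 + 4 * (beta / alpha)**2) / (delta * gamma)
    return mean, var, skew, kurt
```

For instance, nig_moments(2, 1, 1, 2) gives mean ≈ 2.58 and standard deviation ≈ 0.88, close to the sample values reported for the NIG2121 data in Table 1 below.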


However, the distribution has a simple characterization, with X the marginal in (X, Z), where

X | Z = z ~ N(μ + βz, z)
Z ~ IG(δ, √(α² − β²)),  where 0 ≤ |β| < α

Here IG(δ, γ) is the well known inverted Gaussian distribution (also named Wald distribution), see Johnson, Kotz and Balakrishnan (1995).

Barndorff-Nielsen has studied the estimation of NIG-parameters by maximum likelihood methods. Given the complicated density this leads to likelihood equations that are hard to solve and require extensive programming combined with numerical skills. The program "hyp" developed by Blæsild et al. (1992) solves the task, but this may not be readily available. Another possibility is to use an EM type algorithm for maximum likelihood. This is more easily programmable, with less challenging numerics, as long as the computing environment supports easy calculation of Bessel functions, as demonstrated by Karlis (2000). Still another possibility is to use the simple characterization of the distribution above and the fact that IG-variates are easy to simulate, see Michael et al. (1976), and explore simulation-based approaches to the estimation problem within a Bayesian framework. The Bayesian approach to estimation of NIG parameters is concurrently explored by Karlis (2001), who also considers the case of covariates.

Now let Y = (X, Z) with distribution as in the characterization above, i.e. probability density f(y) = f(x, z) = f(z)f(x | z) where

f(x | z) = (2π)^(−1/2) z^(−1/2) exp(−(1/2z)(x − (μ + βz))²)
f(z) = (2π)^(−1/2) δ e^{δγ} z^(−3/2) exp(−(1/2)(δ²z⁻¹ + γ²z))

that is

f(x, z) ∝ δ e^{δγ − βμ} z⁻² exp(βx + μxz⁻¹ − (1/2)(β² + γ²)z − (1/2)(μ² + δ²)z⁻¹)

If we let θ = (μ, δ, β, γ) we see that the joint density is within the exponential family

f(y) = g(θ) h(y) exp(Σ_{j=1}^{4} θ_j(θ) t_j(y))

where

θ₁(θ) = β
θ₂(θ) = μ
θ₃(θ) = −(β² + γ²)
θ₄(θ) = −(μ² + δ²)

and

t₁(y) = x
t₂(y) = xz⁻¹
t₃(y) = (1/2)z
t₄(y) = (1/2)z⁻¹
g(θ) = δ e^{δγ − βμ}

This means that the conjugate prior distribution of θ is then of the form

p(θ) ∝ g(θ)^{a₀} exp(Σ_{j=1}^{4} θ_j(θ) a_j)

where (a₀, a₁, a₂, a₃, a₄) is a vector of superparameters, where a₀, a₃ and a₄ > 0. Given n independent realizations y₁, y₂, …, y_n of the variate Y, the posterior of θ is of the same form with

a₀′ = a₀ + n  and  a_j′ = a_j + Σ_{i=1}^{n} t_j(y_i)

This is an augmented posterior in the sense that we are really interested in a distribution of θ given the observable X, and the Z in Y = (X, Z) is treated as unobservable or missing. However, we can get at the desired posterior by using Markov chain Monte Carlo ideas.

A closer look at the chosen distribution for θ = (μ, δ, β, γ) shows that (β, μ) is independent of (δ, γ) and that (β, μ) is bivariate normal with correlation

ρ = −(1/2) a₀/√(a₃a₄)

and prior means and variances

μ_μ = (1/(2(1 − ρ²)a₄)) (a₂ − (a₀/(2a₃)) a₁)
μ_β = (1/(2(1 − ρ²)a₃)) (a₁ − (a₀/(2a₄)) a₂)
σ_μ² = 1/(2(1 − ρ²)a₄)
σ_β² = 1/(2(1 − ρ²)a₃)

Note therefore the expressions for μ_μ/σ_μ² and μ_β/σ_β² given by the two parentheses respectively. Note also the binding equation σ_β²/σ_μ² = a₄/a₃. By the reparametrization of (β, μ) to (β, ν), where ν = μ + (a₀/(2a₄))β, we get ν independent of β.

The distribution of (δ, γ) is

p(δ, γ) ∝ δ^{a₀} exp(−a₃γ² + a₀δγ − a₄δ²)  for δ ≥ 0, γ ≥ 0.

By a linear transformation to get rid of the cross-term it is seen that

δ² ~ Gamma((a₀ + 1)/2, a₄ − a₀²/(4a₃))
γ | δ ~ Normal((a₀/(2a₃))δ, 1/(2a₃)) truncated at zero
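Taken together, a draw of θ = (μ, β, δ, γ) from a distribution of this conjugate form, given (a₀, …, a₄), reduces to one bivariate normal, one gamma and one truncated normal draw. A minimal sketch (Python; my illustration, not the paper's code):

```python
import numpy as np

def draw_theta(a, rng):
    """Draw (mu, beta, delta, gamma) from the conjugate form with
    superparameters a = (a0, ..., a4); requires a3*a4 > a0**2/4."""
    a0, a1, a2, a3, a4 = a
    rho = -0.5 * a0 / np.sqrt(a3 * a4)
    c = 1.0 / (2.0 * (1.0 - rho**2))
    mean = [c / a3 * (a1 - a0 / (2 * a4) * a2),   # E(beta)
            c / a4 * (a2 - a0 / (2 * a3) * a1)]   # E(mu)
    cov = np.linalg.inv(np.array([[2 * a3, a0], [a0, 2 * a4]]))  # inverted precision
    beta, mu = rng.multivariate_normal(mean, cov)
    delta = np.sqrt(rng.gamma((a0 + 1) / 2, 1.0 / (a4 - a0**2 / (4 * a3))))
    gamma = -1.0
    while gamma < 0:   # Normal((a0/2a3)*delta, 1/2a3) truncated at zero, by rejection
        gamma = rng.normal(a0 / (2 * a3) * delta, np.sqrt(1.0 / (2 * a3)))
    return mu, beta, delta, gamma
```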

In the case that the conjugate prior is too restrictive to represent our data, it may be worthwhile to introduce another layer of superparameters. Most convenient is to take the aᵢ's to be independent and

aᵢ ~ Normal(cᵢ, bᵢ)  for i = 1, 2
aᵢ ~ Exponential(bᵢ)  for i = 0, 3, 4.

If we let a = (a₀, a₁, a₂, a₃, a₄), the posterior of a given θ is determined as

a₁ ~ Normal(c₁ + b₁β, b₁)
a₂ ~ Normal(c₂ + b₂μ, b₂)
a₃ ~ Exponential(b₃ + β² + γ²)
a₄ ~ Exponential(b₄ + μ² + δ²)
a₀ ~ Exponential(b₀ + βμ − δγ − log(δ))
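In code this extra layer is one more block of draws per sweep. A sketch under the same assumptions as above (bᵢ are the variances of the normal layers, and the exponential rates are assumed to stay positive):

```python
import numpy as np

def draw_a(theta, b, c, rng):
    """Redraw the superparameters a = (a0, ..., a4) given theta in the
    two-layer model; b = (b0, ..., b4) and c = (c1, c2) as in the text."""
    mu, beta, delta, gamma = theta
    a1 = rng.normal(c[0] + b[1] * beta, np.sqrt(b[1]))
    a2 = rng.normal(c[1] + b[2] * mu, np.sqrt(b[2]))
    a3 = rng.exponential(1.0 / (b[3] + beta**2 + gamma**2))
    a4 = rng.exponential(1.0 / (b[4] + mu**2 + delta**2))
    a0 = rng.exponential(1.0 / (b[0] + beta * mu - delta * gamma - np.log(delta)))
    return a0, a1, a2, a3, a4
```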

(6)

(;) and thatof(Æ;), and thattheconjugate priorimposescrossrestric-

tions. These may be unrealistic and not remediedbyintroducingsuperpa-

rameters. A possibilitywould be to model the two parts separately,which

leadsto estimation of heteroscedastic normal regression and estimation of

inverse Gaussian parameters, as suggested by Karlis (2001). There exists

various partametrizations of the IG-distributionthat may lead to dierent

solutionsto thisBayesianestimationproblem.

3 An MCMC scheme for the posterior

In order to get at the posterior p(θ | x) we use the MCMC scheme based on full conditionals.

Let w = (x, z, θ), where x is observed, z unobserved and θ the parameters. Let w_s be a subset of the components of w and w_{−s} the complementary set. Then

p(w_s | w_{−s}) ∝ p(w)

where only the factors involving components of w_s in any product formula need to be retained. A special case of this is p(θ, z | x), where the marginal p(θ | x) is our interest. Various schemes for sampling from the posterior are given in Robert and Casella (1999). The NIG-model fits under the heading of data augmentation, which is a special case of the Gibbs sampler.

The full conditionals sufficient for the current problem are given by

1. p(θ, z | x) ∝ p(x, z | θ) p(θ)
2. p(θ_s | x, z, θ_{−s}) ∝ p(x, z | θ) p(θ_s | θ_{−s})
3. p(z | x, θ) ∝ p(x | z, θ) p(z | θ)

If this is written out in our case, we will see that formula 3 leads to

Z_i i.i.d. IG*(δ′, γ′)

where IG*(δ, γ) denotes the distribution with density similar to IG, but with z⁻² as multiplicative factor replacing z^(−3/2). Both distributions are members of the family of generalized inverse Gaussian distributions GIG(λ, δ, γ), with λ = −1/2 and λ = −1 respectively.

Here

δ′ = (δ² + (X_i − μ)²)^(1/2)
γ′ = √(β² + γ²) = α

The Z_i's can be simulated using the ideas in Michael et al. (1976), or by rejection sampling methods, as described in some detail in the appendix.

Formula 2 for the complete parameter set, as well as for (β, μ) and (γ, δ) separately, just reiterates the result of our choice of conjugate prior for the augmented (X, Z), where the superparameters are updated according to

a₀′ = a₀ + n
a₁′ = a₁ + Σ Xᵢ
a₂′ = a₂ + Σ Zᵢ⁻¹Xᵢ
a₃′ = a₃ + (1/2) Σ Zᵢ
a₄′ = a₄ + (1/2) Σ Zᵢ⁻¹

New θ can then be simulated according to the distributional structure given above. That is, (β, μ) bivariate normal, δ² Gamma, and γ computed from δ and a simulated Normal variate.
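Putting the pieces together, one data-augmentation sweep alternates between the latent Z's and θ. A minimal sketch (Python; my illustration, not the paper's implementation), where rgig_star is an IG* sampler as in the appendix and draw_theta is the conjugate-form sampler sketched in section 2:

```python
import numpy as np

def gibbs_sweep(x, theta, a_prior, rng):
    """One sweep: draw Z_i ~ IG*(delta'_i, alpha) from formula 3, update the
    superparameters by the formulas above, then redraw theta (formula 2)."""
    mu, beta, delta, gamma = theta
    alpha = np.sqrt(beta**2 + gamma**2)
    delta_i = np.sqrt(delta**2 + (x - mu)**2)
    z = np.array([rgig_star(d, alpha, rng) for d in delta_i])
    a0, a1, a2, a3, a4 = a_prior
    a_post = (a0 + len(x),
              a1 + x.sum(),
              a2 + (x / z).sum(),
              a3 + 0.5 * z.sum(),
              a4 + 0.5 * (1.0 / z).sum())
    return draw_theta(a_post, rng), z
```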

In the case of another layer of parameters we get another set to visit in each round, determined by the posterior given at the end of the previous section.

4 The choice of superparameter values

We will now examine the choice of superparameters in order to reflect our (lack of) knowledge about parameters. It is of course convenient to have few superparameters to address, as is the case with our conjugate prior. A drawback may be little flexibility, i.e. our choices affect the parameters jointly in a manner which may not be transparent. A basic restriction for the expressions of the variances to stay positive is

a₃a₄ > (1/4)a₀²

From the five updating equations we see that more information is accumulated as a₀, a₃ and a₄ increase (since the z's are non-negative). Expressing ignorance suggests small aᵢ's, but with some prior knowledge, and choosing a larger a₀, this has to go along with larger a₃ and/or a₄ as well to match the restriction. The parameter ρ may then be helpful for calibration purposes.

A possible consequence of few parameters is that when we try to express ignorance in some sense, it may have unwanted and even contradictory implications. This is in fact the case here, where a₀ = 0 at first sight is a natural choice for ignorance. This means that ρ = 0 and consequently

μ_μ = a₂/(2a₄)
μ_β = a₁/(2a₃)
σ_μ² = 1/(2a₄) = Eδ²
σ_β² = 1/(2a₃) = Eγ²

i.e. we have implicitly assumed that the more certain you are about μ (resp. β), the smaller you expect δ (resp. γ) to be, and, judged from the corresponding variances of δ² and γ², you are even more certain about that. So, a₀ = 0 is an ignorant's choice to represent ignorance!

It is likely that opinions are initially formed by observed centers and shapes of empirical distributions. It seems therefore natural to first reflect on the parameters (μ, β) (stage 1), and then on (δ, γ) (stage 2). We may first ask to what extent our knowledge of either of these is affected by the other. In order to match the expression for the expected value EX = μ + δβγ⁻¹, a large/small μ departing from its prior expected value has to be balanced by a small/large β compared to its expectation, i.e. μ and β have to be negatively correlated, as reflected by the conjugate prior. It is easily checked that the linear combination ν = μ + tβ with the smallest prior variance is given by

ν = μ + (a₀/(2a₄))β

Note from section 2 that this ν is stochastically independent of β. EX is such a linear combination, and it is not unreasonable to assume that we are more certain about EX than about any other linear combination of μ and β. This assumption, as well as its immediate consequences, will be referred to later as A0. We see that the corresponding prior mean and variance are

A0:  μ_ν = a₂/(2a₄),  σ_ν² = 1/(2a₄)

Note that a₀ now has disappeared and that the expressions are the same as for μ in the case of a₀ = 0. If we go back to the original expression for EX and use the prior independence of (β, μ) and (δ, γ) we get the equation

A0:  E(δ/γ) = a₀/(2a₄) = −ρ√(a₃/a₄)

i.e. our assumption A0 has implications for the two other parameters as well. Note also that if the "ignorance" assumption a₀ = 0 is used in conjunction with A0, it implies that δ = 0, which leads to the one-point distribution at μ, which is quite the opposite of ignorance!

Let assumption A1 be that the prior mean μ_ν equals zero. We then get

A0 + A1:  a₂ = 0

In the case of a non-zero prior mean, we could as well subtract the prior mean from all observations and start from there.

In order to determine a₁ we have to be more specific about β or μ. In the case of a₂ = 0 we see that

μ_μ/μ_β = −a₀/(2a₄),  μ_μ/σ_μ² = −(a₀/(2a₃))a₁,  μ_β/σ_β² = a₁.

Thus μ_β determines the sign of a₁, and μ_μ gets the opposite sign, in this case, which holds in particular for A0 + A1.

Let us also look into assuming μ_μ = 0 (assumption A2) and μ_β = 0 (assumption A3). This gives the following restrictions on the superparameters respectively:

A2:  a₂ = (a₀/(2a₃)) a₁
A3:  a₁ = (a₀/(2a₄)) a₂

with obvious inequalities for the prior expectations less than or greater than zero. Note that any two of A1, A2 and A3 taken together are equivalent and correspond to a₁ = a₂ = 0, omitting the case of ρ² = 1 leading to infinite prior variances of μ, β and δ².

We may just want to combine A0 and A2 to get

A0 + A2:  E(δ/γ) = (a₃/a₄)(a₂/a₁)

Consider next the prior of (δ, γ) (stage 2). We typically know less about this than about the prior of (β, μ), and it is likely that we want to express ignorance. We therefore have to choose parameters at the first stage carefully in order to give room for this. Note that the prior of (δ, γ) is determined by the parameters a₀, a₃ and a₄. Recall the binding restriction above, where in fact the ratio of a₃ and a₄ is determined by the prior variances of μ and β. As said earlier this is an unwanted restriction, which we have to balance off according to where our knowledge is best. Note that

δ² ~ Gamma((a₀ + 1)/2, a₄(1 − ρ²))

It will be of interest to see numerically how ad hoc assumptions representing various types of knowledge will work, when trying to balance or ignore the consequences for other parameters. Among these are ignorance assumptions taking a₀ = 1 or a₀ = 2 (assumptions B1 and B2), and saying you are equally uncertain about μ and β, thus taking a₃ = a₄. Of particular interest is the case a₀ = a₃ = a₄, which means that ρ = −1/2.

5 Examples

Reliable estimates of the parameters of a heavy tailed distribution will require a minimum of observations in the tails, and small data sets of independent observations are not likely to give good results. Our experience so far with the NIG family based on simulated data suggests that difficulties may arise even for 100 observations and that about 400 observations are desirable, see Lillestøl (2000). We provide here two examples of NIG parameter fitting, one for simulated data and one for finance data. The first data set NIG2121 is 400 simulated independent NIG(2,1,2,1) variates. The second data set FTARET is the monthly nominal returns of the FT-Actuaries All-Share Index for the UK from January 1965 to December 1995 (372 observations). The empirical distributions are shown as histograms in Figure 1.

[Two panels: histogram of the simulated NIG(2,1,2,1) data and histogram of the FTA returns.]

Figure 1: Histograms of data sets NIG2121 and FTARET

Descriptive measures are given in Table 1.

           Mean   StdDev   Skewness   Kurtosis
NIG2121    2.55     0.85       1.22       3.51
FTARET     5.53     6.05       1.12      14.43

Table 1: Descriptive measures

(12)

parametersand two layers ofparameters (MC2)andcompare thiswiththe

correspondingML-estimate. TheMCestimationwithasinglelayerisbased

onapriortakinga

0

=a

3

=a

4

=1 anda

1

=a

2

=0and iterating10rounds

beforea sampleof ,,and Æ istaken. Thisisrepeated 400times and is

thebasisfor calculating theposterior meansand the smoothed histograms

to represent posteriordensities. The posteriormeans are estimated by the

average ofthesampledvalues. Estimates obtainedbyRao-Blackwellization

isalsocomputedforcomparisonandasacheckforconvergence. Forthetwo-

layerestimationthevaluesofthesuperparametersb

i

'sandc

i

'sarechosento

matchtheexpectedvaluesofthea

i

'sforthesingle-layerspecicationabove.

The ML-estimates are obtained by the program 'hyp', and the results are

conrmed by the EM-algorithm of Karlis mentioned above. We have also

computed moment estimates (MM) obtained by inverting the expressions

fortherst fourmomentsgiven insection 2.

                    α        β       μ       δ
NIG2121 (MC)     1.360    0.868   2.205   0.407
NIG2121 (MC2)    1.669    0.965   2.142   0.579
NIG2121 (ML)     2.490    1.343   1.876   1.049
NIG2121 (MM)     2.445    1.396   1.872   0.972
FTARET (MC)      0.823    0.742   1.428   1.971
FTARET (MC2)     0.818    0.733   1.486   2.019
FTARET (ML)      0.174   -0.004   5.662   5.697
FTARET (MM)      0.083    0.015   5.001   2.872

Table 2: Parameter estimates

We see that the parameter estimates of the ML-method and the MC-method turned out somewhat different for both data sets. Let us first comment on the NIG2121 data.

The MC-method has overestimated μ while the ML-method has underestimated this parameter. The other three parameters are underestimated by the MC-method and overestimated by the ML-method, except for δ, which is about on target by the latter method. The low estimate for δ by the MC-method is somewhat disturbing. We see that adding the second parameter layer has led to improvement for the MC-method. Now the estimates of both α and β are closer to their true values by this method than by the ML-method, with β about on target. The estimate of δ is also improved somewhat, but is still at some distance from the true value. We may also compare the first four moments, by plugging the estimated parameter values into the theoretical expressions for the moments in section 2 and comparing with their true values. The main difference between the methods is that MC attributes more skewness and kurtosis than the true ones of the parent distribution, while ML attributes less skewness and kurtosis, and is closer to "target".

The smoothed histograms of sampled posterior densities for NIG2121 based on the MC2 simulations are given in Figure 2.

[Four panels: posterior densities of α, μ, β and δ.]

Figure 2: Smoothed histograms of sampled posterior densities for NIG2121

Looking at the posterior distributions for the simulated data we see that the true parameter value for β is at the central part of the distribution, while the true α and μ are out in the tails. For δ we have been rather unsuccessful indeed!

For the data set FTARET there are even more striking differences. We see that the ML-method has given a much higher estimate for μ and δ than the MC-method. On the other hand both α and β are higher with the MC-method, and adding the second parameter layer has not led to appreciable changes. Here the MM-estimates differ more from the ML-estimates, but stay closer to the MC-estimates, except for the parameter δ.

The smoothed histograms of sampled posterior densities for FTARET based on the MC2 simulations are given in Figure 3.

[Four panels: posterior densities of α, μ, β and δ.]

Figure 3: Smoothed histograms of sampled posterior densities for FTARET

It may be instructive to compare the fits in terms of QQ-plots. For the NIG2121 data we may compare with the true distribution. Exact computations require access to a Bessel function routine, and we may as well simulate the distribution. We have simulated n=10000 observations from the true distribution as well as from the distributions with the parameters estimated by the ML-method and the MC-method respectively. The QQ-plot is given in Figure 4. We clearly see that the MC-fit is inferior to the ML-fit, and that the MC2-fit is an improvement which makes the fit comparable to the success of the ML-fit, but that the two fits obviously have some distinct features that separate them.

[Three QQ-plot panels: ML vs true, MC vs true and MC2 vs true for NIG2121.]

Figure 4: QQ-plot for distributions fitted by ML and MC vs true NIG2121

[Three QQ-plot panels: ML, MC and MC2 fits vs observed FTARET.]

Figure 5: QQ-plot for distributions fitted by ML and MC vs observed FTARET

For the FTARET data we have simulated observations from the distributions fitted by the ML-method and MC-method respectively. We chose here to simulate data of the same size, namely n=372 observations. The resulting QQ-plot is given in Figure 5. It seems that the ML-fit is superior to both MC-fits.

The comparisons above are based on limited experience so far, and more suitable choices of prior specifications may hopefully lead to further improvement. Admittedly, the results obtained with the current parametrization are not impressive, and alternative parametrizations should be compared before general conclusions can be drawn.

Acknowledgements

This research is supported by the Deutsch-Norwegisches Stipendprogramm - Ruhrgas Stipendium for visiting Institut für Statistik und Ökonometrie, Humboldt-Universität zu Berlin. I am grateful for this support, and to Wolfgang Härdle and his staff at Sonderforschungsbereich 373: Quantifikation und Simulation Ökonomischer Prozesse, for providing an excellent environment for research and the computing facilities based on their program XploRe. I also have to thank Michael Kjær Sørensen and Dimitris Karlis for providing the ML-estimates based on my data, and Dimitris Karlis for valuable personal communications.

References

Adler, R.J., Feldman, R.E. and Taqqu, M.S. (eds.) (1998) A Practical Guide to Heavy Tails: Statistical Techniques and Applications. Boston: Birkhäuser.

Barndorff-Nielsen, O. (1997) Normal inverse Gaussian processes and stochastic volatility modelling. Scandinavian Journal of Statistics, 24, 1-13.

Blæsild, P. and Sørensen, M.K. (1992) 'Hyp' - a computer program for analyzing data by means of the hyperbolic distribution. Research Report No. 300, Department of Theoretical Statistics, University of Aarhus.

Damien, P. and Walker, S. (1997) Sampling nonstandard distributions via the Gibbs sampler. Preprint.

Johnson, N.L., Kotz, S. and Balakrishnan, N. (1995) Continuous Univariate Distributions, vol. 2. New York: Wiley.

Karlis, D. (2000) An EM type algorithm for maximum likelihood estimation of the Normal-Inverse Gaussian distribution. Research report, Athens University of Economics and Business.

Karlis, D. (2001) Bayesian estimation for the NIG distribution. Personal communications.

Lillestøl, J. (2000) Risk analysis and the NIG distribution. Journal of Risk, 2, 41-55.

McCulloch, J.H. (1996) Financial applications of stable distributions. In Statistical Methods in Finance, Handbook of Statistics, vol. 14, Maddala, G.S. and Rao, C.R. (eds.) New York: North Holland.

Michael, J.R., Schucany, W.R. and Haas, R.W. (1976) Generating random variables using transformations with multiple roots. The American Statistician, 30, 88-89.

Robert, C.P. and Casella, G. (1999) Monte Carlo Statistical Methods. New York: Springer Verlag.

Rydberg, T.H. (1997) The normal inverse Gaussian Lévy process: Simulation and approximations. Communications in Statistics - Stochastic Models, 13, 887-910.

Appendix: Simulation of IG* variates

Let V be a chi-square variate with 2 degrees of freedom and compute the roots with respect to Z of

V = (γZ − δ)²/Z

They are given by

Z = δ/γ + (1/(2γ²))(V ± √(V² + 4δγV))

Let Z₁ and Z₂ be the minus and plus root respectively, and note that Z₂ = δ²/(γ²Z₁). Let κ = δ/γ and

Z = Z₁ with probability κ²/(κ² + Z₁²)
  = Z₂ with probability Z₁²/(κ² + Z₁²) = κ²/(κ² + Z₂²)

Then Z is IG*(δ, γ).
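A direct transcription of this construction (Python; my sketch, not verified against the original implementation):

```python
import numpy as np

def rgig_star(delta, gamma, rng):
    """Draw one IG*(delta, gamma) variate (GIG with lambda = -1)
    by the two-roots construction above."""
    v = rng.chisquare(2)
    kappa = delta / gamma
    z1 = kappa + (v - np.sqrt(v**2 + 4 * delta * gamma * v)) / (2 * gamma**2)  # minus root
    z2 = delta**2 / (gamma**2 * z1)                                            # plus root
    return z1 if rng.uniform() < kappa**2 / (kappa**2 + z1**2) else z2
```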

An alternative way of simulating IG* variates is by rejection sampling as follows:

1. Generate Z by Z⁻¹ being exponential with parameter λ = (1/2)δ².
2. Compute T = exp(−(1/2)γ²Z).
3. Generate U ~ Uniform[0,1]; if T > U keep Z, otherwise not.

This procedure follows from taking q(z | θ) = z⁻² exp(−(1/2)δ²z⁻¹) as envelope. A problem with this procedure is that many observations are likely to be rejected. Some improvement is obtained by taking Z⁻¹ ~ Gamma(k, λ) instead, and adjusting k to the situation at hand. However the improvement is only slight.
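The same scheme in code (again a sketch; the proposal draws 1/Z from an exponential with rate δ²/2, i.e. scale 2/δ²):

```python
import numpy as np

def rgig_star_reject(delta, gamma, rng):
    """IG*(delta, gamma) by rejection sampling with envelope
    q(z) proportional to z**(-2) * exp(-delta**2 / (2*z))."""
    while True:
        z = 1.0 / rng.exponential(scale=2.0 / delta**2)   # 1/Z ~ Exp(rate=delta**2/2)
        if rng.uniform() < np.exp(-0.5 * gamma**2 * z):   # accept with probability T
            return z
```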

A simulation method that works for any generalized inverse Gaussian distribution is proposed by Damien and Walker (1997), using a Gibbs sampler with auxiliary latent variables.
