In a comment on a post earlier today, Stephen Gordon quite rightly questioned the use of GMM estimation with relatively small sample sizes. The GMM estimator is weakly consistent, the "t-test" statistics associated with the estimated parameters are asymptotically standard normal, and the J-test statistic is asymptotically chi-square distributed under the null. All of these are large-sample results.

Of course, this question applies to almost all of the estimators that we use in practice - IV, MLE, GMM, etc. Indeed, lots of work has been done to explore the finite-sample properties of such estimators. For instance, consider my own work on bias corrections for MLEs (see here, here, and here). So, I'm more than sympathetic to the general point that Stephen made.

The example that I had provided, which was just a teaching example, used 175 quarterly observations. Is this enough for the asymptotics to "kick in" for GMM estimation of an Euler equation of the type I was considering? My first reaction was: "let's just bootstrap this thing and find out." My second reaction was: "there has to be plenty of evidence out there already, so let's not re-invent the wheel." Indeed, this is the case.

Several studies have examined the performance of GMM in precisely the context that I was using it in my own example. Three relevant studies are those of Tauchen (1986), Kocherlakota (1990), and Hansen et al. (1996).

Tauchen considers sample sizes of 50 and 75 - much smaller than I was using. His main findings:

1. "The test of the overidentifying restrictions performs well in small samples; if anything, the test is biased toward acceptance of the null hypothesis."
2. "There is a variance/bias trade-off regarding the number of lags used to form instruments: with short lags, the estimates of utility function parameters are nearly asymptotically optimal, but with longer lags the estimates concentrate around biased values and confidence intervals become misleading."

Kocherlakota considers a sample of T = 90 observations. His main findings:

1. The J-test exhibits minimal size distortion in four of the seven experimental designs considered, and is biased toward over-rejection of the null in the other three cases.
2. These outcomes depend very much on the choice of instruments.
3. In the three cases of over-rejection on the part of the J-test, the estimates of the parameters are downward median-biased. However, they are essentially median-unbiased in the other four cases considered.

Hansen et al. consider a sample size of 100 for the part of their study most relevant here. The finite-sample properties of the GMM estimator depend very much on the way in which the moment conditions are weighted. More specifically:

1. "Continuous updating in conjunction with criterion-function-based inference often performed better than other methods for annual data; however, the large-sample approximations are still not very reliable." (p. 278)
2. "The continuous-updating estimator typically had less median bias than the other estimators, but the Monte Carlo sample distributions for this estimator sometimes had much fatter tails." (p. 278)
3. "The tests for overidentifying restrictions are, by construction, more conservative when the weighting matrix is continuously updated, and in many cases this led to a more reliable test statistic."

I'm glad that I used the continuous-updating version of the GMM estimator in my illustration.

So, where does that leave us? With T = 175, I'm in a somewhat better position than those considered in the simulation studies I've just cited.