svyglm.Rd
Fit a generalised linear model to data from a complex survey design, with inverse-probability weighting and design-based standard errors.
# S3 method for survey.design
svyglm(formula, design, subset=NULL,
       family=stats::gaussian(), start=NULL, rescale=TRUE, ..., deff=FALSE,
       influence=FALSE)
# S3 method for svyrep.design
svyglm(formula, design, subset=NULL,
       family=stats::gaussian(), start=NULL, rescale=NULL, ..., rho=NULL,
       return.replicates=FALSE, na.action, multicore=getOption("survey.multicore"))
# S3 method for svyglm
summary(object, correlation=FALSE, df.resid=NULL, ...)
# S3 method for svyglm
predict(object, newdata=NULL, total=NULL,
        type=c("link", "response", "terms"),
        se.fit=(type != "terms"), vcov=FALSE, ...)
# S3 method for svrepglm
predict(object, newdata=NULL, total=NULL,
        type=c("link", "response", "terms"),
        se.fit=(type != "terms"), vcov=FALSE,
        return.replicates=!is.null(object$replicates), ...)
formula: Model formula.
design: Survey design from svydesign or svrepdesign. Must contain all variables in the formula.
subset: Expression to select a subpopulation.
family: family object for glm.
start: Starting values for the coefficients (needed for some uncommon link/family combinations).
rescale: Rescaling of weights, to improve numerical stability. The default rescales weights to sum to the sample size. Use FALSE to not rescale weights. For replicate-weight designs, use TRUE to rescale weights to sum to 1, as was the case before version 3.34.
...: Other arguments passed to glm or summary.glm.
rho: For replicate BRR designs, specifies the parameter for Fay's variance method, giving weights of rho and 2-rho.
return.replicates: Return the replicates as the replicates component of the result? (For predict, only possible if they were computed in the svyglm fit.)
deff: Estimate the design effects.
influence: Return influence functions.
object: A svyglm object.
correlation: Include the correlation matrix of parameters?
na.action: Handling of NAs.
multicore: Use the multicore package to distribute replicates across processors?
df.resid: Optional denominator degrees of freedom for Wald tests.
newdata: New data frame for prediction.
total: Population size, when predicting a population total.
type: Linear predictor (link) or response.
se.fit: If TRUE, return variances of predictions.
vcov: If TRUE and se=TRUE, return the full variance-covariance matrix of predictions.
For binomial and Poisson families use family=quasibinomial() and family=quasipoisson() to avoid a warning about non-integer numbers of successes. The 'quasi' versions of the family objects give the same point estimates and standard errors and do not give the warning.
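A brief sketch using the api data from the examples below (the model formula is an illustrative choice, not taken from this help page): the binomial and quasibinomial fits agree, but only the former warns.

data(api)
dstrat<-svydesign(id=~1, strata=~stype, weights=~pw, data=apistrat, fpc=~fpc)
## warns about non-integer numbers of successes
fit.bin<-svyglm(sch.wide~meals, design=dstrat, family=binomial())
## same point estimates and standard errors, no warning
fit.qbin<-svyglm(sch.wide~meals, design=dstrat, family=quasibinomial())
all.equal(coef(fit.bin), coef(fit.qbin))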
If df.resid is not specified, the df for the null model is computed by degf and the residual df computed by subtraction. This is recommended by Korn and Graubard and is correct for PSU-level covariates but is potentially very conservative for individual-level covariates. To get tests based on a Normal distribution use df.resid=Inf, and to use the number of PSUs minus the number of strata, specify df.resid=degf(design).
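A minimal sketch of these alternatives, using the api cluster sample from the examples below (the model is illustrative):

data(api)
dclus2<-svydesign(id=~dnum+snum, weights=~pw, data=apiclus2)
fit<-svyglm(api00~ell+meals, design=dclus2)
summary(fit)                          ## default denominator df
summary(fit, df.resid=degf(dclus2))   ## number of PSUs minus number of strata
summary(fit, df.resid=Inf)            ## tests based on a Normal distribution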
Parallel processing with multicore=TRUE is helpful only for fairly large data sets and on computers with sufficient memory. It may be incompatible with GUIs, although the Mac Aqua GUI appears to be safe.
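A sketch, assuming a replicate-weight design and that multicore support is available on the machine (the design and model are illustrative):

data(api)
rclus1<-as.svrepdesign(svydesign(id=~dnum, weights=~pw, data=apiclus1, fpc=~fpc))
## distribute the replicate fits across processors
fit.mc<-svyglm(api00~ell+meals, design=rclus1, multicore=TRUE)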
predict gives fitted values and sampling variability for specific new values of covariates. When newdata are the population mean it gives the regression estimator of the mean, and when newdata are the population totals and total is specified it gives the regression estimator of the population total. Regression estimators of mean and total can also be obtained with calibrate.
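A sketch of the regression estimator of a mean, using the api data from the examples below (the covariate choice is illustrative): supplying the population means of the covariates as newdata gives the regression estimator of mean api00.

data(api)
dstrat<-svydesign(id=~1, strata=~stype, weights=~pw, data=apistrat, fpc=~fpc)
fit<-svyglm(api00~ell+meals, design=dstrat)
pop.means<-data.frame(ell=mean(apipop$ell, na.rm=TRUE),
                      meals=mean(apipop$meals, na.rm=TRUE))
predict(fit, newdata=pop.means)   ## regression estimator of the mean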
svyglm always returns 'model-robust' standard errors; the Horvitz-Thompson-type standard errors used everywhere in the survey package are a generalisation of the model-robust 'sandwich' estimators. In particular, a quasi-Poisson svyglm will return correct standard errors for relative risk regression models.
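As a sketch of such a relative risk model with the api data (a binary outcome with a log link; the covariates are an illustrative choice, not from this help page):

data(api)
dstrat<-svydesign(id=~1, strata=~stype, weights=~pw, data=apistrat, fpc=~fpc)
rr.fit<-svyglm(I(sch.wide=="Yes")~meals+mobility, design=dstrat,
               family=quasipoisson(link="log"))
exp(coef(rr.fit))   ## estimated risk ratios; summary(rr.fit) gives design-based SEs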
This function does not return the same standard error estimates for the regression estimator of population mean and total as some textbooks, or SAS. However, it does give the same standard error estimator as estimating the mean or total with calibrated weights.

In particular, under simple random sampling with or without replacement there is a simple rescaling of the mean squared residual to estimate the mean squared error of the regression estimator. The standard error estimate produced by predict.svyglm has very similar (asymptotically identical) expected value to the textbook estimate, and has the advantage of being applicable when the supplied newdata are not the population mean of the predictors. The difference is small when the sample size is large, but can be appreciable for small samples. You can obtain the other standard error estimator by calling predict.svyglm with the covariates set to their estimated (rather than true) population mean values.
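A sketch of that calculation, continuing the illustrative model above: replace the true population means by the estimated (sample-weighted) means from svymean.

data(api)
dstrat<-svydesign(id=~1, strata=~stype, weights=~pw, data=apistrat, fpc=~fpc)
fit<-svyglm(api00~ell+meals, design=dstrat)
est.means<-as.data.frame(t(coef(svymean(~ell+meals, dstrat))))
predict(fit, newdata=est.means)   ## the alternative standard error estimator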
svyglm returns an object of class svyglm. The predict method returns an object of class svystat.

See also: glm, which is used to do most of the work; regTermTest, for multiparameter tests; calibrate, for an alternative way to specify regression estimators of population totals or means; svyttest, for one-sample and two-sample t-tests.
Lumley T, Scott A (2017) "Fitting Regression Models to Survey Data" Statistical Science 32: 265-278
data(api)
dstrat<-svydesign(id=~1,strata=~stype, weights=~pw, data=apistrat, fpc=~fpc)
dclus2<-svydesign(id=~dnum+snum, weights=~pw, data=apiclus2)
rstrat<-as.svrepdesign(dstrat)
rclus2<-as.svrepdesign(dclus2)
summary(svyglm(api00~ell+meals+mobility, design=dstrat))
#>
#> Call:
#> svyglm(formula = api00 ~ ell + meals + mobility, design = dstrat)
#>
#> Survey design:
#> svydesign(id = ~1, strata = ~stype, weights = ~pw, data = apistrat,
#> fpc = ~fpc)
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 820.8873 10.0777 81.456 <2e-16 ***
#> ell -0.4806 0.3920 -1.226 0.222
#> meals -3.1415 0.2839 -11.064 <2e-16 ***
#> mobility 0.2257 0.3932 0.574 0.567
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> (Dispersion parameter for gaussian family taken to be 5171.966)
#>
#> Number of Fisher Scoring iterations: 2
#>
summary(svyglm(api00~ell+meals+mobility, design=dclus2))
#>
#> Call:
#> svyglm(formula = api00 ~ ell + meals + mobility, design = dclus2)
#>
#> Survey design:
#> svydesign(id = ~dnum + snum, weights = ~pw, data = apiclus2)
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 811.4907 30.8795 26.279 <2e-16 ***
#> ell -2.0592 1.4075 -1.463 0.152
#> meals -1.7772 1.1053 -1.608 0.117
#> mobility 0.3253 0.5305 0.613 0.544
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> (Dispersion parameter for gaussian family taken to be 8363.101)
#>
#> Number of Fisher Scoring iterations: 2
#>
summary(svyglm(api00~ell+meals+mobility, design=rstrat))
#>
#> Call:
#> svyglm(formula = api00 ~ ell + meals + mobility, design = rstrat)
#>
#> Survey design:
#> as.svrepdesign.default(dstrat)
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 820.8873 10.5190 78.038 <2e-16 ***
#> ell -0.4806 0.4060 -1.184 0.238
#> meals -3.1415 0.2939 -10.691 <2e-16 ***
#> mobility 0.2257 0.4515 0.500 0.618
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> (Dispersion parameter for gaussian family taken to be 1029221)
#>
#> Number of Fisher Scoring iterations: 2
#>
summary(svyglm(api00~ell+meals+mobility, design=rclus2))
#>
#> Call:
#> svyglm(formula = api00 ~ ell + meals + mobility, design = rclus2)
#>
#> Survey design:
#> as.svrepdesign.default(dclus2)
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 811.4907 37.1474 21.845 <2e-16 ***
#> ell -2.0592 1.6177 -1.273 0.211
#> meals -1.7772 1.4901 -1.193 0.241
#> mobility 0.3253 0.6307 0.516 0.609
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> (Dispersion parameter for gaussian family taken to be 1045388)
#>
#> Number of Fisher Scoring iterations: 2
#>
## use quasibinomial, quasipoisson to avoid warning messages
summary(svyglm(sch.wide~ell+meals+mobility, design=dstrat,
family=quasibinomial()))
#>
#> Call:
#> svyglm(formula = sch.wide ~ ell + meals + mobility, design = dstrat,
#> family = quasibinomial())
#>
#> Survey design:
#> svydesign(id = ~1, strata = ~stype, weights = ~pw, data = apistrat,
#> fpc = ~fpc)
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.835837 0.455626 1.834 0.0681 .
#> ell -0.002490 0.013252 -0.188 0.8512
#> meals -0.003152 0.009199 -0.343 0.7322
#> mobility 0.060897 0.031935 1.907 0.0580 .
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> (Dispersion parameter for quasibinomial family taken to be 1.108677)
#>
#> Number of Fisher Scoring iterations: 5
#>
## Compare regression and ratio estimation of totals
api.ratio <- svyratio(~api.stu,~enroll, design=dstrat)
pop<-data.frame(enroll=sum(apipop$enroll, na.rm=TRUE))
npop <- nrow(apipop)
predict(api.ratio, pop$enroll)
#> $total
#> enroll
#> api.stu 3190038
#>
#> $se
#> enroll
#> api.stu 29565.98
#>
## regression estimator is less efficient
api.reg <- svyglm(api.stu~enroll, design=dstrat)
predict(api.reg, newdata=pop, total=npop)
#> link SE
#> 1 3187252 31072
## same as calibration estimator
svytotal(~api.stu, calibrate(dstrat, ~enroll, pop=c(npop, pop$enroll)))
#> total SE
#> api.stu 3187252 31072
## svyglm can also reproduce the ratio estimator
api.reg2 <- svyglm(api.stu~enroll-1, design=dstrat,
family=quasi(link="identity",var="mu"))
predict(api.reg2, newdata=pop, total=npop)
#> link SE
#> 1 3190038 29566
## higher efficiency by modelling variance better
api.reg3 <- svyglm(api.stu~enroll-1, design=dstrat,
family=quasi(link="identity",var="mu^3"))
predict(api.reg3, newdata=pop, total=npop)
#> link SE
#> 1 3247986 21129
## true value
sum(apipop$api.stu)
#> [1] 3196602