5.5 Training and testing data

As an extension of the logistic regression technique, let’s use a logistic model “trained” on one portion of our PUMS data to predict the language outcome for a separate “testing” portion of the data, and then explicitly evaluate how well the model performs in that test relative to the actual values from the PUMS data. First, let’s use sample() in a similar way to how we used it in Chapter 4.2 to split our PUMS data. We’ll randomly assign roughly 80% of the PUMS records to a “training” set and the remaining 20% to a “testing” set for our model.

# Randomly flag each record as training (TRUE) or testing (FALSE),
# with roughly an 80/20 split; note that without set.seed(), the exact
# split (and downstream results) will vary slightly from run to run
sample <- sample(
  c(TRUE, FALSE), 
  nrow(bay_pums_language), 
  replace = TRUE, 
  prob = c(0.8, 0.2)
)

train <- bay_pums_language[sample, ]
test <- bay_pums_language[!sample, ]
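
To double-check the split, we can compute the share of records that ended up in train; it should land near 0.8, though the exact value will vary from run to run since we haven’t set a random seed:

# Share of records assigned to the training set; should be roughly 0.8
nrow(train) / nrow(bay_pums_language)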

Then, we construct a survey design object with svrepdesign() and fit the model with svyglm(), using train:

train_design <- svrepdesign(
  data = train,
  type = "ACS",
  # Subset the replicate weights with the same logical vector so they
  # stay aligned with the records in train
  repweights = bay_pums_language_wts[sample, ],
  weights = ~as.numeric(PWGTP)
)

train_model <- svyglm(
  formula = english ~ AGEP + white + hispanic,
  family = quasibinomial(),
  design = train_design
)
summary(train_model)
## 
## Call:
## svyglm(formula = english ~ AGEP + white + hispanic, family = quasibinomial(), 
##     design = train_design)
## 
## Survey design:
## svrepdesign.default(data = train, type = "ACS", repweights = bay_pums_language_wts[sample, 
##     ], weights = ~as.numeric(PWGTP))
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.2587429  0.0334595   7.733 3.57e-11 ***
## AGEP        -0.0097488  0.0005473 -17.813  < 2e-16 ***
## white        1.8656538  0.0315953  59.048  < 2e-16 ***
## hispanic    -1.7959523  0.0507220 -35.408  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for quasibinomial family taken to be 60055.67)
## 
## Number of Fisher Scoring iterations: 3
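
Since these coefficients are in log-odds units, one optional way to make them easier to read (an addition beyond the original output) is to exponentiate them into odds ratios:

# Odds ratios from the log-odds coefficients; e.g. exp(1.866) is roughly
# 6.5, meaning the modeled odds of speaking English are about 6.5 times
# higher for White respondents, holding age and Hispanic status constant
exp(coef(train_model))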

Then, we create a set of predictions for the test dataset with this trained model. Note that we can simply supply newdata = test; predict() will identify the correct fields from test to use for the independent variables, and the resulting predictions remain distinct from the “real” outcomes, which sit untouched in the field english.

test_predicted <-
  predict(train_model, newdata = test, type = "response")
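
The result is a vector of predicted probabilities, one per test record. If you’d like to inspect it (the exact values will depend on your random split):

# Each element is the predicted probability that english == 1 for that
# test record; values above 0.5 will be classified as "English" below
head(as.numeric(test_predicted))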

Finally, as one way of evaluating the performance of this model, we can create a 2x2 matrix using table(). table() builds contingency tables of counts: if we give it the “real” values of english (yes or no) as one vector, and our predicted probabilities converted to TRUE/FALSE with a 0.5 cutoff as the other, then table() will count each of the four possible combinations of these two vectors.

summary_2x2 <-
  test %>% 
  mutate(
    # Recode the 0/1 outcome into readable labels for the table
    english = ifelse(
      english == 1, 
      "Yes (English)", 
      "No (ESL)"
    )
  ) %>% 
  pull(english) %>% 
  # Cross-tabulate real outcomes against predictions, using a 0.5
  # probability cutoff to classify each record
  table(test_predicted > 0.5)
summary_2x2
##                
## .               FALSE TRUE
##   No (ESL)       4042 1783
##   Yes (English)  2067 6807

This 2x2 matrix can be read as follows:

  • The bottom-right cell is the number of test records which were truly English speakers, and which our model correctly predicted using just the variables of age, White/non-White, and Hispanic/non-Hispanic.
  • The top-left cell is the number of test records which were truly ESL speakers, which our model also correctly predicted. Between these two cells, about 74% of records were correctly predicted one way or the other (computed directly below).
  • The top-right cell is the number of test records which were ESL speakers, but which our model incorrectly predicted to be English speakers. These are called “Type I errors” or “false positives”.
  • The bottom-left cell is the number of test records which were English speakers, but which our model incorrectly predicted to be ESL speakers. These are called “Type II errors” or “false negatives”.
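
To compute that overall accuracy figure directly (a small addition beyond the original code), we can sum the diagonal of the matrix, which holds the correct predictions, and divide by the total:

# Overall accuracy: correct predictions (the diagonal) over all records;
# for the split shown above, (4042 + 6807)/14699, or roughly 0.74
sum(diag(summary_2x2)) / sum(summary_2x2)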

Depending on the purpose of the model, one may weigh these two kinds of error differently, but generally one would want to limit both false positives and false negatives. This “training” and “testing” technique can be applied to simple linear regression models too, where the objective may be to reduce the average error between the predicted results and the “real” results in the test data.
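
As a rough sketch of what that might look like, assuming a hypothetical continuous outcome field called some_outcome (not part of our PUMS data), one could fit an unweighted linear model on train and measure the root mean squared error of its predictions on test:

# Hypothetical example: some_outcome is a stand-in for any continuous
# outcome; a fully survey-aware version would use svyglm() instead of lm()
lm_model <- lm(some_outcome ~ AGEP + white + hispanic, data = train)
lm_predicted <- predict(lm_model, newdata = test)

# Root mean squared error of the predictions on the test set
sqrt(mean((test$some_outcome - lm_predicted)^2))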