
# weight - quality of surveys

This question was posted in the Assessment and Surveillance forum area and has 2 replies.

### Mark Myatt

Frequent user

20 Mar 2015, 11:09

I am not sure what you mean by "weight" in this context. I assume that by "age ratio" and "age and sex ratio" you mean simple tests that compare the observed distributions of age and sex against theoretical distributions of these variables and, when the null hypothesis is rejected, you mark down the quality of the survey. There are issues with this approach:

(1) The truth value of the hypothetical distribution. It is possible that (e.g.) a 934:1000 (M:F) sex ratio will not hold for the population. Looking at older people, you may find that there are more women than men, as women tend to live longer than men. Other issues that may cause odd sex ratios include sex-selective (usually female) infanticide, sex-selective abortion, exposure to pollution, and war. A uniform age ratio (i.e. an equal number in each age-band) may only apply over a narrow range of ages (e.g. year-centred age-bands of 1, 2, 3, 4, and 5 years for children). A strong but short-lived shock may remove a band of younger children from a population, and the resulting gap will appear at different ages as time passes. If the hypothetical population has a low truth value then you may have a "false discovery", in that you say that a survey has problems representing a population when it does not. That is, the survey may accurately reflect the true age and / or sex structure of the population, with the mismatch being between the true and the hypothetical distributions. Care needs to be taken with the selection of theoretical distributions and the interpretation of results.

(2) Survey sample sizes may be large. Hypothesis tests like the chi-square goodness-of-fit test are very strongly influenced by sample size (see this post). Small deviations that do not reach significance at small sample sizes may appear to be significant at large sample sizes. This is another false discovery problem. I think you can get round this by calculating effect sizes (e.g. the sum of proportional deviations from expected) and so have a quantitative measure of "quality".

(3) Survey designs may be complex. Most testing mechanisms ignore complex designs. This is probably not a great issue, as the tests are done on aggregated results. I would be tempted to use a blocked and weighted bootstrap with such samples.

I'm not sure this addresses the question, but it does show that the problem is not straightforward. I think that a combined score would be better than a score based on a single variable. I hope this is of some use.
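The sample-size problem in point (2) is easy to demonstrate. Here is a minimal sketch, with hypothetical counts: the 3% deviation, the sample sizes, and the function names are illustrative assumptions, not values from the thread. It shows the same proportional deviation from a 934:1000 (M:F) sex ratio flipping from non-significant to significant as the sample grows, while an effect-size measure (sum of absolute proportional deviations from expected) stays constant.

```python
# Sketch with made-up numbers: same relative deviation, growing sample size.

def chisq_gof(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def effect_size(observed, expected):
    """Sum of absolute proportional deviations from expected counts."""
    return sum(abs(o - e) / e for o, e in zip(observed, expected))

CRIT_1DF_5PCT = 3.841  # chi-square critical value, df = 1, alpha = 0.05

p_male = 934 / (934 + 1000)  # expected male proportion under a 934:1000 ratio

for n in (200, 2000, 20000):
    # Observed counts deviate from expected by the same 3% of n at every n.
    exp = [n * p_male, n * (1 - p_male)]
    obs = [exp[0] + 0.03 * n, exp[1] - 0.03 * n]
    x2 = chisq_gof(obs, exp)
    print(n, round(x2, 2), x2 > CRIT_1DF_5PCT, round(effect_size(obs, exp), 3))
```

With these numbers the test is not significant at n = 200 but is significant at n = 2000 and n = 20000, even though the effect size is identical throughout, which is why a quantitative effect-size measure is a better basis for a quality score than a p-value alone.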
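For the "blocked and weighted bootstrap" mentioned in point (3), one possible reading is: resample whole clusters (the blocks) with probability proportional to their population weights, and recompute the statistic of interest on each replicate. The sketch below is my own illustration of that idea, not code from the thread; the function name and toy data are assumptions.

```python
import random

def blocked_weighted_bootstrap(clusters, weights, stat, n_rep=1000, seed=42):
    """Resample whole clusters with probability proportional to `weights`,
    pool the resampled records, and compute `stat` on each replicate."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_rep):
        sample = []
        for cluster in rng.choices(clusters, weights=weights, k=len(clusters)):
            sample.extend(cluster)  # keep each block intact
        reps.append(stat(sample))
    return reps

# Toy example: clusters of 1 = male, 0 = female; weights are cluster
# population sizes; the statistic is the proportion male.
clusters = [[1, 0, 1], [0, 0, 1, 0], [1, 1, 0]]
weights = [300, 500, 200]
reps = blocked_weighted_bootstrap(clusters, weights,
                                  stat=lambda s: sum(s) / len(s), n_rep=200)
reps.sort()
print(reps[4], reps[194])  # rough 95% percentile interval
```

Resampling blocks rather than individual records preserves the within-cluster correlation of a complex survey design, which is exactly what ordinary goodness-of-fit tests on aggregated counts ignore.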

### Scott Logue

Normal user

23 Mar 2015, 21:36

Can you kindly provide an example for your question, because I am not exactly sure what you are asking?