# Wide confidence interval SAM

This question was posted in the Assessment and Surveillance forum area and has 11 replies.

### Anonymous 24408

Normal user

7 Feb 2019, 18:08

I'm looking at a national SMART survey. The confidence intervals appear quite wide for SAM. For example, one province has a SAM prevalence of 2.1% (95% CI: 1.1-4.0%) and another has a SAM prevalence of 2.0% (95% CI: 1.0-3.8%).

Could I assume that there is an error in the sample size calculation and/or missing data values?

When conducting national anthropometric surveys, what needs to be done to ensure greater precision for the SAM estimate?

Many thanks

### Bradley A. Woodruff

Self-employed

Technical expert

7 Feb 2019, 23:03

I do not think there is any error. In a hypothetical survey sample of 1000 children with a design effect of 1.5, the confidence interval around an estimate of 2% prevalence for severe acute malnutrition would be about 1.0% to 3.4%, similar to your examples. To achieve a statistically significant difference from the 2% cut-off, our hypothetical survey's estimate of prevalence would have to be 3.4% or greater. Fortunately, few populations have such an elevated prevalence of severe acute malnutrition in preschool-age children.
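For illustration, a minimal sketch of that calculation (using a simple Wald interval with the sample size deflated by the design effect; the figures here are the hypothetical ones from this post, and exact or logit intervals, as survey software typically uses, would push the upper bound somewhat higher for a rare outcome):

```python
import math

# Hypothetical survey: n = 1000 children, estimated SAM prevalence
# p = 2%, design effect (DEFF) = 1.5, as in the example above.
n, p, deff = 1000, 0.02, 1.5

n_eff = n / deff                        # effective sample size after clustering
se = math.sqrt(p * (1 - p) / n_eff)     # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se   # simple Wald 95% CI

print(f"95% CI: {lo:.1%} to {hi:.1%}")  # roughly 0.9% to 3.1%
```

The Wald interval is symmetric, which is why its bounds differ slightly from the asymmetric interval quoted above; the width is comparable either way.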

Because these prevalence rates are low, any differences between survey estimates and the 2% cut-off will be relatively small, thus requiring high precision in the survey estimate to achieve statistical significance at a p<0.05 level. If a survey's primary outcome of interest is the prevalence of severe acute malnutrition, the survey must be powered accordingly and will probably require a larger-than-average sample size.
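To make the "larger-than-average sample size" point concrete, here is a sketch of the standard precision-based sample-size formula, n = DEFF × z² × p(1−p) / d², with hypothetical inputs (the target precision of ±1 percentage point is an assumption for illustration):

```python
import math

p = 0.02     # expected SAM prevalence
d = 0.01     # desired half-width of the 95% CI (+/- 1 percentage point)
deff = 1.5   # assumed design effect
z = 1.96     # z-score for 95% confidence

n = math.ceil(deff * z**2 * p * (1 - p) / d**2)
print(n)  # 1130 children
```

Tightening the precision to ±0.5 percentage points would quadruple this, since the required n scales with 1/d².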

### Jay Berkley

Professor/KEMRI-Wellcome Trust Research Programme

Frequent user

8 Feb 2019, 08:22

I agree with Bradley. When a survey is designed, the sample size can be calculated according to the expected prevalence and the desired precision (the width of the confidence intervals). Surveys are also usually designed to give the prevalence in one population (e.g. nationally or in a region). To compare two populations, the sample size would need to be calculated specifically for that comparison, with a specified power to demonstrate a difference (usually at least 80 percent, which means accepting a 20 percent chance that a true difference will be missed). Such a comparison is likely to require a very large sample size. Making a comparison with a smaller sample size that was not designed for it will usually end up with a result of 'no evidence of a difference', because the confidence intervals of one population will overlap with the prevalence estimate of the other.
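As a rough sketch of how large "very large" can be, the standard two-proportion sample-size formula applied to a hypothetical change from 2% to 3% SAM (these target prevalences, and the design effect, are assumptions for illustration):

```python
import math

# Sample size per survey to detect a change from 2% to 3% SAM with
# 80% power at alpha = 0.05 (two-sided), inflated by an assumed DEFF.
p1, p2 = 0.02, 0.03
z_alpha, z_beta = 1.96, 0.8416   # alpha = 0.05 two-sided; power = 80%
deff = 1.5

n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
n = math.ceil(n * deff)
print(n)  # several thousand children per survey
```

A one-percentage-point difference at these low prevalences demands thousands of children in each survey, far more than a precision-only design would call for.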

Hope that's helpful.

Jay

### Jay Berkley

Professor/KEMRI-Wellcome Trust Research Programme

Frequent user

8 Feb 2019, 14:09

As you say, it is likely that the survey was designed to obtain a national prevalence of GAM, hence the wide CIs for SAM, and that it was not powered either to compare SAM with previous years or to compare between provinces - you would need to look at the details of the design to confirm that.

Also, as you say, there is no evidence of differences from previous years. Looking at the CIs, for the province with an estimate of 2 percent SAM there is somewhat more than a 50 percent probability that the true prevalence is 2 percent or above; for the others that probability is higher. So intervention depends on how certain you want to be. However, I strongly recommend reading André Briend's post earlier today on emergency thresholds.

Jay

### Bradley A. Woodruff

Self-employed

Technical expert

8 Feb 2019, 17:10

Dear All:

Just a small point of clarification. Overlap in the confidence intervals around the estimates from two subgroups does *not* necessarily mean that the difference between the two estimates is not statistically significant. This is because the confidence intervals of the separate groups are calculated using the smaller sample size of each group separately, whereas the calculation of the p-value for a difference between groups uses the variance derived from the pooled sample sizes of the two groups together. So the comparison of the two groups has substantially more precision than is reflected in the confidence intervals of the separate groups.

If the confidence intervals of two groups do not overlap, the estimates in those two groups are definitely statistically significantly different. However, if the confidence intervals *do* overlap, you can roughly estimate statistical significance by whether or not the confidence intervals for each group include the point estimates of the other group.

In the example from Anonymous 24408, let's look at Province A. The 2014 estimate of the prevalence of severe acute malnutrition is 1.7% (95%CI: 1.0, 3.1). These confidence intervals DO include 2.0% which is the estimate of prevalence in 2018. In addition, the 2018 result is 2.0% (95% CI: 1.0, 3.8), so again, the confidence intervals include 1.7% which is the estimate from the 2014 survey. In this example, we can tentatively conclude that these two surveys do not provide evidence that the prevalence of severe acute malnutrition in Province A has changed between 2014 and 2018, but I would never base a report's or publication's conclusions on such a guesstimate. I only use this technique if I am reading a report and do not have access to the actual survey data. More definite conclusions about statistical significance need to be based on an appropriate chi square test accounting for whatever complex sampling design was used.
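A minimal sketch of the formal test Bradley recommends, as a simple two-proportion z-test with the variance inflated by an assumed design effect (the counts of 17/1000 and 20/1000 are hypothetical, chosen to match the 1.7% and 2.0% estimates above; a proper analysis would use the real cluster data and a design-based chi-square test):

```python
import math

x1, n1 = 17, 1000   # 2014: 1.7% (hypothetical counts)
x2, n2 = 20, 1000   # 2018: 2.0% (hypothetical counts)
deff = 1.5          # assumed design effect

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / (se * math.sqrt(deff))      # deflate z for clustering
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.2f}")    # far from significant
```

The large p-value here agrees with the tentative conclusion from the overlap rule of thumb: no evidence of a change between the two surveys.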

### Sameh Al-Awlaqi

Public Health and Nutrition Consultant

Normal user

8 Feb 2019, 21:28

Hi Anonymous 24408,

I believe that my esteemed colleagues Bradley and Jay have answered most of your questions. Just a quick heads-up from a SMART perspective.

You can check whether there is a statistically significant difference between the prevalences of the two surveys by using the CDC statistical calculator for two surveys. It comes with the SMART training package (managers' training); please check the annexes on the SMART methodology website. The conclusion will mainly rest on the interpretation of the p-value, as my colleagues indicated above. CI overlap is one quick way to do it as well.

You have to consider other circumstances when you draw comparisons, such as seasonality, sample size, design effect, other nutritional deficiencies, livelihood status or other health interventions in the area, whether CMAM services have expanded since the last survey, whether there was an outbreak of acute watery diarrhoea in the meantime, etc. You may wish to talk to health professionals and community members to inform your interpretation.

I have worked on SMART surveys and CMAM in Darfur and Yemen, and I know you can sometimes feel pressured to find and report progress in the nutrition status of children within your programme areas to satisfy donors. When SMART reports are not consistent over time, they may cause confusion and frustration. However, when you look at your programme indicators, admission trends and performance, you should know whether your intervention is working or needs a bit of improvement. In all cases, interpret your SMART findings based on your context; you and your field team can tell if the nutrition situation is getting worse.

Hope that helps!

Sincerely,

Sameh

### Jay Berkley

Professor/KEMRI-Wellcome Trust Research Programme

Frequent user

9 Feb 2019, 01:50

Extending Bradley's point on having the actual data: although each individual province shows no evidence of a difference between the two time points, if you are able to obtain the actual numbers (number of children surveyed, number with GAM and number with SAM) for each of the 4 provinces at each of the two time points, then it would be possible to combine these into a single 'meta-analysis' of the overall change in the proportion of children who are undernourished within the 4 provinces combined, which would have more power to detect a change. Is that data available?
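The pooled comparison described here can be sketched as follows, using entirely hypothetical counts (children with SAM / children surveyed) for four provinces at two time points; a formal meta-analysis would weight provinces (e.g. Mantel-Haenszel) and account for the survey design rather than simply summing:

```python
import math

# Hypothetical (x, n) pairs per province at each time point.
survey_2014 = [(17, 1000), (21, 1050), (15, 980), (19, 1100)]
survey_2018 = [(20, 1000), (18, 1020), (22, 990), (16, 1080)]

x1 = sum(x for x, _ in survey_2014); n1 = sum(n for _, n in survey_2014)
x2 = sum(x for x, _ in survey_2018); n2 = sum(n for _, n in survey_2018)

p1, p2, pp = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
se = math.sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"pooled: {p1:.2%} vs {p2:.2%}, z = {z:.2f}, p = {p_value:.2f}")
```

With roughly 4000 children per time point instead of 1000, the standard error of the difference shrinks by about half, which is the extra power Jay refers to.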
