# Impact of CMAM

This question was posted in the Coverage assessment forum area and has 22 replies.

### Zoheb

WFP

Normal user

5 Dec 2011, 10:20

I am working as Provincial Coordinator of the Nutrition Cell, Government of Balochistan (Pakistan). We have been implementing CMAM in the food-insecure districts of our province for the last 3 years. We have seen results and the communities have endorsed the program. However, what needs to be assessed at this stage is the IMPACT of the program as a whole.

Can anybody help us in this regard as how to go for it???

### Mark Myatt

Consultant Epidemiologist

Frequent user

5 Dec 2011, 13:03

If we define "impact" as "how well does my program address need?" then we can work this out using a combination of routine monitoring data and a coverage estimate.

The calculation is straightforward:

Met Need = Effectiveness * Coverage

We can break this down a little. Effectiveness is the proportion of admissions that are discharged as cured. You should have this from your program's routine monitoring statistics, but be careful with how you deal with defaulters, as these can easily get lost and this makes the program look more effective than it really is.

Coverage is:

Coverage = Number in the program / Number who should be in the program

For example, if your program has a cure-rate of 85% (a pretty typical value for a CMAM program meeting SPHERE minimum standards) and coverage of 60% (an achievable value above SPHERE minimum standards for a sedentary rural program) then:

Met Need = Effectiveness * Coverage

Met Need = 0.85 * 0.60

Met Need = 0.51 = 51%
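For anyone wanting to automate this over several programs or districts, the arithmetic above can be sketched in a few lines of Python (the figures are the ones from the example):

```python
def met_need(effectiveness, coverage):
    """Met need = effectiveness * coverage, both expressed as proportions."""
    return effectiveness * coverage

# Example figures from the text: 85% cure rate, 60% coverage.
print(f"Met need = {met_need(0.85, 0.60):.0%}")  # prints "Met need = 51%"
```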

The simplicity of this calculation hides a difficulty ... Coverage is **not** usually available from routine program monitoring statistics. You will have to spend time and money to estimate coverage.

There are a few ways of estimating coverage.

Indirect methods that rely on estimates of total case numbers from (e.g.) SMART surveys multiplied by some correction factor are very cheap but are **very inaccurate** and should (IMO) not be used.

Direct estimates from (e.g.) SMART surveys are also problematic because of the small numbers of SAM cases found. For example, if prevalence is 2% and the survey sample size is 700 then there will be about 14 SAM cases from which to estimate CMAM coverage (with this sample size you'd get a 95% CI of about +/- 30 percentage points, which is very poor precision). An additional problem with this approach is that the PPS sampling method preferentially selects larger communities, which (i) may have lower SAM prevalence than smaller communities and (ii) are where CMAM clinics tend to be located. This will tend to give biased estimates of coverage.

Other direct methods exist. The most relevant here are probably CSAS and SQUEAC. SLEAC might be useful if you are interested in estimates over several health delivery units such as health districts. All of these methods provide a coverage estimate and data on barriers to access. SQUEAC provides a more in-depth analysis of barriers and boosters to access. Having barriers data is essential because it allows us to optimise programming to maximise met need.
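To see why about 14 cases give such poor precision, here is a rough normal-approximation sketch (p = 50% is the worst case; no design effect is included, which would widen the interval further):

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% CI half-width for a proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# 14 SAM cases, coverage near 50%: the half-width is roughly 26
# percentage points, and worse once a survey design effect is applied.
print(f"+/- {ci_half_width(0.5, 14):.0%}")
```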

You should be aware that any method will only give you an estimate of how well the program is doing at the time of the survey. If there are seasonal limitations to access (and these are quite common) then you should probably not take this as an accurate estimate of how well your program has done in the past (although you may be able to make some reasonable adjustments to get at this using some of the SQUEAC tools).

As to "how to go for it???" ... I think that you should contact one of the agencies that have experience with CSAS or SQUEAC (SLEAC will only be useful over a wide area, which might be the case for you) to help you do the first investigation and to train you and your staff in the method(s). Agencies that can help you include VALID International Ltd., ACF, CONCERN, SC-US, WVI, and Tufts University. There may be others but these are the ones that I have worked with.

I hope this is of some use.

### Franck

Epidemiologist

Normal user

18 Jan 2013, 11:51

Thank you for proposing this as an indicator of program impact. Is the indicator a ratio or a proportion? (I think it is not a proportion, because the numerator is not included in the denominator.) What statistical tools can be used for comparisons (e.g. a test statistic)?

### Mark Myatt

Consultant Epidemiologist

Frequent user

18 Jan 2013, 19:53

The indicator is a proportion. It is an estimate of the proportion of cases of SAM that are found, recruited, retained, and cured. There are two proportions ... (1) the coverage proportion and (2) the proportion cured. These are multiplied together. Both proportions are subject to uncertainty, which will usually be expressed as a 95% credible (confidence) interval. You can use these to create an approximate 95% credible interval on met need. For example:

Coverage proportion = 60% (95% CI = 48%; 69%)

Number cured in previous 6 months = 604

Number of exits in previous 6 months = 711

Cured proportion = 85% (95% CI = 82%; 87%)

Met need = 0.60 * 0.85 = 0.51 (51%)

Lower CL = 0.48 * 0.82 = 0.39 (39%)

Upper CL = 0.69 * 0.87 = 0.60 (60%)
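The worked example can be reproduced with a short Python sketch of this crude approach (multiplying the corresponding CI limits, as in the post):

```python
def met_need_with_ci(cov, cov_lo, cov_hi, cure, cure_lo, cure_hi):
    """Point estimate and approximate 95% CI for met need, formed by
    multiplying the point estimates and the corresponding CI limits."""
    return cov * cure, cov_lo * cure_lo, cov_hi * cure_hi

# Figures from the example: coverage 60% (48%; 69%), cured 85% (82%; 87%).
est, lo, hi = met_need_with_ci(0.60, 0.48, 0.69, 0.85, 0.82, 0.87)
print(f"Met need = {est:.0%} (95% CI = {lo:.0%}; {hi:.0%})")
```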

A "test" on two met needs could be done by looking for non-overlap of CIs.

Any help?

### Franck

Epidemiologist

Normal user

14 Feb 2013, 17:49

In routine data we do not have a CI for the cured proportion. Does it even make sense to have one, since this is not a sample survey but an exhaustive analysis of admissions? How can we get a CI, if that is possible?

The indicator is a proportion, but comparing CIs is not a formal test, and we can end up in an indeterminate situation (when the CIs overlap).

Does anybody have an idea about using a formal test in this case?

Thank you all for your help.

### Mark Myatt

Consultant Epidemiologist

Frequent user

15 Feb 2013, 07:49

One position: The proportion cured is not an estimate in the sense that we might estimate a proportion from a sample survey, because it is taken from a census (i.e. 100%) sample of exits over a period of time. This means that we have no sampling error. We do have a risk of error in the recording of dates and the classification of exits (e.g. I have seen defaulters and transfers systematically misclassified as cured - there is a case study in the SQUEAC manual). If you can eliminate these errors then you do not have an estimate of the value of the true proportion ... you have the value of the true proportion itself. This means that you do not need CIs and hypothesis tests. If the observed proportion cured is (e.g.) 80% then you can say that the true proportion cured is 80% and is above 75% (i.e. the SPHERE minimum standard). If one program has an observed proportion cured of 80% and another has an observed proportion cured of 90% you can say that the second program is better, in terms of cure rate, than the first. I do not think this is an unreasonable approach.

Another position: We do have a sample. We could have (e.g.) chosen a different period, started on a different day, &c. and we would have had a different set of exits. The jargon for this is a consecutive sample, which is a systematic sample with a sampling interval of one and an arbitrary start point. This type of sample is common in clinical audit applications. There are the same errors as above. Also, we might not have all the records for the period; we might have used records from a subset of clinics that had reported or that we had visited. If we have error then we have uncertainty, and we commonly use CIs to express uncertainty. If you take this position then you would calculate the CI in the usual manner (this is what I did in my previous post).

I prefer the second approach because it factors in some uncertainty that (surely) must exist.

A simple "test" for a difference between two estimates of met need is to look for non-overlapping CIs on the two estimates. If the CIs of the two estimates overlap then there is no significant difference. If there is no overlap then there is a significant difference. It is a "rough and ready" but workable approach. Using standard tests (e.g. the z-test or chi-square test) is not straightforward since the estimates are compound estimates.
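The "rough and ready" check can be written down in a couple of lines; this is only a sketch of the overlap rule described above, not a formal test, and the example figures are invented:

```python
def cis_overlap(lo1, hi1, lo2, hi2):
    """True if the two confidence intervals overlap."""
    return max(lo1, lo2) <= min(hi1, hi2)

# Two met-need estimates: 51% (39%; 60%) vs 68% (55%; 80%).
# The CIs overlap, so the rough rule declares no significant difference.
print(cis_overlap(0.39, 0.60, 0.55, 0.80))  # prints "True"
```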

I'd be interested to hear what others think.

I hope this helps.

### Franck

Epidemiologist

Normal user

15 Feb 2013, 13:12

Hi Mark, thank you, your answers help me.

Can we really say "If the CIs of the two estimates overlap then no significant difference"?

I think it is not rigorously true to say "If the CIs of the two estimates overlap then no significant difference". Don't we need a formal test before we can know?

Is that wrong?

Thank you again.

### Mark Myatt

Consultant Epidemiologist

Frequent user

15 Feb 2013, 14:34

When comparing two parameter estimates, it is ALWAYS true that if the confidence intervals do not overlap then a test statistic will be significant at the alpha level (where 1 - alpha is the confidence level used for the CIs; e.g. p < 0.05 if two 95% CIs do not overlap). The opposite case (i.e. overlap means non-significance) is NOT always true. This depends upon the statistical test being used. I will start with the example of a t-test.

A t-test of the difference between two means and the overlapping CI method can produce conflicting results. This is due to the difference in how "distance" is measured. The CI method uses the magnitudes of the two standard errors but the t-test uses the square root of the sum of the squares of the two standard errors. If you do the maths you will see that the overlapping CI method is a more conservative test than the t-test. The t-test is good for small sample sizes (i.e. n < 60). As the sample size increases we can use the z-test. This test uses the magnitude of the two standard errors to measure distance, and the overlapping CI method and the z-test will return the same results for all cases.

Turning now to the difference between two proportions. The z-test for the difference between two proportions uses the standard error. A chi-square test on one degree of freedom is simply z squared (e.g. z = 1.96 is chi-square = 3.8416; both have p = 0.0499 for a two-tailed test). For this application the overlapping CI method is identical to a z-test or a chi-square test. This means that the overlapping CI method is well-behaved. It is, I think, the test that you want. Note that these methods (z-test and chi-square test) have constraints. They need:

n * p > 5

n - (n * p) > 5

where "n" is the sample size used for each estimate and "p" is the estimate of the proportion. These constraints will almost always be true for this application. I think, therefore, that it is safe to use the overlapping CI method for this application.
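A minimal Python sketch of the z-test for two proportions, with the constraints above checked before testing; the counts are invented for illustration:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions (pooled SE).
    Raises ValueError if the n*p > 5 constraints are not met."""
    p1, p2 = x1 / n1, x2 / n2
    for n, p in ((n1, p1), (n2, p2)):
        if n * p <= 5 or n - n * p <= 5:
            raise ValueError("n*p and n - n*p must both exceed 5")
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Invented example: 604/711 cured in one program vs 450/500 in another.
z = two_proportion_z(604, 711, 450, 500)
print(f"z = {z:.2f}, chi-square = {z ** 2:.2f}")
```

Note that z squared reproduces the chi-square statistic on one degree of freedom, as described above.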

What do others think?

Here is a simple how-to guide to using the overlapping CI method for testing the difference between two proportions.

I hope this helps.

### Bradley A. Woodruff

Self-employed

Frequent user

15 Feb 2013, 15:48

Regarding the determination of statistical significance of difference(s) between two or more point estimates using confidence intervals: comparing confidence intervals for two point estimates in an attempt to determine statistical significance is fraught with peril. Yes, if the two sets of confidence intervals do not overlap, then the difference between the point estimates is statistically significant at the level used to calculate the confidence intervals (usually 0.05). However, the converse is NOT true. If the confidence intervals do overlap, then the difference between the point estimates may or may not be statistically significant. This is because, when calculating a p value for the difference between two point estimates, a weighted, pooled average of the variance is used, which results in a standard error less than the sum of the separate standard errors for the two point estimates. Much has been written on this topic, with several proposals for how to determine statistical significance from confidence intervals (see reference list below), but the bottom line is that, if the confidence intervals overlap, you must use a statistical test, such as the t-test or chi-square test, which is specifically designed to test statistical significance.

Payton ME, et al. Overlapping confidence intervals or standard error intervals: what do they mean in terms of statistical significance? Journal of Insect Science 3:24.

Braitman LE. Confidence intervals assess both clinical significance and statistical significance. Annals of Internal Medicine 1999;114(6):515-517.

Berry G. Statistical significance and confidence intervals. The Medical Journal of Australia 1986;144:618-619.

Cole SR, et al. Overlapping confidence intervals. J Am Acad Dermatol 1999;41(6):1051-1052.

Austin PC, et al. A brief note on overlapping confidence intervals. J Vasc Surg 2002;36:194-195.

Beaulieu-Prévost D. Confidence intervals: from test of statistical significance to confidence intervals, range hypotheses and substantial effects. Tutorials in Quantitative Methods for Psychology 2006;2(1):11-19.

Wolfe R, Hanley J. If we're so different, why do we keep overlapping? When 1 plus 1 doesn't make 2. Canadian Medical Association Journal 2002;166(1):65-66.

Cumming G, Fidler F. Interval estimates for statistical communication: problems and possible solutions. IASE/ISI Satellite, 2005.

Browne RH. On visual assessment of the significance of a mean difference. Biometrics 1979;35:657-665.

Cumming G. Inference by eye: reading the overlap of independent confidence intervals. Statistics in Medicine 2009;28:205-220.

Schenker N, Gentleman JF. On judging the significance of differences by examining the overlap between confidence intervals. The American Statistician 2001;55(3):182-186.

### Kevin Sullivan

Professor

Normal user

15 Feb 2013, 16:39

Viewing error bar graphs with point estimates and confidence limits can be useful in terms of assessing precision and for making comparisons between two or more estimates. If a statistical test is desired, the p-value should be calculated and not guessed at by comparing confidence limits.

However, in some situations an investigator may have two point estimates with 95% confidence limits but not have access to the original data in order to perform a statistical test. It is still possible to perform a statistical test by "reverse" engineering the variance estimates. An example for a proportion can be found here:

http://www.sph.emory.edu/~cdckms/compare%202%20proportions.htm
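The "reverse engineering" idea can be sketched as follows: recover each standard error from the reported 95% limits, then form a z statistic for the difference. The figures below are invented for illustration, and this simple version ignores any asymmetry in the reported intervals:

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Approximate the standard error from a reported 95% CI."""
    return (hi - lo) / (2 * z)

def z_from_cis(p1, lo1, hi1, p2, lo2, hi2):
    """z statistic for the difference between two reported proportions,
    using standard errors reverse-engineered from their CIs."""
    se = math.sqrt(se_from_ci(lo1, hi1) ** 2 + se_from_ci(lo2, hi2) ** 2)
    return (p1 - p2) / se

# Invented example: coverage 60% (48%; 69%) vs coverage 45% (35%; 55%).
print(f"z = {z_from_cis(0.60, 0.48, 0.69, 0.45, 0.35, 0.55):.2f}")
```

Note that even though these two CIs overlap, the test statistic exceeds 1.96, illustrating Woody's point that overlap does not imply non-significance.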

### Mark Myatt

Consultant Epidemiologist

Frequent user

15 Feb 2013, 16:54

Thank goodness someone understands what we are talking about :)

That's a very useful calculator. I have made the link to the calculator clickable here.

### Franck

Epidemiologist

Normal user

15 Feb 2013, 17:05

Many thanks for your replies - this helps a lot.

thx

### Mark Myatt

Consultant Epidemiologist

Frequent user

15 Feb 2013, 17:12

Woody's reply prompted me to look through the EN-NET "archive". We have been here before and come to the same conclusions ... even with a link to Kevin's useful calculator ... see here.

Good to get this sorted out (again).

### Franck

Epidemiologist

Normal user

15 Feb 2013, 18:14

I want to try to summarise what I understand after this brainstorming.

When we compare CIs, it is always true that there is a difference when the CIs do not overlap; when the CIs do overlap we need to pay attention to some conditions.

We can use error bar graphs with point estimates and confidence limits to assess precision and get a first idea.

It is safest to perform a statistical test by "reverse" engineering the variance estimates (for example using http://www.sph.emory.edu/~cdckms/compare%202%20proportions.htm) before making a final conclusion.

Is that a good summary of what we discussed?

Thanks.

### Mark Myatt

Consultant Epidemiologist

Frequent user

16 Feb 2013, 04:29

I think we can summarise ...

The overlapping CI approach is a testing approach but it does not provide a well-behaved test. Specifically, the test is conservative in the sense that it can inappropriately fail to reject the null hypothesis (i.e. of no difference). This will be a particular problem when differences are not large and/or sample sizes are small. A better approach for this application is to use the approach implemented in Kevin's calculator application. For other applications you might use the z-test or the chi-square test.

### Tamsin Walters

en-net moderator

Forum moderator

17 Feb 2013, 13:54

Please remember this is a public forum and derogatory comments are not appropriate.

Many thanks.

### Mark Myatt

Consultant Epidemiologist

Frequent user

17 Feb 2013, 14:28

I can see no derogatory comments. Everything above looks well-mannered. The comment about someone knowing what they are talking about (which seems to have been removed) was made at my own expense. That is, I was making a derogatory comment about myself. Thank you for protecting me from my own lame humour (oops, there I go again).

### Tamsin Walters

en-net moderator

Forum moderator

17 Feb 2013, 17:58

Apologies, Mark. My error. It was not clear to me where the comment was aimed - and might not have been to other users.

Best wishes,

Tamsin

### Mark Myatt

Consultant Epidemiologist

Frequent user

18 Feb 2013, 03:44

OK. No worries. I had made an error that was corrected by colleagues. It was a **clumsy** attempt at humour at my own expense that may have been misinterpreted. No offence taken.

### Mark Myatt

Consultant Epidemiologist

Frequent user

2 Sep 2013, 08:40

Kevin,

Just checking on previous posts and find that this link:

http://www.sph.emory.edu/~cdckms/compare%202%20proportions.htm

appears to be dead.

Is there a new link?

Mark

### Mark Myatt

Consultant Epidemiologist

Frequent user

3 Sep 2013, 10:38

Just fixing the link to Kevin's calculator. It is now here.