# Impact of CMAM

This question was posted in the Coverage assessment forum area and has 22 replies. You can also reply via email – be sure to leave the subject unchanged.

### Zoheb

WFP

Normal user

5 Dec 2011, 10:20

### Mark Myatt

Frequent user

5 Dec 2011, 13:03

The calculation is straightforward:

Met Need = Effectiveness * Coverage

We can break this down a little. Effectiveness is the proportion of admissions that are discharged as cured. You should have this from your program's routine monitoring statistics, but be careful with how you deal with defaulters: these can easily get lost, and this makes the program look more effective than it really is.

Coverage is:

Coverage = Number in the program / Number who should be in the program

For example, if your program has a cure-rate of 85% (a pretty typical value for a CMAM program meeting SPHERE minimum standards) and coverage of 60% (an achievable value above SPHERE minimum standards for a sedentary rural program) then:

Met Need = Effectiveness * Coverage

Met Need = 0.85 * 0.60

Met Need = 0.51

Met Need = 51%
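The arithmetic above can be sketched as a small function (a minimal sketch; the 85% cure rate and 60% coverage are the illustrative figures from the example):

```python
def met_need(effectiveness: float, coverage: float) -> float:
    """Met need = effectiveness (proportion of exits cured) * coverage."""
    return effectiveness * coverage

# The worked example: 85% cure rate, 60% coverage.
print(f"{met_need(0.85, 0.60):.0%}")  # prints "51%"
```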

The simplicity of this calculation hides a difficulty ... Coverage is **not** usually available from routine program monitoring statistics. You will have to spend time and money to estimate coverage.

There are a few ways of estimating coverage. Indirect methods that rely on estimates of total case-numbers from (e.g.) SMART surveys multiplied by some correction factor are very cheap but are **very inaccurate** and should (IMO) not be used.

Direct estimates from (e.g.) SMART surveys are also problematic because of the small number of SAM cases found. For example, if prevalence is 2% and the survey sample size is 700 then there will be about 14 SAM cases from which to estimate CMAM coverage (with this sample size you'd get a 95% CI of about +/- 30 percentage points, which is very poor precision). An additional problem with this approach is that the PPS sampling method preferentially selects larger communities, which (i) may have lower SAM prevalence than smaller communities and (ii) are where CMAM clinics tend to be located. This will tend to give biased estimates of coverage.

Other direct methods exist. The most relevant here are probably CSAS and SQUEAC. SLEAC might be useful if you are interested in estimates over several health delivery units such as health districts. All of these methods provide a coverage estimate and data on barriers to access. SQUEAC provides a more in-depth analysis of barriers and boosters to access. Having barriers data is essential because it allows us to optimise programming to maximise met need.
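The precision claim above (about 14 SAM cases giving roughly +/- 30 percentage points) can be checked with a simple Wald interval, assuming coverage near 50% (the worst case for precision); exact methods give somewhat wider intervals, in the same ballpark as the figure quoted:

```python
import math

def wald_halfwidth(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% Wald confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# 14 SAM cases, coverage near 50% (the worst case for precision):
hw = wald_halfwidth(0.5, 14)
print(f"+/- {hw * 100:.0f} percentage points")  # prints "+/- 26 percentage points"
```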

You should be aware that any method will only give you an estimate of how well the program is doing at the time of the survey. If there are seasonal limitations to access (and these are quite common) then you should probably not take this as an accurate estimate of how well your program has done in the past (although you may be able to make some reasonable adjustments to get at this using some of the SQUEAC tools).

As to "how to go for it???" ... I think that you should contact one of the agencies that have experience with CSAS or SQUEAC (SLEAC will only be useful over a wide-area which might be the case for you) to help you do the first investigation and to train you and your staff in the method(s). Agencies that can help you include VALID International Ltd., ACF, CONCERN, SC-US, WVI, and Tufts University. There may be others but these are the ones that I have worked with.

I hope this is of some use.

### Franck Alé

Normal user

18 Jan 2013, 11:51

### Mark Myatt

Frequent user

18 Jan 2013, 19:53

Coverage proportion = 60% (95% CI = 48%; 69%)

Number cured in previous 6 months = 604

Number of exits in previous 6 months = 711

Cured proportion = 85% (95% CI = 82%; 87%)

Met need = 0.60 * 0.85 = 0.51 (51%)

Lower CL = 0.48 * 0.82 = 0.39 (39%)

Upper CL = 0.69 * 0.87 = 0.60 (60%)
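The calculation above can be sketched as a small function. Note that multiplying the corresponding confidence limits, as done here, is the rough approach used in this post, not an exact interval for a product of proportions:

```python
def met_need_ci(cov, cov_lo, cov_hi, cure, cure_lo, cure_hi):
    """Met need with a rough CI formed by multiplying the point
    estimates and the corresponding confidence limits (the approach
    used in the post above; not an exact interval for a product)."""
    return (cov * cure, cov_lo * cure_lo, cov_hi * cure_hi)

# Figures from the post: coverage 60% (48%; 69%), cured 85% (82%; 87%).
est, lo, hi = met_need_ci(0.60, 0.48, 0.69, 0.85, 0.82, 0.87)
print(f"Met need = {est:.0%} (95% CI = {lo:.0%}; {hi:.0%})")
```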

A "test" on two met needs could be done by looking for non-overlap of CIs.

Any help?

### Franck Alé

Normal user

14 Feb 2013, 17:49

The indicator is a proportion, but the CI comparison is not a test, and we can have an indeterminate situation (overlap of CIs).

Does anybody have an idea about the use of a test in this case?

Thank you all for your help

### Mark Myatt

Frequent user

15 Feb 2013, 07:49

Another position: We do have a sample. We could have (e.g.) chosen a different period, started on a different day, &c. and we would have had a different set of exits. The jargon for this is a consecutive sample, which is a systematic sample with a sampling interval of one and an arbitrary start point. This type of sample is common in clinical audit applications. There are the errors as above. Also, we might not have all the records for the period. We might have used records from a subset of clinics that had reported or that we had visited. If we have error then we have uncertainty. We commonly use CIs to express uncertainty. If you take this position then you would calculate the CI in the usual manner (this is what I did in my previous post).

I prefer the second approach because it factors in some uncertainty that (surely) must exist.

A simple "test" for a difference between two estimates of met need is to look for non-overlapping CIs on the two estimates. If the CIs of the two estimates overlap then there is no significant difference. If there is no overlap then there is a significant difference. It is a "rough and ready" but workable approach. Using standard tests (e.g. the z-test or chi-square test) is not straightforward since the estimates are compound estimates.
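The non-overlap check can be sketched as follows (a minimal sketch; the interval values in the usage example are illustrative, not from the thread):

```python
def ci_overlap_test(lo1, hi1, lo2, hi2) -> bool:
    """Return True if the two confidence intervals do NOT overlap,
    i.e. the rough check declares a significant difference."""
    return hi1 < lo2 or hi2 < lo1

# Two met-need estimates with illustrative 95% CIs:
print(ci_overlap_test(0.39, 0.60, 0.65, 0.80))  # non-overlapping -> True
print(ci_overlap_test(0.39, 0.60, 0.55, 0.75))  # overlapping -> False
```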

I'd be interested to hear what others think.

I hope this helps.

### Franck Alé

Normal user

15 Feb 2013, 13:12

Hi Mark, thank you, your answers help me.

Can we say "If the CIs of the two estimates overlap then no significant difference"?

I think it is not rigorously true to say "If the CIs of the two estimates overlap then no significant difference". Don't we need a test before we can know?

Is that false?

Thank you again

### Mark Myatt

Frequent user

15 Feb 2013, 14:34

A t-test of the difference between two means and the overlapping CI method can produce conflicting results. This is due to the difference in how "distance" is measured. The overlapping CI method uses the magnitudes of the two standard errors, but the t-test uses the square root of the sum of the squares of the two standard errors. If you do the maths you will see that the overlapping CI method is a more conservative test than the t-test. The t-test is good for small sample sizes (i.e. n < 60). As the sample size increases we can use the z-test. This test uses the magnitude of the two standard errors to measure distance, so the overlapping CI method and the z-test will return the same results in all cases.

Turning now to the difference between two proportions. The z-test for the difference between two proportions uses the standard error. A chi-square test on one degree of freedom is simply z-squared (e.g. z = 1.96 is chi-square = 3.8416 both have p = 0.0499 for a two-tailed test). For this application the overlapping CI method is identical to a z-test or a chi-square test. This means that the overlapping CI method is well-behaved. It is, I think, the test that you want. Note that these methods (z-test and chi-square test) have constraints. They need:

n * p > 5

n - (n * p) > 5

where "n" is the sample size used for each estimate and "p" is the estimate of the proportion. These constraints will almost always be true for this application. I think, therefore, that it is safe to use the overlapping CI method for this application.
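A standard two-proportion z-test, with the constraints above checked first, can be sketched as follows. The first program's figures (85% cured from 711 exits) echo the numbers earlier in the thread; the second program (75% from 500 exits) is hypothetical, for illustration only:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-test for the difference between two proportions, using a
    pooled standard error; checks the n * p > 5 constraints first.
    Returns z and the equivalent 1-df chi-square (z squared)."""
    for p, n in ((p1, n1), (p2, n2)):
        assert n * p > 5 and n * (1 - p) > 5, "normal approximation invalid"
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, z * z

# 85% cured of 711 exits vs. a hypothetical 75% cured of 500 exits:
z, chisq = two_proportion_z(0.85, 711, 0.75, 500)
print(round(z, 2), round(chisq, 2))
```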

What do others think?

Here is a simple how-to guide to using the overlapping CI method for testing the difference between two proportions.

I hope this helps.

### Bradley A. Woodruff

Self-employed

Frequent user

15 Feb 2013, 15:48

Payton ME, et al. Overlapping confidence intervals or standard error intervals: what do they mean in terms of statistical significance? Journal of Insect Science 3:24.

Braitman LE. Confidence intervals assess both clinical significance and statistical significance. Annals of Internal Medicine 1991;114(6):515-517.

Berry G. Statistical significance and confidence intervals. The Medical Journal of Australia 1986;144:618-619.

Cole SR, et al. Overlapping confidence intervals. Journal of the American Academy of Dermatology 1999;41(6):1051-1052.

Austin PC, et al. A brief note on overlapping confidence intervals. Journal of Vascular Surgery 2002;36:194-195.

Beaulieu-Prévost D. Confidence intervals: from test of statistical significance to confidence intervals, range hypotheses and substantial effects. Tutorials in Quantitative Methods for Psychology 2006;2(1):11-19.

Wolfe R and Hanley J. If we're so different, why do we keep overlapping? When 1 plus 1 doesn't make 2. Canadian Medical Association Journal 2002;166(1):65-66.

Cumming G and Fidler F. Interval estimates for statistical communication: problems and possible solutions. IASE/ISI Satellite, 2005.

Browne RH. On visual assessment of the significance of a mean difference. Biometrics 1979;35:657-665.

Cumming G. Inference by eye: reading the overlap of independent confidence intervals. Statistics in Medicine 2009;28:205-220.

Schenker N and Gentleman JF. On judging the significance of differences by examining the overlap between confidence intervals. The American Statistician 2001;55(3):182-186.

### Kevin Sullivan

Professor

Normal user

15 Feb 2013, 16:39

However, in some situations an investigator may have two point estimates with 95% confidence limits but not have access to the original data in order to perform a statistical test. It is still possible to perform a statistical test by "reverse" engineering the variance estimates. An example for a proportion can be found here:

http://www.sph.emory.edu/~cdckms/compare%202%20proportions.htm
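The "reverse engineering" Kevin describes can be sketched as follows: recover each standard error from the width of its 95% CI, then form a z statistic for the difference. This is a sketch of the general idea, not of the linked calculator's exact method; the first estimate uses the met-need figures from earlier in the thread, the second is hypothetical:

```python
import math

def z_from_cis(p1, lo1, hi1, p2, lo2, hi2):
    """Recover each standard error from the width of its 95% CI
    (SE = CI width / (2 * 1.96)), then form a z statistic for the
    difference between the two estimates."""
    se1 = (hi1 - lo1) / (2 * 1.96)
    se2 = (hi2 - lo2) / (2 * 1.96)
    return (p1 - p2) / math.sqrt(se1 ** 2 + se2 ** 2)

# First met-need estimate from the thread; the second is hypothetical:
z = z_from_cis(0.51, 0.39, 0.60, 0.70, 0.62, 0.78)
print(round(z, 2))  # prints -2.82; |z| > 1.96, so significant at the 5% level
```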

### Mark Myatt

Frequent user

15 Feb 2013, 16:54

That's a very useful calculator. I have made the link to the calculator clickable here.

### Mark Myatt

Frequent user

15 Feb 2013, 17:12

Good to get this sorted out (again).

### Franck Alé

Normal user

15 Feb 2013, 18:14

When we compare CIs, it is always true that there is a difference when the CIs do not overlap; when the CIs do overlap, we need to pay attention to some conditions.

We can use bar graphs with point estimates and confidence limits to assess precision and get an idea.

It is safest to perform a statistical test by "reverse" engineering the variance estimates (for example, using http://www.sph.emory.edu/~cdckms/compare%202%20proportions.htm) before making a final conclusion.

That's a good summary of what we have talked about.

Thanks

### Mark Myatt

Frequent user

16 Feb 2013, 04:29

The overlapping CI approach is a testing approach but it does not provide a well-behaved test. Specifically, the test is conservative in the sense that it can inappropriately fail to reject the null hypothesis (i.e. of no difference). This will be a particular problem when differences are not large and/or sample sizes are small. A better approach for this application is to use the approach that is implemented in Kevin's calculator application. For other applications you might use the z-test or the chi-square test.

### Tamsin Walters

en-net moderator

Forum moderator

17 Feb 2013, 13:54

Many thanks.

### Mark Myatt

Frequent user

17 Feb 2013, 14:28

### Tamsin Walters

en-net moderator

Forum moderator

17 Feb 2013, 17:58

Best wishes,

Tamsin

### Mark Myatt

Frequent user

18 Feb 2013, 03:44

A **clumsy** attempt at humour at my own expense that may have been misinterpreted. No offence taken.

### Mark Myatt

Frequent user

2 Sep 2013, 08:40

Just checking on previous posts and find that this link:

http://www.sph.emory.edu/~cdckms/compare%202%20proportions.htm

appears to be dead.

Is there a new link?

Mark

### Mark Myatt

Frequent user

3 Sep 2013, 10:38