# Sample size for rapid nutritional assessment of U5 using MUAC

This question was posted in the Assessment and Surveillance forum area and has 14 replies.

### Mohammad Monsurul Hoq

Normal user

19 Nov 2012, 06:14

Hi all,

We would like to conduct a rapid nutritional assessment in an area affected by floods three times in the last quarter. Could anyone suggest, with a reference, the sample size I should consider for this purpose? Earlier, in South Sudan, we used to consider at least 100 children for any rapid assessment using MUAC, but I could not find a reference for this.

Regards

Monsurul

### Mark Myatt

Consultant Epidemiologist

Frequent user

19 Nov 2012, 15:31

The RAM (Rapid Assessment Method) and similar methods in Ethiopia, Sierra Leone, and Sudan use n = 192, collected as 16 clusters of 12 children. The first-stage sample is taken using a stratified spatial sample or CSAS. The within-cluster sample is collected as 4 clusters of 3 children using a map/segment/sample method or QTR + EPI5. A PROBIT estimator is used. This yields useful precision.

### Anonymous 585

CMN

Normal user

20 Nov 2012, 04:03

Just a question regarding the RAM: while the method may work in areas such as Ethiopia, Sudan, and Somalia, would this sample selection and sample size provide accurate enough estimates or probabilities of GAM and SAM (even with PROBIT calculators) in areas with a high density of children under 5, where a small geographical area could contain in excess of 25,000 children? And would this methodology be adequate as a basis for projects and for supporting decisions made by NGOs and the donor community?

### Mark Myatt

Consultant Epidemiologist

Frequent user

20 Nov 2012, 10:08

I suppose that you will have to ask "NGO's and the donor community". Here is some information that might help them decide ...

It is still early days for RAM. Work is proceeding. We have experienced few technical setbacks.

RAM is a cluster-sampled method and has all the limitations of such a method WRT clustered phenomena. For example, it will have quite poor precision WRT some WASH indicators in rural settings. It is not a "magic bullet" for all that is weak in current approaches. It does provide some sample size savings. It also does not require population data in advance of sampling.

Work on testing the PROBIT method (still ongoing as we try to improve accuracy and precision) used computer-based simulations based on a survey area population of 100,000 total persons with 17% aged between 6 and 59 months (i.e. 17,000 children). This was felt to be typical of areas in which RAM might be applied and is not so different, in terms of sampling, from the 25,000 you mention above. Results show the method yields similar precision to (e.g.) a SMART survey using roughly three times the RAM sample size of n = 192. The gain in precision in field use is likely to be somewhat higher than this because the within-cluster sampling methods employed reduce variance loss (and DEFF) compared with the proximity sampling commonly used in SMART surveys. SMART surveys could, of course, adopt methods such as MSS or QTR + EPI5 for within-cluster sampling, which should result in improved precision.

There is an issue of accuracy (or bias). The classical estimator is generally unbiased. The PROBIT estimator is not unbiased. The level of bias is, however, small. Here is an example of relative precision and bias for GAM prevalence estimates with variants of the PROBIT estimator and the classical estimator using computer-based simulation:

PERFORMANCE OF RAM/PROBIT CANDIDATE ESTIMATORS FOR GAM PREVALENCE

| Method  | Location           | Dispersion       | Error (%) | Rel. Prec. (%) |
|---------|--------------------|------------------|-----------|----------------|
| PROBIT  | Mean               | SD               | 0.8667    | 23.99          |
| PROBIT  | Mean (transformed) | SD (transformed) | 0.7321    | 24.05          |
| PROBIT  | Median             | MAD * 1.42860    | 0.1852    | 24.58          |
| PROBIT  | Median             | IQR / 1.34898    | 0.0670    | 24.66          |
| PROBIT  | Tukey's Trimean    | IQR / 1.34898    | 0.1059    | 24.62          |
| PROBIT  | Mid-hinge          | IQR / 1.34898    | 0.1947    | 24.58          |
| CLASSIC | NA                 | NA               | -0.0006   | 27.22          |

The classical method is unbiased. PROBIT with (e.g.) the median and IQR is slightly more precise (i.e. the 95% CI will be about 10% narrower) than the classical method at the tested sample sizes (i.e. n = 192 for PROBIT and n = 544 for CLASSIC). The bias for this PROBIT variant is 0.067% (i.e. almost zero); the method misestimates prevalence only very slightly.
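To make the PROBIT idea concrete, here is a minimal Python sketch: estimate the location and dispersion of the MUAC distribution, then read the prevalence below a threshold off the normal CDF. The function names are mine and the normality assumption is the simplest possible one; this is an illustration of the estimator family in the table above, not the RAM implementation.

```python
import math
import random
import statistics

def normal_cdf(z):
    """Standard normal CDF computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_prevalence(muac_mm, threshold_mm=125.0, robust=False):
    """Estimate prevalence of MUAC < threshold from sample summaries.

    robust=False : mean and SD (the basic PROBIT variant)
    robust=True  : median and IQR / 1.34898 (a robust variant from the table)
    """
    if robust:
        q = statistics.quantiles(muac_mm, n=4)       # [Q1, Q2, Q3]
        location = q[1]
        dispersion = (q[2] - q[0]) / 1.34898         # IQR rescaled to an SD
    else:
        location = statistics.mean(muac_mm)
        dispersion = statistics.stdev(muac_mm)
    z = (threshold_mm - location) / dispersion
    return normal_cdf(z)

# Toy usage with simulated MUAC values (mm), n = 192 as in RAM:
random.seed(1)
sample = [random.gauss(138, 12) for _ in range(192)]
print(round(probit_prevalence(sample), 3))           # GAM proxy (< 125 mm)
print(round(probit_prevalence(sample, 115.0), 3))    # SAM proxy (< 115 mm)
```

Note how every observation contributes to the location and dispersion estimates, whereas the classical estimator only counts cases below the threshold; this is the sense in which PROBIT "makes very full use of the data".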

I think the work outlined above shows that RAM can do as well as SMART for GAM (and much better for SAM in terms of precision) with smaller sample sizes. If SMART is good enough for "NGOs and the donor community" then RAM/PROBIT should also be good enough.

I do not want to make too strong a claim for RAM. More testing is required, but the method remains a very promising approach.

Here are some notes WRT indicators ...

PROBIT makes very full use of the data compared to the classical estimator. This can account for the improved performance. This is an example of indicator redesign. The approach here has been to reverse the frequentist formula of:

probability = proportion

so that:

proportion = probability

This approach can be applied to other indicators. For example, estimating proportions using survival probabilities for continuing breastfeeding is about eight times more efficient (i.e. in terms of sample size requirements) than using the proportion-based approach of the current IYCF indicators. It is important to note that the survival-based indicator has problems that limit its utility for frequent M&E but the general approach works.

Another approach is to use whole-sample indicators. IYCF indicators are plagued with issues of sample size because the denominators become vanishingly small for some indicators. For example, the indicator for continuing breastfeeding at 12 months (CBF12M) would have a sample size of n = 50 from a survey sample of n = 900 (large for SMART). If CBF12M is clustered then n = 50 might become an effective sample size of n = 25 or fewer. It is, however, possible to rethink the IYCF indicator set to have indicators that apply to the entire sample but still provide useful information. We have been using this approach in a couple of countries with some success.

I am not sure about the issue of population density. RAM is a general purpose survey method employing a spatial sample in the first stage. This has allowed the method to be used in urban settings which are (by definition) areas of high population density.

I hope this helps.

### Mark Myatt

Consultant Epidemiologist

Frequent user

20 Nov 2012, 10:15

BTW ... forgot to say that EPI uses n = 210 (as 30 clusters of 7). EPI coverage is often quite patchy so we end up doing M&E on our most important and effective child-survival program using surveys with an effective sample size of n = 100 or fewer. The main differences between EPI and RAM are that RAM uses a (smaller) spatial sample in stage 1 and a more representative sample in stage 2.

### Anonymous 585

CMN

Normal user

20 Nov 2012, 11:30

Thank you for your detailed explanation of RAM; it certainly gives clarity to my questions. Would you have at hand any reference/literature on the methodology, including on analysing the data, so I can gain a greater understanding? Thanks.

### Mark Myatt

Consultant Epidemiologist

Frequent user

20 Nov 2012, 11:59

Much literature has been produced in co-operation with partners (governments, UNOs, NGOs) and I am not free to distribute that without first seeking permissions. There is the original RAM proposal, the first development update (with new PROBIT estimators tested), and the Sierra Leone M&E manual (material on sampling). These cover some aspects of RAM. Data analysis is by a blocked and weighted bootstrap (BWB) estimation procedure (this will be described briefly by HelpAge in their upcoming report from Chad). The BWB procedure takes into account the sample design (i.e. blocking for the cluster-sample design and weighting by a "roulette wheel" algorithm for posterior weighting).

If you need a detailed technical briefing on RAM then you should contact VALID or me directly.

### Anonymous 585

CMN

Normal user

20 Nov 2012, 12:12

Much appreciated, especially the rapidness of your replies. Thank you!

### Mark Myatt

Consultant Epidemiologist

Frequent user

26 Nov 2012, 16:35

Here is the second RAM development update. This has results of testing a few more variants of the PROBIT indicator. All results are for GAM and SAM by MUAC (< 125 mm and < 115 mm). There is also some material describing IYCF indicators in RAM type surveys.

### Mark Myatt

Consultant Epidemiologist

Frequent user

27 Nov 2012, 11:02

Glad to have been of use.

I am a little confused by the methodology you describe. Best if you send me the methodology. My email address is:

mark@twinkletoes.cinderella.brixtonhealth.com

without the "twinkletoes.cinderella" (this is just to hinder spam).

### Sinead O Mahony

Nutrition Advisor, GOAL

Normal user

6 Mar 2017, 18:18

Hi, I want to develop a rapid nutrition assessment guideline to act as a plug-in to MIRA and IRA assessments in locations where IPC is 3, 4, or 5. The assessment would be a precursor to determine whether a full SMART is needed. After reviewing numerous resources I have seen quite a few organisations and cluster guidelines that recommend a sample of 100 children aged 6-59 months when doing a rapid nutrition assessment. Does anyone know the rationale behind this number? I haven't been able to find it. Thanks, Sinead.

### Mark Myatt

Consultant Epidemiologist

Frequent user

7 Mar 2017, 10:10

Sometimes "rapid" can mean "quick and dirty but cheap". That is not always the case. I think you need to be sure to use a representative sampling method to avoid selection biases.

The n = 100 is useful when using a classification approach. A truncated sequential sampling approach, as used (e.g.) for HIV drug resistance, can provide accurate and reliable prevalence classifications into < 5%, 5% to 15%, and > 15% classes using a sample size of just n = 47. Using n = 100 would provide finer classifications and/or smaller errors.
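The classification idea can be illustrated with a much simpler fixed-n rule. Note this is not the truncated sequential procedure (which inspects cases as they accumulate and can stop early); it is a hypothetical sketch with the class boundaries from the text, and the function name and thresholds are mine.

```python
def classify_prevalence(cases, n=100, low=0.05, high=0.15):
    """Classify prevalence into three bands from a fixed sample of size n.

    cases : number of children below the case-defining MUAC threshold
    low, high : class boundaries (5% and 15% as in the text)
    """
    p_hat = cases / n
    if p_hat < low:
        return "low (< 5%)"
    if p_hat <= high:
        return "moderate (5% to 15%)"
    return "high (> 15%)"

# Toy usage: 3, 10, and 20 cases out of n = 100 screened children
for cases in (3, 10, 20):
    print(cases, "->", classify_prevalence(cases))
```

A sequential design reaches the same kind of decision with a smaller expected sample size because sampling stops as soon as the accumulated evidence is decisive, which is how n = 47 can suffice for a three-class result.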

If you use n = 100 and a simple estimator on (e.g.) a 15% prevalence, the 95% CI will be something like:

+/- 1.96 * sqrt(0.15 * (1 - 0.15) / 100) = 7%

assuming a simple random sample. With a design effect of 2.0 it will be about +/- 10%. I think you will be better off with a classifier.
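The back-of-envelope calculation above can be written as a small helper (the function name is mine; the formula is the standard Wald half-width, inflated by the design effect):

```python
import math

def ci_half_width(p, n, deff=1.0, z=1.96):
    """Approximate 95% CI half-width for an estimated proportion p
    from a sample of size n, inflated by a design effect deff."""
    return z * math.sqrt(deff * p * (1.0 - p) / n)

print(round(ci_half_width(0.15, 100), 3))            # 0.07  (simple random sample)
print(round(ci_half_width(0.15, 100, deff=2.0), 3))  # 0.099 (design effect of 2.0)
```

The design effect enters under the square root, so doubling it widens the interval by a factor of sqrt(2), which is why +/- 7% becomes roughly +/- 10%.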

The alternative is to use RAM (the Rapid Assessment Method). This uses a small spatial sample (n = 192 collected as 16 clusters of 12 children), a PROBIT estimator, and computer-intensive methods (which used to mean waiting a week for the answer but now mean waiting a minute or two). The RAM methodology has been used by HelpAge (as RAM-OP), GOAL (prevalence of low H/A), UNICEF (M&E surveys, nutritional surveillance), VALID (M&E surveys, prevalence surveys), GAIN (M&E, prevalence, coverage), ACF (nutritional surveillance), SCF (nutritional surveillance), and others for a variety of purposes. The method costs about 60% of the cost of a SMART survey and gives similar precision to a SMART survey for GAM prevalence and much better precision for SAM prevalence. With RAM you would not need to do a second SMART survey. Additional savings can be made using a Bayesian-PROBIT estimator. ACF has achieved good results with n = 132.

Let me know if you need more information on anything in this post.

### Silke Pietzsch, Action Against Hunger

Technical Director /Action Against Hunger USA

Normal user

7 Mar 2017, 13:45

Hi Sinead,

thanks for your question.

I think it could be good for you to be in touch with the IPC team directly, who also have a Nutrition Adviser for these questions. It may be best to liaise with them if you are eager to align it with IPC. Please reach out to Sophie Chotard, the Global IPC programme manager: Sophie.Chotard@fao.org

thank you!

### Kennedy Musumba

SMART Program Manager

Normal user

7 Mar 2017, 20:09

The Rapid SMART Methodology would be helpful in circumstances similar to Anonymous 3089’s. This is an emergency tool developed to rapidly estimate the prevalence of GAM and SAM in contexts where information is required quickly or time for data collection is limited.

The tool has been piloted in South Sudan, Madagascar, Afghanistan, India, Myanmar, and Iraq.

The Rapid SMART Guideline is available HERE

Thanks