First ... sorry for not replying sooner. I usually reply after an e-mail notification of a posting and either I did not get one or I did and it got overlooked. ENN are looking into this. Anyway ...
There are a couple of "problems" that you have to address:
Multiple indicators : It seems to me that the bulk of the indicators mentioned are simple proportions or percentages. What you need to do is make a list of each indicator and, for each indicator, write down the expected level (if you do not have any good idea then use 50%) and the desired level of precision (i.e. the half-width of the 95% confidence interval, such as +/- 3%).
When you have done this you should use a sample size calculator such as
SampleXS:
http://www.brixtonhealth.com/samplexs.html
Or
GNU sampsize, which is available online at:
http://sampsize.sourceforge.net/iface/index.html
And calculate the required sample size for each indicator. The largest sample size that you calculate here is the smallest sample size that will yield the desired precision for [u]all[/u] of your indicators.
It is, unfortunately, not as simple as that. The indicators apply to different units. For example, the nutritional anthropometry indicator applies to individual children but a sanitation indicator may apply to a household. You have to account for this when you calculate sample sizes. I will give an example with three indicators:
GAM : Expected prevalence = 12%, desired precision = +/- 3%
EPI : Expected coverage proportion = 70%, desired precision = +/- 10%
Safe disposal of faeces (SDF) : Expected proportion safe = 50%, precision = +/- 10%
When I use GNU sampsize for these I get 451, 81, and 97 respectively. The problem here is that the indicators apply to different units:
GAM : Children aged 6 - 59 months
EPI : Children aged 6 - 24 months
SDF : Households
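The per-indicator calculation can be sketched in Python (a minimal version of what SampleXS or GNU sampsize do for a simple random sample, ignoring the finite population correction), and it reproduces the 451, 81, and 97 above:

```python
import math

def sample_size(p, d, z=1.96):
    """Sample size needed to estimate a proportion p to within +/- d
    with a 95% confidence interval (z = 1.96), assuming a simple
    random sample and no finite population correction."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

indicators = {
    "GAM": (0.12, 0.03),  # expected 12%, precision +/- 3%
    "EPI": (0.70, 0.10),  # expected 70%, precision +/- 10%
    "SDF": (0.50, 0.10),  # expected 50%, precision +/- 10%
}

for name, (p, d) in indicators.items():
    print(name, sample_size(p, d))
```

Note that using an expected level of 50% maximises p * (1 - p), so it is the safe (largest) choice when you have no good prior estimate.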
You have to find some way of "standardising". The easiest way is to work with households and express the sample size in terms of the number of households required. If we assume that we will find 1.25 children aged between 6 and 59 months in a sampled household then we would need to sample 451 / 1.25 = 361 households to find 451 children aged between 6 and 59 months. If we expect to find 0.25 children aged between 6 and 24 months in a household then we would need to sample 81 / 0.25 = 324 households to find 81 children aged between 6 and 24 months. And, of course, you need to sample 97 households to find 97 households. So the sample sizes expressed in numbers of households are:
GAM : 361
EPI : 324
SDF : 97
Again, it is not as easy as that. For a cluster sampled survey we have a design effect (DEFF) to consider. This will be different for different indicators. It will be particularly high for anything that tends to cluster spatially (either within or between villages) such as infectious diseases or program coverage. You need to make a guess at these and multiply the calculated sample size by the expected DEFF. If we assume that GAM is not very clustered (DEFF = 1.5) and that EPI and SDF are likely to be more clustered (e.g. DEFF = 3 and DEFF = 2 respectively) then our sample sizes are now:
GAM : 361 * 1.5 = 542 (rounded up)
EPI : 324 * 3 = 972
SDF : 97 * 2 = 194
If you use a cluster sampled approach then you will need about 30 clusters (do not go much below this). To calculate the within-cluster sample size you should divide the largest sample size by the number of clusters. In our example this is 972 / 30 = 32.4, which we round up to 33 households per cluster. It is common practice with sample size calculations to round up the results of calculations.
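The household standardisation, DEFF inflation, and per-cluster steps can be sketched as follows (the per-household ratios and the DEFFs are the guessed example values above, not fixed constants; round up at each step):

```python
import math

# Per-indicator sample sizes in their natural units (from the
# calculator step), assumed eligible individuals per household,
# and guessed design effects (DEFFs) -- all example values.
indicators = {
    #        n    per household  DEFF
    "GAM": (451,  1.25,          1.5),
    "EPI": ( 81,  0.25,          3.0),
    "SDF": ( 97,  1.00,          2.0),
}

CLUSTERS = 30  # do not go much below 30 clusters

households = {}
for name, (n, per_hh, deff) in indicators.items():
    hh = math.ceil(n / per_hh)               # standardise to households
    households[name] = math.ceil(hh * deff)  # inflate for clustering

largest = max(households.values())
per_cluster = math.ceil(largest / CLUSTERS)  # within-cluster sample size

print(households, per_cluster)
```

The overall survey then samples CLUSTERS * per_cluster households, which covers the most demanding indicator and over-samples the others.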
This may seem a complicated procedure but it should not present problems if you take it step-by-step.
The "homogeneity assumption" : The main problem with the design you propose is that it provides a single estimate for an indicator. This is OK as long as it makes sense to have a single estimate. As a survey area gets larger the chances of a single estimate being meaningful decrease. For example, imagine that you cover two districts in a single survey. One district has an active EPI program and the other does not. If the true EPI coverage in the first district is 80% and the true EPI coverage in the second district is 30% then your survey might tell you that EPI coverage is about 55%. You would conclude that EPI coverage was poor everywhere but the truth is that in one district it is pretty good while in the other district it is very bad. Also, neither district has an EPI coverage even close to 55%. IMO, such an estimate applies nowhere.
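A quick back-of-envelope illustration of that pooling effect (the sample sizes and equal allocation between districts are hypothetical):

```python
# Two districts surveyed together, with equal shares of the sample.
# True coverage values are the hypothetical 80% / 30% from the example.
true_coverage = {"district_A": 0.80, "district_B": 0.30}

n_per_district = 300  # assumed equal allocation to each district
covered = sum(round(p * n_per_district) for p in true_coverage.values())
pooled = covered / (2 * n_per_district)  # 0.55 -- matches neither district
```

With unequal allocation the pooled figure shifts towards the more heavily sampled district, but it still describes neither district.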
[u]A survey that produces misleading results is worse than having no survey at all.[/u] What you will see in this context is a large design effect and a very wide 95% confidence interval. Not very useful. In the situation that you describe I would be very wary of doing a wide-area survey that yielded a single estimate for the wide area. Stratification (as the term is used in MICS) is one approach but to produce useful results (e.g. per-district results) you would really need to do a full sample size survey in each area. Another form of stratification that might be more useful is spatial stratitification. With this the area is divided into a set of small areas and a sample is taken from each. The trick is to make the small-area sample representative and make clever use of the data you collect. Such methods have been used for estimating CMAM program coverage and for the Myanmar Periodic Review (which uses hexagonal / triangular areas and reuses data to effcetively triple the small area saple size for free). With small samples you may need to classify (e.g. EPI < 50%, between 50% and 80%, or > 80%) rather than estimate indicator proportions.
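The classification idea can be sketched crudely as follows. The thresholds and the naive compare-the-sample-proportion rule are illustrative only; formal small-sample classifiers (e.g. LQAS-type designs, as used in some CMAM coverage methods) set decision thresholds on counts so as to control the misclassification error:

```python
def classify_coverage(covered, n, low=0.50, high=0.80):
    """Classify observed coverage into three bands using illustrative
    thresholds (< 50%, 50-80%, > 80%). Naive rule: compare the sample
    proportion directly to the thresholds. A formal design would
    choose count-based decision rules with known error probabilities."""
    p = covered / n
    if p < low:
        return "low"
    elif p <= high:
        return "moderate"
    return "high"
```

For example, 20 covered children out of a small-area sample of 50 would classify as "low" even though 40% would be a very imprecise point estimate.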
In short ... you could do a MICS type survey but you should be aware that it may yield misleading results.
You should be aware that not everyone shares my poor opinion of survey designs like the MICS. I suggest that the forum administrator make a direct request to a responsible person in UNICEF for their opinions on this.
I hope this helps.