
Setting up of nutrition surveillance

This question was posted in the Assessment forum area and has 9 replies. You can also reply via email – be sure to leave the subject unchanged.


Anonymous 81

Public Health Nutritionist

Normal user

16 Nov 2013, 04:43

I wonder if anyone can advise me or direct me to a guideline/manual on how to set up a sentinel site community nutrition surveillance system. This includes sample size, how to determine the number of sentinel sites, selection of sentinel sites, and selection of subjects.

Regards

Mark Myatt

Consultant Epidemiologist

Frequent user

18 Nov 2013, 12:35

A few years ago I designed a surveillance system for Save the Children (HUMS) which was taken up by ACF (Listening Posts) in a couple of countries. I have zipped up the documentation (and other material) I made for Save the Children. You can download it from here. You may find it useful.

Perhaps SC-UK and ACF can add something here about their experiences with the system.

Altmann

Normal user

18 Nov 2013, 14:34

As Mark has just written, we (ACF) implemented the "Listening Post" methodology in Liberia and Burkina Faso, and currently in the Central African Republic.

This sentinel site surveillance methodology includes:
- a random selection of 96 children < 2 years (by CSAS + EPI5)
- monthly follow-up measurements (MUAC and weight) among the same children selected at baseline (a longitudinal approach), with top-up replacement of children who age out of the cohort
- additional indicators, which can be added depending on what you want to monitor (e.g. diarrhoea, diet diversity)
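The longitudinal top-up mechanics described above can be sketched as a minimal simulation (this is illustrative only, not ACF's actual procedure; the 0-6 month replacement pool and the uniform baseline ages are assumptions made for the example):

```python
import random

def monthly_round(cohort_ages, max_age=24):
    """One surveillance round: age every child by one month, retire
    those who reach the age limit, and top up with younger children
    so the cohort stays at its baseline size. Ages are in months."""
    aged = [a + 1 for a in cohort_ages]
    kept = [a for a in aged if a < max_age]
    # Illustrative top-up: recruit infants aged 0-6 months to replace retirees
    replacements = [random.randint(0, 6) for _ in range(len(aged) - len(kept))]
    return kept + replacements

random.seed(0)
cohort = [random.randint(0, 23) for _ in range(96)]  # baseline: 96 children < 2 years
for _ in range(12):                                  # one year of monthly rounds
    cohort = monthly_round(cohort)
assert len(cohort) == 96 and all(a < 24 for a in cohort)
```

Without the top-up step the same simulation produces a shrinking, ageing cohort, which is the failure mode the replacement process exists to avoid.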

Our experience has been quite successful: the system has produced reliable data with relatively precise estimates (+/- 10%). Challenges include:
- the interpretation of the data (especially during the first year, as you have no comparison trend)
- the definition of thresholds to "alert" stakeholders/authorities
- keeping the surveillance system "alive", even if it is not directly linked to some kind of intervention.
Costs are relatively small, but the technical nature of the system requires a high level of supervision.
For more information, do not hesitate to contact me (mal@actioncontrelafaim.org)
Best,
Mathias

Cecile Basquin

Nutrition Advisor / Action Against Hunger-ACF USA

Normal user

19 Nov 2013, 00:18

Hi there,

Some more info from Action Against Hunger – ACF-USA: we developed a small sample survey surveillance system (based on the SMART methodology) back in 2009/2010 through a collaboration with UNICEF and the Centers for Disease Control and Prevention (CDC, Atlanta, USA). The system is based on conducting anthropometric surveys to ensure representativeness and direct comparability with data collected in previous surveys, i.e., to establish trends of GAM/SAM and monitor aggravating factors linked to malnutrition.
A multistage cluster sampling approach is used, as in most anthropometric surveys, following the SMART methodology. The design is 25 clusters of 12 households each. Though "small", this sample size ensures that a change in acute malnutrition of at least 4% between two rounds of surveys can be detected using the CDC "2 surveys" calculator. Also, the CDC "probability calculator" can be used to present results / give a GAM threshold with an 85% probability of being exceeded (relevant for recommendation purposes).
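The kind of calculation behind the "2 surveys" comparison can be sketched as a two-proportion z-test with a design-effect inflation (an illustrative approximation, not necessarily the CDC calculator's exact method; the design effect of 1.5 and the prevalence figures are assumed values):

```python
from math import sqrt
from statistics import NormalDist

def two_survey_z(p1, n1, p2, n2, deff=1.5):
    """z statistic for the change in prevalence between two independent
    cluster surveys, with variances inflated by a design effect (deff)
    to account for cluster sampling. p1, p2 are proportions."""
    se = sqrt(deff * p1 * (1 - p1) / n1 + deff * p2 * (1 - p2) / n2)
    return (p2 - p1) / se

def two_sided_p(z):
    """Two-sided p-value for a standard-normal z statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative: GAM appears to rise from 10% to 16% between two rounds of n = 300
z = two_survey_z(0.10, 300, 0.16, 300)
p = two_sided_p(z)
```

In this made-up example the change falls short of significance at the 5% level, which is why the detectable-change threshold (here quoted as a minimum of 4%) matters when interpreting round-to-round differences.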
These small-scale surveys can be run 2 or 3 times a year (during key seasonal events), and apart from nutrition indicators, a set of key indicators on health, WASH, food security, and child care and feeding practices can also be collected for early warning purposes.
ACF teams in Uganda and Kenya have been using this surveillance system for several years now. It has generated a lot of information and has allowed trends of wasting over time/seasons (among other indicators) to be established. Recent discussions with the ACF Kenya and Uganda teams revolve around various interesting topics, such as i) reducing the number of indicators collected on a regular basis (to use only those relevant to early warning); ii) integrating this surveillance system into existing national early warning systems; iii) handing over its management to local authorities, etc.
For more information:
CDC calculators: http://www.cdc.gov/globalhealth/gdder/ierh/researchandsurvey/calculators.htm
On the ACF-USA website you can find surveillance reports as well as the results of a meta-analysis of the Uganda surveillance data recently done jointly with the Government of Uganda:
http://www.actionagainsthunger.org/media/technical-surveys

Mark Myatt

Consultant Epidemiologist

Frequent user

19 Nov 2013, 09:03

Andrew Hall (Save the Children) sent a link to this recent paper that may be of interest.

Anonymous 81

Public Health Nutritionist

Normal user

19 Nov 2013, 12:27

Dear Mark,

Thank you very much for your usual support.

Anonymous 81

Public Health Nutritionist

Normal user

19 Nov 2013, 14:59

Dear Mark, Cecile and Altmann,

Thank you very much for your responses. I would like to add a follow-up question. The Listening Post (LP) method seems relevant and feasible for my case, so my follow-up question on LP concerns the selection of the study group (in my case it will be 0-59 months). According to the LP implementation guide, the total sample required is 96 children, and the same children should be followed up longitudinally. During implementation, there will be continuous top-up replacement of retired/older children with younger ones. Moreover, children missed for various reasons should also be replaced with children of the same age group. This seems very demanding in terms of manpower and supervision.

My question is whether there are other follow-up options. Instead of following the same cohort (longitudinally), is it possible to use a repeated cross-sectional approach? To clarify: if we want to collect the surveillance data quarterly/monthly, could we select 96 children at random from the respective sentinel sites in every quarter/month? This would avoid the complicated process of top-up and other replacements. Given limited capacity for supervision, this continuous or ongoing replacement process could introduce bias. The other issue could be a change of practices: if caregivers know that we are following these children regularly, they might change their practices and give more attention or care to those children, which could lead to very positive results that do not reflect the community.

Thanks

Mark Myatt

Consultant Epidemiologist

Frequent user

19 Nov 2013, 16:05

A few points ...

Sample size : n = 96 is a minimum. If you can do more then, within reason, you should do more (n = 132 has been used).

Age-group : A more restrictive and younger age-group is used because (1) the younger age-group is more susceptible to GAM and SAM, (2) a single narrow age-group simplifies the weight gain analysis, (3) the younger age-group is in the "first 1000 days", and (4) a narrow age-group means a smaller population, which means a larger sampling fraction. Going for the 6-59 month age-group will, I think, reduce the sensitivity of the surveillance system and complicate analysis. A good reason to go with the 6-59 month age-group is if you have been doing SMART periodically (i.e. several times a year) for several years. Even then you could re-analyse the SMART data for the narrower age-group. A larger sample size (i.e. larger than n = 96) will be required.

Top-up workload : I will leave this for people who have run LP to respond to. It does not seem to be a problem: the numbers retiring and lost at each round should be quite small. Note that you do need to top up, as not doing so results in an ageing cohort which will, over time, move out of risk.

Alternative follow-up options : The longitudinal approach means we can do more with a small sample size because sampling variation between rounds is minimised. If you use a repeated cross-sectional approach then you will need a larger sample size to cut through the noise introduced by sampling variation. The sample size in Cecile's post above (n = 300) looks a bit small to me when using a classical estimator of prevalence, but I am sure that CDC will have got that right. One issue with a repeated cross-sectional approach is that sick children tend to be hidden from surveys in some locations, and this leads to SAM kids being excluded. This is not a big issue for surveillance as we do not worry too much about a consistent bias.
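The point that a repeated cross-sectional design needs a larger sample than the n = 96 longitudinal cohort can be illustrated with the standard two-proportion sample-size formula (the design effect, significance level, power, and prevalence figures here are all assumed values for the sake of the example):

```python
from math import ceil
from statistics import NormalDist

def n_per_round(p1, p2, deff=1.5, alpha=0.05, power=0.80):
    """Sample size per round for two independent (repeated cross-sectional)
    surveys to detect a change in prevalence from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(deff * (z_a + z_b) ** 2 * variance_sum / (p2 - p1) ** 2)

# Detecting a rise in GAM from 10% to 16% with independent rounds needs
# far more children per round than the longitudinal cohort's n = 96:
n = n_per_round(0.10, 0.16)
```

The longitudinal design escapes this because repeated measurements on the same children are correlated, which removes most of the between-round sampling variation from the comparison.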

Bias : The observer effect (see the article in the post starting with "Andrew Hall (Save the Children) sent ..." above) is an issue. The issue of non-consistent bias that is raised is interesting. I wonder when (if) this stabilises. If it stabilises quickly then we can, I think, discount it, as we are not usually concerned about consistent bias in surveillance systems. If it does not stabilise then a periodic change of sites is (as the article suggests) an option. I am not convinced that the Heisenberg Uncertainty Principle (mentioned in the article) is the correct model. The Hawthorne Effect is probably a more useful model here. In the UK NHS we have the BOHICA effect, which is what happens when we rely on the Hawthorne Effect to continually increase productivity. BOHICA stands for "Bend Over Here It Comes Again": a florid term for the tendency of observer effects to fade over time. I think the big risk is poorly considered (or gamed) interventions based on surveillance data. This occurs when the sentinel sites get the most attention because they are the only sites for which we have data, or because intervening there makes the problem disappear by legerdemain.

In summary ... I think you will be OK with your proposed method but that you will need to increase the sample size at each round. I think that you could use a sample size of n = 192 collected from m = 16 clusters and use a PROBIT estimator for prevalence. This approach has been used in Sudan.
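The PROBIT estimator mentioned above can be sketched as follows: rather than counting cases directly, fit a normal distribution to the sample's weight-for-height z-scores and read off the modelled probability below the case-defining cutoff (a minimal illustration of the idea, not the full procedure used in Sudan; the example data are made up):

```python
from statistics import NormalDist, mean, stdev

def probit_prevalence(whz_scores, cutoff=-2.0):
    """PROBIT prevalence estimate: assume WHZ is approximately normally
    distributed, estimate its mean and SD from the sample, and return
    the modelled probability of a z-score below the cutoff
    (GAM when cutoff = -2)."""
    fitted = NormalDist(mu=mean(whz_scores), sigma=stdev(whz_scores))
    return fitted.cdf(cutoff)

# Illustrative sample with mean WHZ -1 and SD 1; the modelled GAM is
# the normal tail probability below -2
example = [-2.0, -1.0, 0.0]
gam = probit_prevalence(example)
```

Because the estimate uses the whole distribution rather than just the count of children below the cutoff, it is more efficient than the classical estimator, which is what allows the smaller per-round sample (n = 192 from m = 16 clusters) to work.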

I hope this is of some use.

Tamsin Walters

en-net moderator

Forum moderator

20 Nov 2013, 14:25

From Edward Kutondo:

Hi. Sentinel sites are often selected purposively, in view of vulnerability. Ideally you need to detect changes and follow trends, hence you select a few sites that will be able to achieve this. The indicators need to be sensitive to change. Small sample sizes are preferable, e.g. 30 households per site - this has been used in South Sudan, Uganda and Kenya.


However, note that random methods are highly recommended. In this case the study subjects are selected using simple or systematic random sampling, depending on the characteristics of the population.


Below are links for additional information.


http://www.unicef.org/nutritioncluster/files/M10P2.doc


http://www.pophealthmetrics.com/content/10/1/18


Edward K.

Sam Oluka

Nutritionist / Food Scientist

Normal user

23 Nov 2013, 07:58

Thanks Edward for the link. A blessed weekend from Uganda.

Samuel
