Sometimes "rapid" can mean "quick and dirty but cheap", but that is not always the case. Whatever method you choose, I think you need to make sure you use a representative sampling method to avoid selection bias.
A sample size of n = 100 is useful if you take a classification approach. A truncated sequential sampling approach, as used (e.g.) for HIV drug resistance surveys, can provide accurate and reliable prevalence classifications into < 5%, 5% to 15%, and > 15% classes using a sample size of just n = 47. Using n = 100 would allow finer classifications and/or smaller errors.
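To make the classification idea concrete, here is a minimal sketch of a (non-sequential) two-threshold classifier of the kind used in LQAS-type surveys. The decision thresholds d1 and d2 below are purely illustrative placeholders, not the published decision rules for the HIV drug resistance method; in practice they would be chosen to control the classification error probabilities at the class boundaries.

```python
def classify_prevalence(positives, n=47, d1=3, d2=7):
    """Classify prevalence into three classes from the number of
    positive cases found in a sample of size n.

    d1 and d2 are illustrative decision thresholds only; real
    thresholds are derived from binomial error calculations at
    the class boundaries (here 5% and 15%).
    """
    if positives <= d1:
        return "< 5%"
    elif positives <= d2:
        return "5% to 15%"
    else:
        return "> 15%"

print(classify_prevalence(2))   # few cases -> low class
print(classify_prevalence(12))  # many cases -> high class
```

A sequential version would apply rules of this kind after each observation and stop early once a class can be declared, which is where the sample-size savings come from.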
If you use n = 100 and a simple estimator on (e.g.) a 15% prevalence the 95% CI will be something like:
+/- 1.96 * sqrt(0.15 * (1 - 0.15) / 100) ~= +/- 7%
assuming a simple random sample. With a design effect of 2.0 it will be about +/- 10%. I think you will be better off with a classifier.
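The arithmetic above can be checked with a few lines of code. This is just the standard normal-approximation confidence interval for a proportion, with the design effect inflating the variance:

```python
import math

def ci_half_width(p, n, deff=1.0, z=1.96):
    """Half-width of the 95% CI for an estimated prevalence.

    p    : expected prevalence (as a proportion)
    n    : sample size
    deff : design effect (1.0 for a simple random sample)
    z    : critical value (1.96 for 95% confidence)
    """
    return z * math.sqrt(deff * p * (1 - p) / n)

# Simple random sample: about +/- 7 percentage points
print(round(ci_half_width(0.15, 100), 3))            # 0.07
# Design effect of 2.0: about +/- 10 percentage points
print(round(ci_half_width(0.15, 100, deff=2.0), 3))  # 0.099
```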
The alternative is to use RAM (Rapid Assessment Methodology). This uses a small spatial sample (n = 192, collected as 16 clusters of 12 children), a PROBIT estimator, and computer-intensive methods (which used to mean waiting a week for the answer but now means waiting a minute or two). The RAM methodology has been used by HelpAge (as RAM-OP), GOAL (prevalence of low H/A), UNICEF (M&E surveys, nutritional surveillance), VALID (M&E surveys, prevalence surveys), GAIN (M&E, prevalence, coverage), ACF (nutritional surveillance), SCF (nutritional surveillance), and others for a variety of purposes. The method costs about 60% as much as a SMART survey, gives similar precision to a SMART survey for GAM prevalence, and gives much better precision for SAM prevalence. With RAM you would not need to do a second SMART survey. Additional savings can be made using a Bayesian-PROBIT estimator. ACF have achieved good results with n = 132.
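The PROBIT idea can be sketched briefly. Instead of counting cases directly, you fit a normal distribution to the observed z-scores (e.g. weight-for-height) and read the prevalence below the case-defining threshold from the fitted distribution's CDF. Because the estimate uses every observation, not just the cases, it works with smaller samples. This is a minimal illustration of the principle, not the full RAM estimator (which adds resampling and other refinements):

```python
import math
from statistics import NormalDist

def probit_prevalence(whz, threshold=-2.0):
    """PROBIT-style estimate of prevalence below a z-score threshold.

    whz       : list of observed weight-for-height z-scores
    threshold : case definition (e.g. WHZ < -2 for GAM, < -3 for SAM)

    Fits a normal distribution to the sample and returns the fitted
    probability mass below the threshold.
    """
    mu = sum(whz) / len(whz)
    sd = math.sqrt(sum((x - mu) ** 2 for x in whz) / (len(whz) - 1))
    return NormalDist(mu, sd).cdf(threshold)
```

For SAM in particular, the classical estimator may see only a handful of cases in a small sample, while the PROBIT estimator still has the whole distribution to work with, which is why its precision advantage is largest there.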
Let me know if you need more information on anything in this post.