This article has Open Peer Review reports available.
A study on the implementation fidelity of the performance-based financing policy in Burkina Faso after 12 months
© The Author(s). 2018
Received: 20 August 2017
Accepted: 8 December 2017
Published: 11 January 2018
Performance-based financing (PBF) in the health sector has recently gained momentum in low- and middle-income countries (LMICs) as one of the ways forward for achieving Universal Health Coverage. The major principle underlying PBF is that health centers are remunerated based on the quantity and quality of the services they provide. PBF has been operating in Burkina Faso since 2011 and, since 2014, as a pilot project in 15 health districts randomly assigned to four different models, before an eventual scale-up. Despite the need for expeditious documentation of the impact of PBF, caution is advised to avoid hasty conclusions. Above all, it is crucial to understand why and how an impact is, or is not, produced. Our implementation fidelity study approached this inquiry by comparing, after 12 months of operation, the activities implemented against what was initially planned; this will later make it possible to establish links with the policy’s impacts.
Our study compared, in 21 health centers from three health districts, the implementation of activities that were core to the process in terms of content, coverage, and temporality. Data were collected through document analysis, as well as from individual interviews and focus groups with key informants.
In the first year of implementation, solid foundations were put in place for the intervention. Even so, implementation deficiencies and delays were observed with respect to certain performance auditing procedures, as well as in payments of PBF subsidies, which compromised the incentive-based rationale to some extent.
Over the coming months, efforts should be made to align the intervention more closely with both the context and the original plan.
Results-based approaches (RBAs) are expanding fast in low- and middle-income countries. In particular, performance-based financing (PBF) is advanced by some as a way to help improve health system performance and thereby achieve Universal Health Coverage (UHC). Despite its proliferation on a global scale, research on PBF has thus far been lacking, barely moving beyond observed effects. Indeed, the great majority of studies have focused on demonstrating PBF results [3–6], without explaining the presence of both positive and negative effects. More research is still needed to understand the ‘how and why’ of these effects, as shown by Ssengooba et al. In fact, very few studies have systematically investigated the implementation of PBF [2, 8–13].
Implementation evaluation is, however, important, if not essential. It is used to identify which elements were implemented as planned and which were not, to discern the intervention’s strengths and weaknesses, and to examine its internal validity by assessing the cause-and-effect relationship. In addition to contributing actively to continuous improvement of the intervention, this facilitates the interpretation of an intervention’s implementation and its results. Hence, it strengthens internal validity [15, 16] and helps avoid Type III errors in evaluation, which would lead to mistakenly attributing a lack of effects to the intervention itself without considering the quality of its implementation. The premise underlying the study of implementation fidelity is that a program’s implementation influences its effectiveness. The appropriate degree of fidelity, however, is the subject of debate between defenders of total fidelity to the theoretical model and proponents of essential adjustments. Deviating from the plan is seen by some as compromising the program’s effectiveness and by others as key to making necessary adjustments to the contextual factors surrounding the intervention [18, 19]. Nevertheless, all agree that each program has certain core elements essential to its effectiveness and that these must be implemented with fidelity [18, 20].
Thus, the aim of the present study was to deepen our understanding of the “black box” of PBF, taking Burkina Faso as a case study. In Burkina Faso, the achievement of UHC continues to be compromised by insufficient health funding and poor health system performance, with child and maternal mortality rates among the highest globally. The situation prompted the Ministry of Health (MOH) to undertake a PBF intervention through a World Bank (WB)-supported project, first as a trial project in three districts in 2011 and then, since 2014, in the form of a three-year pilot project. Our objective was to analyze the implementation fidelity of the PBF pilot project in Burkina Faso in order to understand and assess the degree of implementation 12 months post-launch.
PBF was introduced in Burkina Faso as a trial project in the districts of Boulsa, Léo, and Titao over nine months, from April to December 2011. Its results, deemed “encouraging” by evaluators , justified its expansion on a larger scale. In 2014, the PBF mechanism took the form of a three-year pilot project funded by the WB. That intervention affected all echelons of the health system, which is organized as a three-tiered pyramid providing primary, secondary, and tertiary care. The base tier consists of 1698 health and social promotion centers (CSPS - centre de santé et de promotion sociale) and 47 medical centers with surgical units (CMA - centre médical avec antenne chirurgicale). The second tier consists of 9 regional hospitals (CHR – centre hospitalier régional), and the third, 4 university hospitals (CHU - centre hospitalier universitaire) . Altogether, the PBF intervention covered four of the fourteen health regions encompassing 15 districts, 11 CMAs and 561 CSPSs. In some health districts (HD) its implementation was coupled with a community-based indigent-selection intervention and a community-based health insurance (CBHI) program, to influence the service demand side as well.
At the CSPS level, the present PBF exercise was designed as a randomized controlled trial. The four randomization categories were: 1) PBF 1: the health center is paid for indicators of activities performed, based on contractually set prices; 2) PBF 2: same as PBF 1, coupled with an intervention for community-based selection of indigents to be exempted from user fees; 3) PBF 3: same as PBF 2, with an additional subsidy (“equity bonus for indigent care”) to encourage initiatives aimed at increasing service use by indigents; and 4) PBF 4: same as PBF 1, to which is added a community-based insurance plan coupled with community-based indigent selection. This last component of PBF was implemented only in the Boucle de Mouhoun health region.
The principle underlying PBF is that health centers are remunerated according to the quantity and quality of the services they provide, based on a matrix of quantitative indicators along with other quality measures, the whole formalized in a contract between the health center and a designated contracting and auditing agency. In the context of Burkina Faso, the provision of services is monitored by means of monthly audits of quantitative indicators, quarterly audits of quality measures, community-based (local) audits of the accuracy of information entered into health center registers, and user satisfaction surveys. The quantitative and qualitative audits are subsequently cross-audited in a sample of health centers. PBF financial incentives are distributed based on the results of those audits. The funding received is intended to cover health center expenses, and a maximum of 30% may also be paid to staff in the form of individual performance bonuses. Thus, the process rests on the key premise that financial incentives are effective in improving the quality and quantity of health services. Health centers may also qualify for three additional bonuses: 1) the “quality improvement” bonus, to cover expenses associated with improvements to infrastructure and equipment; 2) the “inter-district and inter-health center equity” bonus, intended to compensate for inequalities by adjusting the price associated with a quantitative indicator according to a classification established between health districts and between health centers within the same district¹; and 3) the “indigent care” bonus, specific to PBF 3, which financially compensates health centers that provide care to indigents who, upon presentation of their card, do not pay for medical care or medications.
In practice, this compensation takes the form of a higher purchase price for certain indicators, such as ambulatory consultations for patients aged five years and over with physicians, specialized nurses, or state-certified midwives (female or male), and patient admissions to hospital.
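As an illustration of the remuneration logic described above, the sketch below computes a hypothetical quarterly subsidy. The indicator names, volumes, unit prices, quality score, and the multiplicative quality adjustment are all illustrative assumptions, not the official contract grid; only the additive equity uplift and the 30% cap on staff bonuses come from the text.

```python
# Illustrative sketch only: indicator names, prices, and the quality
# adjustment rule are hypothetical, not the official Burkina Faso grid.

def quarterly_subsidy(quantities, unit_prices, quality_score,
                      equity_uplift=0.0, staff_bonus_cap=0.30):
    """Return a health center's subsidy and the maximum share payable
    to staff as individual performance bonuses (30% cap per the text)."""
    # Quantity payment: audited volumes times contractual unit prices,
    # raised by the inter-district/inter-center equity uplift.
    base = sum(quantities[i] * unit_prices[i] for i in quantities)
    adjusted = base * (1.0 + equity_uplift)
    # Hypothetical quality adjustment: scale by the quarterly quality score.
    total = adjusted * quality_score
    return total, total * staff_bonus_cap

# Hypothetical audited volumes and prices (F CFA) for one quarter.
quantities = {"assisted_delivery": 40, "curative_consultation": 900}
unit_prices = {"assisted_delivery": 1500, "curative_consultation": 100}
total, staff_max = quarterly_subsidy(quantities, unit_prices,
                                     quality_score=0.85, equity_uplift=0.20)
```

Under these assumed figures, the quantity payment of 150,000 F CFA is raised to 180,000 by the equity uplift and scaled to 153,000 by the quality score, of which at most 45,900 could go to staff bonuses.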
Analysis framework and activities analyzed
This study was part of a larger research project designed as a multiple case study with several embedded levels of analysis . The selection of cases is presented and explained elsewhere ; for the present study we retained, in three health districts, 1 CHR, 2 CMAs, and 18 CSPSs.
Our study was based on the implementation fidelity analysis framework of Carroll et al., which is considered to be particularly comprehensive. We first compiled an exhaustive list of all planned activities based on the intervention’s official documentation (e.g., the implementation guide, action plans, and meeting reports). For each activity listed, we specified the content, coverage, and temporality, to assess whether the “active ingredients” of the intervention were received by the “beneficiaries” where, as often, and for as long as planned. The term content refers to the activity that was implemented and is under analysis. Coverage refers to the public affected by the activity. Temporality refers to the timing, or time frame, of the activity implemented.
We therefore began with an exhaustive compilation of this list based on four dimensions identified through a full review of all activities: 1) planning (training workshops and recruitment); 2) application (intervention launch, performance audits, determination and payment of subsidies); 3) tools (purchase contracts, reporting systems); and 4) action research. We then carefully selected the activities considered to be core, understood as fundamental to the intervention’s effectiveness [18, 20]. Thus, for instance, activities having to do with supplying office materials and furniture for the technical services supporting PBF implementation at the central level were not retained. Likewise, the analysis of data related to the introduction and payment of bonuses for indigent care covered only the 5 CSPSs classified as PBF 3 (n = 5), as they were the only ones receiving these bonuses. Thereafter, we grouped certain core activities together to facilitate understanding and use of the list. This list of core activities was finally submitted to and validated by the authorities in charge of the PBF program, who recognized the listed activities as essential.
Data were collected at two points in time to identify: 1) the activities planned and 2) the activities that had been implemented after one year of operation. To construct the list of planned activities, we analyzed in depth all official documents available as of March 31, 2014. Then, in February and March 2014, we interviewed eight key informants who knew the intervention well, using a non-random, criterion-based sampling approach in which the inclusion criteria were position held, availability, and knowledge of the intervention.
The empirical data were then collected 12 months after the intervention’s launch, between December 2014 and February 2015. One year was considered sufficient time for the stakeholders to have put the intervention in place. The data were obtained from interviews with key actors involved in implementing PBF in each of the 21 health centers (CHR, CMAs, CSPSs). Two sampling strategies were applied based on the type of care facility. First, the CSPS sample included directors of centers, managers of drug depots, health workers (assistant head nurses, maternal care managers, managers of nursing curative consultations, managers of expanded immunization programs), and available support personnel. Then, given the limited availability of CMA and CHR personnel, the preferred approach was to speak with the head of the facility and anyone else available who knew about the PBF program implementation.
Two interview techniques were used, depending on the number of resource persons available. Focus groups were conducted in the primary care facilities (CSPSs and CMAs) and individual interviews in the secondary care facility (CHR). In all, 83 resource persons were interviewed: 76 persons in 18 focus groups in the CSPSs, six persons in two focus groups in the CMAs, and one individual interviewed in the CHR.
Persons interviewed were invited to react to all the activities listed for their level and tier of care. For each activity, respondents were asked to provide a substantiated and detailed assessment of its implementation in terms of three statuses: implemented as planned, not implemented as planned, or modified. They could also mention other activities that had been added. The ethnographic notes and logbooks of the survey team were also used as sources of empirical data and helped, in a few instances, to correct the recorded status when it was inconsistent with the health worker’s statement. All the data were entered into a matrix encompassing the different dimensions studied.
The data were analyzed following the framework analysis method, using the framework adapted from Carroll et al. Qualitative data regarding activity content fidelity were translated into quantitative data by considering the proportions of activities categorized by respondents as: 1) implemented as planned; 2) not implemented as planned; 3) modified; and 4) added. A fifth option, “non-response” (NR), was added to take into account missing responses. Our approach can be illustrated by taking as an example the activity of receiving tools for reporting purposes. Of the 21 health centers studied, 19 reported they had implemented the activity according to plan, one saw the activity modified, and one reported it was not implemented. The activity was therefore predominantly implemented as planned (90%, 19/21) and presented low rates of modification (5%, 1/21) and of non-implementation (5%, 1/21). Temporal fidelity was studied in terms of frequency (e.g., monthly, quarterly) and, when the activity allowed, duration, understood as the number of days planned for the activity.
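The tallying step just described can be sketched as follows, using the worked example of reporting-tool reception across the 21 health centers; the status labels and percentage rounding are our own rendering of the categories above.

```python
# Minimal sketch of converting per-center status reports into the
# fidelity proportions reported in the text (e.g., 19/21 = 90%).
from collections import Counter

def fidelity_profile(statuses):
    """Return the percentage share of each implementation status."""
    counts = Counter(statuses)
    n = len(statuses)
    return {status: round(100 * c / n) for status, c in counts.items()}

# Worked example: reception of reporting tools in 21 health centers.
statuses = (["implemented as planned"] * 19
            + ["modified"] * 1
            + ["not implemented"] * 1)
profile = fidelity_profile(statuses)
# e.g. {"implemented as planned": 90, "modified": 5, "not implemented": 5}
```

The same tally, run over all 380 activity reports, yields the dimension- and district-level proportions presented in the results.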
Regarding the content of the intervention implemented, we observed that the majority of the intervention components were implemented with fidelity (65.5%, 249/380), while 25.8% (98/380) were not implemented and 7.4% (28/380) underwent modifications (NR: 1.3%, 5/380). Thirteen activities (3.4%, 13/380) were added during the implementation because of contextual circumstances. For example, a training activity on the PBF portal, which had not been planned, was added at the CHR. More than half of the added activities (61.5%, 8/13) occurred during the planning process and consisted of information sessions and recruitment activities. They were fully implemented (8/8). The remaining 38.5% (5/13) involved the application dimension and, in particular, the performance audit process. These latter activities presented a lower rate of implementation (60%, 3/5), with two of the five activities added to the application dimension ultimately not implemented.
[Table: Content fidelity of the implementation by dimension. Columns: Planning (N = 34), Application (N = 262), Tools (N = 63), Action research (N = 21). Rows include “Implemented as planned”.]
[Table: Fidelity of implementation content by health district (HD). Columns: HD1 (N = 124), HD2 (N = 126), HD3 (N = 130), Total (N = 380). Rows include “Implemented as planned”.]
[Table: Fidelity of implementation content by tier and level of care. Columns: CSPS (N = 321), CMA (N = 36), CHR (N = 23). Rows include “Implemented as planned”.]
There was relatively strong homogeneity among health districts in the implementation of performance audit activities. The second district presented, overall, the greatest number of modifications with respect to performance audit activities (Fig. 1).
For activities related to the determination and payment of subsidies, implementation fidelity was evenly distributed among the three districts (Fig. 2). Modification rates varied by health district.
Assessing the temporality of an activity makes it possible to determine the time frame in which the activity was carried out and to compare it with the original plan. Overall, 63.4% (241/380, NR: 1.1%) of activities adhered to the planned schedule, but 14.2% (40/280) of the activities implemented were altered. Temporal disparities were most often observed in HD2 (17.0%, 16/94), as opposed to 12.0% (11/92) in HD1 and 13.7% (13/95) in HD3, and in the application dimension, where 18.5% (30/162) of the activities implemented were altered. Given the low implementation fidelity for this dimension, at this stage we can present only a few nuances. The activities most affected by temporal disparities were performance audit activities (42.9%, 45/105) and payment of subsidies (57.7%, 15/21). Exceptions were the quantitative (95.2%, 20/21) and qualitative (100%, 21/21) audits, which were largely conducted according to schedule. One health center reported, however, that it was subjected to a quarterly audit after only one month of implementation, due to delays in launching the intervention. Conversely, the activities most often altered, when they were conducted at all, were local audits (0%, 0/3), satisfaction surveys (33%, 1/3), and cross-audits (50%, 2/4). Most often, these numbers reflected implementation delays. Still, one health center advanced its local audit by one quarter (Q2 2014 instead of Q3 2014). With respect to subsidies, the duration of activities to calculate payments conducted at the facility level was often brief, at one day.
The empirical data showed that most planned activities were implemented in accordance with the national plan. However, a more detailed examination offers a more nuanced view, which is to be expected in such a context, where implementation discrepancies are inevitable and have been observed elsewhere [11, 12, 32]. Disparities between the plan and the implementation varied considerably according to the components under study. The planning dimension, which encompassed both recruitment and training activities, was generally implemented with fidelity. The same was true for reception of the performance implementation tools, which appeared to have transpired with no major difficulty. Most bottlenecks were related to performance auditing and payment of subsidies, as previously observed in Sierra Leone [33, 34], Nigeria, and Benin.
This situation is also very familiar in African countries that adopt user fee exemption policies and then experience significant delays in reimbursing health centers [35–38]. In fact, in the case of PBF in Burkina Faso, local audits, audits based on satisfaction surveys, and cross-audits all encountered significant implementation gaps. The same was true for incentive bonuses, which were infrequently paid even though they had been calculated in the majority of cases.
The observed gaps in these two components after one year of implementation are worrisome because these mechanisms are at the heart of PBF intervention theory. The risk is twofold. First, delays in incentive bonuses threaten the intervention’s credibility by undermining the trust of care providers and creating, as in Sierra Leone [33, 34], Nigeria [10, 11, 40], Congo, and even India, frustrations that have a deterrent effect, thwarting the objective of motivating care providers. In Nigeria, for example, payment delays created a climate of uncertainty among health workers regarding the promised subsidies [10, 11]. As doubts took hold, care providers preferred to focus on activities with immediate benefits to themselves rather than make additional efforts for illusory subsidies. Moreover, such delays can undermine the intervention’s ability to achieve its intended purpose of improving the quality and quantity of health services. In Uganda, Ssengooba et al. showed, in fact, that excessive delays led to loss of institutional memory regarding the PBF intervention, particularly regarding performance targets, thereby impeding achievement of the intended outcomes.
This situation is not unique to PBF. Olivier de Sardan and Ridde, in studying user fee exemption policies, also found that payment delays jeopardized outcomes by giving rise to diversionary tactics focused on more profitable procedures.
Furthermore, the fact that no health center in our study implemented the PBF intervention entirely according to plan adds fuel to the current debate about the adaptability of interventions, including PBF. On one side are the proponents of absolute adherence to the theoretical model and, on the other, advocates of a relative compliance in which core elements are retained [18, 20]. The latter maintain that such adaptations are needed so that context is taken into account, to ensure the intervention’s effectiveness and viability [18, 19, 44]. In Uganda, Ssengooba et al. observed that a lack of financial resources prompted certain adaptations in order to achieve the intended outcomes despite budget cuts. In Rwanda, the performance assessment grid was modified twice, in 2008 and 2010, to adapt the PBF intervention more closely to the changing needs of the hospital sector, that is, to bring the grid in line with stricter quality norms.
In our study, 7.4% (28/380) of activities had been modified and 3.4% (13/380) had been added, for a combined intervention adaptation rate of 10.8%. However, it would seem that it is not so much the adaptation rate that should be examined as the nature of these adjustments. Rebchook, Kegeles, Huebner, and the TRIP Research Team, cited by Perez et al., identified three typologies of adaptations based on their secondary effects: 1) adaptations that profoundly alter the intervention to the point where it no longer produces the intended outcomes; 2) minor or major adaptations that do not impede fidelity and may even make the intervention more effective; and 3) added activities that do not affect implementation fidelity a priori. More concisely, some adaptations are deemed “acceptable” and others “risky” or even “unacceptable”, and some are considered to have positive, negative, or no effects on the intervention. However, almost no studies have examined such adaptations. Breitenstein, Robbins, and Cowell suggest these adaptations should be identified and evaluated to determine the nature of their effects. It would also be interesting, in future qualitative studies, to explore the 13 adaptations observed in our study and their effects on the intervention.
The main strength of our study lies in the originality of its subject. The current literature on PBF focuses primarily on impact evaluations that demonstrate effects (positive and negative) without attempting to understand them. In our case, the results can be used to support subsequent analysis of the implementation process and intervention outcomes. Our study does, however, present certain limitations. The first has to do with a weakness in the sample. Only two CMAs and one CHR were considered in our study, thereby precluding any extrapolation; they did, however, provide a view of the situation that remains to be confirmed or countered in future studies. This is especially important given the scarcity of studies on PBF in African hospitals. A second limitation emerged during data collection, with respect to the availability of resource persons with information on the PBF intervention. In some cases, it was difficult to locate anyone who knew about it. Sometimes only one person had knowledge of PBF, such that we conducted just one interview in that facility, which in itself provided useful information on the quality of implementation. High turnover of health personnel, poor transmission of information, and lack of institutional memory are phenomena frequently observed in Africa. Even when resource persons were available, they sometimes had difficulty recalling exactly when specific activities were implemented. When necessary, we probed temporal fidelity quarter by quarter, which helped informants recall more accurately. A third limitation concerns the understanding of implementation fidelity in its broadest sense. Our research objective was to determine the extent to which the PBF intervention had deviated from its planned model. This objective refers to the notion of content fidelity.
However, Dumas, Lynch, Laughlin, Phillips Smith, and Prinz assert that content fidelity is not sufficient to comprehend implementation fidelity as a whole, in all its subtleties. Those authors argue that process fidelity, as investigated by Ridde et al., must also be considered, observing how the plan is implemented by those on the ground, which corresponds to “implementation quality”. This angle of research merits further development for a more comprehensive picture of implementation fidelity.
Twelve months post-launch, or one-third of the way through the pilot project, the implementation fidelity of the planning dimension—encompassing information, training, and recruitment activities—had provided a solid foundation for the intervention. However, delays and sometimes serious implementation deficiencies were also observed, particularly with regard to performance audit mechanisms and PBF subsidy payments. Supplementary bonuses, such as the bonus for indigent care, were also rarely distributed, or not at all. Poor implementation of these core activities of the PBF intervention is a harbinger of delays in intended outcomes and could, more broadly, even jeopardize its viability. The intervention is, however, still in its early days. There are many months left in which the intervention can be more closely adapted to both the context and the original plan. More fundamentally, we take the opportunity provided by this study to point out that the field of implementation fidelity research has thus far produced very little literature, and we invite global health researchers to consider the significance and advantages offered by such an angle of study as it pertains to intervention effectiveness. Clearly, such fidelity analysis is necessary, but not sufficient. More research needs to be developed and conducted to better understand the PBF implementation process in Africa, as well as its impacts (intended or not) and their heterogeneity.
Bonuses create variations in the price of the quantitative indicator ranging from +0% to +40%, depending on the district, with a further +0% to +40%, depending on the health center. Thus, the base price for an assisted delivery is set at 1,500 F CFA, but a health center classified at +20% (inter-health center equity bonus) and located in a district classified at +30% (inter-district equity bonus) would receive 2,250 F CFA for the procedure.
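The endnote’s arithmetic can be checked with a short sketch; the function name is ours, and it assumes, as the worked example implies, that the two equity uplifts are added to the base price rather than compounded (1,500 F CFA at +20% and +30% yields 2,250, not 1,500 × 1.2 × 1.3 = 2,340).

```python
# Worked check of the endnote's example: equity uplifts are read as
# additive percentages applied to the indicator's base price.

def adjusted_price(base_price, center_uplift, district_uplift):
    """Adjust an indicator's base price by the inter-health-center and
    inter-district equity bonuses (each between 0% and 40%)."""
    return base_price * (1 + center_uplift + district_uplift)

price = adjusted_price(1500, center_uplift=0.20, district_uplift=0.30)
# 1,500 F CFA * (1 + 0.20 + 0.30) = 2,250 F CFA
```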
We would like to thank the PBF Technical Service for their welcome and their interest in our study. We also wish to thank the health workers we encountered in the health centers for making themselves available and sharing their experiences. This study would not have been successfully conducted without the contributions of Maurice Yaogo and Sylvie Zongo, who managed and supervised the data production, and Guillaume Kambire, Saliou Sanogo, Mariétou Ouedraogo, Souleymane Ouedraogo, Vincent Paul Sanon, Idriss Gali Gali, Vincent Koudougou, Assita Keita, Maimouna Sanou, Elodie Ilboudo, and Ahdramane Sow, who collected data in the health centers.
VR holds a CIHR-funded Research Chair in Applied Public Health. The research project is part of “Community research studies and interventions for health equity in Burkina Faso”. This work was supported by the Canadian Institutes of Health Research (CIHR) under grant number ROH-115213.
Availability of data and materials
The datasets supporting the conclusions of this article are included within the article and its Additional file 1.
The authors’ responsibilities were as follows: OB, AT, and VR designed the research; AB, NZ, and PZ managed and supervised the data collection; OB carried out the analysis of the data and wrote the first draft of the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The study was approved by Burkina Faso’s national health research ethics committee and the research ethics committee of the University of Montreal Hospital Research Center. All persons and authorities involved in this study were informed of its objectives and gave their informed consent.
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Gautier L, Ridde V. Health financing policies in sub-Saharan Africa: government ownership or donors’ influence? A scoping review of policymaking processes Global Health Research and Policy. 2017;2:23.View ArticlePubMedGoogle Scholar
- Renmans D, Holvoet N, Orach CG, Criel B. Opening the « black box » of performance-based financing in low-and lower middle-income countries : a review of the literature. Health Policy Plan. 2016;31(9):1297–309.View ArticlePubMedGoogle Scholar
- Bonfrer I, Van de Poel E, Van Doorslaer E. The effects of performance incentives on the utilization and quality of maternal and child care in Burundi. 2014;123:96–104.Google Scholar
- Skiles MP, Curtis SL, Basinga P, Angeles G, Thirumurthy H. The effect of performance-based financing on illness, care-seeking and treatment among children: an impact evaluation in Rwanda. BMC Health Serv Res. 2015;15:375.View ArticlePubMedPubMed CentralGoogle Scholar
- Rudasingwa M, Soeters R, Basenya O. The effect of performance-based financing on maternal healthcare use in Burundi: a two-wave pooled cross-sectional analysis. Glob Health Action. 2017;10(1)Google Scholar
- Rajkotia Y, Zang O, Nguimkeu P, Gergen J, Djurovic I, Vaz P, Mbofana F, Jobarteh K. The effect of a performance-based financing program on HIV and maternal/child health services in Mozambique-an impact evaluation. 2017;32(10):1386–96.Google Scholar
- Basinga P, Mayaka S, Condo J. Performance-based financing: the need for more research. Bull World Health Organ. 2011;89(9):698–9. https://doi.org/10.2471/BLT.11.089912.
- Ssengooba F, McPake B, Palmer N. Why performance-based contracting failed in Uganda — an "open-box" evaluation of a complex health system intervention. Soc Sci Med. 2012;75(2):377–83. https://doi.org/10.1016/j.socscimed.2012.02.050.
- Chimhutu V, Lindkvist I, Lange S. When incentives work too well: locally implemented pay for performance (P4P) and adverse sanctions towards home birth in Tanzania — a qualitative study. BMC Health Serv Res. 2014;14(1). https://doi.org/10.1186/1472-6963-14-23.
- Ogundeji YK. Pay-for-performance for health service providers: effectiveness, design, context, and implementation [unpublished doctoral dissertation]. York (UK): University of York; 2015.
- Ogundeji YK, Jason C, Sheldon T, Olubajo O, Ihebuzar N. Pay for performance in Nigeria: the influence of context and implementation on results. Health Policy Plan. 2016;31(8):955–63.
- Antony M, Bertone MP, Barthes O. Exploring implementation practices in results-based financing: the case of the verification in Benin. BMC Health Serv Res. 2017;17(1):204.
- Shroff ZC, Tran N, Meessen B, Bigdeli M, Ghaffar A. Taking results-based financing from scheme to system. Health Syst Reform. 2017;3(2):69–73.
- Meyers DC, Katz J, Chien V, Wandersman A, Scaccia JP, Wright A. Practical implementation science: developing and piloting the quality implementation tool. Am J Community Psychol. 2012;50(3–4):481–96. https://doi.org/10.1007/s10464-012-9521-y.
- Duerden MD, Witt PA. Assessing program implementation: what it is, why it's important, and how to do it. J Ext. 2012;50(1):1–8.
- Rossi PH, Lipsey MW, Freeman HE. Assessing and monitoring program process. In: Rossi PH, Lipsey MW, Freeman HE, editors. Evaluation: a systematic approach. Thousand Oaks, CA: SAGE Publications; 2004. p. 169–202.
- Moore JE, Bumbarger BK, Cooper BR. Examining adaptations of evidence-based programs in natural contexts. J Prim Prev. 2013;34(3):147–61. https://doi.org/10.1007/s10935-013-0303-6.
- Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3–4):327–50. https://doi.org/10.1007/s10464-008-9165-0.
- Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50(3–4):462–80. https://doi.org/10.1007/s10464-012-9522-x.
- Fixsen DL, Naoom SF, Blase K, Friedman RM, Wallace F. Implementation research: a synthesis of the literature (FMHI publication no. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005.
- Ridde V, Belaid L, Samb OM, Faye A. Les modalités de collecte du financement de la santé au Burkina Faso de 1980 à 2012. Santé Publique. 2014;26(5):715–25.
- World Health Organization. World health statistics 2014. Geneva: WHO; 2014.
- Bicaba A. Évaluation finale de la phase test du financement basé sur les résultats dans les districts sanitaires de Boulsa, Léo et Titao. Ouagadougou: SERSAP, DEP/MS; 2013.
- Ministère de la Santé (Burkina Faso)/Direction générale des études et des statistiques sectorielles. Annuaire statistique 2015. Ouagadougou, Burkina Faso; 2016.
- Witter S, Toonen J, Meessen B, Kagubare J, Fritsche G, Vaughan K. Performance-based financing as a health system reform: mapping the key dimensions for monitoring and evaluation. BMC Health Serv Res. 2013;13:367. https://doi.org/10.1186/1472-6963-13-367.
- Ridde V, Turcotte-Tremblay A-M, Souares A, Lohmann J, Zombré D, Koulidiati JL, Yaogo M, Hien H, Hunt M, Zongo S, et al. Protocol for the process evaluation of interventions combining performance-based financing with health equity in Burkina Faso. Implement Sci. 2014;9(1). https://doi.org/10.1186/s13012-014-0149-1.
- Van den Branden S, Van den Broucke S, Leroy R, Declerck D, Hoppenbrouwers K. Evaluating the implementation fidelity of a multicomponent intervention for oral health promotion in preschool children. Prev Sci. 2015;16(1):1–10. https://doi.org/10.1007/s11121-013-0425-3.
- Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci. 2007;2(1). https://doi.org/10.1186/1748-5908-2-40.
- Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42(5):533–44. https://doi.org/10.1007/s10488-013-0528-y.
- Ritchie J, Spencer L. Qualitative data analysis for applied policy research. In: Bryman A, Burgess RG, editors. Analyzing qualitative data. New York: Routledge; 1994. p. 173–94.
- Olivier de Sardan J-P. For an anthropology of gaps, discrepancies and contradictions. Antropologia. 2016;3(1):111–31.
- Paul E, Lamine Dramé M, Kashala J, Ekambi Ndema A, Kounnou M, Aïssan J, Gyselinck K. Performance-based financing to strengthen the health system in Benin: challenging the mainstream approach. Int J Health Policy Manag. 2017.
- Cordaid. External verification performance based financing in healthcare in Sierra Leone (Vol. 1). The Hague: Cordaid; 2014.
- Bertone MP, Lagarde M, Witter S. Performance-based financing in the context of the complex remuneration of health workers: findings from a mixed-method study in Sierra Leone. BMC Health Serv Res. 2016;16:286.
- Fusheini A, Marnoch G, Gray AM. Implementation challenges of the National Health Insurance Scheme in selected districts in Ghana: evidence from the field. Int J Public Adm. 2016:1–11. https://doi.org/10.1080/01900692.2015.1127963.
- Ndikubagenzi J, Butoyi P. Evaluation de l'impact de la mesure de subvention des soins pour les enfants de moins de cinq ans et pour les accouchements sur les structures et la qualité des soins. Bujumbura, Burundi: Observatoire de l'Action Gouvernementale; 2009.
- Olivier de Sardan J-P, Ridde V. L'exemption de paiement des soins au Burkina Faso, Mali et Niger: les contradictions des politiques publiques. Afrique Contemporaine. 2012;243(3):11. https://doi.org/10.3917/afco.243.0011.
- Ridde V, Queuille L, Kafando Y, Robert É. Transversal analysis of public policies on user fees exemptions in six west African countries. BMC Health Serv Res. 2012;12(1). https://doi.org/10.1186/1472-6963-12-409.
- Bertone MP, Meessen B. Studying the link between institutions and health system performance: a framework and an illustration with the analysis of two performance-based financing schemes in Burundi. Health Policy Plan. 2013;28(8):847–57. https://doi.org/10.1093/heapol/czs124.
- Bhatnagar A, George AS. Motivating health workers up to a limit: partial effects of performance-based financing on working environments in Nigeria. Health Policy Plan. 2016;31(7):868–77.
- Fox S, Witter S, Wylde E, Mafuta E, Lievens T. Paying health workers for performance in a fragmented, fragile state: reflections from Katanga Province, Democratic Republic of Congo. Health Policy Plan. 2014;29(1):96–105. https://doi.org/10.1093/heapol/czs138.
- Wang H, Juval RK, Miner SA, Fischer E. Performance-based payment system for ASHAs in India: what does international experience tell us? Bethesda (MD): Vistaar Project; 2012.
- Turcotte-Tremblay A-M, Gali-Gali IA, De Allegri M, Ridde V. The unintended consequences of community verifications for performance-based financing in Burkina Faso. Soc Sci Med. 2017;191:226–39.
- Dusenbury L. Quality of implementation: developing measures crucial to understanding the diffusion of preventive interventions. Health Educ Res. 2005;20(3):308–13. https://doi.org/10.1093/her/cyg134.
- Janssen W, Ngirabega J de D, Matungwa M, Van Bastelaere S. Improving quality through performance-based financing in district hospitals in Rwanda between 2006 and 2010: a 5-year experience. Trop Doct. 2015;45(1):27–35. https://doi.org/10.1177/0049475514554481.
- Rebchook GM, Kegeles SM, Huebner D, TRIP Research Team. Translating research into practice: the dissemination and initial implementation of an evidence-based HIV prevention program. AIDS Educ Prev. 2006;18(Suppl):119–36. https://doi.org/10.1521/aeap.2006.18.supp.119.
- Perez D, Lefevre P, Castro M, Sanchez L, Toledo ME, Vanlerberghe V, Van der Stuyft P. Process-oriented fidelity research assists in evaluation, adjustment and scaling-up of community-based interventions. Health Policy Plan. 2011;26(5):413–22. https://doi.org/10.1093/heapol/czq077.
- O'Connor C, Small SA, Cooney SM. Program fidelity and adaptation: meeting local needs without compromising program effectiveness. What Works, Wisconsin — Research to Practice Series. 2007;4:1–5.
- Durlak JA. Studying program implementation is not easy but it is essential. Prev Sci. 2015;16(8):1123–7. https://doi.org/10.1007/s11121-015-0606-3.
- Breitenstein S, Robbins L, Cowell JM. Attention to fidelity: why is it important. J Sch Nurs. 2012;28(6):407–8. https://doi.org/10.1177/1059840512465408.
- Jaffré Y, Olivier de Sardan J-P. Une médecine inhospitalière: les difficiles relations entre soignants et soignés dans cinq capitales d'Afrique de l'Ouest. Paris: Karthala; 2003.
- Dumas JE, Lynch AM, Laughlin JE, Phillips Smith E, Prinz RJ. Promoting intervention fidelity. Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. Am J Prev Med. 2001;20(1 Suppl):38–47.
- Ridde V, Yaogo M, Zongo S, Somé PA, Turcotte-Tremblay AM. Twelve months of implementation of health care performance-based financing in Burkina Faso: a qualitative multiple case study. Int J Health Plann Manag. 2017.
- Ridde V. Need for more and better implementation science in global health. BMJ Glob Health. 2016;1(2). https://doi.org/10.1136/bmjgh-2016-000115.