Healthcare network in Chiapas
Public health services in Mexico are decentralized to state governments. The role of the Federal Ministry of Health is mostly normative, establishing healthcare norms and guidelines, and partly financial. The MOH in Chiapas is responsible for health service provision. Its healthcare network has three organizational levels: first, the central level, in charge of planning, procurement, alignment with federal policies, and technical oversight; second, health districts, each managing a geographic area encompassing 40 to 140 health facilities of multiple levels of care; and, third, health facilities, including hospitals, clinics and mobile units (see Fig. 1). A wide range of services is offered, from community health promotion and prevention to highly complex medical care.
About 40% of the almost 5 million people living in Chiapas are served by the MOH, making it the largest healthcare provider in the state [13]. Most of the population served is rural and poor. Chiapas is among the states with the highest poverty rates, lowest life expectancy and lowest per capita health expenditure [14]. It is also among the five states with the highest maternal mortality [15]. The MOH operates over 1000 health facilities distributed across the state's 28 thousand square miles.
Health information systems in Chiapas
Data routinely collected by the MOH in Chiapas is segmented, untimely, insufficient and underutilized. All health facilities report at least monthly (including the hardest to reach); however, manual record-keeping and labor-intensive compilation result in poor-quality data and long reporting lags (2–3 months). To comply with federal and central-level information requirements, data is siloed into multiple information systems: vital statistics, vaccinations, service production, infrastructure, and disease surveillance. Information on availability of supplies and quality of care is not routinely collected. Advanced data analysis is limited to epidemiology units, so data is not regularly used for benchmarking and performance improvement.
For equipment, medicines and supplies, the central level makes procurement decisions based on yearly compilations of production data and historical trends. Commodities are distributed by district on a mainly by-needs basis. Health facilities are mostly recipients and have a minimal role in the decision-making process, which is confined to reporting stock-outs.
Salud Mesoamerica Initiative in Chiapas
SMI is a results-based aid program supporting countries as they strive to improve maternal and child health in the poorest areas. In Chiapas, the SMI program encompassed two phases lasting 24 months in four health districts, with a total of 301 health facilities. SMI and the MOH in Chiapas agreed to a set of indicators and targets for each phase. If the MOH met 80% of the agreed-upon targets at the end of each phase, it could receive a performance award (US$1.9 million for the first phase). Indicators were measured at health facilities and followed an "all or nothing" rule, meaning that if any item from the composite indicator was missing for a given facility, the indicator received a value of zero for that facility. All indicators were externally and independently verified by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington [16].
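As a minimal illustration of this scoring rule (the item names and data layout below are hypothetical, not those used by SMI), a composite indicator for one facility could be computed as follows:

```python
# Illustrative "all or nothing" scoring: a facility scores 1 on a composite
# indicator only if every required item is available; otherwise it scores 0.
# Item names and data are hypothetical, for illustration only.

required_items = ["oxytocin", "magnesium_sulfate", "neonatal_ambu_bag"]

facility_checklist = {
    "oxytocin": "available",
    "magnesium_sulfate": "available",
    "neonatal_ambu_bag": "not available",
}

def all_or_nothing(checklist, required):
    """Return 1 only if every required item is recorded as available."""
    return int(all(checklist.get(item) == "available" for item in required))

print(all_or_nothing(facility_checklist, required_items))  # -> 0 (one item missing)
```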
As the end of the first phase approached in 2014, the MOH had to make sure all health facilities were properly equipped and supplied with the basic maternal and child health inputs (see Additional file 1). A needs assessment had been conducted a year earlier to identify gaps and procure missing inputs. Yet, the MOH did not have the tools or mechanisms needed to know whether anything was still missing or whether inputs had been adequately distributed. Further, no communication mechanisms were established for health facilities to notify about missing inputs. The results of IHME's external measurement showed that, although the MOH achieved significant improvements in availability of supplies and equipment between the baseline and the follow-up surveys, Chiapas was unable to meet the targets and obtain the award [17].
The SMI Donors Committee established a Performance Improvement period, with an additional external measurement, for Chiapas to continue to the second phase. Considering the shorter timeframe, donors agreed that only the indicators that had not been met would be measured. Over the following 9 months, the MOH implemented a rapid-improvement initiative, assisted by electronic decision-support tools, to effectively identify and close any remaining gaps [17].
Three electronic decision-support tools were designed through an iterative process focused on streamlined implementation: first, a Data Collection Tool (DCT) to gather and report data locally; second, a Data Analysis Tool (DAT) to compile and analyze data at the health district and central levels; and, third, a Sampling Tool (ST) to support stratified sampling of health facilities. The design and implementation of the decision-support tools took advantage of a newly designed Quality Assurance (QA) Strategy. Although an ad-hoc design was necessary to meet the needs of the rapid-improvement initiative, the design also served the broader purpose of piloting the QA Strategy.
New roles and responsibilities
During the first phase of the SMI program, SMI provided technical assistance in designing a QA Strategy. The strategy shifted decision-making power towards health providers and health districts and encouraged a proactive approach to problems. Each organizational level was responsible for solving issues within its reach and for following through on those raised to upper organizational levels. The strategy included transforming supervision teams into QA teams, with the new roles of measuring, evaluating and supporting quality of care at health facilities. Teams reported to health districts and, on average, oversaw a subdistrict health network of 20–30 health facilities. They were responsible for visiting health facilities and reporting findings at regular intervals. Teams met weekly with district managers to review the status of each issue and to raise to upper management those that could not be resolved directly. The central level and districts met periodically to resolve pending issues. Lastly, the strategy established common targets for the MOH, which corresponded to specific targets at the health facility, health district and central levels. The strategy was endorsed by the Minister of Health and health district managers.
Decision-support tools design and features
All the tools were designed in-house by one of the authors (DRZ) and programmed in MS Excel. MS Excel was widely available throughout the MOH and almost everyone had experience using it. QA teams were equipped with laptops and, since they were stationed in health districts (located in cities), they had Internet access and could send data daily.
Data collection tool
The DCT was designed to collect and report data per the criteria established by SMI indicators. The tool included the following features:
A data collection checklist including all required equipment and supplies, in which the QA team recorded the date, whether the item was available, not available, or available but not functional, and a description of the finding and action taken (an illustrative sketch of such a record is shown below). The checklist was color coded to track data collection progress and prevent submission of incomplete data. It covered five supplies, medicines and equipment indicators: antenatal care, delivery care, emergency care, child care, and family planning methods (for a detailed list of inputs for each indicator see Additional file 1). Different requirements were included depending on four levels of Essential Obstetric and Neonatal Care (EONC): ambulatory without a doctor, small facilities staffed with a nurse or auxiliary nurse that mainly provide basic antenatal and child care; ambulatory with a doctor, facilities staffed with a doctor and a nurse offering outpatient care; basic, facilities able to attend normal deliveries and provide initial emergency care; and complete, hospitals attending c-sections and resolving most obstetric and neonatal complications [18].
A report by health facility displaying a heat map of measurement results for each criterion by data collection date. Results could be displayed either for all indicators together or for each indicator individually. This display allowed the QA team to compare previous results with current results and discuss findings with health staff.
A logbook showing the history of annotations for findings and actions taken at the health facility. If an input was consistently lacking, the QA team could review what actions had been taken.
A subdistrict healthcare network report displaying a heat map with the latest measurement results of each facility within a specified time period. Results could be displayed for indicators independently or together and filtered by level of care. This gave QA teams an overview of all health facilities under their responsibility and made it possible to produce different heat maps for different time periods.
A function to export the database to facilitate sharing with the district level.
A function to delete the database, with a confirmation alert to prevent accidental deletion.
The DCT was fully automated using macros and password protected to avoid format changes. The list of health facilities was preloaded with an individual code for each facility, so facilities could be clearly identified in the final database. At regular intervals, the resulting data was sent by email to the district and the central level.
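As a rough sketch of the kind of record the DCT captured, assuming hypothetical field names and status coding (the actual tool was an Excel workbook driven by macros), a checklist entry and its heat-map color could be represented as follows:

```python
# Sketch of a DCT-style checklist record and heat-map status coding.
# The real tool was an Excel workbook; field names, codes and colors here
# are assumptions for illustration only.
from dataclasses import dataclass
from datetime import date

@dataclass
class ChecklistEntry:
    facility_code: str   # preloaded code identifying the facility
    eonc_level: str      # e.g. "basic" or "ambulatory with a doctor"
    indicator: str       # e.g. "delivery care"
    item: str            # required equipment, medicine or supply
    visit_date: date
    status: str          # "available" | "not available" | "available but not functional"
    finding: str = ""    # description of the finding
    action_taken: str = ""  # action recorded by the QA team

STATUS_COLOR = {         # heat-map style coding
    "available": "green",
    "available but not functional": "yellow",
    "not available": "red",
}

entry = ChecklistEntry("CHIS-0421", "basic", "delivery care", "oxytocin",
                       date(2015, 2, 10), "not available",
                       finding="stock-out since January",
                       action_taken="requested resupply from district warehouse")
print(STATUS_COLOR[entry.status])  # -> "red"
```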
Data analysis tool
The DAT was designed to facilitate the analysis of aggregated data collected by all teams. The DAT incorporated different features, including:
Instructions describing steps needed to update the database using individual files provided by QA teams (due to time constraints, this process was not automated).
A summary report displaying data collection progress and aggregate indicator results, for example line graphs, bar graphs, and tables showing the number of health facilities surveyed by date and by health district, and indicator results by level of care, among others (a sketch of this kind of aggregation is shown below).
Detailed reports displaying results for the criteria required by each indicator. For instance, reports included bar charts showing the percentage of health facilities with each required item of equipment or medical supply available, and they showed how gaps affected the final indicator result.
A Findings and Actions report displaying the annotations recorded for all unmet criteria, which could be filtered by date, health facility level, health district or criterion. This helped identify systematic issues detected by multiple teams.
Given that the DAT was only meant to be used by specific people in the four health districts and at the central level, the file was not password protected. All the tables and graphs were automated using Excel PivotTable tools, and the Findings and Actions report was automated using macros. The tables and graphs generated by the DAT could be easily compiled into a slide show to discuss results within districts or at the central level.
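As a sketch of the kind of aggregation the DAT automated with PivotTables, assuming a simple tabular layout with hypothetical column names and data, facility-level checklist records could be compiled into indicator results by level of care as follows:

```python
# Sketch of DAT-style aggregation: compiling facility-level checklist data into
# indicator results by EONC level. The real tool used Excel PivotTables; the
# column names and data below are hypothetical, for illustration only.
import pandas as pd

records = pd.DataFrame([
    # facility, eonc_level, indicator, item, status
    ("CHIS-0421", "basic", "delivery care", "oxytocin", "available"),
    ("CHIS-0421", "basic", "delivery care", "magnesium_sulfate", "not available"),
    ("CHIS-0388", "ambulatory with a doctor", "antenatal care", "iron_folate", "available"),
    ("CHIS-0388", "ambulatory with a doctor", "antenatal care", "tetanus_toxoid", "available"),
], columns=["facility", "eonc_level", "indicator", "item", "status"])

# All-or-nothing per facility and indicator: met only if every item is available.
met = (records.assign(ok=records["status"].eq("available"))
              .groupby(["eonc_level", "indicator", "facility"])["ok"].all())

# Percentage of facilities meeting each indicator, by level of care.
summary = met.groupby(level=["eonc_level", "indicator"]).mean().mul(100).round(1)
print(summary)
```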
Sampling tool
The ST was designed to select a stratified random sample of health facilities. Health facilities were stratified by EONC level, which helped visualize challenges specific to each level of care and prioritize life-saving commodities. The user could input a sample size (equal to or larger than 30), and the tool would automatically provide a stratified sample of health facilities.
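As a minimal sketch of stratified sampling by EONC level, under the assumption of proportional allocation with at least one facility per stratum (the ST's exact allocation rule is not described here), the selection could look like this:

```python
# Sketch of stratified random sampling of health facilities by EONC level.
# Proportional allocation with at least one facility per stratum is an
# assumption for illustration; rounding means the final sample may differ
# slightly from the requested size.
import random
from collections import defaultdict

def stratified_sample(facilities, sample_size, seed=None):
    """facilities: list of (facility_code, eonc_level); returns sampled codes."""
    if sample_size < 30:
        raise ValueError("sample size must be 30 or larger")
    rng = random.Random(seed)
    strata = defaultdict(list)
    for code, level in facilities:
        strata[level].append(code)
    total = len(facilities)
    sample = []
    for level, codes in strata.items():
        n = max(1, round(sample_size * len(codes) / total))
        sample.extend(rng.sample(codes, min(n, len(codes))))
    return sample
```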
Training and pilot
After designing the DCT, training sessions and pilot tests were performed with QA teams. Training sessions were held separately in each of the four districts. The first training was led by SMI and subsequent sessions were led by the MOH. Training took place through a 1-day session aimed at standardizing measurements, followed by a 2-day pilot in the field. Training topics included an overview of the DCT, definitions of measurement criteria, and data collection from a fictional health facility. Given that the QA teams were proficient in basic word-processing and spreadsheet tasks (entering and editing data, changing formatting, using menus, printing), training on the tool itself was minimal. During field visits, the QA teams were instructed to discuss findings with health facility staff and to act upon them. The pilot also served to identify and correct software problems and to obtain user feedback to improve the tool's usability.
The DAT was designed during the initial implementation of the DCT. Training took place through a long-distance video-conference presentation, which mainly focused on compiling the data and creating visualizations. Once data became available, technical support was provided both on-site and remotely. No training was provided for the ST.
Technical support
The implementation of all electronic tools was supported by two quality improvement consultants from SMI, who provided on-site technical assistance as well as long-distance support through online audio and video conferences. The consultants also accompanied and coached QA teams on field visits. Debugging was an important part of technical support; when issues were detected, a new version was made available, ensuring compatibility with previously collected data.
Rollout
Three waves of data collection and analysis were performed by the MOH. In the first two waves, the MOH performed a census of all health facilities. For each wave, a data collection plan with set deadlines was agreed between the central level and district leaders for each QA team to collect data from its subdistrict. In the first wave, the MOH collected data from 298 health facilities (November 2014–January 2015) and, in the second wave, from all 301 (February 2015–April 2015). In the third wave, data was collected from a sample of 29 health facilities selected with the ST (May 2015). The external survey to measure Performance Improvement results was conducted by IHME a month later (June 2015).