Data sources utilized by the index are not always the most current due to data collection limitations (e.g., COVID-19 disrupted the collection of California Department of Education (CDE) data).
The Index is limited in that it does not offer data for schools that were not large enough to warrant the construction of a School-Based Health Center. Schools that did not meet specific enrollment thresholds were therefore excluded from the dashboard: rural schools (designated as such by the USDA) with enrollments under 500 students, urban schools without a high school with fewer than 500 students, and urban schools with a high school with fewer than 1,000 students. California had more than 10,000 active public schools in 2020-21; the final dashboard for the Student Health Index includes 4,821 schools.
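These exclusion rules amount to a simple enrollment filter. A minimal sketch of how such a filter might be expressed, assuming hypothetical field names (enrollment, usda_rural, has_high_school), with the thresholds taken from the text above:

```python
# Sketch of the enrollment-based eligibility filter described above.
# Field names are illustrative; thresholds come from the stated exclusion rules.

def is_eligible(school: dict) -> bool:
    """True if a school meets the enrollment thresholds for the dashboard."""
    enrollment = school["enrollment"]
    if school["usda_rural"]:          # rural (USDA-designated): 500+ students
        return enrollment >= 500
    if school["has_high_school"]:     # urban, includes a high school: 1,000+
        return enrollment >= 1000
    return enrollment >= 500          # urban, no high school: 500+

schools = [
    {"name": "A", "enrollment": 450, "usda_rural": True,  "has_high_school": False},
    {"name": "B", "enrollment": 950, "usda_rural": False, "has_high_school": True},
    {"name": "C", "enrollment": 600, "usda_rural": False, "has_high_school": False},
]
print([s["name"] for s in schools if is_eligible(s)])  # ['C']
```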
The lack of available data on health indicators at the school level restricted the Student Health Index to using proxies for health outcomes. Some health indicators are included, but they are not school-specific; they are instead linked to schools geographically through their census tracts. However, community-level data does not always accurately reflect the characteristics of a school’s population. As a result, school-level indicators in the Index were weighted more heavily than community-level indicators.
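One plausible way to realize this differential weighting is a weighted mean of normalized indicators. The sketch below assumes hypothetical 2:1 weights and indicators already scaled to [0, 1]; the Index’s actual weights and indicator list are not specified here:

```python
# Illustrative weighting of school-level over community-level (tract) indicators.
# The 2:1 weights are an assumption for this sketch, not the Index's methodology.

SCHOOL_WEIGHT = 2.0      # assumed: school-specific indicators count double
COMMUNITY_WEIGHT = 1.0   # assumed: tract-level proxies count once

def composite_score(school_vals, community_vals):
    """Weighted mean of indicator values, each normalized to [0, 1]."""
    weighted_sum = (SCHOOL_WEIGHT * sum(school_vals)
                    + COMMUNITY_WEIGHT * sum(community_vals))
    total_weight = (SCHOOL_WEIGHT * len(school_vals)
                    + COMMUNITY_WEIGHT * len(community_vals))
    return weighted_sum / total_weight

# Two school-level and two tract-level indicators: the school values dominate.
print(composite_score([0.8, 0.6], [0.2, 0.4]))  # ~0.57 vs. 0.50 unweighted mean
```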
Additionally, race was not included as a measure in the Student Health Index because of California’s Proposition 209, which prohibits the allocation of public resources based on race and ethnicity. However, the dataset does contain a measure of the non-white student population at each school.
The Index is also limited as a purely quantitative measure of need, which may overlook factors that would be better illuminated through qualitative evidence (e.g., stakeholder engagement, focus groups, interviews).
The CDC SVI is acknowledged to be limited in its ability to accurately represent small-area populations that experience rapid change between censuses (e.g., New Orleans in the years following Hurricane Katrina).
The Index is also limited, like other mapping tools, by the lack of homogeneity within any census tract or county/parish: more vulnerable communities and individuals may well live in areas that are less vulnerable overall. Homeless populations, in particular, may not be represented in studies that rely on geocoding by residential address. Length of residence within a geographic area may also affect results.
The Index is also limited by calculations that account for where people live, but not where they work or play; individuals’ lives are not confined to the boundaries of a census tract or county/parish.
Lastly, vulnerability is only one of several components that public health officials and policymakers must consider: the hazard itself, the vulnerability of physical infrastructure, and community assets and resources must also be taken into account to reduce the effects of a hazard.
This data resource has also been critiqued by Bakkensen et al. (2017) for never having been explicitly tested and empirically validated to demonstrate that the index performs well (a problem they identify across multiple indices).
Bakkensen, Laura A., Cate Fox-Lent, Laura K. Read, and Igor Linkov. 2017. “Validating Resilience and Vulnerability Indices in the Context of Natural Disasters.” Risk Analysis 37 (5): 982–1004. https://doi.org/10.1111/risa.12677.
The dataset contains missing data points (attributable to non-reported information). It has also been acknowledged to be limited by its prioritization of government data, which may carry political biases that skew the reported severity of disasters.
The index does not include certain neighborhood characteristics critical to health because they did not meet the criteria for inclusion (described in question 3). For instance, physician ratios (the number of physicians per 100,000 population) were excluded because data were missing for a majority of census tracts. In fact, the steering committee was unable to locate much data on health care access or quality at the census-tract level; only data on health insurance coverage was available.
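The “missing for a majority of census tracts” criterion is effectively a coverage check. A minimal sketch, assuming a hypothetical passes_coverage_check helper and a 50% threshold to match “majority”:

```python
# Hypothetical sketch of the missing-data inclusion criterion described above:
# an indicator is dropped if it is missing for a majority of census tracts.

def passes_coverage_check(values, max_missing_share=0.5):
    """Keep an indicator only if its missing share stays below the threshold."""
    missing = sum(1 for v in values if v is None)
    return missing / len(values) < max_missing_share

# Physicians per 100,000 by tract; 3 of 5 tracts missing -> excluded.
physician_ratio = [58.2, None, None, 41.0, None]
print(passes_coverage_check(physician_ratio))  # False
```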
The index was previously critiqued in ways that led to a shift from framing data in terms of “disadvantage” toward a framework of “opportunity.” This led not only to a renaming of the index (from the “Health Disadvantage Index” to the “Healthy Places Index”) but also to a shift in how data are reported (e.g., highlighting the percentage of the population with a BA degree or higher rather than the percentage without a college degree).
The HPI is also limited by confounding: some indicators with strong evidence of health effects show contrary associations with life expectancy at birth at the census-tract level. The steering committee has also acknowledged that the HPI may not be accurate for census tracts undergoing rapid population change (e.g., due to immigration, rapid gentrification, or other shifts).
The HPI notably does not correlate strongly with CalEnviroScreen, which the HPI steering committee noted fails to identify one-third of the census tracts with the worst conditions for population health. The HPI ultimately treats environmental factors as one part of overall health rather than as a central determinant. However, this disconnect between CalEnviroScreen and the HPI may also reflect the challenges environmental injustice advocates have faced in linking environmental factors to health outcomes, links that may not be as visible or geographically direct as those between health and other indicators.
Lawsuit led by River Region Crime Commission (RRCC) to retrieve LTR information
http://www.la-fcca.org/Opinions/PUB2004/2004-04/2003CA0079.Apr2004.Pub.12.pdf
Article by Barbara Allen (2005), “The problem with epidemiology data in assessing environmental health impacts of toxic sites”
https://www.witpress.com/Secure/elibrary/papers/EEH05/EEH05048FU.pdf
“The registry focuses on cancer incidence, which can be caused by a number of factors, instead of the risk faced by people exposed to emissions from industrial operations. In Terrell’s view, that has allowed companies and the state Department of Environmental Quality to misconstrue its significance.” (Mitchell 2021)
“While scientists will argue that the one-year reporting standard, as set by the state statute, is arbitrary, a five-year reporting timetable is equally arbitrary and less sensitive to changing health patterns. More problematic, however, were the eight large geographic regions. Each region consisted of as many as twelve parishes (a parish is a county in Louisiana) and in the case of the regions that include the parishes of the chemical corridor, industrial parishes are “diluted” by non-industrial parishes, making the determination of elevated cancer rates near chemical plants impossible to decide. The LTR also tends to downplay the rarer cancers, both adult and pediatric, saying the “rates tend to fluctuate because of small numbers...[and] are less reliable and should be cautiously interpreted” [4]. This infuriates the residents and researchers as these rare cancers are of major concern as they may be linked to chemical exposure.”
Response to new health study (March 2021)