Putting Equity First in Health Care Innovation

Health care innovation should be driven by the desire to improve outcomes for underserved communities, not by the allure of the latest emerging technologies.

Most innovations in the health care system, including those aimed partly or fully at health equity goals, center on state-of-the-art technologies such as wearables, sensors, digital applications, artificial intelligence, and remote patient monitoring, which may exacerbate rather than lessen disparities.

Modern technological solutions in health care are justified, at least in part, by the idea that digital tools can be a great equalizer, reducing long-standing biases and expanding the capacity of an overworked and inefficient system. Yet while these technologies have the potential to improve access to and the efficiency of health care, there is little evidence that the money invested in tech-centric innovation produces lasting, beneficial change for communities where health inequities are rife. Those inequities carry the weight and complexity of centuries of marginalization and exploitation of oppressed people, in contrast to the technologies all too frequently deployed to address them. Solving the problems these communities face requires recognizing and addressing the systemic disparities, particularly those ingrained in the health care system, that harm the physical and mental well-being of populations living on the margins. Used carelessly, cutting-edge technology can have disastrous effects on vulnerable communities; used thoughtfully, it can be a potent instrument for achieving equity in health care.

When Technological Innovation Fails to Promote Equitable Health Care

One of the most obvious examples of contemporary health care technology that perpetuates disparities in the US is not a “new” technology at all, but a new version of an old technology that produced worse health outcomes than its predecessor. Pulse oximeters, originally developed by Hewlett-Packard in the 1970s, monitor blood oxygen levels by estimating the quantity of light absorbed by human tissue. The company took care to confirm the tool’s accuracy across varied skin tones by testing it among individuals of color.

Nevertheless, the optical color-sensing technology used in most modern pulse oximeters, which are now made by a small number of manufacturers, often fails to reliably measure blood oxygen levels in individuals with darker skin tones. Despite this acknowledged flaw, pulse oximeter readings were treated as a critical “biomarker” for triage and early hospitalization decisions when COVID-19 first struck. Disturbingly, several patients of color who told emergency room doctors they were having trouble breathing were sent home because the device indicated they did not need oxygen.
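To make this failure mode concrete, here is a minimal sketch of the “ratio of ratios” principle behind pulse oximetry. The linear calibration below is an illustrative textbook approximation, not any vendor’s actual formula; real devices map the ratio to oxygen saturation through empirical curves fit to volunteer data, which is exactly where under-representation of darker skin tones introduces bias, since melanin absorbs additional light and shifts the measured signal.

```python
def estimate_spo2(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate blood oxygen saturation (SpO2, in percent).

    AC = pulsatile component, DC = baseline component of light absorbed
    at red (~660 nm) and infrared (~940 nm) wavelengths.
    """
    # "Ratio of ratios": dividing AC by DC is meant to cancel constant
    # absorbers such as skin pigment -- imperfectly, in practice.
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    # Illustrative linear calibration often quoted in textbooks; real
    # devices use curves fit to calibration-study volunteers.
    return 110.0 - 25.0 * r
```

With these illustrative numbers, a ratio of 0.5 maps to a saturation of 97.5 percent; if extra absorption skews the measured ratio, the reported saturation shifts with it, which is how a device calibrated mostly on light-skinned volunteers can overestimate oxygen levels in patients with darker skin.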

The rapidly expanding fields of artificial intelligence (AI) and machine learning (ML) offer further examples of technologies used to inform or guide clinical decision-making that have also been shown, at times, to worsen health inequities. These systems are frequently built on homogeneous data sets and biased criteria that either misrepresent the data of minority communities or fail to represent the patient population as a whole. For example, an algorithm designed to prioritize patients for kidney transplants ranks Black patients lower than White patients even when all other parameters are identical.

This is true even though Black Americans account for almost 35 percent of dialysis patients while making up only about 13 percent of the US population, and are approximately four times as likely as White Americans to experience kidney failure. After research revealed these significant disparities, some institutions stopped using the algorithm, while others began the process of replacing it. In instances where built-in AI algorithms have been used to guide clinical decisions and to identify the individuals most in need of medical attention, researchers have found significant racial inequities that negatively influence patient care.
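The kidney example can be illustrated with a short sketch. The numbers below are drawn from the now-retired 2009 CKD-EPI equation, which multiplied estimated glomerular filtration rate (eGFR) by about 1.159 for Black patients, and from the common practice of considering transplant waitlisting once eGFR falls to 20 or below; treat both as illustrative rather than a faithful implementation of any transplant program’s rules.

```python
RACE_COEFFICIENT = 1.159   # multiplier from the retired 2009 CKD-EPI equation
WAITLIST_THRESHOLD = 20.0  # eGFR (mL/min/1.73 m^2) at or below which
                           # transplant waitlisting is commonly considered

def adjusted_egfr(base_egfr, recorded_as_black=False):
    """Apply the race multiplier the old equation used for Black patients."""
    return base_egfr * RACE_COEFFICIENT if recorded_as_black else base_egfr

def waitlist_eligible(base_egfr, recorded_as_black=False):
    """Higher reported eGFR reads as 'healthier' kidneys, delaying eligibility."""
    return adjusted_egfr(base_egfr, recorded_as_black) <= WAITLIST_THRESHOLD
```

With a measured eGFR of 18, a White patient clears the threshold, while the same kidney function in a Black patient is inflated to roughly 20.9 and falls just outside it, delaying referral despite identical physiology.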

Despite these unsettling results, however, innovators have continued to create and deploy algorithms for other health-related uses without taking the necessary precautions to ensure they do not harm patients on the basis of race, a social construct that has no place in clinical decision-making. Many of these algorithms are not regularly or thoroughly audited for their impact on health care consumers, especially vulnerable ones, nor have they been subjected to rigorous evaluation or peer-reviewed publication.
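One precaution called for above, routine auditing, does not need to be elaborate. A minimal disparity audit can simply compare error rates across demographic groups; the sketch below computes the false-negative rate per group, that is, how often patients who truly needed care were flagged as not needing it, a metric central to published audits of clinical algorithms. The record format is hypothetical.

```python
from collections import defaultdict

def false_negative_rate_by_group(records):
    """Compute per-group false-negative rates.

    records: iterable of (group, needed_care, flagged_for_care) tuples,
    where the last two fields are 0/1. A false negative is a patient
    who needed care (1) but was not flagged for it (0).
    """
    tallies = defaultdict(lambda: [0, 0])  # group -> [false_negatives, positives]
    for group, needed_care, flagged in records:
        if needed_care == 1:
            tallies[group][1] += 1
            if flagged == 0:
                tallies[group][0] += 1
    return {g: fn / pos for g, (fn, pos) in tallies.items() if pos > 0}
```

A large gap between groups’ false-negative rates is a red flag that an algorithm is systematically under-serving one population, exactly the pattern researchers have documented in deployed clinical tools.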

Industry and government funding for artificial intelligence, machine learning, and other advanced technologies continues to flow despite growing concerns about their use. Industry has invested heavily in the digital health space: Grand View Research estimated the market at $211 billion in 2022 and projects that it will grow by roughly 18.6 percent per year through 2030. Not to be outdone, the US government is also investing heavily in digital health. To address health and socioeconomic disparities worsened by the COVID-19 pandemic, the US Department of Health and Human Services has set aside $80 million to improve US public health informatics and data science. An additional $50 million in federal funding is earmarked for the recently announced Digital Health Security (DIGIHEALS) initiative, which aims to protect digital health data.

Several aspects of seemingly benign but pervasive technologies, such as wearables, remote monitoring systems, smartphone apps, and even telehealth services, which are frequently promoted as health equity solutions, nonetheless harm underprivileged groups. These include the following:

  • Internet connectivity has emerged as a major predictor of health, more so than education, employment, or access to health care. For populations that are physically or socially isolated, factors beyond lack of access, including poverty, low engagement with digital health, barriers to digital health literacy, and language hurdles, may render these solutions ineffective.
  • Instead of giving marginalized people the resources they need to adopt healthier lives, self-monitoring apps rely on persuasion to nudge users toward healthier decisions and behaviors. More troublingly, well-intended rewards for healthy habits can end up favoring the wealthy and penalizing the underprivileged.
  • There is an inherent waste problem when “innovations” are pushed onto marginalized populations despite technological mismatches with local needs, values, capabilities, or connectivity. This waste can include outdated software, misplaced hardware, and ineffectual regulations. Poor communities simply cannot afford to squander scarce funds that could have gone to longer-term, evidence-based health interventions.
  • High-tech solutions have gradually supplanted high-touch care because of digital health innovations’ promise of cost savings and scalability. Human contact remains crucial to health care, however, and high-touch models have been associated with better access to preventive care for some marginalized groups. Telehealth and other digital health technologies have their uses, but they should enhance patient-provider interactions rather than replace them.


Transitioning to Equity

How do we move from technological innovation that is expensive and further marginalizes the vulnerable to innovation that is equitable, human-centered, meaningful, and lasting for the disadvantaged? Four fundamental principles ought to guide the process:

Make health care institutions accountable: Holding health care institutions responsible for developing just and durable solutions that advance equity is the first step toward building a digital health ecosystem that benefits everyone. Doing so requires ensuring that health care institutions fulfill their promises to promote health equity and rigorously evaluate any new digital health development from an equity standpoint before releasing it to the public.

Include a range of viewpoints from key decision-makers: Creating equitable innovations requires involving a diverse range of stakeholders who can bring varied lived experiences to the health care innovation process. The people most likely to suffer severe health inequities are also frequently underrepresented in research and development, have historically faced marginalization in the tech sector, and are almost absent from senior and executive positions in the health care industry. As a result, important perspectives are excluded from decision-making about digital health advances.

Involve marginalized groups in product testing and research: Forging a more equitable path for health care innovation requires adopting equitable research paradigms, such as community-based participatory research, creating opportunities for members of marginalized communities to co-create with researchers and designers, or simply expanding the pool of research participants. Throughout the development and testing of new digital health solutions, these approaches give underrepresented groups the chance to offer the suggestions and feedback that only they can provide.

Swap out pricey, high-tech solutions for less expensive ones: The cost of health care is already far too high for many Americans. The monetary burden of modern technology falls even more disproportionately on marginalized communities, who typically earn lower wages and are already strained by the lack of wealth transfer across generations. Businesses and the federal government investing in digital health solutions must take care not to inundate the public with redundant and superfluous technologies that drive up the already prohibitive costs of the health care system.

Putting Money Into Social Ventures That Advance Health Equity

Redefining innovation will require significant support from the nonprofit sector. There are, however, a few instances of academics and charitable organizations developing digital solutions expressly to assist underprivileged populations. For example, University of Southern California researchers created an algorithm to determine who within a particular homeless community is best positioned to disseminate critical HIV-prevention information among young people. And a German NGO created a smartphone app that provides information on more than 750,000 locations worldwide, color-coded to indicate whether they are fully, partially, or not wheelchair accessible.

Social innovation for health should be understood as innovation in social connections, power dynamics, and governance, and it may also involve institutional and systems transformation, not merely the creation of tangible products that meet societal and structural needs. Addressing the biases ingrained in our present health care system must be a top priority for government, the for-profit and nonprofit sectors, and everyone hoping to create an equitable health care system that responsibly employs emerging technologies. Before developing clinical decision tools that may exacerbate prejudice, for example, we need a deeper understanding of implicit bias among health care practitioners. All significant players in digital health ecosystems must also build trust with underrepresented minority groups to increase participation in clinical trials, pilot programs, and other research initiatives.

Mission-driven and socially conscious organizations should take the lead in creating new standards, heuristics, and roadmaps that promote social innovation in the health care industry. Most importantly, if these organizations can decolonize health-related research and development, rigorously test new technologies, and assess their impact on vulnerable groups, they can hold all organizations accountable for establishing the conditions required to generate responsible digital health solutions.

While certain digital health solutions may help underprivileged communities live better lives, their misuse or overuse may exacerbate the already severe problems in the health care system that have led to significant health inequities. Those whom the US health care system fails most are more likely to benefit from strategic investments in health-focused social innovation than from pouring more public and private funding into tech solutions that have either contributed little to health equity or made an already unfairly biased system even more so. With significant investments in health innovation on the horizon, we have two options: build a system that benefits everyone, or continue down the path of careless technology that harms communities of color. Let’s hope we make the right decision for the benefit of everyone living on the margins of the US health care system.
