Three decades of HIV …and still learning

On June 5, 1981, five cases of Pneumocystis carinii pneumonia were reported among young men in Los Angeles, in what would become the first published account of AIDS. The pandemic that transformed the world marks its thirtieth anniversary this month. And while much attention is appropriately being paid to declarations about the Millennium Development Goals and to reports on the broad state of the disease across continents, it’s easy to forget some of the critical lessons we’ve learned over the years from this unparalleled pandemic. In this week’s blog post, we’ll revisit some of the historical lessons we’ve learned from HIV: from the redefinition of the behaviorist model of health promotion, to the detailed tax records of the pharmaceutical industry.

Redefining the behaviorist model

The simplistic, 1980s view of controlling the HIV pandemic was this: if people simply stopped engaging in “risky behavior”, HIV infections would cease. The problem was that this view met crushing real-world data: even when a majority of persons became aware of how HIV was transmitted, many still lived in circumstances that did not lend themselves to easily avoiding infection. During the early 1990s, critical economics and anthropology literature revealed that many of the women most at risk for HIV were those selling sex for money. Globalization and health were connected for the first time, as Walden Bello described how destructive trade policies in Thailand led to a mass forced migration of young women into urban spaces as rural farms collapsed under unequal foreign trade rules; the resulting increase in prostitution left many women unable to negotiate condom use, or trading sex so often that such use would eventually fail. And among male laborers, such as truck drivers or miners (as in Catherine Campbell’s famous studies), lives spent amidst violence and high-risk occupations meant that the likelier prospect of dying in an industrial accident or roadside crash led many men to view a long-term disease threat as a secondary consideration. Their wives at home bore a hidden second epidemic as the men returned from their migration routes.

What the critical ethnographies of the 1990s revealed, then, was that HIV was not simply a matter of “behavior”, but of the context and agency of the persons who engage in behavior, often within social and economic systems characterized by duress and hopelessness (who cares about HIV when there’s food to put on the table tonight?). Paul Farmer’s writings from the Central Plateau of Haiti captured the dilemma most passionately; Catherine Campbell famously captured the academic results in her review of behavioral studies: “providing information about health risks changes the behavior of, at most, one in four people–generally those who are more affluent and better educated.” More recent data on the behaviorist approach are no more encouraging. Hollywood advertisements said that “we are all at risk for HIV”, but this was decidedly not true: the social and sexual networks of poor people in Washington, D.C. meant that 1 in 20 people there had the infection, while the social and sexual networks of the more privileged were effectively isolated from the disease. Comments about African sexuality abounded, but hard data revealed that Japanese and European populations were far more promiscuous than those populations with the highest rates of HIV.

This was to say little of those in urban America, Eastern Europe, and Southeast Asia who had essentially no drug rehabilitation options while facing mental illness and injection drug addiction. Some of the most creative studies of the 1990s were those finding “hidden populations”: bringing injection drug users back into the spotlight of epidemiologists, for example, to show that needle exchange programs resulted in a marked reduction in HIV transmission. How do you keep track of a population that does not regularly visit official clinics and hospitals? Instead of following people, Edward Kaplan and Robert Heimer kept track of their needles, and back-calculated how the rate of infection among the needles implied changes in the rate of infection among drug users. Exchanging dirty needles for clean ones was calculated to produce a net cost savings to the state by averting future HIV medical expenses.
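A stylized version of that “follow the needles” logic can be sketched in a few lines. The toy model below is not Kaplan and Heimer’s actual syringe-tracking model, and every parameter value in it is an assumption chosen for illustration; it simply shows the direction of the effect: an exchange program shortens how long a needle circulates (fewer uses per needle), which lowers the chance that any given needle is contaminated, and with it the annual infection risk to a susceptible injector.

```python
# A toy "follow the needles" model. All numbers below are illustrative
# assumptions, not estimates from the New Haven study.

def needle_contamination_fraction(prevalence, uses_per_needle):
    # Chance a circulating needle was used at least once by an
    # HIV-positive injector before being retired or exchanged.
    return 1 - (1 - prevalence) ** uses_per_needle

def annual_infection_risk(prevalence, uses_per_needle,
                          shared_injections_per_year, per_exposure_risk):
    # Annual HIV risk to a susceptible injector who draws shared
    # needles at random from the circulating pool.
    f = needle_contamination_fraction(prevalence, uses_per_needle)
    p_per_injection = per_exposure_risk * f
    return 1 - (1 - p_per_injection) ** shared_injections_per_year

# Illustrative inputs: 30% seroprevalence among injectors, ~0.7% risk
# per injection with a contaminated needle, 50 shared injections/year.
before = annual_infection_risk(0.30, uses_per_needle=10,
                               shared_injections_per_year=50,
                               per_exposure_risk=0.007)
after = annual_infection_risk(0.30, uses_per_needle=2,   # exchange shortens
                              shared_injections_per_year=50,  # circulation
                              per_exposure_risk=0.007)
print(f"annual risk without exchange: {before:.1%}")
print(f"annual risk with exchange:    {after:.1%}")
print(f"relative reduction:           {1 - after / before:.0%}")
```

The real model was subtler (it inferred infection rates from the returned needles themselves, which could be tested directly), but the intuition is the same: faster needle turnover means cleaner needles in circulation, and fewer infections without ever having to follow individual users.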

What are acceptable deaths?

The redefinition of common epidemiological approaches was among the HIV activist movement’s great accomplishments. Cost-effectiveness studies had previously been used to claim that some people simply could not receive life-saving treatment, because their lives were not “cost-effective” to save in this manner. There was both a moral and a methodological problem with these studies. The moral problem was made clear by the activist movement: who decided which lives were worth saving, and which were not? In the words of then-WHO HIV Director Michael Merson, “In the ’90s it became clear we were not going to have a major heterosexual epidemic in the States…[HIV] was no longer a threat to the West,” and so the urgency of lost lives changed.

The methodological problem was more complex: traditional cost-effectiveness models have used (and often still use) a decision-tree framework in which the costs of one medical approach are pitted against another; the probabilities and costs of each downstream branch of possibilities are summed (for example, we can either treat a person with antiretrovirals or not, and the person will have a certain probability of getting pneumonia in each case; the pneumonia has a certain cost to manage, which can be compared against the cost of initially treating the person with antiretrovirals). This approach fails to account for how antiretrovirals dramatically reduce the transmissibility of the virus, and therefore fails to consider downstream infections averted by treatment, which markedly change the cost calculation.
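To see why this matters, consider a deliberately simplified two-branch tree. Every cost and probability below is an invented placeholder, not data from any study; the point is structural: once treatment is credited with the downstream infections it averts, the comparison can flip.

```python
# A deliberately simplified treat/no-treat decision tree. All costs and
# probabilities are invented placeholders, chosen only to show how
# crediting averted transmission can flip the comparison.

def expected_cost(treat, count_transmission,
                  art_cost=800.0,            # annual antiretroviral cost
                  pneumonia_cost=1500.0,     # cost of managing pneumonia
                  p_pna_treated=0.05,        # pneumonia risk on treatment
                  p_pna_untreated=0.40,      # pneumonia risk off treatment
                  infections_averted=0.5,    # secondary infections averted
                  lifetime_care_cost=6000.0):
    p_pna = p_pna_treated if treat else p_pna_untreated
    cost = (art_cost if treat else 0.0) + p_pna * pneumonia_cost
    if count_transmission and treat:
        # Credit treatment with the future care costs of the infections
        # it averts through reduced transmissibility.
        cost -= infections_averted * lifetime_care_cost
    return cost

for flag in (False, True):
    label = "with" if flag else "without"
    print(f"{label} transmission effects: "
          f"treat = {expected_cost(True, flag):.0f}, "
          f"no treatment = {expected_cost(False, flag):.0f}")
```

Under these illustrative numbers, the static tree makes withholding treatment look cheaper (600 versus 875); once averted infections are counted, treatment dominates outright.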

But the elitism of public health decision-making (the idea that a privileged few in Geneva can decide who will live or die) was challenged at a deeper level by HIV activism. The claim that prevention and treatment must be pitted against one another, a false dichotomy that makes sense only in theory and not in the actual public health practice of working in a community whose families include both HIV-negative and HIV-positive people, was sustained by arguments as strange as the USAID Director’s claim to Congress that “Africans can’t tell time”, and by the idea that poor places had too little infrastructure to sustain chronic medical therapy (the same places that had delivered tuberculosis therapy for years).

The former idea was simply ignorant of the vast diversity of cities and towns in a continent large enough to fit many other countries inside it.


The latter idea was also based on stereotypes, ones challenged by Partners in Health, Doctors Without Borders, and other groups that demonstrated how well-supported community health workers could achieve viral suppression and adherence rates (and therefore less drug resistance) among poor patients in low-income countries that surpassed those of patients in the US and Europe.

The real driver behind the argument, of course, was that no one wanted to pay. (For perhaps the most telling history of this sequence of logic, read Barton Gellman’s series in The Washington Post from 2000, essential reading for anyone interested in HIV: Part 1, Part 2, and Part 3.)

Investigating pharma

The claim that pharmaceuticals would simply have to be inaccessible to the poor was among the most sustained claims in the history of HIV, and the one least supported by the data.

As the White House under Bill Clinton and Al Gore threatened the governments of Brazil, India, and South Africa over their attempts to distribute lower-cost generic versions of antiretrovirals, supporting massive trade sanctions and lobbying efforts to restrain access to medicines through the World Trade Organization negotiations process, the key argument was that high prices were essential to the research and development of key drugs. A look at the pharmaceutical industry’s own filings with the Securities and Exchange Commission revealed a different truth.

While the industry was making profits as a percentage of revenue far higher than any other industry, the profit dollars were not, in fact, going into research and development.

And most of the key antiretroviral drugs had, in fact, been developed with taxpayer dollars through National Institutes of Health funding to universities, which licensed the drugs (often after the clinical trials stage) to the pharmaceutical industry at rock-bottom royalty rates. The industry spent millions creating an “intellectual echo chamber” (their words) of university professors who would support them against price controls or competition, even though, per industry executives, the cost of losing the African market amounted to just “three days fluctuation in exchange rates”. But some university students engaged in activism to change licensing policies, resulting in dramatic price reductions of key medications as generic competition entered the marketplace.


Yet, critically, fewer than 40% of HIV patients in low- and middle-income countries currently have access to antiretrovirals.

Redefining sustainability

The rethinking around antiretrovirals was also based on an essential premise: while we tell people in the United States to eat healthier food, we recognize that their food environment puts the poor at the highest risk of diabetes, and we consider it unethical to take insulin away from diabetics. So why should we consider it acceptable to take antiretrovirals away from those with HIV, as has happened in the past few years? As the recession looms over aid budgets (despite the dramatically smaller share these budgets represent of the overall deficit compared with, for example, military spending), arguments have been made to regress to a time when “sustainability” was confused with “short-term” or “the simplest intervention”.

A new wave of regressive arguments has been fueled out of select universities and foreign policy councils, where a conservative backlash has attracted inexperienced students drawn to the idea that HIV has garnered “too much attention”, letting public health advocates fight amongst themselves over pennies rather than recognize the common social and economic determinants of disease and the collectively small funding for global health. Alternatively, they could recognize that HIV is a symptom of a larger set of processes that contribute to most diseases among the poor (for example, the determinants of a new wave of chronic disease), and that much can be accomplished for other conditions by learning from HIV activists.

HIV activists have reconfigured our understanding of social networks to appreciate that there is nothing sustainable about letting vast numbers of productive people die of a treatable disease: such deaths sustain neither an economy nor, more importantly, a society. The best means of handling a crisis of orphans, after all, is to keep their parents alive.

What has come most recently out of this redefinition of sustainability is the proposal to alter the global aid environment to recognize what was appreciated long ago about domestic support systems: that when times are hard and economies crash, we help our neighbors by creating a domestic safety net, placing our taxes into a common pool to provide food stamps, shelter, and the like to those at the losing end of hurricanes or recessions. There is no reason why, in a global economy, those systems should be confined to within-country transactions. After all, debt and resources flow from poor countries to rich ones (not the other way around), and the international aid system has done a very limited and inconsistent job of sending resources in the opposite direction. Gorik Ooms and others have called for an equivalent of domestic social safety net systems at the global level, applying the lessons of HIV (which were used to create the Global Fund, the first international group to redefine international exchanges and sustainability) to the broader dilemmas of the current aid environment.

Like Gorik’s proposal, much of what we learn from the history of HIV is less about the virus per se than about what it teaches us about the potential of our society, as well as its ills. HIV has revealed much about our social assumptions, about whose thoughts and rights we value (and whose we don’t), and about who gets to decide life and death. At this thirtieth anniversary of the pandemic, it’s time to catalogue and preserve these lessons, especially as HIV is followed by waves of new epidemics as varied as drug-resistant tuberculosis and diabetes. These new challenges offer us the possibility of forgetting our history and politics, or, alternatively, the opportunity to note the past and remember the many meanings of AIDS.
