The UX Of Ethics: Should Google Tell You If You Have Cancer?

By Mark Wilson for CO.DESIGN

Big data can spot things we’d never see coming. But are companies like Google and Amazon responsible for telling us the bad news?

“If I’m on a park bench, and I’m next to someone, and I hear them talking about symptoms of cancer, am I obligated to turn around and tell them they might have cancer?”

Samuel Volchenboum, Director of the Center for Research Informatics and Associate Professor of Pediatric Oncology at the University of Chicago, lets the question float in the air for a moment before breaking his own silence.

“You’ll get different answers depending who you ask.”

Medical ethics is a complicated topic. For a practicing doctor tending to their patients inside a hospital, the rules are relatively clear. But as soon as that doctor steps into the real world, their vast knowledge of the human body gives them the power to diagnose a potentially unwilling populace—people who might not be asking for a diagnosis of a terminal illness, or to have the telltale signs of their chemical addiction broadcast to every fellow passenger on the subway.

It’s an age-old dilemma for doctors, but a new one for companies like Google, Facebook, Microsoft, Amazon, and Apple, whose artificial intelligence products are rapidly approaching the same diagnostic power. Google Search can already predict coming flu trends with some level of success. It’s not hard to imagine a system that can track my searches over a year—an ache, a cough, a rash—recognizing a cascade of symptoms that point to a disease with surprising accuracy. Apple could measure whether my finger taps in iOS were slowing and flag a sudden cognitive decline. An accelerometer in an Android phone could easily track a sudden shift in my gait.
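
None of these products is known to work this way today, but the underlying detection is not exotic. Here is a minimal sketch, in Python, of how a phone might flag a sudden shift in a daily signal such as step cadence or tap speed; the rolling baseline, the threshold, and the numbers are all hypothetical assumptions for illustration, not any vendor’s actual implementation.

# Hypothetical sketch: flag a sudden shift in a daily biometric signal
# (e.g., average step cadence or tap interval). Not any real vendor's API.
from statistics import mean, stdev

def flag_sudden_shift(daily_values, baseline_days=30, z_threshold=3.0):
    """Return True if the most recent day deviates sharply from a rolling baseline."""
    if len(daily_values) <= baseline_days:
        return False  # not enough history to establish a baseline
    baseline = daily_values[-(baseline_days + 1):-1]  # the previous 30 days
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    z = abs(daily_values[-1] - mu) / sigma
    return z > z_threshold

# Example: 30 days of steady cadence (~110 steps per minute), then a sudden drop.
history = [110 + (i % 3) for i in range(30)] + [92]
print(flag_sudden_shift(history))  # True -> worth surfacing, if the user opted in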

If Target knows when a teen is pregnant based on just a few purchases, imagine what Facebook could learn from so much more—what she searches for, what she says and to whom, and where she goes. In a world where our machines can spot trends in our data that could save our lives, a new ethical question is emerging: What are the ethical responsibilities of Google, Apple, Microsoft, or any other company that sits on insights mined from mountains of our personal data?

Or put differently, if Google knows I’m sick before I do, is it ethically obligated to tell me? And if so—how does it tell me?

BAD PRECEDENTS CAN SCARE THE INDUSTRY AWAY

This question might seem like nothing more than a thought experiment. Yet there’s some precedent in Mountain View.

Google Flu Trends, which launched in 2008, used data from 40 search queries to predict regional flu outbreaks. This was the golden era of Google’s “don’t be evil” PR machine. The project had been produced during an employee’s fabled 20% time, like a gift from the coding gods. The media fawned over the technology, and so did the CDC. In fact, the CDC and Google joined forces to pool their different data sets.

But media coverage and public perception are tricky things. As the Atlantic recounts, a paper published in Science, titled “The Parable of Google Flu: Traps in Big Data Analysis,” pointed out that Google’s big data experiment didn’t predict the flu as well as the CDC’s own surveillance had all along. It didn’t matter that the paper also reported an important fact, buried beneath the lede—that Google Flu Trends combined with CDC data created better predictions than either resource had alone. The media had a field day with the hubris of Google Flu Trends all the same. Wired called it an “epic failure.” Time said it “showed the failings of big data.”
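
The “better together” finding is easy to picture in miniature. Below is a toy sketch with entirely synthetic data—not the paper’s actual analysis, and the variable names and numbers are invented—showing the kind of comparison at issue: predict flu incidence from search-query volume alone, then add lagged surveillance data and compare the fit.

# Toy illustration (synthetic data): predicting flu incidence from search-query
# volume alone vs. search volume plus lagged CDC-style surveillance data.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
true_flu = 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, weeks))   # seasonal curve
searches = true_flu + rng.normal(0, 1.0, weeks)               # noisy, real-time proxy
cdc_lagged = np.roll(true_flu, 2) + rng.normal(0, 0.3, weeks) # accurate but two weeks late

def fit_and_score(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))  # add intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()                            # in-sample R^2

print("searches only:     ", round(fit_and_score([searches], true_flu), 3))
print("searches + CDC lag:", round(fit_and_score([searches, cdc_lagged], true_flu), 3))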

As a result of the negative press, Google stopped running Flu Trends internally and handed the data to third-party academics instead.

It just goes to show that the stakes are high when it comes to health data, and even Silicon Valley’s brightest companies, which are known to buck public policy in the pursuit of innovation, can back away from a fight after one round of bad press.

“There seems to be such a sacred fence around the medical relationship,” Volchenboum says. “For some reason we put it on a different pedestal. Does it need to be there? Maybe.” Case in point: when the $9 billion medical startup Theranos promised to bring a Silicon Valley approach to blood testing, vital patient diagnoses were lost to a horrifying overhype cycle.

HOW GOOGLE IS CREEPING TOWARD MEDICAL AUTHORITY

In fact, a source familiar with the matter tells me that Google has been slowly rethinking how it deals with medical search results. Like all Google search results, medical results are ranked by “relevance”—an inherently tricky concept. Relevance is determined by test users who look at algorithmic search results and rank them.

Notably, just because something is relevant doesn’t make it true. Google doesn’t prioritize medical consensus when ranking results for the symptoms you search, just as scientific consensus doesn’t drive its results on global warming. And the difference between relevance and veracity is why, for every online health query that leads you to the Mayo Clinic or WebMD, you may also find your way to snake-oil healing sites or rants on the dangers of vaccines.

Now, Google is quietly using a new tool to sidestep the complicated ethics of medical search results and endorse a right answer. A search for “chicken pox” brings up a Google card that features an illustration of pox as well as quick takeaway information about the symptoms, diagnosis, commonality, and treatment. This information is part of Google’s Knowledge Graph—the same consensus-based data that allows Google to simply tell you how many feet are in a mile rather than burying that information in a search result link. And it’s easy to imagine Google gradually building out its Knowledge Graph results to the point where they always have the medical answer to your search query, without putting the onus on the user to fact-check various websites to ensure they’re pursuing proper medical treatment.

SHOULD GOOGLE JUMP BACK INTO MEDICAL PREDICTION—ETHICALLY SPEAKING?

Even so, the evolution of Google’s Knowledge Graph is hardly a parallel to a doctor who spots a passenger’s melanoma on the subway and needs to decide whether to speak up. The question remains: If Google or another technology company has the ability to spot that I have cancer before I do, should it ethically have to tell me?

As complicated as this question sounds, it turns out that most experts I asked—ranging from an ethicist to a doctor to a UX specialist—agreed on the solution: Google, along with Facebook, Apple, and their peers, should offer consumers the chance to opt in to medical alerts.

“Key to these messy questions is what users/customers expect and have ‘bought in to,’ not some abstract principle,” says Sandy Pentland, director of the MIT Connection Science and Human Dynamics labs. “That is called informed consent. People have to know what is going on, what to expect, and have opted in, not just failed to opt out.”

“These companies are going to have to come up with a way to allow users to be like, ‘Listen, there’s a possibility we might know you have cancer before you do,’” says Charles Fulford, executive creative director at Huge. “‘We cannot do anything with that information, or we can alert you to this information, or we can tell your doctor to reach out to you.’”

Opting in would set a precedent for the relationship between you and your company of choice. It’s kind of like telling a friend, “Let me know if I get some of this kale salad stuck in my teeth.” Everyone involved in that lunch understands their role and responsibility. And who wouldn’t jump at the chance to have a company like Google not just serving you highly personalized ads, but highly personalized warnings that could save your life?

“I would definitely opt in, because I’m an information consumer. I understand the risks,” Volchenboum says. “From my point of view, my patients are doing this already, in that they’re constantly bringing me things they find [online].”

Indeed, we’re all self-diagnosing with the Internet as it is. But there is one potential ethical pitfall: false positives. An incorrect diagnosis, or a suggestion to get something checked out, can lead to lost dollars and increased medical risk. Volchenboum points to how the American Cancer Society has shifted the recommended starting age for mammograms from 40 to 45, because machines often spot tissue anomalies that might not be dangerous breast cancers—and treating innocuous findings can lead someone to an unnecessary surgery with serious complications.
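
The scale of the false-positive problem comes straight from base rates. A short worked example, using made-up but plausible numbers, shows why an alert that catches 99% of real cases can still be wrong most of the time when the condition it screens for is rare.

# Worked base-rate example with hypothetical numbers: even a sensitive, specific
# screen produces mostly false positives when the condition is rare.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 1,000 users actually has the disease, and the alert catches
# 99% of real cases while wrongly flagging only 1% of healthy users.
ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.99, specificity=0.99)
print(f"Chance an alerted user is actually sick: {ppv:.1%}")  # roughly 9%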

“You have to consider, the number of false positives you get could be enormous, and could put incredible drain on society to answer the questions that come up, and the unnecessary follow-up testing that could result,” Volchenboum says. “You have to think about both sides. Systematically, how could this be applied in a way that wouldn’t put a burden on society with higher costs.”

Aside from his practice, Volchenboum runs a startup named Litmus Health that’s attempting to answer some of these questions by analyzing a patient’s Fitbit data—which he admits doctors have no idea how to parse today—for usable insights. Most of the biometric data we have is nothing more than noise to the medical community. And in this regard, while it may be Google’s ethical responsibility to tell us when we could be seriously ill, if it’s not done in the right way, at the right time, for the right reasons, it will only create more noise.

“It’s going to be a balance,” Fulford says. “With every piece of software that’s built, it’s just going to take constant monitoring.”

A QUESTION WITH IMPLICATIONS FAR BEYOND MEDICINE

Even if we are able to resolve all of the medical implications of an omniscient Google through opt-in consent, society will soon have to deal with even more consequences stemming from big companies wielding big data. As AI rapidly becomes more knowledgeable than we are, companies will have to figure out what that really means for how they speak to, and decide for, their clients.

It’s not just about spotting cancer before we do. As our lives grow more and more automated by the cloud, systems will be making all sorts of decisions on our behalf—sometimes out of safety, and sometimes just out of convenience. Understanding how these systems are thinking and acting on our behalf is crucial to trusting them.

“Right now, these things are taken care of in privacy policies and you’re given a 20-page document, and you have to scroll through those 20 pages before you can accept—which is just a really antiquated way to deal with it,” Fulford says. “There’s going to have to be that messaging, cadence, in how it’s said, and how these algorithms update that this message is communicated to the user is going to be a fine science.”

Shopping on Amazon is a perfect example of how this lack of transparency plays out today. Amazon changes the prices of items constantly—sometimes several times in the same day. It might offer several SKUs for the same item, with no particular motivation to show you the Bounty towel pack that’s the least expensive per sheet.
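
The per-sheet comparison itself is trivial to compute; the point is that nothing in the interface is obliged to do it for you. A hypothetical sketch, with invented SKUs and prices:

# Hypothetical SKUs for the same paper-towel product; names, prices, and sheet
# counts are invented for illustration, not Amazon data.
skus = [
    {"name": "Bounty, 6 rolls",  "price": 9.49,  "sheets": 6 * 110},
    {"name": "Bounty, 12 rolls", "price": 21.99, "sheets": 12 * 110},
    {"name": "Bounty, 8 rolls",  "price": 11.89, "sheets": 8 * 105},
]

cheapest = min(skus, key=lambda s: s["price"] / s["sheets"])
for s in skus:
    print(f'{s["name"]}: {100 * s["price"] / s["sheets"]:.2f} cents per sheet')
print("Best per-sheet value:", cheapest["name"])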

And with its new Dash buttons, which order a predetermined product at the touch of a button, the consumer is even further distanced from Amazon’s strange pricing structure. The product’s price may have changed since the button was installed, a more cost-effective alternative may be available, or the user might have saved money by folding the order into a Subscribe and Save shipment—all without knowing it. “[At Amazon] we do make an effort to ensure that pricing is very apparent, through my experience,” says Daniel Rausch, Amazon’s director of Dash. “On Dash, that gets very tricky.”

And what is Amazon to do? Does it need to say, “Dash will be the most convenient, but possibly not the cheapest, way to get your razors—do you opt in?” Of course that wouldn’t work so well either, but it does point to an interesting possibility: The way a system’s AI works, and just what it’s willing to disclose to a user, could become a competitive selling point in the near future.

In fact, products of the future could be positioned entirely around the varying ethical stances of the algorithms at play—especially when those algorithms make split-second decisions that are completely out of a user’s control. Imagine a driverless car deciding whether to run over a pedestrian standing in the middle of the road, or to save that person’s life by crashing you into a wall.

“From a brand’s point of view, say you’re Tesla. You’re going to say, ‘We look out for human beings as a whole.’ But then you have car brand X that says, ‘We’re going to save you at all costs,’” Fulford says. “All of the sudden, brands have to say, our algorithm will do this in this situation. That will define them ethically as a brand, and that’s going to be a selling point to people. That’s fascinating to me.”

 
