We exchange a huge amount of traceable information in our global economy. This was true even before the arrival of COVID-19. What’s more, we already know the information we generate and share can be used in AI innovations to track the virus. The Fred Hutchinson Cancer Research Center matched the virus genome collected from a Seattle teenager to one collected six weeks earlier from a woman who had returned from China.
So it can be done with AI. But can it be done in countries obsessed with data privacy?
In the shadow of coronavirus, the issue of data privacy is no longer convenience vs. anonymity. It’s no longer about missing out on a targeted ad for something we might want to buy or letting our friends see whose concert we’re at. What we’re facing now, as a global society, is the question of what human price we are willing to pay to protect our data privacy as individuals.
Leveraging technology to compensate for the economic and demographic shifts caused by coronavirus will require some radical cultural changes.
To Share or Not to Share: That Is the Data Privacy Question
To put this issue into perspective, consider two countries at opposite ends of the data sharing spectrum: China and Germany.
In China, everyone is tracked. Under the social credit system, citizens earn points for things like buying healthy food and lose them for things like not following government-sanctioned media.
Meanwhile, in Germany, people are sending mass requests for Google Maps to blur images of their homes in aerial view. As a result, Google Street View is much more limited there than in other countries. Data privacy is coveted.
China took a lot of heat for overstepping privacy measures before COVID-19. But where are they now?
Byung-Chul Han wrote in El País that someone leaving the Beijing metro could have their temperature measured by a device produced by Chinese AI champion MEGVII. The temperature would be evaluated for COVID-19 suspicion and shared with all the people traveling in the same metro car. With a single reading of someone’s health information, a potential hot spot could be identified, even neutralized, in real time.
This is the trade-off for data collection with no privacy.
Outside of China, particularly in the United States and European Union, personal freedoms are keeping huge amounts of sensitive data out of reach. Smartphone geotracking features combined with AI innovations could be a powerful instrument against irresponsibility in times like these. On March 21, the newspaper La Vanguardia reported that eight people in Barcelona were arrested for holding a drug-fueled, shall we say "adult party" during the quarantine emergency. Simple access to geotracking data would make it easy to break up large gatherings, including those with more innocent intentions.
Artificial intelligence needs data. But when we refuse to share data even on a case-by-case basis, choosing absolute refusal under data privacy laws instead, we limit what deep learning can do. This means we must rely on simpler technology assistance, like the chatbot brainchild of the World Health Organization and WhatsApp: text "hi" to +41 79 893 1892, and you'll receive a text from the WHO with the latest COVID-19 information.
Can We Trust the Machines?
The choice with AI innovations is data privacy or trust: trust in the machines and in the humans behind them, neither of which is fully trustworthy.
YouTube stated in its blog on March 16 that it “will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.”
This action will privilege outcome over accuracy—a bold move in favor of machine learning. But will it work?
Jaron Lanier and Glen Weyl make a very good point in Wired when they note the rising confusion about AI in its transformation from a technology to an ideology. Can institutions and companies win on their own terms in the long run against top-down technocracies like China? How long will compliance requirements and data privacy even be a relevant topic of discussion?
More to the point, no algorithm could have predicted COVID-19. The questions about privacy posed through the end of 2019 came in a time of much lower stakes. Now that we're here, with human lives on the line, shouldn't we let AI do all the good it can against the pandemic and learn to forgive its learning curve?
A new attitude towards machine inaccuracy could be one of the disruptive and unforeseen positives to come out of the COVID-19 crisis.
The real black swan effect, however, could be human beings finally agreeing to standardize and cooperate against a common enemy, data privacy be damned. The doorway to this massive cultural shift would almost certainly be at the big data and AI innovation level, where shared efforts are becoming the new normal.
There's even a chance this cooperation could have positive social side effects. After weeks, even months in quarantine, people may come to value face-to-face contact, going to work, and enjoying the outdoors more than ever before.
The only thing standing in our way is a willingness to share our data so we can bring this pandemic to an end once and for all.