The role that technology plays in our lives has gone beyond anyone’s expectations during the ongoing pandemic. In my own role as a technology venture capitalist, I am lucky enough to see the world through the prism of many startup tech entrepreneurs. I am grateful. It is fascinating to work with entrepreneurs who have ambitious visions, technological insights, and, above all, a sense of daring I never saw in the stodgy corporate world of making quarterly forecasts and saying politically correct things to bosses and underlings.
But there is also a large proportion of overconfident entrepreneurs (and investors backing them) who think hunting Moby Dick is simply a matter of going out to buy tartar sauce for a seafood meal that they are sure will follow the hunt. Such overconfidence has a predictable ending. It is the entrepreneur and investor who end up beached, while the whale, a large and easy target as it seems, slips away into the deep.
The Covid pandemic has changed the prism of tech investing in such a way that it now seems to reveal many more colours than it normally would—more, that is, than the seven colours of a rainbow that form the basic spectrum of white light. The investing world, in both public and private markets, now seems to believe that the application of technology to a problem can somehow deliver a solution that appeared impossible to find just a few months earlier.
This is especially true of startups in the telemedicine arena. Medical payers, providers and equipment manufacturers had long attempted to use telemedicine in an attempt to increase profits and provide care to patients in far-flung areas without easy access to doctors. Despite the existence of the internet, online platforms for video conferencing, and sophisticated peripheral ‘smart’ devices in patients’ homes (or on their wrists), telemedicine never really came into its own before the pandemic. This is because physical interaction between doctors and patients is an integral part of the healing process. Medical procedures like vaccinations and surgeries need to be performed on patients, of course, but even when it comes to regular consultation, healing is a holistic process. It cannot be robotized.
I have written before about the conversations I have had with John Fox, then a professor at the University of Oxford’s department of engineering science. At the time, Fox was an interdisciplinary scientist working on reasoning, decision-making and other theories of natural and artificial cognition. For many years, Fox was a scientist with Cancer Research UK (CRUK), and made major contributions to the prevention, diagnosis and treatment of cancer. He later became chief scientific officer at both OpenClinical and Deontics, the first a not-for-profit foundation supported by CRUK, and the second a startup that is trying to apply advances in artificial intelligence (AI) to the practice of medicine.
Some years ago, Fox said to me that psychologists have known for a long time that human decision-making is flawed, even if sometimes amazingly creative, and overconfidence is an important source of error in routine settings. A large part of the motivation for applying AI to medicine comes from the knowledge that to err is human and that overconfidence is an established cause of clinical mistakes. Overconfidence is a human failing, and not that of a machine; it has a huge influence on our personal and collective successes and failures.
That, however, was an earlier view. We now know that bias can and does creep into AI programs, too. In a recent column, I wrote about Generative Pre-trained Transformer 3 (GPT-3), a new deep-learning technology that holds promise as a programming assistant. It is based on billions of words picked up as part of its ‘learning’ curriculum on the internet. Its creators have been very careful about when and how much of it they will release for general use. While the commercial incentive to create a ‘cheat sheet’ computer programming language for everyone must certainly be large, its creators have been reticent, as they have recognized that a lot of what is said on the internet (especially on social media) is hateful, racist and biased, and so the widespread use of GPT-3 before it is purged of such biases would yield net negative outcomes.
It is with this lens, then, that we must approach telemedicine and also mental health via the internet. There is now a boom in mental health apps and teletherapy. The MIT Technology Review said last June that there had been a 19-fold increase in downloads of such apps even early in the pandemic, and a 14-fold increase in those who said they were downloading the apps to relieve anxiety. It is probably the overconfidence wrought by such a usage jump that has prodded many entrepreneurs and investors to focus on this field. In Tech Review, John Torous, director of digital psychiatry at the Harvard-affiliated Beth Israel Deaconess Medical Center, is cited as saying that these apps may in hindsight mark a turning point, with people increasing their access to mental healthcare, but that when they are used as standalone tools or for single interventions, evidence from meta-analyses shows they are just not as effective. While these apps may be used as adjuncts to therapy, the available evidence suggests that therapy alone is more effective.
There is a time and place for the Ahabs of the world who would hunt Moby Dick. In the end, it was Ahab who lost his mind. The waters of mental health also run deep.
Siddharth Pai is founder of Siana Capital, a venture fund management company focused on deep science and tech in India