Let’s look at the medical profession: machines are already on par with, or even outperforming, doctors. One study found that an AI program trained to read pathology images could instantly diagnose certain lung cancers with 97% accuracy. A different study found that, in some circumstances, AI produces up to 11% fewer false positives than human experts when reading radiology scans.
During the pandemic, the IBM and MIT team behind the Watson AI system put the technology to work on several different applications:
- Identifying Covid-19 patients at high risk for sepsis.
- Designing proteins to block the virus from merging with human cells.
- Testing the effectiveness of face mask materials.
- Predicting whether already-approved drugs would help fight the virus, and planning for the large-scale manufacture and supply of vaccines.
Although many of these applications are experimental, the results are remarkable. Why, for so many tasks, does AI work better than human beings? Simply because a process like diagnosis is fundamentally about collecting, organizing, and analyzing data, and computers can do all three far better than the human brain can. A seasoned doctor might have observed tens of thousands of patients over a long career and read hundreds of journal articles. An AI program can analyze tens of millions of patients’ records and hundreds of thousands of studies — in minutes, if not seconds. This is why computers now help fly planes and even trade stocks. They can beat world champions at chess, Jeopardy!, and video games. Put simply, AI can, in theory, do complicated analytic tasks better than people — and the more complicated the task, the greater the computer’s advantage.
For now, computers have limits. When the Covid-19 outbreak began, many hoped that AI might find solutions that humans could not. The results were disappointing. Various obstacles got in the way. For one thing, computers need mountains of data to see patterns, and with the novel coronavirus, there was little data at the start. For months afterward, the information remained incomplete. Historical data on other viruses hasn’t been of much use either, because the many differences — in lethality, in how the virus mutates, and so on — are crucial.
Location-tracking data has also failed to live up to its promise. Although some East Asian countries found some success in predicting hotspots and identifying super-spreaders, the technology has its shortfalls. Installing a location-tracking app is optional, and since not everyone does so, the data give only part of the picture. Even in Singapore, where social cohesion and confidence in government are high, by June 2020 only some 30% of the population had downloaded the government’s Covid-19 tracking app. Requiring everyone to supply their health data, as in China, is not an option in most democracies. In any case, it remains a controversial subject. Besides, China, South Korea, and Singapore did not owe their success in fighting Covid-19 to invasive new technology. Instead, what made the difference were the hallmarks of proper pandemic response: fast, widespread testing and old-fashioned contact tracing, conducted through in-person interviews.
The stumbling blocks that AI has faced in the fight against the novel coronavirus do not reflect some underlying flaw in the technology; they reveal its limits in a particular situation where much is unclear and useful data is hard to come by. With time, there will be more and better data about the disease and innovative ways to use it — from mass thermal scanning for temperatures to facial recognition, both of which could be used to quickly detect potential illness in large crowds in public spaces. It is already possible for AI to predict, based on recognized patterns, which patients will worsen and which will improve. There is also the ongoing use of AI in path-breaking medical research — in mapping the three-dimensional structure of proteins, for example — which will continue to yield impressive results that could help in developing treatments and vaccines. And of course, as research on Covid-19 grows, AI is already helping scientists make sense of it all, analyzing the thousands of new studies produced each week around the world far more efficiently than humans could. All in all, though, the experience of this pandemic has highlighted not just the strengths but also the limitations of AI, at least for now.
I believe the most lasting effect of Covid-19 on AI will have less to do with any particular medical breakthrough than with the rise of robots. More robots in more settings will allow the economy to function while reducing the dangers of infection. A study published by MIT Technology Review finds that between 32 and 50 million US jobs could be frequently assisted by technology to reduce the health risks posed by human interaction and safeguard productivity in a time of crisis. Some of those jobs, like cashiers, are likely candidates for outright replacement. Others, like cooks, are more complicated — but there are already robots that can do that work effectively, though certainly not deliciously.
And the more robots there are, the more they can tap into artificial intelligence to boost their productivity. Just as software becomes the controlling factor once you attach it to a machine, artificial intelligence, once introduced into any system, gradually becomes a force multiplier. We are on track to introduce AI into most of our institutions and organizations for the simple reason that it makes them work better. But that will surely mean fewer humans are needed, because AI will make things much more efficient — for blue-collar and white-collar professions alike. You don’t need as many paralegals or young lawyers if a machine can scan documents for cases, facts, and patterns.
And we certainly won’t need as many drivers if computers can control cars, buses, and trucks. Autonomous driving will be a massive boon to safety: over a million people worldwide die every year in roadway accidents, and according to the US Department of Transportation, some 94% of crashes in the US occur because of driver error. But in a driverless world, what happens to the almost 4 million Americans — mostly men, mostly without a college degree — who work as drivers? For now, their career prospects are on the upswing as Amazon and other digital retailers boom. In the long term, even if drivers don’t lose their jobs outright, they will lose their ability to command a livable wage, because their jobs will be valued less. Computers are quickly shrinking the human role down to the last mile. Autopilot already flies many commercial planes much of the time. AI-driven long-haul trucking is already being tested on public roads, even as local delivery vans and workers still handle the final leg. And even that limited role may fade as AI drones increasingly take over the “last mile” problem. AI may not always produce unemployment, and its effects may play out on a longer time horizon, decades from now. But it will be the game-changer of our lifetimes.
Discussions about the future of work should recognize that the future is already with us. Philosophers used to theorize about how to keep people afloat once technology replaced a critical mass of jobs. Now Covid-19 has forced countries to experiment with something like a near-universal basic income. In the US, this idea went mainstream in a matter of months — no longer just the visionary quest of the underdog presidential candidate Andrew Yang but a proposal that, in a limited form, was passed by Congress to stave off economic disaster. During the pandemic, governments concluded that people who could not earn money through no fault of their own deserved to be paid even while not working. Further down the line, could the state decide that people forced out of work by AI similarly deserve to be compensated?
In his 1930 essay “Economic Possibilities for Our Grandchildren,” the economist John Maynard Keynes considered this exact question. He looked forward to a world of fifteen-hour workweeks made possible by technology. But if or when such a world materializes, we will need to find ways to give people things to do. That could involve creating new jobs in various fields, from education to public works projects to park and wilderness maintenance — just as FDR’s celebrated Works Progress Administration and Civilian Conservation Corps hired millions of Americans to expand infrastructure and beautify the country. Some of these jobs would involve work for work’s sake. As Keynes wrote, “We shall endeavor to spread the bread thin on the butter — to make what work there is still to be done to be as widely shared as possible.”
A full-color example of this future is George Jetson of the 1960s cartoon show. George’s job at Spacely Space Sprockets, Inc., is to push a few buttons three hours a day, three days a week. Everything else is automated. But it is still a job, and it gives him and his family the contours of a work life and a social life that would have been more or less recognizable to someone in the 1960s. That’s one vision of our automated, digital future — one in which the center more or less holds. Patterns of life are readjusted but not destroyed. You can see early examples of this possible world in the Finnish prime minister’s proposal to shorten the workweek. You see it in the flexible jobs that characterize the gig economy, such as driving for Uber or DoorDash, where workers can choose their hours. You see it in the ever-greater number of hours people spend in the office futzing around on social media. And you see it in the rise of what the anthropologist David Graeber colorfully calls “BS jobs.” He describes several types, including “box tickers,” who generate lots of paperwork to suggest that things are happening when they aren’t, and “taskmasters,” who manage people who don’t need management.
Keynes was right: a big problem with technological revolutions is that, with so much of the work increasingly done by technology, humans will have to find a new sense of purpose. Work has historically given human beings, especially men, an identity, a sense of accomplishment, and dignity. These are not irrelevant attributes. That’s why I have always found the idea of a universal basic income unsatisfying, preferring instead the expansion of a program like the Earned Income Tax Credit, which effectively tops up the wages of low-income workers. It incentivizes work but guards against immiseration. It’s an idea that has attracted support from the far Left as well as from libertarians. I’m convinced it is not as popular as other, less effective policies — like raising the minimum wage — only because it is harder to explain and less potent symbolically. Expanding it substantially, as we should, would be very expensive. But if we recognize the scale of this problem — potentially permanent mass unemployment or underemployment — it seems money well spent.
Keynes also worried that free time would become a problem with the decline of work, because people are not good at leisure. He saw in the passivity of much of the aristocracy, which already faced this problem, a gloomy indication of what might eventually come to the broader public. In his novel Machines Like Me, Ian McEwan reflects on this “problem of leisure,” describing humanity in an AI-run world:
We could become slaves of time without purpose. Then what? A general renaissance, a liberation into love, friendship and philosophy, art and science, nature worship, sports and hobbies, invention, and the pursuit of meaning? But genteel recreation wouldn’t be for everyone. Violent crime had its attractions too, so did bare-knuckle cage-fighting. VR pornography, gambling, drink and drugs, even boredom and depression. We wouldn’t be in control of our choices.
This scenario is the logical endpoint of the rise of robots and AI. Automation would leave less work for humans to do, but new jobs would still be generated. For those unable to find good work, government assistance would expand significantly. There would also be more time, and more technological access, to seek fulfillment in recreation and leisure. People would naturally adapt to this new world differently; some would feel liberated, others trapped. But a darker alternative future is one in which these trends deepen yet the government does not respond with any large-scale program. Inequality worsens; more jobs disappear, real wages stagnate, and most people’s quality of life falls. This is a future in which wealth moves into the hands of a rich few while everyone else is left behind, the worst-off crippled by alcoholism, drug addiction, and suicide — and the demand for populism increases. We are currently in the foothills of these futures, and it is unclear which one lies ahead.
AI-powered computers are already black boxes: we know that they get to the right answer, but we don’t know how or why. What role does that leave for human judgment? Henry Kissinger has asked whether the rise of AI will mean the end of the Enlightenment. That eighteenth-century movement elevated human reasoning above age-old superstition, dogma, and worship. Immanuel Kant called the Enlightenment “man’s emergence from his self-imposed immaturity.” Humanity had to grow up — we had to understand the world ourselves. But if AI produces better answers than we can, without revealing its logic, we will be going back to our species’ childhood and relying on faith. As was once said of God, we will say that AI moves in a mysterious way, its wonders to perform. Perhaps the period from Gutenberg to AlphaGo will prove to be the exception, a relatively short era in history when humans believed they were in control. Before that, for millennia, they saw themselves as small cogs in a vast system they did not fully comprehend, subject to the laws of God and nature. The AI age could return us to a similarly humble role. This time, however, humans may work hand in hand with a higher intelligence, not subservient to it but not entirely above it either, in ways that are a more accurate reflection of our real place in this vast, unfathomable universe.
It is worth keeping in mind that, alongside the AI revolution, we are witnessing another one likely to have equally transformational effects — the bioengineering revolution. To put it simply, we are getting better at creating better human beings — more robust, healthier, and longer-living. With gene selection, parents can already choose fertilized eggs that are free of known genetic diseases. (Many fear that soon they will also be able to select for designer babies: blond, blue-eyed, and male.) The scholar Yuval Noah Harari argues that, for all the social, political, and economic changes over the millennia, human beings until now have not changed much physically or mentally. The combination of these twin revolutions, in biology and computing, will allow human beings to expand their physical and mental capacities. The result, he says, will be the creation of a god-like superman: Homo Deus.
Perhaps that is what lies in store for us. The future of AI and biotechnology is the subject of great debate, beyond the scope of this piece and my knowledge. I believe we have a long way to go before we reach truly general intelligence in a machine — one that can, for example, not just solve a scientific problem but grasp the underlying logic of innovation, the very notion of science itself. Could it invent new modes of inquiry and new fields of knowledge, as humans have done repeatedly? In any event, one thing seems clear: so far, the effect of this technological revolution has been not so much to replace humans as to refocus them. People who work in hospitals, predominantly in the developing world, that have deployed AI extensively to make up for shortages of doctors point out that the machines’ superior ability to make diagnoses has allowed doctors and nurses to focus on patient care. These professionals are now more deeply engaged in helping patients understand their conditions, ensuring that they take their medicines, and convincing them to change their diets and habits. They also act as coaches, providing the moral and psychological support that is key to recovery. In many ways, these are more essentially human tasks than purely analytic ones like reading X-rays or interpreting lab results. This development represents a new division of labor, with machines and humans each doing what they do best.
The pandemic has shown that these technological revolutions are further along than we might have thought — but also that digital life can feel cramped, a poor simulacrum of the real world. For many people, these shifts will be scary. Some jobs will go away, but overall productivity will rise, generating greater wealth that could help everyone; the quality of life could improve for all. There are real concerns about privacy, the handling of data, and the government’s role in regulating companies, and itself, in this domain. But these are not unsolvable problems; we can have the benefits of digital life and still protect our privacy. And if we take care as we develop the rules around the AI and bioengineering revolutions, we will not lose our humanity. Indeed, we could enhance it.
People worry that as AI becomes more highly developed, we will rely on our computers for so much that we will end up thinking of them as friends and become unable to function without them. But already my phone can give me more information than any human I know. It can solve complex tasks in a nanosecond. It can entertain me with content from across time and space. And yet I have never mistaken it for a friend. The smarter a machine becomes at calculating data and providing answers, the more it forces us to think about what is uniquely human about us, beyond our ability to reason. Intelligent machines might make us prize even more our human compassion, creativity, humor, unpredictability, passion, and intimacy. This is not such a strange thought: for much of history, humans were praised for many qualities other than their power to calculate — bravery, loyalty, generosity, faith, love. The movement to digital life is broad, fast, and real. But perhaps one of its most profound consequences will be to make us appreciate the things in us that are most human.
*This essay is based on Fareed Zakaria’s book Ten Lessons for a Post-Pandemic World.
Learn more about ReadyAI here.