At the 2021 Australian and US Open tennis championships, all the line judges were replaced by machines. This was, in many ways, inevitable. Not only are these machines far more accurate than any human at calling balls in or out, but they can also be programmed to make their calls in a human-like voice, so as not to disorient the players. It is a little eerie, the disembodied shriek of “Out!” coming from nowhere on the court (at the Australian Open the machines are programmed to speak with an Australian accent). But it is far less irritating than the delays required by challenging incorrect calls, and far more reliable. It takes very little getting used to.
In the slew of reports published in the 2010s looking to identify which jobs were most at risk of being automated out of existence, sports officials usually ranked very high up the list (the best known of these studies, by Carl Benedikt Frey and Michael Osborne in 2017, put a 98% probability on sports officiating being phased out by computers within 20 years). Here, after all, is a human enterprise where the single most important qualification is an ability to get the answer right. In or out? Ball or strike? Fair or foul? These are decisions that need to be underpinned by accurate intelligence. The technology does not even have to be state-of-the-art to produce far better answers than humans can. Hawk-Eye systems have been outperforming human eyesight for nearly 20 years. They were first officially adopted for tennis line calls in 2006, for reviewing cricket umpires' decisions in 2009 and, more recently, for ruling on football offsides.
Yet despite the arrival of this smart technology, there are now many more people employed in sports officiating than ever before. Wimbledon has decided to retain its line judges, in part for aesthetic reasons. As the only major tournament played on grass, Wimbledon treats looking good against a green backdrop as a key part of its money-making machine (hence the requirement that the players wear nothing but white). The line judges are mainly there for their uniforms. In 2022, as for the previous 17 years, these were designed by Ralph Lauren.
Cricket matches, which traditionally featured just two umpires, currently have three to manage the complex demands of the technology, plus a referee to monitor the players’ behaviour, which still involves a large element of human discretion (who’s to say what is meant by “upholding the spirit of the game”?). Football matches have up to five officials, plus the large teams of screen-watchers needed to interpret the replays provided by the video assistant referee system (VAR). The NBA Replay Center at Secaucus, New Jersey, which employs 25 people full time, along with a rota of regular match officials, would not look out of place at Nasa.
Efficiency – even accuracy – turns out not to be the main requirement of the organisations that employ people to give decisions during sports games. They are also highly sensitive to appearance, which includes a wish to keep their sport looking and feeling like it’s still a human-centred enterprise. Smart technology can do many things, but in the absence of convincingly humanoid robots, it can’t really do that. So actual people are required to stand between the machines and those on the receiving end of their judgments. The result is more work all round.
It remains surprisingly hard to know which jobs are likely to go with the arrival of AI, though we can be confident that the scope and character of employment will change. Many studies of the risks of automation, like Frey and Osborne’s, choose to treat work as a series of tasks, which can then be measured by their suitability to be undertaken by machines. This assumes that the barriers in the way of human replacement are simply the current limitations of the technology, which for now include a continuing inability to exhibit a range of human-centric cognitive and mobility skills. Robots are good at repetitive tasks, even when these are highly complex, along with ever more voluminous data crunching, but they often struggle with simple forms of human interaction. If your job involves creativity, aesthetic judgment, truly fluid movement or social sensitivity, then on these measures you are likely to be safe for now. Robots can dance, but they still need human choreographers to look convincing.
However, a task is not a job, not in the modern meaning of the term. Jobs are positions created by organisations that have their own requirements. It is too simplistic to think of these requirements as nothing more than the efficient performance of a task, even in the case of heartless, money-grubbing corporations. It also matters whether the job makes sense in relation to the corporation’s needs. Most jobs continue to require people to fill them, because people are the human element of the impersonal organisations that provide the jobs. Otherwise, the machines would appear to be running the show, which is a dangerous look in a world where people still count.
How things look isn’t everything. There are significant parts of every organisation where appearance doesn’t matter so much, in the backrooms and maybe even the boardrooms that the public never gets to see. Behind-the-scenes technical knowledge that underpins the performance of public-facing tasks is likely to be an increasingly precarious basis for reliable employment. This is true of many professions, including accountancy, consultancy and the law. There will still be lots of work for the people who deal with people. But the business of gathering data, processing information and searching for precedents can now more reliably be done by machines. The people who used to undertake this work, especially those in entry-level jobs such as clerks, administrative assistants and paralegals, might not be OK.
The future of employment necessarily involves a complex set of relationships, which are far more likely to alter what we understand by work than to abolish it. There are the relationships between people and machines, some of which may turn out to be zero-sum (more work for them means less work for us), but most of which are still liable to be mutual. Doctors who use technology to diagnose cancers will need to hone other skills – including better ways of communicating what the machines are saying – but acquiring those skills will be far easier than teaching the technology the skills the doctors already possess. Then there is all the work for the managers, lawyers and ethicists who will need to decide on whether the doctor–machine relationship is proceeding as hoped, and what to do when it goes wrong. In the age of AI, there will be no shortage of work in hospitals.
In the world of work, it’s still people, organisations, machines – in that order. Might the order change? Could the organisations come to prioritise the machines over the people, or the machines come to take the most consequential decisions on behalf of the organisations?
History offers a partial guide to what might happen. Worries about automation displacing human workers are as old as the idea of the job itself. The Industrial Revolution disrupted many kinds of labour – especially on the land – and undid entire ways of life. The transition was grim for those who had to switch from one mode of subsistence existence to another. Yet the end result was many more jobs, not fewer. Factories brought in machines to do faster and more reliably what humans used to do or could never do at all; at the same time, factories were where the new jobs appeared, involving the performance of tasks that were never required before the coming of the machines. This pattern has repeated itself time and again: new technology displaces familiar forms of work, causing massively painful disruption. It is little consolation to the people who lose their jobs to be told that soon enough there will be entirely new ways of earning a living. But there will.
It does not always happen, however, that new tasks are found for the previous generation of workhorses. This was most notably the case with actual workhorses. Throughout the 19th century, fast-growing industrial production was heavily dependent on the labour of horses to transport people and goods, above all in the United States. From this labour a huge number and variety of jobs were created for the people needed to sustain horse-powered enterprises.
Just 50 years later the horse-powered economy had almost entirely disappeared from urban areas, if not yet from rural ones. Fifty years after that it was more or less gone throughout the whole country. But an enormous number of new jobs were created to service the needs of the automobile. By 1950 the auto industry had generated 7m or more net new jobs, which accounted at that point for 11% of the total American workforce.
What weren’t created were many new jobs for the horses. Their skill set – pulling, carrying, not complaining about it – turned out to be insufficiently adaptable for the new era. Eventually there was nothing much left for them to do, outside the leisure industry. When the first Model T Ford (22 horsepower) rolled off the production line in 1908 there were around 25 million horses in the United States, alongside 90 million people. When the first Ford Falcon (260 horsepower) appeared in 1960, there were just 3 million horses left – and nearly 180 million people. The horse as worker was effectively obsolete.
Might humans go the way of the horse? Our skill set too could turn out to be insufficiently adaptable, once machines are able to do most of the things we can, at many, many times the speed. The “human power” of deep learning technology – for instance, 3,000 years of chess knowledge picked up by AlphaZero in less than 24 hours (human power: 1,000,000+) – is vastly greater than the horsepower of even the swiftest automobiles. It is true that smart machines also lack adaptability, above all the ability to switch between entirely different tasks. But cars lack plenty of kinds of adaptability compared with horses: they can’t step over obstacles, or move sideways, or swim through streams. That didn’t stop us building an entire economy around them, and road networks suited to all their limitations, sacrificing many millions of our own lives over the following century to inevitable road accidents in the process.
Where humans most clearly differ from horses, however, is that we are not uncomplaining. We are the opposite. We have agency, expressed through our ability to communicate our choices. When the horses were phased out, it was by organisations in which they did not have a say. We do have a say in the organisations that might choose to phase us out. We had better use it.
One reason we should is that these organisations have agency too. Without our input they will make their own choices. And smart machines, also unlike horses, can make their own choices as well. It is fanciful to suppose that they would choose to phase us out once we did not serve their purposes any more. They don’t have that kind of agency – the human kind. But their ability to perform certain tasks better than we can is sufficient to allow them to shape how we live if we choose to let them. Just as cars came to shape how we live once we chose to let them.
It was never up to the horses. It was not up to the cars. It is not even up to the new breed of self-driving vehicles. It is still up to us, our states and our corporations.
What makes a job different from other kinds of work is its relationship to the passing of time. When work comprises a task or a series of tasks, time is measured by how long the task takes to complete, and by how long its results last. Often there is a connection between the two, but not necessarily: the books that take the longest to write are by no means the ones that are sure to endure; the songs that take the longest to compose are not always the ones that people want to sing. Hallelujah by Leonard Cohen, which is among the most covered songs of the past 50 years, took 10 years and endless revisions to get right, but Forever Young by Bob Dylan, which has been re-recorded almost as often, was knocked out in a single sitting (as were many of Dylan’s songs).
For Hannah Arendt, what distinguished work, and what made it potentially so satisfying, was this possibility of durability: things could be built that might long outlast their makers. These artefacts could be anything from books and songs to tables and chairs to states and constitutions.
At the same time, many jobs are longer lasting than the tasks they require. Being president is a single job with many responsibilities, some of which demand attention only from one day to the next. The job of a furniture maker could last an adult lifetime, during which many hundreds or even thousands of tables and chairs might be built. What gives jobs their durability is that the organisations that generate them have sufficient longevity of their own.
But often the price of job security is drudgery or repetition. Doing the same job for many years can be boring. That is why organisations try to offer progression as well as stability. A career is more than just a job – it usually involves changing roles within a given field, and when appropriate changing organisation too. Being a banker is a career; working for a bank is a job; devising a banking product is a task; closing a deal is an action. Yet being a banker also means running the serious risk of getting fired. It is potentially more rewarding than being a civil servant – in financial terms anyway – but it is less reliable.
These trade-offs – between job security and variety, between risk and reward – have been familiar throughout the history of modern employment. They echo the wider trade-offs between the personal and the impersonal, the human and the artificial, that define the modern age. Where the balance is struck varies with time and place. During much of the 20th century – the great age of the career, and indeed of “careers advice” in schools – large corporations, along with the state, could offer solid prospects for career progression. It was possible to remain in the same organisation for a working lifetime and still have a range of satisfying working experiences.
The 21st century is different. Many large organisations have relatively few employees compared to their 20th-century equivalents. The rapid proliferation of smaller, shorter-lived companies also means that longevity is not what it was. In the age of the startup, getting a job, even (and perhaps especially) a highly paid one, doesn’t guarantee much security. Jobs are shorter lived, and as a result careers are far more fragmented. For anyone first entering the workplace in the third decade of the 21st century, it makes little sense to talk in terms of “careers advice” at all. The experience of work is far more likely to involve a portfolio of different occupations, some inevitably undertaken at the same time. A single person might have many jobs, but many jobs are unlikely to make a single career.
The rise of smart technology has a lot to do with this. Some of it is simple uncertainty – trying to picture the arc of a working life is almost impossible when the rate of change is so rapid. Apocalyptic warnings about the impending hollowing out of the professions make training to be a lawyer, or an accountant, feel far riskier than it once did. That doesn’t mean that people have stopped training to be lawyers – the number of enrolments at US law schools is continuing to grow, as is the number of schools offering law degrees. But there is a widespread expectation that this will result in many more qualified lawyers than there are legal jobs, let alone legal careers. The hope must be that knowing the law will still provide good training for the increasing variety of human-oriented tasks that a portfolio career might require, with or without machines to do the heavy lifting.
But even in the shorter term, new technology has changed the relationship between careers, jobs and tasks. Performing tasks is what machines are good at. The better they get at it, the more work becomes task-oriented. In many ways, talking about the prospect of machines taking people’s jobs is misleading, because once machines do the work these are no longer jobs. Machines don’t require job security, any more than they require the other appurtenances of modern employment regimes: holidays, healthcare, positive feedback, redundancy pay. Jobs are what humans do.
The potential upside of the AI revolution is enormous. It is not hard to see how these systems could be deployed to make human beings vastly better off, by liberating us from drudgery, sparing us from disease, transporting us safely and stimulating us endlessly. The biggest boosters of the new generation of thinking machines promise what would until very recently have seemed impossible: lifespans extended by hundreds of years, telepathic communication, an exponential explosion of creativity and scientific discovery. It all seems unlikely but, given the current rate of progress, who’s to say they are wrong?
At the same time, it is very easy to see the looming downsides, including the real risk of catastrophe. Even if we can work out what to do with our spare time, how to distribute these new resources equitably and whether we really want to know what everyone else is thinking, there is still the chance that we will lose control of the intelligent systems we have built. They are meant to work for us, but already it is possible to suspect that we will end up working for them. If they become much smarter than we are, will they still want to do our bidding? Will they even care about us at all? After all, these are just machines. For now, and probably for ever, they are going to lack a conscience, a heart, a soul. We built them to expand our horizons, but if we cannot keep them tethered to a human-centred perspective, it may be the last thing we do.