From augmenting to automating: the need for an inventive machine patent standard
While not widely appreciated, inventive machines have been autonomously generating patentable inventions for decades. If these machines had instead been natural persons, they would have qualified as legal inventors. Instead, because laws governing inventorship are silent on so-called computer-generated works, the role of machine intelligence in invention has been neglected. This is a problem because failing to provide protections for computer-generated works results in uncertainty for businesses and innovators, and ultimately hinders innovation by failing to optimally incentivise the development of inventive machines.
There is a second problem: machines are playing an ever-increasing role in research and development, and our patent standards do not adequately account for this. To receive a patent, an invention needs to be novel, useful, and inventive. Whether a patent application is inventive, or has an inventive step, is evaluated from the perspective of a hypothetical person having ordinary skill in the art—essentially, an average worker in the field of the invention. If an invention would appear obvious to a skilled person, it cannot receive a patent; if it is nonobvious, it has an inventive step.
AI is already augmenting researchers in a variety of scientific fields. Even without contributing as an inventor, AI can still contribute to an invention, for instance, by performing literature searches, data analysis, and pattern recognition. This makes average workers more sophisticated and knowledgeable than they would otherwise be. Because the skilled person reflects the hypothetical average worker, the augmentation of average workers by AI should raise the bar for the inventive step: the skilled person should be a skilled person augmented by AI. Yet current standards do not necessarily take this augmentation into account, and this will result in too low a bar to patentability.
The problem will get worse as inventive machines transition from augmenting researchers to automating research. Advances in AI are resulting in a proliferation of technologies that can outperform people at certain tasks. In 2017, AlphaGo, the Go-playing program developed by Alphabet's DeepMind, beat the game's world champion. That feat was widely lauded in the AI community because of the sheer complexity of Go: the game has more possible board configurations than there are atoms in the universe. Go was the last traditional board game at which people had been able to outcompete machines. Later in 2017, an improved AI, AlphaGo Zero, defeated AlphaGo 100 games to 0. AlphaGo Zero won after training for just three days, playing only against itself without learning from games played by people.
AI like DeepMind's may soon outperform people at more practical tasks. In December 2018, DeepMind's AlphaFold AI took top honours in the 13th Critical Assessment of protein Structure Prediction (CASP13), a competition for predicting protein structures. Predicting protein structure is an important skill for drug discovery, for example. Indeed, companies like BenevolentAI claim their AI automates a significant portion of the drug discovery and development process.
When average workers start using routinely inventive machines that can contribute as inventors, this will make the average worker capable of invention. That is a problem, because invention is supposed to be exceptional rather than normal. Patenting routine output risks granting unnecessary, and socially costly, patent rights that fail to incentivise innovation. The solution is for the skilled person standard to change to a skilled person using an inventive machine, or simply an inventive machine. An inventive machine standard would emphasise that it is the machine that is engaging in inventive activity, rather than a person.
To obtain the information necessary to implement this test, the Patent Office should establish a new requirement for applicants to disclose when a machine contributes to the conception of an invention, conception being the standard for qualifying as an inventor. Applicants are already required to disclose all human inventors, and failure to do so can render a patent invalid or unenforceable. Similarly, applicants should be required to disclose whether a machine has done the work of a human inventor. This information could be aggregated to determine whether most invention in a field is performed by people or machines. It would also be useful for determining appropriate inventorship, and more broadly for formulating innovation policies.
It might be difficult for a decision-maker to reason about what would be obvious to an inventive machine, but it is already difficult to do this with a hypothetical skilled person. In practice, the test suffers from hindsight bias and bears unfortunate similarities to the Elephant Test—you know one when you see it. An existing vein of critical scholarship has already advocated for the inventive step test to focus less on cognitive factors and more on economic factors, and inventive machines may provide the necessary impetus for this shift. An inventive machine standard could rely on factors such as long-felt but unsolved needs, the failure of others, and reception in the marketplace. Alternatively, the test could focus on whether common AI could reproduce the subject matter of an application with sufficient ease.
However the test is applied, an inventive machine standard would dynamically raise the current benchmark for patentability to keep pace with real-world conditions. Inventive machines will be significantly more intelligent than skilled persons, and also capable of considering more prior art. An inventive machine standard would not prohibit patents, but it would make obtaining them substantially more difficult. Either a person or computer would need to have an unusual insight that inventive machines could not easily recreate, developers would need to create increasingly intelligent computers that could outperform standard machines, or, most likely, invention would be dependent on using specialized, non-public sources of data. The nonobviousness bar will continue to rise as machines inevitably become increasingly sophisticated.
Taken to its logical extreme, and given that there is no obvious limit to how intelligent computers may become, every invention may one day be obvious to commonly used computers. At that point, no more patents would be issued without some radical change to current patentability criteria.
Ryan Abbott is professor of law and health sciences at the University of Surrey. He is a licensed and board certified physician, attorney, and acupuncturist in the United States, as well as a solicitor (non-practising) in England and Wales. Professor Abbott has served as a consultant for international organisations including the World Health Organization and the World Intellectual Property Organization.