Recently, Nicola Maxwell and I published a paper entitled ‘A quantitative metric for research impact using patent citation analytics’ (http://bit.ly/2K8wUwg). The paper proposes a new metric for measuring the impact of academic research. Rather than counting academic citations accruing against a journal article, or counting the number of patent filings made by an academic (both approaches have their deficiencies), the new metric focuses on academic publications that become an obstacle for any third party attempting to obtain a patent right.
Every now and again, an academic paper is raised by a patent examiner as an obstacle to patentability during the patent examination process. The patent examiner will cite the academic paper, arguing that it contains a prior disclosure of the invention. When that academic paper is from, for example, a researcher at the Faculty of Engineering and IT (FEIT) at the University of Technology Sydney (UTS), it says something about the technological merits of that research. It says that the research was commercially relevant; it says that the research contained original and inventive content; it says, therefore, that the research had ‘research impact’ in a technology sense.
To test this newly proposed metric, we took about 22 000 FEIT UTS publications from the last 10 years and looked to see if they had been raised during patent examination – anywhere, ever. We found more than 1200 instances where a UTS published paper was cited during a patent examination process. However, we decided to go further. We wanted to find those cited by patent examiners against a patent family where there was a US patent family member (because US patent rights are arguably the most commercially valuable). We also wanted to focus on those citations that had been raised by patent examiners as novelty-destroying (‘X’-type) or obviousness-destroying (‘Y’-type) – not those raised simply as ‘relevant background art’ (‘A’-type), or added to the body of the patent as references by the drafting patent attorneys. Both X- and Y-type citations effectively restrict the scope of what can be claimed by a patent applicant, or in some cases completely prevent a patent from being obtained.
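The filtering step described above can be sketched in a few lines. The record fields and sample data below are hypothetical illustrations of the approach, not the actual dataset or tooling used in the paper:

```python
# Hedged sketch of the citation-filtering step: keep only examiner
# citations of type 'X' (novelty) or 'Y' (obviousness) that were raised
# against a patent family containing a US family member.
# The record structure and sample records are hypothetical.

citations = [
    {"paper": "P1", "family": "F1", "type": "X", "us_member": True},
    {"paper": "P1", "family": "F1", "type": "A", "us_member": True},   # background art only: excluded
    {"paper": "P2", "family": "F2", "type": "Y", "us_member": False},  # no US family member: excluded
    {"paper": "P3", "family": "F3", "type": "Y", "us_member": True},
]

def eligible(citation):
    """An 'eligible citation' per the methodology described in the text."""
    return citation["type"] in {"X", "Y"} and citation["us_member"]

eligible_citations = [c for c in citations if eligible(c)]
print(len(eligible_citations))  # 2 of the 4 raw citation records survive
```

In the study itself this filtering had to be done largely by hand, because citation type and family membership are not reliably exposed by the available tools.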
Surprisingly, restricting the type of citations considered reduced the number of citation instances from more than 1200 to only 100 (from the more than 22 000 publications). The 93 academic papers that attracted these 100 ‘eligible citations’ were worth focusing on. For example, we noticed one author whose papers kept posing an obstacle to other parties’ patent applications, despite the author not seeming to patent his own work: a possible lost opportunity for the university.
For comparison, we also took FEIT UTS patent rights and performed a similar analysis to see how many of them appeared as obstacles to other parties seeking patent rights. Of the 84 inventions published in the same 10-year period, we found 67 eligible citations using the same methodology. Note well: 84 patent rights generated 67 eligible citations whereas more than 22 000 publications generated only 100.
That is, we found that a patent is roughly 180 times more likely than a journal article to be cited by a patent examiner. To explain this, we argue that patent examiners substantially restrict their searching to the readily available patent databases, which are familiar to them and whose documents are easy to parse from search strings. They do so secure in the knowledge that, relatively speaking, there is far greater inventive prior art to be found in the patent databases (fuelled in part by the proliferation of spurious publication in academia over the last few years). The quantity of data underlying this research is large enough that we may have identified evidence of an inherent flaw in some patent examination processes.
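The likelihood ratio behind that figure can be checked with a quick back-of-the-envelope calculation using the rounded counts quoted above (the paper's exact counts give the ~180 figure; the rounded inputs here land slightly lower):

```python
# Back-of-the-envelope check of the likelihood ratio, using the rounded
# counts quoted in the text ("more than 22 000" publications, 84 patents).
patent_citations, patents = 67, 84
paper_citations, papers = 100, 22_000

patent_rate = patent_citations / patents  # eligible citations per patent
paper_rate = paper_citations / papers     # eligible citations per publication
ratio = patent_rate / paper_rate
print(round(ratio))  # ~175 with these rounded inputs
```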
Overall, the combination of the two publication types (academic papers and patent rights) allowed for the calculation of a PC-index (a patent citation index) using the traditional H-index-type approach. FEIT at UTS had a PC-index of 4. We plan to perform the same analysis on other academic institutions (and even academic journals) to see how this PC-index fares as a relative measure of research impact. Currently, the most-used measurement of research impact is based on technology case studies; this qualitative approach doesn’t necessarily allow unbiased comparison of the performance of different cohorts of academics.
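An H-index-type calculation over per-publication eligible-citation counts can be sketched as follows; the citation counts in the example are hypothetical, not the actual FEIT data:

```python
def pc_index(eligible_citation_counts):
    """H-index-style calculation: the largest h such that h publications
    each attracted at least h eligible examiner citations."""
    counts = sorted(eligible_citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical eligible-citation counts, one entry per publication:
print(pc_index([9, 6, 5, 4, 2, 1, 0]))  # 4: four publications each have >= 4 eligible citations
```

A PC-index of 4 therefore means four publications each attracted at least four eligible examiner citations.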
One key feature of our new metric is that it’s very difficult to game, because one cannot influence whether a patent examiner will cite a research article as significant prior art. It is a matter of fact whether a paper contains significant prior art that inhibits or blocks a party later seeking similar patent rights. In the academic literature, by contrast, gaming does occur: for example, groups of authors informally agree to cite each other in their publications, independent of the merits of such citing, thus inflating their individual citation-count-based metrics.
My co-author, a patent analyst and patent attorney, was deeply surprised at how difficult this research was to undertake, with much of it having to be done manually. Manually processing more than 22 000 items is no mean feat. She was also surprised at the poor quality of available patent citation counting, with many patent citation counts actually being ‘references’ in the patent file wrapper (i.e. not documents actually raised by the patent examiner as substantive prior art). When we took a citation count provided automatically by existing software tools and then manually de-duplicated across patent family members, the citation count fell by 30%. When we further filtered manually to meet our X/Y citation-only requirements, the count dropped by 86% from the original figure. This is a mind-boggling result; according to our data, reported patent citation counts can contain up to 86% noise. Any conclusions (typically called patent citation analysis) drawn from such data must be inherently flawed.