How can we evaluate the performance of innovation intermediaries?

A short blog post by Dr Federica Rossi (CIMR Deputy Director) about recent work on innovation policy by Annalisa Caloffi, Riccardo Righi, Federica Rossi and Margherita Russo.

Also available on the CIMR website.

Innovation intermediaries are organisations that support firm-level and collaborative innovation. They provide a range of knowledge-intensive services, including networking, knowledge and technology mapping, and innovation consultancy. Examples of innovation intermediaries include the Fraunhofer Institutes in Germany, the Pôles de Compétitivité in France, and the Technology Catapults in the UK, among many others. Since many of these organisations receive a substantial share of their funds from the public purse, it is important to evaluate whether they are successful in their mission to promote innovation and technology transfer, and whether they perform a unique role that private operators could not fulfil.

However, innovation intermediaries are very difficult to evaluate, since, by definition, they operate in the part of the economy that is most dynamic and uncertain.

In two studies carried out using empirical evidence from regional innovation intermediaries in Italy, we analyse in some detail the problem of evaluating the performance of these organisations. We argue that one of the main difficulties is to identify measures of successful performance that are closely tied to the policy objectives that these intermediaries are designed to achieve, rather than relying on established indicators just because they are simple and convenient.

In our first paper (Russo et al., 2019), we show that setting performance indicators that are misaligned with the policy objectives can have perverse effects. For example, the innovation intermediaries we studied were evaluated on the basis of the number of companies they recruited into their networks. However, this induced them to build their networks of members by relying extensively on their pre-existing connections, rather than also reaching out to the many weaker firms in their region. Another evaluation criterion was the amount and monetary value of the knowledge-intensive services that the intermediaries provided. But many intermediaries ended up providing services to companies that were already accustomed to buying such knowledge-intensive services on the market: in other words, the intermediaries simply crowded out private service providers. Another problem was that some types of activities were not evaluated at all, and so the intermediaries performed them only to a limited extent.

So, what steps can be taken in order to evaluate the performance of innovation intermediaries more effectively?

In our second study (Russo et al., 2018), we argue that the key problem when defining performance targets and performance indicators is to align such indicators closely with the policy’s objectives. First, performance evaluation should cover the full range of intermediaries’ activities and pay particular attention to those that are instrumental in addressing the key objectives of the policy. This avoids omitting important activities from the evaluation just because they are less visible or harder to measure. Second, the objectives of the intermediaries need to be spelled out clearly at the outset. The systems failure framework provides a very useful conceptual model for understanding the nature of the problems that intermediaries are designed to address. By clarifying precisely what kinds of system failure the intermediaries should remedy (for example: interaction failures, institutional failures, managerial failures, infrastructure failures…), it becomes easier to structure the policy objectives and, in turn, to define appropriate performance indicators.

For example, the Italian intermediaries we studied were designed to address the problem that many of the small and medium-sized companies in the region failed to innovate – usually due to a lack of economic resources but, more importantly, due to capability failures, particularly a lack of managerial leadership in driving innovation. The objective of the policy was therefore to expand the number of firms accessing high value-added knowledge-intensive services, which would have enabled them to boost their innovation performance, mainly by allowing them to implement process and organisational innovation. The problem is that this policy objective was not clearly specified. Had it been made clearer and more explicit, it would have been easier to define more suitable performance indicators. For example, it would have been apparent that intermediaries should have been evaluated on the basis of which companies they provided services to (e.g. did they help weaker innovators, or did they just work with companies that were already innovative, thus failing to achieve their objectives?). Moreover, rather than relying on simple output indicators, outcome indicators could have been used to measure whether companies had changed their behaviour (e.g. greater networking activity, changes in the types of partners they interacted with, changes in the type of innovation processes they performed) and their performance (more innovation, greater profitability and so on) thanks to the intermediaries’ activities.

  • Russo, M., Caloffi, A., Rossi, F., Righi, R. (2019). Designing performance-based incentives for innovation intermediaries: evidence from regional innovation poles, Science and Public Policy, 46(1): 1–12.
  • Russo, M., Caloffi, A., Rossi, F., Righi, R. (2018). Innovation intermediaries as a response to system failures: creating the right incentives, In Bernhard, I., Karlsson, C. and Gråsjö, U. (eds) Geography, Open Innovation and Entrepreneurship, Cheltenham: Edward Elgar.