Abstract: This study introduces an unsupervised machine learning approach to predict Technology Readiness Levels (TRLs) using bibliometric data. Traditional TRL assessments often depend on expert opinions, which can be subjective and resource intensive. By analysing metrics such as publication counts, patent filings, and grant funding, the proposed model classifies technologies into low, medium, and high readiness categories. Notably, publication-related metrics emerged as the strongest predictors, accounting for over 60% of the model's predictive power. Various unsupervised machine learning models were applied during the study, and among them, the MDBSCAN model achieved the highest accuracy of 84.9%. This data-driven methodology offers a scalable and objective alternative to conventional TRL assessments, enhancing decision-making in research and development management.
Keywords: Technology Readiness Levels; Unsupervised Machine Learning; Bibliometric Data; Technology Maturity Assessment; Publication Metrics; Patent Metrics; Grant Funding Metrics; Innovation Forecasting; Data-Driven Decision Making; Research and Development Management
1. Introduction
The accurate and timely assessment of technological maturity is a critical challenge for organizations across industries and research domains. Effective technology readiness assessment enables informed decision-making in R&D prioritization, investment strategies, and policy formulation. However, traditional methods for evaluating technology readiness levels (TRLs) are often limited by subjectivity, high costs, and scalability issues (Ernst, 2002).
The Technology Readiness Level (TRL) framework, initially developed by NASA in the 1970s, provides a systematic approach to assessing the maturity of technologies, categorizing them on a scale from TRL 1 (basic research) to TRL 9 (commercialization) (Martínez-Plumed et al., 2021). While the TRL framework has been widely adopted, conventional approaches to TRL assessment continue to rely heavily on expert opinions, which can introduce biases and inconsistencies.
In an era characterized by rapid technological advancement and increasing complexity, there is a pressing need for more objective and data-driven approaches to forecast technological readiness (Gao et al., 2013). Recent advances in bibliometrics, machine learning, and data analytics offer promising opportunities to automate TRL prediction and enhance the efficiency and scalability of technology assessment (Kostoff et al., 2004).
This study addresses the limitations of traditional TRL assessment methods by proposing an unsupervised machine learning framework that leverages comprehensive bibliometric data to predict technology readiness levels. By integrating insights from publications, patents, grants, and clinical trials, our approach aims to provide a more objective, data-driven assessment of technological maturity that can be applied across diverse domains.
1.1 Research Objectives and Significance
The key objectives of this study are:
1. To develop an unsupervised learning model that can accurately classify technologies into low, medium, and high TRL categories based solely on bibliometric indicators.
2. To identify which bibliometric factors are most predictive of a technology's readiness level.
3. To compare the performance of various unsupervised learning models for TRL assessment.
4. To provide practical recommendations for R&D managers, investors, and policymakers on leveraging data-driven insights to improve technology investment decisions and innovation management practices.
The significance of this research lies in its potential to transform technology readiness assessment by providing a more objective, scalable, and efficient methodology. Such an approach could dramatically improve decision-making in R&D management, technology investment, and innovation policy by enabling more systematic evaluation of emerging technologies.
Furthermore, the study contributes to the growing body of literature on the application of machine learning and data analytics to innovation management. By demonstrating the feasibility and effectiveness of unsupervised learning for TRL prediction, this research opens new avenues for leveraging the vast amounts of data available on scientific publications, patents, and grants to gain insights into technological trends and assess technology readiness.
2. Literature Review
2.1 Technology Readiness Levels: Concept and Assessment Methods
Technology Readiness Levels (TRLs) represent a systematic measurement system that supports assessments of the maturity of technologies and the consistent comparison of maturity between different types of technologies (Martínez-Plumed et al., 2021). The TRL scale, originally developed by NASA, consists of nine levels, with TRL 1 representing the lowest level of technology readiness (basic principles observed) and TRL 9 representing the highest (actual system proven in operational environment) (Mankins, 1995).
Traditional methods for assessing TRLs have primarily relied on expert judgment. The Delphi method, involving iterative rounds of expert elicitation, has been widely used but faces limitations in scalability and potential bias (Dalkey & Helmer, 1963). As Dalkey and Helmer noted in their 1963 study, while expert opinions provide valuable insights, they are inherently subjective and may lead to inconsistent evaluations across different assessors or technological domains.
Expert-based methods, such as the Delphi technique and the Analytic Hierarchy Process (AHP), have been the cornerstone of TRL assessment for many years (Saaty, 1980). These methods involve soliciting opinions from subject matter experts and aggregating their judgments to arrive at a consensus view of technology readiness. Lemos and Porto (1998) argue that relying solely on experts is increasingly inadequate in rapidly evolving technological landscapes.
While expert-based methods can provide valuable insights, they are not without limitations. One of the main challenges is the potential for subjective biases to influence the assessment process. Experts may have preconceived notions or vested interests that can affect their judgment (Martínez-Plumed et al., 2021).
Traditional methods for TRL assessment face several key limitations:
* Subjectivity: Expert opinions can be influenced by personal biases and preferences.
* Scalability: Expert-based assessments are time-consuming and costly, making it difficult to assess many technologies.
* Inconsistency: Assessments may vary depending on the experts involved and the specific criteria used.
* Lack of transparency: The decision-making process in expert-based assessments is often opaque, making it difficult to understand the rationale behind the TRL assignment.
2.2 Bibliometric Analysis in Technology Assessment
Researchers have increasingly explored quantitative methods leveraging bibliometric data to evaluate scientific activity (Narin et al., 1972). Bibliometrics involves the use of statistical methods to analyse publications, patents, and other scholarly outputs to gain insights into scientific and technological trends (Broadus, 1987).
Watts and Porter (1997) pioneered the use of publication and patent data to map Technology Life Cycles (TLCs), establishing the foundation for data-driven approaches to technology assessment. Their work demonstrated that patterns in bibliometric data could provide insights into the developmental stage of technologies.
Building on this foundation, subsequent research has employed various bibliometric indicators to assess technology readiness:
* Publication Counts: The number of publications related to a technology can indicate the level of research activity and knowledge accumulation (Bornmann and Mutz, 2015).
* Patent Filings: Patent filings reflect the effort to protect intellectual property and commercialize new technologies (Jaffe and Trajtenberg, 2002).
* Citation Analysis: Analysing the citations to publications and patents can provide insights into the impact and influence of a technology (Garfield, 2006).
* Collaboration Networks: Examining collaboration patterns among researchers and organizations can reveal the diffusion of knowledge and the formation of technological communities (Newman, 2001).
* Text Analysis: Using natural language processing (NLP) to analyse the content of publications and patents can provide deeper insights into the characteristics and trends of a technology (Manning and Schütze, 1999).
Real-world applications of bibliometric analysis for technology readiness assessment can be seen in diverse fields (Gao et al., 2013). For example, in renewable energy, bibliometric analysis has been used to evaluate the readiness of hybrid photovoltaic/thermal systems with phase change materials (PVT-PCM), helping to understand their progression toward commercialization.
2.3 Machine Learning Approaches for TRL Prediction
Unsupervised learning refers to algorithms that recognize structures and patterns in data independently and without explicit instruction (Bishop, 2006). Unlike supervised learning, which requires labelled training data, unsupervised learning identifies inherent structures in the data without predefined outputs, making it particularly suitable for exploratory analysis and pattern discovery when labelled data is scarce or unavailable.
In technology assessment, machine learning approaches have shown promise in recent years. Gao et al. (2013) demonstrated the application of a nearest neighbors classifier for evaluating a single technology's maturity. However, most existing machine learning approaches for TRL assessment have been supervised methods requiring labelled training data.
Unsupervised learning methods offer several advantages for technology readiness assessment:
* No Labelled Data Required: They don't require extensive labelled training data, which is often difficult to obtain for diverse technologies.
* Pattern Identification: They can identify natural groupings or patterns in bibliometric data that might not be apparent through manual analysis.
* Objective Assessment: They provide a more objective assessment by learning directly from the data rather than predetermined classifications.
* Cross-Domain Applicability: They can be applied across diverse technological domains without domain-specific modifications.
Common unsupervised learning techniques include clustering methods such as k-means clustering and dimensionality reduction approaches like Principal Component Analysis (PCA) (Jain et al., 1999). These methods are well-suited for identifying natural groupings of technologies based on their bibliometric profiles and determining which factors most contribute to these groupings.
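As a concrete illustration of these two techniques, the sketch below reduces a small feature matrix with PCA and then groups it with k-means. The synthetic data here stands in for real bibliometric profiles; the dimensions and cluster offsets are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic bibliometric profiles: [publications, patents, grants, citations],
# with three artificial maturity groups offset from one another.
X = rng.normal(size=(60, 4)) + np.repeat(np.arange(3), 20)[:, None] * 3.0

X_scaled = StandardScaler().fit_transform(X)          # put features on one scale
X_2d = PCA(n_components=2).fit_transform(X_scaled)    # keep directions of max variance
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)
print(np.bincount(labels))  # cluster sizes
```

On data this cleanly separated, the three recovered clusters correspond to the three planted groups; real bibliometric profiles are far noisier.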
3. Methodology
3.1 Data Collection and Processing
For this research, a dataset of 136 distinct technology trends was created by collecting technology names from various sources, including trend radars published by leading technology companies such as BMW and DHL. These radars provided a list of emerging technologies along with their Technology Readiness Levels (TRLs). However, the TRL classifications differed across the sources. To address this, the TRLs were standardized into three categories to align with practical decision-making contexts:
* Watch (low readiness, TRL 1-3): Basic principles observed, technology concept formulated, and experimental proof of concept.
* Prepare (medium readiness, TRL 4-6): Technology validated in a lab or relevant environment, and system prototype demonstration.
* Act (high readiness, TRL 7-9): System prototype in an operational environment, system complete and qualified, actual system proven.
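The three-way standardization above can be expressed as a simple mapping. The function name and structure below are illustrative, not taken from the study's code.

```python
def trl_to_category(trl: int) -> str:
    """Map a 1-9 TRL score onto the three decision categories used in the study."""
    if not 1 <= trl <= 9:
        raise ValueError(f"TRL must be between 1 and 9, got {trl}")
    if trl <= 3:
        return "Watch"    # low readiness: concept and proof-of-concept stage
    if trl <= 6:
        return "Prepare"  # medium readiness: validation and prototyping
    return "Act"          # high readiness: operational demonstration onward

print([trl_to_category(t) for t in (2, 5, 9)])  # → ['Watch', 'Prepare', 'Act']
```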
In addition to the trend names and their standardized TRL categories, bibliometric data was collected from the Dimensions (http://dimensions.ai/) database. Metadata for each technology was retrieved, including publication count, citation metrics, patent count, and temporal trends, to provide further context to the technology names.
The collected data underwent preprocessing, including standard text processing steps such as converting text to lowercase, removing special characters and stopwords, and ensuring consistency in naming conventions. This cleaned and standardized dataset of 136 technology trends served as the foundation for feature extraction and the application of unsupervised learning algorithms in the next stages of the analysis.
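A minimal sketch of the text-cleaning steps described above, assuming a toy stopword list (the study's actual stopword list and tooling are not specified):

```python
import re

STOPWORDS = {"the", "of", "and", "for", "in", "on", "a", "an"}  # illustrative subset

def clean_trend_name(name: str) -> str:
    """Lowercase, strip special characters, and drop stopwords from a trend name."""
    text = name.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # replace special characters with spaces
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(tokens)

print(clean_trend_name("Hybrid Photovoltaic/Thermal (PVT-PCM) Systems"))
# → hybrid photovoltaic thermal pvt pcm systems
```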
3.2 Feature Engineering and Selection
Following data collection, we performed feature engineering to extract and normalize relevant features from the raw bibliometric data. We collected and calculated the following metrics for each identified technology trend.
Scientific Publications:
* Total cumulative number of publications.
* Year-over-year growth rates of publications.
* Total and yearly number of citations.
Patent Filings:
* Cumulative number of patents filed.
* Year-over-year count of patent filings.
Research Grants:
* Cumulative number of grants awarded.
* Total amount of grant funding (in USD).
* Year-over-year variation in grant funding amounts (in USD).
Clinical Trials (for relevant technologies):
* Number of clinical trials initiated.
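Year-over-year growth rates like those listed above can be computed from yearly counts. The helper below is an illustrative sketch; the function name and sample figures are hypothetical.

```python
def yoy_growth(counts_by_year: dict[int, int]) -> dict[int, float]:
    """Year-over-year growth rates from yearly counts (e.g. publications per year)."""
    years = sorted(counts_by_year)
    growth = {}
    for prev, curr in zip(years, years[1:]):
        base = counts_by_year[prev]
        # Guard against division by zero for years with no recorded activity.
        growth[curr] = (counts_by_year[curr] - base) / base if base else float("inf")
    return growth

pubs = {2019: 40, 2020: 60, 2021: 90}
print(yoy_growth(pubs))  # → {2020: 0.5, 2021: 0.5}
```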
These raw metrics were then normalized to comparable scales so that technologies with very different absolute levels of activity could be compared within a common feature space, which served as input to the unsupervised learning models.
3.3 Unsupervised Learning Model Development
The core of our methodology involves applying unsupervised learning algorithms to identify natural groupings of technologies based on their bibliometric profiles. Our framework (see Figure 1) is designed to discover patterns that correspond to different levels of technology readiness without requiring pre-labelled training data, making it more scalable and adaptable than supervised approaches.
We explored several clustering and dimensionality reduction techniques to determine the most effective approach.
Unsupervised ML Algorithms/ Clustering Methods:
* Multivariate Density-Based Spatial Clustering of Applications with Noise (MDBSCAN): It extends DBSCAN by adapting to datasets with varying densities, enabling the identification of clusters of diverse shapes and sizes without prior specification of cluster count. It is particularly effective in analysing complex, high-dimensional data where traditional clustering methods may falter.
* Density-based spatial clustering (DBSCAN): It clusters data points based on density, identifying core samples of high density and expanding clusters from them, effectively discovering clusters of arbitrary shape while marking low-density points as outliers. It requires minimal parameter tuning and is robust to noise in the dataset.
* K-means clustering: It partitions data into k clusters by minimizing the variance within each cluster, iteratively updating cluster centroids to achieve optimal separation. While computationally efficient, it assumes spherical cluster shapes and necessitates prior knowledge of the number of clusters.
Dimensionality Reduction:
* Principal Component Analysis (PCA): It is a linear dimensionality reduction technique that transforms correlated variables into a set of uncorrelated components, preserving the directions of maximum variance in the data. It is widely used for simplifying complex datasets while retaining their essential structure.
* t-Distributed Stochastic Neighbor Embedding (t-SNE): It is a non-linear dimensionality reduction method designed for visualizing high-dimensional data by modeling pairwise similarities and preserving local structures in a lower-dimensional space. It excels at revealing clusters and patterns not captured by linear techniques.
For each approach, we optimized hyperparameters through grid search, guided by validation metrics such as the silhouette score, Davies-Bouldin index, and Calinski-Harabasz index. The final model was selected based on its ability to produce meaningful clusters that aligned with our predefined TRL categories.
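A hedged sketch of such a grid search, scored with the internal metrics named above: since MDBSCAN has no standard scikit-learn implementation and the study's actual grid and data are not reproduced here, this example uses ordinary DBSCAN on synthetic blobs.

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.6, random_state=0)

best = None
for eps in (0.3, 0.5, 0.8, 1.2):
    for min_samples in (3, 5, 10):
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        mask = labels != -1                    # score non-noise points only
        if len(set(labels[mask].tolist())) < 2:
            continue                           # metrics need >= 2 clusters
        score = silhouette_score(X[mask], labels[mask])
        if best is None or score > best[0]:
            best = (score, eps, min_samples, labels)

score, eps, min_samples, labels = best
mask = labels != -1
print(f"eps={eps}, min_samples={min_samples}, silhouette={score:.2f}")
print(f"Davies-Bouldin={davies_bouldin_score(X[mask], labels[mask]):.2f}, "
      f"Calinski-Harabasz={calinski_harabasz_score(X[mask], labels[mask]):.0f}")
```

In practice one would trade off all three internal metrics rather than maximizing the silhouette score alone, as the study describes.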
The unsupervised learning process consisted of the following steps:
1. Initial dimensionality reduction to capture the most relevant variance in the bibliometric data.
2. Application of clustering algorithms to identify natural groupings within the feature space.
3. Interpretation of clusters in terms of technology readiness levels.
4. Validation of cluster assignments against the ground truth TRL categories derived from industry trend radars.
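The four steps above can be sketched end-to-end as follows, with synthetic blobs standing in for the 136-technology dataset and k-means standing in for MDBSCAN (which has no standard library implementation). Cluster interpretation (step 3) is done here by majority vote against the reference labels, an assumed but common convention.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the bibliometric feature matrix, with ground-truth
# categories 0="Watch", 1="Prepare", 2="Act".
X, y_true = make_blobs(n_samples=136, centers=3, n_features=8, random_state=1)

# Steps 1-2: dimensionality reduction, then clustering.
X_red = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_red)

# Step 3: interpret each cluster as the TRL category most common within it.
mapping = {c: np.bincount(y_true[clusters == c]).argmax() for c in set(clusters)}
y_pred = np.array([mapping[c] for c in clusters])

# Step 4: validate the assignments against the ground-truth categories.
accuracy = (y_pred == y_true).mean()
print(f"accuracy = {accuracy:.1%}")
```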
3.4 Evaluation and Validation Framework
To assess the performance of the clustering algorithms, both internal and external evaluation metrics were used. Internal metrics evaluate the quality of the clustering based on the data structure alone, without using any external reference labels. These included the Silhouette Score, Davies-Bouldin Index, and Calinski-Harabasz Index. The Silhouette Score measures how similar an item is to its own cluster compared to other clusters, indicating cohesion and separation. The Davies-Bouldin Index assesses the average similarity between each cluster and its most similar one, with lower values indicating better clustering. The Calinski-Harabasz Index compares the variance between clusters to the variance within clusters, where higher values suggest better-defined clusters.
For external evaluation, which involves comparing clustering results to known labels, Accuracy (%) was used. Accuracy measures how closely the predicted cluster assignments match a reference classification.
To enable this external validation, a benchmark dataset was created using trend radars from leading technology companies. These industry-standard forecasting tools included TRL classifications for various technologies. For each technology, the most frequently reported TRL category across sources was used as the reference label. This served as the ground truth against which the clustering model's predictions were evaluated, allowing for an objective assessment of classification performance.
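The majority-vote construction of reference labels described above can be sketched as follows (function name illustrative):

```python
from collections import Counter

def reference_label(reported: list[str]) -> str:
    """Most frequently reported TRL category for a technology across trend-radar sources."""
    return Counter(reported).most_common(1)[0][0]

print(reference_label(["Prepare", "Act", "Prepare"]))  # → Prepare
```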
4. Results & Discussion
4.1 Model Performance
The unsupervised learning approach demonstrated strong performance in classifying technologies into low, medium, and high TRL categories. Specifically, the MDBSCAN model achieved 84.9% accuracy when compared against the benchmark assessments derived from industry-standard technology forecasting tools. This level of agreement with established benchmarks indicates that bibliometric data contains strong signals related to technological readiness that can be effectively captured through unsupervised learning techniques.
The clustering results showed clear separation between the three TRL categories, with technologies assigned to the "Watch" (low TRL) cluster displaying bibliometric profiles characterized by high research activity but limited commercial development. Technologies in the "Prepare" (medium TRL) cluster showed balanced research and early commercial indicators, while those in the "Act" (high TRL) cluster exhibited strong commercial signals including substantial patent activity and declining research publication rates.
4.2 Key Predictive Factors
Factor importance analysis revealed that publication-related metrics emerged as the strongest predictors of technology readiness, accounting for over 60% of the model's predictive power. These findings indicate that publication metrics serve as the strongest signals of technological readiness, with the cumulative volume of publications providing the most significant indicator. The relatively lower importance of clinical trial metrics may be partly due to their applicability to only a subset of technologies, primarily in the biomedical domain. Further details on the contribution of each metric to prediction accuracy are provided in Table 3 and visualized in Figure 2.
Specifically, the most influential bibliometric indicators were:
1. Publication growth rate: Technologies with sustained high growth rates in publication volume were strongly associated with the "Prepare" category, indicating active research and development.
2. Citation impact: Technologies with high citation counts relative to publication volume were frequently classified in the "Act" category, suggesting established influence and recognition in the field.
3. Patent-to-publication ratio: A high ratio of patents to publications was predictive of the "Act" category, indicating movement toward commercialization.
4. Research funding growth: Rapid increases in grant funding were characteristic of technologies in the "Prepare" category, reflecting increased institutional investment as technologies prove their potential.
5. Clinical trial initiation: For healthcare-related technologies, the presence and number of clinical trials strongly indicated transition from "Prepare" to "Act" categories.
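Two of the derived ratios above can be computed directly from the raw counts; the function name and sample values below are purely illustrative.

```python
def derived_indicators(pubs: int, citations: int, patents: int) -> dict[str, float]:
    """Illustrative derived ratios used as readiness signals (hypothetical names)."""
    return {
        # Citations relative to publication volume (high values → "Act").
        "citation_impact": citations / pubs if pubs else 0.0,
        # Patents relative to publications (high values → commercialization).
        "patent_to_publication": patents / pubs if pubs else 0.0,
    }

print(derived_indicators(pubs=200, citations=1400, patents=90))
# → {'citation_impact': 7.0, 'patent_to_publication': 0.45}
```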
4.3 Interpretation of Findings
The strong predictive power of publication metrics, particularly cumulative publications (36.59% importance) and publication growth rates (24.45% importance), suggests that academic research activity serves as a foundational indicator of technological development. This aligns with the innovation lifecycle model where technologies typically progress from basic research through applied development before commercialization. The high importance of publication metrics indicates that the knowledge accumulation phase, as evidenced by scientific literature, is a crucial precursor to technological readiness.
The significance of grant-related metrics (19.62% combined importance) highlights the role of research funding in technology development. Grant activity not only enables research but also signals institutional interest and perceived potential of technologies.
Patent activity (12.65% combined importance) serving as a meaningful but secondary indicator aligns with understanding patents as intermediary outputs between research and commercialization.
The relatively low importance of clinical trial metrics (0.81%) may reflect their domain-specific nature, being primarily relevant to biomedical technologies. This underscores the challenge of developing universal readiness indicators across diverse technological domains.
The clustering results demonstrate that technologies follow discernible bibliometric patterns as they mature. These patterns can be effectively captured through unsupervised learning, enabling objective assessment without requiring predetermined classification schemes or extensive domain expertise.
4.4 Practical Implications
The methodology and findings presented in this research offer valuable practical implications for various stakeholders in the innovation ecosystem.
* For R&D Managers and Technology Strategists: The TRL prediction model provides a data-driven decision support tool for prioritizing research projects, allocating resources, and identifying technologies ripe for commercialization. By objectively assessing technological readiness, R&D managers can make more informed decisions about which technologies to advance, and which may require further development before commercialization efforts.
* For Investors and Venture Capitalists: The ability to objectively assess the maturity of emerging technologies provides investors with a valuable tool for informing investment decisions and identifying promising opportunities in their early stages. The model's capacity to classify technologies into "Watch," "Prepare," and "Act" categories aligns well with investment time horizons and risk profiles.
* For Policy Makers and Funding Bodies: Government agencies and funding organizations can leverage the model to evaluate the readiness of technologies in strategic areas, informing policy decisions and funding allocations. By understanding which technologies are approaching readiness for commercialization, policy makers can design targeted support programs to bridge the gap between research and market implementation.
* For Technology Transfer Offices: Universities and research institutions can use this approach to assess the potential commercialization of their research outputs and guide technology transfer strategies. The model can help identify which technologies in their portfolio are mature enough for licensing or spin-off creation versus those that require further development.
* For Industry Analysts and Consultants: The data-driven methodology provides a systematic approach for tracking and forecasting technological trends, supporting more accurate market analysis and technology foresight. By monitoring the progression of technologies through different readiness levels, analysts can provide more informed guidance about market timing and adoption potential.
* For Entrepreneurs and Startups: The tool enables benchmarking of technologies against broader industry trends, helping to position offerings and identify potential gaps or opportunities in the market. Entrepreneurs can use the assessments to validate their technology development strategies and communicate readiness levels to potential investors or partners.
5. Limitations and Future Research
While the unsupervised learning approach demonstrates strong performance, several limitations should be acknowledged:
1. Domain Specificity: Although our model performed well across diverse technological domains, certain fields may exhibit unique bibliometric patterns requiring domain-specific calibration. Future research should explore domain adaptation techniques to enhance performance in specialized technological areas.
2. Temporal Dynamics: Our cross-sectional analysis provides a snapshot of technology readiness, but technology maturation is inherently a dynamic process. Future work could extend our approach to incorporate time series analysis and predictive modelling to forecast future readiness transitions.
3. Data Availability: The quality of bibliometric data varies across technologies and sources. Emerging technologies may have limited bibliometric footprints, potentially affecting classification accuracy. Exploring complementary data sources could address this limitation.
4. Conceptual Boundaries: The mapping between bibliometric indicators and TRL categories is not always straightforward, particularly for disruptive technologies that follow unconventional development paths. Further research on identifying and accommodating such outliers would enhance model robustness.
5. Validation Challenges: Ground truth TRL assessments used for validation may themselves contain biases or inconsistencies. Developing more robust validation frameworks represents an important direction for future research.
Building on this work, several promising research directions emerge:
1. Incorporating semantic analysis of publication and patent content to capture qualitative aspects of technological development.
2. Developing hybrid models that combine unsupervised learning with limited expert input to leverage the strengths of both approaches.
3. Extending the model to predict not just current readiness levels but also future trajectories and development timelines.
4. Exploring the application of more advanced unsupervised techniques such as variational autoencoders and self-supervised learning.
5. Investigating the relationship between bibliometric indicators and market success for technologies that reach commercialization.
6. Conclusion
This study has successfully developed and validated an unsupervised machine learning approach for predicting technology readiness levels using comprehensive bibliometric data. The research demonstrates that publications, patents, grants, and other bibliometric indicators contain strong signals about technological maturity that can be effectively captured through unsupervised learning techniques without requiring extensive labelled training data or domain expertise.
The MDBSCAN model achieved 84.9% accuracy in classifying technologies into appropriate TRL categories, performing comparably to other unsupervised models while offering significant advantages in terms of scalability and domain flexibility. Publication-related metrics emerged as the strongest predictors of technological readiness, accounting for over 60% of the total feature importance, followed by grant-related metrics (19.62%) and patent activity (12.65%).
This research makes several important contributions to the field of innovation management:
1. It introduces a novel, unsupervised machine learning approach for predicting TRLs, addressing the limitations of existing methods that rely heavily on domain expertise or labelled training data.
2. The comprehensive analysis of diverse bibliometric indicators provides new insights into which factors are most predictive of technological readiness across different domains.
3. By offering a more objective, data-driven alternative to traditional expert-based TRL assessments, the study potentially improves the consistency and scalability of technology readiness evaluations.
4. The approach enables continuous monitoring and forecasting of technological maturity by leveraging publicly available bibliometric data, supporting more informed decision-making in R&D management and technology investment.
These contributions collectively advance our understanding of how to effectively measure and predict technological maturity, which is crucial for managing innovation in today's fast-paced technological landscape. The methodology developed in this research provides a valuable tool for various stakeholders in the innovation ecosystem, from R&D managers and investors to policy makers and entrepreneurs, enabling more informed decision-making about technology development, investment, and implementation.
Future research should focus on refining the model through text analysis of publications and patents, developing domain-specific adaptations, exploring hybrid supervised/unsupervised approaches, conducting longitudinal validation studies, and integrating additional data sources to provide a more comprehensive assessment of technological readiness.
References
Bishop, C.M. and Nasrabadi, N.M., 2006. Pattern recognition and machine learning (Vol. 4, No. 4, p. 738). New York: Springer.
Broadus, R., 1987. Toward a definition of "bibliometrics". Scientometrics, 12(5-6), pp.373-379.
Bornmann, L. and Mutz, R., 2015. Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the association for information science and technology, 66(11), pp.2215-2222.
Dalkey, N. and Helmer, O., 1963. An experimental application of the Delphi method to the use of experts. Management science, 9(3), pp.458-467.
Ernst, H., 2002. Success factors of new product development: a review of the empirical literature. International journal of management reviews, 4(1), pp.1-40.
Gao, L., Porter, A.L., Wang, J., Fang, S., Zhang, X., Ma, T., Wang, W. and Huang, L., 2013. Technology life cycle analysis method based on patent documents. Technological Forecasting and Social Change, 80(3), pp.398-407.
Garfield, E., 2006. The history and meaning of the journal impact factor. JAMA, 295(1), pp.90-93.
Jaffe, A.B. and Trajtenberg, M., 2002. Patents, citations, and innovations: A window on the knowledge economy. MIT press.
Jain, A.K., Murty, M.N. and Flynn, P.J., 1999. Data clustering: a review. ACM computing surveys (CSUR), 31(3), pp.264-323.
Kostoff, R.N., Boylan, R. and Simons, G.R., 2004. Disruptive technology roadmaps. Technological Forecasting and Social Change, 71(1-2), pp. 141-159.
Lemos, Â.D. and Porto, A.C., 1998. Technological forecasting techniques and competitive intelligence: tools for improving the innovation process. Industrial Management & Data Systems, 98(1), pp.330-337.
Mankins, J.C., 1995. Technology readiness levels. White Paper, April, 6(1995), p.1995.
Manning, C. and Schütze, H., 1999. Foundations of statistical natural language processing. MIT press.
Martínez-Plumed, F., Gómez, E. and Hernández-Orallo, J., 2021. Futures of artificial intelligence through technology readiness levels. Telematics and Informatics, 58, p.101525.
Narin, F., Carpenter, M. and Berlt, N.C., 1972. Interrelationships of scientific journals. Journal of the American Society for Information Science, 23(5), pp.323-331.
Newman, M.E., 2001. The structure of scientific collaboration networks. Proceedings of the national academy of sciences, 98(2), pp.404-409.
Saaty, T.L., 1980. The analytic hierarchy process (AHP). The Journal of the Operational Research Society, 41(11), pp. 1073-1076.
Watts, R.J. and Porter, A.L., 1997. Innovation forecasting. Technological Forecasting and Social Change, 56(1), pp.25-47.
Copyright The International Society for Professional Innovation Management (ISPIM) 2025