EVALUATION REPORT LIBRARY
Common Fund Evaluation and Assessment
Assessment and Evaluation Philosophy
The NIH Common Fund believes assessment and evaluation are essential to planning, improving, and maintaining the quality of our science programs. They can ensure accountability of programs and build a case for future science investments. These webpages provide access to formal evaluations, bibliometric analyses, and ongoing programmatic assessment and describe our assessment approach and methodologies.
To obtain a 360-degree view of complex systems, we use multiple sources of information and employ multi-platform, mixed-method designs. This includes triangulating data gathered during program planning, implementation, and close-out. Data sources include input from science experts, NIH leadership and staff, tool and resource users, publication analysis, portfolio review, landscape analysis, site visits, program documents, and others.
Evaluation Report Library
We have included publicly available evaluations and assessments of Common Fund programs. The following list is ordered by the Common Fund program name or topic for cross-program evaluations.
Bioinformatics and Computational Biology Program (which supported the National Centers for Biomedical Computing)
National Centers for Biomedical Computing Mid-Course Program Review Report (July 13, 2007)
An external panel assessed the status and progress of the National Centers for Biomedical Computing Initiative three years after the program started. This report presents the panel’s findings and provides guidance for the future course of the program.
Bridging Interventional Development Gaps (BrIDGs) (now housed in the National Center for Advancing Translational Sciences (NCATS))
NIH-RAID Pilot Mid-Course Review Meeting Summary (March 7, 2008) - part of Bridging Interventional Development Gaps (BrIDGs)
This review of the NIH Rapid Access to Interventional Development (RAID) pilot, an initiative of the BrIDGs program, was conducted by an external panel. The report presents findings and recommendations, including how to improve the approach to this area of science and the management of the program.
High-Risk, High-Reward Research Program
(Four unique NIH Director’s awards are available for exceptionally creative scientists who propose highly innovative approaches with high-impact potential to major challenges in biomedical research.)
NIH Director’s New Innovator Award
Feasibility Study of an Outcome Evaluation of the National Institutes of Health’s New Innovator Award Program – Final Report (May 2011)
To determine whether an evaluation was warranted and feasible, evaluators interviewed staff, reviewed the literature on how innovative research has been defined and operationalized, and identified a comparison group. This work produced evaluation questions, an evaluation design, a logic model, and pilot data to guide a future outcomes evaluation.
Process Evaluation of the National Institutes of Health Director’s New Innovator Award Program: FY 2007–2009 – Final Report (May 6, 2011)
After reviewing the origins of the program and changes over the first three years, examining the characteristics and perceptions of applicants and external reviewers, and analyzing the scoring of the applications, the findings indicated that the program had been implemented without significant challenges. Six recommendations were made to improve the program.
National Institutes of Health Director’s New Innovator Award Outcome Evaluation Fiscal Years 2007-2009 (2016)
This outcomes evaluation examined the initial awardees of the 5-year New Innovator Award (NI) program. Two key questions were addressed: 1) Is NI research significantly more innovative, high risk, or impactful than traditionally funded NIH research? and 2) What are the impacts of NI awards on the careers of awardees compared to the career impacts of a comparable traditional NIH award?
NIH Director’s Pioneer Award
FY 2004 – 2006 NIH Director’s Pioneer Award Process Evaluation – Comprehensive Report – Final Report (January 2008)
As a pilot, the NIH Director's Pioneer Award (NDPA) continued to evolve and change structurally and conceptually. Using administrative data, interviews, and surveys, this report summarized NDPA’s design, implementation, and participation. The findings included recommendations to continue to clarify program criteria and operations; increase consistency, communication, and transparency of the selection process; and attract a diverse pool of investigators to apply to the program.
FY 2004 – 2008 NIH Director’s Pioneer Award Process Evaluation – Comprehensive Report Final Report (January 2010)
This report summarizes the process evaluations of the first 5 years of the program, highlighting changes in the program’s design and implementation and describing program participants’ perceptions. Also included are overall assessments of the program and key recommendations.
Outcome Evaluation of the National Institutes of Health (NIH) Director’s Pioneer Award (NDPA), FY 2004–2005 – Final Report (July 22, 2011)
The 5-year NDPA awards represent a novel approach for supporting biomedical and behavioral research. This evaluation examined the outcomes of the first two cohorts of the NDPA. Two over-arching questions drove this evaluation: (1) Did the awardees conduct pioneering research with the funds? (2) What are the spillover effects of the program?
An Outcome Evaluation of the National Institutes of Health (NIH) Director’s Pioneer Award (NDPA) Program, FY 2004–2006 (August 2012)
Expanding on the previous outcome evaluation of the NDPA, this evaluation focused on scientific publications to answer four questions: (1) To what extent does this research produce unusually high impact? (2) To what extent are the research approaches used highly innovative? (3) To what extent is this research interdisciplinary? and (4) To what extent are the Pioneer awardees collaborative?
NIH Director’s Transformative Research Award
FY 2009-2010 NIH Director’s Transformative Research Award Process Evaluation (August 2010)
For this evaluation, Transformative Research Award applicants and application reviewers were surveyed. Based on the findings, recommendations about the application and review process were made.
Interdisciplinary Research Program
Facilitating and Experiencing Interdisciplinarity in Biomedical Research – Mid-course Evaluation of the Interdisciplinary Research Consortium Program: an NIH Common Fund Program (Sept. 2011)
This mid-course program review identified facilitators and inhibitors of interdisciplinarity at the project level.
Molecular Libraries and Imaging Program
NIH Roadmap Molecular Libraries and Imaging Program Mid-Course Review Meeting, December 20–21, 2006 – Executive Summary
This progress review made three recommendations: (1) focus the program on difficult or unique problems as an organizing theme, to drive innovation and differentiation from drug discovery screening efforts in industry; (2) manage the program as a diversified portfolio of initiatives; and (3) reassess the program and chart its overall direction at the 5-year point.
NIH Roadmap Molecular Libraries and Imaging Program Needs Assessment Report – Final Report (January 2010)
This needs assessment examined whether the program accomplished its goals during the initial pilot phase, and gathered feedback from network users and potential users on their level of satisfaction with Molecular Libraries Program services.
Nanomedicine Program
Evaluating the Selection Processes for the NIH Roadmap Nanomedicine Initiative Nanomedicine Development Centers (2005-2006)
This evaluation examined the processes for selecting the Nanomedicine Development Centers. This included identifying which aspects of the selection process facilitated the solicitation and identification of applications best suited to meet the objectives of the program. Also, the use of Flexible Research Authority for selecting Centers was assessed.
Post-Award Management of the NIH Nanomedicine Development Centers (2008)
As an extension of the 2005-2006 evaluation of the Nanomedicine Development Centers, this evaluation focused on the effectiveness and efficiency of management and operations at the program level.
Midcourse Review of the NIH Nanomedicine Roadmap Initiative – (2009)
In 2009, an external panel of scientists was convened to examine questions related to the structure, management, and direction of the program. The panel identified successes and challenges of the NIH Nanomedicine Program and made recommendations to improve the program.
Patient-Reported Outcomes Measurement Information System (PROMIS) Program
The PROMIS Initiative and Mid-Course Review – 2007
This process evaluation was conducted by an external panel of scientists to examine if the goals were being achieved and if the program continued to be relevant and significant in relation to re-engineering the clinical research enterprise. The panel made recommendations to support the initiative’s continued relevance.
Common Fund Evaluation
The NIH Common Fund (also known as the NIH Roadmap) was established in 2004. This 2014 process evaluation examined the planning and management of the Common Fund. Two major questions were answered: (1) Are planning processes optimal for identifying program areas that meet the Common Fund criteria? (2) Are management and oversight processes optimal for achieving program goals?
The Common Fund is a unique funding entity at NIH, functioning as a “venture capital” space where high-risk and innovative research can be supported. This led to the question: does research supported by the Common Fund generate more patents than the NIH research portfolio as a whole? Read the full patent report to see patent numbers from Common Fund programs compared to the NIH overall and to learn about select Common Fund patents.
This page includes posters, slide handouts, and videos about evaluations and assessments from the NIH Common Fund. This list is ordered by date.
- The Common Fund Patent Report (2017) - Video
This video addresses whether research supported by the Common Fund, which functions as a “venture capital” space for high-risk and innovative research, generates more patents than the NIH research portfolio as a whole.
- NIH Common Fund Single Cell Analysis Program: An Early Outcomes Assessment - NIH Evaluation Special Interest Group 2017
The goals of the Single Cell Analysis Program, which ended in 2017, were to address key roadblocks in analyzing single cells, catalyze the emerging field, and coordinate NIH efforts in advancing the next generation of technologies. This assessment examines the products generated by the program, the effectiveness of the program, and its impact on the field.
- Early Program Outcome Assessment: The Common Fund Closeout Guide - AEA Conference 2017
This presentation discussed the purpose and use of a guide to capture short-term outcomes from biomedical research programs.
- NIH Investment in Single Cell Analysis Research in Human Cells and Tissues (poster) - NIH Office of Portfolio Analysis 2017 Symposium
The NIH Single Cell Analysis in Human Cells or Tissues portfolio was evaluated by a trans-NIH working group to determine the current NIH investment in this area of research, as well as to identify gaps in the portfolio. This analysis was conducted as part of strategic planning to determine whether the NIH Common Fund should support a program in this area.
- Bibliometrics: a Key Performance Indicator in Assessing the Influence of Biomedical Research (poster) - AEA Conference 2016
This poster presents the feasibility and utility of bibliometrics as a performance indicator for research programs, and describes how bibliometrics integrate with other methods used to evaluate biomedical research programs.
- Using Evaluation Tools, Methods, and Thinking in Planning NIH Common Fund Programs - AEA Conference 2016
A combination of methods grounded in evaluation practice is used to inform strategic planning for biomedical research programs managed by the NIH Common Fund. This iterative process includes strategies for working with and building consensus among key stakeholders so that informed decisions about research direction, resources, and funding can be made.
- Identifying Metrics for Monitoring and Evaluating Impact of a Federally-Funded Research Communication Program (poster) - AEA Conference 2015
This presentation focused on metrics needed to monitor and evaluate our research communication plans.
- Building Evaluation Capacity in a Federal Research Funding Office, the NIH Common Fund (poster) - AEA Conference 2015
Broadening Experiences in Scientific Training (BEST) Cross-site Evaluation Data
BEST Data Files
The Strengthening Biomedical Research Workforce program issued five-year awards in 2013 and 2014. These Broadening Experiences in Scientific Training (BEST) awards were designed to develop sustainable approaches to broaden graduate and postdoctoral training, aimed at creating training programs that reflect the range of career options that trainees may ultimately pursue. The awardee sites took part in a comprehensive cross-site evaluation designed to understand trainee agency, time to desired careers, and the sustainability of BEST activities and resources at academic institutions. The data available here are from this evaluation.
Prior to downloading any cross-site data, users will be prompted below to register a name and valid e-mail address and to accept their responsibility for using the data in accordance with the Data Use Agreement.
Protection of Human Subjects
The primary concern in sharing data is the protection of human subjects. The rights and privacy of people who participate in NIH-sponsored research must be protected at all times. Thus, data on this site have been de-identified to prevent linkage to individual research participants. This includes removal of all Personally Identifiable Information (PII) and of indirect identifiers that are not listed as PII but could lead to “deductive disclosure,” such as comment fields and educational institution names.
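As an illustration only, the de-identification step described above (dropping both direct PII fields and indirect, deductive-disclosure fields such as free-text comments and institution names) can be sketched in a few lines of standard-library Python. The column names below are hypothetical examples, not the actual BEST survey fields:

```python
import csv
import io

# Hypothetical field names for illustration; the real BEST instruments
# define their own variables.
PII_COLUMNS = {"name", "email"}                 # direct identifiers (PII)
INDIRECT_COLUMNS = {"comments", "institution"}  # deductive-disclosure risk

def deidentify(rows):
    """Drop direct and indirect identifier fields from survey records."""
    drop = PII_COLUMNS | INDIRECT_COLUMNS
    return [{k: v for k, v in row.items() if k not in drop} for row in rows]

# Tiny in-memory example standing in for a survey export.
raw = io.StringIO(
    "name,email,career_goal,comments,institution\n"
    "Jane Doe,jd@example.edu,industry,free text here,Example University\n"
)
rows = deidentify(csv.DictReader(raw))
print(rows)  # only non-identifying fields remain
```

Real de-identification also involves checks that no remaining combination of fields can single out a participant; this sketch shows only the column-removal step.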
Cross-Site Evaluation Data Collection Instruments
- Entrance Survey for Graduate Students
- Entrance Survey for Postdoctoral Scientists
- Exit Survey for Graduate Students
- Exit Survey for Postdoctoral Scientists
- Data Form
- Phone Interview Questions
Please review additional information related to the BEST program and data:
- Lenzi RN, Korn SJ, Wallace M, Desmond NL, Labosky PA. The NIH "BEST" programs: Institutional programs, the program evaluation, and early data. FASEB J. 2020;34(3):3570-3582. doi:10.1096/fj.201902064
- FINAL INTEGRATED REPORT: Evaluation of the National Institutes of Health (NIH) Broadening Experiences in Scientific Training (BEST) Program
The recommended citation for the data is:
- National Institutes of Health Office of Strategic Coordination. (2020) Public Use Data from the Broadening Experiences in Scientific Training (BEST) Award Trainee Surveys and Awardee Data Forms.