
[Published originally in the January 2003 edition of Computing Research News, Vol. 15/No. 1, p. 6.]

Shooting Inward

By Rick Adrion and Steve Mahaney, NSF

"We must avoid our well-ingrained tendency to circle the wagons and shoot inward" (Ed Lazowska, Computing Research Association Conference at Snowbird, July 1998).

Does Ed Lazowska's comment apply to the CISE review process? Yes!

Although our recent observation of CISE panels and mail reviews indicates that CISE reviewers are being more consistent and supportive than in years past, CISE reviewers, on average, give lower ratings than reviewers in other disciplines. These lower ratings could be deserved, but that would imply that the quality of research in CISE areas is lower, and we do not believe that to be true. This short article looks at how such lower ratings are perceived in NSF's processes and offers some suggestions for reviewers.

In NSF's Mathematical and Physical Sciences Directorate (MPS), for example, which supports mathematics, physics, chemistry, astronomy, and materials sciences, the average rating for awards over the past three fiscal years was close to 4.5/5 (where "poor" = 1, "fair" = 2, and so on up to "excellent" = 5). In CISE, the average rating for awards was around 3.9/5. More telling are the averages for declined proposals: 3.5/5 in MPS versus 2.8/5 in CISE. These differences in averages have held up over several years now; CISE consistently rates its own proposals the lowest among NSF's supported fields.

Does that mean that the average awarded project in CISE is merely "very good" and perhaps comparable to declined proposals in MPS? Does it imply that CISE is funding mundane projects, while MPS must decline much higher quality proposals? Figures 1 and 2 show the distribution of reviews for awards and declines in CISE and MPS.

For awards, the MPS data skew heavily towards VG-E (very good to excellent), while the CISE data form a roughly normal distribution centered on VG. For declines, both sets are fairly normally distributed, MPS around VG and CISE around G (good). These data imply that MPS awards are clearly distinguished from the declines, but that the declines include a number of high-quality proposals. In the CISE data, awards and declines have similar distributions, with the awards rated somewhat higher, but not excellent.

MPS and CISE proposals average around 4 reviews, with awards typically receiving more reviews than declines. These data imply that the average MPS award has 2 to 3 "excellents" and 1 to 2 "very goods," while the average CISE award has 1 "excellent," 2 "very goods," and 1 "good." The comparison is not flattering to CISE proposals.
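
To make the rating arithmetic concrete, here is a minimal Python sketch using hypothetical review-score sets on the 1-to-5 scale described above; the specific score lists are illustrative assumptions, not NSF data.

    # Hypothetical review scores: poor=1, fair=2, good=3, very good=4, excellent=5
    typical_mps_award = [5, 5, 4, 4]    # 2 "excellents" and 2 "very goods"
    typical_cise_award = [5, 4, 4, 3]   # 1 "excellent", 2 "very goods", 1 "good"

    def average(scores):
        # Mean review rating for a single proposal
        return sum(scores) / len(scores)

    print(average(typical_mps_award))   # 4.5, near the MPS average for awards
    print(average(typical_cise_award))  # 4.0, near the ~3.9 CISE average for awards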

Since the funding rate in CISE is dropping as proposal pressure increases faster than the CISE budget, one might expect the perceived quality of declined proposals to be increasing. Overall, the CISE funding rate has dropped from above the NSF average (around 30 percent) to closer to 25 percent. The 2001 and 2002 data for declines are remarkably similar, however, showing only an increase in the number of reviews per proposal, not a significant change in the distribution.

It may be useful to consider how these reviews affect NSF processes. The most obvious effect is on how resources are allocated. In cross-disciplinary competitions (such as ITR, centers programs, and graduate traineeship programs), proposals are examined as a single pool at one or more stages, not just in individual disciplinary processes. In a setting where quality is the primary criterion, a group that has a lower average score is likely to receive a smaller share of funded proposals.
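
As a rough illustration of that pooling effect, the following Python sketch ranks a merged pool of proposals from two groups by average review score and funds the top quarter; the score distributions are purely hypothetical, chosen only to mirror the roughly half-point gap noted above.

    import random

    random.seed(0)

    # Hypothetical average review scores: group A reviewers rate about half a
    # point higher than group B reviewers.
    pool = [("A", random.gauss(4.2, 0.5)) for _ in range(100)] \
         + [("B", random.gauss(3.7, 0.5)) for _ in range(100)]

    # Fund the top 25 percent of the merged pool, ranked purely by score.
    funded = sorted(pool, key=lambda p: p[1], reverse=True)[:50]

    shares = {"A": 0, "B": 0}
    for group, _ in funded:
        shares[group] += 1
    print(shares)  # group A typically captures well over half of the awards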

A second resource allocation process that can be affected is the allocation of new funds among directorates. An organization such as MPS that must turn away highly rated proposals is in a stronger position to argue for resources than one that is perceived to have resources going to weaker proposals. These allocation issues are not unique to the NSF; they are the same processes that influence admissions or allocation of faculty lines at universities.

A third impact is less directly connected to resources, but may last longer: the reputation of CISE research among the NSF-supported fields of science and engineering suffers.

It would help if CISE reviewers came to a clear consensus on the best proposals.

We are looking at the panel process to find and adapt techniques from other review processes (conferences, journals) to achieve consensus in advance of the panel meeting. We believe that many more proposals deserve consideration than we can fund, but it is hard to make this argument based on these data for declined proposals.

These data and other observations lead to some deeper comments and suggestions about the review process--suggestions both for reviewers and for NSF practices. First, a review can offer both an assessment of the project and suggestions for improving it. As a reviewer, you can rate a proposal within the context of the competition. If a proposal represents quality PIs, important research, and valuable impacts, but has flaws that can be fixed, give it your support, indicate what needs to be addressed, and leave it to NSF and the PI to ensure that, if funded, those flaws are addressed. Even if declined, the PI will feel that her/his work is valued and can see the concerns as constructive recommendations for improvement.

We are trying to provide more analysis of the reviews to PIs. The increased proposal load has made it difficult for program managers to offer detailed guidance for improving proposals, but this guidance is essential, and we are giving extra attention to this issue.

In addition to the difference in summary rating (poor, fair, etc.), we believe there is a difference among disciplines in the approach to reviewing proposals. For example, consider the following exchange, overheard at a Quantum Computing review panel:

"How do you feel about proposal xxxxxx?"
"Has the potential to be a major breakthrough."
"First-class PI and team."
"Great track record."
"High risk, but very high return."
"Excellent."
"Excellent."
"Any concerns?"
"The apparatus probably won't work."
"[My] approach is more likely to succeed."
"Highly speculative."
"Overall rating?"
"Highly recommended."

These reviewers were a mix of physicists, materials scientists, and computer scientists. Note how the assessments of the PI, the potential impact, and the quality of the research are very high, even though the panelists have some concerns about apparatus, approach, and risk.

In CISE panels, one might expect the order of the comments to be reversed ("probably won't work," "too speculative," etc.) and the conclusion to be quite different. The difference is in what is emphasized. This group gave weight to the potential outcome in reaching their ratings. The differences each reviewer might have with the PI's approach were left as "concerns." In our discipline, we often put approach first ("I wouldn't do it that way, so it isn't likely to be successful [interesting]"). We aren't suggesting that impossible goals or poorly designed approaches deserve support. However, in other disciplines, there is a belief that supporting a variety of research approaches is healthy and that "all boats rise on the incoming tide." Other scientists and engineers can be conservative, shying away from high-risk, non-traditional, or highly interdisciplinary research, but they are more supportive of their discipline and their colleagues.

Our experience with CISE panels may reflect the relative immaturity of the field as well as the breadth and natural interdisciplinary nature of CISE programs. It's time that we stop "shooting inward" and start being more supportive.

Take a lesson from the QC panel: put your review comments in context. If you think a proposal should be funded, give it a strong rating and clearly state the scientific and other impact outcomes that you expect to see. If you have problems with some specifics of the approach or the lack of citations or other issues, but still think the work deserves support, then state these concerns clearly as guidance to the PI, not as major criticisms of the overall effort.

There may be other ways in which reviewers can do a better job of reviewing, while at the same time helping to form a more cohesive and supportive community. We hope that this brief discussion will stimulate you to think of some!


Rick Adrion is Senior Advisor and Steve Mahaney is Senior Advisor for Budget Management & Planning and Policy, both in the Office of the Assistant Director for CISE at NSF.

 

