

The following text was scanned from a National Research Council study titled Research-Doctorate Programs in the United States: Continuity and Change (Appendix F, pp. 115-116).


The National Survey of Graduate Faculty

The National Survey of Graduate Faculty was conducted in 1993 by the National Research Council. The survey was designed to gather the views of a sample of faculty at U.S. universities on the scholarly quality of the program faculty in their field and the effectiveness of those programs in educating research scholars/scientists.

The National Survey of Graduate Faculty traces its origins to the work of Hughes (1925), Keniston (1959), Cartter (1966), and Roose and Andersen (1970). The format and content of the present survey are generally similar to those developed by the 1982 study on the same topic (Jones, Lindzey, and Coggeshall, 1982), with individual faculty acting as "raters" for approximately 50 programs in their field.

THE SAMPLE

The sample of "raters" for the National Survey of Graduate Faculty was drawn from a list of 65,470 faculty members reported by Institutional Coordinators at 274 institutions as participants in research-doctorate training programs. The size of the sample depended on the number of programs in the field. (See Appendix C.) In fields other than the Biological Sciences the number of raters was four times the number of programs for the field. No fewer than 200 raters were selected even in fields with fewer than 50 programs participating in the study. Thus each program was included on 200 questionnaires.

For the Biological Sciences the number of raters was expanded to five times the number of programs, which resulted in each program appearing on 300 questionnaires. This step was taken in recognition of the interdisciplinary nature of the faculty in those programs and in light of our goal of seeking at least 100 ratings for every program included in the study.

THE QUESTIONNAIRE

The survey instrument (see the pages that follow) was a questionnaire that retained many of the features of the one used in the 1982 study, including questions on the "scholarly quality of the program faculty," the "effectiveness of the program in educating research scholars and scientists," and "the relative change in program quality over the years." Each questionnaire contained a random sample of 50 research-doctorate programs, except in the Biological Sciences, where each questionnaire included 60 programs.

DATA COLLECTION

In May 1993, the first set of questionnaires was mailed to 11,407 raters in 34 fields. The raters in this group represented approximately 18 percent of total faculty in fields other than the Biological Sciences. Later, a second set of questionnaires was mailed to 5,331 raters in the Biological Sciences. This group of evaluators represented approximately 25 percent of the program faculty in that area.

FINAL SAMPLE DISPOSITION

Because the sample was drawn from faculty lists that Institutional Coordinators had compiled across departmental boundaries, respondents occasionally indicated that they did not consider themselves qualified to rate programs in a certain disciplinary area. This effectively reduced the sample and necessitated a second-wave mailing in fields in which the sampling problem was particularly evident, for example, the biological sciences and biomedical engineering. Appendix Tables F-1 and F-2 summarize the final disposition of the sample and the response outcome.

RESPONSE OUTCOMES

Questionnaires were returned to the staff of the National Research Council during the Summer and Fall of 1993. The committee achieved its goal of soliciting approximately 100 responses for each of the programs included in the study. The overall response rate was about 50 percent in most fields, except in the Biological Sciences, which registered a 40 percent return.
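These return rates are consistent with the sampling design described above (a rough check, not stated in the original text): with each program appearing on roughly 200 questionnaires, or 300 in the Biological Sciences, the expected number of ratings per program is approximately

    \[
    200 \times 0.50 = 100
    \qquad\text{and}\qquad
    300 \times 0.40 = 120,
    \]

which is why the goal of about 100 responses per program could be met even with the lower return in the Biological Sciences.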

The responses to each of the five questions asked about each program were tabulated and entered into a working file. From the working file, the "mean" and a "trimmed mean" of the responses to each of the five questions for each program were computed and entered into a data base.

The "trimmed mean" was obtained by dropping the two highest and two lowest scores for each program and computing the resulting mean. In the computation of the means for B2 (Scholarly Quality of Program Faculty) and B4 (Effectiveness of Program in Educating Research Scholars/Scientists) for the response "Don't know well enough to evaluate" was not counted. However, these responses were recorded and used in the computation of the Visibility Index. (See Appendix P for a definition.)

About midway through the survey, the committee asked a subgroup of its membership[1] to review the survey returns and to advise staff on strategies that might be needed to achieve the objective of 100 responses per program. The ad hoc advisory panel in fact suggested an additional mailing of questionnaires in a few fields owing to patterns of nonresponse thought to be associated with this problem of interdisciplinary faculty lists.

Four fields were subsequently selected for follow-up with a second-wave mailing: Biomedical Engineering, Comparative Literature, Religion, and Music. In addition, a second-wave follow-up mailing was conducted in nine fields: Electrical Engineering, English Language and Literature, Materials Science, Mechanical Engineering, Computer Sciences, Mathematics, Oceanography, History, and Psychology.

Questionnaires were returned to the staff during the Summer and Fall of 1993. Responses were tabulated and a large file was formed for purposes of analysis. Appendix Table F-1 summarizes the response rate in February 1994, the point at which no further returns were accepted for analysis. Data from that table reveal that just over 7,900 raters returned forms that were usable,[2] or about 51 percent of the raters known to be eligible.[3] The committee achieved its goal of having a minimum of 100 raters per program, with the exception of the fields noted earlier, whose response rates varied from 82 respondents in Comparative Literature (or 43 percent of the total number surveyed) to 98 respondents in Linguistics (or 51 percent of the total number surveyed). A further analysis of the patterns of response by the ad hoc panel persuaded the committee, however, that the patterns of non-response in these fields did not reveal a bias, suggesting that the results could be utilized in the study.

The committee also sought to achieve a balance in survey replies on the basis of faculty position. That is, responses were sought from full professors, associate and assistant professors, and other faculty in proportion to their numbers in the sample of respondents. Appendix Table F-2 provides an overview of survey responses by faculty position and reveals a balance with respect to the level of seniority of faculty included in the study.

Analyses were also conducted with respect to the geographic region of the rater and whether the rater had been suggested by the Institutional Coordinator or had been selected randomly by the committee in the course of composing the sample.[4] The results of those analyses may be found in Appendix Table F-3.

The accuracy of the data entry and working file preparation was monitored by cross-checking sample tabulations.

THE DATABASE

In addition to the overall survey information, the committee created a data base containing information about the characteristics of each rater, such as area of research specialization and the institution where the rater received his or her highest degree. These data have been aggregated to a level that will not identify the rater but will provide researchers with useful information about this dimension of the survey. Plans have been made to prepare these files for release to the public at the conclusion of the project. (See Note 12 of Chapter I for more details.)

NOTES

1. Committee members Drs. Norman Bradburn, Jonathan Cole, and Steve Stigler, together with the committee co-chairs, closely monitored the survey and guided staff in the interpretation of response rates. Dr. Rebecca Klemm, of Klemm Associates, provided technical support in the sampling plan and fielding of the questionnaire.

2. See Appendix F for a definition of this term and other terms.

3. It is not possible to determine the fraction of non-respondents who might also be considered ineligible.

4. The issue of potential bias associated with using faculty recommended by the Institutional Coordinators as raters was discussed in the 1982 NRC study. See Jones, Lindzey, and Coggeshall, 1982.

