The following text was scanned from a National Research Council study titled Research-Doctorate Programs in the United States: Continuity and Change (Appendix F, pp. 115-116).
The National Survey of Graduate Faculty traces its origins to the work of Hughes (1925), Keniston (1959), Cartter (1966), and Roose and Andersen (1970). The format and content of the present survey are generally similar to those developed for the 1982 study on the same topic (Jones, Lindzey, and Coggeshall, 1982), with individual faculty members acting as "raters" for approximately 50 programs in their field.
For the Biological Sciences the number of raters was expanded to five times the number of programs, which resulted in each program appearing on 300 questionnaires. This step was taken in recognition of the interdisciplinary nature of the faculty in those programs and in light of our goal of seeking at least 100 ratings for every program included in the study.
The responses to each of the five questions asked of each program were tabulated and entered into a working file. From the working file, the "mean" and a "trimmed mean" for responses to each of the five questions for each program were computed and entered into a database.
The "trimmed mean" was obtained by dropping the two highest and two lowest scores for each program and computing the resulting mean. In the computation of the means for B2 (Scholarly Quality of Program Faculty) and B4 (Effectiveness of Program in Educating Research Scholars/Scientists), the response "Don't know well enough to evaluate" was not counted. However, these responses were recorded and used in the computation of the Visibility Index. (See Appendix P for a definition.)
About midway through the survey, the committee asked a subgroup of its membership1 to review the survey returns and to advise staff on strategies that might be needed to achieve the objective of 100 responses per program. The ad hoc advisory panel in fact suggested an additional mailing of questionnaires in a few fields, owing to patterns of nonresponse thought to be associated with the interdisciplinary character of the faculty lists in those fields.
The four fields subsequently selected for follow-up and a second-wave mailing were Biomedical Engineering, Comparative Literature, Religion, and Music. In addition, a second-wave follow-up mailing was conducted in nine other fields: Electrical Engineering, English Language and Literature, Materials Science, Mechanical Engineering, Computer Sciences, Mathematics, Oceanography, History, and Psychology.
Questionnaires were returned to the staff during the summer and fall of 1993. Responses were tabulated and a large file was formed for purposes of analysis. Appendix Table F-1 summarizes the response rate in February 1994, the point at which no further returns were accepted for analysis. Data from that table reveal that just over 7,900 raters returned forms that were usable,2 or about 51 percent of the raters known to be eligible.3 The committee achieved its goal of having a minimum of 100 raters per program, with the exception of the fields noted earlier, whose response rates varied from 82 respondents in Comparative Literature (or 43 percent of the total number surveyed) to 98 respondents in Linguistics (or 51 percent of the total number surveyed). A further analysis of patterns of response by the ad hoc panel persuaded the committee, however, that the patterns of nonresponse in these fields did not reveal a bias, suggesting that the results could be used in the study.
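A note on the arithmetic: the two kinds of percentages in this paragraph use different denominators. The overall figure of about 51 percent is computed against the raters known to be eligible, while the per-field figures are computed against the total number surveyed. The short Python sketch below, using entirely hypothetical counts, shows both calculations; as footnote 3 notes, the eligible denominator is itself approximate because the eligibility of non-respondents cannot be determined.

    def response_rates(usable_returns, known_eligible, total_surveyed):
        """Response rate two ways: as a percent of raters known to be
        eligible and as a percent of all raters surveyed."""
        return (100.0 * usable_returns / known_eligible,
                100.0 * usable_returns / total_surveyed)

    # Hypothetical field: 200 raters surveyed, 180 known to be eligible,
    # 95 usable questionnaires returned.
    pct_eligible, pct_surveyed = response_rates(95, 180, 200)
    print(f"{pct_eligible:.0f} percent of known-eligible raters; "
          f"{pct_surveyed:.0f} percent of all raters surveyed")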
The committee also sought to achieve a balance in survey replies on the basis of faculty position. That is, responses were sought from full professors, associate and assistant professors, and other faculty in proportion to their numbers in the sample of respondents. Appendix Table F-2 provides an overview of survey responses by faculty position and reveals a balance with respect to the level of seniority of faculty included in the study.
Analyses were also conducted with respect to the geographic region of the rater and whether the rater had been suggested by the Institutional Coordinator or had been selected randomly by the committee in the course of composing the sample.4 The results of those analyses may be found in Appendix Table F-3.
The accuracy of the data entry and working file preparation was monitored by cross-checking sample tabulations.
2. See Appendix F for a definition of this term and other terms.
3. This figure is approximate because it is not possible to determine the fraction of non-respondents who might also be considered ineligible.
4. The issue of potential bias associated with the use of faculty as raters who have been recommended by the Institutional Coordinators was discussed in the 1982 NRC study. See Jones, Lindzey, and Coggeshall, 1982.