Research Assessment Exercise 1996


Classics, Ancient History, Byzantine and Modern Greek Studies

  1. The background
    (a) Selection of Panel Members
    (b) The timetable
    (c) Procedures
    (d) Grading
    (e) The working of the evaluation process
  2. The outcome
  3. The future
    (a) Statistics
    (b) Changing patterns in research
    (c) Quality assessment

We hope it will be of some interest to colleagues, and particularly to our subject associations, if we report our experience of the recent Exercise.

1. The background

(a) Selection of Panel Members

Subject associations were invited to suggest names to the Funding Councils. The aim was to cover as far as possible the range of specialisms included under Classics, Ancient History, Byzantine and Modern Greek Studies, to achieve a reasonable geographical spread and to ensure some continuity with the previous Exercise. The Panel that was eventually appointed (R.M. Beaton, A.M. Cameron, C. Carey, J.K. Davies, P.E. Easterling, H.M. Hine, A. Morpurgo Davies, J.S. Richardson, R.W. Sharples, B.A. Sparkes) was drawn entirely from names put forward by the constituencies. Of these ten, three (Beaton, Easterling and Richardson) had served in 1992 and the same Chair (Easterling) was appointed for both Exercises.

(b) The timetable

Criteria were formulated in accordance with the Funding Councils' guidelines and published in the autumn of 1995. After receiving submissions from 27 institutions, we met in early July 1996 to divide up the reading of publications, to devise grading systems for published work and for other aspects of the submissions, particularly Forms RA5 and RA6, and to consider which publications should be evaluated by specialist advisers or members of other Panels. By mid-September it was possible to collate and discuss the results of the reading process and to arrive at some provisional grades; the final decisions were made in October 1996.

(c) Procedures

(i) All cited works were read by one or more Panel members, apart from a relatively small proportion of items sent to other Panels or to a specialist adviser. All Panel members read the full submission for each institution.

(ii) In accordance with the directions of the Funding Councils, Panel members were not present for the consideration of any institution in which they had registered a major interest, and details relating to those institutions were not recorded in any documentation available to them.

(iii) All discussion was open, and decisions were arrived at by consensus or by open voting, mainly straw polls.

(d) Grading

We had been advised by the Funding Councils that our scoring of individual publications should use a different system of grades from the 1-5* rating intended for whole submissions; we therefore devised a matrix for evaluating publications which would help us to take systematic account of the scholarly competence, imaginative range and contribution to the field of each piece of work. As for the number of works cited by each individual, we interpreted the official wording (`up to four') as giving us some flexibility, e.g. in the case of young members of staff at the beginning of their research careers or of established scholars who had published a single highly significant work based on many years of preparation. A number of the cited items fell outside our guidelines (e.g. the editing of a collection of papers which did not include a contribution by the editor); these therefore did not receive a score, but in some cases they could be considered under evidence for research culture or more general promotion of the subject.

The rest of each submission was evaluated under the following headings: the quantity and quality of post-graduate students, the success of departments in attracting grants, the standing of their members as indicated by external judgements, and the evidence, so far as it could be evaluated, for the active development of a research culture, for the use of available resources to create a favourable environment for research, and for strategic planning for the future. The aim was to apply the same criteria to each institution without using any quantitative scoring system. We tried to avoid penalising small departments for being small and to give credit to departments which took a serious interest in the development of young staff. We also gave weight to the many forms of service to scholarship, such as editorial work, reviewing, lexicography and the collection and publication of source materials, which are the necessary underpinning of all academic work.

The task was a complex one, in that we were required to balance two different kinds of evidence: on the one hand the `snapshot' of work published during a limited period, and on the other the more diverse factors affecting the conduct of research in each institution. We also had to find a way of applying the distinction between national and international standards of excellence, as set out in the official rating scale. In disciplines like ours, where most research is carried out in an international context, the distinction is not easy to draw, but we tried to follow the official definitions as precisely as possible without resorting to a mechanical scoring system.

(e) The working of the evaluation process

It was easier than in 1992 to obtain copies of out-of-the-way items from institutions, and it was a great help not to have to assess unpublished material. Bibliographical details were on the whole accurate, and in general the submissions were extremely professional; but some time was wasted in pursuit of false references. Occasionally there was some doubt as to whether the date of a work's publication was actually earlier than 31 March 1996. Some adjustments to categories of staff had to be made in the light of the Audit of submissions. The Panel did not regret having to read all the cited works, but within the time limits this was an arduous undertaking, and if there had been many more submissions it would have been impossible.

2. The outcome

(a) Like our predecessors in 1992, we have found that research in all our fields in British universities is in a remarkably flourishing state. Standards remain high, and the publications that we have seen compare very favourably with work published in our subjects from other parts of the world. There is also a great deal of productive activity: conferences, seminars, editorial work on a larger scale than ever before. Some of this can be attributed to the effects of the RAE, particularly the unfreezing of some posts, improvements in the provision of leave, and the stimulus given to some older scholars to resume independent or collaborative research. Collaboration between researchers in different subjects and across institutions is also having a positive effect: there is far less isolation of small groups than in the past, and departments are being encouraged to think strategically about research as a matter of course.

(b) Although we can see a number of good effects directly flowing from the periodic scrutiny of research, we are anxious about some of the broader trends in academic life which the evaluation process seems to have accentuated. Some examples:

(i) There has been a general increase in the volume of publication, entailing a good deal of repetition and overlap in individual submissions. This is not always prompted by a desire for self-advertisement: the trend towards the publication of conference proceedings makes overlap less and less easy for even the most scrupulous scholar to avoid. Publishers have expressed their anxieties about intense pressure to meet RAE deadlines, leading in some cases to reduction in the quality of publications.

(ii) If research is periodically assessed, the period of assessment is liable to become the engine that drives policy-making, particularly in appointments (encouraging `games-playing' or `buying bibliographies'), in the choice of projects (favouring short-term over long-term research), and in the development of young staff (giving them less space for improving their scholarly range and for acquiring teaching and administrative experience without exploitation).

(iii) As all kinds of new administrative procedures have been introduced, and the need for special funding of research projects and research leave is becoming more widely felt in the Humanities, the volume of paperwork has hugely increased, particularly for senior academics, in writing references and evaluating applications, taking part in formal and informal assessment, etc., often without significant secretarial support. All members of departments who are committed to a generous definition of `good citizenship' and believe in the importance (e.g.) of promoting their subjects' interests in the wider community, or of supporting their colleagues in secondary education, find themselves torn by conflicting demands on their time.

(iv) All this - publication, paperwork, monitoring procedures (particularly time-consuming in relation to teaching and administration) - creates an atmosphere of strain in academic life which is liable to endanger creativity. Universities, after all, are extremely disciplined institutions: academics work within highly regulated organisational and financial structures, and their lives follow an orderly round of teaching, examinations and meetings that could become oppressive if they did not have their `own work' in which to express their freedom and creativity - and to reflect these qualities in their teaching and contacts with students. Any system that threatens to remove intellectual freedom is bound to have bad effects on quality - certainly on the quality of life - in the long run.

3. The future

We hope our subject associations will take the opportunity to look carefully, not only at the outcome in terms of grades awarded, but also at changing patterns in our subject areas which may or may not be related to the effects of the assessment process.

(a) Statistics

We suggest that it is worth gathering information about the number of posts in our fields that have been on offer each year, to see how far they are affected by the timing of the RAE (it seems, for example, that last academic year there were far more openings than this year, but it is probably too early to identify a trend).

(b) Changing patterns in research

The development of the Open University as a research centre illustrates a trend that will surely develop; distance learning, the recruitment of mature students and the increased use of IT resources will bring changes in some long-established patterns. Shifts of emphasis in current work are also worth noting, e.g. the strong interest in the culture of Late Antiquity, in neo-Latin literature, in reception studies and in social and economic history.

(c) Quality assessment

Given that the total resources allocated to particular subject areas remain constant, we ought perhaps to be asking for a different model for dividing them between institutions. Would there be an advantage, if assessment is here to stay (and once a grading system has been invented, there will always be pressure from up-and-coming institutions to ask for review), in looking for a much simpler approach? Could departments put forward what they judge to be the single best item (or, say, the best 50 pages) that each member has produced in a given period? Alternatively, each researcher could nominate up to four publications, indicating the best c.50 pages (e.g. a couple of chapters, or two separate articles) for panel members to read. Another possibility would be to reduce the number of works assessed from four to two, without reducing the period covered by the Assessment Exercise.

But these are relatively minor suggestions, and it is worth thinking about the future in broader terms. Since the current assessment processes are extremely costly, in time as well as money, and many academics now feel that the values underlying the whole university system are seriously threatened, we should like to urge the Funding Councils to undertake a more radical review, which would start from a consideration of the kind of Higher Education system that is wanted and would explore the possibility of taking a holistic approach to assessment, covering both teaching (TQA) and research (RAE). It would be a great waste of resources if assessment itself were to undermine the very high standards of professionalism and the generosity towards students that have traditionally characterised most academic departments.

P.E. Easterling

Newnham College, Cambridge

CUCD Bulletin 26 (1997)
© P.E. Easterling 1997
