RAISING THE STANDARD FOR ADMISSION TO MEMORIAL
 UNIVERSITY:  A CRITIQUE OF WILSON AND THE TASK
 FORCE ON ADMISSIONS POLICY

 William H. Spain
 Glenn Clark
 Faculty of Education
 Fall 1994




Introduction

 The recent decision to increase the standard of admission to Memorial University for students matriculating from high school caused a flurry of discussion both on and off the campus.  The move parallels similar decisions in other Canadian universities, and speaks to the perception in some quarters that the increase in the admissions standard will result in an improvement in the quality of the university graduate.  Others have disagreed with the change, believing that it withholds the opportunity to attend university from a significant portion of the high school population who have demonstrated by their performance that they deserve the chance.

 This paper reviews the reports that led to the decision, and presents some additional analysis that expands the perspective taken in the reports.  The authors have tried to assume a neutral position in the argument, although the fact that we have troubled ourselves may suggest to some that our sympathies lie with the students who will be adversely affected.  Our concerns with the decision to increase the admissions standard are primarily procedural, but they also touch on the values implied in the recommendations - values that we think have been influenced by faulty procedure and inappropriate presentation of the analysis in the original reports.

 Our comments are based on the information contained in the report of the Task Force on Admissions Policy (1992), and on the two reports by Paul Wilson (1991a; 1991b).  We have also examined some of the questions using a sample of data provided by the registrar of the university - all course registrations for the winter semester, 1993.  Wilson used data that was cleaner but also more restrictive, in that he confined his analyses to students who were full-time matriculants.  The two analyses differ in their outcomes in some respects that we will note.

 Our interest came about as a result of work that we were doing with a Faculty of Education committee studying grading standards and practices.  The data provided by the registrar for this purpose gave us an opportunity to examine some of the potential effects of the new admissions policy on the University.

 The Task Force on Admissions Policy tried to answer two questions.

1. At what level should the admission standard be set to admit those with a REASONABLE (our emphasis) chance of completing a degree and exclude those who cannot REASONABLY (our emphasis) be expected to benefit?

2. Does the admission standard, so set, DIFFERENTIALLY AFFECT (our emphasis) students who are female, rural, or from a lower socioeconomic background?  (Task Force, 1992, p. 4)

 The Task Force (1992) made two recommendations.  The first was that the minimum high school average required for admission directly from high school into university should be set at 70.  The second was to change the way the high school average was calculated, by using only three level three high school courses, one each in science, mathematics and language.  The current practice of using a total of five courses in the calculation, including two elective courses, would be discontinued (p. 49).

 Four problems become immediately apparent when the various reports are examined.

1. What it means to "DIFFERENTIALLY AFFECT" is not explained in any of the reports, and is not explored thoroughly in the analysis.

2. The adequacy of the high school average as a predictor of success at Memorial is assumed, without being questioned.

3. The meaning of REASONABLE is very much an issue, and in fact, it is never defined in either the Wilson or the Task Force reports.

4. There are flaws in the arguments that are applied to predict the impact of the change in the standards on student success and on costs.
 

The Admissions Problem

 Deciding on an admissions standard is not a trivial undertaking.  It involves selecting criteria that will predict, in general, the level of attainment to be expected of applicants in university.  A standard is set based on these expectations.  Persons expected to fail, given the current programming of the university, are not admitted.

 The procedure is not straightforward, however, because no set of criteria will be perfect.  None predicts absolutely.  Some persons expected to fail will pass, and some expected to pass will fail.  Predicting success, therefore, becomes a matter of understanding the potential for error in the process, so that the standards that are set can balance incorrect acceptances against incorrect rejections.  The best criteria will be those that minimize the total number of admissions errors.  The best standards will be those that reflect stakeholders' views about the acceptable balance of incorrect acceptance and rejection.  For a given criterion, lowering the standard reduces incorrect rejection, while raising it reduces incorrect acceptance; reducing one kind of error increases the other.  The job of the Task Force, therefore, was twofold: first, to identify a criterion, or set of criteria, that would predict university success with a low rate of total error; and second, to identify and weigh the views of the legitimate stakeholders respecting the balance of error that is most acceptable.
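 The trade-off can be made concrete with a small simulation.  The sketch below is illustrative only: it assumes a bivariate normal relationship between the high school average and first-year performance, and the means, standard deviations, correlation, and 60-point definition of success are all hypothetical values chosen for demonstration, not figures taken from Wilson or the Task Force.

```python
import numpy as np

# Illustrative simulation of the admissions trade-off. All parameters
# below are hypothetical; none are taken from Wilson or the Task Force.
rng = np.random.default_rng(1)
r = 0.50                         # assumed predictor-outcome correlation
mean = [70.0, 62.0]              # hypothetical HS and first-year means
sd = [10.0, 12.0]                # hypothetical standard deviations
cov = [[sd[0]**2, r*sd[0]*sd[1]],
       [r*sd[0]*sd[1], sd[1]**2]]
hs, uni = rng.multivariate_normal(mean, cov, 100_000).T
succeeds = uni >= 60.0           # hypothetical definition of success

for cutoff in (60, 65, 70, 75):
    admitted = hs >= cutoff
    false_accept = np.mean(admitted & ~succeeds)   # admitted, then fails
    false_reject = np.mean(~admitted & succeeds)   # rejected, would have passed
    print(f"cutoff {cutoff}: false accept {false_accept:.1%}, "
          f"false reject {false_reject:.1%}")
```

 Running such a simulation shows the pattern described above: as the cutoff rises, incorrect acceptances fall while incorrect rejections rise, and the balance between the two is a matter of policy rather than statistics.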
 

"Differential Effects" - Bias in Admissions

 At the outset, it should be obvious that in one sense the current standard cannot differentially affect admissions, because all applicants meeting the cutoff will be admitted if they apply.  Wilson and the Task Force examined the question by asking who applied, and whether there were differences in the numbers of men and women, and of rural and urban students, actually admitted.  They discovered that more women than men apply and meet the standard for admission.  They found that the same proportions of rural and urban students, at all levels of ability (as measured by high school average), apply and are admitted.  From this, they concluded that there is no further need to examine gender effects, and that there is no rural/urban bias in admissions.

 Because of the open admission policy, if the minimum standard is met, differential admissions, in the sense actually discussed by the Task Force, is not an issue except insofar as the university is responsible for articulating the standard, and justifying it.  It is not material that more women than men, or for that matter, more men than women, apply and gain admission.  The number and standing of rural students who are admitted is not material to the standard setting question either.  It only becomes material if it can be shown that equally qualified men and women, or equally qualified rural and urban students, get different high school averages, or if it can be shown that their high school averages relate differently to their later performance in university.  Wilson, and the Task Force, did not examine these questions.

 This is the sense in which admissions bias should be considered.  The admissions process should not differentially exclude categories of potentially successful applicants to the university, and it should not differentially include categories of likely unsuccessful applicants.  Any standard of admissions that is set must consider these questions in the selection of both the criteria to be used, and the standard to be set, regardless of the definition of "reasonableness" that is used.  What this means is that the criterion (in this case the high school average) must not predict university outcomes with differing levels of accuracy for differing applicant groups.

 The argument is made by the Task Force that there is no rural/urban or gender bias in the high school average as a predictor of university success.
 

 An examination of both gender and place of residence indicated that there were no statistical (sic) differences between male and female students, and between rural and urban students, on the admission average from grade XII in any of the studies.  Therefore, any change in admission requirements would not disproportionately affect either male or female students, or rural or urban students.  (Task Force, p. 14).


 We can find no valid analysis to support this conclusion, either in the Task Force report or in either of Wilson's studies.  The analysis claimed to support it establishes only that proportionately the same numbers of rural and urban students at different levels of high school average are admitted to the University.  This is not a measure of the ability of the high school average to predict the university performance outcomes of these students.  In fact, it is meaningless in this context, because the analysis takes into account neither the size of the applicant pool - that is, all those who meet the minimum academic requirement for admission - nor the related academic outcomes at university.

 The Task Force (1992) did, in fact, introduce evidence of bias; that is, that the high school average overpredicts the success of rural applicants.  It described the tendency of rural students to be less successful in university than urban students (pp. 14-15).  This finding has two interpretations.  First, if the high school average is not a biased predictor (as argued by the Task Force), and therefore predicts the academic success of rural and urban applicants equally well, then university programs are biased against rural students for reasons that cannot be discerned at the time of admission using the high school average - except that those reasons apply more frequently to rural than to urban students.  Excluding students for unaccountable reasons through the use of an academic criterion is a prescription for a public relations disaster.  In order to deal with a problem that cannot be described at admission time, the academic standard for admission would have been raised, excluding from the University many potentially successful students.  Worse, it may be confidently predicted that the failure rate among the admitted students would not change to the degree anticipated by the Task Force.  The major change would be in the size of the student body.  It is true that the change in standards would affect rural and urban students equally; but a larger proportion of the rejected urban students would have succeeded if they had been admitted.

 The second interpretation is that the high school average is, in fact, a biased predictor of academic success, in which case its use should be reconsidered.  If this interpretation is the correct one, then the data would suggest that urban students are unfairly excluded from university under the admissions procedures, and that this would be exacerbated if the admissions standard is raised in order to exclude those most at risk of failure, the largest proportion of whom are rural.
 

The High School Average as an Admission Criterion

 The adequacy of the high school average as a predictor of university success is an important question in determining the reasonableness of any standard that is set.  It is also related to the question of bias, in that prediction could be better for some groups than for others.  If the prediction is good, there will be more precision, and fewer admissions errors of both types will be made.  Fewer unsuccessful students will be admitted, regardless of the admission standard that is set.  If the prediction is poor, there will be less precision, and more errors of both types will be made.  More unqualified students will be incorrectly admitted.  The problem created in this case, however, is that many more potentially successful students will also be denied admission, because they cannot meet a standard that is only marginally relevant to the potential for success in university.  This reflects our view of the present situation.

 Predicting university outcomes over time, Wilson found higher correlations of the high school average with university grades than we did (see table one, which is reconstructed from Wilson, 1991a, tables 17, 19, 20, 21, and 22).  He did his analysis only overall, on a sample of full-time matriculants.  We broke our analysis down into academic faculties, and used all registrations for the winter 1993 semester.  Both we and Wilson found that these correlations dropped, as one might expect, quite significantly after the first year.  Wilson's first year correlation of 0.69 is a reasonable, but not exceptional, level for measures of ability and achievement.  It certainly is not strong in the sense that it would predict with a high degree of precision.  While it would predict university performance, it would do so with considerable error.  Wilson did not provide information that would enable a calculation of the standard error of prediction.  At best, predictors with this level of validity would be considered marginal for making decisions about individuals, and caution in their use would be advised, preferably in conjunction with other predictors (for example, Hopkins, Stanley, and Hopkins, 1990, p. 365).

 The first-year correlation of 0.50 found by our analysis is lower than that found by Wilson, and would be considered too weak as a predictor to be used without other predictive data.  The reason for the difference in the findings cannot be established with certainty.  It is likely due to the fact that our analysis included all students, both full and part-time, for a single semester, while Wilson computed the overall average for two semesters for full-time students only.
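 The practical meaning of correlations of this size can be expressed as a standard error of estimate, s_y * sqrt(1 - r^2).  The sketch below uses the two first-year correlations reported above; the 12-point standard deviation of first-year averages is a hypothetical value for illustration only.

```python
import math

# Standard error of estimate implied by a predictor-outcome correlation.
# The 12-point SD of first-year averages is hypothetical; only the
# correlations (0.69 from Wilson, 0.50 from our analysis) are reported values.
sd_university = 12.0
for r in (0.69, 0.50):
    se_est = sd_university * math.sqrt(1 - r**2)
    print(f"r = {r:.2f}: prediction error SD = {se_est:.1f} points, "
          f"{1 - r**2:.0%} of variance unexplained")
```

 Even at Wilson's 0.69, roughly half the variance in first-year performance is unexplained, which is the sense in which prediction carries considerable error.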

 Wilson (1991a, p. 123) concluded that at the end of five years, 40% of the variation in the cumulative average could be accounted for by the high school average.  This conclusion was based on the correlation within a group of survivors who had been selected out early by circumstances which were correlated with the high school average.  The really important question, however, is the degree of correlation of the high school average with early performance, and as seems clear in both analyses, this correlation is at best only moderate.

 These findings, both ours and Wilson's, suggest quite strongly that survival of the first, and certainly the second, year puts students into a situation where factors other than high school preparation play a significant role in failure.  Recalculating Wilson's data to show the percent attrition by year within the 60-69 and 70-79 admissions groups, we found that attrition in the so-called "high risk" 60-69 group dropped over time to the same level as in the 70-79 group, contrary to the assertion made by the Task Force that this group never catches up (see table two).  Even more interesting is the observation that attrition in this group begins to drop after the second year.  This trend continues thereafter, while attrition in the other groups increases somewhat after second year.

Table 1
 Correlation of High School Average and Winter 1993 Average
 for All Students with a Known High School Average

*  Wilson reported correlations by year attended in a single longitudinal sample for the 1984 class
 
 

Table 2
 Attrition from a Sample of Full-Time Matriculants
 Admitted in Academic Year 1984-85

Note: Table constructed from recalculations of information in Wilson, 1991b, p. 18
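 The recalculation underlying Table 2 is simple enough to sketch.  The survivor counts below are hypothetical placeholders chosen only to show the qualitative pattern; Wilson's actual figures (1991b, p. 18) would be substituted in practice.

```python
# Convert survivor counts into percent attrition by year *within* each
# admissions group. Counts are hypothetical placeholders, not Wilson's data.
survivors = {
    # students still enrolled at the start of years 1 through 5
    "60-69": [100, 68, 48, 40, 35],
    "70-79": [100, 85, 74, 63, 53],
}

for group, counts in survivors.items():
    attrition = [(counts[i] - counts[i + 1]) / counts[i]
                 for i in range(len(counts) - 1)]
    print(group, "  ".join(f"{a:.0%}" for a in attrition))
```

 With numbers of this shape, within-group attrition in the 60-69 group starts high but falls to, or below, that of the 70-79 group in the later years, which is the pattern we found in the recalculated data.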

 In fact, if it is accepted that survival beyond the second year is an increasing function of university programming, then the actual target of the change in admissions standards is the 65 students in each 1000 admissions who are in the 60-69 admissions category and fail in the first two years.  Most of the 43% total attrition in the university observed by Wilson cannot be attributed to causes predicted by the high school average, and this would not be expected to change with a change in standards unless there were a concomitant change in university programs.  The attrition rate under the new standards can be expected to be 37% of those admitted, not a great deal lower than at present.  The new standards will eliminate about 24.3% of the current matriculant admissions in order to screen out the 6.5% of those admissions who are unqualified.  The remaining 17.8% would either succeed if admitted, or fail at least in part because of university programming.  Furthermore, accepting the argument that failure in the first two years is largely a function of high school preparation, a total of 8.2% of persons accepted under the old regulations would be unqualified even though their high school averages are over 70.  Nonetheless, under the new regulations, these persons would be admitted.
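 The arithmetic behind these percentages can be checked directly.  In the sketch below, the 24.3% share, the 65-per-1000 early failures, and the 43% and 37% attrition figures come from the discussion above; the 62% within-group attrition rate for the 60-69 group is not stated in the reports, and is back-calculated here so that the pieces can be verified.

```python
# Checking the arithmetic of the preceding paragraph. The 0.62 within-group
# attrition rate is a back-calculated assumption, not a reported figure.
share_60_69 = 0.243       # share of matriculant admissions with 60-69 averages
early_fail  = 0.065       # 65 per 1000 admissions: 60-69 admits failing in years 1-2
print(f"excluded although potentially successful: {share_60_69 - early_fail:.1%}")

overall_attrition = 0.43  # Wilson's observed total attrition
attr_60_69 = 0.62         # assumed total attrition within the 60-69 group
attr_remaining = (overall_attrition - share_60_69 * attr_60_69) / (1 - share_60_69)
print(f"expected attrition with the 60-69 group excluded: {attr_remaining:.0%}")
```

 The first figure reproduces the 17.8% of admissions who would be excluded despite their prospects; the second reproduces the roughly 37% attrition to be expected under the new standards.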

 We found that the best prediction was in the science faculty, where the high school average predicted significantly better than for any other faculty, including general studies.  The correlation was at a level to be expected of good admissions criteria.  We found as well that prediction in the arts faculty was uniformly low (correlations of about 0.39), beginning in first year.  These same trends can also be seen in the correlations of high school averages with grades received in a variety of first and second year courses (see table three).  The highest correlations were in the sciences, and the lowest in the arts.  Very few were above 0.50, and the correlations in some of the critical courses - for example, Mathematics 1080, with a correlation of 0.28 - were virtually nonpredictive of success.  These points were not addressed by the Task Force.

Table 3
 Correlations of Marks and High School Average for
 Courses Taken in Winter 1993


 Marginally acceptable prediction is available only for first year, and then only for persons taking sciences.  Note the implications of this for the new application procedures, under which students now apply to a faculty on admission.  Because the high school average predicts performance in the arts so much more poorly than in the sciences, it will be necessary to set the admissions standard higher in the arts in order to attain a level of first year success comparable to that in the sciences.  However, raising the admissions standard in this way will cause the rejection of more students who would succeed in arts than in sciences.
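 Why a weaker predictor forces a higher cutoff for the same expected outcome can be illustrated with a simple regression calculation.  In the sketch below, only the arts correlation of about 0.39 comes from our analysis; the science correlation of 0.55, the target first-year average of 65, and all means and standard deviations are hypothetical values for illustration.

```python
# Regression-based cutoff needed to reach a target predicted outcome.
# All means, SDs, the target, and the science correlation are hypothetical;
# only the 0.39 arts correlation comes from our analysis.
def cutoff_for_target(r, target_uni=65.0, hs_mean=70.0, hs_sd=10.0,
                      uni_mean=62.0, uni_sd=12.0):
    """High school average at which the regression line predicts target_uni."""
    slope = r * uni_sd / hs_sd
    return hs_mean + (target_uni - uni_mean) / slope

print(f"science (r = 0.55): cutoff {cutoff_for_target(0.55):.1f}")
print(f"arts    (r = 0.39): cutoff {cutoff_for_target(0.39):.1f}")
```

 Because the arts regression line is flatter, a higher high school average is required to predict the same above-average outcome, and more students who would in fact succeed in arts are turned away in the process.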

 In actuality, for the majority of students admitted, the high school average appears to predict university performance poorly.  Certainly, its use is open to challenge.  We believe that a thorough examination of the use of the high school average is required before changing the standards for admission.  This would necessarily include an examination of university grading practices as one of the explanations of the poor predictiveness of high school performance.
 
 

Redefining the Admissions Criterion

 The Task Force recommended dropping the two elective high school courses from the calculation of the high school average, presumably because this would make the resulting average more valid as a predictor of university outcomes.  The rationale for this recommendation was seriously flawed.  The procedure assumed facts not in evidence; specifically, that the elective courses in high school overpredict university performance.  The real issue is whether an average calculated without these two courses will predict university performance with less error than one calculated with them included.  This was an empirical question that could easily have been examined by the Task Force, but was not.  It is possible, in fact, that the presumed lower validity of the present calculation is moderated by the improved reliability of an average composed of five, rather than three, high school course grades.  The potentially lower reliability of the proposed three-course average could yield even lower predictive validities than are found now.
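 The reliability point can be illustrated with the Spearman-Brown formula, which gives the reliability of an average of k parallel components, together with the standard result that a predictor's observed validity cannot exceed its true validity times the square root of its reliability.  Both the 0.55 single-course reliability and the 0.80 ceiling validity in the sketch below are hypothetical values chosen for illustration.

```python
import math

# Spearman-Brown: reliability of the mean of k parallel components.
# The 0.55 single-course reliability and 0.80 ceiling validity are hypothetical.
def spearman_brown(r_single, k):
    return k * r_single / (1 + (k - 1) * r_single)

true_validity = 0.80
for k in (5, 3):
    rel = spearman_brown(0.55, k)
    max_observed = true_validity * math.sqrt(rel)   # attenuation ceiling
    print(f"{k}-course average: reliability {rel:.2f}, "
          f"maximum observed validity {max_observed:.2f}")
```

 Under these assumptions, moving from a five-course to a three-course average lowers the reliability of the composite, and with it the ceiling on the predictive validity that can be observed - exactly the possibility the Task Force never examined.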

 In order to justify the recommendation, the committee calculated the adjusted high school average for an extremely small (and unreliable) sample, and determined the numbers who would have met the admissions standards without the two elective courses (Task Force, pp. 10-11).  The conclusion was that about 30% of the sample would not have met the admissions standard of the university.  This adds an interesting twist to the Task Force's recommendation, in that the cumulative effect of the higher admission standard and the revised high school average is never estimated.  It would undoubtedly affect many of those who now fall into the 70-79 range, and disqualify them for admission as well.  This would happen as a result of recommendations without empirical support.
 

Defining Reasonable Success

 Wilson and the Task Force defined a "high risk" group a priori, without establishing criteria for defining the level of risk.  As procedure, this is simply unacceptable.  Surely the criteria should have been established independently of the analysis, with an appropriate standard set afterwards.

 The procedure followed by Wilson (1991a), and later by the Task Force (1992), was to divide an incoming class into three categories of "convenience" (Task Force, p. 7).  Calculations were done within each category, and naturally, the result was the finding that there was more attrition, proportionately, from the 60-69 average group than from the higher average groups.  This finding is expected, given the observation that in first year the correlations of high school and university averages are positive, even though not high.  It is not clear from the presentations by Wilson and the Task Force just how attrition was determined.  The implication, however, is that it was determined by attendance in the following year.  Wilson (1991a) is careful to explain why attrition might be artificially low after first year, and artificially high after fourth year (pp. 132-133).  He does not discuss possible non-academic causes of attrition between first and fourth year, leaving the impression that all attrition is caused by academic deficiency.  The labels of the categories of convenience are transferred by implication into categories of "risk", with the 60-69 admissions category labelled as high risk, and from there into categories of "reasonableness", with high risk being unreasonable.

 There is no discussion of how a high risk student should be defined.  When the Wilson and Task Force reports talk about "weak" students, they are really referring to the "weakest" students in the University regardless of the actual level of performance.  No matter what level the admission standard is set at, and no matter what quality of attainment is actually achieved, the students with the lowest university averages will always be "weak" when defined in this way, and their degrees will always be "mediocre".  The same is true of the "high risk" group.  By the nature of the admissions design, admitted students with the lowest high school average will all be at the "highest risk" relative to their classmates.  Wilson and the Task Force advanced no independent evidence supporting the contention that the performance of some students is weak and that some degrees are mediocre.  Some evidence of this type is needed before appropriate admissions standards can be set.

 The impact of mature and transfer student performance should also be considered.  The Task Force decided to consider only the current matriculant category of admissions, but did not explain why (Task Force, p. 12).  We found that the distribution of course marks of first year general studies students admitted without a high school average (see table four) was very similar to the distribution for matriculants.  About the same proportion in each group had semester averages over 60, even though there were a few differences in the extremes of the distributions.  This is, admittedly, a very coarse comparison.  Future studies should take into account differences in full and part-time participation, and mature as opposed to transfer admissions.  It seems strange to apply a different standard to mature, special, and transfer admissions than to regular matriculants, but this is what could result from the increase in the admissions standard.

 Table 4
 Distribution of Winter 1993 Course Marks of General
 Studies Students with 0-10 Credits:
 Comparison of Those with, and without, a Reported
 High School Average

 The Wilson and Task Force studies were very restrictive in who was included in their analyses, basically considering only those in full-time study who were following traditional routes.  They then argued that for these people, the ideal was degree completion in as short a time as possible.  An argument that emerges from the Task Force study, and that has been extended to new regulations on declaring a major, is that it is in the student's best interest to get in, get on with it, and get out.  Apart from being very paternalistic, this ignores worldwide trends in higher education toward more part-time education over longer periods of time.
 

The Cost of Student Failure

 The economic argument is the only one actually explored by the Task Force in justification of the change in the admissions standards.  It claimed that because a greater proportion of those admitted with high school averages in the 60-69 range withdraw than in any other category of high school marks, an unreasonable burden was being placed on the university budget.  It was suggested that this has impacted on better qualified students by reducing the quality of their education (Task Force, p. 6).  The facts are, however, that in an absolute sense, the Wilson data show that much more of the total attrition comes from the 70-79 average group than from the 60-69 group (see table two).  In terms of the university budget, this is the more important impact.  Although proportionately the fewest failures come from the over-80 group, even that impact is non-trivial because of the size of the group.
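 The distinction between proportional and absolute attrition is easily sketched.  In the example below, the 24.3% share of the 60-69 group and its roughly 62% attrition rate are carried over from the earlier discussion; the shares and attrition rates for the other groups are hypothetical values chosen only to show how a lower-rate but larger group can contribute more total attrition.

```python
# Contribution of each admissions group to total attrition.
# Only the 60-69 figures echo the earlier discussion; the rest are hypothetical.
groups = {               # (share of admissions, within-group attrition rate)
    "60-69": (0.243, 0.62),
    "70-79": (0.500, 0.40),
    "80+":   (0.257, 0.20),
}
for name, (share, rate) in groups.items():
    print(f"{name}: {share * rate:.1%} of all admissions lost")
```

 With numbers of this shape, the 70-79 group accounts for the largest absolute share of attrition even though its rate is far lower, which is precisely the budgetary point that the Task Force's proportional argument obscures.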

 Furthermore, this does not take into account the part-time matriculants, and all the non-matriculants, who are admitted each year.  In fact, in the winter 1993 semester, full-time matriculants (registered for four or more courses) comprised only 52.6% of the total university enrollment (see table five).  The solution to the problem of non-success that has been adopted applies to only about 3.8% of the total enrollment of the university.

 Table 5
 Matriculation Status by Full-Time Study Status, Winter 1993

 The Task Force made the rather curious argument that the savings realized by lowering enrollments would be put to use to improve the quality of instruction.  The assumption was that per student expenditures on instruction would increase, and that this would translate into better programs.  Wilson (1991a, p. 347) noted, however, that faculty are resistant to change, so it might alternatively be supposed that instructional programs and grading practices are not all that likely to change, even in the unlikely event of an infusion of funding, resulting in little or no gain in student performance.  Because of the relatively low relationship of performance in first and second year courses to the high school average noted earlier in the discussion, it might also be predicted that university instructors will note only marginal improvement in the quality of the students, and that overall failure rates of matriculants will likewise show only marginal improvement.

 All of this ignores other reactions to the increase in the admissions standard.  One very real possibility will be an inflation of high school averages, as has happened elsewhere.  This would make the problem of standard setting much more difficult.  Another possibility is that changes now under way in the high school program will improve the quality of the graduate, making the change in university admissions unnecessary.

 The Task Force predicted that greater demands from mature students would emerge if these changes were put in place.  We suggest that the demand will come anyway, apart from any demand generated by persons affected by these admissions changes.  Based on our analysis, the mature applicant will enjoy about the same success rate as the general studies student admitted from high school, so by putting the new admission procedures in place we only defer the problem (if it is a problem) of the student in the 60-69 range.  An interesting speculation is the possibility that raising university admissions standards will divert potential applicants to the community college system, and create pressure to divert public funding from the university to that system as well.

 There is simply no basis for the argument that it costs seven times more to educate a person in the "high risk" group than in the "low risk" group (Task Force, p. 27).  This conclusion cannot be drawn from the data collected.  The real question is the cost of failure of any student.  It has already been shown that many more students fail from the group admitted with averages of 70 or greater than from the 60-69 category, and that additional students fail from other admission categories.  This information should be incorporated into a determination of the total risk of failure in the university, with a prediction of the actual budgetary impact to be realized by reducing the risk in the full-time matriculant category.

 Perhaps most importantly, there is a need to assess the cost of withholding an opportunity from qualified students.  This is not an argument that young people should be allowed an opportunity to sort themselves out, although perhaps such arguments should be put.  It is an argument that the evidence shows the high school average tells us only a little about an applicant's ability to succeed in university (with the modest exceptions noted), and that the loss of the intellectual resources of those from whom opportunity has been incorrectly withheld may significantly outweigh the small savings the University will realize.
 

The Quality of Student Performance

 A recurrent theme in the Wilson and Task Force reports was the need to safeguard and promote "quality" in student performance.  We found it very interesting that the Task Force did not directly address this issue, given its ubiquity in the discussions.  Neither of the questions posed for study mentioned the need to deal with the quality of student performance at university, and the Task Force made only two recommendations, neither of which dealt with the university performance of students.  The Task Force recommendations will influence the quality of Memorial's output only indirectly, by eliminating from admission a few of those with a tendency to perform less well in the University's current programs.  The "quality" of an education at Memorial University will be improved only in the sense that the less qualified have been eliminated.  Better qualified students will not be admitted unless some action is taken to expand the applicant pool.  The new admissions standards will do nothing to promote better performance in those admitted.  The University will send no more of its graduates to graduate school, or to better paying jobs, or to positions of responsibility in the community.  In fact, it may send fewer, because it will now be discouraging the applications of more highly qualified people.  Persons with high school averages close to 70 will now know that they are the "high risk" group.

 The Task Force (1992) said that the change in admissions standards would send a number of messages.  Especially significant were the following:
 

 While "elite" or highly selective universities may set their standards based on who they "wish" to serve,"open" universities should set their standards on their notion of who can benefit (p. 50).


and then:
 

 Access to Memorial University is not a right conferred with residency in the province.  It is a privilege extended at no small public cost to those who have the necessary ability demonstrated through achievement in high school, and who can benefit from a university education (p. 52).


 The question of the quality of student performance is turned into a question of benefit to the student (and presumably, the community).  Moreover, it is clear that the Task Force reserved to the University the right to determine what it means to benefit from its programs, despite the fact that the public, and students, are bearing the cost.  This means that a very clear understanding is required of what those benefits are, of the way that university programming promotes them in terms of outcomes for students, and of the impact that the admissions procedures have on these outcomes.  The Task Force, unfortunately, did virtually nothing to study these questions.
 

Conclusions and Recommendations

 The Task Force on the university admissions policy posed two questions which, adequately defined and addressed, should have helped considerably in drawing sound conclusions about an appropriate admissions policy.  Unfortunately, in addition to failing to provide the needed definitions, the Task Force also failed to address questions about the adequacy of the high school average as a criterion for predicting success in university, and about the appropriate policy to be pursued concerning the admission of mature, transfer, and other special applicants.

 Both the Wilson and the Task Force reports use inappropriate analyses, and interpret some of their analyses incorrectly.  They use inaccurate and unjustified labels which mislead the reader concerning the interpretation of the findings.  In general, the recommendations of the Task Force were inadequately supported by the reports.

 The recommendations may be needed.  There is general support for the improvement of performance in education at all levels.  One way to do that in the university is to admit more highly qualified students.  This would provoke the least disruption in the way instruction and evaluation are conducted in the university.  In order to be effective, however, admission has to be on the basis of criteria with a strong relationship to later performance, and these criteria should not be biased in their application.  The new admissions standards will not result in more highly qualified students being admitted, however.  They will simply apply to a truncated applicant pool and will not expand the pool of qualified applicants in any way.

 There is strong evidence of the inadequacy of the high school average to predict early university performance without a high level of error.  This is particularly true for students in the arts faculty, and for those taking basic courses in mathematics.  The reason for this is not apparent, but the quality of both the high school average and university evaluation practices could be at issue.

 There is some evidence to suggest that the admissions procedures may be biased against urban students.  At the same time, rural students have been less successful in university, and more information is required to determine whether the admissions procedures should be invoked to deal with this problem.

 The study begun by Wilson and the Task Force should be reappraised and expanded to examine the adequacy of the high school average as an admission standard.  This should include empirical work to establish independent information on the question.  The stakeholders in the decision should be clearly identified, and their input respecting the appropriate balance of error in admissions should be sought.

 Studies of the performance of part-time and mature students should be conducted, with a view to the establishment of appropriate admissions criteria where needed.

 A study of the impact of university programming on student performance, and of its relationship to the admissions process, should be undertaken.  Study should also be undertaken of university evaluation practices and their impact on the utility of the high school average.

REFERENCES

 Hopkins, K., Stanley, J.C., and Hopkins, B.R. (1990).  Educational and psychological measurement.  Englewood Cliffs, New Jersey:  Prentice-Hall.

 Report of the Task Force on Admissions Policy (1992).  Memorial University of Newfoundland.

 Wilson, B. Paul (1991a).  Access to and success in programs at Memorial University - An examination of the importance of isolation as a determinant.  Unpublished thesis, Department of Education, University of Toronto.

 Wilson, B. Paul (1991b).  An analysis of the May 1991 convocation at Memorial University. Unpublished paper, Memorial University of Newfoundland.