Evaluating Advisor Effectiveness

Evaluating advisor effectiveness sends a strong and explicit message to all academic advisors that advising is an important responsibility; conversely, failure to do so tacitly communicates the message that this student service is not valued by the institution. As Linda Darling-Hammond, higher education research specialist for the Rand Corporation, once said: "If there's one thing social science research has found consistently and unambiguously . . . it's that people will do more of whatever they are evaluated on doing. What is measured will increase, and what is not measured will decrease. That's why assessment is such a powerful activity. It can not only measure, but change reality" (quoted in Hutchings & Marchese, 1990). Moreover, the particular items that comprise an evaluation instrument embody the specific practices and concrete behaviors which we hope that those being evaluated will strive for, or aspire to; thus the instrument can function not only as a measure of reality (what is), but also as a prompt or stimulus for promoting closer approximations to the ideal (what should be).
The issue of advisor evaluation also has strong implications for a topic of paramount concern for many members of this Listserv: student retention. Furthermore, advisor evaluation is inextricably related to other important advising issues, such as advisor recruitment and selection; advisor orientation, training, and development; and advisor recognition/reward systems. Since your request was directed specifically (and urgently) at information on advisor evaluation, I'll focus this correspondence on that topic only. In a subsequent correspondence, I'll follow up with information on the aforementioned issues that are interrelated with advisor evaluation and place it in a larger, more meaningful context.
- Joe -

RATIONALE FOR THE INSTRUMENT'S CONTENT
The specific items that comprise the content of the instrument were derived from published research on the characteristics and qualities of advisors that students seek or desire. Research repeatedly points to the conclusion that students value most highly academic advisors who are: (1) available/accessible, (2) knowledgeable/helpful, (3) personable/approachable, and (4) counselors/mentors (Winston, Ender, & Miller, 1982; Winston, Miller, Ender, Grites, & Associates, 1984; Frost, 1991; Gordon, Habley, & Associates, 2000).
I attempted to define each one of these general "core" qualities in terms of advisor roles or responsibilities, as follows:
1) Available/Accessible: An advisor is someone who effectively communicates and interacts with students outside the classroom, and does so less formally, more frequently, and on a more long-term basis than course instructors. Students' instructors will vary from term to term, but an academic advisor is the one institutional representative with whom the student can have continuous contact and an ongoing relationship that may endure throughout the college experience.
2) Knowledgeable/Helpful: An advisor is an effective consultant, which is a role that embraces the following functions: (a) Resource Agent, who provides accurate and timely information about the curriculum, co-curriculum, college policies, and administrative procedures. (b) Interpreter, who helps students make sense of, and appreciate, the college mission, curricular requirements (e.g., the meaning, value, and purpose of general education), and co-curricular experiences (e.g., the importance of out-of-class experiences for student learning and development). (c) Liaison/Referral Agent, who connects students with key academic support and student development services. (d) Teacher/Educator, who helps students gain self-insight into their interests, aptitudes, and values; who enables students to see the "connection" between their academic experience and their future life plans; and who promotes students' cognitive skills in problem-solving, decision-making, and critical thinking with respect to present and future educational choices.
3) Personable/Approachable: An advisor is a humanizing or personalizing agent whom students feel comfortable seeking out, who knows students by name, and who takes a personal interest in individual students' experiences, progress, and development.
4) Counselor/Mentor: An advisor is an advocate to whom students can turn for advice, counsel, guidance, or direction; who listens actively and empathically; and who responds to students in a non-judgmental manner, treating them as clients rather than as subordinates to be evaluated (and graded).
These four advisor roles were used to generate related clusters of advisor characteristics or behaviors that represent the content (rating items) in the attached instrument.
While this review and synthesis of advisor roles proved useful for guiding construction of specific items on the advisor evaluation instrument, the scholarly literature on academic advising strongly suggests that advisor evaluation should originate with, and be driven by, a clear mission statement for the program that reflects a consensual or communal understanding of the overarching meaning and purpose of academic advisement (White, 2000). This statement of program purpose should be consistent with, or connect with, the college mission statement, thus underscoring the centrality of the program and its pivotal role in the realization of broader institutional goals. Kuh, Schuh, Whitt, & Associates (1991) discovered that this connection between program purpose and institutional mission characterizes how programs are delivered at "involving" colleges, i.e., colleges with a strong track record of actively engaging students in the college experience: "Policies and practices at Involving Colleges are effective because they are mission-driven and are constantly evaluated to assess their contributions to educational purposes" (p. 156). The purpose statement for the academic advisement program should also serve as the initial springboard or launching pad that fuels and focuses the development of an effective evaluation plan.
Our college has not yet constructed a formal statement that explicitly captures the essential purpose and priorities of our advising program. This is a critical oversight which needs to be addressed, because individual advisors seem to have different conceptions and philosophies about what advising should be, and their advising practices tend to vary in nature (and quality) depending on what particular advising philosophy or viewpoint they hold. In fact, research indicates that there is fairly high consistency between advisors' stated philosophy of advising and their actual advising behaviors or practices (Daller, Creamer, & Creamer, cited in Creamer & Scott, 2000).
Along with the proposed instrument, I am going to recommend to our advisement task force that the following statements, culled from the scholarly literature on academic advising, serve as models or heuristics to help guide and shape the construction of a mission statement for our advising program.

"Developmental academic advising is . . . a systematic process based on a close student-advisor relationship intended to aid students in achieving educational, career, and personal goals through the utilization of the full range of institutional and community resources. It both stimulates and supports students in their quest for an enriched quality of life" (Winston, Miller, Ender, Grites, & Associates, 1984, p. 538).

"The formation of relationships that assure that at least one educator has close enough contact with each student to assess and influence the quality of that student's educational experience is realistic only through a systematic process, such as an academic advising program. It is unrealistic to expect each instructor, even with small classes, to form personal relationships of sufficient duration and depth with each student in his or her class to accomplish this" (Winston, Miller, Ender, Grites, & Associates, 1984, p. 538).

"Developmental academic advising is not primarily an administrative function, not obtaining a signature to schedule classes, not a conference held once a term, not a paper relationship, not supplementary to the educational process, [and] not synonymous with faculty member" (Ender, 1983, p. 10).

"Academic advising can be understood best and more easily reconceptualized if the process of academic advising and the scheduling of classes and registration are separated. Class scheduling should not be confused with educational planning. Developmental academic advising becomes a more realistic goal when separated from class scheduling because advising can then go on all during the academic year, not just during the few weeks prior to registration each new term. Advising programs, however, that emphasize registration and record keeping, while neglecting attention to students' educational and personal experiences in the institution, are missing an excellent opportunity to influence directly and immediately the quality of students' education and are also highly inefficient, since they are most likely employing highly educated (expensive) personnel who are performing essentially clerical tasks" (Winston, Miller, Ender, Grites, & Associates, 1984, p. 542).

RATIONALE FOR THE STRUCTURE & ADMINISTRATION OF THE INSTRUMENT

1. We elected to develop an internal ("home grown") instrument rather than import an external ("store bought") standardized instrument from an assessment service or evaluation center. For the record, it should be noted that there are commercially developed instruments available for evaluation of academic advising, e.g., the ACT Survey of Academic Advising, The Academic Advising Inventory (Winston & Sandor), and The Developmental Advising Inventory (Dickson & Thayer). For a review of standardized instruments designed to evaluate academic advising, see: Srebnik (1988), NACADA Journal, 8(1), 52-62. Also, for an annotated bibliography on advising evaluation and assessment, see the following website sponsored by the National Clearinghouse for Academic Advising, Ohio State University, and the National Academic Advising Association: www.uvc-ohio-state.edu/chouse.html
Standardized instruments do come with the advantage of having already-established reliability and validity, as well as the availability of norms that allow for cross-institutional comparisons. However, we felt that our college had unique, campus-specific concerns and objectives that would be best assessed via locally developed questions and an instrument designed to elicit more qualitative data (written responses) than what is typically generated by standardized inventories.

2. Instead of using the usual Likert scale with four options (strongly agree, agree, disagree, strongly disagree), the proposed instrument has six rating options. The wider range of numerical options was included with the hope that the resulting mean (average) ratings for individual items would show a wider spread in absolute size or value. The 6-point scale may help us circumvent a problem we've previously encountered with 4-point rating scales used for instructor evaluations, which often yielded mean ratings for separate items that varied so little in absolute size that instructors tended to discount the small mean differences between items as being insignificant and inconsequential. For example, with the 4-option rating scale, an instructor might receive mean ratings for different items on the instrument that ranged from a low of 2.8 to a high of 3.3. Such a narrow range of differences in mean ratings led many instructors to perceive these minuscule differences simply as random "error variance" or failure on the part of our students to respond in a discerning or discriminating manner.
Hopefully, the expanded 6-point scale will result in larger mean differences across individual items, thus providing more discriminating data. In fact, research on student evaluations of course instructors does suggest that rating scales with fewer than five choices tend to reduce the instrument's ability to discriminate between satisfied and dissatisfied respondents, while rating scales with more than seven choices do not add to the instrument's discriminability (Cashin, 1990).
3. I am also going to suggest that, in addition to providing advisors with mean scores per item, they also be provided with the percentage of respondents who selected each of the six options. This statistic will reveal how student responses were distributed across the six options, thus providing advisors with potentially useful feedback about the degree of consistency (consensus) or variation (disagreement) among their advisees' ratings for each item on the instrument.
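To make these two statistics concrete, here is a minimal sketch (the ratings are hypothetical, and a 6-point scale coded 1-6 is assumed) of how an item's mean rating and its percentage distribution across the six options might be computed:

```python
from collections import Counter

# Hypothetical advisee ratings (1-6) for a single instrument item.
item_ratings = [5, 6, 4, 5, 6, 3, 5, 6, 6, 4]

# Mean rating for the item, as currently reported to advisors.
mean_rating = sum(item_ratings) / len(item_ratings)
print(f"Mean rating: {mean_rating:.2f}")

# Percentage of respondents who selected each of the six options;
# a tight cluster signals consensus, a wide spread signals disagreement.
counts = Counter(item_ratings)
for option in range(1, 7):
    pct = 100 * counts.get(option, 0) / len(item_ratings)
    print(f"Option {option}: {pct:.0f}% of respondents")
```

Two advisors with identical mean ratings could thus still receive very different distributional feedback, e.g., a unanimous cluster of 4s versus a split between 2s and 6s.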

4. The instructions for the instrument strongly emphasize the need for and importance of students' written comments. Research on student evaluations of course instructors indicates that this type of feedback provides the most useful information for performance improvement (Seldin, 1992). (The more I think about it, the more convinced I am that the many years of research on student evaluations of course instructors are directly applicable to the valid construction and administration of advisor-evaluation instruments. For a review of research and practice with respect to instructor evaluations, much of which can be applied to advisor evaluations, go to the following site: http://www.Brevard.edu/fyc/listserv/index/htm, scroll down to "Listserv Remarks," and click "Joe Cuseo, 10-20-00," Student Evaluations of College Courses.)

5. Appearing beneath each item (statement) to be rated, there is some empty space preceded by the prompt, "Reason/explanation for rating: . . . ." This prompt is included because we have found that inclusion of such item-specific prompts on our instructor-evaluation instrument tends to increase the quantity of written comments students provide, as well as their quality; i.e., we receive comments that are more focused and concrete because they are anchored to a specific item (characteristic or behavior), as opposed to the traditional practice of soliciting written comments solely at the end of the instrument in response to a generic or global prompt, such as: "Final Comments?"
Furthermore, the opportunity to provide a written response to each item allows students to justify their ratings, and allows us to gain some insight into why the rating was given.

6. The instrument is relatively short, containing only 12 advisor-evaluation items: four 3-item clusters relating to the four aforementioned qualities of highly valued advisors. It has been my experience that the longer an instrument is (i.e., the more reading time it takes to complete it), the less time students devote to writing and, consequently, the fewer useful comments we receive.

7. Toward the end of the instrument, students are asked to self-assess their own effort and effectiveness as advisees. This portion of the instrument is intended to (a) raise the consciousness of students that they also need to take personal responsibility in the advisement process, and (b) assure advisors that any evaluation of their effectiveness depends, at least in part, on the conscientiousness and cooperation of their advisees. (This, in turn, may also serve to defuse the amount of threat or defensiveness experienced by advisors about being evaluated, a feeling that almost invariably accompanies any type of performance evaluation.)

8. Before formally adopting the proposed instrument, we will have students review it (probably in a focus-group format) to provide us with feedback about its clarity and comprehensiveness (e.g., whether we have failed to include any critical questions about advisors or the advising process). I'm also toying with the idea of adding an open-ended question to the end of the instrument that would ask students to assess the assessment instrument. (We could even adopt a pretentiously profound term for this process and refer to it as "meta-assessment": the process of assessing the assessment by the assessee [sic].)
Ideally, I would like to see if we can develop an instrument on which students rate items not only in terms of perceived satisfaction or effectiveness, but also in terms of perceived need or importance. In other words, students would give us two ratings for each item on the instrument: (a) a rating of how satisfied they are with that item, and (b) a rating of how important that item is to them. (I'm not sure how I would structure the instrument to efficiently obtain both sets of ratings. Perhaps the item statements could be centered in the middle of the page, with a "satisfaction" rating scale to the left of the item and an "importance" scale to the right of the same item.) Noel and Levitz (retention researchers and consultants) have used this double-rating practice to identify institutional areas with large "performance gaps": items with large differences between students' rating of importance and their rating of satisfaction (i.e., large positive scores obtained when the satisfaction rating for an item is subtracted from its importance rating). If this strategy were applied to advisor evaluation, we would be most interested in those items that reveal high student ratings on importance but low ratings on satisfaction. These items would represent high-priority student needs that students feel are not being met. As such, they represent identifiable areas or target zones for improving academic advising, which, of course, is the ultimate purpose of assessment.
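As a concrete sketch of this gap calculation (the item wordings and ratings below are hypothetical), each item's satisfaction rating is subtracted from its importance rating, and the items with the largest positive gaps surface as high-priority, unmet needs:

```python
# Hypothetical mean importance and satisfaction ratings (6-point scale),
# stored as (importance, satisfaction) pairs per item.
items = {
    "Advisor is available when I need help":            (5.7, 3.2),
    "Advisor knows curricular requirements":            (5.8, 5.3),
    "Advisor takes a personal interest in my progress": (5.4, 3.9),
}

# Performance gap = importance - satisfaction; large positive gaps
# flag high-priority student needs that are not being met.
gaps = {item: imp - sat for item, (imp, sat) in items.items()}

# List items from largest to smallest gap for improvement planning.
for item, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{gap:+.1f}  {item}")
```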
Applying this satisfaction-importance rating scheme to the advisor evaluation instrument would, in effect, enable it to co-function as both a student satisfaction survey and a student needs assessment survey. This would be a big plus, because I think we need to figure out more ways to get data on student needs. It seems to me that, historically, institutional research in higher education has overdosed on satisfaction surveys, which are designed to assess how students feel about what we are doing; in contrast, we have given comparatively short shrift to assessing what they (our students) need and want from us. It could be argued that satisfaction surveys represent an institution-centered (or egocentric) form of assessment, while student needs assessment is a learner-centered form of assessment that resonates well with the new "learning paradigm."

9. Before formally adopting the proposed instrument, feedback will be solicited from all advisors with respect to its content and structure. This broad-based feedback should help us fine-tune the instrument and redress any shortcomings or oversights. Perhaps even more importantly, this solicitation of feedback from advisors will give them an opportunity to provide input and gain some sense of personal control or ownership of the evaluation process. We would like advisors to feel that evaluation is something that is being done with or for them, rather than to them. I'm even beginning to think that "evaluation" is not the best term to use because it tends to immediately raise a red flag in the minds of faculty (or, at least, our faculty). Although the terms "evaluation" and "assessment" tend to be used interchangeably by some scholars and differentially by others, I think that assessment is a less threatening term which more accurately captures the primary purpose of the process: to gather feedback to be used for personal and programmatic improvement. Etymologically, assessment derives from a root word meaning to "sit beside" and "assist," whereas evaluation derives from the same root as "value," which connotes appraisal and judgment of worth. (An added bonus for using the term assessment is that it can be combined with "academic advisement" to form the phrase "assessment of academic advisement": a triple alliteration with a rhythm-and-rhyming ring to it that should appeal to faculty with literary leanings and/or poetic sensibilities.)
Consistent with this idea of assessment for improvement rather than evaluation for judgment, I recommend that advisors be given the opportunity to assess the administrative support they receive for advising, e.g., the effectiveness of the orientation, training, and development they have received; the usefulness of support materials or technological tools provided for them; the viability of the advisee-advisor ratio; and the effectiveness of administrative policies and procedures. (Note: At the end of the attached advisor-evaluation instrument, I've appended an instrument that our college has used to assess advisor perceptions of our advisement process/program and the effectiveness of administrative support.)
Allowing advisors to assess administrative support for advising would serve the dual purpose of (a) providing feedback to the director/coordinator of the advising program that may be used for program improvement, and (b) actively involving advisors in the assessment process, allowing them the equal opportunity to take on the active role of assessor, rather than restricting them exclusively to the role of passive recipient (or object) of assessment.
Advisors can also become more actively involved in the assessment process if they engage in self-assessment. They could do this in narrative form, perhaps as part of an advising portfolio that could include a personal statement of advising philosophy, advising strategies employed, advisor-development activities, etc. One potentially useful component of self-assessment would be for advisors to respond to their student evaluations. For instance, advisors might give their interpretations or explanations for ratings and comments received from their advisees, their thoughts about why they received high evaluations with respect to certain advising functions, and how they might address or redress areas in which they were perceived least favorably.
One interesting idea suggested in the scholarly literature on instructor evaluations that may be adopted as a form of advisor self-assessment is to have advisors complete the same evaluation instrument as their advisees, responding to it as they think their advisees will. Consistencies and discrepancies that emerge between advisor and student evaluations could provide advisors with valuable feedback for self-assessment. In particular, mismatches between advisor-advisee perceptions may create cognitive "dissonance" or "disequilibrium" in the minds of advisors that could stimulate productive changes in advising attitudes and behavior.
In addition to student assessment and advisor self-assessment, peer assessment could also be used in conjunction with the advisor-evaluation instrument. For instance, teams of advisors could agree to review each other's evaluations in a collegial fashion, for the mutual purpose of improving professional performance. Or, peer assessment could be conducted in an anonymous or confidential manner, whereby each advisor receives the student evaluations of an anonymous colleague and provides that colleague with constructive feedback; at the same time, this colleague receives student evaluations from an anonymous colleague and provides that advisor with feedback. Thus, each advisor receives peer feedback from, and provides feedback to, an advising colleague.
Research on faculty development strongly supports the effectiveness of peer feedback and collegial dialogue for promoting change in instructional behavior (Eble & McKeachie, 1985). Thus, it may be reasonable to expect that peer assessment and collegial feedback would work equally well with faculty advisors. The effectiveness of peer assessment is probably due to the fact that feedback from colleagues is perceived to be less threatening and more credible than feedback coming from a superior or outside consultant, because it's feedback coming from someone "in the trenches," performing the same duties with the same challenges and constraints as the person being evaluated. However, despite the documented effectiveness of peer assessment and dialogue for instructional development of faculty, national survey research indicates that peer assessment is the least frequently used method of advisor evaluation (Habley, 1988).

In addition to student, peer, and self-assessment, the program director also has a role to play in the assessment process. Frost (1991) notes that comprehensive evaluation of an advisement program includes feedback from advising administrators, as well as from students and individual advisors. It is the program director who is uniquely positioned to review all individual advisor evaluations and see the "big picture," i.e., advisement as a total program. By detecting recurrent themes or cross-advisor trends that emerge when individual advisor evaluations are aggregated and viewed as a composite, the director can obtain a panoramic perspective of the program's overall effectiveness, and move advisor evaluations beyond the narrow scope of personnel evaluation to view them through the wider lens of program evaluation. For instance, the director could identify those items that tend to receive the lowest overall student ratings (aggregated across all advisors) and use these items to prioritize and focus discussion of program-improvement strategies with advisors. This discussion could be conducted in a collegial, non-threatening fashion by framing advising-improvement questions in terms of what "we" can do collectively, as a team, to improve the effectiveness of "our" program in areas where students perceive it least favorably.
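As a minimal sketch of this aggregation step (the advisor names and ratings below are hypothetical), the director might pool each item's mean rating across all advisors and surface the lowest-rated items as program-level discussion priorities:

```python
# Hypothetical mean ratings per instrument item, keyed by advisor.
advisor_means = {
    "Advisor A": {1: 5.1, 2: 4.0, 3: 3.2, 4: 5.5},
    "Advisor B": {1: 4.8, 2: 3.6, 3: 3.0, 4: 5.2},
    "Advisor C": {1: 5.3, 2: 4.2, 3: 2.9, 4: 5.6},
}

# Aggregate each item across advisors to view advisement as a total program.
item_numbers = next(iter(advisor_means.values())).keys()
program_means = {
    item: sum(means[item] for means in advisor_means.values()) / len(advisor_means)
    for item in item_numbers
}

# The lowest-rated items become priorities for collegial program improvement.
for item, mean in sorted(program_means.items(), key=lambda kv: kv[1]):
    print(f"Item {item}: program-wide mean {mean:.2f}")
```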
The program director is also well positioned to identify "critical incidents" that may serve as qualitative data for troubleshooting problems or weaknesses in the advising program (e.g., common sources or causes of student complaints and grievances, and recurrent reasons given by students for seeking a change of advisors). Patterns emerging from such incidents may yield diagnostic data that can be used to target and focus advising-improvement efforts.

References

Cashin, W. E. (1990). Students do rate different academic fields differently. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice (pp. 113-121). New Directions for Teaching and Learning, No. 43. San Francisco: Jossey-Bass.

Creamer, E. C., & Scott, D. W. (2000). Assessing individual advisor effectiveness. In V. N. Gordon, W. R. Habley, & Associates, Academic advising: A comprehensive handbook (pp. 339-348). San Francisco: Jossey-Bass.

Ender, S. C. (1983). Assisting high academic-risk athletes: Recommendations for the academic advisor. NACADA Journal (October), 1-10.

Frost, S. H. (1991). Academic advising for student success: A system of shared responsibility. ASHE-ERIC Higher Education Report No. 3. Washington, DC: The George Washington University School of Education and Human Development.

Gordon, V. N., Habley, W. R., & Associates (2000). Academic advising: A comprehensive handbook. San Francisco: Jossey-Bass.

Habley, W. R. (1988). The status and future of academic advising: Problems and promise. Iowa City, IA: The ACT National Center for the Advancement of Educational Practices.

Habley, W. R., & Morales, R. H. (1998). Current practices in academic advising: Final report on ACT's fifth national survey of academic advising. National Academic Advising Association Monograph Series, No. 6. Manhattan, KS: National Academic Advising Association.

Hutchings, P., & Marchese, T. (1990). Watching assessment: Questions, stories, prospects. Change, 22(5), 12-43.

Kuh, G., Schuh, J., Whitt, E., & Associates (1991). Involving colleges. San Francisco: Jossey-Bass.

Seldin, P. (1992). Evaluating teaching: New lessons learned. Keynote address delivered at the "Evaluating Teaching: More than a Grade" conference, sponsored by the Undergraduate Teaching Improvement Council, University of Wisconsin System. Madison, WI: University of Wisconsin.

White, E. R. (2000). Developing mission, goals, and objectives for the advising program. In V. N. Gordon, W. R. Habley, & Associates, Academic advising: A comprehensive handbook (pp. 180-191). San Francisco: Jossey-Bass.

Winston, R. B., Ender, S. C., & Miller, T. K. (Eds.) (1982). Developmental approaches to academic advising. New Directions for Student Services, No. 17. San Francisco: Jossey-Bass.

Winston, R. B., Miller, T. K., Ender, S. C., Grites, T. J., & Associates (1984). Developmental academic advising. San Francisco: Jossey-Bass.

