
Monday, May 21, 2012

Extreme Testing


My older daughter came home last week after taking a New York State ELA (English Language Arts) statewide exam.  Normally after she takes a test, she mentions whether it was easy or hard and what areas, if any, gave her difficulty.  This time it was different.  She complained about a reading passage concerning a race between a pineapple (which did not move) and a hare.  She said the passage made little sense and that the questions and answers made even less sense.  I actually thought she was overreacting until I saw a copy of the passage and the questions in the newspapers and on the internet.  The passage was inane, and the questions had no logical answers.  Ultimately New York State agreed and will not count this question in the scoring of the exam.

My younger daughter’s elementary school administered the New York State ELA over three days, with a ninety-minute exam each day.  The exam also started the day spring vacation ended — not even a school day in between for the kids to adjust to being back in school.  Why does an exam like this need three days, at ninety minutes a day?  There was also a question on the math assessment for 4th graders that had two correct answers, as well as an 8th grade math assessment question that had no correct answer.  Not good on any count.

Exams are necessary.  Evaluating students is necessary.  We need to be able to measure a student’s learning on a regular basis and use the results to continuously enhance the education that is provided.  But there are very substantial costs if an exam is nonsensical in part or overly stress inducing.  The first and most obvious cost is the loss of confidence, by both parents and educators, in the government entity that oversees the exam process.  Can bad questions really measure learning?  Can questions without correct answers, or with multiple correct answers, really measure learning?  Or, when a student is told to select the correct answer, will such questions just confuse the student?  And will three days of testing of 3rd, 4th, and 5th graders measure learning, or instead seriously stress out the students?  How many of us, when we were in third grade, would have had the sitting power, patience, and perseverance to handle an exam that long for that many days?

But the real cost is a potential loss of the love of learning on the part of our kids.  What we all want from education is not only a knowledge base but also a respect for the importance of lifetime learning.  If the questions and answers make little sense, if the exam stresses out young kids, and if the end result is a dislike for school and for education, we have done a huge disservice.  This is a time for corrective action.  We need to rethink some of our exams and, even more importantly, some of our exam philosophies.

Monday, April 9, 2012

Evaluation


I read with interest the recent article in Inside Higher Ed regarding the retiring President of Westminster College preparing for retirement by compiling an eportfolio.  President Bassis prepared the eportfolio both “to reflect on his 41 years in higher education…but also as a way to communicate to students and faculty members his steadfast belief in electronic portfolios as a method of cataloging and assessing student work.”

I am a long-time believer in evaluation, both formative and summative.  In teaching, I think the faculty member, the administration, and the students all benefit from these evaluation programs.  The faculty member especially benefits from formative evaluation, while faculty members, students, and administrators all potentially benefit from the information available in summative evaluations.

There is also benefit in the evaluation of administrators.  These evaluations should take place regularly and, if done correctly, should have the same beneficial impact as faculty evaluations.  But there are complications in the evaluation of administrators that do not arise in student evaluations.  In any well-run student evaluation program, the evaluators are the students in the class.  They are there, in class, on a regular basis, they interact with the faculty member, and consequently they have the contact and the information necessary to render an informed judgment.  In evaluating an administrator, especially a non-academic administrator, who has the information to provide a valid assessment?  Should the faculty evaluate these non-academic administrators?  Clearly, faculty are an intelligent and sophisticated constituency.  But nevertheless, are they in a position to provide these evaluations?  Can they, for example, evaluate the effectiveness of a vice president for technology, a vice president for admissions, or a vice president for finance?  No question, faculty can provide very accurate assessments of the campus’ academic technology, and no question they can comment on the credentials of the incoming class, but are those assessments or comments reflective of the person heading the area?  In technology, if the resources are not there, is it the VP who should be blamed?  If the quality of the incoming class has increased less quickly than expected, is that the fault of the VP in charge of the area, or could it be greater tuition discounting on the part of other institutions?  Even in cases such as these, valuable evaluation can still take place, and faculty can still play a lead role in that process.  Faculty can evaluate academic technology; faculty can evaluate the quality of the class; but they cannot necessarily evaluate the single individual heading a particular area.

For the evaluation of department chairs, faculty have a perfect vantage point from which to assess the leadership and administrative ability of the chair.  Faculty in a particular school or college are also very well placed to evaluate the dean, though depending on the size of the school or college there may be more or less direct involvement with the dean.  (As an undergraduate, I was an active student government type, and I remember a number of my professors commenting that I had more contact with the dean than they had.)  As the provost, I have an excellent vantage point for the evaluation of deans, as do department chairs.  Deans also have an excellent vantage point for the assessment of the provost, as do a significant number of chairs and a significant number of faculty.  And, of course, the president is also ideally positioned to evaluate the provost and other senior administrators.

But President Bassis, by his initiative in compiling an eportfolio, may have helped many of us to further strengthen assessment and evaluation.  When a faculty member stands for tenure or promotion or applies for a sabbatical, that faculty member provides a portfolio (e or regular) that helps in the assessment of that person’s work.  In evaluating an administrator (chair, dean, provost, or president), or in evaluating an area, a portfolio should also be compiled by the person being evaluated, and the process should encompass that portfolio both as an important statement of self-evaluation and as important data for evaluation by other constituencies.

Monday, March 28, 2011

Evaluation

Part of what attracted me to higher education in the first place, and still attracts me, is the shared governance environment.  Economics was the discipline that excited me, and higher education was the environment where I felt most comfortable and most productive.  And in my experience, shared governance works well in most places and in most cases.  My first experiences were in the area of curriculum, beginning with the department’s efforts to fine-tune the economics major and subsequently extending to the committee that reviewed the undergraduate curriculum.  On the department level and on the university level, the process went well.  Faculty working with department chairs or deans scrutinized the curriculum, updated courses, and reviewed requirements.

If you look at curriculum, if you look at standards, if you look at much of what happens in the academic area, we have a model of highly educated and highly intelligent individuals working together.  But the shared governance process isn’t perfect, and there are areas where the process is significantly less effective.  Perhaps the area of greatest weakness is faculty evaluating other faculty.  More than a few faculty are uncomfortable making any negative comments about other faculty, even when those comments are fully justified and reflect their actual opinion.  In one of the first personnel cases that I had to deal with as dean, a department personnel committee chair told me that he and his committee had recommended positively on a personnel matter (and made only positive statements) because the committee knew that I would recommend against.  They wanted to be the “good” person, and they were more than comfortable with the dean being the “bad” person.  And when the person I had just recommended against came in to see me, his first point was: how could I have found fault with his record when all his colleagues in the department, in the same field, had recommended positively?  Not a comfortable moment.

More than a few times, faculty have come to see me to alert me that so-and-so is a “problem” for x reason and should not be (fill in the blank) reappointed, tenured, promoted, selected as chair, etc.  But the individuals talking to me are also candid in saying that they do not want their opinions made public, because they have to work closely with that person, or have the office next door, or that person will be reviewing them next year.  I always indicate that it is much, much harder to follow up on a concern when the person raising the issue does not want to be identified in any way.  (In certain cases, such as allegations of sexual harassment, I also indicate that I am required to report the allegation and cannot agree to withhold the identity of the person who has brought the matter to my attention.)

In the vast majority of cases, the personnel process works well.  Where it doesn’t, everyone is done a disservice.  We are not providing the person being evaluated with the objective feedback necessary to resolve outstanding issues that can interfere with that person’s success.  We are not providing the university with the complete, accurate picture that allows uncompromised, merit-based decision making in areas where the consequences of bad decisions are often long term.  In this era of expanding outcomes assessment for curricular matters, we also need to undertake an outcomes assessment of shared governance and the evaluation process.  Overall, I am sure we will get high marks, but I am equally sure there is substantial room for improvement.