Monday, February 8, 2010

Assessing Assessment

I have just made my airline reservations to Fort Lauderdale for the middle of February. This is not a vacation; rather, I am serving on an ABA reaccreditation team for Nova Southeastern Law School. Over the years, I have served as a Middle States periodic reviewer (most recently as the first reviewer for Johns Hopkins and the second reviewer for American) and as an ABA reviewer (most recently for Memphis Law School), and during my earlier life as a business school dean, I served not only on visiting teams but also as a member of the Initial Accreditation Committee of AACSB. I have also been on the receiving end of accreditation visits numerous times—in the last two years alone, Hofstra has had 14 reaccreditation visits.


I have a very high regard for the peer review process.  Almost without exception, the teams I have served on and the teams that have visited Hofstra have been as diligent and objective as a group of individuals can be.  Yes, on rare occasion I have had the sense that a visiting team has been overly picky.  And yes, on rare occasion I have felt that a visiting team should have asked more questions.

From my recent service as well as my earlier service, I have also sometimes had the sense that the schools with the most prestigious reputations get a pass because reviewers feel that their overall reputation justifies overlooking some individual standards.

But overall, I have tremendous respect for those involved in the peer review process. It takes significant time and effort to prepare for an accreditation review, and it takes significant time and effort to be a reviewer. I have researched extremely well, as part of the process, any school, college, or university that I have reviewed or visited. And any part of Hofstra that has been subject to accreditation (and there have been many) I have understood much more clearly at the conclusion of the process. My feeling is that, overwhelmingly, those involved in any aspect of the peer review accreditation process are better for the experience.

Every year, I also participate in the US News Review of Colleges and Universities. Those asked to review are the Presidents, Provosts, and Deans of Admission, and the response rate is just under 50%. The goal of this review process is one we can all support: namely, providing useful information in an easy-to-digest format that the public can conveniently access and use as part of the decision-making process.

Toward this end, I am sent a list of all national colleges and universities and asked to provide an overall assessment of each one (using a five-point scale), and I do the best I can to respond. In some cases, I know the college or university involved very well; in other cases, I know one or more programs or faculty members or administrators at the institution, and I base my ranking on this microcosm of the institution.

As a long serving university administrator, I probably know the landscape of higher education better than most, but the reality is that at best I can provide a credible assessment on a relatively small minority of the institutions I am asked to rank.

I believe that the vast majority of those of us who are asked to respond to this survey recognize that there are significant weaknesses in the US News methodology, beginning with the peer assessment. And yet, here we are, lending credibility—but likely not expertise or accuracy—to the process.

We should, as a group, either make a commitment to being fully up to speed on any and all institutions we comment on, or refrain from providing an assessment. Comprehensive peer assessment programs are the backbone of higher education; superficial evaluations, on the other hand, weaken us all.
