Sunday, September 26, 2010

Teaching Matters and So Does Assessment: How Not to Assess Student Learning

Guest Blogger: Safro Kwame

If the Middle States visiting team of 24th September 2010 has taught us anything, it may be that teaching matters and so does the assessment of teaching! The team's report suggests the following to me:
(a) a need for an immediate change in our habits and in our assessment of student learning,
(b) a need for faculty to take ownership of the assessment of teaching,
(c) a need for appropriate software to collect and analyze assessment data, and
(d) a need for an internal Middle States-style assessment committee that will do what the external one (from Middle States) has done, i.e., evaluate our assessment efforts and make appropriate recommendations.

QUESTION: What did you get from the Middle States visit of 24th September 2010?

SUGGESTION: Look at Middle States' standards and guidelines on assessment and indicate whether you agree with the visiting team's finding (that we are not in compliance with Standard 14) and why you agree or disagree.

REFERENCES

1. See Faculty Meeting Minutes of 29th April 2008 for my original proposal for assessment

We should (1) stop doing what we have been doing about assessment or significantly improve upon it, and (2) immediately implement the Middle States evaluation team's suggestions and recommendations on assessment.

Example for Consideration: Each instructor may, accordingly, design a simple test of student learning outcomes which could be electronically scored or graded and automatically processed and analyzed for program, department, school and university characteristics and recommendations. Thus, in addition to submitting a gradesheet at the end of each semester, each instructor can turn in an assessment sheet or report at the end of each semester (after grades have been submitted). – Safro Kwame, 4/29/08
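As a concrete illustration of this proposal, here is a minimal sketch of an electronically scored test that turns student responses into an end-of-semester assessment sheet. The answer key, file names, and CSV layout are illustrative assumptions, not an existing campus system:

import csv

# Hypothetical answer key: question id -> correct choice
ANSWER_KEY = {"q1": "B", "q2": "D", "q3": "A", "q4": "C"}

def score_student(responses):
    """Return the fraction of questions answered correctly."""
    correct = sum(1 for q, a in ANSWER_KEY.items() if responses.get(q) == a)
    return correct / len(ANSWER_KEY)

def build_assessment_sheet(in_path, out_path):
    """Read one row of responses per student; write one score per student."""
    with open(in_path, newline="") as f:
        students = list(csv.DictReader(f))  # columns: student_id, q1..q4
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["student_id", "score"])
        for s in students:
            writer.writerow([s["student_id"], f"{score_student(s):.2f}"])

if __name__ == "__main__":
    build_assessment_sheet("responses.csv", "assessment_sheet.csv")

A sheet like this could then be rolled up by program, department, school, and university, as the proposal envisions.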


2. See Faculty Meeting Minutes of 3rd February 2009 for my follow-up proposal for assessment

In a simple and easy way, Middle States wants faculty to (a) regularly assess some or a few of the goals and objectives of courses and programs, apart from the courses and students themselves, (b) share and discuss the results, and (c) implement changes resulting from the assessment and discussion.

An Example: One Type of Assessment:

1. Select 2 or 3 of your most important goals or objectives. Make sure they are (easily) measurable.
2. Set 2 or 3 questions specifically for each goal or objective.
3. Get students to answer the questions.
4. Find an easy (e.g., automatic or electronic) way to score the answers to the questions and analyze the results, e.g., by using assessment software such as ExamView or by getting IT to acquire and administer appropriate software.
5. Discuss the results with your colleagues and, preferably electronically, forward the results and recommendations (which may include changes) to your supervisor and/or central coordinating unit which could be IR, Chairperson, Dean, or VP.
6. Make appropriate changes, e.g. to your syllabus, examination or content or delivery of course, as a result of your assessment of learning goals and objectives.

Note: You need (a) software to create, score, analyze, forward and collate assessment, and (b) personnel to support or assist in creating, scoring, analyzing and processing assessment. Consult IT, IR and VP. – Safro Kwame, 2/3/09
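Steps 4 and 5 of the example above could be as simple as the following sketch, which collates per-goal scores from several instructors' assessment sheets into one department summary for forwarding. The file layout, goal names, and the 70% threshold are illustrative assumptions, not a prescribed format:

import csv
from collections import defaultdict
from statistics import mean

def collate(sheet_paths):
    """Average the scores reported for each learning goal across courses."""
    by_goal = defaultdict(list)            # goal -> list of scores
    for path in sheet_paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):  # columns: goal, score
                by_goal[row["goal"]].append(float(row["score"]))
    return {goal: mean(scores) for goal, scores in by_goal.items()}

if __name__ == "__main__":
    summary = collate(["phil101.csv", "phil202.csv"])
    for goal, avg in sorted(summary.items()):
        flag = "review" if avg < 0.70 else "ok"  # illustrative threshold
        print(f"{goal}: {avg:.2f} ({flag})")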


3. See News Report on the Need for Assessment Software:

Dan Carnevale, "New Software Aids in Assessment," The Chronicle of Higher Education, Vol. 53, Issue 30, Page A37, 30th March 2007.

Facing greater demands for accountability, colleges turn to technology to analyze mounds of data.

Richmond, Va.: The last time Virginia Commonwealth University had to prepare for an accreditation review, officials here found themselves overwhelmed with data. The university's accreditor, the Southern Association of Colleges and Schools, was asking for more information than ever before about how much students were learning: grades, test scores, written evaluations, and other measures. Much of that information was scattered throughout the institution, kept in computer files and storage drawers.

So Jean M. Yerian, then the director of assessment, led the development of a computer program that would organize and analyze all the assessments of students being done on the campus. The computer program, dubbed Weave, not only helped the university satisfy its accreditors, but also appealed to other colleges, which wanted to use it to prepare for their own accreditation reviews. "We started out as solving our own problem and ended up developing something that can help others as well," says Ms. Yerian.

Last year Virginia Commonwealth spun off the project as an independent company called WeaveOnline. Ms. Yerian resigned her post at the university last month to become director of assessment management for the company, which has already attracted more than 40 colleges as clients.

Supply is slowly meeting the demand. Companies such as Blackboard, Desire2Learn, and Datatel have developed software that helps conduct institutional assessments. Other companies, such as Oracle and eCollege, have plans to jump into the game as well.

Caribbean University Selects Blackboard Outcomes System to Assess Student Learning

Dec 11, 2007
University Is First in Latin America to Implement Comprehensive Institutional Assessment to Meet Accreditation Standards

PHILADELPHIA: During the annual conference of the Middle States Commission on Higher Education, Blackboard Inc., a leading provider of enterprise education software and services, announced that Caribbean University in Puerto Rico has selected the Blackboard Outcomes System(TM) to assess student learning across its system of four campuses, and to plan for and measure continuous improvement in institutional effectiveness, to help continue to meet the rigorous accreditation standards set by the Commission.

WEAVEonline is a web-based assessment system that helps you to manage accreditation, assessment and quality improvement processes for your college or university.

The Blackboard Outcomes System helps institutions efficiently meet the demand for increased accountability and drive academic improvement with evidence-based decisions. It makes planning and assessment easier and more evidence-based.

The TrueOutcomes Assessment Manager is a complete, web-based solution that facilitates every aspect of Learning Outcomes Management from assigning, assessing, and tracking to analyzing and making evidence-based decisions to improve student learning outcomes and facilitate continuous improvement.

eLumen Achievement is an information system for managing a college's attention to student achievements, learning outcomes and education results. It is specifically designed to facilitate authentic assessment processes that are faculty-driven, student learning-centered, standards-based, and (now, with eLumen) system-supported.

Tk20 provides comprehensive outcomes assessment systems that let you collect all your data systematically, plan your assessments, compare them against specified outcomes/objectives, and generate detailed reports for compliance, analysis, and program improvement. A leader in assessment, Tk20 offers a complete set of tools for managing outcomes-based assessment and measurement of student learning as well as institutional activities such as program improvement, curriculum mapping, institutional effectiveness, and reporting.

13 comments:

  1. See Free Middle States Downloadable Publications at:

    http://www.msche.org/publications.asp

    Guidelines for Institutional Improvement and Assessment are at:

    http://www.msche.org/publications_view.asp?idPublicationType=5&txtPublicationType=Guidelines+for+Institutional+Improvement

  2. We should have listened to Dr. Kwame in 2008; we might not be in the mess we are now in regarding Standard 14: Student Learner Outcomes.

  3. Albert, I would like to think you are correct. I also agree with Kwame's original 2008 suggestions/methods, but in our department we basically did that. We collected detailed SLO data for two years, by each instructor, for most of our classes (at the end of the semester), analyzed it, and used it to make some adjustments/decisions (we have the data). But I do not think it made much difference to the team's view (assuming the team looked at our data). To answer Kwame's question of what I got from the Middle States visit, I have to say: sadly, more confusion and a realization of wasted efforts. It is difficult to really "close the loop" if one cannot trust the sincerity of the whole process.

  4. So, we should start with a comparative analysis of the assessment reports of the HPER, Psychology and Chemistry departments to see what they did right or differently, and how far they meet Middle States' Standard 14. Shouldn't we? We may have to go further than that to resolve the confusion.

  5. It seems to me that what Middle States might be saying is not that individual departments are not collecting data, but instead that there is no cohesive, overall university-wide plan for how to analyze that data and then make logical changes based on the analysis. So I agree with Kwame that it makes sense to look at HPER, Psych and Chemistry and see what we can learn from their good examples. However (and this may partially address Ali's question), I don't think that's the whole answer.

    What do we say our students should graduate with and how are we (as a whole university) proving they do? It can't just be grades, since they are notoriously subjective and unreliable.

    For example: A Chronicle of Higher Ed article (Volume 55, Issue 38, Page A34) last year described Miami Dade's plan. After deciding on 10 general themes (which correspond to those 8 "overarching themes" that guided the development of our core curriculum), they began to test 10% of all graduating students in their final year on each of the 10 themes. Some tests were objective tests, some essays, some videotaped performances. None took more than 50 minutes for the student to complete. A small group of faculty from the relevant discipline graded each test using a four-point scale, ranking each student as "emerging," "developing," "proficient," or "exemplary."

    As the article reports, "The scores are then aggregated to provide a snapshot of the graduating class's level of achievement, which allows for comparisons from year to year. The results also provoke important cross-disciplinary dialogue about new means of achieving the 10 learning goals."
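    A rough sketch of how such four-point ratings could be tallied into a class snapshot and compared year to year (the scale labels come from the article, but the sample data and code are made up for illustration; this is not Miami Dade's actual system):

    from collections import Counter

    # Rubric levels from the Miami Dade example, lowest to highest
    SCALE = ["emerging", "developing", "proficient", "exemplary"]

    def snapshot(ratings):
        """Percentage of sampled graduates at each rubric level."""
        counts = Counter(ratings)
        total = len(ratings)
        return {level: 100 * counts[level] / total for level in SCALE}

    # Hypothetical samples from two graduating classes on one theme
    class_2009 = ["developing", "proficient", "proficient", "emerging", "exemplary"]
    class_2010 = ["proficient", "proficient", "exemplary", "developing", "proficient"]

    for year, data in [("2009", class_2009), ("2010", class_2010)]:
        pct = snapshot(data)
        print(year, {k: f"{v:.0f}%" for k, v in pct.items()})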

    What could we do on a university-wide basis that would get us that sort of reliable and ongoing data from which we can see trends and draw conclusions about needed changes? I think something like that, coupled with the kind of course-level assessment that Kwame describes, would go far toward healing our Middle States status.

  6. Linda, thank you for sharing the information (Miami Dade's plan), particularly about the four-point scale. This is encouraging, since we have also been using a 4-point scale for two years, ranking each student's Learner Outcomes as "Unacceptable," "Needs Improvement," "Satisfactory," or "Secure." Certainly, we will try to improve our assessment methods in all areas, particularly by linking the data in a better way to our program goals. I agree we should study the HPER, Psychology, and Chemistry departments and learn from them. But I must say, truthfully, that after the team's visit I was very disappointed/discouraged.

  7. It seems to me that one of the things Chemistry, Psychology, and HPER have in common is that all three have well-established programs to assess their majors; in other words, they have assessment programs at the departmental level, not just individual faculty assessing individual students in individual courses. It seemed to me 2 years ago, and again now, that what Middle States wants to see is more of this and at a higher and broader level: programs, schools, and the administration at the University level participating in gathering, analyzing, and using assessment, and helping individual faculty, departments, and programs to standardize and combine assessment efforts. No matter how much we as individuals do in our own courses, it will never satisfy Standard 14 without the administration actively engaging with the process by providing leadership, support, and resources. Just collecting what we as faculty give them and putting it in a report isn't enough.

  8. I believe Laurie is right. "Just collecting what we as faculty give them and putting it in a report isn't enough." Middle States assessment is more of a group or global project rather than an individual one. It's what the University does with the individual's data that's important; but unless we start generating the data at the course and individual level we can't "close the loop."

  9. Many good points, but for now, I have trouble making assumptions about what Middle States really wants. Perhaps most assumptions about what they expect/want for Standard 14 (or in general) are accurate; I do not know! I listened to the team and got mixed messages (I am not suggesting that was intentional).

  10. Ali: What were the mixed messages you got from this team on 24th September 2010?

  11. [Kwame, I wish you did not ask that question :-( ]

    I learned from the visit that we failed Standard 14, yet we are a great faculty (maybe I am not good at accepting compliments). I learned that it may not really be important what we teach, as long as we are able to measure the student learner outcome goals of what we teach in a systematic way which can improve what we do, make changes to course contents and programs, and perhaps do away with a Student Learner Goal if the system proves we cannot achieve it (when I was hired a couple of decades ago, I promised to keep the standards of my course contents high; it is tougher for me to justify teaching recursion now). Kwame, I sincerely do not wish to intentionally offend or criticize Middle States or ourselves or the administration about this. I think everyone means well, including the visiting team members, but unless we all sincerely admit that (a) nobody really knows how to do this the right way yet at the college level, and (b) the ultimate goal of the assessment process is not the process itself but educating the students and making sure they succeed, we will really fail.

  12. Ali: Your point is well taken. Let's see whether others got the same message. To do this well, we should all be familiar with the Middle States documents and forms on assessment and review our assessment plan and process, critically, as the team recommended. We need to have an honest discussion among ourselves, identify the problem and get rid of it.

  13. I look upon Middle States' Standard 14 as an opportunity. Ongoing systematic self-assessment is an opportunity to reflect upon oneself and one's work and institute improvements to become more effective. This is the key to Middle States' Standard 14. We can use the data we collect to see if we are reaching our goals, and if we are not reaching our goals, we do things differently, based on the evidence of best practices. Our student learning outcomes are based on our goals and objectives, and we can measure our effectiveness, or our ineffectiveness. It is an opportunity to improve ourselves, our programs, and our courses: a hallmark of professionalism!
