Teacher’s Corner is a monthly newsletter from eNotes just for teachers. In it, experienced educator and eNotes contributor Susan Hurn shares her tips, tricks, and insight into the world of teaching. Check out this month’s Teacher’s Corner column below, or sign up to receive the complete newsletter in your inbox at eNotes.com.
Keeping Up with Assessment and Grading
Assessing students’ achievement is an integral part of teaching, and like everything else in the profession, it has become more complicated. The days of giving a chapter test and calling it good are over. That’s not a bad thing, though. To really keep tabs on who’s learning what, assessment has to be an ongoing process, and it has to offer kids a variety of ways to show what they know and what they can do.
To be thorough and effective, assessment has to include the three main types of measurement: diagnostic, formative, and summative. Diagnostic assessment is imperative, since it’s impossible to know how much ground students have gained at the end of a study unit unless we know where they were at the beginning. Formative assessment checks their learning along the way and provides an opportunity to adjust lesson plans, if necessary, and to address specific problems a struggling student might be experiencing. Summative assessment at the end of a study unit indicates kids’ overall mastery of new material and gives a clear idea of how to proceed with instruction. A review of these and other types of assessment can be found at edudemic.com. Another good site with information about assessment practices is utexas.edu/teaching.
Yesterday it was announced that the SAT would be revised for the second time in just over a decade. To help you prepare for the next version of this popular standardized test, here is an outline of the changes, plus other important announcements from the College Board that will affect future college admissions.
What will the new SAT look like?
The new SAT, to be released in 2016, will feature four significant changes:
- The SAT essay, introduced in 2005, will become an optional segment of the exam
- SAT scoring, also changed in 2005, will return from the 2400- to the 1600-point system
- Points will no longer be deducted for incorrect answers (currently students lose 1/4 of a point for each wrong answer)
- Lastly, “SAT vocabulary” will become a thing of the past, as the exam’s sentence-completion sections are replaced by ones that test students’ critical reading of a passage.
Why make these changes?
One thought that struck me when I read over these changes was that the SAT is becoming more like the ACT. The criteria are familiar: no deduction of points for incorrect answers, no required essay, and a significant critical reading section are all key features of the ACT that many students over the past decade have recognized as advantages to taking it over the SAT. The days when the SAT was the go-to test are gone; when I was a high school junior, nobody ever mentioned the ACT, but when I became a test-prep tutor five years later, it was the exam 90% of my students elected to take. Why? When they were evaluated at the start of our course, the overwhelming majority performed better on the ACT than the SAT. It gave them a leg up in achieving a higher ranking, and as students’ preference for the test grew, colleges’ willingness to accept it on equal terms with the SAT followed suit.
For whatever reason, be it an attempt to curry more favor (and cash) or a genuine recognition of a need to assess students more fairly, the SAT is moving towards a format more similar to the ACT.
What do these changes mean for students?
When I tutored students for the SAT, a significant focus of our preparation was on strategy. To perform well, a student had to form a plan of attack, making a practical decision from the outset about how many questions needed to be answered to achieve the desired score. That’s because every wrong answer could decrease a student’s overall score, thanks to the quarter-point deduction for an incorrect choice. Except in cases where students strove for a perfect score, it was more advantageous to leave a certain number of questions blank.
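The arithmetic behind that strategy can be sketched in a few lines. This is an illustrative calculation of the pre-2016 raw-score rule described above (+1 per correct answer, −1/4 per wrong answer, 0 for a blank), not official College Board scoring; the question counts used are hypothetical.

```python
def raw_score(correct: int, wrong: int) -> float:
    """Pre-2016 SAT raw score: +1 per correct answer,
    -0.25 per wrong answer, 0 for questions left blank."""
    return correct - 0.25 * wrong

# A hypothetical student attempts 40 questions on a section,
# answers 36 correctly, misses 4, and leaves the rest blank:
print(raw_score(correct=36, wrong=4))  # 35.0

# Why blanks could beat blind guesses: on a five-choice question,
# a random guess gains 1 point 1/5 of the time and loses 1/4 point
# 4/5 of the time, so its expected value is zero --
# no better than leaving the question blank.
expected_guess = (1 / 5) * 1 + (4 / 5) * (-0.25)
print(expected_guess)  # 0.0
```

The expected value only turns positive once a student can eliminate one or more answer choices, which is why so much SAT prep revolved around deciding when to guess and when to skip.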
Now, however, the idea of “SAT strategy” will be tossed by the wayside. Is this good or bad? Perhaps we should simply say it assesses a different skill. The SAT Reasoning Test, to go by its full name, was designed to test a student’s ability to reason and evaluate.
Does more equal better? When it comes to students’ essay scores on the SAT, that may well be the case. Milo Beckman, a fourteen-year-old student at Stuyvesant High School in New York, conducted a study among his peers after becoming frustrated with the scores he received on the exam. Beckman took the exam twice and, to his surprise, discovered that his second test scored higher than the first, although he deemed the first attempt superior in quality. The second essay he wrote was considerably longer but not, in his opinion, as well written.
Beckman then polled 115 students at his school who had taken the exam, asking them to count the number of words they had written. The students who wrote lengthier essays almost always received a higher score, regardless of the quality of the content. Beckman’s results were confirmed by MIT professor Les Perelman. (Read the full story as first reported by Elisabeth Leamy of ABC’s Good Morning America here.)
In other testing news, it is not just students who are being graded. Increasingly, teachers are being held accountable for the performance of their students. Houston, Texas, is the latest city to announce that teachers’ jobs will no longer rely solely on evaluations by their principals. Until this year, 99% of teachers received satisfactory performance scores based on those personal reviews. Now, however, student test scores will play a much greater role in deciding who is hired and who is fired.