Start the Second Semester off Right: Vision to Assessment Success

If only they would release more assessment items! If I just knew what the test looked like. It’s always a moving target—it always changes! Who are those writers anyways? They are just biased and don’t know anything about kids. If only the kids had a way of practicing….

Have you figured out what I’m describing yet? Those statements are all complaints I have heard about Ohio’s assessment system, all within the last week. While I believe no state system out there is flawless, and while ours arguably has issues that need to be addressed, there is one thing I am positive of: it’s time to get on the proverbial professional development school bus and adjust that mindset…you can conquer the test.

Before I show you how, I want to remind you about the heart of what we do as educators—student learning—which is anchored in three interrelated areas. Curriculum / assessment / instruction always exist in relationship to one another; they are not separate from one another, and they can’t exist without each other (Pellegrino, Chudowsky, & Glaser, 2001). For any Deweyan scholars out there, this is the transactional relationship at its best (Ryan, 2011). For the non-nerds out there, it’s the concept of a store without customers: the store does not exist because it has no customers, and the customers don’t exist because there is no store. The same is true in the classroom—we can’t teach without finding out what kids know, and if we don’t know what kids know, we can’t teach. Assessments, no matter what their form (an oral question, for example), are proof that somebody (a student) knows something (that the teacher has taught).

Since assessment is integral to what we do in classrooms, we must have a vision for it—in other words, if we want to conquer the test, we need to start with vision (that’s Step 1 to conquering the test). Vision, as Manasse (1986) describes it, is the force that gives meaning and purpose to the work of an organization. It inspires commitment to the future by explaining who is involved, what they plan to accomplish, and why that growth matters. As Pejza (1985) cogently put it, vision is a “hunger to see improvement.”

I remember this like it was yesterday. It was almost four years ago to the day. I was walking with one of my thought partners (Stanny, 2012) through the Short North in Columbus during a lunch break on one of the coldest days of that winter. We were shooting the breeze about everything educational, and I remember that this friend and colleague was challenging me on my thinking about where I wanted my district to go. I was shopping for assessment products, as we had had a bit of an assessment crisis in my district, and I realized very quickly that I needed to envision what I wanted our team to accomplish. What was going to be the purpose of our assessment system? How would it relate to curriculum? What instructional strategies did we as a district value that would align to our assessments? What did I want teachers to do with the data? It really, of course, wasn’t about the platform, but rather the vision we had for how we were going to work systematically across our district to improve student learning (see this entry for additional information). In other words, my goal was (and still is!) to help teachers understand how to write and analyze good assessment items; by teaching a teacher to fish, I feed them for a lifetime.

It wasn’t until recently that I really came to understand how crucial that conversation was and how important the envisioning process is. We say all the time in Admin 101 classes that vision is gold, and we teach teachers on a regular basis that there must be clearly defined learning targets (their vision for a lesson). The same is true for the assessment systems within our districts. No matter what role you play in your district (curriculum director, principal, or teacher), start with a vision that addresses the relationship between your classroom assessments and state assessments, and then the relationship between your formatives and summatives within your context. What is your goal in assessing? What are you hoping your assessments show? How do you know that your assessments prove what students should know and are able to do?

Step 2 in conquering the test comes through group discussion and through the plethora of resources that exist in our state. For starters, I encourage everyone to review the blueprints for what we hope our students can do. Test blueprints are outlines of the content and skills to be measured. They help teachers understand what the test looks like, and they reassure us that the target isn’t moving. Our blueprints have stayed the same for the last three years, and it’s a great exercise to compare them to curriculum maps and lesson plans and to the state standards (ELA, Math, Science, Social Studies).

Next should come analysis of released items; these come out on a regular basis per Ohio Revised Code and provide insight into how different concepts can be assessed. (A caution to note: released items are different from practice items. A practice item is designed for practicing technology skills. In other words, a Grade 8 math practice item might not be aligned to Grade 8 math, but it does show students how to manipulate the technologies they will see on the test.) Once you’ve analyzed those items in relationship to your standards, start to create your own items—items that measure the depth the standards require.

Speaking of depth, my favorite tool is the performance level descriptors, which can be used to move students forward (this is where curriculum / instruction / assessment intersect). PLDs are, as ODE describes, the “link between Ohio’s Learning Standards and performance standards and help to define what students should know or be able to do at each performance level.” The power of this tool comes when we look at a collection of student work over the course of a week, a month, or a unit and compare that work to the PLDs. For example, a 3rd Grade Basic student can “Determine the main idea of a text and identify key details to recount the main idea”; this is different from the Proficient student, who can “Determine the main idea of a text and recount key details and explain how they support the main idea.” Now we know that we have to teach the student how to explain how details support the main idea. This can become the vision (i.e., learning target) for instruction.

The last step comes in learning more about assessments. Learn how Ohio writes its assessments — interestingly enough, the writers are not individuals in fedoras, but rather teachers and administrators like you and me (see this to learn about the item development process). Learn about other myths that exist in relation to Ohio’s testing system.

Then learn how to write your own items that will help support the instruction and curriculum happening in your classrooms.  

Now, if only there were a place to learn more about all of these concepts, and ways to get more examples….

Ohio educators can see Dr. Drost present at the Ohio Assessment Literacy Conference on January 27, 2018 at Summit ESC.

Bryan R. Drost is the Director of Curriculum and Instruction for the Summit ESC, Ohio. He holds a Master of Education in Educational Foundations with an emphasis in Standards-Based Instruction as well as a Ph.D. in Curriculum and Instruction and Assessment, both from Kent State. Bryan holds a variety of roles at the state and national levels: an ODE Network Regional Leader, a member of ODE’s Fairness and Test Use Committee, Content Advisory Committee member, NAEP assessment member, a steering committee member of the Northeast Ohio TALK Network, a RESA master coder, a national supervisor for edTPA, a consultant for the National Board, part of NCME’s Standards and Test Use Committee, the mathematics lead for Ohio’s Core Advocates, and Regional Data Lead for State Support Team Regions 4 & 8. He has presented throughout the state and country on various topics related to instructional shifts, assessment, and technology integration.

DIS Insight: Digital Assessment & Edcite (Part 2 of 2)

Click here to read Part 1 of this post, where Alexander Clarkson discusses the challenges teachers face when giving regular formative assessments and feedback.

A Solution

I don’t have the answer, but I have an answer: next generation digital assessment. My teaching emphasized writing as assessment because I was suspicious of structured response items like multiple-choice, true/false, or matching. They felt less like authentic thinking tasks and more like artificial hoops that practically beg students to cheat or to use test-taking skills to trump thinking skills. But what if I could reduce the amount of writing I had to grade by replacing those bulky assessments with next generation digital tasks that required authentic thinking skills, properly challenged students to master those skills, and provided the formative feedback necessary to modify instruction? And what if that approach graded itself?

The idea is simple. We can now develop digital assessment models that grade automatically while providing students with challenging, authentic skills practice. We must move away from multiple-choice question types toward those that present thinking challenges that cannot be “gamed” but will accurately provide data on a student’s ability to perform a skill, with that data indicating how to proceed.

Let me give an example. I wrote an item last year to prepare students for Ohio’s state tests, which were being administered on Pearson’s PARCC platform for the first time. In trying to prepare students for these new tests, I had nearly no practice material, so I collaborated with another teacher to write original material based on PARCC approaches. This particular item was based on an excerpt from Stephen King’s The Girl Who Loved Tom Gordon (wonderful little book; check it out) in which the protagonist, a 14-year-old girl named Tricia, finds herself lost after fainting on the Appalachian Trail. The question gave students six statements about events that happen in the novel before the excerpt presented in the assessment (a part students had never read) and asked them to arrange the statements into the correct order, which required causal and inferential reasoning. In the excerpt, the girl has just woken from a faint, so the statement “Tricia faints” was logically the last event before the excerpt, and the student would move backward from there.

Edcite order list response item for Stephen King’s The Girl Who Loved Tom Gordon.

I’m using this item as an example because it shows the type of assessment item I am now looking to repeat. It requires skills that I actually want students to develop, causal and inferential reasoning, not test-gaming with multiple-choice or matching questions. It is replicable for another passage, which means I can rewrite the question with different content and give students more opportunities to practice the skills. And, best of all, it is automatically graded. All I need to do is assign it, let the students complete it, review the data, and modify instruction. The grading burden drops to nearly zero. Sure, assessment creation takes time, but less time than grading, and it can be shared collaboratively among teachers throughout a building or an entire district, reducing the time needed even further.
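To make the self-grading idea concrete, here is a minimal sketch of how an order list response item can score itself and roll results up into class-level data. This is my own illustration, not Edcite’s actual code or API; the function names, the generic event IDs, and the partial-credit scoring policy are all assumptions for the example (an all-or-nothing policy would drop in just as easily).

```python
from collections import Counter

def grade_ordered_list(student_order, answer_key):
    """Score one response: compare the student's sequence to the key,
    position by position, and report which slots were misplaced."""
    misplaced = [i for i, (got, want) in enumerate(zip(student_order, answer_key))
                 if got != want]
    score = 1.0 - len(misplaced) / len(answer_key)
    return score, misplaced

def class_report(responses, answer_key):
    """Aggregate misplaced positions across a class so the teacher can
    see which step in the causal chain students struggle to infer."""
    misses = Counter()
    for student_order in responses:
        _, misplaced = grade_ordered_list(student_order, answer_key)
        misses.update(misplaced)
    return misses.most_common()

# Six events, E1..E6, with the faint as the last event before the excerpt.
key = ["E1", "E2", "E3", "E4", "E5", "E6"]
print(grade_ordered_list(["E1", "E3", "E2", "E4", "E5", "E6"], key))
# (0.666..., [1, 2]) -- two positions swapped, so partial credit
```

Run over a whole class’s responses, `class_report` surfaces the positions most often misplaced, which is exactly the “review the data, modify instruction” step described above.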


This is where I was when Edcite came into my life. I knew what I wanted to do, but I was struggling to find the right platform to accomplish it. As a Google user, I stuck with Forms graded through Flubaroo, but Forms was never designed for educators. It works just fine for multiple-choice questions, but designing this Stephen King question in Forms led to a clunky student experience. I suspected that students might not be able to complete the question well because of its awkward presentation. Edcite, however, offers an order list response item type, which allowed me to create the question as a user-friendly drag-and-arrange item. It worked perfectly. After looking at it, I reviewed the other items in the same assignment, which were mostly traditional multiple-choice and multiple-select items, and chided myself for not creating more of these rigorous and authentic challenges for my students. Empowered by Edcite, I’m excited to design more.

And what will those items look like? How about having students watch a compilation of movie clips and then sort quotes based on the type of figurative language? How about asking them to graph a quadratic equation or use a math keyboard to answer a word problem? Or maybe asking them to click on sections of a map when asked questions like “Identify the compass rose” or label a blank map of Mesopotamia? No multiple-choice to provide assistance. Just the student’s ability (or lack thereof). How about asking students to highlight statements from the novel The Valley of Fear to answer a question about irony? All of this automatically graded. Just design the assessment, assign it, and modify instruction based on the data. It’s just like setting up that robot pitcher.

That’s why Edcite is such an incredible gift to teachers. Instead of offering a handful of question types and limited ability to customize, Edcite offers (at the time of this writing) 74 question types. I have discussed only five or six here. Most questions allow for customization including the embedding of images, videos, sound files, links, and more. With a little creative thought and focus on effective learning challenges, a teacher could use this platform to completely redesign assessment in a way that would provide repeated opportunities for authentic skills practice. Oh, and without the crushing burden of grading.

Various Question Types available on Edcite.com

I’m an English teacher. I will always grade essays, and my students will always work hard to improve those vital communication and critical thinking skills, but by embracing next generation assessment approaches, I do not need to only grade essays. I can develop a library of assessments that will sharpen a wide range of skills without the constant crush of grading.

It’ll just be that kid and me, her in the cage, me watching from outside. A pitch and a miss, followed by a few words. Another pitch, another miss. More words. Some demonstration. Another pitch, and CRACK! A slam threatening to punch a hole in the net.


Alexander Clarkson is currently the digital instruction specialist for Sylvania Schools, where he helps teachers include innovative instructional strategies in their classrooms as they move to full 1:1 implementation. Just last year, though, Alex finished a sixteen-year tenure of teaching that included English language arts, philosophy, and film studies at the college, high school, and junior high school levels. When he’s not thinking about digital instruction, Alex marvels at his two-year-old’s abilities with a tablet and his fifteen-year-old’s abilities with a drum kit.

DIS Insight: Digital Assessment & Edcite (Part 1 of 2)

Part 1 of Alexander Clarkson’s Guest Blog Post

The Problem

As a teacher, I think of batting cages often. No, I don’t teach phys. ed., and I don’t coach sports, but the batting cage, a precious memory from both my childhood and fatherhood, rattles around my brain as the perfect metaphor for the kind of teacher I strive to be. Think about it. The cage is the perfect teaching and learning situation. In an artificial environment designed to replicate an authentic one, the learner is encouraged to try and try again, modifying each attempt under guidance from a mentor or personal observation. There are no high stakes because no one’s keeping score. The only purpose is refinement of a skill, and nothing distracts from that. Boys and girls for generations have received effective instruction in that simple cage as they refine their skill for the big game.

Over the past few years, I have endeavored to create batting cages for each of my students through techniques of formative assessment. My students would target a set of related skills and practice them once, twice, three times, and hopefully more before the summative assessment, or “big game.” The formative practice carried no grade, but it received a ton of feedback. For example, when I taught skills of argument writing, I would not ask students to write one big researched argument paper, but several short ones. Students would read news articles for controversial topics, choose one, conduct extra research, fashion a logic model of their argument and the opponent’s counter-argument, outline the paper, and write it. I’d read it, add comments, assign a rubric-based grade, and hold a workshop to discuss trends in strengths and weaknesses. So, a student could write a paper on an issue of their choice, say school uniform policies, and receive guidance both written and verbal. We would workshop the paper, and they’d try again. Not a revision, mind you, but a new paper on a new topic with all the same steps repeated; perhaps this time the student would choose living wages for fast food workers. Each new paper was a new pitch in the cage, different in space and time but identical in structure and expectation. Just like the coach outside the cage, I was looking for skill refinement through repeated practice and guidance, and I tried to bring that approach to every skill I taught: creative writing, literary analysis, grammar revision, news article analysis, research, and more. Each series of assessments was a new session in the batting cage.

This practice helped me understand that assessment was not a threatening trial, or at least it should not be. Each assessment was an opportunity for feedback and help, not a dreadful exercise in humiliation. Assessment, not test. A way to check for progress and provide guidance to improve. Following that line of thinking, the more assessments, the better. Why step into the cage for two pitches? Where’s the use in that? The best practice comes from repeated assessment and repeated guidance. If executed properly, students should look forward to assessments as nonthreatening opportunities for help, and teachers should throw themselves excitedly into the role of individualized mentor.


Unfortunately, the excitement on my part was hard to come by for one simple reason: grading. Proper formative assessment should be frequent, and feedback should be as close to immediate as possible. A teacher who returns an essay a month or more after submission should not have bothered to assign it; the feedback will be nearly meaningless at that point. I redesigned writing assignments to be shorter, so I could grade them faster, and more tightly focused, so we could discuss feedback on a narrower range of skill standards. I challenged myself to turn back papers in no more than three class days, and I kept to that pretty well. Students wrote, we discussed, I graded and discussed my feedback, they wrote, and we repeated the process. Pitch after pitch. But keeping up that pace for an average course load of 160 students came close to breaking me. Sure, I became a faster and better grader. I wrote precise and useful rubrics. I developed new digital means to speed up the process. I went paperless to improve organization and communication. This time was a flurry of innovation, student interaction, spontaneous class planning, and . . . utter exhaustion.

Google Docs-driven argument writing with comments for feedback.

But, I could not abandon this approach. I believed in frequent formative feedback, even on major skills like essay composition and research, and the students benefited. Time on task in the classroom rose dramatically as revision tasks were clearly defined and manageable for students. Tension lessened because the workshops necessary to drive this instruction fostered collaboration, creating social, instead of isolated, effort.

During workshop, a student works with me to explore a piece of feedback from a formative paper.

I was better able to support students because their challenges were specific and had a history from previous efforts. But, the overload was still there. So, what to do?

Click here to read Part 2: “The Solution”

Alexander Clarkson is currently the digital instruction specialist for Sylvania Schools, where he helps teachers include innovative instructional strategies in their classrooms as they move to full 1:1 implementation. Just last year, though, Alex finished a sixteen-year tenure of teaching that included English language arts, philosophy, and film studies at the college, high school, and junior high school levels. When he’s not thinking about digital instruction, Alex marvels at his two-year-old’s abilities with a tablet and his fifteen-year-old’s abilities with a drum kit.


Data Analysis: One of a Teacher’s Best Tools



Data. That four-letter word! Often when teachers hear the word data, we cringe. You mean we have to look at data to determine what we are teaching? That takes so much time. For those of us who have been in education for a while, it can be hard to get into the mindset of looking at data in order to determine what to teach. I mean, really, I’ve taught long enough, I know the areas of concern, and I plan for them in my lessons….Right?

Data analysis is actually one of a teacher’s best tools. It can be as holistic as reviewing summary data to determine strengths and weaknesses within a Professional Learning Community (PLC), or as granular as determining whether students missed a question because of a gap in knowledge and skill or because the question was poorly written.

It seems easy: give the formative assessment and look at the data. From there, use the data to create differentiated assignments addressing the strengths and weaknesses of the class. The problem is logistics. How do you do that when you have 150 students, a pacing guide for your district-wide unit, school meetings and other routine duties, all while trying to have some sort of home life?

I wish I could say that our school has figured it all out, but we are still working on it. We have decided to use an assessment tool that gives us data immediately: our teachers have started using Edcite to create assignments for students so the data is quickly and easily accessible. For example, our math teachers administer formative assessments to gather data for the RtI (Response to Intervention) sessions we hold twice a week. Teachers look at the data, determine what types of sessions they need to offer students, and then give exit slips after the sessions to determine whether students have mastered the skills.
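As an illustration of that triage step, here is a short sketch of the logic. This is not an Edcite feature; the function name, the data shape, and the 0.7 mastery cutoff are assumptions for the example. Per-skill formative scores go in, and one roster per RtI session comes out.

```python
from collections import defaultdict

MASTERY_CUTOFF = 0.7  # assumed proficiency threshold; adjust to taste

def plan_rti_sessions(scores):
    """scores: {student: {skill: fraction correct}}.
    Returns {skill: [students below the cutoff]}, one roster per session."""
    sessions = defaultdict(list)
    for student, by_skill in scores.items():
        for skill, fraction in by_skill.items():
            if fraction < MASTERY_CUTOFF:
                sessions[skill].append(student)
    return dict(sessions)

# Example: two students, two skills from one formative assessment.
print(plan_rti_sessions({
    "Ava": {"fractions": 0.55, "ratios": 0.90},
    "Ben": {"fractions": 0.80, "ratios": 0.60},
}))
# {'fractions': ['Ava'], 'ratios': ['Ben']}
```

The exit slips described above would simply feed a second round of scores back through the same function to confirm mastery.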

We also use Edcite’s new platform, Edcite Schools, which gives us the ability to create master distributions of assignments and to access far more powerful reports. This saves teachers time because one person can make an assignment and distribute it to all students in a grade level, across classes. There is also a feature for creating groups and folders that teachers can access for collaboration.

Our school is moving forward with the data piece in education. Every week we look for ways to make data analysis easier, more efficient, and more effective for teachers in order to help students progress and achieve.

We know it is a long journey, but we are up to the challenge.



Melanie Thiesse has been in education for 31 years. She has taught junior high English and business classes. With her experience in both English and business, she was asked to design a course and write the curriculum for a class that combined business skills with English skills. The state adopted it, and several districts in Arkansas now offer the course in their schools. For the last four years, Melanie has been working as an Instructional Facilitator. She enjoys working with teachers to help them provide rigorous instruction that prepares students for the future. Melanie is also part of Edcite’s advisory panel of teachers, the Edcite Evangelists.

Assessments: Start with the Why

“Assessment” has become one of the biggest buzzwords in education: formative assessment, summative assessment, state assessment, district assessment, digital assessment, continuous assessment, common assessment, etc… Literally anything could be considered an “assessment” when you are gathering data. Assessments are not anything new; they have always been, and should always be, a part of every teacher’s toolkit. We have to know where our students are, in relation to the material that we are teaching, in order to best target the learning.

Although the act of assessing a student can be as simple as a verbal check for understanding, in recent years the term “assessment” has developed a bad reputation. In the US there has been an uproar that we are “over-assessing” our students, and many are calling for restrictions on how much we assess students. While people may have had a bad experience with a particular assessment or group of assessments, we don’t want to throw the baby out with the bathwater. Speaking in such vague generalities confuses the issue. Regular assessment and data are not inherently bad—although some would lead you to believe they are.

Assessments and data are essential, but they can be tainted when how an assessment is administered doesn’t align with why it was given. The problem isn’t that we are over-assessing; it’s that we aren’t always clear WHY we are assessing and, therefore, aren’t ensuring that the how and the what follow appropriately.


Let’s not waste our time waging a war on data or focusing our energy on attacking specific assessment providers; let’s instead take the time to reflect on the assessment practices in our schools. Let’s discuss the why behind assessments and then make sure we align our how and what.

Using Simon Sinek’s book Start With Why and his TED Talk as our inspiration, we started a recent Professional Development event with a Michigan school district with an exploration of their why regarding assessments. The session sought to help the district develop common assessments to measure student learning and develop ways to respond to the data. Oftentimes when people think about developing assessments, they immediately think about scheduling and the form the assessment will take. That approach doesn’t start with why and will lead you to question whether the assessments were “successful.” You have to know what you want in order to know whether you’ve gotten it. So we started with what they wanted in assessments–we started with their why.

We had the coaches begin by thinking through the various rationales behind assessment. Coaches evaluated some potential reasons for assessment by considering statements like:

  • We assess to provide students feedback.
  • We assess to hold teachers accountable.
  • We assess to inform our instructional decisions.

From there, groups collaborated to identify the three biggest reasons they want to assess student learning. Once we had a clear picture of why we were creating assessments, we could better ask and answer questions about how to create and administer them, and THEN coaches could start developing assessments (or creating the structures to have them developed).

The schools and districts that I support in their creation of common assessments on Edcite have a much healthier relationship with data when they have a clear understanding of why they assess students. The problem isn’t the assessment; the problem is that, too often, we don’t know why we are doing it.

If you would like more information about the Professional Development events we have done in regards to school and district assessments, please contact me at Julia@edcite.com.