The Role of Evaluation in a Learning Organization
In my last post, I wrote about the important role practitioner inquiry plays in supporting teachers as learners. We strive for something similar from evaluation at KSTF. Like many organizations, we have regular cycles of monitoring how well our programs are working and where we might improve. We set organizational and program goals that guide planning and evaluation. To measure our performance against those goals, we collect data in a variety of ways, including meeting observations, program records, online discussion content and surveys. We analyze the data and write reports that capture strengths, areas for improvement, next steps and future changes. But we aren’t interested only in evaluation that is limited to fixed targets and outcomes. We want evaluation that nurtures KSTF as a learning organization, evaluation that helps us continually get better at improving what we do. So what does that mean for us individually and collectively? We’re still working that out, but here are some thoughts along the way.
By definition, learning organizations are dynamic; their “targets” shift and move. So the kind of evaluation I’m talking about here is more about constant, continuing questions than about definitive answers. It’s more about waypoints than endpoints. But that doesn’t mean anything goes. It means that, as an organization, KSTF takes an inquiry stance towards building a network of strong secondary STEM teachers and teacher leaders. This requires regular cycles of clearly articulating the direction we want to head and why, opening that to critical input, and then systematically questioning what we do, whether we are headed in the direction we intend, and how we know. We have been working on embedding those kinds of inquiry practices more intentionally into our regular routines. For example, we now come together periodically across programs and use a protocol to raise questions about our plans for, or progress in, our work.
People often equate evaluation with accountability. But evaluation that supports organizational learning takes a different view than traditional “did we hit the mark or not” accountability. For a learning organization, accountability is achieved through transparency. There are structures and processes in place that enhance visibility and participation. Most importantly, there is a culture of courage and trust across the organization, so that everyone can contribute to, and learn from, each other’s progress and challenges. We’re focusing our evaluation efforts on building the relationships and processes that encourage transparency. That means being more explicit with each other about what we’re doing and why, examining data from a range of sources, and collectively contributing to lessons learned that can be broadly shared. That’s collective accountability.
KSTF does have a research and evaluation team. We support evaluation across the organization, but we don’t own it. In a learning organization, evaluation is a shared responsibility. Fellows, staff, specialists and board members are all essential contributors to the organization’s growth and development. We are partners in learning how to build a stronger STEM teaching profession. That’s why we have the audacity to ask busy teachers to fill out a survey or talk about their KSTF experiences with an interviewer. One of the challenges we face as an organization is how to support shared responsibility without weighing anyone down. We try, as much as possible, to embed data collection in the work itself, so that an online discussion or notes from group work, for example, can serve as evaluation data. But we find that (occasional!) surveys and interviews provide other valuable perspectives as well.
In the end, I don’t think evaluation in a learning organization is captured in reports or numbers from a survey. It lives in what we do, with and for each other—in the questioning, transparency and shared responsibility we might not always enjoy but know is necessary to keep moving forward.