The heart of our work is long-term interventions, so keeping clients engaged is essential to their effectiveness. When we started noticing students leaving our Stay the Course program after the winter break, we began asking "why?"

On the back end, we analyzed all the data at our fingertips: age, gender, family size, preferred language, time of enrollment, campus, and degree plan. But we found no significant differences between students who left and those who stayed enrolled. We knew we needed to collect qualitative data, but it is time-consuming to collect and analyze.

One of our site supervisors had a thought: what if we didn't collect data in the traditional sense? What if we asked questions along the way, to a random sample of students, in a way that was responsive, sustainable, and integrated into our regular program operations? Drawing on expectancy-value theory, we asked students whether their expectations were being met and whether the program was responsive to their needs. We did this in informal, one-on-one conversations with clients rather than laborious, after-the-fact surveys, much like the "stay interviews" used in human resources departments. Because the feedback was real-time and informal, we could pivot and make tweaks without waiting until it was too late.

What we continue to be reminded of is that evaluation is everyone's business. It is an attitude. It is a culture of learning in which programs are equipped and empowered to collect and act on client feedback. While we still invest in formal evaluations and research, we are also turning inward and asking how we might invest in programs that are evaluative by their very design.