I’m 48 hours away from leaving for six weeks in the field, and my office looks like a combination of a shipping dock and an outdoor supply store. Seriously, duffel bags, so many duffel bags. But it’s good. I’m taking the lab out for a month of fieldwork. We are excited to continue our work in Nagigi and to work with the community to understand how best to design marine reserves. However, this year I am making time to focus on doing outreach in-country, so the first three weeks we spend in Fiji will be in Suva, making connections and doing things that, in NSF parlance, would be broader impacts.

The centerpiece of these activities is a weeklong course that we are teaching at the University of the South Pacific, sponsored by the US Embassy. This course, which we are calling Fiji W.I.S.E. (Workshop on International Science Education), will be a really great opportunity to share cutting-edge conservation science with students and conservation practitioners from the region. As of now we have 12 registered students from Fiji, New Zealand, Papua New Guinea, and Vanuatu, whose expertise ranges from aquaculture to marine reserve design. The course will consist of morning lectures and afternoon field trips to local reefs and mangroves.

What I wanted to write about today, though, was not the course per se, but the larger issue of how we evaluate our broader impacts activities. Since the NSF mandated broader impacts, I’ve seen a progression from “I’ll talk to some high school students” to “I’ll make a website” to some really innovative projects including game play, educational videos, and real community involvement like bioblitzes. (For a deeper look at what should, and shouldn’t, count as a broader impact, please see Prof Like Substance’s blog here.)

Recently on Twitter several of us were talking about how to show that our broader impacts were actually doing something. This is a big issue for me, as I am both seeking and evaluating funding. I want to make sure that the taxpayer money I’m spending is actually doing something, and I think incorporating some sort of formal evaluation can challenge us to think about what exactly those impacts are.

I’m not a big fan of the broadcast spawning model of outreach, wherein “if you make the website, people will visit it,” because that puts the onus of discovery on the public. Let’s face it: there are a lot of cat pictures to get through on the Internet before the public will find something on shark-toothed weapons or the evolution of coral reef community structure. I’m not saying we shouldn’t have engaging websites, but simply putting one up is, in my opinion, a necessary but not sufficient condition.

Which brings us to Fiji. I’m going to be teaching this class. I’m not currently funded by the NSF, but I’m hoping to use this class in future submissions as an example of the kind of outreach I’m doing. However, I’m not going to simply say “we taught a blended lecture/field class to 12 Pacific Islanders,” because as a reviewer I’d say, “Meh, so what? They sat there and you talked at them for a week. Hardly broad, and probably little impact.” Rather, I’m incorporating a formal evaluation.

My goal here is to assess the students’ baseline knowledge of marine conservation across the five major topics of the class (spatial planning, biogeography, community ecology, terrestrial/marine linkages, and community-based management) prior to the course. Then, after the course has been completed, we will see what (if any) improvement in knowledge they show on those same topics, as well as whether they are able to draw connections between them. We will do this through a combination of short answer (yay, quantifiable data) and thematic mapping (more hand-wavy, but easier for representing complex issues).

If this all works out, at the end of the class I’ll be able to say something like “70% of the class showed an increase in their knowledge of community ecology,” or something like that. This will not only help us fine-tune the class for next year (should the funding gods smile) but also allow us to go to granting agencies and say, “This money you gave us had a real and quantifiable impact on these students.” This won’t be the sum of my broader impacts, as it’s very intensive and only reaches a small group of people. However, those people were selected because they are plugged into a larger community of active conservationists, and our hope is that they will be able to share their lessons with others.

I’m interested in hearing what you think. Are you incorporating evaluation metrics in your broader impacts activities? How can we, as a community, move beyond YouTube views and really get a grasp on whether we’re making a real difference? Or are views an acceptable measure of impact for some kinds of activities but not others? I doubt there’s going to be a single metric, but I think we can only improve the quality of our work by thinking about how we would evaluate those outcomes.