18 Jul 2016

My coauthor Wil Doane and I are open-sourcing our ICER 2016 submissions. For each paper, you’ll find:
- A link to the GitHub repository, which contains not only the entire source of each manuscript, but also the entire commit history. You can see how we went from initial commit to final product, browse our pull requests, and see blame views for the entire manuscript.
- A PDF containing:
  - The submitted paper itself
  - All reviewer reports (including a meta-review)
  - The ICER 2016 guidelines for reviewers
- A link to a discussion issue, where I warmly invite you to leave feedback and start conversations
| Title | Download (PDF) | Join the Conversation |
| --- | --- | --- |
| Reconstructing design thinking and learning through code snapshots and clinical interviews (Source) | Download It! | Discuss It! |
| Expanding Models of Cognition within Computing Education Research (Source) | Download It! | Discuss It! |
Why am I doing this?
I’m doing this for five reasons (at my current count):
- Teaching. In my entire graduate career, I never once saw the review history of a paper I didn’t write. And by and large, we don’t design our graduate courses around that. Sure, we pack our syllabi full of influential research and cornerstone pieces, but we don’t get to see how those pieces were made. We also don’t get to see how the community (in the form of reviewers) thought about and responded to those pieces, or what changed on their journey from initial submission to final acceptance. If that’s going to change—if we’re going to provide the community with worked examples of our own scholarship—it has to start somewhere. So I’m offering up my work.
- Exposing the Scholarly Writing Process. When I wrote my first first-author article, I filled a folder with more than 100 drafts of the manuscript. Each draft had entire conversations in the margin between me and my co-authors. The manuscript itself is now published, but what aren’t published are the dozens of conversations we had in those marginal comments, arguing (formally) with each other, persuading each other, convincing each other. The research was in the manuscript, but the dialectic was in the margins. We also had three separate rounds of review, and our second round included seven different reviewers. In each round we wrote the editor to respond to every single point the reviewers raised; in one of those rounds our letter to the editor was more than 10,000 words long. But none of that material—neither the marginal dialectics nor the carefully considered arguments we made to reviewers and editors—is available to current scholars. With this open-source project, you can start to see how Wil and I engaged in our authorial back-and-forth. It’s nowhere near a complete record of our conversations, but it’s a start.
- Continuing to Advance the Review System. Reviewing is hard, thankless work. And odds are that if you’ve submitted your work anywhere, you’ve felt like “the reviewers just don’t get it,” or “I addressed that on page 6!” or “Thanks, reviewer, for pointing out you’re dissatisfied while offering me no information on how to make you satisfied.” Our systems are imperfect, but we’re trying to make them better. In that spirit, I’d like to point out a major disconnect between two different kinds of review I’ve been a part of. As an NSF review panelist, I’ve reviewed dozens of proposals where I had reservations or concerns. And every time I’ve raised a concern, the panel has worked to determine, in essence, “could the authors resolve your concern in a quick phone call or email, or is this the kind of concern that would require substantial revisions to the proposal?” The NSF review process allows for a certain limited back-and-forth with proposal authors, so that some concerns can be triaged as not threatening the entire proposal. Our current ICER review process doesn’t work that way. We have no formal request-for-clarification or rebuttal mechanism. And while I know about (and appreciate) the strides we’ve made with meta-reviews, I can easily identify comments in my reviews where having the chance to rewrite just one paragraph would have addressed the reviewer’s concerns. If it’s happening to me, it might be happening to other people too.
- Keeping Knowledge Open. I think our scholarship should be as open as we can make it. If these papers had been among the 25% of papers accepted, they would, as far as I know, have been published behind the ACM Digital Library’s paywall. And while I know that not all CSEd research is publicly funded, this research was. The nation entrusted me and my colleagues with the means to investigate; I won’t betray that trust by making them pay again for my results.
- Fighting for the Legitimacy of Method. In their 2007 paper on acceptance and belonging in engineering education, Foor, Walden, and Trytten reflected on why they chose to conduct ethnographic research:
> Accepting qualitative research, especially ethnography of the particular, into the toolbox of engineering education research provides a microphone for the voices of the marginalized to be heard. Ethnography of the particular allows us to hear each and every voice that would otherwise be lost in aggregate ethnography or statistical analyses. (p. 113)
I can’t claim that my ethnographic work is good enough to make it into ICER. The reviewers ultimately judged it wasn’t, and there’s no formal way for me to argue the case. What I can say with certainty is that those of us who conduct ethnographic work are still fighting for legitimacy in CSEd and engineering education. We still have to fight reviewers who have problems with ‘small-N’ studies, and the reviews you’ll see for my manuscripts prove it. And, to be clear, I’m not suggesting that, say, experimental design research doesn’t have to justify validity. It does. The difference is that experimental designers have to defend their implementation of a research method; they don’t have to defend the epistemological legitimacy of that method. Put another way, if research methods were objects, an experimental designer needs to defend their instance; ethnographers in CSEd have to defend the entire class itself. And ethnographers have to defend that class almost every time they want to publish in CSEd. Even though Erickson defended it in 1986. Even though Schoenfeld defended it again in 1994. And even though Andy Elby, Ayush Gupta, and I defended it again in 2014, with an entire section of our manuscript defending not just our instance of small-sample ethnography, but the rigor and value of the entire class of small-sample ethnography. If we have to justify our method’s legitimacy in every manuscript we write, even and especially when the venue is meant to represent the bleeding edge of CSEd research, it’s one more narrative burden, one more place reviewers can ding us, one more dimension that separates us from other kinds of research, and one more sign that while our community might say it embraces this method, we still have further to go in making that true. And if you don’t believe that, reflect on the last time you got a reviewer comment that said “the problem with this study is that the N is just too large.”