I read Jenny Stephens Moore’s post on Reilly and Atkins’ chapter, “Rewarding Risk: Designing Aspirational Assessment Processes for Digital Writing Projects,” and Daniel Hocutt’s post on Crow’s chapter, “Managing Datacloud Decisions and ‘Big Data’: Understanding Privacy Choices in Terms of Surveillant Assemblages.”
I commented on Jenny’s post that I thought the Atkins and Reilly article complements the article I read by VanKooten (see my post about it here), because both tie assessment to rhetorical theory, not just pedagogical principles. Of course we must assess what is taught, and we must have specific, practical, and desirable learning outcomes, but these outcomes must also be tied to theoretical principles that are applied through the work being assessed. (I think theory is often lost in the race to make something “useful,” and what is applied may be more technical knowledge or lower-order thinking, rather than the higher-order synthesis and evaluation skills that would be addressed by paying attention to the theory behind the methods.)
Atkins and Reilly assert that the “language of assessments of digital writing projects should be generalizable, generative, aspirational,” which they define as encouraging students to use new tools and learn new skills. Moore notes that teachers should also “solicit student involvement in assessment creation, which Reilly and Atkins claim will localize and contextualize the assessment.” VanKooten also discussed co-creating the rubric with her students, which builds agency and investment in the project but also requires them to think about the component parts (which would include applied theory) that must go into the assessment. I’m a firm believer in localizing and contextualizing the assessment to the particular work in that particular class, which isn’t the same from semester to semester even if you are teaching the same class and giving generally the same assignment. I also liked Atkins and Reilly’s discussion of risk-taking and the aspirational component of assessment. We want to encourage our students to take risks; LEARNING involves risk. If you don’t feel uncomfortable, then you aren’t learning, you’re just doing. Adding a component to the rubric that rewards risk-taking tangibly with points encourages students to take those risks. It also makes students less likely to fall back on “pat” responses to the assignment, and it should discourage plagiarism.
Crow’s article brings up the “dark side” of the cloud, much as White’s afterword brings up the “dark side” of technology hyper-mediating our experience. Crow uses Deleuze and Guattari’s concept of the assemblage to theorize that e-portfolios function as a convergence of once-discrete surveillance systems, a “surveillant assemblage.” A classroom is one such discrete surveillance system, but when you create a portfolio of artifacts from multiple classrooms, you create such a convergence: performance now crosses the boundaries of the individual performances given for individual teachers, who formerly surveilled only their own students but now have access to products from beyond the borders of their classrooms, made by students who were not “their own.” The performance revealed in the portfolio is something new in and of itself; it is more than the sum of the discrete performances in particular classes. Furthermore, the audience is extended, especially depending on who has access to the e-portfolio. And this, according to Crow, has implications for privacy that we should consider.
I have often thought about this “surveillant assemblage” (though without using that vocabulary term) when it comes to SafeAssign, the anti-plagiarism tracking tool in use at many high schools, colleges, and universities (or another program of the same ilk, such as Turnitin). Students don’t REALLY have a choice about putting their work into the database. Sure, they have the disclaimer in front of them that says they voluntarily agree to add their information to this network, but they cannot turn the assignment in to the teacher without agreeing to the terms. The vastness of the information contained in the SafeAssign database, including personally identifying information, reflections, and more, is staggering to think about. It is in the hands of a multi-billion-dollar corporation (Blackboard, which is owned by an investment group). What do they do with the data? What *could* they do?
For this very reason, I am careful to have students put their personal narratives, profiles of partners, and professional writing that includes their names, addresses, and resumes into Blackboard but *not* SafeAssign. Their papers are still on the VCCS Blackboard server, and potentially viewable by others besides me, but that is not the same as being added to a large, multi-school database. My institution requires me to use SafeAssign for certain assignments, either as a justification for paying for it or as an attempt to monitor plagiarism (which sometimes subsumes the purpose of writing, making it about “catching” wrongdoing).
I think I would very much like to read other articles in this book. It is timely and fresh, with some new and exciting theorizing. And I got used to reading on screen rather than printing, which I am trying to train myself to do. It’s a process.
Works Cited
Brown, Maury. “‘Toward a Rhetorically Sensitive Assessment Model for New Media Composition’ – Crystal Van Kooten Annotated Bibliography Entry.” Blog post. 3 Feb. 2014. Web. 7 Feb. 2014.
Crow, Angela. “Managing Datacloud Decisions and ‘Big Data’: Understanding Privacy Choices in Terms of Surveillant Assemblages.” Digital Writing Assessment and Evaluation. Ed. Heidi A. McKee and Dànielle Nicole DeVoss. Computers and Composition Digital Press, 2013. Web. 8 Feb. 2014.
Gardner, Traci. “Digital Rhetoric: Wordle of Top 50 most frequently used words from the DRC Blog Carnival focused on defining Digital Rhetoric.” 20 June 2012. Web. 7 Feb. 2014.
Hocutt, Daniel. “Annotated Bibliography Entry: Crow in DWAE.” Blog post. 3 Feb. 2014. Web. 7 Feb. 2014.
Moore, Jenny Stephens. “Annotated Bibliography: Reilly and Atkins.” Blog post. 3 Feb. 2014. Web. 7 Feb. 2014.
Reilly, Colleen A., and Anthony T. Atkins. “Rewarding Risk: Designing Aspirational Assessment Processes for Digital Writing Projects.” Digital Writing Assessment and Evaluation. Ed. Heidi A. McKee and Dànielle Nicole DeVoss. Computers and Composition Digital Press, 2013. Web. 2 Feb. 2014.
VanKooten, Crystal. “Toward a Rhetorically Sensitive Assessment Model for New Media Composition.” Digital Writing Assessment and Evaluation. Ed. Heidi A. McKee and Dànielle Nicole DeVoss. Computers and Composition Digital Press, 2013. Web. 3 Feb. 2014.
White, Edward M. “Afterword: Not Just a Better Pencil.” Digital Writing Assessment and Evaluation. Ed. Heidi A. McKee and Dànielle Nicole DeVoss. Computers and Composition Digital Press, 2013. Web. 3 Feb. 2014.