Thursday, February 19, 2015

Waiting on the World to Change

This morning, I woke up at an ungodly early time to work on grading my students' writing pieces. As I sat down to start the task, I had the thought, "Wouldn't it be nice if there was a tool I could use that would let me just click a few buttons for their writing rubrics and send them their results?" A Google search later, I found two new tools: Doctopus and Goobric. Doctopus compiles all of my students' writing pieces from Google Docs into one spreadsheet. Then Goobric takes a rubric you've created in a Google Spreadsheet and applies it to the document. You'll see a split-screen with the rubric on top and the student's writing below, and you can just click the box of the descriptor that applies. You can even record audio feedback for your student!

When you're done, you click "Submit," and that's where the real magic happens. It will automatically paste the appropriately shaded rubric with a link to your audio comments at the bottom of your student's document, AND it will input the rubric scores on your Doctopus spreadsheet. It's amazing, and it made the quality of my feedback far better in much less time than it would normally take me to grade essays.

All because I did a quick Google search to try and solve a problem this morning.

When I then started reading the Markle, West, & Rich (2011) article, I quickly came down from my technology-empowered high and settled back into the real world. The fact that they provided the video clips along with the conversation analysis did so much to emphasize the inadequacy of CA as a stand-alone method, and yet, CA and other types of transcription are standard practices. The tools exist to make the practice better, so that's not the barrier; it's the researchers and gatekeepers who are standing in the way.

Just as I was able to find tools to improve my grading process, so, too, could researchers improve their data process. I know those tools exist. For example, iBooks Author (Mac) allows you to write and publish text with embedded multimedia files. Magazines are moving toward a similar format for their digital editions, adding slideshows, videos, and playlists to enhance the content. Our class readings this week suggested many other tools as well. Scholars could still write traditional texts for hard copy books and journals, but they could have the enhanced digital version available online. In addition, if researchers were concerned about privacy issues for their research subjects, Markle, West, & Rich (2011) suggest that there are tools that could be used to edit the files to protect subjects. The pitch of a subject's voice could be raised or lowered to make it less identifiable to others, and video could be edited to mask a person's face. As long as the researcher were transparent with both the subjects and the research audience about using these tools to alter the data and justified it based on privacy concerns, I don't think it would be a problem. We could at least start heading in that direction.

Markle, West, & Rich (2011) make two arguments that I think are home runs for the move toward multimedia enhanced writing:

1) It frees up writing space, which improves research quality. When researchers are constrained by word count limits, it's unfortunate to have to dedicate some of those precious words to transcripts that don't even reflect the conversation as authentically as the audio file itself would. The field would benefit more from a thorough analysis of the interview than from a transcribed account of it.

2) It improves the teaching of novice researchers. It takes the research process from an abstract concept to a concrete, hands-on experience. Researchers would enter the field better equipped to conduct powerful research, and the quality of research would improve as a result.

These seem like two major benefits that would outweigh any disadvantages advanced by resisters.

But people still need to want to make the shift, and I'm not sure how to convince them to do it. I face this challenge constantly when I find new teaching tools like Doctopus and Goobric that I want my colleagues to try, but they resist for reasons that don't always make sense to me. Exposing the ways that technology improves the process or product is one way to help, which is why I'm grateful that we have this class. Articles like the one by Markle, West, & Rich (2011) are helpful, too, but I always wonder if the people who need to be reading those articles actually are. The fact that their article was published in FQS rather than in a more traditional research journal makes me wonder if they're already preaching to the choir.

So I guess I'm leaving this week's readings a little bit frustrated because the world is not changing as quickly as I'd like it to. The benefits of using these technology tools seem overwhelming and obvious to me, but I feel like I'm in the minority on that front. I think things will get better as younger people move into academia, but that's still a long time to wait.

And I'm impatient.

Tuesday, February 17, 2015

Online Research: The Other Fifty Shades of Gray

If there was one thing made clear about online research this week, it was that it's still a gray area that is open to many interpretations and context-dependent decisions. While the Association of Internet Researchers has attempted to offer some guidance about ethical research practices, even those are described in shades of gray--acknowledging that there are still as many questions as answers. The three fundamental tensions center on human subjects, texts/data vs. people, and public vs. private spaces.

Maybe I've become too cynical about Internet privacy after all of the Wikileaks drama and other scandals in the last few years, but I have no real expectation of privacy on the Internet anymore. I know that I've probably signed away rights on all sorts of sites by agreeing to TOU policies that were too long for me to take the time to read. Amazon and countless other sellers track my every shopping query, and they remind me of it by posting ads for the specific products I've perused in my Facebook sidebar. Emails, browsing histories, IP addresses -- everything can be tracked, so I'm not terribly swayed by privacy concerns in Internet research.

What does resonate with me, however, is the argument from the Swedish Research Council in the Elm chapter that "People who participate in research must not be harmed, either physically or mentally, and they must not be humiliated or offended" (2009, p. 84). For me, this is the fundamental issue that is critical to my integrity as a researcher. I'm not overly concerned about rules about public/private spheres because there is such a blur between those. But I do care what my research subjects think about how I treat them. I would not want to harm, humiliate, or offend the people who are important to my research interests, nor would I want to jeopardize my future relationships with them in any way. I recognize that my research areas are pretty tame, so there's little risk of harming others. But there are so many consequences that may be unpredictable, and it's wise to be thoughtful throughout the process, not just when getting IRB approval. It therefore makes sense that the AoIR guidelines would be rather nebulous.

One thing that stood out to me in the readings this week was the idea that there is not really an international consensus on ethical practices for Internet research--particularly in regard to the definition of human subjects. While the AoIR guidelines are a good starting point for a framework, interpretation and application may vary from country to country and probably from university to university as well. Given these variations, I'm curious what that means for Internet research. Are there some countries or universities that are researcher "hot spots" where online researchers want to go? Or are there some that are shunned because they're too strict or too loose with their research requirements? It seems like the uneven approaches could create some interesting dynamics among scholars.

I was also very interested in reading the Salmons chapters because I plan to do most of my dissertation interviews online. A couple of questions I hope she can address:

1) What are some strategies and challenges with recruiting participants in online research?
2) Are there any particular tools that are especially good for online qualitative interviews?

Tuesday, February 10, 2015

There's an App for That...

As an early iPad adopter and someone who trains teachers on how to use iPads in the classroom, there have been times when I've felt overwhelmed by the number of apps and resources available to choose from. Sometimes having unlimited options can feel paralyzing, and I often consider that when I decide which apps to share with other teachers and how many to share at one time. I was reminded of this when I browsed the options available for use on the Nova website. There were over a dozen note-taking apps alone! I know much depends on personal preferences, but I'm curious to hear from our class presenter, Everett Painter, about some of the considerations he uses in selecting tools to use.

I tend to gravitate toward tools that can accomplish several objectives on their own or are designed to interface with other apps or devices. Evernote, for example, can be used for note taking and audio recording, and it syncs with Skitch and Penultimate. Evernote and Skitch also have desktop platforms so I can move between my computer and mobile device as necessary. Those affordances push me toward some apps over others. So I guess I'm wondering:

  • Which of the apps have a web/desktop platform in addition to the mobile app?
  • Are there any apps that are mobile-only but offer unique features that can't be accomplished by a laptop?

As for the readings this week, I really appreciated the Corti, van den Eynden, Bishop & Woollard (2014) chapter. As a beginning researcher, I'm not sure I could anticipate many aspects of the research process in advance. I recognize the importance of planning, and I liked how the chapter broke steps down into checklists of questions to consider. There were many there -- especially about formatting and storage -- that I hadn't really thought about before. I am certain that I will return to this chapter in the future as I plan my research.

I'm also fascinated by some of the ethical questions that come out of doing Internet research. As Paulus, Lester, & Dempster (2014) pointed out, there are lots of gray areas when it comes to online research, and I'm excited about entering a research area that is still so new and undefined. I know that forging new ground carries with it a lot of extra responsibilities to get it right, but I also look forward to participating in debates about what those standards should be. I will definitely be tracking down some of the recommended resources mentioned in the chapter's bibliography so that I can learn more about the ethical issues of online research and familiarize myself with those practices as I plan my research.

Tuesday, February 3, 2015

Paperless? Yes, please!

This is a picture that I shared on my teaching blog a few years ago when I decided that I was going to start pushing my fourth grade classroom to go paperless. I took this picture as I was preparing to do report cards, and the papers were taking over my life. There was so much to lug around between home and school, and it was hard to keep track of everything. I teach in a 1:1 iPad classroom, so once I figured out the workflow process and taught it to my students, it made my life much easier. Being paperless, at least to me, doesn't mean being paper-free -- if there are times when it makes more sense to do something on paper, I will. But I've certainly cut way back on the amount of stuff that I print or copy. If I didn't have a paperless classroom, I shudder to think about how many trees would be dying between my students and my PhD research!

I often tell colleagues that the whole reason I love using technology is because it often makes my work more productive, efficient, and organized. I couldn't accomplish half of what I do without having great technology resources to lean on. With my classroom experiences of going paperless, I feel pretty confident about tackling a paperless lit review. I've taught teacher workshops on using GoodReader and Evernote, and I'm excited to abandon EndNote and add Mendeley to the mix. I think the biggest challenge with any of this stuff is figuring out the best possible workflow. I was reading the blog posts from Dr. Jennifer Lubke, and it prompted a few questions that perhaps Dr. Varga could address in class:

1) It seems like one of the biggest objections to using the .pdf viewer in Mendeley for annotations was that it crashed a lot. Is that still the case? I played around with it a little bit on my iPad and saw that I could do highlighting and "sticky notes" -- fewer functions than are available in GoodReader, to be sure -- but it didn't crash at all. Are there other reasons why I should integrate a separate app into the reading process?

2) Another nerdy workflow question...I've set up a watch folder on Dropbox for Mendeley. If I also sync that watch folder with GoodReader, annotate the .pdfs, and save the flattened .pdfs back to Dropbox, will Mendeley import those changes when it syncs with my watch folder? Is it only watching for new files, or does it watch for changed files, too?

3) How helpful are the social networking/resource recommendations on Mendeley? Is that a valuable feature, or is it still too soon to tell?

4) Is there a way to connect Mendeley to the UGA library to search automatically for full texts or download the library-owned electronic copies of citations stored in Mendeley?

I could certainly play around with the tools to figure out the answers to these questions, but if you already know the answers, I'd love to hear more.

Unrelated, my work session last week was productive. I'm slowly figuring out ATLAS.ti. The webinar was helpful, but technology webinars are always challenging -- I need to play around with the features more in order to really make sense of them, and that's going to take me a while. I was excited to learn about the video training resources and the ATLAS.ti blog where I can dig in as I'm ready.

I've also been reading further in the Friese book, Qualitative Data Analysis with ATLAS.ti, to make more sense of the software. I like the book, but I can tell it was written using the Windows version. I'm generally okay with navigating the Mac/Windows divide, but she does talk about three different sample projects that come with ATLAS.ti, and I don't think the Mac version comes with any sample projects. I did a quick search for them online, and I haven't found them yet. This seems like a potential limitation of the Mac software in terms of the learning process. I'm still in the early stages of my research, so I don't have a lot of my own data. I'm hoping I can figure out a way to get a sample data set to play around with as I learn the software, and then I'll feel more confident using it when I get deeper into my own research.

I'm excited to learn more about Mendeley!