Tuesday, March 31, 2015

Evolutions

When I first started this course, I was a little apprehensive. While I am able to pick up technology skills fairly easily, I'm still in the very early stages of my doctoral program. I have an idea for my dissertation topic, and I know that technology will play an important role in that -- both in content and in analysis of the data. But I haven't really collected any data yet. I was skeptical of my ability to learn the tools without having my own research already complete, and I was jealous of some of my colleagues who are deeper into the process (and closer to writing the dissertation) because they had more data to work with. 

The more I learn about CAQDAS technologies, however, the happier I am that I'm getting exposure to all of these tools now -- in the early stages. I feel certain it will save me countless hours down the road because I'll be better organized and prepared to use the tools, and I won't feel overwhelmed to dive into them. I think the Bazeley & Jackson (2013) text said it best:
"Starting early, if you are still learning software, will give you a gentle introduction to it and a chance to gradually develop your skills as your project builds up. This is better than desperately trying to cope with learning technical skills in a rush as you become overwhelmed with data and the deadline for completion is looming" (p. 26).
It's nice to know that I'll be able to use the same tools throughout the process, and I can slowly learn more features as they're needed.

*******

The evolution and use of CAQDAS tools fascinates me. I had no idea coming into this semester how interested I would be in this area, but I suppose it makes sense given that it's a great intersection of my research and technology interests. I was talking about QDAS with my husband recently, and it occurred to me that he uses some similar technologies for his job. My husband is an attorney who works in complex business litigation, and one of the things he frequently has to do is review documents. For example, he might have to read 10,000+ emails downloaded from a client's inbox, code them for content, and look for segments that may support or refute a particular argument. We were talking about the software he uses and how that process compares to the work I may ultimately do in ATLAS.ti, and he mentioned that the new trend in the legal field is to move toward predictive coding software. He doesn't have it at his firm yet, but he said that it's supposed to learn some of your coding habits and conduct some of the document analysis for you based on parameters you set. 

I immediately started thinking about that in terms of qualitative research, and I wonder if something like that will ever be used or accepted in our research community. I would need to know more about how it works to really form an opinion on it, but I can see potential advantages and disadvantages with it. If it really is a learning software that learns how I code and applies that knowledge to my projects, then I think it could be a huge time-saver. But it could also distance me from my data, and I would really want to scrutinize the process that it uses. It's like outsourcing -- there are some things (like housekeeping!) that I'm happy to outsource to others, but there are other things that just aren't worth outsourcing. Coding might be one of those things. I guess we'll see as the software continues to evolve. 
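Out of curiosity, I tried to imagine what that kind of "learning" might look like under the hood. This is purely a toy sketch of the general idea (the segments, codes, and use of Python's scikit-learn library are my own hypothetical illustration, not how any actual e-discovery or CAQDAS product works): train a simple classifier on segments a researcher has already coded, then let it suggest codes for new, uncoded segments.

```python
# Toy illustration of "predictive coding": learn from hand-coded
# segments, then suggest codes for uncoded ones. All data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Segments a researcher has already coded by hand (hypothetical)
segments = [
    "The contract terms were never disclosed to us.",
    "Please find attached the signed agreement.",
    "We dispute the invoice amounts for March.",
    "Attached is the revised purchase order.",
]
codes = ["dispute", "agreement", "dispute", "agreement"]

# Turn text into word-frequency features, then fit a simple classifier
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(segments, codes)

# Ask the model to suggest a code for a new, uncoded segment
suggestion = model.predict(["We reject the proposed settlement terms."])[0]
print(suggestion)
```

Even in this tiny sketch, the concerns I mentioned are visible: the model's suggestions are only as good as the examples it learned from, and accepting them uncritically would put real distance between me and my data.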

*******

I'm excited to learn more about Dedoose. I like ATLAS.ti so far, but I'm a fan of cloud computing, and I'm curious about how it might handle my mixed methods research. Two questions that came up as I was looking through the website:

1) Compatibility: The video says that Dedoose can pull in data from software programs like NVivo and ATLAS.ti, but is that relationship bi-directional? Can you both import and export data with Dedoose?

2) Pricing: I know Dedoose charges a monthly fee. Do you only pay for the months that you use it (e.g., sign in)? If you have a project uploaded in Dedoose that you don't touch for a month or two, do you still have to pay the monthly fee because it's housed on the platform?

Tuesday, March 24, 2015

Video Killed the Radio Star

While I don't expect that I will do a lot of video analysis -- at least in the short term -- I was impressed by all of the features that Transana has to offer. The video overview was helpful in showing some of the powers of the software. I was especially intrigued by the potential to view multiple transcripts at once from different points of view, all synchronized with the video. I think the example from the filmmaking perspective really highlighted this potential. The Dempster & Woods (2011) article was also helpful in seeing how the software could be used collaboratively between researchers. As I explored the Transana website (and experienced a little sticker shock over the cost), I had a few questions about the software:

1) If you buy the standard version of the software, is it possible to upgrade to professional at a discounted rate?

2) I know from the article that multiple users can code synchronously online. Is it possible for multiple people to code the same video asynchronously and then combine the files/codes to compare? It seems like this could be a useful teaching tool if you wanted to see different interpretations of the same video from several different users.

3) How memory intensive is the software? What computer specs would be considered optimal for Transana's needs?

One thing that intrigued me from the Paulus, Lester, & Dempster (2014) chapter this week was the idea of asynchronous video coding through collaborative video annotations. That seems like such a great teaching tool for both preservice teachers and in-service teachers. I can imagine a lot of potential professional learning that could center on watching and annotating videos. The vignette described using Microsoft Movie Maker, Microsoft Paint, and a PHP script, but that seems complicated since it's potentially three different programs. Is there a cheap, easily accessible, streamlined equivalent that could do the same things? I'd love to be able to use something like that with some of my colleagues and the preservice students I mentor, but I don't foresee getting them to download ATLAS.ti or Transana anytime soon...

I may have to brainstorm a research project that uses video so I can play with more of these toys...






Tuesday, March 17, 2015

Land of Confusion

This week's readings were very interesting, and they raised several issues that I hadn't previously considered. More than ever, I feel like a lot of my ideas about these topics are in flux, and I don't know where I land on them.

1. Challenges of temporariness and related ethical issues
All experiences are temporary moments in time, and traditionally, researchers have worked to capture them through videos, photos, audio recordings, and field notes. But now that there are resources such as Snapchat -- an app that markets itself on the temporariness of anything that's posted -- what does that mean for researchers? The Pihlaja YouTube study described in the Page, Barton, Unger & Zappavigna (2014) text highlights this challenge. Pihlaja was attempting to study dialogue through comments on YouTube videos, and some people went back and deleted their comments. How should that data be handled? It existed, but would a subject in a more traditional setting be able to retract their comments or actions? In an online setting, retraction becomes more of a possibility, and I feel like it's a gray area for how a researcher should handle it. Going back to Snapchat, if all posts are ephemeral by design, can the researcher even ethically use screen captures for research? If not, how could you research that platform effectively?

2. Terms of Service
I have never taken the time to read any of the Terms of Service for platforms like Facebook and YouTube, but I know that several platforms have pretty restrictive expectations. YouTube, for example, doesn't allow you to download videos from its platform, but there are many other services (e.g., KeepVid) that will allow you to download YouTube videos. Similarly, Facebook claims ownership of the content that is posted on its platform, but then other services like Texifter allow you to download Facebook content for analysis. What are the legal and ethical issues involved with those practices, and how do digital researchers handle them?

The Fuchs (2014) chapters were fantastic in helping me understand critical Marxist theory. I'm fairly familiar with Marxist theory, but these chapters unpacked the concepts in easy-to-understand ways while drawing in concrete examples from modern social media practices. One part that really stood out to me was the section in chapter 1 about the dialectic and contradictions. It seems that we give corporations like Facebook a lot of power when we agree to their evolving terms of service, and perhaps one way to take back some of that power is through using other tools (e.g., Texifter) that can help us better understand how social media works. Are critical theorists more flexible in their thinking about some of these platforms' rules and expectations? And if so, how does the IRB feel about that?

I'm not sure where I fall on any of these issues anymore. The more I tread into the waters of online research, the murkier my surroundings feel. I'm not deterred by that, but there's an element of unknown that's a bit intimidating--especially as an inexperienced researcher. I'll clearly need to explore these topics further.



Tuesday, March 3, 2015

Netnography, Virtual Worlds, and Tool Adoptions

Since the last class, I've been thinking a lot about my love for digital tools and whether I need to put my attitude about tool adopters vs. non-adopters in check. The best I can say is...maybe.

I didn't grow up around computers. I'm not a digital native. The first computer I owned arrived when I was a senior in high school, and I didn't get Internet service at home (dial-up) until after my freshman year of college. My first laptop came after I graduated college. That was followed by a cell phone after I was married and a smartphone after I started my second career as a teacher. So technology hasn't always been a part of my life, but as I've experienced ways that it could make my life better and less complicated, I've embraced it. I'm pretty open-minded about playing with new tools, and if something doesn't work for me, I'm okay with abandoning it. The beauty of the digital playground is that there are always more toys.

So here's the issue for me: if tools exist to make the research process more efficient, transparent, and accessible, then why shouldn't they be widely used?

I appreciate that some people have found strategies that work for them and that tools may be difficult to learn, but I don't know if those reasons are good enough to warrant resistance. I think it comes down to a fundamental question: what is the purpose of research? If research is intended to be a primarily researcher-focused act--which it very well may be given that the researcher decides every aspect of the study--then the researcher should just use whatever works, digital or not. But if the purpose of research is to contribute more broadly to society and our understandings of the world or our fields of study, then I think digital tools are a necessary part of that. They allow closer and more verifiable examination of research, and they provide better data trails that help novice researchers understand research practices. In this worldview, it seems selfish to resist using digital tools out of convenience.

I'm not saying that researchers have to learn and use every digital tool available. There are some that will be a better fit for the research and the researcher than others (I'm looking at you, EndNote...). But I don't think general ignorance of the tools or resistance to them is acceptable among those who want to do research professionally (i.e., academics). Tools are becoming more accessible and intuitive all the time, and even if a particular tool ends up being rejected for one reason or another, researchers should at least consider it with an open mind.

So yeah, I guess I'm still on Team CAQDAS. Pretty passionately so...

Speaking of my CAQDAS passions, I was disappointed to see that the new Netnography book (Netnography: Redefined) isn't coming out until June. I've been having regular Amazon deliveries of books introduced through this class every Friday since the beginning of the semester. It will be weird not to race home on Friday to hide another package of qualitative research books before my husband sees it... I need to get better at reading nonfiction books on the Kindle...

I'll be curious to see how much of the Netnography book is actually "redefined." There are so many fascinating issues in the chapters we read that I can see applying to my own research of teacher bloggers. My research is going to examine experiences of bloggers and lurkers and see if there is any difference in how they quantify (with survey data) or account for (with interview data) their self-efficacy beliefs as teachers. I can imagine worlds in which aspects of Kozinets's four A's (adaptation, anonymity, accessibility, and archiving) could be relevant. For example, maybe adaptation differentiates those who blog vs. those who lurk. Maybe the bloggers are better able to adapt to the different types of technology involved in blogging. Anonymity is definitely an issue; teachers are highly public figures, so they have to be careful about any digital footprints they leave. Some will only blog or comment under pseudonyms while others are identifiable but careful about the types of information they share. Accessibility seems to be decreasing as an issue (and maybe the new book will speak to that since there are more recent Pew Internet Reports reflecting these trends). Archiving also factors in since everything is preserved on the many blogging platforms, and once something is published, it's hard to undo it. I want to explore more of the Netnography methodology to see exactly how it will fit into my research.

Finally, I enjoyed reading chunks of Holt's World of Warcraft dissertation. I didn't have a chance to read all of it, but it was interesting to learn about his research methodology. I think it would be incredibly challenging to research a MMORPG while immersed as a player. How do you juggle the research experience with the player experience? It seems like it would be hard to set playing goals such as getting to the raider/end-of-game level without letting that consume you or overshadow the research. But at the same time, I can't imagine any other way to study that culture. Similarly, I wondered about the possible ethical issues that could arise from having multiple identities (alts) within the game. It's definitely a possibility that is unique to the online world, and I wonder what issues that might present and how those are handled in the research. As always, there's a lot to consider.