Tuesday, April 21, 2015

ATLAS.ti and the Inevitability of Compromise

I've become spoiled as a Mac user.

I first made the switch from PCs to Macs when I took my current job in City Schools of Decatur. I was handed a MacBook on my first day of work and was forced to make the switch. I remember it being a culture shock in the beginning--everything I'd ever done on a computer during the previous decade had been done on a PC using Windows, and I felt disoriented without the predictable routines I'd built up over those years. But I soon adjusted, and I quickly grew to love working on a Mac. Programs seemed to operate more smoothly with fewer crashes and lags, and I wasn't living in constant paranoia of bugs or blue screens of death. I've since replaced all of our home computers with Macs; add in the iPhones and iPads, and ours is one Apple-loving household. It's been a blissful few years of being surrounded by Apple products, and I forgot that PCs are still pretty popular.

Until I tried learning ATLAS.ti.

Let me say from the start that I like the program, and I'm pretty optimistic about where the new Mac version of ATLAS.ti is headed. But the initial learning process has been a little challenging at times. Many of the resources available for learning ATLAS.ti are specific to the Windows version. The book Qualitative Data Analysis using ATLAS.ti by Susanne Friese, for example, is fantastic, but it covers only the Windows version. While the concepts can be transferred (at least in most cases -- not all of the features are functional in the Mac version yet), it takes a little trial and error to figure out the Mac equivalents. In some cases, the Mac version is more intuitive and requires fewer steps, but in other areas, the Mac version isn't quite there yet. The transcription features aren't fully functional, for example, and it's impossible to collaborate between Mac and Windows users. These are some big limitations in how the program currently operates. Still, I know from the feature matrix that these features are coming, so I'll just need to be patient until they're ready. Fortunately, I'm still in the early stages of my research, so my need for those features isn't so great. Yet.

There are two things I really like about ATLAS.ti so far:
1) I like the ability to complete literature reviews and code the articles that I read. I've been looking for a system for tagging arguments and themes and organizing them across articles, and ATLAS.ti is by far the best system I've found for that. Even if I ultimately decide to use other programs for data collection and analysis, I'm certain that I will stick with ATLAS.ti for my literature reviews. All the functions that I need for that are fully operational.
2) I like the memo features. It has been helpful to write notes to myself and track my thinking across research projects. My use of the memos hasn't been terribly sophisticated yet, but I like this feature and expect to use it more.

Things I wish were different about ATLAS.ti:
1) The aesthetics - (and this is where my snotty Mac lover comes out) - the interface of the software is gray and boring, and there's a part of me that wonders if I'll get seasonal affective disorder from staring at the gray too long. I have to admit that I'm attracted to programs like Dedoose in part because of the visuals. Even being able to change up the color scheme in the preferences a bit would help. But if I'm going to be spending a long time in a piece of software, the aesthetics of the design elements matter to me, and ATLAS.ti is definitely taking the function-over-form route.
2) It's a pain to shift between devices. I wish that there were some cloud storage options or a cloud-based version of the software that would allow me to access a project from different devices or do real-time collaboration. I'm not a one-computer gal. When I'm home, I'm on my big screen Mac, but I am obviously not lugging that with me to class or to conferences. I want my research easily accessible, and while projects can be moved from one device to another, it's not an easy process by today's cloud-based standards.

Things I have mixed feelings about:
1) The constant updates. It seems like every time I open the software, it asks to install a new update. I like that ATLAS.ti is working to improve the software and add new features to the Mac version, and I think this is ultimately a good thing. But the updates aren't small, and it's a little cumbersome to be updating all the time. Still, it's better than software that never releases updates.

Obviously, these are all first-world problems, and none of them are deal-breakers. The reality of working with CAQDAS-related programs is that no piece of software can do it all. Compromises are inevitable, and there are always tradeoffs. I can be a satisfied ATLAS.ti user and still dream of ways to improve the experience. The push to make things better should always be encouraged in everything we do.

For now, I remain optimistic about the future of the Mac version of ATLAS.ti, and I plan to keep going with it. Whether it will be a long-term love affair remains to be seen, but it has potential. And I'm glad we were introduced this semester.



Tuesday, April 7, 2015

Scrivener and Writing

One of the things that has surprised me most about my journey through digital tools this semester is how many tools I'm already using can be repurposed for qualitative research. Scrivener is one such tool. I started using Scrivener as a fan-fiction(!) writer because I liked how I could map out several chapters at a time and block out all of the distractions on my screen. Once I lost time and interest in writing fanfic, I started using it to plan blog posts for my teaching blog. I also used it to transcribe an interview once. I liked that I could split the screen between the audio file and the transcript, and I could use shortcuts to pause/play the audio. There's also a feature that automatically rewinds an amount you set when you pause the audio, so you can re-listen to the last few words to check your transcript before you type the next section. It was a good solution for transcription at the time, but I suspect I will use other transcription tools like InqScribe in the future. InqScribe just seems to have better features like inserting timestamps and linking them to the audio.

Still, I haven't played around with Scrivener for a few months, so I'm intrigued to get back to it. I can definitely see how it would be useful for academic writing. I could import many research articles that I've already annotated and map out dissertation chapters or article sections using the tool. And I could certainly use a return to the distraction-free mode (I say as I have 14 tabs open on my web browser...). I think the biggest barrier to my more frequent use was portability. I frequently shift between devices (desktop, laptop, and iPad), and I have a hard time remembering to shift my work with me. I think all of the cloud technologies like Google Drive have spoiled me that way. So I would have to get into a better habit with saving and transporting (or commit to a device for writing) before I could really use Scrivener seriously.

My questions about Scrivener are:

  • Are there any plans for a mobile version of the app?
  • What are the best ways to handle moving projects between devices -- especially when the user frequently switches from one computer to another?

As a long-time blogger and someone who plans to write a dissertation centered on teacher blogger practices, I was a big fan of the Powell, Jacob, & Chapman (2012) reading this week. Some parts that stood out for me:

The lines between platforms are blurring. At least among teacher-bloggers, it's not enough to have a presence through a blog alone; bloggers often have to branch out to all of the other major social media platforms, and significant strategizing goes into maintaining presence on those platforms. Whenever I write a blog post for my teaching blog, for example, I immediately schedule posts promoting the blog on Twitter and Facebook and share pictures from the blog on Pinterest and Instagram. It's necessary to increase and maintain readership, and it means that bloggers need to learn PR skills in order to be successful. I imagine in academia, this is a real shift. Writing and making your research accessible for a global audience requires different skills than writing for scholarly peers alone, but it seems that such skills could have a lot of value for universities. Social media like blogs can draw attention to research and invite broader conversations and potentially more funding as people become interested in the research content.

There's a delicate balance to be had between scholarship and popularity. In my experience, blogs need to be either very useful substantively or highly entertaining (or ideally both) in order for me to read them regularly. If a blog is too "scholarly," it would probably lose readers because it would read more like a journal article. But at the same time, it's important not to sacrifice credibility and evidence for the sake of becoming more popular. I was reminded of this just this morning when I was reading a Gawker article about "Food Babe," a blogger who investigates food ingredients and reports on health issues and hidden toxins. Food Babe is not a scholar by any means, but she has definitely leveraged her blog to become popular, and as the scathing criticism in the Gawker article reveals, she doesn't back up many of her claims with credible scientific evidence. Researchers have to be particularly careful not to be blinded by the prospect of becoming popular at the expense of maintaining credibility for their research. It's a delicate balance for sure, especially once a blog starts to gain readers outside of the academic community, but I think heading toward more effective use of blogging and social media could be very beneficial for academics and the public as a whole.


Tuesday, March 31, 2015

Evolutions

When I first started this course, I was a little apprehensive. While I am able to pick up technology skills fairly easily, I'm still in the very early stages of my doctoral program. I have an idea for my dissertation topic, and I know that technology will play an important role in that -- both in content and in analysis of the data. But I haven't really collected any data yet. I was skeptical of my ability to learn the tools without having my own research already complete, and I was jealous of some of my colleagues who are deeper into the process (and closer to writing the dissertation) because they had more data to work with. 

The more I learn about CAQDAS technologies, however, the happier I am that I'm getting exposure to all of these tools now -- in the early stages. I feel certain it will save me countless hours down the road because I'll be better organized and prepared to use the tools, and I won't feel overwhelmed to dive into them. I think the Bazeley & Jackson (2013) text said it best:
"Starting early, if you are still learning software, will give you a gentle introduction to it and a chance to gradually develop your skills as your project builds up. This is better than desperately trying to cope with learning technical skills in a rush as you become overwhelmed with data and the deadline for completion is looming" (p. 26).
It's nice to know that I'll be able to use the same tools throughout the process, and I can slowly learn more features as they're needed.

*******

The evolution and use of CAQDAS tools fascinates me. I had no idea coming into this semester how interested I would be in this area, but I suppose it makes sense given that it's a great intersection of my research and technology interests. I was talking about QDAS with my husband recently, and it occurred to me that he uses some similar technologies for his job. My husband is an attorney who works in complex business litigation, and one of the things he frequently has to do is review documents. For example, he might have to read 10,000+ emails downloaded from a client's inbox, code them for content, and look for segments that may support or refute a particular argument. We were talking about the software he uses and how that process compares to the work I may ultimately do in ATLAS.ti, and he mentioned that the new trend in the legal field is to move toward predictive coding software. He doesn't have it at his firm yet, but he said that it's supposed to learn some of your coding habits and conduct some of the document analysis for you based on parameters you set. 

I immediately started thinking about that in terms of qualitative research, and I wonder if something like that will ever be used or accepted in our research community. I would need to know more about how it works to really form an opinion on it, but I can see potential advantages and disadvantages with it. If it really is a learning software that learns how I code and applies that knowledge to my projects, then I think it could be a huge time-saver. But it could also distance me from my data, and I would really want to scrutinize the process that it uses. It's like outsourcing -- there are some things (like housekeeping!) that I'm happy to outsource to others, but there are other things that just aren't worth outsourcing. Coding might be one of those things. I guess we'll see as the software continues to evolve. 
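To make the predictive coding idea concrete for myself, here's a minimal sketch -- entirely my own illustration, not how any real legal e-discovery or QDAS product actually works -- of the basic machine-learning idea: tally up the words in segments a researcher has already coded by hand, then use a simple Naive Bayes comparison to suggest a code for a new, uncoded segment. The example segments and codes are made up.

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train(segments):
    """Tally word frequencies per code from hand-coded (text, code) pairs."""
    counts = defaultdict(Counter)
    for text, code in segments:
        counts[code].update(tokenize(text))
    return counts

def suggest(counts, text, alpha=1.0):
    """Suggest the code whose word profile best matches a new segment
    (multinomial Naive Bayes with add-one smoothing, uniform prior)."""
    vocab = {w for wc in counts.values() for w in wc}
    best_code, best_score = None, -math.inf
    for code, wc in counts.items():
        total = sum(wc.values())
        score = 0.0
        for w in tokenize(text):
            score += math.log((wc[w] + alpha) / (total + alpha * len(vocab)))
        if score > best_score:
            best_code, best_score = code, score
    return best_code

# Hypothetical segments a researcher has already coded by hand:
segments = [
    ("comments from readers keep me motivated", "community"),
    ("I blog to connect with other teachers", "community"),
    ("scheduling posts ahead saves me time", "efficiency"),
    ("templates make writing posts faster", "efficiency"),
]
model = train(segments)
print(suggest(model, "readers leave encouraging comments"))  # prints "community"
```

The researcher still reviews every suggestion; the software only learns the word patterns behind past coding decisions, which is exactly why I'd want to scrutinize the process before trusting it with my data.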

*******

I'm excited to learn more about Dedoose. I like ATLAS.ti so far, but I'm a fan of cloud computing, and I'm curious about how it might handle my mixed methods research. Two questions that came up as I was looking through the website:

1) Compatibility: It says in the video that it can pull in data from software programs like NVivo and ATLAS.ti, but is that relationship bi-directional? Can you both import and export data with Dedoose?

2) Pricing: I know Dedoose charges a monthly fee. Do you only pay for the months that you use it (e.g., sign in)? If you have a project uploaded in Dedoose that you don't touch for a month or two, do you have to pay the monthly fee because it's still housed on the platform?

Tuesday, March 24, 2015

Video Killed the Radio Star

While I don't expect that I will do a lot of video analysis -- at least in the short term -- I was impressed by all of the features that Transana has to offer. The video overview was helpful in showing some of the powers of the software. I was especially intrigued by the potential to view multiple transcripts at once from different points of view, all synchronized with the video. I think the example from the filmmaking perspective really highlighted this potential. The Dempster & Woods (2011) article was also helpful in seeing how the software could be used collaboratively between researchers. As I explored the Transana website (and experienced a little sticker shock over the cost), I had a few questions about the software:

1) If you buy the standard version of the software, is it possible to upgrade to professional at a discounted rate?

2) I know from the article that multiple users can code synchronously online. Is it possible for multiple people to code the same video asynchronously and then combine the files/codes to compare? It seems like this could be a useful teaching tool if you wanted to see different interpretations of the same video from several different users.

3) How memory intensive is the software? What computer specs would be considered optimal for Transana's needs?

One thing that intrigued me from the Paulus, Lester, & Dempster (2014) chapter this week was the idea of asynchronous video coding through collaborative video annotations. That seems like such a great teaching tool for both preservice teachers and in-service teachers. I can imagine a lot of potential professional learning that could center on watching and annotating videos. The vignette described using Microsoft Movie Maker, Microsoft Paint, and a PHP script, but that seems complicated since it's potentially three different programs. Is there a cheap, easily accessible, streamlined equivalent that could do the same things? I'd love to be able to use something like that with some of my colleagues and the preservice students I mentor, but I don't foresee getting them to download ATLAS.ti or Transana anytime soon...

I may have to brainstorm a research project that uses video so I can play with more of these toys...






Tuesday, March 17, 2015

Land of Confusion

This week's readings were very interesting, and they raised several issues that I hadn't previously considered. More than ever, I feel like a lot of my ideas about these topics are in flux, and I don't know where I land on them.

1. Challenges of temporariness and related ethical issues
All experiences are temporary moments in time, and traditionally, researchers would work to capture those through videos, photos, audio recordings, and field notes. But now that there are resources such as Snapchat -- an app that markets itself on the temporariness of anything that's posted -- what does that mean for researchers? The Pihlaja YouTube study described in the Page, Barton, Unger & Zappavigna (2014) text highlights this challenge. Pihlaja was attempting to study dialogue through comments on YouTube videos, and some people went back and deleted their comments. How could that data then be handled? It existed at one point, and in a more traditional setting, a subject wouldn't be able to retract their comments or actions. In an online setting, retraction becomes a real possibility, and I feel like it's a gray area for how to handle that as a researcher. Going back to Snapchat, if all posts are ephemeral by design, can the researcher even ethically use screen captures for research? If not, how could you research that platform effectively?

2. Terms of Service
I have never taken the time to read any of the Terms of Service for platforms like Facebook and YouTube, but I know that several platforms have pretty restrictive expectations. YouTube, for example, doesn't allow you to download videos from their platform, but there are many other services (e.g., KeepVid) that will allow you to download YouTube videos. Similarly, Facebook claims ownership of the content that is posted on its platform, but then other services like Texifter allow you to download Facebook content for analysis. What are the legal and ethical issues involved with those practices, and how do digital researchers handle those?

The Fuchs (2014) chapters were fantastic in helping me understand critical Marxism. I'm fairly familiar with Marxist theory, but these chapters unpacked the concepts in easy-to-understand ways while drawing in concrete examples from modern social media practices. One part that really stood out to me was the section in chapter 1 about the dialectic and contradictions. It seems that we give corporations like Facebook a lot of power when we agree to their evolving terms of service, and perhaps one way to take back some of that power is through using other tools (e.g., Texifter) that can help us better understand how social media works. Are critical theorists more flexible in their thinking about some of these platforms' rules and expectations? And if so, how does the IRB feel about that?

I'm not sure where I fall on any of these issues anymore. The more I tread into the waters of online research, the murkier my surroundings feel. I'm not deterred by that, but there's an element of unknown that's a bit intimidating--especially as an inexperienced researcher. I'll clearly need to explore these topics further.



Tuesday, March 3, 2015

Netnography, Virtual Worlds, and Tool Adoptions

Since the last class, I've been thinking a lot about my love for digital tools and whether I need to put my attitude about tool adopters vs. non-adopters in check. The best I can say is...maybe.

I didn't grow up around computers. I'm not a digital native. The first computer I owned arrived when I was a senior in high school, and I didn't get Internet service at home (dial-up) until after my freshman year of college. My first laptop came after I graduated college. That was followed by a cell phone after I was married and a smartphone after I started my second career as a teacher. So technology hasn't always been a part of my life, but as I've experienced ways that it could make my life better and less complicated, I've embraced it. I'm pretty open-minded about playing with new tools, and if something doesn't work for me, I'm okay with abandoning it. The beauty of the digital playground is that there are always more toys.

So here's the issue for me: if tools exist to make the research process more efficient, transparent, and accessible, then why shouldn't they be widely used?

I appreciate that some people have found strategies that work for them and that tools may be difficult to learn, but I don't know if those reasons are good enough to warrant resistance. I think it comes down to a fundamental question: what is the purpose of research? If research is intended to be a primarily researcher-focused act--which it very well may be, given that the researcher decides every aspect of the study--then the researcher should just use whatever works, digital or not. But if the purpose of research is to contribute more broadly to society and our understandings of the world or our fields of study, then I think digital tools are a necessary part of that. They allow closer and more verifiable examination of research, and they provide better data trails to help novice researchers understand research practices. In this worldview, it seems selfish to resist digital tools for the sake of convenience.

I'm not saying that researchers have to learn and use every digital tool available. There are some that will be a better fit for the research and the researcher than others (I'm looking at you, EndNote...). But I don't think general ignorance of the tools or resistance to them is acceptable among those who want to do research professionally (i.e., academics). Tools are becoming more accessible and intuitive all the time, and even if a particular tool is rejected for one reason or another, researchers should at least consider them with an open mind.

So yeah, I guess I'm still on Team CAQDAS. Pretty passionately so...

Speaking of my CAQDAS passions, I was disappointed to see that the new Netnography book (Netnography: Redefined) isn't coming out until June. I've been having regular Amazon deliveries of books introduced through this class every Friday since the beginning of the semester. It will be weird not to race home on Friday to hide another package of qualitative research books before my husband sees it... I need to get better at reading nonfiction books on the Kindle...

I'll be curious to see how much of the Netnography book is actually "redefined." There are so many fascinating issues in the chapters we read that I can see applying to my own research of teacher bloggers. My research is going to examine experiences of bloggers and lurkers and see if there is any difference in how they quantify (with survey data) or account for (with interview data) their self-efficacy beliefs as teachers. I can imagine worlds in which aspects of Kozinets's four A's (adaptation, anonymity, accessibility, and archiving) could be relevant. For example, maybe adaptation differentiates those who blog vs. those who lurk. Maybe the bloggers are better able to adapt to the different types of technology involved in blogging. Anonymity is definitely an issue; teachers are highly public figures, so they have to be careful about any digital footprints they leave. Some will only blog or comment under pseudonyms while others are identifiable but careful about the types of information they share. Accessibility seems to be decreasing as an issue (and maybe the new book will speak to that since there are more recent Pew Internet Reports reflecting these trends). Archiving also factors in since everything is preserved on the many blogging platforms, and once something is published, it's hard to undo it. I want to explore more of the Netnography methodology to see exactly how it will fit into my research.

Finally, I enjoyed reading chunks of Holt's World of Warcraft dissertation. I didn't have a chance to read all of it, but it was interesting to learn about his research methodology. I think it would be incredibly challenging to research a MMORPG while immersed as a player. How do you juggle the research experience with the player experience? It seems like it would be hard to set playing goals such as getting to the raider/end-of-game level without letting that consume you or overshadow the research. But at the same time, I can't imagine any other way to study that culture. Similarly, I wondered about the possible ethical issues that could arise from having multiple identities (alts) within the game. It's definitely a possibility that is unique to the online world, and I wonder what issues that might present and how those are handled in the research. As always, there's a lot to consider.


Thursday, February 19, 2015

Waiting on the World to Change

This morning, I woke up at an ungodly early hour to work on grading my students' writing pieces. As I sat down to start the task, I had the thought, "Wouldn't it be nice if there was a tool I could use that would let me just click a few buttons for their writing rubrics and send them their results?" A Google search later, I found two new tools: Doctopus and Goobric. Doctopus compiles all of my students' writing pieces from Google Docs into one spreadsheet. Then Goobric takes a rubric you've created in a Google Spreadsheet and applies it to the document. You'll see a split screen with the rubric on top and the student's writing below, and you can just click the box of the descriptor that applies. You can even record audio feedback for your student!


When  you're done, you click "Submit," and that's where the real magic happens. It will automatically paste the appropriately shaded rubric with a link to your audio comments at the bottom of your student's document, AND it will input the rubric scores on your Doctopus spreadsheet. It's amazing, and it made the quality of my feedback far better in much less time than it would normally take me to grade essays.

All because I did a quick Google search to try and solve a problem this morning.

When I then started reading the Markle, West, & Rich (2011) article, I quickly came down from my technology-empowered high and settled back to the real world. The fact that they provided the video clips along with the conversation analysis did so much to emphasize the inadequacy of CA as a stand-alone method, and yet, CA and other types of transcription are standard practices. The tools exist to make the practice better, so that's not the barrier; it's the researchers and gatekeepers who are standing in the way.

Just as I was able to find tools to improve my grading process, so, too, could researchers improve their data process. I know those tools exist. For example, iBooks Author (Mac) allows you to write and publish text with embedded multimedia files. Magazines are moving toward a similar format for their digital editions where they add slideshows, videos, and playlists to enhance the content. Our class readings this week suggested many other tools as well. Scholars could still write traditional texts for hard copy books and journals, but they could have the enhanced digital version available online. In addition, if researchers were concerned about privacy issues for their research subjects, Markle, West, & Rich (2011) suggest that there are tools that could be used to edit the files to protect subjects. The pitch of a subject's voice could be raised or lowered to make it less identifiable to others, and video could be edited to mask a person's face. As long as the researcher was transparent with both the subjects and the research audience about using these tools to alter the data and justified it based on privacy concerns, I don't think it would be a problem. We could at least start heading in that direction.

Markle, West, & Rich (2011) make two arguments that I think are home runs for the move toward multimedia enhanced writing:

1) It frees up writing space so that the research quality improves. When researchers are constrained by word count limits, it's unfortunate to have to dedicate some of those precious words to transcripts that don't even reflect the conversation as authentically as the audio file itself would. The field would improve by more thorough analysis of the interview rather than a transcribed account of it.

2) It improves the teaching of novice researchers. It takes the research process from an abstract concept to a concrete, hands-on experience. Researchers would enter the field better equipped to conduct powerful research, and the quality of research would improve as a result.

These seem like two major benefits that would outweigh any disadvantages advanced by resisters.

But people still need to want to make the shift, and I'm not sure how to convince them to do it. I face this challenge constantly when I find new teaching tools like Doctopus and Goobric that I want my colleagues to try, but they resist for reasons that don't always make sense to me. Exposing the ways that technology improves the process or product is one way to help, which is why I'm grateful that we have this class. Articles like the one by Markle, West, & Rich (2011) are helpful, too, but I always wonder if the people who need to be reading those articles actually are. The fact that their article was published in FQS rather than a more traditional research journal makes me wonder if they're already preaching to the choir.

So I guess I'm leaving this week's readings a little bit frustrated because the world is not changing as quickly as I'd like it to. The benefits of using these technology tools seem overwhelming and obvious to me, but I feel like I'm in the minority on that front. I think things will get better as younger people move into academia, but that's still a long time to wait.

And I'm impatient.

Tuesday, February 17, 2015

Online Research: The Other Fifty Shades of Gray

If there was one thing made clear about online research this week, it was that it's still a gray area that is open to many interpretations and context-dependent decisions. While the Association of Internet Researchers has attempted to offer some guidance about ethical research practices, even those are described in shades of gray--acknowledging that there are still as many questions as answers. The three fundamental tensions center on human subjects, texts/data vs. people, and public vs. private spaces.

Maybe I've become too cynical about Internet privacy after all of the WikiLeaks drama and other scandals in the last few years, but I have no real expectation of privacy on the Internet anymore. I know that I've probably signed away rights on all sorts of sites by agreeing to TOU policies that were too long for me to take the time to read. Amazon and countless other sellers track my every shopping query, and they remind me of it by posting ads for the specific products I've perused in my Facebook sidebar. Emails, browsing histories, IP addresses -- everything can be tracked, so I'm not terribly swayed by privacy concerns in Internet research. What does resonate with me, however, is the argument from the Swedish Research Council in the Elm chapter that "People who participate in research must not be harmed, either physically or mentally, and they must not be humiliated or offended" (2009, p. 84). For me, this is the fundamental issue that is critical to my integrity as a researcher. I'm not overly concerned about rules about public/private spheres because there is such a blur between those. But I do care what my research subjects think about how I treat them. I would not want to harm, humiliate, or offend the people who are important to my research interests, nor would I want to jeopardize my future relationships with them in any way. I recognize that my research areas are pretty tame, so there's little risk of harming others. But there are so many consequences that may be unpredictable, and it's wise to be thoughtful throughout the process, not just when getting IRB approval. It therefore makes sense that the AoIR guidelines would be rather nebulous.

One thing that stood out to me in this week's readings was the idea that there is no real international consensus on ethical practices for Internet research, particularly regarding the definition of human subjects. While the AoIR guidelines are a good starting point for a framework, interpretation and application may vary from country to country, and probably from university to university as well. Given these variations, I'm curious what that means for Internet research. Are there some countries or universities that are "hot spots" where online researchers want to go? Or are some shunned because their research requirements are too strict or too loose? It seems like these uneven approaches could create some interesting dynamics among scholars.

I was also very interested in reading the Salmons chapters because I plan to do most of my dissertation interviews online. A couple of questions I hope she can address:

1) What are some strategies and challenges with recruiting participants in online research?
2) Are there any particular tools that are especially good for online qualitative interviews?






Tuesday, February 10, 2015

There's an App for That...

As an early iPad adopter and someone who trains teachers on how to use iPads in the classroom, I have sometimes felt overwhelmed by the number of apps and resources available to choose from. Sometimes having unlimited options can feel paralyzing, and I often consider that when I decide which apps to share with other teachers and how many to share at one time. I was reminded of this when I browsed the options available on the Nova website. There were over a dozen note-taking apps alone! I know much depends on personal preference, but I'm curious to hear from our class presenter, Everett Painter, about some of the considerations he uses in selecting tools.

I tend to gravitate toward tools that can accomplish several objectives on their own or are designed to interface with other apps or devices. Evernote, for example, can be used for note taking and audio recording, and it syncs with Skitch and Penultimate. Evernote and Skitch also have desktop platforms, so I can move between my computer and mobile device as necessary. Those affordances push me toward some apps over others. So I guess I'm wondering:

  • Which of the apps have a web/desktop platform in addition to the mobile app?
  • Are there any apps that are mobile-only but offer unique features that can't be matched on a laptop?

As for this week's readings, I really appreciated the Corti, van den Eynden, Bishop & Woollard (2014) chapter. As a beginning researcher, I'm not sure I could anticipate many aspects of the research process in advance. I recognize the importance of planning, and I liked how the chapter broke the steps down into checklists of questions to consider. There were many questions there -- especially about formatting and storage -- that I hadn't really thought about before. I am certain that I will return to this chapter as I plan my research.

I'm also fascinated by some of the ethical questions that come out of doing Internet research. As Paulus, Lester, & Dempster (2014) pointed out, there are lots of gray areas when it comes to online research, and I'm excited to be entering a research area that is still so new and undefined. I know that forging new ground carries extra responsibility to get it right, but I also look forward to participating in debates about what those standards should be. I will definitely be tracking down some of the resources recommended in the chapter's bibliography so that I can learn more about the ethical issues of online research and familiarize myself with those practices as I plan my research.

Tuesday, February 3, 2015

Paperless? Yes, please!


This is a picture that I shared on my teaching blog a few years ago when I decided that I was going to start pushing my fourth grade classroom to go paperless. I took this picture as I was preparing to do report cards, and the papers were taking over my life. There was so much to lug around between home and school, and it was hard to keep track of everything. I teach in a 1:1 iPad classroom, so once I figured out the workflow process and taught it to my students, going paperless made my life much easier. Being paperless, at least to me, doesn't mean being paper-free -- if there are times when it makes more sense to do something on paper, I will. But I've certainly cut way back on the amount of stuff that I print or copy. If I didn't have a paperless classroom, I shudder to think how many trees would be dying between my students and my PhD research!

I often tell colleagues that the whole reason I love using technology is that it makes my work more productive, efficient, and organized. I couldn't accomplish half of what I do without great technology resources to lean on. With my classroom experience of going paperless, I feel pretty confident about tackling a paperless lit review. I've taught teacher workshops on using GoodReader and Evernote, and I'm excited to abandon EndNote and add Mendeley to the mix. I think the biggest challenge with any of this is figuring out the best possible workflow. I was reading the blog posts from Dr. Jennifer Lubke, and they prompted a few questions that perhaps Dr. Varga could address in class:

1) It seems like one of the biggest objections to using the .pdf viewer in Mendeley for annotations was that it crashed a lot. Is that still the case? I played around with it a little bit on my iPad and saw that I could do highlighting and "sticky notes" -- fewer functions than are available in GoodReader, to be sure -- but it didn't crash at all. Are there other reasons why I should integrate a separate app into the reading process?

2) Another nerdy workflow question... I've set up a watch folder on Dropbox for Mendeley. If I also sync that watch folder with GoodReader, annotate the .pdfs, and save the flattened .pdfs back to Dropbox, will Mendeley import those changes when it syncs with my watch folder? Is it only watching for new files, or does it watch for changed files, too?

3) How helpful are the social networking/resource recommendations on Mendeley? Is that a valuable feature, or is it still too soon to tell?

4) Is there a way to connect Mendeley to the UGA library to search automatically for full texts or to download library-owned electronic copies of the citations stored in Mendeley?

I could certainly play around with the tools to figure out the answers to these questions, but if you already know the answers, I'd love to hear more.
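Question 2 really boils down to whether the watcher compares modification times or only looks for unfamiliar filenames. Just to think it through for myself, here's a minimal Python sketch of the generic pattern (purely hypothetical -- I have no idea how Mendeley's watcher is actually implemented, and the function name is my own):

```python
import os

def scan_folder(folder, seen):
    """Compare a folder against a previously recorded snapshot of
    modification times. Returns (new, changed) lists of filenames and
    updates the snapshot dict in place."""
    new, changed = [], []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue  # skip subfolders
        mtime = os.path.getmtime(path)
        if name not in seen:
            new.append(name)          # file we've never snapshotted
        elif mtime > seen[name]:
            changed.append(name)      # same name, newer timestamp
        seen[name] = mtime
    return new, changed
```

If the watcher does something like the mtime comparison, a flattened .pdf saved back under the same name would show up as "changed"; if it only checks for new filenames, it wouldn't be noticed at all -- which is exactly the distinction I'm asking about.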

Unrelated, my work session last week was productive. I'm slowly figuring out ATLAS.ti. The webinar was helpful, but technology webinars are always challenging -- I need to play around with the features more in order to really make sense of them, and that's going to take me a while. I was excited to learn about the video training resources and the ATLAS.ti blog where I can dig in as I'm ready.

I've also been reading further in the Friese book, Qualitative Data Analysis with ATLAS.ti, to make more sense of the software. I like the book, but I can tell it was written using the Windows version. I'm generally okay with navigating the Mac/Windows divide, but she discusses three different sample projects that come with ATLAS.ti, and I don't think the Mac version comes with any sample projects. I did a quick search for them online but haven't found them yet. This seems like a potential limitation of the Mac software in terms of the learning process. I'm still in the early stages of my research, so I don't have much data of my own. I'm hoping I can find a sample data set to play around with as I learn the software, and then I'll feel more confident using it when I get deeper into my own research.

I'm excited to learn more about Mendeley!




Tuesday, January 20, 2015

Reflexivity & Technology: Who Am I?

I am a techie. 

I wasn't always that way, but as I've gotten older, I've seen so many ways that technology has helped me work more productively. 

I think my love of technology really started in 2008. I had been teaching fifth grade for three years while my husband was in law school, and he accepted a one-year federal clerkship in Montgomery, AL. We decided to leave Atlanta for a year to move to Montgomery, and I decided to spend the year finishing my master's degree from Michigan State through online classes and by Skyping in to live classes. I spent most of my time at my computer working on coursework, and since we didn't know many people in Alabama, I relied on technology to stay connected to teaching and other educators. It was during this year that I started participating in #edchat and other chats on Twitter, and I started reading and following a lot of teaching blogs. I connected with many other tech-savvy educators, and I saw a lot of potential for teacher learning through online collaboration since I was experiencing that myself. 

For a long time, I was a lurker in these online conversations and blogs. But over time, as I became more comfortable with the tools and had more confidence in myself, I began to participate. I started my own blog and networked with many other teachers more formally. We've had several meet-ups, and these teacher bloggers have been some of the most inspiring and motivating teachers I've encountered. I find that I learn much, much more through them than I typically do through my school-sponsored professional learning opportunities. It's because of these experiences that I want to study teacher bloggers for my dissertation research. I want to explore how their online experiences impact their classroom experiences and their feelings of self-efficacy. I also want to see if there's a difference in the self-efficacy beliefs of bloggers vs. lurkers -- those who read the blogs, but don't comment or write anything of their own. My experiences of being isolated for a year in Alabama and transitioning from a lurker to a blogger really changed my understanding of technology's potential, and it's a focal point of my research interests. 

As I was going through this week's class readings, I was struck by a couple of issues. First, I do think there is a point where we experience "information overload," and that point can change from day to day or topic to topic. I love using sites like Twitter, but I feel like I can only take them in small doses because there is so much available. And while I can focus my attention on the thought leaders around a particular topic, I'm not sure that will help me build relationships for future collaboration opportunities. I have to widen the net to find others who share my interests but who might not be at the forefront of the field yet. That's a tough issue to balance, and I'm not sure that I've figured that out yet.

The second thing that struck me was the set of ethical issues raised by online collaboration and document sharing. My whole life is in the cloud now. Between my Dropbox, Google Drive, and Evernote accounts, I'm completely dependent on having my work saved in those spaces so that I can move seamlessly between devices. The cloud has some definite downsides. Last year, for example, Dropbox had a security breach and reset all of my file-sharing links without telling me -- not fun for my collaborators! But overall, the cloud makes me much more efficient and productive on the go. It seems like qualitative researchers will need to accept that as a reality of modern research practice and develop ethical guidelines that embrace it. I like that I'm entering the field at a time when there's still a lot of dialogue about that.

I'm a long-time Evernote user, but I do have a question for Dr. Britt: is it better to organize by tags or by notebooks? 

I've heard conflicting perspectives on this. One hardcore Evernote user that I know insists that it's a waste of time to create notebooks because you can locate everything you need through good use of tags. Others say you should segment out different topics through notebooks but still tag individual notes. None of the people I've discussed this with have been researchers, however, so I'd like to hear another perspective.

Wow! My thoughts this week really meandered. Thanks for sticking with me through this -- lots to process. 


Monday, January 12, 2015

Decisions, Decisions...

When it comes to using technology for qualitative research, I'm sold. I'm an early adopter of so many tools as it is, and there are many areas of my life where technology has made my work more efficient. It seems obvious that I would integrate technology as much as possible in my research. But when it comes to making decisions about the tools to use to start my qualitative research and data analysis, I feel like I'm car shopping. Do I go with ATLAS.ti or NVivo? Should I test drive both? Will they both get me where I need to go? Should I just commit to one now and roll with it?


I'm planning to do a mixed methods study for my dissertation, and I'd ultimately like to become proficient in both qualitative and quantitative methods. I want a tool that will give me that flexibility. I'm tempted to go with ATLAS.ti since that's what we're using in this course, but I'm nervous to do so after the warning in the Silver and Lewins text that "caution[s] against choosing a package simply because it is the one you have the 'easiest' (e.g. immediate or free) access to" (2014, p. 22). Am I choosing it just because it's available? At the same time, it seems silly to reject it without knowing the real differences between the two programs. By all accounts I've heard so far, both of these CAQDAS packages do essentially the same things, so if I could customize either one to fit my needs, does it really matter? I'd hate to make my life unnecessarily difficult by going in a different direction if it's not going to make much of a difference. Is it even possible to make the "wrong" decision here? And would I even know what I was missing if I did?

I was also very interested in the Jackson (2014) paper about how QDAS fits into our ideas of transparency. As a tech-y person, I suspect that I could go on and on in my dissertation about how I'll ultimately use my QDAS tools. At the same time, however, I wonder if my dissertation would be the right place to do that. In my experiences with blogging about classroom technology and coaching other teachers in using it, I find that most people just want the most basic details about tech tools. There's a weird stigma around technology where people often make it seem scarier and more overwhelming than it should be, and they shut down in a way they wouldn't necessarily do if they were learning about any other topic. For some people, technology is scary and uncomfortable, and I would worry about alienating my readers by going into the finer details of how I use the QDAS. At the same time, transparency is something I really value, so I'm wondering whether the description of how QDAS influences the researcher needs to be contained in the final product itself. Could it, for example, live elsewhere, such as on a publicly available blog? I could easily imagine myself blogging my way through the decision-making process, exposing how I'm using the tools for those who are genuinely curious, without alienating the less QDAS-familiar readers of my research. Is that a reasonable middle ground for a qualitative (or mixed-methods!) researcher?

These are the things I'm considering as I prepare for this week's class.

Thanks for reading!

Sunday, January 4, 2015

The Networked Teacher: A Study of Teacher-Bloggers

The nature of teacher professional development is changing. What was once limited to in-services, conferences, and workshops has now become something unbound by time, budgets, or geography. The increased accessibility of the Internet through smartphones, tablets, and other computing devices has caused an explosion of social networking opportunities, and sites such as Pinterest's education category show that some teachers are taking full advantage of this shift to innovate, improve, and share effective practices from their own classrooms. 


I am designing a mixed-methods study to explore elementary teachers' perceptions of how blogging affects their self-efficacy beliefs. My research will explore the following questions:

1. What are teachers' perceptions of and experiences with blogging about their classrooms?
2. What are teachers' perceptions of and experiences with reading blogs about other teachers' classrooms?
3. What impact, if any, does blog participation have on teachers' self-efficacy beliefs? Is there any difference between the efficacy beliefs of those who blog versus those who read blogs without creating any content of their own?

Phase one of my study will include a mostly quantitative survey using modified self-efficacy scales and professional learning communities scales. From there, participants will have the ability to opt-in to the qualitative phase of my study.

Phase two will consist of qualitative interviews of teacher bloggers (content producers) and teachers who do not blog but read other teachers' blogs (content consumers). Through these interviews, I hope to gain a better understanding of how teachers use blogs for professional growth and how their blogging practices as readers and/or writers affect them in the classroom.

I'm still in the very early stages of designing this study, but I welcome comments. If you are here visiting from my teaching blog, Eberopolis: Teaching Reading and Writing with Technology, welcome! Many of my posts -- at least in the short term -- will be done for course assignments, but I will try not to bore you. Your amazing insights as teacher-bloggers inspired me to pursue this study, so jump right in! And for my classmates and instructor, I'm looking forward to taking this research journey with you. I'm a technology-lover at heart, so I'm excited to find new ways to marry my love for technology tools with my research. Given my dissertation topic and other interests, a course on Digital Technologies & Qualitative Research sounded like a natural fit for me.

Thanks for reading!





[This post was written in response to Reflexive Practice Prompt 1.1 in Digital Tools for Qualitative Research (Paulus, Lester, & Dempster, 2014).]