Tuesday, April 21, 2015
I've become spoiled as a Mac user.
I first made the switch from PCs to Macs when I took my current job in City Schools of Decatur. I was handed a MacBook on my first day of work and was forced to adapt. I remember it being a culture shock in the beginning--everything I'd done on a computer during the previous decade had been done on a PC running Windows, and I felt disoriented without the predictable actions I'd come to rely on. But I soon adjusted, and I quickly grew to love working on a Mac. Programs seemed to run more smoothly with fewer crashes and lags, and I wasn't living in constant paranoia about bugs or blue screens of death. I've since replaced all of our home computers with Macs; add in the iPhones and iPads, and mine is one Apple-loving household. It's been a blissful few years of being surrounded by Apple products, and I'd forgotten that PCs are still pretty popular.
Until I tried learning ATLAS.ti.
Let me say from the start that I like the program, and I'm pretty optimistic about where the new Mac version of ATLAS.ti is headed. But the initial learning process has been a little challenging at times. Many of the resources available for learning ATLAS.ti are specific to the Windows version. The book Qualitative Data Analysis using ATLAS.ti by Susanne Friese, for example, is fantastic, but it's written entirely for the Windows version. While the concepts transfer in most cases (not all of the features are functional in the Mac version yet), it takes a little trial and error to figure out the Mac equivalents of the instructions. In some areas, the Mac version is more intuitive and requires fewer steps, but in others, it isn't quite there yet. The transcription features aren't fully functional, for example, and it's currently impossible to collaborate between Mac and Windows users. These are big limitations in how the program operates right now. Still, I know from the feature matrix that these features are coming, so I'll just need to be patient until they're ready. Fortunately, I'm still in the early stages of my research, so my need for those features isn't so great. Yet.
There are two things I really like about ATLAS.ti so far:
1) I like the ability to complete literature reviews and code the articles that I read. I've been looking for a system for tagging arguments and themes and organizing them across articles, and ATLAS.ti is by far the best system I've found for that. Even if I ultimately decide to use other programs for data collection and analysis, I'm certain that I will stick with ATLAS.ti for my literature reviews. All the functions that I need for that are fully operational.
2) I like the memo features. It has been helpful to write notes to myself and track my thinking across research projects. My use of the memos hasn't been terribly sophisticated yet, but I like this feature and expect to use it more.
Things I wish were different about ATLAS.ti:
1) The aesthetics - (and this is where my snotty Mac lover comes out) - the interface is gray and boring, and there's a part of me that wonders if I'll get seasonal affective disorder from staring at all that gray too long. I have to admit that I'm attracted to programs like Dedoose in part because of the visuals. Even being able to change up the color scheme in the preferences a bit might help. But if I'm going to be spending a long time in a piece of software, the aesthetics of the design matter to me, and ATLAS.ti is definitely taking the function-over-form route.
2) It's a pain to shift between devices. I wish that there were some cloud storage options or a cloud-based version of the software that would allow me to access a project from different devices or do real-time collaboration. I'm not a one-computer gal. When I'm home, I'm on my big-screen Mac, but I am obviously not lugging that with me to class or to conferences. I want my research easily accessible, and while projects can be moved from one device to another, it's not an easy process by today's cloud-based standards.
Things I have mixed feelings about:
1) The constant updates. It seems like every time I open the software, it asks to install a new update. I like that ATLAS.ti is working to improve the software and add new features to the Mac version, and I think this is ultimately a good thing. But the updates aren't small, and it's a bit cumbersome to be updating all the time. Still, that's better than software that never gets updated.
Obviously, these are all first-world problems, and none of them are deal-breakers. The reality of working with CAQDAS-related programs is that no piece of software can do it all. Compromises are inevitable, and there are always tradeoffs. I can be a satisfied ATLAS.ti user and still dream of ways to improve the experience. The push to make things better should always be encouraged in everything we do.
For now, I remain optimistic about the future of the Mac version of ATLAS.ti, and I plan to keep going with it. Whether it will be a long-term love affair remains to be seen, but it has potential. And I'm glad we were introduced this semester.
Tuesday, April 7, 2015
Scrivener and Writing
One of the things that has surprised me most about my journey through digital tools this semester is how many tools I'm already using can be repurposed for qualitative research. Scrivener is one such tool. I started using Scrivener as a fan-fiction(!) writer because I liked how I could map out several chapters at a time and block out all of the distractions on my screen. Once I lost time and interest in writing fanfic, I started using it to plan blog posts for my teaching blog. I also used it to transcribe an interview once. I liked that I could split the screen between the audio file and the transcript, and I could use shortcuts to pause and play the audio. There's also a feature that automatically rewinds by an amount you set whenever you pause, so you can re-listen to the last few words and check your transcript before typing the next section. It was a good solution for transcription at the time, but I suspect I will use dedicated transcription tools like InqScribe in the future. InqScribe just seems to have better features, like inserting timestamps and linking them to the audio.
Still, I haven't played around with Scrivener for a few months, so I'm intrigued to get back to it. I can definitely see how it would be useful for academic writing. I could import the many research articles I've already annotated and map out dissertation chapters or article sections in the tool. And I could certainly use a return to the distraction-free mode (I say as I have 14 tabs open in my web browser...). I think the biggest barrier to more frequent use was portability. I frequently shift between devices (desktop, laptop, and iPad), and I have a hard time remembering to shift my work with me. I think cloud technologies like Google Drive have spoiled me that way. So I would have to get into a better habit with saving and transporting (or commit to one device for writing) before I could really use Scrivener seriously.
My questions about Scrivener are:
- Are there any plans for a mobile version of the app?
- What are the best ways to handle moving projects between devices -- especially when the user frequently switches from one computer to another?
As a long-time blogger and someone who plans to write a dissertation centered on teacher blogger practices, I was a big fan of the Powell, Jacob, & Chapman (2012) reading this week. Some parts that stood out for me:
The lines between platforms are blurring. At least among teacher-bloggers, it's not enough to have a presence through a blog alone; bloggers often have to branch out to all of the other major social media platforms, and significant strategizing goes into maintaining a presence there. Whenever I write a post for my teaching blog, for example, I immediately schedule posts promoting it on Twitter and Facebook and share pictures from it on Pinterest and Instagram. It's necessary to increase and maintain readership, and it means that bloggers need to learn PR skills in order to be successful. I imagine this is a real shift in academia. Writing and making your research accessible to a global audience requires different skills than writing for scholarly peers alone, but it seems such skills could have a lot of value for universities. Social media like blogs can draw attention to research and invite broader conversations, and potentially more funding, as people become interested in the research content.
There's a delicate balance to be struck between scholarship and popularity. In my experience, blogs need to be either very useful substantively or highly entertaining (or ideally both) for me to read them regularly. If a blog is too "scholarly," it will probably lose readers because it reads more like a journal article. But at the same time, it's important not to sacrifice credibility and evidence for the sake of becoming more popular. I was reminded of this just this morning when I was reading a Gawker article about "Food Babe," a blogger who investigates food ingredients and reports on health issues and hidden toxins. Food Babe is not a scholar by any means, but she has definitely leveraged her blog to become popular, and as the scathing criticism in the Gawker article reveals, she doesn't back up many of her claims with credible scientific evidence. Researchers have to be particularly careful not to be blinded by the prospect of popularity at the expense of maintaining credibility for their research. It's a delicate balance for sure, especially once a blog starts to gain readers outside the academic community, but I think moving toward more effective use of blogging and social media could be very beneficial for academics and the public as a whole.
Tuesday, March 31, 2015
Evolutions
When I first started this course, I was a little apprehensive. While I am able to pick up technology skills fairly easily, I'm still in the very early stages of my doctoral program. I have an idea for my dissertation topic, and I know that technology will play an important role in that -- both in content and in analysis of the data. But I haven't really collected any data yet. I was skeptical of my ability to learn the tools without having my own research already complete, and I was jealous of some of my colleagues who are deeper into the process (and closer to writing the dissertation) because they had more data to work with.
The more I learn about CAQDAS technologies, however, the happier I am that I'm getting exposure to all of these tools now -- in the early stages. I feel certain it will save me countless hours down the road because I'll be better organized and prepared to use the tools, and I won't feel overwhelmed to dive into them. I think the Bazeley & Jackson (2013) text said it best:
"Starting early, if you are still learning software, will give you a gentle introduction to it and a chance to gradually develop your skills as your project builds up. This is better than desperately trying to cope with learning technical skills in a rush as you become overwhelmed with data and the deadline for completion is looming." (p. 26).It's nice to know that I'll be able to use the same tools throughout the process, and I can slowly learn more features as they're needed.
*******
The evolution and use of CAQDAS tools fascinates me. I had no idea coming into this semester how interested I would be in this area, but I suppose it makes sense given that it's a great intersection of my research and technology interests. I was talking about QDAS with my husband recently, and it occurred to me that he uses some similar technologies for his job. My husband is an attorney who works in complex business litigation, and one of the things he frequently has to do is review documents. For example, he might have to read 10,000+ emails downloaded from a client's inbox, code them for content, and look for segments that may support or refute a particular argument. We were talking about the software he uses and how that process compares to the work I may ultimately do in ATLAS.ti, and he mentioned that the new trend in the legal field is to move toward predictive coding software. He doesn't have it at his firm yet, but he said that it's supposed to learn some of your coding habits and conduct some of the document analysis for you based on parameters you set.
I immediately started thinking about that in terms of qualitative research, and I wonder if something like that will ever be used or accepted in our research community. I would need to know more about how it works to really form an opinion on it, but I can see potential advantages and disadvantages with it. If it really is a learning software that learns how I code and applies that knowledge to my projects, then I think it could be a huge time-saver. But it could also distance me from my data, and I would really want to scrutinize the process that it uses. It's like outsourcing -- there are some things (like housekeeping!) that I'm happy to outsource to others, but there are other things that just aren't worth outsourcing. Coding might be one of those things. I guess we'll see as the software continues to evolve.
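Out of curiosity, I sketched what the simplest version of software "learning my coding habits" might look like. To be clear, this is a toy illustration under my own assumptions (the segments and codes below are invented), not how any legal e-discovery product or CAQDAS package actually works: it just trains an off-the-shelf scikit-learn text classifier on segments I've already coded and asks it to suggest codes for new ones.
```python
# Toy sketch of "predictive coding": learn from hand-coded segments,
# then suggest codes for new, uncoded segments. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Segments I've already coded by hand, paired with the code I applied.
coded_segments = [
    "I feel more confident planning lessons after blogging about them",
    "Reading other teachers' posts gives me new classroom ideas",
    "I worry about parents finding my blog and judging my teaching",
    "Commenting anonymously lets me be honest about my struggles",
]
codes = ["self-efficacy", "professional-learning", "anonymity", "anonymity"]

# TF-IDF features plus Naive Bayes: about the simplest plausible pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(coded_segments, codes)

# The software's role would be to *suggest*, with the human still deciding.
new_segments = ["I only post under a pen name so my principal can't find me"]
for segment, code, probs in zip(new_segments,
                                model.predict(new_segments),
                                model.predict_proba(new_segments)):
    print(f"{segment!r} -> suggested code: {code} (confidence {probs.max():.2f})")
```
Even this crude version makes the tradeoff visible: the suggestions are only as good as the codes I've already applied, and I'd still want to review every single decision it makes.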
*******
I'm excited to learn more about Dedoose. I like ATLAS.ti so far, but I'm a fan of cloud computing, and I'm curious about how it might handle my mixed methods research. Two questions that came up as I was looking through the website:
1) Compatibility: The video says Dedoose can pull in data from software programs like NVivo and ATLAS.ti, but is that relationship bi-directional? Can you both import and export data with Dedoose?
2) Pricing: I know Dedoose charges a monthly fee. Do you only pay for the months in which you actually use it (e.g., sign in)? If you have a project uploaded in Dedoose that you don't touch for a month or two, do you have to pay the monthly fee because it's still housed on the platform?
Tuesday, March 24, 2015
Video Killed the Radio Star
While I don't expect that I will do a lot of video analysis -- at least in the short term -- I was impressed by all of the features that Transana has to offer. The video overview was helpful in showing some of the powers of the software. I was especially intrigued by the potential to view multiple transcripts at once from different points of view, all synchronized with the video. I think the example from the filmmaking perspective really highlighted this potential. The Dempster & Woods (2011) article was also helpful in showing how the software could be used collaboratively between researchers. As I explored the Transana website (and experienced a little sticker shock over the cost), I had a few questions about the software:
1) If you buy the standard version of the software, is it possible to upgrade to professional at a discounted rate?
2) I know from the article that multiple users can code synchronously online. Is it possible for multiple people to code the same video asynchronously and then combine the files/codes to compare? It seems like this could be a useful teaching tool if you wanted to see different interpretations of the same video from several different users.
3) How memory intensive is the software? What computer specs would be considered optimal for Transana's needs?
One thing that intrigued me from the Paulus, Lester, & Dempster (2014) chapter this week was the idea of asynchronous video coding through collaborative video annotations. That seems like such a great teaching tool for both preservice teachers and in-service teachers. I can imagine a lot of potential professional learning that could center on watching and annotating videos. The vignette described using Microsoft Movie Maker, Microsoft Paint, and a PHP script, but that seems complicated since it's potentially three different programs. Is there a cheap, easily accessible, streamlined equivalent that could do the same things? I'd love to be able to use something like that with some of my colleagues and the preservice students I mentor, but I don't foresee getting them to download ATLAS.ti or Transana anytime soon...
I may have to brainstorm a research project that uses video so I can play with more of these toys...
Tuesday, March 17, 2015
Land of Confusion
This week's readings were very interesting, and they raised several issues that I hadn't previously considered. More than ever, I feel like a lot of my ideas about these topics are in flux, and I don't know where I land.
1. Challenges of temporariness and related ethical issues
All experiences are temporary moments in time, and traditionally, researchers have worked to capture them through videos, photos, audio recordings, and field notes. But now that there are resources such as Snapchat -- an app that markets itself on the temporariness of anything that's posted -- what does that mean for researchers? The Pihlaja YouTube study described in the Page, Barton, Unger & Zappavigna (2014) text highlights this challenge. Pihlaja was attempting to study dialogue through comments on YouTube videos, and some people went back and deleted their comments. How should that data be handled? It existed, and in a more traditional setting, would a subject even be able to retract their comments or actions? In an online setting, retraction becomes a real possibility, and it feels like a gray area for researchers. Going back to Snapchat: if all posts are ephemeral by design, can a researcher even ethically use screen captures for research? If not, how could you research that platform effectively?
2. Terms of Service
I have never taken the time to read any of the Terms of Service for platforms like Facebook and YouTube, but I know that several platforms have pretty restrictive expectations. YouTube, for example, doesn't allow you to download videos from their platform, but there are many other services (e.g., KeepVid) that will allow you to download YouTube videos. Similarly, Facebook claims ownership of the content that is posted on its platform, but then other services like Texifter allow you to download Facebook content for analysis. What are the legal and ethical issues involved with those practices, and how do digital researchers handle those?
The Fuchs (2014) chapters were fantastic in helping me understand critical Marxist theory. I'm fairly familiar with Marxist theory generally, but these chapters unpacked the concepts in easy-to-understand ways while drawing in concrete examples from modern social media practices. One part that really stood out to me was the section in chapter 1 about the dialectic and contradictions. It seems that we give corporations like Facebook a lot of power when we agree to their evolving terms of service, and perhaps one way to take back some of that power is through using other tools (e.g., Texifter) that can help us better understand how social media works. Are critical theorists more flexible in their thinking about some of these platforms' rules and expectations? And if so, how does the IRB feel about that?
I'm not sure where I fall on any of these issues anymore. The more I tread into the waters of online research, the murkier my surroundings feel. I'm not deterred by that, but there's an element of unknown that's a bit intimidating--especially as an inexperienced researcher. I'll clearly need to explore these topics further.
Tuesday, March 3, 2015
Netnography, Virtual Worlds, and Tool Adoptions
Since the last class, I've been thinking a lot about my love for digital tools and whether I need to put my attitude about tool adopters vs. non-adopters in check. The best I can say is...maybe.
I didn't grow up around computers. I'm not a digital native. The first computer I owned arrived when I was a senior in high school, and I didn't get Internet service at home (dial-up) until after my freshman year of college. My first laptop came after I graduated from college. That was followed by a cell phone after I was married and a smartphone after I started my second career as a teacher. So technology hasn't always been a part of my life, but as I've experienced ways that it can make my life better and less complicated, I've embraced it. I'm pretty open-minded about playing with new tools, and if something doesn't work for me, I'm okay with abandoning it. The beauty of the digital playground is that there are always more toys.
So here's the issue for me: if tools exist to make the research process more efficient, transparent, and accessible, then why shouldn't they be widely used?
I appreciate that some people have found strategies that work for them and that tools may be difficult to learn, but I don't know if those reasons are good enough to warrant resistance. I think it comes down to a fundamental question: what is the purpose of research? If research is intended to be a primarily researcher-focused act--which it very well may be, given that the researcher decides every aspect of the study--then the researcher should just use whatever works, digital or not. But if the purpose of research is to contribute more broadly to society and to our understanding of the world or our fields of study, then I think digital tools are a necessary part of that. They allow closer and more verifiable examination of research, and they provide better data trails to help novice researchers understand research practices. In this worldview, it seems selfish to resist digital tools for the sake of convenience.
I'm not saying that researchers have to learn and use every digital tool available. There are some that will be a better fit for the research and the researcher than others (I'm looking at you, EndNote...). But I don't think general ignorance of the tools or resistance to them is acceptable among those who want to do research professionally (i.e., academics). Tools are becoming more accessible and intuitive all the time, and even if a particular tool is rejected for one reason or another, researchers should at least consider them with an open mind.
So yeah, I guess I'm still on Team CAQDAS. Pretty passionately so...
Speaking of my CAQDAS passions, I was disappointed to see that the new Netnography book (Netnography: Redefined) isn't coming out until June. I've been having regular Amazon deliveries of books introduced through this class every Friday since the beginning of the semester. It will be weird not to race home on Friday to hide another package of qualitative research books before my husband sees it... I need to get better at reading nonfiction books on the Kindle...
I'll be curious to see how much of the Netnography book is actually "redefined." There are so many fascinating issues in the chapters we read that I can see applying to my own research on teacher bloggers. My research will examine the experiences of bloggers and lurkers and see if there is any difference in how they quantify (with survey data) or account for (with interview data) their self-efficacy beliefs as teachers. I can imagine ways in which aspects of Kozinets's four A's (adaptation, anonymity, accessibility, and archiving) could be relevant. For example, maybe adaptation differentiates those who blog from those who lurk. Maybe the bloggers are better able to adapt to the different types of technology involved in blogging. Anonymity is definitely an issue; teachers are highly public figures, so they have to be careful about any digital footprints they leave. Some will only blog or comment under pseudonyms, while others are identifiable but careful about the types of information they share. Accessibility seems to be a declining concern (and maybe the new book will speak to that, since more recent Pew Internet Reports reflect these trends). Archiving also factors in, since everything is preserved on the many blogging platforms, and once something is published, it's hard to undo. I want to explore the netnography methodology further to see exactly how it will fit into my research.
Finally, I enjoyed reading chunks of Holt's World of Warcraft dissertation. I didn't have a chance to read all of it, but it was interesting to learn about his research methodology. I think it would be incredibly challenging to research an MMORPG while immersed as a player. How do you juggle the research experience with the player experience? It seems like it would be hard to set playing goals, such as reaching the raiding/end-game level, without letting that consume you or overshadow the research. But at the same time, I can't imagine any other way to study that culture. Similarly, I wondered about the possible ethical issues that could arise from having multiple identities (alts) within the game. It's a possibility that is unique to the online world, and I wonder what issues it might present and how those are handled in the research. As always, there's a lot to consider.
Thursday, February 19, 2015
Waiting on the World to Change
This morning, I woke up at an ungodly hour to grade my students' writing pieces. As I sat down to start the task, I had the thought, "Wouldn't it be nice if there were a tool that would let me just click a few buttons for their writing rubrics and send them their results?" A Google search later, I found two new tools: Doctopus and Goobric. Doctopus compiles all of my students' writing pieces from Google Docs into one spreadsheet. Then Goobric takes a rubric you've created in a Google Spreadsheet and applies it to each document. You see a split screen with the rubric on top and the student's writing below, and you just click the box of the descriptor that applies. You can even record audio feedback for your student!
When you're done, you click "Submit," and that's where the real magic happens. It will automatically paste the appropriately shaded rubric with a link to your audio comments at the bottom of your student's document, AND it will input the rubric scores on your Doctopus spreadsheet. It's amazing, and it made the quality of my feedback far better in much less time than it would normally take me to grade essays.
All because I did a quick Google search to try and solve a problem this morning.
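For the technically curious: the spreadsheet write-back is the part that feels like magic but is really just simple plumbing. Here's a rough sketch of that one step in Python using the gspread library -- the spreadsheet name, column layout, and credentials file are all hypothetical stand-ins, and the real Doctopus/Goobric tools are Google Apps Scripts that do far more than this.
```python
# Minimal sketch of rubric-score write-back to a Google Sheet.
# Hypothetical setup: a service-account credentials file and a sheet
# named "Essay Rubric Scores" with columns:
# Student | Ideas | Organization | Conventions | Total
import gspread

gc = gspread.service_account(filename="credentials.json")
scores_sheet = gc.open("Essay Rubric Scores").sheet1

def record_scores(student: str, ideas: int, organization: int,
                  conventions: int) -> None:
    """Append one student's rubric scores as a new row."""
    total = ideas + organization + conventions
    scores_sheet.append_row([student, ideas, organization, conventions, total])

record_scores("Student A", 3, 4, 3)
```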
When I then started reading the Markle, West, & Rich (2011) article, I quickly came down from my technology-empowered high and settled back into the real world. The fact that they provided the video clips along with the conversation analysis did so much to emphasize the inadequacy of CA as a stand-alone method, and yet CA and other types of transcription are standard practice. The tools exist to make the practice better, so that's not the barrier; it's the researchers and gatekeepers who are standing in the way.
Just as I was able to find tools to improve my grading process, so, too, could researchers improve how they present their data. I know those tools exist. For example, iBooks Author (Mac) allows you to write and publish text with embedded multimedia files. Magazines are moving toward a similar format for their digital editions, adding slideshows, videos, and playlists to enhance the content. Our class readings this week suggested many other tools as well. Scholars could still write traditional texts for hard-copy books and journals but make an enhanced digital version available online. In addition, if researchers were concerned about privacy issues for their research subjects, Markle, West, & Rich (2011) suggest that there are tools that could be used to edit the files to protect subjects. The pitch of a subject's voice could be raised or lowered to make it less identifiable, and video could be edited to mask a person's face. As long as the researcher was transparent to both the subjects and the research audience about using these tools to alter the data and justified it based on privacy concerns, I don't think it would be a problem. We could at least start heading in that direction.
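To make the voice-masking idea concrete, here's a minimal sketch using the librosa and soundfile Python libraries. The filenames are placeholders, the shift amount is a judgment call, and pitch shifting alone wouldn't guarantee a voice is unidentifiable -- it's just a starting point, and the alteration would still need to be disclosed in the methods.
```python
# Minimal sketch: lower the pitch of an interview recording so the
# speaker's voice is less identifiable. Filenames are placeholders.
import librosa
import soundfile as sf

# Load the original recording at its native sample rate.
audio, sample_rate = librosa.load("interview_original.wav", sr=None)

# Shift the pitch down three semitones; the exact amount is chosen by ear.
masked = librosa.effects.pitch_shift(audio, sr=sample_rate, n_steps=-3)

# Save the altered version for use in published materials.
sf.write("interview_masked.wav", masked, sample_rate)
```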
Markle, West, & Rich (2011) make two arguments that I think are home runs for the move toward multimedia enhanced writing:
1) It frees up writing space so that the research quality improves. When researchers are constrained by word count limits, it's unfortunate to have to dedicate some of those precious words to transcripts that don't even reflect the conversation as authentically as the audio file itself would. The field would improve by more thorough analysis of the interview rather than a transcribed account of it.
2) It improves the teaching of novice researchers. It takes the research process from an abstract concept to a concrete, hands-on experience. Researchers would enter the field better equipped to conduct powerful research, and the quality of research would improve as a result.
These seem like two major benefits that would outweigh any disadvantages advanced by resisters.
But people still need to want to make the shift, and I'm not sure how to convince them to do it. I face this challenge constantly when I find new teaching tools like Doctopus and Goobric that I want my colleagues to try, but they resist for reasons that don't always make sense to me. Exposing the ways that technology improves the process or product is one way to help, which is why I'm grateful that we have this class. Articles like the one by Markle, West, & Rich (2011) are helpful, too, but I always wonder if the people who need to read those articles actually do. The fact that their article was published in FQS rather than a more traditional research journal makes me wonder if they're already preaching to the choir.
So I guess I'm leaving this week's readings a little bit frustrated because the world is not changing as quickly as I'd like it to. The benefits of using these technology tools seem overwhelming and obvious to me, but I feel like I'm in the minority on that front. I think things will get better as younger people move into academia, but that's still a long time to wait.
And I'm impatient.