19 Working With People

Jennifer Clary-Lemon; Derek Mueller; and Kate Pantelides

Abstract

In this chapter from Try This, the authors encourage writers to consider the unique perspective of people as primary sources and detail strategies like interviews, surveys, case studies, and questionnaires for collecting data about people. They outline methods for collecting primary research and offer suggestions for best practices when working closely with human subjects. The chapter also includes instructions for putting together a research memo of collected information.


New to town, you notice a lot of activity at a skate park near where you live. You walk nearby a time or two, noticing the activities, which involve small groups of teenagers, some of whom talk with one another and others of whom appear far more interested in attempting skateboarding feats while friends and accomplices video record.

At a local coffee shop where you frequently go to study, you begin to notice a pattern in the ways twenty-somethings sit at tables by themselves and divide their time between paying attention to their phones and paying attention to their computer screens.

You’ve started a new job at a local restaurant where the managers, kitchen team, and front of the house staff gather for weekly meetings. By the fourth meeting, you notice the same people talk, some of them saying the same things almost verbatim each week.

In each of these scenarios, you begin to wonder why and how people do what they do in these contexts. Questions begin to form. In this chapter, you will learn more about how researchers work with people and how they might approach such contexts.

Just as working with archives requires that we build careful stories of those who lived in the past, choosing to do research by working with people in the present requires a great degree of care. In Chapter 2, we suggested that ethical research with people begins with following your university’s practices for working with human subjects. In this chapter, we discuss different research methods that can be helpful once you’ve determined that your research question is best answered through writing with, talking to, or observing people. As we discussed in Chapter 3, there’s a lot of information already out there in secondary forms of research—literature that has already been read and reviewed, surveys that have already been conducted, sources that have already included ethnographic research in their design so that you don’t have to. Ethnography (from the Greek ethno-, meaning “people,” and –grapho, meaning “to write”) is a common research methodology in the humanities and social sciences, a way of thinking and doing that brings together many kinds of methods and data. It uses a variety of research practices that work with people in order to come to some kind of conclusion about a societal or cultural phenomenon. In order to study societies, of course, you have to work with people, which is why ethnographers use a variety of methods in their research that we cover here, like interviews and surveys, as well as some of the methods that we’ve talked about in earlier chapters, like coding schemes.

While you may or may not be ready to become an ethnographer, it helps to think about your research question a bit in order to determine if it might be best answered by working with people rather than in some other way.* When we conduct research about writing in particular, our first impulse may be to talk to those who are already engaged in the practice we are interested in: those who write! However, it’s important to remember before we decide to work with people that many researchers who study writing have already produced a lot of knowledge on that subject by working with human subjects, whether by using focus groups to figure out if what students learn in university writing classes transfers to other classes (Bergmann and Zepernick), interviewing students to see if there is a link between reading and identity (Glenn and Ginsberg), or surveying students to see how they really feel about buying a plagiarized essay online (Ritter). Lots of excellent people-based research has already been done about a variety of research topics. It’s important to do some preliminary reading (this is where your worknets come in!) to figure out if you should go through the careful process of working with people or if your research question can be answered by another means. It’s also important to know when the benefits of working with people outweigh any potential drawbacks. Some questions you can ask yourself as you decide if you want to work with people in research that might span a semester are:

Should I work with people?* Likely YES if:

  • I want to replicate a prior study with people on a smaller scale to see if it is still true;
  • I want to build on prior studies by working with people;
  • I have insider insight into a particular group;
  • I want to help preserve someone’s story or memory;
  • there is information about people’s behaviors, feelings, sensations, knowledge, background, or values about my topic that I don’t know and cannot find out any other way;
  • my ethics review and research can be completed in the time I have allotted for this work;
  • I want to gather pilot information on a topic rather than generate definite conclusions; or
  • working with people might help prove or disprove a theory.

Should I work with people? Likely NO if:

  • the research question has already been answered by many other studies and does not need replication;
  • I already know what I think people will answer;
  • I don’t know anyone from the population of people who would be knowledgeable about my research question;
  • I won’t have the time to transcribe or code a lot of data;
  • I have definite opinions about how people should behave or respond while I work with them;
  • my work will be with vulnerable people—for example, under the age of 18—or about sensitive content;
  • my work will put people in physical or emotional discomfort; or
  • I have some kind of power over the people I might work with.
*The decision about whether or not to work with people should be made with care. If possible, ask other researching writers why they decided to work with people (or not).

   Once you’ve decided that you want to work with people in order to gather data to try to answer your research question, it’s important to think about the kind of method you want to use. We’ll be talking about surveys, interviews, and case-study approaches to research design in this chapter, and each method has its own distinct advantages and disadvantages (often related to how much time a researcher has to work with large amounts of data). We like to think of these as differences in the proximity—closeness—of a researcher to her research question and how it might be best answered. A survey is an eagle-eye, overhead view of a group of people that gathers big-picture and multilayered information, often about a breadth of knowledge, behaviors, and opinions. Interviews allow for a much closer, intimate, in-depth view of one or more of those same things. A case-study approach might balance between near and far, using some up-close interview data or site-based observations to support parts of an argument, and using the benefit of the breadth of survey data to support other parts. As you begin to think about which method is right for you, start thinking about whether your question implies a research strategy that would be better as a snapshot from above (How stressed out does writing a paper make university students?) or as an in-depth look into particular processes (How stressed out did writing a paper make a particular student over a particular period of time?).

Try This Together: People-Focused Research (20 minutes)

Working with a partner, generate a list of three to five research focuses where people seem important to some activity, but you aren’t aware of any studies related to this group, or you think the people may be difficult to gain access to. Why do you think this group hasn’t been studied before? What are some of the reasons access may be challenging? What ideas do you have for ways to gain access to this person or group?

Surveys

One of the ways we collect data from groups of people too large to interview—depending on your time frame for data collection, this might be 20 people or it might be in the thousands—is a survey. A survey is a series of carefully designed questions, sometimes called a questionnaire. In the context of a research project, surveys are put together with the intention of gathering information that will answer a bigger research question. Whether working with smaller or larger populations of people, surveys can help you determine both countable, or quantitative, information about your respondents (how many people answered yes or no on a question, for example) and descriptive information, or qualitative data, about their opinions, habits, and beliefs—what we might call variables.* In the following examples, we discuss how a researcher might go about research design and considerations when working with small and large groups as well as with one or more variables. However, when it comes to survey question design and survey implementation (getting your surveys out to intended respondents), there are resources that you can access that will help no matter how large or small a population you study.

*The word “variables” is also used to describe quantitative data. Much like qualitative variables, variables in those cases are items that you can measure, such as time, height, density, distance, strength, and weight. Such variables are usually those that come with measurement markers—pounds, inches, centimeters, microns, moles.

     Example 1: You get your most recent paper back from your instructor, and on it you’ve received a B+. All in all, you’re pretty happy, since you’ve always gotten Bs on high school writing assignments. You get into a conversation after class with someone next to you who is very upset that he got a B+ on his paper. “I’ve only ever gotten As on high school English papers,” he says. Because of this conversation, you’ve become curious about how being graded on writing in high school affects people’s perception of themselves as “good” writers by the time they are in college or university. A well-designed survey might look at a small relevant population of people (say, a classroom’s worth. Your classroom’s worth!) that would help determine both the answer to that research question and even the future pathway of a research project—perhaps after surveying 25 students, you are so interested in some answers that you’d like to follow up more closely by interviewing four or five of them. A research project of this size benefits from convenience sampling*—finding survey participants based on whom you already know.

*What are some ethical implications of convenience sampling?

     Once you know who you are going to survey, you might think about the kinds of information that would be helpful to know about the two variables you’re interested in: people’s feelings about themselves as writers and their feelings about grades. You might survey respondents with open-ended questions, which allow students to write (or say) their responses in short statements or sentences, or with closed-ended questions, in which students would choose among a finite set of answer choices (like “yes” or “no”). Open-ended questions better allow you to report descriptive data, while closed-ended questions allow you to get a quick snapshot of a large number of responses. Question design depends on the kind of information you need: if you need to determine what you mean by a “good” writer, you’ll need to be able to define it—or determine if that’s something you’ll want your survey respondents to define for you. You may want to know about what kinds of grades or comments students received on high school papers and what kinds of grades or comments they’ve received on college or university papers. These kinds of information are well-suited to open-ended questions. However, you might also want to know how happy students are with particular grades. In order to get that information, it might be best to ask students closed-ended questions, assessing people’s feelings about writing on an ordinal scale—an ordered set of numbers that correspond to a variable, like how happy or unhappy a student is with a particular grade on a paper. The people you’re surveying should be able to distinguish between the kinds of modifiers you use to describe that variable.

     For example:

     I just got a B back on my last paper. On a scale of 1-5, I am

  1. Extremely happy
  2. Very happy
  3. Somewhat happy
  4. Not so happy
  5. Not at all happy

Most people can figure out that in the order of things, “extremely” is higher than “very,” and “not at all” is lower than “not so.” The easy part about this kind of survey is that you can distribute and collect the survey in class. After you collect your survey data, you can begin to put together a picture of how the small sample group you’re working with feels about the relationship between high school and college or university paper grades and how the group members feel about their writing performance. However, it would be important to compare what you find out with other studies that have been done about your topic so that you can synthesize as much available data as possible before drawing conclusions.
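
If you record each closed-ended answer as its number on the scale, tallying the results can be done in a spreadsheet or with a few lines of code. The short Python sketch below is only a hypothetical illustration of that kind of tally; the response numbers in it are invented for the example, not real survey data.

    from collections import Counter

    # Hypothetical answers to the question "I just got a B back on my last paper.
    # On a scale of 1-5, I am ...", where 1 = Extremely happy and 5 = Not at all happy.
    # These numbers are invented for illustration only.
    responses = [2, 3, 3, 1, 4, 2, 5, 3, 2, 2, 4, 3, 1, 2, 3, 3, 5, 2, 4, 3]

    labels = {
        1: "Extremely happy",
        2: "Very happy",
        3: "Somewhat happy",
        4: "Not so happy",
        5: "Not at all happy",
    }

    # Count how many respondents chose each point on the scale and print a summary.
    counts = Counter(responses)
    for value, label in labels.items():
        print(f"{label}: {counts.get(value, 0)} of {len(responses)} responses")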

     Example 2: Let’s say you’ve been thinking a lot about a conversation you’ve had with your father recently. In it, he talked a lot about unpredictable weather and how it’s been affecting his gardens. When you brought up the idea of global warming, he got a bit flustered and insisted that it was just a matter of weather variability. Since then, you’ve been thinking a lot about whether the kind of words people use to discuss climate change impacts whether or not they believe in it as a proven scientific phenomenon. After doing a bit of reading, you come across an article that talks about the kinds of questions climate-change surveys ask their respondents—Tariq Abdel-Monem and colleagues’ “Climate Change Survey Measures: Exploring Perceived Bias and Question Interpretation.” At the end of that article, you notice the authors mentioned that often survey respondents did not have a clear consensus about the definitions of the terms used to describe climate change. The authors call for more research on that issue in particular, which fits well with the thoughts you’d been having about the conversation with your father.

     You decide to design a survey to help clarify how people interpret climate-related terms, like “weather variability,” “climate change,” “global warming,” “greenhouse effect,” and “arctic shrinkage.” Because you’re interested in how lots of people define these terms, you’re not limiting your sample to the convenience of people immediately near you; instead, you’re building a more random sample that begins with people you know but snowballs, or grows bigger, from there: you might make a list of all possible people you could send a survey to, such as people in all of your classes, your instructors, your friends, your parents and grandparents and their friends, clubs you and your family belong to, members of a church, organization, or extracurricular activity. This list might make you decide that you are only interested in a certain demographic (or particular slice of the population, such as those between the ages of 18 and 25), in which case you might narrow your list to one or two groups and make sure that you have the people you survey identify their age groups in a survey question. If you just want large numbers of responses and are only mildly interested in demographic data, you might design a survey that can be distributed online and circulated widely—posted on social media, for example, or to online classroom message boards. Perhaps you would aim, in this case, to survey 100 people about their interpretation of climate-related terms.

     In this example, you’ll want to think about the best way to answer a specific research question about how people interpret climate-related terminology. Because there has been a lot of survey research already done in this area, your best place to start designing your survey is to look at surveys that have been conducted before—which brings us to some good advice about survey design, no matter the research question!

Try This: Writing Survey Questions (30 minutes)

Write two survey questions each for Examples 1 and 2. What underlying concept or variable are your survey questions trying to explore? How do those variables relate to the research question in each example? How do your survey questions for Example 1 (writing and grades) and Example 2 (climate change) differ according to what you’re trying to find out?

Designing Good Questionnaires

Unlike interview questions, which are often tied to a research design so specific that they usually have to be uniquely crafted, survey questions are often more general. Yet, like interview questions, survey questions should be tested before they are launched in a questionnaire and you accidentally receive information you don’t want! The good news is that you have access to a range of national and international surveys (and their questions) that have already been pre-tested for you: Roper iPoll through the Roper Center for Public Opinion Research (ropercenter.cornell.edu/ipoll/), the Pew Research Center (www.pewresearch.org/), Gallup (www.gallup.com/home.aspx), the Inter-University Consortium for Political and Social Research (ICPSR) (www.icpsr.umich.edu/web/pages/ICPSR/index.html), and Ipsos (www.ipsos.com/en) all store large repositories of surveys—both their analyses and the questions themselves. You can search them by keyword and find surveys that have been done on topics similar to the one you’re planning.

     Once you have a few models of survey questions, you can change them to suit your needs. There are a few best practices to keep in mind when designing your own questionnaire:

  • Don’t forget instructions! Be sure to tell people briefly what they can expect (how many questions, how to fill out the survey, and how long it will take to complete).
  • Questions should be clear and free of jargon: don’t put in any specialized vocabulary that would be difficult for a respondent to understand.
  • If you have to use technical terms, define them for your respondents.
  • Each question should measure only one thing at a time—avoid questions that ask people to respond to multiple items in one question.
  • If you are putting answers on a scale, respondents should have between five and seven points from which to choose.
  • Be as specific as you can with your questions, whether they are open- or closed-ended.
  • Questions should be short. In fact, your questionnaire should be short! When questions and surveys are too long, people lose interest and do not complete them.
  • With closed-ended questions, people often choose the first option they read (if reading a survey) and the last option they hear (if a survey is read aloud). Vary the order of your answers to avoid this, if you can.
  • Try to avoid loaded (or unloaded!) language that might persuade your respondents to answer a certain way: there is a perceived difference between, for example, the words “climate change” and “global warming.” Be sure you use the terminology you mean, and be ready to explain your choices in your analysis.

Designing and Distributing Surveys

Surveys can be physically designed and distributed in a number of ways: on paper through the mail, in person, on the phone, or online through email or a distributed link. It’s important to note that if you deliver a survey in person (on the phone or distributing a paper survey), you should have an introductory script that gives a framework and instructions for your research.

     If you are designing and/or distributing a survey online, you can use websites that offer free survey software with some basic functionality—surveys of ten questions or fewer, say, or surveys that max out at a total number of respondents.* These are excellent and professional sites to use to begin your survey research, and the surveys you produce with them can be circulated and embedded into emails to specific people or circulated as a link that can be forwarded beyond its first recipients. If you require more functionality, you might check with your college or university’s research office, some of which give access to institutional survey software to students upon request. This will enable you to design farther-reaching surveys that often have extra bells and whistles to their design and functionality, like graphic sliding scales, heat maps, and the ability to drag-and-drop text into categories.

*Test your survey by sharing a draft with a friend, roommate, or classmate and listening to their feedback. Sometimes called usability testing, or user-testing, this, too, is an approach to research commonly practiced by professional and technical writers.

     Once your survey is ready for distribution, it’s important to know that a careful research process helps produce a higher response rate (the proportion of people who actually complete the survey you send out). The larger your sample size or the less you know your targeted audience (such as in the climate change example), the lower your response rate is likely to be. In a large survey, a good response rate is about 30 percent. So, if you really wanted to survey 100 people, you would want to send your survey out to at least 300 people (at a 30 percent rate, more like 330 or so) to try to reach that number. However, a high response rate for a small survey, such as our first example of a 25-student classroom, is about 80 percent—the smaller, more personal, and more targeted an audience, the higher the response rate.
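
The arithmetic behind that estimate is simple enough to check with a calculator or a few lines of code. The short Python sketch below is only an illustration of the calculation; the function name and the example numbers are ours, not fixed rules.

    import math

    def invitations_needed(target_responses, expected_rate):
        # Estimate how many people to invite, given how many completed surveys
        # you hope for and the response rate you expect (for example, 0.30).
        return math.ceil(target_responses / expected_rate)

    # At a 30 percent response rate, reaching 100 completed surveys takes about 334 invitations.
    print(invitations_needed(100, 0.30))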

Now, let’s say you successfully surveyed 25 people in your classroom, but after looking at your survey results, you decide you want more information from just a few of those people. An interview might be an excellent method to achieve that purpose.

Try This: Revising Survey Questions (15 minutes)

Working with the questions for Examples 1 and 2 that you generated in the previous “Try This,” revise your questions by following the suggestions in at least one of the best practices for writing questionnaires.

Interviews

Interviews give a researcher a real-time environment that allows for things surveys don’t, like being able to ask follow-up questions or asking someone to clarify an answer. Yet interviews also generate a lot of data because conversations need to be recorded and usually transcribed or written down (and it takes about three hours to transcribe every one hour of talk). A benefit to interviews is that there are different types, depending on your research question. You might sit down with a small group of people, called a focus group, and ask one question to see how people respond and negotiate their answers in groups, since usually one person’s response provokes agreement, disagreement, or room for follow-up. A focus group might enable you to get a general sense of consensus or understand divergent attitudes about a particular variable. You might develop questions for 1-on-1 research interviews, in which you sit down with one person at a time and ask them a series of carefully designed questions that help you answer your research question (you might repeat the same set of questions with each interview for consistency, in this case). If your purposes extend beyond only answering a research question and you are trying to preserve a sound recording of stories or memories for future generations to listen to, then you would conduct an oral history interview with either one person or a group of people, in which you would design an interview script with topics about a particular area of interest and a long list of questions that you may or may not ask, depending on your participant’s memory and willingness to talk. Unlike a research interview, an oral history interview does not seek to replicate the same questions for each interviewee but instead trusts the process of proceeding through topics and questions that result in the best outcome: an oral history of a person, place, or group.

Asking Questions “From the Side”

Some of the same advice about survey questions applies to interview questions: They should be clear, specific, short, and free of specialized vocabulary your interviewees might not know. They shouldn’t try to double up a few questions in just one breath or be written in a way that offends a listener or presumes something about them. However, unlike survey questions, interviews don’t really benefit from closed-ended (yes or no) questions; usually you are more interested in why a participant answered yes or no.

Good interviews come from really good questions that are related to your research question, but research questions often are not what you would actually ask someone in an interview. In other words, there is a difference between your research question and an interview question. The best way to ask about your research question is actually by asking an interview question from the side rather than head on. For example, a research question about a topic you want to learn about—let’s say, plagiarism—is not best answered by the most direct question. Asking, “Have you ever copied a paper from someone?” likely would result in some discomfort on both sides of the interviewing table. Instead, designing questions from the side might be a better way to get at what you’re hoping to find out. In the case of curiosity about plagiarism, you might ask about someone’s knowledge of online paper mills, ask about whether or not they have ever had trouble with their works cited page, ask about their opinion of plagiarism detection software, or ask if they know about campus resources that help students revise their work. All of these topics are about plagiarism without developing an accusatory tone about serious academic misconduct, and they would probably help you establish a more interesting angle for your own research question once you’ve spoken to a few people.

Try This: Designing Interview Questions from the Side (30 minutes)

In order to design an interview question from the side, you’ll need to know your research question. (Note that Chapter 1 introduces research questions and the ways they expand and shift throughout a research process.) Once you have that, you’ll need to figure out what exactly it is you’re hoping to learn to be able to answer that research question. Then, you’ll need to determine who you might ask to get at what you want to learn. Finally, you’ll generate a list of interview questions that would help you get at what you’re trying to learn—from the side! Here’s an example of how this process works:

  • Research Question: What matters more in the workplace: “hard skills” (technical skills) or “soft skills” (communication skills)?
  • What I’m hoping to learn: Do employers value technical skills more than communication skills or vice-versa? Are college or university graduates being given the tools they need in technical and communication skills to get a job when they graduate? Which kind of skill is the more difficult to learn?
  • Who might have this information: Employers/employees at any company, recently employed college or university graduates, instructors in both technical and communication-based fields are all likely to have insights.
  • Interview questions from the side that will help me learn what I want to know:
    • For an employer: What is the most important skill an employee can have?
    • For a student: What is the hardest assignment you ever had to complete? What made it so hard?
    • For anyone: Think about a recent problem that came up in your workplace. What do you think caused it?

Based on this example, come up with some interview questions from the side for your own research question.

Interviewing Equipment and Best Practices

Unlike surveys, to really be useful, interviews need to be audio- or video-recorded and then transcribed so you can understand what was said in order to interpret your data. This means that interviews require putting in some effort to be successful: finding a comfortable and quiet place to meet (so that voices are easily heard over any ambient noise), using a good quality audio and/or video recorder, and finding the time it takes to listen and transcribe the voices you hear (including your own). As you decide which kind of interview to conduct, you’ll want to consider that transcribing a 1-on-1 research or oral history interview is much easier than transcribing a group interview, where people talk over and interrupt one another. Similarly, in a video recording, it is easier to set up a camera that captures two people in a frame than a whole group, which may require another person to operate a camera. While we don’t expect you to bring a camera crew to a group interview, it’s important to know the benefits and constraints of working with certain kinds of equipment.

Try This Together: Interview Question Sketch (30 minutes)

In a small group, choose one of the following topics:

    • Fake news
    • Seasonal affective disorder
    • Photo retouching
    • Learning a second language
    • Genetically modified foods

Next, complete the following steps:

    1. Develop a research question about your chosen topic.
    2. Decide what you would hope to learn from interviews.
    3. Consider who might have the information you need.
    4. Write three interview questions you might ask.

     When it comes time to conduct the actual interview, you’ll want to talk to your interviewees before you begin recording. About a week or so before, it is often good practice to share the interview questions or interview script (in the case of an oral history, which might be more loosely configured) with your participant(s) so they know what you plan on asking and thus can be prepared with thoughtful answers. This isn’t always possible, especially if you don’t have a way of contacting your interviewees beforehand. The day of your interview, you should make sure that your participants know the details of where you’ll be meeting and at what time. Right before your interview, you should discuss with your interviewee the ethics protocol of the interview in order to get their informed consent, as we discussed in Chapter 2. If you’re undertaking an oral history interview, you will also want to discuss a deed of gift* with your interviewee, in which they agree to release their story both to you and to a larger public repository of other stories like theirs. This is unlike a research interview, in which it is likely that only you will ever listen to the recording or read a transcript. After any kind of interview, you’ll want to follow up with your interviewees with a brief note of thanks that reminds them of what will happen with their data as well as how they might reach you if they have questions about the interview process.

*A Deed of Gift is a special document, separate from other consent forms. In a Deed of Gift, the oral history participant “gifts” the interviewer (or institution, or library, or archive) their story so that other people may listen to it or use it for research purposes.

Try This Together: Research Interview vs. Oral History Interview (45 minutes)

Because a research interview is very different in its purpose (to help answer a research question) from an oral history interview (which records and preserves stories and memories and sometimes helps to answer a research question), it’s important that interview questions are designed with the appropriate purpose in mind depending on the type of interview you’re conducting. Because they emphasize storytelling, ways of seeing the self or the community, and memories of historical events, oral history interviews often need fewer specific questions and more prompts than research interviews.

First, choose one of the following topics:

  • Changes in telephone technology since its patent in 1876
  • The best cake you’ve ever eaten
  • The “millennium” or “Y2K” bug
  • The development of a local community center in your region
  • The increase in diabetes since 1980
  • The price of gasoline over the last 100 years

Then, generate with a partner four different interview questions (one of which needs to be a follow-up question) for both a research interview and an oral history interview. When you’re finished, discuss the differences between the two sets of questions and what accounts for these differences.

When you’re interviewing, it’s important to keep track of your main research question, as responses may stray from what you expect and you might get caught up in what your interviewee is saying. It’s important to be prepared with follow-up interview questions that piggyback on a prior question. Similarly, you might also want to be prepared to ask “Why?” or “Tell me more about that,” after an answer you receive (especially if you get an answer that is shorter than you expect). Sometimes the best questions simply ask for clarification (“Could you tell me what you mean by that?” or “Could you give me an example of what you mean?”) or are constructed on the fly (“Can we go back to that example you talked about earlier?” or “How did you feel about that?”). Oral history interviews benefit from mocking up an outline of topics, generating a list of many possible questions in each section of that outline, and then letting the interview emerge organically from whatever series of questions is appropriate.

Finally, it is important to take into account that, as the interviewer, you develop and ask the questions. This places you in a position of power (even if you don’t feel particularly powerful, such as if you are a student interviewing an instructor). When you interview someone, you enter into a relationship with them for a brief time, and it is important that everyone feels as comfortable as possible.

Putting It All Together: Case Studies

A case study is a kind of qualitative research method that combines data collected from a variety of other methods that we have already talked about—like surveys, interviews, and different kinds of documents and artifacts. A case-study approach to answering a research question is best suited when the phenomenon you’re studying is particular, or distinct, in relation to a larger society, culture, or environment. You might want to look at a case to understand more than any one method, like an interview with a single person, could tell you on its own. Looking at cases is particularly helpful when researchers are trying to gain some insight about the nature of a particular environment in more detail; however, it’s important to note that the limitation of a case study is that one single, detailed instance of a phenomenon cannot be used to generalize to all instances of that kind of activity everywhere. Case studies offer us a snapshot of an individual unit, a glimpse as comprehensive as we can get, that helps us understand or know systems of the world—and its people—a bit better.

     To undertake a case study, you will need to gather one or more kinds of data that we have already discussed and then analyze or code it to find categories or patterns. Once you have those preliminary analyses or codes, you might compare what you’ve found with other, similar cases. Finally, you’ll work to interpret your research notes to come to some conclusions about how the case you’ve chosen offers up an understanding of your research question.

     For example, let’s say commencement is right around the corner and you are interested in the rules and regulations that govern graduation—what people can (and cannot) wear, what freedom they have to decorate their mortarboard hats or wear culturally significant accessories, how honorary degrees get conferred and taken away—and what graduation signifies in terms of a major life event for college or university students. In other words, you are seeking an answer to a broad research question, “What does commencement mean to a college or university community?” Because most colleges and universities engage in this activity, choosing to look at one—at the college or university you attend—would offer a case-study glimpse at the nature of commencement. Your examination of commencement at your institution would give an audience some ways to understand how graduation is significant to college and university communities.

     You might begin this case study with a worknet, reviewing the literature about the history of commencements, recent newsworthy pieces about dress codes or cultural items that have made it into the popular press, and local updates from your college or university about the who, what, where, and how of commencement planning. Once you’ve done some reading, it’s time to plan your case study: just what kind of data should you collect, from whom, and why?

     We realize that even planning out a case study as a brief exercise might seem overwhelming, especially if you have to use one or more research methods to get there. That’s why it’s important at every step in your research process—whether gathering a preliminary round of survey results, reflecting after an interview or site-based observation, or handling a new artifact—to document what you notice, document what sticks out during the experience you’ve just had, document how it connects to other data-collection or data-handling experiences, and document what significant patterns emerge as your research experiences add up. Researchers call this documentation a research memo*, and it will help you move from data collection to data interpretation—in other words, a research memo will help you begin to make sense of all the information you are gathering in a way that is not as overwhelming as looking at data from 50 surveys, 5 interviews, and 3 site visits all at once. 

*Research memos are also remarkably important for showing the work and communicating in-progress analysis when multiple researchers collaborate.

Try This: Case Study Planning (30 minutes)

Using the commencement example above, develop design considerations for a case study by answering the following questions:
  • What kinds of data will you collect?
  • What are the best methods to use to collect your data?
  • Who should you talk to?
  • What other cases can you compare this case to?
  • What are you going to look for in your data? What are your variables?

 

Focus on Delivery: Writing a Research Memo

A research memo is an in-between phase of writing: it’s not the same as the data you collect or code, but it also isn’t a final research paper. Instead, it’s an analytic memo that a researcher writes after each of their major data-collection episodes to help them make sense of what they just experienced. It helps a researcher look back on the small pieces of what they’ve done to understand emergent patterns for analysis of their research question. Because all of the small parts of a case-study—field notes, transcriptions, documents, coding sheets—can add up, taking time out to review and reflect is necessary.

     Unlike the observant, real-time detail that is required of field notes, research memos are instead a place for analysis, which means they are a place for freewriting, thinking on paper, noting patterns and anomalies by comparing one kind of data with another, assessing your progress or noting problems with your research, planning for a future stage, and noting your feelings about your research. You might think of a research memo as a working paper about the major data points of your case study—this may mean one interview or a series of interviews, one site visit or multiple visits, one coding sheet or ten coding sheets. Regardless, it’s important to keep up with your research memos, as they will simplify the process of interpreting multiple kinds of data.

     As you write your research memo, it is best if you have with you the data you’ve already collected (the interview transcript, field notes, coding sheet, document, or artifact).

     In your research memo, you should

  • include relevant dates and data types (e.g., “June 14 research memo on interview with Sonja Notte, May 31”) and bibliographic information if a textual source;
  • include relevant quotations (for interviews or surveys), quantities (for surveys), observations (for fieldwork), words and phrases (for coded documents), or descriptions (for material artifacts) that stick out to you from your data collection;
  • record why you think these chosen details are important, relevant, or stick out;
  • reflect on how the data contributes to clarifying your research question or helps to define or refine the scope of your research question (this can help you revise your research proposal); and
  • comment on what you think of the data: What questions do you have? What patterns or trends are emerging when you consider this data in light of others you’ve collected? What connections can you make across data sets? What confuses you?

Works Cited

Abdel-Monem, Tariq, et al. “Climate Change Survey Measures: Exploring Perceived Bias and Question Interpretation.” Great Plains Research, vol. 24, no. 2, 2014, pp. 153-68. Project Muse, doi.org/10.1353/gpr.2014.0035.

Bergmann, Linda S., and Janet S. Zepernick. “Disciplinarity and Transfer: Students’ Perceptions of Learning to Write.” WPA: Writing Program Administration, vol. 31, no. 1-2, 2007, pp. 124-49, associationdatabase.co/archives/31n1-2/31n1-2bergmann-zepernick.pdf.

Glenn, Wendy J., and Ricki Ginsberg. “Resisting Readers’ Identity (Re)Construction across English and Young Adult Literature Course Contexts.” Research in the Teaching of English, vol. 51, no. 1, 2016, pp. 84-105. National Council of Teachers of English, library.ncte.org/journals/rte/issues/v51-1/28686.

Ritter, Kelly. “The Economics of Authorship: Online Paper Mills, Student Writers, and First-Year Composition.” College Composition and Communication, vol. 56, no. 4, 2005, pp. 601-31.

 

Keywords

interviews, surveys, questionnaires, case-study, research memo

 

Author Bios

Jennifer Clary-Lemon is Associate Professor of English at the University of Waterloo. She is the author of Planting the Anthropocene: Rhetorics of Natureculture, Cross-Border Networks in Writing Studies (with Mueller, Williams, and Phelps), and co-editor of Decolonial Conversations in Posthuman and New Material Rhetorics (with Grant) and Relations, Locations, Positions: Composition Theory for Writing Teachers (with Vandenberg and Hum). Her research interests include rhetorics of the environment, theories of affect, writing and location, material rhetorics, critical discourse studies, and research methodologies. Her work has been published in Rhetoric Review, Discourse and Society, The American Review of Canadian Studies, Composition Forum, Oral History Forum d’histoire orale, enculturation, and College Composition and Communication.

Derek N. Mueller is Professor of Rhetoric and Writing and Director of the University Writing Program at Virginia Tech. His teaching and research attend to the interplay among writing, rhetorics, and technologies. Mueller regularly teaches courses in visual rhetorics, writing pedagogy, first-year writing, and digital media. He continues to be motivated professionally and intellectually by questions concerning digital writing platforms, networked writing practices, theories of composing, and discipliniographies or field narratives related to writing studies/rhetoric and composition. Along with Andrea Williams, Louise Wetherbee Phelps, and Jen Clary-Lemon, he is co-author of Cross-Border Networks in Writing Studies (Inkshed/Parlor, 2017). His 2018 monograph, Network Sense: Methods for Visualizing a Discipline (in the WAC Clearinghouse #writing series), argues for thin and distant approaches to discerning disciplinary patterns. His other work has been published in College Composition and Communication, Kairos, Enculturation, Present Tense, Computers and Composition, Composition Forum, and JAC.

Kate Lisbeth Pantelides is Associate Professor of English and Director of General Education English at Middle Tennessee State University. Kate’s research examines workplace documents to better understand how to improve written and professional processes, particularly as they relate to equity and inclusion. In the context of teaching, Kate applies this approach to iterative methods of teaching writing to students and teachers, which informs her recent co-authored project, A Theory of Public Higher Education (with Blum, Fernandez, Imad, Korstange, and Laird). Her work has been recognized in The Best of Independent Rhetoric and Composition Journals and circulates in venues such as College Composition and Communication, Composition Studies, Computers and Composition, Inside Higher Ed, Journal of Technical and Professional Writing, and Review of Communication.

 


License


The Muse: Misunderstandings and Their Remedies Copyright © by Jennifer Clary-Lemon; Derek Mueller; and Kate Pantelides is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
