Excitement then Peace

Discuss some problems within your individual team members’ work settings that could be addressed in a research project. Analyze various methods of data collection that might be appropriate. Discuss issues related to validity and reliability. Write a 300 to 400 word summary paper stating the problem and explaining your data collection plan.

The problem is that the children in my kindergarten class don’t settle down quickly enough after music plays during station rotations. The music is played to refresh their concentration between stations, but the increased difficulty in settling in afterward is a daunting cost. I want to try various methods of transitioning out of music and into stations to reduce the lost settling-in time. Here are some things I want to try:

  1. Discuss the problem with the students and ask them to settle in faster or risk losing the music between stations.
  2. Deep breathing at the end of the music, then move to stations.
  3. Move to stations after music, then do deep breathing to begin the station.

There is no simple way in a unified class to create different test groups and/or a control group, so my only choice is to run the experiments sequentially, using the summative assessment from each experiment as the pre-test for the next to track success. In terms of tracking success, I see no easy way to quantify the results under real classroom pressures. My solution is for all the adults in the class to journal the experiment and do a qualitative assessment at the end. This wouldn’t be publishable in a journal but, with any luck, it will give me the information to decide on a solution to my problem.

Other notes: I would run the experiments in numerical order because my hypothesis is that this is their order of effectiveness, which would make the differences easier to track. Also, a fourth experiment would be cutting music out completely. I am disinclined to do that because I believe the music is a positive contribution to the children. However, should I wish to test that hypothesis, I would need a whole different Action Research study, comparing learning with and without music during rotations.

Arguments, Too

I spent a good part of today talking to a Ph.D. with whom I am involved on a research project. She said something I thought might be useful here: “If they want to attack you, they will assail your methodology.” I don’t know if this is better or worse, but what I take that to mean is that there’s a second level of “argument” which skips the content and works only in the minutiae, like a lawyer trying to get a case thrown out because of failed Miranda warnings or other procedural grounds. I suppose it is also a sobering reminder to construct the least assailable studies we can.


What elements make a statement arguable? Why is this relevant to action research?

Hmmm. Well, to begin with, our book says, “postmodernists argue that truth is relative, conditional, and situational, and that knowledge is always an outgrowth of prior experience.” So, if you’re a postmodernist (or arguing with one), everything is arguable. To some extent, this is universally true. There is nothing in science beyond questioning, beyond argument – at least in principle.

The only things that aren’t arguable are factual statements. I can tell you how many people clicked which answer in my survey. That number in response to that question is not arguable. That is why the famous “hanging chads” from the 2000 Presidential election were so significant: people’s intentions in voting suddenly became relevant. The convention is that a vote is a vote. Nobody says, “Well, they voted that way, but they didn’t mean to…”

This is also why “observable” and “valid” become so very important in research. Researchers desperately need an unassailable factual basis from which to build their argument. The data need to be observable and measurable to be “facts.” They need to be valid (to measure what they purport to measure) to be relevant.

One side note: in qualitative research, the “facts” are the varying stories of experience. The constraint of “fact,” of what is “true,” is relaxed to include conflicting data from which a subjective pattern is woven. This is not dissimilar to my proceeding on the results of my survey, knowing the “science” is weak and the arguability is high, but also believing valid information exists in the data that was collected.

I suppose it should also be said that facts and arguability become particularly important when group action is needed. Individuals, like classroom teachers, can to a large extent adapt their behavior based on an intuitive belief in truth, a lower standard of proof. Groups tend to want more safety, and the safety comes in knowing the “facts” and acting based on inarguable knowledge. This has various good and bad implications beyond the scope of this response. So the relevance to Action Research is less than to academic research, but the relevance remains: a lack of arguability is critical when persuading other people to believe you is important.

Representative Surveys

Survey respondents self-select along lines of motivation or interest. That is to say, unless there’s a mechanism to oblige participation or to incent participation independent of the topic (e.g. being paid), the most likely respondents are not representative. Many of them come from a subset who are motivated enough on that subject to take the time to reply.

I am used to surveys where the implication is that to make your voice heard, you need to fill out the survey. This could be a PTA or school survey or a neighborhood development survey. In all these surveys, the dynamic is “Respond if you want your voice heard. If you don’t, tough for you.” This is different from a scientific survey, where you want to hear from a representative sample of your target population. I see now that this is MUCH harder to achieve.

As an example, my survey on “movement in the classroom” was sent to a subset of the population: my friends on Facebook. They are far from randomly chosen. They have been bombarded with my posts about education and mostly move in circles of experience similar to mine. Then there’s the self-selection of the ~10% who chose to respond. While the cumulative response “feels” reasonable to me, in truth, I have no science to support my conclusion. I also have no idea whatsoever whether these responses are generalizable, nor do they help even my intuition in this regard.

Survey Says….

I got 17 responses to my survey. This is hardly sufficient for a paper but still interesting. My target audience was my Facebook friends, people known to me, usually with school age children.

I surveyed “movement in the classroom” and got three really clear bits of feedback.

1) “Education” is the dominant expectation of schools (versus athletics, socialization, self-actualization or creativity).
2) The respondents had a traditional approach to the expectation that children should be taught to sit down in class. The average answer was “moderately important” on “How important is learning to ‘sit down, sit properly, and sit still’ to academic performance, in your opinion?”
3) BUT they scored just short of “overwhelmingly pleased” (with only 2 voting less than “moderately pleased”) on “If, upon visiting a classroom, you found children sprawled on the floor or kneeling on their chairs attentively doing their assigned tasks, would you be?”
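The tallying behind findings 2) and 3) is simple arithmetic. As a sketch, with made-up numbers (the actual 17 responses aren’t reproduced here), the mean score and the count of answers below “moderately pleased” on the 1–5 scale could be computed like this:

```python
# Hypothetical 1-5 responses to the final question (NOT the actual survey
# data): 1 = Horrified ... 5 = Overwhelmingly Pleased.
responses = [5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 3, 2]

mean_score = sum(responses) / len(responses)

# Count how many respondents answered below "moderately pleased" (i.e. < 4).
below_moderate = sum(1 for r in responses if r < 4)

print(f"mean score: {mean_score:.2f}")
print(f"responses below 'moderately pleased': {below_moderate}")
```

With these invented numbers the mean lands near the top of the scale with two low outliers, the same shape of result the survey reported.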

So the people who responded are serious about school and learning, have a somewhat traditional expectation but are mostly results oriented, keying on “attentively doing their assigned tasks” rather than “children sprawled on the floor or kneeling on their chairs.” Put another way, I could expect these people to accept and support relaxed movement rules IF the task focus and learning success stayed solid or improved.

This is not exactly what I expected (more traditional and more willing to accept successful adaptation). This is good information, helpful to me in advocating for more movement in our classrooms (and in how I do so).


How do you pilot a test, questionnaire, or survey?

The best way to pilot a test, questionnaire, or survey is to have a representative subgroup of the target population take it. It doesn’t have to be a big group, just enough to give it a test run and expose one’s oversights. As an example, I didn’t test my questionnaire; I just sent it out. I looked at it hard beforehand and imagined the responses in my mind, but I couldn’t see what I couldn’t see.

The biggest oversight was that my final question, the key question, was worded such that when I got results I wasn’t expecting, I wasn’t sure if the cause was the question’s wording or simply that my expectations were wrong. The question was: “10. If, upon visiting a classroom, you found children sprawled on the floor or kneeling on their chairs attentively doing their assigned tasks, would you be: 1 Horrified; 2 Concerned; 3 Indifferent; 4 Mildly Pleased; or 5 Overwhelmingly Pleased?” In spite of the previous questions building a case that the respondents mostly had a traditional expectation of classroom behavior, the weight of answers to this question fell toward the pleased end. This could be because respondents missed the conflict between “sprawled,” etc., and “attentively,” perhaps picking up on attentive more than sprawled.

As the responses piled up, I came to believe that, in spite of traditional expectations, these respondents valued the “attentive” and were either indifferent to or happy about the “sprawled.” For this exercise and for my own information, that was useful and interesting. But to be certain, or to use this survey in a more authoritative fashion, I would need to recheck that conclusion with one or more explicitly worded questions. Not piloting the survey before publishing it allowed a potential problem with the survey’s reliability to go unnoticed.

A second problem, less likely to plague more experienced researchers, is that I constructed my questions to give interlocking value. By this I mean I asked, for example, the sex of the children and hoped to compare that to sensitivity to movement issues. Only when I ran the survey did I find out that the “basic” version of the survey site I used doesn’t allow this level of analysis.
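The cross-question comparison I wanted can be done by hand once the raw responses are exported. A minimal sketch, with invented rows (the field names and answer wordings are illustrative, not taken from the actual survey tool):

```python
from collections import Counter

# Hypothetical exported rows of (child_sex, movement_importance) -- made up
# for illustration; the survey site's basic tier didn't expose this analysis.
rows = [
    ("F", "moderately important"),
    ("M", "very important"),
    ("F", "very important"),
    ("M", "moderately important"),
    ("M", "slightly important"),
]

# Counter tallies each (sex, answer) pairing: a simple cross-tabulation.
crosstab = Counter(rows)

for (sex, answer), count in sorted(crosstab.items()):
    print(f"{sex} / {answer}: {count}")
```

Exporting the raw responses and tallying pairs this way sidesteps the tool’s limitation entirely, at the cost of a little manual work.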

Red Plank, Again

Truth be told, I was pretty pissed off when I saw Red Plank so many years ago. It seemed like a ridiculous farce and the idea that my art teachers were presenting it as something worthy of admiration and that it was in a prestigious art museum stripped my gears. It seemed like lunacy and I resented what I thought was an affront to my common sense and the perceived implication that if I didn’t see its value, I was an uncultured boob.

I was maybe 14 then and have gotten much more accepting of being and/or being perceived as an uncultured boob. I have learned that standing for my truth, with as little confrontation as possible, is the best way to honor me and the other parties to the conversation. It is such a joy to be able to discuss wonderful things and not get tripped up by all the silliness that often surrounds such subjects. I am very grateful to have mostly learned that lesson.

Back to art: the idea that the artist creates art as a conversation with the audience is fascinating. It never occurred to me that the viewer is considered by the artist. That makes a difference in my understanding. For example, I better understand Jackson Pollock’s cigarette butts. Not completely, but I have an inkling.

I wonder why the interactivity is sucked out of the art in museums. There, the cold placement and historical context seem to powerfully fix the visitor in the “observer” role, passive and mute. Wouldn’t it be interesting to be an artist who silently asked questions with art and had viewers fill out questionnaires, fine-tuning the art to the way it’s perceived? Really, if you want to provoke a conversation or an emotional connection with the viewer, what better way to refine that process than with feedback from a survey?

Bias in Qualitative Research

I wish it were true that “you cannot let your personal opinions or biases get in the way of your research.” What about the “participatory and advocacy practices” wing of qualitative research? There, they believe “the qualitative researcher is not an objective, authoritative, politically neutral observer standing outside and above the text.” Further, “Ideas such as these challenge traditional research that holds firm to a neutral and objective stance.”

To be fair, it also says, “It also calls for the inquirers to report actively in their studies their own personal biases, values, and assumptions,” and sets laudable goals like creating research “in which the rights of women, gays, lesbians, racial groups, and different classes in our society need to be considered.” However, casting aside the need for, or aspiration toward, neutrality and objectivity is a deal breaker for me. If I want somebody’s political views, I’ll watch Chris Matthews.