
1. Introduction

1.3 Gaps in earlier research about teachers’ questioning practices

My review of earlier studies on questioning revealed three gaps that I identified as warranting further investigation. First, study reports over several years have consistently indicated that teachers continue to dominate their classes and pose many questions in a typical lesson, the majority of which are low-level questions. The reports also indicate that teachers have not taken up research recommendations and suggested techniques. Even teachers who underwent training in questioning techniques (Lucking, 1978; Rice, 1977; Wilen, 1984) could not pervasively and continuously implement the learned techniques (Sanders, 1972). This consistent finding over several years raises the question of why teachers persist in their ways of asking questions despite calls for change. Consequently, the reasons why teachers continue to question in the ways consistently reported remain unknown from a research perspective. Further still, earlier research studies do not indicate having taken into account teachers' own knowledge and perceptions about question asking, or how teachers themselves conceptualize the questions they use in their teaching. Thus, a teacher's perspective with regard to classroom questioning in general is missing in prior research about questioning.

Teacher education research in the past focused mostly on what teachers need to know and how they can be trained to do it (Carter, 1990; Richardson, 1990). What teachers actually know about teaching, and how they acquired that knowledge, received less attention. Consequently, teachers' own contributions to the knowledge base of teaching had long been missing in research (Cochran-Smith & Lytle, 1990). The same happened in the study of teacher questioning practices. Cochran-Smith and Lytle (1990) argue that important teacher perspectives concerning the nature of "questions teachers ask, the ways teachers use writing and intentional talk in their work lives, and the interpretive frames teachers use to understand and improve their own classroom practices" (p. 2) need to be explored from a teacher's perspective as well as a researcher's perspective. They argue that limiting the knowledge base for teaching to what academics have recommended has resulted in problems such as discontinuity between what is taught in universities and what actually happens in classrooms. With regard to questioning in science classrooms, for example, Eshach et al. (2014) pointed to a gap between how science researchers and teachers view the role of teacher questions: while teachers consider the affective domain, science education researchers focus on the cognitive dimension of teacher questions. Putnam and Borko (2000) also report that research knowledge is often inconsistent with how teachers think and view the reality of teaching. They note that "teachers, both experienced and novice often complain that learning experiences outside classroom are too removed from the day-to-day work of teaching to have a meaningful impact" (p. 6).

The implication is that teachers' experiential knowledge has a substantial effect not only on the actual practice of teaching but also on the extent to which teachers take up and apply educational research knowledge. Teachers' experiential knowledge, beliefs, and perceptions about teaching serve as a core reference for teachers as they process new information, and strongly influence how they approach their teaching (Golombek, 1998; Hampton, 1994; Pajares, 1992; Tabachnick & Zeichner, 1984). To be able to improve teaching, there is a need for a sufficient understanding of "how teachers cope with the complexities of their work" (Freeman, 1996, p. 95). Thus, the reasons why teachers employ a large percentage of lower-level questions could be better established if it were known how teachers conceptualize the questions they use, what they know about questioning, and how they understand the types of classroom questions in general. Without establishing teachers' knowledge and thinking about the questions they ask, it is difficult to validate researcher claims about teachers' lack of knowledge about questioning, since researchers and teachers likely differ conceptually in which forms of knowledge about questioning they consider. It is also difficult to ascertain the exact problems teachers face when using questions, as well as the forms of intervention that would best contribute to developing teachers' questioning practices.

Second, most research studies on teacher questioning employed question classification schemes (taxonomies) based either on Bloom's cognitive domain, e.g., Sanders' (1966) question classification scheme, or on Gallagher and Aschner's (1963) question category system, to study and report on teacher questioning. With such pre-established frameworks, a researcher would categorize a teacher's questions and then count the number of questions coded at each cognitive level of the classification scheme used. The results would show how many of a teacher's questions were lower-level and how many were higher-level if Bloom's cognitive levels were used, or how many questions were convergent or divergent if Gallagher and Aschner's (1963) question category system was used.

In her reviews of the use of teacher questions, Gall points out the insufficiency of available taxonomies for classifying teacher questions: they are not fully grounded in a theory of instruction and learning, and thus fail to provide a basis for determining the various levels of questions asked and their respective answers (Gall, 1970; Gall, Gall, & Borg, 1996). She further notes that these systems were formulated to explain the questions teachers ask rather than the questions teachers should ask in a classroom situation, and thus are not suitable for use in question framing. In a similar vein, Furst (1981), reviewing the application of Bloom's taxonomy to questioning, noted that "the scheme is aimed more at the outcomes of instruction than at the language moves a teacher might undertake to probe meanings, opinions, and preferences and otherwise to facilitate discussion" (p. 33).

Several other researchers expressed similar concerns about using pre-defined category systems to study teacher questioning practices. Farrar (1986) noted that question classification frameworks cannot account for all the functions of teacher questions, which are both social and cognitive. Others pointed to a lack of fine-grained analyses in earlier studies on teacher questioning that would uncover the details surrounding questioning (e.g., Andre, 1979; Chin, 2007; Dunkin & Biddle, 1974; Heritage & Heritage, 2013; Ho, 2005; Roth, 1996). For example, Ho (2005) argued that question-answer exchanges are not isolated activities but are influenced by other factors within the teaching context, and that such exchanges are open to varied interpretations. Roth (1996) noted that using pre-determined frameworks to measure and collapse scores across students, situations, or social and physical settings does not allow a sufficient understanding of teachers' practice of questioning. Andre (1979) likewise observed that the question taxonomies in use often fail to capture the details of a teacher's questioning. He added that some researchers might have difficulties using certain question classification taxonomies, while others might be influenced by their own perceptions and understanding of the topic of questioning. He thus concluded, like Dunkin and Biddle (1974, p. 8), that the reliability with which teacher questions can be classified using pre-determined schemes is at best moderate.

Further still, research on teacher questioning that focused on the relationship between discrete observable teacher questioning practices and student outcomes, or student achievement in particular (Carlsen, 1991; Chin, 2007; Roth, 1996), seems to have paid little attention to the interactional nature of classroom discourse. This may be one of the reasons why research on whether certain questions lead to greater student learning gains than others remains inconclusive (Brophy, 1986). Review studies on this aspect of questioning indicate that whereas some researchers found higher-level questions to lead to higher student achievement (Gayle et al., 2006; Mills et al., 1980), other studies (Dunkin & Biddle, 1974; Gall, 1970) concluded that the use of higher-level questions showed little relationship to student achievement.

Therefore, it is likely that such inconsistencies result from the inadequacy of the methods employed in the study of teachers' questions, which account only for the cognitive functions of questions and leave out their social function and the interactional nature of classroom discourse. As Farrar (1986) advised, there is a need for approaches that allow questions and responses to be examined in context before valid judgments can be made about the value of classroom questions.

The third gap identified as warranting exploration concerns the fact that, despite a large body of research on teacher questioning in science classrooms, studies focusing on questioning in chemistry classrooms were scarce, especially at the conception of the current study. Indeed, when the present study topic was conceived in the spring of 2013, it was difficult to find reliable sources of information concerning teachers' questioning behaviors in chemistry classrooms. This picture has not changed much as of 2017. Only a few studies addressing teachers' questioning in chemistry classrooms have emerged (e.g., Kira, Komba, Kafanabo, & Tilya, 2013; Li & Arshad, 2014; Nehring, Päßler, & Tiemann, 2017). Further still, the first two issues that I noted as missing from previous research studies are not addressed in these recent studies.