

6.2 POST EXPERIMENTAL QUESTIONS

Table 10: Summary of Post Experimental Question Responses

#  Question content                                     Scale / Anchor              Average  Std.dev.  Max.  Min.
1  Years of experience as an auditor                    Years                          7.3      2.5      12   2.5
2  Time to complete the study                           Minutes                       50.6      8.1      65    35
3  Case materials were easy to understand               1=Never, 7=Always              5.1      1.2       7     3
4  Case materials were realistic                        1=Never, 7=Always              4.7      1.4       6     2
5  Difficulties due to materials being in English       1=None, 7=Very Difficult       1.8      0.9       5     1
6  Understanding of framework                           1=Poor, 7=Excellent            5.3      0.8       7     4
7  Effort needed to understand and complete materials   1=Easy, 7=Difficult            4.1      1.0       6     2

For questions #3 to #7, a seven-point scale anchored at the end points was used.

Responses are from the 21 participants included in the judgment analysis.

Twenty-one participants were included in the judgment analysis. All of the participants were audit managers, and average work experience as an auditor was 7.3 years (#1 in table 10).

An average of 51 minutes was spent on the entire experiment (#2 in table 10).

Participants on average reported the case materials to be easy to understand (5.1 on a 7-point scale; #3 in table 10) and realistic (4.7 on a 7-point scale; #4 in table 10). Providing the instructions in English did not cause difficulties (1.8 on a 7-point scale; #5 in table 10).

59 For example, when participants interpreted inherent risk as all transactions being erroneous prior to control, an appropriate judgment policy would be to require all locations to have sufficient control if control risk is to be below 100%. Such a judgment policy implies that all cues in the cases that were intended to be independent, amplifying, compensating and multi-step were perceived as completely dependent (i.e., since all control cues are required to be effective for control to be sufficient).

Participants generally assessed their understanding of the control interrelationship framework as good (5.3 on a 7-point scale; #6 in table 10). When asked to circle any framework elements they found difficult to understand or apply, 11 participants reported difficulties with amplifying controls: 6 had difficulties both understanding and applying the definition, 3 had difficulties understanding the definition (but did not report difficulties applying it), and 2 had difficulties applying the definition (but did not report difficulties understanding it).

Furthermore, two participants reported difficulties applying the definition of multi-step controls, while one participant reported difficulties applying the definition of substitutable controls. Participants reported that, overall, understanding and completing the case materials required some effort (4.1 on a 7-point scale; #7 in table 10).

Overall, this indicates that the experimental task was not perceived as trivial. Furthermore, the responses indicate that participants generally understood the materials and that the effort required was not excessive. It can, however, be noted that some participants reported issues with understanding and applying the amplifying cue definition.60

Judgment logic in binary judgments

Since participants also made control risk judgments for each case, it is possible that the binary judgments were based on the control risk percentage judgment rather than on a reassessment of the cues. This would imply a different judgment policy (i.e., a simple accept/reject judgment about a percentage score) than the hypothesized conjunctive or disjunctive policy applied directly to the cues. The construct validity of the dependent variable operationalization (i.e., FFJP) would thus be threatened (i.e., the regression coefficients would not represent the underlying judgment policy).61

60 No covariate analysis was performed for the following reason: The dependent variable in this study is the functional form of the judgment policy. This is only observed at the aggregate level for all participants. It is therefore not feasible to analyze the effect of individual level covariates on the aggregate policy.

61 As noted in the review of the psychology literature, this is an inherent weakness of policy capturing, since one does not observe what goes on within the mind of the judge. The models from the regression of judgments on cue levels are only paramorphic (i.e., surface) representations of judgment policies (Hoffman, 1960; Ashton, 1982; Trotman, 1996).

In the post experimental questions, participants were therefore asked the following question:

It is of special interest to this study to understand your thought process when you were judging the “YES/NO” question of whether sufficient controls were in place. Please read both options below and think carefully about the way you responded to that question in the survey. Circle the option (below) that best describes your thought process. Both options lead to appropriate judgments, so please think back to how you actually made the judgments in your mind:

(1) I simply looked at the control test results and considered whether there were sufficient controls in each location. That is, in your mind you were not just comparing the overall control risk score of the client to a threshold.

(2) I simply looked at the percentage score in the client’s overall control risk judgment and considered whether this was above or below my threshold. That is, in your mind you were not thinking in terms of sufficiency of controls in locations, but rather in terms of percentage scores. Please indicate the control risk threshold you applied:_________%

From a normative perspective, both strategies should result in identical judgments. It is, however, more efficient to use a conjunctive (disjunctive) judgment strategy, since one only needs to find enough deficient (effective) controls to judge controls insufficient (sufficient) (e.g., if all cues are perceived as necessary (individually sufficient), then it is sufficient to find one deficient (effective) control in order to make a negative (positive) judgment).
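As an illustration of this efficiency argument, the following is a minimal sketch (not part of the study materials; the cue pattern shown is hypothetical) of how a conjunctive policy can stop evaluating at the first deficient control, while a disjunctive policy can stop at the first effective control:

from typing import Iterable


def conjunctive_judgment(cues: Iterable[bool]) -> bool:
    """Controls are sufficient only if every cue is effective (each cue is necessary)."""
    for effective in cues:
        if not effective:
            return False  # one deficient control suffices for a negative judgment
    return True


def disjunctive_judgment(cues: Iterable[bool]) -> bool:
    """Controls are sufficient if any cue is effective (each cue is individually sufficient)."""
    for effective in cues:
        if effective:
            return True  # one effective control suffices for a positive judgment
    return False


# Hypothetical cue pattern for one case: True = control tested effective.
case_cues = [True, True, False, True]
print(conjunctive_judgment(case_cues))  # False: a deficient control was found
print(disjunctive_judgment(case_cues))  # True: an effective control was found

The explicit loops are equivalent to Python's built-in all() and any(), but they make the early-stopping (and hence the efficiency) of each strategy visible.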

Responses revealed that 14 auditors (i.e., 67%) made their judgments based on the cues (option 1 above), while 7 auditors (i.e., 33%) made their judgments based on the percentage score (option 2 above). If it is assumed that each auditor chooses one of these judgment processes at random (i.e., a 50% chance for each), then the likelihood that 14 out of 21 auditors would base their judgments directly on the cues due to randomness alone is <0.039.

Statistically, it is therefore supported at the 5% level that auditors, on average, deliberately use option 1 (i.e., they judge cues directly).
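As a sketch of the probability calculation under the stated 50/50 assumption (the exact test procedure is not reported here, so a simple one-sided binomial, or sign-test, tail is assumed), the chance of at least k of n auditors choosing a given option can be computed as follows:

from math import comb


def binomial_upper_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k 'successes'."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))


# Upper-tail probability for 14 of 21 auditors under the 50/50 null.
print(binomial_upper_tail(14, 21))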

Six of the seven auditors basing their judgments on percentage scores reported thresholds between 0% and 15%, while the seventh reported a threshold of 25%. For the majority of the auditors (67%), the description of their judgment policy thus supports the validity of the dependent variable construct operationalization.