In Section 9.4.2 (An example: evaluating
icon designs), we saw that the observed results
could be due to interference. Can you think
of alternative designs that might make this less likely?
Remember that individual variation was very high,
so you must retain a within-subjects design, but you
may perform more tests on each participant.
Three possible ways of reducing interference:
- During the initial training period,
alternate between learning the two sets of icons,
with the aim of accustoming the subjects to
switching between the two remembered sets.
However, this design could be argued to suffer
the same flaw as the original: had the abstract
icons been taught in isolation, they might
have fared far better.
- We could invent a third set of 'random'
icons (call them R). We could then interpose them
in the experiment, that is, present the icons in
the orders RARN and RNRA. The intention is to swamp
any transfer effect in the 'noise' of the random
icons. It could be argued that our experiment then
measures the robustness of the icon sets to such
interference.
- We could give the subjects multiple
presentations, for example in ANAN and NANA
orders. This would not remove transfer effects,
but it would give us some way to quantify them.
Imagine that in the ANAN group the second presentation
of the abstract icons was significantly worse than
the first, but there was no similar effect for the
natural icons in the NANA group. This would give
us both positive evidence of a transfer effect
and perhaps some quantitative measure of it
(see the sketch after this list). However,
even going from this additional evidence to a strong
conclusion would be difficult.
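As a rough illustration of the last design, the sketch below analyses
hypothetical recall-accuracy scores from the ANAN and NANA groups using
paired t-tests. The data, the group size, the effect sizes, and the
choice of test are all assumptions made for the example, not figures
from the original study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical within-subjects data: recall accuracy (0-1) for each
# subject on the first and second presentation of the same icon set.
# ANAN group: abstract icons appear 1st and 3rd (blocks A1, A2).
# NANA group: natural icons appear 1st and 3rd (blocks N1, N2).
n_subjects = 12
anan_a1 = rng.normal(0.70, 0.08, n_subjects)            # first abstract block
anan_a2 = anan_a1 - rng.normal(0.10, 0.05, n_subjects)  # assumed drop after N
nana_n1 = rng.normal(0.80, 0.08, n_subjects)            # first natural block
nana_n2 = nana_n1 - rng.normal(0.01, 0.05, n_subjects)  # assumed ~no drop

# Paired t-test within each group: did performance fall between the
# first and second presentation of the same icon set?
t_a, p_a = stats.ttest_rel(anan_a1, anan_a2)
t_n, p_n = stats.ttest_rel(nana_n1, nana_n2)

print(f"ANAN (abstract): mean drop = {np.mean(anan_a1 - anan_a2):.3f}, p = {p_a:.4f}")
print(f"NANA (natural):  mean drop = {np.mean(nana_n1 - nana_n2):.3f}, p = {p_n:.4f}")

# A significant drop for abstract icons with no comparable drop for
# natural icons would be positive evidence of a transfer effect, and
# the mean drop itself gives a crude quantitative measure of its size.
```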
Notice that all the above measures require
additional subject time, and one constantly has
to weigh the advantages of richer experiments against
those of larger subject groups.
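One rough way to make that trade-off concrete is a simulation-based
power estimate: fix a plausible effect size and see how the chance of
detecting it grows with the number of subjects. All the numbers below
(mean drop, subject-level variability, significance level) are
illustrative assumptions, not values from the icon study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_subjects, mean_drop=0.10, sd=0.15,
                    alpha=0.05, n_sims=2000):
    """Estimate the power of a paired t-test to detect a performance
    drop of `mean_drop` given per-subject noise `sd` (assumed values)."""
    hits = 0
    for _ in range(n_sims):
        diffs = rng.normal(mean_drop, sd, n_subjects)  # per-subject drops
        _, p = stats.ttest_1samp(diffs, 0.0)           # test mean drop != 0
        if p < alpha:
            hits += 1
    return hits / n_sims

# More subjects buys power; a richer design (more tests per subject)
# would instead shrink the effective noise per subject.
for n in (6, 10, 15, 20):
    print(f"{n:2d} subjects: power ~ {estimated_power(n):.2f}")
```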