Terezinha Nunes*, Peter
Bryant, Deborah Evans, Daniel Bell, Selina Gardner, Adelina Gardner
and Julia Carraher
It has often been
claimed that children’s mathematical understanding is based on
their ability to reason
logically, but there is no good evidence for this causal link. We
tested the causal
hypothesis about logic and mathematical development in two
related studies. In a
longitudinal study, we showed that (a) 6-year-old children’s
logical abilities and
their working memory predict mathematical achievement 16
months later; and (b)
logical scores continued to predict mathematical levels after
controls for working
memory, whereas working memory scores failed to predict the
same measure after
controls for differences in logical ability. In our second study, we
trained a group of
children in logical reasoning and found that they made more
progress in mathematics
than a control group who were not given this training.
These studies establish
a causal link between logical reasoning and mathematical
learning. Much of
children’s mathematical knowledge is based on their understanding
of its underlying
logic.
Logical relations lie
at the heart of many fields of inquiry. Think of the relation known
as
transitivity: if A = B and B = C, then A = C. This relation is
pertinent to
the number of objects
in a set, to length, to volume, to colour, to shape, to people’s
intelligence, to the
matching of photographs of fingerprints, to the taste of orange
juice
etc. Transitivity is
significant in the domains of number and measurement, but it is
neither number nor
measurement. Not all relations are transitive: if A is the father
of
B and B is the father
of C, it does not follow that A is the father of C. For this reason,
Piaget (1950) argued
that the pertinence of a logical relation is given meaning in a
domain through
experience.
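The difference between transitive and non-transitive relations can be checked mechanically over a finite set of pairs. A minimal sketch (the relation names and pairs are illustrative, not materials from the study):

```python
def is_transitive(pairs):
    """Return True if, whenever (a, b) and (b, c) are in the
    relation, (a, c) is in the relation too."""
    rel = set(pairs)
    return all((a, c) in rel
               for a, b in rel
               for b2, c in rel
               if b == b2)

# 'Same length as' is transitive: A = B and B = C imply A = C.
same_length = [("A", "B"), ("B", "C"), ("A", "C")]

# 'Father of' is not: A fathers B and B fathers C, but A is
# C's grandfather, not C's father.
father_of = [("A", "B"), ("B", "C")]

print(is_transitive(same_length))  # True
print(is_transitive(father_of))    # False
```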
A similar argument was
advanced by Simon and Klahr (1995; Klahr, 1982) about
conservation.
Transformations, they argued, have different effects on different
quantities, and
children learn about these differences by experience. If we add a jar
of water at 20 °C to another jar of water also at 20 °C, the quantity of water (the
extensive quantity)
increases, but the temperature (an intensive quantity) stays
the same. Children must
learn the domains where different logical relations (or
axioms) apply.
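The extensive/intensive contrast amounts to a simple combination rule: volumes add, while temperatures combine as a volume-weighted mean. A sketch under that assumption (the function and values are illustrative, not from the study):

```python
def mix(jar_a, jar_b):
    """Combine two jars given as (volume, temperature) pairs.
    Volume (extensive) adds; temperature (intensive) is the
    volume-weighted mean of the two temperatures."""
    va, ta = jar_a
    vb, tb = jar_b
    volume = va + vb
    temperature = (va * ta + vb * tb) / volume
    return volume, temperature

# Two equal jars at 20 degrees: volume doubles, temperature stays put.
print(mix((1.0, 20.0), (1.0, 20.0)))  # (2.0, 20.0)
```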
Children almost
certainly need to understand the logical relations between
quantities in order to
learn how to represent numbers and arithmetic. These relations
are not the same as
arithmetic or the numeration system, but are relevant to them. One
such relation is
correspondence: if two sets contain the same number of objects, then
the objects in one set
are in one-to-one correspondence with those in the other. If set B
is in two-to-one
correspondence with set A, and C is in two-to-one correspondence
with A, then B and C
are equivalent. One-to-one correspondence is involved in the
understanding of
cardinality (e.g. Gelman & Gallistel, 1978) and one-to-many
correspondence in the
understanding of multiplication (Park & Nunes, 2001). One-to-
one relations have been
explored in many studies, but little is known about children’s
use of one-to-many
correspondences (though exceptions exist, e.g. Frydman & Bryant,
1988; Kornilaki, 1999;
Nunes & Bryant, 1996; Piaget, 1952).
Another logical
relation relevant to whole-number arithmetic is the inverse
relation between
operations, for example, A + B − B = A. Bryant, Christie, and
Rendu (1999) showed
that children initially have only a non-quantitative understanding of this inverse relation. Our hypothesis is that a
quantitative
understanding of this inverse relation is an important foundation for
learning arithmetic.
Children are taught, for example, that instead of adding 9 to a
number, they can take
the easy route of adding 10 and subtracting 1 – a procedure that
only makes sense if
they understand the inverse relation between addition and
subtraction.
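The shortcut rests on the identity a + b − b = a. A minimal sketch of the idea (the helper name is hypothetical):

```python
def add_nine(n):
    # Add 10, then subtract 1: this is legitimate only because
    # subtraction undoes addition (a + b - b = a).
    return n + 10 - 1

# The shortcut agrees with direct addition for any n.
assert all(add_nine(n) == n + 9 for n in range(100))

# The inverse relation itself: adding and then removing b leaves a.
a, b = 9, 7
assert a + b - b == a

print(add_nine(25))  # 34
```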
A third example is the
relation known as additive composition, which can be
applied to operations
and relations as well as numbers. Piaget and Inhelder (1969)
argued that 2-year-olds
understand the additive composition of movements in space: a
movement from A to B
combined with a movement from B to C is the same as a
movement from A to C.
In its application to number, additive composition means that
any number can be
expressed as the sum of two other numbers (or decomposed into
two other numbers): 8
can be expressed as the sum of 7 + 1, 6 + 2 or 5 + 3, etc., and the value of
the set does not change. Additive composition is central to
column addition, where we implicitly decompose the numbers into tens and units and add each column separately. Whereas the
composition of numbers
is understood by about 60% of children around age six
(Nunes & Bryant,
1996), Vergnaud (1982) and Brown (1981) independently found
that the composition of
relations and transformations is achieved much later,
around age 11.
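Column addition as implicit decomposition can be sketched for two-digit numbers (an illustrative sketch, with the carry handled explicitly):

```python
def column_add(x, y):
    """Add two two-digit numbers by decomposing each into tens
    and units, adding the columns, then recombining."""
    x_tens, x_units = divmod(x, 10)
    y_tens, y_units = divmod(y, 10)
    units = x_units + y_units
    carry, units = divmod(units, 10)   # carry into the tens column
    tens = x_tens + y_tens + carry
    return tens * 10 + units

print(column_add(47, 38))  # 85
```

The decomposition step (47 as 40 + 7) is exactly the additive composition the text describes; the algorithm only works because the number keeps its value when split into parts.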
Finally, in elementary
mathematics, children must grasp relations of order. Mix,
Huttenlocher, and
Levine (2002) argued that order relations are necessary for children
to monitor some of
their early learning of operations: they should realize, for example,
that 3 + 1 cannot be 2
because 2 is less than 3. Since Piaget’s pioneering work (Piaget,
1952), many studies
have investigated children’s understanding of seriation. It has
been
shown that children
differ in their understanding of seriation at the beginning of school
(van de Rijt, van Luit,
& Pennings, 1999).
The importance of logic
in learning mathematics was Piaget’s (1952) central
claim about
mathematical development. Yet, there is no firm evidence that
children’s progress
in learning mathematics depends on their understanding of
logical relations. This
is a causal hypothesis, and to test this adequately, one must
gather two kinds of
evidence. The first is longitudinal: if children’s logic
determines
how well they learn
mathematics, a measure of children’s understanding of the
logical basis of
mathematics when they start school should predict the progress that
individual children
make in mathematics over the ensuing months. As far as we
know and to our
surprise, no one has tested this particular prediction. There have
been longitudinal
studies of mathematics with predictors such as number
experiences
(Young-Loveridge, 1989) and associated measures (e.g. making visual
size comparisons,
counting, calculating, copying number patterns and comparing
the numerosity in rows;
Aubrey, Dahl, & Godfrey, 2006; Van Luit, Van de Rijt, &
Pennings, 1994), but
none of these established that children’s logical reasoning is a
predictor of
mathematics learning in school, even though a measure of logical
reasoning was included
in one study (Aubrey et al., 2006).
The second essential
test of a causal hypothesis is an experimental intervention. If
the understanding of
logical relations in mathematics is a genuine causal factor, an
intervention that
increases children’s understanding of these relations should also
improve their
mathematics. This dual approach to testing a causal hypothesis
(Bradley &
Bryant, 1983; Wagner &
Torgesen, 1987) rests on the idea that the strengths of each method compensate for the weaknesses of the other. Successful predictions certainly
establish a genuine
connection between two variables, but do not remove the danger of
a third unknown
variable that determines both the predictor and the outcome. On the
other hand,
intervention experiments do establish a causal connection within the
experiment. The
weakness of intervention experiments is that this connection could be
artificial. A variable
that has an effect in a laboratory experiment may not be relevant in
real life. However,
when the intervention is complemented by a successful longitudinal
prediction, the risk of
artificiality disappears. This powerful combination of
complementary methods
has not been tried in work on the links between children’s
logic and their
mathematical development.
We set up a two-part
test of the hypothesis that children’s understanding of
the logical basis of
mathematics is a determinant of how well they learn
mathematics. The first
part was a longitudinal study in which the main predictor was
a measure of children’s
understanding of logical relations relevant to number
representation and
arithmetic; the outcome measure was their progress in
mathematics at school.
We included a measure
of working memory as a control. There is impressive
evidence that working
memory plays a part in the processes involved in mental arithmetic, in studies from the UK (Adams & Hitch, 1997; Gathercole & Pickering, 2000a; Hitch &
McAuley, 1991; McLean &
Hitch, 1999; Towse & Hitch, 1995), US (Fuchs et al.,
2005; Geary, Hamson, &
Hoard, 2000; Siegel & Linder, 1984; Siegel & Ryan, 1989)
and other countries (Barrouillet & Lépine, 2005; Passolunghi & Siegel, 2004). Case
(1982) argued that the
connection between working memory and arithmetic is
due to the constraints
that central processing capacity imposes on cognitive
development. Thus, one
must control for the effects of working memory when
analysing whether
logical reasoning predicts mathematics learning. Our aim was to
see whether the two
variables make independent contributions to children’s
mathematics
achievement.
The second part of the
study was an intervention, in which we taught an
experimental group
about logical relations (again our emphasis was on logic and not at
all on calculation) and
examined the effects on mathematical progress.
STUDY 1: PREDICTING PERFORMANCE IN A STANDARDIZED MATHEMATICS ACHIEVEMENT TEST
Method
Participants
We recruited 59
children from four schools in Oxford, which serve a varied clientele
in
socio-economic terms.
All the children in their first year of school were invited to
participate. Parental
permission and the child’s own consent were obtained.
The children were seen
on three occasions. There was a loss of six participants
between the first and
the last occasion. The data reported are on the 53 children who were seen on all three occasions. Their mean age was 6 years (SD = 3.4 months) at the first
testing occasion, 6
years 4 months at the second and 7 years 4 months at the third.
Design
The first sweep of data collection took place at the beginning of the children's second term at school.
They were seen
individually by an experimenter during two testing sessions, when
they
completed four subtests of the British Ability Scales (BAS II; Elliott, 1997) as a measure of
general cognitive
ability, two subtests from the Working Memory Test Battery for
Children (Pickering &
Gathercole, 2001) and our assessment of logical competence.
On the second testing
occasion, we gave them the logical competence test again to
establish its
test–retest reliability. The third testing occasion comprised our
outcome
measure, the
school-administered Standardized Achievement Tasks, Mathematics
Section (SATs-Maths;
see http://www.qca.org.uk). The interval between our first data
collection and the
SATs-Maths was 16 months.
Measures
BAS II
We used four subtests of the BAS II: quantitative reasoning, verbal similarities, number skills
skills
and matrices. We
pro-rated these scores for an overall estimate of general cognitive
ability. The four
subtests show high reliability (all alpha coefficients > .8) and
high test–
retest correlation
(verbal similarities and number skills, r = .91; quantitative reasoning, r = .70; matrices, r = .64). The correlations between verbal similarities, quantitative
reasoning and matrices
and general cognitive ability are all above .7; the correlation
between number skills
and general cognitive ability is .55. Although this last correlation
is lower, we included
this subtest to control for the children’s knowledge of written
numbers and addition
and subtraction facts. The first eight items of this subtest include
reading four numbers
(100, 12, 40 and 31) and working out four sums (2 + 3, 4 − 1, 9 + 5 and 18 − 5).
Subsequent items are usually too hard for children of this age.
Working memory
We chose two subtests
of the Working Memory Test Battery for Children (Pickering &
Gathercole, 2001),
counting recall and backward digit recall. In counting recall,
children
have to count the
number of dots on a series of pages and then recall how many dots
were on each page in
the right order. In backward digit recall, the children hear a series
of numbers and have to
recall them in the opposite order. The two subtests have high
loadings on the factor
identified as central executive (for counting recall, .81; for
backward digit recall:
.64; Pickering & Gathercole, 2001, pp. 19–22), which shows
a higher correlation
with standardized tests of arithmetic than phonological loop
measures (Pickering &
Gathercole, 2001, p. 25). Both the subtests have good test–retest
reliability (counting recall r = .74; backward digit recall r = .53).
Assessment of
children’s logical competence
This assessment
requires no knowledge of reading or writing numbers. It concentrates
on
children’s
understanding of the logical basis of whole number and operations and
does not
depend on arithmetical
skills. The only form of numerical knowledge required is counting:
the way counting is used depends on the logical relation underlying the question.
The test was carried
out individually: all instructions and responses were given orally.
The assessment included
four subtests of children’s logical understanding of: (1) the
inverse relation
between addition and subtraction; (2) additive composition; (3)
one-to-
one and one-to-many
correspondences; and (4) seriation.
(1) Children’s
understanding of the inverse relation between addition and
subtraction was
assessed by two types of problem: one could be answered without
counting and the other required an answer that could be obtained by counting or by calculation. The
problems that did not require counting were of the form
[a + b − b], [a + b − (b − 1)] or [a + b − (b + 1)]. Three of these were about bricks
presented as a row; the
row a, to which we added and subtracted the same number of
bricks, was covered by
a cloth so that counting was not possible. The bricks were added
to one end of the row
and subtracted from the opposite end. The remaining three
problems were stories
presented with the support of drawings, which indicated that a
objects were in a box,
b were added and b or b − 1 or b + 1 were taken away. For
example, one story was:
‘There were nine doughnuts in a box; then someone came and
put seven doughnuts in;
later, someone came and took seven doughnuts out; how many
doughnuts are in the
box now?’ The drawings showed a sequence, as in a cartoon: the
first picture was of a
box; the second picture had the same box and above it seven
doughnuts of different
colours, next to an arrow that pointed to the box, representing
addition; the third
picture showed the box and doughnuts of different colours and
spread differently from
those in the previous drawing, next to an arrow pointing away
from the box,
representing subtraction. The children could not count to obtain the
solution because the
initial set in the box was not visible. If they understood the
inverse
relation between
addition and subtraction, they would understand that there was no
need to count or
compute to solve these items: the answer was either the same number
of bricks or objects at
the start, or a number that differed from the starting-point by 1.
The problems which
required counting were missing addend (two items) or missing
minuend (two items)
problems. For example, a missing minuend problem was: ‘Ali had
some sweets in her bag,
I don’t know how many. She gave her brother three sweets and
still had four sweets
in her bag. How many sweets did she have in the bag before she
gave some away?’ The
children were given manipulatives (e.g. cut-out figures of sweets)
to represent the
sweets. If the children understood the inverse relation between
addition and
subtraction, they would represent the two sets in the story, three
and four,
and count up, not down,
even though in the story Ali gave three sweets away.
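The counting-up strategy for a missing minuend can be sketched as follows (an illustrative model of the child's strategy, not the study's materials: since start − given = remaining, the start is remaining plus given):

```python
def missing_minuend(given_away, remaining):
    """Ali's problem: she gave some sweets away and has some left;
    how many did she start with?  A child who grasps inversion
    counts up from the remaining sweets rather than subtracting."""
    start = remaining
    for _ in range(given_away):   # count up once per sweet given away
        start += 1
    return start

print(missing_minuend(3, 4))  # 7
```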
(2) The understanding
of additive composition was assessed through the Shop Task
(Nunes & Bryant,
1996; Nunes & Schliemann, 1990), where children are invited to
pretend that they are
buying items from the tester. The task is introduced by making
sure that the children
recognize 1p, 5p, 10p and 20p coins, which are used in the task.
They are also asked to
count a set of 30 1p coins to ascertain whether the values in the
task are in their
counting range. These warm-up items were not scored and six items
were then presented.
Three required the children to say how much money they had
been given (e.g. in one
item they were given one 5p and three 1p coins and asked how
much money they had).
In the remaining items, the children were asked to ‘pay’ a
certain value with the
coins that they had (e.g. they were given one 5p and four 1p coins
and asked to pay 7p).
In each of the items, one coin more valuable than 1p (5p, 10p or
20p) was presented in
combination with 1p coins. In order to pay a specified amount,
for example, 7p, using
one 5p and two 1p coins, the child would have to realize that 7 is
the same as 5 + 2.
This could be achieved by counting on from 5 as the 1p coins were
taken by the child.
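The counting-on solution can be sketched as a simple loop (an illustrative model; the function name and its limits are assumptions, not the task's procedure):

```python
def pay(target, big_coin, pennies):
    """Count on from the value of the larger coin, handing over
    1p coins one at a time until the target is reached; returns
    the number of pennies used, or None if the target cannot
    be made."""
    total = big_coin
    used = 0
    while total < target and used < pennies:
        total += 1     # counting on: 'six, seven, ...'
        used += 1
    return used if total == target else None

print(pay(7, 5, 4))  # 2  (7 is the same as 5 + 2)
```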
(3) Children’s
understanding of one-to-one and one-to-many correspondences was
assessed by 14 items.
The one-to-one correspondence items (four items) were presented
in print and required
the children to find which row of objects (out of three) had the
same number as the top
row or to produce an array of counters with the same number
as the dots on a page.
The row that matched the top row in length had a different density of objects. These items
were designed by van de Rijt et al. (1999).
There were two types of
one-to-many correspondence items, involving either
sharing or not (five
items of each type). In the sharing trials, the children were asked
to
share some sweets or coins of different values fairly between two dolls, as in Frydman and Bryant (1988). For
example, with coins the children were told that they were going to
give money to two
dolls, which were going to buy sweets. The distribution should be
fair, so that the dolls
could buy the same amount of sweets. The children were then
shown 1p and 2p coins
and asked how much each was worth. Most children in this age
range (and all children
in this study) know the value of the coins and know that 2p is
more than 1p. The
children were then told that one doll was going to receive its money
in 2p coins and the
other in 1p coins, but that they should receive fair shares. The
solution can be
attained by giving a 2p coin to one doll and two 1p coins to the other doll,
instead of sharing on a
one-for-A and one-for-B basis. No counting is required but the
solution can also be
obtained by counting: in this case, the children would have to
establish a one-to-two
correspondence between the 2p coins and the count words in
order to determine the
fair shares.
In the remaining
one-to-many correspondence trials, the children were asked to
solve simple
multiplication questions and were given manipulatives that would help
them solve the problem.
For example, they were shown a row of four cut-out pictures of
houses and told: ‘In
each house in this street live three dogs. How many dogs live in this
street?’ Children of
this age level do not know multiplication tables: they solve the
problem by pointing
three times to each house as they count the number of non-visible
dogs that live in the
house. They seem to use a theorem-in-action (Vergnaud, 1988),
which could be
verbalized as: ‘if there is a one-to-three correspondence between
houses
and dogs, and I create
a one-to-three correspondence between houses and number
words, then I know the
number of dogs’.
Thus, in the sharing
items, the children often establish the correspondence in action:
they give two one-unit
items to one recipient and then one two-unit item to the other recipient
(Frydman & Bryant, 1988). In the multiplication items, the
children
establish a one-to-many
correspondence between visible objects and counting labels for
the invisible objects:
in the problem just described, they typically point three times to
each house as they
count the number of dogs that live inside (Kornilaki, 1999).
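The theorem-in-action can be modelled as nested counting (an illustrative sketch of the pointing-and-counting strategy, not a formal model of the children's procedure):

```python
def count_dogs(houses, dogs_per_house):
    """Mimic the child's strategy: point to each house once per
    dog inside it, saying the next number word each time."""
    number_word = 0
    for _ in range(houses):
        for _ in range(dogs_per_house):   # three points per house
            number_word += 1
    return number_word

print(count_dogs(4, 3))  # 12
```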
(4) Children’s
understanding of seriation was assessed by four items, where a
series of objects with
different and ordered characteristics (e.g. a series of candy
sticks with 1, 3, 4 and
5 coloured stripes) was depicted in a row. Outside the row is a
similar object (e.g. a
stick with two coloured stripes); the child is asked to show
where that object
belongs in the row. Two items assessed double seriation (e.g.
drawings of candlesticks with different numbers of candle holders and cards with
different numbers of
candles), where two series have to be placed in
correspondence. The
items are inspired by Piaget’s (1952) seriation tasks; two
were designed by van de
Rijt et al. (1999).
The SATs-Maths
Pupils in England are
given a standardized mathematics achievement test at age 7, which
is designed by the
Qualifications and Curriculum Authority of the Department for
Education and Skills.
This test is administered by teachers in schools. Children are
classified into levels
of achievement based on their test performance. Table 1 presents a
brief description of
the levels of achievement.
Table 1. Levels of
attainment in the National Curriculum
Level 1
Pupils count, order, add and subtract numbers when solving problems involving up to 10 objects. They
read and
write the numbers
involved
Level 2
Pupils count sets of
objects reliably, and use mental recall of addition and subtraction
facts to 10. They
begin to understand the
place value of each digit in a number and use this to order numbers
up to 100.
They choose the
appropriate operation when solving addition and subtraction problems.
They use the
knowledge that
subtraction is the inverse of addition. They use mental calculation
strategies to solve
number problems
involving money and measures. They recognize sequences of numbers,
including odd
and even numbers
Level 3
Pupils show
understanding of place value in numbers up to 1,000 and use this to
make approximations.
They begin to use
decimal notation and to recognize negative numbers, in contexts such
as money and
temperature. Pupils use
mental recall of addition and subtraction facts to 20 in solving
problems
involving larger
numbers. They add and subtract numbers with two digits mentally and
numbers with
three digits using
written methods. They use mental recall of the 2, 3, 4, 5 and 10
multiplication tables
and derive the
associated division facts. They solve whole number problems involving
multiplication or
division, including
those that give rise to remainders. They use simple fractions that
are several parts of
a whole and recognize
when two simple fractions are equivalent
Source:
http://www.nc.uk.net/webdav/servlet/; consulted January 2006.
Level 1 is subdivided
into A and B, distinguished by counting range. Level 2 is divided into C, B and A, corresponding to increasing ability with place value and computation.
Level 2C children (the
lowest of the three levels) show difficulty in adding two-digit
numbers and saying, for
example, which one of four numbers is closest to 48, when the
closest number is 50
and the other numbers are in the same decade as 48. In contrast,
those in level 2A are
capable of adding three-digit numbers and rounding 86 to the
nearest ten. Children
in level 2A also show better computation skill in multiplication and
division and some
ability to solve fraction problems. The assessment has shown high
internal consistency
over different years: Cronbach’s alpha for 2004, when our sample
was assessed, was .88
(http://www.qca.org.uk; consulted in January 2006). In our
analyses, the levels
are coded in numbers, from 1 to 5.
Results
Since our assessment of
children’s logical competence has not been described before,
we start by presenting
its psychometric characteristics.
Characteristics of the
assessment of children’s logical competence
The mean number of
correct responses (out of 32 items) was 14.98 (SD = 7.36) at Time 1 and 17.97 (SD = 8.10) at Time 2. The distribution of scores was approximately normal at both times (skewness z = 1.74 for Time 1 and 0.71 for Time 2). Reliability was high: Cronbach's alpha was .89 for Time 1 and .91 for Time 2 (values above .7 are acceptable; Kline, 1999). The test–retest correlation was .87 (p < .001). The children showed significant test–retest progress (t = 5.75, df = 58, p < .001), due either to practice or to improvement over time.
A second cohort of
children (N = 53) from the same schools and in their first year of
school was tested
subsequently in order to increase our sample size and allow us to use
factor analysis to
scrutinize the test’s internal validity. The factor analysis with
the total
sample (N = 112) with
varimax rotation showed that only one factor could be
identified, which
explained 66% of the variance of the total score. The factor loadings
were above .8 for the
correspondence, inversion and additive composition scales and
.67 for seriation.
Although one must be cautious about using factor analysis with small
samples, the
consistency between results for Cronbach’s alpha and this factor
analysis
suggests that the
scores can be treated as a measure of a single factor.
Testing predictive
models
As an exploratory
analysis, the correlations between the measures taken at Time 1 and
the outcome measure,
SATs-Maths, were calculated. We then tested whether logical
understanding and
working memory predict mathematics learning independent of
general intelligence.
The BAS estimate was entered as one of the measures, including the
subtest number skills,
but the correlation between number skills and SATs-Maths scores
was also calculated.
Table 2 presents the correlations between the different measures.
Logical competence, BAS
and number skills were all significantly related, but none of
the correlations is
above .8, which would produce the risk of multicollinearity (Field,
2005, p. 174).
Table 2. Intercorrelations between the Time 1 measures and the outcome measure, SATs-Maths

                           T1 BAS   T1 Number   T1 Counting   T1 Backward     T1 Logical
                                    skills      recall        digit recall    competence   SATs-Maths
Age at SATs                .01      .26*        .16           .33*            .26          .13
T1 BAS                              .53**       .16           .21             .63**        .70**
T1 Number skills                                .28*          .37**           .69**        .60**
T1 Counting recall                                            .30*            .34*         .35*
T1 Backward digit recall                                                      .26          .25
T1 Logical competence                                                                      .75**

*significant at the .05 level. **significant at the .01 level.

The correlation between the two measures of working memory was low (r = .3), though significant, replicating the normative data (r = .34; Gathercole & Pickering, 2000b). The correlations between the two measures of working memory and intelligence were not significant; those with logical competence and number skills were
significant but low.
Therefore, there is a reduced risk of accepting the null hypothesis
with respect to a
contribution from working memory in predicting the SATs-Maths
levels when other
predictors are entered in the equation first. Finally, the
correlation
between backward digit
recall and SATs-Maths was not significant, though backward
digit recall correlated
significantly with the BAS number skills subtest. Although this
result may seem
surprising, it is in line with the literature. SATs-Maths scores are
not
entirely based on
arithmetic: the correlation (Table 2) between the BAS-number skills
subtest and SATs
results (r = .6) suggests that these measures have only 36% of
variance
in common. Gathercole
and Pickering (2000a) also observed a larger effect size for
counting recall than
backward digit recall, when comparing children with low versus
expected levels of
achievement in SATs-Maths. Thus, in subsequent analyses, we used
counting recall to
represent working memory in the prediction of SATs-Maths.
To test whether logical
competence, working memory and intelligence make
independent
contributions to the prediction of SATs-Maths levels, we used three
different regression
equations. In these, the BAS scores were obtained using all the four
subtests in the
estimate of the children’s cognitive ability, including number
skills. This
initial and more
conservative analysis was necessary because the reliability of the
BAS is
greater when more
subscales are used. A power analysis showed that for a large effect
size, which we expected
from our theory, a sample size of 50 participants is sufficient
for five predictors
(Miles & Shevlin, 2001).
The first regression,
presented in Table 3, tested whether logical competence
predicted children’s
SATs-Maths. In order to control for the shared variance with other
factors, we entered age
at the time of SATs testing, BAS scores and counting recall as
independent steps in
the regression equation before entering the children’s scores in
logical competence.
Table 3. Fixed-order regression analysis testing whether logical competence makes an independent contribution, beyond age, intelligence and counting recall, to the prediction of mathematics achievement

Step  Variable entered        Standardized β   R² change   F change   Sig. F change
1     Age at SATs-Maths       0.005            0.01        0.72       .40
2     BAS–T1                  0.392            0.49        49.45      .001
3     Counting recall         0.118            0.04        4.28       .04
4     Logical competence      0.453            0.10        14.15      .001
The total adjusted R²
showed that 62% of the variance in SATs-Maths was
accounted for by these
four variables. Age did not make a significant contribution to
the prediction, but the
other three predictors accounted for significant portions of
the variance in SATs
levels.
BAS scores accounted
for 50% of the variance, counting recall for an additional 4%
and logical competence
for a further 10%, after controlling for the preceding factors.
Thus, logical
competence is a significant predictor of mathematics learning in
school
and this relationship
is not explained by extraneous variables, such as general cognitive
ability and working
memory.
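A fixed-order (hierarchical) regression of the kind reported here can be sketched with ordinary least squares: the R² change at each step is the increase in variance explained when that predictor is entered. The sketch below uses randomly generated data for illustration only; it bears no relation to the study's actual scores:

```python
import numpy as np

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Simulated scores (illustrative only, not the study's data).
rng = np.random.default_rng(0)
n = 53
age = rng.normal(size=n)
bas = rng.normal(size=n)
logic = 0.6 * bas + rng.normal(size=n)
maths = 0.5 * bas + 0.5 * logic + rng.normal(size=n)

# Enter predictors in a fixed order; report the R-squared change per step.
steps, r2_prev = [age, bas, logic], 0.0
for k in range(1, len(steps) + 1):
    r2 = r_squared(np.column_stack(steps[:k]), maths)
    print(f"step {k}: R2 change = {r2 - r2_prev:.3f}")
    r2_prev = r2
```

Because the models are nested, R² can only grow at each step; what the fixed order changes is how the shared variance is credited, which is exactly why logical competence and counting recall trade explanatory power across Tables 3 and 4.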
We ran a supplementary analysis in which we entered age first, the BAS number skills subtest second, the BAS score estimated without the number skills scores third, counting recall fourth and logical competence last. By entering number skills
separately, the
regression accounted for 69% of the variance. Age accounted for 2% of
the
variance, number skills
for 34%, BAS scores for a further 20%, counting recall for a further
4%
and logical competence
for an additional 9%. This more detailed analysis confirms that
logical competence and
working memory accounted for variance in SATs-Maths levels
independent of general
cognitive ability, knowledge of written number and number facts.
In the second analysis,
we tested whether counting recall makes a significant and
independent
contribution to predicting SATs levels, after controlling for BAS and
logical
competence scores.
Table 4 presents the results of this analysis.
Table 4. Fixed-order regression analysis testing whether counting recall makes an independent contribution, beyond age, intelligence and logical competence, to the prediction of mathematics achievement

Step  Variable entered        Standardized β   R² change   F change   Sig. F change
1     Age at SATs-Maths       0.005            0.01        0.72       .40
2     T1 BAS                  0.39             0.49        49.45      .001
3     T1 Logical competence   0.45             0.13        17.72      .001
4     T1 Counting recall      0.12             0.01        1.65       .20
There was an increase
in the contribution that logical competence made to the
prediction of SATs
levels and a corresponding decrease in the independent contribution
made by counting
recall. Working memory was no longer a significant predictor. Thus,
working memory’s
contribution to predicting SATs levels is independent of number
skills, as the previous
analysis established, but not of logical competence.
The third analysis
placed BAS scores (based on all the four subtests, including
number skills) as the
last step in the regression equation to test whether general
intelligence, which
captures the influence of environmental factors as well as learning
about written numbers
and computation before school, makes a contribution to the
prediction of SATs
levels which is independent of the other two predictors. Table 5
presents the results of
this analysis.
Table 5. Fixed order regression analysis used to test whether general intelligence makes a contribution, independent of age, counting recall and logical competence, to the prediction of mathematics achievement

Variable entered at each step     Standardized β   R² change   F change   Sig. F change
Step 1  Age at SATs-Maths         0.005            0.01        0.72       .40
Step 2  T1 Counting recall        0.12             0.11        6.21       .02
Step 3  T1 Logical competence     0.45             0.44        48.77      .001
Step 4  T1 BAS score              0.39             0.09        11.91      .001
There was a
considerable overlap between the contributions that counting recall
and
BAS scores made to the
prediction of SATs-Maths levels: the amount of variance
explained by counting
recall increased to 11% when entered in the equation before BAS
scores. There was also
an overlap between logical competence and general cognitive
ability: 44% of the
variance in SATs-Maths levels was explained by logical competence
when it was entered in
the equation before BAS scores. Finally, BAS scores continued to
explain a significant
and independent portion of the variance, after controls for working
memory and logical
competence.
Conclusion
Logical competence at
the beginning of their school career predicted children’s
mathematics learning 16
months later and might therefore be a causal factor of this
learning. Its
contribution was independent of general cognitive ability and working
memory. Working
memory’s relationship to children’s mathematics learning
overlapped
with general cognitive
ability and logical competence. Finally, after controls for
children’s
logical competence and
working memory, general cognitive ability still made a significant
contribution to the
prediction of children’s SATs-Maths level. These results are
consistent
with the hypothesis
that logical reasoning is causally related to mathematics learning.
STUDY 2: DOES
CHILDREN’S MATHEMATICS LEARNING IMPROVE
IF THEIR LOGICAL
COMPETENCE IMPROVES?
We carried out the
intervention study in the schools where we had done the
longitudinal study. The
children in the longitudinal study formed a control group; the
children in the
intervention group enrolled in the same schools 1 year after the
children
in the control group.
The intervention took
place during the time set aside for numeracy teaching, so the
children in the
intervention group did not receive extra instruction, but a special
type of
instruction on logical
reasoning. They participated in the intervention sessions once a
week over 12 weeks,
starting in the spring term and ending in the beginning of the
summer term. Most of
their numeracy instruction was still carried out in their normal
classroom by the class
teacher.
Numeracy teaching in
England is set by national guidelines, which are precise about
content and format.
Children in the control and the intervention groups were taught
according to the same
curriculum during the initial period of our study.
At the beginning of the
second year of study, before the children in the control group had
taken SATs-Maths, there
was a consultation about changes to both the assessment and the
teaching of numeracy in
primary schools. We then decided to administer an assessment of
children’s
mathematical knowledge that was devised for another longitudinal
project
(Nunes & Bryant, 2004), which correlates significantly with SATs-Maths levels (r = .63), in
order to have a common
outcome measure even if the national assessments were changed
during the period of
the study. This test was administered to the control group in the
autumn term in their
second year of school. We also used the results of the SATs-Maths as
an
outcome measure.
However, schools had been allowed to introduce changes to the
curriculum and use
different numbers of tasks for SATs-Maths, so results with this
outcome
measure should be
compared with those of our mathematics achievement test.
Method
Design
The study was carried
out over 3 years, using a pre-test, an immediate post-test and a
delayed post-test
design. In the first year, we obtained data on the control group:
the
pre-test was given in
the spring term and the immediate post-test in the summer terms.
In the second year, the
control group was given the delayed post-test in the autumn term
and the SATs-Maths in
the summer term. The intervention group was given the pre-test
late in the autumn and
the immediate post-test in the summer of the second year of
the project. In the
third year of the project, they received the delayed post-test in
the autumn and the
SATs-Maths in the summer. The intervention was delivered in the
second year of the
project from late spring to the middle of the summer terms.
The interval between the pre-test and the post-tests was, on average, 3.5 months to the immediate post-test, 10.5 months to the delayed post-test and 17.5 months to SATs-Maths.
The pre-test measures
were working memory, BAS and logical competence, as in
Study 1. The immediate
post-test was the assessment of logical competence, included to
assess the
effectiveness of the intervention in improving children’s logical
prowess.
The delayed post-tests
were a measure of mathematics achievement, described in detail
in a subsequent section
(Nunes & Bryant, 2004) and SATs-Maths.
All children in their
first year of school in all the four schools participated in the
pre-
test. There were 59
children in the first year of the study and 53 in the second year.
Using
the pre-test data, we
carried out a regression analysis with age at pre-test as a
predictor.
The children’s
logical competence scores were the outcome measure. The residual
scores were used to
identify children who were underperforming in logical competence
for their age. The
children whose residuals were below the 20th percentile in each
group, control and
intervention, were selected for the study.
Participants
A total of 27 children,
14 in the control and 13 in the intervention group were identified.
One of the children in
the intervention group could not be included in the intervention
because this was to be
carried out in small groups and there were no other children in
the school meeting the
criterion. Of the 26 children included, one child in the control
group moved away after
the immediate post-test. Thus, the analyses include 26 children
at pre-test and
immediate post-test, and 25 at delayed post-test.
Table 6 gives the
pre-test scores and group comparisons. There was no significant
difference between
groups at pre-test on the BAS. However, the children in the
intervention group were
younger, had a lower counting recall span (this difference was
not significant but
suggests a trend), and performed significantly worse in logical
competence than the
control group. We controlled for counting recall and logical
competence in further
group comparisons, since these measures were the significant
predictors of
SATs-Maths in Study 1.
At the immediate post-test, the children's mean ages were 6 years 2 months (SD = 3.3 months) and 6 years (SD = 4.4 months), respectively, for the control and the intervention groups. At the delayed post-test, the mean ages were 6 years 10 months (SD = 3.3 months) and 6 years 9 months (SD = 4.4 months), respectively, for the control and the intervention groups. Although the intervention group was still younger on average, the age difference was not significant at immediate and delayed post-tests. This fluctuation is due to small variations in the interval between the pre-test and the post-tests; because our sample is small, only larger differences between the means are significant. Thus, it was unnecessary to enter age as a covariate in the comparisons between the control and the intervention children.

Table 6. Means and standard deviations by group at pre-test

                        Control (N = 14)              Intervention (N = 12)         Comparison
                        Mean              SD          Mean              SD          t      p
Age                     6 years 0 months  3.76 months 5 years 9 months  4.41 months 2.09   .05
BAS                     80.36             11.0        87.67             13.33       1.51   .15
Counting recall span    1.64              0.50        1.17              0.72        1.99   .06
Logical competence      6.57              2.50        4.67              2.31        3.35   .003
Measures
This section describes
the only measure used in the intervention study, which was not
used in the
longitudinal study: the test of children’s mathematical knowledge
(Nunes &
Bryant, 2004) given as
a delayed post-test. This differs from our test of logical
competence in many
ways. First, it is a paper-and-pencil test, administered in the
classroom rather than
individually. Second, the problems are presented in booklets,
which contain drawings
and written numbers; no manipulatives are provided. Third,
the children have to
produce written answers. Fourth, all the items involve problem
solving, and problem
types not used in our teaching sessions are included. For example,
there are six
comparison problems (e.g. ‘Serena and Jamal are playing a game;
Serena is
on number 11 and Jamal
is on number 4; how many spaces ahead is Serena?’), three
sharing problems (e.g.
‘there are 18 sweets to be shared fairly among three children;
how many will each one
get?’), and one measurement problem (a ribbon is placed above
a broken ruler and the
children are asked how long the ribbon is). These problem types,
not included in our
intervention, represent the aspects of number knowledge and
applied problems, which
are a part of the numeracy curriculum. There are eight
multiplication problems
in this test and only two involve simple correspondences,
similar to those used
in our intervention sessions. Other problem types (from van den
Heuvel-Panhuizen, 1990)
were not included in our training, for example, two require
the children to analyse
spatial displays (e.g. children are asked how many cans are in a
pile of fizzy drinks)
and two involve situations that are considerably more difficult than
those used in the
training (e.g. a roll of sweets with two sections is displayed and
the
number next to it
indicates that there are eight sweets in total; children are asked how many sweets are in a roll with five sections).
The problems that are
most similar to those used in our training relate to money. In
five problems, the
children are asked to tick the (drawings of) coins to pay the exact
money for some items.
However, two other problems about money involve addition to
find how much money
was spent and subtraction to find the change in a transaction;
these problem types
were not included in our intervention.
Procedure for the
training
The children worked in
small groups (of three to five children) with a researcher (one of
the authors) outside
the classroom, once a week during 12 weeks. Each session lasted
approximately 40 min.
The researcher posed a problem orally and each child wrote the
answer, in words or
numbers, on a blank sheet. The children had manipulatives, which
were either cut-out
shapes of the objects mentioned in the problems, bricks or counters.
The researcher asked
one child to say the answer, taking turns so that each child could
be the first to
respond. The child was asked to show with materials or explain
verbally
how the answer was
obtained. If the answer was correct, the experimenter confirmed
the answer. If it was
wrong, the experimenter guided the children in the use of materials
to act out the story
and to reach the correct answer. The children were given points for
correct answers, which
were later exchanged for small trinkets. Feedback focused on
the child’s logic in
interpreting and solving the problems; there was no feedback on the
writing of numbers
(even when these were reversed or not consistent with the oral
answer, as in the case
of a child who confused the digits 2 and 5) and no memorization of
number bonds.
Overview of the
training
Sessions 1 and 2
focused on the understanding of the inverse relation between addition
and subtraction and of
additive composition. To demonstrate the inverse relation, the
children were shown a
row of bricks. This was then partially covered with a cloth, but
one end of the row was
still visible, so that the children could watch the addition and
subtraction of bricks,
but could not count the bricks. The experimenter added and
subtracted bricks to
the row, and then asked the children how many bricks formed the
row after these
transformations. The initial trials contained helpful cues: the
bricks were
added to and subtracted
from the same end of the row and were of a different colour
from those in the row.
Thus, the children could easily see that the bricks added and
subtracted were the
same. The cues were removed later on: the second group of trials
involved bricks of the
same colour as those in the original row still added to and
subtracted from the
same end of the row. In the third group of trials, all the bricks
were
of the same colour and
the addition was made to one end, whereas the subtraction was
with different bricks
from the other end.
Some of the trials used simple inversion; the manipulations can be represented as a + b − b. Others combined inversion with decomposition, represented as a + b − (b − 1) or a + b − (b + 1). Thus, the children could not answer by always saying the number of bricks originally used in the row: sometimes the number was the same, sometimes it was greater by 1 and sometimes smaller by 1.
The teaching of
additive composition was carried out by asking the children to count
the money that they
were given to buy something at a pretend-shop. Combinations of
coins of different
denominations were used: 5p and 1p, 10p and 1p, and 20p and 1p.
The first groups of
trials used only one 5p or one 10p coin plus different numbers of 1p
coins. If the children
made the mistake of counting 5p or 10p coins as one, which is the
most common error in
this task, they were asked to say again the value of the coin and
show the value with
their fingers. They were then encouraged to count on from the
value they displayed on
the fingers. In order to do this, the experimenter pointed at the
child’s fingers,
said the value they showed, then pointed at a 1p coin and said: 'plus 1p more, makes what?' When the children spontaneously adopted this procedure, the experimenter no longer prompted them. Most children eventually dropped the use of fingers and counted on from the value of the 5p or 10p coin. Counting 20p plus several 1p
coins works as an
extension of the count-on procedure, where the children do not have
enough fingers to
represent the starting point.
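The count-on procedure can be sketched as follows; the function and the coin lists are hypothetical illustrations of the taught strategy, with values in pence:

```python
def count_on(coins):
    """Total a handful of coins by starting from the largest
    denomination and counting on 1p at a time, mirroring the
    procedure taught in the additive composition trials."""
    coins = sorted(coins, reverse=True)
    total = coins[0]            # start from the 5p, 10p or 20p coin
    for coin in coins[1:]:
        for _ in range(coin):   # count on: 'plus 1p more, makes...'
            total += 1
    return total

print(count_on([10, 1, 1, 1]))  # → 13
print(count_on([20, 1, 1]))     # → 22
```

The key point of the procedure is that the high-denomination coin contributes its value, not a count of one — exactly the distinction the children's most common error blurs.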
At the beginning of sessions 3–7, there was a small number of trials on
additive composition
and inversion. These were followed by inverse addition and
subtraction problems,
which the children solved with manipulatives. Initially, these
were cut-out figures
of the objects in the problems (e.g. rabbits, flowers, marbles
etc.), but later the
children used bricks to show what happened in the stories. Most
problems involved
inverse reasoning (e.g. missing addend and missing minuend
stories), but some
simpler problems were used to ensure that all the children
experienced some
success.
Sessions 8–10 focused
on the logic of correspondence and how it can be used to
solve problems. A few
addition and subtraction problems requiring inverse reasoning
were used in these
sessions to ensure that the children did not automatically resort to
making correspondences.
The children were given cut-out figures and blocks to
represent the
correspondences described in the problems. For example, the
introductory problem
for one-to-many correspondence problems was: ‘The teachers
were organizing a party
in the garden and three lorries were bringing tables to the
school party. Inside
each lorry, there are four tables. How many tables are they bringing
to the school
altogether?’ The children were given cut-out pictures of lorries
and bricks
to represent the
tables. If they did not know how to start, they were encouraged to
make
a row of lorries and
put bricks on top of the lorries in order to figure out the answer.
Variations were used in
which the correspondences were not indicated and the children
had to find these by
distributing the elements. The introductory problem for this was:
‘A boy has 12
marbles. He wants to put the same number of marbles in two bags. How
many marbles should he
put into each one?’ The children were given two rubber-bands
to represent the bags
and cut-out pictures of marbles that they could distribute equally
between the two bags.
The children were not taught multiplication and division facts:
they were only asked to
represent the situations through correspondences and to
answer the questions.
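The two correspondence schemes the children enacted with manipulatives — one-to-many correspondence and equal sharing — can be sketched as follows (an illustration; the function names are ours):

```python
def one_to_many(groups, per_group):
    """One-to-many correspondence: put per_group bricks on each
    cut-out (e.g. 4 tables in each of 3 lorries), then count all."""
    bricks = [per_group] * groups   # one pile of bricks per lorry
    return sum(bricks)

def share_equally(total, containers):
    """Distribute items one at a time into containers, as the
    children did with cut-out marbles and rubber-band 'bags'."""
    bags = [0] * containers
    for i in range(total):
        bags[i % containers] += 1   # deal the next marble to the next bag
    return bags

print(one_to_many(3, 4))      # → 12 tables altogether
print(share_equally(12, 2))   # → [6, 6] marbles per bag
```

In both cases the answer emerges from setting up the correspondence and counting, with no multiplication or division facts required — which is the point of the training.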
In session 10, the
children were asked to solve the problems using only bricks, so
they had no material to
represent one of the variables. This can be difficult because the
children have to
imagine one variable while representing the other. For example, in
the
lorries–tables
problem, they would have to make three groups of four bricks on
different
places on the table, so
that each group corresponded to one lorry. If the children were
unable to solve the
problems with manipulatives representing only one variable, they
were offered materials
to represent the other variable.
Sessions 11 and 12
contained a mixture of problems from the previous 10 sessions.
Results
The test of logical competence at immediate post-test showed a high level of internal consistency (Cronbach's α = .91) and a normal distribution, with an overall mean of 12.46 (SD = 7.4) out of 32 items. The analysis of skewness (skewness z = 0.29) and kurtosis (kurtosis z = 1.34) showed that the distribution was normal. The mathematics achievement test, used in the delayed post-test, showed good internal consistency (Cronbach's α = .79) and a normal distribution: mean (out of 28) = 8.32; SD = 4.45; skewness z = 0.49; kurtosis z = −1.47. Thus, our measures were reliable and normally distributed. The Spearman correlation between our mathematics achievement test and SATs-Maths was r = .72 (p < .001).
We predicted that our
intervention would improve children’s performance in logical
reasoning at immediate
post-test, in mathematics knowledge test at delayed post-test
and in SATs-Maths. To
test these predictions, we used three analyses of covariance with
the children’s
results on these measures as dependent variables. The covariates were
the
children’s
performance at pre-test in counting recall and logical competence.
The results
of these analyses are
summarized in Table 7.
Working memory was a significant covariate of performance in the first two analyses, but only revealed a significant trend in the analysis of effects on SATs-Maths (for the immediate post-test: F(1, 22) = 11.08, p = .003; for the delayed post-test: F(1, 22) = 8.70, p = .008; for the SATs-Maths: F(1, 22) = 3.34, p = .08). Performance at pre-test on the logical competence assessment was not a significant covariate for the immediate post-test (F(1, 22) = 0.90, p = .35). This is possibly the result of the large change in the intervention group, whose performance was unexpectedly high for their pre-test performance. Logical competence was a significant covariate in the analysis for SATs-Maths (F(1, 22) = 5.28, p = .03) and a significant trend was revealed in the analysis for mathematics achievement (F(1, 22) = 3.25, p = .09).

Table 7. Adjusted means (controlling for working memory and logical competence at pre-test) and standard error of the mean by group for the different post-tests

                                      Control                    Intervention               Comparison
                                      Mean            SE         Mean            SE         F       p
Immediate post-test (max: 32)         7.00 (N = 14)   1.47       18.83 (N = 12)  1.60       26.33   < .001
Delayed post-test (max: 28)           5.70 (N = 13)   1.08       11.16 (N = 12)  1.08       12.02   .002
SATs-Maths (max for this sample: 5)   3.39 (N = 13)   0.21       4.41 (N = 12)   0.22       9.40    .006
In all the three
comparisons, the intervention group significantly outperformed the
control group. The
effect sizes were all calculated as Cohen’s d. For the immediate
post-
test, the effect size
was 1.6 SD. This large effect size shows that the intervention did
improve the children’s
logical competence significantly. The effect size for the
difference between the
groups in mathematics achievement was 1.2 SD, which is a large
effect size for a
relatively small intervention and with effects measured at delayed
post-
test, almost 11 months
later. Finally, the effect size for SATs-Maths was 1.2 SD. This is
again a large effect
size, observed on a measure administered about 13 months after the
intervention was
concluded and by teachers who were not aware of the children’s
assignment to our
groups, as they had not taught the children in the previous year,
when
the intervention was
carried out.
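Cohen's d with a pooled standard deviation is computed as below. The group SDs in the example are hypothetical placeholders chosen for illustration; the text reports only the resulting d values:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: difference between group means divided by the
    pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Adjusted means from Table 7; the SDs (7.0, 7.5) are invented for
# illustration, since the paper does not report them.
print(round(cohens_d(18.83, 7.00, 7.0, 7.5, 12, 14), 2))  # → 1.63
```

By the usual convention, d values around 0.8 or above count as large effects, so the 1.2–1.6 SD differences reported here are substantial.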
In summary, these
analyses showed that our teaching of logical competence was
highly successful, and
that this teaching had strong and beneficial effects on children’s
mathematics learning
even after an interval of 13 months.
CONCLUSIONS AND
DISCUSSION
Our aim was to use a
combination of longitudinal and intervention methods to test
whether logical
competence plays a causal role in mathematics learning. In the
longitudinal study, we
tested three different models in the prediction of children’s
learning of mathematics
in school. This allowed us to establish that children’s logical
competence at the start
of primary school makes a significant contribution to the
prediction of their
school achievement in mathematics after controlling for its overlap
with general
intelligence and working memory. Thus, logical competence became a
strong candidate as a
causal factor in mathematics achievement. Our confidence in this
conclusion is
strengthened by the fact that the intelligence test that we used
included a
measure of knowledge of
written numbers and arithmetic at the start of school and by
the ecologically valid nature of our outcome measure,
provided by the schools themselves and not influenced in any way by
us.
The models that we
tested also allowed us to establish that the contribution of
general intelligence to
the prediction of children’s school achievement is not reduced to
the connection between
intelligence and logic or to the connection between
intelligence and
working memory: general intelligence remained a strong predictor of
mathematics achievement
after controlling for both of these factors. However, there was
a significant overlap
between the results of general cognitive ability, logical competence
and working memory.
When general intelligence was entered in the regression equation
before the other two
factors, it explained almost 50% of the variance in mathematics
achievement, but it
explained only 9% of the variance when it was entered in the
equation after the
other two factors. Thus, measures of intelligence, logical competence
and working memory are
not completely independent of each other. There is every
reason to expect
overlaps between them, but each has its own specificities. One
cannot
be seen as a proxy for
the other, even though Kyllonen and Christal (1990) have claimed
that there is little difference between intelligence and working memory.
Finally, the
regressions also showed that working memory makes a contribution to
mathematics achievement
which is independent of general intelligence, but not of its
connection to general
intelligence and logical competence. When entered after
intelligence but before
logical competence in the regression equation, it continued to
make a significant
contribution to the prediction of mathematics learning, but its
contribution was no
longer significant, when both factors were entered before working
memory.
The regression
equations that combined all the three factors explained 64% of the
variance in the
children’s mathematics achievement, measured 16 months after the
first
sweep of data
collection and independently from our project. This is an impressive
accomplishment in terms
of explanation in psychological research, but it does leave
room for other factors,
not studied here. They might include, for example, the quality of
teaching in the
classroom, the school’s attitude towards learning mathematics and
the
children’s
self-perception as learners. These are complementary explanatory
hypotheses, but they
were not tested here.
The intervention study
provides strong support for the hypothesis that logic plays a
causal role in
mathematics achievement in school. Through a relatively small amount
of
training, which did not
increase the amount of numeracy instruction received by the
children, we were able
to improve their performance in a test of logical competence at
immediate post-test, in
a test of mathematics achievement at delayed post-test and in the
schools’ own
assessment of the children’s achievement.
The intervention did
not focus on the school curriculum: we did not teach the
children anything about
place value, addition and subtraction algorithms or number
facts. Our aims were to
improve their understanding of the additive composition of
number, the inverse
relation between addition and subtraction and the use of
correspondences to
establish relations between variables and solve problems. This
teaching about the
logic of quantities and operations improved mathematics learning in
children whose logical
competence scores were suggestive of later difficulties in
learning mathematics.
Their improvement cannot be simply explained as regression to
the mean: they
outperformed children from a control group with similarly low
performance at the
start of school.
Therefore, these
results strongly suggest that logic forms a basis for children’s
assimilation of
mathematics instruction that they receive in school. We consider this
to
be a direct connection
between children’s understanding of the logic of quantities at the
start of school and
their learning of how to represent quantities and operate on these
representations in
school.
Our results came from a
relatively small sample in only four schools in a single city.
Although there is no
reason to assume that things would turn out differently with a larger
sample, replication
with greater numbers is highly desirable and would provide the
basis for detailed
investigation of the different aspects of children’s logical
competence.
Finally, the
educational implications of these studies should be considered. There
is a
debate between those
who advocate teaching mathematics with constructivist
methods, which emphasize children's logic, and the back-to-basics movement, which
concentrates on the
learning of number facts. Our results leave no doubt that logic has
an important role in
children’s mathematics achievement, above and beyond that played
by drills in number
representation and computational skills. Thus, the mathematics
education of young
children should provide them with a solid basis for understanding
the logic of numbers
and operations. Time invested in promoting children’s logical
understanding is well
invested. This is a message for parents, preschools and also
teachers.