DISCUSSION:

This study explored the relationships between typically developing children’s verbal reasoning skills and their performance on national curriculum reading comprehension tests and on cognitive and language assessments. The study additionally investigated psychometric properties of the assessment, such as inter-rater reliability and performance across different scenarios. The findings, limitations and clinical implications are discussed below.


 

Relationship between language and cognitive
assessment and verbal reasoning skills

The current study found that, despite a small sample size, the relationship between verbal reasoning skills and vocabulary was statistically significant. There was a positive correlation, indicating that children with a larger vocabulary scored higher on the LFT assessment. These findings add to the previous literature on verbal reasoning skills and vocabulary (Calvo, 2004; Cain, 1996; Harrison, 2004). Lepola et al. (2012) report that vocabulary knowledge has been one of the best predictors of narrative listening comprehension and verbal reasoning skills (Snowling & Stevenson, 2004; Ouellette & Rodney, 2006). The current study also shows parallels with the literature on children who have speech, language and communication needs. Cain et al. (2004) found that children with poor vocabulary knowledge had limited verbal reasoning skills: they found it difficult to infer the meanings of novel words from context, and were unable to draw information from the texts. Mitchell and Riggs (2000) found that for children to develop strong inference skills, their vocabulary, in particular for mental concepts, needs to increase. When presenting children with LFT-style tasks or picture books, it could be important to extract key vocabulary to support the child’s ability to access their own schemata and so increase their verbal reasoning skills.

Within this study, the results indicated that there was not a statistically significant relationship between cognitive skills and verbal reasoning skills. This result contradicts the current research and, given the sample size, should be interpreted with caution.

Possible reasons behind this result could lie in the type of cognition assessed. Raven’s Progressive Matrices (Raven, 1998) is designed to assess general intelligence, and although working memory is a skill participants use within the assessment, it is not specifically targeted. Working memory has been widely discussed in the literature on verbal reasoning and inference skills. The work of Graesser et al. (1994) and Calvo (2004) emphasises the importance of working memory in keeping the mental representation in mind so that the appropriate vocabulary and background knowledge can be accessed to infer effectively. It should also be noted that some studies have found that children can still infer effectively, despite lower intelligence, when the topic is one of interest to them (Barnes et al., 1996). The current study therefore adds to the mixed literature on cognitive skills and verbal reasoning skills.

 

When looking at syntax (comprehension and expression of grammar) and verbal reasoning skills, the small sample indicated a statistically significant relationship between the variables. There was a high correlation between receptive grammar and verbal reasoning skills (rs = .796) and between use of grammatical structures and verbal reasoning skills (rs = .611). There was not a significant relationship between sentence formulation and verbal reasoning skills (rs = .399). This could be interpreted as indicating that being able to comprehend grammatical structures helps to support verbal reasoning skills. Lepola et al.’s (2012) study found that ‘listening comprehension at age four was a predictor of verbal reasoning and inference skills at age 5 and 6’ (p.275); in particular, sentence memory was an important feature of listening comprehension at that age. This is an important factor when thinking about which skills matter for the development of verbal reasoning skills.
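For readers unfamiliar with the statistic, the rs values above are Spearman rank correlations, which can be computed without specialist software: scores are converted to ranks and a Pearson correlation is taken on the ranks. A minimal, stdlib-only Python sketch follows; the score lists are invented for illustration and are not the study’s data.

```python
def average_ranks(xs):
    """Rank values 1..n, assigning tied values the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based rank positions i..j
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rs = Pearson correlation of the rank-transformed scores."""
    return pearson(average_ranks(x), average_ranks(y))

# Invented receptive-grammar and LFT scores for six children
grammar = [12, 15, 9, 18, 14, 11]
lft = [20, 24, 15, 30, 22, 23]
print(round(spearman(grammar, lft), 3))
```

Using ranks rather than raw scores is what makes the statistic appropriate for the ordinal assessment scores used in this study.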

Relationship between national curriculum reading comprehension scores and verbal reasoning assessments

 

The results indicated that the relationship between national curriculum reading comprehension results and verbal reasoning skills was not statistically significant. As seen in Figure 6, there was a weak correlation between the two assessments. It was, however, noted that participants who scored higher on the verbal reasoning assessment also scored high on the reading comprehension test. Sample size may have impacted the results. The results could also indicate that there is a threshold for higher performance on reading comprehension tests, at which children need to have adequate verbal reasoning skills. Within the current population, no participant scored at ceiling on either assessment. This could be due to arguments emphasising that reasoning and inference skills develop between six and eleven years (Paris & Lindauer, 1976; Paris et al., 1977, cited in Kispal’s (2008) literature review).

 

When looking at the reading comprehension test, the questions examined a range of skills such as vocabulary knowledge, sequencing and explaining information, and making inferences and predictions from texts. The results were given as a total, so the assessments were not interpreted in terms of which areas each participant scored highly on, and may not be sensitive to a participant’s verbal reasoning skills. The presentation of questions in the LFT assessment (Parsons & Branagan, 2005) and the reading comprehension test differed. The majority of the students scored at the top end of the assessment in paper one. The students scored lower in paper two, which placed an increased demand on inference and prediction skills (see Table 5). Within the reading comprehension test booklet for paper one (An octopus under my bed), five of the nine questions were multiple choice, and the majority of these required inference and prediction skills. Paper two placed an increased demand on open-ended inference and prediction questions. The LFT assessment (Parsons & Branagan, 2005) uses open-ended questions throughout. Shohamy (1984) found that when assessing reading comprehension skills, multiple choice questions were ‘consistently easier than open ended questions within different texts’ (p.157). All participants scored higher on paper one, which had more multiple choice questions to support answers.

 

It may be important to consider the type of teaching the participants received in order to complete the reading comprehension tasks. They may have had specific teaching and practice tests on how to answer this type of booklet. As outlined in the literature, the level of teaching and support in enhancing verbal reasoning and inference skills is vital. Lennox (2013) emphasises that there are many concepts that children cannot discover independently, and therefore the adult as mediator plays a key role in supporting new understandings.

 

Background knowledge of
different assessments

 

Within the study, the results highlighted that there could be a statistically significant difference between participants’ scores across scenarios, based on their background knowledge. Although the assessments were administered to a small sample, the results align with the current literature on background knowledge and verbal reasoning skills. Lennox (2013) emphasises that “background knowledge and reasoning skills are needed to predict, hypothesis, explain, imagine, infer, problem solve and evaluate” (p.386).

 

Within the Level-A questions, the difference between scenarios was small. To answer these questions accurately, the students are required to use the information in front of them. The difference between scenarios was larger within the LFT Level-C questions. These questions require higher-level thinking skills, in which participants must draw on their background knowledge. For example, one participant responded differently to the Level-C question ‘Which is better, TV or cinema/puppet show?’. On the cinema question they scored 3/3, responding, “movies because you get to watch it bigger and dark”. On the puppet show question the participant scored 1/3, responding, “TV because puppet shows might be quicker.” The answer in the cinema scenario was specific, with the participant drawing on their experiences. The participant’s answer in the puppet show scenario demonstrated some understanding; however, the question was not answered as accurately, due to a lack of background knowledge. Pressley & Afflerbach (1995) report that a reader has to have experiences in order for higher-level thinking to develop: “the richer a children’s word experiences are, the richer the child’s schematic knowledge is that they are able to draw on” (p.54).

 

There were four instances in which participants scored higher in the puppet scenario than the cinema scenario, although this did not affect overall levels. Other arguments have suggested that it is not only about having background knowledge but also about being able to access it in order to answer inference questions accurately (Cain & Oakhill, 1998).

The
relationship between raters of the LFT assessment

 

Inter-rater reliability is important in supporting the quality of assessments and increasing their usefulness within the scoring process (Morgan et al., 2014). Two types of measure were used when looking at inter-rater reliability within this study: correlation and ICC. The present literature (Rankin & Stokes, 1998; Morgan et al., 2014; Solarova et al., 2014) emphasises the importance of dual measures of reliability in order to examine both the strength of linear association and agreement. The assessment data show strong inter-rater reliability between the three therapists. In Figure 12 there is a normal distribution of scores for rater three. This rater is a co-author of the LFT programme; having co-written the scoring criteria, their normally distributed scores may reflect a fuller understanding of the rationale behind those criteria. The results nevertheless showed that the scores of the two therapists currently practising in the field correlated with the author’s. It should be noted that all three raters were experienced Speech and Language Therapists and had experience in clinically administering and scoring this assessment, even when a participant’s answer was not presented in the scoring criteria. The high correlations could be seen as an indication that the scoring criteria within the LFT assessment are generally useful for assessing a child’s verbal reasoning skills. The current study does not, however, look at whether there is agreement between an experienced therapist and a professional who has limited experience in scoring this assessment.

 

It should be noted, however, that for 7/12 participants the level assigned on the LFT would have differed between raters, impacting the recommendation of whether the child would require additional specific support. This difference may have an impact on outcomes for students with Speech, Language and Communication needs. The results showed a wider distribution of scores when the raters scored Level-C questions. As seen earlier, these questions require the ability to ‘predict, reflect on and integrate ideas and relationships’ (Blank et al., 1978a). Perhaps, due to the variability of the answers at this level, the current scoring criteria do not help a therapist to score the results effectively. Solarova et al. (2014) highlight that the more items a raw-score scale comprises, the more difficult it is for scorers to reach complete agreement. It may therefore be important to consider making the scoring more concrete and increasing the examples within the current assessment to allow more agreement amongst raters.
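The ICC mentioned above can be computed directly from the rating matrix. As an illustration only (this is not the study’s analysis pipeline, and the scores below are invented), here is a stdlib-only Python sketch of ICC(2,1), the Shrout and Fleiss two-way random-effects, absolute-agreement, single-rater form commonly used for inter-rater reliability:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    data is a list of rows: one row per participant, one column per rater."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares for rows (participants), columns (raters) and residual error
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_err = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
                 for i in range(n) for j in range(k))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Invented example: three raters scoring four participants
scores = [
    [3, 3, 2],
    [5, 4, 5],
    [7, 7, 6],
    [9, 8, 9],
]
print(round(icc_2_1(scores), 3))
```

Unlike a plain correlation, this form penalises systematic differences between raters (one rater scoring consistently higher), which is why the literature cited above recommends reporting both measures.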

 

Limitations of the
study:

This study was limited by a small group size, which poses a risk to the reliability of the results. This was particularly evident in the comparison of language and cognitive assessments. Button et al. (2013) report that low statistical power due to small sample size reduces the likelihood that a statistically significant result reflects a true effect. The small group also had a limited range of participant ages; due to the timing in the year, there was only one pupil aged 6;0 years within year two. Future research needs to be conducted on a larger sample across the spread of the year, in order to examine the extent to which age is a factor in the development of verbal reasoning skills and whether the statistically significant results can be replicated.
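Button et al.’s (2013) point can be made concrete with their positive predictive value formula, PPV = (power × R) / (power × R + α), where R is the pre-study odds that a probed effect is real and α is the significance threshold. A small Python sketch with illustrative numbers (the power and odds values are assumptions for the example, not estimates from this study):

```python
def positive_predictive_value(power, alpha, prior_odds):
    """Probability that a statistically significant finding is a true effect,
    following Button et al. (2013): PPV = (power * R) / (power * R + alpha)."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# With assumed pre-study odds R = 0.25 and alpha = 0.05:
# a well-powered study (power 0.80) yields PPV = 0.8,
# while a small, underpowered study (power 0.20) yields PPV = 0.5
print(positive_predictive_value(0.80, 0.05, 0.25))
print(positive_predictive_value(0.20, 0.05, 0.25))
```

Under these assumed odds, halving power to 0.20 means only half of the significant results would be expected to reflect true effects, which is the sense in which the present sample size limits confidence in the significant correlations reported.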

Other factors, such as gender and English as an additional language, were not investigated within the current study. Again, due to the small sample size, the current data could not be analysed in terms of whether there were differences between genders or whether bilingualism has an effect on verbal reasoning skills.

 

Due to the sample size, one outlier in the data was included in the results. This participant scored below average on all assessments but, given the small sample, was retained. This participant may have had an impact on the overall data, and the results may therefore be less reliable. If this research were replicated, a larger sample would allow outliers to be excluded.

It should also be noted that only one school was recruited, in a borough of London ranked in the top quarter of poverty rankings (Trust for London, REF). Waldfogel and Washbrook (2012) report that in the UK there is a 20-month gap in vocabulary at school entry between the wealthiest and poorest children. The sample may therefore not be representative of the typical population. It would be important to conduct further research in different geographical areas in order to examine additional factors such as socio-economic status.

The school is situated within a multilingual population: 6/12 of the participants spoke English as an additional language. Although the questionnaire identified which languages were spoken, the researcher did not look at how long each child had been exposed to English. Cummins (1984b) examined the relationship between fluency in everyday conversation and the ability to use language for academic purposes. He found that children who start learning a second language after school admission may acquire a good level of fluency in everyday conversation but may take between five and seven years to catch up with average monolingual children on measures of academic achievement. It should also be noted that the language assessments used are not standardised on a bilingual population.

 

The school provided the researcher with only the quantitative scores of the national curriculum tests. As a result, the researcher was unable to look qualitatively at the way participants had answered targeted questions in the booklets, or to examine whether the students’ ability to draw information from the text reflected their verbal reasoning skills.

 

Clinical Implications

These findings may have relevant implications for professionals in supporting a child’s verbal reasoning skills using the LFT programme (Parsons & Branagan, 2005). Firstly, it may be important to select a scenario that the child has experience of. The professional might also want to take cultural experiences into account with regard to a child’s score. Torr & Scott (2006) emphasise the importance of ‘relating the language to the child’s personal situation’ (p.161).

The data coincide with the current research on pre-teaching vocabulary to support verbal reasoning skills. When implementing the LFT programme (Parsons & Branagan, 2005) in intervention, professionals may want to draw out the vocabulary from the picture presented to the child if they are using ‘picture and talk/text’ styles. Cain et al. (2007) further emphasise this point, particularly when attempting to increase understanding within picture books.

These findings further support previous research (REF) around teachers creating opportunities to talk around books and discuss inferences within the text through a whole-class approach at an earlier age. Taggart et al. (2005) highlight that ‘story time can be an opportunity to develop children’s thinking’ (p.ix). It may be important to continue to support children in key stage one in making more inferences when there are no other demands involved, e.g. reading the text. This may scaffold a child’s ability to perform better in their reading comprehension tests.

The participants’ scores on the LFT assessment (Parsons & Branagan, 2005) differed considerably from the examples in the book (see appendix item X). The authors may want to revisit the scoring criteria in order to provide more detailed scoring guidelines, ensuring modern examples are included.

 

Future research
directions:

–      A future study with similar assessments and a larger sample size, across different schools in different geographic areas.

–      For future data to be replicated and explored in terms of gender, socio-economic class and English as an additional language.

–      To further look at the relationship between different cognitive skills (e.g. working memory) and verbal reasoning skills.

–      To further look at agreement amongst professionals who do not have experience with the assessment, in order to see whether there is still strong agreement when using the scoring criteria.

 

 

CONCLUSIONS

The findings from the present study indicate that vocabulary and syntax (receptive and expressive grammar) may correlate with verbal reasoning skills. There was a limited correlation between cognitive skills and verbal reasoning skills. There was not a significant relationship between national curriculum reading comprehension results and verbal reasoning skills; however, participants who scored higher on the verbal reasoning assessment also scored high on the reading comprehension test.

When looking at the properties of the LFT assessment, there were differences amongst assessment scenarios based on a child’s experience. This may be an important factor for educational professionals when considering using the assessment with individuals, particularly individuals with English as an additional language and different cultural experiences. The assessment currently shows agreement amongst different raters; however, the scores of the Speech and Language Therapists and the author of the programme varied considerably when marking higher-level reasoning questions. Stricter criteria would be beneficial for accurate scoring of the assessment, particularly for professionals who have limited experience with the programme. The LFT assessment is nevertheless quick and easy to administer, and would benefit from further analysis of its psychometric properties in order to support its standardisation as a diagnostic tool.