1st draft (Winter 2017/18)


Academic confidence and dyslexia at university




Research design

This section opens by restating the aims and objectives of the project and outlining the principal elements of the research design.
This is followed by a reminder of the metrics that were used in the data collection and subsequently in the quantitative analysis, together with a summary of why these were appropriate in the context of the theoretical underpinnings of the project. A short section follows which reiterates my intrinsic stance on the project, but which also states my position on research methodology processes in level 8 study and how this has influenced the design of the data collection tools.

The methodology for the project is explained through a workflow chronology, as I considered this the most expedient way to document the journey from research question to data collection and analysis. Particular attention is given to this project's unique perspective on quantifying dyslexia-ness in higher education students, with a detailed account of the development of the Dyslexia Index Profiler that has been used to gauge dyslexia-ness in this study, this being the key independent variable in the quantitative analysis. This includes an account of the processes applied to determine levels of internal consistency reliability and how these influenced the analysis and re-analysis cycle that was used to arrive at the most dependable analysis outcomes.
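The internal consistency checks referred to here conventionally use Cronbach's alpha, which rises as the items in a scale co-vary consistently across respondents. As an illustration only (the thesis's actual computations were performed in SPSS, and the data below are invented), a minimal sketch of the statistic is:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item.

    items: list of k item-score lists, each of length n (one score per respondent).
    Alpha approaches 1 as the items consistently co-vary across respondents.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's scale total
    item_variances = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_variances / variance(totals))

# Two perfectly correlated (hypothetical) items give a high alpha:
print(round(cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]]), 3))  # → 0.889
```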

In the Methods section which follows, an account is provided of how the research questionnaire was constructed and deployed, including reports of the difficulties and challenges encountered - technical, ethical and practical - and how these were overcome. A sub-section briefly discusses the difficulties that Likert-style items and scales present to the quantitative researcher: the conventional use of fixed anchor-point scales yields non-parametric data, yet researchers tend to manipulate such data so that parametric statistical tests can be applied, a practice which, it is argued, can render the meaning derived from such analyses dubious in many circumstances. Hence particular attention is paid to explaining how I tried to mitigate these effects through the development of continuous-range scales in the research questionnaire as a replacement for the traditional, discrete, fixed anchor-points in the Likert-scale items that I used.

This section concludes by describing the multifactorial analysis that has been applied to the data collected for both metrics, and especially for the Dyslexia Index metric; this is pertinent given that dyslexia in higher education contexts is increasingly being researched through the lens of multifactorialism. Attention is given to reporting why this was considered useful and how it enabled the data to be iteratively re-analysed to identify more specifically the combinations of factors of Dyslexia Index and of Academic Behavioural Confidence that were the most influential in explaining differences in academic confidence between dyslexic, non-dyslexic and quasi-dyslexic students at university.





Research Design Section Contents:

    • Design Focus
    • Metrics
    • Analysis and results
    • Stance
    • Workflow chronology
      • The preceding MSc small-scale enquiry
      • Defining dyslexia: the professional practitioners' view
      • Existing dyslexia evaluators and identification processes - why these were dismissed
      • Development of the Dyslexia Index Profiler - overview
      • Measuring academic confidence using the Academic Behavioural Confidence Scale
    • Collecting information - the rationales, justifications and challenges
    • Establishing Research Groups and research sub-groups
    • Procedures for data collection
      • Designing and building a web-browser-based electronic questionnaire
        • Background demographics
        • The Academic Behavioural Confidence Scale
        • Additional psychosocial construct evaluators
        • Development of the Dyslexia Index Profiler - construction:
          • Dimensions of dyslexia in higher education settings
          • Evolution of the Dx Profiler
          • Internal consistency reliability
        • Qualitative data
      • Questionnaire deployment
      • Data receipts and collation
      • Data visualization
    • Data analysis


Overview

The aims and objectives of this research project, together with the research questions, have been set out above.

This section, reporting on the Research Design, describes the blueprint for the strategic and practical processes of the project. These were informed at the outset by the researcher's previous Master's dissertation, subsequently by the relevant literature - including identifying where gaps in existing knowledge became evident - and not least by the researcher's deep desire, as a learning practitioner in higher education, to explore how dyslexic students perceive the impact of their dyslexia on their study behaviours, attitudes and processes at university in comparison with their non-dyslexic peers. In addition to conducting a study in this under-researched and intrinsically interesting area, the driving rationale has been that the outcomes of the research might usefully contribute to the gathering discourse about how knowledge acquisition, development and creation processes can be transformed in higher education in ways that more significantly adopt principles of equity, social justice and universal design (Lancaster, 2008; Passman & Green, 2009; Edyburn, 2010; Cavanagh, 2013), and especially to promote wider acceptance that dyslexia may now be best considered as an alternative form of information processing (Tamboer et al., 2014) rather than a learning disability (e.g. Heinemann et al., 2017; Joseph et al., 2016, amongst numerous other studies).

Descriptions are provided of how practical processes were designed and developed to enable appropriate data sources to be identified, and of how data has been collected, collated and analysed so that the research questions might be properly addressed. The rationales for research design decisions are set out and justified, and where the direction of the project has diverged from the initial aims and objectives, the reasons for these changes are described, including the reflective processes that have underpinned project decision-making and any re-evaluation of the focus of the enquiry. The originality of the research rationale is emphasized, as is the justification for the equally original final presentation of the project outcomes: a combination of this traditionally written thesis and an extensive suite of webpages constructed by the researcher as part of the learning development process to which this doctorate-level study has contributed. The project webpages have served as a sandbox for project ideas and for developing some of the technical processes, particularly those related to data collection and to diagrammatically representing data outputs; they have diarized the project and contain a reflective commentary on its progress throughout its three-year timescale in the form of a Study Blog; and they contain, present and visualize the data collected. An electronic version of the final thesis is published on the project webpages, notably so that pertinent links to supportive online material elsewhere on the webpages can be easily accessed by the reader.


Design focus

This primary research project has taken an explorative design focus because little is known about the interrelationships between the key parameters being investigated, and so no earlier model has been available to emulate. The main emphasis has been to devise research processes able to establish empirical evidence for previously anecdotally observed features of study behaviour and attitudes to learning amongst the dyslexic student community at university. These were study characteristics that appeared to the researcher to be driven more by students' feelings about their dyslexia and what being identified as 'dyslexic' meant to their study self-identity than by the obstacles and impediments to successful academic study, apparently attributable to their dyslexia, encountered when functioning in a literacy-based learning environment. This was first explored in the Master's research dissertation (Dykes, 2008) which preceded this PhD project and which has contributed to this current project almost as a pilot study.

The fundamental objective has been to establish a sizeable research datapool comprising two principal subgroups: the first was to be as good a cross-section of higher education students as could be returned through voluntary participation in the project; the second was to be a control group of students known to have dyslexic learning differences by virtue of a) being recruited through a request targeted specifically at the university's dyslexic student community, and b) their self-disclosure in the project questionnaire. In this way, it could reasonably be assumed that students recruited from this cohort had previously been formally identified as dyslexic through one or more of the currently available processes, for example as an outcome of an assessment by an educational psychologist either prior to or during their time at university. Subsequently, the research aim was twofold: firstly, to acquire a sense of all research participants' academic confidence in relation to their studies at university; secondly, to establish the extent of all participants' 'dyslexia-ness'. This second aim has been a key aspect of the project design because from it, it was planned that students with dyslexia-like profiles might be identified within the research subgroup of supposedly non-dyslexic students. Quantitative analysis of the metrics used to gauge these criteria has addressed the primary research questions, which hypothesize that knowing about one's dyslexia may have a stronger negative impact on academic confidence than not knowing that one may be dyslexic. This suggests that labelling a learner as dyslexic may be detrimental to their academic confidence in their studies at university or, at best, may not be as useful and reliable as previously believed (Elliott & Grigorenko, 2014).

The research has devised an innovative process for collecting original data by utilizing recently developed, enhanced electronic (online) form design processes (described below); the data were analysed and the analysis outcomes interpreted in relation to the research questions posed at the outset and to the existing literature. The research participants were all students at university, and no selective or stratified sampling protocols were used in relation to gender, academic study level or study status - that is, whether an individual was a home or overseas student - although all three of these parameters were recorded for each respondent and have been used throughout the analysis and discussion where considered apposite. For students recruited into the dyslexic students research subgroup, the questionnaire also recorded how these students learned of their dyslexia, because it was felt that this might be pertinent to the later discussion of the effects of stigmatization on academic study. A presentation of the results, together with a commentary, is provided below and incorporated into the discussion section of this thesis where this has been helpful in trying to understand what the outcomes of the data analysis mean.

The research design has adopted a mixed methods approach, although the main focus has been on the quantitative analysis of data collected through the project's participant self-report questionnaire. The questionnaire was designed and developed for electronic deployment through the research project's webpages, and participants were recruited voluntarily in response to an intensive period of participant-request publicity kindly posted on the university's main, student-facing webpages for a short period during the academic year 2015-16, and also through the researcher's home-university Dyslexia and Disability Service student e-mail distribution list. The raw score data was collected on the questionnaire using a Likert-style item scale approach, although the more conventionally applied fixed anchor-point scale items have been discarded in favour of a continuous scale approach, uniquely developed for this project by taking advantage of new 'form' processes now available for incorporation into web-browser page design. The rationale for designing continuous Likert scale items was to avoid the typical difficulties associated with anchor-point scales, where the application of parametric statistical processes to non-parametric data is of questionable validity (Ladd, 2009; Carifio & Perla, 2007). A more detailed argument to support this choice is presented below.

In addition to recording value scores, research participants were also invited to provide qualitative data, which has been collected through a 'free-text' writing area provided in the questionnaire. The aim has been to use these data to add depth of meaning to the hard outcomes of the statistical analysis where possible.

This method of data collection was chosen because self-report questionnaires have been shown to provide reliable data in dyslexia research (e.g. Tamboer et al., 2014; Snowling et al., 2012); because it was important to recruit participants widely from across the student community of the researcher's home university and, where possible, from other HE institutions; because it was felt that participants were more likely to provide honest responses in the questionnaire if they were able to complete it privately, hence avoiding any issues of direct researcher-involvement bias; and because the remoteness of the researcher from the home university would have presented significant practical challenges had a more face-to-face data collection process been employed.

So as to encourage a good participation rate, the questionnaire was designed to be as simple to work through as possible whilst eliciting data covering three broad areas of interest. Firstly, demographic profiles were established through a short introductory section that collected personal data such as gender, level of study, and particularly whether the participant experienced any specific learning challenges. The second section presented verbatim the existing, standardized Academic Behavioural Confidence Scale as developed by Sander & Sanders (2006, 2009) and tested in other studies researching aspects of academic confidence in university students (e.g. Sander et al., 2011; Nicholson et al., 2013; Hlalele & Alexander, 2011). Lastly, a detailed profile of each respondent's study behaviour and attitudes to their learning was collected, and this section formed the bulk of the questionnaire. Its major sub-section has been the researcher's approach to gauging the 'dyslexia-ness' of the research participants, and care has been taken throughout the study to avoid value-laden, judgmental phraseology such as 'the severity of dyslexia' or 'diagnosing dyslexia', not least because the stance of the project has been to present dyslexia, as it might be defined in the context of university study, as an alternative knowledge acquisition and information processing capability, such that students presenting dyslexia and dyslexia-like study profiles might be positively viewed as neurodiverse rather than learning disabled.



Academic confidence has been assessed using the existing Academic Behavioural Confidence Scale, firstly because an increasing body of research has found this to be a good evaluator of the academic confidence presented in university-level students' study behaviours; secondly because no other metrics have been found that explicitly focus on gauging confidence in academic settings (Boyle et al., 2015), although there are evaluators that measure self-efficacy and, more particularly, academic self-efficacy, which, as described earlier, is considered to be the umbrella construct that includes academic confidence. Hence the Academic Behavioural Confidence Scale is particularly well-matched to the research objectives of this project and comes with an increasing body of previous-research credibility to support its use in the context of this study. A more detailed profile of the ABC Scale has been provided above.

Dyslexia-ness has been gauged using a profiler designed and developed for this project as a dyslexia discriminator that could identify, with a sufficient degree of construct reliability, students with apparently dyslexia-like profiles within the non-dyslexic group. It is this subgroup of students that is of particular interest in the study, because data collected from these participants were to be compared with those from the control subgroups of students with identified dyslexia and students with no indication of dyslexia. For the purposes of this enquiry, the output from the metric has been labelled the Dyslexia Index (Dx), although the researcher acknowledges a measure of disquiet at the term as it may be seen as contradictory to the stance that underpins the whole project. However, 'Dyslexia Index' at least enables a narrative to be constructed that would otherwise be overladen with repeated definitions of the construct and process that has been developed.

Designing a mechanism to identify this third research subgroup has been one of the most challenging aspects of the project. It was considered important to develop an independent means of quantifying dyslexia-ness in the context of this study, in preference to incorporating existing dyslexia 'diagnosis' tools, for two reasons. Firstly, an evaluation that used existing metrics for identifying dyslexia in adults would have been difficult to deploy without explicitly disclosing to participants that part of the project's questionnaire was a 'test' for dyslexia; it was felt that to do this covertly would be unethical and therefore unacceptable as a research process. Secondly, it was important to use a metric which encompasses a broader range of study attributes than those specifically and apparently affected by literacy challenges, not least because research evidence now exists which demonstrates that students with dyslexia at university, partly by virtue of their higher academic capabilities, may have developed strategies to compensate for literacy-based difficulties experienced earlier in their learning histories. Furthermore, in higher education contexts research has revealed that other aspects of the dyslexic self can impact significantly on academic study, and that it may be a mistake to consider dyslexia to be only a literacy issue or to focus on cognitive aspects such as working memory and processing speeds (Cameron, 2015). In particular, those processes which enable effective self-managed learning strategies to be developed need to be considered (Mortimore & Crozier, 2006), especially as these are recognized as a significant feature of university learning, despite some recent research indicating at best marginal, if not dubious, benefits when compared with traditional learning-and-teaching structures (Lizzio & Wilson, 2006).
Following an inspection of the few existing dyslexia diagnosis tools considered applicable for use with university-level learners (and widely used), it was concluded that these were flawed for various reasons (as discussed above) and unsuitable for inclusion in this project's data collection process. Hence the Dyslexia Index Profiler was developed and, as the analysis report details below, has fulfilled its purpose of discriminating students with dyslexia-like study characteristics from others in the non-dyslexic subgroup.

It is important to emphasize that the purpose of the Dyslexia Index Profiler is not to explicitly identify dyslexia in students, although a subsequent project might explore the feasibility of developing the profiler as such. The purpose of the Profiler has been to find students who present dyslexia-like study profiles such that these students' academic confidence could be compared with that of students who have disclosed an identified dyslexia - hence addressing the key research question relating to whether levels of academic confidence might be related to an individual being aware of their dyslexia or dyslexia-like attributes. From this, conjecture about how levels of academic confidence may be influenced by the dyslexia label may be possible.


Analysis and results

A detailed presentation of the methods used to analyse the data, and the reasons for using those processes, is provided; this includes a reflective commentary on the researcher's learning development in statistical processes where this adds value to the methods description. It is recognized that, even though this is a doctoral-level study, the level of statistical analysis used has had to be within the researcher's grasp, both to execute properly and to understand the outputs sufficiently to relate them to the research hypotheses. Invaluable in achieving these learning and research outcomes have been a good understanding of intermediate-level statistical analysis and a degree of familiarity with the statistical analysis software application SPSS; much credit is also given to the accompanying suite of SPSS statistical analysis tutorials provided by Laerd Statistics online, which has both consolidated the researcher's existing competencies in statistical processes and provided a valuable self-teach resource to guide the understanding and application of new analysis tools and procedures.

Recall that the aim of the enquiry is to determine whether statistically significant differences exist between the levels of Academic Behavioural Confidence (ABC) of the three principal research subgroups. The key statistical outputs used to establish this are the t-test for differences between independent sample means, together with Hedges' g effect size measures of difference. These are important outputs that permit a broad conclusion to be drawn about significant differences in Academic Behavioural Confidence between the research subgroups; however, a deeper exploration using Principal Component Analysis (factor analysis) has also been conducted, not only on the results from data collected using the Academic Behavioural Confidence Scale but also on the Dyslexia Index metric, which has enabled a matrix of t-test outcomes and effect sizes to be constructed. This has been a useful mechanism for untangling the complex interrelationships between the factors of academic behavioural confidence and the factors of dyslexia (as determined through the Profiler), and has contributed towards understanding which dimensions of dyslexia in students at university appear to have the most significant impact on their academic confidence in their studies.
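For the reader unfamiliar with the effect size measure named here: Hedges' g is Cohen's d (the difference between two group means divided by their pooled standard deviation) multiplied by a small-sample bias correction. A minimal sketch, using invented data rather than the project's own, is:

```python
import math
from statistics import mean, variance

def hedges_g(group_a, group_b):
    """Hedges' g: bias-corrected standardized difference between two sample means."""
    n1, n2 = len(group_a), len(group_b)
    # pooled standard deviation from the two sample variances (n - 1 denominators)
    s_pooled = math.sqrt(((n1 - 1) * variance(group_a) +
                          (n2 - 1) * variance(group_b)) / (n1 + n2 - 2))
    d = (mean(group_a) - mean(group_b)) / s_pooled  # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)        # small-sample bias correction
    return d * correction

# Hypothetical scores for two subgroups whose means differ by one point:
print(round(hedges_g([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]), 3))  # → -0.571
```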









The research methodology for the enquiry is set out below as a workflow chronology, as this serves to divide the reporting of how the research process has unfolded into ordered descriptions and justifications of the component parts of the project. In these, reference is made to pertinent methodology theory where appropriate, together with the extent to which, and reasons why, it has been embraced or challenged. The workflow chronology is prefaced by a foreword which sets out the researcher's stance on the conventions of this part of a major, individual research enquiry, and which serves to underpin the chronology subsequently reported.



To this researcher at least, it seems clear that most research studies in the social sciences, of which education research may be considered one branch, are incremental in nature, generally conservative in the research approach adopted, and produce research outputs that may as much conflate knowledge as advance it. Social science does not appear to be as objective as 'regular' science. 'Social' might include, or not, elements of cultural, ethnic, ethnographic, inter- (and intra-?) personal factors, or combinations of these. 'Societal' means relating to human society and all of us in it, and by its very nature is complex, multifactorial and often convoluted; observing the activities of the world's peoples and the diversity of their functioning within it evidences this. Science concerns finding out more about the nature of the natural and social world through systematic methodology based on empirical evidence (Science Council, 2017). The methodology of social science research arguably attempts to unravel the ethnography of human attitudes, behaviour and interrelationships in ways which can, however, isolate behavioural, attitudinal or societal variables from their interrelated co-variables, devise methods to observe and analyse them, and subsequently attempt to unravel the often unconvincing results that are difficult to interpret, gain meaning from, or understand with the degree of conviction or certainty upon which agentic change may be proposed. Research outputs that are not change agents, however incremental the change, are surely purposeless and offer little more than esoteric value to the community of academics and researchers undertaking their projects. Nowhere can this be more true than in the field of education and learning, where cementing a transactional bond between research and teaching is surely an essential process in the advancement of the epistemological discourse.

We have learned from the overview above that Bandura's strong argument in his social cognition theory, decades in development, is that people should be viewed as self-organizing, proactive, self-reflective and self-regulating, and not merely as reactive organisms shaped by environmental forces or inner impulses. Law (2004) strongly argues that the methods devised to explore complexities such as these do more than merely describe them: these methods are contingent on the research 'trend' of the time and may as much influence and create the social realities being observed as measure them. This is because the conventional 'research methods' processes that we are taught, supposedly fully considered and perfected after a century of social science, 'tend to work on the assumption that the world is properly to be understood as a set of fairly specific, determinate and more or less identifiable processes' (ibid, p5). The alternative (i.e. Law's) viewpoint is to challenge this global assumption on the basis that the diversity of social behaviours, interactions and realities is (currently) too complex for us properly to know (the epistemological discourse), and hence to argue that the shape of the research should accommodate a fluidity in its methodology that is responsive to the flux and unpredictable outcomes of the mixtures of components and elements that are the focus of the enquiry. If this mindset is adopted, then it follows - so the argument goes - that the research output might be all the more convincing.
This researcher is struck by how taking this approach to devising and actioning a research methodology seems analogous to Kelly's view of the societies of peoples, whereby individuals engage with their societies as scientists [sic], the mechanism through which this is accomplished being the construction of representational models of their world realities so that they can navigate courses of behaviour in relation to them (Kelly, 1955).

This introduction to the Research Methodology thus sets out the researcher's standpoint on the constraints of prescriptive research processes, because following them slavishly creates a tension between such an obedient mindset and the context within which this enquiry is placed and shaped - that is, one that challenges conventional analyses of learning difference and strives to locate it along a spectrum of diversity, which equally challenges the traditional acceptance of 'neurotypical' as normal (Cooper, 2006) and everything else as an aberration. This means that although a fairly clear sense of what needed to be discovered as the product of the enquiry was constructed at the outset, elements of grounded theory as a research methodology have been part of the research journey, which at times, it must be admitted, has drifted a little towards reflecting on the aetiologies of both dyslexia and academic confidence rather than merely reporting them. But it has also been important to regularly re-document these reflections on both product and process. Despite a tentative acceptance of the Popkewitzian argument that research is more than the dispassionate application of processes and instruments, because it needs to embrace the underlying values and shared meanings of the research community within which it is located (Popkewitz, 1984), in the field of dyslexia research consensus about the nature and origins of the syndrome remains as yet a distant objective - as outlined at the top of this paper - an irony that flies in the face of, for example, Rotter's strongly argued proposition that the heuristic value of a construct is [at least] partially dependent on the precision of its definition (1990, p489).

Thus despite the indistinctness of shape and character that surrounds the dyslexia syndrome, the research design has tried hard to retain focus on the primary objective which has been to evidence that students with unidentified dyslexia-like profiles have a stronger sense of academic confidence than identified dyslexic students, and has attempted to develop an enquiry that traces a clear path through the somewhat contentious fog that is dyslexia research.




Workflow chronology

Key steps in the workflow chronology are marked by learning and research process landmarks that have determined the final architecture of the project. This workflow chronology identifies and documents how these have influenced the enquiry. It aims to present, firstly, how the researcher's interest in the impact of dyslexia on learning at university was kindled by learning practitioner experience and, as the project progressed, how key realizations - based on a better understanding of theory and careful reflection on how it reshaped thinking - migrated the research agenda onto a slightly different track:

  1. The preceeding small-scale enquiry:
    The legacy of outcomes from the researcher's preceeding Masters' dissertation (Dykes, 2008) has had a significant impact on the development of this current project. As a preceeding study, this small-scale enquiry within the dyslexic student community at a UK university was interested in understanding why some students with dyslexia were strong advocates of the learning support value provided by a dedicated learning technology suite staffed by dyslexia and disability specialists evidenced through their making frequent use of the suite and services throughout their time at university. Whereas at the same time, others with apparently similar student profiles appeared to be of the opposite disposition as these students rarely visited the technology suite or contacted the staff despite initially registering for access to the resources and services. It was thought that this disparity might, in part at least, be due to differences in the attitudes and feelings of students with dyslexia to their own dyslexia but particularly their perceptions about how it impacted on their access to, and their engagement with their learning at university. The study attempted to reveal these differences through exploration of (academic) locus of control as a determining variable by discriminating research participants into 'internalizers' or 'externalizers' as informed by the theories and evaluative processes widely acreditted to Rotter (1966, 1990). The hypothesis being considered was that students who did not use the learning technology suite and support services were likely to be internalizers whilst those who were regular 'customers' and frequently requested learning support from the staff, likely to be externalizers. 
This hypothesis was formulated from an extrapolation of the literature reviewed, which suggested that externalizers were likely to be significantly more reliant on learning support services to help with and guide their studies than internalizers, who typically presented the more independent learning approaches generally observed amongst the wider student community. It was expected that this would be related to their attitudes and feelings about their dyslexia. As a member of staff of the suite at the time, the researcher was granted privileged access to computer workstation log-in data for the purposes of the research, and this was used to determine which dyslexic students were frequent users of the service and which were not. Through a process of eliminating confounding variables, the research-participant base was established, eventually providing a sample of n=41, of which 26 were regular student users of the service and 15 were not. Data was collected through a self-report questionnaire which asked participants to rate their acquiescence, using Likert-style responders, to a selection of statements about learning and study preferences and about their feelings towards their dyslexia.

    By deconstructing academic locus of control into factors and structuring the questionnaire statements accordingly, some small but significant differences between student profiles did emerge from the analysis, although overall the results were inconclusive. However, a highly significant output of the study was the development of what were termed, at the time, Locus of Control Profiles; an example of the profiles generated from three respondents' data is shown (right). These diagrams represented the numerical conversion of each participant's responses to elements of the data collection questionnaire, aggregated into five distinct factors that the literature had shown were often associated with locus of control. The factors attempted to measure, respectively, Affective Processes, Anxiety Regulation and Motivation, Self-Efficacy, Self-Esteem, and Learned Helplessness, and each axis of the profile diagrams represented one of these factors. Due to the methods used to code the data collected from this section of the questionnaire, the area of the polygon generated in each profile diagram represented the degree of internal locus of control presented by that participant: greater areas represented higher levels of internal locus of control. Hence it was possible to differentiate internalizers from externalizers given boundary values which, it must be admitted now, were somewhat arbitrarily determined, but which nevertheless worked at the time and enabled an analysis process of sorts to be completed.
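The geometry behind such profile polygons can be sketched briefly. The following is a minimal illustration only, not the original coding scheme: it assumes the five factor scores are plotted at equal angular spacing on radar-style axes, so the polygon's area is the sum of the five triangles formed by adjacent axes. The function name and the score values are hypothetical.

```python
import math

def profile_area(scores):
    """Area of the radar-chart polygon formed by plotting each factor
    score on one of n equally spaced axes. Under the coding scheme
    described, a larger area indicates a more internal locus of control."""
    n = len(scores)
    angle = 2 * math.pi / n  # angle between adjacent axes
    # Sum the areas of the n triangles formed by adjacent score pairs:
    # each triangle has area (1/2) * r_i * r_{i+1} * sin(angle)
    return 0.5 * sum(
        scores[i] * scores[(i + 1) % n] * math.sin(angle)
        for i in range(n)
    )

# Hypothetical factor scores: Affective Processes, Anxiety/Motivation,
# Self-Efficacy, Self-Esteem, Learned Helplessness
internalizer = [4.2, 3.8, 4.5, 4.0, 3.6]
externalizer = [2.1, 2.4, 1.9, 2.2, 2.5]
print(profile_area(internalizer) > profile_area(externalizer))  # True
```

One property of this representation worth noting is that the area depends on products of adjacent scores, so the ordering of the factors around the axes affects the result slightly; a fixed axis order, as in the profile diagrams described, keeps comparisons between participants consistent.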

    The factors emerged from the enquiry's literature review, which had referred to many studies where levels of these constructs in learners with dyslexia were observed to be significantly different from those typically seen in non-dyslexic individuals (e.g. Kerr, 2001; Burden & Burdett, 2005; Humphrey & Mullins, 2002; Riddick et al., 1999; Burns, 2000; Risdale, 2005). The literature also appeared to show that dyslexic individuals were more likely to be externalizers than internalizers in this locus of control context (e.g. Bosworth & Murray, 1983; Rogers & Saklofske, 1985), because these individuals perceived, accurately or not, that their dyslexia would naturally put them at a learning disadvantage, not least due to their perception of their dyslexia as a learning disability rather than a learning difference in contemporary learning environments, and that they would therefore need additional support and resources in order to engage with their curricula on a more equal footing with their non-dyslexic peers. Originally the profiles were devised as a means to visually interpret and find meaning in the complex data that the enquiry had collected. No other studies had been found that presented multifactorial outputs in a similar way, so in this respect the portrayal of results was highly innovative. On reviewing these individual participant-response profiles collectively, it was clear that they could be sifted into groups determined as much by striking similarities as by clear contrasts between them. The enquiry's limitations section recognized that there was much more information contained in these profile diagrams and their groupings than could be analysed and reported at the time, and that these could form part of a further research project.
It was also identified that a greater understanding of the relationships between the five factor constructs and locus of control, and of how these related to learners with dyslexia in comparison to learners with none, was a necessary prerequisite for further work. It was also documented that a greater appreciation of 'due scientific process' would be required for a later project, especially in statistical analysis, for example through an understanding of what is now known to the researcher as principal component analysis. It was recognized at the time that the data generated through the self-report questionnaire were non-parametric; indeed, with Likert-style attitudinal statements presented to participants with just two anchor-point choices, 'I generally agree with ...' or 'I generally disagree with ...', the gradation of the data collected was extremely coarse. Clearer planning of the questionnaire design might have led to finer anchor-point gradings, which would have facilitated a more cogent statistical argument to support the enquiry outcomes. Nonetheless, when this data was coded so that statistical analysis was possible, its non-parametric nature was recognized, leading to use of the Mann-Whitney U and Kolmogorov-Smirnov Z tests for significant differences between the groups. It is of no surprise, therefore, especially with hindsight and with the researcher's more recently acquired statistical competencies, that the outcomes of the analysis were inconclusive.
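For illustration, the rank-based comparison the enquiry relied on can be sketched in a few lines. This is a minimal, standard-library computation of the Mann-Whitney U statistic (mid-ranks for ties, no p-value); the coded scores are entirely hypothetical stand-ins for the original questionnaire data, and in practice a statistics package would also supply the significance level.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Pools both samples, assigns mid-ranks (tied values share the average
    of the ranks they span), and returns the smaller of U1 and U2.
    """
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mid-rank of positions i+1..j
        i = j
    r1 = sum(ranks[v] for v in x)           # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

# Hypothetical coded responses: service users vs non-users
users = [4, 5, 3, 4, 5, 2, 4]
non_users = [2, 3, 1, 2, 3]
print(mann_whitney_u(users, non_users))  # → 4.0
```

A small U (relative to the maximum of len(x) * len(y) / 2) indicates that one group's ranks dominate the other's; with only two anchor points per item, as in the original questionnaire, the heavy tying of ranks further blunts the test's sensitivity, which is consistent with the inconclusive outcomes reported.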
But the project did uncover interesting realities about the feelings and attitudes of students with dyslexia towards their learning at university. Worth reporting amongst these were statistically significant differences between 'internalizers' and 'externalizers' in their perceptions of the intensity of study required to match the academic performance of their non-dyslexic peers, and in their feelings about the university's provision for the perceived uniqueness-of-need of students with dyslexia. Also of significant value were the sometimes quite heartfelt confessions of frustration, embarrassment and feelings of being misunderstood expressed by many of the respondents, not least in relation to institutional expectations to seek support to ameliorate study difficulties and challenges, which actually increased study burdens rather than reduced them, especially in relation to time management. For example:
    • "I did not use dyslexia support at all last year ... [I] find the extra time in having to organize dyslexia support well in advance is not helpful as I would much prefer to ask for help as and when I have a problem" [respondent #28, Dykes, 2008, p85]
    • "I am unable to use study support sessions as I am already finding it hard to keep up with the course and don't have time" [respondent #34, ibid, p88]
    • "Going for help with studies takes up more of my time when i'm already struggaling with too much work and not enough time and it rarely helps as I can't explain why I'm struggaling, otherwise I would have just done it in the first place ... [and] all the forms assosated with getting help or reimbursement for books etc means that I keep putting [it] off - forms are something I am daunted by" [respondent #20, ibid, p98]
    This information was collected through a free-writing area on the questionnaire which, in fact, provided the most useful data of the whole enquiry; it was clear that this may have been the first opportunity that many dyslexic learners had been given to speak out about how they felt about their dyslexia and the challenges of studying that they attributed to it. At the time, the value of this rich, qualitative data was underappreciated and it was not thoroughly analysed; instead it was used, to good effect nonetheless, to amplify points made in the discussion section of the dissertation.

    It is that background which has fuelled this current study, notably through three lasting impressions that emerged from the earlier small-scale enquiry. The first was that, in this higher education context at least, as many students with identified dyslexia appeared to be at ease with the challenges of their studies as were burdened by and struggled with them (Dykes, 2008). This was demonstrated in part by a clear distinction between two groups: students who appeared ambivalent towards the additional support provisions they had won through their Disabled Students' Allowance, sometimes citing these as not really necessary and showing a marked lack of interest in taking advantage of training opportunities for the assistive technologies they had been provided with to make studying easier, and students who presented quite the opposite. A similar result has been reported in another study, where more than half of the 485 university students with dyslexia surveyed indicated that they had not taken up the assistive technology training they had been offered, although the reasons for this were not explored in detail and no distinction was made between training on hardware and training on software or assistive technology applications (Draffan et al., 2007). A later study amongst disabled students at the same institution (which coincidentally was also this researcher's own university) also found a significant proportion of the research cohort not taking up these training opportunities, citing time constraints, the burdensome nature of training regimes or the lack of contextual relevance to their studies amongst their reasons (Seale et al., 2010).
A significant and possibly related factor may be the expressions of guilt about being given expensive hardware, such as laptops and other devices, voiced by some dyslexic students who felt they should not be entitled to it: because they did not consider themselves to be learning disabled, they worried that additional support they did not really need might give them an unfair advantage over their peers (Dykes, 2008).

    The second feature of the earlier study that has impacted on the research methodology of this current project was evidence of significant variability in the attitudes towards their own dyslexia expressed by students in the earlier survey. This variability appeared to be related as much to the wide range of dyslexia 'symptoms' presented, and to students' perceptions of the relevance of these to their studies, as to other psychological factors such as self-esteem, academic self-efficacy and learned helplessness, collectively reflecting either a generally positive or a generally negative approach towards the challenges of study at university; hence the interest in relating these factors to the degree of internal or external locus of control. This has influenced the current research process firstly by flagging up the complexity of the dyslexia syndrome, and hence how challenges in clearly establishing what dyslexia means at university conflate dyslexia research paradigms; and secondly by asking how the complex psycho-educational factors that affect every individual's learning behaviours and attitudes to their studies can be teased out into identifiable variables or dimensions that can somehow be measured and thus compared across research subgroups.

    The third factor that has influenced and strongly motivated this current project has been the significant proportion of student respondents in the earlier study who strongly expressed their feelings that their dyslexia was about much more than writing challenges and poor spelling, and that being differentiated from other learners had impacted on them negatively and had had lasting effects.
    • "Dyslexia is seen too much as a reading and writing disorder ... I am just not hard wired in [the same] way [as others]. I just end up feeling stupid 'cos I just don't get it" [respondent #12, Dykes, 2008, p95]
    • "I find searching databases and getting started on writing especially difficult" [respondent #34, ibid, p88]
    • "I avoid using computers at university as they are not set up the same as mine at home and I find it confusing" [respondent #19, ibid, p109]
    • "My spelling and reading sometimes gets worse when I think about dyslexia. I get annoyed with the tact that people can blame bad spelling on dyslexia" [respondent #11, ibid, p82]
    • "I am not sure dyslexia is real because I believe everyone, if given the chance to prove it, could be a bit dyslexic. So perhaps my problem is that I am not as intelligent as others, or that my lack of confidence from an early age decreased my mental capability" [respondent #9, ibid, p94]
    • "In my academic studies I have always had good grades but never found it easy to concentrate on my work" [respondent #27, ibid, p100]
    • "... I will be thinking of one word but write a completely different word, for example I will be thinking 'force' but write 'power' ... I'm obviously cross-wired somewhere" [respondent #33, p101]
    • "When I do not understand, most people think the written word is the problem but [for me] it is the thought process that is different" [respondent #41, ibid, p128]
    • "I was separated all the time [in primary school] and made out to be different. I feel this wasn't the best way to deal with me" [respondent #39, ibid, p103]
    Hence the perspective that dyslexia can be a multifactorial learning difference, much more than a manifestation of a reading and spelling disorder shaped by phonological deficits accumulated during the early years, has driven the desire to develop an alternative process for exploring such factors or dimensions in adult learners, thus forming the rationale for innovating the Dyslexia Index Profiler used in this project.

  2. All of these considerations were welded together into a research design for this current project which proposed at the outset, firstly, to develop the profile visualizations of the prior study into a discriminator for identifying dyslexia-like characteristics amongst apparently non-dyslexic students, and secondly, to adopt Sander and Sanders' (2003, 2006) established Academic Behavioural Confidence (ABC) Scale as a mechanism for exploring the differences in academic behaviour between students with dyslexia, students with no dyslexia and, most importantly, students with previously unidentified dyslexia-like profiles. As reported elsewhere in this thesis, the ABC Scale has been used in a number of recent research studies to investigate the causes of differences in students' study behaviours, but to date this process is not known to have been applied specifically to explore the impact of dyslexia on academic confidence. The decision to use the ABC Scale emerged from reflecting on the resonance that the construct of 'academic confidence' had with the differences observed in the earlier study between dyslexic students who expressed a general confidence in tackling their studies at university and those who presented low levels of self-esteem and academic self-efficacy, which correlated negatively with levels of study anxiety, defeatism, learned helplessness and, to a lesser extent, academic procrastination. This led to the supposition that the disparity may be a function of how individuals were incorporating their dyslexic identity into their study identity: those who were more comfortable with their dyslexic self being part of who they are presented more positive approaches towards their studies than those who strongly perceived their dyslexia as a disabling characteristic of their study identity, and who tended to attribute to it their low levels of self-esteem and self-confidence in tackling academic challenges.
Much of this earlier thinking was grounded in Burden's extensive research with dyslexic adolescents, collectively one of the few research programmes to take individuals' feelings and attitudes towards their dyslexia, that is, the affective dimension of dyslexia, as its main focus (Burden, 2008a; Burden, 2008b; Burden & Burdett, 2005; Burden & Burdett, 2007). The Myself-As-A-Learner Scale (MALS) (Burden, 2000), developed out of his research into dyslexic teenagers' approaches to their learning, and in particular into attitudinal differences amongst learners attending a special school focused on supporting dyslexia, was designed to evaluate students' academic self-concept as a means to understand more about how learners' self-identity impacted on academic engagement and, ultimately, attainment. Key to this has been the broad use of confidence as a characterizing attribute. The scale has been used in a number of more recent studies in which dyslexia was neither the focus of the enquiry nor mentioned, but which recognized that academic self-concept, and the affective dimensions that accompany it, has traditionally been regarded amongst researchers as having less significance for academic attainment than the concept of intelligence. Amongst these have been a longitudinal study interested in changes in students' MALS scores as they progressed through secondary education (Norgate et al., 2013), a project exploring the relationships between academic self-concept, classroom test performance and causal attribution for achievement (Erten & Burden, 2014), and an interesting study of how the socio-emotional characteristics of dyslexic children were modified during their temporary extraction from mainstream teaching settings into specialist learning units designed to enhance literacy skills (Casserly, 2012).
Burden took the MALS as a starting point for further development, with many of its conceptual underpinnings surfacing in a much more focused metric, the Dyslexia Identity Scale (Burden, 2005a). This evaluator was more concerned with the affective dimensions of dyslexia and how these contributed to a dyslexic learner's sense of 'self'. Other researchers were similarly interested in the 'dyslexic self', notably Pollak (2005), whose interest lay more with dyslexia in higher education contexts and with how institutional perceptions and understandings about dyslexia should be challenged in the light of agendas promoting greater inclusivity and accessibility, suggesting that a reframing of teaching and learning approaches to align them more properly with this social justice agenda was long overdue. [more here?]

    Reflecting on the research methodology that supported the enquiry at that time, hindsight suggests that it would have benefitted from a more robustly developed framework based on theoretical underpinnings that could have been better understood, although the high final grade for the dissertation indicated that it had nevertheless demonstrated an understanding of the relevant concepts, and an application of theory to method, that was broadly appropriate for that level of study.

  3. Defining dyslexia: the professional practitioners' view:
    [THIS SECTION TO BE EDITED (as Feb 2018) - work into it, Wadlington & Wadlington, 2005, What educators really believe about dyslexia; and Hornstra et al, 2010, Teacher attitudes towards dyslexia]
    An outcome of the early stages of the literature review on dyslexia was an emerging unease about the lack of consensus amongst researchers, theorists and practitioners about how it should be defined. Having little recourse to a consensual definition of dyslexia was felt to be an obstacle in the research design, not least as it was clear that researchers and theorists each tended to underpin their reports with their own preferred definition. It was felt that to conduct a research study about dyslexia in the absence of a universally agreed definition of what it is could be problematic, and it was recognized that others had expressed a similar disquiet, in some cases constructing alternative theories to resolve the issue of definition discrepancies (e.g. Frith, 1999, 2002; Evans, 2003; Cooper, 2006; Armstrong, 2015). Some of these have been referred to above but, in summary, what seems to be emerging from the continued debate is, first, that adherents to the deficit definitions which have traditionally been the preserve of clinicians who diagnose dyslexia have been numerous amongst earlier research, which polarizes the research outcomes into alignment with this definition perspective. Secondly, the social constructivist model has encouraged the emergence of 'differently-abled' as a definitional standpoint, which has gained in research popularity, driven not least by inclusion agendas. Lastly, an increasing research narrative supports the argument that defining dyslexia is elusive to the extent that the label is unhelpful, and laden with such stigma as to be academically counter-productive.

    This difficulty has been discussed above; however, a practical outcome of this concern was an interest in exploring how dyslexia is framed in professional practice. This led to the development and subsequent deployment of a small-scale sub-enquiry, in fact more of a 'straw poll' given its limited methodological underpinnings, that aimed to construct a backdrop of contemporary viewpoints amongst dyslexia support practitioners about how dyslexia is defined in their worlds. There are precedents for an enquiry that tries to explore professionals' knowledge about dyslexia. Bell et al. (2011) conducted a comparative study amongst teachers and teaching assistants in England and in Ireland who have professional working contact with students with dyslexia, to explore how teachers conceptualize the condition. The research asked teachers and teaching assistants to describe dyslexia as they understood it, and the data collected was categorized according to Morton & Frith's causal modelling framework, which describes dyslexia at behavioural, cognitive and biological levels (Morton & Frith, 1995). Their paper highlighted concerns that the discrepancy model of dyslexia, that is, where the difficulties are assumed to be intrinsic to the learner, persisted amongst practitioners, with discrepancy criteria more frequently used to identify learners with dyslexia than any other category or criterion (ibid, p185). Significant in Bell's study was an acknowledgement of the wide-ranging spectrum of characteristics associated with dyslexia and learning, and hence the importance of developing highly individualized teacher-learner collaborations if students with learning differences are to be fairly accommodated in their learning environments.
Emerging from this was a call for better teacher training and development to enable professionals to gain a greater understanding of the theoretical frameworks and the most up-to-date research surrounding dyslexia, and of how it can be both identified, formally or otherwise, and subsequently embraced in learning curricula. Soriano-Ferrer & Echegaray-Bengoa (2014) attempted to create and validate a scale to measure the knowledge and beliefs of university teachers in Spain about developmental dyslexia. Their study compiled 36 statements about dyslexia such as 'dyslexia is the result of a neurological disorder', 'dyslexic children often have emotional and social disabilities', 'people with dyslexia have below average intelligence' and 'all poor readers have dyslexia'. Respondents were asked to state whether they considered each statement to be true or false, or that they did not know. Unfortunately their paper made no mention of the resulting distribution of beliefs, merely claiming strong internal consistency reliability for their scale. A similar, earlier (and somewhat more robust) study also sought to create a scale to measure beliefs about dyslexia, with the aim of informing recommendations for better preparing educators to help dyslexic students (Wadlington & Wadlington, 2005). The outcome was a 'Dyslexia Belief Index', which indicated that the larger proportion of the research participants, who were all training to be or already were education professionals (n=250), held significant misconceptions about dyslexia. Similar later work by Washburn et al. (2011) sought to gauge elementary school teachers' knowledge about dyslexia, drawing on a claim that 20% of the US population presents one or more characteristics of dyslexia. Other studies which also used definitions of dyslexia, or lists of its characteristics, were interested in attitudes towards dyslexia rather than beliefs about what dyslexia is (e.g. Hornstra et al., 2010; Tsovili, 2004).

    Thus it was felt appropriate to echo Bell's (op cit) interest and attempt to determine the current viewpoint of professional practitioners by exploring their alignments with some of the various definitions of dyslexia. 'Professional practitioners' are taken to be academic guides, learning development tutors, dyslexia support tutors, study skills advisers and disability needs assessors, but the enquiry was scoped broadly enough to include others who work across university communities or more widely with dyslexic learners. Given that the definition of dyslexia may be considered a 'work in progress', it is not unreasonable to suppose that an academic researcher may use one variant of the working definition of dyslexia in comparison to that applied by a disability needs assessor or a primary school teacher, for instance. Hence it was felt that finding out the extent to which dyslexia is framed according to the domain of functioning of the practitioner would provide a useful additional dimension to this project's attempt to understand what dyslexia is, complementing the existing studies outlined above.

    Ten definitions of dyslexia were sourced that tried to encompass a variety of perspectives on the syndrome; these were built into a short electronic questionnaire and deployed on this project's webpages (see Appendix # and available online here). The questionnaire listed the 10 definitions in a random order and respondents were requested to re-order them into a new list reflecting their view of the definitions from the 'most' to 'least' appropriate in their contemporary context. The sources of the 10 definitions were not identified to the participants during completion of the questionnaire, because it was felt that knowing who said what might introduce some bias into the answers. For example, practitioners might align their first choice with the definition attributed to the British Dyslexia Association (which was one of the sources) more out of professional or political correctness than according to their genuine view. Conversely, a respondent might dismiss the definition attributed to a TV documentary because this might be perceived as an unscientific or potentially biased source. On submission of the questionnaire the sources of all of the definitions were revealed, and participants were told in the preamble to the questionnaire that this would occur; it was felt that curiosity about the sources might be a contributory motivating factor to participate. Also provided in the questionnaire was a free-text area where respondents were able to provide their own definition of dyslexia if they chose to, or to add any other comments or views about how dyslexia is defined. Additionally, participants were asked to declare their professional role and practitioner domain, for example: 'my role is: a university lecturer in SpLD or a related field'. The questionnaire was only available online and was constructed using features of the newly available HTML5 web-authoring protocols, which enabled an innovative 'drag-drop-sort' feature.
The core section that comprises the definitions and demonstrates the list-sorting functionality is below.

    It was distributed across dyslexia forums, discussion lists and boards, and was also promoted to organizations with an interest in dyslexia across the world, who were invited to support this 'straw poll' research by deploying it across their own forums or blogs, or directly to their associations' member lists. Although only 26 replies were received, these did include a very broad cross-section of interests, ranging from disability assessors in HE to an optometrist.

    Although a broad range of definitions was sought, it is notable that 8 of the 10 statements used imply deficit by grounding their definitions in 'difficulty/difficulties' or 'disorder', which is indeed a reflection of the prior and prevailing reliance on this framework. With hindsight, a more balanced list of definitions should have been used, particularly including those pertinent to the latest research thinking, which at the time of the questionnaire's construction had not been fully explored. The relatively positive definition #5, that of the British Dyslexia Association, recognizes dyslexia as a blend of abilities and difficulties, hence marking a balance between a pragmatic identification of the real challenges faced by dyslexic learners and an acknowledgement of the many positive, creative and innovative characteristics frequently apparent in the dyslexic profile. It was placed in first, second or third place by 16 respondents, with 12 of those placing it first or second. This only narrowly beat definition #8, which frames dyslexia principally as a 'processing difference' (Reid, 2003) and which was placed in first, second or third place by 14 respondents, also with 12 of those placing it first or second. Interestingly, definition #8 beat the BDA's definition for first place by 6 respondents to 5. The only other definition placed first by 6 respondents was definition #9, which characterizes dyslexia (quite negatively) with a 'disability' label; this was the only definition to include that term in its wording, indicating its origination in the USA, where 'learning disability' is more freely used to describe dyslexia.

    So, from this relatively cursory inspection of the key aspects of respondents' listings overall, it seems fairly evident that a clear majority of respondents align their views about the nature of dyslexia both with that of the British Dyslexia Association and with that of the experienced practitioner, researcher and writer Gavin Reid (2003), whose work is frequently cited and is known to guide much teaching and training of dyslexia 'support' professionals.

    However, let us briefly consider some of the ways in which these results are dispersed according to the professional domains of the respondents:
    Of the three responses received from university lecturers in SpLD, two placed the BDA's definition of a 'combination of abilities and difficulties…' in first position, with the third respondent choosing only the definition describing dyslexia as a specific learning disability. Seven respondents described their professional roles as either disability/dyslexia advisors or assessors, by which it is assumed these are generally non-teaching/tutoring roles, although one respondent indicated a dual role as both a primary teacher and an assessor. None of these respondents chose the BDA's definition first, and two did not select it at all; for the remaining five, it was either their second or third choice. Two of these respondents put definition #8, 'a processing difference…', in first place, with three others choosing definition #9, 'a specific learning disability', to head their lists. Perhaps this is as we might expect from professionals whose task is to establish whether an individual is dyslexic or not, because they have to make this judgement based on 'indications' derived from screenings and tests comprising intellectual and processing challenges particularly designed to cause difficulty for the dyslexic thinker; this is central to their identifying processes. Although the professionalism and good intentions of assessors and advisors are beyond doubt, it might be observed that professional conversancy with a 'diagnostic' process may generate an unintentional but nevertheless somewhat dispassionate sense of the 'learning-related emotions' (Putwain, 2013) that might be expected in an individual who, most likely given a learning history peppered with frustration, difficulties and challenges, has now experienced an 'assessment' that, in the interests of 'diagnosis', has yet again spotlighted those difficulties and challenges.
It is hard to see how such a process does much to enhance the self-esteem of the individual who is subjected to it. This is despite such trials being a necessary hurdle for determining eligibility for access to specialist support and ‘reasonable adjustments’ which, it will be claimed, will then ‘fix’ the problem. The notion of the impact of the identifying process is discussed a little more below.
    One respondent was an optometrist ‘with a special interest in dyslexia’ who selected just one definition in their list, this being #9, ‘a specific learning disability…’, but additionally provided a very interesting and lengthy commentary which advocated visual differences as the most significant cause of literacy difficulties. An extensive, self-researched argument was presented, based on an exploration of ‘visual persistence’ and ‘visual refresh rates’. The claimed results showed that ‘people who are good at systems thinking and are systems aware are slow, inaccurate readers but are good at tracking 3D movement, and vice versa’, adding that ‘neurological wiring that creates good systems awareness [is linked with] slow visual refresh rates and that this results in buffer overwrite problems which can disrupt the sequence of perceived letters and that can result in confusion in building letter to sound associations’. Setting aside its more immediate interest to this study, and noting that its argument and conclusions do not appear to have been tested through peer review, at the very least this may be an example of a research perspective that aligns closely with the professional domain of functioning of the researcher and is not wholly objective, although this is not to cast unsubstantiated aspersions on the validity of the research. This respondent was also of the opinion that none of the definitions offered were adequate (the actual words used are not repeatable here), with some particularly inadequate, commenting further that ‘I do not know what it would mean to prioritize a set of wrong definitions’ - a point which distills much of the argument presented in my paper so far relating to issues of definition impacting on research agendas.

    With the exception of Cooper’s description of dyslexia as an example of neuro-diversity rather than a disability, difficulty or even difference, definitions used by researchers and even professional associations by and large remain fixed on the issues, challenges and difficulties that dyslexia presents when engaging with learning delivered through conventional curriculum processes. This approach compounds, or at least tacitly reinforces, the ‘adjustment’ agenda, which is focused on the learner rather than on the learning environment. Although it is acknowledged that more forward-looking learning providers are at least attempting to be inclusive by encouraging existing learning resources and materials to be presented in more ‘accessible’ ways - at least a pragmatic approach - this still does not grasp the nettle of how to create a learning environment that is not exclusively text-based. I make no apologies for persistently coming back to this point.

  4. Existing dyslexia evaluators and identification processes in higher education - why these were dismissed:
    All of the current devices used in higher education settings for identifying dyslexia in students search diagnostically for deficits in specific cognitive capabilities and use baseline norms as comparators. These are predominantly grounded in lexical competencies. Whilst the literacy-based hegemony prevails as the defining discourse in judgments of academic abilities (Collinson & Penketh, 2010), there remains only a perfunctory interest in devising alternative forms of appraisal that might take a more wide-ranging approach to the gauging of academic competencies. All of the tools use a range of assessments built on the assumption that dyslexia is principally a phonological processing deficit accompanied by other impairments in cognitive functioning which, collectively, are said to disable learning processes to a sufficient extent that the individual 'diagnosed' is left at a substantial disadvantage in relation to their intellectually-comparable peers. The principal reason for identifying a student as dyslexic in university settings has been to provide access to learning support funding through the Disabled Students' Allowance (DSA), within which dyslexia has been regarded as a disability. In the light of persistent funding constraints, a UK Government review of the provision of the DSA, first published in 2014, originally proposed the removal of dyslexia, termed Specific Learning Difficulties, from the list of eligible impairments, mental health conditions and learning difficulties, but to date the proposals set out in the review have not been actioned, not least as a result of strong lobbying from organizations such as the British Dyslexia Association, PATOSS (the Professional Association for Teachers and Assessors of Students with Specific Learning Difficulties) and ADSHE (the Association of Dyslexia Specialists in Higher Education).
Although undergoing a less intrusive screening process is usually the first stage in attempting to establish whether a student presents dyslexia, full assessments can only be conducted by educational psychologists; and although the battery of tests and assessments administered might be considered necessarily comprehensive and wide-ranging, undergoing such cognitive scrutiny is time-consuming, fatiguing for the student being 'diagnosed', and can add to the feelings of difference (Cameron, 2015), anxiety (eg: Carroll & Iles, 2006, Stampoltzis, 2017) and negative self-worth (Tanner, 2009) typically experienced by learners who may already be trying to understand why they find academic study so challenging in comparison to many of their peers.

    A lengthier discussion about dyslexia assessments and identification has been presented earlier in this thesis and is not repeated here. It is worth repeating, however, that the principal reason why existing metrics for assessing dyslexia were not utilized in this project is that doing so would raise ethical concerns about subsequent requirements for disclosure to students whose outcomes on such assessments, even within the framework of this project's questionnaire, appeared to indicate that they may be dyslexic. I have been careful throughout this project to point to a desire to measure levels of dyslexia-ness rather than to identify dyslexia, as it is central to the methodological processes used in this project that a metric is devised that focuses on study attributes rather than on the cognitive characteristics conventionally regarded as deficient in individuals with dyslexia. It is of note that there is a small but growing recognition in university learning development services and study skills centres that finding alternative mechanisms for identifying study needs, whether these appear to be dyslexia-related or not, is desirable, especially in the climate of widening participation currently being promoted in our universities. Although these have been driven by the need to find improved and positively-oriented mechanisms for identifying learning differences typically observable in dyslexic students (Casale, 2015, Chanock et al, 2010, Singleton & Horne, 2009, Haslum & Kiziewicz, 2001), what appears to emerge out of the discussion of these studies' results is that many of the characteristics that may, with development, prove useful as identification discriminators fall broadly into the realm of study skills, academic task management and broader approaches to learning, which are clearly applicable to all students.
In other words, interest is growing, mostly out of practitioner-research, in finding other ways to describe dyslexia using non-cognitive parameters, as opposed to identifying or diagnosing it - notably in discursive constructions of dyslexia drawing on the everyday lived experiences of dyslexic students (Cameron & Billington, 2015a, Cameron & Billington, 2015b, Cameron, 2016).

    [insert here: section detailing a previous attempt to develop an electronic, computerized dyslexia screening assessment (Haslam & Kiziewicz, 2001) built on thesis by Zdzienski (1998)]

    However, as mentioned earlier, Elliott & Grigorenko (2014) conclude that one of the most significant difficulties in designing new processes for determining whether a student presenting a particular set of study or academic learning-management difficulties is actually presenting dyslexia is establishing sensible boundary conditions above and below which dyslexia is, or is not, considered to be the 'cause' of the student's difficulties. This is, of course, not least due to a) the persistent difficulty in defining dyslexia, and b) the wide diversity of learning differences that may be presented. Navigating a path through this landscape has been one of the greatest challenges of this research project and hence has contributed to the rationale for designing and building a specific, evaluative tool to meet the needs of this study's research questions. By adopting an approach that considers variances in study behaviours and learning preferences as the basis of its working parameters, the Dyslexia Index Profiler that has been developed builds on the emerging discourse grounded in a non-cognitive stance. A detailed account of its design and development is presented below, but in summary, given the boundary conditions that were established, the Dyslexia Index Profiler correctly identified as dyslexic all but 2 of the 68 students in the research subgroup who disclosed in the questionnaire that they had been formally identified with dyslexia.

    [More content in preparation - Feb 2018]

  5. Development of the Dyslexia Index (Dx) Profiler:
    The Dyslexia Index (Dx) Profiler has been developed to meet the following criteria:

    1. it is a self-report tool requiring no administrative supervision;
    2. it includes a balance of literacy-related and wider, academic learning-management evaluators;
    3. it includes elements of learning biography;
    4. self-report stem item statements are as applicable to non-dyslexic as to dyslexic students;
    5. although Likert-style based, stem item statements avoid fixed anchor points by presenting respondent selectors as a continuous range option;
    6. stem item statements were written so as to minimize response distortions potentially induced by negative affectivity bias (Brief et al, 1988);
    7. stem item statements were also written so as to minimize respondent auto-acquiescence ('yea-saying'), a tendency to respond positively to attitude statements that has been identified as often problematic (Paulhus, 1991); in addition, it was hoped that the response indicator design, by requiring a fine gradation of level-judgment to be made, would further reduce this bias;
    8. although not specifically designed into the suite of stem-item statements at the outset, natural groupings of statements later emerged through factor analysis as scales, in keeping with the standard social science procedure of grouping statements into scales in self-report questionnaires;
    9. in writing the stem item statements, an attempt was made to avoid social desirability bias, that is, the tendency of respondents to self-report themselves positively, either deliberately or unconsciously. In particular, an overall neutrality was sought for the complete Dx Profiler so that it was difficult for participants to guess what were likely to be favourable responses (Furnham & Henderson, 1982).

    It has been constructed following review of dyslexia self-identifying evaluators, in particular, the BDA's Adult Checklist developed by Smythe and Everatt (2001), the original Adult Dyslexia Checklist proposed by Vinegrad (1994) upon which many subsequent checklists appear to be based, and the much later, York Adult Assessment (Warmington et al, 2012) which has a specific focus as a screening tool for dyslexia in adults and which, despite the limitations outlined earlier, was found to be usefully informative. Also consulted and adapted has been work by Burden, particularly the 'Myself as a Learner Scale' (Burden, 2000), the useful comparison of referral items used in screening tests which formed part of a wider research review of dyslexia by Rice & Brooks (2004) and more recent work by Tamboer & Vorst (2015) where both their own self-report inventory of dyslexia for students at university and their useful overview of other previous studies were consulted.

    It is widely reported that students at university, by virtue of being sufficiently academically able to progress their studies into higher education, have frequently moved beyond many of the early literacy difficulties that may have been associated with their dyslexic learning differences, and perform competently in many aspects of university learning (Henderson, 2015). However, the nature of study at university requires students to quickly develop their generic skills in independent, self-regulated learning and individual study, and to enhance and adapt their abilities to engage with, and deal resourcefully with, learning challenges generally not encountered in their earlier learning histories (Tariq & Cochrane, 2003). Difficulties with many of these learning characteristics or 'dimensions', which may be broadly irrelevant or go un-noticed in children, may only surface when these learners make the transition into the university learning environment, because adult learning in higher education requires greater reliance on self-regulated learning behaviours in comparison to earlier, compulsory education contexts where learning is largely teacher-directed. Many students, whether dyslexic or not, struggle to deal with these new and challenging learning regimes, and this has seen many, if not most, universities developing generic study-skills and/or learning development facilities and resources to support all students in the transition from regulated to self-regulated learning.
It is possible that increasing institutional awareness of their duties to respond to quality assurance protocols and recently introduced measures of student satisfaction has also influenced the development of academic skills provisions in universities, together with a commercial interest in keeping levels of attrition to a minimum both to reduce the financial consequences through loss of fees and also to minimize the publicity impact that attrition levels might have on future student recruitment.

    For many students, gaining an understanding of why they may be finding university increasingly difficult, perhaps more so than their friends and peers, does not happen until their second or third year of study, when they subsequently learn of their dyslexia, most usually through referral by diligent academic staff to learning support services. One earlier research paper established that more than 40% of students with dyslexia only have their dyslexia identified during their time at university (Singleton et al, 1999). Widening participation and alternative access arrangements for entry to university in the UK have certainly increased the number of students from under-represented groups moving into university learning (Mortimore, 2013), although given higher participation in higher education generally, the proportion of students with dyslexia relative to the student population as a whole, rather than the absolute number, might be a better indicator; it is nevertheless possible that this estimate remains reasonable. This might further suggest that many dyslexic students progress to the end of their courses remaining in ignorance of their learning differences, and indeed many will gain a rewarding academic outcome in spite of this, suggesting that their dyslexia, such as it may be, has been irrelevant to their academic competency and has had little impact on their academic agency.

    There are many reasons why dyslexia is not identified at university, and a discussion of these is presented in an earlier section of this thesis. One explanation for late or non-identification may be that the more academic, learning-management-type dimensions of dyslexia, which are components of self-regulated learning processes, are likely to have had little impact on earlier academic progress because school-aged learners are supervised and directed more closely in their learning through regulated teaching practices. At university, however, the majority of learning is self-directed, with successful academic outcomes relying more heavily on the development of effective organizational and time-management skills which may not have been required in earlier learning (Jacklin et al, 2007). Hence, because the majority of existing metrics appear to be weak in gauging the study skills and academic competencies, strengths and weaknesses of students with dyslexia that may either co-exist with persistent literacy-based deficits or have otherwise displaced them, a concern was raised about using any of these metrics per se - a concern shared by many educators working face-to-face with university students (eg: Chanock et al, 2010, Casale, 2013), among whom there has been a recent surge in calls for alternative assessments which more comprehensively gauge a wider range of study attributes, preferences and characteristics.

    The two preliminary enquiries reported above were therefore developed to find out more about how practitioners are supporting and working with students with dyslexia in UK universities. The aim was to guide the development of the Dyslexia Index Profiler by grounding it in the practical experience of supporting students with dyslexia in university contexts, because it was felt that this would complement the theoretical basis of the metric. The first enquiry aimed to find out more about the kind of working definition of dyslexia that these practitioners were adopting; the second aimed to explore the prevalence of attributes and characteristics associated with dyslexia that were typically encountered by these practitioners in their day-to-day interactions with dyslexic students at university. The results of this second enquiry have been used as the basis for building the Dyslexia Index Profiler and are reported below.

    The Profiler collected quantitative data from participant responses across the complete datapool, which enabled baseline scores of dyslexia-ness to be established for students who had disclosed their dyslexia - effectively deriving the control group for the study. Scores from participants who declared no dyslexic learning differences could then be compared against these baselines, and from this comparison two further distinct subgroups were established: first, non-dyslexic student participants whose scores were so far adrift from those in the dyslexic control group that they could properly be considered non-dyslexic or, in the terms adopted for this project, as presenting a low degree of dyslexia-ness; secondly, student participants whose scores were similar to those in the control group, hence establishing the subgroup of particular interest to the study - students who had declared themselves to be not dyslexic but who nevertheless presented a level of dyslexia-ness similar to students in the dyslexic control group. The academic confidence of the three subgroups could then be compared.
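The subgroup derivation just described can be sketched in outline as follows. This is a minimal illustration only: the boundary value (400) and the group labels are hypothetical placeholders, not the study's actual cut-offs, which were set from the distribution of the dyslexic control group's scores.

```python
# Sketch of deriving the three analysis subgroups from Dyslexia Index (Dx)
# scores. The boundary value (400) and the group labels are hypothetical
# illustrations, not the study's actual parameters.

def assign_subgroup(dx, declared_dyslexic, boundary=400):
    """Place a respondent into one of the three comparison subgroups."""
    if declared_dyslexic:
        return "control"                 # disclosed dyslexia: baseline group
    if dx >= boundary:
        return "high-Dx non-declared"    # dyslexia-ness similar to the control group
    return "low-Dx non-declared"         # scores well adrift of the control group

groups = [assign_subgroup(650, True),
          assign_subgroup(520, False),
          assign_subgroup(180, False)]
```

Academic confidence scores can then be compared across the three resulting groups.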

    The final profiler comprised 20 Likert-style item statements, each aiming to capture data about a specific study attribute or aspect of learning biography. At the design stage, item statements were referred to as 'dimensions' and were loosely grouped into scales, each designed to measure distinct study and learning management processes. At the outset, these were determined 'by eye' into 5 categories or scales: Reading; Scoping, thinking and research; Organization and time-management; Communicating knowledge and expressing ideas; Memory and information processing. Once results were available to inspect, factor analysis was applied as a dimensionality-reduction technique to re-determine these scales; new dimension groupings emerged, subsequently referred to as FACTORS and designated:
    • Factor 1: Reading, writing and spelling;
    • Factor 2: Thinking and processing;
    • Factor 3: Organization and time-management;
    • Factor 4: Verbalizing and scoping;
    • Factor 5: Working memory;
    Accounts of these processes are below and in the Data & Analysis section of this thesis.
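The dimensionality-reduction step can be illustrated with a small sketch. The data below are synthetic (two latent traits, each driving three items), and assigning items by their loadings on the leading principal components of the correlation matrix stands in, as a simplified proxy, for the fuller factor-analysis procedure used in the study.

```python
import numpy as np

# Illustrative sketch of regrouping questionnaire items into factors via
# loadings on the leading components of the item correlation matrix.
# Synthetic data only: two latent traits, each driving three items.

rng = np.random.default_rng(0)
n_respondents = 500

trait_a = rng.normal(size=n_respondents)        # drives items 0-2
trait_b = 0.5 * rng.normal(size=n_respondents)  # drives items 3-5 (weaker scale)
noise = 0.2 * rng.normal(size=(n_respondents, 6))
data = np.column_stack([trait_a] * 3 + [trait_b] * 3) + noise

# Eigen-decompose the correlation matrix; loadings = eigvec * sqrt(eigval).
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1][:2]           # keep the two largest components
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

# Assign each item to the factor on which it loads most strongly.
assignment = np.abs(loadings).argmax(axis=1)
```

With well-separated latent traits, the items driven by the same trait end up assigned to the same factor, mirroring how the five FACTORS above emerged from the item responses.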

    A detailed account of the design, development and construction of the Dyslexia Index Profiler follows, below.

  6. Gauging academic confidence by using the Academic Behavioural Confidence (ABC) Scale:
    The ABC Scale developed by Sander & Sanders throughout the first decade of this century has generated a small but focused following amongst researchers who are interested in exploring differences in university student study behaviours and academic learning management approaches.

    ABC Researchers to mention


Collecting information - the rationales, justifications and challenges

This is a primary research project, so research participants had to be located and information requested from them. The project has focused on the academic confidence of university students, relating it to levels of dyslexia-ness determined as part of the data collection process. Academic confidence has been operationalized using a standardized and freely available metric, the Academic Behavioural Confidence Scale, which is a 24-item self-report questionnaire. Given that participants' dyslexia-ness would also be gauged using a self-report process, that other demographics could easily be collected simultaneously, and that qualitative data could also be acquired through written responses to open-ended questions, it was felt that designing and building a complete, self-report data-collection questionnaire was the most expedient data-collection process and would be fit-for-purpose in this project. Data analysis would use quantitative statistical processes to address the research questions and hypotheses, with the examination of the data for significant differences and effect sizes forming the major part of the analysis. Quantitative data was collected in the questionnaire through Likert-style item statements which together formed scales and subscales. This is described in more detail below. A widely-reported challenge when collecting self-report data using Likert scales in questionnaires is that when conventional, fixed anchor points are used - commonly 5 or 7 points - the data produced has to be numerically coded so that it can be statistically analysed. This raises the controversy about whether data coded in this way justifies parametric analysis, as non-parametric techniques should really be employed because the data is not authentic and actual (Carifio & Perla, 2007, Carifio & Perla, 2008).
To manage this issue, an innovative data-range slider was used in the questionnaire design which provided much finer anchor-point gradations, effectively eliminating these altogether in favour of a continuous scale, hence enabling parametric statistical analysis of the results to be conducted. This is also described more fully below.
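The contrast between the two response formats can be shown with a brief sketch. The anchor mapping below (anchors at 0, 25, 50, 75 and 100 coded 1-5) is an illustrative assumption, not the coding scheme of any particular published instrument; it simply demonstrates the information loss when fine-grained slider readings are collapsed onto discrete anchor codes.

```python
# Sketch of the information loss when continuous 0-100 slider readings are
# collapsed onto conventional 5-point Likert anchor codes. The anchor
# positions (0, 25, 50, 75, 100 -> codes 1-5) are an illustrative assumption.

def to_likert5(slider_value):
    """Map a 0-100 slider reading onto the nearest of five anchor codes (1-5)."""
    return round(slider_value / 25) + 1

slider_readings = [3, 37, 50, 62, 88]   # fine-grained, near-continuous responses
likert_codes = [to_likert5(v) for v in slider_readings]

# Distinct slider readings collapse onto fewer discrete codes:
distinct_sliders = len(set(slider_readings))   # 5 distinct values
distinct_codes = len(set(likert_codes))        # fewer than 5
```

Here the readings 50 and 62 both collapse onto the mid-point code, illustrating the coarse-graining that the continuous-range slider is intended to avoid.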

Recent developments in web-browser technologies and electronic survey creation techniques have led to the widespread displacement of paper-based questionnaires by those that can be delivered electronically, and so this method was used. Substantial technical challenges in the design of the electronic questionnaire were encountered and a report about how these were managed is provided in the following sub-section. Once created, the intention was to deploy the questionnaire to two distinct student groups: the first was students with known dyslexia at the home university, so that the baseline control group could be established; the second was the wider student community at the home university. Challenges in obtaining the cooperation of key staff at the home university for the deployment to students with dyslexia were encountered, resulting in a delay of some months whilst these were resolved. The main issue was an uncertainty about whether all ethical procedures and protocols had been properly followed and whether complete student-data confidentiality would be preserved. This was despite all Ethics Approval documents being made available, and sight of the main data-collection questionnaire being provided so that it could be confirmed that no student names or contact details formed part of the data collection.







This sub-section provides a report of the processes that were designed and developed to collect data.

The underlying rationale as a primary research project has been to collect data about levels of academic confidence of university students, collectively referred to below as the Research Group (RG), measured through use of the Academic Behavioural Confidence Scale, and levels of dyslexia-ness, gauged through the Dyslexia Index Profiler, especially developed for this study.

Additional background information was collected to provide the demographic context of the Research Group which in particular included a short section which asked participants to declare any learning differences of which dyslexia was of primary interest. Additional information was also collected relating to broader psycho-social constructs which, at the time of the design and development of the research questionnaire, were intended to form the key discriminator for gauging levels of dyslexia-ness. However, in the light of a simulation exercise to test the feasibility of this, it was decided that an additional metric should be developed and incorporated into the questionnaire which more directly assessed dyslexia-ness through the lens of study-skills and academic learning management attributes - hence the development of the Dyslexia Index Profiler.



Establishing the Research Group and research sub-groups

Describe briefly how the datapool was decided upon and how the research subgroups were to be determined.

Add details about how research support from the Disability and Dyslexia Service was very difficult to gain including mention about how essential it has been to the research that it was through access to students registered as dyslexic with this Service that the CONTROL group would be established.

Add further details about the challenges in deploying the Invitation to Participate on the university's student intranet 'home' page. Include the video invitation that was prepared as an incentive to participate.


link to research questionnaire




Procedures for data collection

This sub-section



Designing and building a web-browser-based electronic questionnaire

This sub-section



Questionnaire deployment

This sub-section



The Academic Behavioural Confidence Scale

This sub-section



Additional psychosocial construct evaluators

This sub-section



Development of the Dyslexia Index Profiler - construction:

The Dyslexia Index (Dx) profiler forms the final, 20-item Likert scale in the main research questionnaire for this project, which was deployed to students during the summer term of 2016. This final section of the main QNR asks respondents to:

  • 'reflect on other aspects of approaches to your studying or your learning history - perhaps related to difficulties you may have had at school - and also asks about your time management and organizational skills more generally.'

The bank of 20 'leaf' statements comprises the 18 statements from the baseline enquiry (as detailed below) plus two additional statements relating to learning biography:

  • 'When I was learning to read at school, I often felt I was slower than others in my class';
  • 'In my writing at school I often mixed up similar letters like "b" and "d" or "p" and "q"'.

and these leaf statements are collectively preceded by the 'stem' statement: 'To what extent do you agree or disagree with these statements ...'. Respondents register their level of acquiescence using the input-variable slider by adjusting its position along a range from 0% to 100%, with the value at the final position being presented in an output window. The complete main research questionnaire, of which this metric comprises the final section, is available to view here.

Each respondent's results were collated into a spreadsheet, adjusted where specified (through reverse-coding some data, for example - details below), and the Dyslexia Index (Dx) was calculated as the weighted mean of the input-values that the respondent set against each of the leaf statements. The final calculation generates a value in the range 0 < Dx < 1000. The weighting applied to the input-value for each leaf statement is derived from the mean prevalence of each attribute, or 'dimension', of dyslexia that emerged from the data collected in the baseline enquiry.
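The calculation just described can be sketched as follows. The dimension names, weights and slider values below are hypothetical illustrations; only the overall shape of the computation (reverse-coding, weighted mean, scaling onto 0-1000) mirrors the description above.

```python
# Sketch of the Dyslexia Index (Dx) calculation: a weighted mean of 0-100
# slider inputs, scaled onto the 0-1000 Dx range. Dimension names, weights
# and responses below are hypothetical.

def dyslexia_index(responses, weights, reverse_coded=()):
    """Weighted mean of slider values (0-100), scaled by 10 to give 0-1000."""
    weighted_sum = 0.0
    total_weight = 0.0
    for dim, value in responses.items():
        if dim in reverse_coded:
            value = 100 - value          # flip reverse-coded items
        weighted_sum += weights[dim] * value
        total_weight += weights[dim]
    return 10 * weighted_sum / total_weight

responses = {"slow_reading": 70, "spelling": 55, "time_management": 40}
weights = {"slow_reading": 0.9, "spelling": 0.8, "time_management": 0.6}
dx = dyslexia_index(responses, weights, reverse_coded={"time_management"})
```

A respondent who set every slider to 100 on non-reverse-coded items would score the maximum Dx of 1000.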

An attempt was made to choose the wording of the leaf statements carefully so that the complete bank has a balance of positively-worded, negatively-worded and neutral statements overall. There is evidence that ignoring this feature of questionnaire design can impact on internal consistency reliability, although the practice, despite being widespread in questionnaire design, remains controversial (Barnette, 2000), with more recent studies reporting that the matter is far from clear and requires further research (Weijters et al, 2010). A development of this Dyslexia Index Profiler will be to explore this issue in more depth.

A working trial of a standalone version of the Dyslexia Index Profiler, which produces an immediate Dx value, is available here. It is stressed that this has been created and published online initially to support this paper, although it is hoped that further development will be possible, most likely as a research project beyond this current study. It must therefore be emphasized that this is only a first-development profiler, emerging from the main research questionnaire data analysis to date, and it has a slightly reduced, 16-item format. Details about how it has been developed will be presented in the final thesis, as constraints in this paper prevent a comprehensive reporting of the development process here.


Baseline enquiry: collecting data about the prevalence of 'dimensions' of dyslexia

This tool aimed to collect data about the prevalence and frequency of attributes, that is, dimensions of dyslexia encountered by dyslexia support professionals in their interactions with dyslexic students at their universities. An electronic questionnaire (eQNR) was designed, built and hosted on this project's webpages, available here. A link to the eQNR was included in an introduction and invitation to participate, sent by e-mail to 116 of the UK Higher Education institutions listed on the Universities UK database. The e-mail was directed to each university's respective support service for students with dyslexia where this could be established from the university's webpages (which was possible for most of them), or otherwise to a more general university enquiries e-mail address. Only 30 replies were received, which was disappointing, although it was felt that the data in these replies was rich enough to provide substantive baseline data which could positively contribute to the development of the Dyslexia Index Profiler, so that it could be incorporated into the project's main research questionnaire scheduled for deployment to students later.

The point of this preliminary enquiry was twofold:

  • by exploring the prevalence of attributes (dimensions) of dyslexia observed 'at the chalkface' rather than distilled through theory and literature, it was hoped that this data would confirm that the dimensions being gauged through the enquiry were indeed significant features of the learning and study profiles of dyslexic students at university. A further design feature of the enquiry was to provide space for respondents to add other dimensions that they had encountered and which were relevant. These are shown below together with comments about how they were dealt with;
  • through analysis of the data collected, value weightings would be ascribed to the components of the Dyslexia Index Profiler when it was built and incorporated into the main research questionnaire. This was felt to be a very important aspect of the preliminary enquiry because establishing the relative prevalence of the dimensions was likely to be a highly influential factor in determining a measure of dyslexia, the profiler's most important feature being its use as a discriminator between dyslexic and non-dyslexic students.

A main feature of the design of the eQNR was to discard the conventionally-favoured discrete scale-point anchors of Likert scale items and replace them with input-range sliders for respondents to record their inputs. Since the advent of this relatively new browser functionality, electronic data-gathering tools have begun to use input-range sliders more readily, following evidence that doing so can reduce the impact of input errors, especially in the collection of measurements of constructs that are representative of individual characteristics, typically personality (Ladd, 2009) or other psychological characteristics. Controversy also exists about the nature of discrete selectors for Likert scale items, because data collected through typically 5- or 7-point scales needs to be coded into a numerical format to permit statistical analysis. The coding values used are therefore arbitrary and coarse-grained, and the controversy relates to the dilemma of applying parametric statistical analysis to what is effectively non-parametric data - that is, discrete, ordinal data rather than continuous, interval data (Brown, 2011; Carifio & Perla, 2007, 2008; Jamieson, 2004; Murray, 2013; Norman, 2010; Pell, 2005). Using input-range slider functionality addresses these issues because the outputs generated, although technically still discrete (they are integer values), nevertheless provide a much finer grading and hence may be more justifiably used in parametric analysis. This baseline enquiry also served the very useful purpose of testing the technology and gaining feedback about its ease of use, to determine whether it was robust and accessible enough to use in the project's main student questionnaire later, or whether it should be discarded in favour of more conventionally constructed Likert scale items. Encouraging feedback was received, so the process was indeed included in the main research questionnaire deployed to students.


[Example slider item - Dyslexia Dimension (eg): 'students show evidence of being very disorganized most of the time'; slider default position 50%]


In this preliminary enquiry, 18 attributes or 'dimensions' of dyslexia were set out in the eQuestionnaire, collectively prefixed by the question:

• 'In your interactions with students with dyslexia, to what extent do you encounter each of these dimensions?'

In the QNR, each Likert-style stem statement refers to one dimension of dyslexia. 18 dimensions were presented and respondents were requested to judge the frequency with which each dimension was encountered, as a percentage of all their interactions with dyslexic students. For example, for the statement "students show evidence of being disorganized most of the time", a respondent who judged that they 'see' this dimension in 80% of all their dyslexic student interactions would return '80%' as their response to this stem statement. It was anticipated that respondents would naturally discount repeat visitors from this estimate, although this was not made explicit in the instructions as it was felt that it would over-complicate the preamble to the questionnaire. It is recognized that there is a difference between 80% of students being 'disorganized' and 'disorganization' being encountered in 80% of interactions with students. However, since an overall 'feel' for prevalence was the aim of the questionnaire, the difference was felt to be as much a matter of syntax as of distinctive meaning, and so either interpretation from respondents would be acceptable. Respondents were requested to record their estimate by moving each slider along a continuous scale ranging from 0% to 100%, according to the guidelines at the top of each of the 18 leaf statements. The default position for each slider was set at 50%. With hindsight, it may have been better to set the default position at 0%, to encourage respondents to be properly active in responding rather than leaving the slider untouched for statements about which they felt ambivalent, which may have happened with the default set at 50%. This could only have been established by testing prior to deployment, for which time was not available.
Research to inform this is limited at present, as the incorporation of continuous rating scales in online survey research is a relatively new technology, although the process is now becoming easier to implement and hence is attracting research interest (eg: Treiblmaier & Filzmoser, 2011).

The 18 leaf statements, labelled 'Dimension 01 ... 18' are:

  1. students’ spelling is generally very poor
  2. students say that they find it very challenging to manage their time effectively
  3. students say that they can explain things more easily verbally than in their writing
  4. students show evidence of being very disorganized most of the time
  5. in their writing, students say that they often use the wrong word for their intended meaning
  6. students seldom remember appointments and/or rarely arrive on time for them
  7. students say that when reading, they sometimes re-read the same line or miss out a line altogether
  8. students show evidence of having difficulty putting their writing ideas into a sensible order
  9. students show evidence of a preference for mindmaps or diagrams rather than making lists or bullet points when planning their work
  10. students show evidence of poor short-term (and/or working) memory – for example: remembering telephone numbers
  11. students say that they find following directions to get to places challenging or confusing
  12. when scoping out projects or planning their work, students express a preference for looking at the ‘big picture’ rather than focusing on details
  13. students show evidence of creative or innovative problem-solving capabilities
  14. students report difficulties making sense of lists of instructions
  15. students report regularly getting their ‘lefts’ and ‘rights’ mixed up
  16. students report their tutors telling them that their essays or assignments are confusing to read
  17. students show evidence of difficulties in being systematic when searching for information or learning resources
  18. students are very unwilling or show anxiety when asked to read ‘out loud’

It is acknowledged that this does not constitute an exhaustive list of dimensions, and this was identified in the preamble to the questionnaire. In order to provide an opportunity for colleagues to record other attributes commonly (for them at least) encountered during their interactions with students, a 'free text area' was placed at the foot of the questionnaire for this purpose. Where colleagues listed other attributes, they were also requested to provide a % indication of the prevalence. In total, an additional 24 attributes were reported: 16 of these were indicated by just one respondent each, six more by two respondents each, one by three respondents and one by four respondents. To make this clearer to understand, the complete set is presented below:

Additional attribute reported % prevalence
poor confidence in performing routine tasks 90 85 80 *n/r
slow reading 100 80 *n/r
low self-esteem 85 45
anxiety related to academic achievement 80 60
pronunciation difficulties / pronunciation of unfamiliar vocabulary 75 70
finding the correct word when speaking 75 50
difficulties taking notes and absorbing information simultaneously 75 *n/r
getting ideas from 'in my head' to 'on the paper' 60 *n/r
trouble concentrating when listening 80
difficulties proof-reading 80
difficulties ordering thoughts 75
difficulties remembering what they wanted to say 75
poor grasp of a range of academic skills 75
not being able to keep up with note-taking 75
getting lost in lectures 75
remembering what's been read 70
difficulties choosing the correct word from a spellchecker 60
meeting deadlines 60
focusing on detail before looking at the 'big picture' 60
difficulties writing a sentence that makes sense 50
handwriting legibility 50
being highly organized in deference to 'getting things done' 25
having to re-read several times to understand meaning *n/r
profound lack of awareness of their own academic difficulties *n/r
(* n/r = % not reported)

It is interesting to note that the additional attribute most commonly reported referred to students' confidence in performing routine tasks, by which it is assumed is meant academic tasks. This provided encouragement that the more subjective, self-report Academic Behavioural Confidence scale incorporated into the main research questionnaire would account for this attribute as expected, and that it would not be necessary to factor the construct of 'confidence' into the Dyslexia Index Profiler. However, this may be a consideration for the future development of the stand-alone Profiler in due course.

Data collected from the questionnaire replies was collated into a spreadsheet and, in the first instance, simple statistics were calculated: the mean average prevalence for each dimension, together with the standard deviation for the dataset and the standard error, so that 95% confidence intervals for the background population mean of each dimension could be established to provide an idea of variability. The most important figure is the sample mean prevalence, because this indicates the average frequency with which each dimension was encountered by dyslexia support professionals in university settings. For example, the dimension encountered with the greatest frequency on average is 'students show evidence of having difficulty putting their writing ideas into a sensible order', with a mean average prevalence of close to 76%. The table below presents the dimensions ranked by average prevalence, which in itself presents an interesting picture of 'in the field' encounters, and it is notable that the top three dimensions appear to be particularly related to organizing thinking. A deeper analysis of these results will be reported in due course.
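These summary statistics are straightforward to reproduce. The sketch below recomputes the standard error and 95% confidence interval for the top-ranked dimension from the summary figures quoted (mean 75.7, standard deviation 14.75, n = 30 respondents); it assumes the normal-approximation multiplier of 1.96, whereas the spreadsheet values reported in the table may have used a slightly different critical value:

```python
import math

def ci_95(mean, sd, n, z=1.96):
    """Standard error and 95% confidence interval for a population mean."""
    se = sd / math.sqrt(n)                    # st err = sd / sqrt(n)
    return se, (mean - z * se, mean + z * se)

# Dimension 8: 'difficulty putting writing ideas into a sensible order'
se, (lo, hi) = ci_95(75.7, 14.75, 30)
print(f"st err = {se:.2f}, 95% CI: {lo:.2f} < mu < {hi:.2f}")
```

This reproduces the tabulated standard error of 2.69 and an interval very close to the 70.33 < µ < 81.07 shown for that dimension.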

Interesting as this data is in itself, the point of collecting it was to inform the development of the Dyslexia Index (Dx) Profiler to be included in the main research questionnaire. It was felt that there was sufficient justification to include all 18 dimensions in the Dx Profiler, but that attributing an equal weighting to them all would be to dismiss the relative prevalence of each dimension, determined from the rankings of mean prevalence shown in the table below. By aggregating the input-values assigned to each dimension in the Dx Profiler on a weighted mean basis, it was felt that the result, as presented by the Dyslexia Index value, would be a more representative indication of whether any one respondent presents a dyslexia-like profile of study attributes. Hence this may be a much more reliable discriminator for sifting out 'unknown' dyslexic students from the wider research group of (declared) non-dyslexic students.

dim# Dyslexia dimension mean prevalence  st dev st err 95% CI for µ
8 students show evidence of having difficulty putting their writing ideas into a sensible order 75.7 14.75 2.69 70.33 < µ < 81.07
7 students say that when reading, they sometimes re-read the same line or miss out a line altogether 74.6 14.88 2.72 69.15 < µ < 79.98
10 students show evidence of poor short-term (and/or working) memory - for example, remembering telephone numbers 74.5 14.77 2.70 69.09 < µ < 79.84
18 students are very unwilling or show anxiety when asked to read 'out loud' 71.7 17.30 3.16 65.44 < µ < 78.03
3 students say that they can explain things more easily verbally than in their writing 70.6 15.75 2.88 64.84 < µ < 76.30
16 students report their tutors telling them that their essays or assignments are confusing to read 70.4 14.60 2.67 65.09 < µ < 75.71
2 students say that they find it very challenging to manage their time effectively 69.9 17.20 3.14 63.67 < µ < 76.19
17 students show evidence of difficulties in being systematic when searching for information or learning resources 64.3 19.48 3.56 57.21 < µ < 71.39
13 students show evidence of creative or innovative problem-solving capabilities 63.2 19.55 3.57 56.08 < µ < 70.32
4 students show evidence of being very disorganized most of the time 57.2 20.35 3.72 49.79 < µ < 64.61
12 when scoping out projects or planning their work, students express a preference for looking at the 'big picture' rather than focusing on details 57.1 18.00 3.29 50.58 < µ < 63.69
9 students show evidence of a preference for mindmaps or diagrams rather than making lists or bullet points when planning their work 56.7 17.44 3.18 50.32 < µ < 63.01
1 students' spelling is generally poor 52.9 21.02 3.84 45.22 < µ < 60.52
11 students say that they find following directions to get to places challenging or confusing 52.3 20.74 3.79 44.78 < µ < 59.88
14 students report difficulties making sense of lists of instructions 52.0 22.13 4.04 43.98 < µ < 60.09
15 students report regularly getting their 'lefts' and 'rights' mixed up 51.7 18.89 3.45 44.83 < µ < 58.57
5 in their writing, students say that they often use the wrong word for their intended meaning 47.8 20.06 3.66 40.46 < µ < 55.07
6 students seldom remember appointments and/or rarely arrive on time for them 35.7 19.95 3.64 28.41 < µ < 42.93

The graphic below shows the relative rankings of all 18 dimensions again, but with added, hypothetical numbers of interactions with dyslexic students in which any particular dimension would be presented, based on the mean average prevalence. These have been calculated by assuming a baseline number of 100 student interactions for each questionnaire respondent (that is, for each of the professional colleagues who responded to this baseline enquiry), hence generating a total hypothetical number of interactions of 3000 (30 QNR respondents x 100 interactions each). So, for example, the mean average prevalence for the dimension 'students show evidence of having difficulty putting their writing ideas into a sensible order' is 75.7%, based on the data collected from all respondents. This means that we might expect any one of our dyslexia support specialists to experience approximately 76 (independent) student interactions presenting this dimension out of every 100 student interactions in total. Scaled up as a proportion of the baseline 3000 interactions, this produces an expected number of 2271 interactions presenting this dimension.
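As a quick arithmetic check, the scaling just described amounts to two lines of calculation, reproduced here with the figures quoted in the text:

```python
respondents = 30          # baseline enquiry replies received
interactions_each = 100   # hypothetical baseline interactions per respondent
total = respondents * interactions_each          # 3000 interactions in all

mean_prevalence = 75.7    # % prevalence for the top-ranked dimension
expected = round(total * mean_prevalence / 100)  # expected interactions
print(expected)  # -> 2271
```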

Complex and fiddly as this process may sound at first, it was found to be very useful for gaining a better understanding of what the data means. With hindsight, a clearer interpretation might have been possible had the preamble to the questionnaire made very explicit that the interest was in independent student interactions, to try to ensure that colleagues did not count the same student visiting on two separate occasions presenting the same dimension each time. It is acknowledged that this may be a limiting factor in the consistency of the data collected, and mention of this has already been made above. We should note that this QNR has provided data about the prevalence of these 18 dimensions of dyslexia not from a self-reporting process amongst dyslexic students, but from the observation of these dimensions occurring in interactions between professional colleagues supporting dyslexia and the dyslexic students they work with in HE institutions across the UK. The QNR did not ask respondents to state the number of interactions on which their estimates of the prevalence of dimensions were based, nor over what time period but, given how busy dyslexia support professionals in universities tend to be, it might be safe to assume that the total number of interactions on which respondents' estimates were based is likely to have been reasonably large.

[graphic: dyslexia dimensions rankings]

Another factor worthy of mention is that correlations between dimensions have been calculated to obtain Pearson product-moment correlation coefficient 'r' values. It was felt that by exploring these potential interlinking factors, more might be learnt about dimensions that are likely to occur together. Aside from being interesting in itself, understanding more about correlations between dimensions could, for example, be helpful for developing suggestions and guidelines for dyslexia support tutors working with their students. So far at least, no research evidence has been found that considers the inter-relationships between characteristics of dyslexia in university students, nor whether there is value in devising strategies to jointly remediate them during study-skills tutorial sessions.

Although at present the coefficients have been calculated and scatter diagrams plotted to spot outliers and explore the impact that removing them has on r, a deeper investigation of what might be going on is a further development to be undertaken later. In the meantime, the full matrix of correlation coefficients, together with the associated scatter diagrams, is available on the project webpages here. Some of the linkages revealed do appear fascinating: for example, there appears to be a moderate positive correlation (r = 0.554) between students observed to be poor time-keepers and those who often get their 'lefts' and 'rights' mixed up; or students who are reported to be poor at following directions to get to places also appear to be observed as creative problem-solvers (r = 0.771). Some other inter-relationships are well-observed and unsurprising, for example r = 0.601 between the dimensions relating to poor working memory and confused writing. Whilst it is fully understood that correlation does not mean causation, time will nevertheless be set aside to revisit this part of the data analysis, as it is felt that there is plenty of understanding to be gained by exploring this facet of the enquiry more closely later.
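The coefficients referred to here are standard Pearson product-moment correlations between pairs of dimension columns. A minimal sketch of the calculation follows; the two short lists are hypothetical stand-ins for two dimensions' prevalence values across respondents (the real 30-respondent data is not reproduced here):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two lists."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Illustrative only: two small stand-in 'dimension' columns
time_keeping = [40, 55, 60, 70, 85]
lefts_rights = [35, 50, 65, 60, 90]
print(round(pearson_r(time_keeping, lefts_rights), 3))
```

Applied to every pair of the 18 dimension columns, this yields the full matrix of r values mentioned above.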

Feeding these results into the construction of the Dx Profiler

In the main research questionnaire, the Dyslexia Index Profiler formed the final section. All 18 dimensions were included, reworded slightly into first-person statements. Respondents were requested to adjust the input-value slider to register their degree of acquiescence with each statement. The questionnaire submitted raw scores to the researcher in the form of an e-mail displaying the data in the body of the e-mail and also as an attached .csv file. Responses were first collated into a spreadsheet, which was used to aggregate them into a weighted mean average using the weightings derived from Preliminary Enquiry 2 as described above. Two additional dimensions were included to provide some detail about learning biography: one to gain a sense of how the respondent remembered difficulties they may have experienced in learning to read in their early years, and the other about similar-letter displacement mistakes in their early writing:

  • when I was learning to read at school, I often felt I was slower than others in my class
  • In my writing at school, I often mixed up similar letters like 'b' and 'd' or 'p' and 'q'

It was felt that these two additional dimensions would elicit a sufficient sense of the early learning difficulties typically associated with the dyslexic child, but which are likely to have been mitigated in later years, especially amongst the population of more academically able adults who might be expected to be at university. These dimensions were not included in the baseline enquiry to dyslexia support professionals as it was felt that they would be unlikely to have knowledge about these aspects of a student's learning biography. The table below lists all 20 dimensions in the order and phraseology in which they were presented in the main research questionnaire, together with the weighting (w) assigned to each dimension's output value. It can be seen that the two additional dimensions were each weighted by a factor of 0.80, to acknowledge the strong association of these characteristics of learning challenges in early reading and writing with dyslexia biographies.

It should be noted, in accordance with comments earlier, that some statements have also been reworded to provide a better overall balance between dimensions that imply negative characteristics, and which might attract unreliable disacquiescence, and those which are more positively worded. For example, the dimension explored in the baseline enquiry as 'students' spelling is generally poor' is rephrased in the Dyslexia Index Profiler as 'My spelling is generally very good'. Given that poor spelling is a typical characteristic of dyslexia in early-years writing, it would be expected that although many dyslexic students at university have improved spelling, it remains a weakness and many rely on spellcheckers for correct spellings.

item #  item statement weighting
 3.01  When I was learning to read at school, I often felt I was slower than others in my class 0.80
3.02  My spelling is generally very good 0.53
3.03  I find it very challenging to manage my time efficiently 0.70
3.04  I can explain things to people much more easily verbally than in my writing 0.71
3.05  I think I am a highly organized learner 0.43
3.06  In my writing I frequently use the wrong word for my intended meaning 0.48
3.07  I generally remember appointments and arrive on time 0.64
3.08  When I'm reading, I sometimes read the same line again or miss out a line altogether 0.75
3.09  I have difficulty putting my writing ideas into a sensible order 0.76
3.10  In my writing at school, I often mixed up similar letters like 'b' and 'd' or 'p' and 'q' 0.80
3.11  When I'm planning my work I use diagrams or mindmaps rather than lists or bullet points 0.57
3.12  I'm hopeless at remembering things like telephone numbers 0.75
3.13  I find following directions to get to places quite straightforward 0.48
3.14  I prefer looking at the 'big picture' rather than focusing on the details 0.57
3.15  My friends say I often think in unusual or creative ways to solve problems 0.63
3.16  I find it really challenging to make sense of a list of instructions 0.52
3.17  I get my 'lefts' and 'rights' easily mixed up 0.52
3.18  My tutors often tell me that my essays or assignments are confusing to read 0.70
3.19  I get in a muddle when I'm searching for learning resources or information 0.64
3.20  I get really anxious if I'm asked to read 'out loud' 0.72

However, it is recognized that designing questionnaire items so as to best ensure the strongest veracity in responses can be challenging. Setting aside design styles that seek to minimize random error, the research literature reviewed appears otherwise inconclusive about the cleanest methods to choose and, significantly, little research appears to have been conducted about the impact of potentially confounding, latent variables hidden in response styles that may be dependent on questionnaire formatting (Weijters et al., 2004). Although only possible post hoc, analysis measures such as Cronbach's α can at least provide some idea about a scale's internal consistency reliability, although at the level of this research project it has not been possible to consider the variability in values of Cronbach's α that may arise through gaining data from the same respondents through different questionnaire styles, designs or statement wordings. Nevertheless, this 'unknown' is recognized as a potential limitation of the data collection process that must be mentioned, and these aspects of questionnaire design will be expanded upon in more detail in the final thesis.


Reverse coding data

Having a balance of positively and negatively-phrased statements brings other issues, especially when the data collected is numerical in nature and aggregate summary values are calculated. For each dimension statement, either a high score (indicating strong agreement with the statement) or a low score (conversely indicating strong disagreement) was expected to be a marker of a dyslexic profile. Since the scale is designed to provide a numerical indicator of 'dyslexia-ness', it seemed appropriate to aggregate the input-values recorded by respondents in such a way that a high aggregated score points towards a strong dyslexic profile. It had been planned to reverse-code scores for some statements so that the calculation of the final Dyslexia Index would not be upset by high and low scores cancelling each other out, where a high score for one statement and a low score for a different statement each indicated a dyslexic profile. Below is the complete list of 20 statements showing whether a 'high score = strong agreement (H)' or a 'low score = strong disagreement (L)' was expected to be the dyslexic marker.

Thus for the statement 'my spelling is generally very good', where it is widely acknowledged that individuals with dyslexia tend to be poor spellers, a low score indicating strong disagreement with the statement would be the marker for dyslexia, and so respondent values for this statement would be reverse-coded when aggregated into the final Dyslexia Index. However, the picture that emerged for many of the other statements, once the data had been collated and tabulated, was less clear. To explore this further, a Pearson product-moment correlation was run to calculate values of the correlation coefficient, r, for each statement against the final aggregated Dyslexia Index (Dx). Although it is accepted that this is a somewhat circular process, since each of the statements being correlated with Dx is part of the aggregated score that creates Dx, it was felt that this exploration might still provide a clearer picture for deciding which statements' data values should be reverse-coded and which should be left in their raw form. It was only possible to apply this analysis once all data had arrived from the deployment of the main research questionnaire (May/June 2016). In total, 166 complete questionnaire replies were received, of which 68 included a declaration that the respondent had a formally identified dyslexic learning difference.

These correlation coefficients are presented in the table below. The deciding criterion used was this: if the expectation was to reverse-code a statement's data and this was supported by a strong negative correlation coefficient, hence indicating that the statement is negatively correlated with Dx, then the reverse-coding process would be applied to the data. If the correlation coefficient indicated anything else - that is, ranging from weak negative to strong positive - the data would be left as it is. H/L indicates whether a High or a Low score was expected to be a marker for dyslexia, and 'RC' indicates a statement that is to be reverse-coded as a result of considering r.

w  statement  H / L  r  RC ?
 0.80  When I was learning to read at school, I often felt I was slower than others in my class  H  0.51  -
 0.53  My spelling is generally very good  L  - 0.52  RC
 0.70  I find it very challenging to manage my time efficiently  H  0.13  -
 0.71  I can explain things to people much more easily verbally than in my writing  H  0.60  -
 0.57  I think I am a highly organized learner  L  - 0.08  -
 0.48  In my writing I frequently use the wrong word for my intended meaning  H  0.67  -
 0.36  I generally remember appointments and arrive on time  L  0.15  -
 0.75  When I'm reading, I sometimes read the same line again or miss out a line altogether  H  0.41  -
 0.76  I have difficulty putting my writing ideas into a sensible order  H  0.51  -
 0.80  In my writing at school, I often mixed up similar letters like 'b' and 'd' or 'p' and 'q'  H  0.61  -
 0.57  When I'm planning my work I use diagrams or mindmaps rather than lists or bullet points  neutral  0.49  -
 0.75  I'm hopeless at remembering things like telephone numbers  H  0.41  -
 0.52  I find following directions to get to places quite straightforward  L  -0.04  -
 0.57  I prefer looking at the 'big picture' rather than focusing on the details  neutral  0.21  -
 0.63  My friends say I often think in unusual or creative ways to solve problems  H  0.20  -
 0.52  I find it really challenging to make sense of a list of instructions  H  0.49  -
 0.52  I get my 'lefts' and 'rights' easily mixed up  H  0.39  -
 0.70  My tutors often tell me that my essays or assignments are confusing to read  H  0.36  -
 0.64  I get in a muddle when I'm searching for learning resources or information  H  0.57  -
 0.72  I get really anxious if I'm asked to read 'out loud'  H  0.36  -

It can be seen from the summary table that the only dimension eventually reverse-coded was item 3.02, 'my spelling is generally very good', as this was the only one presenting a high(ish) negative correlation with Dx, of r = -0.52. It is of note that for the other dimensions suspected to require reverse-coding, their correlations with Dx are close to zero, which suggests that reverse-coding them or not will make little appreciable difference to the aggregated final Dyslexia Index.
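On the 0-100 slider scale assumed throughout, reverse-coding an item amounts to subtracting the recorded value from the scale maximum, so that disagreement with a positively worded item scores high. A minimal sketch:

```python
def reverse_code(value, scale_max=100):
    """Reverse-code a slider value so that strong disagreement with a
    positively worded item (e.g. 'My spelling is generally very good')
    scores high, matching the high-score = dyslexia-marker direction."""
    return scale_max - value

# A respondent strongly disagreeing (raw value 15) contributes 85
print(reverse_code(15))  # -> 85
```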

With the complete datapool now established from the 166 main research questionnaire replies received, it has been possible to look more closely at the correlation relationships between the dimensions. A commentary on this is posted on the project's StudyBlog (post title: 'reverse coding'), and a deeper exploration of these relationships is part of the immediate development objectives for this part of the project. It is of note, though, that by also running a Student's t-test for differences between independent samples' means (at the 0.05 critical level, one-tail test) for the mean value of each of the 20 dimensions in the Dyslexia Index Profiler between the two primary research groups (respondents with declared dyslexia, effectively the 'control' group, n = 68, and the remaining respondents, assumed to have no formally identified dyslexia, n = 98), significant differences between the means were identified for 16 out of the 20 dimensions. The 4 dimensions where no significant difference occurred between the sample means were:

  • I find it very challenging to manage my time effectively; (t = -1.1592, p = 0.113)
  • I think I am a highly organized learner; (t = -0.363, p = 0.717)
  • I generally remember appointments and arrive on time; (t = 0.816, p = 0.416)
  • I find following directions to get to places quite straightforward; (t = 0.488, p = 0.626)

... which suggests that these four dimensions have little or no impact on the overall value of the Dyslexia Index (Dx) and that they might therefore be omitted from the final aggregated score. In fact these same four dimensions were identified through the Cronbach's alpha analysis as possibly redundant items in the scale (details below). T-test results for the other 16 dimensions produced p-values very close to zero, indicating very highly significant differences in each dimension's mean value between the control group of dyslexic students and everyone else. So, as mentioned below, in the first-stage development of the Dyslexia Index Profiler these four dimensions have been removed, leaving a 16-item scale. In addition, data from this reduced scale has now been used to recalculate each respondent's Dyslexia Index, which is being used as the key discriminator to identify students with a dyslexia-like profile who are not known to be dyslexic, and hence to enable the research groups' academic behavioural confidence to be compared.
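For reference, the pooled-variance two-sample t statistic underlying these comparisons can be sketched in a few lines. The two short lists are illustrative stand-ins only; the real group data (n = 68 and n = 98) is not reproduced here:

```python
from statistics import mean, variance

def t_independent(a, b):
    """Pooled-variance two-sample t statistic for independent samples."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Illustrative stand-in group scores for one dimension
dyslexic_group = [72, 80, 65, 78, 70]
control_group = [55, 60, 48, 62, 50]
print(round(t_independent(dyslexic_group, control_group), 3))
```

The resulting t value would then be compared against the critical value for na + nb - 2 degrees of freedom at the chosen significance level.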


Internal consistency reliability - Cronbach's α

It has also now been possible to assess the internal consistency reliability of the Dyslexia Index Profiler using the 166 datasets received, with the data collated into the software application SPSS. Cronbach's Alpha (α) is widely used to establish the supposed internal reliability of data collection scales. It is important to bear in mind, however, that the coefficient measures the extent to which scale items reflect the consistency of scores obtained in a specific sample and does not assess the reliability of the scale per se (Boyle et al, 2015), because it reports a property of the responses of the individuals who actually took part in the questionnaire process. This means that although the alpha value provides some indication of internal consistency, it does not necessarily evaluate the homogeneity, that is, the unidimensionality, of the set of items that constitute a scale. Nevertheless, and with this caveat in mind, the Cronbach's Alpha process has been applied to the scales in the datasets collected from student responses to the main research questionnaire, using the reliability analysis feature in SPSS.

The α value for the Dyslexia Index (Dx) 20-item scale computed to α = 0.842, which seems to indicate a high level of internal consistency reliability. According to Kline (1986), an alpha value within the range 0.3 < α < 0.7 is to be sought, with preferred values closest to the upper limit of this range. Kline proposed that a value of α < 0.3 indicates that the internal consistency of the scale is fairly poor, whilst a value of α > 0.7 may indicate that the scale contains redundant items whose values are not providing much new information. It is encouraging to note that the same four dimensions identified and described in the section above did emerge as the most likely 'redundant' scale items, hence further validating the development of the reduced, 16-item scale for Dyslexia Index, as reported above. Additionally, an interesting paper by Schmitt (1996) highlights research weaknesses that are exposed by relying on Cronbach's Alpha alone to inform the reliability of questionnaires' scales, proposing that additional evaluators of the inter-relatedness of scale items should also be reported, particularly inter-correlations. SPSS has been used to generate the α value for the Dx scale, and the extensive output that accompanies the root value also presents a complete matrix of inter-correlations expressed as product-moment coefficients, r. This connects well with the intention, mentioned above, to explore the correlation inter-relationships between each of the dimensions gauged in the Dyslexia Index Profiler as a future development. The full table of r values computed for the complete datapool is shown below; these are the correlations between dyslexia dimension outputs from all 166 research participants.

On the basis of Kline's guidelines, the value of α = 0.842 possibly shows a suspiciously high level of internal consistency and hence some scale-item redundancy. SPSS is very helpful here, as one of the outputs it can generate shows how the alpha value would change if specific scale items were removed. Running this analysis showed that removing any single scale item produced a revised value of alpha within the range 0.833 < α < 0.863. This is quite a tight range of α values which, somewhat confusingly, might suggest that all scale items are in fact making a good contribution to the full 20-item value of α. It is intended to explore this in more detail, especially by using SPSS to remove all four of the apparently redundant items together, to observe the impact that this has on the value of Cronbach's α.
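
The alpha calculation, and SPSS's 'alpha if item deleted' output, can be reproduced with a short routine. This sketch uses invented scores for a small four-item scale, not the Profiler data, and the function names are my own:

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list of scores per scale item, aligned across respondents
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # each respondent's total
    item_var = sum(variance(i) for i in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

def alpha_if_deleted(items):
    # recompute alpha with each item removed in turn (mirrors the SPSS output)
    return [cronbach_alpha(items[:j] + items[j + 1:]) for j in range(len(items))]

# invented scores: four scale items, six respondents
items = [
    [3, 4, 5, 2, 4, 3],
    [3, 5, 4, 2, 5, 3],
    [2, 4, 5, 1, 4, 2],
    [4, 4, 5, 3, 5, 4],
]
alpha = cronbach_alpha(items)   # high, because the invented items co-vary strongly
```

Because these illustrative items rise and fall together across respondents, alpha comes out high; a genuinely redundant item would leave alpha almost unchanged (or raise it) when deleted.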


Table: matrix of inter-correlations (Pearson's r) between the 20 Dyslexia Index dimensions, lower triangle. Columns follow the same order as the rows, beginning with the first dimension: reading aloud | text | slow reader | words | writing | spelling bee | problem solving | lefts and rights | confused writing | mindmap | mixed up letters | systematic | lists | disorganized | gantt | clock | think big | speaking | compass
text 0.635                                      
slow reader 0.583 0.557                                    
words 0.478 0.498 0.488                                  
writing 0.433 0.621 0.433 0.583                                
spelling bee 0.406 0.418 0.400 0.513 0.356                              
problem solving 0.153 0.202 0.294 0.251 0.269 0.157                            
lefts and rights 0.255 0.272 0.264 0.363 0.335 0.365 0.310                          
confused writing 0.379 0.369 0.339 0.549 0.492 0.420 0.310 0.456                        
mindmap 0.231 0.267 0.295 0.454 0.368 0.248 0.272 0.216 0.396                      
mixed up letters 0.356 0.441 0.401 0.541 0.393 0.493 0.333 0.450 0.430 0.259                    
systematic 0.395 0.445 0.405 0.517 0.567 0.310 0.353 0.335 0.507 0.362 0.469                  
lists 0.153 0.401 0.382 0.409 0.474 0.307 0.310 0.329 0.392 0.337 0.539 0.353                
disorganized 0.017 -0.018 0.011 -0.048 -0.201 -0.113 0.014 0.022 -0.048 0.035 -0.029 -0.166 -0.225              
gantt 0.000 0.106 0.094 0.092 0.318 -0.024 0.169 0.013 0.034 0.139 0.090 0.312 0.034 -0.446            
clock -0.105 -0.093 -0.062 0.030 -0.090 -0.083 -0.146 -0.163 -0.110 0.004 -0.054 -0.125 -0.243 0.414 -0.291          
think big 0.019 0.178 0.127 0.177 0.244 0.027 0.193 0.099 0.005 0.240 0.127 0.005 0.048 0.102 0.084 0.173        
speaking 0.189 0.332 0.315 0.484 0.395 0.259 0.221 0.107 0.279 0.205 0.253 0.356 0.286 -0.056 0.183 0.123 0.316      
compass -0.100 -0.025 -0.008 -0.097 0.017 -0.140 0.019 -0.150 -0.059 0.071 -0.096 -0.104 -0.083 0.123 -0.088 0.198 0.134 -0.049    
memory 0.331 0.360 0.365 0.319 0.349 0.296 0.134 0.306 0.207 0.160 0.352 0.306 0.333 -0.041 0.190 -0.181 0.151 0.225 -0.191  



The matrix of inter-correlations for the metric Dx presents a wide range of correlation coefficients (above). These range from r = -0.446, between the scale item statements 'I think I'm a highly organized learner' and 'I find it very challenging to manage my time efficiently' – which might be expected; to r = 0.635, between the scale item statements 'I get really anxious if I'm asked to read out loud' and 'When I'm reading, I sometimes read the same line again or miss out a line altogether' – which we also might expect.
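
A matrix like the one above can be generated directly from the raw dimension scores. The sketch below builds the lower triangle from a small invented set of dimensions (the labels reuse the shorthand above, but the scores are hypothetical):

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def lower_triangle(dims):
    # dims: {label: scores}; returns {(row, col): r} for the lower
    # triangle, matching the layout of the matrix above
    labels = list(dims)
    return {(labels[i], labels[j]): pearson_r(dims[labels[i]], dims[labels[j]])
            for i in range(1, len(labels)) for j in range(i)}

# hypothetical scores for three dimensions across six respondents
dims = {
    'reading aloud': [55, 70, 40, 80, 60, 45],
    'text':          [50, 75, 45, 85, 55, 50],
    'gantt':         [70, 40, 75, 35, 50, 65],
}
tri = lower_triangle(dims)
# the two reading-related dimensions correlate strongly positively;
# the organization dimension runs the other way in this invented data
```

The full 20-dimension matrix is simply the same calculation over every pair of dimension score columns.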

This interesting spectrum of results has been explored in more detail through a Principal Component Analysis (PCA). When applied to these correlation coefficients to examine how highly correlated scale items might be brought together into a series of factors, five Dyslexia Index (Dx) factors emerged from the PCA process, and these were designated:

  • Dx Factor 1: Reading, Writing, Spelling;
  • Dx Factor 2: Thinking and Processing;
  • Dx Factor 3: Organization and Time Management;
  • Dx Factor 4: Verbalizing and Scoping;
  • Dx Factor 5: Working Memory;
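
To illustrate how PCA arrives at factors like these, consider the simplest possible case: two correlated dimensions. For a 2×2 correlation matrix the eigenvalues and eigenvectors have a closed form, so we can see directly how strongly correlated items collapse onto a single component. This is purely illustrative, borrowing the r = 0.635 figure from the matrix above ('reading aloud' vs 'text'); it is not the project's actual PCA.

```python
import math

def eigen_2x2_corr(r):
    # eigen-decomposition of the correlation matrix [[1, r], [r, 1]]:
    # eigenvalues are 1 + r and 1 - r, with eigenvectors along the
    # diagonals (1, 1)/sqrt(2) and (1, -1)/sqrt(2)
    v = 1 / math.sqrt(2)
    return [(1 + r, (v, v)), (1 - r, (v, -v))]

components = eigen_2x2_corr(0.635)
# the first component carries (1 + r) / 2 of the total variance:
share = components[0][0] / 2    # = 0.8175, ie about 82%
```

With r = 0.635 the first component explains about 82% of the joint variance of the two items, which is why PCA groups such items into a single factor; the full 20-dimension analysis applies the same principle to the whole correlation matrix.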

The complete results of this process are reported in detail below, with some hypotheses on what these analysis outcomes may mean presented later in the 'Discussion' section of this thesis. The PCA process applied to the metric Dyslexia Index has also enabled highly interesting visualizations of each research respondent's Dyslexia Index dimensional traits to be created. In the first of the three examples below, we see the profile map for the dataset of one respondent overlaid onto the profiles of dimension mean values for each of the research subgroups of students with dyslexia and students with no dyslexia. The respondent shown was from the non-dyslexic subgroup, although this individual's Dyslexia Index (Dx), established from the Dyslexia Index Profiler being reported in this section, was at a value more in line with those research participants who had disclosed their dyslexia (Dx = 682.5, in comparison with subgroup mean values of Dx = 662.82 for dyslexic students and Dx = 396.33 for non-dyslexic students). The radial axes are scaled from 0 to 100 and, as can be seen, this respondent's profile is clearly skewed towards the mean dyslexic profile in the three sectors from north-west to east (using a compass analogy), with additional close similarity at other dimensional markers. Setting aside the visual appeal of this presentation and the holistic overview of the spectrum of dimensions that it captures, as a broad indication of where this student is likely to be experiencing academic challenges, this representation of strengths and weaknesses could be of significant use to a university learning development and support professional approached by this student for help and guidance towards improving the quality of their academic output.
The second and third examples are also profiles of students from the non-dyslexic subgroup, but who present Dyslexia Index values of Dx = 708.6 and 655.3 respectively. These are provided to demonstrate different dimensional profiles which nevertheless still aggregate to a Dyslexia Index close to the mean Dx value for the dyslexic subgroup. This appears to add substance to the argument that looking at an individual's apparent dyslexia-ness on a dimension-by-dimension basis gives a better understanding of how these dimensions may impact on their academic study regime. It hence provides a valuable insight for developing effective learning scaffolds that might enable the learner to better understand their academic strengths and how these may be used to advantage, whilst at the same time creating learning development strategies that reduce the impact of challenges.



Example 1 (above): This student declared no dyslexic learning difference; however, their Dyslexia Index of Dx = 682.5 indicated a level of dyslexia-ness more in line with the mean value of students with dyslexia.



Example 2 (above): This visualization shows a level of dyslexia-ness of Dx = 708.6 and is the profile of a student who also declared no dyslexic learning difference.



Example 3 (above): With a Dx = 655.3, this student also indicated no dyslexic learning difference.


These dyslexia dimension profile visualizations are helpful for gaining a better understanding of how the different dimensions contribute to dyslexia-ness; a more analytic discussion of how they can be useful, and can contribute to a better understanding of what dyslexia means in higher education learning contexts, is presented in the Discussion section of this thesis. Developing this profile visualization concept will be a project for further research, where the focus will be on creating a data connection infrastructure that can generate these profiles directly from a respondent's inputs to the Dyslexia Index Profiler. That is itself a development for a later stage, but it is presented here in very early pilot form.



Reporting more than Cronbach's α

Further reading about internal consistency reliability coefficients has identified studies which firstly identify persistent weaknesses in the reporting of data reliability in research, particularly in the field of social sciences (eg Henson, 2001; Onwuegbuzie & Daniel, 2000, 2002). Secondly, useful frameworks are suggested for a better process of reporting and interpreting internal consistency reliability estimates which, it is argued, present a more comprehensive picture of the reliability of data collection procedures, particularly for data elicited through self-report questionnaires. Henson (op cit) strongly emphasizes the point that 'internal consistency coefficients are not direct measures of reliability, but rather are theoretical estimates derived from classical test theory' (2001, p177), which connects with Boyle's (2015, above) interpretation that this measure is relational to the sample from which the scale data is derived, rather than directly indicative of the reliability of the scale more generally. However, Boyle's view on scale-item homogeneity appears to differ from Henson's, who, contrary to Boyle's argument, does state that internal consistency measures offer an insight into whether or not scale items are combining to measure the same construct. Henson strongly advocates that when scale-item correlations are of a high order, this indicates that the scale as a whole is gauging the construct of interest with some degree of consistency – that is, that the scores obtained from this sample at least are reliable (Henson, 2001, p180). This apparent contradiction is less than helpful, and so in preparation for the final thesis of this research project this difference of views needs to be more clearly understood and reported, a task that will be undertaken as part of the project write-up.

At this stage, however, it has been found informative to follow some of these guidelines. Onwuegbuzie and Daniel (2002) base their paper on much of Henson's work, but go further by presenting recommendations proposing that researchers should always estimate and report:

  • internal consistency reliability coefficients for the current sample;
  • confidence intervals around internal consistency reliability coefficients – but specifically upper tail limit values;
  • internal consistency reliability coefficients and the upper tail confidence value for each sample subgroup (ibid, p92)

The idea of providing a confidence interval for Cronbach's α is attractive since, as discussed here, we now know that the value of the coefficient relates information about the internal consistency of scores, for items making up a scale, that pertains to that particular sample. Hence it represents merely a point estimate of the likely internal consistency reliability of the scale (and of course of the construct of interest) across all samples taken from the background population. Interval estimates are better, especially as the point estimate, α, is claimed by Cronbach himself in his original paper (1951) to be most likely a lower-bound estimate of score consistency, implying that the traditionally calculated and reported single value of α is likely to under-estimate the true internal consistency reliability of the scale were it applied to the background population. So Onwuegbuzie and Daniel's suggestion that one-sided confidence intervals (the upper bound) are reported in addition to the value of Cronbach's α is a good guide for more comprehensively reporting the internal consistency reliability of data, because it is this upper value which is more likely to be close to the true value.


Calculating the upper-limit confidence value for Cronbach's α

Confidence intervals are most usually specified to provide an interval estimate for a population mean, using a sample mean – a point estimate for the population mean – and building the confidence interval around it on the assumption that the background population follows the normal distribution. It follows that any point estimate of a population parameter might have a confidence interval constructed around it, provided we can accept the underlying assumption that the sampling distribution of the estimator is (approximately) normal. A correlation coefficient between two variables in a sample is a point estimate of the correlation between those two variables in the background population; if we took a separate sample from the population we might expect a different correlation coefficient to be produced, although there is a good chance that it would be of a similar order. Hence a distribution of sample correlation coefficients emerges, much akin to the distribution of sample means that constitutes the fundamental tenet of the Central Limit Theorem, and which permits us to generate confidence intervals for a background population mean based on sample data.
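
This sampling-distribution argument can be checked by simulation. The sketch below uses invented parameters (population correlation ρ = 0.5, samples of n = 50): it repeatedly draws correlated pairs, computes r for each sample, and confirms that the Fisher-transformed values have a spread close to 1/√(n−3):

```python
import math
import random
from statistics import stdev

def sample_r(n, rho, rng):
    # draw one sample of n correlated normal pairs and return its Pearson r
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ys = [rho * x + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(42)
# Fisher's transform of 2,000 sample correlation coefficients
zs = [math.atanh(sample_r(50, 0.5, rng)) for _ in range(2000)]
spread = stdev(zs)   # should come out close to 1/sqrt(50 - 3) ≈ 0.146
```

The standard deviation of the transformed coefficients lands near the theoretical standard error 1/√47, which is the property exploited in the next section.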

Fisher (1915) explored this idea to arrive at a transformation that maps the Pearson Product-Moment Correlation Coefficient, r, onto a value, Z', which he showed to be approximately normally distributed, so that confidence interval estimates can be constructed: Z' = ½ ln[(1 + r)/(1 − r)]. Given that Cronbach's α is essentially based on values of r, we can use Fisher's Z' to transform Cronbach's α and subsequently apply the standard processes for creating confidence interval estimates for the range of values of α we might expect in the background population. Fisher showed that the standard error of Z', which is required in the construction of confidence intervals, is related solely to the sample size: SE = 1/√(n−3).

So the upper-tail 95% confidence limit can now be generated for Cronbach's alpha values. To do this, the step-by-step process described by Onwuegbuzie and Daniel (op cit) was worked through, following a useful example of the process outlined by Lane (2013):

  • Transform the value for Cronbach's α to Fisher's Z'
  • Calculate the Standard Error (SE) for Z'
  • Calculate the upper 95% confidence limit as Z' + (SE × z) [for the upper tail of the 95% two-tail confidence interval, z = 1.96]
  • Transform the upper confidence limit for Z' back to a Cronbach's α internal consistency reliability coefficient.
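
These four steps reduce to a few lines of code. The sketch below follows them exactly (Python's math.atanh and math.tanh implement Fisher's Z' transform and its inverse); taking α = 0.852, the complete-datapool value reported in the tables below, with n = 166 gives an upper limit of about 0.889, matching the reported figure.

```python
import math

def alpha_upper_95(alpha, n):
    z = math.atanh(alpha)        # step 1: Fisher's Z' = 0.5 * ln((1+a)/(1-a))
    se = 1 / math.sqrt(n - 3)    # step 2: standard error of Z'
    z_upper = z + 1.96 * se      # step 3: upper 95% limit on the Z' scale
    return math.tanh(z_upper)    # step 4: back-transform to the alpha scale

upper = alpha_upper_95(0.852, 166)   # ≈ 0.889
```

The same function applied with the subgroup alphas and sample sizes reproduces the subgroup upper limits reported below.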

A number of online tools for the Fisher Z' transformation were found, but the preference has been to establish this independently in Excel using the transformation shown above. The table (right) shows the set of step-by-step cell calculations from the Excel spreadsheet and, in particular, the result for the upper 95% confidence limit for α for the Dyslexia Index Profiler scale (α = 0.889). This completes the first part of Onwuegbuzie & Daniel's (2002) additional recommendation by reporting not only the internal reliability coefficient, α, for the Dyslexia Index Profiler scale, but also the upper-tail boundary value of the 95% confidence interval for α.

The second part of their suggested improved reporting of Cronbach's α requires the same parameters to be reported for the subgroups of the main research group. In this study the principal subgroups divide the complete datapool into student respondents who declared an existing identification of dyslexia and those who indicated that they had no known learning challenges such as dyslexia. As detailed above, these research subgroups are designated research group DI (n = 68) and research group ND (n = 98) respectively. SPSS has been used again to analyse scale reliability, and the Excel spreadsheet calculator has generated the upper-tail 95% confidence limit for α. Results are shown collectively in the tables (right, and below).

These tables show the difference in the root values of α for each of the research subgroups: Dx-ND, α = 0.842; Dx-DI, α = 0.689. These are both 'respectable' values for Cronbach's α coefficient of internal consistency reliability, although at the moment I cannot explain why the value of α = 0.852 for the complete research datapool is higher than either of these values, which is puzzling. This will be explored and reported later. However, it is clear that, assuming this discrepancy is resolved with a satisfactory explanation, the upper-tail confidence interval boundaries for the complete research group and for both subgroups all present α values indicating a strong degree of internal consistency reliability for the Dyslexia Index scale, notwithstanding Kline's caveats mentioned earlier.

Cronbach's alpha results table




Qualitative data

This sub-section



Questionnaire deployment

This sub-section



Data receipt and collation

This sub-section



Data visualization

This sub-section



Statistical tools and processes

This sub-section reports on the statistical processes that have been used to analyse the data and the rationales for using them.

  • data collation
  • t-test rather than ANOVA
  • effect sizes
  • Principal Component Analysis
    Principal Component Analysis (PCA) performs dimensionality reduction on a set of data, especially a scale that is attempting to evaluate a construct. The point of the process is to see whether a multi-item scale can be reduced to a simpler structure with fewer components (Kline, 1994). For example, Sander & Sanders (2009) conducted a factor analysis of their original, 24-item Academic Behavioural Confidence (ABC) Scale, finding that it could be reduced to a 17-item scale with 4 factors, which they designated grades, verbalizing, studying and attendance. I have considered both the original 24-item scale and this reduced, 17-item, 4-factor scale in the analysis of the data collected in my project so far, which has revealed interesting results that are reported below.
    Much like the well-used Cronbach's alpha measure of internal consistency reliability, a factor analysis is specific to the dataset to which it is applied. Hence the factor analysis that Sander & Sanders (ibid) used to generate their reduced-item scale with four factors was derived from the collated datasets they had available from previous work with the ABC Scale, sizeable though this became (n = 865). The factor structure that their analysis derived may not be entirely applicable more generally, despite being widely used by other researchers in one form or another (eg: de la Fuente et al, 2013; de la Fuente et al, 2014; Hilale & Alexander, 2009; Ochoa et al, 2012; Willis, 2010; Keinhuis et al, 2011; Lynch & Webber, 2011; Shaukat & Bashir, 2016). Indeed, Stankov et al (in Boyle et al, 2015), in reviewing the Academic Behavioural Confidence Scale, implied that more work should be done on firming up some aspects of the ABC Scale, not so much by levelling criticism at its construction or theoretical underpinnings, but more to suggest that, as a relatively new measure (dating from 2003), it would benefit from wider application in the field and subsequent scrutiny of how it is built and what it is attempting to measure. Hence conducting a factor analysis of the data I collected using the original 24-item ABC Scale is worthwhile, because it may reveal an alternative factor structure that fits the context of my enquiry more appropriately, and it is also a response to Stankov's remarks.

return to the top




+44 (0)79 26 17 20 26 www.ad1281.uk | ad1281@live.mdx.ac.uk This page last edited: February 2018