Latest Results for Educational Psychology Review

Far Transfer of Metacognitive Regulation: From Cognitive Learning Strategy Use to Mental Effort Regulation

Abstract

Training in self-regulated learning is most effective when it supports learning strategies in combination with metacognitive regulation, and learners can then transfer the acquired metacognitive regulation skills to different tasks that require the same learning strategy (near transfer). However, whether learners can transfer metacognitive regulation skills acquired in combination with a specific learning strategy to the regulation of a different learning strategy (far transfer) is still under debate. While there is empirical evidence that learners can transfer metacognitive regulation between different learning strategies of the same type (e.g., from one cognitive learning strategy to another), whether transfer also occurs between learning strategies of different types is an open question. Here, we conducted an experimental field study with 5th and 6th grade students (N = 777). Students were cluster-randomized to one of three groups: two experimental groups receiving different training on the metacognitive regulation of a cognitive learning strategy and one control group receiving no training. After training, students worked on two different tasks; after each task, we measured their metacognitive regulation of a resource management strategy, that is, the investment of mental effort. Results (based on data from 368 students because of pandemic conditions) indicated far transfer of metacognitive regulation: after training, students in the training groups were better able to metacognitively regulate their mental effort than students in the control group. Although effect sizes were small, our results support the hypothesis of far transfer of metacognitive regulation.

An Individual Participant Data Meta-Analysis to Support Power Analyses for Randomized Intervention Studies in Preschool: Cognitive and Socio-Emotional Learning Outcomes

Abstract

There is a need for robust evidence about which educational interventions work in preschool to foster children’s cognitive and socio-emotional learning (SEL) outcomes. Lab-based individually randomized experiments can develop and refine such interventions, and field-based randomized experiments (e.g., cluster randomized trials) evaluate their effectiveness in real-world daycare center settings. Applying reliable estimates of design parameters in a priori power analyses is essential to ensure that the sample size of these studies is adequate to support strong statistical conclusions regarding the strength of the intervention effect. However, there is little knowledge about relevant design parameters for studies with preschool children. We therefore utilized a systematic collection of individual participant data from four German probability samples (554 ≤ N ≤ 2928) with preschool children (aged two to six years) to estimate and meta-analyze design parameters. These parameters are relevant for planning single-level (e.g., in non-clustered lab-based settings), two-level (children nested in daycare centers), and three-level (children nested in groups, with groups nested in daycare centers) randomized intervention studies targeting cognitive and SEL outcomes assessed with three methods (standardized tests, parent ratings, and educator ratings). The design parameters capture between-group and between-center differences as well as the proportion of variance in the outcomes explained by different covariate sets (socio-demographic characteristics, baseline measures, and their combination) at the child, group, and center level. In conclusion, this paper provides a rich source of design parameters, recommendations, and illustrations to support a priori power analyses for randomized intervention studies in early childhood education research.
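
To illustrate how such design parameters feed into an a priori power analysis, here is a minimal Python sketch of the standard minimum detectable effect size (MDES) formula for a two-level cluster randomized trial with treatment assigned at the center level (following Bloom, 1995, and Dong & Maynard, 2013). All numeric inputs (number of centers, children per center, intraclass correlation, explained variance) are hypothetical placeholders, not estimates from the paper.

```python
import math
from scipy import stats

def mdes_two_level_crt(J, n, icc, r2_level1=0.0, r2_level2=0.0,
                       p_treat=0.5, alpha=0.05, power=0.80,
                       n_center_covariates=0):
    """Minimum detectable effect size (standardized) for a two-level
    cluster randomized trial with treatment assigned to clusters
    (e.g., daycare centers)."""
    df = J - n_center_covariates - 2
    multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    # Variance of the treatment-effect estimate, split into the
    # between-center part (scaled by the ICC) and the within-center part.
    var_term = (icc * (1 - r2_level2) / (p_treat * (1 - p_treat) * J)
                + (1 - icc) * (1 - r2_level1)
                / (p_treat * (1 - p_treat) * J * n))
    return multiplier * math.sqrt(var_term)

# Hypothetical planning scenario: 40 centers, 15 children per center,
# ICC = .15, baseline covariates explaining 50% of variance at each level.
print(round(mdes_two_level_crt(J=40, n=15, icc=0.15,
                               r2_level1=0.5, r2_level2=0.5), 3))
```

With these placeholder inputs the detectable effect is roughly d ≈ 0.29; substituting the meta-analyzed design parameters reported in the paper would yield study-specific estimates.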

Developing the Mental Effort and Load–Translingual Scale (MEL-TS) as a Foundation for Translingual Research in Self-Regulated Learning

Abstract

Assessing cognitive demand is crucial for research on self-regulated learning; however, discrepancies in translating essential concepts across languages can hinder the comparison of research findings. Different languages often emphasize different components of cognitive demand and interpret certain constructs differently. This paper aims to develop a translingual set of items distinguishing between intentionally invested mental effort and passively perceived mental load as key differentiations of cognitive demand in a broad range of learning situations, as they occur in self-regulated learning. Using a mixed-methods approach, we evaluated the content, criterion, convergent, and incremental validity of this scale in different languages. To establish content validity, we conducted qualitative interviews with bilingual participants who discussed their understanding of mental effort and load. These participants translated and back-translated established and new items from the cognitive-demand literature into English, Dutch, Spanish, German, Chinese, and French. To establish criterion validity, we conducted preregistered experiments using the English, Chinese, and German versions of the scale. Within those experiments, we validated the translated items using established demand manipulations from the cognitive load literature with first-language participants. In a within-subjects design with eight measurements (N = 131), we demonstrated the scale’s criterion validity by showing sensitivity to differences in task complexity, extraneous load manipulation, and motivation for complex tasks. We found evidence for convergent and incremental validity, as shown by medium-sized correlations with established cognitive load measures. We offer a set of translated and validated items as a common foundation for translingual research. As best practice, we recommend using four items within a reference-point evaluation.

The Cronbach’s Alpha of Domain-Specific Knowledge Tests Before and After Learning: A Meta-Analysis of Published Studies

Abstract

Knowledge is an important predictor and outcome of learning and development. Its measurement is challenged by the fact that knowledge can be integrated and homogeneous, or fragmented and heterogeneous, and these characteristics can change through learning. They are at odds with current standards for test development, which demand a high internal consistency (e.g., Cronbach's Alphas greater than .70). To provide an initial empirical base for this debate, we conducted a meta-analysis of the Cronbach's Alphas of knowledge tests derived from an available data set. Based on 285 effect sizes from 55 samples, the estimated typical Alpha of domain-specific knowledge tests in publications was α = .85, 90% CI [.82, .87]. Alpha was this high despite a low mean item intercorrelation of .22 because the tests were relatively long on average and because bias in the test construction or publication process led to an underrepresentation of low Alphas. Alpha was higher in tests with more items, in tests with open answers, and at younger ages; it increased after interventions and throughout development, and it was higher for knowledge in languages and mathematics than in science and social sciences/humanities. Generally, Alphas varied strongly between knowledge tests and populations with different characteristics, reflected in a 90% prediction interval of [.35, .96]. We suggest this range as a guideline for the Alphas that researchers can expect for knowledge tests with 20 items and provide corresponding guidelines for shorter and longer tests. We discuss implications for our understanding of domain-specific knowledge and how fixed cut-off values for the internal consistency of knowledge tests bias research findings.
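
To see why a mean item intercorrelation as low as .22 can still produce a high Alpha in longer tests, consider the standardized-alpha (Spearman-Brown-type) relation α = k·r̄ / (1 + (k − 1)·r̄). The short Python sketch below is illustrative only and assumes the standardized form approximates the covariance-based Alpha reported in the meta-analysis.

```python
def standardized_alpha(k, mean_r):
    """Standardized Cronbach's Alpha for k items with mean
    inter-item correlation mean_r:
    alpha = k * r / (1 + (k - 1) * r)."""
    return k * mean_r / (1 + (k - 1) * mean_r)

# With the reported mean item intercorrelation of .22, a 20-item test
# already reaches the typical published Alpha of about .85.
for k in (5, 10, 20, 40):
    print(f"{k:>2} items: alpha = {standardized_alpha(k, 0.22):.2f}")
```

Test length, not item homogeneity, does the heavy lifting here: the same r̄ = .22 gives α ≈ .59 with 5 items but α ≈ .92 with 40 items, consistent with the paper's explanation of why published Alphas are so high.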

The Effect of Psychological Interventions on Statistics Anxiety, Statistics Self-Efficacy, and Attitudes Toward Statistics in University Students: A Systematic Review

Abstract

Psychological interventions offer a unique approach to enhancing the educational experience of university students. Unlike traditional teaching methods, these interventions directly address cognitive, emotional, and behavioural factors without requiring changes to course content, delivery methods, or involvement from the teaching team. This systematic review evaluated psychological interventions designed to reduce statistics anxiety, boost statistics self-efficacy, and/or foster positive attitudes toward statistics among university students enrolled in statistics courses. All included studies followed a longitudinal design with at least pre- and post-intervention assessments, comprising single-group studies, randomised controlled trials, and non-randomised controlled studies. The protocol of this systematic review was registered with PROSPERO. Search terms were entered into five databases. Screening, risk-of-bias assessment, and data extraction were conducted by two independent reviewers. A meta-analysis was not conducted because of the heterogeneity across the included studies; instead, a narrative synthesis was used to describe the results of 11 studies (1786 participants) targeting statistics anxiety, attitudes, self-efficacy, or a combination of these outcomes. Findings revealed that although no intervention was definitively effective in reducing statistics anxiety, some showed promise, especially those combining exposure with coping strategies. Moreover, the review identified interventions that effectively improved self-efficacy and attitudes, discussed important methodological considerations, and provided suggestions for future psychological interventions. Finally, further empirical research is needed to address existing limitations and fully understand the effectiveness of these interventions, particularly regarding statistics anxiety.
