8 March 2025 – Online First Publication: Journal of Experimental Psychology: Human Perception and Performance

Exposure to second-language accent prompts recalibration of phonemic categories.

Journal of Experimental Psychology: Human Perception and Performance, Mar 06, 2025, No Pagination Specified; doi:10.1037/xhp0001298

We examine how first-language (L1) Spanish listeners with varying levels of experience with English recalibrate their phonemic category boundaries following exposure to second-language (L2), American-English-accented Spanish. Specifically, we examine changes to voice onset time boundaries, which are often positively shifted when produced by American-English-accented Spanish speakers (as compared to L1 Spanish speakers). Our results demonstrate that listeners make adjustments to their phonemic category boundaries following exposure to accented words with the critical sounds in onset position (e.g., "bailar" and "parir," meaning "to dance" and "to give birth," for the /b/ and /p/ phonemic categories). In many cases, generalization of phonemic learning was also observed, such that boundaries for categories that were not presented in training were also adjusted. Surprisingly, however, there were cases in which boundaries for trained categories did not show adjustments; for example, listeners trained with items for all places of articulation showed recalibration of their bilabial boundary but not their alveolar and velar boundaries. Also notable was the role of the Spanish listeners' experience with English: More experienced listeners showed more positively shifted (English-like) boundaries in the pretest session. This suggests that more experienced listeners may have rapidly identified the American-English-accented Spanish and applied their English category boundaries accordingly. We conclude that listener accommodation of L2 accent is supported by a phonemic recalibration mechanism and that experience with the L1 of an L2-accented speaker facilitates rapid recalibration of phonemic categories. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
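
For readers unfamiliar with how such boundaries are measured: voice onset time (VOT) category boundaries are conventionally estimated from identification functions, with the boundary taken as the VOT at which the two category responses are equally likely. The sketch below illustrates that standard estimation step only; the data points, the logistic-fit choice, and the library used are illustrative assumptions, not the study's materials or analysis code.

```python
# Minimal sketch: estimating a voice-onset-time (VOT) category boundary
# from identification data. Data and fitting choices are illustrative
# assumptions, not taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical identification data: VOT of each stimulus (ms) and whether
# the listener reported the voiceless category (/p/ = 1, /b/ = 0).
vot_ms = np.array([-20, -10, 0, 10, 20, 30, 40, 50, 60, 70], dtype=float)
resp_p = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(vot_ms.reshape(-1, 1), resp_p)

# The category boundary is the VOT at which P(/p/) = 0.5,
# i.e. where the logistic's linear predictor crosses zero.
boundary = -model.intercept_[0] / model.coef_[0][0]
print(f"Estimated /b/-/p/ boundary: {boundary:.1f} ms VOT")
```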

Evidence from odor similarity judgments suggests a widespread ability to imagine odors.

Journal of Experimental Psychology: Human Perception and Performance, Mar 06, 2025, No Pagination Specified; doi:10.1037/xhp0001292

A persistent belief holds that humans can imagine visual content but not odors. While visual imagery is regarded as recreating a perceptual representation, it is unknown whether olfactory mental imagery shares a perceptual format. Visual imagery studies have demonstrated this perceptual formatting using distance and shape similarity judgments, whereas olfactory studies often use single-odor vividness ratings, complicating the establishment of perceptual formatting for odors. Using odor pair similarity scores from two experiments (odor-based: 8,880 ratings from 37 participants, including 20 women; label-based: 129,472 ratings from 2,023 participants, including 1,164 women), we observed a strong correlation (r = .71) between odor-based and label-based similarity ratings of odor pairs. The correlation was unaffected by gender and age and was present across a wide range of self-perceived olfactory functions. Pleasantness similarity was the main determinant of overall similarity for both odor-based (r = −.63) and label-based (r = −.45) odor pairs. We then used a large language model to derive semantic similarity scores for the labels of all odor pairs. Semantic similarity mediated only a small part of the observed correlation, further supporting our conclusions that odor imagery shares a perceptual format with vision, that odor percepts may be elicited from verbal labels alone, and that odor pair pleasantness may be a dominant and accessible feature in this regard. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
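
One simple way to read the mediation claim above is as a comparison between the raw odor-based/label-based correlation and the partial correlation after removing semantic similarity from both variables. The sketch below illustrates that logic on synthetic numbers; the variable names, effect sizes, and partial-correlation shortcut are assumptions for illustration, not the study's data or analysis pipeline.

```python
# Sketch of the mediation logic: does semantic similarity explain the
# correlation between odor-based and label-based similarity ratings?
# Synthetic data; not the study's data or analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 500
semantic = rng.normal(size=n_pairs)                  # semantic similarity of the two labels
odor_based = 0.4 * semantic + rng.normal(size=n_pairs)
label_based = 0.4 * semantic + 0.6 * odor_based + rng.normal(size=n_pairs)

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

raw = np.corrcoef(odor_based, label_based)[0, 1]
controlled = partial_corr(odor_based, label_based, semantic)
print(f"raw r = {raw:.2f}, partial r (semantic removed) = {controlled:.2f}")
# If the partial r stays close to the raw r, semantic similarity mediates
# only a small part of the association, as the study reports.
```
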
Before 8 March 2025 – Online First Publication: Journal of Experimental Psychology: Human Perception and Performance

Associations between musical expertise and auditory processing.

Journal of Experimental Psychology: Human Perception and Performance, Mar 03, 2025, No Pagination Specified; doi:10.1037/xhp0001312

Many studies have linked musical expertise with nonmusical abilities such as speech perception, memory, or executive functions. Far fewer have examined associations with basic auditory skills. Here, we asked whether psychoacoustic thresholds predict four aspects of musical expertise: music training, melody perception, rhythm perception, and self-reported musical abilities and behaviors (other than training). A total of 138 participants completed nine psychoacoustic tasks, as well as the Musical Ear Test (melody and rhythm subtests) and the Goldsmiths Musical Sophistication Index. We also measured and controlled for demographics, general cognitive abilities, and personality traits. The psychoacoustic tasks assessed discrimination thresholds for pitch and temporal perception (both assessed with three tasks), and for timbre, intensity, and backward masking (each assessed with one task). Both music training and melody perception predicted better performance on the pitch-discrimination tasks. Rhythm perception was associated with better performance on several temporal and nontemporal tasks, although none had unique associations when the others were held constant. Self-reported musical abilities and behaviors were associated with performance on one of the temporal tasks: duration discrimination. The findings indicate that basic auditory skills correlate with individual differences in musical expertise, whether expertise is defined as music training or musical ability. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Category-specific effects of high-level relations in visual search.

Journal of Experimental Psychology: Human Perception and Performance, Mar 03, 2025, No Pagination Specified; doi:10.1037/xhp0001300

Recent empirical findings demonstrate that, in visual search for a target in an array of distractors, observers exploit information about object relations to increase search efficiency. We investigated how people searched for interacting people in a crowd, and how the eccentricity of the target affected this search (Experiments 1–3). Participants briefly viewed crowded arrays and had to search for an interacting dyad (two bodies face-to-face) among noninteracting dyads (back-to-back distractors), or vice versa, with the target presented in the attended central location or at a peripheral location. With central targets, we found a search asymmetry, whereby interacting people among noninteracting people were detected better than noninteracting people among interacting people. With peripheral targets, the advantage disappeared, or even tended to reverse in favor of noninteracting dyads. In Experiments 4–5, we asked whether the search asymmetry generalized to object pairs whose spatial relations did or did not form a functionally interacting set (a computer screen above a keyboard vs. a computer screen below a keyboard). We found no advantage for interacting over noninteracting sets either in central or peripheral locations for objects, but, if anything, evidence for the opposite effect. Thus, the effect of relational information on visual search is contingent on both stimulus category and attentional focus: The presentation of social interaction, but not of nonsocial interaction, at the attended (central) location readily captures an individual's attention. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

One object with two identities: The rapid detection of face pareidolia in face and food detection tasks.

Journal of Experimental Psychology: Human Perception and Performance, Mar 03, 2025, No Pagination Specified; doi:10.1037/xhp0001296

Humans are so sensitive to faces and face-like patterns in the environment that sometimes we mistakenly see a face where none exists, a common illusion called "face pareidolia." Examples of face pareidolia, "illusory faces," occur in everyday objects such as trees and food and contain two identities: an illusory face and an object. Here, we examined illusory faces in a rapid serial visual presentation paradigm over three experiments to explore the detectability of illusory faces under various task conditions and presentation speeds. The first experiment revealed rapid and reliable detection of illusory faces even with only a glimpse, suggesting that face pareidolia arises from an error in rapidly detecting faces. Experiment 2 demonstrated that illusory facial structures within food items did not interfere with the recognition of the object's veridical identity, affirming that examples of face pareidolia maintain their objecthood. Experiment 3 directly compared behavioral responses to illusory faces under different task conditions. The data indicate that, with extended viewing time, the object identity dominates perception. From a behavioral perspective, the findings reveal that illusory faces have two distinct identities, as both faces and objects, that may be processed in parallel. Future research could explore the neural representation of these unique stimuli under varying circumstances and attentional demands, providing deeper insights into the encoding of visual stimuli for detection and recognition. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Hands-on adaptation: Bodily stimuli increase size adaptation aftereffect.

Journal of Experimental Psychology: Human Perception and Performance, Feb 27, 2025, No Pagination Specified; doi:10.1037/xhp0001294

The size adaptation aftereffect is a perceptual phenomenon in which a stimulus is perceived as smaller (or larger) after exposure to a larger (or smaller) stimulus. Given that the perceived size of body parts is, for biological reasons, computed with high accuracy, it is currently unclear whether body parts are differently susceptible to illusory size misperceptions, such as those induced by adaptation paradigms. We induced the Uznadze illusion (i.e., a size-contrast adaptation aftereffect) to investigate its effect on stimuli depicting body parts (hands) or nonbody stimuli (i.e., abstract shapes). In three experiments, pairs of hands or nonhands were presented in separate sessions. After repeated exposure to two stimuli of different sizes, one larger and one smaller, participants judged the size of two new stimuli. In Experiments 1 and 2, we found a stronger effect for hands than nonhands. In Experiment 3, we enhanced the similarity between hand and nonhand stimuli and confirmed a stronger adaptation for hands, but only when participants performed the task with nonhand stimuli in the first session. These results indicate that visual hand stimuli are more susceptible to size adaptation, suggesting that the identity and meaning attributed to a stimulus can influence the perceptual aftereffect. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Reinvestigating endogenous attention and perceived duration of peripheral stimuli: Differential effects for neutral versus valid and invalid cues.

Journal of Experimental Psychology: Human Perception and Performance, Feb 27, 2025, No Pagination Specified; doi:10.1037/xhp0001307

Research has shown that increasing stimulus eccentricity can shorten temporal estimations and integration. Endogenous attention has been shown to prolong subjective duration and stimulus processing, especially for more peripheral stimuli. This study investigates the impact of endogenous attention on the perceived duration of peripheral stimuli. In a temporal bisection task, participants judged the varying duration of a probe stimulus (20–220 ms), presented at 3° or 9° of eccentricity to the left or right of fixation, as either short or long. The probe stimulus was preceded by either a valid or a neutral central arrow cue (Experiment 1) or a valid or an invalid central arrow cue (Experiment 2) to manipulate endogenous attention. Eye movements were monitored with an eye tracker. In both experiments, subjective duration decreased with increasing stimulus eccentricity, consistent with earlier findings. Reaction times were shorter for valid cues in both experiments, indicating that the cue was successful in shifting attention. While there was no significant difference in perceived duration between valid and neutral cues (Experiment 1), perceived duration was lower for invalid cues compared to valid cues (Experiment 2). In both experiments, there was no interaction between eccentricity and cue. The results are discussed in the context of the processes underlying temporal processing and the notion that perceived duration does not differ between attention distributed over the screen and attention directed toward the peripheral stimulus, whereas directing attention away from the stimulus shortens perceived duration. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
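
In temporal bisection tasks of this kind, perceived duration is commonly summarized by the bisection point, the probe duration judged "long" on half of the trials, so a higher bisection point indicates a shorter subjective duration. A minimal sketch of that summary, using made-up proportions and simple linear interpolation rather than any particular psychometric model:

```python
# Sketch: bisection point (duration judged "long" 50% of the time) from a
# proportion-long curve, via linear interpolation. Numbers are illustrative.
import numpy as np

durations_ms = np.array([20, 60, 100, 140, 180, 220], dtype=float)
p_long = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])  # hypothetical data

# np.interp interpolates duration as a function of the (increasing)
# proportion-"long" values and is queried at 0.5.
bisection_point = np.interp(0.5, p_long, durations_ms)
print(f"Bisection point: {bisection_point:.0f} ms")
# A higher bisection point for a condition means durations in that condition
# were perceived as shorter (more durations judged "short").
```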

Influences on the emergence of strategic visuomotor learning mechanisms in school-aged children.

Journal of Experimental Psychology: Human Perception and Performance, Feb 20, 2025, No Pagination Specified; doi:10.1037/xhp0001291

Visuomotor development is based on implicit procedural and explicit strategic learning mechanisms. Given that both of these mechanisms are associated with child maturation, we sought to explore the effect of three related variables on adaptation rates: chronological age, intelligence quotient, and motor skills. In our study, 86 healthy school-aged children (grouped into 6–7, 8–9, and 10–11 years) with no reported visual or developmental disorders participated in a prism-throwing task under two different conditions. In the first condition, we introduced a wedge prism that displaces the visual field laterally. Adapting to this kind of visual perturbation relies mainly on procedural mechanisms. In the second condition, we introduced a dove prism, which reverses the visual field horizontally, allowing us to evaluate explicit strategic learning mechanisms. Most of the children, regardless of age, managed to adapt using implicit procedures based on error feedback. However, a greater proportion of older children were able to adopt explicit strategies to counteract the optical disturbance generated by the dove prism, irrespective of motor ability scores or intelligence quotient. Our results suggest that adopting strategic mechanisms depends more on chronological development than on intelligence or motor skills. In contrast, implicit error-based visuomotor learning consolidates from an early age. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Top-down inhibitory control of singleton distractors: Distractor type and time course.

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001283

Attentional capture by irrelevant distractors can impede people's performance, making the cognitive control of attentional capture an important mechanism. Previous studies have investigated this top-down inhibitory control mechanism by providing participants with foreknowledge of the upcoming distractor, but their findings are mixed. This study further examined whether precueing the spatial location of color-singleton or onset-singleton distractors in the upcoming search array can reduce the related interference. Experiments 1A and 2A demonstrated a benefit of precueing the spatial location of color-singleton distractors. In contrast, the overall performance in Experiments 1B, 1C, and 2B did not reveal a benefit of precueing when onset-singleton distractors were presented. In addition, analyses of vincentized cumulative reaction time distributions, the spatial distribution of the suppression effect in each experiment, and memory probes in Experiments 2A and 2B suggest that the effects of distractor precueing may involve the coordination of proactive and reactive control mechanisms. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
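
Vincentizing, mentioned in the analyses above, averages reaction time distributions across participants by reducing each participant's RTs to a common set of quantiles and then averaging those quantiles. A minimal sketch with simulated RTs and an arbitrary decile grid (both assumptions, not the study's data):

```python
# Minimal sketch of vincentizing (quantile-averaging) RT distributions
# across participants. RT data and quantile grid are illustrative.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical RTs (ms) for each participant in one condition.
participants = [rng.lognormal(mean=6.3, sigma=0.25, size=200) for _ in range(20)]

quantile_levels = np.arange(0.1, 1.0, 0.1)   # deciles .1, .2, ..., .9

# Compute each participant's RT quantiles, then average across participants.
per_participant_quantiles = np.array(
    [np.quantile(rts, quantile_levels) for rts in participants]
)
vincentized = per_participant_quantiles.mean(axis=0)

for q, rt in zip(quantile_levels, vincentized):
    print(f"{q:.1f} quantile: {rt:.0f} ms")
# Comparing vincentized distributions between conditions shows whether an
# effect (e.g., a precueing benefit) is present across the whole RT range or
# only in the slower tail.
```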

Reporting confidence decreases response and change-of-mind accuracy in a perceptual decision task.

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001297

Self-monitoring seems to be crucial for regulatory behavior, but it is not clear how it influences performance in simple cognitive tasks. Some studies suggest that increased monitoring improves metacognitive regulation and enhances performance, while others suggest it impairs learning, problem solving, or perceptual processes. We investigated whether the requirement to report confidence in perceptual decisions affects metacognitive regulation and response accuracy. Participants performed a visual discrimination task in which they provided two responses: initial and final. Depending on the condition, participants reported their confidence (either together with or following the initial decision), performed an additional task, or were asked to observe a blank screen between two responses. We expected that reporting decision confidence would induce efficient regulatory activity, which would benefit final accuracy. In three experiments, we did not find evidence that rating confidence improves regulatory processing or performance in perceptual tasks. Rather, when confidence ratings were retrospective, the final response improvement was smaller compared to the condition with no additional task, and changes of mind were less frequent and less corrective. Confidence ratings given jointly with the initial response generally decreased accuracy. The results suggest that deliberate monitoring might put additional strain on cognitive resources and impair lower-order task processing. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

From tusk till horn: Modulating feature boundaries in action control.

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001280

In the literature on action control, it is assumed that all stimulus features that occur in an action episode are integrated together with the response features into an event file. Any ensuing repetition of a feature stored in this event file leads to the retrieval of the entire event file, causing stimulus–response (S–R) binding effects because of the relation between repeated and changed features. However, retrieval depends on the extent to which a particular feature is actually repeated and thus touches on the question of what constitutes a feature. Since not only perceptual but also conceptual features are assumed to be bound, the boundaries between feature representations might be not only fluid but also modulable. In this study, we evaluated whether a direct manipulation of feature boundaries is possible. In three experiments (cumulative n = 217), by adding counting tasks to a distractor–response binding task, we either merged or separated feature categories, causing a significant difference in S–R binding effects: merged feature categories produced weaker S–R binding effects than separated feature categories. The results indicate that merged features came to be processed as more similar to each other. We interpret our data in light of the broader, long-standing question of what a feature actually is and suggest that feature boundaries are task dependent. Human agents are highly flexible in controlling the internal representation of objects they interact with. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
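
For context, S–R (distractor–response) binding effects are conventionally quantified as the interaction of response relation and distractor relation across prime–probe pairs. The sketch below computes that interaction score from condition means; the design labels and the numbers are assumptions for illustration, not values from this study.

```python
# Sketch: the standard S-R (distractor-response) binding score as the
# interaction of response relation x distractor relation in a prime-probe
# design. Condition means (ms) are made up for illustration.
mean_rt = {
    ("response_repeat", "distractor_repeat"): 520.0,
    ("response_repeat", "distractor_change"): 545.0,
    ("response_change", "distractor_repeat"): 560.0,
    ("response_change", "distractor_change"): 540.0,
}

# With these numbers, distractor repetition helps when the response also
# repeats (retrieval of a matching event file) and hurts when it changes.
benefit_on_response_repeat = (
    mean_rt[("response_repeat", "distractor_change")]
    - mean_rt[("response_repeat", "distractor_repeat")]
)
cost_on_response_change = (
    mean_rt[("response_change", "distractor_repeat")]
    - mean_rt[("response_change", "distractor_change")]
)
binding_effect = benefit_on_response_repeat + cost_on_response_change
print(f"S-R binding effect: {binding_effect:.0f} ms")
```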

Template-based and saliency-driven attentional control converge to coactivate on a common, spatially organized priority map.

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001287

Visual attention can be controlled both by a match to known target attributes (template-based guidance) and by physical salience (saliency-driven guidance). However, it remains unclear how these mechanisms interact to determine attentional priority. Here, we contrasted two accounts of this interaction. Under a coactive mechanism, template-based and saliency-driven guidance are simultaneously integrated in a common priority signal. Under a noncoactive mechanism, the two sources of control do not converge on a common priority signal, either because they are separated architecturally (separate-activations model) or temporally (sequential model). In a redundancy-gain paradigm, search targets were defined either as a match to a shape cue (template-based), the presence of a singleton-colored item (saliency-driven), or both (redundant). We assessed whether the response time distribution in the redundant condition contained a substantial proportion of trials that were faster than could have been generated by the faster of the two individual guidance processes operating independently in parallel, that is, violation of the race model inequality (RMI). This effect can be generated only by a coactive mechanism. The results showed robust violations of the RMI when both features appeared at the same location, consistent with a coactive model. In addition, violations of the RMI were eliminated when redundant features were displayed at different locations, indicating that guidance signals combine on a spatially organized priority map. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
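
For reference, the race model inequality invoked here is Miller's (1982) bound: if the two guidance processes race independently, the cumulative response time distribution in the redundant condition can never exceed the sum of the two single-signal distributions at any time point. Stated compactly (the condition labels follow the paradigm described above):

```latex
% Race model inequality (Miller's 1982 bound), stated for the two guidance
% signals in this paradigm; F denotes a cumulative response time distribution.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For every time $t$, an independent parallel race predicts
\[
  F_{\mathrm{redundant}}(t) \le F_{\mathrm{template}}(t) + F_{\mathrm{salient}}(t).
\]
Observing $F_{\mathrm{redundant}}(t)$ above this bound at any $t$ violates the
inequality and implies coactivation of the two guidance signals.
\end{document}
```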

Visuotactile correlation increases the integration of visual body-related effects into action representation.

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001285

Understanding how individuals represent their bodies in action is crucial for the cognitive sciences. The theory of event coding suggests that actions are coded by their perceptual effects. Yet the spatial representation of actions in the presence of simultaneous body-related effects is less well understood. This study aimed to explore whether a correlation between visual and tactile effects could integrate visual information into action representations. Indeed, spatiotemporal correlation among sensory signals coming from various modalities is known to be a critical factor, especially in studies on body representations. We manipulated visual feedback by inverting it on the horizontal plane. The first group performed an induction task involving stroking a surface with a time lag between tactile and visual feedback (asynchronous group), while the second experienced no time lag (synchronous group). Participants then rated their subjective feeling of referral of touch (RoT), which corresponds to the perceived location of their index finger's tactile sensations. Subsequently, both groups completed the Simon task to assess spatial action coding. Results indicated no significant differences in RoT ratings between groups; however, the Simon task showed that the synchronous group coded responses based on the visual effects' locations, unlike the asynchronous group. Additionally, a correlation was observed between RoT ratings and the Simon task. These findings suggest that when multiple body-related action effects compete, visual effects may prevail if there is a temporal correlation between visual and tactile effects. These findings underscore the importance of spatiotemporal correlations in coding actions and support the linkage between action and body representation processes. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

When irrelevant-feature priming fails: Encoding failure or failure to guide attention?

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001279

We tend to prioritize features and locations that have recently received our attention. Surprisingly, even irrelevant features of recently attended targets enjoy increased priority. However, such irrelevant-feature priming was found for some features and not for others. Here, we inquired whether the fact that irrelevant-feature priming is sometimes absent results from a failure of encoding or from a failure of attentional guidance. To answer this question, we relied on a finding common to the visual search and attentional-control literature: when a stimulus is responded to, the features and motor response associated with the action event are bound in a common representation and can later be retrieved. In two experiments, some participants searched for a color target and others for a shape target, with shape and color serving as the target's irrelevant feature for the former and for the latter, respectively. Responding to the target required an easy discrimination (Experiment 1) or a difficult one (Experiment 2). Repeating the target's irrelevant color speeded search, but repeating its irrelevant shape did not. However, the irrelevant feature–response binding effect was similar for the two search dimensions. These findings invalidate the no-encoding account. Additional findings indicate that irrelevant-feature priming shares the main characteristics of other intertrial priming phenomena. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Disentangling the contributions of spatiotopic, retinotopic, and configural frames of reference to the filtering of probable distractor locations.

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001293

Human observers can allocate their attention to locations likely to contain a target and can also learn to avoid locations likely to contain a salient distractor during visual search. However, it is unclear which spatial frame of reference such learning is applied to. As such, our aim was to systematically disentangle the contributions of spatiotopic, retinotopic, and configural frames of reference to provide a comprehensive account of how the probabilistic distractor filtering effect comes about. We first demonstrate that the filtering effect is better determined by the probability of a salient distractor appearing at a relative location (i.e., in relation to one's eye position or an item's position in relation to other items within a display) rather than a fixed (spatiotopic) location, by varying the position of visual search arrays (along with fixation) across spatial contexts. We then separate retinotopic and configural reference frames by varying the configural but not retinotopic properties of biased (i.e., displays containing a probable distractor location) and unbiased visual search arrays and vice versa. In doing so, we find the filtering effect to be restricted to biased contexts when retinotopic positions are maintained, but configural properties are varied. In contrast, when the configural properties of visual search arrays are maintained, we show the transfer of the filtering effect across retinotopic positions. Thus, we demonstrate that probabilistic distractor filtering primarily emerges via a configural representation that codes the relative positions of items within search displays independent of spatiotopic and retinotopic coordinates. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Random rewards reduce task-switch costs.

Journal of Experimental Psychology: Human Perception and Performance, Feb 17, 2025, No Pagination Specified; doi:10.1037/xhp0001288

Task switching is effortful as it requires overcoming habitual task sets and reconfiguring new ones. Smaller switch costs can facilitate productivity and efficiency in modern work environments, where multitasking is crucial. However, the mechanisms to reduce switch costs are not fully understood, limiting reliable enhancement of this skill through interventions. While recent perspectives have hypothesized that noncontingent, or random, rewards could reduce switch costs, this has yet to be demonstrated. This study reports four experiments investigating the impact of random rewards on switch costs using a cued task-switching paradigm. Specifically, participants received rewards unrelated to the task, presented as gifts randomly at the end of some trials but not others. Results from all four experiments consistently showed that random rewards led to smaller switch costs compared to no rewards. However, this reduction was due to significant slowing after random reward presentations on both subsequent switch and repeat trials. Experiment 2 demonstrated that contingency played a role, with smaller switch costs observed only in the random reward condition, not in the performance-contingent reward condition. Experiments 3a and 3b further showed that reduced switch costs occurred only in the random reward condition, not in the oddball condition, despite both involving unexpected visual events, suggesting that the value of random rewards is critical. These findings provide initial evidence that random rewards can reduce switch costs, manifesting as post-reward slowing on both switch and repeat trials. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
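
As background, switch costs in cued task switching are conventionally computed as the difference between mean performance on switch and repeat trials, and a reward effect on switch costs is then a difference of those differences. A toy illustration with assumed condition means (not the study's data):

```python
# Toy illustration: switch cost = mean RT on switch trials minus mean RT on
# repeat trials; the reward effect is the difference in switch costs.
# Condition means (ms) are assumed for illustration only.
mean_rt = {
    ("no_reward", "repeat"): 650.0,
    ("no_reward", "switch"): 800.0,
    ("random_reward", "repeat"): 700.0,   # post-reward slowing on repeats...
    ("random_reward", "switch"): 820.0,   # ...and on switches
}

def switch_cost(condition):
    return mean_rt[(condition, "switch")] - mean_rt[(condition, "repeat")]

for condition in ("no_reward", "random_reward"):
    print(f"{condition}: switch cost = {switch_cost(condition):.0f} ms")
# A smaller switch cost under random reward that arises from slowing on both
# trial types (as in these assumed numbers) matches the pattern described.
```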

The impact of model eyesight and social reward on automatic imitation in virtual reality.

Journal of Experimental Psychology: Human Perception and Performance, Jan 23, 2025, No Pagination Specified; doi:10.1037/xhp0001271

Motivational theories of imitation state that we imitate because this led to positive social consequences in the past. Because movement imitation typically only leads to these consequences when perceived by the imitated person, it should increase when the interaction partner sees the imitator. Current evidence for this hypothesis is mixed, potentially due to the low ecological validity of previous studies. We conducted two experiments (N = 94 in Experiment 1; N = 110 in Experiment 2) in which we resolved this limitation by placing participants in a virtual environment with a seeing and a blindfolded virtual agent, where they reacted to auditory cues with a head movement to the left or right, while the agent(s) also made a left or right head movement. We tested the effect of model eyesight (Experiments 1 and 2) and social reward on imitation (Experiment 2). Data were collected in 2023 and 2024. As expected, participants tended to imitate the agents. However, we found only limited evidence for the effect of model eyesight on automatic imitation in Experiment 1 and no evidence for the effect of model eyesight or social reward in Experiment 2. These findings challenge claims made by motivational theories. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Perceptual learning of modulation filtered speech.

Journal of Experimental Psychology: Human Perception and Performance, Jan 23, 2025, No Pagination Specified; doi:10.1037/xhp0001274

Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal. We systematically filtered speech to control modulation content during training and test blocks. Perceptual learning was highly specific to the modulation filter heard during training, consistent with the hypothesis that learning involves changes to representations of speech modulations. In further experiments, we used modulation filtering and different feedback regimes (clear speech vs. written feedback) to investigate the role of talker-specific cues for cross-talker generalization of learning. Our results suggest that learning partially generalizes to speech from novel (untrained) talkers but that talker-specific cues can enhance generalization. These findings are consistent with the proposal that perceptual learning entails the adjustment of internal models that map acoustic features to phonological categories. These models can be applied to degraded speech from novel talkers, particularly when listeners can account for talker-specific variability in the acoustic signal. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Social and goal-related foundations of interpersonal adaptation during joint action.

Journal of Experimental Psychology: Human Perception and Performance, Jan 16, 2025, No Pagination Specified; doi:10.1037/xhp0001273

Collaborative motor interactions (joint actions) require relating to another person (social dimension) whose contribution is needed to achieve a shared goal (goal-related dimension). We explored if and how these dimensions modulate interactive behavior by examining posterror interpersonal adaptations. In two experiments carried out in 2022 (N1 = 23; N2 = 24, preregistered), participants played sequences of notes in turn-taking with a coactor described either as another participant or as the computer (human vs. nonhuman coactor, social manipulation) while pursuing shared or individual goals (goal-related manipulation). The coactor was programmed to make a mistake in 50% of the trials. We found that, only in the shared goal condition, participants were slower when interacting with a human than with a nonhuman coactor, depending on how strongly they believed the human coactor was a real participant. Moreover, the general slowdown following a partner's error was absent when the action required from the participant corresponded to what the coactor should have done (correction tendency effect). This effect was found only in the shared goal condition, without differences between coactors, suggesting it was driven by goal-related representations. The social and goal-related dimensions thus independently but significantly shape interpersonal adaptations during joint action. (PsycInfo Database Record (c) 2025 APA, all rights reserved)

Running after two hares in visual working memory: Exploring retrospective attention to multiple items using simulation, behavioral outcomes, and eye tracking.

Journal of Experimental Psychology: Human Perception and Performance, Jan 13, 2025, No Pagination Specified; doi:10.1037/xhp0001270

Multi-item retro-cueing effects refer to better working memory performance for multiple items when they are cued after their offset compared to a neutral condition in which all items are cued. However, several studies have reported boundary conditions, and findings have also sometimes failed to replicate. We hypothesized that a strategy of focusing on only one of the cued items could produce these inconsistent patterns. In Study 1, a Monte Carlo simulation showed that randomly selecting one of the cued items as the focus in each trial increased the chance of obtaining significant "multi-item retro-cueing effects" in mean accuracy over trials, yielding an incorrect conclusion if interpreted as evidence that all the cued items were attended. These high rates of obtaining such data fit with the inconsistent patterns in the literature. To circumvent this issue, we conducted two new experiments (Studies 2A and 2B) in which participants were explicitly instructed to fixate their gaze on all the cued positions, verified through eye tracking (Study 2B). These produced robust multi-item retro-cueing effects regardless of previously identified boundary conditions. Notably, gaze was clearly directed to multiple cued positions within each trial. Nevertheless, simulation revealed that our accuracy patterns could also, in principle, be produced by single-item enhancement on each trial. The present study forms a first step toward disentangling overt gaze-based allocation of attention from single-item focusing strategies, while also highlighting the need for improved methodologies to probe genuine multiplicity in working memory. (PsycInfo Database Record (c) 2025 APA, all rights reserved)
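
The Study 1 simulation logic can be captured in a few lines: an observer who prioritizes only one randomly chosen item from a two-item cue, but remembers that item well, can still outperform a cue-all baseline in mean accuracy. The parameter values below are assumptions chosen for illustration, not those of the reported simulation:

```python
# Sketch of the single-item-focusing argument: an observer who attends only
# ONE of two cued items can still show a "multi-item retro-cue benefit" in
# mean accuracy. Parameters are illustrative assumptions, not the study's.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 10_000
p_correct_focused = 0.90    # accuracy when the probed item happened to be focused
p_correct_unfocused = 0.55  # accuracy when the other cued item was probed
p_correct_neutral = 0.65    # accuracy in the cue-all (neutral) baseline

# On each two-item-cue trial, the probe matches the focused item half the time.
probed_item_focused = rng.random(n_trials) < 0.5
p_trial = np.where(probed_item_focused, p_correct_focused, p_correct_unfocused)
two_item_accuracy = (rng.random(n_trials) < p_trial).mean()
neutral_accuracy = (rng.random(n_trials) < p_correct_neutral).mean()

print(f"two-item cue: {two_item_accuracy:.3f}, neutral: {neutral_accuracy:.3f}")
# Mean accuracy is higher after the two-item cue even though only one item was
# ever prioritized, which is why mean accuracy alone cannot establish that
# both cued items were attended.
```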