
Cross modal perception definition

Humans constantly combine multi-sensory spatial information to successfully interact with objects in peripersonal space. Previous studies suggest that sensory inputs of different modalities are encoded in different reference frames. In cross-modal tasks, where the target and response modalities differ, it is unclear into which reference frame these multiple sensory signals are transformed for comparison. The current study used a slant perception and parallelity paradigm to explore this issue. Participants perceived (either visually or haptically) the slant of a reference board and were asked either to adjust an invisible test board by hand manipulation or to adjust a visible test board through verbal instructions, so that it was physically parallel to the reference board. We examined the patterns of constant error and variability in unimodal and cross-modal tasks with various reference slant angles at different reference/test locations. The results revealed that, rather than being a mixture of the patterns of the unimodal conditions, the pattern in the cross-modal conditions depended almost entirely on the response modality and was not substantially affected by the target modality. Deviations in the haptic response conditions could be predicted by the locations of the reference and test boards, whereas the reference slant angle was an important predictor in the visual response conditions.
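As a concrete illustration of the two dependent measures named above, the following minimal sketch computes the constant error and variability of a set of hypothetical test-board settings for a single reference slant. The numbers and variable names are invented for illustration only and are not data or code from the study.

```python
import numpy as np

# Hypothetical repeated settings (in degrees) of the test board made by one
# participant in a single condition; the values are invented for illustration.
test_settings_deg = np.array([48.0, 52.5, 55.0, 50.5, 47.0, 53.5])

# Slant of the reference board in this condition (also illustrative).
reference_slant_deg = 45.0

# Signed deviation of each setting from physical parallelity.
deviations_deg = test_settings_deg - reference_slant_deg

# Constant error: the mean signed deviation (systematic bias).
constant_error_deg = deviations_deg.mean()

# Variability: the standard deviation of the settings (response consistency).
variability_deg = test_settings_deg.std(ddof=1)

print(f"constant error  : {constant_error_deg:+.1f} deg")
print(f"variability (SD): {variability_deg:.1f} deg")
```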
Humans use multi-sensory information when interacting with objects in peripersonal three-dimensional (3D) space. For example, a person can pick up a hat and put it on without looking, requiring only a slight adjustment using a mirror. Similarly, a person may pick up a ribbon and tie a bow behind their back without difficulty. Using multiple modalities for motor planning in various locations is not trivial, because different sensory inputs are encoded in different reference frames (RFs). Visual information is represented retinotopically 1, relative to the current gaze direction, while somatosensory information from the arms and hands, which can be further divided into discrete modalities 2 (e.g., tactile and proprioceptive sensation), is represented relative to the body 3, shoulder, or hand 4. Thus, sensory inputs gathered with various gaze directions and body postures are combined to form coherent representations of the whole space. To utilise the available information for comparison and planning at different locations, some signals must be transformed within or between RFs. Previous studies have reported that sensory transformation can incur a cost, adding bias and variability 5, 6 (a toy numerical sketch of this point is given below). It has been suggested that, when unimodal comparison is possible, the central nervous system (CNS) tends to avoid performing unnecessary coordinate transformations that may add noise 7. However, in cross-modal tasks, transformations between modalities are inevitable.

A number of psychophysical and neuroimaging studies have used the reaching-task paradigm to examine how multiple sensory signals are processed and which RFs are used for motor planning. These studies have produced disparate results, variously indicating that the RF used in motor planning is retinotopic 1, 8, 9, 10, 11, hand- or body-centred 12, 13, 14, or a common representation 15, 16, 17. Recent studies have reconciled these divergent data, demonstrating that if the task precludes the unimodal comparison of target and hand information, the CNS represents movement plans simultaneously in multiple RFs, allowing task-dependent reweighting and optimal use of the available sensory information 18, 19.

The reaching tasks used in these studies have involved several common restrictions. First, because most tasks were conducted using hand responses, somatosensory information was available the whole time; several previous experiments 20, 21 relied on the conflict between visual and somatosensory feedback about the hand, emphasising the process of combining redundant information more than RF transformation. Second, the workspaces used in these tasks have typically been small and located in front of participants. Moreover, visual and somatosensory representations have typically been related to the same area, with participants completing the task without body or head movement.
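The cost of coordinate transformations mentioned above can be made concrete with a toy simulation. The following sketch is purely illustrative and is not the model or procedure used in the cited studies: it assumes a one-dimensional world, invented noise levels, and a hypothetical gaze-direction estimate that has to be added to a retinotopic signal to recover a body-centred one; the noise and bias of that estimate are inherited by the transformed signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional toy world: azimuth angles in degrees. All values are invented.
target_azimuth_body = 20.0   # target direction relative to the trunk midline
gaze_azimuth_body = 35.0     # gaze direction relative to the trunk midline
n_trials = 10_000

# Vision encodes the target retinotopically, i.e. relative to the current gaze
# direction, with some sensory noise.
retinal_noise_sd = 2.0
target_retinal = (target_azimuth_body - gaze_azimuth_body
                  + rng.normal(0.0, retinal_noise_sd, n_trials))

# To compare this visual signal with a body-centred (e.g. proprioceptive) one,
# the retinal estimate must be transformed using an estimate of gaze direction.
# The gaze estimate is itself noisy and slightly biased, so the transformation
# adds both variability and a constant error to the transformed signal.
gaze_noise_sd = 3.0
gaze_bias_deg = 1.5
gaze_estimate = (gaze_azimuth_body + gaze_bias_deg
                 + rng.normal(0.0, gaze_noise_sd, n_trials))

target_body_from_vision = target_retinal + gaze_estimate

print(f"retinotopic estimate : SD = {target_retinal.std():.2f} deg")
print(f"after transformation : SD = {target_body_from_vision.std():.2f} deg, "
      f"bias = {target_body_from_vision.mean() - target_azimuth_body:+.2f} deg")
```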







