A Computational Model of Attention-Guided Visual Learning in a High-Performance Computing Software System
DOI: https://doi.org/10.54327/set2025/v5.i1.245

Keywords: Computational Model, Attention-Guided Visual Learning, High-Performance Computing, Reinforcement Learning, Computer Vision

Abstract
This research investigates transformer architectures in high-performance computing (HPC) software systems for attention-guided visual learning (AGVL). The study examines the effects of environmental factors and non-contextual stimuli on cognitive control, showing how attention amplifies responses to attended stimuli and thereby normalizes activity across the neural population. Transformer blocks exploit greater parallelism and less localized attention than recurrent or convolutional models. The study explores transformer topologies for language modeling, focusing on attention-guided learning and attention-modulated Hebbian plasticity. The model includes an all-attention layer with embedded input vectors, non-contextual vectors carrying generic task-relevant information, and self-attention and feedforward layers. Relative two-dimensional positional encoding is employed to address the challenge of encoding two-dimensional data such as photographs. The feature-similarity gain model proposes that attention multiplicatively strengthens neuronal responses in proportion to how similar their feature tuning is to the attended input. The attention-guided learning approach couples reward-based learning with neural attentional response gain, which the network adjusts via gradient descent toward the projected target outputs. The study found that supervised error backpropagation and the attention-modulated Hebbian rule outperformed the weight-gain rule on MNIST, although their attentional focus differed.
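The two mechanisms named in the abstract can be illustrated with a minimal sketch. The function names, the cosine-similarity choice, and the parameters `beta`, `gain`, and `lr` are assumptions for illustration, not the paper's implementation: feature-similarity gain multiplicatively scales each neuron's response by its tuning similarity to the attended feature, and the attention-modulated Hebbian rule scales the usual pre-times-post weight update by an attentional gain signal.

```python
import numpy as np

def feature_similarity_gain(responses, tuning, attended, beta=0.5):
    """Multiplicatively scale each neuron's response by the similarity of
    its preferred feature to the attended feature (assumed cosine similarity).
    `beta` (assumed parameter) sets the strength of the attentional gain."""
    sim = tuning @ attended / (
        np.linalg.norm(tuning, axis=1) * np.linalg.norm(attended) + 1e-9
    )
    return responses * (1.0 + beta * sim)

def attention_hebbian_update(W, pre, post, gain, lr=0.01):
    """Attention-modulated Hebbian rule (sketch): the weight change is
    proportional to pre * post activity, scaled by the attentional gain."""
    return W + lr * gain * np.outer(post, pre)
```

For example, with two neurons tuned to orthogonal features and attention directed at the first feature, only the first neuron's response is boosted, while the Hebbian update strengthens weights between co-active units in proportion to the gain.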
Data Availability Statement
Supplementary materials and data used in this research are accessible upon request for future research and development. We did not use any private, restricted, or licensed data in this research. For access, please contact the corresponding author via [resipo.bd@gmail.com].
License
Copyright (c) 2024 Alice Ahmed, Md. Tanim Hossain

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website, social networking sites, etc.).