
Exploring the relationship between computational frameworks and neuroscience studies for sensorimotor learning and control

Ahmed Mahmood Khudhur

Abstract


The relationship between computational frameworks and neuroscience studies is crucial for understanding sensorimotor learning and control. Several frameworks have been used to explore this relationship, including Bayesian decision theory, the neural dynamics framework, and the state space framework. Bayesian decision theory provides a mathematical framework for studying sensorimotor control and learning: it proposes that the central nervous system constructs estimates of sensorimotor transformations through internal models and represents uncertainty in order to respond optimally to environmental stimuli. The neural dynamics framework analyzes patterns of neural population activity to uncover the computational mechanisms underlying sensorimotor control and learning. The state space framework characterizes the structure of learning in state space and helps explain how the brain transforms sensory input into motor output. These frameworks have yielded valuable insights into sensorimotor learning and control: they have been used to study how motor memories are organized according to contextual rules, the role of structural learning in the sensorimotor system, the neural dynamics underlying sensorimotor control and learning tasks, and the effect of explicit strategies on sensorimotor learning. Together, they have helped uncover the roles of contextual information, structural learning, and neural dynamics in sensorimotor control and learning. Further research should continue to explore the relationship between computational frameworks and neuroscience studies in sensorimotor learning and control.
This interdisciplinary approach can lead to a better understanding of how motor skills are learned, retained, and improved through targeted interventions. Additionally, the application of computational frameworks in clinical settings may help develop more effective rehabilitation strategies for individuals with motor impairments.
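The Bayesian and state-space perspectives described above are often combined in trial-by-trial models of motor adaptation, in which a learner's internal estimate of a perturbation is updated from movement errors. The sketch below is a minimal, illustrative example (not a model from this article): a single-state state-space learning rule whose learning rate is set by a Kalman filter, i.e., the Bayesian-optimal gain given assumed process noise `q` and sensory noise `r`. The function name, parameter values, and the scenario of a 30-degree cursor rotation are all hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical single-state model of trial-by-trial motor adaptation:
#   x[n+1] = a * x[n] + k * (error on trial n)
# A Kalman filter supplies the Bayesian-optimal learning rate k,
# given assumed process noise q and sensory (measurement) noise r.

def simulate_adaptation(perturbation, a=0.99, q=0.01, r=1.0, n_trials=100):
    """Kalman-filter estimate of a constant perturbation, one update per trial."""
    x_hat = 0.0          # current estimate of the perturbation
    p = 1.0              # uncertainty (variance) of that estimate
    estimates = []
    for _ in range(n_trials):
        # Predict: estimate decays with retention factor a; uncertainty grows.
        x_pred = a * x_hat
        p = a * a * p + q
        # Observe a noisy movement error and update with the Kalman gain.
        y = perturbation + np.random.normal(0.0, np.sqrt(r))
        k = p / (p + r)                  # Bayesian-optimal learning rate
        x_hat = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p
        estimates.append(x_hat)
    return np.array(estimates)

np.random.seed(0)
est = simulate_adaptation(30.0)          # e.g., a 30-degree cursor rotation
```

One property worth noting: with a retention factor a below 1, the estimate asymptotes below the true perturbation, reproducing the incomplete adaptation commonly observed in sensorimotor adaptation experiments.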


Keywords


computational frameworks; sensorimotor learning; neuroscience studies; Bayesian decision theory




DOI: https://doi.org/10.32629/jai.v7i3.1245



Copyright (c) 2023 Ahmed Mahmood Khudhur

License URL: https://creativecommons.org/licenses/by-nc/4.0/