Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards

IEEE Trans Neural Syst Rehabil Eng, 2017 · DOI: 10.1109/TNSRE.2017.2700395 · Published: October 1, 2017

Neurology · Bioinformatics · Biomedical

Simple Explanation

Functional Electrical Stimulation (FES) uses electrical currents to help paralyzed individuals regain movement. This study explores using human feedback to train computer controllers for FES, specifically for arm movements. Controllers trained with human-provided rewards are compared to controllers trained with computer-generated rewards, assessing how effectively each achieves reaching tasks. The results suggest that human-provided rewards can be a useful training signal for FES controllers, potentially enabling personalized control strategies.
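
To make the training loop concrete, here is a minimal actor-critic sketch in the spirit of the approach described above. It is not the paper's controller: the linear Gaussian policy, the stand-in arm dynamics, the 0-10 rating scale, and the human_reward placeholder are illustrative assumptions. The point it shows is that a single scalar rating per time step is enough to form a temporal-difference error that updates both the critic (a value estimate) and the actor (the stimulation policy).

```python
import numpy as np

# Minimal actor-critic sketch (hypothetical; not the paper's exact controller).
# The actor maps an arm state to stimulation-like continuous actions, the critic
# learns a value estimate, and both are updated from a single scalar rating such
# as a 0-10 score a trainer might give after watching the movement.

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2          # e.g. joint angles/velocities -> 2 muscle activations
actor_w = np.zeros((ACTION_DIM, STATE_DIM))
critic_w = np.zeros(STATE_DIM)
ALPHA_ACTOR, ALPHA_CRITIC, GAMMA, SIGMA = 1e-3, 1e-2, 0.95, 0.1

def policy(state):
    """Gaussian policy over continuous actions; SIGMA provides exploration."""
    mean = actor_w @ state
    return mean + SIGMA * rng.standard_normal(ACTION_DIM), mean

def human_reward(state):
    """Placeholder for the human (or pseudo-human) rating: a noisy, coarse
    0-10 score based on distance to a fixed target, purely illustrative."""
    target = np.array([1.0, 0.5, 0.0, 0.0])
    score = 10.0 - 5.0 * np.linalg.norm(state - target)
    return float(np.clip(round(score + rng.normal(0.0, 1.0)), 0, 10))

state = rng.standard_normal(STATE_DIM)
for step in range(1000):
    action, mean = policy(state)
    next_state = state + 0.05 * rng.standard_normal(STATE_DIM)   # stand-in dynamics
    r = human_reward(next_state)

    # The TD error drives both the critic and the actor (classic actor-critic).
    td_error = r + GAMMA * critic_w @ next_state - critic_w @ state
    critic_w += ALPHA_CRITIC * td_error * state
    actor_w += ALPHA_ACTOR * td_error * np.outer(action - mean, state) / SIGMA**2

    state = next_state
```

In a real FES session the rating would come from a person watching the reach rather than from a distance formula.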

Study Duration: Not specified
Participants: 10 neurologically intact human subjects
Evidence Level: Not specified

Key Findings

  1. RL controllers trained with human and pseudo-human rewards significantly outperformed standard controllers in reaching tasks.
  2. Reward positivity and consistency were not significantly related to the success of learning, suggesting the controller is robust to subjective human input.
  3. Pseudo-human rewards showed a slight advantage in learning speed compared to human-generated rewards, but both were effective.

Research Summary

This study investigates the use of human-generated rewards to train reinforcement learning (RL) controllers for functional electrical stimulation (FES) of arm movements in individuals with spinal cord injury (SCI). The RL controllers were trained using subjective numerical rewards provided by human participants, and their performance was compared to controllers trained using computer-generated pseudo-human rewards and automated rewards. The results indicate that human rewards can be effectively used to train RL-based FES controllers, achieving performance comparable to controllers trained with pseudo-human rewards, and significantly outperforming standard controllers.
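
The comparison above rests on three reward sources: automated rewards computed from performance, pseudo-human rewards that simulate a human rater, and genuine human ratings. The sketch below shows one plausible way a pseudo-human reward could be derived from an automated one by rescaling, quantizing, adding noise, and occasionally withholding it; the mapping, noise level, and skip probability are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def automated_reward(hand_pos, target_pos):
    """Dense, objective reward: negative distance from hand to target."""
    return -float(np.linalg.norm(np.asarray(hand_pos) - np.asarray(target_pos)))

def pseudo_human_reward(hand_pos, target_pos, p_skip=0.3, noise_sd=0.5):
    """Coarse, noisy, occasionally withheld rating meant to mimic a human
    trainer: quantized to an integer score and sometimes not given at all."""
    if rng.random() < p_skip:                 # the rater does not score every step
        return None
    raw = 10.0 + 5.0 * automated_reward(hand_pos, target_pos)   # map distance onto ~0-10
    return int(np.clip(round(raw + rng.normal(0.0, noise_sd)), 0, 10))

# The same reach evaluated under both reward schemes.
hand, target = [0.4, 0.1], [0.5, 0.3]
print("automated   :", automated_reward(hand, target))
print("pseudo-human:", pseudo_human_reward(hand, target))
```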

Practical Implications

Personalized FES Control

Human-generated rewards can tailor RL controller performance to individual user preferences, potentially improving the usability and effectiveness of FES systems.

Pre-training with Pseudo-Human Rewards

Pseudo-human rewards can be used to pre-train controllers in simulation, providing a baseline level of performance before human-guided training is implemented.
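
A minimal sketch of that two-phase schedule, assuming a generic training loop: a long, cheap phase driven by simulated pseudo-human ratings, then a short human-guided phase that continues from the pre-trained weights. The weight update and rating functions here are stand-ins; only the ordering of the phases mirrors the implication above.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_phase(weights, rating_fn, episodes, lr=1e-2):
    """Toy training phase: nudge a weight vector using whatever scalar ratings
    the given source provides (None means no rating was given)."""
    for _ in range(episodes):
        state = rng.standard_normal(weights.shape)
        rating = rating_fn(state)
        if rating is not None:
            weights += lr * (rating - 5.0) * state   # stand-in for a real RL update
    return weights

def pseudo_human(state):
    """Cheap simulated rating, available for as many episodes as needed."""
    return int(np.clip(round(5.0 - state[0] + rng.normal(0.0, 1.0)), 0, 10))

def human(state):
    """Scarce real rating: the trainer only scores some of the attempts."""
    return pseudo_human(state) if rng.random() < 0.4 else None

weights = np.zeros(4)
weights = run_phase(weights, pseudo_human, episodes=5000)   # pre-training in simulation
weights = run_phase(weights, human, episodes=200)           # short human-guided phase
```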

Robustness to Human Subjectivity

The RL controller's ability to learn effectively from inconsistent human rewards suggests it is robust to subjective biases and variations in human input.

Study Limitations

  1. The study used neurologically intact human subjects to train the controllers, which may not fully reflect the challenges of training with individuals with SCI.
  2. The arm model used in the study was a simplified planar model, which may not capture the full complexity of real-world arm movements.
  3. The rewards were delayed, sparse, and inconsistent (see the sketch after this list).
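
Actor-critic methods commonly cope with delayed, sparse feedback by using eligibility traces, which spread a late reward back over the recent states that led to it. The summary does not state which mechanism the paper's controller used, so the sketch below only illustrates the general technique, with arbitrary trace and learning parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
STATE_DIM = 4
critic_w = np.zeros(STATE_DIM)
trace = np.zeros(STATE_DIM)
ALPHA, GAMMA, LAMBDA = 1e-2, 0.95, 0.9

state = rng.standard_normal(STATE_DIM)
for step in range(500):
    next_state = state + 0.05 * rng.standard_normal(STATE_DIM)

    # Sparse, delayed feedback: most steps yield no rating at all.
    reward = float(rng.integers(0, 11)) if rng.random() < 0.1 else 0.0

    # An accumulating eligibility trace spreads a late reward back over the
    # recent states that preceded it, easing credit assignment.
    trace = GAMMA * LAMBDA * trace + state
    td_error = reward + GAMMA * critic_w @ next_state - critic_w @ state
    critic_w += ALPHA * td_error * trace

    state = next_state
```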
