Chung Hyuk Park

Dancing With Robots  

Not just learning the moves, but learning along with the music. The Nao robot has been using this framework to play with autistic children during the pandemic, dancing to fun music!
Robot Learning from Human Teacher
To function as an integral part of our daily lives, robots must be designed so that they can easily learn from a "human teacher". This research focused on teaching a robot to perform a new task through teleoperative instruction. Since my previous study on vision-based haptic guidance suggested that combining robotic vision with a teleoperation-based haptic modality provides an efficient mechanism for human-robot interaction, this research tackled the problem of robotic task learning from humans with a methodology based on spatio-temporal learning of teleoperation sequences. The overall goal was to enable robots in our environment to support and assist humans more efficiently in everyday tasks.

The video is recorded in real time. Only one or two training sequences are used for robotic learning in each case.
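The spatio-temporal learning pipeline is not spelled out in code here, but one common way to learn a task from only one or two teleoperated demonstrations is to time-normalize each recorded joint trajectory and combine them into a single reference trajectory. The sketch below is illustrative only (the synthetic 2-DoF demonstrations and the simple averaging step are assumptions, not the published method):

```python
import numpy as np

def resample(traj, n=50):
    """Resample a (T, D) joint trajectory to n evenly spaced keyframes in time."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack(
        [np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])],
        axis=1,
    )

def learn_task(demos, n=50):
    """Average the time-normalized demonstrations into one reference trajectory."""
    return np.mean([resample(d, n) for d in demos], axis=0)

# Two short synthetic teleoperated demonstrations of a 2-DoF motion,
# recorded at different rates and with small sensor noise.
rng = np.random.default_rng(1)
base = np.stack([np.linspace(0, 1, 80), np.linspace(0, 1, 80) ** 2], axis=1)
demo1 = base + rng.normal(0, 0.01, base.shape)
demo2 = resample(base, 60) + rng.normal(0, 0.01, (60, 2))

model = learn_task([demo1, demo2])
print(model.shape)  # (50, 2): 50 keyframes, 2 joints
```

Time-normalizing first is what lets demonstrations of different lengths and speeds be combined; richer spatio-temporal models (e.g. keyframe segmentation) build on the same resampled representation.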
Haptic Skill Transfer of High-DoF Manipulation Tasks 
To deepen the learning process in human-robot interaction, the haptic modality is used as a mediator for both skill acquisition and skill translation. This research focuses on a coordinated haptic training architecture for transferring expertise in teleoperation-based manipulation, whether human-to-human or human-to-robot. Robotic learning is incorporated more extensively to translate human skills into robotic knowledge, and a method to reuse this data through a haptic skill-transfer process is investigated. The aim is a reality-based haptic interaction system for knowledge transfer that links an expert's skill with robotic movement in real time.

The benefits of this approach include (i) a representation of an expert's knowledge in a more compact and general form, learned from a minimal set of training samples, and (ii) an increase in a novice user's capability by coupling the skills absorbed by the robotic system with haptic feedback.

To evaluate these ideas and demonstrate the effectiveness of this paradigm, human handwriting was selected as the experimental task. For the learning algorithms, an artificial neural network (ANN) and a support vector machine (SVM) were used and their performance was compared. For evaluation, a modified Longest Common Subsequence (LCSS) algorithm was implemented. Results showed that one or two experts' samples are sufficient to generate haptic training knowledge that can successfully recreate the manipulation motion on a robotic system and transfer haptic forces to an untrained user through a haptic device. In the handwriting comparison, the similarity measure yielded up to an 88% match even with a minimal set of training samples.
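The LCSS similarity used above can be sketched in a few lines. This is a generic trajectory-LCSS with a Euclidean matching threshold, not the modified variant from the study (the `eps` value and the normalization by the shorter trajectory are assumptions for illustration):

```python
import numpy as np

def lcss_similarity(a, b, eps=5.0):
    """LCSS similarity between two 2-D pen trajectories.

    Two points "match" when their Euclidean distance is at most eps.
    The LCSS length is normalized by the shorter trajectory, so 1.0
    means every point of the shorter trajectory found a match in order.
    """
    n, m = len(a), len(b)
    dp = np.zeros((n + 1, m + 1), dtype=int)  # dp[i, j]: LCSS of a[:i], b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if np.linalg.norm(a[i - 1] - b[j - 1]) <= eps:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return dp[n, m] / min(n, m)

# Compare a synthetic expert stroke with a slightly noisy reproduction.
t = np.linspace(0, np.pi, 50)
expert = np.stack([np.linspace(0, 100, 50), np.sin(t) * 20], axis=1)
novice = expert + np.random.default_rng(0).normal(0, 1.0, expert.shape)
print(f"similarity: {lcss_similarity(expert, novice):.2f}")
```

Unlike a pointwise mean-squared error, LCSS tolerates small temporal misalignments and outlier points, which is why it suits comparing a novice's handwriting against an expert's stroke.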