Jinwoo Lee
News
Video Abstract
My co-first-authored paper has been officially accepted to the AAAI 2025 Workshop on Artificial Intelligence for Music! 🙂
This work originated from my AI × Art Hackathon 2024 project. My colleagues and I combined EEG-based affect decoding with multi-modal generative modeling to reconstruct music videos depicting individuals' affect-charged autobiographical memories. The videos were generated from short essays and sketches about participants' memories, using the valence sequence decoded in real time from EEG signals recorded during memory recall to establish the videos' temporal structure. Through a two-stage user study, we found that the generated videos effectively captured and represented the affective dynamics participants felt during memory recall. This work was also highlighted in a blog post from Neuroelectrics, whose EEG recording device was used in this project.
Check out my video abstract and the preprint for more details!
© 2025 Jinwoo Lee. All rights reserved.