About

I am an Assistant Professor in the Computer Science Department at Carnegie Mellon University and a part-time Research Scientist at Google DeepMind on the Magenta team.

My research goal is to develop and responsibly deploy generative AI for music and creativity, thereby unlocking and augmenting human creative potential. To this end, my work involves (1) improving machine learning methods for controllable generative modeling for music, audio, and other sequential data, and (2) deploying real-world interactive systems that allow a broader audience, inclusive of non-musicians, to harness generative music AI through intuitive controls.

I am particularly drawn to research ideas with direct real-world applications, and my work often involves building systems for real users and evaluating them in the wild. For example, my work on Piano Genie was used in a live performance by The Flaming Lips, and my work on Dance Dance Convolution powers Beat Sage, a live service used by thousands of users a day to create multimodal music game content.

Previously, I was a postdoc at Stanford CS advised by Percy Liang. Before that, I completed a PhD at UCSD co-advised by Miller Puckette and Julian McAuley.

G-CLef

Our group's logo, a mashup of a treble clef (G-Clef) and CMU's mascot Scotty, created with DALL-E 2.

I lead the Generative Creativity Lab (G-CLef) at CMU. Our research focuses on the development and deployment of generative AI towards augmenting human creativity. We primarily focus on musical creativity as an application domain but also explore other areas such as gaming.

PhD students

Irmak Bukey, CSD PhD student
Wayne Chi, CSD PhD student

Affiliates

Shih-Lun Wu, LTI MS student
Michael Feffer, S3D PhD student
Alexander Wang, Music and Technology MS student

Supporters

The G-CLef lab is supported by generous contributions from our supporters.

Recent Papers

(2023). Music ControlNet: Multiple Time-varying Controls for Music Generation.

arXiv PDF BibTeX 🔊 Examples Video

(2023). Anticipatory Music Transformer.

arXiv PDF BibTeX 🔊 Examples Code

(2023). SingSong: Generating Musical Accompaniments from Singing.

arXiv PDF BibTeX 🔊 Examples

(2022). Melody Transcription via Generative Pre-training. In ISMIR.

arXiv PDF BibTeX 🔊 Examples Code Dataset Video