AI & Machine Learning
Infinite Music AI
Generative Music Systems
Overview
At Infinite Music, I worked on the design, training, and evaluation of large-scale text-to-music generation systems based on Meta's MusicGen. My role covered the full research pipeline, from constructing structured music datasets to fine-tuning models and evaluating creative interactions with users.
I focused on improving alignment between text prompts and musical structure, and on shifting generative models toward performer-aware and collaborative systems rather than black-box generators. This work sits at the intersection of machine learning, music cognition, and human-computer interaction.
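To give a concrete sense of the kind of system involved, below is a minimal text-to-music generation sketch using the open-source audiocraft release of MusicGen; the checkpoint, prompt, and generation parameters are illustrative, not the production configuration used at Infinite Music.

```python
# Minimal text-to-music sketch with MusicGen (audiocraft).
# Checkpoint, prompt, and duration are illustrative only.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a small pretrained checkpoint (larger checkpoints also exist).
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio per sample

# A structured prompt mirroring the dataset fields (tempo, mood, genre, instrumentation).
prompts = ["uplifting indie pop, 120 BPM, bright acoustic guitar and handclaps"]

wav = model.generate(prompts)  # tensor of shape [batch, channels, samples]

# Write each generated sample to disk with loudness normalization.
for i, one_wav in enumerate(wav):
    audio_write(f"sample_{i}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```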
Key Contributions
- Built and curated large music datasets with tempo, mood, genre, instrumentation, and rhythmic features
- Explored contrastive audio-text (CLAP) representations to analyze prompt alignment and failure modes (alignment scoring sketched after this list)
- Proposed and implemented remixing features conditioned on extracted percussion patterns (pattern extraction sketched after this list)
- Worked closely with researchers in computational neuroscience and music performance
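The prompt-alignment analysis can be sketched roughly with CLAP embeddings via Hugging Face Transformers; the checkpoint name and audio file below are assumptions for illustration, not the exact analysis pipeline used on the project.

```python
# Sketch: score prompt-audio alignment as cosine similarity of CLAP embeddings.
# Checkpoint name and audio file are illustrative placeholders.
import torch
import librosa
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# CLAP models are trained on 48 kHz audio.
audio, sr = librosa.load("generated_sample.wav", sr=48000, mono=True)
prompt = "uplifting indie pop, 120 BPM, bright acoustic guitar and handclaps"

inputs = processor(text=[prompt], audios=[audio], sampling_rate=sr,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Compare the normalized text and audio embeddings; low scores flag misaligned generations.
text_emb = torch.nn.functional.normalize(outputs.text_embeds, dim=-1)
audio_emb = torch.nn.functional.normalize(outputs.audio_embeds, dim=-1)
alignment = (text_emb * audio_emb).sum(dim=-1)
print(f"prompt-audio alignment: {alignment.item():.3f}")
```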
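For the percussion-conditioned remixing, only the signal-processing side is sketched here with librosa; the file name is a placeholder and the conditioning interface into the generator is project-specific and not shown.

```python
# Sketch: extract a percussion pattern from a reference track for remix conditioning.
# File name is a placeholder; the conditioning hook itself is not shown.
import librosa
import numpy as np

y, sr = librosa.load("reference_track.wav", sr=32000, mono=True)

# Separate the percussive component from harmonic content.
_, y_perc = librosa.effects.hpss(y)

# Estimate tempo and detect percussive onsets in seconds.
tempo, beats = librosa.beat.beat_track(y=y_perc, sr=sr, units="time")
onsets = librosa.onset.onset_detect(y=y_perc, sr=sr, units="time")

pattern = {
    "tempo_bpm": float(tempo),
    "onset_times_s": np.round(onsets, 3).tolist(),
    "n_beats": int(len(beats)),
}
print(pattern)
```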
Technologies
Python
PyTorch
MusicGen
CLAP
Transformers
Audio Signal Processing