About

Machine learning engineer and applied researcher exploring the intersection of AI, music, and human perception.

Background

I grew up moving between different fields, from engineering to art, theory to practice, curiosity to constraint. As a child, I was drawn to systems and how parts work together. Creative activities helped me test these ideas. Music was my first space to experiment. Before learning any formal technical skills, I picked up structure, timing, and variation by manipulating sounds and exploring patterns intuitively.

Throughout my academic journey, I actively sought opportunities to combine disciplines. I built original datasets for projects that applied signal processing and machine learning to perception, often starting from raw signals rather than curated inputs. When tools or resources were unavailable, I designed my own experimental setups, from controlled sensing environments to data collection pipelines.

I am especially motivated by the idea that technology should serve as a mirror rather than a director. Much of my creative and technical work has focused on revealing latent structures in music, images, and data, such as translating musical patterns into visual representations or using perceptual signals to expose hidden regularities in behavior.

Research Vision

I perceive music as a compressed form of cognition, where rhythm reflects time perception, harmony embodies expectation, and timbre evokes memory. My long-term research goal is to design multimodal AI systems capable of generating audiovisual narratives grounded in cognitive models of musical structure and human perception.

I am particularly interested in systems for performance, opera, film, and interactive art that position music as a primary organizing force rather than a decorative element.

Creative Practice

Nash Equilibrium is my long-term experimental electronic music project, where I explore structure, rhythm, and gradual transformation through performance and composition. As a member of Discos Graves (Lima) and the Residencia Kai artist residency (Sacred Valley), I perform experimental downtempo/electronic sets and give talks on music conceptualization and the intersection of technology and art.

This creative practice serves as a parallel research space, grounding my technical work in lived musical experience and shaping how I think about intention, expressivity, and audience perception in generative systems.

Skills

Python, PyTorch, TensorFlow, Transformers, GenAI, Deep Learning, Computer Vision, Audio Processing, Signal Processing, AWS, GCP, SQL, R, C++

Publication

Velayarce, D., Alvarez, M., Guevara, D., & Murray, V. (2021). Analysis of Deforestation in Ucayali-Peru Using Satellite Imagery from Sentinel-2. In Proceedings of the 6th Brazilian Technology Symposium (BTSym 2020). Springer.

Contact

I'm always interested in discussing research collaborations, creative projects, or opportunities in AI and music technology.