Louis Bradshaw
Curriculum Vitae
/ GitHub
/ Twitter
l.b.bradshaw [at] qmul.ac.uk
I'm a CS/ML PhD student at C4DM, where I specialize in Deep Learning for Audio. I'm also a research lead at EleutherAI. Prior to my PhD, I studied Mathematics (Algebraic Geometry) at Imperial College London (BSc, MSc). My current research interests are varied and include:
Deep Learning for Music. I've led research projects in several core areas of generative music and music information retrieval, including musical foundation models (see the Aria project), self-supervised representation learning, audio transcription, and datasets. My current PhD research explores post-training techniques for pre-trained music models, aiming to improve quality control, enhance guidance and conditioning methods, and enable applications to MIR via transfer learning.
Hybrid Audio/Language Models. I have a growing research interest in models that integrate audio with text or other token-based symbolic information. This includes areas such as neural audio codecs, automatic speech recognition, and audio-language models. I'm particularly interested in improving conversational speech foundation models (e.g., Moshi) through both architectural and data-centric approaches.
I'm also extremely interested in the engineering problems surrounding ML/DL, and I currently dedicate a portion of my non-research time to studying C++/CUDA. Outside of research, I have a deep love for mathematics, music production, and reading. The best part of doing a PhD is getting to learn from all kinds of people. If you are interested in collaborating, or just chatting about research, feel free to reach out!
News
- [Jan 2025] My paper Aria-MIDI: A Dataset of Piano MIDI Files was accepted at ICLR 2025. Looking forward to meeting everyone in Singapore!
- [Jun 2024] I've been made a research lead at EleutherAI, where I will continue leading our expansive research project on generative music.
- [Jan 2024] Thanks to StabilityAI and EleutherAI, who have provided us with significant compute sponsorship (10k A100 hours) for the Aria project.
Aria Project
I currently lead a research project on building and scaling a transformer-based foundation model for symbolic music. The project gets its codename, Aria, from the Goldberg Variations, and has attracted generous and ongoing compute support from EleutherAI & StabilityAI. Although this project hasn't been publicly released due to publication review requirements, we are very excited to share our work in May! In the meantime, here are some early samples showcasing what it can do:
If you are interested in finding out more about the Aria project, the best place is the EleutherAI Discord channel.
Misc
These essays [1, 2] and these books [3, 4, 5] had a big influence on me.