The Musician in the Machine
Note: This is an archived copy of an article originally published at magenta.tensorflow.org. Archived here for preservation.
[Full article content to be manually copied from original source]
Summary
This article by Dan Jeffries describes how his team used Google’s Music Transformer to generate ambient music compositions.
Project Goal: The author and his team explored whether neural networks could compose ambient music comparable to established artists in the genre.
Why Ambient Music: Jeffries selected ambient because its ethereal, flowing nature makes it forgiving of imperfections—a misplaced note blends into the overall soundscape rather than standing out.
Technology Used: They built on Google’s Music Transformer, whose “relative attention” mechanism lets the model capture long-range relationships between notes, a clear improvement over earlier RNN-based approaches that lose track of structure across long sequences.
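For intuition, here is a minimal NumPy sketch of single-head relative attention: content-based logits are summed with logits derived from learned per-distance embeddings, so the model scores “a note eight steps back” the same way anywhere in the piece. This is a toy illustration of the idea, not the memory-efficient “skewing” formulation the Music Transformer paper actually uses, and every name in it is illustrative.

```python
import numpy as np

def relative_attention(q, k, v, rel_emb):
    """Toy single-head attention with a relative-position bias.

    q, k, v:  (n, d) query/key/value matrices.
    rel_emb:  (2n - 1, d) learned embeddings, one per relative
              distance in [-(n-1), n-1].
    """
    n, d = q.shape
    content = q @ k.T                          # (n, n) content-based logits
    # Score each query against every distance embedding, then gather
    # the entry matching the actual pairwise distance (j - i).
    rel = q @ rel_emb.T                        # (n, 2n - 1)
    idx = np.arange(n)[None, :] - np.arange(n)[:, None] + (n - 1)
    logits = (content + np.take_along_axis(rel, idx, axis=1)) / np.sqrt(d)
    # Numerically stable softmax over keys.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the bias depends only on distance, a motif learned at one point in a piece transfers to any other point, which is what makes the mechanism well suited to music’s repetitive structure.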
Implementation: The team built their pipeline on Pachyderm, a container-based data pipeline platform, training on MIDI transcriptions of pieces drawn from a curated Spotify playlist. They released open-source tooling on GitHub so others can train similar models.
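As one illustration of what a containerized preprocessing step in such a pipeline might do, the sketch below converts a directory of MIDI files into a TFRecord of serialized NoteSequence protos, the input format Magenta’s training scripts typically consume. The paths and function name are hypothetical, not taken from the article; the team’s actual pipeline definitions live in their GitHub repository.

```python
import pathlib
import note_seq
import tensorflow as tf

def midi_dir_to_tfrecord(midi_dir, out_path):
    """Serialize a folder of MIDI files into NoteSequence protos.

    Hypothetical data-prep step; transcribed MIDI files are often
    messy, so unreadable ones are skipped rather than failing the run.
    """
    with tf.io.TFRecordWriter(out_path) as writer:
        for midi_path in pathlib.Path(midi_dir).glob("*.mid"):
            try:
                sequence = note_seq.midi_file_to_note_sequence(str(midi_path))
            except Exception as err:  # corrupt or non-standard transcription
                print(f"skipping {midi_path}: {err}")
                continue
            writer.write(sequence.SerializeToString())

# Hypothetical paths for illustration only.
midi_dir_to_tfrecord("data/ambient_midi", "data/notesequences.tfrecord")
```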
Results: The generated MIDI files were rendered through Logic Pro’s ambient instruments (Stratosphere, Peaceful Meadow, etc.) to create finished compositions. Results varied—some generated pieces were genuinely compelling, while others contained artifacts like extended silences or repetitive notes.
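A simple screen for the extended-silence artifacts might look like the sketch below, which measures the longest stretch with no sounding notes in a generated take before it is handed to a DAW. The pretty_midi library and the threshold are assumptions for illustration; the article does not describe any such filter.

```python
import pretty_midi

def longest_silence(midi_path):
    """Return the longest gap in seconds during which no note sounds."""
    pm = pretty_midi.PrettyMIDI(midi_path)
    notes = sorted(
        (n for inst in pm.instruments for n in inst.notes),
        key=lambda n: n.start,
    )
    longest, covered_until = 0.0, 0.0
    for n in notes:
        longest = max(longest, n.start - covered_until)
        covered_until = max(covered_until, n.end)
    return longest

# Flag takes with long dead air; the 8-second cutoff is arbitrary.
if longest_silence("generated/take_03.mid") > 8.0:
    print("flagged: long silent stretch, likely a generation artifact")
```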
Limitations Acknowledged: The model struggled to capture certain artists’ styles, and the team had to rely on transcribed rather than original MIDI files, which hurt output quality.
Future Vision: The article concludes optimistically that artists will increasingly co-create with AI systems using these tools.