AI-assisted experimental music is not a new genre — it is a logical extension of decades of algorithmic composition. What changed in the last two years is radical accessibility: tools that once required institutional infrastructure now run in a browser tab. For artists working at the edges of electronic music, that shift does not replace practice. It amplifies it and complicates it at the same time.

Historical Context: From ILLIAC to Large Audio Models
The history of machine-generated music begins well before neural networks. In 1957, Lejaren Hiller and Leonard Isaacson used the ILLIAC I mainframe computer at the University of Illinois to compose the Illiac Suite for String Quartet, applying algorithmic rules derived from counterpoint theory. The result is generally credited as the first piece of music substantially composed by a computer.
In the 1980s, composer David Cope developed EMI (Experiments in Musical Intelligence), a system capable of generating new works in the style of Bach, Beethoven, or Brahms with sufficient plausibility to ignite a philosophical debate — one that anticipated most of the arguments circulating today about AI authorship. Brian Eno’s Discreet Music (1975) and subsequent generative systems introduced the concept of music that composes itself within defined parameters, framing the composer as a system architect rather than a note-writer.
The qualitative shift came with deep neural networks. In 2016, Google published the Magenta project under Google Brain, with the explicit aim of exploring whether machine learning could produce art with autonomous aesthetic value. In 2023 and 2024, platforms like Suno and Udio democratised access to generative music tools to the point of mass consumption. For the experimental scene, the relevant moment is not the ease of use of consumer platforms but the availability of open models and frameworks that artists can get inside and modify.
Key Tools for Experimental Practice
The current ecosystem is heterogeneous. Consumer platforms give surface-level control; open-source models give access to the architecture. For experimental practice, the distinction matters.
Google Magenta / Magenta RealTime is the most relevant tool for advanced experimental practice. In June 2025, Google published Magenta RealTime (Magenta RT) — an autoregressive transformer model with 800 million parameters, trained on approximately 190,000 hours of instrumental audio, available under the Apache 2.0 licence on GitHub and Hugging Face. Its defining characteristic is real-time generation: the model produces audio faster than playback — 1.25 seconds of computation per 2 seconds of audio — and allows style prompts to be changed while music is generating. It accepts both text control and reference audio. For live performance and sound installations, this level of interactivity has no equivalent in commercial platforms.
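What that interactivity looks like in code: the repository documents a chunked generation loop along the lines of the sketch below. This is a hedged adaptation of the published example, and the specific names (`system.MagentaRT`, `embed_style`, `generate_chunk`, `config.crossfade_length`) should be verified against the current README.

```python
# Sketch of Magenta RT's chunked, steerable generation loop, adapted from
# the project's published example. The API names here are assumptions to
# check against the current repository before use.
from magenta_rt import audio, system

mrt = system.MagentaRT()

# Styles are embedded from text (or reference audio) and can be swapped
# mid-stream to steer the music while it is generating.
style = mrt.embed_style("tape-saturated ambient, slow arpeggios")

state, chunks = None, []
for i in range(10):  # each chunk is roughly 2 seconds of audio
    if i == 5:
        # Re-steer the stream halfway through without restarting generation.
        style = mrt.embed_style("granular noise, metallic percussion")
    state, chunk = mrt.generate_chunk(state=state, style=style)
    chunks.append(chunk)

clip = audio.concatenate(chunks, crossfade_time=mrt.config.crossfade_length)
```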
RAVE (Realtime Audio Variational autoEncoder) is a variational autoencoder for audio developed at IRCAM by researcher Antoine Caillon and collaborators, first published in 2021 and updated since. It allows timbre transfer: feeding in one sound and generating audio with the learned timbral characteristics of a training corpus. RAVE runs in real time, has a Max/MSP external (nn~) and an audio plugin version, and has been used in live performance contexts where the model’s learned vocabulary of sound becomes an instrument. Artists like Yann Seznec and Rave Ramirez have documented its use in live settings.
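Offline, the interaction pattern is simple enough to sketch in a few lines. The following assumes a pretrained TorchScript export (the checkpoint filename is a placeholder) and an input file already at the model’s training sample rate; RAVE exports expose `encode` and `decode` over the learned latent space.

```python
# Timbre-transfer sketch with a pretrained RAVE export (TorchScript).
# "percussion.ts" is a placeholder; resample the input first if its
# sample rate does not match the model's training rate.
import torch
import torchaudio

model = torch.jit.load("percussion.ts")
model.eval()

wav, sr = torchaudio.load("input.wav")            # (channels, time)
wav = wav.mean(dim=0, keepdim=True).unsqueeze(0)  # fold to (batch=1, 1, time)

with torch.no_grad():
    z = model.encode(wav)              # project input into the latent space
    z = z + 0.3 * torch.randn_like(z)  # optional: perturb latents to drift timbre
    out = model.decode(z)              # resynthesise in the learned vocabulary

torchaudio.save("output.wav", out.squeeze(0), sr)
```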
NSynth, released by Google Magenta in 2017, uses a neural network to learn the timbral qualities of individual instrument sounds and interpolate between them — producing sounds that are genuinely between a piano and a flute, or a voice and a cello, rather than layering the two. The NSynth Super instrument, a hardware prototype developed in collaboration with artist Yuri Suzuki and Instrument 1 co-creator Dani Díaz, brought the model into a physical interface. NSynth is primarily of historical and conceptual interest now — its successor architectures (RAVE, AudioCraft) offer more practical control — but the timbral interpolation idea it introduced remains influential.
Meta AudioCraft / MusicGen, published in 2023, is a text-and-melody-controlled music generation model available for local download and use. It requires more technical configuration than consumer platforms but offers full control over the process. For artists concerned about data privacy or platform dependency, local models like MusicGen represent a different relationship with the tool.
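As a sketch of that local workflow, assuming the `audiocraft` package and the smallest pretrained checkpoint (prompt and filenames are illustrative):

```python
# Minimal local text-to-music sketch with MusicGen via audiocraft.
# Assumes `pip install audiocraft` and a working torch install.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # smallest checkpoint
model.set_generation_params(duration=8)                     # seconds per clip

prompts = ["detuned FM bells over slow granular drones"]
wavs = model.generate(prompts)  # one waveform per prompt

for i, one_wav in enumerate(wavs):
    # audio_write normalises the signal and writes a .wav to disk
    audio_write(f"musicgen_{i}", one_wav.cpu(), model.sample_rate,
                strategy="loudness")
```

Everything stays on your machine, which is the point: prompts, outputs, and model weights involve no platform at all.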
Suno AI is, by global usage, the most widely used platform. Its limitations for experimental use are the inverse of its consumer appeal: the model’s architecture is opaque, control is at the surface level of the prompt rather than the model, and the output is optimised for conventional song structures. That said, its v5 model (September 2025) produces audio of sufficient fidelity that it functions as useful base material for processing in a DAW or through a modular chain. The gap between the interface and the model is precisely where experimental artists have found something to work with.
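That post-processing need not even happen in a DAW. As a rough illustration of treating platform output as raw material, the sketch below granulates an exported track in plain Python: chop it into short windowed grains, shuffle them, and overlap-add the result (filenames are placeholders).

```python
# Granular reslicing of an AI-generated export: 80 ms grains, shuffled,
# then recombined with 50% overlap-add.
import numpy as np
import soundfile as sf

audio, sr = sf.read("suno_export.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # fold to mono

grain = int(0.080 * sr)   # grain length in samples
hop = grain // 2          # 50% overlap
window = np.hanning(grain)

starts = np.arange(0, len(audio) - grain, hop)
order = np.random.default_rng(0).permutation(len(starts))

out = np.zeros(len(starts) * hop + grain)
for i, j in enumerate(order):
    out[i * hop:i * hop + grain] += audio[starts[j]:starts[j] + grain] * window

out /= np.max(np.abs(out)) + 1e-9  # normalise to avoid clipping
sf.write("suno_granular.wav", out, sr)
```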
Artists Working with AI in Experimental Music
The most interesting uses of AI in experimental practice treat the model as one element within a more complex system — alongside modular synthesis, live coding, or sound installation — rather than as a standalone production tool.
Holly Herndon is the most cited reference internationally for AI-integrated practice. Her album PROTO (2019), made with collaborator Mat Dryhurst, trained a custom AI system named Spawn on vocal material from a human ensemble, then integrated Spawn’s outputs into the compositional process. The 2018 collaboration with Jlin, Godmother, had Jlin provide compositional direction while Herndon’s AI system provided sound generation — a collaboration that made the question of distributed authorship audible rather than theoretical.
Arca (Alejandra Ghersi Rodríguez), born in Caracas and based in Barcelona, has built a body of work at the intersection of experimental electronic music, visual art, and performance. The Kick quintology (2020–2021) wove deconstructed reggaetón, techno, ambient, and IDM together, with AI-adjacent generative processes as one layer among many. In 2021 she became the first Venezuelan and Latin American artist nominated for a Grammy in the dance/electronic album category.
Chilean collective Matar a un Panda (Carla Redlich and Jean Didier Larrabure) received an Honorary Mention in the New Animation Art category at Ars Electronica 2024 for No se van los que se aman, a work that builds a performative narrative through dialogue with language systems — generating a digital body that interrogates identity and grief. It is one of the few Latin American works recognised in a central category at that edition of the festival.
In the festival circuit documented on Amplify DAI’s artist profiles, the dominant pattern is not AI as a composition substitute but as material for real-time processing: AI-generated audio used as a layer within modular synthesis performances, or prompts used to generate textures that are subsequently manipulated in SuperCollider or Max/MSP.
Ethical Questions: Authorship, Training Data, and Rights
The ethical dimensions of AI in music are legal and philosophical simultaneously, and the legal situation varies significantly by jurisdiction.
The argument for recognising artistic value in AI-generated music rests on the premise that art does not require conscious intention from the agent producing the object — only that the object produces aesthetic experience in the listener. The selection of the prompt, the curation of the output, and its integration into a broader context are artistic acts even if the sound synthesis is automated.
The opposing argument notes that AI models are trained on prior human work without compensation or consent, making generation a form of extraction. In June 2024, the major record labels, in litigation coordinated by the RIAA, sued Udio (and Suno in a parallel action) on exactly these grounds, adding to earlier litigation affecting other generative platforms. Musicians who have documented their recordings appearing in training data without authorisation have a concrete grievance, not only a philosophical one.
On copyright: the US Copyright Office has established that music generated entirely by AI without human creative contribution does not qualify for copyright protection. In the UK, the Copyright, Designs and Patents Act 1988 (Section 9(3)) takes the opposite position — computer-generated works can attract copyright for 50 years, with the person who made the necessary arrangements deemed the author. This jurisdictional divergence matters practically for anyone distributing AI-generated music internationally.
For experimental practice, the most historically consistent position is one that does not resolve the tension but uses it as material: working with AI while acknowledging that the system carries a debt to the human musical archive, and making that visible rather than concealing it.
Where to Start: Entry Points for Musicians and Artists
The most direct entry point for someone coming from conventional electronic music production is not Suno or Udio — it is Magenta RealTime, because it integrates AI generation within an existing DAW workflow. It is available as an Infinite Crate plugin prototype that feeds audio directly into a DAW, and as a downloadable model for those with a configured Python environment.
For those starting from scratch with AI tools specifically:
- Suno AI (free plan): generate the first tracks with descriptive prompts. Experiment with genres that do not exist — impossible combinations — to understand the model’s limits. The goal is not the finished track: it is understanding what you control and what you do not.
- AIVA (free plan with monthly limit): generate an instrumental piece, download the MIDI, and open it in any DAW. Edit the score, change instruments, alter the structure (see the MIDI sketch after this list). This combination — AI for a draft, human for intervention — is the most common workflow in experimental practice.
- Magenta RealTime (requires Python + GPU, or Colab): if you have experience with code environments, the Google Colab notebook is the entry point without local installation. The model accepts text prompts and reference audio; the output is real-time audio you can record and process.
- RAVE (Max/MSP or plugin): for artists already working with Max/MSP or a plugin-based setup, RAVE’s real-time timbre transfer offers a qualitatively different kind of AI interaction — the model’s learned sonic vocabulary becomes an instrument you play against your own material.
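To make the AIVA step concrete: once the MIDI draft is on disk, the human-intervention pass can start even before the DAW. A minimal sketch, assuming the `pretty_midi` package (filenames and patch numbers are illustrative):

```python
# AI draft in, human intervention out: rework an exported MIDI file.
# Assumes `pip install pretty_midi`; names and patch choices are illustrative.
import pretty_midi

pm = pretty_midi.PrettyMIDI("aiva_draft.mid")

for inst in pm.instruments:
    if not inst.is_drum:
        inst.program = 88  # General MIDI 'Pad 1 (new age)', zero-indexed
    for note in inst.notes:
        note.velocity = min(127, note.velocity + 15)  # push the dynamics
        if note.pitch < 48:
            note.pitch += 12  # lift the muddy low register an octave

pm.write("aiva_edited.mid")  # reopen in the DAW and keep editing
```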
The most important step is not technical: it is defining what role AI plays in your practice. Is it an instrument you perform in real time? A generator of raw material you process afterwards? A system you negotiate with toward a final result? That decision determines which tool makes sense to use.
Further Reading and Tools
- Google Magenta RealTime — official project page and Colab notebook
- RAVE on GitHub — open-source variational autoencoder for audio
- AIVA — AI composition platform with MIDI export
- How to Use Suno AI to Create Music: Complete Guide — Amplify DAI
- Generative Art with p5.js: Beginner’s Guide — Amplify DAI
- All resources — Amplify DAI