In the 1950s and 1960s, music made by artificial intelligence was not fully original but was generated from templates that people had defined in advance and supplied to the system, an approach known as rule-based systems (a minimal sketch of this approach appears below). As computers grew more powerful, machine learning and artificial neural networks entered the field, allowing models to learn from large amounts of musical data rather than from hand-written rules alone. By the early 2000s, further advances, including generative adversarial networks (GANs) and deep learning, were helping AI compose original music more complex and varied than was previously possible. Notable AI-driven projects, such as OpenAI's MuseNet and Google's Magenta, have demonstrated AI's ability to generate compositions that mimic various musical styles. The 20th-century art historian Erwin Panofsky proposed that in all art there exist three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject. AI music engages only the first of these, creating music without the "intention" that usually lies behind it, which can leave composers who listen to machine-generated pieces unsettled by the lack of apparent meaning.
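The rule-based approach described above can be made concrete with a short sketch. Everything here, the scale, the template, and the leap rule, is a hypothetical illustration of the general technique, not a reconstruction of any historical system:

```python
import random

# Sketch of 1950s-style rule-based composition: the program invents
# nothing freely; it fills a fixed, human-authored template with notes
# that satisfy human-authored rules. All rules here are hypothetical.

SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]  # C major

# Template supplied in advance: each slot lists the scale degrees
# (indices into SCALE) that are allowed to appear there.
TEMPLATE = [
    {"allowed": [0]},           # open on the tonic
    {"allowed": [1, 2, 3, 4]},  # stepwise middle material
    {"allowed": [2, 3, 4, 5]},
    {"allowed": [4]},           # pause on the dominant
    {"allowed": [3, 4, 5, 6]},
    {"allowed": [1, 2, 3]},
    {"allowed": [0, 7]},        # close on a tonic
]

def compose(template, max_leap=2):
    """Fill the template left to right, avoiding leaps larger than max_leap."""
    melody, prev = [], None
    for slot in template:
        choices = [d for d in slot["allowed"]
                   if prev is None or abs(d - prev) <= max_leap]
        prev = random.choice(choices or slot["allowed"])
        melody.append(SCALE[prev])
    return melody

print(compose(TEMPLATE))  # e.g. ['C4', 'E4', 'G4', 'G4', 'F4', 'D4', 'C4']
```

Every run yields a different melody, but only within the space the human author already delimited, which is why such output was described as not fully original.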
==Timeline==
Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played.
Père Engramelle's schematic of a "piano roll", a means of automatically recording note timing and duration in a way that could later be transcribed into proper musical notation by hand, was first implemented by the German engineers J.F. Unger and J. Hohlfield in 1752. In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet", a completely computer-generated piece of music. The computer was programmed to accomplish this by composer Lejaren Hiller and mathematician Leonard Isaacson. In 1965, inventor
Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them. He demonstrated the computer on the quiz show ''I've Got a Secret'' that same year. By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper on its development was published in 1989. The software used music information processing and artificial intelligence techniques to essentially solve the transcription problem for simpler melodies, although more complex melodies and polyphonic textures are regarded even today as difficult deep-learning tasks, and near-perfect transcription remains a subject of research.
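To make the transcription problem concrete, the sketch below performs its most mechanical sub-task: quantizing the raw timings of a performance onto a rhythmic grid, much as a piano roll records them physically. The tempo, grid resolution, and note-value table are illustrative assumptions; the genuinely hard parts of transcription (pitch detection, voice separation, metre inference) are omitted:

```python
# Minimal sketch of rhythm quantization, one ingredient of transcription:
# snap raw performance timings (in seconds) to the nearest grid position.
# Tempo, grid resolution, and note values are illustrative assumptions.

TEMPO_BPM = 120                     # assumed known; inferring it is hard
BEAT = 60.0 / TEMPO_BPM             # seconds per quarter note
GRID = BEAT / 4                     # sixteenth-note grid

NOTE_VALUES = {1: "sixteenth", 2: "eighth", 4: "quarter", 8: "half"}

def quantize(events):
    """events: list of (pitch, onset_seconds, duration_seconds)."""
    notated = []
    for pitch, onset, dur in events:
        steps = max(1, round(dur / GRID))           # duration in grid units
        name = NOTE_VALUES.get(steps, f"{steps}/16")
        beat = round(onset / GRID) * GRID / BEAT    # snapped onset, in beats
        notated.append((pitch, beat, name))
    return notated

# A slightly uneven performance of four notes:
performance = [("C4", 0.02, 0.46), ("E4", 0.51, 0.27),
               ("G4", 0.77, 0.24), ("C5", 1.03, 0.95)]
for pitch, beat, value in quantize(performance):
    print(f"beat {beat:g}: {pitch} ({value})")
```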
In 1997, an artificial intelligence program named Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music to imitate the style of Bach. EMI would later become the basis for a more sophisticated algorithm called
Emily Howell, named for its creator. In 2002, the music research team at the Sony Computer Science Laboratory in Paris, led by French composer and scientist
François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.
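Pachet's published descriptions of the Continuator are based on Markov models learned from the musician's own playing. The sketch below illustrates that general idea with a deliberately simplified first-order model; the actual system used variable-order models and also captured rhythm and dynamics:

```python
import random
from collections import defaultdict

# Toy sketch of style continuation: learn transition statistics from
# what the musician just played, then sample a continuation from the
# same statistics. Far simpler than the real Continuator.

def learn(notes):
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def continue_phrase(notes, length=8):
    transitions = learn(notes)
    current = notes[-1]
    out = []
    for _ in range(length):
        nexts = transitions.get(current)
        if not nexts:                      # dead end: restart from anywhere
            current = random.choice(notes)
            nexts = transitions.get(current) or notes
        current = random.choice(nexts)
        out.append(current)
    return out

played = ["C4", "E4", "G4", "E4", "F4", "D4", "G4", "C4"]  # live input
print(continue_phrase(played))
```

Because the transition table is learned from the live input itself, the continuation stays in the style of whatever the musician just played, which is the effect the Continuator demonstrated.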
Emily Howell would continue to advance musical artificial intelligence, with its first album, ''From Darkness, Light'', published in 2009. Since then, many more pieces composed with artificial intelligence have been published by various groups. In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1". Housed at the Universidad de Málaga (University of Málaga) in Spain, the computer can generate a fully original piece in a variety of musical styles. Later research investigated the feasibility of neural melody generation from lyrics using a deep conditional LSTM-GAN method.
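A conditional LSTM-GAN of this kind pairs a lyric-conditioned LSTM generator with an LSTM discriminator trained adversarially on lyric-melody pairs. The PyTorch sketch below shows only the general shape of such a model; the vocabulary, dimensions, and architectural details are illustrative assumptions, not the published design:

```python
import torch
import torch.nn as nn

# Illustrative sketch of a conditional LSTM-GAN for melody-from-lyrics.
# All sizes and details are assumptions for demonstration only.

VOCAB, PITCHES, EMB, HID, NOISE = 1000, 128, 64, 128, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.lyric_enc = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(HID + NOISE, HID, batch_first=True)
        self.to_pitch = nn.Linear(HID, PITCHES)

    def forward(self, lyrics, noise):
        enc, _ = self.lyric_enc(self.embed(lyrics))       # (B, T, HID)
        z = noise.unsqueeze(1).expand(-1, enc.size(1), -1)
        dec, _ = self.decoder(torch.cat([enc, z], dim=-1))
        return torch.softmax(self.to_pitch(dec), dim=-1)  # note probabilities

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB + PITCHES, HID, batch_first=True)
        self.judge = nn.Linear(HID, 1)

    def forward(self, lyrics, melody_probs):
        x = torch.cat([self.embed(lyrics), melody_probs], dim=-1)
        _, (h, _) = self.rnn(x)
        return torch.sigmoid(self.judge(h[-1]))  # P(melody is human-made)

# One illustrative forward pass on random data:
lyrics = torch.randint(0, VOCAB, (4, 16))    # batch of 4 lyric lines
noise = torch.randn(4, NOISE)
G, D = Generator(), Discriminator()
fake_melody = G(lyrics, noise)               # (4, 16, PITCHES)
print(D(lyrics, fake_melody).shape)          # torch.Size([4, 1])
```

In adversarial training, the discriminator would be optimized to separate real lyric-melody pairs from generated ones while the generator is optimized to fool it, pushing generated melodies toward the statistics of the training corpus.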
With progress in generative AI, models capable of creating complete musical compositions (including lyrics) from a simple text description have begun to emerge. Two notable web applications in this field are Suno AI, launched in December 2023, and Udio, which followed in April 2024. In November 2025, the AI-generated song "Walk My Walk", presented as being by Breaking Rust, topped the Billboard Country Digital Song Sales chart. The same year, the AI band The Velvet Sundown attracted one million listeners on Spotify. In November 2025, the service claimed that 50,000 AI-generated songs were being uploaded daily, about a third of total uploads. Composers and artists such as Jennifer Walshe and Holly Herndon have been exploring aspects of music AI for years in their performances and musical works. Another original approach, humans "imitating AI", can be found in the 43-hour sound installation ''String Quartet(s)'' by Georges Lentz.

==Software applications==