Could AI generated music affect jazz?

Could AI-generated music affect jazz? In contemporary culture, AI is becoming much more advanced. Decades of sci-fi movies and novels have stoked a fear that robots and artificial intelligence could take over human society. AI-generated music has taken classical and pop by storm over the last five years. So far it has not taken hold in jazz, but it could, and if it does, what are the ethical concerns?

AI-generated music was first investigated in 1957 with the Illiac Suite, composed by Lejaren Hiller and Leonard Isaacson at the University of Illinois at Urbana-Champaign. The 1950s were computerized music's in-utero period, developed mainly at universities. It was a time when synthesizers and crude computer music systems took up entire rooms, and the music they created was largely an academic concern. Even Edgard Varèse's early, influential Poème électronique (1958) sounded more like a game of Pong on a 1972 Atari system.

Decades after the creation of the Illiac Suite, a process called generative analysis was used to complete Schubert's Symphony No. 8, the Unfinished. An AI system was fed 2,000 of Schubert's compositions, and a neural network was trained on those inputs and outputs. Algorithms looked for compositional patterns, but the final piece was still performed by humans. The breakthrough raised a question more relevant and thought-provoking than ever: can music created by AI possess the soul and creativity of music made by musicians?
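To make the idea of "looking for compositional patterns" concrete, here is a minimal, purely illustrative sketch in Python. It is not the system described above: a first-order Markov chain simply counts which pitch tends to follow which in a tiny stand-in corpus, then samples new melodies from those counts. The corpus, pitches, and lengths are all placeholders.

```python
import random
from collections import defaultdict

# Hypothetical training data: each "piece" is just a list of MIDI pitch numbers.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],   # stand-ins for phrases by the composer
    [67, 65, 64, 62, 60, 62, 64, 67, 72],
]

# Count how often each pitch follows each other pitch (the learned "pattern").
transitions = defaultdict(list)
for piece in corpus:
    for current, nxt in zip(piece, piece[1:]):
        transitions[current].append(nxt)

def generate(start_pitch, length=16):
    """Sample a new melody by repeatedly choosing a plausible next pitch."""
    melody = [start_pitch]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:                     # dead end: restart from the opening pitch
            options = [start_pitch]
        melody.append(random.choice(options))
    return melody

print(generate(60))
```

Real systems use far larger corpora and far richer models, but the principle is the same: statistics learned from existing music are used to propose new music.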

AIVA (Artificial Intelligence Virtual Artist) is a program developed by a Luxembourg startup whose primary purpose is to create classical music in the style of the master composers; it is also employed in film and game scoring. In 2017, a complete album entitled Genesis was recorded using the program, which composes by learning from the works of masters like Beethoven, Mozart, and Bach. Drawing on what it learns from their styles, and on its own interpretation of music theory, the program writes its own music. Ethics really come into question here, because Turing tests run by AIVA's creators showed that listeners could not tell the difference between AI and human composers. On that basis, it could be argued that machines are able to duplicate the soul of human musicians, yet a listen to the AIVA composition "Symphonic Fantasy in A Minor, Op. 21" reveals a case of the uncanny valley too big to ignore.

The uncanny valley is a theory posited by Japanese roboticist Masahiro Mori in 1970: as something robotic grows more humanlike, our affinity for it rises, until it becomes almost, but not quite, human, at which point affection turns to unease. As with CGI, or even hyper-realistic art, a person knows it's not real, but it is so realistic that it borders on the unsettling. "Symphonic Fantasy in A Minor, Op. 21" is impressive, particularly in its sense of dynamics and its orchestral samples, but it is still very apparent that it was not created or performed by human beings. On the other hand, that is not surprising, because over the past 20 years a few generations have grown up listening to music not played by flesh-and-blood humans at all.
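The "couldn't tell the difference" claim is easier to weigh once you see what such a blind test actually measures. Below is a minimal, purely illustrative Python sketch, with no AIVA data involved: excerpts are presented in random order, guesses are tallied, and accuracy hovering near 50 percent means listeners are effectively guessing.

```python
import random

def run_blind_test(num_trials, listener):
    """Present AI or human excerpts in random order and tally correct guesses."""
    correct = 0
    for _ in range(num_trials):
        truth = random.choice(["AI", "human"])   # which kind of excerpt is played
        guess = listener(truth)                  # listener's verdict for this excerpt
        if guess == truth:
            correct += 1
    return correct / num_trials

# A simulated listener who is purely guessing; accuracy should land near 50%,
# which is what "could not tell the difference" means in practice.
coin_flip_listener = lambda _excerpt: random.choice(["AI", "human"])
print(f"accuracy: {run_blind_test(1000, coin_flip_listener):.1%}")
```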

In 2016, Sony's Computer Science Laboratories used AI to create a song in the style of The Beatles. More recently, an AI-generated "new" Nirvana track drew widespread attention. In that case the music was created using AI, but a singer imitating the style of Kurt Cobain sang the lyrics. The track was included in the project Lost Tapes of the 27 Club, featuring "new music" from artists like Cobain, Jimi Hendrix, Janis Joplin, and Amy Winehouse. In pop, perhaps AI-generated music could fill a niche for fans clamoring for music from deceased heroes, but the ethical question remains. Estates looking to make a quick buck could sign off on such projects, and record companies, like film studios with their digitally generated and de-aged actors, could use the method to keep iconic musicians alive forever. Imagine how many hip hop fans would (or would not) rejoice if new Tupac, Biggie, or DMX tracks were created this way in the coming years; plenty of new tracks featuring them over the past 20 years have already been built from previously unreleased rhymes. Sampling technology has developed to such a degree that musicians' voices and sounds can live in a DAW forever.

Ethical Impact on Jazz

While AI-generated music trends have not exactly affected jazz en masse right now (no pun intended toward Jackie McLean's classic 1965 album), the possibility is there. Similar technology has already been used to keep legends "alive." In 1991, the late Natalie Cole released the album Unforgettable; the hit single, of course, blended Cole's vocals with a recorded performance by her father, the great Nat King Cole. While this more or less amounts to sampling, the result at the time of its 1991 release was a bit unsettling; I was ten years old, and my mother owned the CD. Kenny G famously used the same technique, to much criticism, "duetting" with Louis Armstrong on "What a Wonderful World" in 2000, then took it to a different level on an album released in 2021: Gorelick sampled Stan Getz and, through technology, was able to play new melodies using Getz's sound. Again, the result is disturbing, and as a very telling scene in the documentary Listening to Kenny G demonstrated, most of Gorelick's audience would probably not be able to tell the difference, much less know who Getz was. Stan Getz was not a very nice man, his career ravaged by addiction and personal issues, but as a saxophonist he was nonpareil. It doesn't really come as a surprise that the Getz estate signed off on this. One cannot imagine the estates of Miles Davis or John Coltrane stooping this low; record companies, yes.

AI-generated music has the potential to create "new" music from deceased legends like Duke Ellington, Count Basie, Miles Davis, or Louis Armstrong, and new music by defunct but much-loved groups like Weather Report or the Mahavishnu Orchestra. Such a move would be tasteless and would cheapen the formidable catalogs those artists created.

The argument is a bit similar to the one heard at the dawn of the 1980s, when the Linn LM-1, an array of other drum machines, and samplers like the Fairlight, Synclavier, and Emulator II appeared. Session musicians feared at the time that these technologies would put them out of a job. They didn't exactly do that, but they made music creation much cheaper, and as a natural consequence producers, or artists themselves, opted for the then-new technology. Rick Beato, the YouTube sensation, has told tales across various videos on his channel of session drummers being hired for dates, only for producers, unsatisfied with the "human" result, to choose MIDI drums or other technology instead. With jazz, there is beauty in the imperfections found on classic recordings, but in today's cultural climate, especially among contemporary musicians, there is at times a concerted effort to make things as flawless as possible.
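For readers who have never seen it, the programmed alternative to a session drummer is trivially easy to produce. The sketch below, using the Python library mido (the filename, note choices, and dynamics are placeholders), writes a one-bar kick, snare, and hi-hat pattern that is perfectly quantized, exactly the kind of flawless, feel-free part the anecdote above describes.

```python
import mido

TICKS = 480                                   # ticks per quarter note
KICK, SNARE, HAT = 36, 38, 42                 # General MIDI drum note numbers

mid = mido.MidiFile(ticks_per_beat=TICKS)
track = mido.MidiTrack()
mid.tracks.append(track)

for beat in range(4):                         # four quarter notes in one bar
    drum = KICK if beat % 2 == 0 else SNARE   # kick on 1 and 3, snare on 2 and 4
    track.append(mido.Message('note_on', channel=9, note=drum, velocity=100, time=0))
    track.append(mido.Message('note_on', channel=9, note=HAT, velocity=70, time=0))
    track.append(mido.Message('note_off', channel=9, note=drum, velocity=0, time=TICKS))
    track.append(mido.Message('note_off', channel=9, note=HAT, velocity=0, time=0))

mid.save('one_bar_groove.mid')                # perfectly on the grid, no "human" feel
```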

AI-generated music could lower the bar even further than it already is. Many top producers never learn music theory, relying instead on MIDI chord packs to build progressions and on tools like Beat Detective to snap drums to a grid. For all the possibilities AI-generated music brings, the ethical questions remain.
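Here is a rough illustration of that chord-pack workflow, again in Python with mido; the "pack" is a hypothetical handful of pre-voiced chords, strung into a progression and written out as MIDI with no theory knowledge required.

```python
import mido

CHORD_PACK = {                         # hypothetical pre-voiced chords (MIDI notes)
    "Cmaj7": [48, 55, 59, 64, 67],
    "Am9":   [45, 52, 59, 60, 64],
    "Dm9":   [50, 57, 60, 64, 65],
    "G13":   [43, 53, 57, 59, 64],
}
PROGRESSION = ["Cmaj7", "Am9", "Dm9", "G13"]
TICKS = 480                            # each chord held for a whole note (4 * TICKS)

mid = mido.MidiFile(ticks_per_beat=TICKS)
track = mido.MidiTrack()
mid.tracks.append(track)

for name in PROGRESSION:
    notes = CHORD_PACK[name]
    for note in notes:                 # all chord tones start together
        track.append(mido.Message('note_on', note=note, velocity=80, time=0))
    for i, note in enumerate(notes):   # hold for a whole note, then release
        track.append(mido.Message('note_off', note=note, velocity=0,
                                  time=4 * TICKS if i == 0 else 0))

mid.save('chord_pack_progression.mid')
```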

The New York Jazz Workshop holds workshops, intensives and master classes in a wide variety of disciplines in Manhattan, Brooklyn and Europe.