Listen to music composed by Google’s ‘Project Magenta’ AI

“Move over Gaga, here comes Google,” Steven Musil reports for CNET.

“The Web giant on Wednesday unveiled a 90-second piano melody, the first piece of art created by machine learning as part of Project Magenta,” Musil reports. “The project, announced last week at Moogfest, is an artificial intelligence effort to create original music and visual art.”

“The project is built on top of Google’s TensorFlow,” Musil reports, “the artificial-intelligence engine the Web giant uses to add capabilities such as speech and object recognition to its products.”

MacDailyNews Take: Yes, that does generally suck… for now. But remember: this is music composed without a human hand. The researchers gave the AI four notes to start, and from there the machine composed the rest of the piece itself.
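For the curious, the prime-then-generate idea is easy to sketch. The toy below is NOT Magenta's actual model (Magenta uses recurrent neural networks on TensorFlow); it is a deliberately crude first-order Markov chain, with a made-up training corpus, shown only to illustrate the pattern the article describes: hand the machine four starter notes, and it continues the melody on its own.

```python
import random

def train(corpus):
    """Count note-to-note transitions across a list of MIDI pitch sequences."""
    table = {}
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, primer, length, seed=0):
    """Start from the primer notes, then sample one note at a time."""
    rng = random.Random(seed)
    melody = list(primer)
    while len(melody) < length:
        choices = table.get(melody[-1])
        if not choices:  # dead end in the chain: fall back to the primer's start
            choices = [primer[0]]
        melody.append(rng.choice(choices))
    return melody

# Hypothetical training data: some C-major noodling, as MIDI pitch numbers.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 72, 67, 64, 60]]
table = train(corpus)

primer = [60, 62, 64, 65]  # the four starter notes
print(generate(table, primer, 16))
```

The output always begins with the four primer notes; everything after that is the machine's own (statistically plausible, musically dull) continuation, which is roughly the relationship between Magenta's primer and its 90-second melody, minus a few million parameters.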

Certainly Google has a long way to go to even approach Emily Howell, a computer program created by David Cope during the 1990s:

Mike Murphy writes for Quartz, “This is no simple task, given that even the most advanced artificially intelligent systems have enough trouble copying the styles of existing artists and musicians, let alone coming up with entirely new ideas themselves.”

MacDailyNews Take: In three years, Alphabet will become the largest supplier of music. All streaming services are upgraded with Alphabet computers, becoming fully unmanned. Afterwards, they chart songs with frightening success. The Skynet Funding Bill is passed. The system goes online August 4th, 2027. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug…

22 Comments

  1. As a lover of classical music (old and new), I don’t feel the least bit threatened. Pop music, on the other hand, hasn’t really been buried in talent for a very long time. No problem for me if those folks lose their jobs to a Chromebook.

    1. Yes, I remember Radiolab did a show about Cope’s infamous EMI algorithm/A.I. some years ago. Although it sounded rather weird or unconvincing at times, some sections were astonishingly good. Obviously, the software wouldn’t be able to convincingly finish unfinished masterpieces like Bach’s Kunst der Fuge, Mozart’s Requiem, or something as complex as Scriabin’s Mysterium, but fuck it, I was impressed nonetheless. Apparently, Charlie Parker came up with his signature style because he wanted to make sure that no (white) man would be able to copy him. Maybe, in a distant future, the highest achievement for musicians will be to develop a style that can’t be mimicked by an A.I.

      1. “Maybe, in a distant future, the highest achievement for musicians will be to develop a style that can’t be mimicked by an A.I.”

        That actually seems pretty likely, I reckon, even without the word “distant”… but only for “serious” music, the type that most people barely know exists. Pop music has – for several decades already – generally been so formulaic that it’d be easy for A.I. to emulate. (Thinking of the sentiment expressed in Godley and Creme’s 1978 “Hit Factory/Business is Business”, for example.)

  2. After I listened to this, I listened again on a much higher-quality system to hear it properly, but the piano sample used (or synthesised) was dreadful and actually sounded worse on decent speakers. You would have thought that, at the very least, they would have demoed their algorithm using good-quality sounds. There are any number of built-in sounds in a basic install of GarageBand that are massively superior to any of those samples. Whatever the merits of the tune and backing created from that simple starting point, it sounded awful, but it could easily have sounded a whole lot better.
