As I mentioned in a previous post, I’ve been following the emergence of computer-composed music for some time. In that post I linked to some of the software tools out there making music.
What do I think of the state of the industry? There’s no doubt that computers are creating some decent experimental music, as well as film-style soundtrack music (what’s sometimes called “functional music,” like the royalty-free backing tracks I create) and even pop songs. But I think computers have a ways to go before they really compete* with the output of humans (i.e., meat-based computers).
* Keep in mind: computers don’t have to produce music at a human level to be competitive. If a computer can create “ok” music at a fraction of the cost it takes a human to create “great” music, the computer may get the sale.
So when will computers be competitive with humans? Based on my experience with computer programming and my readings on topics like neuroscience and cognitive psychology, I think that computers will create credible, competitive music sooner rather than later. Let’s say within ten years.
When that day comes, what do musicians do? Will we all hit the unemployment line? The truth is, I don’t know for sure, though I can make some educated guesses.
A music career has always been tough
First, let’s be aware that “musician” has been a pretty tough job for some time. It’s one of the few jobs where you have a lot of competition from talented non-professionals. Even now, I am not a full-time musician. I make more than half of my money from non-musical gigs. Music was always a difficult career choice and it’s only gotten worse over the past 20 years. There are a variety of reasons for this, many of them having to do with technology.
A music career is about more than music
It’s also true that the musical success of many performers isn’t really dependent on their mastery of music making. I love KISS but what drove them to stardom had more to do with their showmanship and attitude than their compositional ability or playing chops. What Nirvana represented culturally was just as important as their chord progressions. A lot of artists’ success is based more on their attractiveness, charisma, or authenticity (or just plain celebrity) than their musical ability. And these factors exist even on the level of a local, non-superstar music economy. (e.g. The top musician around town may just be a guy who hob-nobs well.) How these non-musical elements will factor into music careers of the future is hard to determine.
Bottom-up versus top-down
All that said, what do computers bring to the music making game? I think software is getting pretty good at creating chord progressions, melodies, rhythms and various “small” units of music. What it needs help with is gluing those components into a larger whole with nice, dynamic builds and emotional moments. Maybe the musician of the future won’t spend much time composing granular bits of music but on arranging pre-composed (by humans or computers) musical bits into a larger whole. This is, of course, what plenty of hip-hop and electronic musicians already do; they take beats and samples and arrange them into a song.
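To make the “small units” idea concrete, here is a toy sketch of one common approach: a Markov-style chord progression generator. The transition table and chord choices below are my own invented example for illustration, not how any particular tool actually works.

```python
import random

# Hypothetical transition table: for each chord, some chords that
# plausibly follow it in a simple progression in C major.
TRANSITIONS = {
    "C":  ["F", "G", "Am"],
    "F":  ["G", "C", "Dm"],
    "G":  ["C", "Am", "F"],
    "Am": ["F", "Dm", "G"],
    "Dm": ["G", "F"],
}

def generate_progression(start="C", length=8, seed=None):
    """Random-walk the transition table to produce a chord progression."""
    rng = random.Random(seed)
    progression = [start]
    for _ in range(length - 1):
        progression.append(rng.choice(TRANSITIONS[progression[-1]]))
    return progression

print(generate_progression(seed=42))
```

A generator like this reliably produces “plausible” chord sequences, which is exactly the point: the granular units are easy for software, while shaping them into a satisfying whole song is not.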
This takes me back to my early days in web design. In 1995 we designed web pages via coding with the markup language HTML. For example, if you wanted to add an image to a web page, you added a line of code that specified where to find that image, where it should be placed on the page and what its dimensions should be, etc. I usually did this in Microsoft’s Notepad, the most basic of text editors.
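For instance, adding an image by hand looked something like this (the filename and dimensions here are just placeholders):

```html
<img src="band-photo.jpg" width="300" height="200" alt="Photo of the band">
```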
Eventually the web development tool Dreamweaver appeared. (As did other tools, but Dreamweaver was what I used.) With Dreamweaver you didn’t have to deal with actual code (though you could.) Rather you inserted and dragged and dropped objects (images, forms, buttons, etc.) onto the web page.
You can think of the code-focused way of doing web design as a bottom-up approach. The Dreamweaver approach was top-down. With music, we may see a move away from the bottom-up approach (sitting at a piano coming up with a melody that you then expand into a whole song) to top-down (opening up a musical template and then tweaking/swapping out parts.) At that point, the useful skills won’t be so much writing individual bits (melodic fragments, drum grooves, bass lines etc.) but arranging these bits into a cohesive whole.
Of course, with web design what people did and still do is really a mix of bottom-up and top-down approaches. So it will be, I suspect, with music.
Composing versus producing music
There’s another thing to consider here. There are really several steps between writing music and creating something that can be heard as music (such as a recording or a live performance.) First, someone (or something) has to compose the music, usually capturing it with some kind of notation (music written on a staff, MIDI, whatever.) Then the music has to be performed in some way. If a performance is recorded, it can be copied and distributed.
Computers currently do a good job of “performing” music that can be captured via electronic instruments. Computers can create “good sounding” music using MIDI, synth sounds or certain samples. Computers still can’t really create music that sounds like it was performed on acoustic instruments. For example, acoustic guitar samples still… what is the term… suck? Synthesized duplicates of electric guitars are still pretty weak. Synth horns and strings can be impressive but are far from “real.” It’s still true that with “acoustic” sounds if you want top quality you need skilled, human players.
Can computers overcome these limitations? I think with deep learning computers may be able to analyze digital waveforms of mic’d acoustic and electric instruments and start to figure out the harmonic patterns that make those instruments sound so unique. Then computers could apply this knowledge to synthesizing those instruments. It’s hard to know how long this process could take (though I’m certain people are working on this right now.)
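A first step in that kind of analysis can be sketched with a Fourier transform. The example below is a toy: the “plucked string” signal is synthetic, and real instrument modeling is vastly more involved, but it shows how you can pull an instrument’s harmonic fingerprint out of a raw waveform.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (CD quality)

def top_harmonics(signal, n=3):
    """Return the n loudest frequencies (Hz) in a waveform, via FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    loudest = np.argsort(spectrum)[-n:]  # indices of the n biggest peaks
    return sorted(float(f) for f in freqs[loudest])

# Fake "plucked string": a 220 Hz fundamental plus two overtones at
# decreasing amplitude. The relative strengths of these overtones are
# part of what makes each real instrument sound unique.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of samples
note = (1.0 * np.sin(2 * np.pi * 220 * t)
        + 0.5 * np.sin(2 * np.pi * 440 * t)
        + 0.25 * np.sin(2 * np.pi * 660 * t))

print(top_harmonics(note))  # the three component frequencies
```

Extracting peaks like this is the easy part; the hard part, which is where deep learning would come in, is capturing how those harmonics attack, decay, and interact on a real mic’d instrument.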
I wonder if the work-around for this problem will be robotics. Check out this video of the robot chef.
As you can see, the robo-chef physically prepares a meal. It “learns” how to do this by “studying” video of a real human making the same meal. Could a similar robot play a guitar like Segovia or Hendrix? Playing guitar is, of course, a more dexterous action than stirring soup or cooking an omelet. That said, I’ve heard talk of the design of robot surgeons, and few things require a more delicate touch than surgery.
Now would be a great time to mention “Captured by Robots.” This heavy metal “band” is made up of animatronic robots with a human singer. Well worth checking out. (Video NSFW)
I think robot musicians could be used to “play” acoustic and electric instruments in a studio setting. I doubt they’ll take over live gigs because, for complex psychological reasons, people like seeing music played by people. If you are considering a robot musician for your band, you might as well use a backing track (which, of course, has been happening for years.)
In my view, robots should soon be able to duplicate the fine motor actions used to play instruments you pluck, strike or bow—guitars, pianos, drums, violins etc. (For kicks, google the phrase “self playing guitar.” There are some interesting early attempts.) What may be more challenging for robots to play are instruments into which you blow—horns, saxophones, etc.
Are we screwed?
So what do musicians do to compete with computers? The short answer is: figure out what computers can’t do (organize bits of music into greater wholes, play instruments computers can’t faithfully render, be an attractive/charismatic/authentic performer etc.) and do that.
However, I also think the rise of artificial intelligence and robotization is poised to radically alter human society on many fronts—culturally, economically, morally. The whole notion of a job or profession may change. As for how all that will play out and affect musicians… your guess is as good as mine.
I am morally obligated to end this post with the following Kraftwerk video.