Welcome to the Piano World Piano Forums. Over 3 million posts about pianos, digital pianos, and all types of keyboard instruments. Over 100,000 members from around the world.
Here's someone who made a YouTube video of the WaveNet piano: https://www.youtube.com/watch?v=Y8UawLT4it0 It trained itself by listening to only 60 hours of (poorly recorded?) other piano videos on YouTube.
FWIW, Google published the recipe, though you'd have to be quite the data chef. Imagine if we used decent source material and concentrated only on modeling the instruments. We could leapfrog over a couple of generations of incremental improvements to existing modeling software.
There are some more piano examples at the end of this article, and an explanation of what WaveNet does - it was designed for speech synthesis. Kind of interesting that it works at all when directed at piano sounds.
This is where piano modeling should go, or any kind of instrument modeling for that matter. Once they've modeled an actual instrument, take the results and connect them to a keyboard.
It would probably take a little more work, but I bet you could give control of a piano to the software so it could also include the velocity curves of an actual piano.
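To make the "include the velocity curves" idea concrete: one plausible approach is to drive the acoustic piano at known MIDI velocities, measure the recorded level of each strike, fit a curve, and then invert it so the software knows which velocity reproduces a target loudness. A hypothetical NumPy sketch, with invented measurement numbers (nothing here comes from Google's work):

```python
import numpy as np

# Hypothetical measurements: MIDI velocity sent to a player piano vs.
# the peak level (dBFS) recorded for each strike. Numbers are invented.
velocities = np.array([10, 30, 50, 70, 90, 110, 127], dtype=float)
peak_db    = np.array([-40.0, -30.0, -22.0, -16.0, -11.0, -7.0, -5.0])

# Fit a smooth cubic velocity curve: dB level as a function of MIDI velocity.
coeffs = np.polyfit(velocities, peak_db, deg=3)
curve = np.poly1d(coeffs)

def velocity_for_level(target_db):
    """Invert the curve: which MIDI velocity (1..127) best reproduces
    a target peak level? Simple grid search over the valid range."""
    candidates = np.arange(1, 128)
    return int(candidates[np.argmin(np.abs(curve(candidates) - target_db))])

# Ask the fitted model which velocity the real piano needs for a -16 dB strike.
v = velocity_for_level(-16.0)
```

With a calibrated curve like this, a modeled piano could map incoming velocities through the same shape as the instrument it was trained on.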
What's crazy is that the fundamental principle behind this software can be used to model anything, not only speech but intelligence. It's really mind-boggling how well it learns and models the real world. It's not even sentient, but it's capable of learning to be smarter than any person who ever lived. Wonderful and frightening at the same time.
I'm hoping for Jarvis, but we might just get SkyNet after all.
We are the music makers, And we are the dreamers of dreams.
Oh, that totally reminded me... we saw one of these at a mall a couple of months ago. My daughter was scared stiff of it and would not get within 6 feet of the Pepper, hiding behind me and Mom. Our friends' daughter who was with us strode right up to it and started demanding Google/Alexa answers to trivia questions (which the robot promptly ignored). Then she kicked it (!!), and the mall chaperone asked that she be taken away. I don't know what to make of these interactions, but I feel someone in this new generation is going to be sadly disappointed in the state of the world in 20 years....
And let us take a HUGE breath here... This is nothing new. If anyone is familiar with "Band In A Box", it's been around forever. It has a solo mode that plays any instrument improv style. Sounds exactly like this. Generated in real time. Listen to this jazz solo.
Fscotte, the difference is that Band-in-a-Box and other existing "random music" generators produce music from algorithms that were designed by humans. In this case, no human designed an algorithm... the computer came up with the algorithms itself, merely by "listening" to and analyzing existing music.
Less clear (and this addresses MMM's comment as well) is where the piano sound itself is coming from. How is WaveNet generating the piano tone? Is it modeling a piano sound? I assume that's where the OP's Pianoteq reference comes in.
The big thing about WaveNet is that it's a neural network that learns and programs itself.
Nobody programmed it with any music theory or piano samples/sounds. It learned from listening to real-world music how to model/create/synthesize those piano sounds, and then it figured out, on its own, the rules needed to create/approximate its own piano riffs/compositions. To top it off, it's not even playing an instrument; it's singing it all back to us.
Same thing with human speech, it wasn't programmed to speak, it was programmed to learn how to speak by listening and learning from real world sounds. The results are remarkable.
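For anyone curious what "learning by listening" means mechanically: per the DeepMind article, WaveNet is autoregressive — it predicts each raw audio sample from the samples before it, using stacks of dilated causal convolutions so a few layers can "see" far back in time. A toy NumPy sketch of just that causal-dilation idea (not DeepMind's actual code; the real network adds gated activations, skip connections, and many learned filters):

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """1-D causal convolution: output[t] depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (never on the future)."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:
                acc += w * x[idx]
        y[t] = acc
    return y

# Stacking layers with dilations 1, 2, 4, 8 doubles the "memory" each layer,
# so a handful of layers can cover thousands of past samples -- the trick
# that makes modeling raw 16 kHz audio one sample at a time tractable.
signal = np.random.randn(64)
h = signal
for d in [1, 2, 4, 8]:
    h = causal_dilated_conv(h, weights=[0.5, 0.5], dilation=d)

# With kernel size 2, receptive field = 1 + sum of dilations = 16 samples here.
receptive_field = 1 + sum([1, 2, 4, 8])
print(receptive_field)  # 16
```

The causality is the point: at generation time the network emits one sample, feeds it back in, and predicts the next, which is why it can "sing" a piano rather than trigger recordings of one.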
Aside from being used to take over the world I imagine it could be trained to create instrument models that are more sophisticated than anything Pianoteq or Roland have developed up to this point.
What's scary in the long run is if you have several WaveNets all working together: one mimicking real-world sounds, while another generates the rules for music theory, and a third figures out the rules of what sounds 'good'. You could have a robotic musician better than anything we've ever seen.
And someone could make "her" look real good whilst doin' it . . . . .a real moneymaker!
Get back to me when a computer has a soul. Until then, the best artists will be human, not computer.
The sequencing/composing isn't the big story here (and the YT video so comically misses the point). The fact that ML is getting to the point where it can fully model a piano de novo is remarkable. No research in synthesis, no developing algorithms that mimic string vibrations, no modeling partials/fundamentals, etc. The computer listens to training data and figures out the best way to emulate it (whatever that way is). We may even get to a point where you can have your DP custom-model a particular piano you like, simply by playing it a recording from that piano. That's probably a long way off (if even possible), but if ML can learn to faithfully model from a given sample, we may start seeing some significant improvements in modeling in the next few years.