There has been a lot of interest in a different conversation about this topic. To avoid hijacking that thread, let's start the discussion on this one.

Questions:
- how much latency is too much latency for piano?
- how much latency is too little latency?
- is too much/little latency because we are "used" to one setup (e.g. acoustic), or for actual physiological/musical reasons?
- the computer can only report part of the latency; has anybody measured it?

Many papers regarding this topic have been mentioned, particularly:

https://www.researchgate.net/publication/7603558_Touch_and_temporal_behavior_of_grand_piano_actions
https://www.researchgate.net/public...nal_Percussionists_and_Amateur_Musicians
https://asa.scitation.org/doi/10.1121/1.1376133
Regarding the "too little" latency, some Kawai instruments can artificially increase it, as we started discussing in that other thread:

Quote
So on the CA99 / NV10 / NV5 (only for the piano sound), we have a "Hammer delay" for pianissimo:
(page 60 of the manual)
https://www.kawai.co.uk/service/ca99_79_e.pdf
I was able to capture audio from the piano and audio from the VST, and examine the delay. The result: about 2.5 msec.

This does not include the delay inside the piano from keypress to audio output.
It just shows that the VST does not introduce much more delay compared to the piano itself.
I'm not sure if you want to discuss the refs you give, or get answers to your questions.
I suppose the latter.

I don't think you can put hard numbers on it; there is always some situation/task/music where lower latency proves better.

Also, what is acceptable for one person is unacceptable for another.

I don't think there is such a thing as "too little latency"; 0 is probably ultimately the best.

It is true that on real pianos there is also a delay between the moment you press a key and the moment you hear the sound. I'm just saying that less latency would be better there too. Probably nobody ever considered that, or maybe they did and the trade-off is what we have today.

Yes, you can get used to it. Ask organ players: the latency may be so large that they are playing several notes ahead in fast pieces. But I think you need a lot of skill and training for it, and you need to get used to it. So lower is still better.

>the computer can only report part of the latency; has anybody measured it?
For audio, the computer does report the total latency, including the sound card, DA converter, and analog circuit times. At least on Mac it reports these values so that software can compensate for them. In theory you could also include the speakers and consider acoustic delays (not standard on Mac, AFAIK).

So it is possible to do all this for DPs just as well.
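
For example, you can at least see what the audio stack itself claims. A minimal sketch using the python-sounddevice library (the sample rate and block size are just example values, and this shows only the reported output latency, not the keyboard's own key-to-MIDI delay):

```python
# Print the latencies the OS/audio driver reports. This covers only the
# "reported" part of the chain: it says nothing about the keyboard's
# key-to-MIDI delay or about the speakers and the room.
import sounddevice as sd

print(sd.query_devices())  # each device lists its default low/high input/output latencies

# Open an output stream roughly the way a VST host might (values are just examples)
with sd.OutputStream(samplerate=48000, blocksize=64, channels=2) as stream:
    print(f"host-reported output latency: {stream.latency * 1000:.1f} ms")
```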
Originally Posted by MacMacMac
I was able to capture audio from the piano and audio from the VST, and examine the delay. The result: about 2.5 msec.

This does not include the delay inside the piano from keypress to audio output.
It just shows that the VST does not introduce much more delay compared to the piano itself.

That's too quick. Suppose the goal is 4 ms total latency; then 2.5 ms is already a big chunk of that budget, and maybe too much.
Originally Posted by wouter79
I'm not sure if you want to discuss the refs you give, or get answers to your questions.
I suppose the latter.

I don't "want" anything, other than discussing and learning. Those questions were only kickstarters.


Originally Posted by wouter79
I don't think there is such a thing as "too little latency"; 0 is probably ultimately the best.

That is what I would have said too. However, given that on an acoustic "louder" also means "earlier", part of the discussion in the third link is about an additional possible way to emphasize notes: not only loudness, but also timing. Whether or not it's done deliberately, it's happening, and I find it quite interesting and not disturbing in recordings.
I don't understand what you mean. How is that too quick? The goal is 0, right?
Originally Posted by wouter79
Originally Posted by MacMacMac
I was able to capture audio from the piano and audio from the VST, and examine the delay. The result: about 2.5 msec.

This does not include the delay inside the piano from keypress to audio output.
It just shows that the VST does not introduce much more delay compared to the piano itself.
That's too quick. Suppose the goal is 4 ms total latency; then 2.5 ms is already a big chunk of that budget, and maybe too much.
The goal is surely to have the Piano sound the note before you press the key?

grin
Originally Posted by OU812
The goal is surely to have the Piano sound the note before you press the key?

I will think about this answer every time I delay a note by hesitation... I don't think I will forget it soon! wink
Who will be the first one to call his piano dealer because his new acoustic piano that just got delivered has too much latency?
Hi,

Real piano behaviour:

In ppp, key bottom is after hammer-string contact
In fff, key bottom is before hammer-string contact
Staccato: same time

Global latency of 20-30 ms between finger-key contact and hammer-string contact:

see:
http://www.speech.kth.se/music/5_lectures/askenflt/measure.html
Originally Posted by owfrappier
Hi,

Real piano behaviour:

In ppp, key bottom is after hammer-string contact
In fff, key bottom is before hammer-string contact
Staccato: same time

Global latency of 20-30 ms between finger-key contact and hammer-string contact:

see:
http://www.speech.kth.se/music/5_lectures/askenflt/measure.html

Exactly. Yet nobody says a thing about it. Whereas on digital (computer-based) pianos, latencies of 10 ms often seem barely acceptable. Why? Some hypotheses have floated around, some reasonable speculations, yes, but I have not seen a compelling answer yet!
Originally Posted by OU812
The goal is surely to have the Piano sound the note before you press the key?
grin

How did you know of my invention?

I’m working on a special helmet that relays your intentions to the piano in advance, so the note does not ‘sound’ before you press a key (as per your system), but instead the piano anticipates your intentions and accommodates accordingly. This allows for calculations and other machinations to begin in advance; therefore, reducing the latency to 0.

For example, if your intention is to play fff, the piano will summon/cue the corresponding samples/notes, resonance(s), timbre, etc. in advance; therefore, eliminating the need for real-time processing, and, by extension, reducing the latency to 0.

You’re welcome, but I’m still suing you for imitating my invention.
Most people who complain will be experiencing delays of half a second or more.
There's confusion about latency in digital pianos and VSTs ...
Originally Posted by Del Vento
Whereas on digital (computer-based) pianos, latencies of 10 ms often seem barely acceptable. Why? Some hypotheses have floated around, some reasonable speculations, yes, but I have not seen a compelling answer yet!

The piano senses a key stroke.
It generates sound. How much latency is there between the key stroke and the sound?
It generates MIDI data. How much latency is there between the key stroke and the data transmission?
The MIDI data travels over a MIDI cable. It takes 1 msec.
... or ...
The MIDI data travels over USB. How long does that take?
The PC captures the MIDI data. How long does that take?
The VST captures the MIDI data. How long does that take?
The VST conjures up digital sound signal. How long does that take?
The VST sends the digital sound data to an audio interface ... inside the PC or perhaps one attached via USB. How long does that take?
The signal becomes sound in a sound system. No latency to speak of (down in the microseconds).
The sound travels to our ears. We ignore that in our computation because we expect to suffer that transit time in a digital piano just as much as we expect it in an acoustic.
Every step involves some time delay.

When people quote latency figures they often (or usually?) refer to the delays in the PC-to-audio-interface chain. That's only one piece of the latency. They generally omit the rest. Hence the confusion.
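
To make that concrete, here is a toy latency budget for the chain listed above. Every number is an illustrative assumption, not a measurement (except the DIN-MIDI transmission time, which follows from the 31250 baud spec); the point is only that the buffer figure people usually quote is one line item among several:

```python
# Toy latency budget: all values except the DIN-MIDI one are illustrative guesses.
budget_ms = {
    "key sensing + MIDI event generation (piano-dependent)":    1.0,
    "MIDI over DIN cable: 3 bytes x 10 bits / 31250 baud":      3 * 10 / 31250 * 1000,  # ~0.96 ms
    "USB / driver / OS MIDI handling (setup-dependent)":        1.0,
    "VST processing + audio buffer (64 samples @ 48 kHz)":      64 / 48000 * 1000,      # ~1.33 ms
    "audio interface DA + safety buffer (interface-dependent)": 2.0,
}
for stage, ms in budget_ms.items():
    print(f"{stage:60s} {ms:5.2f} ms")
print(f"{'total':60s} {sum(budget_ms.values()):5.2f} ms")
```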

I've made measurements of the time DIFFERENCE between the sound signal coming from the piano and that coming from an audio interface driven by a VST.
My assumptions are ...
1. That the piano's in-built latency is "normal". (It's never bothered me.)
2. Any additional latency beyond that amount is potentially bad. My 2.5 msec measurement gives satisfactory results. My 13 msec result on an older computer was not acceptable.
Originally Posted by owfrappier
Hi,

Real piano behaviour:

In ppp, key bottom is after hammer-string contact
In fff, key bottom is before hammer-string contact
Staccato: same time

Global latency of 20-30 ms between finger-key contact and hammer-string contact:

see:
http://www.speech.kth.se/music/5_lectures/askenflt/measure.html
This was interesting: 20-30 ms for an acoustic piano. I play mostly on an acoustic but decided to experiment with a synth. I set the attack time to 0 for a saw waveform, then began to vary the delay time between the key-down and the start of the attack. From 0 to 40 ms of delay, it seemed instantaneous. At 40 ms, it seemed like maybe I detected a delay. At 50 ms, I was sure there was some delay, but it was still not unpleasant. By 75 ms the delay was clear, and by 100 ms the delay was annoying. My conclusion: if you are used to playing an AP, any DP combination with 30-40 ms of latency will probably sound just fine. Curious what others have found.
Posted By: Osho Re: Latency in acoustic and digital instruments - 01/20/21 01:47 AM
First, we need to define exactly which latency we are talking about. In the following, by latency I mean 'from MIDI event to when the sound is heard'. That is different from the latency from when the key starts traveling down to when the sound is heard.

Originally Posted by Del Vento
- how much latency is too much latency for piano?
For me, it is 10ms. Anything more and it gets annoying. Anything more than 20ms is unplayable.
Originally Posted by Del Vento
- how much latency is too little latency?
Negative... it will freak me out and will never make me want to touch that possessed piano again!

Originally Posted by Del Vento
- is too much/little latency because we are "used" to one setup (e.g. acoustic), or for actual physiological/musical reasons?
I don't think it is because we are 'used' to a setup. Even before playing acoustic pianos, I was sensitive to DP latencies. It is simply because the brain expects a connection between when it thinks the hammer will hit the string and when the sound is heard by the ears.

If somebody beats a drum that you have never seen, your brain will still expect the sound to come at a certain time. If it comes later, it will be 'confused'.

Originally Posted by Del Vento
- the computer can only report part of the latency; has anybody measured it?

Yes, there have been reports of latency measurements, both in the DP forum and in other forums. There is even a database of latency measurements here - though the measurement there is RTL (Round Trip Latency), which is different from what we need for a DP, as we do not have any analog inputs.

Osho
Posted By: Osho Re: Latency in acoustic and digital instruments - 01/20/21 01:55 AM
Originally Posted by Del Vento
Originally Posted by owfrappier
Hi,

Real piano behaviour:

In ppp, key bottom is after hammer-string contact
In fff, key bottom is before hammer-string contact
Staccato: same time

Global latency of 20-30 ms between finger-key contact and hammer-string contact:

see:
http://www.speech.kth.se/music/5_lectures/askenflt/measure.html

Exactly. Yet nobody says a thing about it. Whereas on digital (computer-based) pianos, latencies of 10 ms often seem barely acceptable. Why? Some hypotheses have floated around, some reasonable speculations, yes, but I have not seen a compelling answer yet!

Not sure if this is a compelling answer for you, but here is the info from the other thread:

Quote
There is a difference between the latency mentioned in that paper and the latency we talk about in DP world. In that paper, the latency is from finger-key contact to when the sound is heard. In DP world, the latency we typically talk about is from midi event generation to when the sound is heard. My point is that the midi event generation is intended to capture the point referred to as "hammer-string contact" in the paper.

Despite the numbers in that paper, no one has ever reported being unsatisfied with the latency of their acoustic piano. You can ask in the "Piano forum" or search that forum - that discussion never comes up. On the other hand, there are plenty of people complaining about DP VST latency. The reason is that the brain intuitively expects the sound to come at the hammer-string contact point. In the acoustic world, the only delay is due to the sound travelling through the air - which is about 1 ft/ms. So the effective latency that the brain hears ends up being 3-4 ms.

In the DP world, the latency from MIDI event to sound heard can be anywhere from 5 ms upward. Past a certain point (which varies from person to person), it becomes perceivably different from what you would expect from an acoustic instrument. For me, it is about 10 ms - but it can vary from person to person.

Osho
'course I've seen all of this discussion from the other thread and the additional one in this thread. None of it is satisfactory to me. Just hypotheses and speculations. Reasonable, yes, but not compelling.

I tried to measure the latency of my digital piano (Yamaha NU1), my acoustic piano (an inexpensive golden-era baby grand), and a computer-based piano (Pianoteq controlled from the NU1 - never mind the Pianoteq bashing; in fact I don't use it, but it's the only software piano I have a demo of and understand how to use decently). This is what I did:

1) with an external mic, record the sound, with the volume for the digital adjusted in such a way that the natural action noise is not too much softer than the piano sound (on the acoustic it is what it is)

2) analyze the recording in Audacity, trying to measure the time from the key-bottoming noise to the beginning of the piano sound

3) by accident I also recorded the key-release noises on some DP recordings; they are completely invisible in the displayed waveform, but very clearly audible.

These are my conclusions:

- In all three cases I can hear a very slight delay and it's not annoying, except for Pianoteq....

- I can artificially slow down the sound (without changing the pitch) and the delay becomes super-evident

- However I cannot measure such a time in the interface: both in the original and in the slowed-down version there is no clear point where the noise or the sound "begins"; the attack is so "blurred" that I could measure anything between 2 ms and 100 ms (and that is on the same recording!)

So here I am with something I can hear and see on the screen, yet cannot quantify. So I wonder how one can say "below x ms it's fine, above it's not". As many have said, the numbers reported by Pianoteq (or other software pianos) are only part of the mix, and I am interested in the full picture.

Has anybody attempted this measurement and achieved any reasonable success?

I know one of you has measured two piano sounds (digital and acoustic) and I could do something similar (NU1 and Pianoteq), but, assuming I am successful, that tells only half the story: in my case it could be that Pianoteq is x ms later than the NU1's internal sound and that this is what makes it annoying. But is the total 1+x ms, 10+x ms, 50+x ms, or what? That makes a huge difference.

Last, I really can't believe that 50 ms would not be unpleasant, as SoundThumb wrote: 50 ms is a little less than half of a 16th note when playing at quarter note = 120. So if you are playing at 120, anything with a succession of a few 16th notes (such as a Czerny or Hanon exercise, something that any intermediate pianist can do; let's leave aside whether or not it is useful) means everything you play comes almost half a 16th note late, so it feels like you are playing on the key release rather than the key press!!!! I had that sensation with Pianoteq at times and it is totally weird!
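
Quick sanity check of the arithmetic (plain Python, using the 120 bpm from the example above):

```python
bpm = 120
quarter_ms = 60_000 / bpm        # 500 ms per quarter note at 120 bpm
sixteenth_ms = quarter_ms / 4    # 125 ms per 16th note
print(sixteenth_ms / 2)          # 62.5 ms -- so 50 ms is indeed a bit less than half a 16th
```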
When the ears don't hear what the fingers should have produced, the mind becomes confused.

My psych professor demonstrated this to the class long ago ...
A pretty young thing was chosen from the front row of the audience.
She was given the "special headphones" ... a pair of sink drain plungers with small speakers inside, fashioned into a headset. (Why did he not use conventional headphones?)
She was also given a hand-held microphone to speak into.
Between mic and headset was a "special amplifier". The latter introduced delay ... nothing more. Perhaps a second or so.

The class could not hear her voice directly, nor could she. We could only hear her voice over the PA ... which was the same delayed sound she was hearing in the headset.
She was given a short text to read. But she couldn't finish two words because her mind was thrown off. She was not hearing what she was speaking, but instead what she had spoken a second earlier.

It's the same with the piano. We need to hear what our fingers are producing. Delay is confusing. It doesn't take much latency before it becomes intolerable.

My VST introduces a mere 2 msec over what the piano can do on its own. That's quite acceptable.
My old computer introduced 13 msec. That was annoying.
That computer without ASIO and Presonus box introduced around 25 msec latency. It wasn't just annoying. It was unplayable.
Originally Posted by Del Vento
Last, I really can't believe that 50 ms would not be unpleasant, as SoundThumb wrote: 50 ms is a little less than half of a 16th note when playing at quarter note = 120. So if you are playing at 120, anything with a succession of a few 16th notes (such as a Czerny or Hanon exercise, something that any intermediate pianist can do; let's leave aside whether or not it is useful) means everything you play comes almost half a 16th note late, so it feels like you are playing on the key release rather than the key press!!!! I had that sensation with Pianoteq at times and it is totally weird!

Eight notes per second is considerably faster than I can play, except possibly for a trill. So it is quite interesting that you can hear and associate finger to note at that speed. I would have thought that the brain would quickly adapt to a delay of 50 ms, but the fact that you could notice it is a good data point. When I play short fast notes, I have no concept of whether the note is sounding as my finger goes down or as it goes up. So this may just show how different individuals are, and probably why threads like this usually produce a range of opinions.
Originally Posted by MacMacMac
When the ears don't hear what the fingers should have produced, the mind becomes confused.

I think a mismatch between an action and the sensory feedback from the consequences of that action is indeed the source of feeling 'disconnected'. The brain constantly monitors sensory input in relation to its own intended motor gestures, and produces an 'error signal' when the sensory feedback does not match the prediction. This is fundamental to skill learning. The prediction depends on previous experiences.

Originally Posted by MacMacMac
That computer without ASIO and Presonus box introduced around 25 msec latency. It wasn't just annoying. It was unplayable.

But since the brain can learn the new (delayed) relation between motor gesture and sensory feedback, it may become more playable after a while, and become the 'new normal'.

IMO an underestimated factor in VST latency/disconnection/despair discussions is *random* variability in latency. This variation is unlearnable and will cause prediction errors all the time. By contrast, the latencies on an AP for different key velocities are variable but perhaps more predictable, and apparently sufficiently learnable, because no one reports feeling disconnected from an AP.

I have seen (and provided) many latency measurements here on PW, but these are always a single number like 10 ms, which is either reported by the computer or measured as the difference between sound outputs in a handful of examples. To get better insight into what's going on, it would be really nice if we had an idea of the variability in overall latency (i.e. action to sound output) on real-world systems.

One of the studies Del Vento lists finds that variation (jitter) of as little as 3 ms around an average delay of 10 ms already causes subjects (even non-musicians!) to judge it to be of 'lower quality'.
Originally Posted by pianogabe
Originally Posted by MacMacMac
My VST introduces a mere 2 msec over what the piano can do on its own. That's quite acceptable.
My old computer introduced 13 msec. That was annoying.
That computer without ASIO and Presonus box introduced around 25 msec latency. It wasn't just annoying. It was unplayable.

But since the brain can learn the new (delayed) relation between motor gesture and sensory feedback, it may become more playable after a while, and become the 'new normal'.

True, but how were these numbers measured? Just what the computer reported, hence just the buffer latency, not end to end? If so, they are of little interest.

Originally Posted by pianogabe
IMO an underestimated factor in VST latency/disconnection/despair discussions is *random* variability in latency. This variation is unlearnable and will cause prediction errors all the time. By contrast, the latencies on an AP for different key velocities are variable but perhaps more predictable, and apparently sufficiently learnable, because no one reports feeling disconnected from an AP.

I have seen (and provided) many latency measurements here on PW, but these are always a single number like 10 ms, which is either reported by the computer or measured as the difference between sound outputs in a handful of examples. To get better insight into what's going on, it would be really nice if we had an idea of the variability in overall latency (i.e. action to sound output) on real-world systems.

One of the studies Del Vento lists finds that variation (jitter) of as little as 3 ms around an average delay of 10 ms already causes subjects (even non-musicians!) to judge it to be of 'lower quality'.

I would love to be able to measure that. Unfortunately, as I have described, I have trouble measuring even a single latency number, let alone many latencies to estimate the jitter.
Please read what I've twice written above.
Originally Posted by Del Vento
Originally Posted by pianogabe
Originally Posted by MacMacMac
My VST introduces a mere 2 msec over what the piano can do on its own. That's quite acceptable.
My old computer introduced 13 msec. That was annoying.
That computer without ASIO and Presonus box introduced around 25 msec latency. It wasn't just annoying. It was unplayable.
But since the brain can learn the new (delayed) relation between motor gesture and sensory feedback, it may become more playable after a while, and become the 'new normal'.
True, but how were these numbers measured? Just what the computer reported, hence just the buffer latency, not end to end? If so, they are of little interest.
I measure the TIME DIFFERENCE between the piano sound and the VST sound.
Originally Posted by MacMacMac
Please read what I've twice written above.
Originally Posted by Del Vento
Originally Posted by pianogabe
Originally Posted by MacMacMac
My VST introduces a mere 2 msec over what the piano can do on its own. That's quite acceptable.
My old computer introduced 13 msec. That was annoying.
That computer without ASIO and Presonus box introduced around 25 msec latency. It wasn't just annoying. It was unplayable.
But since the brain can learn the new (delayed) relation between motor gesture and sensory feedback, it may become more playable after a while, and become the 'new normal'.
True, but how were these numbers measured? Just what the computer reported, hence just the buffer latency, not end to end? If so, they are of little interest.
I measure the TIME DIFFERENCE between the piano sound and the VST sound.

Sorry for missing that part of your message, and thanks for repeating it. This is very interesting and consistent with my experience from the few tests I've run with a MacBook Pro and an oldish but beefy Linux box -- without being able to measure the absolute latency, per the discussion in my previous message. Setting the Pianoteq buffer size for a reported latency of more than 5 ms was definitely noticeable, and more than 10 ms was annoying.

How did you measure that difference? I suspect with something similar to what I described in my other message? Is there any way you can turn down the volume of the piano sound to make it comparable to the key-bottoming noise, and measure the latency of the internal piano sound, which should be pretty close to the difference between the key-bottoming noise and the piano sound itself?

Do you have any way to do the measurement a few times, to see whether the jitter is, as pianogabe suggests, "more annoying" than the latency itself?

Thanks!
Originally Posted by Del Vento
I would love to be able to measure that. Unfortunately, as I have described, I have trouble measuring even a single latency number, let alone many latencies to estimate the jitter.

I am thinking out loud here, but previously I measured DP internal vs VST latency the way you did, by recording both simultaneously in Audacity (or equivalent) and then manually looking at the difference in 'sound start time'. I guess one could write a script and a MIDI file that plays the same note for a few hours, record both the internal and the VST output, and then use some simple threshold algorithm to find the note beginnings in the audio files. Since the note is always the same, and there is little noise involved (make sure the output level is high), that might work well enough. If it doesn't, one could use cross-correlation or similar. If you allow for more complex programming, you could use different notes and levels. Of course, because of the different 'attacks', you can only measure variability for specific notes and levels.

This could also test the assumption that the internal DP engine does not have any appreciable latency variation.

Sounds like a nice programming project smile
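
A minimal sketch of what such a script could look like (untested; it assumes the DP's internal sound and the VST output were recorded simultaneously as the left and right channels of one stereo file, and the file name, threshold and gap values are placeholders you would have to tune):

```python
# Estimate per-note latency (VST relative to the DP's internal sound) and its
# jitter from a stereo recording: internal DP on the left, VST on the right.
import numpy as np
import soundfile as sf

audio, sr = sf.read("dp_left_vst_right.wav")      # placeholder file name

def onset_times(x, sr, frac=0.1, min_gap_s=0.5):
    """Crude onset detector: times where |x| first exceeds frac * peak after
    having stayed below that level for at least min_gap_s."""
    above = np.flatnonzero(np.abs(x) > frac * np.max(np.abs(x)))
    onsets, last = [], -np.inf
    for i in above:
        if i - last > min_gap_s * sr:
            onsets.append(i / sr)
        last = i
    return np.array(onsets)

dp_onsets  = onset_times(audio[:, 0], sr)
vst_onsets = onset_times(audio[:, 1], sr)
n = min(len(dp_onsets), len(vst_onsets))
diffs_ms = (vst_onsets[:n] - dp_onsets[:n]) * 1000.0

print(f"notes matched:            {n}")
print(f"mean VST-minus-DP offset: {np.mean(diffs_ms):.2f} ms")
print(f"jitter (std dev):         {np.std(diffs_ms):.2f} ms")
```

The standard deviation of the per-note differences is the jitter part that, as argued above, may matter as much as the average.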
That's what I did ... recording piano and VST in Audacity.

As for the programming/automation ... I'll leave that to others. I'm retired from tech. smile
Jesus Christ! Have we not measured enough already (pivots, keys, music rests, sample length/decay time, bench height/width, coffee table, piano apparel, etc.)?

I ask again, why do we have to measure the heck out of everything? Is there an underlying collective obsession that drives us to do this? Are we compensating for a collective want we do not -all- have? Is tiny, nano, small ever enough? Why? Why? grin
Why measure? Because my ears said "this stinks". So I measured. It stank.
I got better equipment. My ears said "this rocks". So I measured. It rocked.

That was years ago. Since then ... no more measuring.
But when someone asks, I respond.
Originally Posted by Pete14
Jesus Christ! Have we not measured enough already ...
BTW ... Jesus measured. A lot! He was a carpenter.
Originally Posted by MacMacMac
Originally Posted by Pete14
Jesus Christ! Have we not measured enough already ...
BTW ... Jesus measured. A lot! He was a carpenter.
laugh
The advantage of a measurement is that it is objective. Let's say you hesitate between a Steinberg UR22 and a Focusrite 2i2... latency measurements can help you choose if you are very picky about latency.

If I told you my UR22 is OK (for me), you won't learn anything about whether it is OK for you.

And it is rare to find a music shop with both audio interfaces ready to be compared... sometimes you have to trust other people's opinions ("it is OK"), interpret some specs ("well, a 4 ms latency should be OK"), and perhaps make some mistakes! (But I would prefer to make a $150 mistake about my audio interface than an $8000 mistake about my N1X... Surely the latter was tested in a shop!)
Originally Posted by Pete14
Jesus Christ! Have we not measured enough already (pivots, keys, music rests, sample length/decay time, bench height/width, coffee table, piano apparel, etc.)?

I ask again, why do we have to measure the heck out of everything? Is there an underlying collective obsession that drives us to do this? Are we compensating for a collective want we do not -all- have? Is tiny, nano, small ever enough? Why? Why? grin

laugh ha I think I share your sentiment, and will shamefully go back to playing my piano in a minute. But rationally I agree with Frédéric that this objectifies things and can be a great help, e.g. to see how to minimize latency if it appears to bother you. That, and MacMacMac's argument that Jesus is on our side, is also compelling.
Imagine a little boy hitting a makeshift ‘piano key’ and then never ever hearing a corresponding sound, a note, or even a hiss; well, imagine no more because ‘that little boy was me’!

Yes, it’s easy to take latency for granted when all you have to wait for is a few milliseconds to hear something.

Pete never had that luxury; growing up playing on a keyboard painted onto a wooden table. Yes, my dad was so poor that all he could afford for me was a can of white, a can of black, and an old crooked wooden table. That’s all that little boy had for a ‘piano’. So you see, that little boy played an ‘F’ but he never heard that note because latency was, by definition, infinite. The note never sounded, you guys, it never sounded.

So yes, I take a little offense when I hear some ‘round here complaining about a few milliseconds when that little boy had to wait till he grew up and bought a P-515 to ever hear that ‘F’ ring. Shame on you!
Originally Posted by Pete14
Pete never had that luxury; growing up playing on a keyboard painted onto a wooden table. Yes, my dad was so poor that all he could afford for me was a can of white, a can of black, and an old crooked wooden table. That’s all that little boy had for a ‘piano’. So you see, that little boy played an ‘F’ but he never heard that note because latency was, by definition, infinite. The note never sounded, you guys, it never sounded.

So yes, I take a little offense when I hear some ‘round here complaining about a few milliseconds when that little boy had to wait till he grew up and bought a P-515 to ever hear that ‘F’ ring. Shame on you!

I can relate to you. I am an adult piano learner because of a similar situation: when I was a kid I did not have a piano, and my piano teacher told me to draw a keyboard on something like a roll of toilet paper, which according to her would have been much better than the toy organ I had (basically a melodica with an electric pump). So I played neither the toy organ nor the toilet paper, and I went to the lessons unprepared. Soon enough I gave up, and here I am learning as an adult.

On the other hand, there are poor people today. Should we all sell our instruments to feed them? Or should we learn to play well to honor the Lord as J.S. Bach did? Since you mentioned Jesus in a previous message, this is a truly serious moral question. But it is OT in this thread, as always ends up happening...

Assuming we want to learn to play well, the question remains: what is the best instrument we can afford? Measurements can tell at least part of the story.
Pete ... All this talk reminds me of The Three Yorkshiremen.
Originally Posted by Pete14
Pete never had that luxury; growing up playing on a keyboard painted onto a wooden table. Yes, my dad was so poor that all he could afford for me was a can of white, a can of black, and an old crooked wooden table. That’s all that little boy had for a ‘piano’. So you see, that little boy played an ‘F’ but he never heard that note because latency was, by definition, infinite. The note never sounded, you guys, it never sounded.

So yes, I take a little offense when I hear some ‘round here complaining about a few milliseconds when that little boy had to wait till he grew up and bought a P-515 to ever hear that ‘F’ ring. Shame on you!
grin
Originally Posted by Del Vento
- However I cannot measure such a time in the interface: both in the original and in the slowed-down version there is no clear point where the noise or the sound "begins"; the attack is so "blurred" that I could measure anything between 2 ms and 100 ms (and that is on the same recording!)

So here I am with something I can hear and see on the screen, yet cannot quantify.

For this particular question, you can do the following:

- Adjust DP volume to become the loudest sound source
- Record DP sound plus action noise plus ambient noise with a microphone (as you did)
- Simultanously record line-out from the DP (void of action/ambient sound)
- Normalize and correlate the two signals to find the best match. This indicates the time-relationship between the two recordings.
- Find the first indication of generated sound in the line-out recording. This indicates when the DP "knew" that a key has been pressed.
- Find the first indication of action noise in the microphone recording. It won't be buried below the piano sound because it's earlier.
- Possibly compensate for different sound travel time from action/piano to your microphone.

To be more exact about the action-noise detection, you can create multiple recordings of the action noise and "average" them to create a synthetic ideal sound patch (done once). Position the microphone to reduce noise that you repeatedly create with your fingers/body/bench. Or use different playing techniques for different samples, so that unavoidable noise averages away in the final result. Correlate this ideal reference sound patch with your recordings to extract the action timing.

You will probably get problems due to overlap between the action noise and the piano sound. You can trim the action reference patch to be shorter than the expected latency; then there is no overlap. Likewise you can trim (all) line-out inputs by an arbitrary amount (from the easy-to-identify sound onset) and correlate only the remainder with the microphone input. For this, consider the unique sample length of your DP and when it starts looping the same waveform over and over (expect 1-4 seconds, or consult the corresponding megathread here on PW). Also, as recommended at the beginning, adjusting the DP volume to be high probably makes this unnecessary.
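
If it helps, a rough sketch of the correlation part (untested; the file names are placeholders, the two recordings are assumed to share a sample rate, and the 5% onset threshold is something you would have to tune):

```python
# Align the DP line-out recording with the microphone recording via
# cross-correlation, then compare the first action-noise transient (mic) with
# the start of the generated sound (line-out, mapped into the mic timebase).
import numpy as np
import soundfile as sf
from scipy.signal import correlate

mic, sr   = sf.read("mic_with_action_noise.wav")
line, sr2 = sf.read("dp_line_out.wav")
assert sr == sr2

if mic.ndim > 1:  mic  = mic.mean(axis=1)    # fold to mono
if line.ndim > 1: line = line.mean(axis=1)
mic, line = mic / np.max(np.abs(mic)), line / np.max(np.abs(line))   # normalize

# lag (in samples) such that line[t] lines up with mic[t + lag]
xcorr = correlate(mic, line, mode="full")
lag = int(np.argmax(xcorr)) - (len(line) - 1)

def first_above(x, frac=0.05):
    """Index of the first sample exceeding frac * peak level (tune frac!)."""
    return int(np.flatnonzero(np.abs(x) > frac * np.max(np.abs(x)))[0])

sound_start_mic  = (first_above(line) + lag) / sr   # generated sound, in mic timebase
action_start_mic = first_above(mic) / sr            # earliest transient = action noise

# Note: this difference still includes speaker-to-mic travel time; compensate
# for the different travel distances as noted in the list above if needed.
print(f"action-noise-to-sound latency: {(sound_start_mic - action_start_mic) * 1e3:.1f} ms")
```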

To measure acoustic instruments, you can use a visual approach. Open the instrument and position a camera to capture both finger/key and hammer/string in a single frame. Capture a clock for a minute to verify that the nominal framerate is what you actually get. Wobble the camera during a test capture to make sure it produces no tearing; if there is tearing, the lines that compose the image are not being captured at the same instant of time. In that case, rotate the camera until all relevant components are aligned horizontally in the output (ideally all visible in one single line). Record your data, then analyze the frames to count the time from touch to string.
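
A bare-bones way to step through the footage and read off frame times (a sketch using OpenCV; the file name is a placeholder, and you still identify the key-contact and hammer-contact frames by eye):

```python
# Step through a video one frame at a time, overlaying frame index and time,
# so you can note the frames of finger-key contact and hammer-string contact.
import cv2

cap = cv2.VideoCapture("piano_action_240fps.mp4")   # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    label = f"frame {idx}   t = {idx / fps * 1000:.1f} ms"
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("frame stepper", frame)
    if cv2.waitKey(0) == ord("q"):   # any key advances one frame, 'q' quits
        break
    idx += 1
cap.release()
cv2.destroyAllWindows()

# e.g. key contact at frame 121 and hammer-string contact at frame 127 at 240 fps:
# (127 - 121) / 240 * 1000 = 25 ms
```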

Hammer travel time depends on velocity. You can use a player piano (the pneumatic self-playing pianos seen in western movies) to generate consistent velocities. Alternatively you can use a piano with a silent system and MIDI out to read the actual velocity and reject recordings that fall outside your target range. Lacking all that, you may need to construct a device to get repeatable strikes (YouTube has examples).

DPs also adapt to velocity, although the effect is probably less pronounced. The audio "peaks" may be less aggressive, or artificial delay may be introduced. For fair comparisons you should also control velocity on DPs.

I know this response is very pedantic, but you asked for it, didn't you?
Posted By: Osho Re: Latency in acoustic and digital instruments - 01/21/21 05:03 AM
Originally Posted by Del Vento
- I can artificially slow down the sound (without changing the pitch) and the delay becomes super-evident

- However I cannot measure such a time in the interface: both in the original and in the slowed-down version there is no clear point where the noise or the sound "begins"; the attack is so "blurred" that I could measure anything between 2 ms and 100 ms (and that is on the same recording!)

So here I am with something I can hear and see on the screen, yet cannot quantify. So I wonder how one can say "below x ms it's fine, above it's not". As many have said, the numbers reported by Pianoteq (or other software pianos) are only part of the mix, and I am interested in the full picture.

Has anybody attempted this measurement and achieved any reasonable success?

The closest I know of is MacMacMac's experiment, where he recorded both the piano output and the VST output in Audacity and then compared the waveforms.

One idea, if you have a high-FPS video camera with a good mic (such as some GoPros that can do 240 fps - so one frame roughly every 4 ms): if you record video and audio, and then have software that can step through the video frame by frame and also show the audio waveform, you may be able to calculate the time from the beginning of the key going down to when the sound is heard (which should be a very distinct peak in the audio waveform). Perhaps the slo-mo mode in a high-end smartphone can be used for the same purpose. I have an iPhone 12 Pro which allegedly can do "Slo-mo video support for 1080p at 120 fps or 240 fps".

Osho
Originally Posted by MacMacMac
I don't understand what you mean. How is that too quick? The goal is 0, right?
Originally Posted by wouter79
Originally Posted by MacMacMac
I was able to capture audio from the piano and audio from the VST, and examine the delay. The result: about 2.5 msec.

This does not include the delay inside the piano from keypress to audio output.
It just shows that the VST does not introduce much more delay compared to the piano itself.
That's too quick. Suppose the goal is 4 ms total latency; then 2.5 ms is already a big chunk of that budget, and maybe too much.

Your reasoning is too quick. You're taking incorrect shortcuts.