Full Member (OP) · Joined: Aug 2010 · Posts: 188
I'm waiting for my Steinberg UR22 MK2 audio interface to arrive, and in the meantime I've been reading about recording in general, and also about Audacity, which is the software I intend to use. Now based on what I've read, these are my conclusions:

(a) Sample rate for WAV should be 44100 Hz if the final product is audio only, and 48000 Hz if the audio will be merged with video.

(b) Bit depth for WAV should be 16-bit if the final product is audio only, and 24-bit if the audio will be merged with video.

(c) Sample rates above 44100 Hz cannot be detected by the human ear without special equipment and software analyzing the recording.

(d) MP3 should not be merged with video because it is "lossy" compressed audio, but if it really needs to be MP3, then the sample rate should be 44100 Hz and the bit rate 320kbps or higher.

(e) With MP3, bit rates higher than 320kbps do not cause any problems, but going above sample rate 44100 Hz with WAV can lead to recording dropouts.

(f) Recording music in 192000 Hz can result in ultrasonic playback distortion, making the recording worse.

(g) The source (in this case a digital piano) should be as loud as possible but without any clipping.


Please correct me if any of the above is wrong.

Assuming all or most of it is correct, my questions are:

- Why should the "32-bit float" setting be used in Audacity instead of 24-bit or 16-bit?

- Why is 24-bit recommended for audio with video?

- If anything above 44100 Hz cannot be detected by the human ear without special equipment and software, and if anything above 44100 Hz can lead to recording dropouts, why then is 48000 Hz recommended for audio with video?

- How common are these recording dropouts above 44100 Hz and how can they be avoided?

- Is there any purpose whatsoever in recording in 96000 Hz? Some recommend it for video instead of using 48000 Hz.

- How important are codecs with both WAV and MP3? (Which are the good / best ones?)

- Why is 44100 Hz recommended for MP3 with video, instead of 44000 Hz (or 96000 Hz) just like WAV with video?

- Which is better: using the digital piano's own reverb effect, or recording without reverb and then adding it in Audacity?

- What are the advantages/disadvantages of using the "normalize" effect?

- Is it usual/recommended to use equalization for digital piano recordings?

- Is there any forum thread or website that recommends step by step which Audacity settings to use when recording digital piano? This would be ideal because settings such as "real-time conversion" and "high-quality conversion" with "sample rate converter" and "dither" are really confusing me.

- Will I need to download any special drivers for the Steinberg UR22 MK2, or should the CD already have everything that's necessary?


Many many thanks to anyone who wishes to help out!

Last edited by Stephano; 07/08/16 10:51 PM.

Yamaha Clavinova CLP-645 Polished Ebony
Korg Triton Studio 61
Korg Pa800
Joined: Jan 2010
Posts: 1,643
1000 Post Club Member
Offline
1000 Post Club Member
Joined: Jan 2010
Posts: 1,643
(a) Sample rate for WAV should be 44100 Hz if the final product is audio only, and 48000 Hz if the audio will be merged with video.

In whatever DAW you decide to use, there is usually a project setting. You can record at as high a sample rate as you like, and many do. But when you export your work, it will have to be in a format that is appropriate for delivery, so you wind up having to dither down and convert the sample rate to whatever is needed. Yes, typically audio for music is delivered at 44.1k; if you are exporting for video, 48k is the norm.

(b) Bit depth for WAV should be 16-bit if the final product is audio only, and 24-bit if the audio will be merged with video.

Bit depth largely has to do with the dynamic range of the recording. I record at 24-bit (also a project setting) all the time. And just like the sample rate, the bit depth can be chosen at export to deliver in whatever format is necessary.
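To put rough numbers on that (standard PCM theory, not something stated in this thread): each bit of depth buys roughly 6 dB of dynamic range, which is why 24-bit tracking leaves so much more room between the noise floor and clipping than a 16-bit delivery format.

Code
# Approximate dynamic range of linear PCM: ~6.02 dB per bit (plus a small constant).
for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{6.02 * bits + 1.76:.0f} dB")
# 16-bit ~ 98 dB, 24-bit ~ 146 dB -- the extra room is headroom you can afford to waste while tracking.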


(c) Sample rates above 44100 Hz cannot be detected by the human ear without special equipment and software analyzing the recording.

I would say that most people can be fooled by lesser-spec'd recordings if the rest of the work on the project is done well. But there can be a benefit to a higher sample rate, especially when dealing with orchestral music where the frequency range of the material is very wide... particularly the stuff in the upper range of human hearing. As we get older, the high stuff is the first to go. So you have to have some very good ears and monitoring equipment to pick out a 48k vs. 96k recording. On the other hand, I believe I can pick out an MP3 (especially one created at a low bit rate) pretty quickly if thrown in with a collection of 16-bit/44.1 WAVs.
https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
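The rule of thumb behind that link, with illustrative numbers (my sketch, not the poster's):

Code
# Nyquist: a sample rate fs can represent frequencies up to fs / 2.
for fs in (44_100, 48_000, 96_000):
    print(f"{fs} Hz sampling -> content up to {fs / 2:.0f} Hz")
# 44100 Hz already covers 0-22050 Hz, beyond the ~20 kHz ceiling of human hearing.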

(d) MP3 should not be merged with video because it is "lossy" compressed audio, but if it really needs to be MP3, then the sample rate should be 44100 Hz and the bit rate 320kbps or higher.

A lot of video these days is delivered in MP4 (H.264 codec), and to keep file sizes smaller the audio is actually compressed as AAC (Advanced Audio Coding). You can work with 48k WAVs in your video editing software all the way up until you bounce down to MP4.
https://en.wikipedia.org/wiki/Advanced_Audio_Coding

Most experienced/professional people have trouble discerning a 256 kbps from a 320 kbps MP3. The lower bit rates, 192 kbps and under, are where it really starts sounding poor.

(e) With MP3, bit rates higher than 320kbps do not cause any problems, but going above sample rate 44100 Hz with WAV can lead to recording dropouts.

I'm not sure what you are referring to and where it might be a concern.

(f) Recording music in 192000 Hz can result in ultrasonic playback distortion, making the recording worse.

As mentioned above, there is a point of diminishing returns with higher sample rate recordings. I don't personally see a reason to go over 48k, but many professional facilities (the ones that still exist) are offering very high sample rates (96k, 192k). I've read this theory of ultrasonic distortion, but it's not a concern for me. I have no interest in recording at 192k because a) it puts unnecessary stress on the system (CPU mainly) and b) the file sizes are so damn large. I mainly work with spinning drives when recording; SSDs are coming down in price, but I haven't invested in them other than to use as a boot drive - which works amazingly well.

(g) The source (in this case a digital piano) should be as loud as possible but without any clipping.

This is generally true, although when recording direct (analog audio outs of your digital piano into your UR22) you have to listen for the noise floor of the instrument. Meaning, when you crank the volume of your DP you might hear more hiss. Find a good level beneath clipping and work from there. You'll have to balance the output of the keyboard and the inputs on your UR22 to find the best levels. Depending on the style of music you are recording, you may want to use dynamic compression later to narrow the dynamic range and make up some gain - more so in pop music than in classical or jazz.
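If you want to check your levels numerically instead of eyeballing the meters, here is a minimal sketch; the sine wave stands in for your recording, which in practice you would import as floats scaled to ±1.0:

Code
import numpy as np

# Stand-in for a recording: in practice x would be your imported WAV as floats in [-1, 1].
x = 0.25 * np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)

peak_db = 20 * np.log10(np.max(np.abs(x)))
print(f"peak: {peak_db:.1f} dBFS, headroom before clipping: {-peak_db:.1f} dB")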

-------

- Which is better: using the digital piano's own reverb effect, or recording without reverb and then adding it in Audacity?

- What are the advantages/disadvantages of using the "normalize" effect?

Normalizing is generally crap because it raises the noise floor along with the rest of your program material (recorded audio). It is also a destructive effect, meaning once you apply it you can't go back at a later date and remove it. So we tend to use compressor/limiters more often when dealing with dynamic range issues.

- Is it usual/recommended to use equalization for digital piano recordings?

It depends on how well the source piano delivers the desired sound. Compare your recordings vs. professional ones that you find attractive to listen to. You may very well need to make adjustments to achieve your sonic goals.

- Is there any forum thread or website that recommends step by step which Audacity settings to use when recording digital piano? This would be ideal because settings such as "real-time conversion" and "high-quality conversion" with "sample rate converter" and "dither" are really confusing me.

YouTube is generally good for things like this.

- Will I need to download any special drivers for the Steinberg UR22 MK2, or should the CD already have everything that's necessary?

The UR22 will probably ship with a CD with the driver on it, but the CD rarely has the most recent driver available. Here is a link to the one you should use instead.

http://www.steinberg.net/en/support/downloads_hardware/yamaha_steinberg_usb_driver.html

Also, Steinberg hardware products usually come with a light version of Cubase (their DAW) called Cubase AI. It's quite good and has many features that Audacity does not. You may enjoy taking the time to learn to use it. I get that Audacity is free and, to an extent, simple, but I'm not personally a regular user; there's better stuff out there.

Consider:
Cubase - either the free version you'll get with your UR22 or a full version
Reaper - by Cockos
Studio One - Persons
ProTools 12 - Avid

There are many others.

3000 Post Club Member · Joined: Oct 2013 · Posts: 3,868
OK with most of the answers, but the point about normalization puzzles me. If the bit depth is 24-bit, you will have very little loss. The real problem is normalization with a gain that is not well configured: the higher the gain, the more the noise is amplified.

A compressor is an amplification just like normalization and will have the same problem. The difference is that its gain depends on the volume, which permits a higher gain without clipping.


http://www.sinerj.org/
http://humeur-synthe.sinerj.org/
Yamaha N1X, Bechstein Digital Grand, Garritan CFX, Ivory II pianos, Galaxy pianos, EWQL Pianos, Native-Instrument The Definitive Piano Collection, Soniccouture Hammersmith, Truekeys, Pianoteq
5000 Post Club Member · Joined: Feb 2010 · Posts: 5,870
Quote
(a) Sample rate for WAV should be 44100 Hz if the final product is audio only, and 48000 Hz if the audio will be merged with video.

(b) Bit depth for WAV should be 16-bit if the final product is audio only, and 24-bit if the audio will be merged with video.

I recommend recording at 96 kHz / 24-bit and using a lower quality like 48 kHz / 16-bit only for the final file.


Quote
(c) Sample rates above 44100 Hz cannot be detected by the human ear without special equipment and software analyzing the recording.

Ears cannot detect a "sample rate". The actual argument is that the human ear can hear up to about 20 kHz, and the maximum recordable frequency is coupled to the sample rate. But this coupling is quite complex and assumes perfect removal of higher frequencies even before the A/D conversion step. In practice such perfect removal is not possible.


Quote
(d) MP3 should not be merged with video because it is "lossy" compressed audio, but if it really needs to be MP3, then the sample rate should be 44100 Hz and the bit rate 320kbps or higher.

Yes, always try to use the best sources during processing and apply a single lossy compression step at the end. This also holds for creating an MP3 (which you should not create from another already-compressed file, like a YouTube movie).

Quote
(e) With MP3, bit rates higher than 320kbps do not cause any problems, but going above sample rate 44100 Hz with WAV can lead to recording dropouts.

*Recording* dropouts?? You should not record to MP3 directly. See my previous answer.

Quote
(f) Recording music in 192000 Hz can result in ultrasonic playback distortion, making the recording worse.

No. Recording is not related to playback. You can record at 192 kHz and play back at 44.1 kHz (the player or your OS will convert the recording digitally). There are other arguments for not recording at 192k, however.

Quote
(g) The source (in this case a digital piano) should be as loud as possible but without any clipping.

This is confusing.

Maybe you mean that if the source is louder, the recording system can do with less amplification, which reduces the noise level and makes other ambient noises less audible. Yes, that's true.

But you shouldn't play louder just for the recording. Rather, you should record the thing as you think it should sound. If the source is a speaker, you can change the amplification of the speaker, but it should sound good in the first place.

BTW there's a lot more to making a good recording.

Last edited by wouter79; 07/09/16 03:17 AM.

5000 Post Club Member · Joined: Feb 2010 · Posts: 5,870
Originally Posted by ElmerJFudd
Normalizing is generally crap because it raises the noise floor along with the rest of your program material (recorded audio). It is also a destructive effect, meaning once you apply it you can't go back at a later date and remove it. So we tend to use compressor/limiters more often when dealing with dynamic range issues.



The recommendation on compressors is baffling me. If anything creates destructive distortion, it's compressors. They are in the *recording* line, so your original recording will already be damaged if the compressor was activated. Stay away from them and instead adjust your recording level properly to ensure your audio doesn't clip.

I'm not sure what exactly is meant here by "Normalizing", but usually "Normalizing" means amplifying the recording such that the maximum peak in the recording matches the full sampling range.

Proper "normalizing" in audacity is done with the "Effect/Amplify" menu item. It will automatically propose the max gain without distortion.

It does not affect the noise floor *relative to the maximum in the recording*.
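In other words (my reading of that behaviour, not official documentation): the proposed gain simply brings the loudest peak up to 0 dBFS, and because signal and noise are multiplied by the same factor, their ratio stays put. A small numpy check of that claim, with a synthetic signal and an artificial noise floor:

Code
import numpy as np

rng = np.random.default_rng(1)
signal = 0.3 * np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)
noise  = 0.001 * rng.standard_normal(44_100)            # artificial noise floor
x = signal + noise

gain = 1.0 / np.max(np.abs(x))                           # "normalize": peak up to 0 dBFS
snr = lambda s, n: 20 * np.log10(np.std(s) / np.std(n))
print(snr(signal, noise), snr(gain * signal, gain * noise))   # identical before and after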

All editing is destructive. So all editing should be done on a COPY of the file.


Full Member · Joined: Jul 2016 · Posts: 103
Wow! You really thought this through, haven't you? I'll give you a few pointers to chew on...

44100 Hz, 48000 Hz, 24-bit, 32-bit, etc. are digital sample rates and resolutions of the waveform and do not represent what the human ear can perceive. The human ear can hear roughly 20 Hz to 20,000 Hz (analog!) in the prime of life. The perception of high frequencies drops naturally with age, but it may well be accelerated by the use of modern-day headphones and earbuds.

There have been massive discussions about sample rates on the net. It is believed that higher rates will provide more 'resolution' in the eventually exported analog signal, but this has never actually been confirmed by proof. I use 48,000 Hz / 24-bit to be on the safe side. 32-bit floating point may present compatibility issues with some programs, so it is best to avoid it because the advantage has not been confirmed as of yet!

MP3: the maximum quality for MP3 is 320 kbps Constant Bit Rate (CBR). Variable Bit Rate (VBR) looks for frequencies it can compress, which saves space, but potentially at the cost of quality.

Then there's the ongoing discussion of what is better: onboard/external effects, higher sample rates, EQ, etc...

Let me tell you a true story:

Back in the analog age there was a basic rule: the original is always the best; every copy of it is a reduction in quality.
A band member of mine back then stated that I should copy the original master tape to my Akai MG14D so it would be better. According to him, this was a better device than the one used to record the master, so it would be better after a copy. I told him he was full of [censored] because it can't get better than the original, but after a lot of discussion I humored him and copied it as requested, just to show him how wrong he was!
After listening back to the copy I was amazed!! Not because it was better in terms of signal quality, but because the subtle added distortion and noise gave our music a different atmosphere, and that was a real enhancement! He still didn't know [censored], but he sure made a lucky guess that turned out alright. smile

Nowadays we have state-of-the-art plugins to add vinyl noise, hiss and tape distortion. I wonder why? smile

My message to you? Don't dig into theoretical 'facts' too much! Wait for your UR22, experiment, and make up your own mind about how to record. And make sure you install the latest drivers for the UR22 MkII! You can find them here:

http://www.steinberg.net/index.php?id=12353&L=1


Piano: Kawai MP11 / Yamaha upright accoustic
Recording: Cubase 11 Pro / Roland Octa-Capture 16x10 / Nektar Panorama P1
Main VST's: Kontakt 7 / Omnisphere 2 / Arturia V collection
5000 Post Club Member · Joined: Feb 2010 · Posts: 5,870
Quote
- Why should the "32-bit float" setting be used in Audacity instead of 24-bit or 16-bit?

On OS X, the internal format used by the OS is 32-bit float, so the audio is converted into 32-bit float anyway, regardless of what you set the driver to. If you set Audacity to something other than 32-bit float, every editing step requires a conversion back to the format you selected; then the editing is done, and then it is converted back to 32-bit float again. Every step loses you quality. On Windows I don't know.

Quote
- Why is 24-bit recommended for audio with video?

Because you want to have headroom while recording, so that you can accommodate unplanned extra peak levels without introducing clipping.

Quote
- If anything above 44100 Hz cannot be detected by the human ear without special equipment and software, and if anything above 44100 Hz can lead to recording dropouts, why then is 48000 Hz recommended for audio with video?


To lower the requirements on the steepness of the analog filtering before the A/D converter. Analog filtering is never perfect, and the lower the requirements, the more of the filtering can be moved into the digital domain, where perfect filtering is possible.
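A small numpy illustration of why that pre-converter filtering matters (illustrative frequencies, not a measurement): an ultrasonic tone that is not filtered out before sampling folds back into the audible range.

Code
import numpy as np

fs, f = 44_100, 25_000                       # sample rate and an ultrasonic tone
t = np.arange(fs) / fs                       # one second of samples
x = np.sin(2 * np.pi * f * t)                # sampled with no anti-alias filtering at all
spectrum = np.abs(np.fft.rfft(x))
alias_hz = np.argmax(spectrum) * fs / len(x)
print(f"{f} Hz sampled at {fs} Hz shows up at ~{alias_hz:.0f} Hz")   # ~19100 Hz, well inside hearing

Recording at 96 kHz pushes that fold-over point up to 48 kHz, so the analog filter in front of the converter can roll off much more gently.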


Quote
- How common are these recording dropouts above 44100 Hz and how can they be avoided?


They shouldn't happen with a quality recording setup. Maybe you have set the driver to "real-time conversion", which can trade off quality for speed. Or overloading your computer (playing videos while recording, say) could cause hiccups in the recording process.


Quote
- Is there any purpose whatsoever in recording in 96000 Hz? Some recommend it for video instead of using 48000 Hz.

Yes, see above. To reduce the requirements for the analog filtering step.


Quote
- How important are codecs with both WAV and MP3? (Which are the good / best ones?)

Don't know. I'm using LAME and it seems to work fine.

Quote
- Why is 44100 Hz recommended for MP3 with video, instead of 44000 Hz (or 96000 Hz) just like WAV with video?

44100 is a standard rate. 44000 is not a standard rate and many programs probably can't handle that.

Quote
- Which is better: using the digital piano's own reverb effect, or recording without reverb and then adding it in Audacity?


I prefer the reverb of your real room, well dosed.


Quote
- Is there any forum thread or website that recommends step by step which Audacity settings to use when recording digital piano? This would be ideal because settings such as "real-time conversion" and "high-quality conversion" with "sample rate converter" and "dither" are really confusing me.

For recording you should use "high-quality", not "real-time". Dither is something specific to the hardware; use the recommended setting unless you really know what's going on.



500 Post Club Member · Joined: Nov 2014 · Posts: 773
Originally Posted by ElmerJFudd

Studio One - Persons

PreSonus

3000 Post Club Member · Joined: Sep 2011 · Posts: 3,756
Originally Posted by Nickeldome
Let me tell you a true story:

Back in the analog age there was a basic rule: the original is always the best; every copy of it is a reduction in quality.
A band member of mine back then stated that I should copy the original master tape to my Akai MG14D so it would be better. According to him, this was a better device than the one used to record the master, so it would be better after a copy. I told him he was full of [censored] because it can't get better than the original, but after a lot of discussion I humored him and copied it as requested, just to show him how wrong he was!
After listening back to the copy I was amazed!! Not because it was better in terms of signal quality, but because the subtle added distortion and noise gave our music a different atmosphere, and that was a real enhancement! He still didn't know [censored], but he sure made a lucky guess that turned out alright. smile

Nowadays we have state-of-the-art plugins to add vinyl noise, hiss and tape distortion. I wonder why? smile


Nickeldome, this tunes in so well with my experiences! Endless boneheaded discussions and goofy experiments with band members (usually guitarists but not drummers or saxophonists) where I'd be pompously convinced of the superiority of my arguments, but the massed-ignoranti would win the day.

It's a similar story with the endless analogue vs digital arguments. The truth seems to be something like this:

It's not hi-fi or precise reproduction that people want. Rather, it's the right sort of distortion and the nicest kind of colouration. And for that, people will pay fortunes (gold plated, titanium-tipped valve amplifiers and other costly baubles).


Roland HP 302 / Samson Graphite 49 / Akai EWI

Reaper / Native Instruments K9 ult / ESQL MOR2 Symph Orchestra & Choirs / Lucato & Parravicini , trumpets & saxes / Garritan CFX lite / Production Voices C7 & Steinway D compact

Focusrite Saffire 24 / W7, i7 4770, 16GB / MXL V67g / Yamaha HS7s / HD598
1000 Post Club Member · Joined: Jan 2010 · Posts: 1,643
Originally Posted by wouter79
Originally Posted by ElmerJFudd
Normalizing is generally crap because it raises the noise floor along with the rest of your program material (recorded audio). It is also a destructive effect, meaning once you apply it you can't go back at a later date and remove it. So we tend to use compressor/limiters more often when dealing with dynamic range issues.


The recommendation on compressors is baffling me. If anything creates destructive distortion, it's compressors. They are in the *recording* line, so your original recording will already be damaged if the compressor was activated. Stay away from them and instead adjust your recording level properly to ensure your audio doesn't clip.

I'm not sure what exactly is meant here by "Normalizing", but usually "Normalizing" means amplifying the recording such that the maximum peak in the recording matches the full sampling range.

Proper "normalizing" in audacity is done with the "Effect/Amplify" menu item. It will automatically propose the max gain without distortion.

It does not affect the noise floor *relative to the maximum in the recording*.

All editing is destructive. So all editing should be done on a COPY of the file.


For those reading who may now be confused by the difference between Normalization and Compression and trying to decide which they should use and when...

Normalizing is not an algorithm. It is not a compressor. It's not a limiter. It's simply gain adjustment across the whole file - uniformly.

Example: if the highest peak in your file is -5 dB and you would prefer it were -1 dB, then everything in your file is brought up by +4 dB of gain.

Useful if the desired effect is to keep all the hills and valleys of your dynamics the same and just make the track louder (noise floor included). It is a destructive edit, meaning that once you apply normalization to a track you can't come back to it the next day and say, "oh, I didn't want that". This is what real-time insert effects are for. In ProTools, Logic, Studio One, Reaper, etc. you can raise a fader or insert a gain-adding plugin and monitor the result in real time. It's non-destructive, meaning that if you don't like the result or you change your mind, you can remove the plugin and still have your source file intact. It hasn't been permanently altered.

Compression is not normalization. Set properly, compression makes the peaks softer but leaves the quieter sections (including the noise floor) unaffected. You can then turn the overall volume up because the peaks are lower (yes, this would also raise the noise, but you have total control over how much, particularly if you automate your faders). At the same time, the distance between the highest and lowest points is now smaller, meaning that on playback it is easier for the listener to hear the softest parts of the recording. Now, in a classical recording a wide dynamic range is acceptable, even desirable to an extent as an expressive part of the performance. But in reality, on most people's playback systems (earbuds, computer speakers, etc.) some compression is recommended. Normalizing does not alter the hills and valleys of the dynamic range, and that is how it differs from compression.
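A minimal numpy sketch of the two ideas being contrasted, with made-up threshold and ratio values; a real compressor works on a smoothed envelope with attack and release times rather than sample by sample, so treat this only as a picture of the shape of the processing:

Code
import numpy as np

def normalize(x, target_db=-1.0):
    """Uniform gain so the loudest peak lands at target_db dBFS; everything (noise included) moves together."""
    return x * (10 ** (target_db / 20) / np.max(np.abs(x)))

def compress(x, threshold_db=-12.0, ratio=4.0):
    """Toy static compressor: only the samples above the threshold are pulled down."""
    thr = 10 ** (threshold_db / 20)
    out = x.copy()
    over = np.abs(x) > thr
    out[over] = np.sign(x[over]) * (thr + (np.abs(x[over]) - thr) / ratio)
    return out

After compress() the peaks sit lower, so a make-up gain (normalize, or just a fader) can bring the whole thing up; that is the "turn the overall volume up because the peaks are lower" step described above.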

Side note: You may also want to read up on Fader Automation and Gates and/or Noise Gates for how they may be useful in recording projects.


500 Post Club Member · Joined: Nov 2014 · Posts: 773
Reading this topic, I tried to play with the Normalize…, Amplify… and Compressor… functions in Audacity. I used a draft MP3 file made with the USB Audio Recorder feature in my instrument, and it seems that I liked the Compressor results more than the others. It seems more effective to me (judging by the spectrum and the sound itself). I didn't feel much reduction of dynamics (furthermore, when I listen to recordings instead of real-life acoustic performances, I don't like too wide a dynamic range; in those cases I constantly adjust the volume on my speakers), and on the spectrum there is still a big difference between hills and valleys. I used the Compressor function with its default parameters and didn't change anything there, because I don't know what each of them means. So I wonder a bit why I often hear about normalizing as a recommendation to make quiet recordings sound louder, but almost never hear about the Compressor. In my first experiment I liked the Compressor result more than normalizing. But that was just a first quick and short experiment with a draft MP3 file (whose purpose was just to check that the USB Audio Recorder function works and records all nuances). I will experiment more.

Generally this is an interesting theme to me, because on YouTube, SoundCloud, etc. I hear so many really quiet recordings… which I don't like. And some others have a noticeably better sound level in their recordings. I have always wondered how to make mine sound louder without adding artifacts, and why digital piano manufacturers don't make it possible to record audio to a USB flash drive at a higher level (without post-editing on a computer). Is it really necessary to have additional expensive studio equipment or software just to achieve a decent sound level without artifacts? But at least thanks to those, like Kawai, who give some possibility to adjust the gain level. As far as I know, others don't even give this in their instruments.

P.S. The same problem was described today by another person here. Many people don't like that their recordings have a low sound level.

Full Member (OP) · Joined: Aug 2010 · Posts: 188
Whoa, thank you all for the awesome input. I'll have to carefully read all of these replies.

In the meantime... the Steinberg arrived today! I've been messing around and made a quick recording of Bach's Fugue in C minor from WTC I. There are no distortions, but the volume is way too low, I think.

My settings are:

Digital piano volume knob: 2 o'clock

Steinberg input gain knobs: 12 o'clock

Audacity recording volume: 1.0

Recorded sample rate: 44100 Hz

Final product: 44100 Hz 16-bit WAV, size 16.9 MB


I adjusted it this way by playing "fff" chords in the bass to make it as loud as possible without clipping. I didn't apply any effects in Audacity, only a slight reverb on the DP.

I uploaded the recording here: http://vocaroo.com/i/s1K77yWvuXnx

What am I doing wrong in terms of loudness?

PS: The actual playing was done just as a recording test, so I didn't give 110% as far as the performance goes. Please excuse any possible mistakes.


Yamaha Clavinova CLP-645 Polished Ebony
Korg Triton Studio 61
Korg Pa800
3000 Post Club Member · Joined: Sep 2011 · Posts: 3,756
Sounds good to me, though as you say probably slightly lower than the optimum. What happens when you try normalising in Audacity? And can you increase the gain on the interface a little without causing distorted peaks? This particular music is relatively undynamic, so you'd probably be able to allow a higher input without problems.

(By the way, this is a well-played version of one of my favourite well-tempered pieces 😉)


Roland HP 302 / Samson Graphite 49 / Akai EWI

Reaper / Native Instruments K9 ult / ESQL MOR2 Symph Orchestra & Choirs / Lucato & Parravicini , trumpets & saxes / Garritan CFX lite / Production Voices C7 & Steinway D compact

Focusrite Saffire 24 / W7, i7 4770, 16GB / MXL V67g / Yamaha HS7s / HD598
Yikes! 10000 Post Club Member · Joined: Sep 2009 · Posts: 14,439
What toddy said. The volume could be raised a bit. Otherwise it sounds great.

8000 Post Club Member · Joined: Dec 2012 · Posts: 8,134
Quote
Normalizing is generally crap because it raises the noise floor along with the rest of your program material (recorded audio). It is also a destructive effect, meaning once you apply it you can't go back at a later date and remove it. So we tend to use compressor/limiters more often when dealing with dynamic range issues.

My take:

It's better to raise the signal level (e.g., turn up the volume on the DP) _before_ the recorder's input stage, rather than use "Normalize". As the post says, "Normalize" raises the noise level, as well as the signal level.

"Normalize" is, in theory, "destructive". Increasing volume by 10 dB, and then decreasing it by 10 dB, doesn't get you back to exactly where you started. [The reason is "quantization noise".]

However, with the 32-bit, floating-point encoding that Audacity uses, the distortion (caused by the up-and-down "Normalize") will be quite low. I wouldn't worry about it much.
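For the curious, a small numpy experiment along the lines of that up-and-down example (a synthetic signal, with 16-bit quantization simulated by rounding; the exact figures are only illustrative):

Code
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.1, 0.1, 44_100)                  # quiet test signal, nowhere near clipping

def quantize16(v):
    """Round to the nearest 16-bit step and return floats again."""
    return np.round(v * 32767) / 32767

up   = quantize16(x * 10 ** (10 / 20))              # +10 dB, stored at 16 bits
down = quantize16(up * 10 ** (-10 / 20))            # -10 dB again, stored at 16 bits
err  = down - quantize16(x)
print(f"residual error: {20 * np.log10(np.std(err)):.0f} dBFS")   # nonzero, but far below audibility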

Compression is a very powerful tool. It'll let you take a classical piano recording and raise the "pp" sections so that you can hear them while driving your car, without the "ff" sections blowing your ears out.

That's "distortion", but it's very useful _when used in moderation_. If you run serious compression, your piano recordings will sound like "elevator music" -- no dynamic range at all.

In the days of vinyl recordings, which had limited dynamic range compared to what we use now, "riding gain" was a crucial activity as a recording was "mastered" (that is, transferred from the original magnetic tape to a vinyl master). The recording engineer _had to_ boost the "pp" sections, to keep them well above the inherent vinyl playback noise. And he might have to boost gain during the original recording session, so that the "pp" sections were louder than the magnetic-tape noise in his (very expensive) analog recording equipment.

My guess is that, with modern recording and playback techniques, there's a greater dynamic range in commercial piano recordings than there used to be.


. Charles
---------------------------
PX-350 / Roland Gaia / Pianoteq
Full Member (OP) · Joined: Aug 2010 · Posts: 188
Originally Posted by toddy
(By the way, this is a well-played version of one of my favourite well-tempered pieces 😉)

Originally Posted by MacMacMac
What toddy said. The volume could be raised a bit. Otherwise it sounds great.

I'm glad you like it, thank you. smile

Alright so I opened the saved Fugue (".aup" file) in Audacity and experimented a little.

Results:

- Normalize with maximum amplitude 0.0 causes it to be very loud but also to clip in one place. It is barely noticeable, but I caught it via "View -> Show Clipping".

- Normalize with maximum amplitude -1.0 does not result in clipping, and is still pretty loud, perhaps a bit too much.

- Normalize with maximum amplitude -3.0 seems to be a very good solution. No clipping, and loud but not too loud.


I assume that's the way to do it, by taking each recording and testing out different normalize levels, correct?

And "Amplify" looks like it could be used instead of "Normalize", but it may be more difficult to fine tune.

Questions:

1.) The "Normalize" effect menu contains, aside from the maximum amplitude setting also a "Remove DC offset (center on 0.0 vertically)" setting which is checked by default, and a "Normalize stereo channels independently" setting which is not checked by default.

Should I change something there?

2.) My "Quality" preferences are, aside from the sample rate and sample format,

Real-time conversion:
Sample Rate Converter: Medium Quality
Dither: None

High-quality Conversion:
Sample Rate Converter: Best Quality (Slowest)
Dither: Shaped

Should I change something there?

3.) What exactly is "View -> Mixer Board" for, and should it be relevant to me?

4.) When exporting audio, I get a warning saying "Your tracks will be mixed down to two stereo channels in the exported file", and I can choose either OK or Cancel. Is this normal?

5.) Is maximum volume from the piano with less input gain on the audio interface a better approach than having about 70% volume from the piano and more input gain on the interface?

6.) Is it normal for L/R channels not to be perfectly aligned when playing/recording? What I mean is, quite often L is shown to be louder than R and vice versa. Or when clipping, sometimes only one channel clips instead of both.

7.) Let's say I wanted to burn an audio CD with several tracks. What is the best way to get equal loudness on all tracks? I'd like to avoid the listener having to increase/decrease volume because a track is too soft or too loud.

8.) When exporting audio, there is no preset to save the WAV file as 24-bit. As far as WAV goes, there are only the following two presets:

- WAV (Microsoft) signed 16-bit PCM
- WAV (Microsoft) signed 32-bit float PCM

However, when I choose "Other uncompressed files", the following appears right below under Format Options: "Header" and "Encoding". I can now choose "WAV (Microsoft)" under "Header" and "Signed 24-bit PCM" under "Encoding", but the default file extension in this setting is ".aiff", so I have to rename it to ".wav".

Is this the right way to save 24-bit WAV files? The ".aiff" somewhat confused me.

9.) Under my "Yamaha Steinberg USB Driver" settings, the Sample Rate was 44.1 kHz when I recorded the Fugue. Would it be wise to change this to 96 kHz even though the final WAV file will never be higher than 48 kHz?

10.) Is the "Buffer Size" under ASIO (also in the Steinberg driver settings) something I need to worry about, even though I'm not using a virtual piano? Right now the Buffer Size is 512 Samples, and device listed is Steinberg UR22mkII. I assume this is normal?

11.) In Audacity under Preferences -> Quality, you can choose the Default Sample Rate. What would happen if you selected for example 96, but you accidentally left 44.1 or 48 in the Steinberg driver settings?

12.) Should I avoid having the Steinberg turned on / connected for a longer period like a couple of hours? It doesn't seem to get hot or anything, but it's something I thought about.

13.) Under Audacity Preferences -> Devices -> Interface -> Host, there is "MME", "Windows DirectSound", and "Windows WASAPI". The default seems to be MME. Is this the way it should be?

14.) How much truth is there in the claim that using compressors and limiters for piano recordings is not good because those recordings have wide dynamic range which supposedly gets destroyed by compressing the sound, whereas normalizing supposedly does not destroy it?

15.) If I wanted to use the laptop's speakers to listen while recording, I have to activate "Software Playthrough" located under Audacity's Preferences -> Recording -> Playthrough. Which other settings would I need to adjust in this case in order to minimize latency as much as possible while at the same time not affecting recording quality?


There are obviously still things to learn, but I think I'm slowly starting to get the hang of this, and thank you all very much for helping out. This is really a fantastic forum, and this thread looks like it has the potential to become the official "how-to-record-your-digital-piano" thread with all the necessary information from A to Z.

Last edited by Stephano; 07/09/16 11:32 PM.

Yamaha Clavinova CLP-645 Polished Ebony
Korg Triton Studio 61
Korg Pa800
Full Member (OP) · Joined: Aug 2010 · Posts: 188
Here's another quick test before I head off to bed (it's 04:35 here in Germany).

Details:

- Recorded with 44.1 kHz (just like the Fugue)

- Exported to 44.1 kHz 24-bit (the Fugue was 16-bit)

- Same volume settings as the Fugue

- Normalize effect with maximum amplitude -3.0 dB

- Slight reverb on piano just like with the Fugue


Link to recording: http://vocaroo.com/i/s001OQ4O8lU5

I think the sound quality and loudness turned out really nice. Thoughts?

Again, please excuse any possible mistakes as this is just for testing purposes.

Last edited by Stephano; 07/09/16 11:14 PM.

Yamaha Clavinova CLP-645 Polished Ebony
Korg Triton Studio 61
Korg Pa800
3000 Post Club Member · Joined: Sep 2011 · Posts: 3,756
It seems that normalising at minus three dB solves your volume problem, and it also provides a standard level for making recordings for other people and for the public; that is the purpose of normalisation, I think.

Most of the default settings in Audacity and elsewhere can be left alone, and there's no need to be concerned with them. For example, the unchecked "Normalize stereo channels independently" should be left as it is in most circumstances because if the channels are normalised (or compressed, limited or amplified) as separate processes, the stereo image may become unstable, floating around over time. That is why the default setting fixes the process for both channels. Many of those parameters will make little or no detectable difference.

For most of the other settings it is a similar story: leave the defaults. But for some other questions, such as viewing the mixer board or changing the sample buffer, it's perhaps best to experiment. You will develop personal preferences for recording procedures and the results obtained.

Last edited by toddy; 07/10/16 03:48 AM.

Roland HP 302 / Samson Graphite 49 / Akai EWI

Reaper / Native Instruments K9 ult / ESQL MOR2 Symph Orchestra & Choirs / Lucato & Parravicini , trumpets & saxes / Garritan CFX lite / Production Voices C7 & Steinway D compact

Focusrite Saffire 24 / W7, i7 4770, 16GB / MXL V67g / Yamaha HS7s / HD598
3000 Post Club Member · Joined: Sep 2011 · Posts: 3,756
PS, it occurs to me that, in the future, your best bet might be to start using pianos in the computer (vst/VSTi) to get better and more varied piano sounds. And if you do, emenelton's advice to find an interface with low round-trip latency (low sample buffers) will have been quite prescient. Using VSTs requires low latency.

Fortunately, you've got a good interface in the UR22, and it will have been well worth the investment.


Roland HP 302 / Samson Graphite 49 / Akai EWI

Reaper / Native Instruments K9 ult / ESQL MOR2 Symph Orchestra & Choirs / Lucato & Parravicini , trumpets & saxes / Garritan CFX lite / Production Voices C7 & Steinway D compact

Focusrite Saffire 24 / W7, i7 4770, 16GB / MXL V67g / Yamaha HS7s / HD598
5000 Post Club Member · Joined: Feb 2010 · Posts: 5,870
Recording seems fine to me.
It's only 22kHz mono, but I assume you did that for this test only.

You have about 12 dB of headroom, which is a good choice for pieces with lots of dynamics, so keep it for pieces like Rachmaninoff. But this example piece has little dynamics, in which case you can consider reducing the headroom a bit (e.g. raise the recording amplifier by 4 to 6 dB).

Usually for distribution you would raise the level to get rid of the extra headroom using the amplify function as I described, so that listeners do not have to mess with their playback volume.
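For reference, the dB-to-amplitude arithmetic behind "raise the recording amplifier by 4 to 6 dB" (just the standard conversion, nothing specific to the UR22):

Code
# Amplitude factor for a gain in dB: factor = 10 ** (dB / 20)
for db in (4, 6, 12):
    print(f"+{db} dB -> x{10 ** (db / 20):.2f} amplitude")
# +6 dB roughly doubles the waveform height, turning ~12 dB of headroom into ~6 dB.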


