What Level Should I Normalize Audio To?

Audio normalization is the process of adjusting the volume of audio tracks so they have a consistent perceived loudness. A track is first measured, either by its highest peaks or by its overall perceived loudness, and then boosted or attenuated to hit a target level. Proper normalization is critical for ensuring consistent listening levels across albums, playlists and streaming platforms. It improves the listener experience by preventing drastic volume shifts between songs and discouraging loudness wars between artists. With the shift towards digital music consumption, normalization has become an essential step in modern audio mastering and distribution.

According to research, inconsistent loudness levels lead to listener fatigue and impact music perception. Studies show both expert and casual listeners prefer professionally normalized audio over unnormalized tracks (https://www.vipzone-samples.com/en/what-is-normalizing-audio/). Normalization allows the listener to focus on the musical content rather than adjusting volume constantly. For streaming platforms and broadcasters, normalization provides a more uniform experience and lets audio play at optimal levels on various speaker setups. Overall, normalization is vital for delivering music the way it was intended to sound.

Standards for Audio Levels

When it comes to setting audio levels for production and broadcasting, there are a few key industry standards to be aware of:

EBU R128 – This standard was developed by the European Broadcasting Union and recommends targeting an integrated loudness level of -23 LUFS. It also specifies a true peak limit of -1 dBTP. R128 has been widely adopted for audio post-production worldwide.

ATSC A/85 – The standard for digital TV broadcasting in the United States, developed by the Advanced Television Systems Committee. It recommends an integrated loudness of -24 LKFS and a true peak limit of -2 dBTP.

ITU-R BS.1770 – Developed by the International Telecommunication Union, this standard defines the algorithm for measuring program loudness and true peak levels. The loudness specifications in EBU R128 and ATSC A/85 are based on BS.1770 measurements.

So in summary, for broadcast applications you’ll generally want to target between -23 and -24 LUFS integrated loudness, with a true peak limit around -1 to -2 dBTP. The specific standard will depend on your particular platform or delivery specifications.
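To check a finished master against one of these broadcast targets, you can measure its integrated loudness in software. The sketch below is a minimal example, assuming the third-party Python packages soundfile and pyloudnorm (which implements the BS.1770 measurement) are installed; the file name is hypothetical.

```python
# Minimal sketch: measure integrated loudness against a broadcast target.
# Assumes the third-party packages soundfile and pyloudnorm are installed;
# pyloudnorm implements the ITU-R BS.1770 K-weighted loudness measurement.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -23.0  # EBU R128; use -24.0 for ATSC A/85

data, rate = sf.read("master.wav")            # hypothetical file name
meter = pyln.Meter(rate)                      # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)    # integrated loudness in LUFS

print(f"Integrated loudness: {loudness:.1f} LUFS "
      f"({loudness - TARGET_LUFS:+.1f} LU relative to target)")
```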

Choosing a Target Loudness Level

There are some general guidelines for target loudness levels based on the intended distribution medium:

  • For music streaming services like Spotify and Apple Music, the recommended target is -14 LUFS integrated loudness with a max true peak of -1 dBTP, matching the normalization level the platforms themselves apply (Stack Exchange).
  • For YouTube, a good target is -13 to -16 LUFS integrated loudness. YouTube normalizes audio at -13 LUFS so aiming a little lower allows for some headroom (eMastered).
  • For podcasts, a typical target is around -16 to -20 LUFS. This provides sufficient loudness while avoiding a compressed and fatiguing sound (Homebrew Audio).
  • For film and broadcast, standards call for an integrated loudness (dialnorm) of -24 LUFS and a max true peak of -2 dBTP. This allows for wide dynamic range while keeping average loudness consistent (eMastered).

In general, more dynamic mediums like film and podcasts aim for lower integrated loudness targets, while compressed mediums like streaming favor louder normalization levels. Consider both the technical delivery format and creative intent when choosing a target level.
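Whatever target you choose, loudness normalization itself reduces to a single static gain: the difference in LU between the measured and target loudness is the gain in dB to apply. A minimal sketch follows; the measured value would come from a BS.1770 meter such as the one shown earlier.

```python
import numpy as np

def loudness_normalize(audio, measured_lufs, target_lufs=-14.0):
    """Apply the static gain that moves a track from its measured integrated
    loudness to the chosen target (e.g. -14 LUFS for streaming delivery)."""
    gain_db = target_lufs - measured_lufs       # difference in LU == gain in dB
    return audio * (10.0 ** (gain_db / 20.0))   # convert dB to a linear factor

# Example: a track measured at -9.5 LUFS needs -4.5 dB of gain to sit at -14 LUFS.
```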

Loudness vs Peak Normalization

Loudness normalization aims to make the perceived loudness consistent across audio tracks by targeting a specific loudness level, such as -14 LUFS. This allows for a more consistent listening experience. Peak normalization, on the other hand, simply scales the highest peak to a target level, such as -1 dB. This can result in very different perceived loudness between tracks.

Some pros of loudness normalization compared to peak normalization:

  • More consistent perceived loudness between tracks
  • Avoids overly compressed and limited sound caused by the “loudness wars”
  • Better retained dynamic range

Some cons of loudness normalization:

  • Can decrease loudness of modern pop/electronic songs originally mastered very loudly
  • Requires more sophisticated audio processing and metering
  • Standards and target levels vary across different platforms

Peak normalization is simpler to implement but can result in jarring loudness changes between songs. Loudness normalization provides a more listenable experience but requires more care to get right. The choice depends on the priorities for the audio application.
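The difference is easy to see in code. Peak normalization only looks at the largest sample value, so two tracks scaled this way can still sound very different in level, whereas loudness normalization applies the LU-based gain shown earlier. A rough sketch:

```python
import numpy as np

def peak_normalize(audio, target_dbfs=-1.0):
    """Scale so the highest sample just reaches the target peak level."""
    peak = max(np.max(np.abs(audio)), 1e-12)    # avoid dividing by zero on silence
    return audio * (10.0 ** (target_dbfs / 20.0)) / peak

# Two tracks peak-normalized to -1 dBFS can still differ by 10 LU or more in
# perceived loudness; loudness normalization (the static LU-based gain shown
# earlier) makes them sound equally loud, at the cost of differing peak levels.
```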

True Peak Limiting

True peak limiting is a process used in audio mastering to prevent clipping and other distortion when a signal is converted from digital to analog. It helps control peak levels and ensure they do not exceed 0dBFS (decibels relative to full scale), which can cause unwanted artifacts.

Unlike traditional peak limiting, which looks only at the digital sample values, true peak limiting accounts for the waveform that is reconstructed during D/A conversion. Between samples, the reconstructed analog signal can swing higher than any individual sample value, so sample peaks can understate the true peak level, especially on fast, high-frequency transients. True peak limiters use oversampling and interpolation techniques to estimate these inter-sample peaks and more accurately predict the analog level after conversion.

Mastering engineers typically enable true peak limiting on their final limiter when mastering for CD release or digital distribution. It provides an extra safety margin and guards against inter-sample peaks exceeding 0dBFS after D/A conversion. This prevents unintended clipping, distortion, or compression from unexpectedly high signal peaks. True peak limiting is especially important when mastering material with many transients, like electronic music.

As noted by mastering engineer Zino Mikorey, true peak limiting should generally be enabled, but levels should not be pushed all the way to 0dBFS. Leaving 1-3dB of headroom allows for a robust, clean signal. The true peak limiter precisely catches any stray peaks, while still preserving dynamics.
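The inter-sample problem can be approximated without a dedicated plugin by oversampling the signal before measuring its peak, which is roughly what true peak meters do (BS.1770 suggests at least 4x oversampling). A sketch, assuming numpy and scipy are available:

```python
import numpy as np
from scipy.signal import resample_poly

def estimate_true_peak_db(audio, oversample=4):
    """Estimate true peak (dBTP) from the peak of an oversampled signal,
    measured per channel. Plain sample peaks can miss inter-sample peaks."""
    channels = audio if audio.ndim > 1 else audio[:, np.newaxis]
    peaks = []
    for ch in channels.T:                            # iterate over channels
        upsampled = resample_poly(ch, oversample, 1) # 4x interpolation
        peaks.append(np.max(np.abs(upsampled)))
    return 20 * np.log10(max(peaks) + 1e-12)

# A file whose sample peak reads -0.1 dBFS can show a true peak above 0 dBTP
# here, which is exactly what a true peak limiter is meant to catch.
```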

Order of Operations

When deciding the order to apply compression, limiting, and normalization in the mastering chain, it’s best to start with gentle compression to control overall dynamics and balance the song. Next, apply limiting to maximize loudness without introducing distortion. Finally, normalize the audio to hit the target integrated or peak loudness level. Some common orders are:

Compression → EQ → Limiting → Normalization

Compression → EQ → Normalization → Limiting

The key is to avoid normalizing too early in the chain, as this can reduce available headroom for compression and limiting. As Izotope recommends, normalizing should come after compression and limiting to “target loudness levels after the dynamics and frequency content have been adjusted.” This ensures the best sound quality and loudness optimization.
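As a sketch of that ordering in practice, the example below measures loudness only after the dynamics stage and applies normalization as the final gain. A hard clip stands in for a real limiter purely for illustration, and pyloudnorm is again an assumed third-party dependency.

```python
import numpy as np
import pyloudnorm as pyln  # assumed third-party BS.1770 meter

def finalize(audio, rate, target_lufs=-14.0, ceiling_dbfs=-1.0):
    """Illustrative chain: dynamics first, normalization as the last step."""
    ceiling = 10.0 ** (ceiling_dbfs / 20.0)
    # 1. Compression/EQ/limiting happen first (a hard clip stands in for a
    #    real limiter here, purely for illustration).
    limited = np.clip(audio, -ceiling, ceiling)
    # 2. Measure integrated loudness only AFTER dynamics processing, since
    #    compression and limiting change the level the meter reads.
    loudness = pyln.Meter(rate).integrated_loudness(limited)
    # 3. Normalization is the final static gain toward the target. Note that
    #    a positive gain can push peaks back over the ceiling, which is why
    #    some chains place the limiter after normalization instead.
    return limited * (10.0 ** ((target_lufs - loudness) / 20.0))
```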

Platform-Specific Considerations

Different platforms and mediums have their own specific loudness guidelines that are important to consider when normalizing audio.

For streaming services like Spotify and Apple Music, the recommendation is to normalize audio to -14 LUFS integrated loudness, with a true peak limit of -1 dB TP (some recommend -2 dB TP). This ensures consistency across tracks and playback volumes on streaming.

For broadcast television and film, the standard in the US is -24 LKFS integrated loudness, while Europe uses -23 LUFS. Films often target -27 to -30 LUFS. True peak limiting for broadcast and film is typically -2 dB TP.

For YouTube, normalization to -14 LUFS is recommended so videos achieve uniform loudness. However, some caution against over-compressing for YouTube and suggest -16 to -18 LUFS instead.

When mastering audio for CD, the standard is to peak normalize to -0.1 dB TP, though some aim for -3 dB TP to allow for inter-sample peaks. Integrated loudness is less relevant for CD.

Considering the destination platform and adjusting loudness normalization accordingly can help deliver the best quality audio for each medium.
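One practical way to keep these destinations straight is a small lookup of delivery targets. The values below simply restate the guidelines above (with representative values where a range was given) and should always be checked against each platform's current specifications.

```python
# Integrated loudness target (LUFS) and true peak ceiling (dBTP) per medium.
# YouTube and podcast ceilings are not specified above; -1 dBTP is assumed
# here as a common streaming ceiling.
DELIVERY_TARGETS = {
    "spotify_apple_music": (-14.0, -1.0),
    "youtube":             (-14.0, -1.0),
    "podcast":             (-16.0, -1.0),
    "broadcast_us":        (-24.0, -2.0),   # ATSC A/85
    "broadcast_eu":        (-23.0, -1.0),   # EBU R128
    "cd":                  (None,  -0.1),   # loudness target less relevant
}

target_lufs, ceiling_dbtp = DELIVERY_TARGETS["spotify_apple_music"]
```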

Monitoring and Metering

Proper monitoring and metering is crucial for achieving consistent and high quality audio levels. Here are some best practices:

Use both peak and loudness meters. Peak meters show maximum sample or true peak levels, while loudness meters (measuring LUFS) show perceived loudness over time (1). Having both gives you the full picture; a simple combined readout is sketched at the end of this section.

Calibrate your meters regularly to ensure accuracy. Studio monitors and headphones should also be calibrated to a reference level like 83dB SPL at the listening position (2).

Add an oscilloscope to visualize the audio waveform in real-time. This helps identify clipping or other distortion issues (3).

When mastering for streaming platforms like Spotify, aim for an integrated loudness around -14 to -11 LUFS to optimize loudness. For broadcast, -24 LUFS is recommended (2).

Use high quality monitoring equipment in an acoustically treated room. An accurate, well-treated listening environment matters more than sheer monitor volume (3).

Take breaks and check mixes on different systems. Ear fatigue causes loss of objectivity. Cross-checking on headphones, earbuds, car stereo, etc. helps identify issues.

Trust your ears over your eyes when meters disagree. The perceived loudness and quality is what matters most.
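For a rough idea of what a combined peak and level readout involves, here is a crude block-based meter sketch. Real loudness meters apply the BS.1770 K-weighting and gating, so the RMS column is only a stand-in for a true LUFS reading.

```python
import numpy as np

def block_meter(audio, rate, block_ms=400):
    """Print a crude per-block peak (dBFS) and RMS (dB) readout."""
    block = int(rate * block_ms / 1000)
    mono = audio.mean(axis=1) if audio.ndim > 1 else audio
    for start in range(0, len(mono) - block + 1, block):
        chunk = mono[start:start + block]
        peak_db = 20 * np.log10(np.max(np.abs(chunk)) + 1e-12)
        rms_db = 20 * np.log10(np.sqrt(np.mean(chunk ** 2)) + 1e-12)
        print(f"{start / rate:6.1f}s  peak {peak_db:6.1f} dBFS  rms {rms_db:6.1f} dB")
```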

(1) https://www.izotope.com/en/learn/what-is-metering-in-mixing-and-mastering.html

(2) https://mixingmonster.com/audio-metering/

(3) https://sonicscoop.com/everything-need-know-audio-meteringand/

Quality Control

Before finalizing any audio, it is critical to perform quality control checks to ensure there are no issues. Here are some key quality control steps:

Run the audio through plugins like EXPOSE 2 that can analyze audio and detect any problems with the frequency balance, stereo field, dynamics, and more. Tools like this make it easy to identify issues that need to be corrected.

Listen closely and critically to the audio on multiple speaker systems – from laptop speakers to studio monitors. Different systems will reveal different issues. Laptop speakers in particular will expose any problems in the low-mid frequencies.

Compare your mix against high quality professional reference tracks in the same genre. Switch between your track and the reference while listening on the same system. This will reveal how your track measures up.

Have another experienced audio engineer conduct a review and provide feedback. Fresh ears will pick up issues you may have initially missed.

Confirm the audio meets platform specifications. For example, streaming services often require an integrated loudness target of -14 LUFS. Loudness normalization plugins can analyze this.

Check for any glitches, clicks, or pops by zooming in closely on the audio waveform; a crude automated check is sketched after this checklist. Playback at half speed can also help reveal subtle issues.

Correct any problems discovered before finalizing the audio. Quality control is a critical step that should never be skipped.
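Some of these checks can be automated. The sketch below flags peak problems by checking the sample peak against a ceiling and looking for runs of samples stuck at full scale, a common signature of hard clipping; integrated loudness should still be verified with a BS.1770 meter as shown earlier in this article.

```python
import numpy as np

def qc_report(audio, ceiling_dbfs=-1.0, clip_run=4):
    """Crude automated QC: peak ceiling check plus a simple clipping detector."""
    mono = audio.mean(axis=1) if audio.ndim > 1 else audio
    peak_db = 20 * np.log10(np.max(np.abs(mono)) + 1e-12)
    # Runs of consecutive samples pinned near full scale usually indicate
    # hard clipping, which is also audible as clicks and distortion.
    near_full = np.abs(mono) > 0.999
    run = longest = 0
    for hit in near_full:
        run = run + 1 if hit else 0
        longest = max(longest, run)
    print(f"Peak: {peak_db:.2f} dBFS (ceiling {ceiling_dbfs} dBFS) -> "
          f"{'OK' if peak_db <= ceiling_dbfs else 'CHECK'}")
    print(f"Longest run at full scale: {longest} samples -> "
          f"{'OK' if longest < clip_run else 'possible clipping'}")
```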

Conclusion

To recap, there are a few key takeaways when it comes to normalizing audio levels:

  • Adhering to loudness standards like EBU R128 helps ensure consistency across platforms.
  • Consider targeting between -14 and -16 LUFS for online distribution.
  • Loudness normalization preserves dynamics better than peak normalization.
  • Apply true peak limiting as a final stage to prevent clipping.
  • Monitor both peak and loudness levels throughout the production process.
  • Conduct quality control checks on various systems before finalizing your audio.

By following these best practices, you can deliver audio at an appropriate level for your intended platform and audience while maintaining audio quality.
