
The Microflex Ecosystem, Part 2: Digital Signal Processing

Meeting room sound needs to be both intelligible and natural. Learn how digital signal processing helps achieve this.
June 17, 2020

A meeting without video is still a meeting; a meeting without audio is cancelled. In fact, 81% of IT Decision Makers say audio has the biggest impact on improving the quality of virtual meetings. Good sound is easy to overlook, but poor sound cannot be missed.  It causes fatigue, increases distraction, and reduces comprehension – all of which reduce productivity and revenue.

In this series of blog posts, we’ll examine how the various parts of the Microflex ecosystem affect sound quality. The job of the audio system is to capture the voices of people in the room for transmission, and to reproduce the voices of the people at the other location. To do this well, the sound needs to be both intelligible (meaning you can understand what’s being said) and natural (meaning people sound like themselves, as they would if you were talking face to face). In blog post #2, we’ll discuss the role of digital signal processing.

How DSP Improves Your Audio

The microphone’s job is to convert sound waves traveling through the air into an audio signal that can be transmitted, amplified, or recorded. Except in small huddle rooms, one microphone is almost never enough. Most meeting rooms require multiple microphones that need to be mixed together. The raw signals from the microphones are like singers in a choir – no matter how good they are alone, it’s how they perform together that counts.

What’s needed is some post-production that polishes and refines each individual microphone signal, and then combines them all into a balanced, harmonious mix. In the old days, this used to require a rack full of boxes with knobs, lights, and meters that had to be painstakingly tweaked by a skilled sound engineer to work together.

Fortunately, it no longer takes an indoctrination in the dark arts of audio engineering to get the job done; now all of the important processes can be accomplished by one device called a digital signal processor (or ‘DSP’). The DSP can be a standalone hardware appliance, or part of an application that runs on a PC – but not every DSP is suitable for workplace or college environments.

All DSP is not created equal. The DSP built into a videoconferencing application has to deal with video, call management, and other housekeeping duties; audio is just one thing on its to-do list.

What you want is a dedicated audio DSP that is designed to work with microphones, and devotes all of its attention and finesse to making speech sound as natural as it can be. Like a Swiss Army knife, an audio DSP is equipped with a full suite of processing tools to optimize audibility and intelligibility.

Audio Problems That DSP Can Fix

In a recent survey, 80% of professionals cited audio problems as the top source of frustration with virtual meetings. Most videoconferences are plagued by the same set of chronic problems. Each of the tools or ‘processing blocks’ in your audio DSP has a specific purpose and solves one of these problems:

Problem #1: Too Loud or Too Soft

One of the most common audio problems with videoconferences is simply controlling levels. Sometimes the people on one side of the call aren’t loud enough, or sometimes they’re too loud. The solution is Automatic Gain Control (AGC), which adjusts the level of each microphone channel (or of the incoming audio from the far site) to ensure consistent volume. Like a good sound engineer, the AGC turns quiet talkers up a bit, and turns loud talkers down a bit. This is ideal for meeting rooms where the distance between the talker and the microphone varies as different people use the room.
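To make the idea concrete, here’s a minimal sketch of the gain-smoothing logic behind an AGC, written in Python. It’s purely illustrative and not Shure’s implementation; the function name agc, the target level, and the attack/release constants are assumptions chosen for the example, and blocks is assumed to be a stream of NumPy audio frames.

# Minimal AGC sketch (illustrative only, not Shure's implementation).
# 'blocks' is assumed to be an iterable of NumPy float arrays, one per audio frame.
import numpy as np

TARGET_RMS = 0.1   # desired long-term level (full scale = 1.0)
ATTACK = 0.10      # how quickly gain comes down when a talker is too loud
RELEASE = 0.01     # how gently gain comes up for quiet talkers
MAX_GAIN = 8.0     # never amplify by more than about +18 dB

def agc(blocks):
    gain = 1.0
    for block in blocks:
        rms = np.sqrt(np.mean(block ** 2)) + 1e-9
        desired = min(TARGET_RMS / rms, MAX_GAIN)
        # Move the gain gradually: fast when reducing, slow when boosting.
        rate = ATTACK if desired < gain else RELEASE
        gain += rate * (desired - gain)
        yield np.clip(block * gain, -1.0, 1.0)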

Problem #2: We Sound Like We’re In a Barrel

Hollow sound – like you’re in a can or a barrel – can be caused by having too many open microphones at the same time. An Automatic Mixer takes care of that by instantly activating the nearest microphone when a talker speaks and turning off microphones that aren’t needed. In a room with eight microphones, eliminating the seven that aren’t needed makes a night-and-day difference in sound quality. 
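A gated automixer can be sketched in a few lines of Python. The example below is illustrative only and is not the IntelliMix algorithm; the function name automix, the loudest-mic rule, and the 15 dB off-attenuation are assumptions made for the example, and frames is assumed to hold one short block of samples per microphone.

# Minimal gated automixer sketch (illustrative, not Shure's IntelliMix algorithm).
# 'frames' is a 2-D NumPy array: rows = microphone channels, columns = samples.
import numpy as np

OFF_ATTENUATION = 10 ** (-15 / 20)   # mics that aren't gated on are turned down 15 dB

def automix(frames):
    levels = np.sqrt(np.mean(frames ** 2, axis=1))   # RMS level of each mic
    active = np.argmax(levels)                       # open only the loudest mic
    gains = np.full(frames.shape[0], OFF_ATTENUATION)
    gains[active] = 1.0
    return (frames * gains[:, None]).sum(axis=0)     # combine into a single mix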

Problem #3: Echo, Echo, Echo

On a videoconference, it’s possible for the sound coming out of the loudspeaker to be picked up by a microphone and re-transmitted back to the far site, causing an annoying echo. An Acoustic Echo Canceller (AEC) digitally removes the incoming far site audio from the outgoing audio to prevent this. Most videoconferencing applications (like Microsoft Teams, Zoom, or Skype for Business) have a single-channel AEC built in, which is best suited to joining a meeting from a laptop. But for larger meeting rooms and classrooms with multiple participants and mics, good sound quality requires a DSP that dedicates a separate AEC to each microphone channel.
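Conceptually, an AEC adaptively models the path from the loudspeaker to the microphone and subtracts its estimate of the echo from what the microphone hears. Below is a toy Python sketch using a textbook NLMS adaptive filter; real echo cancellers add double-talk detection, nonlinear processing, and much more, and the function name aec_nlms and its parameters are assumptions made for illustration.

# Toy acoustic echo canceller using an NLMS adaptive filter (textbook approach).
# mic: microphone signal (local speech + echo); far_end: loudspeaker signal.
import numpy as np

def aec_nlms(mic, far_end, taps=256, mu=0.5, eps=1e-6):
    w = np.zeros(taps)              # adaptive estimate of the echo path
    buf = np.zeros(taps)            # most recent far-end samples
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        echo_estimate = w @ buf
        e = mic[n] - echo_estimate  # residual = local speech (ideally)
        w += mu * e * buf / (buf @ buf + eps)
        out[n] = e
    return out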

Problem #4: Noise Is Distracting

Most meeting rooms have some underlying background noise caused by projectors or computers, HVAC systems, building rumble, or environmental noise seeping in from outside. People in the room may not notice it, but microphones pick it up. Equalization can tune out much of the rumble and hiss at the low and high ends, but electronic Noise Reduction can digitally remove noise that overlaps the speech range so it’s not audible to listeners. The effect of a DSP with good Noise Reduction can be nothing short of amazing.
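One classic way to remove steady background noise is spectral subtraction: estimate the noise spectrum during a pause, then subtract it from each frame of audio. The Python sketch below is a simplified illustration of that idea, not the algorithm inside any particular DSP; denoise, noise_frames, and the spectral floor are names and values chosen for the example, and both inputs are assumed to be windowed frames of equal length.

# Basic spectral-subtraction noise reduction sketch (illustrative only).
# 'frames' and 'noise_frames' are 2-D arrays of windowed time-domain frames.
import numpy as np

def denoise(frames, noise_frames, floor=0.05):
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)
    out = []
    for frame in frames:
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
        out.append(np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(frame)))
    return np.array(out)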

Problem #5: Can’t They Hear Us?

The more noise and reverberation there is in the audio signal, the harder it is for the videoconferencing codec (whether that’s an app on a PC or a hardware device) to provide natural back-and-forth interactivity. If audio problems aren’t solved before the signal reaches the codec, it can be difficult to interrupt the other side or for them to interrupt you. This slows down communication and causes annoying distractions.

Problem #6: Audio Not Synced With Video

Video requires more processing than audio to fit through a typical internet connection, which takes slightly more time. The audio arrives at the far site sooner than the video, so you hear someone speak before their mouth moves on the screen. An adjustable delay in the DSP allows the audio feed to the videoconference to be synchronized to align with the video.
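In code, the delay itself is simple: shift the audio later by a fixed number of samples so it lines up with the video. The Python sketch below assumes a 48 kHz sample rate and a one-shot, fixed-length buffer; delay_audio and delay_ms are illustrative names, and a real-time DSP would use a circular buffer instead.

# Minimal lip-sync delay sketch (illustrative only).
# Shifts the audio later by 'delay_ms' so it arrives in step with the video.
import numpy as np

def delay_audio(samples, delay_ms, sample_rate=48000):
    delay_samples = int(sample_rate * delay_ms / 1000)
    padded = np.concatenate([np.zeros(delay_samples), samples])
    return padded[:len(samples)]   # same length, audio shifted later in time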

Audio DSP Hardware vs. Software

Audio conferencing DSP should be located wherever it makes the most sense for your application. In smaller rooms, a microphone with built-in audio DSP (like the Microflex Advance MXA710 or MXA910) eliminates the need for outboard hardware and simplifies configuration. In medium to large size rooms with multiple microphones and other audio sources, audio DSP on a dedicated hardware device (like the IntelliMix P300) provides more power, flexibility, and connectivity options to interface with hardware or software codecs. Uniquely, Shure also offers a software-based DSP solution, IntelliMix Room, which can run on an in-room PC or videoconferencing appliance, allowing easier deployment and centralized maintenance by IT staff. No matter the form factor, high-performance audio DSP delivers natural sound that facilitates effortless communication and maximizes the value of your investment in facilities and technology.

Read the other articles highlighting the Microflex Ecosystem for AV Conferencing:

Shure digital signal processors refine and combine the room microphones into the highest quality audio signal possible, and are available on hardware or software platforms. Learn more here.

Chris Lyons
Chris Lyons is a 30-year Shure veteran who has filled a variety of different marketing and public relations roles. His specialty is making complicated audio technology easy to understand, usually with an analogy that involves cars or food. He doesn't sing or play an instrument, but he does make Shure Associates laugh once in a while.