
I have 'n' microphones placed in a square, and I want to verify that all the channels are aligned exactly in time when the source is equidistant from all 'n' microphones, i.e. at the center of the square.

I have written the script below to compare the zero-crossing timings of two channels; if the difference exceeds some precision, it prints a message and bails out.

from scipy.io import wavfile
import numpy as np
import argparse

parser = argparse.ArgumentParser(description='diff the zero crossing of two files')
parser.add_argument('-f1', '--file_name_1', help='provide first file name')
parser.add_argument('-f2', '--file_name_2', help='provide second file name')
parser.add_argument('-p', '--precision', help='precision to compare against', type=float, default=0.0001)

args = parser.parse_args()
print(args)
files = []
files.append(args.file_name_1)
files.append(args.file_name_2)

sampling_rates = []
signals = []
for file in files:
  fs, signal = wavfile.read(file)
  signal = signal / np.max(np.abs(signal))                  # scale signal to [-1, 1]
  sampling_rates.append(fs)
  signals.append(signal)
  assert min(signal) >= -1 and max(signal) <= 1
  print('fs           ==> ', fs, 'Hz')                      # sampling rate
  print('len(signal)  ==> ', len(signal), 'samples')

#files should have the same sampling rate
sampsPerMilli = int(sampling_rates[0] / 1000)
prev_rate = sampling_rates[0]
for cur_rate in sampling_rates[1:]:
  if prev_rate != cur_rate:
    print("rates don't match %d %d" % (prev_rate, cur_rate))
    exit(0)
  prev_rate = cur_rate

#signal lengths should also match
prev_length = len(signals[0])
for signal in signals[1:]:
  cur_length = len(signal)
  if prev_length != cur_length:
    print("length of signals doesn't match for %d %d" % (prev_length, cur_length))
    exit(0)
  prev_length = cur_length

zccs = []
for signal in signals:
  zcc = []
  DC = np.mean(signal)
  newSignal = signal - DC
  for i in range(1, len(newSignal)):
    if((newSignal[i] * newSignal[i-1]) < 0):
      #print("crossing at %f seconds"% ((i/sampsPerMilli) * 0.001))
      zcc.append((i/sampsPerMilli) * 0.001)
  zccs.append(zcc)

for a, b in zip(zccs, zccs[1:]):
  if len(a) != len(b):
    print("length doesn't match %d %d" % (len(a), len(b)))
  for c, d in zip(a, b):
    if abs(c - d) > args.precision:
      print("precision %f c %f d %f exceeded" % (args.precision, c, d))
      exit(0)

Is there any better approach or can this script be improved?

user3053970

1 Answer


A better approach in the presence of noise and distortion is to use the Wiener-Hopf equations, which provide a least-mean-square estimate of the effective "channel" between the microphones. The group delay of that channel can then be determined with scipy.signal.group_delay.
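
As a rough sketch of the idea (assuming two equal-length NumPy arrays x and y already loaded from the two microphone captures, and an arbitrary FIR length num_taps), the Wiener-Hopf equations can be set up as a Toeplitz system and the resulting channel handed to scipy.signal.group_delay:

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import group_delay

def estimate_channel(x, y, num_taps=64):
  # Least-mean-square (Wiener-Hopf) estimate of the FIR channel h
  # such that y[n] is approximately (x * h)[n]
  n = len(x)
  # autocorrelation of x at lags 0..num_taps-1 (first column of the Toeplitz matrix R)
  acf = np.correlate(x, x, mode='full')[n - 1 : n - 1 + num_taps] / n
  # cross-correlation of y against x at the same lags (right-hand side p)
  ccf = np.correlate(y, x, mode='full')[n - 1 : n - 1 + num_taps] / n
  # solve R h = p for the channel taps
  return solve_toeplitz(acf, ccf)

# h = estimate_channel(x, y)
# w, gd = group_delay((h, 1.0))   # gd in samples, w in rad/sample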

Further details on this approach, specific to microphone captures and including Matlab/Octave source code (which can easily be ported to Python/SciPy), are in this post:

Compensating Loudspeaker frequency response in an audio signal

In that post, the right channel was equalized to the left channel by passing it through the derived compensation for the effective channel between the microphones, but the post details how you can determine the channel itself rather than the compensation.

You would want to do this with sounding signals that fully occupy the usable spectrum of interest (chirps or white-noise generators are good choices), since the channel can only be determined accurately at frequencies where signal energy exists. You can compare one received channel to the other, as done in the linked post, when you don't have a copy of the source, or compare each left and right channel separately to the source if you have the exact waveform that was used to create the source sound.
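
For example, a probe sweep covering the audible band could be generated with scipy.signal.chirp (the sampling rate and sweep length here are just placeholder choices):

import numpy as np
from scipy.signal import chirp

fs = 48000                          # example sampling rate
t = np.arange(0, 2.0, 1.0 / fs)     # 2-second sweep
probe = chirp(t, f0=20, t1=t[-1], f1=20000, method='logarithmic')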

By using this to derive the channel, and then computing the group delay of the channel frequency response, you will see the delay at each frequency in your signal. (If there is significant variation across frequency, that is group delay variation, which is a source of distortion, so you can also use this technique to equalize your room when you do have the source signal.) To match the distances to the microphones, compare the average of the group delay over frequency; you can weight this average toward the frequency ranges that are more important to you.
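
A minimal sketch of that comparison, assuming a channel estimate h (for example from the sketch above) and the common sampling rate fs; the 300 Hz to 3 kHz band used for the weights is only an illustrative choice:

import numpy as np
from scipy.signal import group_delay

w, gd = group_delay((h, 1.0))                   # gd in samples, w in rad/sample
freqs_hz = w * fs / (2 * np.pi)                 # convert normalized frequency to Hz
weights = ((freqs_hz > 300) & (freqs_hz < 3000)).astype(float)  # emphasize the band of interest
avg_delay_sec = np.average(gd, weights=weights) / fs            # weighted average delay in seconds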

Dan Boschen