Bulldog, Flying

A new Fatigue Meter for the Bulldog aircraft (Part 4)

PART FOUR

This is a direct continuation of PART THREE, picking up where I left off: continuing the flight tests, comparing the SSL fatigue meter readouts with the legacy fatigue meter readouts, and adjusting the SSL software as necessary.

Test Flight #5

Flight profile: aerobatics

I performed another aerobatic sortie test with both the legacy fatigue meter and the SSL prototype installed, configured in AIRSPEED SWITCH MODE and NOMINAL DATA MODE, with the same software configuration as at the end of PART THREE (low-pass filter, etc), but with the inclusion of an archive function whereby the entire acceleration history was stored on the SSL and retrieved later for offline processing.

SSL versus legacy fatigue meter readings

After the flight, I downloaded the meter-readings archive from the SSL and compared it with the delta readings from the legacy device. The results are contained in Table 1. As with Flight #4 in PART THREE, the SSL readings are now either identical or numerically very similar to the legacy readings across the wide range of bins covered in the flight.

Flight #5 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 5 | 20 | 32 | 20 | 4 | 0 | 0
Legacy | 0 | 5 | 19 | 36 | 22 | 4 | 0 | 0
% Error | | 0% | 5% | -11% | -9% | 0% | |
Table 1. Comparison of the SSL and legacy fatigue meter delta readings for Test Flight #5, an aerobatics sortie, using the same software configuration as in PART THREE. Colour-coding and ‘% Error’ definitions as per PART THREE. The errors are seen to be numerically small, requiring only minor fine-tuning to achieve exact alignment. The average accuracy [defined as 100-average(abs(%Error))] is 95%.
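For reference, the ‘% Error’ row and the average-accuracy metric can be reproduced in a few lines of Python, using the Flight #5 counts from Table 1 (bins with zero legacy counts are excluded from the average, as in the table):

```python
# Reproduce the Table 1 accuracy metric for Flight #5 (bins with
# non-zero legacy counts only, as in the tables).
ssl_counts    = [0, 5, 20, 32, 20, 4, 0, 0]   # -1.5g ... +6.0g bins
legacy_counts = [0, 5, 19, 36, 22, 4, 0, 0]

def percent_errors(ssl, legacy):
    """Per-bin error as a percentage of the legacy count (skipping empty bins)."""
    return [100.0 * (s - l) / l for s, l in zip(ssl, legacy) if l > 0]

def average_accuracy(ssl, legacy):
    """100 - average(abs(% Error)), the metric used in the tables."""
    errs = percent_errors(ssl, legacy)
    return 100.0 - sum(abs(e) for e in errs) / len(errs)

print(round(average_accuracy(ssl_counts, legacy_counts)))  # → 95
```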

Acceleration histories

In order to fine-tune the counting algorithm to obtain precise alignment between the SSL and legacy readings, I downloaded the entire acceleration history from the sortie with which to perform offline tuning and testing of the algorithm. Figure 1 shows the acceleration history for the entire mission. For interest’s sake, Figure 2 zooms in on a segment containing an inner loop and a slow roll, and Figure 3 zooms in further to illustrate the effect of the low-pass filtering (described in PART THREE).

Figure 1. Z-axis acceleration measured by the SSL every 0.1 seconds over the entire Flight #5. The spikes coincide with aerobatic manoeuvres. See Figure 2 for detail.
Figure 2. Segment of the z-axis acceleration trace from Figure 1, zoomed in on a portion containing an inner loop and a slow roll.
Figure 3. Z-axis acceleration trace from Figure 1, zoomed in to show the effect of the low-pass filtering (described in PART THREE). The filtered signal (red curve) is used in the SSL fatigue meter bin-counting calculations rather than the raw signal (blue curve), which would otherwise lead to over-counting compared with the legacy readings.

Algorithm parameter adjustments via numerical optimisation

Rather than attempting to further adjust the bin-counting parameters by manual trial-and-error, I used numerical optimisation. Specifically, I treated the low-pass filter cut-off frequency and filter order as two adjustable parameters, and the Outbound and Inbound bin threshold values as a further ten adjustable parameters (ignoring the threshold parameters for -1.5g, +5.0g, and +6.0g, since I have no data at those levels as yet). I then ran an optimisation (in MATLAB, using the fminsearchbnd algorithm) to search for the set of 12 parameter values which minimises the difference between the SSL readings and the legacy readings when applied to the raw acceleration trace for Flight #5 (as depicted in Figure 1).
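The shape of that optimisation can be sketched in plain Python. This is a deliberately simplified, hypothetical stand-in: one adjustable threshold, an invented toy trace, and a crude grid search in place of the 12-parameter fminsearchbnd run:

```python
# Hypothetical, heavily simplified sketch of the idea: tune ONE parameter
# (a +3.5g outbound threshold) so that the SSL-style count matches the
# legacy count on a recorded trace. The real run tuned 12 parameters at
# once with MATLAB's fminsearchbnd; the toy trace below is invented.

def count_crossings(trace, threshold):
    """Count upward crossings of `threshold` (no buffer-zone logic here)."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a < threshold <= b)

trace = [1.0, 3.6, 1.0, 3.45, 1.0, 3.55, 1.0]   # toy z-acceleration trace (g)
legacy_count = 3                                # what the legacy meter showed

def objective(threshold):
    """Squared mismatch between the SSL-style count and the legacy count."""
    return (count_crossings(trace, threshold) - legacy_count) ** 2

# Crude bounded search standing in for the bounded simplex search.
candidates = [3.0 + 0.01 * k for k in range(100)]   # 3.00g .. 3.99g
best = min(candidates, key=objective)
```

With this toy trace, any threshold at or below the smallest of the three peaks zeroes the objective; in the real 12-dimensional problem the simplex search plays the role of this grid.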

The results of the optimisation are contained in Table 2.

Low-pass Filter Parameter | Value
Cut-off frequency (Hz) | 1.85
Filter order | 9

Bin | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
Outbound | -1.47g | -0.47g | +0.19g | +1.73g | +2.45g | +3.4g | +4.97g | +5.97g
Inbound | -1.53g | -0.53g | +0.19g | +1.91g | +2.45g | +3.55g | +5.03g | +6.03g
Table 2. Adjusted low-pass filter parameters and bin thresholds resulting from numerical optimisation performed on the fatigue meter readings from Flight #5. Note: the threshold values for the -1.5g, +5.0g, and +6.0g bins are hard-coded, i.e., were not included in the optimisation since no relevant data exists at those levels.

Applying the bin-counting algorithm with the parameters from Table 2 to the raw acceleration data from Flight #5 gives the results shown in Table 3, where the average accuracy with the optimised parameters is now 98%, compared with 95% before (Table 1).

Flight #5 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL (optimised) | 0 | 5 | 19 | 32 | 22 | 4 | 0 | 0
Legacy | 0 | 5 | 19 | 36 | 22 | 4 | 0 | 0
% Error | | 0% | 0% | -11% | 0% | 0% | |
Table 3. Comparison of the SSL and legacy fatigue meter delta readings for Test Flight #5 using the optimised software configuration parameters in Table 2. The average accuracy [defined as 100-average(abs(%Error))] is now 98%.

G-meter comparison

The SSL outputs for max-g and min-g for this aerobatic sortie (Flight #5) were +4.5g and -1.2g, respectively. Figure 4 shows the corresponding readings from the analogue G-meter in the Bulldog cockpit panel, which are approximately +3g and -0.51g, respectively. As with Flight #4 in PART THREE, we know that the negative-g must be at least as low as -1g, because the sortie involved slow rolls with the aircraft held fully inverted for a few seconds each time. This strongly suggests that the SSL readout for the negative-g should be trusted over the G-meter, which seems to underestimate it. Likewise, given that both the SSL and legacy fatigue meters registered four counts in the +3.5g bin, the SSL positive-g reading of +4.5g should be trusted over the G-meter value of +3g, which again seems to be an underestimation.

Figure 4. G-meter reading after test flight #5, suggesting a maximum value of approximately +3g and a minimum value of approximately -0.51g, which underestimate the actual extreme g-values when compared with the flight profile (fully inverted -1g, etc). The corresponding readings from the SSL are +4.5g and -1.2g, respectively, which seem to be more trustworthy.

Test Flight #6

Flight profile: aerobatics

I performed another aerobatic sortie test with both the legacy fatigue meter and the SSL prototype installed, configured in AIRSPEED SWITCH MODE and NOMINAL DATA MODE, with the optimised parameters from Table 2 deployed in the SSL software. The acceleration trace for this sortie is shown in Figure 5.

Figure 5. Z-axis acceleration measured by the SSL every 0.1 seconds over the entire Flight #6. The spikes coincide with aerobatic manoeuvres. The sortie has a similar dynamic profile to the previous flight (Figure 1).

SSL versus legacy fatigue meter readings

After the flight, I downloaded the meter-readings archive from the SSL and compared it with the delta readings from the legacy device. The results are contained in Table 4. As with Flight #5, the SSL readings are either identical or numerically very similar to the legacy readings across the wide range of bins covered in the flight, with an average accuracy of 97%.

Flight #6 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL (optimised) | 0 | 5 | 17 | 46 | 27 | 4 | 0 | 0
Legacy | 0 | 5 | 18 | 46 | 25 | 4 | 0 | 0
% Error | | 0% | 6% | 0% | 8% | 0% | |
Table 4. Comparison of the SSL and legacy fatigue meter delta readings for Test Flight #6, an aerobatics sortie, using the optimised software configuration parameters in Table 2. The average accuracy [defined as 100-average(abs(%Error))] is 97%.

Simulated extreme bins

Since the flight tests didn’t extend to the very extremes of the dynamic envelope, it is helpful to check that the counting algorithm at least works correctly on the extreme bins, by simulating those levels in software and applying the algorithm accordingly. Figure 6 contains the same raw acceleration trace from Figure 5, but with values of +6g and -1.5g artificially injected in place of the existing extremal values.

Figure 6. Artificially modified raw acceleration trace from Figure 5. The modifications extend the envelope to the extremes of the fatigue meter range, i.e., +6g and -1.5g, in order to test the bin-counting algorithm at these extremes.

Applying the bin-counting algorithm with the parameters in Table 2 to this modified trace gives the results contained in Table 5. It is seen that the algorithm correctly registers the unit increments in the +6g, +5g, and -1.5g bins.

Flight #6 (modified) | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL algorithm | 1 | 5 | 17 | 46 | 27 | 4 | 1 | 1
Table 5. SSL bin-counting algorithm using the software configuration parameters in Table 2, applied to the artificially modified Test Flight #6 trace from Figure 6.
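The essence of this check can be sketched in Python. This is a simplified, hypothetical stand-in: a toy trace and a single-threshold counter without the SSL’s buffer-zone logic:

```python
# Sketch of the simulated-extremes check: replace the extremal values of a
# toy trace with +6g / -1.5g spikes and confirm the counter registers them.
# (Simplified single-threshold counter; the real SSL uses buffer zones.)

def count_exceedances(trace, threshold):
    """Count entries into the region beyond `threshold` (sign-aware)."""
    hits, armed = 0, True
    for g in trace:
        beyond = g >= threshold if threshold > 0 else g <= threshold
        if beyond and armed:
            hits, armed = hits + 1, False
        elif not beyond:
            armed = True
    return hits

trace = [1.0, 4.4, 1.0, -1.1, 1.0, 4.2, 1.0]       # invented toy sortie (g)
peak, trough = trace.index(max(trace)), trace.index(min(trace))
trace[peak], trace[trough] = 6.0, -1.5             # inject the extremes

print(count_exceedances(trace, 6.0),    # one count in the +6.0g bin
      count_exceedances(trace, 5.0),    # one count in the +5.0g bin
      count_exceedances(trace, -1.5))   # one count in the -1.5g bin
```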

Test Flights #7 and #8

Flight profile: aerobatics

I performed two more aerobatic sortie tests with both the legacy fatigue meter and the SSL prototype installed, configured in AIRSPEED SWITCH MODE and NOMINAL DATA MODE, and recorded the acceleration traces for these sorties, as shown in Figures 7 & 8.

Figure 7. Z-axis acceleration measured by the SSL every 0.1 seconds over the entire Flight #7. The spikes coincide with aerobatic manoeuvres. The sortie has a similar dynamic profile to the previous aerobatic flights.
Figure 8. Z-axis acceleration measured by the SSL every 0.1 seconds over the entire Flight #8. The spikes coincide with aerobatic manoeuvres. The sortie has a more aggressive dynamic profile compared with the previous aerobatic flights.

Further optimisation

Multiple flights combined

Given that the goal of the SSL fatigue meter is to replicate the legacy fatigue meter bin counts over all flights taken together, it makes sense to optimise the SSL algorithm using data spanning multiple flights rather than from a single flight at a time. To this end, Figure 9 shows the SSL raw acceleration traces from Flights #5, #6, #7, and #8 concatenated into a single trace.

Figure 9. Composite raw Z-axis acceleration trace obtained by concatenating the individual traces from Flights #5, #6, #7, and #8.

Separate optimisations per bin

As well as utilising the data from multiple flights in the optimisation process, I’ve provided additional degrees of freedom by enabling a separate low-pass filter design per acceleration bin, and by running the optimisation algorithm on each bin separately. The resulting optimal parameters are contained in Table 6.

Bin | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
Low-pass cut-off frequency (Hz) | 1.85 | 1.075 | 1.6145 | 1.30 | 1.5668 | 1.075 | 1.85 | 1.85
Low-pass filter order | 9 | 6 | 5 | 5 | 8 | 6 | 9 | 9
Outbound threshold | -1.47g | -0.455g | +0.1897g | +1.8355g | +2.45g | +3.4g | +4.97g | +5.97g
Inbound threshold | -1.53g | -0.54g | +0.1927g | +1.9156g | +2.4748g | +3.5g | +5.03g | +6.03g
Table 6. Adjusted low-pass filter parameters and bin thresholds resulting from numerical optimisation performed per bin on the concatenated raw acceleration traces from Flights #5, #6, #7, and #8 (displayed in Figure 9). Note: the values for the -1.5g, +5.0g, and +6.0g bins are hard-coded, i.e., were not included in the optimisation since no relevant data exists at those levels.

Deploying the optimal parameters from Table 6 into the SSL bin-counting algorithm, and applying it to each of the traces for Flights #5, #6, #7, and #8, and also to the combined trace from all four flights together, gives the results shown in Table 7, alongside the corresponding legacy delta readings.

Flight #5 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 6 | 20 | 32 | 21 | 4 | 0 | 0
Legacy | 0 | 5 | 19 | 36 | 22 | 4 | 0 | 0
% Error | | 20% | 5% | -11% | -5% | 0% | |

Flight #6 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 5 | 17 | 47 | 23 | 4 | 0 | 0
Legacy | 0 | 5 | 18 | 46 | 25 | 4 | 0 | 0
% Error | | 0% | -6% | 2% | -8% | 0% | |

Flight #7 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 3 | 10 | 26 | 14 | 3 | 0 | 0
Legacy | 0 | 3 | 11 | 26 | 14 | 4 | 0 | 0
% Error | | 0% | -1% | 0% | -5% | -25% | |

Flight #8 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 6 | 17 | 53 | 35 | 13 | 0 | 0
Legacy | 0 | 7 | 16 | 50 | 32 | 12 | 0 | 0
% Error | | -14% | 6% | 6% | 9% | 8% | |

Combined #5, #6, #7, #8 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 20 | 64 | 158 | 93 | 24 | 0 | 0
Legacy | 0 | 20 | 64 | 158 | 93 | 24 | 0 | 0
% Error | | 0% | 0% | 0% | 0% | 0% | |

Table 7. Comparison of the SSL and legacy fatigue meter delta readings for Test Flights #5, #6, #7, #8 and the combination of all four, using the further-optimised software configuration parameters in Table 6. The average accuracy [defined as 100-average(abs(%Error))] for each individual trace is now 92% (Flight #5), 96% (Flight #6), 97% (Flight #7), 92% (Flight #8), and 100% (combination of Flights #5, #6, #7, #8).

The results are very encouraging because these further optimisations have resulted in 100% accuracy between the SSL and legacy counts when the four flights are considered together (and better than 94% on average for the individual flights).

Next steps

The convergence between the SSL and the legacy readings across the dynamic envelope confirms that the SSL is in principle working correctly, and that the optimised parameters effectively represent an accurate “reverse-engineering” of the dynamic response of the legacy fatigue meter. That said, I’ll continue to test across future test flights (and fine-tune the parameters as necessary). Also, I’ve not yet achieved flight loads lower than -1.2g or higher than +4.5g (I can’t push or pull hard enough!), so I have been unable to test the acceleration sensitivity at the extreme ranges of the fatigue meter, though as noted above, the bin-counting algorithm at least is found to perform as expected at these extremes on simulated data.

In summary, the following steps remain from the original list in PART ONE:

  • Create a suitable housing box and mount the SSL components within it (i.e., beyond the current prototype foam-board enclosure). I’ve procured a 3D printer for this and will report on progress in my next post.
  • Make any final adjustments to the software based on further flight tests — ideally across the full dynamic envelope of the fatigue meter (+6g, -1.5g) and against different legacy instruments (i.e., rather than just the single instrument used in all tests so far).
  • If the SSL continues to perform accurately and reliably, submit for mod approval via the LAA for installation on my Permit-to-fly Bulldog.
  • Calibrate the SSL via the ground-test company (if required for the mod approval process). This may be required to reach the extreme ranges (+6g, -1.5g) if unable to achieve those in flight.

A new Fatigue Meter for the Bulldog aircraft (Part 3)

PART THREE

This is a direct continuation of PART TWO, picking up where I left off: continuing the flight tests, comparing the SSL fatigue meter readouts with the legacy fatigue meter readouts, and adjusting the SSL software as necessary.

Flight tests

Flight profiles #1 (maiden flight), #2, #3

As with the maiden flight, I performed two further tests with both the legacy fatigue meter and the SSL prototype installed, configured in AIRSPEED SWITCH MODE and NOMINAL DATA MODE, but with no changes to the SSL from the maiden flight. Likewise, each follow-up flight comprised a standard take-off and climb-out to the local area, some “lightweight” general handling (i.e., not full aerobatic sorties), then a return to base with a few circuits before a full-stop landing.

Observations during the flights

As with the maiden flight, I made the following observations during each follow-up flight:

  • From the status LEDs on the SSL, it was seen to successfully power up on activation of the airspeed switch just after take-off once airspeed exceeded 75kts, and to remain on mains power for the duration of the flight (except during the circuits, see below).
  • The “Fatigue Meter” circuit-breaker remained “in” (i.e., did not pop) for the duration of the flight, verifying that the combined electrical demand of the SSL and the legacy fatigue meter remained within the 2 Amp limit of the circuit-breaker.
  • During the circuits, the status LEDs alternated between mains and battery power, verifying that the SSL was correctly responding to the airspeed switch actions, i.e., switching to battery power (and suspending data recording) whenever the airspeed dropped below 65kts (just before touchdown), and reverting to mains power (and re-commencing data recording) whenever the airspeed exceeded 75kts again (in the climb-out just after take-off).

Observations after the flights

After each flight, I downloaded the meter-readings archive from the SSL and compared it with the delta readings from the legacy device. The results are contained in Table 1, including those for the maiden flight (#1; for completeness, already reported in PART TWO). It can be seen that the SSL and legacy meters have generally been triggered at the same acceleration thresholds (“bins”), which is good, but that the SSL counts are higher than the legacy counts when there are multiple triggers in a given bin. This is particularly obvious in the +1.75g bin, which has the most counts since it is the one most often triggered in “lightweight” general-handling flying.

Flight #1 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 0 | 8 | 49 | 23 | 0 | 0 | 0
Legacy | 0 | 0 | 5 | 21 | 16 | 0 | 0 | 0
% Error | | | 60% | 133% | 44% | | |

Flight #2 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 0 | 1 | 8 | 0 | 0 | 0 | 0
Legacy | 0 | 0 | 1 | 3 | 1 | 0 | 0 | 0
% Error | | | 0% | 167% | -100% | | |

Flight #3 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 0 | 1 | 13 | 1 | 1 | 0 | 0
Legacy | 0 | 0 | 1 | 2 | 1 | 0 | 0 | 0
% Error | | | 0% | 550% | 0% | 100% | |
Table 1. Comparison of the SSL and legacy fatigue meter delta readings for each of the first three test flights. SSL highlighted green when readings are identical to legacy, orange when above legacy, and red when below legacy. Ideally all should be green, but if not, orange is better than red since then the SSL is overestimating the acceleration counts i.e., erring on the side of caution. The rows labelled ‘% Error’ show the difference between SSL readings and legacy readings, per bin, expressed as a percentage of the legacy readings. These are seen to be numerically large, suggesting an over-counting by the SSL versus the legacy.

Low-pass filtering

The SSL readings are generally triggered at the correct threshold levels but are over-counting compared with the legacy meter. Assuming that the counting algorithm is correct, i.e., incrementing when a threshold is passed in the outward direction (but not in the return direction), this suggests that the SSL pre-count acceleration signal is fluctuating (up and down) more than the legacy pre-count acceleration signal, and that these fluctuations are being duly counted, leading to higher counts for the SSL. The fluctuations could be genuine, due to noise in the system, or a combination of both. Whatever the cause, the standard signal-processing solution in such circumstances is to apply a low-pass filter to the signal before passing it through the counting algorithm.

Filter design in MATLAB

Since the signal from the SSL accelerometer is already digitised, a digital filter is appropriate, implemented in software (i.e., rather than an analogue filter implemented in circuitry). The design parameters I used are contained in Figure 1.

Figure 1. Digital low-pass filter design using MATLAB (which has a convenient graphical-user-interface for filter design). The sample rate (Fs) of the data retrieved from the accelerometer is 10 Hz (as described in PART ONE). I’ve specified the filter low-pass cut-off frequency (Fc) to be 1.5 Hz since I don’t imagine that the Bulldog would be flown in such a manner as to manoeuvre through the g-levels at a higher rate than this. The other settings specify the type of filter structure. I’ve chosen a 5th order Butterworth design due to the efficiency of IIR filters for real-time computation.

Filter implementation in Python

The very same filter design implemented in MATLAB (Figure 1) can be reproduced in Python (using the SciPy package) with the following line of code, where 5 is the desired filter order, and 0.3 is the desired low-pass cut-off frequency specified as a fraction of half the sample rate (the Nyquist frequency), i.e., 1.5 Hz / (0.5 × 10 Hz) = 0.3:

from scipy import signal

sos = signal.butter(5, 0.3, output='sos')

The resulting filter can then be applied to the raw signal on a batch-by-batch basis via the following line of code in Python:

zf = signal.sosfiltfilt(sos, z)

where z represents a batch of raw acceleration values (in Python Numpy format), sos specifies the filter (from above), and zf represents the filtered acceleration values. Note that I’ve used the filtfilt mechanism which applies the filter forward and backward through the batch in order to eliminate phase offsets — not strictly necessary, but tidy.

Figure 2 shows the results of the low-pass filtering applied to a sample trace captured from the SSL accelerometer (along the Z-axis, i.e., the local vertical). It can be seen that the raw signal is indeed noisy, as suspected (which is likely the cause of the over-counting of the acceleration threshold-crossings), and that the filter is effective in reducing the noise. It is hoped that by sending the filtered signal (rather than the raw signal) through the threshold-crossing algorithm, the SSL fatigue meter bin counts will come into alignment with the legacy fatigue meter results.

Figure 2. Effect of low-pass filtering on the SSL accelerometer readout.
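To make the effect concrete, here is a self-contained sketch applying the same SciPy filter design to a synthetic noisy 10 Hz trace (the signal and noise levels are invented, standing in for a real SSL capture):

```python
import numpy as np
from scipy import signal

fs = 10.0                                          # SSL sample rate (Hz)
t = np.arange(0.0, 30.0, 1.0 / fs)                 # 30 s of samples
clean = 1.0 + 2.0 * np.sin(2 * np.pi * 0.2 * t)    # slow 0.2 Hz g-manoeuvre
rng = np.random.default_rng(seed=1)
raw = clean + 0.3 * rng.standard_normal(t.size)    # add sensor-like noise

# Same design as in the post: 5th-order Butterworth, 1.5 Hz cut-off,
# applied forward and backward (zero phase) via sosfiltfilt.
sos = signal.butter(5, 1.5 / (fs / 2), output='sos')
zf = signal.sosfiltfilt(sos, raw)

# Residual error versus the clean signal shrinks after filtering.
print(np.std(raw - clean), np.std(zf - clean))
```

Because the manoeuvre content sits well below the 1.5 Hz cut-off, the filtered trace tracks the clean signal while most of the wide-band noise is removed.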

Post-modification test flight

Table 2 shows the delta readings from a flight test (#4) conducted after the incorporation of the digital filter within the SSL flight software to pre-process the acceleration measurements before being passed through the bin-counting algorithm. In order to trigger more of the bins than in previous flights, the flight profile comprised some aerobatic manoeuvres: two inside loops, two slow rolls, plus a few steep turns. It can be seen that the low-pass filter seems to have largely corrected the issue of over-counting. The SSL readings are now either identical or numerically very similar to the legacy readings across the wide range of bins covered in the flight (a more aggressive aerobatic sequence would be required to trigger all the bins!).

Flight #4 | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
SSL | 0 | 2 | 3 | 11 | 6 | 3 | 0 | 0
Legacy | 0 | 2 | 3 | 9 | 7 | 3 | 0 | 0
% Error | | 0% | 0% | 22% | -17% | 0% | |
Table 2. Comparison of the SSL and legacy fatigue meter delta readings for the fourth test flight, an aerobatics sortie, following the implementation of the low-pass filter. Colour-coding and ‘% Error’ definitions as per Table 1. The errors are seen to be numerically much smaller than before the implementation of the low-pass filter (Table 1), suggesting that the low-pass filter has been largely successful in addressing the issue of over-counting.

G-meter comparison

The SSL outputs for max-g and min-g for this aerobatic sortie were +4.64g and -1.02g, respectively. Figure 3 shows the corresponding readings from the analogue G-meter in the Bulldog cockpit panel, which are approximately +3g and -0.45g, respectively. We know that the negative-g must be at least as low as -1g, because the sortie involved slow rolls with the aircraft held fully inverted for a few seconds each time. This strongly suggests that the SSL readout for the negative-g should be trusted over the G-meter, which seems to underestimate it. Likewise, given that both the SSL and legacy fatigue meters registered three counts in the +3.5g bin, the SSL positive-g reading of +4.64g should be trusted over the G-meter value of +3g, which again seems to be an underestimation.

Figure 3. G-meter reading after test flight #4, suggesting a maximum value of approximately +3g and a minimum value of approximately -0.45g, which underestimate the actual extreme g-values when compared with the flight profile (fully inverted -1g, etc). The corresponding readings from the SSL are +4.64g and -1.02g, respectively, which seem to be more trustworthy.

Further improvement

I’m encouraged that the SSL fatigue meter readings are now rather close to the legacy readings with the incorporation of a low-pass filter in the SSL software. My goal is to achieve complete alignment. There are two areas where fine-tuning can be pursued to this end: (i) adjusting the low-pass filter characteristics; and (ii) adjusting the bin thresholds.

Bin threshold adjustments

Rather than using a single numerical value per bin threshold (i.e., -1.5g, -0.5g, +0.25g, +1.75g, +2.5g, +3.5g, +5.0g, +6.0g), each bin has a “buffer zone” bounded by the “Outbound” and “Inbound” values contained in Table 3. For each bin, the count is incremented whenever the acceleration enters the buffer zone in the outbound direction. Counting for that bin is then deactivated until the acceleration re-enters the buffer zone from the inbound direction, re-arming it for the next count.

Bin | -1.5g | -0.5g | +0.25g | +1.75g | +2.5g | +3.5g | +5.0g | +6.0g
Outbound | -1.47g | -0.47g | +0.28g | +1.72g | +2.47g | +3.47g | +4.97g | +5.97g
Inbound | -1.53g | -0.53g | +0.21g | +1.78g | +2.53g | +3.53g | +5.03g | +6.03g
Table 3. Bin thresholds used in the counting algorithm. These can be adjusted to fine-tune the counting.
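The buffer-zone (hysteresis) logic can be sketched in Python. This is an illustrative stand-in rather than the actual SSL flight code, shown here for a single positive bin using the +3.5g Outbound/Inbound values from Table 3:

```python
# Sketch of the buffer-zone counting logic for one positive bin.
# Illustrative only; not the actual SSL flight software.

def count_bin(trace, outbound, inbound):
    """Increment when the signal rises through `outbound`; re-arm only when
    it falls back through `inbound`. A dip that stays inside the buffer
    zone therefore cannot produce a second count."""
    count, armed = 0, True
    for prev, cur in zip(trace, trace[1:]):
        if armed and prev < outbound <= cur:
            count, armed = count + 1, False
        elif not armed and cur <= inbound < prev:
            armed = True
    return count

# +3.5g bin values from Table 3: Outbound +3.47g, Inbound +3.53g.
clean_pulls = [1.0, 3.8, 1.0, 3.8, 1.0]            # two distinct 3.5g pulls
chattery    = [1.0, 3.48, 3.46, 3.49, 3.46, 1.0]   # jitter inside the zone

print(count_bin(clean_pulls, 3.47, 3.53))  # → 2
print(count_bin(chattery, 3.47, 3.53))     # → 1
```

The second trace shows the point of the buffer zone: jitter that never leaves it cannot re-arm the counter, so the excursion is counted once rather than repeatedly.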

Next steps

Given that the SSL readings are now converging on the legacy readings across the dynamic envelope, this confirms that the counting algorithm is in principle the correct one. I’ll continue to test and fine-tune via adjustments to the low-pass filter design and to the bin buffer zone values in order to bring the SSL readings into complete alignment with the legacy readings.

In summary, the following steps remain from the original list in PART ONE:

  • Create a suitable housing box and mount the SSL components within it (i.e., beyond the current prototype foam-board enclosure). I’m now considering the use of a 3D printer for this.
  • Make any final adjustments to the software based on further flight tests in order to achieve complete alignment between SSL and legacy readings across the dynamic envelope.
  • If it proves to perform with the required accuracy and reliability, submit for mod approval via the LAA for installation on my Permit-to-fly Bulldog.
  • Calibrate the SSL via the ground-test company (if required for the mod approval process).

A new Fatigue Meter for the Bulldog aircraft (Part 2)

PART TWO

This is a direct continuation of PART ONE, picking up where I left off. I start with the next major step: sourcing the uninterruptible power supply (UPS). I work through its integration into the device, including the implementation of appropriate power and data modes. I then present the installation of the device within a suitable test enclosure mounted in the Bulldog cockpit, in parallel with the legacy fatigue meter, including suitable power connectors. Finally, I present the results from the initial flight test.

PiJuice HAT

There are various options available for the choice of UPS for the Raspberry Pi. I’ve selected the PiJuice HAT (Figure 1), which is a full-featured UPS with multiple technical benefits in addition to meeting the key requirement of providing battery backup whenever the mains power is interrupted, e.g., when the airspeed switch temporarily cuts the power.

Figure 1. PiJuice HAT uninterruptible power supply, which conveniently stacks on top of the Raspberry Pi via the GPIO pins. It incorporates a lithium-ion battery which enables the Raspberry Pi to keep running when the mains power is interrupted (e.g., when the Bulldog airspeed switch cuts the power at low airspeed). The Fatigue Meter circuit board (from PART ONE) stacks in turn on top of the PiJuice board. When the mains power is (re-)connected, the power control logic within the PiJuice directs power to the Raspberry Pi (and to the Fatigue Meter board), whilst also re-charging the lithium-ion battery. Testing under load suggests that the system can run for approximately two hours on the standard battery shipped with the PiJuice (pictured above) with all functionality active (Raspberry Pi, accelerometer, WiFi, status LEDs), so there is no need to install a larger battery (an option available in principle with the PiJuice). The PiJuice HAT including the standard battery costs USD 69 (GBP 48).

Realtime Clock (RTC)

An RTC is required to ensure that the system date & time are maintained whenever the Raspberry Pi is in the shutdown state. This timekeeping is not essential for the fundamental functionality of the Fatigue Meter, but is handy for display and for keeping the stored records in chronological order.

As it happens, the PiJuice incorporates an RTC powered by its lithium-ion battery. However, this means that the PiJuice RTC may only keep running for a few days if the system happens to have been powered down with the PiJuice battery in a low-charge state. This could be inconvenient, with the clock running out of power (and hence losing all sense of time) whenever the aircraft is idle in the hangar for days or weeks. So, I opted to install a separate RTC (Figure 2), independent of the PiJuice, with its own button-cell battery lasting many years.

Figure 2. A Realtime Clock (RTC) for the Raspberry Pi, independent of the PiJuice, with its own button-cell battery lasting years. The unit I chose is the Adafruit DS3231 Precision RTC, costing USD 14 (GBP 10).

Hardware and software integration

Figure 3 shows the SSL Fatigue Meter prototype flight unit, modified from PART ONE, to incorporate the PiJuice HAT and the RTC.

Figure 3. Prototype SSL Fatigue Meter flight unit, modified from PART ONE to incorporate the PiJuice UPS and the separate RTC. The various components are mounted within a modular frame which, in turn, will (eventually) be contained within a suitable protective enclosure. Visible at the top of the image are cables & connectors which will eventually be routed to the outside of the enclosure. These include (i) two LEDs to display system status; (ii) a push button (blue plastic button, upper left in image) to manually switch the unit on and off, which can be used to override the automated switching controlled via the Bulldog airspeed switch; and (iii) a micro-USB connector to provide a separate source of mains power (e.g., via a phone charger), in addition to the mains power from the Bulldog airspeed switch circuit. I incorporated a diode to allow both the micro-USB power supply and the airspeed switch mains supply to connect to the PiJuice micro-USB power input in parallel (so there is no need to disconnect either mains supply whilst the other is in use).

GPIO configuration for PiJuice

In order for the PiJuice HAT to correctly operate alongside the accelerometer and the separate RTC, over the shared i2c bus, it must be configured as shown in Figure 4.

Figure 4. Correct configuration of the PiJuice HAT in order for it to function correctly over the i2c bus, which is shared with the accelerometer and the separate RTC. The EEPROM address has been changed from the default value of 50 to 52 (with EEPROM Write unprotect checked to enable the change to stick). The RTC address 68 for the clock built in to the PiJuice is left as-is. This clock is not used, and is disabled by ensuring that the corresponding kernel module is not loaded. Instead, the separate clock (which also resides at address 68) is enabled by loading the appropriate kernel module via the /boot/config.txt file, as shown in Figure 5.
Figure 5. The /boot/config.txt file must be edited as shown to load the appropriate kernel module for the separate RTC. The i2c addresses are then correctly assigned, as shown in Figure 6.
Figure 6. When correctly configured, the i2cdetect command yields the result shown above. The accelerometer is assigned address 19 and the PiJuice address 14, while the separate RTC at address 68 is shown as UU, which signifies that the clock kernel module has been successfully loaded.
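For reference, on a stock Raspberry Pi OS the /boot/config.txt edit for a DS3231 typically amounts to a single device-tree overlay line (as documented by Adafruit; the exact edit used here is the one shown in Figure 5):

```
# /boot/config.txt -- load the kernel driver (overlay) for the DS3231 RTC
dtoverlay=i2c-rtc,ds3231
```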

Auto-start via PiJuice

The PiJuice has been configured to enable the unit to power up automatically whenever mains power is supplied. Fundamentally, this is to facilitate incorporation of the fatigue meter within the Bulldog airspeed switch circuit, exactly like the legacy fatigue meter. This auto-start behaviour was achieved as follows (essentially by trial-and-error, because the documentation for the PiJuice in combination with the Raspberry Pi 4 is incomplete):

  • Ensure that mains power is supplied only via the PiJuice micro usb connector, and not the Raspberry Pi usb-C connector.
  • Ensure that the PiJuice is configured as shown in Figure 7.
Figure 7. Correct configuration of PiJuice HAT “System Task” tab to enable auto-start of Raspberry Pi 4 whenever mains power is connected. The mains power must be supplied to the PiJuice micro usb connector and not to the Raspberry Pi usb-C connector.

RTC synchronisation

The RTC can be conveniently synchronised to the correct time by connecting the Raspberry Pi to the internet (via the ethernet cable). This only needs to be done once. Thereafter, the RTC will keep time as long as its button cell battery has charge, i.e., for many years of nominal use (when the battery eventually needs replacing, the time can be set manually via mobile phone connected to the Raspberry Pi WiFi hotspot, i.e., without the need for an internet connection).

The RTC is now the basis of timekeeping. In order for the Raspberry Pi to adopt the time from the RTC each time it boots up, the /etc/rc.local file needs to be edited as shown in Figure 8.

Figure 8. In order for the Raspberry Pi to synchronise its system clock with the RTC each time it boots up, the /etc/rc.local file needs to be edited as shown.
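The Figure 8 edit amounts to a single line (a minimal sketch; the exact path and options may vary by distribution):

```shell
# /etc/rc.local (excerpt): copy the RTC time into the system clock at boot.
# This line goes before the final "exit 0".
/sbin/hwclock -s
```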

POWER and DATA modes

I’ve extended the python code described in PART ONE (whose primary function is to gather data from the accelerometer) to provide the following modes of operation:

POWER MODES

  • AIRSPEED SWITCH MODE — this is the nominal mode of operation. The system is automatically powered on when the airspeed switch engages. If the airspeed switch then disengages temporarily, the system continues to run on battery. If/when the airspeed switch re-engages, the system runs on mains again whilst the battery charges. If the period of running continuously on battery (i.e., with the airspeed switch disengaged) exceeds 30 minutes, or if the battery charge state falls below 5%, the system automatically shuts down.
  • MAINS BYPASS MODE — if power is provided via the micro usb adapter (e.g., from a phone charger), the system is automatically powered on and remains on until the power supply is removed; it then shuts down automatically after 30 minutes of continuous battery operation, or sooner if the battery charge state falls below 5%. All triggering from the airspeed switch is ignored when the micro usb power is connected. This mode should therefore only be used for testing, for charging the battery when parked, or for downloading meter readings to a mobile device when parked. Otherwise, erroneous readings would be recorded (e.g., from taxying over bumps) which would nominally be excluded by the airspeed triggering.
  • BATTERY MODE — when no mains power is available via the airspeed switch circuit or the micro usb connector, the system can be powered up and run on battery alone. This is achieved by pressing (and quickly releasing) the externally-routed push button switch (visible in Figure 3). The system automatically shuts down after 30 minutes running continuously on battery, or if the battery charge state falls below 5%. Alternatively, the system can be shut down manually by pressing and holding the push button switch for 10 seconds. Note: this manual shutdown can be used irrespective of whether the system is running on battery or mains, but if mains power remains connected, the system will automatically re-start after 60 seconds.
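The shutdown rule shared by all three power modes can be sketched as a small predicate. This is a hypothetical helper for illustration, not the actual SSL code; it assumes the battery charge level is polled periodically from the PiJuice:

```python
# Shared shutdown rule for the SSL power modes (illustrative sketch):
# shut down after 30 minutes of continuous battery operation, or as soon
# as the battery charge falls below 5%. Mains power resets the decision.
BATTERY_TIMEOUT_S = 30 * 60   # 30 minutes, in seconds
MIN_CHARGE_PCT = 5            # minimum acceptable charge level

def should_shut_down(on_battery_s: float, charge_pct: float, on_mains: bool) -> bool:
    if on_mains:
        return False  # running on mains: never auto-shutdown
    return on_battery_s >= BATTERY_TIMEOUT_S or charge_pct < MIN_CHARGE_PCT
```

In the real system this predicate would be evaluated inside the main polling loop, with the battery-time counter reset whenever mains power is restored.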

DATA MODES

  • NOMINAL DATA MODE — this is the nominal mode of operation used in combination with AIRSPEED SWITCH MODE for power. Whenever the mains power is engaged via the airspeed switch, the accelerometer readings are recorded and counted. In this state, the BLUE LED is lit and the RED LED is unlit. Whenever the mains power is disengaged via the airspeed switch, the accelerometer readings (and counting) are temporarily suspended, emulating the behaviour of the legacy fatigue meter. In this state, both the BLUE LED and the RED LED are lit. Also, whenever the mains power is disengaged continuously for 5 minutes, the currently running data-gathering session is terminated, the acceleration counts are logged for that session, and a new session is started.
  • TEST DATA MODE — this is a special mode available for testing, and must be configured by editing the system configuration file stored on the device and then rebooting. When running in TEST DATA MODE, the power settings are ignored (i.e., mains and battery are treated identically), and the accelerometer readings are recorded and counted for as long as the system is powered up. This allows the acceleration measurements to be tested in any power mode, for example without any connection to the aircraft power (i.e., in MAINS BYPASS MODE or in BATTERY MODE). When gathering data in TEST DATA MODE, the BLUE LED is lit and the RED LED is unlit. Using this mode, I was able to perform some basic functional tests of the accelerometer readout and bin counting whilst running in BATTERY MODE in my car, before testing in the aircraft (see later).
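The NOMINAL DATA MODE session rule described above can be sketched as follows (a hypothetical helper, not the actual SSL code): counting runs while the airspeed switch keeps mains engaged, is suspended while it is disengaged, and the session is closed once the switch has been disengaged continuously for 5 minutes.

```python
# Sketch of the NOMINAL DATA MODE session rule (illustrative only).
SESSION_SPLIT_S = 5 * 60  # close the session after 5 minutes off mains

def session_action(mains_on: bool, mains_off_s: float) -> str:
    if mains_on:
        return "count"        # record and count accelerations (BLUE LED lit)
    if mains_off_s >= SESSION_SPLIT_S:
        return "new_session"  # log the counts and start a fresh session
    return "suspend"          # temporarily suspend counting (BLUE + RED lit)
```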

Battery health self-test

I’ve also incorporated self-tests of the PiJuice battery and power system and included these in the overall system health diagnostic report (described in PART ONE). If there is a fault in the battery or power system, the entire instrument is deemed to be unhealthy.

Test drive

To perform basic tests of the instrument, I took it for a test drive in my car (Figures 9 & 10). This enabled me to test the power modes which don’t require connection to the Bulldog airspeed circuit, i.e., MAINS BYPASS MODE and BATTERY MODE, as well as TEST DATA MODE. I was also able to functionally test NOMINAL DATA MODE by emulating the action of the airspeed switch, manually connecting and disconnecting the mains bypass micro usb power cable.

Figure 9. Testing the basic functionality of the power and data modes in my car, using a temporary enclosure (cardboard box!) for now. In the scenario depicted, the system is running in BATTERY MODE (no power cable connected) and TEST DATA MODE (blue led, top-right of image).
Figure 10. Box closed, instrument fully self-contained, running on battery, with the accelerometer measuring bumps in the road.

Test flight

Temporary enclosure

In preparation for the first test flight, I graduated from the cardboard box of the road tests and built a temporary enclosure using foam-board in order to secure the instrument in the Bulldog cockpit. Eventually I will create a durable enclosure using, for example, an acrylic box, once the design of the instrument is stabilised. Figure 11 shows the foam-board enclosure (with the instrument inside), mounted in the same location in the Bulldog where the legacy fatigue meter is usually located (I temporarily removed the legacy meter in order to check that the new instrument would fit properly).

Figure 11. SSL fatigue meter inside a temporary foam-board enclosure, mounted in the location where the legacy meter is usually installed in the Bulldog cockpit, i.e., under the fibre-glass cover (removed, not shown) situated behind the glovebox between the seats.

Figure 12 demonstrates that the SSL fits neatly in the Bulldog cockpit, and is visible through the perspex window in the glovebox, the same window which is ordinarily used for taking readings from the legacy fatigue meter. With the new instrument, the readings are uploaded to a mobile phone via WiFi, so there are no readings to be taken as such. Instead, the window is useful for observing the status leds on the new instrument, visible via plastic windows inserted in the foam-board enclosure front plate.

Figure 12. SSL fatigue meter mounted as in Figure 11, but now with the fibre-glass cover in place, viewed through the perspex window in the back of the Bulldog cockpit glovebox, the same window ordinarily used for taking readings from the legacy instrument. The perspex window is convenient for observing the status leds on the new instrument (illustrated here in the powered-on state). The meter readings for the new instrument are uploaded via WiFi to a mobile phone, replacing the need to take physical readings through the window.

Figure 13 shows a WiFi signal-strength diagnostic measurement taken on a mobile phone when seated in the Bulldog cockpit. The signal strength from the instrument WiFi hotspot is wholly sufficient to establish network communication with the device. As long as the eventual permanent enclosure (e.g., fabricated from acrylic) has similar transparency to WiFi, network communication should be fine. Note: this need for WiFi transparency is a reason not to build the eventual enclosure from metal.

Figure 13. Strong WiFi signal from the SSL hotspot (“FlyLogicalSSLWiFi”) broadcast from within its foam-board enclosure located underneath the fibre-glass glovebox cover in the Bulldog cockpit.

Power connectors

For convenience, the power connection to the SSL should use the same connector as the legacy fatigue meter. It turns out that although the original (“Plessey”) connectors are no longer available, they can be made to order via Lane Electronics (www.fclane.com). Figure 14 shows a male-female pair of connectors obtained from Lane Electronics.

Figure 14. Electrical power connectors (obtained from Lane Electronics) compatible with the legacy “Plessey” connector in the Bulldog. The female connector (top of image) is the same as the plug which connects the Bulldog airspeed switch to the legacy fatigue meter.

Figure 15 shows a “splitter” harness I constructed using these connectors, enabling the 28 V power cable from the Bulldog airspeed switch to be used for powering both the legacy fatigue meter and the SSL fatigue meter in parallel.

Figure 15. A “splitter” harness constructed using the connectors in Figure 14 to provide power simultaneously to the old and new fatigue meters via the existing Bulldog power supply cable (fed from the airspeed switch).

Figure 16 shows the harness connected to the old and new fatigue meters (on the bench).

Figure 16. The harness from Figure 15 connected to both the old and new fatigue meters in parallel. The existing plug (situated in the Bulldog cockpit, just behind the passenger seat), which ordinarily connects to the legacy fatigue meter, will connect to the adapter shown at the bottom of the image.

Figure 17 shows both fatigue meters installed in the Bulldog, with the harness connected. This parallel arrangement is necessary in order to test the new instrument against the old. The eventual goal is for the new device to wholly replace the old, at which point it would be mounted in the legacy location as in Figures 11 & 12. But for now, until approved, the legacy device must remain installed and operational, with the new device running in parallel in a suitable location.

Figure 17. Both the old and new fatigue meters installed in situ in the Bulldog cockpit. The legacy instrument (not visible) is installed in its usual place behind the glovebox, under the fibre-glass shroud. The new instrument is attached (using velcro pads) to the armrest immediately above the location of the legacy device. The harness from Figure 16 is connected to both. The existing plug (lower-right in the image) which ordinarily connects to the legacy fatigue meter is now connected to the adapter, thereby providing power to both fatigue meters in parallel.

Telemetry monitoring mode

To assist with testing and debugging, I’ve created a mode of operation whereby the telemetry from the SSL can be monitored in real-time on a mobile phone via the WiFi hotspot. Specifically, I access the Raspberry Pi via the RaspController mobile app and run the fatigue meter software kernel inside a console window.

Pre-flight sense check

As an example of the use of telemetry, Figure 18 shows the accelerometer readouts and the bin-count values as they change during a “roll” manoeuvre, as displayed in Figure 19. During this manoeuvre, the vertical acceleration measured by the accelerometer goes from +1g to -1g and back again. This entire manoeuvre represents a single loading cycle (even though it comprises two passes — out and back — through each g level). If my understanding of how the legacy fatigue meter functions is correct (as discussed in PART ONE), this cycle should trigger an increment of 1 in each of the +0.25g and -0.5g bin counts. This is indeed the case, as verified in the observed telemetry in Figure 18, implying that the computation has been correctly programmed in accordance with the assumed algorithm. However, if the underlying assumptions prove to be incorrect and the legacy instrument counts transitions through acceleration levels differently (which will become apparent when testing the SSL alongside the legacy meter), I will need to modify the programming logic accordingly. But at least such software changes are straightforward once the algorithm is known.

This sense-check exercise represents a simple but effective end-to-end test of the SSL fatigue meter (albeit by activating only two of the g bins), in preparation for actual flight tests.

Figure 18. Mobile phone screenshot video capturing real-time telemetry from the SSL for test and debugging purposes. As the video progresses, observe how the bin_counts increase by 1 (for each of the -0.5 g and the +0.25 g bins) when executing the “roll” manoeuvre displayed in Figure 19.

Figure 19. Executing a “roll” manoeuvre (actually, a “half-roll” then back again due to practicality of holding the SSL in one hand and the camera-phone in the other) with the SSL operating wholly self-contained (i.e., in BATTERY MODE and TEST DATA MODE).

The maiden flight

I performed the maiden flight on 18 March 2022 from Ronaldsway Airport, Isle of Man (EGNS), with both the legacy fatigue meter and the SSL prototype installed as in Figure 17. The SSL was configured in AIRSPEED SWITCH MODE and NOMINAL DATA MODE. The flight comprised a standard take-off and climb-out to the local area, some “lightweight” general-handling (i.e., not a full aerobatics sortie) in order to exercise the accelerometer bin-counts in the non-extreme acceleration ranges, then return to base with a few circuits before a full-stop landing.

Observations during the flight

I made the following observations during the flight:

  • From the status leds on the SSL, it was seen to power up successfully on activation of the airspeed switch just after take-off, once airspeed exceeded 75kts, and to remain on mains power for the duration of the flight (except during the circuits, see below).
  • The “Fatigue Meter” circuit-breaker remained “in” (i.e., did not pop) for the duration of the flight, verifying that the combined electrical demand from the SSL and the legacy fatigue meter remained within the 2 Amp limit of the circuit-breaker.
  • During the circuits, the status leds alternated between mains and battery power, verifying that the SSL was correctly responding to the airspeed switch actions, i.e., switching to battery power (and suspending data recording) whenever the airspeed dropped below 65kts i.e., just before touchdown, and reverting back to mains power (and re-commencing data recording) whenever the airspeed exceeded 75kts again i.e., in the climb-out just after take-off.

Observations after the flight

After the flight, I downloaded the meter-readings archive from the SSL (via its WiFi hotspot to my mobile phone). These are displayed in Figure 20. A comparison with the delta readings from the legacy device is contained in Table 1. It can be seen that the SSL and legacy meters registered counts in the same acceleration bins as one another (which is good), but that the SSL counts are generally higher than the legacy counts, suggesting that the counting algorithm may have to be amended to bring the SSL measurements into alignment with the legacy measurements. Further flight tests are required to establish precisely what modifications are needed.

Figure 20. Readings from the SSL fatigue meter following the maiden flight on 18 March 2022. Before the flight, the SSL had been reset to “zero” so the displayed session counts are the same as the total counts (and the delta counts). Table 1 shows these delta counts per bin for the SSL and the legacy fatigue meter.
          -1.5g  -0.5g  +0.25g  +1.75g  +2.5g  +3.5g  +5.0g  +6.0g
SSL         0      0      8       49      23     0      0      0
Legacy      0      0      5       21      16     0      0      0
Table 1. Fatigue meter delta readings from both the SSL and the legacy fatigue meters following the maiden flight of the SSL.

As well as the acceleration bin count readings, the SSL software has been modified to record the maximum and minimum accelerations from the flight (i.e., the “maxG” and “minG” values in Figure 20). These are seen to be +3.42g and -0.36g, respectively. Figure 21 shows the corresponding readings from the analogue G-Meter in the Bulldog cockpit panel, which are seen to be approximately +2.6g and +0.3g, respectively. The SSL values suggest wider extremes than the analogue instrument. Without independent calibration of both, it is not certain which is correct!

Figure 21. G-meter reading after the maiden test flight on 18 March 2022, suggesting a maximum value of approximately +2.6g and a minimum value of approximately +0.3g. The corresponding readings from the SSL are +3.42g and -0.36g, respectively (from Figure 20).

Next steps

I’m encouraged that the SSL worked end-to-end in-flight as expected and that the fatigue meter readings it generates are directionally correct compared with the legacy fatigue meter. The detail of the counting algorithm needs to be investigated through further tests and calibration, and I’ll report my findings in a future post. In summary, the following steps remain from the original list in PART ONE:

  • Select a suitable housing box and mount the SSL components within it (i.e., beyond the current prototype foam-board enclosure)
  • Make any final adjustments to the hardware and software based on the flight tests
  • If it proves to perform with the required accuracy and reliability, submit for mod approval via the LAA for installation on my Permit-to-fly Bulldog.
  • Calibrate the SSL via the ground-test company (if required for the mod approval process)

I’m also encouraged to discover that I’m not the only one using a Raspberry Pi in a “production grade” aerospace application: here is a project which uses a Raspberry Pi for the first time ever as a spacecraft flight computer.


A new Fatigue Meter for the Bulldog aircraft (Part 1)

Also see (now published)

PART TWO

PART THREE

PART FOUR

PART ONE

In a previous post from 8 years ago I presented my musings on the Bulldog Fatigue Meter. A couple of (expensive) three-yearly overhaul cycles have passed since then, and the next one is coming up. This got me thinking: “surely there is a better way of doing this?”. In this post, I describe a prototype replacement device which I’m currently building. If it proves to perform as expected in terms of accuracy and reliability, the ultimate intent is to get it approved as a suitable replacement for the old device (hereafter referred to as the “legacy” device). I call the new device the “FlyLogical Solid State Laboratory (SSL)” because the technological platform is versatile and extensible, and can potentially be used for more than just a fatigue meter. Also, the abbreviation “SSL” is a play on “Suitable System of Levers”; see that previous post for the explanation.

Design goals

The main design goals for the new Fatigue Meter are summarised as follows:

  • Utilise a modern solid-state accelerometer with a digital readout i.e., no moving parts whatsoever in the device, eliminating the need for routine / periodic maintenance / overhaul
  • Include an electronic self-test at boot-up so that the device can report its health on every use cycle. If it generates a healthy signal, the device is deemed serviceable. If it generates an unhealthy signal, it would be deemed unserviceable (and only then would require technical attention).
  • Incorporate a wireless connection to a mobile phone to (i) facilitate convenient retrieval of the Fatigue Meter readings per flight (as well as the ability to download the entire stored archive); (ii) monitor the health of the device via an on-screen health diagnostic report
  • Compatibility with the existing Bulldog Fatigue Meter power supply and airspeed switch for “plug-and-play” convenience when it comes to replacing the legacy device

Solution architecture

Solid-state accelerometer

There are many solid-state accelerometers available on the market. I’ve chosen the LIS331 from STMicroelectronics on account of the following key features (see the datasheet for the full list of features):

  • Dynamically selectable range ±6g/±12g/±24g
  • Embedded self-test
  • Digital output interface
  • Low cost (USD 13 / GBP 9)
Figure 1. The LIS331 solid state accelerometer (still in its packaging) in the palm of my hand, illustrating how tiny it is. The accelerometer is the integrated circuit (black chip) located at the centre of the red breakout board.

Raspberry Pi host platform

For interfacing to the accelerometer, I’ve chosen the Raspberry Pi host platform on account of the following key features:

  • Flexible digital bus (the General Purpose Input/Output or GPIO bus) for interfacing to external devices such as the LIS331
  • Built-in Wi-Fi capability (for facilitating the wireless connection to a mobile phone)
  • Convenient to program using the Python language
  • Small physical footprint, low-power consumption
  • Low cost (USD 88 / GBP 65)

Building the prototype

Breadboard

Figure 2 shows the prototype SSL.



Figure 2. SSL “breadboard” prototype. The core of the instrument is the LIS331 accelerometer which can be seen dangling from its connecting wires, visible at the centre of the image (the red breakout board from Figure 1). For convenience when laying out the circuit design, the LIS331 is soldered to a GPIO “wedge” (the black T-shaped breakout board) which is in turn connected to the Raspberry Pi Model 4 B (top right of image) via the ribbon cable. As well as the accelerometer, I’ve incorporated a pair of LEDs and a beeper (plugged into the white breadboard) for conveying status information without the need to connect the mobile phone. This will allow the aircrew to instantly check the status of the fatigue meter whilst airborne without the distraction of the mobile phone. I’ve also wired-in a toggle button to emulate the effect of the airspeed switch. For now, this simply toggles the data collection process to test that the airspeed interrupts are managed correctly in software. It doesn’t actually cut power to the device. This will need to wait until I’ve sourced and incorporated the uninterruptible power supply (discussed later in the post). Bottom left in the image is the (“Splinktech”) voltage regulator which converts 28V dc down to the 5V dc required by the Raspberry Pi. Top left in the image is a Hewlett Packard laboratory power supply, emulating the Bulldog 28V bus, providing power to the voltage regulator which in turn powers the Raspberry Pi via its standard USB-C power connector.

Software programming

There is a large active community of “makers” who implement hardware/software projects on the Raspberry Pi. As such, there is a great deal of information available online, including sample Python code. Utilising these resources, it was straightforward to write the software kernel to interface to the LIS331 accelerometer via the Raspberry Pi GPIO bus. The following video snippet shows a screenshot of the real-time capture of 3-axis accelerometer data from the LIS331 via the Raspberry Pi. Notice the Z-Axis value of (close to) +1g, obtained because the accelerometer Z-Axis happens to be aligned close to the local vertical. You might ask why the registered value is slightly above +1g when the maximum it should be is 1g when stationary on the Earth’s surface? As noted in the datasheet for the device, taking the average of the two Z-axis measurements (i.e., the positive and the negative orientations) should eliminate the bias (so turning the device upside down will result in a reading slightly less than 1g).
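The raw readout from each accelerometer axis has to be converted to g. As a sketch only (assuming the 16-bit two's-complement per-axis output described in the LIS331 datasheet, and the ±12g full-scale range chosen later in this post; exact register addresses and data alignment should be taken from the datasheet):

```python
# Convert a raw LIS331 axis readout (low byte, high byte) to acceleration
# in g, assuming a 16-bit two's-complement value that scales linearly with
# the configured full-scale range (illustrative sketch, not the SSL code).
def raw_to_g(lo: int, hi: int, full_scale: float = 12.0) -> float:
    raw = (hi << 8) | lo      # combine the low and high data bytes
    if raw >= 0x8000:         # two's-complement sign correction
        raw -= 0x10000
    return raw * full_scale / 32768.0
```

For example, a raw value of 0x4000 corresponds to half of the positive full scale, i.e., +6g at the ±12g setting.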

You can see from the code comments at the top of the screenshot that the code was adapted from open-source community contributions (many thanks to jenfoxbot for getting me started on this hardware/software combination).

Accelerometer real-time readouts from the benchtop prototype SSL

Acceleration threshold counting algorithm

The acceleration measurements need to be converted to counts within the “bins” [-1.5g, -0.5g, 0.25g, 1.75g, 2.5g, 3.5g, 5.0g, 6.0g] as defined for the legacy device and the Bulldog fatigue index (FI) calculations. Without knowing the internal details of the legacy device’s gating and counting logic, I’m going to assume a simple “threshold” counting approach, summarised as follows:

  • Only a single axis is used in the counting. This is the local vertical axis (which measures +1g in straight and level flight). By careful positioning of the accelerometer within its housing box, and careful alignment of the box in the aircraft, the local vertical will correspond to a single axis of the three-axis accelerometer readouts
  • Set the full-scale range of the accelerometer to ±12g from the possible values of ±6g/±12g/±24g thereby ensuring that the allowable loading envelope of the Bulldog is fully captured (-3g to +6g). Note, it may be acceptable to set the accelerometer to ±6g (which would provide higher resolution over the range of interest) but by choosing ±12g we can be sure that the upper bin at +6g is fully covered.
  • Set the intrinsic sample rate of the accelerometer to 50Hz (from the possible range of 0.5 Hz to 1kHz) and down-sample the readout via the GPIO bus to 10Hz, thereby providing a measurement every 0.1 seconds to the bin counter. Process the bin counts in batches of 100 samples which corresponds to 10 second chunks (at 10Hz). This combination should provide sufficient bandwidth to capture the dynamic load transient in flight, whilst not overly taxing the Raspberry Pi resources (CPU, data bus, etc).
  • For those bins which record accelerations greater than 1g, the count for a given bin will be incremented by one each time the (single axis local vertical) acceleration passes through the given threshold (bin value) in an upward direction (downward passes will be ignored)
  • For those bins which record accelerations less than 1g, the count for a given bin will be incremented by one each time the acceleration passes through the given threshold in a downward direction (upward passes will be ignored)

The threshold counting algorithm was straightforward to implement in software. Comparative testing (legacy and SSL, side-by-side) will determine if this simple approach is valid, or if a more complex algorithm is required.
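The bullets above can be sketched in a few lines of Python. This is my assumed algorithm as described, not the legacy meter's documented logic: bins above 1g count upward passes through the threshold, bins below 1g count downward passes.

```python
# Sketch of the assumed threshold-counting algorithm (illustrative only).
BINS = [-1.5, -0.5, 0.25, 1.75, 2.5, 3.5, 5.0, 6.0]  # thresholds in g

def count_crossings(samples, bins=BINS):
    """Count threshold crossings in a sequence of vertical-axis g samples."""
    counts = {b: 0 for b in bins}
    for prev, cur in zip(samples, samples[1:]):
        for b in bins:
            if b > 1.0 and prev < b <= cur:    # upward pass (bins above 1g)
                counts[b] += 1
            elif b < 1.0 and prev > b >= cur:  # downward pass (bins below 1g)
                counts[b] += 1
    return counts
```

As a sanity check, a +1g to -1g and back roll cycle increments the +0.25g and -0.5g bins once each, and leaves the remaining bins untouched.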

Health self-check

As per the design goals, a key feature of the SSL is to eliminate the requirement for periodic overhaul (e.g., every three years for the legacy device), relying instead on internal health-check diagnostics before each flight. As per the datasheet, the LIS331 incorporates such self-test functionality. It is triggered by sending a signal instructing the device to (electrically) apply calibrated forces along each sensing axis. By checking that the resulting sensed values fall within the published expected range, the health of the accelerometer can be definitively verified. The sign (direction) of the calibrated forces can be flipped, allowing testing along both directions of each sensing axis. It was straightforward to implement the self-tests in Python in accordance with the datasheet, and to incorporate them in the overall SSL software stack.
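The pass/fail gate at the heart of the self-test can be sketched as follows. The limits used below are placeholders, not the published LIS331 figures; the real bounds must be taken from the datasheet:

```python
# Sketch of the self-test pass/fail gate (illustrative only). The self-test
# applies a calibrated electrostatic force to an axis; the resulting change
# in output must lie within the published range for the axis to pass.
def self_test_passes(measured_delta_mg: float, lo_mg: float, hi_mg: float) -> bool:
    return lo_mg <= abs(measured_delta_mg) <= hi_mg

# PLACEHOLDER limits for illustration (not the datasheet values):
EXAMPLE_LO_MG, EXAMPLE_HI_MG = 120.0, 550.0
```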

Wi-Fi, web-server & web-app

As per the design goals, the SSL should provide wireless communication with mobile phones to facilitate convenient transfer of the meter readings and health reports. The Raspberry Pi incorporates both Bluetooth and Wi-Fi for wireless connectivity. I decided to use Wi-Fi owing to its greater flexibility.

Configuring the Raspberry Pi as Wi-Fi hotspot

This is a common usage of the Raspberry Pi and is straightforward to configure. For the SSL, I have configured the Raspberry Pi as a private Wi-Fi hotspot, i.e., reachable from connecting devices but with no public internet access. Figure 3 shows the SSL Wi-Fi network (named “FlyLogicalSSLWiFi”) available via my Android mobile phone (on iOS devices, the SSL hotspot similarly appears in the list of available networks).

Figure 3. SSL Wi-Fi hotspot. Simply connect to “FlyLogicalSSLWiFi” from any mobile device to establish a private wireless connection (i.e., without internet) between the SSL and the mobile device. This connection allows transfer of meter readings and health reports from the SSL to the mobile device.

Web-server & web-app

Having established a Wi-Fi connection, the next key question is how to communicate in software between the SSL and the mobile device. My first instinct was to create a traditional “mobile app”. However, that would entail writing separate code for each mobile platform (i.e., Android and iOS, etc). There are software technologies to assist with this (e.g., Xamarin, Flutter, ReactNative, Ionic, etc) which facilitate code-reuse between the mobile platforms. But in my experience with all of these technologies, none allow for 100% code re-use, so there is inevitably a need for “last mile” programming specific to each platform. Moreover, for iOS and Android, there is a need to interface with the respective App Stores which brings its own bureaucracy to the process.

So, I decided instead to build a web-app — which utilises a browser-based user-interface and a suite of web pages hosted on the device. This is effectively a universal solution since every modern smartphone incorporates a browser. Moreover, it means that any device with a Wi-Fi connection and a browser can access the SSL: not just an iOS or Android phone/tablet. For example, a Windows-, Mac-, or ChromeOS- laptop.

The only (minor) disadvantage of using a web-app as the user-interface between the SSL and the mobile device is the consequent need for the SSL to host a web-server in order to serve the web pages for consumption by the web-app. But this really is a minor disadvantage because it is a common use of the Raspberry Pi to host a web-server. As such, I have configured the SSL Raspberry Pi to run the Apache web-server (which along with nginx is one of the most popular web-servers in the world). A key benefit of using Apache is the ease of deployment of PHP-based web-apps. PHP remains one of the most popular web development languages. It is easy to program, so I’ve chosen it for the SSL web-app pages. Figures 4 & 5 show screenshots taken from my mobile phone of the prototype SSL web-app main page and system-health page, respectively. Figures 6 & 7 show screenshots of the downloaded meter readings archive and detailed health report, respectively.

Figure 4. Prototype SSL web-app main page as viewed from my Android phone via the SSL Wi-Fi connection. This page contains the meter readings and totals (accelerometer bin counts) pertaining to the most recent data-capture session (or flight, once in operation). The counts displayed here were created by manually shaking the SSL on the bench (!). The “click here for detailed Health Report” link leads to the page shown in Figure 5. The “Click here to download archived Meter Readings” link triggers a download (from the SSL to the mobile phone) of the meter readings archive, the contents of which are shown in Figure 6. The archive data file is physically stored on the Raspberry Pi micro-SD card and is therefore permanent (i.e., survives power recycles) and can be transferred from the SSL to the mobile phone during any Wi-Fi session.
Figure 5. Prototype SSL web-app system health page as viewed from my Android phone via the SSL Wi-Fi connection. This page contains details of all aspects of the SSL system health as of the latest boot-up, including the results from the accelerometer self-tests. The “_self_test_A” and “_self_test_B” pertain to switching the sign (direction) of the self-test calibrated forces. The “accelerometerIsHealthy” flag is set to “1” (pass) only if the self-test measurements fall within the published ranges from the LIS331 datasheet. The “click here to download Health Report” link triggers a download (from the SSL to the mobile phone) of the health report, the contents of which are shown in Figure 7.
Figure 6. Excerpt from the downloaded SSL meter-readings archive, which contains the chronological records of bin counts and totals for every data-capture session (aka flight, once in operation). The data is in JSON format for maximum portability.
Figure 7. The downloaded SSL health report (from Figure 5). Again, the data is in JSON format for maximum portability.
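The pass/fail logic behind the “accelerometerIsHealthy” flag described in Figure 5 amounts to a range check of the self-test measurements against the published limits. A minimal sketch of that check; note the limit values below are placeholders, not the real LIS331 datasheet figures:

```python
# Hypothetical self-test limits per axis, in milli-g.
# The real bounds come from the LIS331 datasheet.
SELF_TEST_LIMITS_MG = {"x": (120, 550), "y": (120, 550), "z": (140, 750)}

def accelerometer_is_healthy(self_test_deltas_mg):
    """Return 1 (pass) only if every axis self-test delta falls within its range."""
    for axis, (lo, hi) in SELF_TEST_LIMITS_MG.items():
        delta = abs(self_test_deltas_mg[axis])
        if not (lo <= delta <= hi):
            return 0
    return 1
```

Running the check for both self-test directions (the “_self_test_A” and “_self_test_B” entries) would simply apply the same function to each set of deltas.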

Power supply considerations

SSL power consumption

Measuring the total power consumed, using a power monitor plugged into the domestic mains socket feeding the Raspberry Pi power supply, suggests a nominal power consumption of 3W with the accelerometer, Wi-Fi hotspot, and web-server all operational (see Figure 8).

Figure 8. Measuring the total power consumed by the prototype SSL running on domestic mains power reveals a power consumption of 3W with all SSL functionality operational. Note this power number includes any power loss in the wall power supply unit (which is likely to be very low).

At the 5V feeding the Raspberry Pi, this equates to a current of 0.6A, which is well within the 2A circuit-breaker limit on the Bulldog fatigue-meter power circuit. From an electrical-load point of view, then, the SSL can be a plug-and-play replacement for the legacy unit. All that is required is a voltage regulator to convert the Bulldog nominal bus voltage of 28V down to the 5V input required by the Raspberry Pi. No re-cabling is required (just a change of connectors).
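The load arithmetic can be written out explicitly (ignoring regulator losses, which the measurement suggests are small):

```python
# Power-budget check for the SSL on the Bulldog fatigue-meter circuit.
BUS_VOLTAGE = 28.0     # V, nominal Bulldog bus
PI_VOLTAGE = 5.0       # V, Raspberry Pi input
MEASURED_POWER = 3.0   # W, from the mains power monitor
BREAKER_LIMIT = 2.0    # A, fatigue-meter circuit-breaker rating

pi_current = MEASURED_POWER / PI_VOLTAGE    # 0.6 A at the Pi's 5 V input
bus_current = MEASURED_POWER / BUS_VOLTAGE  # ~0.11 A drawn from the 28 V bus
print(pi_current, round(bus_current, 3), pi_current < BREAKER_LIMIT)
```

Note the current drawn from the 28V bus itself is even lower still, since the same 3W is delivered at the higher bus voltage.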

Figure 9 shows a candidate voltage regulator which I’m testing.

Figure 9. An off-the-shelf dc-dc voltage regulator for converting the 28V from the Bulldog main bus down to the 5V required to power the SSL. The unit can be seen in Figure 2 (lower left in the image), where it is undergoing continuous testing, powering the prototype SSL. It generates no discernible heat (i.e., the unit is not warm to the touch, even after many days of continuous operation).

This is a compact, sealed unit (encased in resin, waterproof and contaminant-proof) which accepts a range of input voltages (12 to 28V dc) and generates a stabilised 5V dc output. Bench-testing with a dc power supply (emulating the Bulldog 28V bus) shows the output to be a steady 5.25V, irrespective of fluctuations in the input voltage. With a current rating of 3.5A, this is more than adequate for powering the SSL.

Airspeed switch

The most complex part of the power-supply story is the need for a rechargeable battery pack to keep power flowing to the Raspberry Pi temporarily whenever the main supply is cut. Such power cuts will be a routine occurrence: for example, whenever the crew selects “battery master switch off”, or whenever the Bulldog airspeed switch detects a low airspeed and cuts power to the fatigue-meter circuit. The latter is intentional behaviour, to prevent the fatigue meter from recording the shock loads incurred when the aircraft lands. Likewise, the airspeed switch doesn’t provide any power to the circuit until the airspeed exceeds 73–76 kts (ref RAF Bulldog document AP-101B-3801-1), i.e., until the aircraft is airborne. This prevents the fatigue meter from recording shock loads encountered during ground manoeuvres, taxying over bumpy ground, etc. It does raise the question of the airspeed switch cutting power whenever the airspeed falls below 65–68 kts when not intending to land, as can happen when practicing full stalls or aerobatic stall turns; in that case the fatigue meter would temporarily not be logging legitimate loads. But this applies to the legacy device just as much as to the SSL (!)
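The airspeed switch behaves as a hysteresis comparator: power is applied once the airspeed rises through the upper band and removed once it falls through the lower band. A sketch of that logic, with illustrative thresholds picked from within the quoted 73–76 and 65–68 kt bands:

```python
def make_airspeed_switch(on_kts=74.0, off_kts=66.0):
    """Hysteresis switch: closes (powers the circuit) above on_kts,
    opens below off_kts. Thresholds here are illustrative only."""
    state = {"powered": False}

    def update(airspeed_kts):
        if not state["powered"] and airspeed_kts > on_kts:
            state["powered"] = True
        elif state["powered"] and airspeed_kts < off_kts:
            state["powered"] = False
        return state["powered"]

    return update
```

The gap between the two thresholds prevents the power chattering on and off when flying at an airspeed near the switching point.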

The reason why interruptions in supply power need to be managed gracefully is that the Raspberry Pi, a fully-fledged computer in its own right, needs to be shut down cleanly (just like a desktop PC). It is inappropriate, and could actually damage the system, if the power is simply cut without issuing the proper shutdown command to the operating system. By utilising a battery pack in the manner of an uninterruptible power supply (UPS), the power cycles can be managed properly.
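On the software side, the UPS approach reduces to: poll a “mains present” signal, and trigger a clean OS shutdown once mains has been absent for a debounce period while running on battery. A sketch of the decision logic (the hold time is a hypothetical value, and how “mains present” is sensed depends on the UPS hardware eventually chosen):

```python
import subprocess

HOLD_POLLS = 5  # hypothetical: consecutive "mains absent" polls before shutting down

def should_shutdown(mains_present_history, hold=HOLD_POLLS):
    """True once the most recent `hold` polls all report mains power absent."""
    recent = mains_present_history[-hold:]
    return len(recent) == hold and not any(recent)

def clean_shutdown():
    # Issue the proper OS shutdown rather than letting the supply collapse.
    subprocess.run(["sudo", "shutdown", "-h", "now"], check=False)
```

The debounce period avoids shutting down on a momentary brown-out, while still completing the shutdown well within the battery pack's hold-up time.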

I’m currently researching the options for the UPS implementation. Owing to the current global shortage of computer chips, such components are in short supply, so it could take some weeks to source the kit.

Building the flight unit

Having proven with the breadboard prototype that the design meets the requirements in principle, the next step was to build the components into a permanent hardware solution suitable for flight.

Figure 10 shows the result, with all the components soldered to a stripboard specifically designed for the Raspberry Pi. Moreover, the SSL hardware is wholly contained on its own stripboard that connects to the Raspberry Pi via a stackable GPIO header. This physical modularity is a key benefit of the Raspberry Pi as a host platform. Note that the accelerometer (red breakout board) has been mounted in the geometric centre of the stripboard with the positive Z-axis pointing vertically downwards, so that it reads a nominal +1g in straight-and-level flight.

Figure 10. Build-out of the prototype SSL flight unit using soldered joints on a single piece of stripboard rather than the assortment of circuit boards used in the “breadboard” (Figure 2). The cables visible at the bottom of the image are for the USB mouse & keyboard and wired networking. These make programming and testing the SSL much more convenient than the alternative approach of accessing the Raspberry Pi for development via wireless remote login. These cables will be absent in normal operation of the SSL, when the only physical cable connection will be the power supply (from the 5V end of the voltage regulator) via a USB-C connector to the Raspberry Pi. When the unit has been finalised, I will further secure all components and joints with (non-conducting) epoxy to provide additional physical robustness. The box for housing the device has not yet been selected, since the required dimensions will depend on the choice of UPS, which is still an open question. The housing box will be mounted on a frame that fits identically in the location (behind the glovebox in the Bulldog cockpit) currently occupied by the legacy unit. Note that the LEDs are mounted on long cables so they can eventually be positioned within the housing box behind cut-outs, such that they are visible through the perspex window in the Bulldog glovebox (in front of the fatigue-meter location). In this way, the aircrew will be able to determine the status of the SSL by simply looking at the LEDs through the perspex window. The final layout of the voltage regulator and the power cable connections has still to be established (once the final dimensions of the housing box are known).

Testing

So far, all testing has been ad hoc, on the bench, whilst building and proving the circuitry and software. Basically, this has amounted to running the system continuously for days, exciting the accelerometer by manually shaking the unit, and checking that the bin counters respond correctly in the recorded archive.

More formal testing will be required once the flight unit has been completed. My intention is to fly the SSL alongside the legacy fatigue meter on my Bulldog, and check that both devices give the same readings. In this way, the SSL will have been calibrated against a calibrated instrument (the legacy device).

To obtain approval, I expect that it may also be necessary to formally calibrate the new device using the same approach adopted for the legacy devices (ground-testing on a centrifuge etc). But the aim would be that this is a one-off exercise by the relevant company (i.e., the replacement company for the now defunct Pandect). Thereafter there should be no requirement for recurring (e.g., three-yearly) overhaul/calibration since, unlike the legacy device, the SSL has no moving parts (i.e., no mechanical wear). So, if the SSL passes its internal health check on boot-up each time, it would be deemed fit for flight, irrespective of how many years it has been in service. This is the same logic applied when determining whether any other avionics unit is fit for flight: radios and GPS units etc. don’t need to be overhauled on a recurring basis. They are deemed fit to fly if they power-up properly and exhibit nominal behaviour, and only on failure are they serviced or replaced, irrespective of service life.

Next steps

From today’s perspective, the major next steps are as follows:

  • Source the uninterruptible power supply (UPS), integrate it in the hardware and software stack
  • Select a suitable housing box and mount the SSL components within it
  • Flight test the SSL in parallel with the legacy device and compare the measurements
  • Make any final adjustments to the hardware and software based on the flight tests
  • If it proves to perform with the required accuracy and reliability, submit for mod approval via the LAA for installation on my Permit-to-fly Bulldog.
  • Calibrate the SSL via the ground-test company (if required for the mod approval process)

I will report on progress in future post(s).

Standard
Uncategorized

Mothership

A song for my Mum

Let’s travel on the Mothership
Going at the speed of light
Visit all our other worlds
Where everything’s alright

Lean against the wind and rain
Gaze across the sea
Singing songs from long ago
Dancing you and me

I’ll go to sleep and dream about
Our cottage in the sky
I’ll read the note you left me there
And promise not to cry

About caravans and riverbanks
And fishing rods and reels
Woolworths on the main street
Guitars and bicycles

Dinner table conversations
Dishes in the sink
Homemade Christmas decorations
And now, a final drink

Onwards to the end of time
We run this crazy race
See you at the poolside bar
Beyond the edge of space

Audio, Plugins

Stereo Panner for Ableton Live 9

Frustrated by the lack of proper stereo panning in Ableton Live 9, I built a VST plugin for this explicit purpose. I know there are (convoluted) ways to achieve this in Ableton Live 9, but it’s all too awkward. I also know that this is fixed in Ableton Live 10, but I’m not ready to migrate to that version yet.

Stereo Panner controls

The plugin is described in detail here, and you can download it (plus other plugins) for free from here (for Windows only).

Audio, Music

De-Noising Audio using Spectral Subtraction in MATLAB and Ableton Live

Last time I wrote about audio restoration using simple digital filtering (in MATLAB and Ableton Live). I’ve since received another old Havering recording from Walt. Again from an old cassette tape, this recording is rather noisy. In this post, I explain how I cleaned it up using a more elaborate technique than previously.

Again I used MATLAB for the algorithm development aspects of the process, in combination with Ableton Live for the audio and mix management.

The noise

Here is a clip of the lead-in to the show. The noise is apparent.

Snippet of the raw (noisy) recording

Figures 1 and 2 show the noise spectrum (over the full bandwidth and zoomed-in to the low-frequency zone, respectively) computed via the MATLAB pspectrum function.

Figure 1: Noise spectrum revealing the broadband nature of the background noise in the recording.
Figure 2: Noise spectrum, zoomed-in on the low-frequency regime, revealing the 60 Hz “power hum” plus a distinct peak around 1150 Hz in both channels and a lesser peak around 1700 Hz in the left channel only.

The noise has similar characteristics to the last time: some low-frequency “power hum” (Figure 2) plus a broad-band “tape hiss” over the extent of the audio/music bandwidth (Figure 1). Interestingly, the low-frequency power hum (Figure 2) comprises only the fundamental mode (at approximately 60 Hz) rather than the multiple harmonics observed last time. Also, there is a distinct peak around 1150 Hz in both channels and a lesser peak around 1700 Hz in the left channel only.

Suppressing the “power hum”

As last time, notch filtering was used to suppress the low-frequency peaks from Figure 2. However, rather than using Ableton Live’s notch filtering as I did last time, I used MATLAB. This allowed me to create a suite of filters which could be separately configured for the left and right channels (since, as observed in Figure 2, the characteristics of the noise peaks vary between the channels). As a starting point, I used the MultiNotchFilter example “plugin” bundled with the MATLAB Audio Toolbox and extended it to have separate controls for each channel (creating what I call the MultiNotchFilterStereo “plugin”). Figure 3 shows a (partial) screenshot of the plugin configured to suppress the peaks identified in the spectrum from Figure 2.

Figure 3: Screenshot of the MultiNotchFilterStereo plugin (adapted from the MultiNotchFilter plugin bundled with MATLAB) loaded into the MATLAB audioTestBench. The plugin has ten notch filters per channel. Only the first seven of the left channel filter controls are visible in the screenshot (there are similar controls for each of the ten filters per channel). Only three of the notches are being used on the left channel (and only two on the right channel), corresponding to the three noise peaks (at 55 Hz, 1136 Hz, and 1702 Hz) in the left channel (and 55 Hz and 1168 Hz for the right channel).

Here is the result of applying the notch filtering to the original noisy clip:

Result of applying the notch filtering to the snippet of the raw (noisy) recording in order to suppress the low-frequency noise components. Comparing with the raw clip presented earlier, it is clear that the filters have had an audible effect on suppressing some of the components of the noise.

Suppressing the “tape hiss”

Instead of simple filtering used last time, I wanted to try something more sophisticated in an attempt to achieve improved broad-band noise suppression with minimal audible artefacts.

The approach adopted was to adapt the SpectralSubtractor “plugin” bundled with the MATLAB Audio Toolbox, again extended to have separate processing for each channel (creating what I call the SpectralSubtractorStereo “plugin”) since the original plugin catered for mono signals only. Figure 4 shows a screenshot of the plugin configured (by trial-and-error listening experiments) to suppress the broadband noise identified in the spectrum from Figure 2.

Figure 4: Screenshot of the SpectralSubtractorStereo plugin loaded into the MATLAB audioTestBench. The plugin (adapted from the SpectralSubtractor plugin bundled with MATLAB) performs noise reduction by spectral subtraction, applied independently to both channels, but with the same user-configurable parameters configured on both channels.

The algorithm works by subtracting a representation of the noise from the noisy signal in the frequency domain. In this case, the representation of the noise is a simple constant amplitude (band-limited) “white noise” model.

The core of the algorithm is encapsulated in the first line of the following two lines of MATLAB code:

mag_X_out = max(0, abs(X_in) - Mag2Subtract);

X_out = mag_X_out .* exp(1i*angle(X_in));

where mag_X_out is the magnitude of the processed spectrum, X_in is the noisy signal spectrum, and Mag2Subtract is the user-selected “noise magnitude” (i.e., configured via the “Noise Estimate” control in Figure 4). In the second line of code, X_out is the processed spectrum, created by reuniting the modified magnitude mag_X_out with the original phase of X_in.

Not shown in this code snippet are the application of the Fast Fourier Transform (FFT) and its inverse (to convert between the time and frequency domains) and the machinery for managing the data buffers. I wanted to emphasise the crux of the algorithm rather than the utility code around it, and to demonstrate how compact the MATLAB language is for implementing mathematical expressions applied to complex-valued matrices (such as X_in and X_out).
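For readers without MATLAB, the same core operation translates almost line-for-line to NumPy. A single-frame sketch (the framing, windowing, and overlap-add machinery is omitted here too):

```python
import numpy as np

def spectral_subtract_frame(x, noise_mag):
    """Subtract a flat noise magnitude from one frame's spectrum,
    clamping at zero and keeping the original phase."""
    X_in = np.fft.rfft(x)
    mag_out = np.maximum(0.0, np.abs(X_in) - noise_mag)
    X_out = mag_out * np.exp(1j * np.angle(X_in))
    return np.fft.irfft(X_out, n=len(x))
```

With noise_mag set to zero the frame passes through unchanged; as noise_mag grows, bins whose magnitude falls below it are cut completely, exactly as the "0" branch described below.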

A schematic illustrating the spectral subtraction technique is shown in Figure 5.

Figure 5: De-noising via the technique of spectral subtraction. The plots are in the frequency domain (i.e., after the FFT computation). Note that these are not actual signal spectra, merely pictorial representations to aid the explanation. Also, just a single-channel (mono) signal is depicted here (in the actual processor, the same algorithm is applied independently to each channel). The number of frequency bins (and hence the frequency resolution for a given sample-rate) is determined by the length of the analysis frame (i.e., the number of samples, per channel, sent to the FFT in each successive computation, performed frame-by-frame over the entire signal duration), adjusted via the “Analysis Frame” control in Figure 4. The “Noisy signal” (blue) in the upper plot corresponds to abs(X_in). The “Noise model” (red) corresponds to Mag2Subtract. The “De-noised signal” (green) in the lower plot corresponds to mag_X_out. It has the value zero whenever the “Noisy signal” is below the level of the “Noise model”. Elsewhere, it has the value given by (abs(X_in) minus Mag2Subtract).

In a sense, the “0” branch in the expression for mag_X_out in the code snippet can be thought of as a frequency-dependent noise gate, whereby for each frequency bin, if the spectral magnitude is below the user-selected threshold (i.e., the “white noise” magnitude), the signal output is cut completely. For the other branch, if the spectral magnitude is above the assumed model noise threshold, then that constant threshold level (representing the “white noise” magnitude) is subtracted from each bin.

The noise threshold is user-adjusted by trial-and-error. Too low, and the de-noising is not effective. Too high, and audible artefacts appear in the output as a characteristic “tinkling”. This invariably occurs when frequency-domain audio manipulation is pushed too far; indeed, it can be used as an effect in itself, e.g., in vocoders and robotic voices, or in the (well-established) technique of cranking autotune up to the extreme. But for the present purposes of de-noising, the parameters have been adjusted such that maximal noise suppression is achieved with minimal perceivable adverse effect on the output signal. Note that the “Analysis Window” (i.e., the type of windowing applied before performing the FFT), the “Analysis Frame” (i.e., the length of the data chunk sent to the FFT), and the “Frame Overlap” are commonly used in spectral analysis (and described in many references, so not detailed here). Suffice it to say, for present purposes, these parameters were selected by trial-and-error (via subjective listening experiments) to give the best result on the audio file in question.

Here is the result of applying the spectral subtraction to the noisy clip using the settings displayed in Figure 4:

Result of applying the spectral subtraction to the previous clip (i.e., the one with the power hum already removed). Comparing with the original raw clip presented at the start, it is clear that the spectral subtraction algorithm is very effective for suppressing the broad-band noise. There is a little bit of “tinkling” evident in the output, but this is effectively masked by the music (once it starts playing).

“One click” plugin creation

Having built and tested the MultiNotchFilterStereo and the SpectralSubtractorStereo “plugins” entirely within the MATLAB environment, I then converted each of them to VST plugins using the “one click” conversion button provided in the MATLAB Audio Toolbox audioTestBench interface.

Additional tweaks to the mix within Ableton Live

I then loaded the VST plugins into Ableton Live, applied a noise gate in front of them, and some equalisation and dynamic range control downstream, as shown in the screenshot in Figure 6.

Figure 6: End-to-end plugin effects chain implemented in Ableton Live for this de-noising project. The first (“Short Cut” noise gate) and last (“Punchy Dance Master” compressor/limiter/equaliser component) are Ableton built-in plugins used to tweak the mix. The middle two components (“MultiNotchFilterStereo” and “SpectralSubtractorStereo”) are the VST plugins built entirely in MATLAB and are the core of the de-noising solution presented in this article.

This effects chain was applied to the noisy recording of the entire radio show. The resulting cleaned-up audio can be streamed from here.

Conclusions

The spectral subtraction method, using a simple flat “white noise” model, is found to be rather effective in removing broad-band “tape hiss” noise from audio/music recordings. Compared with simple digital filtering (covered in the previous post), the spectral subtraction method is found to be superior (from informal subjective listening trials).

As an enhancement of the technique, it would be interesting to try subtracting a shaped noise spectrum (rather than the simple flat value used here). This could be computed from a noise-only portion of the recording. Likewise, it would be interesting to compare the spectral subtraction approach with alternative techniques such as wavelet-based de-noising, machine-learning/deep-learning based de-noising, and adaptive filtering. All these can be explored via MATLAB.

MATLAB is again found to be a very powerful and convenient environment for prototyping the audio processing algorithms. Moreover, the (remarkable) “one click” creation of VST plugins from entirely within MATLAB makes it trivially simple to bring the algorithms into the Digital Audio Workstation (DAW) universe.

Footnote

You may have noticed this logo in the compiled MATLAB VST plugin screenshots above. There is a history to this. Just over twenty years ago, I worked with a very talented programmer, Pepijn Sitter, from The Netherlands, to create an audio effects processing software product called WaveWarp. We distributed it under the trading name Sounds Logical. It was critically acclaimed, winning an Editor’s Choice Award from Electronic Musician Magazine in 2001.

WaveWarp enabled you to build your own audio effects from a library of modular building blocks. In that sense, its architecture resembled Simulink, but it was fundamentally much faster (even compared with the compiled version of Simulink deployed via the Real-Time Workshop), on account of the fact that the WaveWarp audio engine (and each individual module) was written in highly-optimised C code (making extensive use of pointer arithmetic), such that it could process multi-channel audio in real-time, sample-by-sample, on a typical desktop PC of the age. Moreover, it had full multi-rate functionality (via a library of decimators, interpolators, polyphase filterbanks, etc) allowing for elaborate mixed sample-rate designs. It used the FFTW (Fastest Fourier Transform in the West) library for spectral analysis, just as MATLAB does now. The WaveWarp software worked in standalone mode or as a DirectX plugin, and even had a real-time interface to MATLAB (akin to the audioTestBench available in the MATLAB Audio Toolbox today).

Alas, WaveWarp is now long gone. Moreover, I lost track of the source-code years ago, and I don’t have a running version. Also, it has almost completely faded from the internet. I could find only this review on PCRecording.com.

Anyway, given that I find myself delving into the world of audio processing again, I thought it fitting to revive the logo.

Audio, Music

Basic audio restoration using Ableton Live and MATLAB

Walt, the drummer from The Havering, just sent me an mp3 file of a Havering recording from a Stanford College Radio show in 1989. The mp3 file was created from the original recording on a thirty-year-old cassette tape, so the quality is not fantastic. The aim here is to clean it up and publish it on The Havering song archive.

My Digital Audio Workstation (DAW) of choice when working with audio clips and samples is Ableton Live which is the main environment I’ll use for this mini-project.

This project also presents a good opportunity to test drive the MATLAB Audio Toolbox.

Restoring the audio involves multiple stages, much of which is trial-and-error. Foremost is noise removal.

Noise Removal

Here is the start of the first song (“Trust”). The background noise is rather apparent during the non-music lead-in, continuing into the music:

Snippet of the raw (noisy) recording

Helpfully, because this is a recording of a live radio show, there are lulls in the music where only the noise is present. For example, here is the snippet of noise from the non-music lead-in (amplified for emphasis):

Just the noise lead-in from the previous snippet (amplified)

The first step in removing or suppressing the noise is to try and gain an understanding of it. Since we have the noise-alone snippet, we can analyse it in isolation (this isn’t always the case: often we only have the music-plus-noise available. But we are lucky here). Loading the noise file into MATLAB (via the audioread function) and utilising the pspectrum function to generate the noise spectrum yields the plot displayed in Figure 1:

Figure 1: Noise spectrum revealing the broadband nature of the background noise in the recording.

This is a “textbook” example of broadband noise whereby the power spectrum is effectively uniform over the frequency range of interest (i.e., over the audio range from 20 Hz to 20 kHz, approximately). It does drop off dramatically around 17 kHz or so, but even so, the noise level is effectively constant (and high) over the audio/musical range of interest, and so will be quite tricky to deal with. Listening to the noise, it appears to be classic “tape hiss”, prevalent in analogue recordings such as the cassette tape used in this recording.

It is helpful to zoom-in on the low-frequency portion of the chart and view on a log-scale, as displayed in Figure 2.

Figure 2: Noise spectrum, zoomed-in on the low-frequency regime, revealing the 60 Hz “power hum” and its harmonics

There is a series of distinct peaks. Using the MATLAB findpeaks function reveals these to be at the following frequencies (averaged across both channels): 60 Hz, 120 Hz, 180 Hz, 240 Hz, 300 Hz, 430 Hz, and 680 Hz. The majority of these (60, 120, 180, 240, and 300 Hz) are classic “power hum” (fundamental mode plus four harmonics) from the AC power supply (the recording was made in California, US, where the power-grid AC fundamental frequency is 60 Hz — rather than 50 Hz in the UK).
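The peak-finding step is easy to reproduce. A sketch on a synthetic hum signal (one second at a 1 kHz sample-rate containing the 60 Hz fundamental plus two harmonics), using SciPy's find_peaks in place of MATLAB's findpeaks:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 1000                  # Hz, sample rate (synthetic example)
t = np.arange(fs) / fs     # 1 second of samples -> 1 Hz bin resolution
hum = sum(np.sin(2 * np.pi * f * t) for f in (60, 120, 180))

spectrum = np.abs(np.fft.rfft(hum))
freqs = np.fft.rfftfreq(len(hum), d=1 / fs)

# Report spectral peaks above half the maximum magnitude
peak_bins, _ = find_peaks(spectrum, height=0.5 * spectrum.max())
print(freqs[peak_bins])    # the hum fundamental and harmonics
```

On the real recording one would run the same peak search on the pspectrum-style estimate of each channel and average the detected frequencies, as described above.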

Suppressing the “power hum”

Since the frequencies are well-defined for the low-frequency “power hum” components of the noise, this suggests utilising a bank of notch filters tuned to each mode of the noise (i.e., to “notch out” each noise component). Ableton Live has a built-in 8-band equalizer which can be used for this purpose. See the screenshot in Figure 3 below where the equalizer has been configured as required.

Figure 3: Ableton Live equalizer component configured with multiple notch filters tuned to suppress the “power hum” harmonics from Figure 2.
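The same notch-bank idea can be prototyped outside Ableton too. A sketch using SciPy's iirnotch design, cascading one narrow notch per hum component (the Q value is an illustrative choice, not taken from the Ableton settings):

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 44100
hum_freqs = [60, 120, 180, 240, 300]  # the "power hum" components identified above

def notch_bank(x, freqs, fs, Q=30.0):
    """Run the signal through one narrow notch filter per hum frequency."""
    for f0 in freqs:
        b, a = iirnotch(f0, Q, fs=fs)
        x = lfilter(b, a, x)
    return x
```

Each notch removes a narrow band around its centre frequency while leaving the rest of the spectrum essentially untouched, which is why the approach works well for well-defined hum components.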

Below are the “before” and “after” audio clips. The notch filtering is effective at removing the “power hum”. Note: with these compressed mp3 snippets in this blog article, the low frequencies are suppressed by the mp3 encoding algorithm, so you may have to turn the volume up to hear the difference. Even then, it may be difficult to perceive the differences, though they are readily apparent in the uncompressed WAV files in Ableton and MATLAB.

“Before”: snippet of the raw (noisy) recording (from earlier)
“After”: snippet after processing to remove the “power hum”

Suppressing the “tape hiss”

The simplest approach to suppress the remaining tape hiss (now that the hum has been successfully removed) is to implement digital filtering to target the frequencies where the noise is most apparent to human hearing. In future I may experiment with more sophisticated techniques (e.g., STFT-thresholding, wavelet-transform-thresholding, Deep Learning, adaptive filtering, etc).

But for now, my approach is to design a digital filter with the aim of suppressing the noise (as perceived by a human listener) as far as possible without adversely affecting the music to a significant extent. There will inevitably be a trade-off between these competing goals.

I could continue with Ableton’s built-in filters to experiment with filter design, but for demonstration purposes I’ll switch over to MATLAB which has an extensive library of digital filter design algorithms (via the Signal Processing Toolbox and the DSP System Toolbox) which can be brought to bear. Additionally, the Audio Toolbox has real-time audio streaming capabilities which enable the algorithm-under-test to be inserted in a real-time stream to/from audio files or devices or both.

After some trial-and-error, I settled on a high-frequency band-stop filter. Moreover, I selected an algorithm which happens to be provided as one of the out-of-the-box plugin examples (namely, the “Shelving Equalizer”) bundled with the MATLAB Audio Toolbox in order to demonstrate those capabilities.
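The same "roll off the hiss" idea can be sketched in SciPy. Note this is not the Shelving Equalizer used above: a true shelf attenuates by a fixed gain above the corner, whereas the crude stand-in below (a Butterworth low-pass, with an illustrative cutoff) rolls off high frequencies entirely:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def hiss_lowpass(x, fs, cutoff_hz=8000, order=4):
    """Crude stand-in for the high-shelf cut: roll off energy above cutoff_hz.
    (A shelving EQ would instead attenuate the shelf band by a fixed gain.)"""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfilt(sos, x)
```

In both cases the trade-off is the same: push the cutoff (or shelf gain) too far and the music loses its top end along with the hiss.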

Figure 4 contains a screenshot of the Shelving Equalizer loaded into a MATLAB audioTestBench which I’ve configured to stream data from a source audio file, through the filters, and out to the audio interface (in this case, a Focusrite Scarlett 2i4 soundcard with ASIO drivers). I manually adjusted the filter parameters by trial-and-error, on-the-fly, whilst listening to the processed audio in real-time. Note that the low-frequency filter is disabled (by setting its gain to 0 dB).

Figure 4: The audioTestBench utility from the MATLAB Audio Toolbox configured with the Shelving Equalizer with its parameters tuned to suppress the high frequency “tape hiss” (the low-frequency filter is disabled).

Below are the “before” and “after” audio clips (in this case, “before” is not the original raw file, but rather the file with the hum removed from the previous step in the process). As can be heard, the filtering is effective at removing the high-frequency “tape hiss” (again, with these mp3 snippets, you may have to turn the volume up to hear the difference). There is nevertheless some noise remaining in the mid-frequency range which I was not able to filter out without adversely affecting the music.

“Before”: snippet with “power hum” removed (from earlier)
“After”: snippet after further processing to remove the high-frequency “tape hiss”

One-click plugin

A very useful feature of the MATLAB Audio Toolbox is the ability to create a VST plugin from an algorithm prototyped in MATLAB, by clicking a single button. For example, I converted the Shelving Equalizer into a VST plugin by clicking the “generate VST Plugin” button located on the audioTestBench graphical-user-interface. By copying the resulting dll into Ableton’s plugin folder, the Shelving Equalizer becomes available from within Ableton Live, as illustrated in the screenshot in Figure 5 below. This allowed me to process the “tape hiss” via the MATLAB filter design, without having to bring the audio tracks out of Ableton. A considerable convenience.

Figure 5: Shelving Filter designed in MATLAB (see Figure 4), then converted to a VST plugin (via one mouse-click in the MATLAB audioTestBench), and imported to Ableton Live.

Noise Gate

Being a recording of a radio show, there are many quiet intervals between songs (e.g., when the band is introducing the next song, or the DJ is chatting, etc). It is during these lulls that the (remaining) noise is most apparent — and distracting. A simple technique to minimise this distraction is to use a Noise Gate to cut-out the audio when the volume falls below a given threshold. Then, when the music volume increases to performance levels, the music effectively masks the noise. This is a handy consequence of psychoacoustics: even though the noise is still there, we don’t perceive it to be at the same distracting level as we do during the lulls in the music.

Rather than simply deploying a noise gate, we can utilise a clever trick as described in this article. The trick is summarised as follows: (i) make a duplicate of the original noisy track, and keep the original aside for the moment; (ii) reverse the phase of each channel in the duplicate (i.e., multiply the amplitude of every sample by -1). Now, when played together, (i)+(ii) results in complete cancellation and total silence. That’s okay; (iii) pass the phase-reversed channel from (ii) through an inverted noise gate with its upper-and-lower thresholds configured such that only the noise passes through when the music volume is low, and nothing passes through when the music volume increases; (iv) play the original noisy track (i) together with the inverse-gated phase-reversed track (iii). The end result is complete silence during the lulls in the music. Away from the lulls, when the music is playing, the noise is still present, but the distracting noise at low music volumes is completely eliminated, giving the overall impression that the noise has been removed throughout (even though it actually hasn’t). This approach is a simplistic implementation of the technique of active noise cancellation (insofar as it utilises destructive interference of the noise waveform, albeit on the noise-only segments of the track, though without a separate noise measurement and adaptive filtering continually correcting the entire track).
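Steps (i)–(iv) above can be sketched numerically. A frame-based version, assuming a simple per-frame RMS detector standing in for Ableton's gate (frame length and threshold are illustrative):

```python
import numpy as np

def cancel_quiet_noise(x, frame_len=256, rms_threshold=0.05):
    """Steps (i)-(iv): add a phase-reversed copy of x, gated so that it
    passes only in quiet frames -> those frames cancel to exact silence."""
    out = x.copy()
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len]
        if np.sqrt(np.mean(frame ** 2)) < rms_threshold:
            # Phase-reversed copy passes the inverted gate: total cancellation.
            out[start:start + frame_len] = frame + (-frame)
    return out
```

The loud frames are left untouched (nothing passes the inverted gate there), while quiet frames sum with their phase-reversed copies to exact zero, just as described in the four-step recipe.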

Figure 6 contains a screenshot of Ableton's built-in phase-reverser and inverted noise gate, in which the parameters have been tuned (by trial-and-error) to implement step (iii) on the noisy music recording in question.

Figure 6: Ableton Live's phase reverser and noise gate (with "flip" enabled to invert the gate's behaviour), with the thresholds tuned so that only the noise passes through the gate; when the music level rises, nothing passes through. The phase-reversed, gated signal is summed with the non-gated original signal such that the noise is totally cancelled at low levels, e.g., in the lulls between songs.

Additional tweaks to the mix

Before applying the noise removal process, I reduced the overall dynamic range of the entire track by passing it through a compressor to suppress the peaks. Figure 7 contains a screenshot of the built-in Ableton compressor with appropriate settings for The Havering track (adjusted by trial-and-error).

Figure 7: Ableton Live’s built-in compressor applied to reduce the dynamic range of the original file before application of the de-noising algorithms.
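Conceptually, the compression step works like this minimal Python/NumPy sketch. It is a static, instantaneous compressor (the real Ableton device also has attack/release envelopes and make-up gain, and its settings for this track were found by ear); the threshold and ratio below are illustrative values of mine.

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Static compressor: any portion of |x| above the threshold is
    attenuated by the given ratio, pulling peaks down toward the
    threshold and thereby reducing the overall dynamic range."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

peaks = np.array([0.1, 0.9, -1.0, 0.4])
squashed = compress(peaks)   # samples above 0.5 are pulled toward 0.5
```

With a 4:1 ratio, a sample 0.4 above the threshold ends up only 0.1 above it, so the loudest peaks no longer dominate the level budget of the mix.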

I then applied the aforementioned de-noising processes, after which the resulting track seemed a little "lacking in body" compared with the original. To bring it back to life, I deployed a penultimate stage of filtering (equalisation): specifically, utilising Ableton's built-in equaliser with its "Dance Master" configuration preset, inserted before the MATLAB-based Shelving Equalizer, as shown in the screenshot in Figure 8 below. I also adjusted the overall gain of the final mix to maximise the available volume.

Figure 8: Equalisation applied via Ableton Live's built-in EQ to "revive the body" of the de-noised audio before application of the MATLAB-based Shelving Equalizer.
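The final gain adjustment amounts to peak normalisation: scale the whole mix so its loudest sample sits just below full scale, maximising the available volume without clipping. In practice I set the gain by ear in Ableton; the Python/NumPy sketch below (function name and target level are my own) just shows the idea.

```python
import numpy as np

def normalise_peak(x, target=0.98):
    """Scale the whole mix so the largest sample magnitude equals
    `target` (just below digital full scale of 1.0), maximising
    loudness without introducing clipping."""
    peak = np.max(np.abs(x))
    return x * (target / peak) if peak > 0 else x

mix = np.array([0.1, -0.35, 0.2])   # peak magnitude 0.35
loud = normalise_peak(mix)          # uniformly scaled up to peak 0.98
```

Because every sample is scaled by the same factor, the relative balance of the mix (and hence the earlier EQ and compression work) is preserved.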

The final result

Original mp3 noisy recording of the entire radio show (approximately 24 minutes runtime)
Processed mp3 recording of the entire radio show after all stages of restoration have been applied (approximately 21 minutes runtime since silent lulls between songs have been removed)

In my opinion, comparing the noisy track with the cleaned-up track, the restoration has been a success. But it is subjective, so judge for yourself.

Here is the cleaned-up recording on Bandcamp where you can retrieve it in uncompressed FLAC format (better quality than mp3).

Conclusions

Basic digital filtering techniques have been shown to be somewhat effective for removing noise from an mp3 file of a live music recording transcribed from an old cassette tape, with minimal perceptible distortion of the underlying music signal.

The use of a digital audio workstation (e.g., Ableton Live) plus MATLAB is found to be a powerful combination in terms of extensive algorithmic capabilities and ease-of-workflow.

The ability to effortlessly create a VST Plugin from within the MATLAB Audio Toolbox is remarkable and very useful.

All of the mp3 audio snippets presented in this post were created using the MATLAB audiowrite function, which supports such export. Another considerable convenience. By contrast, Ableton Live (at least Version 9, which I'm using) does not support mp3 export (!)

It would be interesting to compare the simple approach presented here with more advanced noise-processing techniques (as alluded to earlier), and with commercially available third-party de-noising plugins (such as the much-acclaimed iZotope RX 7).

Footnote

After the concert, the organisers (Amnesty International) sent us a letter thanking us. Here is the letter. It was nice to receive it. The closing sentence makes mention of the very cassette tape used in this restoration project.

All audio content presented in this post is copyright The Havering 1989–2020, all rights reserved.
