Cool Things to Do in 4.2

On March 22, 2018, QLab 4.2 was released with many new features. Here are two cool things you can do, one audio and one video, only in 4.2 and later.

Audio Project: Spatial Panner

This project creates a spatial panner that makes a mono sound source follow a path drawn in the 2D fade of a Network cue, by continuously adjusting the levels of the audio cue’s first four sliders, which feed four outputs arranged as a quadraphonic sound stage.

Here it is in action:

Because you may not have a multi-channel playback system with which to listen to the demo, I have encoded the soundtrack as a binaural recording for playback on headphones.

How it works:

The new QLab 4.2 feature which enables this project is the OSC message:

/cue/{cue_number}/translation {x} {y}

This allows us to use the x and y translation of a dummy video cue (a disabled Text cue works very well for this) to store the x and y coordinates of the current cursor position on a path drawn in a 2D OSC fade.
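For example, with the dummy cue numbered DUMVID (as it is in the workspace described below), a single message sent by the 2D fade might look like this, with purely illustrative coordinate values:

/cue/DUMVID/translation 37.5 -12.5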

We can then use a script with some maths functions to take these coordinates and interpret them as four slider levels, giving a voltage-panned output that approximates the cursor position on a quadraphonic sound stage.

The workspace in detail:

Panner

Cue 1 is a ‘fire all children’ Group cue which contains the mono audio cue, matrixed to 4 sliders corresponding to L, R, Ls & Rs.

Quad Sliders

This is followed by a Network cue which sets the notes of the cue numbered SCRIPT to the number of the audio cue we are using, in this case cue HELI. This allows the same script to be used for other cues with different audio: if we had another group cue with a train effect in an audio cue numbered TRAIN, the OSC message would set the notes of cue SCRIPT to TRAIN.
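Assuming QLab’s standard OSC message for setting a cue’s notes, the message in this Network cue would look something like:

/cue/SCRIPT/notes HELI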

The next cue in the group disarms all the Pan Path cues. The workspace can store any number of patterns in the group cue PATTERNS, which is nested in another group cue numbered LOOPGROUP. There are 5 Network cues numbered PAN1 to PAN5, each with a different path drawn in the 2D fade of its OSC message.

Pan Patterns 1–5
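The disarming cue itself isn’t shown here, but if you are rebuilding the workspace, one straightforward way to make it is a group of Network cues, each sending the armed message (used again below) with a value of 0:

/cue/PAN1/armed 0
/cue/PAN2/armed 0
/cue/PAN3/armed 0
/cue/PAN4/armed 0
/cue/PAN5/armed 0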

The next cue arms one of the Network cues containing a pattern, e.g. if we wanted to use pattern 3 we would edit the OSC of this cue to:

/cue/PAN3/armed 1

Again, this allows the group cue to be copied, pasted, and used with different audio and different patterns, with minimal reprogramming.

The Fade cue that follows fades in the master slider of the audio cue. Because the script only ever writes to the four output crosspoints (row 0, columns 1–4), the master fader is left untouched, allowing the cue to fade in independently of the panning activity.

The final cue in the group starts the Group cue numbered LOOPGROUP.

This starts the Network cue we previously armed, which has the path we wish to use drawn within it as a 2D OSC fade.
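If you are rebuilding the workspace rather than downloading it, this can be a Start cue targeting LOOPGROUP, or a Network cue sending the equivalent OSC:

/cue/LOOPGROUP/start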

Network cue

Here we see the path drawn. We are using x and y scales with a maximum value of 100. The density of the cursor data is set with the fps menu. The 2D fade replaces #x# and #y# argument placeholders in the OSC message:

/cue/DUMVID/translation #x# #y#

This sets the x and y translation values of the cue numbered DUMVID to the current cursor values 20 times per second.

DUMVID

A Start cue numbered LOOP later in the group restarts the group every time the pattern completes, so that the pattern loops.

Finally, the group contains the disarmed Text cue, numbered DUMVID, which stores our cursor values as its x and y translation values.

The Script:

The engine of this project is the script cue in the loop group:

-- Space delimiter so the list of levels built below coerces cleanly to text
set AppleScript's text item delimiters to space

tell application id "com.figure53.QLab.4" to tell front workspace
    delay 0.1
    try
        -- The notes of cue SCRIPT hold the number of the audio cue we are panning
        set targetcue to notes of cue "SCRIPT"
        repeat while cue "LOOPGROUP" is running
            -- Read the cursor position stored as the dummy cue's translation,
            -- normalising it from -100..100 to 0..100
            set x to ((translation x of cue "DUMVID") + 100) / 2
            set y to ((translation y of cue "DUMVID") + 100) / 2
            -- Scale to 0..1, keeping clear of the extremes to avoid taking log of 0
            set {x, y} to {my constrainValue(x / 100, 0.01, 0.99), my constrainValue(y / 100, 0.01, 0.99)}
            -- The four quad gains, one per line, ready to be piped through awk
            set batchText to (1 - x) * y & linefeed & (x * y) & linefeed & (1 - x) * (1 - y) & linefeed & x * (1 - y) as text
            -- Convert all four gains to dB in a single shell call
            set {fader1, fader2, fader3, fader4} to my batchAwk(batchText)
            cue targetcue setLevel row 0 column 1 db fader1
            cue targetcue setLevel row 0 column 2 db fader2
            cue targetcue setLevel row 0 column 3 db fader3
            cue targetcue setLevel row 0 column 4 db fader4
            delay 0.05
        end repeat
    end try
end tell

set AppleScript's text item delimiters to ""

-- Clamp a value to the range minLimit..maxLimit
on constrainValue(theNumber, minLimit, maxLimit)
    if theNumber < minLimit then
        return minLimit
    else if theNumber > maxLimit then
        return maxLimit
    else
        return theNumber
    end if
end constrainValue

-- Pipe the batch of gains through awk, returning 1 + 20*log10(gain) for each line
on batchAwk(theText)
    return paragraphs of (do shell script "echo " & quoted form of theText & " | awk '{print 1+20*log($1)/log(10)}'")
end batchAwk

This is a fairly complex script, as AppleScript doesn’t have much in the way of maths functions, including the log calculations we need for this project. To get round this we use shell scripts to harness the power of the AWK processing language, which is included as part of OS X. To get the maths to work fast enough the script has to be efficient, and Rich Walsh did the detailed work of optimising it for speed.

The script first sets its target cue to the audio cue we are using, whose number we previously stored in the notes of the cue numbered SCRIPT.

For as long as the cue numbered LOOPGROUP is running, it then repeats the following:

Gets the coordinate values for the current cursor position from the x and y translation values of the dummy video cue numbered DUMVID.

Normalises these coordinate values to 0 to 100 from their raw values of -100 to 100, which enables us to use the whole of the 2D fade plotting space to draw our path.

We then scale these values to 0–1 and call a subroutine, constrainValue, which constrains them to between 0.01 and 0.99, ensuring we never take the log of 0 in any of the slider calculations.

We then convert the x,y coordinates to decibel levels for the four sliders of the audio cue, by preparing the data and running a shell script which passes it to AWK for the mathematical processing. The complexities of the scripting are there to execute the formulae below efficiently, which is necessary to achieve an adequate processing speed for the conversion.
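To see what that conversion produces in isolation, here is a minimal sketch (my own, not part of the workspace) that you can run in Script Editor; the four input lines of 0.25 are simply the gains for a cursor at dead centre:

set batchText to "0.25" & linefeed & "0.25" & linefeed & "0.25" & linefeed & "0.25"
set dbLevels to paragraphs of (do shell script "echo " & quoted form of batchText & " | awk '{print 1+20*log($1)/log(10)}'")
-- each item of dbLevels is now "-11.0412", i.e. 1 + 20*log10(0.25) dB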

The Quad Panning Algorithm:

This is a fairly simple set of formulae for converting x,y coordinates to slider values. There are more complex functions that could be used, but these give acceptable results for general quad panning purposes.

Slider1=1+log((1-x)*y)/6.4
Slider2=1+log(x*y)/6.4
Slider3=1+log((1-x)*(1-y))/6.4
Slider4=1+log(x*(1-y))/6.4
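As a worked example, using the dB conversion the script’s awk call actually performs (1 + 20·log10 of each gain): with the cursor at dead centre, x = y = 0.5, all four gain products equal 0.25 and every slider sits at roughly -11 dB. Drag the cursor towards the front-right corner, x = y = 0.99, and the R gain rises to about 0.98 (roughly +0.8 dB) while L and Rs fall to about -39 dB and Ls to about -79 dB, pulling the image firmly into that corner.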

You can download the workspace for this project here.

Some Cautionary Notes:

This is a project that, beyond being a proof of concept, can produce usable results for some applications. However, the amount of data, and the calculations on that data, put a considerable strain on the computer’s resources. It’s probably fair to say that QLab still has some way to go in its response to rapid streams of OSC messages. The on-screen updating of sliders in response to OSC messages has been given a very low priority. The actual audio performance is much better than the infrequent updating of slider positions might suggest, but for some applications, and on slower computers, it may still not be adequate. Try a pure sine wave as the audio cue and you will certainly hear some anomalies. As a quick and dirty way to put some movement into mono cues, though, it may do what you want.

It’s also worth mentioning that placing a sound in space by the brute-force means of making it louder in one speaker and quieter in others is a somewhat dated technique. Pan-pots, both stereo and multichannel, will remain a significant tool for sound stage creation for some time yet, but techniques that steer the apparent source of a sound by applying variable time delays at each crosspoint of a routing matrix are far more sophisticated, and vastly increase the number of seats from which listeners can experience a vivid three-dimensional sound stage or accurately locate a moving sound object.

QLab 4.2 integrates with the DS100 processor from d&b audiotechnik to provide a sophisticated control system for advanced spatial routing.

On the next page we will look at a video project that, again, uses a new technique made available in QLab 4.2.

Chapter graphic by Mic Pool. The helicopter sound in the demo workspace is licensed under a Creative Commons Attribution licence from dobroide, and is available on the Freesound website here:

https://freesound.org/people/dobroide/sounds/30789/
