Most of the literature I've read on real-time audio programming recommends some sort of lock-free data structure (usually a ring buffer) to pass control messages from the GUI thread to the audio thread, and another one to pass audio samples from the audio thread back to the GUI (if one needs to draw the waveform in the GUI, for example). The point is to keep the audio callback from ever blocking, preventing glitches and concurrency issues.
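To be concrete about the kind of scheme I mean, here is a minimal single-producer/single-consumer ring buffer sketch. This is just an illustration of the pattern from the literature, not Tonic code, and the class/member names are my own:

```cpp
#include <atomic>
#include <cstddef>

// Minimal lock-free SPSC ring buffer: one producer (GUI thread),
// one consumer (audio thread). Neither side ever blocks.
template <typename T, size_t Capacity>
class SpscRingBuffer {
public:
    // Called from the GUI thread (single producer).
    bool push(const T& item) {
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) % Capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false; // buffer full; GUI side can drop or retry
        buffer_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Called from the audio callback (single consumer); never waits.
    bool pop(T& item) {
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false; // buffer empty
        item = buffer_[tail];
        tail_.store((tail + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    T buffer_[Capacity];
    std::atomic<size_t> head_{0}; // next slot to write
    std::atomic<size_t> tail_{0}; // next slot to read
};
```

In this scheme the audio callback drains pending messages at the start of each buffer, so it never waits on the GUI thread.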
I can see that functions like synth.setParameter() are used, but I can't figure out whether these functions implement some sort of inter-thread communication similar to the one I described, or whether they use mutexes or something similar instead.
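For contrast, this is what I mean by the mutex alternative: a shared parameter store guarded by a lock. This is a hypothetical sketch of the approach I'm asking about, not Tonic's actual implementation:

```cpp
#include <map>
#include <mutex>
#include <string>

// Hypothetical mutex-guarded parameter store. If setParameter() worked
// like this internally, the audio thread could block on the lock while
// the GUI holds it, risking priority inversion and dropouts.
class MutexParameterStore {
public:
    // Called from the GUI thread.
    void setParameter(const std::string& name, float value) {
        std::lock_guard<std::mutex> lock(mutex_);
        params_[name] = value;
    }

    // Called from the audio thread -- may block on the GUI thread!
    float getParameter(const std::string& name) {
        std::lock_guard<std::mutex> lock(mutex_);
        return params_[name];
    }

private:
    std::mutex mutex_;
    std::map<std::string, float> params_;
};
```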
Can someone please explain how the communication between the GUI and audio threads works in Tonic and/or the example apps? If a lock-free data structure scheme is not used, why not?