As we have covered in previous articles in this series, SUSHI (Elk's DAW and plug-in host) and the plug-ins running within it are headless: they run as command-line processes and lack a Graphical User Interface (GUI). This also means that a plug-in ported to Elk may need some refactoring to compile without its desktop GUI dependencies.
Once your plug-in runs without its original GUI (assuming it had one to begin with), you still need a way to control it. Headless plug-ins rely on the following options:
If your plug-in can be fully controlled through automation parameters and/or MIDI, then it is the host's responsibility (in this case, SUSHI's) to expose those controls in a suitable way. SUSHI automatically exports them over OSC and/or gRPC, making it easy to build a client GUI application for controlling your plug-in.
OSC and gRPC are both crucial components when developing an instrument with the Elk platform. During prototyping, OSC is incredibly flexible, and the ecosystem of tools and devices supporting OSC is highly conducive to quick iteration, letting you arrive speedily at a final design for the combination of physical and on-screen controls of the instrument.
Given a finalized design, the hardware controls are then implemented using a combination of Elk's SENSEI and, optionally, a custom-developed GUI. A common use case is a GUI that is remotely accessible from a tablet or mobile phone, but the GUI can also run on the instrument itself, interfacing with SUSHI and the hosted plug-ins over, for example, a multi-touch screen.
This article will detail the use of OSC, with future articles in the series covering the use of SENSEI, and gRPC.
OSC also offers an advantage for the final, shipped instrument: end users can integrate it into the broader ecosystem of OSC-capable devices.
Open Sound Control (OSC) is a control message content format developed at CNMAT by Adrian Freed and Matt Wright. It was originally intended for sharing music performance data between electronic musical instruments, computers, and other multimedia devices. OSC messages are commonly transported within home and studio computer networks, but can also be transmitted across the internet. OSC gives musicians and developers more flexibility in the kinds of data they can send over the wire, enabling new applications that can communicate with each other at a high level.
The great advantage of OSC is that messages are self-descriptive and directly human-readable: just by looking at the text of a message, you can tell what it is for, unlike with any of OSC's predecessors. So, for example, where a MIDI note-on message is an arcane, cryptic series of numbers, 1001 0000 0100 0101 0100 1111, an analogous OSC message could be: /Synth/MIDI/Channel_1/Note_On, TTS: ",ii", 69, 79.
A crucial difference from OSC's predecessors is that while OSC has a per-message schema, there is no overall fixed schema defining or restricting the set of possible messages, as is the case with legacy protocols (e.g. MIDI, DMX). That brings us directly to a third important advantage: older protocols can be straightforwardly translated to and from OSC.
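As a minimal sketch of such a translation, the note-on message from the earlier example can be turned into a human-readable OSC-style string. The /Synth/MIDI address pattern here simply mirrors the hypothetical namespace used above; it is not part of any fixed standard:

```python
def midi_note_on_to_osc(status: int, note: int, velocity: int) -> str:
    # MIDI note-on status bytes are 0x90-0x9F; the low nibble is the
    # zero-based channel number.
    assert status & 0xF0 == 0x90, "not a note-on message"
    channel = (status & 0x0F) + 1
    return f"/Synth/MIDI/Channel_{channel}/Note_On ,ii {note} {velocity}"

# 0x90 = binary 1001 0000: note-on, channel 1; note 69 (A4), velocity 79
midi_note_on_to_osc(0x90, 69, 79)
# → "/Synth/MIDI/Channel_1/Note_On ,ii 69 79"
```

The reverse direction is just as mechanical, which is why bridging software between MIDI and OSC is commonplace.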
To describe OSC, we paraphrase its creators: the basic unit of OSC is a message, consisting of an Address Pattern (AP), a Type Tag String (TTS), an optional time tag, and arguments. The AP is a string specifying the entity or entities within the OSC server to which the message is directed: a hierarchical namespace reminiscent of a file-system path or a web URL. The TTS is a compact string representation of the argument types. The core types supported are:
OSC also supports the following Type Tags, although these are less frequently used, and not always supported by OSC-capable programs.
Finally, the arguments are the data contained in the message. So in the message /voices/3/freq, ",f", 261.62558, the AP is followed by the TTS and finally the corresponding argument. All points of control of an OSC server are organized into a tree-structured hierarchy, and an OSC AP is the full path from the root of that tree to a particular node. In the above example, the AP points to a node named "freq" that is a child of a node named "3", itself a child of a node named "voices". The full set of AP and TTS combinations that an OSC server responds to is what we here refer to as that server's namespace. And by OSC server, we refer to any device or program that can respond to and/or transmit OSC messages, be it a synthesizer, a keyboard controller, a wireless sensor, etc.
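To make the on-the-wire layout concrete, here is a minimal sketch (standard library only) that encodes the /voices/3/freq message following the OSC 1.0 encoding rules: every string is null-terminated and padded to a multiple of four bytes, and float arguments are big-endian 32-bit IEEE floats:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

def encode_osc_message(address: str, *args: float) -> bytes:
    # Type Tag String: a comma followed by one 'f' per float argument
    tts = "," + "f" * len(args)
    packet = osc_pad(address.encode("ascii")) + osc_pad(tts.encode("ascii"))
    for a in args:
        packet += struct.pack(">f", a)  # big-endian 32-bit float
    return packet

msg = encode_osc_message("/voices/3/freq", 261.62558)
# 16 bytes of address + 4 bytes of TTS + 4 bytes of argument = 24 bytes
```

The resulting 24-byte packet can be sent as a single UDP datagram to any OSC server; in practice you would normally let a library handle this encoding for you.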
OSC provides several advantages compared to the previous de facto standards of their respective fields (MIDI, DMX, etc.). Using OSC, interoperability between an arbitrary number of disparate sources and destinations is straightforward. No longer are digital musical instruments forced to adhere to the strained façade that they can behave as keyboard instruments, as was the case with MIDI, when in fact they are nothing of the sort (see for example drum, wind, and guitar controllers).
When starting SUSHI, the namespace of supported messages is echoed, along with their state.
When a plug-in is loaded in SUSHI, its exposed automation parameters are automatically made available over OSC, and the namespace for them is also echoed. See below an example fragment of a plug-in's namespace:
/parameter/Synth/VCF_Freq, for VST parameter VCF Freq
/parameter/Synth/VCF_Reso, for VST parameter VCF Reso
/parameter/Synth/VCF_Env, for VST parameter VCF Env
/parameter/Synth/VCF_LFO, for VST parameter VCF LFO
All of these parameters take a single float argument (Type Tag String ",f"), with a range from 0.0 to 1.0, in accordance with the standard for VST automation parameters.
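Putting this together, a client can set any of these parameters with a single UDP datagram. The following is a self-contained sketch; the hostname elk-pi.local and port 24024 (which we take to be SUSHI's default OSC input port) are assumptions you should adjust for your own setup, and a library such as python-osc can do the encoding for you:

```python
import socket
import struct

def pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

def send_parameter(path: str, value: float,
                   host: str = "elk-pi.local", port: int = 24024) -> bytes:
    # Build a minimal OSC packet: address, type tag ",f", big-endian float32
    packet = pad(path.encode("ascii")) + pad(b",f") + struct.pack(">f", value)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(packet, (host, port))
    except OSError:
        pass  # e.g. hostname not resolvable when no device is on the network
    finally:
        sock.close()
    return packet

# Set the filter cutoff to its midpoint (parameters are normalized 0.0-1.0)
send_parameter("/parameter/Synth/VCF_Freq", 0.5)
```

Because OSC over UDP is connectionless, the same few lines work unchanged whether the target is an Elk board, a desktop OSC application, or a lighting controller.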
Given this information, it is very quick to create a GUI in any OSC controller software, for example Open Stage Control, Hexler TouchOSC, or Liine Lemur, which can transmit values to control these parameters of the VST plug-in.
The GUI to the left in the image above can directly control the VST running on the device. You can see controls for the filter cutoff frequency and resonance, as well as for the ADSR envelope and LFO rate. While Open Stage Control requires a desktop/laptop computer, TouchOSC and Lemur are similar in functionality and also work on tablets and mobile phones.
Finally, let’s look at an example of what end users can achieve when integrating an OSC-enabled ELK-based instrument with any number of other software and hardware from the OSC-enabled ecosystem.
An artist, or group of artists, can create audio-visual performances where control signals from the musical instruments influence the graphics projected on stage or the light show, or where one instrument influences the sound-shaping parameters of another. End users can also easily integrate general or custom hardware controllers, and control apps on their tablets, to remotely control any set of parameters important for the performance at hand.
And, crucially, this set-up can be made to flexibly vary throughout the performance.
For example, a performance by two improvising pianists can be accompanied by live computer graphics, where the projections are controlled in part by OSC signals derived from the music, and in part by OSC signals derived from electrophysiological sensors on the pianists' bodies:
Or projection-mapped video can be made to accompany live electronic music, with the two being made to interact over OSC. Here the visuals are by Healium, accompanying Dusty Kid’s performance:
Vezér developer Imimot’s website has a wonderful collection of inspirational descriptions of projects where Vezér, and therefore OSC, have been used.
Example of possible signal flow. For the OSC Re-Routing / Mapping of the control signals, there are several suitable applications – please refer to the appendix for a selection.
We hope you are now inspired to start learning and using OSC, if you haven’t been doing so already!
At the end of this post, we have included an Appendix with a comprehensive list of software and hardware tools which support OSC.
Thank you for reading! For any questions on OSC and ELK (or anything else) please write to us at email@example.com
 Parts of this section have been adapted from the Wikipedia article on Open Sound Control: http://en.wikipedia.org/wiki/Open_Sound_Control
 M. Wright, A. Freed, and A. Momeni, “OpenSound Control: state of the art 2003,” in Proceedings of the 2003 conference on New interfaces for musical expression, 2003, pp. 153–160.
Zeal has created an in-depth explanation and tutorial video on OSC, explaining what OSC is and what it is for, as well as how OSC can be used to have Max/MSP and Processing communicate, which you may want to refer to after having read this article.
Following is a representative sample of OSC-capable hardware and software, to give an overview of what is already out there.
C++ / C; C#; Objective C; Java; Python; Erlang; …and many more