
Write Your Own Software Synth
June 28, 2011 3:29 PM

The creator of the PSynth app for iPhone explains the basics of software synthesizers in a series of articles on Dr. Dobb's. Creating Oscillators. The Synthesizer Core. The final article promises delays and phasers. The source code is Java, so the example synth is easily extendable.
posted by Ardiril (15 comments total) 52 users marked this as a favorite

 
The Synthesizer Core: "What's your favorite thing about synthesizing? Mine is synthesizing. Got to synthesize, not synthesizing enough, more synthesizing, got to get to synthesizing! System, synthesizing system, on trial in the synthesizing system, synthesizing system on trial, verdict, gggggGUILTY! Of not synthesizing enough." And so on.

On the actual programming behind it all: a lot more complicated than the "sound on, sound off" I always thought it was.
posted by Slackermagee at 3:55 PM on June 28, 2011 [2 favorites]


I'd like to know what Synthesizer Patel has to say about this.
posted by dunkadunc at 4:07 PM on June 28, 2011 [2 favorites]


Slacker: You mean you read the documentation, err, the article?
posted by Ardiril at 4:07 PM on June 28, 2011


The API documentation said it was best to use floating-point values (floats) for samples because the hardware was optimized for them. So I did. I developed all of the PSynth code using the iPhone emulator provided with the development tools (Xcode) for testing [...] I found out that the real devices don't support float samples like the emulator did. I then had to write more code that accepted streams of float samples and converted them to integers so the hardware could play them.

What? So, basically, the emulator is inaccurate, the hardware is designed stupidly, and the API is flat-out wrong? This seems like a massive waste of energy.
posted by spiderskull at 4:08 PM on June 28, 2011 [1 favorite]
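[The float-to-integer conversion the article describes is a common last step before handing samples to hardware. A minimal Java sketch of the idea; the class and method names here are illustrative, not from PSynth:]

```java
public class SampleConvert {
    // Convert a normalized float sample in [-1.0, 1.0] to signed 16-bit PCM.
    public static short floatToPcm16(float sample) {
        // Clamp first, so overdriven samples saturate instead of wrapping around.
        float clamped = Math.max(-1.0f, Math.min(1.0f, sample));
        return (short) Math.round(clamped * 32767.0f);
    }

    // Convert a whole buffer of float samples for the hardware to play.
    public static short[] convertBuffer(float[] in) {
        short[] out = new short[in.length];
        for (int i = 0; i < in.length; i++) {
            out[i] = floatToPcm16(in[i]);
        }
        return out;
    }
}
```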


Thanks, Ardiril. I'll play with that Java example later. I like using trackers for writing synthesized music. Some of them like SunVox even have modular synth abilities. But I've been telling myself I'll learn csound and MML or something that gives me more control for a while now. Hopefully this is a good way to start programming soft synths.
posted by MrFTBN at 4:21 PM on June 28, 2011


Wouldn't doing a function call once for each sample (i.e., 22,050 times a second at half rate) be unusably inefficient? I imagine proper softsynths would use buffers, though with some mechanism for interpolating parameters over a buffer if you have the means to tweak knobs and such.
posted by acb at 4:32 PM on June 28, 2011


I'd like to know what Synthesizer Patel has to say about this.

"Needs a security alarm."
posted by Blazecock Pileon at 4:45 PM on June 28, 2011


acb: "Wouldn't doing a function call once for each sample (i.e., 22,050 times a second at half rate) be unusably inefficient? I imagine proper softsynths would use buffers, though with some mechanism for interpolating parameters over a buffer if you have the means to tweak knobs and such."

Generally you don't. As far as I remember (this is for AudioUnits, VSTs, and SuperCollider UGens) the plugin passes a function pointer to the next whatever in the graph. Whenever this function is called it gets an input buffer (sometimes) and an output buffer. A simple sine wave oscillator could do a single copy from a lookup table into the output buffer.
posted by mkb at 4:46 PM on June 28, 2011
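[What mkb describes — an oscillator that fills an output buffer from a lookup table per call, rather than being called once per sample — might look like this in Java. A sketch with made-up names; real plugin APIs add more machinery:]

```java
public class TableOscillator {
    private static final int TABLE_SIZE = 1024;
    private final float[] table = new float[TABLE_SIZE];
    private double phase = 0.0;     // current read position in the table
    private final double increment; // table positions to advance per sample

    public TableOscillator(double freqHz, double sampleRate) {
        // Precompute one cycle of a sine wave.
        for (int i = 0; i < TABLE_SIZE; i++) {
            table[i] = (float) Math.sin(2.0 * Math.PI * i / TABLE_SIZE);
        }
        this.increment = freqHz * TABLE_SIZE / sampleRate;
    }

    // Fill a whole output buffer in one call, as a plugin host expects.
    public void process(float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = table[(int) phase]; // truncating lookup; interpolating would be cleaner
            phase += increment;
            if (phase >= TABLE_SIZE) phase -= TABLE_SIZE;
        }
    }
}
```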


In a typical softsynth you define your (plugin / unit generator / whatever you are calling it) as a function that takes some number of buffers as inputs (one for each audio input), some number of buffers as outputs (one for each audio output), and get informed each time the plugin is called how many samples to operate on (all the buffers of course will contain at least that many samples in a simple flat array).

Typically there is another argument too, a struct containing all your non-audio parameters (stored in a standardized way so that other objects in the call graph can modulate those parameters) plus any static per-instance data (which won't be changed outside that instance).
posted by idiopath at 4:56 PM on June 28, 2011 [1 favorite]
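[idiopath's shape — input buffers, output buffers, a frame count, and a separate parameter struct that other nodes in the graph could modulate — reduces to something like this in Java. Names are hypothetical:]

```java
// Non-audio parameters live in their own object, stored apart from the
// audio path so a host or modulator can change them between process() calls.
class GainParams {
    float gain = 1.0f;
}

class GainUnit {
    private final GainParams params;

    GainUnit(GainParams params) {
        this.params = params;
    }

    // One audio input, one audio output, plus the sample count for this call.
    // The host guarantees each buffer holds at least nFrames samples.
    void process(float[][] inputs, float[][] outputs, int nFrames) {
        for (int i = 0; i < nFrames; i++) {
            outputs[0][i] = inputs[0][i] * params.gain;
        }
    }
}
```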


Generally you don't even have that level of access to the sound device. You can't say "set the output level to X". You have to fill a buffer.
posted by delmoi at 6:24 PM on June 28, 2011


If this were a university course, I could easily see mating this up with a database manager in the last third of a semester. Throw in a UI designer and this would make one hell of a team project for a software engineering class.
posted by Ardiril at 8:19 PM on June 28, 2011


I imagine proper softsynths would use buffers

Yes, on modern timesliced operating systems due to interrupts and stuff.

If you were programming a DSP or using a hard real-time system you could just read a 16-bit sample, do some processing, and write out the modified sample. Thereby getting a delay of one sample.

On some 8-bit systems you would have 1-bit resolution: you'd be able to flip the speaker on or off, and you'd have to add up the CPU cycles of each instruction to figure out how long it was going to take between cycles. Good times.
posted by RobotVoodooPower at 12:05 AM on June 29, 2011


A word of warning: if you want to do subtractive software synthesis, the direct method in getSample() is quite poor: you'll have to compute the waveform at a much higher rate than the audio rate, or the high frequency components of a saw or square wave will alias badly (the oscillator sounds metallic and out of tune). Here's a review of better methods.
posted by ikalliom at 10:29 AM on June 29, 2011


" you'll have to compute the waveform at a much higher rate than the audio rate"

Or, as your cited article mentions, you can use one of the other methods of generating bandlimited function tables, and then look up the waveform from the function table (for square and triangle waves a sum of harmonics that stops before you hit Nyquist works pretty great without needing the excess computational power needed to operate above the SR).
posted by idiopath at 12:59 PM on June 29, 2011
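[The harmonic-sum approach idiopath mentions is short to write down: for a square wave, add odd harmonics sin(2πkx)/k and stop once k times the fundamental would pass Nyquist. A hedged Java sketch, not from the articles:]

```java
public class BandlimitedTable {
    // Build one cycle of a square wave by summing odd harmonics,
    // stopping below the Nyquist frequency for the given fundamental.
    public static float[] squareTable(int size, double fundamentalHz, double sampleRate) {
        float[] table = new float[size];
        double nyquist = sampleRate / 2.0;
        for (int k = 1; k * fundamentalHz < nyquist; k += 2) {
            for (int i = 0; i < size; i++) {
                table[i] += (float) (Math.sin(2.0 * Math.PI * k * i / size) / k);
            }
        }
        // Scale by 4/pi so the waveform approaches +/-1 (Fourier series of a square).
        for (int i = 0; i < size; i++) {
            table[i] *= (float) (4.0 / Math.PI);
        }
        return table;
    }
}
```

Note the table is only alias-free for fundamentals at or below the one it was built for; oscillators that sweep pitch typically keep several tables, one per octave or so.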


Exactly. Directly sampling the non-bandlimited waveform like getSample() does is a bad idea: you'll get either bad performance or bad quality (or "special effects"). Working at audio rate is fine as long as you put in a little more effort. If you don't want to use tables, linear interpolation is simple and already goes a long way.
posted by ikalliom at 1:17 PM on June 30, 2011




This thread has been archived and is closed to new comments


