Have you ever fed Scaler the output of its PERFORM mode? If not, you might find some pretty interesting stuff. Below is an unedited sequence created with just 4 chords and some well-timed mouse clicks in the Circle of 5ths
Sequence A was recorded by clicking on 4 chords in the Circle of 5ths UI while the Adagio/Fine Performance mode was on
Sequence B is those 4 chords from A (arranged in a pattern) dragged into my DAW while PERFORM mode was on (hence capturing all the performance MIDI)
Sequence C is the MIDI captured while playing back sequence B with PERFORM mode on
The last sequence is the MIDI from C played back and captured while PERFORM is on
Combine this with real-time switching of playback pattern, speed, and humanization, and the combinations are endless. Just like fractal art, while there are lots of results that are useless, many of the outcomes are a ton of fun and quite interesting. Your ability to connect and time the variations keeps things entertaining.
A couple of things to note:
As you can see in the images, with each cycle through this process your note density increases and note duration decreases, so instrument/sound selection becomes very important
The playback speed (x0.5 – x2) can change both which notes play back and the speed, but in unexpected ways
Quantization can also impact notes played and speed in unexpected ways
Try changing pattern/speed/quantization in real time while looping a sequence… it is amazing how well some things work together.
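The note-density growth from the recursion above can be sketched with a toy model. To be clear, this is not Scaler's actual algorithm, just an illustration of the general effect, assuming each PERFORM pass replaces every note with a few shorter ones (the `split` factor is a made-up parameter):

```python
# Toy model of the recursive PERFORM feedback loop described above.
# NOT Scaler's real algorithm -- just a sketch of why density rises
# and duration falls with each cycle (A -> B -> C -> D).

def perform_pass(notes, split=2):
    """Expand each (start, duration) note into `split` shorter notes."""
    out = []
    for start, dur in notes:
        step = dur / split
        for i in range(split):
            out.append((start + i * step, step))
    return out

# Start with 4 whole-note chords (modeled as one note each, 4 beats long).
seq = [(beat * 4.0, 4.0) for beat in range(4)]

for cycle in range(3):
    seq = perform_pass(seq)
    print(f"cycle {cycle + 1}: {len(seq)} notes, "
          f"{seq[0][1]:.2f} beats each")
```

Under this assumption each pass doubles the note count and halves the durations, which is one way to see why short-articulation sounds like staccato strings start to matter after a few cycles.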
Here is a little example using only the above sequences, starting with just the 4 straight chords and progressing through various playback settings recorded in real time. Because I was playing with such short note durations, I used Staccato Strings and a simple progression, which resulted in the playback melody.
Here are the same sequences playing with BBC Discovery and articulation key switches. Nothing but Scaler creating the notes.
A couple of additional ways to feed Scaler recursively (to generate both note sequences and chord variations):
Take the MIDI from a Scaler chord pattern that was created with PERFORM enabled and drop it back onto Scaler, letting Scaler detect the MIDI (or use DAW playback and MIDI detect). You will get a set of notes and chords that are all related to the pattern's performance MIDI. You can repeat the process to get some unexpected chord and note combinations that you can feed back into a performance pattern for some interesting variations. My experience is that this process will often generate 2-note chords that are great building blocks for adding a deep bass note.
Use the process above, but create a set of patterns that you can loop. Now you can use the MIDI capture function while you adjust settings in real time as Scaler plays the pattern. As far as I can tell, almost all of Scaler's playback and pad settings can be adjusted live, and Scaler behaves pretty much perfectly. If I had a way to MIDI-map Scaler settings (or set keyboard shortcuts), it would make for a fun performance tool.
Take the bounce (audio recording) of a Scaler chord/pattern/performance and use Scaler’s audio detect features to pull chords back out of the performance. Because this is an analog detection process, by tweaking the record sensitivity level you can get some subtle variations in chords and voicings. (The detection process might also be impacted by the instrument you have selected in Scaler. I’ve not tested this, but my ear can hear a difference as I move from Off to the various instruments while using audio detect.)
The process of getting data in and out of Scaler can vary a bit based on the DAW you are using. For example, some DAWs support bi-directional drag and drop and some don't seem to. Some of these techniques will require trial and error on your part. However, if you are like me and you trust your ears more than your fingers, and you are more of a mousist than a pianist, you might find it time well spent.
Be patient and explore; quantization is your friend. These are not just random notes being generated; they are parts of well-considered pieces feeding an iterative creative process. If something sounds way off but you like some of what you are hearing, tweak the playback speed, walk the humanize settings, or try different voice groupings. You might also want to try a different instrument (that is the one thing you cannot change in real time). Some things play better with others, and the trick is to find the pieces that work well together and then stumble into a seemingly endless supply of happy accidents.
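As a rough picture of why quantization can change both which notes you hear and the feel, here is a generic grid-snapping sketch. This is not Scaler's internal quantizer, just the usual snap-to-nearest-grid-line math; note how a coarse enough grid makes nearby events land on the same beat:

```python
# Generic quantization sketch (not Scaler's actual quantizer).
# Events are (start_in_beats, pitch); snapping moves each start to the
# nearest grid line, so a coarse grid can pile events onto one beat.

def quantize(notes, grid=0.25):
    """Snap each (start, pitch) event to the nearest multiple of `grid`."""
    return sorted((round(start / grid) * grid, pitch)
                  for start, pitch in notes)

loose = [(0.02, 60), (0.26, 64), (0.49, 67)]
print(quantize(loose, grid=0.25))  # each event snaps to its own 16th
print(quantize(loose, grid=0.5))   # coarser grid: two events collide on one beat
```

With the 0.25-beat grid the three events stay distinct; with the 0.5-beat grid the second and third both land on beat 0.5, which is the kind of "notes played and speed change in unexpected ways" behavior mentioned above.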
I think many of us have stumbled over accidentally feeding expression notes back into expression mode, but most probably saw it as a bug; I've seen some folks here asking for help because things got so chaotic. You're the first I've seen take advantage of its creative potential. Its nestedness almost has fractal qualities, or perhaps game-of-life? I have an iPad sequencer (Xyntheszr) that actually has a game-of-life morphing mode. I wonder if, with some additional experimentation, Scaler could be programmed that way too ;-)
Funny you said that…I’ve got a pretty loud KTM desert bike and that 990 twin is music to my ears. A couple summers ago I put a mic on the back and then sampled it into a track I put together for a moto event. So yes, that bike does inspire.