Implementation Idea for treating incoming notes the same way as chords stored in Scaler

I keep running into a limitation I’ve brought up several times: chords played into Scaler from an upstream MIDI source (most often another Scaler managing the master chord progression, while the downstream Scaler(s) are used for expression and variation of rhythm, melody, etc.) don’t get the same voicing and expression options as the chords the Scaler instance maintains itself. My performance setups are also too dynamic to rely on the (manual) sync capability. So instead of nagging further about the problem, I wanted to circulate a possible solution approach.

What if Scaler could automatically learn new scales and chords on the fly, which would then influence its execution of expressions, performances, phrases, melodies, rhythms, etc., just as it does when it already knows a scale or chord progression? By “learn on the fly” I don’t mean the existing DETECT mode; the gap is that today it only works manually, with user interaction. What if we could instead play in a set of notes (polyphonically, all at once) in a control octave, say around C-2 (a note range rarely if ever used for actual music)? If I wanted to temporarily set Scaler to a C Major scale, I would play C-2, D-2, E-2, F-2, G-2, A-2, B-2 simultaneously (e.g. from the DAW’s piano roll), essentially a 7-voice chord containing all the notes of my desired scale. Scaler would pick this up live, configure its current scale accordingly, and then adjust all scale-dependent settings to this new information. Like I said, think of it as Detect mode, but automatable and remote-controlled. This way you could skirt any more complicated MIDI CC implementation. I was actually inspired toward this idea by how the Synthimuse learns new temporary scales.
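A minimal sketch of the idea, assuming C-2 maps to MIDI note 0 (note-numbering conventions differ between DAWs) and using a purely illustrative function name, not anything from Scaler itself:

```python
# Assumption: C-2..B-2 correspond to MIDI notes 0..11 (this varies by DAW).
CONTROL_OCTAVE_MAX = 11

def learn_scale(note_ons):
    """Collect pitch classes from control-octave note-ons into a scale.

    note_ons: MIDI note numbers held simultaneously.
    Returns a sorted list of pitch classes (0 = C) defining the new scale,
    or None if any note lies outside the control octave (i.e. it is
    ordinary musical input and the current scale is left alone).
    """
    if any(n > CONTROL_OCTAVE_MAX for n in note_ons):
        return None
    return sorted({n % 12 for n in note_ons})

# Playing C-2 D-2 E-2 F-2 G-2 A-2 B-2 as a 7-voice chord:
print(learn_scale([0, 2, 4, 5, 7, 9, 11]))  # → [0, 2, 4, 5, 7, 9, 11]
```

Anything returned non-None would then drive the same scale-dependent machinery that Detect mode configures today.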

Hi @Bernd

The reason behind the difference with live input is quite simple… no one plays well/quickly enough for it to work properly live.

When you press notes (e.g. C, E, G) they are not pressed at exactly the same time. If we were to process an expression, we might receive only the C and E at first and start processing the expression based on those 2 notes.

If the expression was doing some sort of simple arpeggio we would start triggering: C, E, C, E, C,…

Then a 3rd note arrives and we have to either “merge” or “retrigger” the expression with the full content. Scaler would then play something like “C, E, C, E, G”, which is still different from what would happen if you had clicked/triggered the full chord on the UI.

This is a really simplified case; it gets trickier when you play through a phrase in which the first note is supposed to be the “5th” of the chord. If you send “C, E” there is no fifth, unless you decide it is not a Cmaj chord. But then again, if we do that, the output of the expression will differ from what a full chord would produce.
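To illustrate the missing-degree case, here is a hypothetical sketch (the function and its degree table are my own invention for illustration, not Scaler’s internals):

```python
def chord_degree(chord_notes, degree):
    """Return the pitch for a 1-based chord degree (1=root, 3=third, 5=fifth)
    of a stacked-thirds triad, or None if the chord doesn't (yet) contain it.
    """
    index = {1: 0, 3: 1, 5: 2}.get(degree)
    if index is None or index >= len(chord_notes):
        return None
    return sorted(chord_notes)[index]

print(chord_degree([60, 64], 5))      # partial chord, no fifth yet → None
print(chord_degree([60, 64, 67], 5))  # full C major → 67
```

A phrase step asking for the “5th” simply has nothing to resolve to while only C and E have arrived, which is exactly the ambiguity described above.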

We have tried adding a slight delay to give time for a full detection, but it messes with the timing of the track, which is another issue.
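The delay trade-off could be sketched like this, with a purely assumed 30 ms window (a real plugin would do this per audio block, not over a sorted event list): a chord is only complete once its window closes, so everything triggers that much late.

```python
def group_into_chords(events, window_ms=30.0):
    """Group (time_ms, note) note-on events into chords.

    Notes whose onsets fall within window_ms of the first note of a
    group are treated as one chord. The full chord is only known
    window_ms after its first note, which is the timing cost.
    """
    chords = []
    current, start = [], None
    for t, note in sorted(events):
        if start is None or t - start > window_ms:
            if current:
                chords.append((start, sorted(current)))
            current, start = [note], t
        else:
            current.append(note)
    if current:
        chords.append((start, sorted(current)))
    return chords

# A C major chord played by hand: onsets a few ms apart.
print(group_into_chords([(0.0, 60), (8.0, 64), (15.0, 67)]))
# → [(0.0, [60, 64, 67])]
```

Widening the window makes detection more reliable but the latency worse, which matches the problem described above.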

When it comes to adapting live to the input, we are going in a slightly different direction that should allow you to do what you want through MIDI CC automation and user-defined content.

In the future, you should be able to define a custom scale and automate Scaler to switch to different scales on the fly. There would still be some manual setup but it will allow arrangements to be more complex/interesting and add the ability to perform variations easily.

I hope this makes sense :slight_smile:


Sounds good! I look forward to MIDI automation, that’ll be a big leap towards more performance capabilities. Thank you for explaining it, and thank you for driving Scaler forward so progressively!

I think that AUDIO-to-MIDI conversion was, and likely still is, one of the toughest tasks ever for a software developer :cold_face:

Indeed, there are a few tools currently that do it (almost) perfectly

I find that e.g. EZbass does the conversion pretty well, and it often saves me a lot of time when I have it recognize MIDI produced with instruments other than a bass.