Control mapping the Record and Trash buttons

Hi!

I absolutely love the voicing and performance features of Scaler 2, very happy with my purchase. I’m building a ‘digital improvisational band member’ that uses several algorithms to generate parts in real time based on audio and MIDI signals coming in from the live band. I’m planning to use Scaler to handle a big part of the harmony processing in the project, with a single input moment per song in which the keyboard player ‘tells’ the computer which chords to use. After that, multiple instances of Scaler can delegate different voicing and performance styles to different instrument tracks.

In order to do this, having a way to automate the Record and Trash buttons in Section A (chord detection) of Scaler would save me a lot of time. Right now these functions only seem to be reachable through a mouse click. I’d rather not write a separate little piece of software that auto-clicks the buttons whenever needed, because that doesn’t seem like a clean solution to the problem.

Is there any way that I’m missing to reach the functions of these buttons? If not, it would be absolutely awesome if control mapping for these two elements could be made available in a future version of Scaler.

Thanks in advance,

Kindest regards,
Jan Terlouw

Hi Jan, welcome to the forum. Your application sounds interesting, and thanks for the feature request. You would probably be the only one who wants those two things automated, but we shall discuss it. Cheers.

Welcome to the board … lots of helpful folk here.

This sounds like a fascinating project … it makes me think of Bender (Futurama) playing a piano.

Scaler is not ‘time aware’ in most functions. So, for example, in audio detection it recognises changes in tonality (audio) or chord structure (MIDI) but disregards time. Feeding it a 4-bar piece with tonalities / chords X X Y Z would therefore be interpreted as X Y Z.

However, potential issues are (a) latency from processing inbound data and then rendering it to audio, and (b) more importantly, that with the current version AFAIK it can’t ‘chase’ inbound audio or MIDI … there is no SPP recognition or sync.

That said, this may all be wrong, as it is based on my (limited) understanding, so maybe @Ed1 (developer) can jump in and comment on the plan.

I second the request. I have had the same problem using Scaler in Gig Performer.

Thanks for your quick and kind responses!

I’m sorry, I forgot to mention that I’d trigger the chords via a MIDI-bound Section A using MIDI clips generated by other (rhythm) modules (built in e.g. Reaktor 6 or Max For Live), so syncing shouldn’t be an issue (a rough sketch of what I mean follows below).
And yes, a piano playing Bender is actually a nice way to look at this project :stuck_out_tongue:
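
For anyone curious, this is roughly the kind of thing the rhythm modules would do with the bound Section A. It’s only a minimal sketch in Python with mido; the port name "To Scaler", the BIND_ROOT note number and the slot-to-note mapping are assumptions about my own routing, not anything built into Scaler, and in the real project this logic would live inside the Reaktor / Max For Live patches.

```python
# Minimal sketch: fire Scaler's MIDI-bound Section A slots from an external
# "rhythm module". Assumes a virtual MIDI port named "To Scaler" is routed
# into the Scaler track, and that the bound chord slots respond to
# consecutive note numbers starting at BIND_ROOT (check your own bind
# range in Scaler; 36 is just a placeholder).
import time
import mido

PORT_NAME = "To Scaler"        # assumed virtual/loopback MIDI port
BIND_ROOT = 36                 # assumed first bound note of Section A
CHORD_PATTERN = [0, 0, 1, 2]   # slot indices, e.g. an X X Y Z progression
BEAT_SECONDS = 0.5             # 120 BPM, one chord per beat

with mido.open_output(PORT_NAME) as out:
    for slot in CHORD_PATTERN:
        note = BIND_ROOT + slot
        out.send(mido.Message("note_on", note=note, velocity=100))
        time.sleep(BEAT_SECONDS)
        out.send(mido.Message("note_off", note=note))
```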

Thanks for your consideration! Although Scaler already has great features for generative music, I think having more automation possibilities would open a lot of doors for condition-based applications.

Kindest regards,
Jan Terlouw

Just to clarify: moving forward we should have a 100% assignable MIDI mapping feature, but that will need to come with a redesigned UI. Next major iteration.
