Scaler 2 Ableton Instrument Rack Routing

Hi all,
I’ll just start with what I’m trying to do in Live 11…
Create a set list for live performance that has an instance of Scaler 2 for every song.

I want to be able to send the output of Scaler 2 to various instrument tracks, sometimes more than one at the same time. So far I have created an instrument rack on one MIDI track and loaded a number of instances of Scaler into it. For reference I have named each of these instances Song 1, Song 2 etc.

My core problem is that on the receiving MIDI track (with a soft synth in it) I can only choose either Song 1 or Song 2 as the MIDI input, in the lower chooser of the MIDI From section on that track. My intention was to ‘change songs’ on the Scaler track using the chain selector on the instrument rack, but this doesn’t change the MIDI-in on the receiving track; I am stuck on whatever ‘song’ I’ve selected to begin with. The input itself doesn’t seem to be MIDI mappable.

Hope this makes sense and that someone out there has a workaround. PS I am wondering if a Max for Live device or IAC driver routing might help, but really I don’t know what I’m doing - Mike :slight_smile:


I want to be able to send the output of Scaler 2 to various instrument tracks, sometimes more than one at the same time.

Tip 1: why not put a different Scaler instance on each instrument?

Tip 2: To send the same Scaler output to different instruments instead, drop in multiple MIDI tracks (each with Scaler as its input) and use them to route the Scaler output to each of the different instruments (each instrument with its MIDI track as input)
:wink:

Thanks Claudio - if I wanted to use the same Omnisphere patch / track for two songs (now as 2 instances of Scaler on separate tracks, if I understand you correctly), would that mean I have to duplicate the Omnisphere track as well?

I don’t know omnisphere, but @panda knows it well, so he will reply when he can

No worries, I will check back in tomorrow. The actual receiving VST doesn’t matter so much, as this will be a variety on any given track, again by chain selector / instrument rack. I can sometimes use up to 8 of these MIDI instrument tracks (pads / bass / synth / lead / pianos / sequences etc., with selectable variations on each). So doing the math, if I had a separate instance of Scaler for each ‘song’ and can only configure the inputs of these 8 tracks to the one Scaler source, then I would have an enormous number of tracks for a set of songs (or am I conceiving it the wrong way?).

Hi

It looks to me as if you are suffering from the limitations of routing in Ableton Live (all MIDI is merged into a single channel when routing between tracks). So I think the suggestion above is partially correct: you can still have one track for Scaler with an instrument rack of Scalers, but you will need duplicate tracks for Omnisphere.

I’m just trying to get my head round this, but I understand there are some basic issues with doing this sort of thing in Live, with its limited routing capabilities and no channel recognition.
I don’t use racks (except to split MIDI input by pitch / MIDI note number to send to different instruments).

I assume that the mention of IAC means that you are on a Mac… I’m Windows.

To make sure I interpret the goal properly, I’ll bounce my interpretation of your approach off you, for you to correct / affirm as relevant.

{1} Since an instance of Scaler can generate only one MIDI stream (setting aside my pitch-splitting fiddles), a given song with 4 instruments driven by Scaler would require 4 instances of Scaler (bass, chords, melody etc.).

{2} If you had 6 songs, then (given the performances etc. were all different) if they all had 4 parts, absent any other provisions, each song would require 4 independent instances. This could be mitigated by Scaler changing state, i.e. reading in another set, but that may be difficult.

{3} It seems to me therefore that if there were N songs with M performances fitting the above scenario, by default that would require N x M instances. Although Scaler is pretty lightweight and I could happily run dozens, there is obviously a practical limit depending on your hardware.

{4} However, as you know as a user, because Omnisphere is multitimbral you may only need one instance of it unless you had more than 8 instrument voices. You can obviously route Omnisphere’s audio output to different Live audio tracks for mixdown. So you sort of have a many-to-one MIDI in and a one-to-many audio out.

{5} So, at first glance, the above would require 4 instrument racks, each driving one of the 4 voices of the total song. Each rack would have 6 song parts.

Is this the sort of framework you were looking for, or have I misread it? I might then be able to suggest possible ways of doing it. Scaler of course supports different MIDI channels, but that doesn’t help you much in Live.

You might, for example, use MIDI Translator Pro | Bome Software and route the output to dummy MIDI ports, do the logic in BOME, and then route it back to the relevant target, driving this by channel number. I don’t know IAC, but that might do the same thing?
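
To make that concrete, here is a minimal sketch of the kind of "route by channel" logic BOME (or anything listening on an IAC bus / dummy port) would implement. It is written in Python with the mido library purely for illustration, and the port names are placeholders for whatever virtual ports you actually create - not something Scaler or Live provides:

```python
# Illustrative only: read one merged MIDI stream and fan it out by channel.
# Requires: pip install mido python-rtmidi. Port names below are made up.
import mido

merged_in = mido.open_input('IAC Bus Scaler Out')   # the dummy port Live sends to

# one destination port per MIDI channel (mido numbers channels 0-15)
targets = {
    0: mido.open_output('IAC Bus Pads'),   # Scaler instance set to channel 1
    1: mido.open_output('IAC Bus Bass'),   # Scaler instance set to channel 2
    2: mido.open_output('IAC Bus Lead'),   # Scaler instance set to channel 3
}

for msg in merged_in:                      # blocks and yields messages as they arrive
    if hasattr(msg, 'channel'):            # notes, CCs, pitch bend etc. carry a channel
        out = targets.get(msg.channel)
        if out is not None:
            out.send(msg)
    else:                                  # clock, start/stop etc. go to every target
        for out in targets.values():
            out.send(msg)
```

BOME or Cantabile do the same job with rules rather than code, but the principle - one merged input, split back out by channel number - is identical.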

If Omnisphere can receive input on multiple channels at the same time, you can have multiple Scaler instances that output on different channels
OR
You can create multiple MIDI tracks that receive the output from one Scaler and send their outputs to different inputs of Omnisphere

But I am starting to suspect you need automation, i.e. changing stuff on the fly, so I give up.
Too complex a workflow for me :astonished:

In Live, Omnisphere can receive midi from multiple input tracks, with up to 8 instruments; audio from them can go to multiple Live tracks.

However, firstly, Scaler doesn’t present a MIDI out, so the input has to be defined on the target synth’s MIDI track - but you can only specify one input at a time, which is part of imikeb’s (several) challenges.

Secondly, although you can have several Scalers in an instrument rack, and can give them separate MIDI channels, Live doesn’t recognise this on the receiving track. If it did, you could direct the input to an equivalent channel in Omnisphere. It’s easy when you have multiple tracks each with a MIDI source in them (these could be multiple Scalers), as you can then feed each one to a separate channel in an instance of Omnisphere.

The ‘divisi’ arrangement used by jjfagot (and me, albeit in a modified form) can be used for things like Instacomposer and the BBC SO. That means a dummy MIDI interface (or ports in my case) and using the logic of something connected to that to do the routing. I used Cantabile.

I’ll wait and see if I understood imikeb first, and then see if we can come up with something. Not sure how he plans to switch to / start new songs. jjfagot uses Gig Performer.

Mmmm :thinking:

Maybe it is all linked to the “rack”, but have you tried using multiple Scalers with their outputs on different channels?
Scaler 1 on Channel 1, Scaler 2 on Channel 2 and so on, each one with a MIDI track to route that output to Omnisphere?

I tried yesterday and I was able to drive multiple channels in my Hollywood Pop Brass.
This is what I mean, but I don’t know if it will work for you.

Thanks so much for the willingness to understand here - what an amazing community! I’m sorry if I am confusing things more…

  1. Yes, that’s my understanding too - correct me if I am wrong, but if I had one instance of Scaler on one MIDI track I am able to send this to any number of separate MIDI instruments on separate MIDI tracks in Live by choosing that one instance of Scaler as the source/input. Possibly I could then use the performance and pattern selectors (via MIDI controller) in that one instance of Scaler to move through different parts of a song. The next step, to get as much out of that one instance as possible, might be to add MIDI filters / keyboard splits to the respective instrument tracks so they would only hear (and play) the bits of the Scaler performance that I want them to hear. Granted, this is if I’m playing with the one performance coming out of one instance of Scaler. If I want bass etc. I am going to need completely separate instances.

  2. Yes - as above this seems to be necessary.

  3. Yes, N x M instances. After my 2012 MacBook packed up I’ve forked out for a MacBook Air with an M2 chip and this seems to be screaming along, so no dramas with CPU. This is where the set seems to get out of control: if V is a MIDI track with an instrument rack on it, where I can happily select from a range of VST instruments via chain selector, I assume I need M x V worth of separate MIDI tracks (as the input of each V can only receive MIDI from one instance of Scaler (M)). This may mean that if I had 6 songs in a set I would have 6 x M (say 4 parts) x V for each part … in other words, without somehow routing from an instrument rack with multiple instances of Scaler, this would be 24 Scaler tracks and 24 instrument tracks to receive this MIDI!! :frowning:

  4. Might be best to take Omnisphere out of the equation for now and forget the multitimbral aspect - that just does my head in. :slight_smile: Back to the original dilemma - the possibility of 1 instrument rack (with M instances of Scaler) being able to send info to V instrument tracks…

  5. This is great thinking in itself - where I was thinking I needed 24 instances as above, I could instead combine voices into an instrument rack where, for instance, the instrumentation for Song 1 Part 1 was set to Chain 1 on the instrument rack, and this chain itself could host a rack with multiple voices (with keysplits if necessary).

  6. The dummy MIDI track is something I’m hearing about that led me to ask about IAC, which I haven’t used much at all. Very limited knowledge in this space unfortunately. I will look up Bome - perhaps that’s the miracle I’m searching for.

Heartfelt thanks for your input.


I’d suggest that Omnisphere is a friend, not a problem. Imagine that you want to have three parts, each with a different voice (patch), and you were using a monotimbral softsynth like DUNE. You would need three instances of DUNE on separate tracks to do this.
If in fact the voices could be provided by Omnisphere, then you only need one instance of Omnisphere. Although the three voices will still consume resources (CPU and memory), because of the shared code and data there will be less overall resource usage.

Say DUNE uses R1 units of resource (CPU / memory) for one instance; then 3 x R1 will be used for the three voices.

If Omnisphere uses R2 units, the total for the three voices would be R2 + 3m, where m is the marginal extra resource for adding a voice. The bottom line is that if an instance of DUNE consumed the same amount as an instance of Omnisphere (i.e. R1 = R2), then Omnisphere would win hands down on resource usage.

Probably Omnisphere would need more than DUNE for a single instance, then rapidly pull ahead as extra voices are added - it’s all about what code/memory is shared: none in the case of DUNE, some in the case of Omnisphere.
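
To put some numbers on that, here is a quick back-of-the-envelope sketch (Python, figures entirely invented for illustration - they are not measurements of either plugin):

```python
# Invented figures purely to illustrate the R1 vs R2 + m comparison above.
R1 = 300   # resource units for one monotimbral (DUNE-style) instance
R2 = 800   # resource units for one Omnisphere instance (bigger shared engine)
m  = 120   # marginal extra resource per additional Omnisphere voice/part

for voices in range(1, 9):
    mono_total  = voices * R1        # one full instance per voice
    multi_total = R2 + voices * m    # one instance plus a marginal cost per voice
    print(voices, mono_total, multi_total)
```

With those made-up numbers the monotimbral route is cheaper for one or two voices, and the multitimbral route pulls ahead from about five voices - where the crossover actually falls depends entirely on the real R1, R2 and m.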

Later on I’ll post a simple Omnisphere example with routings.

Warning: if you are not familiar with ‘sysex’, BOME might be a step too far. It’s very powerful, and I believe that Ableton used BOME products in the development of PUSH.
A better bet might be to take a look at Cantabile - Software for Performing Musicians
What it doesn’t say is that they have a free version as well, which I used initially before going for the USD 69 version. This can do some of what BOME does in a more user-friendly way.

See Playing with multiple MIDI ports (post 1) - (very) poor man's divisi to get some idea.

Now that I’ve confirmed my understanding of your goals, I’ll have a look later today (I’m London based) and post something this evening if possible - it depends how long it takes me to experiment.

You have quite a complex goal, so maybe think of it by sub-dividing the elements into different pots.

I’m also unsure if the plan is to, say, have four Scalers playing a backing whilst you are connected to a fifth and play along with some bound keys (as @ClaudioPorcellana does), or some other plan. This is important because the means to trigger new songs will vary depending on that setup.

So I thought I’d just set up a quick Omnisphere / Scaler setup as one of the building blocks, as you can feed up to 8 Scalers into a single instance. Further, it allows you to lock patches into RAM, which would otherwise be loaded from disc. This opens up the possibility of doing a patch change at the end of each song instead of having several Omnisphere instances.
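
As an aside on that patch-change idea: assuming the receiving synth is set up to respond to program change messages on its channels (worth checking in the multi setup - I haven’t verified this for Omnisphere), the switch between songs could be as simple as firing one program change per channel. A minimal sketch in Python/mido, with a placeholder port name and invented program numbers:

```python
# Hypothetical sketch: send program changes when moving to the next song.
# Port name and program numbers are placeholders, not real settings.
import mido

out = mido.open_output('IAC Bus To Synth')

# one program number per channel, per song (mido channels are 0-based)
song_programs = {
    'Song 1': {0: 10, 1: 42},   # channel 1 -> patch 10, channel 2 -> patch 42
    'Song 2': {0: 11, 1: 43},
}

def switch_song(name):
    for channel, program in song_programs[name].items():
        out.send(mido.Message('program_change', channel=channel, program=program))

switch_song('Song 2')   # e.g. triggered at the end of Song 1
```

The same messages could just as well come from a MIDI clip or a controller rather than a script; the point is that one message per channel re-patches the synth without loading another instance.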

The overall test looks like this
[screenshot: the overall test layout]

The ‘scaler pad’ and ‘scaler perf’ tracks each have a Scaler instance. For the Scalers I chose a simple progression from the common progressions, ‘Triumphant Thickened 2’.

[screenshot: Scaler progression ‘Triumphant Thickened 2’]

For the second scaler (‘perf’) I chose a performance of ‘Chordal Perf 2’ as the melody line.

As you are aware, the output by default goes to audio, but we can insert a ‘dummy’ midi track to ‘hop’ it over to the synth. This is done by ‘midi hop pad’ and ‘midi hop perf’.

Now set up a fifth track for Omnisphere and load it in. I put a string patch onto channel 1

and a D50 piano into channel 2

[screenshot: Omnisphere patches on channels 1 and 2]

You can now control the relative channel volume in the Multi Mixer section
[screenshot: Omnisphere Multi Mixer]

Now just a few small tasks to link it together and trigger it. First, link the dummy tracks to Omnisphere (note the routing is done in the sending track and not the target track, to get the many-to-one setup)
[screenshot: MIDI routing from the dummy tracks to Omnisphere]

Put a dummy MIDI clip on Scaler tracks 1 and 2 by right-clicking in the track

[screenshot: inserting a dummy MIDI clip]

You can see them here …
[screenshot: dummy clips on the Scaler tracks]

Now, in the Scalers, switch the internal Scaler synth sound off, and in Settings turn DAW sync on
[screenshot: Scaler sound off and DAW sync on]

Now trigger playback in the master track and adjust the track 1 and 2 volumes in the Omnisphere mixer, and there you have it.

Now, what you could do is add a scaler track with midi input from your keyboard controller and bind the progression. You can either use a scaler internal sound routed to the master out or add a dummy and route it to another Omnisphere patch, trigger the start from your keyboard and play along.

Obviously all this isn’t where you want to be yet, but it maybe gives some ideas. The challenge then is to deal with changing songs and the best setup for that. It may be that Cantabile Solo can do that. Also, some routing may be easier using dummy external MIDI ports, allowing you to control things better by using Scaler’s ability to specify the output channel.

I’m not sure if this was of any use - perhaps you were completely familiar with this, in which case apologies…

You can download the Ableton ‘als’ file at https://btcloud.bt.com/web/app/share/invite/Inbt6uNvhU


This is awesome @panda. What a legend. Thanks to the comprehensive write-up here and the shared file, I have that setup going well. And the process has been a learning curve. (I’m based in Sydney, by the way.)

I’m also unsure if the plan is to, say, have four Scalers playing a backing whilst you are connected to a fifth and play along with some bound keys (as @ClaudioPorcellana does), or some other plan. This is important because the means to trigger new songs will vary depending on that setup.

I know this is missing one of the great advantages of Ableton in terms of syncing and looping, but the current plan is to trigger about 3-4 instances of Scaler 2 with the left hand using bound keys and 1 instance with the right hand.

I looked into Bome and you are right @panda, that is a steep learning curve. My only experience has been updating MIDI instruments that require sysex to do that.

As I am into Audio Modelling software I do have Camelot Pro and will explore that, as it seems comparable with Cantabile. It all takes me out of Ableton though, which is where I was hoping to stay!

There is, but it’s very variable, depending on what you want to achieve. In this case, there are a number of small actions which are easy to do - when you know them. Miss any one of the steps and the headphones will stay silent.

Not necessarily … Ableton works well and integrates with controllers for live performance, so it would be best if you could keep that as your interface. I’ll develop the Omnisphere example with another idea shortly and post it here.