Not sure if this has been brought up before as a feature request, but I think it would make sense to apply voice grouping to MIDI chords coming into Scaler, not just to the internally played chords. This would be especially helpful in cascaded/nested Scaler chains, where one Scaler provides the base chord progression and subsequent Scaler channels can interpret those chords differently with their own expression/voicing settings, without having to store or synchronize the chord progression.
That’s interesting; that would make playing quite transformative. But there are many rules we would need to consider: what counts as a chord (how far apart the actual notes are), and when we lead/group (how far apart the actual chords are). Not quite as easy as it should be. I love the Voice Grouping in Scaler, and given how much feedback we get on it, it’s clearly an area we should put some extra focus on.
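To make the "what counts as a chord" question concrete, here is a minimal sketch (not Scaler's actual implementation; the window size and names are hypothetical) of one common approach: treating incoming MIDI note-ons whose onsets fall within a short time window as a single chord.

```python
from dataclasses import dataclass

# Hypothetical threshold: note-ons this close together count as one chord.
CHORD_WINDOW_MS = 30.0

@dataclass
class NoteOn:
    time_ms: float  # onset time in milliseconds
    pitch: int      # MIDI note number

def group_into_chords(events, window_ms=CHORD_WINDOW_MS):
    """Group note-ons into chords by onset proximity.

    Notes whose onsets fall within `window_ms` of the first note of the
    current chord are grouped together; anything later starts a new chord.
    """
    chords = []
    current = []
    for ev in sorted(events, key=lambda e: e.time_ms):
        if current and ev.time_ms - current[0].time_ms > window_ms:
            chords.append(sorted(n.pitch for n in current))
            current = []
        current.append(ev)
    if current:
        chords.append(sorted(n.pitch for n in current))
    return chords

# A slightly strummed C major followed half a second later by G major:
events = [NoteOn(0, 60), NoteOn(10, 64), NoteOn(20, 67),
          NoteOn(500, 55), NoteOn(505, 59), NoteOn(512, 62)]
print(group_into_chords(events))  # [[60, 64, 67], [55, 59, 62]]
```

Even this toy version shows why the rules get tricky: a slow strum or arpeggio can exceed any fixed window, so a real implementation would likely need an adaptive threshold or a look at note-off timing as well.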