No, and well, yes...
AI could in theory be trained to do quad decoding, by training on quad-encoded tracks and the original multi-tracks; however, AFAIK all the training so far has involved regular stereo mixes and original multi-tracks.
I suppose it might be interesting to try the various AI tools on quad encoded files...
Anyway, yeah, there's a whole "remixing" hobby that in the past used labor-intensive harmonic editors and now uses AI "music source separation" tools. I've incorporated them into an upmix + remix workflow for going beyond 7.1 (immersive upmix), covered in other threads here.
Several of the open-source tools involve installing Python and working on the command line, or you can pay for commercial versions based on the same code. However, there is one where someone has added a GUI and bundled up all the Python so you don't have to mess with it:
https://github.com/boy1dr/SpleeterGui
That one will get you up to 5 stereo stems from your stereo source:
Drums
Bass
Vocals
Piano (not generic keyboard)
Other (the original minus the above stems)
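For anyone comfortable with the command line, the Spleeter tool that the GUI wraps can also be run directly. A minimal sketch, assuming Python and pip are already installed (the model name comes from Spleeter's own docs; `mysong.mp3` is just a placeholder for your file):

```shell
# Install Spleeter into the current Python environment
pip install spleeter

# Separate a stereo source into the 5 stems listed above.
# The first run downloads the pretrained 5-stem model; results land in
# output/mysong/ as vocals.wav, drums.wav, bass.wav, piano.wav, other.wav
spleeter separate -p spleeter:5stems -o output mysong.mp3
```

The 2-stem (`spleeter:2stems`, vocals/accompaniment) and 4-stem (`spleeter:4stems`, no piano) models work the same way and run faster if you don't need all five stems.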
Again, in theory it could be trained for, say, "brass", etc., but you need >100 hours of paired multi-tracks and stereo mixes to do a good job (and a smallish cluster of computers/GPUs).