wshilton / andrew

GNU General Public License v3.0

Integration boundaries #8

Open wshilton opened 1 year ago

wshilton commented 1 year ago

The manner in which features captured in the latent spaces of our multi-scale, multi-modal VAEs are lifted into other systems is a primary design concern. Before proposing a fully integrated approach, an incremental review of general techniques for transforming these latent spaces is in order.
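As a point of departure, the simplest "lifting" technique is a learned linear map from the VAE latent space into the target system's feature space. The sketch below is illustrative only: the dimensions, the synthetic latents, and the least-squares fit are all assumptions, not the project's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 16-d VAE latent lifted into an 8-d downstream space.
Z_DIM, TARGET_DIM, N = 16, 8, 256

z = rng.normal(size=(N, Z_DIM))             # stand-in for codes from a VAE encoder
W_true = rng.normal(size=(Z_DIM, TARGET_DIM))
y = z @ W_true                              # stand-in targets in the downstream space

# Least-squares fit of the lifting map: the simplest latent-space transform
# against which more integrated approaches can be compared.
W_fit, *_ = np.linalg.lstsq(z, y, rcond=None)

residual = np.abs(z @ W_fit - y).max()      # should be near zero for this linear toy
```

Nonlinear or jointly trained transforms would replace `W_fit`, but a linear probe of this kind is a useful baseline for the incremental review.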

wshilton commented 1 year ago

For generative purposes, we will need to define a training objective, in a reinforcement learning paradigm, that targets a therapeutic constitution. Likewise, for encoding, the training objective will target a therapeutic perspective.
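One common shape such an objective could take is a REINFORCE-style reward-weighted log-likelihood, with the therapeutic target supplying the scalar reward. This is a minimal sketch under that assumption; the function name and the baseline choice are illustrative, not a committed design.

```python
import numpy as np

def reinforce_loss(log_probs, rewards):
    """REINFORCE-style surrogate loss: maximize expected reward by weighting
    per-sample action log-probabilities with baseline-subtracted rewards.
    Here `rewards` would come from some (hypothetical) therapeutic scorer."""
    log_probs = np.asarray(log_probs, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    advantages = rewards - rewards.mean()   # mean baseline reduces gradient variance
    return -np.mean(log_probs * advantages)
```

Note that with a constant reward the advantages vanish and the loss is zero, which is the expected behavior of a baseline-subtracted policy-gradient surrogate.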

wshilton commented 1 year ago

Approaches that are integrated at the outset have been explored, such as in https://proceedings.neurips.cc/paper_files/paper/2020/file/08058bf500242562c0d031ff830ad094-Paper.pdf. Our use case, which involves certain specialized object detection processes (face, body, and hand landmarking), does not warrant this treatment. Regardless, the manner in which RL is applied there is a candidate for defining the therapeutic-perspective learning goal. Our current suspicion is that we will leverage APA PsychNet's many streaming, archived, scripted, and unscripted sessions to train for this. Further outstanding questions in this direction should now be addressed.

wshilton commented 1 year ago

A natural integration boundary might specify that multi-scale, multi-modal unsupervised learning occurs continuously throughout operation. The latent space at any given time is lifted directly to determine associations between emotive language, spoken language, subtexts, and so on. Once some basic conditions are satisfied, the features are weighted according to the strength of these associations. For the strongest associations surpassing some measure of confidence, the LLM then proposes a textual description of the feature. Feedback mechanisms addressing class biases, bifurcations, and so on should be implemented to maintain stability.
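The weighting-and-thresholding step above could be as simple as correlating each latent dimension with a language-derived signal and keeping only the dimensions whose association strength clears the confidence bar. A minimal sketch, assuming Pearson correlation as the association measure (the function name, threshold, and synthetic data are all hypothetical):

```python
import numpy as np

def feature_associations(latents, signal, conf=0.5):
    """Correlate each latent dimension with a language-derived scalar signal
    and keep dimensions whose |Pearson r| exceeds the confidence threshold.
    Surviving dimensions would then be handed to the LLM for naming."""
    z = latents - latents.mean(axis=0)
    s = signal - signal.mean()
    r = (z * s[:, None]).sum(axis=0) / (
        np.linalg.norm(z, axis=0) * np.linalg.norm(s) + 1e-12
    )
    keep = np.where(np.abs(r) > conf)[0]
    return r, keep

# Toy demo: latent dim 0 tracks the signal exactly; dim 1 is independent noise.
rng = np.random.default_rng(0)
sig = rng.normal(size=200)
lat = np.column_stack([sig, rng.normal(size=200)])
r, keep = feature_associations(lat, sig)
```

In the toy demo, dimension 0 correlates perfectly and survives the threshold while the noise dimension is discarded, which is the intended filtering behavior.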

wshilton commented 1 year ago

The precise manner in which the LLM might generate the proposal is of great interest.
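One concrete possibility is that the proposal reduces to prompt construction: show the LLM the session excerpts on which the feature activates most strongly, along with its association weight, and ask for a short description. The sketch below is purely illustrative; the function, the prompt wording, and the excerpt format are assumptions, not a settled interface.

```python
def naming_prompt(feature_id, exemplars, weight):
    """Assemble a prompt asking an LLM to propose a textual description for a
    strongly associated latent feature. All names here are hypothetical."""
    lines = [
        f"Latent feature {feature_id} (association weight {weight:.2f}) "
        "activates strongly on the following session excerpts:",
        *(f"- {e}" for e in exemplars),
        "Propose a short textual description of what this feature captures.",
    ]
    return "\n".join(lines)

prompt = naming_prompt(3, ["client lowers gaze", "voice softens"], 0.87)
```

Whether the description is produced zero-shot like this, or refined through the feedback mechanisms mentioned above, remains an open question.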