david-thrower closed this 1 year ago
Branch name: 67-base-models-embeddings-encoders
Made the proposed changes. The backward-compatibility tests passed; it went suspiciously well. We still need to develop the tests for forward compatibility (new functionality). This will take some hours.
Successfully ran a text model (BERT base embedding) with Cerebros. Unfortunately, the BERT embedding is the bottleneck: Cerebros is not able to augment embeddings from BERT beyond the val_binary_accuracy of 0.8429 that a straight BERT embedding -> Dense(1) head already achieves. The limited resolution of BERT's embedding doesn't leave us with enough input resolution to capture any further pattern, even with an exhaustive NAS.
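To make the baseline above concrete, here is a minimal sketch of a frozen 1-D embedding followed by a single Dense(1) sigmoid head. Plain NumPy stands in for Keras/BERT; `EMBED_DIM`, the synthetic data, and the training loop are illustrative assumptions, not the actual Cerebros test.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 8          # stand-in for BERT's 768-d pooled output
N = 200

# Pretend these are frozen BERT embeddings for N documents.
X = rng.normal(size=(N, EMBED_DIM))
true_w = rng.normal(size=EMBED_DIM)
y = (X @ true_w > 0).astype(float)    # synthetic binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Dense(1) head: weights + bias, trained by gradient descent on
# binary cross-entropy. The embedding itself is never updated, which
# is why its resolution caps the achievable accuracy.
w = np.zeros(EMBED_DIM)
b = 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad = p - y                      # dBCE/dlogit
    w -= lr * (X.T @ grad) / N
    b -= lr * grad.mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"binary accuracy: {acc:.4f}")
```

Any NAS built on top of the same frozen embedding sees exactly the same `X`, which is the bottleneck described above.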
I have an idea that is linear algebra based that may be a better text embedding for Cerebros and could give much better resolution and may perform better on small data sets.
I reduced the length of the test for the text model, as it took 2 hours to run 7 neural architecture moieties at 1 run per moiety. Reducing to 2.
API additions appear to be stable. Need to remove test prints. Question: should we add CV capabilities to the current branch, or merge this in now and create another?
Holding for #69
Just added an image classification test. Tests are running. This will likely need a few hours of debugging, but this major milestone may be completed soon.
Kind of issue: feature-request-or-enhancement
Parameter base_models: a list accepting one or many objects of type Keras model (each having a 1-D output). These embeddings will be inserted between the input layer(s) and the downstream layers, in an order matching the list of inputs. (Mirror of the Cerebros Enterprise repo issue of the same title.)
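The wiring described above can be sketched as follows. This is a hypothetical illustration, not the actual Cerebros implementation: `wire_base_models` and the toy embedding are invented names, and plain Python callables stand in for Keras models with 1-D outputs.

```python
def wire_base_models(inputs, base_models):
    """Pair inputs with base models positionally and return embeddings.

    Each base model is applied to the input at the same list index,
    mirroring the ordering rule in the `base_models` parameter above.
    A None entry means that input passes through unchanged.
    """
    if len(inputs) != len(base_models):
        raise ValueError("base_models must match inputs one-to-one")
    embeddings = []
    for x, model in zip(inputs, base_models):
        embeddings.append(x if model is None else model(x))
    return embeddings

# Toy stand-in: a "text embedding model" mapping a string to a
# fixed-length 1-D vector (here, simple character statistics).
def toy_text_embedding(text):
    return [len(text), text.count(" "), sum(map(ord, text)) % 97]

emb = wire_base_models(["hello world"], [toy_text_embedding])
print(emb)  # one 1-D embedding per input, in input order
```

In the real API, the returned embeddings would feed the downstream Cerebros layers; a `None`-style passthrough is one plausible way to leave an input unembedded.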