Agent Cloud is like having your own GPT builder with a bunch of extra goodies. The GUI features: 1) a RAG pipeline that can natively embed 260+ data sources, 2) conversational apps (like GPTs), 3) multi-agent process automation apps (CrewAI), 4) tools, and 5) teams + user permissions. Get started fast with Docker and our install.sh.
If you look at the logs, you will see twice as many prints as rows. For example, when syncing a table with 1 row:
Datasource retrieved from Mongo: 66fcd0619953c0a0f29d8663
Datasource retrieved from Mongo: 66fcd0619953c0a0f29d8663
The first print comes from:
rabbit_consume -> process_message -> get_model
The second comes from:
process_incoming_messages -> handle_embedding -> embed_text_construct_point -> embed_text -> get_model
(A minimal sketch of these two call paths follows the backtraces below.)
Full backtraces after each print:
```
Datasource retrieved from Mongo: 66fcd0619953c0a0f29d8663
backtrace: 0: vector_db_proxy::adaptors::mongo::queries::get_model::{{closure}}
at ./src/adaptors/mongo/queries.rs:41:39
1: vector_db_proxy::messages::tasks::process_message::{{closure}}
at ./src/messages/tasks.rs:67:67
2: vector_db_proxy::adaptors::rabbitmq::models::rabbit_consume::{{closure}}
at ./src/adaptors/rabbitmq/models.rs:105:38
3: ::consume::{{closure}}
at ./src/messages/models.rs:61:88
4: vector_db_proxy::main::{{closure}}::{{closure}}
at ./src/main.rs:128:14
5: as core::future::future::Future>::poll
at /usr/src/debug/rust/rustc-1.81.0-src/library/core/src/future/future.rs:123:9
... removed non useful stack info
Datasource retrieved from Mongo: 66fcd0619953c0a0f29d8663
backtrace: 0: vector_db_proxy::adaptors::mongo::queries::get_model::{{closure}}
at ./src/adaptors/mongo/queries.rs:41:39
1: vector_db_proxy::embeddings::utils::embed_text::{{closure}}
at ./src/embeddings/utils.rs:156:78
2: vector_db_proxy::data::processing_incoming_messages::embed_text_construct_point::{{closure}}
at ./src/data/processing_incoming_messages.rs:82:93
3: vector_db_proxy::data::processing_incoming_messages::handle_embedding::{{closure}}
at ./src/data/processing_incoming_messages.rs:132:6
4: vector_db_proxy::data::processing_incoming_messages::process_incoming_messages::{{closure}}::{{closure}}
at ./src/data/processing_incoming_messages.rs:240:42
5: as core::future::future::Future>::poll
at /usr/src/debug/rust/rustc-1.81.0-src/library/core/src/future/future.rs:123:9
6: tokio::runtime::task::core::Core::poll::{{closure}}
... removed non useful stack info
```
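To make the duplication concrete, here is a minimal, self-contained sketch of the two call paths above. The function names mirror the backtraces, but the bodies are hypothetical stand-ins (no Mongo, no async) rather than the real vector_db_proxy code; it only illustrates why a single row triggers two get_model lookups and two copies of the log line.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts simulated Mongo round trips so the duplication is measurable.
static MONGO_LOOKUPS: AtomicUsize = AtomicUsize::new(0);

struct Model {
    datasource_id: String,
}

// Stand-in for adaptors::mongo::queries::get_model: each call represents one
// Mongo query and emits the log line that shows up twice per row.
fn get_model(datasource_id: &str) -> Model {
    MONGO_LOOKUPS.fetch_add(1, Ordering::SeqCst);
    println!("Datasource retrieved from Mongo: {datasource_id}");
    Model {
        datasource_id: datasource_id.to_string(),
    }
}

// Path 1: rabbit_consume -> process_message -> get_model
fn process_message(datasource_id: &str) -> Model {
    get_model(datasource_id)
}

// Path 2: process_incoming_messages -> handle_embedding
//         -> embed_text_construct_point -> embed_text -> get_model
fn embed_text(datasource_id: &str) -> Model {
    get_model(datasource_id) // second lookup for the same datasource
}

fn embed_text_construct_point(datasource_id: &str) -> Model {
    embed_text(datasource_id)
}

fn handle_embedding(datasource_id: &str) -> Model {
    embed_text_construct_point(datasource_id)
}

fn main() {
    let id = "66fcd0619953c0a0f29d8663";

    // Syncing a table with a single row: both paths run for that one row.
    let from_consumer = process_message(id);
    let from_embedding = handle_embedding(id);

    // Same datasource, but it was fetched from "Mongo" twice.
    assert_eq!(from_consumer.datasource_id, from_embedding.datasource_id);
    assert_eq!(MONGO_LOOKUPS.load(Ordering::SeqCst), 2);
}
```

Assuming the model fetched in the consumer path is still valid when the embedding runs, passing it down into the embedding path (or caching it per datasource id) would cut this to one lookup per row; that is a suggestion, not something the current code does.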