Open datascientist opened 1 year ago
@datascientist thanks for the issue. @alberttorosyan could this have to do with improper run closures?
@datascientist are the 900 calls part of a single execution/scenario or 900 different executions?
@gorarakelyan I have 400+ documents. For each document I ask the LLM to do two separate tasks (whose outputs I then process, parse, and record), and each task is a separate call/execution, so a total of close to 900 calls.
🐛 Bug
aimrocks.errors.RocksIOError: b'IO error: While open a file for random read: /.aim/meta/chunks/d9b1e76ebed74634b9545c8e/000009.sst: Too many open files'
Made close to 900 API calls to OpenAI and ran aim_callback.flush_tracker after each one; a rough sketch of the call pattern is below.
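For context, a minimal sketch of that per-call pattern, assuming the LangChain AimCallbackHandler integration (the repo path and experiment name are placeholders, and depending on the LangChain version the handler may need to be wired through a CallbackManager rather than the callbacks argument):

```python
from langchain.callbacks import AimCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

# Aim callback that logs each LLM call into the local .aim repo
aim_callback = AimCallbackHandler(
    repo=".",                                     # placeholder repo path
    experiment_name="keyword/entity extraction",  # placeholder name
)

llm = OpenAI(temperature=0, callbacks=[StdOutCallbackHandler(), aim_callback])

result = llm("Extract the keywords from the following text: ...")
# flush_tracker runs after every API call, close to 900 times in total
aim_callback.flush_tracker(langchain_asset=llm, reset=True)
```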
To reproduce
Asking the OpenAI LLM to do keyword extraction and entity extraction on a series of documents. The documents are not that long; their average length is about 5,250 characters. A sketch of the reproduction loop is shown below.
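Roughly, the loop looks like this, assuming the setup sketched above (load_documents and the prompts are illustrative placeholders, not the exact code):

```python
documents = load_documents()  # hypothetical loader for the 400+ documents (~5,250 characters each)

for doc in documents:
    # Task 1: keyword extraction
    keywords = llm(f"Extract the main keywords from this document:\n\n{doc}")
    aim_callback.flush_tracker(langchain_asset=llm, reset=True)

    # Task 2: entity extraction
    entities = llm(f"Extract the named entities from this document:\n\n{doc}")
    aim_callback.flush_tracker(langchain_asset=llm, reset=True)

# 400+ documents x 2 tasks comes to close to 900 calls;
# the RocksIOError above appears partway through the loop
```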
Expected behavior
I want to be able to use the Aim UI to analyze the different API calls.
Environment
Python 3.9
macOS 13.2.1
aim 3.17.3
aim-ui 3.17.3
aimrecords 0.0.7
aimrocks 0.4.0
langchain 0.0.141
openai 0.27.2
openapi-schema-pydantic 1.2.4
Additional context