Closed: lucasmsoares96 closed this issue 6 days ago
Hey @lucasmsoares96,
Thanks for reaching out.
Your program hangs for a few seconds after the call to the model when using LangChain.dart because you are not closing the HTTP client.
From the `http` package docs:

> Some clients maintain a pool of network connections that will not be disconnected until the client is closed. This may cause programs using the Dart SDK (`dart run`, `dart test`, `dart compile`, etc.) to not terminate until the client is closed.
LangChain.dart uses a single client for all requests. This way, if you make multiple requests, you benefit from the already-open connection (plus caching when relevant).
dart_openai, instead, creates a new client for every request and closes it when the request is done (unless you provide your own client).
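As an aside, you can get the best of both behaviors by managing the connection lifetime yourself. This is only a sketch: it assumes the `ChatOpenAI` constructor in your version exposes an optional `client` parameter accepting an `http.Client` (check the `langchain_openai` API docs for your version), and `'sk-...'` is a placeholder key:

```dart
// Sketch: pass your own http.Client so you decide when connections close.
// Assumes ChatOpenAI accepts an optional `client` parameter.
import 'package:http/http.dart' as http;
import 'package:langchain/langchain.dart' as langchain;
import 'package:langchain_openai/langchain_openai.dart';

Future<void> main() async {
  final httpClient = http.Client(); // you own this client's lifetime
  final chat = ChatOpenAI(
    apiKey: 'sk-...',
    client: httpClient,
  );
  final msg = await chat([langchain.ChatMessage.humanText('Hi!')]);
  print(msg.content);
  httpClient.close(); // the program can now terminate promptly
}
```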
To fix it, just add `chat.close();` when you are done using `ChatOpenAI`.
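If your requests can throw, it is safer to close the client in a `finally` block so the pooled connections are released even on error. A minimal sketch (with `'sk-...'` as a placeholder key):

```dart
import 'package:langchain/langchain.dart' as langchain;
import 'package:langchain_openai/langchain_openai.dart';

Future<void> main() async {
  final chat = ChatOpenAI(apiKey: 'sk-...');
  try {
    final msg =
        await chat([langchain.ChatMessage.humanText('Define Dart in 3 words')]);
    print(msg.content);
  } finally {
    chat.close(); // always release pooled connections, even on error
  }
}
```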
I edited your code to run 50 consecutive requests; these are the results:
```dart
import 'dart:io';

import 'package:dart_openai/dart_openai.dart' as openai;
import 'package:langchain/langchain.dart' as langchain;
import 'package:langchain_openai/langchain_openai.dart';

Future<int> main(List<String> arguments) async {
  try {
    final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
    if (openaiApiKey == null) throw Exception('OPENAI_API_KEY not defined');

    final stopwatch = Stopwatch()..start();

    print('## Testing dart_openai...');
    await openaiTest(openaiApiKey);
    print('## dart_openai executed in ${stopwatch.elapsed.inSeconds}s');
    // ## dart_openai executed in 30.479278s (0.60s/req)

    stopwatch.reset();

    print('## Testing langchain_openai...');
    await langchainTest(openaiApiKey);
    print('## langchain_openai executed in ${stopwatch.elapsed.inSeconds}s');
    // ## langchain_openai executed in 28.279913s (0.56s/req)

    return 0;
  } catch (e) {
    stderr.writeln(e);
    return 1;
  }
}

Future<void> openaiTest(String openaiApiKey) async {
  openai.OpenAI.apiKey = openaiApiKey;
  for (int i = 0; i < 50; i++) {
    final chatCompletion = await openai.OpenAI.instance.chat.create(
      model: 'gpt-4o-mini',
      messages: [
        openai.OpenAIChatCompletionChoiceMessageModel(
          role: openai.OpenAIChatMessageRole.user,
          content: [
            openai.OpenAIChatCompletionChoiceMessageContentItemModel.text(
              'Define Dart in 3 words',
            ),
          ],
        ),
      ],
      temperature: 0,
    );
    print(chatCompletion.choices[0].message.content);
  }
}

Future<void> langchainTest(String openaiApiKey) async {
  final chat = ChatOpenAI(
    apiKey: openaiApiKey,
    defaultOptions: const ChatOpenAIOptions(
      temperature: 0,
      model: 'gpt-4o-mini',
    ),
  );
  for (int i = 0; i < 50; i++) {
    final usrMsg = langchain.ChatMessage.humanText('Define Dart in 3 words');
    final aiMsg = await chat([usrMsg]);
    print(aiMsg.content);
  }
  chat.close(); // ADD THIS
}
```
@davidmigloz Thank you!
System Info
langchain: 0.7.5
langchain_openai: 0.7.1
Dart: Dart SDK version: 3.5.3 (stable) (None) on "linux_x64"
System: Linux ubuntu 5.15.0-122-generic #132-Ubuntu SMP Thu Aug 29 13:45:52 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Related Components
Reproduction
I suspected that the langchain.dart code was taking too long to finish executing. So I created a simple example comparing the total execution time of the langchain.dart lib against the dart_openai lib, using the `time` command to measure total execution time.
output
As suspected, the total execution time of the langchain.dart lib was 3.35 times that of the dart_openai lib, making the lib impractical to use in ephemeral scenarios.
Expected behavior
The total execution time of the application should be close to 7 seconds (the time achieved by the dart_openai lib).