google-deepmind / concordia

A library for generative social simulation
Apache License 2.0

Pass device in launch_concordia_challenge_evaluation.py #97

Open depshad opened 1 month ago

depshad commented 1 month ago

If we are using a local model, we need to pass a device to utilise the GPU for inference. However, in launch_concordia_challenge_evaluation.py:

```python
# Language Model setup
model = utils.language_model_setup(
    api_type=args.api_type,
    model_name=args.model_name,
    api_key=args.api_key,
    disable_language_model=args.disable_language_model,
)
```

So if we run the evaluation script from the command line, the device argument, which is present in utils.language_model_setup, is never passed, and in this case it defaults to 'cpu'.
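A minimal sketch of what exposing the device from the command line might look like. The `--device` flag and its defaults here are assumptions for illustration, not current Concordia code; the issue only states that `language_model_setup` itself accepts a device.

```python
import argparse

# Hypothetical sketch: add a --device flag to the evaluation script and
# forward its value to the language model setup call. The flag name and
# defaults below are illustrative assumptions.
parser = argparse.ArgumentParser()
parser.add_argument('--api_type', default=None)
parser.add_argument('--model_name', default=None)
parser.add_argument('--api_key', default=None)
parser.add_argument('--device', default='cpu')  # e.g. 'cuda:0' for GPU inference

# Simulate a command-line invocation that selects the GPU.
args = parser.parse_args(['--device', 'cuda:0'])

# The value would then be forwarded as device=args.device in the
# utils.language_model_setup(...) call shown above.
print(args.device)
```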

jzleibo commented 1 month ago

This is not really a blocking issue since I'm sure you can just manually edit the file to pass the device. So probably not super urgent here. Anyway though, in principle we might want to loft this device setting all the way out to become a command line argument, but I would worry a bit about adding model-specific complexity into the interface at that level. @jagapiou what do you think?

jagapiou commented 1 month ago

api_key is already model specific: some models don't support that argument. So I think it's OK to solve it the same way: have device default to None and only forward it if it's explicitly set (sent you a CL).
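The forwarding pattern described above can be sketched as a small helper: default `device` to `None` and only include it in the keyword arguments when it is explicitly set, so models that do not accept a device argument are unaffected. This is an illustrative sketch, not the code from the CL.

```python
def build_setup_kwargs(device=None, **kwargs):
    # Only forward `device` when it was explicitly provided; otherwise
    # leave it out entirely so backends without a device parameter
    # never see an unexpected keyword argument.
    if device is not None:
        kwargs['device'] = device
    return kwargs

print(build_setup_kwargs())                  # {}
print(build_setup_kwargs(device='cuda:0'))   # {'device': 'cuda:0'}
```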

If we have a lot of model-specific settings it might be better to have a --model_settings=device=gpu0,use_codestral=True type flag.
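A flag of that shape could be parsed roughly as follows. This is a hypothetical sketch of the proposed --model_settings flag, not an existing Concordia feature; the key names and the crude bool coercion are assumptions.

```python
def parse_model_settings(flag_value):
    """Parse a 'key=value,key=value' style --model_settings flag (sketch)."""
    settings = {}
    for item in flag_value.split(','):
        key, _, value = item.partition('=')
        # Crude coercion for flag-style booleans; real code would likely
        # want per-setting types.
        if value in ('True', 'False'):
            value = value == 'True'
        settings[key] = value
    return settings

print(parse_model_settings('device=gpu0,use_codestral=True'))
```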