simonw / llm-gpt4all

Plugin for LLM adding support for the GPT4All collection of models
Apache License 2.0

Fixes #32 - create GPT4AllModel Option for device #33

Open scotscotmcc opened 2 months ago

scotscotmcc commented 2 months ago

Fixes #32

I hope I'm not getting ahead of myself, but I put together this modest commit to allow passing the device through to GPT4All. It adds an Option for device and then passes it in when GPT4All(...) is instantiated.
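For context, here is a minimal sketch of the shape of the change, following the usual `llm` plugin Options pattern. The class name, model name, and `execute` body are illustrative placeholders rather than the exact diff; the key parts are the pydantic `Field` for `device` and passing `prompt.options.device` to the `GPT4All(...)` constructor.

```python
# Sketch only - names are illustrative, not the exact code in this PR.
from typing import Optional

import llm
from gpt4all import GPT4All
from pydantic import Field


class Gpt4AllModel(llm.Model):
    class Options(llm.Options):
        device: Optional[str] = Field(
            description="Processing unit to run the model on, e.g. cpu or gpu",
            default="cpu",
        )

    def execute(self, prompt, stream, response, conversation):
        # Pass the device option through when the underlying model is loaded.
        # "orca-mini-3b-gguf2-q4_0" is just a placeholder model name here.
        gpt_model = GPT4All("orca-mini-3b-gguf2-q4_0", device=prompt.options.device)
        yield gpt_model.generate(prompt.prompt)
```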

I went back and forth on whether the default value should be cpu or None, and am submitting this with cpu. GPT4All currently uses cpu as its default, and I imagine that is unlikely to change. With None, GPT4All would decide how to handle it itself - which at the moment is exactly the same as passing cpu - so defaulting to cpu seemed to make more sense.
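With cpu as the default, the option only needs to be set explicitly when someone wants GPU inference, using the standard `-o key value` syntax of the `llm` CLI (the model alias below is just an example):

```bash
llm -m mistral-7b-instruct-v0 -o device gpu "Tell me a joke"
```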

I tested this for myself, but did not add any actual tests to the library for it. It isn't really clear to me how the fake responses in the existing tests work...