Closed — mossly closed this issue 1 year ago
Hey! So I actually updated the code to make this import possible, but I realized that the PyPI package hasn't been updated yet. I've reverted the change in the README, so it should be good :)
Thanks for your assistance!
I'm pretty close to getting things working but hit another error I've been unable to debug.
generated_level = mario_lm.sample(
  File "lm.py", line 113, in sample
    [self.prompter.output_hidden(prompt) for prompt in prompts]
  File "lm.py", line 113, in <listcomp>
This is using the default prompts = ["many pipes", "many enemies", "some blocks", "high elevation"]
Any help would be greatly appreciated!
Hey! So multiple prompts should now work. You should probably clone the repo or update from PyPI. On another note, when using the default prompts, make sure you're using:
prompts = ["many pipes, many enemies, some blocks, high elevation"]
instead of
prompts = ["many pipes", "many enemies", "some blocks", "high elevation"]
If you split the categories, the model could still work, but I haven't tested that out extensively.
Also, make sure you update your transformers package :)
Thanks for pointing this out though, lmk if there are any other errors!
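To illustrate the distinction above with a pure-Python sketch (no MarioGPT required): the intended input is a single string whose comma-separated categories the prompter parses itself, while the split form hands the model four independent one-category prompts. The `parse_prompt` helper here is a hypothetical stand-in for that parsing step, not the library's actual code.

```python
def parse_prompt(prompt: str) -> list[str]:
    """Hypothetical stand-in for the prompter's parsing step:
    split one combined prompt into its comma-separated categories."""
    return [part.strip() for part in prompt.split(",")]

# ONE prompt carrying four categories (the intended form)
combined = ["many pipes, many enemies, some blocks, high elevation"]
# FOUR separate prompts, each carrying a single category
split_form = ["many pipes", "many enemies", "some blocks", "high elevation"]

print(len(combined))                 # 1 prompt
print(parse_prompt(combined[0]))     # its four categories
print(len(split_form))               # 4 prompts
```

So the two lists look similar but mean different things to the model, which is why the combined form is the one to use.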
I updated the repo, transformers, fixed up the prompts as per your advice, and now everything is working on Windows 11 with Python 3.10.
Thanks again for all your help! :)
Hi,
First of all thanks for your novel implementation! This is very cool to see.
When running the minimal code snippet provided in the readme, I got the following error:
I resolved it by changing the line:
from mario_gpt import MarioLM
to
from mario_gpt.lm import MarioLM
Perhaps this is just a simple mistake in the README? Or it could be user error on my part...
Thanks for your attention!
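As a side note (not from the thread itself): if you want code that tolerates both package layouts mentioned above, a defensive import loop works regardless of which release is installed. This is a generic sketch; it assumes only that `MarioLM` lives at one of the two paths discussed in this issue.

```python
import importlib

MarioLM = None
# Try the newer root re-export first, then the older module path.
for module_path in ("mario_gpt", "mario_gpt.lm"):
    try:
        module = importlib.import_module(module_path)
        MarioLM = getattr(module, "MarioLM")
        break
    except (ImportError, AttributeError):
        continue

if MarioLM is None:
    print("mario_gpt is not installed; install it from PyPI or clone the repo")
```

The `try`/`except` around each path means the snippet degrades gracefully instead of crashing when the package is missing entirely.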