likejazz / llama3.np

llama3.np is a pure NumPy implementation for Llama 3 model.
MIT License
958 stars 73 forks

Suggestion: User input #4

Open WaitingOak opened 4 months ago

WaitingOak commented 4 months ago

You can prompt for user input so that, when the file is run, the user is asked for a story prompt instead of having to hardcode one every time. In llama3.py at line 274, instead of `prompt = "I have a dream"` you can put something like `prompt = input('Give me a prompt to start writing: ')`, which prints the text inside the parentheses and uses whatever the user types as the `prompt` variable.

WaitingOak commented 4 months ago

You can also build on the above suggestion. Indent the last bit of the code under the entry-point guard:

```python
if __name__ == '__main__':
    args = ModelArgs()

    tokenizer = Tokenizer("./tokenizer.model.np")
    model = Llama("./stories15M.model.npz", args)

    if len(sys.argv) == 1:
        prompt = input('Give me a prompt to start writing: ')
    else:
        prompt = sys.argv[1]

    print(f"\n{prompt}", end="")
    input_ids = np.array([tokenizer.encode(prompt)])
    start = time.time()
    _, L = input_ids.shape
    for id in model.generate(input_ids, args.max_new_tokens):
        L += 1
        output_id = id[0].tolist()
        if output_id[-1] in [tokenizer.eos_id, tokenizer.bos_id]:
            break
        print(tokenizer.decode(output_id), end="")
        sys.stdout.flush()
    elapsed = time.time() - start
    print(f"\n\nToken count: {L}, elapsed: {elapsed:.2f}s, {round(L / elapsed)} tokens/s")
```

To keep the prompting going, make it a function that asks whether to run again, and call it from the entry-point guard:

```python
def start_prompt():
    args = ModelArgs()

    tokenizer = Tokenizer("./tokenizer.model.np")
    model = Llama("./stories15M.model.npz", args)

    if len(sys.argv) == 1:
        prompt = input('Give me a prompt to start writing: ')
    else:
        prompt = sys.argv[1]

    print(f"\n{prompt}", end="")
    input_ids = np.array([tokenizer.encode(prompt)])
    start = time.time()
    _, L = input_ids.shape
    for id in model.generate(input_ids, args.max_new_tokens):
        L += 1
        output_id = id[0].tolist()
        if output_id[-1] in [tokenizer.eos_id, tokenizer.bos_id]:
            break
        print(tokenizer.decode(output_id), end="")
        sys.stdout.flush()
    elapsed = time.time() - start
    print(f"\n\nToken count: {L}, elapsed: {elapsed:.2f}s, {round(L / elapsed)} tokens/s")

    again = input('Would you like to continue with another prompt (yes/no): ')
    if again == 'yes':
        start_prompt()


if __name__ == '__main__':
    start_prompt()
```

And call the function to start it off when running the code for the first time.
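One caveat with calling the function recursively: each "yes" answer adds a stack frame, so a very long interactive session could in principle hit Python's recursion limit. A `while` loop avoids that. Here is a minimal sketch of the loop shape only (the `repl`, `read`, and `run` names are illustrative, not part of llama3.py; `run` stands in for the tokenizer/generation code above, and the callables are injectable so the loop can be exercised without a real terminal):

```python
def repl(read=input, run=print):
    """Keep asking for prompts until the user answers anything but 'yes'.

    read -- callable used to get user input (defaults to built-in input)
    run  -- callable invoked with each prompt (stand-in for generation)
    """
    while True:
        prompt = read('Give me a prompt to start writing: ')
        run(prompt)  # here you would encode the prompt and generate tokens
        again = read('Would you like to continue with another prompt (yes/no): ')
        if again != 'yes':
            break
```

Because `read` and `run` are parameters, the same function works both interactively and in a scripted test, and the loop uses constant stack space no matter how many prompts the user enters.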