openlm-research / open_llama

OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
Apache License 2.0

OpenLLaMA can quickly learn how to code #65

[Open] jorgemcgomes opened this issue 1 year ago

jorgemcgomes commented 1 year ago

I know the repo's README mentions that this model apparently can't code because consecutive spaces get merged by the tokenizer, and this has been discussed in #40.

However, I did some fine-tuning on the 3B model using the "fixed" tokenizer by @danielhanchen (https://huggingface.co/danielhanchen/open_llama_3b) with use_fast=True. This tokenizer encodes multiple spaces as multiple space tokens; it doesn't discard them like the "official" tokenizer does.
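For anyone who wants to see the difference themselves, here is a minimal sketch (not from the original thread) that compares how the two tokenizers treat indentation. It assumes the transformers library is installed and that both Hugging Face repos are reachable; the test string is illustrative.

```python
# Minimal sketch comparing space handling. The repo ids come from the
# thread; everything else here is illustrative.
from transformers import AutoTokenizer

fixed = AutoTokenizer.from_pretrained("danielhanchen/open_llama_3b", use_fast=True)
official = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")

snippet = "def f(x):\n    return x"  # four-space indent

print(fixed.tokenize(snippet))     # indentation should survive as space tokens
print(official.tokenize(snippet))  # consecutive spaces get merged, per the thread
```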

My fine-tuning dataset includes very little code, since code generation wasn't really my goal; it's just a small fraction of the instructions in the instruct datasets I used. But then I noticed this in one of the model's outputs. Lo and behold: perfectly indented Python code.

```python
class CraftingSystem:
    def __init__(self):
        super().__init__()
        self.items = []

    def add_item(self, item):
        self.items.append(item)

    def get_all_items(self):
        return self.items

    def get_item_name(self, item):
        return item[0]

    def get_item_description(self, item):
        return item[1]
```

A lot of people out there are simply repeating that OpenLLaMA is useless for code, but that doesn't seem to be the case, provided the tokenizer configuration is fixed and a little fine-tuning is done.

snichols commented 1 year ago

Great news, thanks for sharing!

derekelkins commented 1 year ago

It would be interesting if a LoRA could be trained for this, so that one could just apply the LoRA without needing to fine-tune the model. That LoRA might also be applicable to other OpenLLaMA-derived models.
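A minimal sketch of that idea using the peft library (not something from this thread; the rank, alpha, target modules, and output path are all illustrative assumptions):

```python
# Hypothetical LoRA training setup with peft; hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
config = LoraConfig(
    r=8,                                  # assumed low-rank dimension
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()

# ...train on a code-heavy dataset as usual, then save just the adapter,
# which could in principle be applied to other OpenLLaMA-derived checkpoints:
model.save_pretrained("open-llama-3b-code-lora")  # hypothetical path
```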

young-geng commented 1 year ago

Check out our OpenLLaMA v2 model, which is pretrained on a lot of code. Its official release will happen very soon.
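Once released, loading it should look like any other OpenLLaMA checkpoint. A minimal sketch, assuming a repo id like openlm-research/open_llama_7b_v2 (an assumption based on the v1 naming, since the official release hadn't happened at the time of this comment):

```python
# Sketch only; the repo id below is an assumption based on the v1 naming.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b_v2", use_fast=True)
model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b_v2")

prompt = "def fibonacci(n):"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```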

danielhanchen commented 1 year ago

@jorgemcgomes Oh, I kinda forgot to reply here! @young-geng Congrats on the new v2 release! Trying it out right now :) I can see that both the multiple-spaces issue and the fast tokenizer are fixed in the Hugging Face base repo (the thermal example you provided). Good work!