eunja511005 / AutoCoding


My Own Uncensored LLM #188

Open ywbestPark opened 2 months ago

ywbestPark commented 2 months ago

1. Reference sites

  1. LangChain: https://python.langchain.com/docs/get_started/introduction/

  2. LangChain quickstart: https://python.langchain.com/docs/get_started/quickstart/

  3. Ollama download: https://ollama.com/download

  4. Ollama GitHub: https://github.com/ollama/ollama/tree/main

  5. Translation: https://www.deepl.com/translator

  6. LLM with good Korean support: https://huggingface.co/heegyu/EEVE-Korean-Instruct-10.8B-v1.0-GGUF

 

2. Directory structure after installing Ollama

(screenshot: Ollama installation directory structure)

3. Pulling a local LLM

- Run Command Prompt as administrator
- Change to the Ollama installation directory
  . cd C:\Users\ywbes\AppData\Local\Programs\Ollama
- ollama pull {desired LLM}
  . To find a model, browse the Model Library at https://github.com/ollama/ollama
  . ollama pull gemma:2b

(screenshot: ollama pull output)

4. Launch Visual Studio Code and set up a virtual environment

- Working directory
  . D:\######\AI
- Create the virtual environment
  . python -m venv aienv
- Activate the virtual environment (activate from Command Prompt, not PowerShell)
  . cd aienv\Scripts
  . activate

5. Create an .ipynb file so the steps can be run one at a time

- Working directory
  . D:\######\AI\langchain_quick_start
- Test file name
  . simple_rag.ipynb

6. Install the required libraries

pip install langchain
(aienv) D:\ywbest\AI\rag>pip list
Package                  Version
------------------------ --------
aiohttp                  3.9.3
aiosignal                1.3.1
annotated-types          0.6.0
attrs                    23.2.0
beautifulsoup4           4.12.3
bs4                      0.0.2
certifi                  2024.2.2
charset-normalizer       3.3.2
dataclasses-json         0.5.14
frozenlist               1.4.1
greenlet                 3.0.3
idna                     3.6
jsonpatch                1.33
jsonpointer              2.4
langchain                0.0.259
langchain-community      0.0.32
langchain-core           0.1.41
langchain-text-splitters 0.0.1
langsmith                0.0.92
marshmallow              3.21.1
multidict                6.0.5
mypy-extensions          1.0.0
numexpr                  2.10.0
numpy                    1.26.4
openapi-schema-pydantic  1.2.4
orjson                   3.10.0
packaging                23.2
pip                      22.3.1
pydantic                 1.10.15
pydantic_core            2.16.3
PyYAML                   6.0.1
requests                 2.31.0
setuptools               65.5.0
soupsieve                2.5
SQLAlchemy               2.0.29
tenacity                 8.2.3
typing_extensions        4.11.0
typing-inspect           0.9.0
urllib3                  2.2.1
yarl                     1.9.4
eunja511005 commented 2 months ago

chat with prompt template

from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate

llm = Ollama(model="gemma:2b")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are world class technical documentation writer."),
    ("user", "{input}")
])

chain = prompt | llm 

res = chain.invoke({"input": "how can langsmith help with testing?"})

print(res)
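Conceptually, `ChatPromptTemplate.from_messages` just substitutes the supplied variables into each `(role, template)` pair before the result is handed to the model. A rough pure-Python sketch of that substitution step (illustrative only, not LangChain's actual implementation):

```python
# Each (role, template) pair is formatted with the variables passed at invoke time.
messages = [
    ("system", "You are world class technical documentation writer."),
    ("user", "{input}"),
]

def format_messages(messages, **variables):
    """Substitute template variables into every message template."""
    return [(role, template.format(**variables)) for role, template in messages]

formatted = format_messages(messages, input="how can langsmith help with testing?")
print(formatted[1])  # ('user', 'how can langsmith help with testing?')
```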
eunja511005 commented 2 months ago

add output parser

from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = Ollama(model="gemma:2b")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are world class technical documentation writer."),
    ("user", "{input}")
])

output_parser = StrOutputParser()

chain = prompt | llm | output_parser

res = chain.invoke({"input": "how can langsmith help with testing?"})

print(res)
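The `prompt | llm | output_parser` syntax is LangChain's pipe operator: each component's output becomes the next component's input. A toy sketch of how such piping can be built on `__or__` (these are stand-in classes, not LangChain's real runnables):

```python
class Runnable:
    """Toy runnable: wraps a function and supports | composition."""
    def __init__(self, func):
        self.func = func
    def invoke(self, value):
        return self.func(value)
    def __or__(self, other):
        # Compose: run self first, then feed its output into other.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda d: f"user: {d['input']}")
llm = Runnable(lambda text: {"content": text.upper()})   # stand-in for the model
parser = Runnable(lambda msg: msg["content"])            # stand-in for StrOutputParser

chain = prompt | llm | parser
print(chain.invoke({"input": "hello"}))  # USER: HELLO
```

Without the parser stage, the chain would stop at the raw message dict, which is why `StrOutputParser` at the end leaves you with a plain string.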
eunja511005 commented 2 months ago

Web based Loader

from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")

docs = loader.load()

print(docs)
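`loader.load()` returns a list of `Document` objects, each holding the fetched page text plus metadata such as the source URL. A minimal sketch of that shape (a simplified stand-in for LangChain's actual `Document` class, with made-up page content):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Simplified stand-in for langchain_core.documents.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

# Roughly what loader.load() produces for one fetched page:
docs = [Document(
    page_content="LangSmith is a platform for LLM application development...",
    metadata={"source": "https://docs.smith.langchain.com/user_guide"},
)]

print(docs[0].metadata["source"])  # https://docs.smith.langchain.com/user_guide
```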
eunja511005 commented 2 months ago

add embedding model to vector store

※ Required library: pip install faiss-cpu

from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")

docs = loader.load()

embeddings = OllamaEmbeddings()

text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)
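`FAISS.from_documents` embeds every chunk and indexes the vectors so that a question can later be matched against them by similarity. A pure-Python sketch of the underlying idea, nearest neighbor by cosine similarity over toy vectors (FAISS itself does this at scale with optimized index structures; the vectors here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in reality these come from the embedding model.
index = {
    "LangSmith helps with testing": [0.9, 0.1, 0.0],
    "Ollama runs models locally":   [0.1, 0.9, 0.2],
}

query_vec = [0.8, 0.2, 0.1]  # pretend embedding of the question
best = max(index, key=lambda text: cosine(index[text], query_vec))
print(best)  # LangSmith helps with testing
```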
eunja511005 commented 2 months ago

add document loader, splitter, embedding model, vector store, retrieval_chain

※ pip install --upgrade langchain
※ pip install faiss-cpu
※ pip install sentence-transformers

from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
import time

start_time = time.time()  # record the start time

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

llm = Ollama(model="gemma:2b")
document_chain = create_stuff_documents_chain(llm, prompt)

loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")

docs = loader.load()

# embeddings = OllamaEmbeddings()
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)

retriever = vector.as_retriever()
retrieval_chain = create_retrieval_chain(retriever, document_chain)

response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
print(response["answer"])

end_time = time.time()  # record the end time
print(f"Execution time: {end_time - start_time} seconds")
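`create_retrieval_chain` ties the retriever and the document chain together: retrieve the top-k chunks, "stuff" them into the `{context}` slot of the prompt, then ask the model. A schematic pure-Python version of that flow, with a naive keyword-overlap retriever and a mocked model standing in for the real components:

```python
PROMPT = """Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}"""

chunks = [
    "LangSmith lets you debug, test and monitor LLM applications.",
    "Ollama serves local models over a REST API.",
]

def retrieve(question, k=1):
    # Stand-in retriever: naive keyword overlap instead of vector similarity.
    def score(chunk):
        return len(set(question.lower().split()) & set(chunk.lower().split()))
    return sorted(chunks, key=score, reverse=True)[:k]

def fake_llm(prompt_text):
    # Stand-in model: echoes the first context line it was given.
    return "Answer based on: " + prompt_text.splitlines()[3]

question = "how can langsmith help with testing?"
context = "\n".join(retrieve(question))           # the "stuff" step
answer = fake_llm(PROMPT.format(context=context, input=question))
print(answer)
```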
eunja511005 commented 2 months ago

Converted into a Korean-answer example, using Chroma (chromadb) as the vector store and CharacterTextSplitter as the splitter

※ pip install chromadb

(screenshot)

from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
import time

start_time = time.time()  # record the start time

prompt = ChatPromptTemplate.from_template("""Never give arbitrary responses. Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

llm = Ollama(model="gemma:2b")
document_chain = create_stuff_documents_chain(llm, prompt)

loader = TextLoader('korea_constitution.txt.bak', encoding = 'UTF-8')

docs = loader.load()
#print(docs)

embeddings = OllamaEmbeddings()
#embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(docs)
vector = Chroma.from_documents(documents, embeddings)

print(len(documents))

retriever = vector.as_retriever(search_kwargs={"k": 22})
retrieval_chain = create_retrieval_chain(retriever, document_chain)

response = retrieval_chain.invoke({"input": "대통령 임기는?"})  # "What is the President's term of office?"
print(response["answer"])

end_time = time.time()  # record the end time
print(f"Execution time: {end_time - start_time} seconds")
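`CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)` cuts the document into pieces of at most roughly 1000 characters; a non-zero overlap repeats the tail of each chunk at the start of the next, so sentences cut at a boundary stay retrievable. A simplified fixed-window sketch of what those two parameters mean (the real splitter prefers to cut at separators rather than at exact character offsets; the overlap of 500 below is just for illustration):

```python
def split_text(text, chunk_size, chunk_overlap):
    """Naive fixed-window splitter illustrating chunk_size / chunk_overlap."""
    chunks, start = [], 0
    step = chunk_size - chunk_overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

text = "가" * 2500  # stand-in for the korea_constitution.txt contents
no_overlap = split_text(text, chunk_size=1000, chunk_overlap=0)
overlapped = split_text(text, chunk_size=1000, chunk_overlap=500)

print(len(no_overlap), len(overlapped))  # 3 5
```

More overlap means more chunks (and more embedding work) for the same text, which is one reason this example keeps chunk_overlap at 0.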
eunja511005 commented 2 months ago

Check your GPU (Win + R > dxdiag > Display tab)

https://blog.naver.com/ww31ni/222533782993

(screenshot: dxdiag Display tab)

eunja511005 commented 2 months ago
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
import time

start_time = time.time()  # record the start time

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

llm = Ollama(model="gemma:2b")
document_chain = create_stuff_documents_chain(llm, prompt)

# loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")
loader = WebBaseLoader("https://raw.githubusercontent.com/puzzlet/constitution-kr/master/%EB%8C%80%ED%95%9C%EB%AF%BC%EA%B5%AD%20%ED%97%8C%EB%B2%95.txt")

docs = loader.load()

embeddings = OllamaEmbeddings()
# embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)

retriever = vector.as_retriever()
retrieval_chain = create_retrieval_chain(retriever, document_chain)

# response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
response = retrieval_chain.invoke({"input": "대통령 임기는?"})  # "What is the President's term of office?"
print(response["answer"])

end_time = time.time()  # record the end time
print(f"Execution time: {end_time - start_time} seconds")
eunja511005 commented 2 months ago

embeddings = OllamaEmbeddings()

(screenshot: response using OllamaEmbeddings)

eunja511005 commented 2 months ago

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

(screenshot: response using HuggingFaceEmbeddings)
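The two comparison runs above swap `OllamaEmbeddings` for `HuggingFaceEmbeddings` with `all-MiniLM-L6-v2`. One practical point when switching embedding models: different models produce vectors of different dimensions (all-MiniLM-L6-v2 produces 384-dimensional vectors; an Ollama model's embedding size depends on the model), so an index built with one model cannot be queried with the other and the vector store must be rebuilt. A toy illustration of that constraint (the 2048 dimension below is made up for the example):

```python
class ToyIndex:
    """Tiny stand-in for a vector store that enforces a fixed dimension."""
    def __init__(self, dim):
        self.dim = dim
        self.vectors = []
    def add(self, vec):
        if len(vec) != self.dim:
            raise ValueError(f"expected dim {self.dim}, got {len(vec)}")
        self.vectors.append(vec)

index = ToyIndex(dim=384)      # built with all-MiniLM-L6-v2 (384-dim vectors)
index.add([0.0] * 384)         # fine

try:
    index.add([0.0] * 2048)    # vector from a different embedding model
except ValueError as e:
    print("rebuild required:", e)
```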