
LLM

shannon. 2023. 12. 23. 21:53

1. Ollama: a system for running multiple LLM models locally

https://github.com/jmorganca/ollama

 

GitHub - jmorganca/ollama: Get up and running with Llama 2 and other large language models locally


https://ollama.ai/

 

Ollama: Get up and running with large language models, locally.
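Beyond the CLI (e.g. `ollama run llama2`), Ollama exposes a local HTTP API. Below is a minimal sketch of calling it from Python; it assumes `ollama serve` is running on the default port 11434 and that a model was already pulled with `ollama pull llama2`. The `/api/generate` payload follows the project's README at the time of writing, so treat the exact fields as an assumption.

import json
import urllib.request

# Minimal sketch: query a locally running Ollama server.
# Assumes `ollama serve` is up and `ollama pull llama2` was done beforehand;
# the endpoint and payload fields follow the Ollama README and may change.
def ollama_generate(prompt, model="llama2"):
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ollama_generate("Explain what a knowledge graph is in one sentence."))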

 

2. https://huggingface.co/HuggingFaceH4/zephyr-7b-beta

 

HuggingFaceH4/zephyr-7b-beta · Hugging Face


(First run downloads the model in eight safetensors shards of roughly 2 GB each, plus the index, config, and tokenizer files.)

 
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    # device_map="auto",  # uncomment to let accelerate place the weights across available devices
)

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
(Loading fetches the eight checkpoint shards and tokenizer files; transformers also warns that modifying the pretrained model configuration to control generation is deprecated and that the model's generation config should be used instead.)

 

Output from an actual run:

<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Me hearty, me wittiest parrot, have ye ever heard o' a human eatin' helicopters? I fear that be a tall tale told by landlubbers with too much grog in their bellies. Helicopters are not meant to be consumed by any living creature, human or pirate. Best stick to me scurvy dog's favorite grub, scurvy biscuits and barrels o' grog! Argh!
 
 
3. https://www.promptingguide.ai/techniques/rag

 

Retrieval Augmented Generation (RAG), from the Prompt Engineering Guide
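The guide describes RAG in prose: retrieve documents relevant to the query, then condition generation on them. A toy sketch of that retrieve-then-generate loop is below; the corpus, the word-overlap scorer, and the final print are illustrative stand-ins, and a real pipeline would use an embedding-based vector store and one of the models linked in this post.

# Toy RAG sketch: rank documents by word overlap with the question,
# then stuff the top match into the prompt. All names here are illustrative.
def retrieve(question, corpus, k=1):
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question, contexts):
    context_block = "\n".join(contexts)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

corpus = [
    "Zephyr-7B-beta is a fine-tuned version of Mistral-7B.",
    "Ollama runs large language models locally via a CLI and HTTP API.",
]
question = "What is Zephyr-7B-beta based on?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)  # feed this prompt to any model above, e.g. the zephyr pipeline in item 2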

 

4. https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca

 

Open-Orca/Mistral-7B-OpenOrca · Hugging Face

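Per its model card, this OpenOrca fine-tune expects OpenAI's ChatML prompt format rather than Zephyr's <|system|>/<|user|> tags. A hand-rolled sketch of that format is below; in practice the tokenizer's apply_chat_template (as in item 2) is the safer route, so take the literal token layout as an assumption from the card.

# ChatML-style prompt for Open-Orca/Mistral-7B-OpenOrca (layout per the model card).
system = "You are a helpful assistant."
user = "How many helicopters can a human eat in one sitting?"
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)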

 

5. https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1

 

mistralai/Mistral-7B-Instruct-v0.1 · Hugging Face

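Mistral's own instruct model uses yet another template: user turns wrapped in [INST] ... [/INST], with no system role in v0.1. The sketch below delegates the exact token layout to the tokenizer's chat template, as in the zephyr example; the output shown in the comment is approximate.

from transformers import AutoTokenizer

# Format a conversation with the model's own chat template;
# the v0.1 template only accepts alternating user/assistant turns.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Mayonnaise, of course."},
    {"role": "user", "content": "Do you have recipes that use it?"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# roughly: <s>[INST] What is your favourite condiment? [/INST]Mayonnaise, of course.</s> [INST] Do you have recipes that use it? [/INST]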

 

6. https://github.com/rahulnyk/knowledge_graph

 

GitHub - rahulnyk/knowledge_graph: Convert any text to a graph of knowledge. This can be used for Graph Augmented Generation or Knowledge Graph based QnA.

https://towardsdatascience.com/how-to-convert-any-text-into-a-graph-of-concepts-110844f22a1a

 

How to Convert Any Text Into a Graph of Concepts

A method to convert any text corpus into a Knowledge Graph using Mistral 7B.

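The repo and the article build a knowledge graph by having an LLM (Mistral 7B in the article) extract concept pairs from each text chunk, then accumulating those pairs into weighted edges. A stripped-down sketch of the accumulation step is below; extract_pairs is an illustrative stub standing in for the prompted LLM call.

from collections import defaultdict

def extract_pairs(chunk):
    # Illustrative stub: the article prompts Mistral 7B to return JSON triples;
    # here we hard-code one example (concept, concept, relation) triple.
    return [("ollama", "llama 2", "runs locally")]

def build_graph(chunks):
    edges = defaultdict(list)  # (concept_a, concept_b) -> relation labels
    for chunk in chunks:
        for a, b, relation in extract_pairs(chunk):
            edges[(a, b)].append(relation)
    return edges

graph = build_graph(["Ollama gets Llama 2 running on a local machine."])
for (a, b), relations in graph.items():
    print(f"{a} --[{', '.join(relations)}]--> {b}")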

 
