
Preface

No time these days; I'll fill this in properly later.

A brief introduction to llama-index

Official docs: https://docs.llamaindex.ai/en/stable/

No time for a real introduction either; to be written later.

Note: the official APIs change over time, so the code below may change too. If something here fails to run, check the official documentation.

Loading a local embedding model

If llama_index.embeddings.huggingface cannot be imported, install the package:

pip install llama-index-embeddings-huggingface

If that still doesn't work, go to the official docs and search for "huggingface".

from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import Settings

# Path to the locally downloaded embedding model (example path from this post)
embed_model_path = '/home/nlp/model/Embedding/BAAI/bge-m3'

Settings.embed_model = HuggingFaceEmbedding(model_name=embed_model_path, device='cuda')
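
To check that the model actually loads and produces vectors before building anything on top of it, here is a minimal sanity-check sketch. It assumes the same example model path as above; get_text_embedding is the method llama-index's embedding classes expose for embedding a single string:

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model_path = '/home/nlp/model/Embedding/BAAI/bge-m3'  # adjust to your machine
embed_model = HuggingFaceEmbedding(model_name=embed_model_path, device='cuda')

# Embed one string; for bge-m3 this should be a 1024-dimensional list of floats
vector = embed_model.get_text_embedding("hello world")
print(len(vector))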

Loading a local LLM

Again: if the code below doesn't work, search the official docs for "Custom LLM Model".

from typing import Any

from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback
from transformers import AutoTokenizer, AutoModelForCausalLM


class GLMCustomLLM(CustomLLM):
    context_window: int = 8192   # size of the context window
    num_output: int = 8000       # maximum number of output tokens
    model_name: str = "glm-4-9b-chat"
    tokenizer: object = None
    model: object = None

    def __init__(self, pretrained_model_name_or_path):
        super().__init__()
        # Load tokenizer and model on GPU
        self.tokenizer = AutoTokenizer.from_pretrained(
            pretrained_model_name_or_path, device_map="cuda", trust_remote_code=True
        )
        self.model = AutoModelForCausalLM.from_pretrained(
            pretrained_model_name_or_path, device_map="cuda", trust_remote_code=True
        ).eval()
        # To load on CPU instead, use device_map="cpu" in both calls above.
        # .float() is mainly needed for CPU inference; on GPU you may prefer to drop it.
        self.model = self.model.float()

    @property
    def metadata(self) -> LLMMetadata:
        """Get LLM metadata."""
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Non-streaming completion
        inputs = self.tokenizer.encode(prompt, return_tensors='pt').cuda()  # GPU
        # inputs = self.tokenizer.encode(prompt, return_tensors='pt')      # CPU
        outputs = self.model.generate(inputs, max_length=self.num_output)
        response = self.tokenizer.decode(outputs[0])
        return CompletionResponse(text=response)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # "Streaming" completion: the full response is generated first,
        # then yielded one character at a time (as in the original post)
        inputs = self.tokenizer.encode(prompt, return_tensors='pt').cuda()  # GPU
        # inputs = self.tokenizer.encode(prompt, return_tensors='pt')      # CPU
        outputs = self.model.generate(inputs, max_length=self.num_output)
        response = self.tokenizer.decode(outputs[0])
        for token in response:
            yield CompletionResponse(text=token, delta=token)
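
Before plugging the class into an index, it may be worth a quick standalone smoke test. A minimal sketch; the model path matches the example used later in this post, so adjust it to your machine (loading a 9B model needs a GPU with enough memory):

# Load the local GLM model through the custom wrapper
llm = GLMCustomLLM('/home/nlp/model/LLM/THUDM/glm-4-9b-chat')

print(llm.metadata)  # context_window=8192, num_output=8000, model_name='glm-4-9b-chat'

response = llm.complete("Briefly introduce yourself.")
print(response.text)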

Building a simple RAG pipeline with local models

The full script reuses the GLMCustomLLM class from the previous section, so only the new parts are shown here:

from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# GLMCustomLLM is the custom LLM class defined in the previous section.

if __name__ == "__main__":
    # Local model paths
    pretrained_model_name_or_path = r'/home/nlp/model/LLM/THUDM/glm-4-9b-chat'
    embed_model_path = '/home/nlp/model/Embedding/BAAI/bge-m3'

    # Register the local embedding model and the custom LLM globally
    Settings.embed_model = HuggingFaceEmbedding(model_name=embed_model_path, device='cuda')
    Settings.llm = GLMCustomLLM(pretrained_model_name_or_path)

    # Load documents from a directory and build a vector index
    documents = SimpleDirectoryReader(input_dir="/home/xxxx/input").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # Query the index and print the result
    query_engine = index.as_query_engine()
    response = query_engine.query("萧炎的表妹是谁?")  # "Who is Xiao Yan's cousin?"
    print(response)
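
One practical extension, not in the original post: re-reading and re-embedding the corpus on every run is slow, and llama-index can persist the built index to disk. A sketch, assuming the default in-memory vector store and an arbitrary ./storage directory:

from llama_index.core import StorageContext, load_index_from_storage

# First run: persist the freshly built index
index.storage_context.persist(persist_dir="./storage")

# Later runs: reload the index instead of rebuilding it from the documents
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
query_engine = index.as_query_engine()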

Ollama

As an alternative to writing a CustomLLM wrapper, the model can be served through Ollama and used via llama-index's Ollama integration (pip install llama-index-llms-ollama):

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()

# bge-base embedding model
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

# ollama
Settings.llm = Ollama(model="llama3", request_timeout=360.0)

index = VectorStoreIndex.from_documents(documents)
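
The snippet above stops after building the index; querying works the same way as in the custom-LLM example. A minimal sketch, assuming an Ollama server is running locally with the llama3 model already pulled (ollama pull llama3); the question is just an example query:

# Ask a question against the indexed documents using the Ollama-backed LLM
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)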

Likes and bookmarks are welcome

Your likes and bookmarks encourage the author to update faster~

Reference links:

CustomLLM in LlamaIndex (loading models locally)
llamaIndex: loading a local embedding model on GPU

Official documentation:

Official docs: starter_example_loca
Official docs: usage_custom
