(Work in Progress) A cross-platform desktop client for offline LlaMA-CPU
Updated Nov 1, 2023 - C#
A terminal-style user interface for chatting with AI characters, using locally processed llama LLMs.
PARRoT: Precise Audio Recognition and Recap over Transcription
Filling in the gaps in LangChain, and creating OO wrappers to simplify some workloads.
GUI for GGML Alpaca models
A little single-file frontend for llama.cpp/examples/server, created with Vue, Tailwind CSS, and Flask.
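Frontends like the one above typically talk to llama.cpp's example server over HTTP, posting JSON to a completion endpoint (commonly `POST /completion` with fields such as `prompt` and `n_predict`). The helper below is a minimal sketch that only builds the request body; the endpoint URL, port, and field names are assumptions based on the llama.cpp example server and may differ between versions.

```python
import json

def build_completion_request(prompt: str, n_predict: int = 128,
                             temperature: float = 0.8) -> bytes:
    """Build a JSON body for llama.cpp's example server completion endpoint.

    The field names (prompt, n_predict, temperature) follow the llama.cpp
    example server's parameters; verify them against your server version.
    """
    payload = {
        "prompt": prompt,
        "n_predict": n_predict,
        "temperature": temperature,
    }
    return json.dumps(payload).encode("utf-8")

# Sending the request with only the standard library (server assumed to be
# running locally on port 8080 -- adjust host/port for your setup):
#
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/completion",
#     data=build_completion_request("Hello"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["content"])
```

A web frontend does essentially the same thing from the browser, sending the JSON body via `fetch` and rendering the returned completion.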
Control what LLMs can, and can't, say
Evaluate open-source language models on agent, formatted-output, instruction-following, long-text, multilingual, coding, and custom-task capabilities.
(Windows/Linux) Local WebUI with neural network models (LLM, Stable Diffusion, AudioCraft, AudioLDM2, TTS, Bark, Whisper, Demucs, LibreTranslate, ZeroScope2, TripoSR, Shap-E, GLIGEN, Wav2Lip, Roop, Rembg, CodeFormer, Moondream 2) in Python (with a Gradio interface).
Auto-complete anything using a GGUF model.
LLM InferenceNet is a C++ project for fast and efficient inference from Large Language Models (LLMs) using a client-server architecture. It enables optimized interaction with pre-trained language models and eases deployment on edge devices.
Web API that summarizes multimedia from various sources using modern AI tools.
EDUAI is a virtual assistant developed by the Yachay Tech and UIDE universities in Ecuador. Its purpose is to help both high-school and university students with mathematics. It acts as an assistant, not as a user, and responds from its specific role.