Convert Wikidata and Wikipedia raw files to filterable formats, with a focus on marking Wikidata entries as summaries based on their Wikipedia abstracts.
Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens".
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
A repository of technical articles on AI algorithms, model finetuning, AI agents, open-source libraries, and system design. Perfect for researchers and enthusiasts looking for in-depth insights.
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
😷 The Fill-Mask Association Test (FMAT): Measuring Propositions in Natural Language.
AI-First Process Automation with Large Language (LLMs), Large Action (LAMs), Large Multimodal (LMMs), and Visual Language (VLMs) Models
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
Build high-performance AI models with modular building blocks
An easy-to-use Python library for merging PyTorch models.
A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites
Project repository for the development of a Question-Answering (QA) information retrieval system fine-tuned on customer queries.
My implementation of Machine Learning and Deep Learning papers from scratch.
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)
A lightweight, efficient 30 MB audio codec with a 30~170x compression ratio. Supports 16 kHz mono speech audio.
GPT-powered chat for documentation; chat with your documents