What happened?

I have a custom librechat.yaml to use Azure OpenAI, but I get a 404 error when making requests.

```
2024-05-16T10:15:41.388Z warn: [OpenAIClient.chatCompletion][stream] API error
2024-05-16T10:15:41.390Z error: [handleAbortError] AI response error; aborting request: 404 Resource not found
2024-05-16T10:15:41.400Z debug: [AskController] Request closed
```
Steps to Reproduce

I have a custom librechat.yaml:

```yaml
# Configuration version (required)
version: 1.0.8

# This setting caches the config file for faster loading across app lifecycle
cache: true

endpoints:
  azureOpenAI:
    # Endpoint-level configuration
    plugins: false
    assistants: false
    groups:
      # Group-level configuration
      - group: "OpenAI-EASTUS2"
        apiKey: "${EASTUS2_API_KEY}"
        instanceName: "${EASTUS2_INSTANCENAME}"
        version: "2024-04-01-preview"
        baseURL: "${EASTUS2_BASEURL}"
        additionalHeaders:
          X-Custom-Header: "KevinTest"
        # Model-level configuration
        models:
          gpt-4-turbo-2024-04-09:
            deploymentName: "GPT4-Turbo-Latest-Dev"
```
It gives a 404 error; logs are below.

```
2024-05-16T10:15:39.836Z debug: [OpenAIClient] chatCompletion {
  baseURL: "https://${EASTUS2_BASEURL}/openai/deployments/gpt-4-turbo-2024-04-09/chat/comp... [truncated]",
  modelOptions.model: "gpt-4-turbo-2024-04-09",
  modelOptions.temperature: 1,
  modelOptions.top_p: 1,
  modelOptions.presence_penalty: 0,
  modelOptions.frequency_penalty: 0,
  modelOptions.stop: undefined,
  modelOptions.max_tokens: undefined,
  modelOptions.user: "66442d5b7c4c582b181fabe0",
  modelOptions.stream: true,
  // 1 message(s)
  modelOptions.messages: [{"role":"user","content":"test"}],
}
2024-05-16T10:15:41.388Z warn: [OpenAIClient.chatCompletion][stream] API error
2024-05-16T10:15:41.390Z error: [handleAbortError] AI response error; aborting request: 404 Resource not found
2024-05-16T10:15:41.400Z debug: [AskController] Request closed
```
As you can see, the model name was concatenated into the URL, but Azure requires the deployment name there.

Ref: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference

```
POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2024-02-01
```
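To illustrate the expected URL shape, here is a minimal sketch (not LibreChat's actual code; the instance name `my-resource` is a placeholder) of how the Azure chat-completions URL should be built, with the deployment name, not the model name, in the path:

```python
# Illustrative sketch only, not LibreChat's implementation.
# Per the Azure OpenAI REST reference, the *deployment name* belongs
# in the /openai/deployments/... path segment, not the model name.

def azure_chat_url(instance_name: str, deployment_name: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions URL."""
    return (
        f"https://{instance_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
        f"/chat/completions?api-version={api_version}"
    )

# Correct: deployment name from this report's config in the path.
# "my-resource" is a hypothetical instance name.
print(azure_chat_url("my-resource", "GPT4-Turbo-Latest-Dev", "2024-04-01-preview"))
```

The failing request in the log instead has `gpt-4-turbo-2024-04-09` (the model name) in the deployments segment, which Azure answers with 404 Resource not found.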
Ref: https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/azure_openai

```yaml
models:
  gpt-4-vision-preview:
    deploymentName: "arbitrary-deployment-name" # This should not be arbitrary
    version: "2024-02-15-preview"
```
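For clarity, the mapping the config is meant to express: the model key is what users select, while `deploymentName` is what should appear in the request path. A sketch of that resolution (illustrative only, not LibreChat's implementation):

```python
# Illustrative model -> deployment resolution based on the yaml above.
# Not LibreChat's code; shown only to clarify the expected behavior.

AZURE_MODELS = {
    # model key (user-facing): deployment used in the URL path
    "gpt-4-turbo-2024-04-09": {"deploymentName": "GPT4-Turbo-Latest-Dev"},
}

def resolve_deployment(model: str) -> str:
    """Return the configured deploymentName, falling back to the model name."""
    return AZURE_MODELS.get(model, {}).get("deploymentName", model)

print(resolve_deployment("gpt-4-turbo-2024-04-09"))  # GPT4-Turbo-Latest-Dev
```

With this resolution in place, the URL from the log would contain `GPT4-Turbo-Latest-Dev` instead of the model name.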
Relevant log output

```
2024-05-16T10:15:36.462Z debug: [AskController] {
  text: "test",
  conversationId: null,
  endpoint: "azureOpenAI",
  chatGptLabel: null,
  promptPrefix: null,
  resendFiles: true,
  imageDetail: "auto",
  iconURL: undefined,
  greeting: undefined,
  spec: undefined,
  maxContextTokens: undefined,
  modelOptions.model: "gpt-4-turbo-2024-04-09",
  modelOptions.temperature: 1,
  modelOptions.top_p: 1,
  modelOptions.presence_penalty: 0,
  modelOptions.frequency_penalty: 0,
  modelOptions.stop: undefined,
  modelOptions.max_tokens: undefined,
  // 20 openAI(s)
  modelsConfig.openAI: ["gpt-4o","gpt-3.5-turbo-0125","gpt-4-turbo","gpt-4-turbo-2024-04-09","gpt-3.5-turbo-16k-0613","gpt-3.5-turbo-16k","gpt-4-turbo-preview","gpt-4-0125-preview","gpt-4-1106-preview","gpt-3.5-turbo","gpt-3.5-turbo-1106","gpt-4-vision-preview","gpt-4","gpt-3.5-turbo-instruct-0914","gpt-3.5-turbo-0613","gpt-3.5-turbo-0301","gpt-3.5-turbo-instruct","gpt-4-0613","text-davinci-003","gpt-4-0314"],
  // 12 google(s)
  modelsConfig.google: ["gemini-pro","gemini-pro-vision","chat-bison","chat-bison-32k","codechat-bison","codechat-bison-32k","text-bison","text-bison-32k","text-unicorn","code-gecko","code-bison","code-bison-32k"],
  // 3 anthropic(s)
  modelsConfig.anthropic: ["claude-1","claude-instant-1","claude-2"],
  // 12 gptPlugin(s)
  modelsConfig.gptPlugins: ["gpt-4o","gpt-3.5-turbo-0125","gpt-4-turbo","gpt-4-turbo-2024-04-09","gpt-3.5-turbo-16k","gpt-4-turbo-preview","gpt-4-0125-preview","gpt-4-1106-preview","gpt-3.5-turbo","gpt-3.5-turbo-1106","gpt-4-vision-preview","gpt-4"],
  // 1 azureOpenAI(s)
  modelsConfig.azureOpenAI: ["gpt-4-turbo-2024-04-09"],
  // 2 bingAI(s)
  modelsConfig.bingAI: ["BingAI","Sydney"],
  // 1 chatGPTBrowser(s)
  modelsConfig.chatGPTBrowser: ["text-davinci-002-render-sha"],
  // 15 assistant(s)
  modelsConfig.assistants: ["gpt-3.5-turbo","gpt-3.5-turbo-0125","gpt-4-turbo","gpt-4-turbo-2024-04-09","gpt-4-0125-preview","gpt-4-turbo-preview","gpt-4-1106-preview","gpt-3.5-turbo-1106","gpt-3.5-turbo-16k-0613","gpt-3.5-turbo-16k","gpt-4","gpt-4-0314","gpt-4-32k-0314","gpt-4-0613","gpt-3.5-turbo-0613"],
}
2024-05-16T10:15:38.123Z debug: [BaseClient] Loading history: { conversationId: "44e4324f-a17a-4a61-91cf-2a63dfd776ac", parentMessageId: "00000000-0000-0000-0000-000000000000", }
2024-05-16T10:15:39.423Z debug: [BaseClient] Context Count (1/2) { remainingContextTokens: 127982, maxContextTokens: 127990, }
2024-05-16T10:15:39.423Z debug: [BaseClient] Context Count (2/2) { remainingContextTokens: 127982, maxContextTokens: 127990, }
2024-05-16T10:15:39.424Z debug: [BaseClient] tokenCountMap: { e3c5e678-1ec1-4428-920f-8bd7cf3b328f: 5, }
2024-05-16T10:15:39.424Z debug: [BaseClient] { promptTokens: 8, remainingContextTokens: 127982, payloadSize: 1, maxContextTokens: 127990, }
2024-05-16T10:15:39.424Z debug: [BaseClient] tokenCountMap { e3c5e678-1ec1-4428-920f-8bd7cf3b328f: 5, instructions: undefined, }
2024-05-16T10:15:39.425Z debug: [BaseClient] userMessage { messageId: "e3c5e678-1ec1-4428-920f-8bd7cf3b328f", parentMessageId: "00000000-0000-0000-0000-000000000000", conversationId: "44e4324f-a17a-4a61-91cf-2a63dfd776ac", sender: "User", text: "test", isCreatedByUser: true, tokenCount: 5, }
2024-05-16T10:15:39.836Z debug: [OpenAIClient] chatCompletion {
  baseURL: "https://${EASTUS2_BASEURL}/openai/deployments/gpt-4-turbo-2024-04-09/chat/comp... [truncated]",
  modelOptions.model: "gpt-4-turbo-2024-04-09",
  modelOptions.temperature: 1,
  modelOptions.top_p: 1,
  modelOptions.presence_penalty: 0,
  modelOptions.frequency_penalty: 0,
  modelOptions.stop: undefined,
  modelOptions.max_tokens: undefined,
  modelOptions.user: "66442d5b7c4c582b181fabe0",
  modelOptions.stream: true,
  // 1 message(s)
  modelOptions.messages: [{"role":"user","content":"test"}],
}
2024-05-16T10:15:41.388Z warn: [OpenAIClient.chatCompletion][stream] API error
2024-05-16T10:15:41.390Z error: [handleAbortError] AI response error; aborting request: 404 Resource not found
2024-05-16T10:15:41.400Z debug: [AskController] Request closed
```
What browsers are you seeing the problem on?
No response
Screenshots
No response
Code of Conduct