Anthropic
This page provides information for connecting Appsmith to Anthropic, which allows you to configure applications with advanced AI features, such as chat completion.
Appsmith is committed to providing safe and responsible access to AI capabilities. Your prompts, outputs, embeddings, and data are not shared with other users and are never utilized to fine-tune models. Learn more about Anthropic's privacy policy here.
Connect Anthropic
Connection parameters
The following section is a reference guide that provides a complete description of all the parameters to connect to an Anthropic datasource.
API Key
Anthropic uses API keys for authentication. Visit the Anthropic web console to retrieve your API key.
Query Anthropic
The following section is a reference guide that provides a description of the available commands with their parameters to create Anthropic queries.
Chat
The Chat command generates human-like text based on input prompts. The following section lists all the available parameters:
Models
It refers to the pre-trained language models provided by Anthropic. You can select from the available list of models, including options like claude-2, claude-3, and others.
- For models belonging to the claude-3 family, the response format follows the Messages API.
- For claude-instant-1.2 and claude-2.1, the response format is based on the Completions API.
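Both formats return the generated text, but in different fields: the Messages API returns it inside a content array of text blocks, while the Completions API returns a single completion string. Below is a minimal sketch of reading the reply in an Appsmith binding, assuming a query named anthropicChat (hypothetical name; the exact wrapping of the response under .data may vary):

```js
// Hypothetical query name: anthropicChat
// claude-3 family (Messages API): the reply is a list of content blocks
{{ anthropicChat.data.content[0].text }}

// claude-instant-1.2 / claude-2.1 (Completions API): the reply is a single string
{{ anthropicChat.data.completion }}
```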
Max tokens
The maximum number of tokens the response should contain. It allows you to control the length of the generated output. For example, if you set it to 50, the response contains a maximum of 50 tokens, ensuring concise outputs.
System Prompt
The system prompt helps shape the behavior of the model's responses and can be used to add personality, offer instructions, or guide the model in generating more contextually relevant outputs. For example, you can use it to give personality to the responses or add task-specific instructions, like:
"You are a chat assistant designed to provide friendly and helpful responses to user inquiries. Aim to maintain a positive and supportive tone throughout the conversation, offering clear guidance and assistance."
Messages
Messages serve as input interactions between the user and the model. You can create multiple messages of each type to shape the conversation as needed. In the Roles parameter, you can select either Human or Assistant. In the Content property, add:
- Assistant: Provides additional context, sets guidelines, or conveys the overall objective of the task. It helps shape the behavior of the model's responses, like:
"You are a technical support assistant. Provide clear and detailed solutions to user queries related to software issues. If the user mentions a bug, ask for additional details to troubleshoot effectively."
- Human: Input provided by the user to instruct or guide the model. For example, if you are using an Input widget to enter the prompt, you can bind its value with {{userInput.text}}.
For more information refer to the Anthropic documentation.
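To wire the conversation into an app, the chat query is typically run from a widget event and its response bound to a display widget. A minimal sketch, assuming a query named anthropicChat and widgets named sendButton and responseText (hypothetical names); the response field depends on the model family, as noted under Models:

```js
// sendButton onClick: run the chat query and report failures
{{ anthropicChat.run().catch(() => showAlert("Chat request failed", "error")) }}

// responseText Text widget: show the generated reply (claude-3 / Messages API shape assumed)
{{ anthropicChat.data?.content?.[0]?.text }}
```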
Temperature
Temperature determines the level of randomness in the output. It ranges between 0 and 1.
Lower values for temperature result in more focused and analytical outputs (e.g. 0.2), while higher values generate more diverse and creative results (e.g. 0.8). Select a temperature value based on the desired trade-off between coherence and creativity for your specific application.
Vision
The Vision command allows the model to process images and respond to queries related to them.
Models
It refers to the pre-trained language models provided by Anthropic. You can select from the available list of models, including Claude-3-Opus, Claude-3-Sonnet, and Claude-3-Haiku.
- For models belonging to the claude-3 family, the response format follows the Messages API.
- For claude-instant-1.2 and claude-2.1, the response format is based on the Completions API.
Max tokens
The maximum number of tokens the response should contain. It allows you to control the length of the generated output. For example, if you set it to 50, the response contains a maximum of 50 tokens, ensuring concise outputs.
System Prompt
The system prompt helps shape the behavior of the model's responses and can be used to add personality, offer instructions, or guide the model in generating more contextually relevant outputs. For instance, you can add a system prompt like:
"Your task is to analyze and interpret the content of each image, offering detailed descriptions and contextual information to enrich the viewer's understanding. Ensure your responses are informative, accurate, and engaging, enhancing the viewer's appreciation of the visual content."
Messages
Messages serve as input interactions between the user and the model. You can create multiple messages of each type to shape the conversation as needed. In the Roles parameter, you can select either Human or Assistant.
Roles:
- Assistant: Provides additional context, sets guidelines, or conveys the overall objective of the task. It helps shape the behavior of the model's responses, like:
"Focus on delivering clear and relevant information tailored to the task at hand."
- Human: Input provided by the user to instruct or guide the model. For example, if you are using an Input widget to enter the prompt, you can bind its value with {{userInput.text}}.
For more information refer to the Anthropic documentation.
Type:
- Text: This represents the task input you want to send to Anthropic. For example, you can use it to instruct the model, such as "find a ball in this image," using {{UserInput.text}}.
- Image: This is the image on which the model performs the task described in the text. You can pass the base64-encoded image directly in the request and add multiple images as needed. For example, you can use the Filepicker widget to upload images and bind them with {{FilePicker.files[0].data}}.
The Vision command supports only base64-encoded images; URLs and links are not supported.
For more information refer to the Anthropic documentation.
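Putting the Vision parameters together, the Text content carries the instruction and the Image content carries base64 data from a Filepicker widget. A minimal sketch, assuming a Filepicker named imagePicker with its data format set to Base64, an Input widget named userInput, and a query named anthropicVision (all hypothetical names):

```js
// Text content: the instruction for the model, taken from an Input widget
{{ userInput.text }}

// Image content: base64 data of the first uploaded file
// (URLs and links are not supported, so the Filepicker data format should be Base64)
{{ imagePicker.files?.[0]?.data }}

// Text widget: display the model's answer about the image
{{ anthropicVision.data?.content?.[0]?.text }}
```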
Temperature
Temperature determines the level of randomness in the output. It ranges between 0 and 1.
Lower values for temperature result in more focused and analytical outputs (e.g. 0.2), while higher values generate more diverse and creative results (e.g. 0.8). Select a temperature value based on the desired trade-off between coherence and creativity for your specific application.