
Customizing behavior

talon-ai-tools can be configured by changing settings in any .talon file. You can copy any of the following settings, uncomment them, and change their values to customize which model you use and how it behaves at runtime.

talon-ai-settings.talon.example
# This is an example settings file.
# To make changes, copy this into your user directory and remove the .example extension
settings():
    # user.model_temperature = 0.6

    # Works with any API that follows the same schema as OpenAI's (e.g. Azure, llamafiles)
    # user.model_endpoint = "https://api.openai.com/v1/chat/completions"

    # user.model_system_prompt = "You are an assistant helping an office worker to be more productive."

    # Change to 'gpt-4' or the model of your choice
    # user.openai_model = 'gpt-3.5-turbo'

# Only uncomment the line below if you want experimental behavior to parse Talon files
# tag(): user.gpt_beta

# Use Codeium instead of GitHub Copilot
# tag(): user.codeium
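
For example, a settings file that points the commands at a locally running llamafile could look like the following. The URL here is an assumption based on llamafile's default of serving an OpenAI-compatible API on port 8080; substitute whatever host and port your server actually uses.

talon-ai-settings.talon
settings():
    # Hypothetical local endpoint; adjust to match your own server
    user.model_endpoint = "http://localhost:8080/v1/chat/completions"
    user.model_temperature = 0.6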

Adding custom prompts

You do not need to fork the repository to add your own custom prompts. Copy the file below, place it anywhere inside your Talon user/ directory, and follow the pattern of the key-value mapping.

customPrompt.talon-list.example
# Copy this file into your user directory and add your own custom prompts
# Any prompts in this list are automatically added to the <user.modelPrompt> capture
# and can thus be used as normal alongside all of the other model commands
list: user.customPrompt
-
# Example of a custom prompt that is unique to a user's personal workflow
check language: I am learning a new foreign language. Check the grammar of what I have written and return feedback in English with references to what I wrote.
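
With this list in place, the custom prompt works like any built-in one; for example, you could select a sentence you have written and say "model check language" to run it (assuming the default model prefix described below).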

Advanced Customization

Configuring Model Name

The word model is the default prefix before all LLM commands to prevent collisions with other Talon commands. However, you can change or override it. To do so, just create another Talon list with the same name and a higher specificity. Here is an example that you can copy and paste into your own configuration files:

myCustomModelName.talon-list
list: user.model
-
# Anything you say that matches the spoken form on the left will be mapped to the word `model`,
# allowing all model commands to work as normal, but with a new prefix keyword
my custom model name: model
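
With this list in place, saying, for example, "my custom model name check language" triggers the same behavior as "model check language" does with the default prefix.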

Providing Custom User Context

If you want to provide additional context to the LLM, there is a hook that you can override in your own Python code; anything it returns will be sent with every request. Here is an example:

from talon import Context, Module, actions

mod = Module()
ctx = Context()


@ctx.action_class("user")
class UserActions:
    def gpt_additional_user_context():
        """This is an override function that can be used to add additional context to the prompt"""
        # Describe the currently focused application so the model knows what you are working in
        result = actions.user.talon_get_active_context()
        return [
            f"The following describes the currently focused application:\n\n{result}"
        ]