Arcade Engine configuration
Arcade Engine’s configuration is a YAML file with the following sections:
- api - Configures the server for specific protocols
- llm/models - Defines a collection of AI models available for routing
- tools - Configures tools for AI models to use
- auth - Configures user authorization providers and token storage
- telemetry - Configures telemetry and observability
Specify a config file
To start the Arcade Engine, pass a config file:
engine --config /path/to/config.yaml
Dotenv files
Arcade Engine automatically loads environment variables from .env
files in the directory where it was called. Use the --env
flag to specify a path:
engine --env .env.dev --config config.yaml
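For illustration, a .env.dev file is just key/value pairs. The variable names below match the ones referenced elsewhere in this configuration; the values are placeholders:
# .env.dev (example values only)
OPENAI_API_KEY=sk-example
ARCADE_WORKER_SECRET=dev-only-secret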
Secrets
Arcade Engine supports two ways of passing sensitive information like API keys:
Environment variables:
llm:
  models:
    - id: primary
      openai:
        api_key: ${env:OPENAI_API_KEY}
External files (useful in cloud setups):
llm:
  models:
    - id: primary
      openai:
        api_key: ${file:/path/to/secret}
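As a sketch, assuming the file's entire contents are read as the secret value, such a file can be created and locked down like this (path reused from the example above):
# Write the key to the file referenced by ${file:...} and restrict access
printf '%s' "$OPENAI_API_KEY" > /path/to/secret
chmod 600 /path/to/secret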
API configuration
HTTP is the only supported protocol for Arcade Engine’s API. The following configuration options are available:
- api.development (optional, default: false) - Enable development mode, with more logging and simple worker authentication
- api.http.host (default: localhost) - Address to which Arcade Engine binds its server (e.g., localhost or 0.0.0.0)
- api.http.read_timeout (optional, default: 30s) - Timeout for reading data from clients
- api.http.write_timeout (optional, default: 1m) - Timeout for writing data to clients
- api.http.idle_timeout (optional, default: 30s) - Timeout for idle connections
- api.http.max_request_body_size (optional, default: 4Mb) - Maximum request body size
Sample configuration:
api:
  development: true
  http:
    host: 0.0.0.0
    port: 9099
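The timeout and request-size options from the list above sit under the same http block. The values here are illustrative, and assume the same formats as the documented defaults:
api:
  development: true
  http:
    host: 0.0.0.0
    port: 9099
    read_timeout: 30s
    write_timeout: 1m
    idle_timeout: 30s
    max_request_body_size: 4Mb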
Model configuration
The llm.models section defines the models that the engine can route to, and the API keys or parameters for each model. Each item in the models array must have a unique id and a provider-specific configuration.
This example shows configuration for connecting to OpenAI and Azure OpenAI:
llm:
  models:
    - id: primary
      openai:
        api_key: ${env:OPENAI_API_KEY}
        default_params:
          temperature: 0
    - id: secondary
      azureopenai:
        api_key: ${env:AZURE_OPENAI_API_KEY}
        model: "engine-GPT-35"
        base_url: "https://mydeployment.openai.azure.com/"
For OpenAI, Cohere, and OctoML, only an API key is needed. For Azure OpenAI, specify a model name and base URL.
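For instance, a Cohere entry would need only an API key. This sketch assumes the provider block is named cohere, following the openai and azureopenai pattern shown above:
llm:
  models:
    - id: tertiary
      cohere:
        api_key: ${env:COHERE_API_KEY}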
For more details on model configuration and model-specific parameters, see model configuration.
Routing
When client code calls the Arcade Model API, it specifies a model by name. Arcade Engine matches the model name to a model in the configuration and routes the request to the appropriate provider.
If more than one model provider is configured, Arcade Engine will attempt to route to the correct provider using known model names.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9099/v1")

response = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "Who is the CEO of Apple?"},
    ],
    model="gpt-4o",
)
If two model providers have the same models or a custom model is used, the provider can be explicitly specified using its id.
For example, given an llm.models configuration with one provider named primary, this request will be routed to primary with the model gpt-4o:
response = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "Who is the CEO of Apple?"},
    ],
    model="primary/gpt-4o",
)
Tools configuration
Arcade Engine orchestrates tools that AI models can use. Tools are executed by distributed processes called workers, which are grouped into directors.
The tools.directors section configures the workers that are available to service tool calls:
directors:
  - id: default
    enabled: true
    workers:
      - id: "localworker"
        enabled: true
        http:
          uri: "http://localhost:8002"
          timeout: 30
          retry: 3
          secret: ${env:ARCADE_WORKER_SECRET}
When a worker is added to an enabled director, all of the tools hosted by that worker will be available to the model and the Arcade API.
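A director can group several workers. In the sketch below, the second worker is hypothetical and only illustrates the shape of the configuration:
directors:
  - id: default
    enabled: true
    workers:
      - id: "localworker"
        enabled: true
        http:
          uri: "http://localhost:8002"
          timeout: 30
          retry: 3
          secret: ${env:ARCADE_WORKER_SECRET}
      - id: "cloudworker" # hypothetical second worker
        enabled: true
        http:
          uri: "https://worker.internal.example.com"
          timeout: 30
          retry: 3
          secret: ${env:CLOUD_WORKER_SECRET}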
HTTP worker configuration
The http sub-section configures the HTTP client used to call the worker’s tools:
- uri (required) - The base URL of the worker’s tools
- secret (required) - Secret used to authenticate with the worker
- timeout (required) - Timeout for calling the worker’s tools
- retry (required) - Number of retries to attempt
Workers must be configured with a secret, which the Engine uses to authenticate with the worker. This ensures that workers are not exposed to the public internet without authentication.
If api.development = true, the secret will default to "dev" for local development only. In production, the secret must be set to a random value.
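For example, one way to generate a random secret and expose it as the ARCADE_WORKER_SECRET variable referenced above (the worker must be configured with the same value):
# Append a freshly generated secret to the .env file the Engine loads
echo "ARCADE_WORKER_SECRET=$(openssl rand -hex 32)" >> .env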
Auth configuration
Arcade Engine manages auth for tools and agents. This configuration controls what providers are available, and how tokens are stored.
Token store
When users authorize with an external service, their tokens are stored securely in the token store. Tokens are later retrieved from the store when AI models need to perform authorized actions.
Arcade Engine supports in-memory and Redis-based token stores.
In-memory
The in-memory token store is not persistent and is erased when the Engine process shuts down. It’s intended for local development and testing.
auth:
  token_store:
    in_memory:
      max_size: 10000
Redis
The Redis-based token store is persistent and can be used in production environments.
token_store:
  redis:
    addr: "redis:6379"
    password: ${env:REDIS_PASSWORD}
Auth providers
The auth.providers section defines the providers that users can authorize with. Arcade Engine supports many built-in auth providers, and can also connect to any OAuth 2.0-compatible authorization server.
The providers array contains provider definitions, each of which must have a unique id within this Arcade Engine instance. There are two ways to configure a provider:
- For built-in providers, use the provider_id field to reference them by name. For example:
providers:
  - id: default-github
    description: "The default GitHub provider"
    enabled: true
    type: oauth2
    provider_id: github
    client_id: ${env:GITHUB_CLIENT_ID}
    client_secret: ${env:GITHUB_CLIENT_SECRET}
- For custom providers, specify the connection details using the relevant protocol-specific fields, such as oauth2. For full documentation on the custom provider configuration, see the OAuth 2.0 provider configuration page.
- id: default-hooli
  description: "The default Hooli provider"
  enabled: true
  type: oauth2
  client_id: ${env:HOOLI_CLIENT_ID}
  client_secret: ${env:HOOLI_CLIENT_SECRET}
  oauth2:
    # Connection details here...
You can specify a mix of built-in and custom providers.
Telemetry configuration
Arcade supports logs, metrics, and traces with OpenTelemetry.
If you are using the Arcade Engine locally, you can set the environment field to local. This will only output logs to the console.
To connect to OpenTelemetry-compatible collectors, set the necessary OpenTelemetry environment variables in the .env file.
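For example, the standard OTLP exporter variables can be set in the .env file; the endpoint and header values below are placeholders:
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel-collector.example.com:4317
OTEL_EXPORTER_OTLP_HEADERS=authorization=Bearer <your-token>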
environment and version are fields that are added to the telemetry attributes, which can be filtered on later.
telemetry:
  environment: local
  version: ${env:VERSION}
  logging:
    level: debug # debug, info, warn, error, fatal
    encoding: console
Notes
- The Engine service name is set to arcade_engine
- Traces currently cover the /v1/health and /v1/chat/completions endpoints, as well as authentication attempts