Python with Starlette
We assume that your chatterd code base has accumulated code from at least the llmPrompt
and chatter backend tutorials. If you've also accumulated code from later tutorials, that's fine.
Change to your chatterd folder:
server$ cd ~/reactive/chatterd
Add dependencies
If you haven’t installed the following dependencies as part of completing the llmChat
tutorial or llmPlay project, add them now:
server$ uv add dataclasses_json sse_starlette
toolbox
Let us start by creating a toolbox to hold our tools. Create a new Python file and
name it toolbox.py:
server$ vi toolbox.py
Put the following imports at the top of the file:
from dataclasses import dataclass, field
from dataclasses_json import dataclass_json, config
from http import HTTPStatus
import httpx
from typing import Callable, Dict, List, Optional, Awaitable
The contents of this file can be categorized into three purposes: tool/function definition, the toolbox itself, and tool use (or function calling).
Tool/function definition
Ollama tool schema: at the top of Ollama’s JSON tool definition is a JSON Object representing a tool schema. The tool schema is defined using nested JSON Objects and JSON Arrays. Add the full nested definitions of Ollama’s tool schema to your file:
@dataclass_json
@dataclass
class OllamaParamProp:
    type: str
    description: str
    enum: Optional[List[str]] = None

@dataclass_json
@dataclass
class OllamaFunctionParams:
    type: str
    properties: Dict[str, OllamaParamProp]
    required: Optional[List[str]] = None

@dataclass_json
@dataclass
class OllamaToolFunction:
    name: str
    description: str
    parameters: Optional[OllamaFunctionParams] = None

@dataclass_json
@dataclass
class OllamaToolSchema:
    type: str
    function: OllamaToolFunction
Weather tool schema: in this tutorial, we have only one tool resident in the backend. Add the following tool definition to your file:
WEATHER_TOOL = OllamaToolSchema(
    type = "function",
    function = OllamaToolFunction(
        name = "get_weather",
        description = "Get current temperature",
        parameters = OllamaFunctionParams(
            type = "object",
            properties = {
                "latitude": OllamaParamProp(
                    type = "string",
                    description = "latitude of location of interest",
                ),
                "longitude": OllamaParamProp(
                    type = "string",
                    description = "longitude of location of interest",
                ),
            },
            required = ["latitude", "longitude"],
        ),
    ),
)
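To confirm that the schema serializes into the JSON shape Ollama expects, here is a quick, hedged check you could run from a REPL or a throwaway script (the expected output is abridged; optional fields left as None may serialize as null):

print(WEATHER_TOOL.to_json())
# expect something like:
# {"type": "function", "function": {"name": "get_weather", "description": "Get current temperature", ...}}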
Weather tool function: we implement the get_weather tool as a getWeather() function that makes an API call to the free Open-Meteo weather service. Add the following nested dataclass definitions to hold Open-Meteo’s return result. For this tutorial, we’re
only interested in the latitude, longitude, and temperature returned by Open-Meteo:
@dataclass_json
@dataclass
class Current:
    temp: Optional[float] = field(
        default=None,
        metadata=config(field_name="temperature_2m")
    )

@dataclass_json
@dataclass
class OMeteoResponse:
    latitude: float
    longitude: float
    current: Current
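To see the field_name mapping at work, here is a minimal sketch that deserializes an abridged, hypothetical Open-Meteo reply:

sample = '{"latitude": 42.28, "longitude": -83.74, "current": {"temperature_2m": 68.5}}'
resp = OMeteoResponse.from_json(sample)
print(resp.current.temp)  # 68.5, populated from "temperature_2m"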
Here’s the definition of the getWeather() function:
async def getWeather(argv: List[str]) -> tuple[Optional[str], Optional[str]]:
    # Open-Meteo API doc: https://open-meteo.com/en/docs#api_documentation
    try:
        async with httpx.AsyncClient() as client:
            response = await client.get(
                url=f"https://api.open-meteo.com/v1/forecast?latitude={argv[0]}&longitude={argv[1]}&current=temperature_2m&temperature_unit=fahrenheit",
            )
            if response.status_code != HTTPStatus.OK:
                return None, f"Open-meteo response: {response.status_code}"
            ometeoResponse = OMeteoResponse.from_json(response.content)
            return f"Weather at lat: {ometeoResponse.latitude}, lon: {ometeoResponse.longitude} is {ometeoResponse.current.temp}°F", None
    except Exception as err:
        return None, f"Cannot connect to Open Meteo: {err}"
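For a quick smoke test of getWeather() before wiring it into the server, you could append a hedged main guard to the file; the coordinates (roughly Ann Arbor, MI) are illustrative:

if __name__ == "__main__":
    # optional smoke test; requires network access to Open-Meteo
    import asyncio
    print(asyncio.run(getWeather(["42.28", "-83.74"])))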
The toolbox
Even though we have only one resident tool in this tutorial, we want a generalized architecture that can hold multiple tools and invoke the right tool dynamically. To that end, we’ve chosen to use a switch table (or jump table or, more fancily, service locator registry) as the data structure for our toolbox. We implement the switch table as a dictionary. The “keys” in the dictionary are the names of the tools/functions. Each “value” is a record containing the tool’s definition/schema and a pointer to the function implementing the tool. To send a tool as part of a request to Ollama, we look up its schema in the switch table and copy it to the request. To invoke a tool called by Ollama in its response, we look up the tool’s function in the switch table and invoke the function.
Add the following type for an async tool function and the record type containing a tool definition and the async tool function:
type ToolFunction = Callable[[List[str]], Awaitable[tuple[Optional[str], Optional[str]]]]

@dataclass
class Tool:
    schema: OllamaToolSchema
    function: ToolFunction
Now create a switch-table toolbox and put the WEATHER_TOOL in it:
TOOLBOX: Dict[str, Tool] = {
    "get_weather": Tool(WEATHER_TOOL, getWeather),
}
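A minimal sketch of how the switch table behaves, where get_time is a hypothetical, unregistered tool name used only to show the miss case:

assert TOOLBOX["get_weather"].schema.function.name == "get_weather"
assert TOOLBOX.get("get_time") is None  # non-resident tools simply miss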
Tool use or function calling
Ollama tool call: Ollama’s JSON tool call comprises a JSON Object containing a nested JSON Object carrying the name of the function and the arguments to pass to it. Add these nested struct definitions representing Ollama’s tool call JSON to your file:
@dataclass_json
@dataclass
class OllamaFunctionCall:
    name: str
    arguments: Dict[str, str]

@dataclass_json
@dataclass
class OllamaToolCall:
    function: OllamaFunctionCall
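As an illustration, the following hedged sketch deserializes a hypothetical tool-call fragment such as Ollama might embed in a response message:

sample = '{"function": {"name": "get_weather", "arguments": {"latitude": "42.28", "longitude": "-83.74"}}}'
toolCall = OllamaToolCall.from_json(sample)
print(toolCall.function.name, toolCall.function.arguments)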
Tool invocation: finally, here’s the tool invocation function. We call this function to execute any tool call we receive in an Ollama response. It looks up the tool name in the toolbox. If the tool is resident, it runs the tool and returns the result; otherwise it returns None.
async def toolInvoke(function: OllamaFunctionCall) -> tuple[Optional[str], Optional[str]]:
    tool = TOOLBOX.get(function.name)
    if tool:
        argv = list(function.arguments.values())
        return await tool.function(argv)
    return None, None
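As a usage sketch, the dispatch path can be exercised from a throwaway script that imports the toolbox; the script name and coordinates are hypothetical:

# e.g. trytool.py, a hypothetical test script
import asyncio
from toolbox import OllamaFunctionCall, toolInvoke

call = OllamaFunctionCall(
    name="get_weather",
    arguments={"latitude": "42.28", "longitude": "-83.74"})
result, err = asyncio.run(toolInvoke(call))
print(result or err)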
That concludes our toolbox definition. Save and exit the file.
handlers
Edit handlers.py:
server$ vi handlers.py
imports
If you don’t have code from the llmChat tutorial or llmPlay project
in your code base, add the following imports to the top of the file:
from dataclasses_json import dataclass_json
import json
import re
from sse_starlette.sse import EventSourceResponse
Replace your from typing line with:
from typing import List, Optional
Then modify the following import lines:
- add field to the from dataclasses line: from dataclasses import dataclass, field
- add config to the from dataclasses_json line: from dataclasses_json import dataclass_json, config
- add the following line: from http import HTTPStatus
- and add below the import main line:

import toolbox
from toolbox import getWeather, toolInvoke, TOOLBOX, OllamaToolCall, OllamaToolSchema
classes
Next add or update the following classes:
- if update, add a tool-calls field to your OllamaMessage class:

@dataclass_json
@dataclass
class OllamaMessage:
    role: str
    content: str
    toolCalls: Optional[List[toolbox.OllamaToolCall]] = field(
        default=None,
        metadata=config(field_name="tool_calls", exclude=lambda l: not l)  # exclude if empty (None, [])
    )

- if update, add a tools field to your OllamaRequest:

@dataclass_json
@dataclass
class OllamaRequest:
    appID: str
    model: str
    messages: List[OllamaMessage]
    stream: bool
    tools: Optional[List[toolbox.OllamaToolSchema]] = field(
        default=None,
        metadata=config(exclude=lambda l: not l)
    )
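The exclude=lambda l: not l predicate drops the field from the serialized JSON when it is None or empty. A minimal, hedged check, using a hypothetical message, run from a REPL or throwaway script:

msg = OllamaMessage(role="user", content="hi")
print(msg.to_json())  # prints {"role": "user", "content": "hi"}, with no "tool_calls" key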
The OllamaResponse class (unchanged if it already exists):

@dataclass_json
@dataclass
class OllamaResponse:
    model: str
    created_at: str
    message: OllamaMessage
For the /weather testing API, also add the following class:

@dataclass
class Location:
    lat: str
    lon: str
weather
Let’s implement the handler for the /weather API that we can use to
test our getWeather() function later:
async def weather(request):
    try:
        loc = Location(**(await request.json()))
    except Exception as err:
        print(f'{err=}')
        return JSONResponse(f'Unprocessable entity: {str(err)}',
                            status_code=HTTPStatus.UNPROCESSABLE_ENTITY)
    temp, err = await toolbox.getWeather([loc.lat, loc.lon])
    if err:
        return JSONResponse({"error": f'Internal server error: {str(err)}'},
                            status_code=HTTPStatus.INTERNAL_SERVER_ERROR)
    return JSONResponse(temp)
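Once the server is running, you can exercise /weather without a full client. Here is a hedged httpx sketch; the host, the certificate handling (verify=False for a self-signed certificate), and the coordinates are assumptions for a typical test setup:

# hypothetical standalone test script for the /weather API
import asyncio
import httpx

async def testWeather():
    async with httpx.AsyncClient(verify=False) as client:
        response = await client.request(
            "GET", "https://localhost/weather",
            json={"lat": "42.28", "lon": "-83.74"})
        print(response.status_code, response.text)

asyncio.run(testWeather())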
llmtools
The underlying request/response handling of llmtools() is basically that of llmchat(); however,
with all the modifications needed to support tool calling, it’s simpler to just start the
llmtools() handler from scratch. We will name variables according to this scheme:
- camelCase for language-level data objects,
- snake_case for string versions of data objects to be used with PostgreSQL or JSON, and, as we did earlier,
- ALL_CAPS for immutable global toolbox and tool definitions.
To store the client’s conversation context/history with Ollama in the PostgreSQL
database, llmtools() first confirms that the client has sent an appID that can
be used to tag its entries in the database. Here’s the signature of llmtools().
We check for the existence of appID and return an HTTP error if it is absent:
async def llmtools(request):
    try:
        ollamaRequest = OllamaRequest.from_json(await request.body(), infer_missing=True)
    except Exception as err:
        return JSONResponse({"error": f'Deserializing request: {type(err).__name__}: {str(err)}'},
                            status_code=HTTPStatus.UNPROCESSABLE_ENTITY)
    if not ollamaRequest.appID:
        return JSONResponse(f'Invalid appID: {ollamaRequest.appID}',
                            status_code=HTTPStatus.UNPROCESSABLE_ENTITY)

    # retrieve client's tool(s)
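For reference, here is a hedged sketch of a minimal request body that would pass these checks; the appID value and model name are illustrative assumptions:

# hypothetical llmtools request body; field values are illustrative only
body = {
    "appID": "demo-app-0001",  # tags this client's conversation rows in the db
    "model": "qwen3",          # assumed model name
    "messages": [{"role": "user", "content": "What is the weather in Ann Arbor?"}],
    "stream": True,
}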
Our goal here is to prepend all prior conversations between the client and Ollama as
context to the current prompt. The client’s appID allows us to identify its conversation
with Ollama stored in the PostgreSQL database—similar to how MCP tags JSON-RPC 2.0 messages
with a session ID. Once we confirm that the client has an appID,
we retrieve any tool definitions attached to the ollamaRequest carrying the prompt.
We will assemble these tools, along with any tools the client may have previously sent
to Ollama with an earlier prompt and any tools resident on chatterd, and attach
them all to the contextualized prompt request we will POST to Ollama. Replace # retrieve
client's tool(s) with:
    try:
        # convert tools from client to a JSON string (client_tools) and save to db;
        # prepare ollamaRequest for re-use to be sent to Ollama:
        # clear tools in request, to be populated later
        client_tools = None
        if ollamaRequest.tools:
            try:
                # has device tools:
                # must marshal to string to store to db
                client_tools = json.dumps([tool.to_dict() for tool in ollamaRequest.tools])
                # reset tools, to be populated with
                # accumulated tools below, without duplicates
                ollamaRequest.tools = None
            except Exception as err:
                return JSONResponse({"error": f'Serializing request tools: {type(err).__name__}: {str(err)}'},
                                    status_code=HTTPStatus.UNPROCESSABLE_ENTITY)

        # insert into DB

    except Exception as err:
        return JSONResponse({"error": f'Processing request: {type(err).__name__}: {str(err)}'},
                            status_code=HTTPStatus.INTERNAL_SERVER_ERROR)

    # assemble resident tools
# assemble resident tools
Then we insert the current prompt into the database, adding to the client’s conversation
history with Ollama. As shown in the example in the Tool definition JSON section, the client’s current prompt could comprise multiple elements in the messages
array of the ollamaRequest, but the tools reside in a single tools array next to
the messages array. When there are multiple elements in an ollamaRequest, we want to
insert the tools only once. Below we have chosen to insert the tools only with the first
element of the messages array. Replace the comment # insert into DB with the following code:
        if ollamaRequest.messages:
            async with main.server.pool.connection() as conn:
                async with conn.cursor() as cur:
                    # insert each message into the database;
                    # insert client_tools only with the first message:
                    # reset it to empty after first message.
                    for msg in ollamaRequest.messages:
                        try:
                            await cur.execute(
                                'INSERT INTO chatts (username, message, id, appid, toolschemas) VALUES (%s, %s, gen_random_uuid(), %s, %s);',
                                (msg.role, msg.content, ollamaRequest.appID, client_tools))
                        except Exception as err:
                            return JSONResponse({"error": f'Inserting tools: {type(err).__name__}: {str(err)}'},
                                                status_code=HTTPStatus.INTERNAL_SERVER_ERROR)
                        # store device's tools only once
                        client_tools = None
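If your chatts table was created in an earlier tutorial, it may be missing the columns used above. As a hedged sketch, a one-time migration along these lines would add them; the TEXT column type is an assumption, and migrate_tool_columns is a hypothetical helper to be awaited once from an async context:

# hypothetical one-time migration; TEXT is an assumed type for the JSON-string columns
async def migrate_tool_columns():
    async with main.server.pool.connection() as conn:
        async with conn.cursor() as cur:
            await cur.execute('ALTER TABLE chatts ADD COLUMN IF NOT EXISTS toolschemas TEXT;')
            await cur.execute('ALTER TABLE chatts ADD COLUMN IF NOT EXISTS toolcalls TEXT;')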
To prepare the full assemblage of tools to send to Ollama, we first attach all the
tools resident on chatterd. Replace # assemble resident tools with:
    # append all of chatterd's resident tools to ollamaRequest
    ollamaRequest.tools = []
    for tool in TOOLBOX.values():
        ollamaRequest.tools.append(tool.schema)

    # reconstruct ollamaRequest
Then we retrieve the client’s conversation history, including the recently inserted
current prompt as the last entry, and put each message as a separate element in the
ollamaRequest.messages array, taking care to accumulate any tool(s) present into the
ollamaRequest.tools array instead. Replace # reconstruct ollamaRequest with:
    try:
        # reconstruct ollamaRequest to be sent to Ollama:
        # - add context: retrieve all past messages by appID,
        #   incl. the one just received, and attach them to
        #   ollamaRequest:
        #   - convert each back to OllamaMessage and
        #   - insert it into ollamaRequest
        # - add each message's clientTools to chatterd's resident tools
        #   already copied to ollamaRequest.tools.
        ollamaRequest.messages = []
        async with main.server.pool.connection() as conn:
            async with conn.cursor() as cur:
                await cur.execute('SELECT username, message, toolcalls, toolschemas FROM chatts WHERE appID = %s ORDER BY time ASC;',
                                  (ollamaRequest.appID,))
                rows = await cur.fetchall()
                for row in rows:
                    OllamaMessage.fromRow(row, ollamaRequest)
    except Exception as err:
        return JSONResponse({"error": f'{type(err).__name__}: {str(err)}'},
                            status_code=HTTPStatus.INTERNAL_SERVER_ERROR)

    # NDJSON to SSE stream transformation
Put the fromRow static method for OllamaMessage in your definition of class OllamaMessage
at the top of the file:
    @staticmethod
    def fromRow(row, ollamaRequest):
        try:
            toolcalls = []
            if row[2]:
                # must deserialize to type to append toolcalls
                toolcalls = [OllamaToolCall.from_dict(tool_call) for tool_call in json.loads(row[2])]
            ollamaRequest.messages.append(
                OllamaMessage(role=row[0], content=row[1], toolCalls=toolcalls))
            if row[3]:
                # has device tools:
                # must deserialize to type and append device tools to ollamaRequest.tools
                ollamaRequest.tools.extend([OllamaToolSchema.from_dict(tool) for tool in json.loads(row[3])])
        except Exception:
            raise
ndjson_yield_sse
As we know, the Ollama response is in the form of an NDJSON stream, which we
transform into a stream of SSE events using the ndjson_yield_sse function. We
pass this function to sse_starlette’s EventSourceResponse constructor at the end
of the llmtools() handler.
In ndjson_yield_sse(), we first declare an accumulator variable, full_response, to
assemble the reply tokens Ollama streams to us. To accommodate resident-tool calls, we use
a flag, sendNewPrompt, to indicate to our stream generator whether:
- to start a resident-tool call connection to Ollama and continue yielding results to the client, or
- to conclude streaming to the connection.
While sendNewPrompt is true (it is initialized to true), we open a new POST connection to Ollama and send it the ollamaRequest message. Replace # NDJSON to SSE stream transformation with:

    async def ndjson_yield_sse():
        full_response = ""
        sendNewPrompt = True
        while sendNewPrompt:
            sendNewPrompt = False  # assume no resident-tool call
            try:
                # Send request to Ollama
                async with client.stream(
                    method = request.method,
                    url = f"{OLLAMA_BASE_URL}/chat",
                    content = ollamaRequest.to_json().encode("utf-8"),  # convert the request to JSON
                ) as response:
                    # handle Ollama response
            except Exception as err:
                yield {
                    "event": "error",
                    "data": f'{err}'
                }

    return EventSourceResponse(ndjson_yield_sse())
We convert each NDJSON line to a language-level type, OllamaResponse in this case, with
semantically meaningful structure and fields that we can more easily manipulate than a
linear byte stream or string. If the conversion is unsuccessful, leaving the model property
of the type empty, we yield an SSE error event and move on to the next NDJSON line.
Otherwise, we append the content of this OllamaResponse.message
to the full_response accumulator. Replace # handle Ollama response with:
                    tool_calls = ""
                    tool_result = ""
                    async for line in response.aiter_lines():
                        try:
                            if line:
                                # deserialize each line into OllamaResponse
                                ollamaResponse = OllamaResponse.from_json(line)
                                if not ollamaResponse.model:
                                    # didn't receive an OllamaResponse, report to client as error
                                    yield {
                                        "event": "error",
                                        "data": line.replace("\\\"", "'")
                                    }
                                    continue  # move on to the next NDJSON line
                                # append response token to full assistant message
                                full_response += ollamaResponse.message.content

                                # check for tool call
                        except Exception as err:
                            yield {
                                "event": "error",
                                "data": f'{err}'
                            }

                    # insert full response into db
The tool-call field in OllamaResponse is an array, even though it looks like Qwen3 on Ollama
is presently limited to making only one tool call per HTTP round. We loop through the array and,
for each tool call, try to call its function by calling toolInvoke() from our toolbox.
If there is no tool call, we simply encode the full NDJSON line into an SSE message event,
yield it as an element of the SSE stream, and move on to the next NDJSON line, as we do
in llmchat. Replace # check for tool call with:
                                # is there a tool call?
                                if ollamaResponse.message.toolCalls:
                                    # convert toolCalls to JSON string (tool_calls) to be saved to db
                                    tool_calls = json.dumps([toolCall.to_dict() for toolCall in ollamaResponse.message.toolCalls])
                                    for toolCall in ollamaResponse.message.toolCalls:
                                        if not toolCall.function.name:
                                            continue  # LLM miscalled
                                        toolResult, err = await toolInvoke(toolCall.function)
                                        # handle tool result
                                else:
                                    # no tool call, send NDJSON line as SSE data line
                                    yield {
                                        "data": line
                                    }
If the tool is resident, toolInvoke() returns the result of the tool call. There are three
possible outcomes from the call to toolInvoke():
- the tool is resident but the call was unsuccessful and returns an error,
- the tool is resident and the call was successful, or
- the tool is non-resident.
If the result indicates that an error has occurred, we are dealing with the first outcome above.
We simply report the error to the client and move on to the next NDJSON line.
If there’s no error but toolInvoke() returns a null result, the tool is
non-resident. We forward the tool call to the client as a tool_calls SSE event. Otherwise,
we prepare the result to be saved to PostgreSQL and return the result to Ollama.
Replace # handle tool result with:
                                        if toolResult:
                                            # outcome 2: tool call is resident and no error
                                            # convert toolResult to JSON string (tool_result)
                                            # to be saved to db
                                            tool_result += toolResult if not tool_result else f' {toolResult}'
                                            # create new OllamaMessage with tool result
                                            # to be sent back to Ollama
                                            toolresultMsg = OllamaMessage(
                                                role = "tool",
                                                content = toolResult,
                                            )
                                            ollamaRequest.messages.append(toolresultMsg)
                                            # send result back to Ollama
                                            sendNewPrompt = True
                                        elif err:
                                            # outcome 1: tool resident but had error
                                            yield {
                                                "event": "error",
                                                "data": f'{err}'
                                            }
                                        else:
                                            # outcome 3: tool non-resident, forward
                                            # to device as 'tool_calls' SSE event
                                            yield {
                                                "event": "tool_calls",
                                                "data": line
                                            }
When we reach the end of the NDJSON stream, we insert the full Ollama response and any resident
tool calls and their results into the PostgreSQL database as the assistant’s reply. Any error in
the insertion yields an SSE error event sent to the client. Replace # insert full response
into db with:
                    async with main.server.pool.connection() as conn:
                        async with conn.cursor() as cur:
                            # save full response, including tool call(s), to db,
                            # to form part of next prompt's history
                            await cur.execute(
                                'INSERT INTO chatts (username, message, id, appid, toolcalls) \
                                VALUES (%s, %s, gen_random_uuid(), %s, %s);',
                                ("assistant", re.sub(r"\s+", " ", full_response),
                                 ollamaRequest.appID, tool_calls)
                            )
                            # if there were resident tool call(s), save result(s)
                            if sendNewPrompt:
                                await cur.execute(
                                    'INSERT INTO chatts (username, message, id, appid) \
                                    VALUES (%s, %s, gen_random_uuid(), %s);',
                                    ('tool', tool_result, ollamaRequest.appID))
We’re done with handlers.py! Save and exit the file.
main package
Edit main.py:
server$ vi main.py
Find the routes array and add these routes right
after the route for /llmprompt:
    Route('/llmtools', handlers.llmtools, methods=['POST']),
    Route('/weather', handlers.weather, methods=['GET']),
We’re done with main.py. Save and exit the file.
Test run
To test run your server, launch it from the command line:
server$ sudo su
# You are now root, note the command-line prompt changed from '$' or '%' to '#'.
# You can do a lot of harm with all of root's privileges, so be very careful what you do.
server# source .venv/bin/activate
(chatterd) ubuntu@server:/home/ubuntu/reactive/chatterd# granian --host 0.0.0.0 --port 443 --interface asgi --ssl-certificate /home/ubuntu/reactive/chatterd.crt --ssl-keyfile /home/ubuntu/reactive/chatterd.key --access-log --workers-kill-timeout 1 main:server
# Hit ^C to end the test
(chatterd) ubuntu@server:/home/ubuntu/reactive/chatterd# exit
# So that you're no longer root.
server$
Return to the Testing your /llmtools API section.
| Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin | Last updated August 26th, 2025 |