TypeScript with Express
We assume that your chatterd code base has accumulated code up to at least the llmChat
back end.
toolbox
Let us start by creating a toolbox to hold our tools. Change to your chatterd folder and
create a new TypeScript file, name it toolbox.ts:
server$ cd ~/reactive/chatterd
server$ vi toolbox.ts
Put the following import at the top of the file:
import HttpStatus from "http-status-codes";
The contents of this file can be categorized into three purposes: tool/function definition, the toolbox itself, and tool use (or function calling).
Tool/function definition
Ollama tool schema: at the top of Ollama’s JSON tool definition is a JSON Object representing a tool schema. The tool schema is defined using nested JSON Objects and JSON Arrays. Add the full nested definition of Ollama’s tool schema to your file:
export type OllamaToolSchema = {
    type: string
    function: {
        name: string
        description: string
        parameters?: {
            type: string
            properties: Record<string, { // the JSON spec doesn't require Record keys to preserve insertion order
                type: string
                description: string
                enum?: string[]
            }>
            required?: string[] // parameters MUST be listed in function-signature order
        }
    }
}
Weather tool schema: in this tutorial, we have only one tool resident on the back end,
get_weather. Instead of manually instantiating an OllamaToolSchema for each tool, we read in
(import) the JSON object directly from the schema file. JSON is native JavaScript Object Notation
after all. Add the following line at the top-level of your toolbox.ts file to read in the
get_weather.json tool schema:
import WEATHER_SCHEMA from './tools/get_weather.json' with { type: 'json' }
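The import gives you a plain object whose shape must match OllamaToolSchema. If you want to sanity-check your schema file, here is a hypothetical equivalent expressed as a typed TypeScript constant; the parameter names and descriptions are our guesses, and your actual get_weather.json from the earlier tools step is authoritative:

```typescript
// Hypothetical shape of tools/get_weather.json, expressed as a constant.
// Property names ("latitude", "longitude") are our assumption.
const WEATHER_SCHEMA_EXAMPLE = {
    type: "function",
    function: {
        name: "get_weather",
        description: "Get the current temperature at a given location",
        parameters: {
            type: "object",
            properties: {
                latitude: { type: "string", description: "latitude of the location" },
                longitude: { type: "string", description: "longitude of the location" },
            },
            required: ["latitude", "longitude"], // function-signature order
        },
    },
}
```

Note that required lists latitude before longitude, matching the argv order getWeather() expects below.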
Weather tool function: we implement the get_weather tool as a getWeather() function that
makes an API call to the free Open Meteo weather service. Add the
following nested object definition to hold Open Meteo’s return result. For this tutorial, we’re only
interested in the latitude, longitude, and temperature returned by Open Meteo:
type OMeteoResponse = {
    latitude: number
    longitude: number
    current: {
        temperature_2m: number
    }
}
Here’s the definition of the getWeather() function:
export async function getWeather(argv: string[]): Promise<[string?, string?]> {
    // Open-Meteo API doc: https://open-meteo.com/en/docs#api_documentation
    let response: Response
    try {
        response = await fetch(`https://api.open-meteo.com/v1/forecast?latitude=${argv[0]}&longitude=${argv[1]}&current=temperature_2m&temperature_unit=fahrenheit`, {
            method: "GET",
        })
        if (response.status !== HttpStatus.OK) {
            return [undefined, `Open-meteo: ${response.status}: ${response.statusText}`]
        }
    } catch (error) { // fetch() threw: network/connection error
        return [undefined, "Cannot connect to Open Meteo"]
    }
    const ometeoResponse = await response.json() as OMeteoResponse
    return [`Weather at latitude ${ometeoResponse.latitude} and longitude ${ometeoResponse.longitude} is ${ometeoResponse.current.temperature_2m}ºF`, undefined]
}
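getWeather() returns a [result?, error?] tuple instead of throwing, so callers can destructure and branch on whichever slot is defined. As a standalone illustration of the same idiom (our sketch, not part of the tutorial code), here it is applied to parsing an Open-Meteo-style JSON payload:

```typescript
// Hypothetical helper using the same [result?, error?] tuple idiom as getWeather()
function parseTemp(json: string): [string?, string?] {
    try {
        const o = JSON.parse(json) as { current?: { temperature_2m?: number } }
        const t = o.current?.temperature_2m
        return t !== undefined
            ? [`${t}ºF`, undefined]              // success: result in slot 0
            : [undefined, "missing temperature"] // failure: error in slot 1
    } catch {
        return [undefined, "malformed JSON"]
    }
}

const [temp, err] = parseTemp('{"current":{"temperature_2m":72}}')
// temp === "72ºF", err === undefined
```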
The toolbox
Even though we have only one resident tool in this tutorial, we want a generalized architecture that can hold multiple tools and invoke the right tool dynamically. To that end, we’ve chosen a switch table (or jump table, or, more fancily, a service-locator registry) as the data structure for our toolbox. We implement the switch table as a record. The “keys” in the record are the names of the tools/functions. Each “value” is a record containing the tool’s definition/schema and a pointer to the function implementing the tool. To send a tool as part of a request to Ollama, we look up its schema in the switch table and copy it to the request. To invoke a tool called by Ollama in its response, we look up the tool’s function in the switch table and invoke it.
Add the following type for an async tool function and the record type containing a tool definition and the async tool function:
type ToolFunction = (args: string[]) => Promise<[string?, string?]>
type Tool = {
    schema: OllamaToolSchema
    function: ToolFunction
}
Now create a switch-table toolbox and populate it with the weather schema we read in earlier:
export const TOOLBOX: Record<string, Tool> = {
    "get_weather": { schema: WEATHER_SCHEMA, function: getWeather },
} as const
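To see the switch-table pattern in isolation, here is a minimal self-contained sketch with made-up names (not the tutorial's code): a registry mapping names to implementations, and a dispatcher that looks a tool up by name before calling it.

```typescript
// A tiny service-locator registry: name -> implementation
type EchoFn = (args: string[]) => string
const REGISTRY: Record<string, { description: string; fn: EchoFn }> = {
    echo: { description: "echoes its arguments", fn: (args) => args.join(" ") },
}

// dynamic dispatch: look the tool up by name, then call it
function invoke(name: string, args: string[]): string | undefined {
    return REGISTRY[name]?.fn(args)
}

invoke("echo", ["hello", "tools"]) // → "hello tools"
invoke("no_such_tool", [])         // → undefined
```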
Tool use or function calling
Ollama tool call: Ollama’s JSON tool call comprises a JSON Object containing a nested JSON
Object carrying the name of the function and the arguments to pass to it. Add these type
definitions representing Ollama’s tool-call JSON to your toolbox.ts file:
export type OllamaToolCall = {
    function: OllamaFunctionCall
}
type OllamaFunctionCall = {
    name: string
    arguments: Record<string, string>
}
Tool invocation: finally, here’s the tool-invocation function. We call this function to execute any tool call we receive in Ollama’s response. It looks up the tool name in the toolbox. If the tool is resident, it runs the tool and returns the result; otherwise it returns a tuple of undefined values.
export async function toolInvoke(func: OllamaFunctionCall): Promise<[string?, string?]> {
    const tool = TOOLBOX[func.name]
    if (tool) {
        // put arguments in order; they may arrive out of order from Ollama
        const argv = tool.schema.function.parameters?.required?.map(prop => func.arguments[prop] ?? '') ?? []
        return tool.function(argv)
    }
    return [undefined, undefined]
}
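The map over parameters?.required is what turns Ollama's named arguments, which may arrive in any order, into a positional argv. Here is that one step run standalone with made-up values:

```typescript
// the schema's `required` list fixes the positional order
const required = ["latitude", "longitude"]
// arguments as received from Ollama, deliberately out of order
const received: Record<string, string> = { longitude: "-83.74", latitude: "42.28" }

const argv = required.map(prop => received[prop] ?? '')
// argv is ["42.28", "-83.74"]: function-signature order restored
```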
That concludes our toolbox definition. Save and exit the file.
handlers
Edit handlers.ts:
server$ vi handlers.ts
imports
Add the following imports at the top of the file:
import * as readline from 'readline/promises'
Under import { chatterDB } from './main.js' add:
import type { OllamaToolCall, OllamaToolSchema } from "./toolbox.js"
import { TOOLBOX, getWeather, toolInvoke } from "./toolbox.js"
type
Next update the following types:
- add a tool_calls property to the end of your OllamaMessage type: tool_calls?: OllamaToolCall[]
- add a tools property to the end of your OllamaRequest type: tools?: OllamaToolSchema[]
For the /weather testing API, add also the following type:
type Location = {
    lat: string
    lon: string
}
weather
Let’s implement the handler for the /weather API that we can use to
test our getWeather() function later:
export async function weather(req: Request, res: Response) {
    const loc: Location = req.body
    const [temp, error] = await getWeather([loc.lat, loc.lon])
    if (error) {
        logServerErr(res, error)
        return // don't also send a second response below
    }
    res.json(temp)
}
llmtools
The underlying request/response handling of llmtools() is basically that of llmchat(),
plus the mods needed to support tool calling. We will name variables according to this scheme:
- camelCase for language-level data objects,
- snake_case for string version of data objects to be used with PostgreSQL or JSON, and,
- ALL_CAPS for immutable global toolbox and tool definitions.
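For example (hypothetical values, using names that appear later in this section):

```typescript
const ollamaRequest = { model: "llama3.2" }                 // camelCase: language-level data object
const client_tools = JSON.stringify([{ type: "function" }]) // snake_case: string form for db/JSON
const TOOLBOX = { get_weather: {} } as const                // ALL_CAPS: immutable global
```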
Make a copy of your llmchat() function and rename it llmtools().
In your newly renamed llmtools() function, after deserializing req.body to OllamaRequest and checking that the front end has provided an appID, serialize any tools present in the OllamaRequest so that we can add them to the PostgreSQL database:
// convert tools from client as JSON string (client_tools) to be saved to db
let client_tools = ollamaRequest.tools ? JSON.stringify(ollamaRequest.tools) : null
Next, when inserting each message into the database, store the client’s tools also; but if there is more than one message in the messages array, store the tools only once, with the first message.
Replace await chatterDB`INSERT... in the for (const msg of ollamaRequest.messages) block with
the following:
await chatterDB`INSERT INTO chatts (name, message, id, appid, toolschemas) \
    VALUES (${msg.role}, ${msg.content.replace('\n', ' ').replaceAll("  ", " ").trim()}, \
    gen_random_uuid(), ${ollamaRequest.appID}, ${client_tools})`
// store client_tools only once:
// reset it to null after the first message
client_tools = null
The llmchat() code next reconstructs ollamaRequest to be sent to Ollama by retrieving from the PostgreSQL database all prior exchanges between the client and Ollama using the client’s appID. In llmtools(), we first populate ollamaRequest.tools with tools resident on the chatterd back end before reconstructing the ollamaRequest. Replace the following code:
try {
    ollamaRequest.messages = (await chatterDB`SELECT name, message FROM chatts WHERE appid = ${ollamaRequest.appID} ORDER BY time ASC`)
        .map(row => ({
            role: row.name,
            content: row.message
        }))
with:
// reset ollamaRequest.tools, then append all of chatterd's
// resident tools to ollamaRequest.tools;
// front-end tools will be added back later, as part of
// reconstructing the appID's context from the db
ollamaRequest.tools = []
for (const tool of Object.values(TOOLBOX)) {
    ollamaRequest.tools.push(tool.schema)
}

// reconstruct ollamaRequest to be sent to Ollama:
// - add context: retrieve all past messages by appID,
//   incl. the one just received, and attach them to
//   ollamaRequest:
//   - convert each back to OllamaMessage and
//   - insert it into ollamaRequest
// - add each message's clientTools to ollamaRequest.tools,
//   which should already have chatterd's resident tools
//   inserted above.
try {
    ollamaRequest.messages =
        (await chatterDB`SELECT name, message, toolcalls, toolschemas FROM chatts WHERE appid = ${ollamaRequest.appID} ORDER BY time ASC`)
        .map(row => {
            if (row.toolschemas) {
                ollamaRequest.tools ??= []
                ollamaRequest.tools.push(...JSON.parse(row.toolschemas))
            }
            return {
                role: row.name,
                content: row.message,
                tool_calls: row.toolcalls ? JSON.parse(row.toolcalls) : undefined, // guard: column may be NULL
            }
        })
For each row, we append any tools the front-end provided to the OllamaRequest.tools array. This
array has previously been populated with available resident back-end tools. Then we append each row,
recording a previous exchange between the client and Ollama, including any tool calls Ollama has
made, into the OllamaRequest.messages array.
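To see the shape of this reconstruction in isolation, here is the same mapping run over two fabricated rows (a sketch with made-up data, not the tutorial's code): the client-supplied tool schemas accumulate into a tools array while each row becomes a message.

```typescript
type Row = { name: string; message: string; toolcalls: string | null; toolschemas: string | null }

// fabricated db rows: a user turn carrying a client tool schema,
// then an assistant turn carrying a tool call
const rows: Row[] = [
    { name: "user", message: "weather in Ann Arbor?", toolcalls: null,
      toolschemas: '[{"type":"function","function":{"name":"client_tool"}}]' },
    { name: "assistant", message: "", toolschemas: null,
      toolcalls: '[{"function":{"name":"get_weather","arguments":{}}}]' },
]

const tools: object[] = []   // stands in for ollamaRequest.tools
const messages = rows.map(row => {
    if (row.toolschemas) {
        tools.push(...JSON.parse(row.toolschemas)) // accumulate client tools
    }
    return {
        role: row.name,
        content: row.message,
        tool_calls: row.toolcalls ? JSON.parse(row.toolcalls) : undefined,
    }
})
// messages: two entries; tools: one client schema collected
```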
Next remove the following code; we will put it inside ndjson_yield_sse() later:
let response = await fetch(OLLAMA_BASE_URL+"/chat", {
    method: req.method,
    body: JSON.stringify(ollamaRequest),
})
if (!response.body) {
    logServerErr(res, "llmChat: Empty response body from Ollama")
    return
}
let completion = ''
ndjson_yield_sse
To accommodate resident-tool call, we use a flag, sendNewPrompt, to indicate to our stream
generator whether we have any prompt to send to Ollama. Initially, sendNewPrompt is set to true
to always send the prompt from the front end. Subsequently, if Ollama makes a call for a tool
resident on the back end, we will send the result of the tool call as a new prompt to Ollama.
Replace your ndjson_yield_sse() signature and local variables, up to the while (!done) line, with:
async function* ndjson_yield_sse(): AsyncGenerator<string> {
    const decoder = new TextDecoder()
    let stream_reader: ReadableStreamDefaultReader<Uint8Array>
    let chunk: string
    let done: boolean
    let value: Uint8Array<ArrayBufferLike>
    let completion: string
    let sendNewPrompt = true
    let tool_result: string | undefined
    let tool_err: string | undefined

    while (sendNewPrompt) {
        sendNewPrompt = false // assume no resident tool call
        // construct request
        let response = await fetch(OLLAMA_BASE_URL + "/chat", {
            method: req.method,
            body: JSON.stringify(ollamaRequest), // convert the request to JSON
        }) // send request to Ollama
        res.status(response.status)
        if (!response.body) {
            yield `event: error\ndata: { "error": "Empty response body from Ollama" }\n\n`
            break
        }
        try {
            // the standard fetch() API does not have a built-in method to read a response body line by line.
            stream_reader = response.body.getReader()
            chunk = ''
            done = false
            completion = ''
            // leave existing `while (!done) {}`, `if (chunk.length > 0)`,
            // up to and including the `if (completion) {}` blocks here
        } catch (err) { // reading response.body
            yield `event: error\ndata: { "error": ${JSON.stringify(err)} }\n\n`
        }
    } // while sendNewPrompt
} // this line matches/replaces existing ndjson_yield_sse close brace
Whereas previously, in llmchat()’s while (!done) {} block, we simply yielded each data line after appending it to the completion string, we must now check whether there is a tool call and yield the data line only if there is none. Replace the following lines:
// send NDJSON line as SSE data line
yield `data: ${line}\n\n`
with:
// is there a tool call?
if (ollamaResponse.message.tool_calls) {
    // handle tool calls
} else {
    // no tool call, send NDJSON line as SSE data line
    yield `data: ${line}\n\n`
}
In handling tool calls, we first serialize the tool call back into a JSON string to be
saved into the database. Replace the comment // handle tool calls with:
// convert toolCalls to JSON string (tool_calls) and save to db
let tool_calls = JSON.stringify(ollamaResponse.message.tool_calls)
for (const toolCall of ollamaResponse.message.tool_calls) {
    // assuming one tool call per response
    if (!toolCall.function.name) {
        continue // LLM miscalled
    }
    try {
        // save full response, including tool call(s), to db,
        // to form part of next prompt's history
        await chatterDB`INSERT INTO chatts (name, message, id, appid, toolcalls) \
            VALUES ('assistant', ${completion}, gen_random_uuid(), ${ollamaRequest.appID}, ${tool_calls})`
    } catch (err) {
        yield `event: error\ndata: { "error": ${JSON.stringify((err as PostgresError).toString())} }\n\n`
    }
    // clear completion and tool_calls, we already stored them
    completion = ''
    // keep the `;` below: without it, the destructuring assignment that will
    // replace the next comment would be parsed as ''[tool_result, tool_err]
    // (string indexing) rather than a new statement
    tool_calls = '';
    // make the tool call
} // for toolCall
We call toolInvoke() with the tool call’s function object (its name and arguments) and process the result. There are three possible outcomes from the call to toolInvoke():
- the tool is resident but the call was unsuccessful and returned an error,
- the tool is resident and the call was successful, or
- the tool is non-resident.
If the tool call resulted in an error, we store the error as the tool result. We add the tool call
and its result to the OllamaRequest message and set the flag (sendNewPrompt) to send the
OllamaRequest back to Ollama. We also store both the tool call and its result to the database, to
form part of this appID’s context. If the tool call resulted in neither an error nor any returned
result, we interpret that as the tool being non-resident on the back end and forward the tool
call to the front end as an SSE tool_calls event. Replace the comment // make the tool call with:
[tool_result, tool_err] = await toolInvoke(toolCall.function)
// outcome 1: tool resident but had error:
// send error back to LLM, don't report to frontend
tool_result ??= tool_err
if (tool_result) {
    // outcomes 1 & 2 (tool is resident, with or without error):
    // reuse OllamaMessage to carry tool result
    // to be sent back to Ollama:
    // first append the tool call itself
    ollamaRequest.messages.push(ollamaResponse.message)
    // then append the result
    ollamaRequest.messages.push({
        role: 'tool',
        content: tool_result,
    })
    // don't send tools multiple times
    ollamaRequest.tools = undefined
    // loop to send tool result back to Ollama
    sendNewPrompt = true
    try {
        // save resident tool call result or error message
        await chatterDB`INSERT INTO chatts (name, message, id, appid) \
            VALUES ('tool', ${tool_result.replace(/\s+/g, ' ')}, gen_random_uuid(), ${ollamaRequest.appID})`
    } catch (err) {
        yield `event: error\ndata: { "error": ${JSON.stringify((err as PostgresError).toString())} }\n\n`
    }
} else {
    // outcome 3: tool non-resident, forward to
    // front end as 'tool_calls' SSE event
    yield `event: tool_calls\ndata: ${line}\n\n`
}
Subsequently, remove the argument from the call to ndjson_yield_sse(); the function no longer takes any argument. Also delete this line:
res.status(response.status)
And we’re done with handlers.ts! Save and exit the file.
main.ts
Edit main.ts:
server$ vi main.ts
Find the initialization of app and add these routes right
after the route for /llmchat:
.post('/llmtools/', handlers.llmtools)
.get('/weather/', handlers.weather)
We’re done with main.ts. Save and exit the file.
Build and test run
TypeScript is a compiled language, like C/C++ and unlike JavaScript and Python, which are interpreted languages. This means you must run npx tsgo every time you make changes to your code for the changes to show up when you run node.
To build your server, transpile TypeScript into JavaScript:
server$ npx tsgo
To run your server:
server$ sudo node main.js
# Hit ^C to end the test
You can test your implementation following the instructions in the Testing llmTools APIs section.
| Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin | Last updated March 8th, 2026 |