Project 3: llmAction Back End
For the back end, regardless of your stack of choice, you’ll be using functions
from both the llmTools and Signin tutorials.
Toolbox
Tool definition JSON files
To add ollama_cli as a back-end tool, first create a schema file, ollama_cli.json, in the
back-end tools directory you created in the llmTools tutorial:
server$ cd ~/reactive/chatterd/tools
server$ vi ollama_cli.json
and put the following JSON schema in your ollama_cli.json:
{
  "type": "function",
  "function": {
    "name": "ollama_cli",
    "description": "Run Ollama command on host",
    "parameters": {
      "type": "object",
      "properties": {
        "token": {
          "type": "string",
          "description": "chatterID authorization token"
        },
        "cmd": {
          "type": "string",
          "description": "Ollama command to run"
        },
        "arg": {
          "type": "string",
          "description": "Ollama cmd argument"
        }
      },
      "required": [
        "token",
        "cmd",
        "arg"
      ]
    }
  }
}
We will add the ollamaCli() function in the next subsection. Assuming ollamaCli()
is already defined, use the entry for get_weather in the TOOLBOX switch table
from the llmTools tutorial as an example and add an entry to the TOOLBOX switch
table for the ollama_cli tool.
ollamaCli() function
We provide an implementation of ollamaCli() below that you can copy and paste into your
toolbox source file. Our ollamaCli() calls checkAuth() to verify the validity of
the chatterID passed to it. If validation succeeds, checkAuth() returns a “no error”
indication. If validation fails, checkAuth() returns an error containing the message
“401 Unauthorized: chatterID verification failed, probably expired and token expunged”.
If the validation process itself fails for any other reason, checkAuth()
returns the resulting error. Note that this is not a very secure validation process: the
chatterID itself is not attributed to any user, for example. In the attached code,
we provide the function signature of checkAuth() that ollamaCli() expects. You
can consult and adapt the postauth() function from the Signin tutorial to
implement checkAuth().
Once the chatterID is verified, ollamaCli() forks a process to run the ollama
command. If the return code from the ollama command indicates no error (return code 0),
we send back the stdout output of the command. However, if stdout is empty, we
construct and return a string informing the model that the command has succeeded: the
model needs a more explicit and verbose confirmation message than an empty string. If an
error has occurred and an error message is output on stderr, we return the stderr message
to the model as a normal output message; the model will parse the message and recognize it
as an error on its own. Sometimes a command prints progress notifications on stderr. These
messages may contain words that the model could misinterpret as an indication of failure. Hence,
unless the command’s return code is non-zero, we do not forward any stderr messages to the model.
Implementation of ollamaCli() and the function signature of checkAuth() you can add to your
toolbox source file:
Go
// add to import:
// "bytes"
// "os/exec"
// "time" // if used in checkAuth()
// "github.com/jackc/pgx/v4" // if used in checkAuth()
func checkAuth(chatterID string) error {
    return nil // mock "no error"; replace with your real chatterID validation
}
func ollamaCli(argv []string) (*string, error) {
    var output string
    var stdout, stderr bytes.Buffer

    err := checkAuth(argv[0])
    if err != nil {
        return nil, err
    }
    cmd := exec.Command("ollama", argv[1:]...)
    cmd.Stdout = &stdout
    cmd.Stderr = &stderr
    err = cmd.Run()
    if err != nil {
        if _, ok := err.(*exec.ExitError); ok {
            output = stderr.String()
        } else {
            return nil, fmt.Errorf("Cannot run ollama command: %w", err)
        }
    } else {
        output = stdout.String()
        if len(output) == 0 {
            output = argv[1] + "ed"
            if len(argv) > 2 {
                output += " '" + argv[2] + "'"
            }
        }
    }
    return &output, nil
}
Python
import asyncio
import time  # if used in checkAuth()

async def checkAuth(chatterID: str) -> str | None:
    return None  # mock "no error"; replace with your real chatterID validation

async def ollamaCli(argv: list[str]) -> tuple[str | None, str | None]:
    err = await checkAuth(argv[0])
    if err is not None:
        return None, err
    try:
        child = await asyncio.create_subprocess_exec(
            "ollama", *argv[1:],
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE)
        stdout, stderr = await child.communicate()
        if child.returncode != 0:
            return stderr.decode(), None
        else:
            return stdout.decode() or f"{argv[1]}ed '{argv[2] if len(argv) > 2 else ''}'", None
    except Exception as err:
        return None, f"Cannot run ollama command: {err}"
Rust
use chrono::Utc; // if used by checkAuth()
use tokio::process::Command;
async fn checkAuth(appState: &AppState, chatterID: &String) -> Result<bool, String> {
    Ok(true) // mock "no error"; replace with your real chatterID validation
}
pub async fn ollamaCli(appState: &AppState, argv: &[String]) -> Result<Option<String>, String> {
    checkAuth(appState, &argv[0]).await?;
    let Ok(output) = Command::new(&"ollama").args(&argv[1..]).output().await else {
        return Ok(Some(
            "500 Internal Server Error: ollama_cli failed".to_string(),
        ));
    };
    let mut result = String::new();
    if let Some(code) = output.status.code() {
        if code != 0 {
            result = String::from_utf8(output.stderr).unwrap();
        } else if output.stdout.is_empty() {
            result = format!(
                "{}ed '{}'",
                argv[1],
                if argv.len() > 2 { &argv[2] } else { "" }
            );
        } else {
            result = String::from_utf8(output.stdout).unwrap();
        }
    }
    Ok(Some(result))
}
TypeScript
import { spawn } from "child_process"
import {chatterDB} from "./main.js" // if used in checkAuth()
import type {PostgresError} from "postgres"; // if used in checkAuth()
async function checkAuth(chatterID: string): Promise<Error | null> {
    return null // mock "no error"; replace with your real chatterID validation
}
export async function ollamaCli(argv: string[]): Promise<[string?, string?]> {
    return new Promise(async (resolve) => {
        const [chatterID, ...args] = argv
        if (!chatterID) {
            resolve([undefined, "Authorization token null"])
            return
        }
        const err = await checkAuth(chatterID)
        if (err instanceof Error) {
            resolve([undefined, err.message])
            return
        }
        const child = spawn("ollama", args)
        let stdout = ''
        let stderr = ''
        child.stdout.on('data', data => stdout += data)
        child.stderr.on('data', data => stderr += data)
        child.on('close', code => {
            code ? resolve([stderr, undefined])
                 : resolve([stdout || `${argv[1]}ed '${argv[2] ?? ''}'`, undefined])
        })
        child.on('error', err => {
            resolve([undefined, `Cannot run ollama command: ${err}`])
        })
    })
}
Testing
As with the llmTools tutorial, you can test your implementation of ollamaCli(),
without the HITL guardrail, by adding an API endpoint that calls the tool
directly. A full test of the tool with HITL will have to wait until your front end is implemented.
As usual, you can use either a graphical tool such as Postman or a CLI tool such as curl to test.
ollama testing API
We will create test scaffolding to test your ollamaCli() tool. In conducting this test, we will be
using a mock token, so modify checkAuth() in your toolbox to always return no error while you’re
conducting this test.
In your main source file, add an /ollama HTTP POST API endpoint with ollama() as its handler.
In the handlers source file, add the following OllamaCmd struct and ollama() function. The
ollama() handler acts as a mock toolInvoke() function: it deserializes the body of an incoming
HTTP request into an instance of OllamaCmd, assembles the three properties of OllamaCmd into an
array of strings, uses the array as argument to call ollamaCli(), and returns the result of the
call as an HTTP response. Except in Go, your handlers would have to explicitly import ollamaCli
from your toolbox, the same way you imported getWeather.
Go
type OllamaCmd struct {
    Token string `json:"token"`
    Cmd   string `json:"cmd"`
    Arg   string `json:"arg,omitempty"`
}

func ollama(c echo.Context) error {
    var ollamaCmd OllamaCmd
    if err := c.Bind(&ollamaCmd); err != nil {
        return logClientErr(c, http.StatusUnprocessableEntity, err)
    }
    argv := []string{ollamaCmd.Token, ollamaCmd.Cmd}
    if ollamaCmd.Arg != "" {
        argv = append(argv, ollamaCmd.Arg)
    }
    result, err := ollamaCli(argv)
    if err != nil {
        return logServerErr(c, err)
    }
    logOk(c)
    return c.JSON(http.StatusOK, result)
}
Python
@dataclass
class OllamaCmd:
    token: str
    cmd: str
    arg: str | None = field(default=None, metadata=config(exclude=lambda l: not l))

async def ollama(request):
    try:
        ollamaCmd = OllamaCmd(**(await request.json()))
    except Exception as err:
        print(f"{err=}")
        return JSONResponse(
            f"Unprocessable entity: {str(err)}",
            status_code=HTTPStatus.UNPROCESSABLE_ENTITY,
        )
    argv = [ollamaCmd.token, ollamaCmd.cmd]
    if ollamaCmd.arg:
        argv.append(ollamaCmd.arg)
    result, err = await ollamaCli(argv)
    return JSONResponse(
        {"error": f"Internal server error: {str(err)}"},
        status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
    ) if err else JSONResponse(result)
Rust
#[derive(Deserialize)]
pub struct OllamaCmd {
    token: String,
    cmd: String,
    arg: Option<String>,
}

pub async fn ollama(
    State(appState): State<AppState>,
    ConnectInfo(clientIP): ConnectInfo<SocketAddr>,
    Json(ollamaCmd): Json<OllamaCmd>,
) -> Result<Json<Value>, (StatusCode, String)> {
    let mut argv = vec![ollamaCmd.token, ollamaCmd.cmd];
    if let Some(arg) = ollamaCmd.arg {
        argv.push(arg);
    }
    let result = ollamaCli(&appState, &argv).await;
    match result {
        Err(err) => Err(logServerErr(&clientIP, err)),
        Ok(output) => {
            logOk(&clientIP);
            Ok(Json(json!(output)))
        }
    }
}
TypeScript
type OllamaCmd = {
    token: string
    cmd: string
    arg?: string
}

export async function ollama(req: Request, res: Response) {
    const ollamaCmd: OllamaCmd = req.body
    const argv: string[] = [ollamaCmd.token, ollamaCmd.cmd]
    if (ollamaCmd.arg) {
        argv.push(ollamaCmd.arg)
    }
    const [result, error] = await ollamaCli(argv)
    if (error) {
        logServerErr(res, error)
        return
    }
    res.json(result)
}
Then in Postman/curl, send an HTTP POST to your /ollama API endpoint with the following JSON body:
{
"token": "chatterID",
"cmd": "ls"
}
which should return a table listing the models available on Ollama.
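For example, with curl the request might look like the following; the server address is a hypothetical placeholder, so substitute your chatterd host and port, and use a token your (mocked) checkAuth() accepts:

```shell
# Hypothetical invocation: replace https://YOUR_SERVER with your
# chatterd's actual scheme, host, and port, and "chatterID" with a
# token accepted by your checkAuth().
curl -X POST https://YOUR_SERVER/ollama \
     -H "Content-Type: application/json" \
     -d '{ "token": "chatterID", "cmd": "ls" }'
```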
Once your front end is completed, you can do end-to-end testing; see the End-to-end Testing section of the spec.
And with that, you’re done with your back end. Congrats!
Back-end submission guidelines
As usual, git commit your changes to your chatterd source files with the
commit message, "llmaction back end", and push your changes to your git repo.
WARNING: You will not get full credit if your front end is not set
up to work with your back end! You MUST also submit your LLM Prompts and Skills
and Rules file(s) as part of this assignment.
Every time you rebuild your Go or Rust server or make changes to either of your
JavaScript or Python files, you need to restart chatterd:
server$ sudo systemctl restart chatterd
Leave your chatterd running until you have received your assignment grade.
TIP:
server$ sudo systemctl status chatterd
is your BEST FRIEND in debugging your server. If you get an HTTP error code 500 Internal Server Error, or if you just don’t know whether your HTTP request has made it to the server, the first thing to do is run sudo systemctl status chatterd on your server and study its output.
If you’re running a Python server, it also shows error messages from your Python code, including any debug printouts from your code. The command systemctl status chatterd is by far the most useful go-to tool to diagnose your back-end server problem.
| Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, Sugih Jamin | Last updated: March 14th, 2026 |