Go with Echo

Cover Page

Back-end Page

We assume that your chatterd code base has accumulated code up to at least the llmChat back end.

toolbox

Let us start by creating a toolbox to hold our tools. Change to your chatterd folder and create a new Go file named toolbox.go:

server$ cd ~/reactive/chatterd
server$ vi toolbox.go

Put the following import() block at the top of the file:

package main

import (
    _ "embed"
    "encoding/json"
    "fmt"
    "net/http"
)

The contents of this file can be categorized into three purposes: tool/function definition, the toolbox itself, and tool use (or function calling).

Tool/function definition

Ollama tool schema: at the top of Ollama’s JSON tool definition is a JSON Object representing a tool schema. The tool schema is defined using nested JSON Objects and JSON Arrays. Add the full nested definitions of Ollama’s tool schema to your file:

type OllamaToolSchema struct {
    Type     string             `json:"type"`
    Function OllamaToolFunction `json:"function"`
}

type OllamaToolFunction struct {
    Name        string               `json:"name"`
    Description string               `json:"description"`
    Parameters  OllamaFunctionParams `json:"parameters,omitempty"`
}

type OllamaFunctionParams struct {
    Type       string                     `json:"type"`
    Properties map[string]OllamaParamProp `json:"properties"` // Map has no ordering
    Required   []string                   `json:"required,omitempty"` // parameters MUST be in function-signature order
}

type OllamaParamProp struct {
    Type        string   `json:"type"`
    Description string   `json:"description"`
    Enum        []string `json:"enum,omitempty"`
}

Weather tool schema: in this tutorial, we have only one tool resident on the back end, get_weather. Instead of manually instantiating an OllamaToolSchema for each tool, we use Go’s embed and encoding/json packages to create one for us from a JSON schema file.

We first load the schema from file to a string using the embed package. Add the following line at the top-level of your toolbox.go file:

// the following line is not a comment
//go:embed tools/get_weather.json
var WEATHER_JSON string
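The tutorial assumes tools/get_weather.json already exists in your repo but doesn’t show its contents. If you need to create it, here is a plausible schema consistent with the structs above and with getWeather()’s expectation that argv[0] is latitude and argv[1] is longitude (the parameter names and descriptions are illustrative, not prescribed by the tutorial):

```json
{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current temperature at a given latitude and longitude",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {
                    "type": "string",
                    "description": "Latitude of the location, in decimal degrees"
                },
                "longitude": {
                    "type": "string",
                    "description": "Longitude of the location, in decimal degrees"
                }
            },
            "required": ["latitude", "longitude"]
        }
    }
}
```

Note that "required" lists the parameters in function-signature order, which the tool invocation code later in this tutorial relies on to order the arguments.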

We define a jsonToSchema() function to unmarshal a schema into an instance of OllamaToolSchema. Put it at the top level of your toolbox.go file; we’ll use it later:

func jsonToSchema(tool string) *OllamaToolSchema {
    var schema OllamaToolSchema

    err := json.Unmarshal([]byte(tool), &schema)
    if err != nil {
        fmt.Printf("Failed to unmarshal tool schema: %s", tool)
        panic(err)
    }
    return &schema
}

Weather tool function: we implement the get_weather tool as a getWeather() function that makes an API call to the free Open Meteo weather service. Add the following nested struct definition to hold Open Meteo’s return result. For this tutorial, we’re only interested in the latitude, longitude, and temperature returned by Open Meteo:

type OMeteoResponse struct {
    Lat     float64 `json:"latitude"`
    Lon     float64 `json:"longitude"`
    Current struct {
        Temp float64 `json:"temperature_2m"`
    } `json:"current"`
}

Here’s the definition of the getWeather() function:

func getWeather(argv []string) (*string, error) {
    // Open-Meteo API doc: https://open-meteo.com/en/docs#api_documentation
    response, err := http.DefaultClient.Get(fmt.Sprintf(
        "https://api.open-meteo.com/v1/forecast?latitude=%s&longitude=%s&current=temperature_2m&temperature_unit=fahrenheit",
        argv[0], argv[1],
    ))
    if err != nil {
        return nil, fmt.Errorf("Cannot connect to Open Meteo: %w", err)
    }
    defer func() {
        _ = response.Body.Close()
        http.DefaultClient.CloseIdleConnections()
    }()
    if response.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("Open-meteo (%d): %s", response.StatusCode, response.Status)
    }
    var ometeoResponse OMeteoResponse
    if err = json.NewDecoder(response.Body).Decode(&ometeoResponse); err != nil {
        return nil, fmt.Errorf("Cannot decode Open Meteo's response: %w", err)
    }
    weather := fmt.Sprintf("Weather at lat: %f, lon: %f is %fºF",
        ometeoResponse.Lat, ometeoResponse.Lon, ometeoResponse.Current.Temp)
    return &weather, nil
}

The toolbox

Even though we have only one resident tool in this tutorial, we want a generalized architecture that can hold multiple tools and invoke the right tool dynamically. To that end, we use a switch table (also known as a jump table or, more fancily, a service-locator registry) as the data structure for our toolbox. We implement the switch table as a map. The keys of the map are the names of the tools/functions. Each value is a record containing the tool’s definition/schema and the function implementing the tool. To send a tool as part of a request to Ollama, we look up its schema in the switch table and copy it into the request. To invoke a tool called by Ollama in its response, we look up the tool’s function in the switch table and call it.

Add the following type for a tool function and the record type containing a tool definition and the tool function:

type ToolFunction func(args []string) (*string, error)

type Tool struct {
    Schema    OllamaToolSchema
    Function  ToolFunction
}

Now create a switch-table toolbox and populate it with the weather tool using the jsonToSchema() function we created earlier:

var TOOLBOX = map[string]Tool{
    "get_weather": { *jsonToSchema(WEATHER_JSON), getWeather },
}

Tool use or function calling

Ollama tool call: Ollama’s JSON tool call comprises a JSON Object containing a nested JSON Object carrying the name of the function and the arguments to pass to it. Add these struct definitions representing Ollama’s tool-call JSON to your toolbox.go file:

type OllamaToolCall struct {
    Function OllamaFunctionCall `json:"function"`
}

type OllamaFunctionCall struct {
    Name      string            `json:"name"`
    Arguments map[string]string `json:"arguments"`
}

Tool invocation: finally, here’s the tool invocation function. We call it to execute any tool call we receive in an Ollama response. It looks up the tool name in the toolbox. If the tool is resident, it runs the tool and returns the result; otherwise it returns nil.

func toolInvoke(function OllamaFunctionCall) (*string, error) {
    tool, ok := TOOLBOX[function.Name]

    if ok {
        // get arguments in order; they may arrive out of order from Ollama
        var argv []string
        for _, prop := range tool.Schema.Function.Parameters.Required {
            argv = append(argv, function.Arguments[prop])
        }
        return tool.Function(argv)
    }
    return nil, nil
}
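The key detail in toolInvoke() is that arguments are ordered by Parameters.Required rather than by map iteration, since Go maps have no ordering. That logic can be exercised in isolation with a toy tool; everything below (TOYBOX, toyInvoke, the concat tool) is illustrative scaffolding, not part of the tutorial’s code:

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal stand-ins for the tutorial's types.
type ToolFunction func(args []string) (*string, error)

type Tool struct {
	Required []string // stands in for Schema.Function.Parameters.Required
	Function ToolFunction
}

var TOYBOX = map[string]Tool{
	"concat": {
		Required: []string{"first", "second"}, // function-signature order
		Function: func(argv []string) (*string, error) {
			s := strings.Join(argv, "-")
			return &s, nil
		},
	},
}

// Same lookup-and-order logic as toolInvoke(), on the simplified types.
func toyInvoke(name string, arguments map[string]string) (*string, error) {
	tool, ok := TOYBOX[name]
	if !ok {
		return nil, nil // non-resident tool
	}
	var argv []string
	for _, prop := range tool.Required {
		argv = append(argv, arguments[prop])
	}
	return tool.Function(argv)
}

func main() {
	// Arguments arrive as a map, i.e., in no particular order;
	// Required dictates the positional order passed to the tool.
	result, _ := toyInvoke("concat", map[string]string{"second": "b", "first": "a"})
	fmt.Println(*result) // prints: a-b
}
```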

That concludes our toolbox definition. Save and exit the file.

handlers

Edit handlers.go:

server$ vi handlers.go

First add this import to the import() block at the top of the file:

    "github.com/jackc/pgx/v4"

struct

Next update the following structs:

For the /weather testing API, also add the following struct:

type Location struct {
    Lat string `json:"lat"`
    Lon string `json:"lon"`
}

weather

Let’s implement the handler for the /weather API that we can use to test our getWeather() function later:

func weather(c echo.Context) error {
    var loc Location

    if err := c.Bind(&loc); err != nil {
        return logClientErr(c, http.StatusUnprocessableEntity, err)
    }

    temp, err := getWeather([]string{loc.Lat, loc.Lon})
    if err != nil {
        return logServerErr(c, err)
    }
    logOk(c)
    return c.JSON(http.StatusOK, temp)
}

llmtools

The underlying request/response handling of llmtools() is basically that of llmchat(), plus the mods needed to support tool calling. We will name variables according to this scheme:

Make a copy of your llmchat() function and rename it llmtools(). In your newly renamed llmtools() function, after deserializing the request body to OllamaRequest and checking that the front end has provided an appID, serialize any tools present in the OllamaRequest so that we can add them to the PostgreSQL database:

			// convert tools from the client to a JSON string (client_tools) to save to db
			var client_tools []byte
			if ollamaRequest.Tools != nil {
				client_tools, _ = json.Marshal(ollamaRequest.Tools)
			}

Next, when inserting each message into the database, store the client’s tools also, but if there is more than one message in the messages array, store the tools only once, with the first message. Replace chatterDB.Exec(background, `INSERT... and its error-checking code in the for _, msg := range ollamaRequest.Messages block with the following:

				_, err = chatterDB.Exec(background, `INSERT INTO chatts (name, message, id, appid, toolschemas) VALUES ($1, $2, gen_random_uuid(), $3, $4)`,
					msg.Role, msg.Content, ollamaRequest.AppID, client_tools)
				if err != nil {
					return logServerErr(c, err)
				}
		
				// store client_tools only once
				// reset it to empty after first message.
				client_tools = nil

The llmchat() code next reconstructs ollamaRequest to be sent to Ollama by retrieving from the PostgreSQL database all prior exchanges between the client and Ollama using the client’s appID. In llmtools(), we first populate ollamaRequest.Tools with the tools resident on the chatterd back end before reconstructing the ollamaRequest. Add the following before the // reconstruct ollamaRequest to be sent to Ollama: comment:

			// reset ollamaRequest.Tools, then append all of chatterd's
			// resident tools to ollamaRequest.Tools;
			// front-end tools will be added back later, as part of reconstructing
			// the appID's context from the db (see OllamaMessageFromRow())
			ollamaRequest.Tools = nil
			for _, tool := range TOOLBOX {
				ollamaRequest.Tools = append(ollamaRequest.Tools, tool.Schema)
			}

As the comments above indicate, we will need an OllamaMessageFromRow() function. Go does not support static methods. Instead, define OllamaMessageFromRow() as a global function outside your llmtools() function, for example right under, and also outside, the definition of type OllamaMessage struct {}, near the top of the file:

func OllamaMessageFromRow(row pgx.Rows, ollamaRequest *OllamaRequest) (*OllamaMessage, error) {
    var msg OllamaMessage
    var toolcalls []byte
    var toolschemas []byte

    err := row.Scan(&msg.Role, &msg.Content, &toolcalls, &toolschemas)
    if err != nil {
        return &msg, err
    }

    if toolcalls != nil {
        // must unmarshal to type to append toolcalls
        var toolCalls []OllamaToolCall
        _ = json.Unmarshal(toolcalls, &toolCalls)
        msg.ToolCalls = append(msg.ToolCalls, toolCalls...)
    }

    if toolschemas != nil {
        // has front-end device tools
        // must unmarshal to type to append device tools to ollamaRequest.tools
        var tools []OllamaToolSchema
        _ = json.Unmarshal(toolschemas, &tools)
        ollamaRequest.Tools = append(ollamaRequest.Tools, tools...)
    }

    return &msg, nil
}

The function creates and returns an OllamaMessage holding a previous exchange between the client and Ollama retrieved from the given row, including any tool calls Ollama made. It then appends any tools the front end provided to the ollamaRequest.Tools array, which has previously been populated with the available resident back-end tools.

Back in llmtools(), replace the following line:

    rows, err := chatterDB.Query(reqCtx, `SELECT name, message FROM chatts WHERE appid = $1 ORDER BY time ASC`, ollamaRequest.AppID)

with:

		rows, err := chatterDB.Query(reqCtx, `SELECT name, message, toolcalls, toolschemas FROM chatts WHERE appid = $1 ORDER BY time ASC`, ollamaRequest.AppID)

then remove the line var msg OllamaMessage and replace the code inside the for rows.Next() {} block with:

				msg, err := OllamaMessageFromRow(rows, &ollamaRequest)
				if err != nil {
					rows.Close()
					return logServerErr(c, err)
				}
				ollamaRequest.Messages = append(ollamaRequest.Messages, *msg)

This latest code block calls OllamaMessageFromRow() for each row of the database and appends the OllamaMessage returned to the ollamaRequest.Messages array, reconstructing the full prompt history.

Next, remove the following code; we will put it inside a loop later:

    // construct request
    requestBody, err := json.Marshal(&ollamaRequest) // convert the request to JSON
    if err != nil {
        return logServerErr(c, err)
    }
    ollama_url := OLLAMA_BASE_URL.String() + "/chat"
    // send request
    request, _ := http.NewRequestWithContext(reqCtx, req.Method, ollama_url, bytes.NewReader(requestBody))
    
    response, err := http.DefaultClient.Do(request)
    if err != nil {
        return logServerErr(c, err)
    }
    defer func() {
        _ = response.Body.Close()
    }()

NDJSON to SSE stream transformation

To accommodate resident-tool calls, we use a flag, sendNewPrompt, to indicate whether we have a prompt to send to Ollama. Initially, sendNewPrompt is set to true so that we always send the prompt from the front end. Subsequently, if Ollama calls a tool resident on the back end, we send the result of the tool call as a new prompt to Ollama. Add the following code right before the line reader := bufio.NewReader(response.Body), putting that line inside the new for loop:

		var sendNewPrompt = true
		var tool_result *string
		var tool_err error

		for sendNewPrompt {
			sendNewPrompt = false // assume no resident tool calls

			// construct request
			requestBody, err := json.Marshal(&ollamaRequest) // convert the request to JSON
			if err != nil {
				err_msg, _ := json.Marshal(err.Error())
				_, _ = fmt.Fprintf(res, "event: error\ndata: { \"error\": %s }\n\n", string(err_msg))
				res.Flush()
				return err
			}
			request, _ := http.NewRequestWithContext(reqCtx, req.Method, OLLAMA_BASE_URL.String()+"/chat", bytes.NewReader(requestBody))
			// send request
			response, err := http.DefaultClient.Do(request)
			if err != nil {
				err_msg, _ := json.Marshal(err.Error())
				_, _ = fmt.Fprintf(res, "event: error\ndata: { \"error\": %s }\n\n", string(err_msg))
				res.Flush()
				return err
			}
			defer func() {
				_ = response.Body.Close()
			}()

			clear(tokens)       // free used elements
			tokens = tokens[:0] // reset length, keep capacity

			// leave existing code from the line
			// `reader := bufio.NewReader(response.Body)`
			// to the close brace before logOk(c) here

		} // for sendNewPrompt

Whereas previously in llmchat() we simply sent each NDJSON line as an SSE data line after appending it to the tokens array, we must now check whether there’s a tool call and send the SSE data line only if there is no tool call. Replace the following lines:

            // send NDJSON line as SSE line
            _, _ = fmt.Fprintf(res, "data: %s\n\n", line)
            res.Flush()

with:

						// is there a tool call?
						if len(ollamaResponse.Message.ToolCalls) != 0 {
							// handle tool calls
							
						} else {
							// no tool call, send NDJSON line as SSE data line
							_, _ = fmt.Fprintf(res, "data: %s\n\n", line)
							res.Flush()
						}

In handling tool calls, we first marshal the tool call back into a JSON string to be saved in the database. Replace the comment // handle tool calls with:

							// convert ToolCalls to a JSON string (tool_calls) and save to db
							tool_calls, _ := json.Marshal(ollamaResponse.Message.ToolCalls)

							for _, toolCall := range ollamaResponse.Message.ToolCalls {
								// but assuming one tool call per response
								if toolCall.Function.Name == "" {
									continue // LLM miscalled
								}

								// save full response, including tool call(s), to db,
								// to form part of next prompt's history
								_, err = chatterDB.Exec(background, `INSERT INTO chatts (name, message, id, appid, toolcalls)
									VALUES ('assistant', $1, gen_random_uuid(), $2, $3)`,
									strings.Join(tokens, ""), ollamaRequest.AppID, tool_calls)
								if err != nil {
									err_msg, _ := json.Marshal(err.Error())
									_, _ = fmt.Fprintf(res, "event: error\ndata: { \"error\": %s }\n\n", string(err_msg))
									res.Flush()
								}

								// clear tokens and tool_calls, we already stored them
								clear(tokens)
								tokens = tokens[:0]
								tool_calls = nil

								// make the tool call

							} // for toolCall

We call toolInvoke() with the function call from Ollama and process the result. There are three possible outcomes from the call to toolInvoke():

  1. the tool is resident but the call was unsuccessful and returned an error,
  2. the tool is resident and the call was successful, or
  3. the tool is non-resident.

If the tool call resulted in an error, we store the error as the tool result. We add the tool call and its result to the OllamaRequest message and set the flag (sendNewPrompt) to send the OllamaRequest back to Ollama. We also store both the tool call and its result to the database, to form part of this appID’s context. If the tool call resulted in neither an error nor any returned result, we interpret that as the tool being non-resident on the back end and forward the tool call to the front end as an SSE tool_calls event. Replace the comment // make the tool call with:

								tool_result, tool_err = toolInvoke(toolCall.Function)
								if tool_err != nil {
									// outcome 1: tool resident but had error
									// send error back to LLM, don't report to frontend
									msg := tool_err.Error()
									tool_result = &msg
								}

								if tool_result != nil {
									// outcomes 1 & 2: the tool is resident
									// reuse OllamaMessage to carry the tool result
									// to be sent back to Ollama
									// first append the tool call itself
									ollamaRequest.Messages = append(ollamaRequest.Messages, ollamaResponse.Message)
									// then append the result
									ollamaRequest.Messages = append(ollamaRequest.Messages, OllamaMessage{
										Role:    "tool",
										Content: *tool_result,
									})

									// don't send tools multiple times
									ollamaRequest.Tools = nil
									// loop to send tool result back to Ollama
									sendNewPrompt = true

									// save resident tool call result or error message
									_, err = chatterDB.Exec(background, `INSERT INTO chatts (name, message, id, appid)
										VALUES ('tool', $1, gen_random_uuid(), $2)`,
										wsRegex.ReplaceAllString(*tool_result, " "), ollamaRequest.AppID)
									if err != nil {
										err_msg, _ := json.Marshal(err.Error())
										_, _ = fmt.Fprintf(res, "event: error\ndata: { \"error\": %s }\n\n", string(err_msg))
										res.Flush()
									}
								} else {
									// outcome 3: tool non-resident, forward to
									// front end as 'tool_calls' SSE event
									_, _ = fmt.Fprintf(res, "event: tool_calls\ndata: %s\n\n", line)
									res.Flush()
								}

We keep the rest of the llmchat() code without further changes, and we’re done with handlers.go! Save and exit the file.

main.go

Edit main.go:

server$ vi main.go

Find the global variable router and add these routes right after the route for /llmchat:

    {"POST", "/llmtools/", llmtools},
    {"GET", "/weather/", weather},

We’re done with main.go. Save and exit the file.

Build and test run

To build your server:

server$ go get   # -u  # to upgrade all packages to the latest version
server$ go build

:point_right:Go is a compiled language, like C/C++ and unlike Python, which is an interpreted language. This means you must run go build every time you make changes to your code for the changes to show up in your executable.

To run your server:

server$ sudo ./chatterd
# Hit ^C to end the test

You can test your implementation following the instructions in the Testing llmTools APIs section.


Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin Last updated March 8th, 2026