TypeScript with Express

handlers.ts

Change to your chatterd directory and edit handlers.ts:

server$ cd ~/reactive/chatterd
server$ vi handlers.ts

Define these three interfaces to help llmchat() deserialize JSON received from clients. Add these lines right below the import block:

interface OllamaMessage {
    role: string;
    content: string;
}

interface OllamaRequest {
    appID: string;
    model: string;
    messages: OllamaMessage[];
    stream: boolean;
}

interface OllamaResponse {
    model: string;
    created_at: string;
    message: OllamaMessage;
}

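For reference, a client request that deserializes into these interfaces could look like the following sketch; the appID and model values here are made up for illustration:

const sample_request: OllamaRequest = {
    appID: "test-app",      // hypothetical client app ID
    model: "llama3.2",      // whichever model your Ollama hosts
    messages: [
        { role: "user", content: "Hello, Ollama!" }
    ],
    stream: true
}
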
To store the client’s conversation context/history with Ollama in the PostgreSQL database, llmchat() first confirms that the client has sent an appID that can be used to tag its entries in the database. Here’s the signature of llmchat() along with its check for the client’s appID:

export async function llmchat(req: Request, res: Response) {
    let ollama_request: OllamaRequest;

    try {
        ollama_request = req.body
    } catch (error) {
        logClientErr(res, HttpStatus.UNPROCESSABLE_ENTITY, error.toString())
        return
    }

    if (!ollama_request.appID || ollama_request.appID.length == 0) {
        logClientErr(res, HttpStatus.UNPROCESSABLE_ENTITY, `Invalid appID: ${ollama_request.appID}`)
        return
    }

    // insert into DB

}

Once we confirm that the client has an appID, we insert its current prompt into the database, adding to its conversation history with Ollama. Replace the comment // insert into DB with the following code:

    try {
        // insert user messages into chatts table
        for (const msg of ollama_request.messages) {
            await chatterDB`INSERT INTO chatts (username, message, id, appid) VALUES (${msg.role}, ${msg.content.replaceAll('\n', ' ').replaceAll('  ', ' ').trim()}, gen_random_uuid(), ${ollama_request.appID})`;
        }
    } catch (error) {
        logServerErr(res, `${error as PostgresError}`)
        return
    }

    // retrieve history

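As an aside, the whitespace cleanup chained onto each message behaves as below (the input string is a made-up sample):

// hypothetical input illustrating the cleanup chain above
const raw = " hello\nworld  again "
const clean = raw.replaceAll('\n', ' ').replaceAll('  ', ' ').trim()
// clean === "hello world again"
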
Then we retrieve the client’s conversation history, including the just-inserted current prompt as the last entry, and put it in the JSON format expected by Ollama’s chat API. Replace // retrieve history with:

    // retrieve full chat history
    try {
        ollama_request.messages = (await chatterDB`SELECT username, message FROM chatts WHERE appid = ${ollama_request.appID} ORDER BY time ASC`)
            .map(row => ({
                role: row.username,
                content: row.message
            }))
    } catch (error) {
        logServerErr(res, `${error as PostgresError}`)
        return
    }

    // send request to Ollama

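After the query, ollama_request.messages holds the whole conversation in the role/content shape Ollama’s chat API expects. For example (made-up sample values):

// ollama_request.messages after the mapping:
// [
//   { role: 'user',      content: 'What is SSE?' },
//   { role: 'assistant', content: 'Server-Sent Events let a server stream events over HTTP.' },
//   { role: 'user',      content: 'the current prompt' }
// ]
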
We declare an accumulator variable, full_response, to assemble the reply tokens Ollama streams back, then send the request constructed above to Ollama. Replace // send request to Ollama with:

    let response
    let full_response = ''

    try {
        response = await fetch(OLLAMA_BASE_URL+"/chat", {
            method: req.method,
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(ollama_request),
        })

        // prepare response header

    } catch (error) { // fetch to Ollama failed
        logServerErr(res, error.toString())
        return
    }

    // SSE conversion and accumulate completion

As we saw in the first tutorial, llmPrompt, Ollama streams its replies as an NDJSON stream. We will transform this NDJSON stream into a stream of SSE events (more details later) to be returned to the client. We next prepare a response header to be used to send each SSE event to the client. Replace // prepare response header with:

        res.writeHead(HttpStatus.OK, {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
        }).flushHeaders()

For each incoming NDJSON element, we convert it into an OllamaResponse type. If the conversion is unsuccessful, we return an SSE error event and move on to the next NDJSON line. Otherwise, we append the content in the OllamaResponse to the full_response variable. Then we send the full NDJSON line as an SSE data line of the default, implicit message event type. Replace // SSE conversion and accumulate completion above with:

    try {
        for await (const chunk of response.body) {
            const line = Buffer.from(chunk).toString().replace(/\n$/, '')

            try {
                // deserialize each line into OllamaResponse
                const ollama_response: OllamaResponse = JSON.parse(line);
                const content = ollama_response.message.content

                // append response token to full assistant message
                full_response += content;
                
                // send NDJSON line as SSE data line
                res.write(`data: ${line}\n\n`); // SSE
            } catch (error) { // didn't receive an OllamaResponse; likely got an error message
                res.write(`event: error\ndata: { "error": ${line.replaceAll("\\\"", "'")} }\n\n`);
            }
        }
    } catch (error) { // error reading Ollama's response stream
        res.write(`event: error\ndata: { "error": ${JSON.stringify((error as Error).toString())} }\n\n`)
        res.end()
        return
    }

    // insert full response into database

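To make the conversion concrete: an NDJSON line streamed back by Ollama, such as the made-up sample below, is forwarded unmodified as the payload of a data: line, with a blank line terminating the SSE event:

{"model":"llama3.2","created_at":"2025-08-10T00:00:00Z","message":{"role":"assistant","content":"Hi"},"done":false}

becomes

data: {"model":"llama3.2","created_at":"2025-08-10T00:00:00Z","message":{"role":"assistant","content":"Hi"},"done":false}
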
When we reach the end of the NDJSON stream, we insert the full Ollama response into the PostgreSQL database as the assistant’s reply. Replace // insert full response into database with:

    try {
        const assistant_response = full_response.replace(/\s+/g, ' ')
        await chatterDB`INSERT INTO chatts (username, message, id, appid) VALUES ('assistant', ${assistant_response}, gen_random_uuid(), ${ollama_request.appID})`
        // replace 'assistant' with NULL to test error event
    } catch (error) {
        res.write(`event: error\ndata: { "error": ${JSON.stringify((error as PostgresError).toString())} }\n\n`)
    }
    res.end()

If we encountered any error in the insertion above, we send an SSE error event to the client.
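
On the wire, that error event would reach the client as something like the following (the error text here is a made-up sample):

event: error
data: { "error": "null value in column \"username\" violates not-null constraint" }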

We’re done with handlers.ts. Save and exit the file.

main.ts

Edit the file main.ts:

server$ vi main.ts

Find the initialization of app and add this route right after the route for /llmprompt/:

      .post('/llmchat/', handlers.llmchat)

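With the route added, the app initialization should look roughly like this sketch (the surrounding lines are assumed from the earlier tutorial and may differ in your file; only the /llmchat/ line is new):

const app = express()
    .use(express.json())
    .post('/llmprompt/', handlers.llmprompt)
    .post('/llmchat/', handlers.llmchat)
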
We’re done with main.ts. Save and exit the file.

Build and Test run

:point_right:TypeScript is a compiled language, like C/C++ and unlike JavaScript and Python, which are interpreted languages. This means you must run npx tsc every time you make changes to your code for the changes to show up when you run node.

To build your server, transpile TypeScript into JavaScript:

server$ npx tsc

To run your server:

server$ sudo node main.js
# Hit ^C to end the test

The cover backend spec provides instructions on Testing llmChat API and SSE error handling.
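
For a quick manual check, something along these lines exercises the endpoint (substitute your server’s URL, an appID string of your choosing, and a model your Ollama hosts; curl’s -N flag disables output buffering so SSE events print as they arrive, and you may need -k with a self-signed certificate):

server$ curl -N -X POST https://YOUR_SERVER/llmchat/ \
         -H 'Content-Type: application/json' \
         -d '{"appID": "test-app", "model": "llama3.2", "messages": [{"role": "user", "content": "Hello!"}], "stream": true}'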

Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, and Sugih Jamin. Last updated August 10th, 2025.