Tutorial: llmChat Backend
Cover Page
DUE Wed, 10/1, 2 pm
You will need the HTTPS infrastructure from the first tutorial and the PostgreSQL database set up in the second tutorial. If you don’t have those set up, please follow the links and complete them first. You will also need to install your self-signed certificate on your front-end platform following the instructions in the first tutorial for Android or iOS.
Install updates
Remember to install any updates available for your Ubuntu back end. When you ssh to your back-end server, if the N in the following notice is not 0,
N updates can be applied immediately.
run the following:
server$ sudo apt update
server$ sudo apt upgrade
Failure to update your packages could cause your solution to stop working entirely, with no warning that the outdated packages are the cause, and it also leaves you vulnerable to security exploits.
Any time you see *** System restart required *** when you ssh to your server, immediately run:
server$ sync
server$ sudo reboot
Your ssh session will be terminated on the server side. Wait a few minutes for the system to reboot before you ssh to your server again.
appID
Since you will be sharing PostgreSQL database storage with the rest of the class,
we need to identify your entries so that we forward only your entries to Ollama
during your “conversation”. We will “repurpose” chatterdb to serve llmChat.
We store the model name as the username in the chatts table and add a new column, appID, of type varchar(155) (a consolidated psql session is sketched after the following steps):
- Log into an interactive PostgreSQL (psql) session as user postgres
- Connect to the chatterdb database
- Clear your chatts table of all old chatts, using the SQL command: TRUNCATE TABLE chatts;
- Add a new column to chatts to store your appID string: ALTER TABLE chatts ADD COLUMN appID VARCHAR(155);
- To verify that you’ve added the new column to the chatts table, enter: SELECT * FROM chatts; Make sure you get back the following result (though perhaps more stretched out):
   username | message | id | time | appID
  ----------+---------+----+------+-------
  (0 rows)
  If so, congratulations! You have successfully added the new column!
- Exit PostgreSQL
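Putting the steps together, the whole session looks roughly like this (assuming the same setup as the PostgreSQL tutorial: operating-system user postgres and database chatterdb; your psql invocation may differ):
server$ sudo -u postgres psql
postgres=# \c chatterdb
chatterdb=# TRUNCATE TABLE chatts;
chatterdb=# ALTER TABLE chatts ADD COLUMN appID VARCHAR(155);
chatterdb=# SELECT * FROM chatts;
chatterdb=# \q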
Rename column
If you want to change a column name, for example, from “appName” to “appID”,
you first remove the “appName” column from your chatts table using:
ALTER TABLE chatts DROP COLUMN appName;
and then add a new “appID” column as described above.
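Renaming “appName” to “appID” thus amounts to the following two statements (both already shown above):
ALTER TABLE chatts DROP COLUMN appName;
ALTER TABLE chatts ADD COLUMN appID VARCHAR(155);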
chatterd
The Chatter APIs in this course do not interfere with each other and can all be left in the same running chatterd. Nor do they need each other to run; though, for expediency, we may build a new API handler off an existing one: we will ask you to copy and rename an old handler. It may thus be easier to simply continue building on the same backend code base, keeping all the previous handlers around.
Please click the relevant link to set up the chatterd server with the llmchat API using the web framework of your choice:
| Go | Python | Rust | TypeScript |
and return here to resume the server setup once you have your web framework set up.
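If you are curious what the handler boils down to before you jump to your framework-specific page, here is a minimal sketch, assuming the Python track with FastAPI, httpx, and psycopg purely for illustration (the other tracks follow the same shape). The database credentials are placeholders, the INSERT assumes the id and time columns have defaults as in the chatter tutorial, and the sketch omits storing the user’s prompt and reconstructing conversation history from entries matching your appID, which your framework-specific instructions cover:

import json

import httpx                 # async HTTP client, used here to stream from Ollama
import psycopg               # PostgreSQL driver (psycopg 3)
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()
OLLAMA_URL = "http://localhost:11434/api/chat"   # Ollama's local chat endpoint
DB_CONNINFO = "dbname=chatterdb host=localhost"  # placeholder: use your own credentials

@app.post("/llmchat")
async def llmchat(request: Request):
    req = await request.json()   # { "appID": ..., "model": ..., "messages": [...], "stream": true }

    async def sse_stream():
        assistant_response = ""
        try:
            # Forward the request to Ollama and relay each streamed chunk as an SSE data line.
            async with httpx.AsyncClient(timeout=None) as client:
                async with client.stream("POST", OLLAMA_URL, json={
                    "model": req["model"],
                    "messages": req["messages"],
                    "stream": True,
                }) as ollama:
                    async for line in ollama.aiter_lines():
                        if not line:
                            continue
                        chunk = json.loads(line)
                        assistant_response += chunk.get("message", {}).get("content", "")
                        yield f"data: {json.dumps(chunk)}\n\n"
            # Record the assistant's full reply, tagged with the caller's appID.
            async with await psycopg.AsyncConnection.connect(DB_CONNINFO) as conn:
                await conn.execute(
                    "INSERT INTO chatts (username, message, appid) VALUES (%s, %s, %s)",
                    ("assistant", assistant_response, req["appID"]))
        except Exception as err:
            # On failure, emit an SSE error event along with its data line.
            yield f"event: error\ndata: {err}\n\n"

    return StreamingResponse(sse_stream(), media_type="text/event-stream")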
Testing llmchat API and SSE error handling
As usual, you can test your llmchat API using either a graphical tool such as Postman or a CLI tool such as curl. In Postman, point your POST request to
https://YOUR_SERVER_IP/llmchat and provide the Body > raw JSON content shown in the example below. The same example using curl:
laptop$ curl -X POST \
        -H "Content-Type: application/json" -H "Accept: event/stream" \
        -d '{ "appID": "edu.umich.reactive.postman.llmChat", "model": "tinyllama", "messages": [ { "role": "user", "content": "I live in Ann Arbor" } ], "stream": true }' \
        https://YOUR_SERVER_IP/llmchat
To test llmChat’s ability to provide context, ask a follow-up question about what you said in the first example above. If you said, “I live in Ann Arbor,” you could ask, for example, “In what state do I live?” Even tinyllama should be able to reason that you live in Michigan.
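Assuming, as in the first example, that only the new user message needs to be sent (with your backend reconstructing the rest of the conversation from chatts entries matching your appID), the follow-up question could look like this:
laptop$ curl -X POST \
        -H "Content-Type: application/json" -H "Accept: event/stream" \
        -d '{ "appID": "edu.umich.reactive.postman.llmChat", "model": "tinyllama", "messages": [ { "role": "user", "content": "In what state do I live?" } ], "stream": true }' \
        https://YOUR_SERVER_IP/llmchat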
Testing SSE error handling
On your backend, in the handlers source file, in the llmchat() function, search for the
statement to insert the assistant_response into the PostgreSQL database. In the SQL statement
to INSERT the assistant_response, replace 'assistant' with NULL as the username.
(Rebuild and) Restart your backend.
With your frontend set up to communicate with your modified backend at YOUR_SERVER_IP,
not mada.eecs.umich.edu, submit a prompt to Ollama. The SQL operation to insert the assistant’s
completion/reply should now fail, and an SSE error event, along with its data line, should
be generated and sent to your front end (an example of what this event looks like on the wire is sketched after the list below). On the frontend, two things should happen:
- an alert dialog box pops up with the error message, and
- the error message, prepended with **llmChat Error**, is inserted into the assistant’s text bubble.
If you see both, congratulations! Your SSE event generation and handling are working!
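For reference, the error event your backend emits should look roughly like this on the wire; the exact text of the data line depends on your web framework and database driver:
event: error
data: <the error message, e.g., a not-null constraint violation on username>
(followed by a blank line, which terminates the event)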
WARNING: You will not get full credit if your front end is not set up to work with your backend!
Every time you rebuild your Go or Rust server, or make changes to either of your JavaScript or Python files, you need to restart chatterd:
server$ sudo systemctl restart chatterd
Leave your chatterd running until you have received your tutorial grade.
TIP:
server$ sudo systemctl status chatterd
is your BEST FRIEND in debugging your server. If you get an HTTP error code 500 Internal Server Error, or if you just don’t know whether your HTTP request has made it to the server, the first thing to do is run sudo systemctl status chatterd on your server and study its output, including any error messages and debug printouts from your server.
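If the status output doesn’t show enough of the log, the systemd journal keeps the full output of your service (this assumes chatterd runs as a systemd service, as set up in the earlier tutorials), for example:
server$ sudo journalctl -u chatterd -n 100 --no-pager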
That’s all we need to do to prepare the back end. Before you return to work on your front end, wrap up your work here by submitting your files to GitHub.
Submitting your back end
We will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.
Navigate to your reactive folder:
server$ cd ~/reactive/
Commit changes to the local repo:
server$ git commit -am "llmchat back end"
and push your chatterd folder to the remote GitHub repo:
server$ git push
If git push fails due to changes made to the remote repo by your tutorial partner, you must run git pull first. You may then have to resolve any conflicts before you can git push again, roughly as sketched below.
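A rough sketch of the recovery sequence (the commit message is just an example):
server$ git pull
(edit any files git reports as conflicted, then)
server$ git add .
server$ git commit -m "merge partner's changes"
server$ git push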
Go to the GitHub website to confirm that your back-end files have been uploaded to your GitHub repo.
References
- Stream updates with server-sent events
- Server-Sent Events: A Comprehensive Guide
- How do server-sent events actually work?
- Server-sent events (the spec)
- “Connection: Keep-Alive” prohibited in HTTP/2 and HTTP/3
- Understanding HTTP Streaming: A Practical Guide
- From JSON to Streaming: Building an OpenAI-Compatible Proxy for Ollama with .NET
- MCP: Streamable HTTP
| Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, and Sugih Jamin | Last updated October 21st, 2025 |