Project 1: llmDraft: LLM-assist Chatter
DUE Wed, 09/24, 2 pm
This project may be completed individually or in teams of at most 2. You can partner differently for each project.
Treat your messages sent to chatterd and Ollama as public utterances
with no reasonable expectation of privacy: they are recorded, shared
with everyone using chatterd, and used to carry out contextual
interactions with Ollama.
Objectives
In addition to the objectives listed in the llmPrompt and
Chatter tutorials, this project has the following objectives for
the frontend:
- Practice and apply the objectives learned in the two tutorials
- Learn how to implement the long-press gesture
LLM-assist chatting
We will build an app called Chatter and its corresponding backend server
called chatterd. Users use the Chatter app to post text messages,
called chatts, to the backend server, which stores each chatt in a
database and shares all posted chatts with everyone else accessing the
backend (be mindful of what you post!). The backend server, aside from a
database to store chatts, also has access to an LLM, which can assist
users in writing a chatt.
The LLM can assist in two ways:
- the user can draft a chatt and ask the LLM for a rewrite, and
- the user can request a draft reply to a chatt posted by someone else.
We call the first the rewrite feature and the second the reply
feature. When building Chatter, you can hard-code a “personality”
for the LLM to assume in performing both the rewrite and reply features.
For example, you can instruct the LLM to assume the personality
of a poet when writing a draft.
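One way to hard-code a personality is to prepend it to every prompt sent to the backend. The sketch below illustrates the idea only; the actual llmprompt request format is defined in the llmPrompt tutorial, and the `PERSONALITY` string and `build_prompt` helper are hypothetical names, not part of the spec.

```python
# Hypothetical sketch: prefix a hard-coded personality onto every
# rewrite or reply prompt before sending it to chatterd/Ollama.

PERSONALITY = "You are a poet. Answer in free verse."

def build_prompt(task: str, text: str) -> str:
    """Compose the full prompt for a rewrite or reply request."""
    if task == "rewrite":
        instruction = "Rewrite the following message:"
    else:  # task == "reply"
        instruction = "Draft a reply to the following message:"
    return f"{PERSONALITY}\n{instruction}\n{text}"
```

Because the personality rides along with every request, both features stay stateless on the frontend: no conversation history needs to be kept.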
Expected behavior
Displaying posted chatts, requesting rewrite of a draft chatt, requesting draft reply of another user’s chatt:
DISCLAIMER: the video demos show one aspect of the app’s behavior. They are not a substitute for the spec. If there are any discrepancies between the demo and this spec, please follow the spec. The spec is the single source of truth. If the spec is ambiguous, please consult the teaching staff for clarification.
Features and requirements
Your Chatter app must provide the following features and satisfy the
following requirements, including those in any applicable “Implementation
guidelines” documents, to receive full credit.
Front-end UI
As can be seen in the video above, the Chatter app consists of a single
screen with the following UI elements:
- a title bar showing the title Chatter,
- a timeline of posted chatts, with the current user’s chatts shown on the right and those of other user(s) shown on the left,
  - each chatt is shown in a message “bubble” with its timestamp at the bottom of the “bubble”, in a smaller font and a different color from the message’s color,
  - each chatt on the left is shown with the posting user’s name above the “bubble”, again in a smaller font and different color,
- these UI elements at the bottom of the screen:
  - a text box in the middle,
  - an “AI” button to the left of the text box showing an “AI Star” icon; this button is enabled only when the text box is not empty and there is no networking session in progress,
  - a “Send” button to the right of the text box showing a “paper plane” icon; this button is also enabled only when the text box is not empty and no networking session is in progress.
When a button is “disabled”, it is grayed out and tapping on it has no effect.
While there is a networking session in progress, that is, while waiting
for Ollama’s response on a rewrite or reply request, or while waiting
for the sending of a chatt to complete, the “Send” button’s icon changes
from showing a “paper plane” to showing an animated “loading” circle.
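The enabling rule for both buttons can be captured in a single predicate. The sketch below is illustrative only; in your app this state would live in your SwiftUI or Jetpack Compose view model, and the names `draft_text` and `request_in_flight` are hypothetical.

```python
# Hypothetical sketch of the button-enabling rule: both the "AI" and
# "Send" buttons are enabled only when there is draft text in the text
# box AND no networking session is in progress.

def buttons_enabled(draft_text: str, request_in_flight: bool) -> bool:
    return draft_text != "" and not request_in_flight
```

Driving both buttons off the same predicate guarantees they gray out together during a rewrite, reply, or post, which prevents overlapping requests.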
UI Design
One can easily spend a whole weekend (or longer) getting the UI “just right.”
Remember: we won’t be grading you on how beautiful your UI looks nor how precisely it matches the one shown in the video demo. You’re free to design your UI differently, so long as all indicated UI elements are fully visible on the screen, non-overlapping, and functioning as specified.
Front-end UX
As demonstrated in the video above:
- Upon loading, the Chatter frontend retrieves all posted chatts from the chatterd backend and displays each according to the UI description above.
- The user can enter text in the text box and click the “Send” button to post a chatt to the backend.
- Instead of clicking the “Send” button after entering text, the user can click the “AI” button to ask for a rewrite suggestion from Ollama, through the chatterd backend. The rewrite suggestion is displayed in the text box, replacing the original draft. Once the rewrite suggestion is fully displayed, the user can edit the text box, click the “Send” button to post the chatt currently in the text box, or click the “AI” button again for another rewrite.
- The user can long-press on another user’s posted chatt, displayed on the left of the timeline, to request a draft reply from Ollama, through the chatterd backend. The reply draft will be shown in the text box at the bottom of the screen. Once the reply draft is fully displayed, the user can edit the text box, click the “Send” button to post the chatt currently in the text box, or click the “AI” button for a rewrite.
- Ollama’s response will be streamed from the backend. As each piece of the stream arrives, it must be immediately displayed in the text box, appended to what’s already there, instead of waiting for the full reply to complete.
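The incremental-display requirement above boils down to appending each chunk to the text box the moment it arrives. The sketch below simulates this with a plain list of chunks; in the app, the chunks would come from the HTTP/2 response stream delivered by chatterd, and `display_stream` is a hypothetical name.

```python
# Hypothetical sketch of streamed display: append each chunk to the
# text box as it arrives, recording every intermediate UI state.

def display_stream(chunks):
    """Return the text-box contents after each chunk (one UI update per chunk)."""
    text_box, snapshots = "", []
    for chunk in chunks:
        text_box += chunk          # update immediately, don't wait for the end
        snapshots.append(text_box) # in the app: refresh the bound text state
    return snapshots
```

For example, `display_stream(["Roses ", "are ", "red"])` produces three successive text-box states, ending with the full reply, which is exactly the behavior the grader looks for: partial text visible before the stream completes.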
Back-end infrastructures
The chatterd backend accepts HTTP/2 connections from the Chatter frontend,
secured with HTTPS. The backend is set up with a self-signed certificate to
provide HTTPS service, and with a PostgreSQL database to store posted chatts.
It is further set up to forward user prompts to an Ollama server and to
return Ollama’s response to the same user. Prompts to Ollama and the
corresponding responses are not stored in the PostgreSQL database unless
users subsequently post Ollama’s responses as their chatts.
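The persistence rule above can be summarized in a toy sketch: llmprompt traffic is relayed but never stored, while posted chatts are saved. This is illustrative only, not the tutorial code; the names `chatt_db`, `handle_postchatt`, and `handle_llmprompt` are hypothetical, and the real backend writes to PostgreSQL rather than a list.

```python
# Hypothetical sketch of the persistence rule: only posted chatts are
# saved; prompts and LLM responses are relayed without being stored.

chatt_db = []  # stand-in for the PostgreSQL chatts table

def handle_postchatt(username, message):
    chatt_db.append((username, message))   # chatts are persisted

def handle_llmprompt(prompt, llm):
    return llm(prompt)                     # relayed to Ollama, never stored
```

A rewrite or reply thus only enters the database if the user subsequently clicks “Send” on the suggested text, which turns it into an ordinary chatt.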
APIs
The chatterd backend has three API endpoint URLs:
- postchatt and getchatts, as documented in the Chatter specification, and
- llmprompt, as documented in the llmPrompt specification.
Implementation guidelines
Backend
The chatterd backend is fully covered in the two tutorial specifications:
- llmPrompt: for setting up an AWS/GCP instance, installing an Ollama server, creating a self-signed certificate, and setting up an HTTPS server. It also provides the implementation for the llmprompt API. You will also need to install your self-signed certificate on your front-end platform, following the instructions in the first tutorial for Android or iOS.
- Chatter: for setting up a PostgreSQL database. It provides the implementation for both the postchatt and getchatts APIs.
Completing these two tutorials completes the backend. The backend you built
cumulatively across these two tutorials can be used as is in this project,
with no further work required. Any of the alternative backend stacks provided
will work. The host mada.eecs.umich.edu will not be available for this
programming assignment beyond the later of the two tutorial deadlines. To
receive full credit, your frontend MUST work with your own backend.
Frontend
The UI and UX for posting chatts on the Chatter frontend, along with the
front-end interactions with the postchatt and getchatts APIs, are all
fully provided in the Chatter tutorial.
The rewrite and reply functionalities and UI/UX are not covered in
the two tutorials. Front-end interaction with the llmprompt API is shown
in the llmPrompt tutorial; your remaining job in this project is
to adapt it to provide the rewrite and reply functionalities, along with
the accompanying UI/UX. Step-by-step guidelines are provided below (follow
the link to your platform of choice):
WARNING: You will not get full credit if your front end is not set up to work with your backend!
Every time you rebuild your Go or Rust server or make changes to either of your JavaScript or Python files, you need to restart chatterd:
server$ sudo systemctl restart chatterd
Leave your chatterd running until you have received your project grade.
TIP:
server$ sudo systemctl status chatterd
is your BEST FRIEND in debugging your server. If you get an HTTP error code 500 Internal Server Error, or if you just don’t know whether your HTTP request has made it to the server, the first thing to do is run sudo systemctl status chatterd on your server and study its output, including any error messages and debug printouts from your server.
| Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, Sugih Jamin | Last updated: August 24th, 2025 |