Tutorial: llmChat Compose
The front-end work is mostly in writing two new network functions, llmChat() and llmPrep().
We will build on the code base from the first tutorial, llmPrompt.
Expected behavior
Carrying a “conversation” with an LLM:
DISCLAIMER: the video demo shows you one aspect of the app’s behavior. It is not a substitute for the spec. If there are any discrepancies between the demo and this spec, please follow the spec. The spec is the single source of truth. If the spec is ambiguous, please consult the teaching staff for clarification.
Preparing your GitHub repo
In the following, replace /YOUR:TUTORIALS/ with the name of your tutorials folder.
- On your laptop, navigate to /YOUR:TUTORIALS/
- Unzip your llmprompt.zip file. Double check that you still have a copy of the zipped file for future reference!
- Rename your newly unzipped llmprompt folder llmchat
- Remove your llmchat's .gradle directory by running in a shell window:
  laptop$ cd /YOUR:TUTORIALS/llmchat/composeChatter
  laptop$ rm -rf .gradle
- Push your local /YOUR:TUTORIALS/ repo to GitHub and make sure there're no git issues:
  git push
  Or, using GitHub Desktop:
  - Open GitHub Desktop and click on Current Repository on the top left of the interface
  - Click on your assignment GitHub repo
  - Add Summary to your changes and click Commit to main
  - If you have pushed other changes to your Git repo, click Pull Origin to synch up the clone on your laptop
  - Finally click on Push Origin to push changes to GitHub
Go to the GitHub website to confirm that your folders follow this structure outline:
reactive
|-- chatterd
|-- chatterd.crt
|-- llmchat
    |-- composeChatter
        |-- app
        |-- gradle
        # and other files or folders
In addition, your /YOUR:TUTORIALS/ folder on your laptop should contain the llmprompt.zip and chatter.zip files.
If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.
appID
Since you will be sharing PostgreSQL database storage with the rest of the class,
we need to identify your entries so that we forward only your entries to Ollama
during your “conversation.” In your MainActivity.kt file, add this appID
property to your ChattViewModel:
val appID = app.applicationContext.packageName
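For orientation, here is a minimal sketch of where this property might sit. It assumes your ChattViewModel from llmPrompt is an AndroidViewModel that receives the Application as a constructor parameter (named app here); your actual class will have more members than shown:

import android.app.Application
import androidx.lifecycle.AndroidViewModel

class ChattViewModel(private val app: Application) : AndroidViewModel(app) {
    // identifies your entries in the shared PostgreSQL storage
    val appID = app.applicationContext.packageName
    // ... your existing properties (message, errMsg, onTrailingEnd, etc.) remain unchanged ...
}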
llmChat()
In ChattStore.kt, first add the following types outside your ChattStore
class:
enum class SseEventType { Error, Message }
@Serializable
data class OllamaMessage(val role: String, val content: String?)
@Serializable
data class OllamaRequest(
val appID: String?,
val model: String?,
val messages: List<OllamaMessage>,
val stream: Boolean
)
@Serializable
@JsonIgnoreUnknownKeys
data class OllamaResponse(
val model: String,
val message: OllamaMessage,
)
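These types rely on kotlinx.serialization. The annotations typically need imports along the lines below (see also the Appendix: imports); note that, depending on your kotlinx.serialization version, @JsonIgnoreUnknownKeys may also require opting in to the experimental serialization API:

import kotlinx.serialization.Serializable
import kotlinx.serialization.json.JsonIgnoreUnknownKeys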
Rename your llmPrompt() function and give it the following signature:
suspend fun llmChat(appID: String?, chatt: Chatt, errMsg: MutableState<String>) {
Replace llmprompt in apiUrl with llmchat.
Previously, we constructed the ollamaRequest Kotlin map and serialized it “manually”
into a request body. In this tutorial, we’re going to rely on Kotlin serialization to
do the serialization for us. Replace this block of code:
val ollamaRequest = mapOf(
"model" to chatt.name,
"prompt" to chatt.message?.value,
"stream" to true,
)
val requestBody = JSONObject(ollamaRequest).toString()
.toRequestBody("application/json; charset=utf-8".toMediaType())
with:
val ollamaRequest = OllamaRequest(
appID = appID,
model = chatt.name,
messages = listOf(OllamaMessage("user", chatt.message?.value)),
stream = true
)
val requestBody = Json.encodeToString(ollamaRequest)
.toRequestBody("application/json; charset=utf-8".toMediaType())
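For illustration only, with made-up values (a hypothetical package name, model, and prompt), the encoded request body comes out roughly as follows:

// illustration: the field values below are made up
val sample = OllamaRequest(
    appID = "edu.umich.youruniqname.llmchat",
    model = "qwen3:0.6b",
    messages = listOf(OllamaMessage("user", "Hello there")),
    stream = true
)
// Json.encodeToString(sample) yields (roughly):
// {"appID":"edu.umich.youruniqname.llmchat","model":"qwen3:0.6b",
//  "messages":[{"role":"user","content":"Hello there"}],"stream":true}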
To allow your app to accept an SSE stream, replace the following line in the request builder:
.addHeader("Accept", "application/*")
with:
.addHeader("Accept", "text/event-stream")
Parsing SSE stream
We now parse the incoming SSE stream. Continuing in llmChat(), scroll all the way down
to the // streaming NDJSON comment and replace it with the following two lines:
// streaming SSE
var sseEvent = SseEventType.Message
Subsequently, inside the while (!stream.exhausted()) code block, replace the whole
try-catch block with:
if (line.isEmpty()) {
// new SSE event, default to Message
// SSE events are delimited by "\n\n"
if (sseEvent == SseEventType.Error) {
sseEvent = SseEventType.Message
resChatt.message?.value += "\n\n**llmChat Error**: ${errMsg.value}\n\n"
}
continue
}
// parse SSE line
An empty line (caused by two consecutive newlines, "\n\n") indicates the
end of an event block. When an empty line is detected, if we are in an Error
event block, as set in the next block of code, we report the error on the
front end’s timeline (we will also pop up an alert dialog box with the error
message later). Then we reset the event to the default Message event.
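For reference, the raw stream might look roughly like the following (payloads are made up): an error event block, a blank line, and then a block with no event label, which therefore defaults to Message:

event: error
data: {"model":"qwen3:0.6b","message":{"role":"assistant","content":"<error text>"}}

data: {"model":"qwen3:0.6b","message":{"role":"assistant","content":"<next token>"}}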
If the next line starts with the text event, we’re starting a new event block; otherwise,
it’s a data line and we handle (save) it depending on the event it’s associated
with. Recall that, left unspecified, Message is the default event. Replace the comment
// parse SSE line with the following:
val parts = line.split(":", limit = 2)
if (parts[0].startsWith("event")) {
val event = parts[1].trim()
if (event == "error") {
sseEvent = SseEventType.Error
} else if (!event.isEmpty() && event != "message") {
// we only support "error" event,
Log.d("LLMCHAT", "Unknown event: '${parts[1]}'")
}
} else if (parts[0].startsWith("data")) {
// not an event line, must be data line;
// multiple data lines can belong to the same event
try {
val ollamaResponse = Json.decodeFromString<OllamaResponse>(parts[1])
if (sseEvent == SseEventType.Error) {
errMsg.value += ollamaResponse.message.content
} else {
resChatt.message?.value += ollamaResponse.message.content
}
} catch (e: IllegalArgumentException) {
errMsg.value += "${e.localizedMessage}\n$apiUrl\n${parts[1]}"
}
}
Be sure to retain the enclosing catch clause below the while loop.
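As a concrete example (payload made up), a data line splits as follows; the leading space retained in parts[1] is harmless because Json.decodeFromString() skips leading whitespace:

// line:     data: {"model":"qwen3:0.6b","message":{"role":"assistant","content":"Hi"}}
// parts[0]: "data"
// parts[1]: " {"model":"qwen3:0.6b","message":{"role":"assistant","content":"Hi"}}"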
SubmitButton
Finally, replace your call to llmPrompt() in SubmitButton() in the
file MainView.kt with:
llmChat(vm.appID, Chatt(name = vm.onTrailingEnd,
message = mutableStateOf(vm.message.text.toString()),
timestamp = Instant.now().toString()
), vm.errMsg)
and remove the llmPrompt() import from the top of the file.
TODO: llmPrep()
To give instructions to the LLM, we can simply prepend the instructions to a user prompt. With Ollama’s
chat API, we can alternatively provide such instructions to the LLM as "system" prompts. A messages
array element carrying such an instruction will have its "role" set to "system". In the back-end spec,
we will create a new API called /llmprep that allows us to send "system" prompts to our chatterd
back end. To use this new API, create a new ChattStore method with the following signature:
suspend fun llmPrep(appID: String, chatt: Chatt, errMsg: MutableState<String>) { }
As with the llmChat() method, post the provided appID and chatt as an OllamaRequest
to chatterd. However, instead of "user", set the "role" in the OllamaMessage to "system", and
put the instructions stored in the chatt’s message into the corresponding "content" property.
We are not expecting any response stream, so set the "stream" field to false.
Target your apiUrl at the URL for the /llmprep API.
We’re actually not expecting any specific response from the post to /llmprep, so we can leave the Accept
HTTP header field at its default value when building the request.
Finally, post the request and process the returning HTTP response. See the postChatt() method
from the Chatter tutorial if you need help with these.
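To make the expectations concrete, here is one possible sketch of llmPrep(), not a required implementation. It assumes the same ChattStore members used by llmChat() and postChatt() in the earlier tutorials (the OkHttp client, the serverUrl property, and kotlinx.serialization’s Json), plus the coroutine and OkHttp imports listed in the Appendix:

suspend fun llmPrep(appID: String, chatt: Chatt, errMsg: MutableState<String>) {
    // assumption: adjust this to however you build apiUrl for /llmchat
    val apiUrl = "https://$serverUrl/llmprep"

    val ollamaRequest = OllamaRequest(
        appID = appID,
        model = chatt.name,
        messages = listOf(OllamaMessage("system", chatt.message?.value)), // "system" role
        stream = false                                                    // no response stream expected
    )
    val requestBody = Json.encodeToString(ollamaRequest)
        .toRequestBody("application/json; charset=utf-8".toMediaType())

    // default Accept header; we only check the HTTP status of the response
    val request = Request.Builder()
        .url(apiUrl)
        .post(requestBody)
        .build()

    try {
        withContext(Dispatchers.IO) {
            client.newCall(request).execute().use { response ->
                if (!response.isSuccessful) {
                    errMsg.value = "llmPrep: ${response.code} ${response.message}"
                }
            }
        }
    } catch (e: Exception) {
        errMsg.value = "llmPrep: ${e.localizedMessage}\n$apiUrl"
    }
}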
Usage
To use your newly created llmPrep() method, first add the following properties to your
ChattViewModel in MainActivity.kt file:
var appLaunch = true
val sysmsg = app.getString(R.string.sysmsg)
with the corresponding sysmsg string in /res/values/strings.xml, for example:
<string name="sysmsg">Start every assistant reply with GO BLUE!!!</string>
We found that qwen3:0.6b (522 MB storage, 850+ MB RAM) seems to be the smallest Ollama model
that can follow a system prompt. While you’re in the strings.xml file, update your "model" to
qwen3:0.6b.
Then create an instance of the ChattViewModel in your MainActivity class:
val viewModel by viewModels<ChattViewModel>()
and add the following block of code to the onCreate() method, before the call to setContent():
if (viewModel.appLaunch) {
// set up system prompt only once, not on orientation change
viewModel.appLaunch = false
viewModel.appID?.let { appID ->
if (!viewModel.sysmsg.isEmpty()) {
// disable interaction until llmPrep is done
lifecycleScope.launch(Dispatchers.Main.immediate) {
llmPrep(appID, Chatt(
name = viewModel.onTrailingEnd,
message = mutableStateOf(viewModel.sysmsg),
), viewModel.errMsg
)
}
}
}
}
Run and test to verify and debug
You should now be able to run your front end against the provided back end on mada.eecs.umich.edu by changing the serverUrl property in your ChattStore to mada.eecs.umich.edu. Once you have your own back end set up, change serverUrl back to YOUR_SERVER_IP. You will not get full credit if your front end is not set up to work with your own back end!
To test your llmPrep() method against mada.eecs.umich.edu, you can use the gemma3 model. To test it
against your own back end running on a *-micro instance, use the qwen3:0.6b model. Assuming the system
prompt is, “Start every assistant reply with GO BLUE!!!”, you should see “GO BLUE!!!” prepended to all
responses from the LLM.
The back-end spec provides instructions on testing llmChat’s API and
SSE error handling.
Congratulations! You’re done with the front end! (Don’t forget to work on the back end!)
Front-end submission guidelines
We will only grade files committed to the main branch. If you’ve created multiple
branches, please merge them all to the main branch for submission.
Push your front-end code to the same GitHub repo you’ve submitted your back-end code:
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on the GitHub repo you created at the start of this tutorial
- Add Summary to your changes and click Commit to main at the bottom of the left pane
- Since you have pushed your back end code, you’ll have to click Pull Origin to synch up the repo on your laptop
- Finally click Push Origin to push all changes to GitHub
Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo
under the folder llmchat. Confirm that your repo has a folder structure outline similar to the following. If
your folder structure is not as outlined, our script will not pick up your submission and, further, you may have
problems getting started on later tutorials. There could be other files or folders in your local folder not listed
below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing
GitHub for Reactive, only files needed for grading
will be pushed to GitHub.
reactive
|-- chatterd
|-- chatterd.crt
|-- llmchat
    |-- composeChatter
        |-- app
        |-- gradle
        # and other files or folders
Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo and build and run your submission to make sure that it works. You will get a ZERO if your tutorial doesn’t build, run, or open.
IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we’d know. Otherwise, we could mistakenly think that you were cheating and report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.
Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement since the previous lab, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.
References
SSE
Appendix: imports
| Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, and Sugih Jamin | Last updated: January 15th, 2026 |