Tutorial: llmChat Compose

Cover Page

DUE Wed, 10/1, 2 pm

The front-end work mostly consists of writing a new network function, llmChat(), plus a couple of small changes to the rest of the code. We will build on the simpler code base from the first tutorial, llmPrompt.

Expected behavior

Carrying a “conversation” with an LLM:

DISCLAIMER: the video demo shows you one aspect of the app’s behavior. It is not a substitute for the spec. If there are any discrepancies between the demo and this spec, please follow the spec. The spec is the single source of truth. If the spec is ambiguous, please consult the teaching staff for clarification.

Preparing your GitHub repo

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmchat
        |-- composeChatter
            |-- app
            |-- gradle  
    # and other files or folders

In addition, your YOUR_TUTORIALS folder on your laptop should contain the chatter.zip file.

If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.

Dependencies

Add the following line to your build.gradle (Module:), in the plugins {} block at the top of the file:

plugins {
    // . . .
    kotlin("plugin.serialization") version "2.2.10"
}

then scroll down to the bottom of the file. In the dependencies {} block add:

dependencies {
    // . . .
    implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.9.0")
}

and click Sync Now on the Gradle menu strip that shows up at the top of the editor screen.

appID

Since you will be sharing PostgreSQL database storage with the rest of the class, we need to identify your entries so that only yours are forwarded to Ollama during your “conversation”. In your MainActivity.kt file, add this appID property to your ChattViewModel:

    val appID = app.applicationContext.packageName
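
packageName returns the application ID declared in your module’s build.gradle, so the entries your app creates are tagged with a string unique to your app. As a quick sanity check you can log it; the value shown below is hypothetical and depends on your project’s configuration:

    Log.d("LLMCHAT", "appID = $appID")
    // logs something like: appID = edu.umich.uniqname.composechatter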

Note:

  1. To avoid overflowing the PostgreSQL database on mada.eecs.umich.edu, we may periodically empty the database.
  2. Ollama handles only one connection at a time, putting all other connections “on hold”. Limit your “conversations” with Ollama to simple tests such as “Hi my name is Ishmael” followed by “What is my name?”, just to see that it can relate to previous interactions. If you experience long wait times trying to interact with Ollama, it could be due to other classmates accessing it at the same time.

llmChat()

In ChattStore.kt, first add the following types outside your ChattStore class:

enum class SseEventType { Error, Message }

@Serializable
data class OllamaMessage(val role: String, val content: String?)

@Serializable
data class OllamaRequest(
    val appID: String?,
    val model: String?,
    val messages: List<OllamaMessage>,
    val stream: Boolean
)

@Serializable
@JsonIgnoreUnknownKeys
data class OllamaResponse(
    val model: String,
    val created_at: String,
    val message: OllamaMessage,
)
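
To see what @JsonIgnoreUnknownKeys buys us, here is a minimal sketch (the JSON payload is made up, though Ollama responses may carry extra fields such as a done flag): keys not declared in OllamaResponse are skipped during decoding instead of raising an error.

val sample = """{"model": "m", "created_at": "2025-08-08T00:00:00Z",
    "message": {"role": "assistant", "content": "Hello"}, "done": false}"""
val decoded = Json.decodeFromString<OllamaResponse>(sample)
// decoded.message.content == "Hello"; the unknown "done" key is ignored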

Rename your llmPrompt() function to llmChat(), with the following signature:

    suspend fun llmChat(appID: String?, chatt: Chatt, errMsg: MutableState<String>) {

Previously, we constructed a jsonObj Kotlin map and serialized it “manually” into a request body. In this tutorial, we instead rely on Kotlin serialization to do the work for us. Replace this block of code:

        val jsonObj = mapOf(
            "model" to chatt.username,
            "prompt" to chatt.message?.value,
            "stream" to true,
        )
        val requestBody = JSONObject(jsonObj).toString()
            .toRequestBody("application/json; charset=utf-8".toMediaType())

with:

        val ollamaRequest = OllamaRequest(
            appID = appID,
            model = chatt.username,
            messages = listOf(OllamaMessage("user", chatt.message?.value)),
            stream = true
        )
        val requestBody = Json.encodeToString(ollamaRequest)
            .toRequestBody("application/json; charset=utf-8".toMediaType())
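
For intuition, here is roughly what the encoded request body looks like; the appID and model values below are hypothetical:

{"appID":"edu.umich.uniqname.composechatter","model":"llama3.2","messages":[{"role":"user","content":"Hi my name is Ishmael"}],"stream":true}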

Next, replace llmprompt in apiUrl with llmchat.

To allow your app to accept an SSE stream, replace this line:

            .addHeader("Accept", "application/*")

with:

            .addHeader("Accept", "text/event-stream")

Parsing SSE stream

Finally, we’re ready to parse the incoming stream as an SSE stream. An SSE stream consists of text strings in a specific format:

event: eventName
data: a line of info associated with eventName

event: newEvent
data: a line of newEvent info
data: another line of newEvent info

data: a line of info implicitly associated with the default Message event

data: another line also of the Message event

Each event is tagged with an event line carrying the event’s name. An event line is terminated by a newline ('\n' or, for streams from a Python server or on Windows, "\r\n"). Then follow one or more lines of data associated with that event, each likewise terminated by a newline. An empty line (i.e., two consecutive newlines, "\n\n") denotes the end of an event block.

A data line after an empty line is assumed to be part of the default Message event, which is allowed to be unspecified, as in the last two data lines in the above example.
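
Putting this together, the stream your app receives from the llmchat API will look something like the following; each data payload is shaped like OllamaResponse (the payload contents here are illustrative):

data: {"model":"llama3.2","created_at":"2025-08-08T00:00:00Z","message":{"role":"assistant","content":"Hello"}}

data: {"model":"llama3.2","created_at":"2025-08-08T00:00:01Z","message":{"role":"assistant","content":", Ishmael"}}

event: error
data: {"model":"llama3.2","created_at":"2025-08-08T00:00:02Z","message":{"role":"assistant","content":"some error description"}}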

Continuing in llmChat(), scroll all the way down to the line val stream = response.body.source() and add the following line above it:

            var sseEvent = SseEventType.Message

In the subsequent while (!stream.exhausted()) code block, replace the try-catch block with:

                if (line.isEmpty()) {
                    // new SSE event, default to Message
                    // SSE events are delimited by "\n\n"
                    if (sseEvent == SseEventType.Error) {
                        resChatt.message?.value += "\n\n**llmChat Error**: ${errMsg.value}\n\n"
                    }
                    sseEvent = SseEventType.Message
                    continue
                }

An empty line (caused by two consecutive newlines, "\n\n") indicates the end of an event block. When an empty line is detected while we are in an Error event block (as set in the next block of code), we report the error on the timeline (we will also pop up an alert dialog box with the error message later). Then we reset the event to the default Message event.

If the next line starts with event, we’re starting a new event block; otherwise, it’s a data line and we handle (save) it according to the event it’s associated with. Recall that, left unspecified, Message is the default event.

                val parts = line.split(":", limit = 2)
                if (parts[0].startsWith("event")) {
                    val event = parts[1].trim()
                    if (event == "error") {
                        sseEvent = SseEventType.Error
                } else if (event.isNotEmpty() && event != "message") {
                        // we only support "error" event,
                        // "message" events are, by the SSE spec,
                        // assumed implicit by default
                        Log.d("LLMCHAT", "Unknown event: '${parts[1]}'")
                    }
                } else if (parts[0].startsWith("data")) {
                    // not an event line, we only support data line;
                    // multiple data lines can belong to the same event
                    try {
                        val ollamaResponse = Json.decodeFromString<OllamaResponse>(parts[1])
                        if (sseEvent == SseEventType.Error) {
                            errMsg.value += ollamaResponse.message.content
                        } else {
                            resChatt.message?.value += ollamaResponse.message.content
                        }
                    } catch (e: IllegalArgumentException) {
                        errMsg.value += parseErr(e.localizedMessage, apiUrl, parts[1])
                    }
                }

Be sure to retain the enclosing catch clause below the while loop.
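
As a sanity check on the parsing logic, note that limit = 2 in split() matters: only the first colon separates the field name from its value, so colons inside the JSON payload survive intact.

val parts = """data: {"role":"assistant"}""".split(":", limit = 2)
// parts[0] == "data"
// parts[1] == " {\"role\":\"assistant\"}" -- leading space and inner colons preserved;
// Json.decodeFromString() tolerates the leading whitespace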

SubmitButton

Finally, replace your call to llmPrompt() in SubmitButton() in the file MainView.kt with:

                llmChat(vm.appID, Chatt(username = vm.model,
                    message = mutableStateOf(vm.message.text.toString()),
                    timestamp = Instant.now().toString()
                ), vm.errMsg)

Congratulations! You’re done with the front end! (Don’t forget to work on the backend!)

Run and test to verify and debug

You should now be able to run your front end against the provided back end on mada.eecs.umich.edu, by changing the serverUrl property in your ChattStore to mada.eecs.umich.edu. Once you have your backend set up, change serverUrl back to YOUR_SERVER_IP. You will not get full credit if your front end is not set up to work with your backend!

The backend spec provides instructions on testing llmChat’s API and SSE error handling.

Front-end submission guidelines

We will only grade files committed to the main branch. If you’ve created multiple branches, please merge them all to the main branch for submission.

Push your front-end code to the same GitHub repo you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder llmchat. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive, only files needed for grading will be pushed to GitHub.

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmchat
        |-- composeChatter
            |-- app
            |-- gradle  
    # and other files or folders

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get a ZERO if your tutorial doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement since the previous lab, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.

References

SSE: Server-Sent Events, WHATWG HTML Living Standard, https://html.spec.whatwg.org/multipage/server-sent-events.html

Appendix: imports
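
The imports used by the code added in this tutorial are, plausibly, the following; your files may already have most of them, and the exact set depends on your existing code. Depending on your kotlinx.serialization version, you may also need @OptIn(ExperimentalSerializationApi::class) at the @JsonIgnoreUnknownKeys use site, since that annotation is experimental.

// ChattStore.kt
import android.util.Log
import androidx.compose.runtime.MutableState
import kotlinx.serialization.ExperimentalSerializationApi
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.JsonIgnoreUnknownKeys
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.RequestBody.Companion.toRequestBody

// MainView.kt
import androidx.compose.runtime.mutableStateOf
import java.time.Instant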


Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, and Sugih Jamin. Last updated: August 8, 2025.