Project 1: llmDraft Compose

Cover Page

If you have not done the llmPrompt and Chatter tutorials, please complete them first. The rest of this spec will assume you have completed both tutorials and continue from where they left off.

Preparing your GitHub repo

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmdraft
        |-- composeChatter
            |-- app
            |-- gradle

Your YOUR_TUTORIALS folder on your laptop should additionally contain the llmprompt.zip and chatter.zip files.

If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.

Chatt

We will use the same Chatt structure used in the two tutorials. No changes to the Chatt.kt file are needed.

Rewrite UI

The rewrite UI consists of an AI button to the left of the text box at the bottom of the screen in MainView. Taking inspiration from the SubmitButton in the same file, create an “AI” button using the following “AI star” icons from the Figma Community, which we have cached locally:

Put both PNG files in your /app/res/drawable folder in the left/navigation pane of Android Studio. You can then use them as the painterResource of your Icon, for example:

        Icon(painter = painterResource(R.drawable.star), /* ... */)

Enable the button if and only if the text box is not empty and we’re not already in the process of asking Ollama for a suggestion and awaiting its reply. When the button is enabled, display the star.png image; otherwise, display the star_disabled.png image.

When the enabled button is clicked, it sends the draft in the text box, along with a “rewrite” prompt, to Ollama on the back end. We will discuss this process in its own section later.
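
For reference, here is a minimal sketch of such a button, assuming a placeholder isWaitingForOllama flag for the “request in progress” state and an onAiClick handler that you wire up yourself (imports omitted; see the Appendix):

    @Composable
    fun AiButton(vm: ChattViewModel, isWaitingForOllama: Boolean, onAiClick: () -> Unit) {
        // enabled iff the text box is non-empty and no Ollama request is outstanding
        val enabled = vm.message.text.isNotEmpty() && !isWaitingForOllama
        IconButton(onClick = onAiClick, enabled = enabled) {
            Icon(
                painter = painterResource(
                    if (enabled) R.drawable.star else R.drawable.star_disabled
                ),
                contentDescription = "AI",
                tint = Color.Unspecified  // keep the PNGs' own colors
            )
        }
    }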

Reply UI

The reply UI consists of providing a callback function to the onLongPress parameter of the detectTapGestures function. You provide this callback function as the event handler of the pointerInput modifier that you attach to the message bubble in the ChattView composable in the ChattScrollView.kt file. See how we add the pointerInput modifier to Scaffold in MainView for example usage.

When the user long presses on a chatt posted by another user and there is no outstanding request to Ollama for a rewrite or another reply suggestion, we send the message in the selected posted chatt, along with a “reply” prompt, to Ollama on the back end. We will discuss this process in its own section later.

A REQUIREMENT of the reply feature: both of the above conditions must be met to trigger the feature. If the user long presses on their own posted chatt, or if there is already an ongoing request to Ollama that has not returned, the reply feature is not triggered.
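
As a sketch, the gesture hookup on the message bubble might look like the following, where isOwnChatt (“this chatt was posted by the signed-in user”) and isWaitingForOllama are placeholder names for state you maintain yourself:

    Modifier.pointerInput(Unit) {
        detectTapGestures(
            onLongPress = {
                // react only to chatts posted by others, and only when no
                // rewrite/reply request is already outstanding
                if (!isOwnChatt && !isWaitingForOllama) {
                    // kick off the reply-draft request (see the next section)
                }
            }
        )
    }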

Sending the request prompt to Ollama

The final part of the assignment is to send the request prompt to Ollama. In my solution to the assignment, I split this task into two parts:

  1. a ChattStore method to handle the networking with the back end, and
  2. a ChattViewModel method to put together the prompt to send to Ollama using the networking method above.

llmDraft()

I name the method to handle networking with the back end, llmDraft(). Its implementation is patterned after llmPrompt() from the llmPrompt tutorial. Here’s the full signature I use:

    suspend fun llmDraft(chatt: Chatt, updateDraft: suspend (String) -> Unit, errMsg: MutableState<String>) { }

The first and last parameters are the same as those of llmPrompt(). Since Ollama’s reply is streamed, we call updateDraft() every time an element of the stream arrives, passing it the arriving chunk. The function updateDraft() appends the newly arriving chunk to the accumulated reply draft. When we call updateDraft(), Compose re-composes any composables observing the updated draft. Making updates to the TextFieldState that holds the draft observable is more involved than updating a MutableState<T>, so we encapsulate the update process in the updateDraft() function parameter that we pass in to llmDraft().

Unlike in the llmPrompt tutorial, here we do not need to show a timeline of the user’s exchanges with Ollama, so we do not need to create and append a dummy chatt message to the chatts array.

As in llmPrompt(), create a JSON object from the chatt parameter. This is the prompt you will send to Ollama through chatterd’s llmprompt API, the same API used in the llmPrompt tutorial. Once we get an isSuccessful response, we can decode each line of the returning stream directly into an OllamaReply instance and pass the instance’s response property to the updateDraft() parameter. All error handling from llmPrompt() can be used as is.
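
To make the flow concrete, here is a minimal sketch of my llmDraft(), assuming the OkHttp client, serverUrl, and @Serializable OllamaReply class from the llmPrompt tutorial. The JSON field names and the .value accesses are assumptions; adjust them to your own Chatt and to what your chatterd llmprompt API expects (imports omitted; see the Appendix):

    suspend fun llmDraft(chatt: Chatt, updateDraft: suspend (String) -> Unit, errMsg: MutableState<String>) {
        // build the request prompt from the chatt parameter; field names are assumptions
        val jsonObj = mapOf(
            "model" to chatt.name,               // adjust if your Chatt's name is a MutableState
            "prompt" to chatt.message?.value,
        )
        val request = Request.Builder()
            .url("${serverUrl}llmprompt/")
            .post(JSONObject(jsonObj).toString()
                .toRequestBody("application/json; charset=utf-8".toMediaType()))
            .build()

        try {
            client.newCall(request).execute().use { response ->
                if (response.isSuccessful) {
                    response.body?.source()?.let { source ->
                        // Ollama streams its reply one JSON object per line
                        while (!source.exhausted()) {
                            source.readUtf8Line()?.let { line ->
                                val reply = Json { ignoreUnknownKeys = true }
                                    .decodeFromString<OllamaReply>(line)
                                updateDraft(reply.response)
                            }
                        }
                    }
                } else {
                    errMsg.value = "llmDraft: ${response.code} ${response.message}"
                }
            }
        } catch (e: Exception) {
            errMsg.value = "llmDraft: ${e.localizedMessage ?: e.toString()}"
        }
    }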

promptLlm() and updateDraft()

My promptLlm() function prepares a Chatt message with the appropriate prompt and calls llmDraft(). I put this function in the MainView.kt file. Here’s the function signature I use:

    suspend fun promptLlm(vm: ChattViewModel, prompt: String) { }

When the user clicks the AI button to issue a rewrite request, I call promptLlm() with the following rewrite prompt:

"You are a poet. Rewrite the content below to a poetic version. Don\'t list 
options. Here\'s the content I want you to rewrite:"

Feel free to create your own prompt, though I found the last phrase, “Here’s the content I want you to rewrite:” most helpful, especially for short content. It seems to help the model recognize and separate the content from the prompt instruction.

In my promptLlm(), the chatt I pass to llmDraft() has its name property set to the name of the LLM model I want to use. If you’re running your back end on a *-micro instance, you may want to pull and use the qwen3:0.6b model. I found gemma3:270m to not follow my prompt instructions reliably.

Then I concatenate the message I want Ollama to work on with the prompt parameter passed in to promptLlm(). Remember that chatt’s message is of type MutableState<String>?, so you need to use mutableStateOf() when assigning the constructed prompt to this property.

The message property of ChattViewModel is of type TextFieldState. Using TextFieldState, instead of a simple MutableState<String>, allows us to use a newer version of Compose’s TextField UI element, which automatically enlarges the text box, up to the lineLimits we’ve previously specified, and makes it scrollable. This is useful when displaying Ollama’s reply. Unfortunately, updating a TextFieldState is more complicated than simply updating a value property, as we do with MutableState<T>. Instead, TextFieldState is updated through methods such as edit {}, clearText(), and setTextAndPlaceCursorAtEnd(), which we use below.

Once I’ve stored the prompt and the view model’s message property into the chatt variable I will pass to llmDraft(), I call TextFieldState.clearText() to clear the view model’s message property so that I can use it to store the draft returned by Ollama.
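
Here is a minimal sketch of my promptLlm() under these assumptions: the view model exposes chattStore and errMsg, updateDraft is the lambda shown next, and your Chatt constructor accepts the name and message arguments shown:

    suspend fun promptLlm(vm: ChattViewModel, prompt: String) {
        val chatt = Chatt(
            name = "qwen3:0.6b",   // LLM model; wrap in mutableStateOf() if your Chatt requires it
            message = mutableStateOf("$prompt\n${vm.message.text}"),   // prompt + content to work on
        )
        vm.message.clearText()     // free the text box to hold Ollama's streamed draft
        vm.chattStore.llmDraft(chatt, vm.updateDraft, vm.errMsg)
    }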

To accumulate Ollama’s streamed replies into the view model’s message property, I add the following updateDraft property to my ChattViewModel:

    val updateDraft: suspend (String) -> Unit = {
        message.edit {
            append(it)
        }
    }

and pass the lambda expression as the updateDraft argument of llmDraft().

In the case when the user clicks the AI button to issue a rewrite request, the message property of ChattViewModel already contains the draft message the user wants Ollama to rewrite. When issuing a reply draft request, however, the message is held in the selected chatt posted by another user. We must first copy this message into the message property of ChattViewModel before calling promptLlm(). I do this in the onLongPress callback described above, before calling promptLlm():

    vm.message.setTextAndPlaceCursorAtEnd(msg.value)

When calling promptLlm() to request a reply draft, this is the reply prompt I use:

"You are a poet. Write a poetic reply to this message I received. Don\'t
list options. Here\'s the message I want you to write a poetic reply to:"

In both cases, I always set a flag to indicate that an Ollama request is “in progress”, before calling promptLlm(), to prevent multiple ongoing requests.

We set this flag while our Chatter app is running on the Main/UI thread, so the update is thread-safe: there won’t be multiple threads trying to set the flag at the same time.

We launch promptLlm() using vm.viewModelScope so that our request to Ollama survives the composable lifecycles. Since I use a LazyColumn to show ChattViews, if I launch promptLlm() in the composable’s CoroutineScope, scrolling a ChattView off the screen terminates its composable and, subsequently, its Ollama request.

The Ollama request will similarly be terminated on device orientation change if I don’t launch it in a viewModelScope. To ensure interactivity of the app, I launch promptLlm() with Dispatchers.Default. You can see how this is done when SubmitButton calls postChatt().

To achieve thread-safety, the setting of the flag to prevent multiple ongoing Ollama requests must be done before I launch promptLlm() on Dispatchers.Default. Once promptLlm() finishes and returns, I reset the flag to indicate that the Ollama request is concluded and the user can start another one.
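
Putting these pieces together, the body of my onLongPress callback (and, analogously, the AI button’s onClick handler) looks roughly like the following sketch, where isWaitingForOllama (a var isWaitingForOllama by mutableStateOf(false) in the view model) and replyPrompt are placeholder names:

    // set the flag on the Main/UI thread, before launching off the main thread
    isWaitingForOllama = true
    vm.message.setTextAndPlaceCursorAtEnd(msg.value)    // reply case only: copy the selected message
    vm.viewModelScope.launch(Dispatchers.Default) {
        promptLlm(vm, replyPrompt)
        isWaitingForOllama = false    // promptLlm() returned: allow the next request
    }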

Additional UX (optional)

The following UX features are intended to increase the perceived responsiveness and interactivity of the app. You may choose to implement them to match the demo video, but no points will be deducted if you don’t (nor will there be extra credit if you do!).

That’s all for llmDraft!

Run and test to verify and debug

As mentioned earlier, pull and use the qwen3:0.6b model if you are running your back end on a *-micro instance.

Be sure to run your front end against your back end. You will not get full credit if your front end is not set up to work with your back end!

Submission guidelines

If you have not submitted your back end as part of completing the llmPrompt and Chatter tutorials, follow the instructions in those tutorials to submit your back end. Otherwise, you don’t need to submit your back end again.

Submit your updated front end for llmDraft. As usual, we will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.

Push your front-end code to the same GitHub repo to which you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder llmdraft. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There may be other files or folders in your local folder that are not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive Tutorials, only files needed for grading will be pushed to GitHub.

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmdraft
        |-- composeChatter
            |-- app
            |-- gradle  

Your YOUR_TUTORIALS folder on your laptop should additionally contain the llmprompt.zip and chatter.zip files.

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get ZERO points if your submission doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement since the previous lab, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.

Appendix: imports


Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, and Sugih Jamin. Last updated: January 10th, 2026