Project 1: llmDraft Compose
Cover Page
DUE Wed, 09/24, 2 pm
If you have not done the llmPrompt and
Chatter tutorials, please complete them first.
The rest of this spec will assume you have completed both tutorials
and continue from where they left off.
Preparing your GitHub repo
- On your laptop, navigate to YOUR_TUTORIALS/
- Create a zip of your chatter folder
- Rename your chatter folder project1
- Remove your project1’s .gradle directory by running in a shell window:

laptop$ cd YOUR_TUTORIALS/project1/composeChatter
laptop$ rm -rf .gradle

- Push your local YOUR_TUTORIALS/ repo to GitHub and make sure there are no git issues (git push):
  - Open GitHub Desktop and click on Current Repository on the top left of the interface
  - Click on your reactive GitHub repo
  - Add Summary to your changes and click Commit to main
  - If you have pushed other changes to your Git repo, click Pull Origin to sync up the clone on your laptop
  - Finally click on Push Origin to push changes to GitHub
Go to the GitHub website to confirm that your folders follow this structure outline:
reactive
|-- chatterd
|-- chatterd.crt
|-- project1
    |-- composeChatter
        |-- app
        |-- gradle
In addition, your YOUR_TUTORIALS/ folder on your laptop should contain the llmprompt.zip and chatter.zip files.
If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.
Chatt
We will use the same Chatt structure used in the two tutorials. No change to the Chatt.kt file is needed.
Rewrite UI
The rewrite UI consists of an AI button to the left of the text box at the
bottom of the screen in MainView. Taking inspiration from the SubmitButton
in the same file, create an “AI” button with the following “AI star” icons from
Figma Community that we have cached locally:
- https://reactive.eecs.umich.edu/img/llmDraft/Android/star.png
- https://reactive.eecs.umich.edu/img/llmDraft/Android/star_disabled.png
Put both png files in your /app/res/drawable folder in the left/navigation
pane of Android Studio. You can then use them as your Icon image via
painterResource, for example:
Icon(painter = painterResource(R.drawable.star), /* ... */)
Enable the button if and only if the textbox is not empty and we’re not
already in the process of asking Ollama for a suggestion and awaiting its reply.
When the button is enabled, display the star.png image; otherwise, display the
star_disabled.png image.
When the enabled button is clicked, it sends the draft in the text box, along with a “rewrite” prompt, to Ollama on the backend. We will discuss this process in its own section later.
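For concreteness, here is a minimal sketch of such a button, assuming Material 3 Compose. AiButton, vm.isRequesting (the request-in-flight flag), and vm.message (the text box’s TextFieldState) are hypothetical names for illustration; pattern your actual button after SubmitButton and adapt it to your own state names:

import androidx.compose.material3.Icon
import androidx.compose.material3.IconButton
import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.res.painterResource

@Composable
fun AiButton(vm: ChattViewModel) {
    // enabled iff the text box is non-empty and no Ollama request is in flight
    val enabled = vm.message.text.isNotEmpty() && !vm.isRequesting.value
    IconButton(
        onClick = { /* set the flag and launch promptLlm(); see the Ollama section below */ },
        enabled = enabled
    ) {
        Icon(
            painter = painterResource(if (enabled) R.drawable.star else R.drawable.star_disabled),
            contentDescription = "AI",
            tint = Color.Unspecified  // show the PNGs with their own colors
        )
    }
}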
Reply UI
The reply UI consists of providing a callback function to the onLongPress
parameter of the detectTapGestures function, which you provide as the event handler block to the pointerInput modifier of the message bubble in the ChattView composable in the ChattScrollView.kt file. See how we add the pointerInput modifier to Scaffold
in MainView for an example use.
When the user long-presses a chatt posted by another user and there is no
outstanding request to Ollama for a rewrite or another reply suggestion, we send
the message in the selected posted chatt, along with a “reply” prompt, to
Ollama on the backend. We will discuss this process in its own section later.
This is a requirement of the reply feature: the above two conditions must
be met to trigger the feature. If the user long-presses their own posted chatt or
if there is already an ongoing request to Ollama that has not returned, the
reply feature is not triggered.
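As a sketch, the gesture handler on the message bubble might look like the following fragment. Here chatt.username, vm.username, and vm.isRequesting are hypothetical names standing in for however your app identifies the poster, the current user, and an outstanding Ollama request:

Modifier.pointerInput(Unit) {
    detectTapGestures(
        onLongPress = {
            // reply only to others’ chatts, and only one request at a time
            if (chatt.username != vm.username && !vm.isRequesting.value) {
                // populate the text box with the selected message and launch
                // the reply request (see “Sending request prompt to Ollama”)
            }
        }
    )
}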
Sending request prompt to Ollama
The final part of the assignment is to send the request prompt to Ollama. In my solution to the assignment, I split this task into two parts:
- a ChattStore method to handle the networking with the backend, and
- a ChattViewModel method to put together the prompt to send to Ollama using the networking method above.
llmDraft()
I name the method to handle networking with the backend, llmDraft().
Its implementation is patterned after llmPrompt() from the
llmPrompt tutorial. Here’s the full
signature I use:
suspend fun llmDraft(chatt: Chatt, updateDraft: suspend (String) -> Unit, errMsg: MutableState<String>) { }
The first and last parameters are the same as those of llmPrompt().
Since Ollama’s reply is streamed, we call updateDraft() every
time an element of the stream arrives, passing it the arriving chunk.
The function updateDraft() appends the newly arriving chunk to
the accumulated reply draft. When we call updateDraft(), Compose will
re-compose any composable observing the updated draft. When we have
a parameter of type MutableState<String>, such as errMsg, updating
its value property, as we do in llmPrompt(), triggers recomposition
of composables observing the argument. As we will see later, however,
the observable UI element we use to hold the draft, TextFieldState, is
more complicated, necessitating passing an updateDraft()
function to update the underlying observable value.
As in llmPrompt(), first create a JSON Object from the chatt
parameter. This is the prompt you will send to Ollama through chatterd’s
llmprompt API, the same API used in llmPrompt(). We wait for
a response from Ollama and check that the response isSuccessful.
The subsequent code is where this function differs from llmPrompt(). Unlike
llmPrompt(), we do not need to show a timeline of user exchanges
with Ollama. Thus, instead
of creating a dummy chatt message that we append to the chatts array,
we can decode each line of the returning stream into an OllamaReply
class directly and pass the response property of OllamaReply to a
call of updateDraft(). We can adopt all the error handling code from
llmPrompt(), though.
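To make this concrete, here is a sketch of llmDraft() as a ChattStore method. It assumes an OkHttp client and a kotlinx.serialization OllamaReply class with a response property; serverUrl, client, the llmprompt/ URL path, and the exact JSON field names are assumptions for illustration, so substitute whatever networking and decoding machinery your llmPrompt() already uses:

import androidx.compose.runtime.MutableState
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

@Serializable
data class OllamaReply(val response: String = "")  // shape assumed; keep your tutorial’s version

suspend fun llmDraft(chatt: Chatt, updateDraft: suspend (String) -> Unit, errMsg: MutableState<String>) {
    // same chatterd llmprompt API used by llmPrompt(); serverUrl and client
    // are assumed to be ChattStore properties from the tutorials
    val request = Request.Builder()
        .url(serverUrl + "llmprompt/")
        .post(JSONObject(mapOf(
            "username" to chatt.username,
            "message" to chatt.message?.value,
        )).toString().toRequestBody("application/json; charset=utf-8".toMediaType()))
        .build()
    try {
        // execute() blocks, so llmDraft() must be called off the Main/UI thread
        client.newCall(request).execute().use { response ->
            if (!response.isSuccessful) {
                errMsg.value = "llmDraft: ${response.code} ${response.message}"
                return
            }
            val json = Json { ignoreUnknownKeys = true }
            // Ollama streams one JSON object per line: decode each line as it
            // arrives and hand its response chunk to updateDraft()
            response.body?.source()?.let { source ->
                while (!source.exhausted()) {
                    source.readUtf8Line()?.let { line ->
                        updateDraft(json.decodeFromString<OllamaReply>(line).response)
                    }
                }
            }
        }
    } catch (e: Exception) {
        errMsg.value = "llmDraft: ${e.localizedMessage ?: "network error"}"
    }
}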
promptLlm()
My promptLlm() method puts together a prompt and calls llmDraft().
To gain easy access to states in the view model, I make promptLlm()
a method of ChattViewModel found in file MainActivity.kt.
Here’s the full signature I use:
suspend fun promptLlm(prompt: String) { }
When the user clicks the AI button to issue a rewrite request, I call
promptLlm() with a rewrite prompt. This is the rewrite prompt I use,
"You are a poet. Rewrite the content below to a poetic version. Don\'t list
options. Here\'s the content I want you to rewrite: "
You should feel free to create your own prompt, though I found the last phrase,
"Here's the content I want you to rewrite: " most helpful, especially for
short content. It seems to help the model recognize and separate the content
from the prompt instruction.
In my promptLlm(), I first create a chatt to be passed to llmDraft().
I set the username of this chatt to the name of the LLM model I want
Ollama to use, which should be just tinyllama if you’re using a *-micro
instance as your backend. Then I set the message of this chatt to be
a concatenation of the prompt parameter passed to promptLlm() with
ChattViewModel’s message property, to which the text box at the bottom
of MainView saves its message. Remember that chatt’s message is of type
MutableState<String>?, so you need to use mutableStateOf() when assigning
your constituted prompt to this property.
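For example, the assembly might look like this sketch, assuming the tutorials’ Chatt constructor takes username and message parameters:

val chatt = Chatt(
    username = "tinyllama",  // the LLM model we want Ollama to use
    message = mutableStateOf(prompt + message.text),  // prompt instruction + user draft
)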
The message property of ChattViewModel is of type TextFieldState.
Using TextFieldState, instead of a simple MutableState<String>, allows
us to use a newer version of Compose’s TextField UI element, which
automatically enlarges the text box, up to the lineLimits we’ve
previously specified, and makes it scrollable. This is useful when
displaying Ollama’s reply. Unfortunately, we cannot simply update its
value property when updating a TextFieldState, as we do with MutableState<T>.
Instead, we use one of the following methods of TextFieldState:
- clearText(): to clear the field,
- edit(): to modify the field, and
- setTextAndPlaceCursorAtEnd(): to replace the field’s content and place the cursor at the end.
Once the view model’s message property has been copied into a chatt variable
I will send to llmDraft(), I clear it using TextFieldState.clearText().
I will use the cleared ChattViewModel’s message property to store the
draft returned by Ollama. The second argument to llmDraft() is a method
to update message to accumulate the chunks of reply Ollama streams back.
Here’s the updateDraft lambda expression I use to call the TextFieldState.edit()
method of ChattViewModel’s message property. The updateDraft lambda
expression appends the string passed as its argument to message.
val updateDraft: suspend (String) -> Unit = {
    message.edit {
        append(it)
    }
}
While the rewrite function simply takes the content of ChattViewModel’s
message property to form part of the prompt, the reply function must
first put the selected chatt, posted by another user, into this variable,
so that when promptLlm() grabs the content of this property to form the
prompt, it is already populated with the selected chatt message. In the
onLongPress callback used in ChattView above, before I call promptLlm(),
I copy the selected chatt’s message to the message property of
ChattViewModel using the TextFieldState.setTextAndPlaceCursorAtEnd() method.
vm.message.setTextAndPlaceCursorAtEnd(msg.value)
When calling promptLlm(), this is the reply prompt I use,
"You are a poet. Write a poetic reply to this message I received. Don\'t
list options. Here\'s the message I want you to write a poetic reply to: "
In both cases, I always set a flag indicating that an Ollama request is
ongoing before calling promptLlm(). This is to prevent multiple ongoing
requests.
The setting of this flag is done while our Chatter app is running on the single-threaded Main/UI thread, so it is thread safe, i.e., there wouldn’t be multiple threads trying to set this flag at the same time.
We launch promptLlm() using vm.viewModelScope so that our request
to Ollama survives the composable lifecycles. Since we use a LazyColumn
to show ChattViews, scrolling a ChattView off the screen terminates
its composable and, subsequently, its Ollama request if we launch
promptLlm() in the composable’s CoroutineScope. The Ollama request
would similarly be terminated on device orientation change if we don’t
launch it in a viewModelScope. To ensure interactivity of the app, I
launch promptLlm() with Dispatchers.Default. You can see how this
is done when SubmitButton calls postChatt().
To achieve thread-safety, the setting of the flag to prevent multiple
ongoing Ollama requests must be done before you launch promptLlm()
on Dispatchers.Default. Once promptLlm() finishes and returns, I reset
the flag to indicate that the Ollama request is concluded and the user can
start another one.
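Putting the flag and the launch together, the AI button’s onClick might look like this fragment, where isRequesting and rewritePrompt are hypothetical names:

// set the flag on the Main/UI thread, before launching on Dispatchers.Default
vm.isRequesting.value = true
vm.viewModelScope.launch(Dispatchers.Default) {
    vm.promptLlm(rewritePrompt)
    // promptLlm() has returned: the request is concluded
    vm.isRequesting.value = false
}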
Additional UX (optional)
The following UX features are intended to increase the perceived responsiveness and interactivity of the app. You can choose to implement them to match the demo video, but you won’t be deducted points if you don’t (nor will there be extra credit if you do!).
- When an Ollama request is ongoing, the message in the text box at the bottom of MainView changes to notify the user that the request is ongoing.
- When a chatt posted by another user is selected to generate a reply, the background of the “selected” chatt is displayed in gray until a reply draft is fully received.
- This one is more a workaround than a feature: when the textbox is not being edited, i.e., shown without the soft keyboard visible, it doesn’t scroll the text streamed to it beyond the first few lines. Thus in my implementation of promptLlm(), I manually scroll the text box after llmDraft() is done and returns:

withContext(AndroidUiDispatcher.Main) {
    msgScroll.animateScrollTo(msgScroll.maxValue)
}

Since scrolling message directly updates the UI, it must be done on the AndroidUiDispatcher.Main thread, hence animateScrollTo() is wrapped in withContext(AndroidUiDispatcher.Main) { }. As you can see above, to manually scroll the text box, I added a msgScroll property to ChattViewModel:

val msgScroll = ScrollState(0)

which I initialize by adding this assignment to the scrollState parameter of my OutlinedTextField in MainView:

scrollState = vm.msgScroll,
That’s all for Project 1!
Run and test to verify and debug
Be sure to run your front end against your backend. You will not get full credit if your front end is not set up to work with your backend!
Submission guidelines
If you have not submitted your backend as part of completing the llmPrompt
and Chatter tutorials, follow the instructions in those tutorials to submit
your backend. Otherwise, you don’t need to submit your backend again.
Submit your updated frontend for Project 1. As usual, we will only grade files
committed to the main branch. If you use multiple branches, please merge
them all to the main branch for submission.
Push your front-end code to the same GitHub repo to which you’ve submitted your back-end code:
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on the GitHub repo you created at the start of this tutorial
- Add Summary to your changes and click Commit to main at the bottom of the left pane
- If you have pushed code to your repo, click Pull Origin to sync up the repo on your laptop
- Finally click Push Origin to push all changes to GitHub
Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub
repo under the folder project1. Confirm that your repo has a folder structure outline similar to the following.
If your folder structure is not as outlined, our script will not pick up your submission and, further, you may
have problems getting started on later tutorials. There could be other files or folders in your local folder
not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions
in Preparing GitHub for Reactive Tutorials, only
files needed for grading will be pushed to GitHub.
reactive
|-- chatterd
|-- chatterd.crt
|-- project1
    |-- composeChatter
        |-- app
        |-- gradle
In addition, your YOUR_TUTORIALS/ folder on your laptop should contain the llmprompt.zip and chatter.zip files.
Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get a ZERO if your project doesn’t build, run, or open.
IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we’d know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.
Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement from the previous lab’s, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.
Appendix: imports
Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, Sugih Jamin | Last updated: August 27th, 2025