Project 3: llmAction Compose
Preparing your GitHub repo
If you have not completed the llmTools and Signin tutorials, please
complete them first. We will base this project on the llmTools code base, but will also copy over
the bulk of the Signin code.
In the following, replace /YOUR:TUTORIALS/ with the name of your tutorials folder.
- On your laptop, navigate to /YOUR:TUTORIALS/
- Create a zip of your llmtools folder
- Rename your llmtools folder llmaction
- Prepare your repo as you have done in previous tutorials and projects
- Push your local /YOUR:TUTORIALS/ repo to GitHub and make sure there are no git issues: git push
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on your reactive GitHub repo
- Add Summary to your changes and click Commit to main
- If you have pushed other changes to your Git repo, click Pull Origin to sync up the clone on your laptop
- Finally click on Push Origin to push changes to GitHub
Go to the GitHub website to confirm that your folders follow this structure outline:
reactive
|-- chatterd
|-- chatterd.crt
|-- llmtools
|-- llmaction
|-- composeChatter
|-- app
|-- gradle
|-- tools
# and other files or folders
If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.
Obtaining chatterID
The main purpose of the get_auth tool is for the LLM to obtain an authorization token. We require
LLMs to acquire an authorization token to make operative tool calls that can take action with
real-world side-effects. The authorization token requirement forms our human-in-the-loop guardrail.
In this project, the chatterID first used in the Signin tutorial is our authorization token.
Creating a chatterID requires the user to sign in using Google Signin. To store/load the chatterID for
reuse within its limited lifetime, we further require biometric authentication from the user. Together,
the Google Signin and biometric-check requirements mean that the authorization token can only be
obtained from user-facing front-end devices.
The front-end UI/UX and the network streaming and tool-call infrastructure are those of llmTools.
To these, we add code from the Signin tutorial to use Google Signin and biometric authentication.
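The "limited lifetime" check can be sketched as follows. This is an illustration only: the function name isValid and the ISO-8601 expiration format are assumptions; your Signin tutorial code may represent the expiration differently.

```kotlin
import java.time.Instant

// Sketch: a chatterID counts as a valid authorization token only if it
// exists and its expiration timestamp lies in the future.
fun isValid(id: String?, expiration: String?): Boolean {
    if (id == null || expiration == null) return false      // never signed in
    return Instant.parse(expiration).isAfter(Instant.now()) // not yet expired
}
```

When the check fails, the app must send the user through Google Signin again to mint a fresh chatterID.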
Signin migration
- Follow the instructions in the Signin tutorial to install the Google Signin SDK. Then copy over the OAuth client ID you created. You can review the instructions in the tutorial on how to create these, especially step 7 on Client ID.
- Copy over code from the Signin tutorial that
  - handles signing in to Google,
  - uses the idToken obtained from Google Signin to further obtain a chatterID from the chatterd back end,
  - saves and loads chatterID to/from secure storage with biometric authentication, and
  - updates the rest of the application as needed to support Signin.
ChattViewModel
Recall that to obtain the Google ID, the CredentialManager and the subsequent biometric check both
require access to an Android Context. Whereas in the Signin tutorial we run in composables
and have access to a Context, the get_auth tool is invoked in a network event handler with no access
to any Context. To provide the CredentialManager and biometric check with a Context, we wrap the
call to signin() in a composable function, SigninView(). A composable, unfortunately, can only
be called from another composable. To bridge get_auth() and SigninView(), add the following
properties to your ChattViewModel in MainActivity.kt:
var showOk = mutableStateOf(false)
var getSignedin = mutableStateOf(false)
var signinCompletion: (() -> Unit)? = null
Further, since we need to access ChattViewModel from the networking code, but only for
chatterID-related operations, we make ChattViewModel available to the ChatterID singleton:
- add a vm property to ChatterID, of type nullable ChattViewModel and initialized to null,
- at app launch, in MainActivity, before calling setContent():
  - assign an instance of ChattViewModel to ChatterID's new vm property,
  - on viewModel.appLaunch, load chatterID from the previous run of the app as we did in the Signin tutorial, without updating the onTrailingEnd property, which must remain immutable, and
  - make a call to the /llmprep API as we do in the llmChat tutorial to instruct the LLM with the following system prompt,
- update your sysmsg in /app/res/values/strings.xml with:
<string name="sysmsg">Use pull to add a model, rm to remove or delete one, and ls to list or show models---always provide an empty string \'\' as the argument arg to ls. Show listing as a table. Use get_auth to get authorization token before each call to ollama_cli.</string>
You can also update the initial message in strings.xml to say:
<string name="message">List Ollama models.</string>
Make sure that your MainActivity class extends/inherits from FragmentActivity instead of ComponentActivity to perform the biometric check.
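The resulting ChatterID singleton might look like the following sketch. The id and expiration properties are assumed from the Signin tutorial, and ChattViewModel here is a stand-in class; in the app, vm holds your actual view model.

```kotlin
// Stand-in for the real ChattViewModel (an androidx.lifecycle.ViewModel subclass in the app).
class ChattViewModel

// Sketch: ChatterID extended with a nullable vm property so that
// networking code (e.g., getAuth()) can reach the view model.
object ChatterID {
    var id: String? = null          // the authorization token (chatterID)
    var expiration: String? = null  // token lifetime, as in the Signin tutorial
    var vm: ChattViewModel? = null  // assigned in MainActivity before setContent()
}
```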
Toolbox
Assuming you have created the get_auth JSON schema file in your tools folder, add an entry to the TOOLBOX switch
table to register the get_auth tool, using the getAuth() function we will define in the next subsection.
If you're not sure how to do this, you can use the entry for
get_location in the TOOLBOX switch table from the llmTools tutorial as an example.
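The registration might look like the following sketch, modeled on the get_location entry. The dispatcher name, table shape, and the stub bodies are assumptions for illustration; in the app, these are the suspend functions from your tools code.

```kotlin
// Stubs standing in for the real (suspend) tool functions.
fun getAuthStub(argv: List<String>): String = "Authorization token is: <chatterID>"
fun getLocationStub(argv: List<String>): String = "<latitude, longitude>"

// Sketch of the TOOLBOX switch table with get_auth registered.
fun callTool(name: String, argv: List<String>): String =
    when (name) {
        "get_location" -> getLocationStub(argv)
        "get_auth" -> getAuthStub(argv)  // new entry for this project
        else -> "Unknown tool: $name"
    }
```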
getAuth() function
The main purpose of the getAuth() function is to return a valid authorization token to the LLM. A
"valid" authorization token in this project is a chatterID that has not expired. The function
getAuth() first checks whether there's a valid chatterID and, if so, returns it.
Otherwise, it launches SigninView to let the user sign in with Google Signin, after which it
checks again: if sign-in produced a valid chatterID, the loop returns it; if not, getAuth() reports failure.
To launch the SigninView composable, the getAuth() function does three things:
- grabs ChattViewModel from ChatterID and obtains the current code continuation (getAuthAt) by calling suspendCoroutine(),
- creates a completion closure that simply resumes execution at the saved continuation (getAuthAt.resume()) and assigns it to vm.signinCompletion, and
- with vm.signinCompletion prepared, sets vm.getSignedin to true, upon which the reactive UI framework launches SigninView. SigninView in turn will launch the CredentialManager.getCredential().
suspend fun getAuth(argv: List<String>): String {
    while (true) {
        if (id != null) { // ChatterID.id
            return "Authorization token is: $id"
        }
        vm?.let { vm -> // ChatterID.vm
            suspendCoroutine { getAuthAt ->
                vm.signinCompletion = {
                    getAuthAt.resume(Unit)
                }
                vm.getSignedin.value = true
            }
            // here be getAuthAt
        }
        if (id == null) { // failed to sign in, or vm is null
            return "401 Unauthorized. Inform user that authentication token is unavailable and end session."
        }
    }
}
If sign-in is successful, there should now be a valid chatterID we can return to Ollama in the
form of a new prompt. Otherwise, we return HTTP status code "401 Unauthorized" as the tool-call
result to Ollama, as shown in the code above. Ollama will in turn inform the user that authorization
has failed in its prompt completion.
Non-secured HTTP
In our architecture, the path taken by the chatterID from the front end back to Ollama goes through
chatterd. Due to Ollama's design, the connection between chatterd and Ollama is a non-secured
HTTP connection, which is a security vulnerability.
SigninView composable
We now create a SigninView() composable that obtains the LocalContext.current available to any composable and uses it to call the signin() function. The signin() function is the same as the one from the SignInGoogle.kt file in the Signin tutorial:
@Composable
fun SigninView() {
val vm: ChattViewModel = viewModel()
val context = LocalContext.current
LaunchedEffect(Unit) {
// this will be terminated and restarted on configuration
// change and then biometric check to save chatterID will fail
withContext(Dispatchers.Main.immediate) {
// so that SigninView doesn't terminate before LaunchedEffect is done
signin(context, vm)
vm.getSignedin.value = false
vm.signinCompletion?.invoke()
}
}
}
In your MainView, launch SigninView before ChattScrollView() if vm.getSignedin is true.
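The conditional launch in MainView might look like the following fragment. This is a sketch only: MainView, its parameters, and ChattScrollView's arguments come from your llmTools code base, and the fragment is Android-only, not runnable standalone.

```kotlin
@Composable
fun MainView(vm: ChattViewModel = viewModel()) {
    // ... scaffold, top bar, etc. from llmTools ...
    if (vm.getSignedin.value) {
        SigninView()  // bridges getAuth() to CredentialManager and the biometric check
    }
    ChattScrollView(/* existing arguments */)
}
```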
With that, you’re done with the llmAction project! Congratulations!
Run and test to verify and debug
Support for Ollama 0.20.2
Ollama on mada.eecs.umich.edu was recently upgraded to support gemma4 models. Ollama version 0.20.2 puts the thinking output of models in a separate JSON field, thinking. To support this new version, please make the following changes to your ChattStore.kt:
data class OllamaMessage(
...
val thinking: String? = null,
...
)
When parsing an SSE data line, in addition to accumulating tokens from the content field of a
message, also accumulate tokens from the thinking field of the message:
ollamaResponse.message.content?.let { token ->
// ... keep existing code
}
ollamaResponse.message.thinking?.let { token ->
if (token.isNotEmpty()) {
resChatt.message?.value += token
}
}
The updated code should continue to work with older versions of Ollama.
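The backward compatibility follows from the nullable field: with older Ollama versions the thinking field is simply absent, parses as null, and is skipped. A minimal self-contained sketch of the accumulation logic, using stand-in types in place of the app's OllamaMessage and Chatt classes:

```kotlin
// Stand-in for the app's OllamaMessage; both fields are nullable so that
// messages from older Ollama versions (no thinking field) still parse.
data class OllamaMessage(val content: String? = null, val thinking: String? = null)

// Sketch: append non-empty content and thinking tokens to the accumulated reply.
fun accumulate(text: StringBuilder, msg: OllamaMessage) {
    msg.content?.let { token ->
        if (token.isNotEmpty()) text.append(token)  // existing content handling
    }
    msg.thinking?.let { token ->
        if (token.isNotEmpty()) text.append(token)  // new for Ollama 0.20.2
    }
}
```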
Please see the End-to-end testing section of the spec to test your front-end implementation.
Once you have finished testing, change your serverUrl back to YOUR_SERVER_IP so that
we know what your server IP is. You will not get full credit if your front end is
not set up to work with your back end!
Front-end submission guidelines
Unlike in previous tutorials and projects, there is one CRUCIAL extra step to do
before you push your lab to GitHub:
- Copy debug.keystore (in ~/.android/ on Mac Terminal and Windows PowerShell) to your llmaction lab folder.
- Put a copy of the SHA1 certificate (in the format xx:xx:xx:...) you used to obtain your Client ID in the README.md file at your repo's top-level folder.
Without these we won’t be able to run your app.
Be sure you have submitted your modified back end in addition to submitting
your updated front end. As usual, we will only grade files committed to the
main branch. If you use multiple branches, please merge them all to the
main branch for submission.
LLM Rules.md, Skills.md, and Prompts.txt files
You are allowed to use LLMs to work on this project. Create a Prompts.txt file in /YOUR:PROJECT/
folder and list all the LLM(s) you have used and found most helpful in completing the project. Also
put in this file a well-organized record of the prompts you used to help build the project.
If you have not used any LLM, you can pledge, “I have not used any LLM to complete this project.” However, it is an Honor Code violation to say so if you have actually used one.
If you have used LLM(s), create your Rules.md and Skills.md files and put them in
/YOUR:PROJECT/ folder. You may find the articles listed in the LLM Rules and Prompts topic on the
course discourse page helpful in
building your Rules.md and Skills.md files.
Add your Prompts.txt, Rules.md, and Skills.md files to your git repo.
Push your front-end code to the same GitHub repo you’ve submitted your back-end code:
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on the GitHub repo you created at the start of this tutorial
- Add Summary to your changes and click Commit to main at the bottom of the left pane
- If you have pushed code to your repo, click Pull Origin to sync up the repo on your laptop
- Finally click Push Origin to push all changes to GitHub
Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub
repo under the folder llmaction. Confirm that your repo has a folder structure outline similar to the following.
If your folder structure is not as outlined, our script will not pick up your submission and, further, you may
have problems getting started on later tutorials. There could be other files or folders in your local folder
not listed below; don't delete them. As long as you have installed the course .gitignore as per the instructions
in Preparing GitHub for Reactive Tutorials, only
files needed for grading will be pushed to GitHub.
reactive
|-- Prompts.txt
|-- Rules.md
|-- Skills.md
|-- chatterd
|-- chatterd.crt
|-- llmchat
|-- llmaction
|-- composeChatter
|-- app
|-- gradle
|-- tools
# and other files or folders
Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get ZERO points if your submission doesn't build, run, or open.
IMPORTANT: If you work in a team, put your teammate's name and uniqname in your repo's README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we'd know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don't need a README.md if you work by yourself.
Review your information on the Tutorial and Project Links sheet. If you've changed your teaming arrangement since the previous lab, please update your entry. If you're using a different GitHub repo from the previous lab's, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.
| Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, Sugih Jamin | Last updated: April 13th, 2026 |