Project 3: llmAction Compose

Cover Page

Preparing your GitHub repo

If you have not completed the llmTools and Signin tutorials, please complete them first. This project builds on the llmTools code base, but we will also be copying over the bulk of the Signin code.

In the following, replace /YOUR:TUTORIALS/ with the name of your tutorials folder.

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmtools
    |-- llmaction
        |-- composeChatter
            |-- app
            |-- gradle
    |-- tools            
    # and other files or folders

If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.

Obtaining chatterID

The main purpose of the get_auth tool is for the LLM to obtain an authorization token. We require LLMs to acquire an authorization token before making operative tool calls that can take action with real-world side effects. The authorization-token requirement forms our human-in-the-loop guardrail. In this project, the chatterID first used in the Signin tutorial is our authorization token. Creating a chatterID requires the user to sign in using Google Signin. To store/load the chatterID for reuse within its limited lifetime, we further require biometric authentication from the user. Together, the Google Signin and biometric-check requirements mean that the authorization token can only be obtained from user-facing front-end devices.

The front-end UI/UX and the network streaming and tool call infrastructure are those of llmTools. To these, we add code from the Signin tutorial to use Google Signin and biometric authentication.

Signin migration

ChattViewModel

Recall that to obtain the Google ID, the CredentialManager and the subsequent biometric check both require access to an Android Context. Whereas in the Signin tutorial we were running in composables and had access to a Context, the get_auth tool is invoked in a network event handler with no access to any Context. To provide the CredentialManager and biometric check with a Context, we wrap the call to signin() in a composable function, SigninView(). A composable, unfortunately, can only be called from another composable. To bridge get_auth() and SigninView(), add the following properties to your ChattViewModel in MainActivity.kt:

    var showOk = mutableStateOf(false)

    var getSignedin = mutableStateOf(false)     // when true, MainView launches SigninView
    var signinCompletion: (() -> Unit)? = null  // resumes getAuth() once signin completes

Further, since we need to access ChattViewModel from the networking code, but only for chatterID-related operations, we make ChattViewModel available to the ChatterID singleton:
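The change could look like the following sketch. It assumes the ChatterID singleton from the Signin tutorial; existing properties such as the expiration time are unchanged and omitted here, and only the vm property is new:

```kotlin
// Sketch, assuming the ChatterID singleton from the Signin tutorial;
// only the vm property is new.
object ChatterID {
    var id: String? = null  // the chatterID, i.e., the authorization token

    // back-reference so that networking code (getAuth()) can reach the
    // view model; set it once from MainActivity, e.g.:
    //     ChatterID.vm = chattViewModel
    var vm: ChattViewModel? = null
}
```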

Toolbox

Assuming you have created the get_auth JSON schema file in your tools folder, using the getAuth() function we will define in the next subsection, add an entry to the TOOLBOX switch table to register the get_auth tool. If you’re not sure how to do this, you can use the entry for get_location in the TOOLBOX switch table from the llmTools tutorial as an example.
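The new entry might look like the following sketch. The exact shape of TOOLBOX comes from your llmTools code; here we assume it maps the tool’s name to the suspend function implementing it:

```kotlin
// Sketch only: mirror the shape of your TOOLBOX from llmTools.
val TOOLBOX: Map<String, suspend (List<String>) -> String> = mapOf(
    "get_location" to ::getLocation,  // existing entry from llmTools
    "get_auth" to ::getAuth,          // new entry registering the get_auth tool
)
// If your TOOLBOX is a when-based switch table instead, add the
// corresponding branch, e.g.: "get_auth" -> getAuth(argv)
```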

getAuth() function

The main purpose of the getAuth() function is to return a valid authorization token to the LLM. A “valid” authorization token in this project is a chatterID that has not expired. The function getAuth() first checks if there’s a valid chatterID. If so, it returns the chatterID. Otherwise, it launches SigninView to let user sign in with Google Signin. Repeat these steps until there’s a valid chatterID.

To launch the SigninView composable, the getAuth() function does three things:

  1. grabs ChattViewModel from ChatterID and obtains the current code continuation (getAuthAt) by calling suspendCoroutine(),
  2. creates a completion closure that simply resumes execution at the saved continuation (getAuthAt.resume()) and assigns it to vm.signinCompletion, and
  3. with vm.signinCompletion prepared, sets vm.getSignedin to true, upon which the reactive UI framework launches SigninView. SigninView in turn will launch the CredentialManager.getCredential().

suspend fun getAuth(argv: List<String>): String {
    while (true) {
        if (id != null) { // ChatterID.id
            return "Authorization token is: $id"
        }

        vm?.let { vm ->  // ChatterID.vm
            suspendCoroutine { getAuthAt ->
                vm.signinCompletion = {
                    getAuthAt.resume(Unit)
                }
                vm.getSignedin.value = true
            }
            // here be getAuthAt
        }
        if (id == null) { // failed to sign in, or vm is null
            return "401 Unauthorized. Inform user that authentication token is unavailable and end session."
        }
    }
}

If sign-in is successful, there should now be a valid chatterID that we can return to Ollama in the form of a new prompt. Otherwise, we return HTTP status code 401 Unauthorized as the tool-call result to Ollama, as shown in the code above. Ollama will in turn inform the user that authorization has failed in its prompt completion.

Non-secured HTTP

In our architecture, the path taken by the chatterID from the front end back to Ollama goes through chatterd. Due to Ollama’s design, the connection between chatterd and Ollama is a non-secured HTTP connection, which is a security vulnerability.

SigninView composable

We now create a SigninView() composable that obtains the Context via LocalContext.current, which is available to any composable, and uses it to call the signin() function. The signin() function is the same as the one in the SignInGoogle.kt file from the Signin tutorial:

@Composable
fun SigninView() {
    val vm: ChattViewModel = viewModel()
    val context = LocalContext.current

    LaunchedEffect(Unit) {
        // this will be terminated and restarted on configuration
        // change and then biometric check to save chatterID will fail
        withContext(Dispatchers.Main.immediate) {
            // so that SigninView doesn't terminate before LaunchedEffect is done
            signin(context, vm)
            vm.getSignedin.value = false
            vm.signinCompletion?.invoke()
        }
    }
}

:point_right: In your MainView, launch SigninView before ChattScrollView() if vm.getSignedin is true.
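A minimal sketch of the relevant portion of MainView follows; the surrounding layout and the ChattScrollView() signature are those of your llmTools code, so adapt as needed:

```kotlin
// Sketch: launch SigninView ahead of the chat UI when getAuth() requests it.
@Composable
fun MainView(vm: ChattViewModel = viewModel()) {
    // ... existing scaffolding ...
    if (vm.getSignedin.value) {
        // set to true by getAuth(); recomposition launches SigninView,
        // which resets the flag once signin() returns
        SigninView()
    }
    ChattScrollView()  // the existing chat UI from llmTools
}
```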

With that, you’re done with the llmAction project! Congratulations!

Run and test to verify and debug

Support for Ollama 0.20.2

Ollama on mada.eecs.umich.edu was recently upgraded to support gemma4 models. Ollama version 0.20.2 puts models’ thinking output in a separate JSON field, thinking. To support this new version, please make the following changes to your ChattStore.kt:

data class OllamaMessage(
    ...
    val thinking: String? = null,
    ...
)

When parsing an SSE data line, in addition to accumulating tokens from the content field of a message, also accumulate tokens from the thinking field:

                            ollamaResponse.message.content?.let { token ->
                                // ... keep existing code
                            }
                            ollamaResponse.message.thinking?.let { token ->
                                if (token.isNotEmpty()) {
                                    resChatt.message?.value += token
                                }
                            }

The updated code should continue to work with older versions of Ollama.
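For reference, a streamed thinking chunk might look like the following illustrative SSE data line (only a subset of fields is shown; the exact fields depend on the Ollama version and model):

```
data: {"message":{"role":"assistant","content":"","thinking":"First, "},"done":false}
```

Older Ollama versions simply never send the thinking field, which is why making it a nullable field with a null default keeps the parser backward compatible.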

Please see the End-to-end testing section of the spec to test your front-end implementation.

Once you have finished testing, change your serverUrl back to YOUR_SERVER_IP so that we know what your server IP is. You will not get full credit if your front end is not set up to work with your back end!
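For example, if you defined the URL as a constant in ChattStore.kt as in the earlier tutorials (the constant name here is assumed from those tutorials; leave YOUR_SERVER_IP as the literal placeholder rather than substituting an IP when you submit):

```kotlin
// ChattStore.kt: point the front end back at the placeholder before submitting.
private const val serverUrl = "https://YOUR_SERVER_IP/"
```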

Front-end submission guidelines

:point_right: Unlike in previous tutorials and projects, there is one CRUCIAL extra step to do before you push your lab to GitHub:

Without these we won’t be able to run your app.

Be sure you have submitted your modified back end in addition to submitting your updated front end. As usual, we will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.

LLM Rules.md, Skills.md, and Prompts.txt files

You are allowed to use LLMs to work on this project. Create a Prompts.txt file in your /YOUR:PROJECT/ folder and list all the LLM(s) you used and found most helpful in completing the project. Also put in this file a well-organized record of the prompts you used to help build the project.

If you have not used any LLM, you can pledge, “I have not used any LLM to complete this project.” However, it is an Honor Code violation to say so if you have actually used one.

If you have used LLM(s), create your Rules.md and Skills.md files and put them in your /YOUR:PROJECT/ folder. You may find the articles listed in the LLM Rules and Prompts topic on the course discourse page helpful in building your Rules.md and Skills.md files.

Add your Prompts.txt, Rules.md, and Skills.md files to your git repo.

Push your front-end code to the same GitHub repo you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder llmaction. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive Tutorials, only files needed for grading will be pushed to GitHub.

  reactive
    |-- Prompts.txt
    |-- Rules.md
    |-- Skills.md
    |-- chatterd
    |-- chatterd.crt
    |-- llmtools
    |-- llmaction
        |-- composeChatter
            |-- app
            |-- gradle
    |-- tools            
    # and other files or folders

Verify that your Git repo is set up correctly: on your laptop, grab a fresh clone of your repo, then build and run your submission to make sure that it works. You will get ZERO points if your project doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we’d know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement from the previous lab’s, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.


Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, Sugih Jamin Last updated: April 13th, 2026