Project 3: llmAction SwiftUI

Cover Page

Preparing your GitHub repo

If you have not completed the llmTools and Signin tutorials, please complete them first. This project builds on the llmTools code base, but we will also be copying over the bulk of the Signin code.

In the following, replace /YOUR:TUTORIALS/ with the name of your tutorials folder.

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmtools
    |-- llmaction
        |-- swiftUIChatter
            |-- swiftUIChatter.xcodeproj
            |-- swiftUIChatter
    |-- tools            
    # and other files or folders

If the folders in your GitHub repo do not follow the above structure, we will not be able to grade your assignment and you will get a ZERO.

Obtaining chatterID

The main purpose of the get_auth tool is for the LLM to obtain an authorization token. We require LLMs to acquire an authorization token before making operative tool calls, i.e., calls that take action with real-world side effects. This authorization-token requirement forms our human-in-the-loop guardrail. In this project, the chatterID first used in the Signin tutorial serves as our authorization token. Creating a chatterID requires the user to sign in with Google Signin. To store/load the chatterID for reuse within its limited lifetime, we further require biometric authentication from the user. Together, the Google Signin and biometric-check requirements mean that the authorization token can only be obtained from user-facing front-end devices.

The front-end UI/UX and the network streaming and tool-call infrastructure are those of llmTools. To these, we add code from the Signin tutorial to use Google Signin and biometric authentication.

Signin migration

ChattViewModel

Recall that the GoogleSignIn SDK pops up a UIKit window. We launch it by toggling getSignedin and providing signinCompletion, both properties of ChattViewModel. Since the networking code needs to access ChattViewModel only for chatterID-related operations, we make ChattViewModel available to the ChatterID singleton:
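One way to wire this up is to have ChattViewModel register itself with the ChatterID singleton at construction time. The sketch below is a minimal illustration only; the property names (id, expiration, getSignedin, signinCompletion) follow the Signin tutorial, but your actual declarations may differ, so adapt accordingly:

```swift
import Foundation
import Observation

// Sketch: exposing ChattViewModel to the ChatterID singleton.
final class ChatterID {
    static let shared = ChatterID()
    private init() {}

    var id: String?
    var expiration = Date.distantPast

    // Weak back-reference so the singleton never keeps the view model alive.
    weak var viewModel: ChattViewModel?
}

@Observable
final class ChattViewModel {
    var getSignedin = false                 // toggling this presents SigninView
    var signinCompletion: (() -> Void)?     // invoked when Google Signin returns

    init() {
        // Make this view model reachable from the networking code
        // (only for chatterID-related operations).
        ChatterID.shared.viewModel = self
    }
}
```

The weak reference avoids a retain cycle between the long-lived singleton and the view model owned by the SwiftUI view hierarchy.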

Toolbox

Assuming you have created the get_auth JSON schema file in your tools folder, using the getAuth() function we will define in the next subsection, add an entry to the TOOLBOX switch table to register the get_auth tool. If you’re not sure how to do this, you can use the entry for get_location in the TOOLBOX switch table from the llmTools tutorial as an example.
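Following the get_location pattern, the new entry could look like the sketch below. The dispatch function's name and signature here are assumptions; mirror whatever shape your TOOLBOX switch table from llmTools actually has:

```swift
// Sketch: registering get_auth alongside get_location in the
// TOOLBOX switch table (dispatch signature is an assumption).
func callTool(named name: String) async -> String {
    switch name {
    case "get_location":
        return await getLocation()   // existing llmTools entry
    case "get_auth":
        return await getAuth()       // new entry for this project
    default:
        return "404: tool \(name) not found"
    }
}
```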

getAuth() function

The main purpose of the getAuth() function is to return a valid authorization token to the LLM. A “valid” authorization token in this project is a chatterID that has not expired. Implement the function getAuth(). The function first checks whether there’s a valid chatterID. If so, it returns the chatterID. Otherwise, it launches SigninView to let the user sign in with Google Signin. Repeat these steps until there is a valid chatterID.

If sign-in is successful, there should now be a valid chatterID we can return to Ollama in the form of a new prompt. Otherwise, we must return HTTP status code “401: Unauthorized” as the tool-call result to Ollama. Ollama will then inform the user in its prompt completion that authorization has failed.

Consult how SubmitButton in the Signin tutorial accomplishes both of the above and adapt it to implement getAuth() accordingly.
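The overall shape of getAuth() might resemble the sketch below. This is only an outline under stated assumptions: the names ChatterID.shared.id, expiration, viewModel, getSignedin, and signinCompletion come from the Signin tutorial and the ChattViewModel discussion above, and how you bridge the sign-in callback into async code (here, a checked continuation) is up to your implementation:

```swift
import Foundation

// Sketch of getAuth(): return a valid (unexpired) chatterID,
// or "401: Unauthorized" if Google Signin fails.
func getAuth() async -> String {
    // Already holding a valid chatterID? Return it immediately.
    if let id = ChatterID.shared.id, ChatterID.shared.expiration > Date() {
        return id
    }

    // Otherwise launch SigninView and wait for Google Signin to finish.
    let signedIn: Bool = await withCheckedContinuation { continuation in
        ChatterID.shared.viewModel?.signinCompletion = {
            // Sign-in flow returned; did we obtain a valid chatterID?
            continuation.resume(returning:
                ChatterID.shared.id != nil && ChatterID.shared.expiration > Date())
        }
        ChatterID.shared.viewModel?.getSignedin = true  // presents SigninView
    }

    guard signedIn, let id = ChatterID.shared.id else {
        // Sign-in failed or was cancelled: report 401 to Ollama,
        // which will surface the failure in its prompt completion.
        return "401: Unauthorized"
    }
    return id
}
```

Compare this against how SubmitButton in the Signin tutorial checks expiration and triggers sign-in, and reuse that logic rather than the placeholder names above.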

Non-secured HTTP

In our architecture, the path taken by the chatterID from the front end back to Ollama goes through chatterd. Due to Ollama’s design, the connection between chatterd and Ollama is a non-secured HTTP connection, which is a security vulnerability.

With that, you’re done with the llmAction project! Congratulations!

Run and test to verify and debug

Support for Ollama 0.20.2

Ollama on mada.eecs.umich.edu was recently upgraded to support gemma4 models. Ollama version 0.20.2 puts the thinking output of models in a separate JSON field, thinking. To support this new version, please make the following changes to your ChattStore.swift:

struct OllamaMessage: Codable {
    ...
    let thinking: String?
    ...
    
    enum CodingKeys: String, CodingKey {
        ...
        case thinking = "thinking"
        ...
    }
}

Once in llmPrep(appID:chatt:errMsg:showOk:) and twice in llmTools(appID:chatt:errMsg:), when instantiating the OllamaMessage struct, set thinking: nil.
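For example, an instantiation site might change as sketched below. The other field names and their order here (role, content) are assumptions based on the llmTools tutorial; match them to your own OllamaMessage declaration:

```swift
// Sketch: outgoing messages carry no thinking output,
// so pass nil for the new thinking field.
let message = OllamaMessage(role: "user", content: prompt, thinking: nil)
```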

When parsing an SSE data line, in addition to accumulating tokens from the content field of a message, also accumulate tokens from the thinking field of the message:

                        if let token = ollamaResponse.message.content, !token.isEmpty {
                            // ... keep existing code
                        } else if let token = ollamaResponse.message.thinking, !token.isEmpty {
                            resChatt.message?.append(token)
                        }

The updated code should continue to work with older versions of Ollama.

Please see the End-to-end testing section of the spec to test your front-end implementation.

Once you have finished testing, change your serverUrl back to YOUR_SERVER_IP so that we know what your server IP is. You will not get full credit if your front end is not set up to work with your back end!

Front-end submission guidelines

:point_right: Unlike in previous tutorials and projects, there is one CRUCIAL extra step to do before you push your lab to GitHub: ensure that the Bundle identifier under the Signing & Capabilities tab of your Project pane is the one you used to create your OAuth client ID. Otherwise we won’t be able to run your app.

Be sure you have submitted your modified back end in addition to submitting your updated front end. As usual, we will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.

LLM Rules.md, Skills.md, and Prompts.txt files

You are allowed to use LLMs to work on this project. Create a Prompts.txt file in your /YOUR:PROJECT/ folder and list all the LLM(s) you used and found most helpful in completing the project. Also keep in this file a well-organized record of the prompts you used to help build the project.

If you have not used any LLM, you can pledge, “I have not used any LLM to complete this project.” However, it is an Honor Code violation to say so if you have actually used one.

If you have used LLM(s), create your Rules.md and Skills.md files and put them in your /YOUR:PROJECT/ folder. You may find the articles listed in the LLM Rules and Prompts topic on the course discourse page helpful in building your Rules.md and Skills.md files.

Add your Prompts.txt, Rules.md, and Skills.md files to your git repo.

Push your front-end code to the same GitHub repo to which you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder llmaction. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive Tutorials, only files needed for grading will be pushed to GitHub.

  reactive
    |-- Prompts.txt
    |-- Rules.md
    |-- Skills.md
    |-- chatterd
    |-- chatterd.crt
    |-- llmchat
    |-- llmaction
        |-- swiftUIChatter
            |-- swiftUIChatter.xcodeproj
            |-- swiftUIChatter
    |-- tools            
    # and other files or folders

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get a ZERO if your project doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement since the previous lab, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.


Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin. Last updated: April 13th, 2026