Tutorial: llmTools Compose

Cover Page

We assume that you’re building off your llmChat tutorial’s front end. If you haven’t done that tutorial, you should complete it first. You will also need the LocManager from the Maps tutorial. To access your back end, you will need your self-signed certificate installed on your front end.

The front-end work mostly involves the steps described in the sections below.

Preparing your GitHub repo

In the following, replace /YOUR:TUTORIALS/ with the name of your tutorials folder.

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmtools
        |-- composeChatter
            |-- app
            |-- gradle
    |-- tools            
    # and other files or folders

We added the tools folder in the Tool definition JSON section of the cover page earlier. If you haven’t done so, go ahead and do it now. We will need it to complete the front end.

In addition, the /YOUR:TUTORIALS/ folder on your laptop should contain the zipped files from other tutorials.

If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.

Dependencies

We will be using Kotlin’s Serialization package to create tool schemas for us from JSON schema files, loaded with Android’s AssetManager. So as not to have to add and remove files manually, we make this process “reactive” by having Android Studio watch a folder for file addition/deletion. Add the following lines to the android {} block in your build.gradle (Module:):

android {
    // . . .
    sourceSets {
        getByName("main") {
            assets.directories += project.rootDir.parentFile.resolve("tools").absolutePath
        }
    }    
}

The path project.rootDir.parentFile should automatically be pointing to /YOUR:TUTORIALS/ folder, where we’ve put the tools folder containing the get_location.json schema file.

To read device location, add the following line to the dependencies {} block near the bottom of the file:

dependencies {
    // . . .
    implementation("com.google.android.gms:play-services-location:21.3.0")
}

Tap on Sync Now on the Gradle menu strip that shows up at the top of the editor screen. After Gradle sync finishes, confirm that there’s now an assets folder above your res folder on your Project View (left-most pane) with your get_location.json in it.

Once you’ve built your project, you can further verify that you’ve successfully added the get_location.json schema file to your AssetManager:

  1. Select from Android Studio’s top menu Build > Analyze APK...
  2. Select your generated APK (usually some form of app-*-debug.apk, e.g., app-arm64-v8a-debug.apk on macOS)
  3. In the APK Analyzer pane, look for and expand the assets folder
  4. Confirm that get_location.json is listed.

Toolbox

Let us start by creating a toolbox to hold our tools. Create a new Kotlin Class/File > File and name it Toolbox.kt.

The contents of this file can be categorized into three purposes: tool/function definition, the toolbox itself, and tool use (or function calling).

Tool/function definition

Ollama tool schema: at the top of Ollama’s JSON tool definition is a JSON Object representing a tool schema. The tool schema is defined using nested JSON Objects and JSON Arrays. Add the full nested definitions of Ollama’s tool schema to your file:

@Serializable
data class OllamaToolSchema(
    val type: String,
    val function: OllamaToolFunction
)

@Serializable
data class OllamaToolFunction(
    val name: String,
    val description: String,
    val parameters: OllamaFunctionParams? = null
)

@Serializable
data class OllamaFunctionParams(
    val type: String,
    val properties: Map<String, OllamaParamProp>? = null,
    val required: List<String>? = null
)

@Serializable
data class OllamaParamProp(
    val type: String,
    val description: String,
    @SerialName("enum") // Required because enum is a keyword
    val enum_: List<String>? = null
)
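For reference, here is a hypothetical get_location.json that the classes above would decode. The actual file comes from the Tool definition JSON section of the back-end work, so treat the field values below as illustrative only:

```json
{
  "type": "function",
  "function": {
    "name": "get_location",
    "description": "Get the current location of the device",
    "parameters": {
      "type": "object",
      "properties": {},
      "required": []
    }
  }
}
```

Note how each level of nesting maps onto one of the data classes: the top-level object to OllamaToolSchema, the "function" object to OllamaToolFunction, and "parameters" to OllamaFunctionParams.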

Location tool schema: in this tutorial, we have only one tool on device, get_location. Instead of manually instantiating an OllamaToolSchema for each tool, we will use Kotlin’s Serialization package to create one for us from a corresponding JSON schema file. We set up Android Studio in the “Dependencies” section above to load JSON schema files from the tools folder in your /YOUR:TUTORIALS/ folder.

Next we define the jsonToSchema() function to load a schema file using Android’s AssetManager and decode the schema into an instance of OllamaToolSchema. You can put this function definition at the top level of your Toolbox.kt file:

fun jsonToSchema(tool: String): OllamaToolSchema {
    try {
        val schema = App.assets.open("${tool}.json") // App not redundant
            .bufferedReader().use {                  // "use" context manager
                it.readText()
            }
        try {
            return Json.decodeFromString<OllamaToolSchema>(schema)
        } catch (e: Throwable) {
            error("Failed to decode ${tool}.json: $e")
        }
    } catch (e: IOException) { // import java.io.IOException
        error("Failed to find asset ${tool}.json: $e")
    }
}

To access the AssetManager in jsonToSchema(), we need to define an App class with access to Android’s application context. Put the following class in your MainActivity.kt, outside the MainActivity class:

class App: Application() {
    companion object {
        private var _assets: AssetManager? = null
        val assets: AssetManager
            get() = _assets ?: error("App.assets accessed before Application.onCreate()")
    }
    
    override fun onCreate() {
        super.onCreate()
        _assets = this.assets
    }
}

Then we need to register it in AndroidManifest.xml. Put the following line right after the opening <application:

      android:name=".App"

Location tool function: we implement the get_location tool as a getLocation() function that reads the device’s latitude and longitude data off the Location Manager from the Maps tutorial. Here’s the definition of the getLocation() function; put it at the top level of your Toolbox.kt file also:

suspend fun getLocation(argv: List<String>): String = 
    "latitude: ${LocManager.location.value.latitude}, longitude: ${LocManager.location.value.longitude}"

Location Manager

We don’t need all of the functionality of LocManager, but copying the whole LocManager.kt file from the Maps tutorial is the least work and the least likely to introduce bugs: open both the Maps and llmTools projects in Android Studio, then alt-drag the LocManager.kt and Extensions.kt files from the Project View pane of the Maps project to the llmTools project’s. You will also need to request permission to read the location: follow the instructions in the two sections Requesting permission and Accessing and starting the Location Manager of the Maps tutorial.

If you have not completed the Maps tutorial, please follow the instructions in the Location manager section to set up the LocManager. You don’t need to complete the rest of the Maps tutorial.

The toolbox

Even though we have only one resident tool in this tutorial, we build a generalized architecture that can hold multiple tools and invoke the right tool dynamically. To that end, we’ll use a switch table (or jump table or, more fancily, service locator registry) as the data structure for our tool box. We implement the switch table as a dictionary. The “keys” in the dictionary are the names of the tools/functions. Each “value” is a record containing the tool’s definition/schema and a pointer to the function implementing the tool. To send a tool as part of a request to Ollama, we look up its schema in the switch table and copy it to the request. To invoke a tool called by Ollama in its response, we look up the tool’s function in the switch table and invoke the function.
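As a minimal, self-contained sketch of the switch-table idea (using plain functions and hypothetical tool names; the tutorial’s actual toolbox pairs each function with its schema and uses suspend functions):

```kotlin
// Minimal switch-table sketch: tool name -> function pointer.
// The tool names "greet" and "add" are illustrative only.
typealias SimpleTool = (List<String>) -> String

val SKETCH_TOOLBOX: Map<String, SimpleTool> = mapOf(
    "greet" to { argv -> "hello, ${argv.firstOrNull() ?: "there"}" },
    "add" to { argv -> argv.sumOf { it.toInt() }.toString() },
)

// Look the tool up by name; return null if the tool is not resident
fun dispatch(name: String, argv: List<String>): String? =
    SKETCH_TOOLBOX[name]?.invoke(argv)
```

The dictionary lookup is what makes the architecture general: adding a tool is one new entry in the map, with no change to the dispatch logic.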

Back in your Toolbox file, add the following suspending tool-function type. Also add a record type comprising two properties: a tool definition schema (schema) and a suspending tool function (function):

typealias ToolFunction = suspend (List<String>) -> String?

data class Tool(
    val schema: OllamaToolSchema,
    val function: ToolFunction,
)

Now create a switch-table toolbox and populate it with the location tool using the jsonToSchema() function we created earlier:

val TOOLBOX = mapOf(
    "get_location" to Tool(schema = jsonToSchema("get_location"), function = ::getLocation),
)

Tool use or function calling

Ollama tool call: Ollama’s JSON tool call comprises a JSON Object containing a nested JSON Object carrying the name of the function and the arguments to pass to it. Add these nested data class definitions representing Ollama’s tool call JSON to your Toolbox.kt file:

@Serializable
@JsonIgnoreUnknownKeys
data class OllamaToolCall(val function: OllamaFunctionCall)

@Serializable
@JsonIgnoreUnknownKeys
data class OllamaFunctionCall(
    val name: String,
    val arguments: Map<String, String>
)

Tool invocation: finally, here’s the tool invocation function. We call this function to execute any tool call we receive in an Ollama response. It looks up the tool name in the toolbox. If the tool is resident, it runs the tool and returns the result; otherwise it returns null.

suspend fun toolInvoke(function: OllamaFunctionCall): String? {
    return TOOLBOX[function.name]?.run {
        val argv: MutableList<String> = mutableListOf()
        for (label in schema.function.parameters?.required ?: listOf()) {
            // get arguments in order, arguments may arrive out of order from the back end
            function.arguments[label]?.let {
                argv.add(it)
            }
        }
        function(argv)
    }
}
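The argument-ordering step can be illustrated in isolation. Here is a sketch with hypothetical parameter names, mirroring the loop over schema.function.parameters?.required above:

```kotlin
// Rebuild a positional argv from a JSON argument map, using the
// schema's "required" list to fix the order; arguments missing
// from the map are simply skipped, as in toolInvoke() above.
fun orderArgs(required: List<String>, arguments: Map<String, String>): List<String> {
    val argv = mutableListOf<String>()
    for (label in required) {
        arguments[label]?.let { argv.add(it) }
    }
    return argv
}
```

This matters because JSON Object members are unordered: the back end may serialize the arguments in any order, but the tool function expects them positionally.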

That concludes our toolbox definition.

Strings

Make the following changes in your /res/values/strings.xml:

ChattStore

classes

In your ChattStore.kt, add:

@Serializable
@JsonIgnoreUnknownKeys
data class OllamaMessage(
    // . . .
    val thinking: String? = null,
    @SerialName("tool_calls")
    val toolCalls: List<OllamaToolCall>? = null
)

@Serializable
data class OllamaRequest(
    // . . .
    // change the messages property to var
    // . . .
    var tools: MutableList<OllamaToolSchema>? = null
)


llmTools()

Rename your llmChat() function llmTools(), with the same function signature otherwise.

Update your apiUrl to point to /llmtools.

Find your instantiation of OllamaRequest and add the following tools property to the instantiation:

            tools = if (TOOLBOX.isEmpty()) { null } else { mutableListOf() }

Once ollamaRequest is instantiated, populate its tools property with the on-device tools:

        // append all of the on-device tools to ollamaRequest
        for ((_, tool) in TOOLBOX) {
            ollamaRequest.tools?.add(tool.schema)
        }
Next, replace the following lines (we will put them back inside a loop below):

        val requestBody = Json.encodeToString(ollamaRequest)
            .toRequestBody("application/json; charset=utf-8".toMediaType())

        val request = Request.Builder()
            .url(apiUrl)
            .addHeader("Accept", "text/event-stream")
            .post(requestBody)
            .build()

with:

        val partialRequest = Request.Builder()
            .url(apiUrl)
            .addHeader("Accept", "text/event-stream")

To accommodate on-device tool calls, we use a flag, sendNewPrompt, to indicate whether we have a prompt to send to Ollama. Initially, sendNewPrompt is set to true so that the user’s initial prompt is always sent to Ollama. Subsequently, if Ollama makes a call for an on-device tool, we will send the result of the tool call as a new prompt to Ollama. Add the following code right before the start of the try-catch block, putting the try-catch block inside the new while loop:

        var sendNewPrompt = true
        while (sendNewPrompt) {
            sendNewPrompt = false
        
            val requestBody = Json.encodeToString(ollamaRequest)
                .toRequestBody("application/json; charset=utf-8".toMediaType())
        
            val request = partialRequest
                .post(requestBody)
                .build()
                
            // leave existing try-catch block here
            
        } // while sendNewPrompt                
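To see the control flow in isolation, here is a toy model of the loop; the Booleans stand in for whether each Ollama reply contains a tool call, and all names are illustrative:

```kotlin
// Toy model of the sendNewPrompt loop: each iteration sends one prompt;
// a reply containing a tool call triggers one more round (to send the
// tool result back), and a plain reply ends the loop.
fun runPromptLoop(replyHasToolCall: Iterator<Boolean>): Int {
    var rounds = 0
    var sendNewPrompt = true
    while (sendNewPrompt) {
        sendNewPrompt = false
        rounds += 1
        if (replyHasToolCall.next()) {
            sendNewPrompt = true // send the tool result as a new prompt
        }
    }
    return rounds
}
```

In the common case (no tool call) the loop body runs exactly once, so the behavior of llmChat() is preserved.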

Back in the try block, since we now have an additional ToolCalls arm in SseEventType, when we get an empty incoming line, we need to reset sseEvent to the default .Message type regardless of the current sseEvent type. Move the sseEvent = SseEventType.Message line out of the if (sseEvent == SseEventType.Error) {} block and put it right before continue.

In parsing an SSE event line, in addition to the existing two cases, if we see event == "tool_calls", set sseEvent = SseEventType.ToolCalls.
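The event/data line handling can be sketched as a standalone state machine. This is a sketch only, assuming an SseEventType enum like the tutorial’s and the same "field: value" line splitting as the existing parser:

```kotlin
// Standalone sketch of SSE line handling: an empty line resets the
// state to Message, an "event:" line switches the state, and a
// "data:" line yields its payload under the current state.
enum class SseEventType { Message, Error, ToolCalls }

fun classify(line: String, current: SseEventType): Pair<SseEventType, String?> {
    if (line.isEmpty()) return SseEventType.Message to null // blank line: reset
    val parts = line.split(":", limit = 2)
    if (parts.size < 2) return current to null
    return when {
        parts[0].startsWith("event") -> when (parts[1].trim()) {
            "error" -> SseEventType.Error to null
            "tool_calls" -> SseEventType.ToolCalls to null
            else -> SseEventType.Message to null
        }
        parts[0].startsWith("data") -> current to parts[1].trim()
        else -> current to null
    }
}
```

The limit = 2 split is what lets a data payload itself contain colons without being mangled.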

In parsing an SSE data line, replace the content of the else if (parts[0].startsWith("data")) {} block with:

          val data = parts[1].trim()
          if (sseEvent == SseEventType.Error) {
              errMsg.value += data
              continue
          }
          
          try {
              val ollamaResponse = Json.decodeFromString<OllamaResponse>(data)
          
              ollamaResponse.message.content?.let { token ->
                  if (token.isNotEmpty()) {
                      resChatt.message?.value += token
                  }
              }        
              ollamaResponse.message.thinking?.let { token ->
                  if (token.isNotEmpty()) {
                      resChatt.message?.value += token
                  }
              }
              
              // check tool call and make the tool call
          
          } catch (e: IllegalArgumentException) {
              errMsg.value += "${e.localizedMessage}\n$apiUrl\n${parts[1]}"
          }
          

If we’re in the .ToolCalls SSE event state, we handle the tool call. Replace the comment //check tool call and make the tool call with:

              if (sseEvent == SseEventType.ToolCalls) {
                  ollamaResponse.message.toolCalls?.let {
                      // message.content is usually empty
                      for (toolCall in it) {
                      
                          toolInvoke(toolCall.function)?.let { toolResult ->
                              // reuse OllamaMessage to carry tool result
                              // to be sent back to Ollama
                              ollamaRequest.messages =
                                  listOf(OllamaMessage(
                                      role = "tool", content = toolResult))

                              // don't send tools multiple times
                              ollamaRequest.tools = null
              
                              // send result back to Ollama
                              sendNewPrompt = true
                          } ?: run {
                              // tool unknown, report to user as error
                              errMsg.value += "llmTools ERROR: tool '${toolCall.function.name}' called"
                              resChatt.message?.value += "\n\n**llmTools Error**: tool '${toolCall.function.name}' called\n\n"
                          }
                      }
                  }
              }

And we’re done with llmTools() and with ChattStore!

SubmitButton

Finally, in SubmitButton() in MainView.kt, replace the call to llmChat(), with a call to llmTools().

That should do it for the front end!

Run and test to verify and debug

Please see the End-to-end testing section of the spec to test your front-end implementation.

Once you’ve finished testing, change your serverUrl back to YOUR_SERVER_IP so that we know what your server IP is. You will not get full credit if your front end is not set up to work with your back end!

Front-end submission guidelines

We will only grade files committed to the main branch. If you’ve created multiple branches, please merge them all to the main branch for submission.

Push your front-end code to the same GitHub repo you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder llmtools. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive, only files needed for grading will be pushed to GitHub.

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmtools
        |-- composeChatter
            |-- app
            |-- gradle
    |-- tools            
    # and other files or folders

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get a ZERO if your tutorial doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement from previous tutorial’s, please update your entry. If you’re using a different GitHub repo from previous tutorial’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.

Appendix: imports


Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin. Last updated March 8th, 2026.