Tutorial: llmTools Compose

Cover Page

DUE Wed, 11/12, 2 pm

You can build off the llmChat or llmPrompt tutorial’s frontend. To access your backend, you will need your self-signed certificate installed on your front-end.

The front-end work mostly consists of the steps in the sections below.

Preparing your GitHub repo

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmtools
        |-- composeChatter
            |-- app
            |-- gradle  
    # and other files or folders

Your YOUR_TUTORIALS folder on your laptop should, in addition, contain the zipped files from the other tutorials.

If the folders in your GitHub repo do not follow the above structure, we will not be able to grade your assignment and you will get a ZERO.

Dependencies

To read the device location, add the following line to the dependencies {} block near the bottom of your build.gradle (Module:):

dependencies {
    // . . .
    implementation("com.google.android.gms:play-services-location:21.3.0")
}

and tap on Sync Now on the Gradle menu strip that shows up at the top of the editor screen.

You will also need to request permission to read the location. In your AndroidManifest.xml file, find android.permission.INTERNET and add the following lines right below it:

    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.HIGH_SAMPLING_RATE_SENSORS"
        tools:ignore="HighSamplingRate" />

ChattViewModel

Since you will be sharing PostgreSQL database storage with the rest of the class, we need to identify your entries so that only your entries are forwarded to Ollama during your “conversation”. If you’re building off llmChat, you should already have appID defined in your code. Otherwise, add this appID property to your ChattViewModel in your MainActivity.kt file:

    val appID = app.applicationContext.packageName

To start a new, empty context history, change your appID to a random string of less than 155 ASCII characters with your uniqname in it.
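For example, a minimal sketch (the uniqname and random suffix below are hypothetical placeholders):

    val appID = "jdoe-llmtools-8f42c1" // your uniqname plus a random suffix, under 155 ASCII characters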

While we’re modifying ChattViewModel, change both the model and username values in your /res/strings.xml to qwen3 (you will change both to qwen3:0.6b when testing your backend).

Toolbox

Let us start by creating a toolbox to hold our tools. Create a new Kotlin Class/File > File and name it Toolbox.kt.

The contents of this file can be categorized into three purposes: tool/function definition, the toolbox itself, and tool use (or function calling).

Tool/function definition

Ollama tool schema: at the top of Ollama’s JSON tool definition is a JSON Object representing a tool schema. The tool schema is defined using nested JSON Objects and JSON Arrays. Add the full nested definitions of Ollama’s tool schema to your file:

@Serializable
data class OllamaToolSchema(
    val type: String,
    val function: OllamaToolFunction
)

@Serializable
data class OllamaToolFunction(
    val name: String,
    val description: String,
    val parameters: OllamaFunctionParams? = null
)

@Serializable
data class OllamaFunctionParams(
    val type: String,
    val properties: Map<String, OllamaParamProp>? = null,
    val required: List<String>? = null
)

@Serializable
data class OllamaParamProp(
    val type: String,
    val description: String,
    val enum: List<String>? = null
)

Location tool schema: in this tutorial, we have only one tool on device. Add the following tool definition to your file:

val LOC_TOOL = OllamaToolSchema(
    type = "function",
    function = OllamaToolFunction(
        name = "get_location",
        description = "Get current location",
        parameters = null
    )
)

Location tool function: we implement the get_location tool as a getLocation() function that reads the device’s latitude and longitude data off the Location Manager from the maps tutorial. Here’s the definition of the getLocation() function:

suspend fun getLocation(argv: List<String>): String? = 
    "latitude: ${LocManager.location.value.latitude}, longitude: ${LocManager.location.value.longitude}"

Location Manager

We don’t need all of the functionality of LocManager, but copying the whole LocManager.kt file from the maps tutorial is the least amount of work with the lowest chance of introducing bugs: open both the maps and llmTools projects in Android Studio, then alt-drag the LocManager.kt file from the left/navigation pane of the maps project to the llmTools project’s left pane. Also copy over the file Extensions.kt.

If you have not completed the maps tutorial, please follow the instructions in the Location manager section to set up LocManager, and follow the instructions in the paragraph at the end of the “Accessing …” section to set up your Extensions.kt. You don’t need to complete the rest of the maps tutorial.

We will need to make one change to the signature of the LocManager class to make it a singleton. Replace:

class LocManager(context: Context) {
    val locManager = LocationServices.getFusedLocationProviderClient(context)
    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager

with:

object LocManager {
    lateinit var locManager: FusedLocationProviderClient
    private lateinit var sensorManager: SensorManager

Then replace its initialization block from:

    init {
        LocationServices.getFusedLocationProviderClient(context)
            .getCurrentLocation(Priority.PRIORITY_HIGH_ACCURACY, CancellationTokenSource().token)
            .addOnCompleteListener {
                if (it.isSuccessful) {
                    location = mutableStateOf(it.result)
                } else {
                    Log.e("LocManager: getFusedLocation", it.exception.toString())
                }
            }
    }

to a method:

    fun init(context: Context) {
        locManager = LocationServices.getFusedLocationProviderClient(context)
        sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager

        LocationServices.getFusedLocationProviderClient(context)
            .getCurrentLocation(PRIORITY_HIGH_ACCURACY, CancellationTokenSource().token)
            .addOnCompleteListener {
                if (it.isSuccessful) {
                    location = mutableStateOf(it.result)
                } else {
                    Log.e("LocManager: getFusedLocation", it.exception.toString())
                }
            }
    }

We now follow up the permission tags added to AndroidManifest.xml with code to prompt the user for access permission. In the onCreate() of your MainActivity class in MainActivity.kt, add the following lines before the call to setContent {}:

        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (!granted) {
                toast("Location access denied")
                finish()
            }
            LocManager.permission.value = granted
        }.launch(Manifest.permission.ACCESS_FINE_LOCATION)

You may need to manually import Manifest (of android).
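That is, add the following to the import list at the top of MainActivity.kt:

import android.Manifest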

Then initialize LocManager by replacing the whole setContent block with:

        setContent {
            if (LocManager.permission.value) {
                LocManager.init(applicationContext)
                LocManager.StartUpdatesWithLifecycle()
                MainView()
            }
        }

The toolbox

Even though we have only one resident tool in this tutorial, we want a generalized architecture that can hold multiple tools and invoke the right tool dynamically. To that end, we’ve chosen to use a switch table (or jump table or, more fancily, service locator registry) as the data structure for our toolbox. We implement the switch table as a dictionary. The “keys” in the dictionary are the names of the tools/functions. Each “value” is a record containing the tool’s definition/schema and a pointer to the function implementing the tool. To send a tool as part of a request to Ollama, we look up its schema in the switch table and copy it to the request. To invoke a tool called by Ollama in its response, we look up the tool’s function in the switch table and invoke the function.

Back in your Toolbox file, add the following type alias for a suspending tool function and the record type containing a tool’s schema, its implementing function, and its argument list:

typealias ToolFunction = suspend (List<String>) -> String?

data class Tool(
    val schema: OllamaToolSchema,
    val function: ToolFunction,
    val arguments: List<String>
)

Now create a switch-table toolbox and put the LOC_TOOL in it:

val TOOLBOX = mapOf(
    "get_location" to Tool(schema = LOC_TOOL, function = ::getLocation, arguments = emptyList()),
)
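
Purely for illustration, if you want to inspect the JSON this schema serializes to (what will eventually be carried in the request to Ollama), you could encode it yourself. This snippet is not part of the tutorial code; it assumes the kotlinx.serialization.encodeToString and kotlinx.serialization.json.Json imports. With the default Json settings, the null parameters field is simply omitted:

// Illustration only, not part of the tutorial code.
// Logs something like:
// {"type":"function","function":{"name":"get_location","description":"Get current location"}}
Log.d("TOOLBOX", Json.encodeToString(LOC_TOOL))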

Tool use or function calling

Ollama tool call: Ollama’s JSON tool call comprises a JSON Object containing a nested JSON Object carrying the name of the function and the arguments to pass to it. Add these nested data class definitions representing Ollama’s tool call JSON to your file:

@Serializable
data class OllamaToolCall(val function: OllamaFunctionCall)

@Serializable
data class OllamaFunctionCall(
    val name: String,
    val arguments: LinkedHashMap<String, String>
)
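
To make the shape of this data concrete, here is a hypothetical tool call decoded into these classes. The JSON string below is illustrative only, not captured from an actual Ollama response:

// Illustration only: decode a hypothetical tool_calls entry.
val sample = """{"function":{"name":"get_location","arguments":{}}}"""
val toolCall = Json.decodeFromString<OllamaToolCall>(sample)
// toolCall.function.name == "get_location"; toolCall.function.arguments is an empty map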

Tool invocation: finally, here’s the tool invocation function. We call this function to execute any tool call we receive in an Ollama response. It looks up the tool name in the toolbox. If the tool is resident, it runs the tool and returns the result; otherwise it returns null.

suspend fun toolInvoke(function: OllamaFunctionCall): String? {
    return TOOLBOX[function.name]?.run {
        val argv: MutableList<String> = mutableListOf()
        for ((_, arg) in function.arguments) {
            argv.add(arg)
        }
        function(argv)
    }
}
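
As a quick sanity check (illustrative only; it assumes location permission has been granted and LocManager has been initialized), you could dispatch a get_location call through the toolbox from a coroutine scope such as viewModelScope:

// Illustration only: invoke the location tool directly, outside of any Ollama response.
viewModelScope.launch {
    val result = toolInvoke(OllamaFunctionCall(name = "get_location", arguments = linkedMapOf()))
    Log.d("TOOLBOX", result ?: "tool not found")
}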

That concludes our toolbox definition.

ChattStore

classes

Next, add the following enum class and three data classes to your ChattStore file. If you are building off the llmChat code base, you only need to add the ToolCalls enum constant to your SseEventType and a toolCalls field to your OllamaMessage:

enum class SseEventType { Error, Message, ToolCalls }

@Serializable
data class OllamaMessage(
    val role: String,
    val content: String?,
    @SerialName("tool_calls") val toolCalls: List<OllamaToolCall>? = null
)

and a tools field to your OllamaRequest:

@Serializable
data class OllamaRequest(
    val appID: String?, // PA2
    val model: String?,
    var messages: List<OllamaMessage>,
    val stream: Boolean,
    var tools: MutableList<OllamaToolSchema>? = null
)

The OllamaResponse class remains unchanged:

@Serializable
@JsonIgnoreUnknownKeys
data class OllamaResponse(
    val model: String,
    val created_at: String,
    val message: OllamaMessage,
)

The OllamaError class in your file also remains unchanged.

llmTools()

The underlying request/response handling of llmTools() is basically that of llmChat(); however, with all the modifications needed to support tool calling, it is simpler to just write llmTools() from scratch. We will be reusing the parseErr() function and the rest of the ChattStore class from the previous tutorials.

To your ChattStore class, add the following method. We first add the user prompt to the chatts array and prepare a new chatt element to hold Ollama’s response. We also set up the apiUrl to point to the right chatterd API:

    suspend fun llmTools(appID: String?, chatt: Chatt, errMsg: MutableState<String>) {
        chatts.add(chatt)
        val resChatt = Chatt(
            username = "assistant (${chatt.username ?: "ollama"})",
            message = mutableStateOf(""),
            timestamp = Instant.now().toString()
        )
        chatts.add(resChatt)

        val apiUrl = "${serverUrl}/llmtools"

        // setup Ollama request with tools

    }

We now prepare an OllamaRequest to carry the user’s appID, prompt, and any on-device tools the user may provide. Replace // setup Ollama request with tools with:

        val ollamaRequest = OllamaRequest(
            appID = appID,
            model = chatt.username,
            messages = listOf(
                OllamaMessage(
                    "user",
                    chatt.message?.value,
                    null
                )
            ),
            stream = true,
            tools = if (TOOLBOX.isEmpty()) { null } else { mutableListOf() }
        )

        // append all of on-device tools to ollamaRequest
        for ((_, tool) in TOOLBOX) {
            ollamaRequest.tools?.add(tool.schema)
        }

        // send request and any tool result to chatterd

Mapping client connections to Ollama's rounds

Recall that Ollama is a stateless server, meaning that it doesn’t save any state or data from a request/response interaction with the client. In the backend spec, we saw that a prompt requiring chained tool calls (first call get_location, then call get_weather) is, to Ollama, three separate interactions (or HTTP rounds) with chatterd.

From the client’s perspective, however, it sees only two connections to chatterd.

To accommodate sending tool-call results, we use a flag, sendNewPrompt, to let llmTools() know that it has an on-device tool-call result to send to Ollama. While sendNewPrompt is true (it is initialized to true), we open a new POST connection to chatterd and send it the ollamaRequest message. Replace // send request and any tool result to chatterd with:

        var sendNewPrompt = true
        while (sendNewPrompt) {
            sendNewPrompt = false

            val requestBody = Json.encodeToString(ollamaRequest)
                .toRequestBody("application/json; charset=utf-8".toMediaType())

            val request = Request.Builder()
                .url(apiUrl)
                .addHeader("Accept", "text/event-stream")
                .post(requestBody)
                .build()

            try {
                val response = client.newCall(request).await()
                if (!response.isSuccessful) {
                    errMsg.value = parseErr(
                        response.code.toString(),
                        apiUrl, response.body.string()
                    )
                    return
                }

                // handle SSE stream

            } catch (e: Throwable) {
                errMsg.value = "llmTools: ${e.localizedMessage ?: "failed"}"
            }
        } // while sendNewPrompt

We parse the SSE stream the same way we did in the llmChat tutorial. Please review the Parsing SSE Stream section of that tutorial for an explanation of the code. Replace // handle SSE stream with the following, which is structurally the same as the code in the llmChat tutorial:

                var sseEvent = SseEventType.Message
                val stream = response.body.source()
                while (!stream.exhausted()) {
                    val line = stream.readUtf8Line() ?: continue
                    if (line.isEmpty()) {
                        // new SSE event, default to Message
                        // SSE events are delimited by '\n\n'
                        if (sseEvent == SseEventType.Error) {
                            resChatt.message?.value += "\n\n**llmTools Error**: ${errMsg.value}\n\n"
                        }

                        // assuming ToolCall event handled inline
                        sseEvent = SseEventType.Message
                        continue
                    }

                    // If the next line starts with `event`, we're starting a new event block
                    val parts = line.split(":", limit = 2)
                    if (parts[0].startsWith("event")) {

                        // handle event types

                    } else if (parts[0].startsWith("data")) {
                        // not an event line, we only support data line
                        // multiple data lines can belong to the same event
                        try {
                            val ollamaResponse = Json.decodeFromString<OllamaResponse>(parts[1])

                            ollamaResponse.message.content?.let {
                                if (it.isNotEmpty()) {
                                    if (sseEvent == SseEventType.Error) {
                                        errMsg.value += it
                                    } else {
                                        resChatt.message?.value += it
                                    }
                                }
                            }
                            
                            // check for and handle tool calls

                        } catch (e: IllegalArgumentException) {
                            errMsg.value += parseErr(e.localizedMessage, apiUrl, parts[1])
                        }
                    }
                } // while stream not exhausted

In addition to Message and Error, we have ToolCalls as a third arm of SseEventType. Replace // handle event types with:

                        val event = parts[1].trim()
                        when (event) {
                            "error" -> sseEvent = SseEventType.Error
                            "tool_calls" -> {
                                // new tool calls event!
                                sseEvent = SseEventType.ToolCalls
                            }
                            else ->
                                if (!event.isEmpty() && event != "message") {
                                    // we only support "error" and "tool_calls" events,
                                    // "message" events are, by the SSE spec,
                                    // assumed implicit by default
                                    Log.d("LLMTOOLS", "Unknown event: '${parts[1]}'")
                                }
                        }

Then replace the comment // check for and handle tool calls with:

                            if (sseEvent == SseEventType.ToolCalls) {
                                ollamaResponse.message.toolCalls?.let {
                                    // message.content is usually empty
                                    for (toolCall in it) {
                                        toolInvoke(toolCall.function)?.let { toolResult ->
                                            // create new OllamaMessage with tool result
                                            // to be sent back to Ollama
                                            ollamaRequest.messages =
                                                listOf(OllamaMessage(
                                                    role = "tool", content = toolResult, toolCalls = null))
                                            ollamaRequest.tools = null

                                            // send result back to Ollama
                                            sendNewPrompt = true
                                        } ?: run {
                                            // tool unknown, report to user as error
                                            errMsg.value += "llmTools ERROR: tool '${toolCall.function.name}' called"
                                            resChatt.message?.value += "\n\n**llmTools Error**: tool '${toolCall.function.name}' called\n\n"
                                        }
                                    }
                                }
                            }

And we’re done with llmTools() and with ChattStore!

SubmitButton

Finally, in MainView.kt > SubmitButton() > IconButton > onClick, inside the viewModelScope.launch {} block, replace the call to llmPrompt(), or llmChat(), with:

                vm.appID?.let {
                    llmTools(it, Chatt(username = vm.model,
                        message = mutableStateOf(vm.message.text.toString()),
                        timestamp = Instant.now().toString()
                    ), vm.errMsg)
                }

That should do it for the front end!

Run and test to verify and debug

Please see the End-to-end testing section of the spec to test your frontend implementation.

Once you have finished testing, change your serverUrl back to YOUR_SERVER_IP so that we know what your server IP is. You will not get full credit if your front end is not set up to work with your backend!

Front-end submission guidelines

We will only grade files committed to the main branch. If you’ve created multiple branches, please merge them all to the main branch for submission.

Push your front-end code to the same GitHub repo to which you submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder llmtools. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive, only files needed for grading will be pushed to GitHub.

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmtools
        |-- composeChatter
            |-- app
            |-- gradle   
    # and other files or folders

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get ZERO points if your tutorial doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement since the previous tutorial, please update your entry. If you’re using a different GitHub repo than in the previous tutorial, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.

Appendix: imports
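
Your exact import lists will depend on what Android Studio has already auto-imported for you. The following is a non-exhaustive sketch of the imports the code added in this tutorial relies on; let the editor’s quick-fix resolve anything missed:

// Toolbox.kt
import kotlinx.serialization.Serializable

// MainActivity.kt (permission request)
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts

// LocManager.kt (singleton changes)
import com.google.android.gms.location.FusedLocationProviderClient
import com.google.android.gms.location.Priority.PRIORITY_HIGH_ACCURACY

// ChattStore.kt (tool-call classes and llmTools())
import kotlinx.serialization.SerialName
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.JsonIgnoreUnknownKeys
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody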


Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin. Last updated October 29th, 2025.