Tutorial: llmTools SwiftUI
Cover Page
DUE Wed, 11/12, 2 pm
This tutorial can be completed on the iOS simulator.
You can build off the llmChat or llmPrompt tutorial’s frontend.
To access your backend, you will need your self-signed certificate
installed on your front-end.
The front-end work involves mostly:
- preparing the toolbox and tool invocation infrastructure,
- incorporating the Location Manager from the maps tutorial and calling it from the get_location tool,
- adding llmTools(appID:chatt:errMsg:) to ChattStore to process tool calls in the SSE stream:
  - adding tool management to Ollama message handling,
  - handling the tool_calls SSE event,
  - calling the tool(s) if available and returning the results to Ollama,
  - or reporting error to user if tool called is not available.
Preparing your GitHub repo
- On your laptop, navigate to YOUR_TUTORIALS/
- Unzip your llmchat.zip or llmprompt.zip file. Double check that you still have a copy of the zipped file for future reference!
- Rename your newly unzipped folder llmtools
- Check whether there’s a DerivedData folder in your swiftUIChatter folder; if so, delete it:
  laptop$ cd YOUR_TUTORIALS/llmtools/swiftUIChatter
  laptop$ ls -d DerivedData
  # if DerivedData exists:
  laptop$ rm -rf DerivedData
- Push your local YOUR_TUTORIALS/ repo to GitHub (git push) and make sure there’re no git issues:
  - Open GitHub Desktop and click on Current Repository on the top left of the interface
  - Click on your assignment GitHub repo
  - Add Summary to your changes and click Commit to main
  - If you have pushed other changes to your Git repo, click Pull Origin to sync up the clone on your laptop
  - Finally click on Push Origin to push changes to GitHub
Go to the GitHub website to confirm that your folders follow this structure outline:
reactive
|-- chatterd
|-- chatterd.crt
|-- llmtools
    |-- swiftUIChatter
        |-- swiftUIChatter.xcodeproj
        |-- swiftUIChatter
        # and other files or folders
Your YOUR_TUTORIALS folder on your laptop should, in addition, contain the zipped files from other tutorials.
If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.
ChattViewModel
Since you will be sharing PostgreSQL database storage with the rest of the class,
we need to identify your entries so that we forward only your entries to Ollama
during your “conversation”. If you’re building off llmChat, you should already
have appID defined in your code. Otherwise, add this appID property to your
ChattViewModel in swiftUIChatterApp.swift file:
let appID = Bundle.main.bundleIdentifier
To start a new, empty context history, change your appID to a random string
of less than 155 ASCII characters with your uniqname in it.
While we’re modifying ChattViewModel, change its model and username properties
both to qwen3 (you will be changing both to qwen3:0.6b when testing your backend).
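If you’re unsure what the result should look like, here is a minimal sketch of ChattViewModel after these changes. It is only a sketch: the properties beyond appID, model, and username (message, errMsg, and the @Observable macro) are assumptions based on the earlier tutorials, so keep whatever your existing view model already has.

```swift
import Foundation
import Observation

// A minimal sketch only; adapt to your existing ChattViewModel.
@Observable
final class ChattViewModel {
    // replace with a random string (< 155 ASCII characters) containing your
    // uniqname to start a new, empty context history
    let appID = Bundle.main.bundleIdentifier
    var model = "qwen3"     // switch to "qwen3:0.6b" when testing your backend
    var username = "qwen3"  // ditto
    var message = ""        // assumed: bound to the prompt text field
    var errMsg = ""         // assumed: bound to the error display
}
```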
Toolbox
Let us start by creating a toolbox to hold our tools. Create a new Empty file and name
it Toolbox.swift. Add import Foundation to the top of the file.
The contents of this file can be categorized into three purposes: tool/function definition, the toolbox itself, and tool use (or function calling).
Tool/function definition
Ollama tool schema: at the top of Ollama’s JSON tool definition is a JSON Object representing a tool schema. The tool schema is defined using nested JSON Objects and JSON Arrays. Add the full nested definitions of Ollama’s tool schema to your file:
struct OllamaToolSchema: Encodable {
let type: String
let function: OllamaToolFunction
}
struct OllamaToolFunction: Encodable {
let name: String
let description: String
let parameters: OllamaFunctionParams?
}
struct OllamaFunctionParams: Encodable {
let type: String
let properties: [String:OllamaParamProp]?
let required: [String]?
}
struct OllamaParamProp: Encodable {
let type: String
let description: String
let enum_: [String]?
enum CodingKeys: String, CodingKey {
// to map json field to property
// if specify one, must specify all
case type = "type"
case description = "description"
case enum_ = "enum"
}
}
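The only on-device tool in this tutorial takes no parameters, so OllamaFunctionParams and OllamaParamProp won’t actually be exercised here. Purely for illustration, here is a hypothetical sketch of how a tool with parameters could be declared with the structs above (the tool name and its fields are made up and not part of this tutorial):

```swift
// Hypothetical example only; not used in this tutorial.
let EXAMPLE_PARAM_TOOL = OllamaToolSchema(
    type: "function",
    function: OllamaToolFunction(
        name: "get_forecast_example",
        description: "Get the weather forecast for a city",
        parameters: OllamaFunctionParams(
            type: "object",
            properties: [
                "city": OllamaParamProp(
                    type: "string",
                    description: "Name of the city",
                    enum_: nil),
                "units": OllamaParamProp(
                    type: "string",
                    description: "Temperature units",
                    enum_: ["celsius", "fahrenheit"])
            ],
            required: ["city"]
        )
    )
)
```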
Location tool schema: in this tutorial, we have only one tool on device. Add the following tool definition to your file:
let LOC_TOOL = OllamaToolSchema(
type: "function",
function: OllamaToolFunction(
name: "get_location",
description: "Get current location",
parameters: nil
)
)
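If you’d like to see what this produces on the wire, you can encode LOC_TOOL and print it, for example from a temporary call somewhere in your app. This is just an optional sanity check, not part of the tutorial; note that JSONEncoder drops nil optionals, so the parameters field simply won’t appear:

```swift
// Optional sanity check: inspect the JSON that LOC_TOOL encodes to.
func dumpLocToolJSON() {
    if let data = try? JSONEncoder().encode(LOC_TOOL),
       let json = String(data: data, encoding: .utf8) {
        // e.g.: {"type":"function","function":{"name":"get_location","description":"Get current location"}}
        print(json)
    }
}
```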
Location tool function: we implement the get_location tool as a getLocation(_:) function that reads the device’s latitude and longitude data off the Location Manager
from the maps tutorial. Here’s the definition of the getLocation(_:) function:
func getLocation(_ argv: [String]) async -> String? {
"latitude: \(LocManagerViewModel.shared.location.lat), longitude: \(LocManagerViewModel.shared.location.lon)"
}
Location Manager
We don’t need all of the functionalities of the LocManager, but it is the least
amount of work and lowest chance of introducing bugs if we just copy the whole
LocManager.swift file from the maps tutorial: open both the maps and
llmTools projects in Xcode, then alt-drag the LocManager.swift file from the
left/navigation pane of the maps project to the llmTools project’s left pane.
If you have not completed the maps tutorial, please follow the instructions in the Location manager section to set up the LocManager. You don’t need to complete the rest of the maps tutorial.
Then update the Location struct in LocManager.swift to:
struct Location: Decodable {
var lat: CLLocationDegrees
var lon: CLLocationDegrees
var speed: CLLocationSpeed = 0.0
enum CodingKeys: String, CodingKey {
// to ignore other keys
case lat, lon
}
}
You will also need to request permission to read the location. First provide a justification-for-access in the Info property list. Click on your project name (first item in your left/navigator pane), then click on your app target in the TARGETS section, and then the Info tab. In the Custom iOS Target Properties section:
- if you have your maps project open, you can copy Privacy - Location When In Use Usage Description from your maps project’s Custom iOS Target Properties here,
- if you haven’t completed the maps tutorial, right click (or ctl-click) on any row in the table and choose Add Row (screenshot). Select Privacy - Location When In Use Usage Description (you can type to match search).
In the Value field to the right of Privacy - Location When In Use Usage Description
enter the reason you want to access location, for example, “to get weather at location”.
What you enter into the value field will be displayed to the user when seeking their permission.
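Under the hood, the Xcode property name Privacy - Location When In Use Usage Description corresponds to the raw Info.plist key NSLocationWhenInUseUsageDescription. If you want to confirm the entry made it into your build, a quick optional check is sketched below; you could drop it temporarily into the app initializer you’ll add next:

```swift
// Optional check: print the usage description the app will show the user.
let locationUsageReason = Bundle.main.object(
    forInfoDictionaryKey: "NSLocationWhenInUseUsageDescription") as? String
print(locationUsageReason ?? "NSLocationWhenInUseUsageDescription missing from Info.plist")
```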
Then in your swiftUIChatterApp.swift file, add the following initializer to your swiftUIChatterApp:
init() {
LocManager.shared.startUpdates()
}
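If you’re unsure where this initializer lives, here is a minimal sketch of swiftUIChatterApp with it in place. The body shown is an assumption based on the earlier tutorials; keep your existing scene as is.

```swift
import SwiftUI

@main
struct swiftUIChatterApp: App {
    init() {
        // start location updates at app launch so get_location has data to report
        LocManager.shared.startUpdates()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()   // assumed root view from the earlier tutorials
        }
    }
}
```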
The toolbox
Even though we have only one resident tool in this tutorial, we want a generalized architecture that can hold multiple tools and invoke the right tool dynamically. To that end, we’ve chosen to use a switch table (or jump table or, more fancily, service locator registry) as the data structure for our toolbox. We implement the switch table as a dictionary. The “keys” in the dictionary are the names of the tools/functions. Each “value” is a record containing the tool’s definition/schema and a pointer to the function implementing the tool. To send a tool as part of a request to Ollama, we look up its schema in the switch table and copy it to the request. To invoke a tool called by Ollama in its response, we look up the tool’s function in the switch table and invoke the function.
Back in your Toolbox file, add the following type alias for an async tool function and the record type holding a tool’s definition/schema, its async function, and the ordered list of its argument labels:
typealias ToolFunction = ([String]) async -> String?
struct Tool {
let schema: OllamaToolSchema
let function: ToolFunction
let arguments: [String]
}
Now create a switch-table toolbox and put the LOC_TOOL in it:
let TOOLBOX = [
"get_location": Tool(schema: LOC_TOOL, function: getLocation, arguments: []),
]
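Nothing else in the architecture has to change if you later grow the toolbox. For illustration only (the names below are made up and not part of this tutorial), a second on-device tool would amount to one more schema, one more function, and one more dictionary entry:

```swift
// Hypothetical sketch of a second on-device tool; not part of this tutorial.
let TIME_TOOL = OllamaToolSchema(
    type: "function",
    function: OllamaToolFunction(
        name: "get_time",
        description: "Get the current date and time",
        parameters: nil))

func getTime(_ argv: [String]) async -> String? {
    Date().ISO8601Format()
}

let EXTENDED_TOOLBOX = [
    "get_location": Tool(schema: LOC_TOOL, function: getLocation, arguments: []),
    "get_time": Tool(schema: TIME_TOOL, function: getTime, arguments: []),
]
```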
Tool use or function calling
Ollama tool call: Ollama’s JSON tool call comprises a JSON Object containing a nested JSON Object carrying the name of the function and the arguments to pass to it. Add these nested struct definitions representing Ollama’s tool call JSON to your file:
struct OllamaToolCall: Codable {
let function: OllamaFunctionCall
}
struct OllamaFunctionCall: Codable {
let name: String
let arguments: [String:String]
}
Tool invocation: finally, here’s the tool invocation function. We call this function to execute any tool call we receive in an Ollama response. It looks up the tool name in the toolbox. If the tool is resident, it runs it and returns the result; otherwise it returns nil.
func toolInvoke(function: OllamaFunctionCall) async -> String? {
if let tool = TOOLBOX[function.name] {
var argv = [String]()
for label in tool.arguments {
// get arguments in order, Dict doesn't preserve insertion order
if let arg = function.arguments[label] {
argv.append(arg)
}
}
return await tool.function(argv)
}
return nil
}
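As a quick mental model of how this will be used from the SSE handler later on, here’s a small sketch; it is illustration only and you don’t need to add it to your project:

```swift
// Illustration only: how toolInvoke() behaves for known and unknown tools.
func demoToolInvoke() async {
    // known tool: runs getLocation() and returns its result
    let known = await toolInvoke(
        function: OllamaFunctionCall(name: "get_location", arguments: [:]))
    print(known ?? "nil")   // "latitude: ..., longitude: ..."

    // unknown tool: returns nil, which the SSE handler reports as an error
    let unknown = await toolInvoke(
        function: OllamaFunctionCall(name: "no_such_tool", arguments: [:]))
    print(unknown ?? "nil") // "nil"
}
```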
That concludes our toolbox definition.
ChattStore
structs
Next add the following enum and three structs to your file. If you are building
off the llmChat code base, you only need to add the ToolCalls case arm to your
SseEventType and a toolCalls field to your OllamaMessage:
enum SseEventType { case Error, Message, ToolCalls }
struct OllamaMessage: Codable {
let role: String
let content: String?
let toolCalls: [OllamaToolCall]?
enum CodingKeys: String, CodingKey {
// to map json field to property
// if one is specified, must specify all
case role = "role"
case content = "content"
case toolCalls = "tool_calls"
}
}
and a tools field to your OllamaRequest:
struct OllamaRequest: Encodable {
let appID: String?
let model: String?
var messages: [OllamaMessage]
let stream: Bool
var tools: [OllamaToolSchema]?
}
The OllamaResponse struct remains unchanged:
struct OllamaResponse: Decodable {
let model: String
let created_at: String
let message: OllamaMessage
enum CodingKeys: String, CodingKey {
// to ignore other keys
case model, created_at, message
}
}
The OllamaError struct in your file also remains unchanged.
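To see the CodingKeys mapping in action, here is an optional sketch that decodes a hand-written tool-call message; the sample JSON is made up for illustration and isn’t part of the tutorial:

```swift
// Optional illustration: "tool_calls" in the JSON lands in the toolCalls property.
func demoDecodeToolCall() {
    let sample = #"{"role": "assistant", "content": "", "tool_calls": [{"function": {"name": "get_location", "arguments": {}}}]}"#
    if let msg = try? JSONDecoder().decode(OllamaMessage.self, from: Data(sample.utf8)) {
        print(msg.toolCalls?.first?.function.name ?? "no tool call") // "get_location"
    }
}
```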
llmTools(appID:chatt:errMsg:)
The underlying request/response handling of llmTools(appID:chatt:errMsg:) is basically
that of llmChat(appID:chatt:errMsg:); however, with all the mods needed to support tool
calling, it’s simpler to just write llmTools(appID:chatt:errMsg:) from scratch.
We will be reusing the parseErr(code:apiUrl:data:) function and the rest of the
ChattStore class from the previous tutorials.
To your ChattStore class, add the following method. We first append the user prompt to the
chatts array and then append a new chatt element to hold Ollama’s
response. We also set up an HTTP request to carry the user prompt to Ollama:
func llmTools(appID: String, chatt: Chatt, errMsg: Binding<String>) async {
self.chatts.append(chatt)
var resChatt = Chatt(
username: "assistant (\(chatt.username ?? "ollama"))",
message: "",
timestamp: Date().ISO8601Format())
self.chatts.append(resChatt)
guard let last = chatts.indices.last else {
errMsg.wrappedValue = "llmTools: chatts array malformed"
return
}
guard let apiUrl = URL(string: "\(serverUrl)/llmtools") else {
errMsg.wrappedValue = "llmTools: Bad URL"
return
}
var request = URLRequest(url: apiUrl)
request.timeoutInterval = 1200 // for 20 minutes
request.httpMethod = "POST"
request.setValue("application/json; charset=utf-8", forHTTPHeaderField: "Content-Type")
request.setValue("text/event-streaming", forHTTPHeaderField: "Accept")
// setup Ollama request with tools
}
We now prepare an OllamaRequest to carry the user’s appID, prompt, and any
on-device tools the user may provide. Replace // setup Ollama request with tools with:
var ollamaRequest = OllamaRequest(
appID: appID,
model: chatt.username,
messages: [OllamaMessage(role: "user", content: chatt.message, toolCalls: nil)],
stream: true,
tools: TOOLBOX.isEmpty ? nil : []
)
// append all of on-device tools to ollamaRequest
for (_, tool) in TOOLBOX {
ollamaRequest.tools?.append(tool.schema)
}
// send request and any tool result to chatterd
Mapping client connections to Ollama's rounds
Recall that Ollama is a stateless server, meaning that it doesn’t save any state or data
from a request/response interaction with the client. In the backend spec, we saw that
a prompt requiring chained tool calls—first call get_location then call get_weather—is to Ollama three separate interactions (or HTTP rounds) with chatterd:
- The first interaction between chatterd and Ollama carries the user’s prompt, with both on-device and backend-resident tools. This round is completed by Ollama’s returning a tool call for get_location.
- The second interaction carries the result of the get_location tool call from the client, which chatterd forwards to Ollama. This round is completed by Ollama’s returning the second tool call, for get_weather, which is served by chatterd.
- The third interaction carries the result of the get_weather tool call. This round is completed by Ollama’s returning the combined results to chatterd.
From the client’s perspective, however, it sees only two connections to chatterd:
- the first one (lines 1 and 4 in the Tool-call Handling figure) maps directly to chatterd’s first connection to Ollama (lines 2 and 3),
- the second one (lines 5 and 9) starts Ollama’s second round (line 6), but doesn’t complete until the completion of Ollama’s third round (line 9).
Due to chatterd “short-circuiting” the client to serve the resident get_weather tool call, the client doesn’t see the completion of chatterd’s second interaction, nor the initiation of its third interaction, with Ollama. The backend forwards all non-tool-call messages from Ollama’s second and third rounds onto the second connection between the client and chatterd.
To accommodate sending tool call results, we use a flag, sendNewPrompt, to let
llmTools(appID:chatt:errMsg:) know that it has an on-device tool call result to
send to Ollama. While sendNewPrompt is true (it is initialized to true),
we open a new POST connection to chatterd and send it the ollamaRequest
message. Replace // send request and any tool result to chatterd with:
var sendNewPrompt = true
while sendNewPrompt {
sendNewPrompt = false
guard let requestBody = try? JSONEncoder().encode(ollamaRequest) else {
errMsg.wrappedValue = "llmTools: JSONEncoder error"
return
}
request.httpBody = requestBody
do {
let (bytes, response) = try await URLSession.shared.bytes(for: request)
if let http = response as? HTTPURLResponse, http.statusCode != 200 {
for try await line in bytes.lines {
guard let data = line.data(using: .utf8) else {
continue
}
errMsg.wrappedValue = parseErr(code: "\(http.statusCode)", apiUrl: apiUrl, data: data)
}
if errMsg.wrappedValue.isEmpty {
errMsg.wrappedValue = "\(http.statusCode) \(HTTPURLResponse.localizedString(forStatusCode: http.statusCode))\n\(apiUrl)"
}
return
}
// handle SSE stream
} catch {
errMsg.wrappedValue = "llmTools: failed \(error)"
}
} // while sendNewPrompt
We parse the SSE stream the same way we did in the llmChat tutorial. Please review
the Parsing SSE Stream section of that tutorial for an explanation of the code. Replace // handle SSE stream with the following, which is
structurally the same as the code in the llmChat tutorial:
var sseEvent = SseEventType.Message
var line = ""
for try await char in bytes.characters {
if char != "\n" && char != "\r\n" { // Python eol is "\r\n"
line.append(char)
continue
}
if line.isEmpty {
// new SSE event, default to Message
// SSE events are delimited by "\n\n"
if (sseEvent == .Error) {
resChatt.message?.append("\n\n**llmTools Error**: \(errMsg.wrappedValue)\n\n")
chatts[last] = resChatt // otherwise changes not observed!
}
// assuming .ToolCalls event handled inline
sseEvent = .Message
continue
}
// If the next line starts with `event`, we're starting a new event block
let parts = line.split(separator: ":", maxSplits: 1, omittingEmptySubsequences: false)
let event = parts[1].trimmingCharacters(in: .whitespaces)
if parts[0].starts(with: "event") {
// handle event types
} else if parts[0].starts(with: "data") {
// not an event line, we only support data line;
// multiple data lines can belong to the same event
let data = Data(event.utf8)
do {
let ollamaResponse = try JSONDecoder().decode(OllamaResponse.self, from: data)
if let token = ollamaResponse.message.content, !token.isEmpty {
if sseEvent == .Error {
errMsg.wrappedValue += token
} else {
resChatt.message?.append(token)
chatts[last] = resChatt // otherwise changes not observed!
}
}
// check for and handle tool calls
} catch {
errMsg.wrappedValue += parseErr(code: "\(error)", apiUrl: apiUrl, data: data)
}
}
line = ""
} // for char in bytes.char
In addition to Message and Error, we have ToolCalls as a third arm of SseEventType. Replace // handle event types with:
switch event {
case "error":
sseEvent = .Error
case "tool_calls":
// new tool calls event!
sseEvent = .ToolCalls
default:
if !event.isEmpty && event != "message" {
// we only support "error" and "tool_calls" events,
// "message" events are, by the SSE spec,
// assumed implicit by default
print("LLMTOOLS: Unknown event: '\(parts[1])'")
}
}
Then replace the comment, // check for and handle tool calls with:
if sseEvent == .ToolCalls, let toolCalls = ollamaResponse.message.toolCalls {
// message.content is usually empty
for toolCall in toolCalls {
let toolResult = await toolInvoke(function: toolCall.function)
if toolResult != nil {
// create new OllamaMessage with tool result
// to be sent back to Ollama
ollamaRequest.messages = [OllamaMessage(role: "tool", content: toolResult, toolCalls: nil)]
ollamaRequest.tools = nil
// send result back to Ollama
sendNewPrompt = true
} else {
// tool unknown, report to user as error
errMsg.wrappedValue += "llmTools ERROR: tool '\(toolCall.function.name)' called"
resChatt.message?.append("\n\n**llmTools Error**: tool '\(toolCall.function.name)' called\n\n")
chatts[last] = resChatt // otherwise changes not observed!
}
}
}
And we’re done with llmTools(appID:chatt:errMsg:) and with ChattStore!
SubmitButton
Finally, in ContentView.swift > SubmitButton() > Button, inside the Task {} block in the button’s action parameter, replace the call to llmPrompt(_:errMsg:) with:
if let appID = vm.appID {
await ChattStore.shared.llmTools(
appID: appID,
chatt: Chatt(username: vm.model,
message: vm.message, timestamp: Date().ISO8601Format()),
errMsg: Bindable(vm).errMsg)
}
If you’re working with the llmChat base code, the check for appID is already in
the code; you only need to replace the call to llmChat(appID:chatt:errMsg:) with
a call to llmTools(appID:chatt:errMsg:).
That should do it for the front end!
Run and test to verify and debug
Please see the End-to-end testing section of the spec to test your frontend implementation.
Once you’ve finished testing, change your serverUrl back to YOUR_SERVER_IP so that
we know what your server IP is. You will not get full credit if your front end is
not set up to work with your backend!
Front-end submission guidelines
We will only grade files committed to the main branch. If you’ve created multiple
branches, please merge them all to the main branch for submission.
Push your front-end code to the same GitHub repo you’ve submitted your back-end code:
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on the GitHub repo you created at the start of this tutorial
- Add Summary to your changes and click Commit to main at the bottom of the left pane
- Since you have pushed your back-end code, you’ll have to click Pull Origin to sync up the repo on your laptop
- Finally click Push Origin to push all changes to GitHub
Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo
under the folder llmtools. Confirm that your repo has a folder structure outline similar to the following. If
your folder structure is not as outlined, our script will not pick up your submission and, further, you may have
problems getting started on later tutorials. There could be other files or folders in your local folder not listed
below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing
GitHub for Reactive, only files needed for grading will
be pushed to GitHub.
reactive
|-- chatterd
|-- chatterd.crt
|-- llmtools
    |-- swiftUIChatter
        |-- swiftUIChatter.xcodeproj
        |-- swiftUIChatter
        # and other files or folders
Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get ZERO points if your tutorial doesn’t build, run, or open.
IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we’d know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.
Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement from the previous tutorial’s, please update your entry. If you’re using a different GitHub repo from the previous tutorial’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.
| Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin | Last updated October 29th, 2025 |