Tutorial: llmPrompt SwiftUI
Cover Page
DUE Wed, 09/03, 2 pm
This tutorial introduces you to the iOS app development environment and basic development tools on the backend. You’ll learn some Swift syntax and language features for the front end and, to a lesser extent, get acquainted with the backend language and web stack of your choice. You will use SwiftUI to build reactive UI declaratively on the front end. It can be completed on the iOS simulator. Let’s get started!
Expected behavior
Posting a prompt and receiving and displaying streamed response:
DISCLAIMER: the video demo shows you one aspect of the app’s behavior. It is not a substitute for the spec. If there are any discrepancies between the demo and this spec, please follow the spec. The spec is the single source of truth. If the spec is ambiguous, please consult the teaching staff for clarification.
Be patient: the app on your device or simulator will be very slow because we're running in debug mode, tethered to Xcode, not as a stand-alone app in release mode. It could take several seconds after launch for the app's first screen to appear.
Preliminaries
Before we start, you’ll need to prepare a GitHub repo to submit your tutorials and for us to communicate your tutorial grades back to you. Please follow the instructions in Preparing GitHub for Reactive Tutorials and Projects and then return here to continue.
If you don’t have an environment set up for iOS development, please read our notes on Getting Started with iOS Development first.
Creating an Xcode project
In the following, replace `<YOUR UNIQNAME>` with your uniqname. Apple will complain if your `Bundle Identifier` is not globally unique. Using your uniqname is one way to generate a unique `Bundle Identifier`.
Depending on your version of Xcode, the screenshots in this and subsequent specs may not look exactly the same as what you see on screen.
- Click `Create a new Xcode project` in the "Welcome to Xcode" screen (screenshot)
- Select `iOS > App` and click `Next` (screenshot; be careful that you select `iOS` and not `macOS`)
- Enter `Product Name`: `swiftUIChatter`
- `Team`: `None` if you don't have one yet, otherwise choose your `Personal Team`
- `Organization Identifier`: `edu.umich.<YOUR UNIQNAME>` 👈👈👈 replace `<YOUR UNIQNAME>` with yours, remove the angle brackets `< >`
- `Interface`: SwiftUI
- `Language`: Swift
- Leave the other fields as `None` and all boxes unchecked, then click `Next`
- On the file dialog box that pops up, put your `swiftUIChatter` folder in 👉👉👉 `YOUR_TUTORIALS/llmprompt/swiftUIChatter/`, where `YOUR_TUTORIALS` is the name you gave to your assignment GitHub repo clone in Preparing GitHub for Reactive above
- Leave `Create Git repository on my Mac` UNCHECKED (screenshot). We will add the files to GitHub using GitHub Desktop instead.
- Click `Create`
Once the project is created, navigate to your project editor (top line of the Xcode left pane, showing your `Product Name`). Xcode will then show the `General` settings for your project in its middle pane. In the `Minimum Deployments` section, using the drop-down selector, choose iOS 18 or later.
Next click the `Signing & Capabilities` tab (up top, next to the `General` tab) and look in the `Signing` section. If you selected `None` for `Team` when creating your project above, you will need to specify a `Team`. If you don't have a `Personal Team` yet, please create one now (for free) using your Apple ID: in the drop-down menu next to `Team`, select `Add an Account...` at the bottom of the menu, sign in using your Apple ID, and follow the prompts. Finally, confirm that your `Bundle Identifier` is `edu.umich.<YOUR UNIQNAME>.swiftUIChatter`. Apple will complain if your `Bundle Identifier` is not globally unique.
Checking GitHub
Open GitHub Desktop and
- Click on `Current Repository` on the top left of the interface
- Click on the assignment GitHub repo you cloned above
- Add a Summary to your changes and click `Commit to main` at the bottom of the left pane
- If you have a teammate and they have pushed changes to GitHub, you'll have to click `Pull Origin`, resolve any conflicts, and re-commit to main
- Finally, click on `Push Origin` to push changes to GitHub
If you are proficient with git, you don’t have to use GitHub Desktop. However, we can only help with GitHub Desktop, so if you use anything else, you’ll be on your own.
Go to the GitHub website to confirm that your folders follow this structure outline:
reactive
|-- llmprompt
|-- swiftUIChatter
|-- swiftUIChatter.xcodeproj
|-- swiftUIChatter
If the folders in your GitHub repo do not have the above structure, we will not be able to grade your tutorials and you will get a ZERO.
Xcode project structure
The left or `Navigator` pane of your Xcode window should show your project files under the `swiftUIChatter` project (top line), in a `swiftUIChatter` folder:

- `swiftUIChatterApp`: named after your project, this file tells iOS the entry point (`@main`) of your app. Only one data type (`struct`) can be so tagged. This struct describes the `Scene` in which the window hierarchy of your app resides. `WindowGroup` is the window hierarchy for your `Scene` (we'll discuss the keyword `some` later). Unlike on iPads or Macs, where an app can have multiple scenes, each app has only one scene on iPhones.
- `ContentView`: it will hold the timeline of our exchanges with Ollama later.
UI Design
One can easily spend a whole weekend (or longer) getting the UI “just right.”
We won’t be grading you on how beautiful your UI looks. You’re free to design your UI differently, so long as all indicated UI elements are fully visible on the screen, non overlapping, and functioning as specified.
#Preview
The `#Preview` feature is used by Xcode only during development, to preview your View(s). If the preview pane is not showing, you can toggle it by checking `Canvas` on the `Adjust Editor Options` menu at the top right corner of your Xcode window (screenshot). The preview only renders your View; it is not a simulator and won't run non-UI-related code. Given the small sizes of our apps, I found the preview to be of limited use and rather slow, and would therefore just comment out the `#Preview` feature, which automatically disables the preview and closes the `Canvas` pane.
Chatter app
Chatt
In all our tutorials, we will use a structure called `Chatt` to hold exchanges with the backend to be displayed on screen. We store the definition of this structure in a file called `Chatt.swift`. Create a new Swift file:

- Right-click on the `swiftUIChatter` folder (not the project; second line) in the left/navigator pane
- Select `New Empty File...`
- Rename the file from `Untitled.swift` to `Chatt.swift`
A `chatt` holds at the minimum the following fields. When the `Chatter` app is used to interact with Ollama, the `username` field may be used to hold the LLM model instead of the actual user's name. Similarly, in such a use case, the `message` field will be used to hold the user's prompt to Ollama. Place the following struct definition for `Chatt` in the file:

import Foundation

struct Chatt: Identifiable {
    var username: String?
    var message: String?
    var id: UUID? = UUID()
    var timestamp: String?

    // so that we don't need to compare every property for equality
    static func ==(lhs: Chatt, rhs: Chatt) -> Bool {
        lhs.id == rhs.id
    }
}
We declare the `Chatt` struct as conforming to the `Identifiable` protocol, which simply means that it contains an `id` property that SwiftUI can use to uniquely identify each instance in a list. We use a randomly generated UUID to identify each `chatt`. We also provide a `==` operator to equate two instances as long as they have the same `id`.
ChattStore
Create another Swift file, call it `ChattStore.swift`.

While the frontend sends messages to the backend in the form of `Chatt` messages, Ollama can respond with either an `OllamaError` or an `OllamaReply`. Put the following in your `ChattStore.swift`:
import Observation
import SwiftUI
struct OllamaError: Decodable {
let error: String
}
struct OllamaReply: Decodable {
let model: String
let created_at: String
let response: String
}
Compliance with the `Decodable` protocol allows Swift's `Codable` package to automatically convert JSON strings received from the network into these Swift structures.
Then add the following `ChattStore` singleton:
@Observable
final class ChattStore {
static let shared = ChattStore() // create one instance of the class to be shared, and
private init() {} // make the constructor private so no other instances can be created
private(set) var chatts = [Chatt]()
private let serverUrl = "https://YOUR_SERVER_IP"
}
The `serverUrl` above uses the placeholder `YOUR_SERVER_IP`. Shortly, to test your front end, you will point it at the provided `mada.eecs.umich.edu` server; once you have implemented your own back-end server, you will replace that with your server's IP address.

The first two declarations in `ChattStore` make it a singleton object, meaning there will only ever be one instance of this class when the app runs. We will keep the user's interactions with Ollama in the `chatts` array. Since we want only a single copy of the `chatt`s data, we make this a singleton object. By Swift convention, the singleton instance is stored in its `shared` property.
We annotate the `ChattStore` class with the `@Observable` macro (part of the `Observation` package) to publish its public properties for subscription. When a SwiftUI `View` subscribes to a published observable variable (the subject), it will be notified and the `View` will be recomputed and re-rendered automatically as necessary. The `chatts` array will be used to hold user exchanges with Ollama. While we want `chatts` to be readable outside the class, we don't want it publicly modifiable, so we have set its "setter" to `private`.
Since `chatt`s are retrieved from and posted to the `chatterd` back-end server, we will keep all network functions that communicate with the server as methods of this class. For this tutorial, we will have only one network function, `llmPrompt(_:errMsg:)`.
To send a prompt to the backend, the user calls the asynchronous function `llmPrompt(_:errMsg:)`, which starts by appending the user's prompt to the `chatts` array. Add the following function definition to your `ChattStore` class:
func llmPrompt(_ chatt: Chatt, errMsg: Binding<String>) async {
self.chatts.append(chatt)
// prepare prompt
}
The `errMsg` parameter of `llmPrompt(_:errMsg:)` is of type `Binding<String>`, which means that updating its `wrappedValue` property will notify observers of the variable. We'll see later that updating `errMsg` will cause an alert dialog box to pop up, to warn the user.
For this tutorial, we interact with Ollama using its `generate` API. Ollama's `generate` API expects incoming prompts to be JSON objects with the following fields:
{
"model": "string",
"prompt": "string",
"stream": boolean
}
We use the data passed in through `chatt` to create such a JSON object for Ollama. Add the following code to your `llmPrompt(_:errMsg:)`, replacing `// prepare prompt`:
let jsonObj: [String: Any] = [
"model": chatt.username as Any,
"prompt": chatt.message as Any,
"stream": true
]
guard let requestBody = try? JSONSerialization.data(withJSONObject: jsonObj) else {
errMsg.wrappedValue = "llmPrompt: JSONSerialization error"
return
}
// prepare request
We first assemble a Swift dictionary comprising the key-value pairs of data we want to post to the server. We can't just post the Swift dictionary as is, though: the server may not be, and in fact is not, written in Swift, and in any case could have a different memory layout for various data structures. Presented with a chunk of binary data, the server will not know that the data represents a dictionary, nor how to reconstruct the dictionary in its own dictionary layout. To post the Swift dictionary, therefore, we call `JSONSerialization.data(withJSONObject:)` to encode it into a serialized JSON object that the server will know how to parse, which we then put in `requestBody`.
Below we use the `requestBody` to populate a `URLRequest` with the appropriate POST URL. Add the following code to the function, replacing `// prepare request`:
guard let apiUrl = URL(string: "\(serverUrl)/llmprompt") else {
errMsg.wrappedValue = "llmPrompt: Bad URL"
return
}
var request = URLRequest(url: apiUrl)
request.timeoutInterval = 1200 // for 20 minutes
request.httpMethod = "POST"
request.setValue("application/json; charset=utf-8", forHTTPHeaderField: "Content-Type")
request.setValue("application/*", forHTTPHeaderField: "Accept")
request.httpBody = requestBody
// connect to chatterd and Ollama
We initiate a connection to our `chatterd` backend and send the request. Our backend simply forwards the request to Ollama. Check that the connection has been made successfully: if we fail to connect to our backend (the `catch` block) or Ollama returned any HTTP error, we simply report it to the user and end the session. Replace `// connect to chatterd and Ollama` with:
do {
let (bytes, response) = try await URLSession.shared.bytes(for: request)
if let http = response as? HTTPURLResponse, http.statusCode != 200 {
for try await line in bytes.lines {
guard let data = line.data(using: .utf8) else {
continue
}
errMsg.wrappedValue = parseErr(code: "\(http.statusCode)", apiUrl: apiUrl, data: data)
}
if errMsg.wrappedValue.isEmpty {
errMsg.wrappedValue = "\(http.statusCode) \(HTTPURLResponse.localizedString(forStatusCode: http.statusCode))\n\(apiUrl)"
}
return
}
// prepare placeholder
} catch {
errMsg.wrappedValue = "llmPrompt: failed \(error)"
}
If the connection has been made successfully, we create a placeholder `chatt` for the incoming response and append it to the `chatts` array. The response is streamed and we want each arriving element to be displayed right away, hence the need for a placeholder `chatt`. Put the following code at the end of your `do` block, replacing `// prepare placeholder`:
var resChatt = Chatt(
username: "assistant (\(chatt.username ?? "ollama"))",
message: "",
timestamp: Date().ISO8601Format())
self.chatts.append(resChatt)
guard let last = chatts.indices.last else {
errMsg.wrappedValue = "llmPrompt: chatts array malformed"
return
}
// receive Ollama response
Finally, we receive each newline-delimited JSON (NDJSON) response and, if the line is not empty, decode it into `OllamaReply`. The decoding is done using Swift's `Codable` package. Upon successful decoding, the `response` property of `OllamaReply` is appended to the `message` property of our placeholder `resChatt`, and we trigger a reactive update of the display. Put the following code at the end of your `do` block, replacing `// receive Ollama response`:
for try await line in bytes.lines {
guard let data = line.data(using: .utf8) else {
continue
}
do {
let ollamaResponse = try JSONDecoder().decode(OllamaReply.self, from: data)
resChatt.message?.append(ollamaResponse.response)
} catch {
errMsg.wrappedValue += parseErr(code: "\(error)", apiUrl: apiUrl, data: data)
resChatt.message?.append("\nllmPrompt Error: \(errMsg.wrappedValue)\n\n")
}
self.chatts[last] = resChatt // otherwise changes not observed!
}
Here’s the parseErr(code:apiUrl:data:)
helper function, put it inside your ChattStore
class, outside the llmPrompt(_:errMsg:)
function:
private func parseErr(code: String, apiUrl: URL, data: Data) -> String {
do {
let errJson = try JSONDecoder().decode(OllamaError.self, from: data)
return errJson.error
} catch {
return "\(code)\n\(apiUrl)\n\(String(data: data, encoding: .utf8) ?? "error decoding failed")"
}
}
ChattViewModel
We will have several variables accessed by multiple SwiftUI Views. Instead of passing these variables back and forth, we put them in a viewmodel that we hoist onto the SwiftUI environment. A View that requires access to these variables can easily reach for the viewmodel in the environment. Put the following class in your `swiftUIChatterApp.swift` file, after the `import SwiftUI` line:
import Observation
@Observable
final class ChattViewModel {
let model = "tinyllama"
let username = "tinyllama" // instead of uniqname
let instruction = "Type a message…"
var message = "howdy?"
var errMsg = ""
var showError = false
}
We set the `username` property to the `model` requested of the LLM, to help with the display of user prompt vs. LLM response. We declare `ChattViewModel` to be an `@Observable` class so that its mutable properties, `message`, `errMsg`, and `showError`, when changed, can trigger a reactive update of the View(s) observing them.
Replace your `swiftUIChatterApp` struct definition with the following:
@main
struct swiftUIChatterApp: App {
let viewModel = ChattViewModel()
var body: some Scene {
WindowGroup {
NavigationStack {
ContentView()
.onAppear {
let scenes = UIApplication.shared.connectedScenes
let windowScene = scenes.first as? UIWindowScene
if let wnd = windowScene?.windows.first {
let lagFreeField = UITextField()
wnd.addSubview(lagFreeField)
lagFreeField.becomeFirstResponder()
lagFreeField.resignFirstResponder()
lagFreeField.removeFromSuperview()
}
}
}
.environment(viewModel)
}
}
}
We first instantiate the `ChattViewModel`, then put it in the SwiftUI environment with `.environment(viewModel)`. We wrap the call to `ContentView()` in a `NavigationStack` to get a navigation bar in our `ContentView`.
The `.onAppear { /*...*/ }` block we put on `ContentView()` is for debugging only. On some versions of Xcode, the soft keyboard is very laggy when the app is run in debug mode, tethered to Xcode; this `.onAppear {}` block shakes the keyboard out of its lagginess, a bit.
Prop drilling vs. State hoisting
The app will have one instance of the `ChattViewModel`. Almost every View in the app must access this instance of the `ChattViewModel`. We could pass `ChattViewModel` to every View, their child-Views, and so on down the hierarchy of the View tree. In React this is called "prop drilling", as the properties ("props") needed to render the UI are passed down and down to the bottom of the UI hierarchy, even if some intermediate components do not need access to these properties.

Alternatively, we can "hoist" the needed state to the top of the UI sub-tree (which may be the root of the tree in the limit) and have each UI component needing the state data search up its UI sub-tree until it finds the state. The state is said to be "provided" to the sub-tree. The Provider usually maintains a look-up table of available states, identifiable by the type of the state. When the same data type is provided at different levels of the UI tree, the one lowest in the hierarchy above the component searching for the state will match.
The states or values of environment objects are scoped to the sub-tree where the data is provided. The advantage of using an environment object is that we don’t have to pass/drill it down a sub-tree yet Views in the sub-tree can subscribe and react to changes in the object.
In SwiftUI, data hoisted and made available to a View sub-tree is called an environment object. Views within that sub-tree can subscribe to the environment object and be notified of changes.
ChattScrollView
We want to display user exchanges with Ollama in a timeline view. First we define what each row of the timeline contains. Create a new empty file, `ChattScrollView.swift`, and put the following lines in the file:
import SwiftUI
struct ChattView: View {
let chatt: Chatt
let isSender: Bool
var body: some View {
VStack(alignment: isSender ? .trailing : .leading, spacing: 4) {
// chatt displayed here
}
.padding(.horizontal, 16)
}
}
For each `chatt`, we check whether we're displaying the user's message or a response from Ollama. In the former case, we display the row flush right, else flush left. Below, we check if the message is empty. If it's not empty, we first display the sender's name, if it is not from the user. Then we display the message in a "message bubble", followed by the timestamp on the message. We put these three elements inside a `VStack`, which arranges its elements in a vertical stack (a column). Add the following lines inside your `VStack {}` block, replacing `// chatt displayed here`:
if let msg = chatt.message, !msg.isEmpty {
Text(isSender ? "" : chatt.username ?? "")
.font(.subheadline)
.foregroundColor(.purple)
.padding(.leading, 4)
Text(msg)
.padding(.horizontal, 12)
.padding(.vertical, 8)
.background(Color(isSender ? .systemBlue : .systemBackground))
.foregroundColor(isSender ? .white: .primary)
.cornerRadius(20)
.shadow(radius: 2)
.frame(maxWidth: 300, alignment: isSender ? .trailing : .leading)
Text(chatt.timestamp ?? "")
.font(.caption2)
.foregroundColor(.gray)
Spacer()
.frame(maxWidth: .infinity)
}
We put a `Spacer()` in the `VStack` that spans the full width of the screen to force the `VStack` to use the full width.
When we declare a `struct` as conforming to `View`, as in the case of `ChattView`, it is required to have a property called `body` of type `some View`. The `body` property is where you describe your `View`: which UI elements will be included, and how they relate to each other positionally, e.g., one above the other, or side by side. The keyword `some` here means that the actual type will be determined at compile time, depending on actual usage, and it can be any type that conforms to `View`.
If your locale has a language that reads left to right, `leading` is the same as left; for languages that read right to left (RTL), `leading` is the same as right (and conversely for `trailing`). Most of the time you would use `leading` and `trailing` to refer to the two ends of a UI element, reserving "left" and "right" for the physical world, e.g., when giving directions.
You can option-click (⌥-click) on a `View` (e.g., `VStack`, `Text`, or `NavigationStack`) to bring up a menu of possible actions on it. The `Show SwiftUI Inspector` menu item allows you to visually set the paddings, for example. The inspector is also accessible directly by ctrl-option-click (⌃⌥-click), bypassing the menu.
DSL
Notice how type inference and the use of trailing closures make `HStack`, `VStack`, `NavigationStack`, etc. look and act like keywords of a programming language used to describe the UI, separate from Swift. Hence SwiftUI is also considered a "domain-specific language" (DSL), the "domain" in this case being UI description.
Now that we have a description of each row, we can put the rows in a list. Put the following View in your `ChattScrollView.swift` file, outside `ChattView`:
struct ChattScrollView: View {
@Environment(ChattViewModel.self) private var vm
var body: some View {
ScrollView {
LazyVStack {
ForEach(ChattStore.shared.chatts) {
ChattView(chatt: $0, isSender: $0.username == vm.username)
}
}
}
}
}
For each element in the `chatts` array in `ChattStore`, `ForEach` has `ChattView` construct and return a `View`, which `LazyVStack` then displays. `LazyVStack` only loads array elements that are visible on screen. Recall that we have previously tagged `ChattStore` as `@Observable`. When a View accesses `ChattStore`'s `chatts` property, SwiftUI automatically subscribes the View to the `chatts` property so that the View can be automatically recomputed and re-rendered when `chatts` is modified.
`ChattScrollView` helps `ChattView` determine whether a `chatt` belongs to the user by comparing the sender's `username` against the `username` stored in the viewmodel obtained from SwiftUI's environment.
SubmitButton
While `ChattView` displays each `chatt` and `ChattScrollView` puts the `ChattView`s in a scrollable list, `SubmitButton` actually sends each user prompt to the backend, receives Ollama's response, and puts both in the `chatts` array for `ChattScrollView` to display.

In your `ContentView.swift` file, put the following code below `import SwiftUI`:
import Observation
struct SubmitButton: View {
@Binding var scrollProxy: ScrollViewProxy?
@Environment(ChattViewModel.self) private var vm
@State private var isSending = false
var body: some View {
Button {
isSending = true
Task (priority: .background){
await ChattStore.shared.llmPrompt(
Chatt(username: vm.model,
message: vm.message,
timestamp: Date().ISO8601Format()),
errMsg: Bindable(vm).errMsg)
// completion code
}
} label: {
// icons
}
// modifiers
}
}
When the button is clicked, we set `isSending` to `true` and call `llmPrompt(_:errMsg:)` with the LLM model, stored in the viewmodel's `model` property, as the username, and the user's prompt, stored in the viewmodel's `message` property.
The viewmodel is obtainable from SwiftUI's environment. The `errMsg` property in the viewmodel is passed as a `Bindable` so that it can be modified by `llmPrompt(_:errMsg:)`: think of it like pass-by-reference (it's not actually pass-by-reference, but you gain the same capability to modify the variable). In calling `llmPrompt(_:errMsg:)`, we also specify that the asynchronous function is to be run with `background` priority, which could have it scheduled on a background thread.
Upon returning from `llmPrompt(_:errMsg:)`, we reset `vm.message` and `isSending`, check whether any error has been reported, and set `vm.showError` accordingly. Then we scroll the display to the bottom of the displayed `chatts`. The last step must be done at `userInitiated` priority to be visible to the user. Add the following code inside the `Task {}` block, replacing the comment `// completion code`:
vm.message = ""
isSending = false
vm.showError = !vm.errMsg.isEmpty
Task (priority: .userInitiated) {
withAnimation {
scrollProxy?.scrollTo(ChattStore.shared.chatts.last?.id, anchor: .bottom)
}
}
For the button’s label
, we provide two icons: one to show a “loading” view if
we’re still waiting for Ollama’s response (isSending
is true
) and one to show
a “paperplane” submit icon otherwise. Add the following code inside the
label:{}
block, replacing the comment // icons
:
if isSending {
ProgressView()
.progressViewStyle(CircularProgressViewStyle(tint: .secondary))
.padding(10)
} else {
Image(systemName: "paperplane.fill")
.foregroundColor(vm.message.isEmpty ? .gray : .yellow)
.padding(10)
}
We also disable the button if `isSending` is `true` or if there's no message to send. Add the following modifiers to `Button` by replacing the comment `// modifiers`:
.disabled(isSending || vm.message.isEmpty)
.background(Color(isSending || vm.message.isEmpty ? .secondarySystemBackground : .systemBlue))
.clipShape(Circle())
.padding(.trailing)
ContentView
We now have all the pieces we need to build our `ContentView`. Assuming you have commented out or deleted `#Preview` as described earlier, replace your `struct ContentView` definition with:
struct ContentView: View {
@Environment(ChattViewModel.self) private var vm
@State private var scrollProxy: ScrollViewProxy?
@FocusState private var messageInFocus: Bool // tap background to dismiss kbd
var body: some View {
VStack {
ScrollViewReader { proxy in
ChattScrollView()
.onAppear {
scrollProxy = proxy
}
}
// prompt input and submit
}
// tap background to dismiss kbd
.navigationTitle("llmPrompt")
.navigationBarTitleDisplayMode(.inline)
// show error in an alert dialog
}
}
`ContentView` puts the `ChattScrollView` at the top of its column (`VStack`). `ChattScrollView` is wrapped in a `ScrollViewReader`, which allows us to programmatically "scroll" the view using the `proxy` handle, which we store in the structure-wide `scrollProxy` so that it is available outside the `ScrollViewReader` closure. We also give our `ContentView` the title `llmPrompt` in the navigation bar at the top of the screen.
Below `ChattScrollView`, we now put a text box, where the user can enter their Ollama prompt, and the `SubmitButton`. We put the text box and button inside an `HStack` (horizontal stack, or row). Elements in an `HStack` are displayed side by side in a row. Replace `// prompt input and submit` with:
HStack (alignment: .bottom) {
TextField(vm.instruction, text: Bindable(vm).message)
.focused($messageInFocus) // to dismiss keyboard
.textFieldStyle(.roundedBorder)
.cornerRadius(20)
.shadow(radius: 2)
.background(Color(.clear))
.border(Color(.clear))
SubmitButton(scrollProxy: $scrollProxy)
}
.padding(EdgeInsets(top: 0, leading: 20, bottom: 8, trailing: 0))
Similar to how we passed `vm.errMsg` to `llmPrompt(_:errMsg:)` as a `Bindable`, we now pass `vm.message` to `TextField` as a `Bindable` so that when the user types into the `TextField`, the `TextField` can modify `vm.message` as if it were passed by reference. We also give `vm.instruction` to `TextField()`, which will be shown as "background" placeholder text that automatically goes away when the user starts typing.
`SubmitButton` uses `scrollProxy` to programmatically scroll the screen to the last item it added to the `chatts` array. To do that, `SubmitButton` must be able to access the proxy that `ContentView` stores, so `scrollProxy` is passed to it as a `Binding`. However, since `scrollProxy` is declared as a `@State` variable, not as part of an `@Observable` class, we don't need to use `Bindable` to pass it as a `Binding`; we can use its `projectedValue`, signified by the `$` sign, instead.
When the user taps anywhere on the screen other than the `TextField`, we dismiss the soft keyboard. Replace `// tap background to dismiss kbd` near the bottom of the definition of `ContentView` with:
.contentShape(.rect)
.onTapGesture {
messageInFocus.toggle()
}
Before we leave `ContentView`, we check whether `vm.showError` is true. If so, we show an alert dialog with the error message in `vm.errMsg`. Replace `// show error in an alert dialog` with:
.alert("LLM Error", isPresented: Bindable(vm).showError) {
Button("OK") {
vm.errMsg = ""
}
} message: {
Text(vm.errMsg)
}
Congratulations! You’re done with the front end! (Don’t forget to work on the backend!)
Run and test to verify and debug
You should now be able to run your front end against the provided back end on `mada.eecs.umich.edu`. Change `serverUrl` in `ChattStore` from `YOUR_SERVER_IP` to `mada.eecs.umich.edu`.
If you’re not familiar with how to run and test your code, please review the instructions in the Getting Started with iOS Development.
Completing the back end
Once you’re satisfied that your front end is working correctly, follow the back-end spec to build your own back end:
With your back end completed, return here to prepare your front end to connect to your back end via HTTP/2 with HTTPS.
Installing your self-signed certificate
Download a copy of your `chatterd.crt` to `YOUR_TUTORIALS` on your laptop. Enter the following commands:

laptop$ cd YOUR_TUTORIALS
laptop$ scp -i reactive.pem ubuntu@YOUR_SERVER_IP:reactive/chatterd.crt chatterd.crt

Install your `chatterd.crt` onto iOS:
On iOS simulator
Drag `chatterd.crt` from your laptop and drop it on the home screen of your simulator. That's it!

To test the installation, launch a web browser on the simulator and access your server at `https://YOUR_SERVER_IP/llmprompt`.
On iOS device
AirDrop `chatterd.crt` to your iPhone or email it to yourself. Then on your device:
WARNING: DO ALL 10 STEPS: IT IS A COMMON ERROR TO MISS THE LAST THREE STEPS!
- If you AirDropped your `chatterd.crt`, skip to the next step. If you emailed the certificate to yourself, view your email and tap the attached `chatterd.crt`. If you aren't using Apple's Mail app on your iPhone, you may have to "share" the cert and choose `Save to Files`, then launch the `Files` app on your phone, locate your `chatterd.crt` in the `Downloads` folder, and tap it.
- You should see a `Profile Downloaded` dialog box pop up.
- Go to `Settings > General > VPN & Device Management` and tap on the profile with `YOUR_SERVER_IP`.
- At the upper right corner of the screen, tap `Install`.
- Enter your passcode.
- Tap `Install` at the upper right corner of the screen again.
- And tap the somewhat dimmed-out `Install` button.
- Tap `Done` at the upper right corner of the screen.
- Go back to `Settings > General`.
- Go to `[Settings > General >] About > Certificate Trust Settings`.
- Bravely slide the toggle button next to `YOUR_SERVER_IP` to enable full trust of your CA's certificate and tap `Continue` on the dialog box that pops up.
To test the installation, launch a web browser on your device and access your server at `https://YOUR_SERVER_IP/llmprompt`. Since `/llmprompt` does not have a `GET` method, the browser may say, `Cannot GET /llmprompt`. As long as you're not getting a security-related error message, your self-signed certificate is installed correctly.

You can retrace your steps to remove the certificate when you don't need it anymore.
If you run into problems using HTTPS on your device, the error code displayed by Xcode may help you debug. This post has a list of them near the end of the thread.
Finally, change the serverUrl property of your ChattStore class from mada.eecs.umich.edu to YOUR_SERVER_IP.
Build and run your app and you should now be able to connect your mobile front end to your back end via HTTPS.
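For concreteness, here is a minimal sketch of what that change might look like. Only the serverUrl property and the ChattStore class name come from this tutorial; the singleton shape and the placeholder IP (203.0.113.7) are assumptions:

```swift
import Foundation

// Minimal sketch: serverUrl and ChattStore are named in the tutorial;
// the singleton shape is assumed, not prescribed by the spec.
final class ChattStore {
    static let shared = ChattStore()
    private init() {}

    // Was "https://mada.eecs.umich.edu/"; replace the placeholder
    // 203.0.113.7 with YOUR_SERVER_IP.
    let serverUrl = "https://203.0.113.7/"
}

print(ChattStore.shared.serverUrl)
```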
Your front end must work with both mada.eecs.umich.edu and your backend.
You will not get full credit if your submitted front end is not set up to work with your backend!
Front-end submission guidelines
We will only grade files committed to the main branch. If you've created multiple branches, please merge them all to the main branch for submission.
Push your front-end code to the same GitHub repo to which you've submitted your back-end code:
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on the GitHub repo you created at the start of this tutorial
- Add a Summary to your changes and click Commit to main at the bottom of the left pane
- Since you have already pushed your back-end code, you'll have to click Pull Origin to sync up the repo on your laptop
- Finally, click Push Origin to push all changes to GitHub
Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder llmprompt. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don't delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive, only files needed for grading will be pushed to GitHub.
reactive
|-- chatterd
|-- chatterd.crt
|-- llmprompt
    |-- swiftUIChatter
        |-- swiftUIChatter.xcodeproj
        |-- swiftUIChatter
Verify that your Git repo is set up correctly: on your laptop, grab a fresh clone of your repo, then build and run your submission to make sure that it works. You will get ZERO points if your tutorial doesn't build, run, or open.
IMPORTANT: If you work in a team, put your teammate's name and uniqname in your repo's README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we'd know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don't need a README.md if you work by yourself.
Invite eecsreactive@umich.edu to your GitHub repo. Enter your uniqname (and your teammate's) and the link to your GitHub repo on the Tutorial and Project Links sheet. The request for teaming information is redundant by design.
References
General iOS and Swift
Getting Started with SwiftUI
- Quick guide on SwiftUI essentials
- A guide to the SwiftUI layout system - Part 1
- How to effectively leverage the power of new #Preview feature in SwiftUI
SwiftUI at WWDC
- Introducing SwiftUI: Building Your First App
- Introduction to SwiftUI
- WWDC20: Advancements in SwiftUI
- SwiftUI Essentials
- App Essentials in SwiftUI
- Data Flow Through SwiftUI
- Data Essentials in SwiftUI
- Stacks, Grids, and Outlines in SwiftUI
- Integrating SwiftUI
SwiftUI Programming
- The New Navigation System in SwiftUI
- Custom navigation bar title view in SwiftUI
- How to add button to navigation bar in SwiftUI
- The future of SwiftUI navigation (?)
- How to create views in a loop using ForEach
- How to convert UIColor to SwiftUI's Color
State Management
- Singleton
- State and Data Flow
- The @State Property Wrapper in SwiftUI Explained
- Discover Observation in SwiftUI
- A Deep Dive into Observation
- Working with @Binding in SwiftUI
- Stranger things around SwiftUI's state
- The Inner Workings of State Properties in SwiftUI
- Observer vs Pub-Sub pattern
- Observation
- ObservationIgnored
- EnvironmentValues
- View.environment(::)
- SwiftUI View Lifecycle
- View modifiers
- Great SwiftUI: see the section "Prefer No Effect Modifiers over Conditional Views"
Toolbar and keyboard
- How to create a toolbar and add buttons to it
- How to dismiss the keyboard for a TextField
- How to control the tappable area of a view using contentShape()
- Disabling user interactivity with allowsHitTesting() discusses contentShape() near the end of the article.
- SwiftUI Alert: Best Practices and Examples
Async/await
Networking
Working with JSON
- Swift Tip: String to Data and Back for use in getChatts()
- Convert array to JSON in swift for use in postChatt(_:)
- How can I define Content-type in Swift using NSURLSession
- How to parse JSON using Coding Keys in iOS
NDJSON
Prepared by Ollie Elmgren, Tiberiu Vilcu, Nowrin Mohamed, Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin | Last updated: August 27th, 2025 |