Project 1: llmDraft SwiftUI
Cover Page
If you have not done the llmPrompt and Chatter
tutorials, please complete them first. The rest of this spec will assume you have completed
both tutorials and continue from where they left off.
Preparing your GitHub repo
- On your laptop, navigate to YOUR_TUTORIALS/
- Create a zip of your chatter folder
- Rename your chatter folder llmdraft
- Push your local YOUR_TUTORIALS/ repo to GitHub and make sure there are no git issues: git push
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on your reactive GitHub repo
- Add Summary to your changes and click Commit to main
- If you have pushed other changes to your Git repo, click Pull Origin to synch up the clone on your laptop
- Finally click on Push Origin to push changes to GitHub
Go to the GitHub website to confirm that your folders follow this structure outline:
reactive
|-- chatterd
|-- chatterd.crt
|-- llmdraft
    |-- swiftUIChatter
        |-- swiftUIChatter.xcodeproj
        |-- swiftUIChatter
In addition, the YOUR_TUTORIALS folder on your laptop should contain the llmprompt.zip and chatter.zip files.
If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.
Chatt
We will use the Chatt structure used in the Chatter tutorial. This is the one LLM-related
tutorial in which we don't need individual chatts in the chatts timeline to be observable,
and hence we do not need Chatt to be an @Observable class.
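For reference, here is a minimal sketch of Chatt as a plain struct; the property names are borrowed from the Chatter tutorial and should be matched to whatever your Chatt already declares:

// Plain struct: no @Observable needed since individual chatts in the
// timeline are not observed in this project.
struct Chatt: Identifiable {
    var name: String? = nil       // later also reused to carry the LLM model name
    var message: String? = nil
    var id: UUID? = nil
    var timestamp: String? = nil
}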
Rewrite UI
The rewrite UI consists of an AI button to the left of the text box at the bottom of the screen
in ContentView. Taking inspiration from the SubmitButton in the same file, create an “AI”
button with the iOS built-in “sparkles” as its Image.
Disable the button if the text box is empty or if we're already in the process of asking Ollama for a suggestion and awaiting its reply. When the button is disabled, set the icon color to gray; otherwise set it to blue.
When the enabled button is clicked, it sends the draft in the text box, along with a "rewrite" prompt, to Ollama on the back end. We will discuss this process in its own section later.
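For example, here is a rough sketch of such a button; the isLlmRequestOngoing flag is an illustrative name for your "request in progress" state, not something prescribed by this spec:

Button {
    // send vm.message with the "rewrite" prompt to Ollama (see the sections below)
} label: {
    Image(systemName: "sparkles")
        .foregroundColor(vm.message.isEmpty || vm.isLlmRequestOngoing ? .gray : .blue)
}
.disabled(vm.message.isEmpty || vm.isLlmRequestOngoing)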
To make TextField automatically enlarge the text box and make it scrollable when displaying
Ollama’s reply, add an axis: .vertical argument to the TextField UI element and add
a .lineLimit modifier like so:
TextField(vm.instruction, text: Bindable(vm).message, axis: .vertical)
.lineLimit(1...6)
If you’re re-using vm.instruction to hold Ollama’s reply (draft), you will need to change
it to a var in ChattViewModel. I prefer to use a new, separate observable property in the
view model to hold Ollama’s reply and keep the default instruction in vm.instruction.
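If it helps, here is one possible arrangement of the view model; llmReply, errMsg, and isLlmRequestOngoing are purely illustrative names (and if you stream the draft straight into message, as described in the promptLlm section below, you won't need a separate llmReply at all):

@Observable
final class ChattViewModel {
    let instruction = "Enter message"   // keep your tutorial's actual default instruction
    var message = ""                    // bound to the TextField
    var llmReply = ""                   // optional: separate home for Ollama's reply
    var errMsg = ""                     // assumed home for error messages
    var isLlmRequestOngoing = false     // set while awaiting Ollama
}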
Reply UI
The reply UI consists of adding the onLongPressGesture recognizer as a
modifier to the message bubble in the ChattView type in the ChattScrollView.swift file.
When the user long presses on a chatt posted by another user and there is no outstanding
request to Ollama for a rewrite or another reply suggestion, we send the message in the
selected posted chatt, along with a “reply” prompt, to Ollama in the back end.
We will discuss this process in its own section later.
A REQUIREMENT of the reply feature: the above two conditions must be met to trigger
the feature. If the user long-presses on their own posted chatt or if there is already an
ongoing request to Ollama that has not returned, the reply feature is not triggered.
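A sketch of the gesture, where isOwnChatt(_:) stands in for however you determine that a chatt was posted by the signed-in user and isLlmRequestOngoing is the illustrative "request in progress" flag:

.onLongPressGesture {
    // REQUIREMENT: trigger only for someone else's chatt and only when no request is outstanding
    guard !isOwnChatt(chatt), !vm.isLlmRequestOngoing else { return }
    vm.message = chatt.message ?? ""   // copy the selected chatt's message into the view model
    // then send it with the "reply" prompt to Ollama, as described in the next sections
}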
Sending request prompt to Ollama
The final part of the assignment is to send the request prompt to Ollama. In my solution to the assignment, I split this task into two parts:
- a ChattStore method to handle the networking with the back end, and
- a ChattViewModel method to put together the prompt to send to Ollama using the networking method above.
llmDraft(_:draft:errMsg:)
I name the method to handle networking with the back end, llmDraft(_:draft:errMsg:).
Its implementation is patterned after llmPrompt(_:errMsg:) from the
llmPrompt tutorial. Here’s the full
signature I use:
func llmDraft(_ chatt: Chatt, draft: Binding<String>, errMsg: Binding<String>) async { }
The first and last parameters are the same as those of llmPrompt(_:errMsg:).
The second parameter will hold the draft returned by Ollama. Since Ollama's reply is
streamed, we will update the argument passed to the draft parameter every
time an element of the stream arrives. Notice that draft is of type Binding<String>,
which allows its argument to be observed by a SwiftUI View. When we change the
wrappedValue property of draft, SwiftUI will re-render the View observing
the argument. Take a look at how llmPrompt(_:errMsg:) updates errMsg. You can
similarly update draft.wrappedValue to update the draft parameter.
Unlike the llmPrompt tutorial, in this tutorial we do not need to show a timeline of
user exchanges with Ollama. Thus we do not need to create and append a dummy chatt message
to the chatts array.
As in llmPrompt(_:errMsg:), create a JSON object from the chatt parameter. This
is the prompt you will send to Ollama through chatterd’s llmprompt API, the same API
used in the llmPrompt tutorial. Once we get a valid (HTTP status code 200) response,
we can decode each line of the returning stream directly into an OllamaReply instance and
pass the instance’s response property to the draft parameter. All error handling from
llmPrompt(_:errMsg:) can be used as is.
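To make the above concrete, here is a minimal sketch of llmDraft(_:draft:errMsg:). It assumes the serverUrl constant, the llmprompt/ path, and the OllamaReply type from the llmPrompt tutorial; the JSON keys and error strings are placeholders, so follow your own llmPrompt(_:errMsg:) for the exact details:

func llmDraft(_ chatt: Chatt, draft: Binding<String>, errMsg: Binding<String>) async {
    guard let apiUrl = URL(string: "\(serverUrl)llmprompt/") else {
        errMsg.wrappedValue = "llmDraft: Bad URL"
        return
    }

    var request = URLRequest(url: apiUrl)
    request.setValue("application/json; charset=utf-8", forHTTPHeaderField: "Content-Type")
    request.httpMethod = "POST"
    // JSON keys are placeholders; use the same ones as in your llmPrompt tutorial
    request.httpBody = try? JSONSerialization.data(withJSONObject: [
        "name": chatt.name ?? "",
        "message": chatt.message ?? ""
    ])

    do {
        let (bytes, response) = try await URLSession.shared.bytes(for: request)
        guard (response as? HTTPURLResponse)?.statusCode == 200 else {
            errMsg.wrappedValue = "llmDraft: HTTP status code not 200"
            return
        }
        // Ollama streams its reply one JSON object per line: decode each line and
        // append its response text to the draft binding so the observing View re-renders.
        for try await line in bytes.lines {
            if let reply = try? JSONDecoder().decode(OllamaReply.self, from: Data(line.utf8)) {
                draft.wrappedValue += reply.response
            }
        }
    } catch {
        errMsg.wrappedValue = "llmDraft: \(error.localizedDescription)"
    }
}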
promptLlm(_:prompt:)
My promptLlm(_:prompt:) function prepares a Chatt message with the appropriate
prompt and calls llmDraft(_:draft:errMsg:). I put this function in the ContentView.swift
file. Here’s the function signature I use:
func promptLlm(_ vm: ChattViewModel, prompt: String) async { }
When the user clicks the AI button to issue a rewrite request, I call
promptLlm(_:prompt:) with the following rewrite prompt:
"You are a poet. Rewrite the content below to a poetic version. Don't list
options. Here's the content I want you to rewrite:"
Feel free to create your own prompt, though I found the last phrase, “Here’s the content I want you to rewrite:” most helpful, especially for short content. It seems to help the model recognize and separate the content from the prompt instruction.
In my promptLlm(_:prompt:), the chatt I pass to llmDraft(_:draft:errMsg:) has its name
property set to the name of the LLM model I want to use. If you’re running your back end
on a *-micro instance, you may want to pull and use the qwen3:0.6b model. I found
gemma3:270m to not follow my prompt instructions reliably.
Then I concatenate the message I want Ollama to work on with the prompt parameter
passed in to promptLlm(_:prompt:).
Once I’ve stored the prompt and the view model’s message property into the chatt variable
I will pass to llmDraft(_:draft:errMsg:), I clear the view model’s message property
so that I can use it to store the draft returned by Ollama.
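Putting those steps together, a sketch of promptLlm(_:prompt:) could look like the following; the use of ChattStore.shared, an errMsg property on the view model, and the model name are assumptions carried over from the tutorials:

func promptLlm(_ vm: ChattViewModel, prompt: String) async {
    // name carries the LLM model; message is the prompt followed by the user's draft
    let chatt = Chatt(name: "qwen3:0.6b",
                      message: "\(prompt) \(vm.message)")
    vm.message = ""   // clear it so it can hold the draft streamed back by Ollama
    await ChattStore.shared.llmDraft(chatt,
                                     draft: Bindable(vm).message,
                                     errMsg: Bindable(vm).errMsg)
}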
In the case when the user clicks the AI button to issue a rewrite request, the message
property of ChattViewModel already contains the draft message the user wants
Ollama to rewrite. When issuing a reply draft request, however, the message
is held in the selected chatt posted by another user. We must first copy this
message into the message property of ChattViewModel before calling promptLlm(_:prompt:).
When calling promptLlm(_:prompt:) to request a reply draft, this is the reply prompt I use,
"You are a poet. Write a poetic reply to this message I received. Don't list
options. Here's the message I want you to write a poetic reply to:"
In both cases, I always set a flag to indicate that an Ollama request is
“in progress”, before calling promptLlm(_:prompt:), to prevent multiple
ongoing requests.
The setting of this flag is done while our Chatter app is running on the MainActor, so it is thread-safe, i.e., there won't be multiple threads trying to set this flag at the same time.
To ensure interactivity of the app, I call promptLlm(_:prompt:) as an asynchronous
task with a .background priority. You can see how this is done when SubmitButton
calls postChatt(_:errMsg:). To achieve thread-safety, the setting of the flag to
prevent multiple ongoing Ollama requests must be done before I spawn off this other
Task. Once promptLlm(_:prompt:) finishes and returns, I reset the flag to indicate that
the Ollama request is concluded and the user can start another one.
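For instance, the AI button's action could follow this sketch, where isLlmRequestOngoing and rewritePrompt are illustrative names; the long-press reply case uses the same pattern:

// inside the AI button's action closure, which runs on the MainActor
vm.isLlmRequestOngoing = true          // set before spawning the Task
Task(priority: .background) {
    await promptLlm(vm, prompt: rewritePrompt)
    vm.isLlmRequestOngoing = false     // request concluded; the user can start another
}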
Additional UX (optional)
The following UX features are intended to increase the perceived responsiveness and interactivity of the app. You can choose to implement them to match the demo video, but you won’t be deducted points if you don’t (nor will there be extra credit if you do!).
- When an Ollama request is ongoing, the message in the text box at the bottom of ContentView changes to notify the user that the request is ongoing.
- When a chatt posted by another user is selected to generate a reply, the background of the "selected" chatt is displayed in gray until a reply draft is fully received.
- This one is more a workaround than a feature: when the text box is not in focus, i.e., shown without the soft keyboard visible, it doesn't scroll the text streamed to it beyond the first few lines. Thus in my implementation of the long-press gesture, I also set focus on the text box with:
vm.messageInFocus.wrappedValue = true
As you can see above, I added a messageInFocus property to my ChattViewModel:
var messageInFocus: FocusState<Bool>.Binding!
which I initialize in my ContentView's .task modifier:
vm.messageInFocus = $messageInFocus
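For context, here is a sketch of how the focus pieces could fit together in ContentView, stripped down to just the focus-related parts and assuming the TextField picks up the binding with the .focused(_:) modifier:

struct ContentView: View {
    @FocusState private var messageInFocus: Bool
    private let vm = ChattViewModel()   // however your ContentView obtains its view model

    var body: some View {
        TextField(vm.instruction, text: Bindable(vm).message, axis: .vertical)
            .lineLimit(1...6)
            .focused($messageInFocus)
            .task {
                // hand the FocusState binding to the view model (the messageInFocus property above)
                vm.messageInFocus = $messageInFocus
            }
    }
}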
That’s all for llmDraft!
Run and test to verify and debug
As mentioned earlier, pull and use the qwen3:0.6b model if you are running your back end on
a *-micro instance.
Be sure to run your front end against your back end. You will not get full credit if your front end is not set up to work with your back end!
Submission guidelines
If you have not submitted your back end as part of completing the llmPrompt and Chatter
tutorials, follow the instructions in those tutorials to submit your back end. Otherwise,
you don’t need to submit your back end again.
Submit your updated front end for llmDraft. As usual, we will only grade files committed
to the main branch. If you use multiple branches, please merge them all to the main
branch for submission.
Push your front-end code to the same GitHub repo you’ve submitted your back-end code:
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on the GitHub repo you created at the start of this tutorial
- Add Summary to your changes and click Commit to main at the bottom of the left pane
- If you have pushed code to your repo, click Pull Origin to synch up the repo on your laptop
- Finally click Push Origin to push all changes to GitHub
Go to the GitHub website to confirm that your front-end files have been uploaded
to your GitHub repo under the folder llmdraft. Confirm that your repo has a folder structure
outline similar to the following. If your folder structure is not as outlined, our script
will not pick up your submission and, further, you may have problems getting started on later
tutorials. There could be other files or folders in your local folder not listed below; don't
delete them. As long as you have installed the course .gitignore as per the instructions in
Preparing GitHub for Reactive Tutorials,
only files needed for grading will be pushed to GitHub.
reactive
|-- chatterd
|-- chatterd.crt
|-- llmdraft
    |-- swiftUIChatter
        |-- swiftUIChatter.xcodeproj
        |-- swiftUIChatter
In addition, the YOUR_TUTORIALS folder on your laptop should contain the llmprompt.zip and chatter.zip files.
Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get a ZERO if your submission doesn't build, run, or open.
IMPORTANT: If you work in a team, put your teammate's name and uniqname in your repo's README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don't need a README.md if you work by yourself.
Review your information on the Tutorial and Project Links sheet. If you've changed your teaming arrangement since the previous tutorial, please update your entry. If you're using a different GitHub repo from the previous tutorial's, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.
| Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, Sugih Jamin | Last updated: January 10th, 2026 |