Project 1: llmDraft SwiftUI

Cover Page

DUE Wed, 09/24, 2 pm

If you have not done the llmPrompt and Chatter tutorials, please complete them first. The rest of this spec will assume you have completed both tutorials and continue from where they left off.

Preparing your GitHub repo

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- project1
        |-- swiftUIChatter
            |-- swiftUIChatter.xcodeproj
            |-- swiftUIChatter

Your TUTORIALS folder on your laptop should also contain the llmprompt.zip and chatter.zip files.

If your GitHub repo does not have the above folder structure, we will not be able to grade your assignment and you will get a ZERO.

Chatt

We will use the same Chatt structure as in the two tutorials; no change to Chatt.swift is needed.

Rewrite UI

The rewrite UI consists of an AI button to the left of the text box at the bottom of the screen in ContentView. Taking inspiration from the SubmitButton in the same file, create an “AI” button with the iOS built-in “sparkles” system image as its Image.

Disable the button if the text box is empty or if we’re already in the process of asking Ollama for a suggestion and awaiting its reply. When the button is disabled, set the icon color to gray; otherwise set it to blue.

When the enabled button is tapped, it sends the draft in the text box, along with a “rewrite” prompt, to Ollama on the backend. We will discuss this process in its own section later.
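Here’s a minimal sketch of such a button, assuming the view model is available as vm and exposes an isRetrieving flag tracking an outstanding Ollama request (the flag name is illustrative, not from this spec):

    Button {
        // on tap: send the draft plus the "rewrite" prompt to Ollama
        // via promptLlm(_:), discussed later in this spec
    } label: {
        Image(systemName: "sparkles")
            .foregroundStyle(vm.message.isEmpty || vm.isRetrieving ? Color.gray : Color.blue)
    }
    .disabled(vm.message.isEmpty || vm.isRetrieving)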

To make the TextField automatically enlarge the text box and make it scrollable when displaying Ollama’s reply, add an axis: .vertical argument to the TextField UI element and a lineLimit modifier, like so:

                TextField(vm.instruction, text: Bindable(vm).message, axis: .vertical)
                    .lineLimit(1...6)

If you’re re-using vm.instruction to hold Ollama’s reply (draft), you will need to change it to a var in ChattViewModel. I prefer to use a new, separate observable property in the view model to hold Ollama’s reply so that I can keep the default instruction in vm.instruction.
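For example (a sketch only; draft is a hypothetical property name):

    // in ChattViewModel: keep the default instruction untouched and add a
    // separate observable property to hold Ollama's reply
    let instruction = "..."   // default instruction, stays a let
    var draft = ""            // hypothetical property holding Ollama's reply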

Reply UI

The reply UI consists of adding the onLongPressGesture recognizer as a modifier to the message bubble in the ChattView struct in the ChattScrollView.swift file.

When the user long-presses a chatt posted by another user and there is no outstanding request to Ollama for a rewrite or another reply suggestion, we send the message in the selected posted chatt, along with a “reply” prompt, to Ollama on the backend. We will discuss this process in its own section later.

Both of the above conditions are required to trigger the reply feature: if the user long-presses their own posted chatt, or if there is already an ongoing request to Ollama that has not returned, the reply feature is not triggered.
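A minimal sketch of the gesture, assuming ChattView can tell whether the displayed chatt was posted by the current user (isSelfPosted) and has access to the view model (vm); both names are illustrative:

    .onLongPressGesture {
        // only chatts posted by others, and only if no Ollama request is outstanding
        if !isSelfPosted && !vm.isRetrieving {
            // send the selected chatt's message with the "reply" prompt
            // to Ollama via promptLlm(_:), described below
        }
    }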

Sending request prompt to Ollama

The final part of the assignment is to send the request prompt to Ollama. In my solution to the assignment, I split this task into two parts:

  1. a ChattStore method to handle the networking with the backend, and
  2. a ChattViewModel method to put together the prompt to send to Ollama using the networking method above.

llmDraft(_:draft:errMsg:)

I name the method to handle networking with the backend, llmDraft(_:draft:errMsg:). Its implementation is patterned after llmPrompt(_:errMsg:) from the llmPrompt tutorial. Here’s the full signature I use:

    func llmDraft(_ chatt: Chatt, draft: Binding<String>, errMsg: Binding<String>) async { }

The first and last parameters are the same as those of llmPrompt(_:errMsg:). The second parameter will hold the draft returned by Ollama. Since Ollama’s reply is streamed, we will update the argument passed to the draft parameter every time an element of the stream arrives. Notice that draft is of type Binding<String>, which allows its argument to be observed by a SwiftUI View. When we change the wrappedValue property of draft, SwiftUI will re-render the View observing the argument. Take a look at how llmPrompt(_:errMsg:) updates errMsg, and similarly update draft.wrappedValue when you need to update the draft parameter.

As in llmPrompt(_:errMsg:), first create a JSON object from the chatt parameter. This is the prompt you will send to Ollama through chatterd’s llmprompt API, the same API used in llmPrompt(_:errMsg:). We wait for a response from Ollama and check that it is a valid (HTTP status code 200) response.

The subsequent code is where this function differs from llmPrompt(_:errMsg:). Unlike llmPrompt(_:errMsg:), we do not need to show a timeline of user exchanges with Ollama. Thus, instead of creating a dummy chatt message that we append to the chatts array, we can decode each line of the returned stream directly into an OllamaReply and pass its response property to the draft parameter. We can adopt all the error-handling code from llmPrompt(_:errMsg:), though.
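Putting these pieces together, a sketch of llmDraft(_:draft:errMsg:) might look like the following. The serverUrl constant, the JSON field names, and the optionality of Chatt’s properties are assumptions carried over from the llmPrompt tutorial; adjust them to match your own code (Binding requires import SwiftUI in ChattStore.swift):

    func llmDraft(_ chatt: Chatt, draft: Binding<String>, errMsg: Binding<String>) async {
        guard let apiUrl = URL(string: "\(serverUrl)llmprompt") else {  // same endpoint as llmPrompt(_:errMsg:)
            errMsg.wrappedValue = "llmDraft: bad URL"
            return
        }
        var request = URLRequest(url: apiUrl)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        // field names assumed to mirror what llmPrompt(_:errMsg:) sends
        request.httpBody = try? JSONSerialization.data(withJSONObject:
            ["username": chatt.username ?? "", "message": chatt.message ?? ""])

        do {
            let (bytes, response) = try await URLSession.shared.bytes(for: request)
            guard (response as? HTTPURLResponse)?.statusCode == 200 else {
                errMsg.wrappedValue = "llmDraft: HTTP \((response as? HTTPURLResponse)?.statusCode ?? 0)"
                return
            }
            // Ollama streams one JSON object per line; decode each line into an
            // OllamaReply and append its response to the bound draft so the
            // observing View re-renders as the text arrives.
            for try await line in bytes.lines {
                if let reply = try? JSONDecoder().decode(OllamaReply.self, from: Data(line.utf8)) {
                    draft.wrappedValue += reply.response
                }
            }
        } catch {
            errMsg.wrappedValue = "llmDraft: \(error.localizedDescription)"
        }
    }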

promptLlm(_:)

My promptLlm(_:) method puts together a prompt and calls llmDraft(_:draft:errMsg:). To gain easy access to states in the view model, I make promptLlm(_:) a method of ChattViewModel, found in the swiftUIChatterApp.swift file. Here’s the full signature I use:

    func promptLlm(_ prompt: String) async { }

When the user clicks the AI button to issue a rewrite request, I call promptLlm(_:) with a rewrite prompt. This is the rewrite prompt I use:

"You are a poet. Rewrite the content below to a poetic version. Don't list 
options. Here's the content I want you to rewrite: "

You should feel free to create your own prompt, though I found the last phrase, "Here's the content I want you to rewrite: " most helpful, especially for short content. It seems to help the model recognize and separate the content from the prompt instruction.
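You might, for instance, keep the prompt in a constant (the name rewritePrompt is illustrative):

    let rewritePrompt = "You are a poet. Rewrite the content below to a poetic version. " +
        "Don't list options. Here's the content I want you to rewrite: "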

In my promptLlm(_:), I first create a chatt to be passed to llmDraft(_:draft:errMsg:). I set the username of this chatt to the name of the LLM model I want Ollama to use, which should be just tinyllama if you’re using a *-micro instance as your backend. Then I set the message of this chatt to be a concatenation of the prompt parameter passed to promptLlm(_:) with ChattViewModel’s message property, to which the text box at the bottom of ContentView saves its message.

Next, I clear ChattViewModel’s message property, which will be used to store the draft returned by Ollama, and then call llmDraft(_:draft:errMsg:) like so:

        await ChattStore.shared.llmDraft(chatt, draft: Bindable(self).message, errMsg: Bindable(self).errMsg)

The self in Bindable(self) refers to the ChattViewModel class instance. By wrapping it in a Bindable, I’m passing a Binding<String>, matching my llmDraft(_:draft:errMsg:) signature above, which allows the arguments to draft and errMsg to be observable.
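Put together, promptLlm(_:) might look roughly like this sketch; the Chatt initializer arguments are assumptions, and your Chatt may require additional parameters:

    func promptLlm(_ prompt: String) async {
        // username carries the model name; message carries prompt + draft
        let chatt = Chatt(username: "tinyllama", message: prompt + message)
        message = ""   // cleared; will accumulate Ollama's streamed draft
        await ChattStore.shared.llmDraft(chatt, draft: Bindable(self).message,
                                         errMsg: Bindable(self).errMsg)
    }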

While the rewrite function simply takes the content of ChattViewModel’s message property to form part of the prompt, the reply function must first put the selected chatt, posted by another user, into this property, so that when promptLlm(_:) grabs its content to form the prompt, it is already populated with the selected chatt’s message.

This is the reply prompt I use:

"You are a poet. Write a poetic reply to this message I received. Don't list
options. Here's the message I want you to write a poetic reply to: "

In both cases, before calling promptLlm(_:), I always set a flag indicating that an Ollama request is in progress. This is to prevent multiple ongoing requests.

The setting of this flag is done while our Chatter app is running on the MainActor, so it is thread safe, i.e., there won’t be multiple threads trying to set this flag at the same time.

The function promptLlm(_:) is an async function, which means you must await it when calling. Once it finishes and await returns, I reset the flag to indicate that the Ollama request is concluded and the user can start another one.

To ensure the interactivity of the app, I call promptLlm(_:) as an asynchronous task with .background priority. You can see how this is done when SubmitButton calls postChatt(_:errMsg:). To achieve thread safety, the flag that prevents multiple ongoing Ollama requests must be set before you spawn off this other Task.
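Concretely, the AI button’s action (and similarly the long-press handler) might look something like this sketch; isRetrieving and rewritePrompt are illustrative names:

    vm.isRetrieving = true                 // set on the MainActor, before spawning the Task
    Task(priority: .background) {
        await vm.promptLlm(rewritePrompt)  // or the "reply" prompt for the reply feature
        vm.isRetrieving = false            // request concluded; user can start another
    }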

Additional UX (optional)

The following UX features are intended to increase the perceived responsiveness and interactivity of the app. You can choose to implement them to match the demo video, but no points will be deducted if you don’t (nor will there be extra credit if you do!).

That’s all for Project 1!

Run and test to verify and debug

Be sure to run your front end against your backend. You will not get full credit if your front end is not set up to work with your backend!

Submission guidelines

If you have not submitted your backend as part of completing the llmPrompt and Chatter tutorials, follow the instructions in those tutorials to submit your backend. Otherwise, you don’t need to submit your backend again.

Submit your updated frontend for Project 1. As usual, we will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.

Push your front-end code to the same GitHub repo you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder project1. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive Tutorials, only files needed for grading will be pushed to GitHub.

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- project1
        |-- swiftUIChatter
            |-- swiftUIChatter.xcodeproj
            |-- swiftUIChatter   

Your TUTORIALS folder on your laptop should also contain the llmprompt.zip and chatter.zip files.

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get ZERO points if your project doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement since the previous tutorial, please update your entry. If you’re using a different GitHub repo from the previous tutorial’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.


Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, Sugih Jamin. Last updated: August 24th, 2025.