PA2: llmPlay SwiftUI

Cover Page

DUE Wed, 10/29, 2 pm

Expected behavior

DISCLAIMER: the video demos show you only one aspect of the app’s behavior. They are not a substitute for the spec. If there are any discrepancies between a demo and this spec, please follow the spec. The spec is the single source of truth. If the spec is ambiguous, please consult the teaching staff for clarification.

Preparing your GitHub repo

If you have not completed the llmchat tutorial, please do so first.

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmchat
    |-- pa2
        |-- swiftUIChatter
            |-- swiftUIChatter.xcodeproj
            |-- swiftUIChatter
    # and other files or folders

If the folders in your GitHub repo do not have the above structure, we will not be able to grade your assignment and you will get a ZERO.

Interacting with Ollama

I think of the interaction with Ollama as consisting of three phases:

  1. When the app launches, it starts a game by prompting Ollama, through chatterd, for a set of hints about a city.
  2. As the game progresses, the app interacts with chatterd/Ollama through chatterd’s new llmplay API endpoint: receiving hints and sending guesses.
  3. Finally, the app must be able to receive, recognize, and handle a LatLon SSE event that chatterd sends when Ollama announces a winner.

llmPlay(appID:chatt:hints:winner:errMsg:)

Here’s the full signature I use for llmPlay(appID:chatt:hints:winner:errMsg:) in ChattStore:

    func llmPlay(appID: String, chatt: Chatt,
                 hints: Binding<String>,
                 winner: ((Location) -> ())?,
                 errMsg: Binding<String>) async { }

The first, second, and last parameters are the same as those of llmChat(appID:chatt:errMsg:). The hints parameter is an observable string that holds the hints Ollama returns about the city the user must guess. We update hints the same way we update errMsg: appending each newly arriving chunk to hints accumulates the hints. SwiftUI automatically re-renders any View observing an observable variable, such as hints and errMsg. The winner parameter is a closure that takes a Location as its argument and handles the winning notification. Location is defined in LocManager.swift.

If you build your llmPlay(appID:chatt:hints:winner:errMsg:) by modifying llmChat(appID:chatt:errMsg:), don’t forget to change the API endpoint to llmplay.

When the app launches, to prompt Ollama to start the game, I launch llmPlay(appID:chatt:hints:winner:errMsg:) in the init() method of my swiftUIChatterApp. The message in the chatt I send to start the game consists only of the string "START". I launch llmPlay(appID:chatt:hints:winner:errMsg:) after I’ve started the location updates.
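As a rough sketch of that launch sequence (every name not given in this spec, e.g. `LocManager.shared.startUpdates()`, `viewModel`, and the `Chatt` initializer fields, is an assumption borrowed from the tutorials; adjust to your own code):

```swift
// Hypothetical swiftUIChatterApp.init(); names other than llmPlay's
// parameters are placeholders from the llmchat/maps tutorials.
init() {
    LocManager.shared.startUpdates()   // start location updates first
    let chatt = Chatt(username: "player", message: "START")
    Task {
        await ChattStore.shared.llmPlay(
            appID: appID, chatt: chatt,
            hints: Binding(get: { viewModel.hints }, set: { viewModel.hints = $0 }),
            winner: nil,               // initial prompt can never announce a winner
            errMsg: Binding(get: { viewModel.errMsg }, set: { viewModel.errMsg = $0 }))
    }
}
```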

I collect the hints returned by Ollama in an observable property hints in my ChattViewModel, also in swiftUIChatterApp.swift. Since this initial prompt will never result in any winning notification, I give nil as the argument for the winner parameter.

Unlike in the llmchat tutorial, we do not need to show a timeline of user exchanges with Ollama. Thus, instead of creating a dummy chatt message to append to a chatts array, as we did in llmChat(appID:chatt:errMsg:), you can decode each Message data line of the returned stream into an OllamaReply class and append its content to the hints parameter passed in to llmPlay(appID:chatt:hints:winner:errMsg:). Remember to clear the hints parameter before you start processing Ollama’s reply and appending to it. The handling of the SSE Error event and all other error-handling code can be adapted from llmChat(appID:chatt:errMsg:).
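As a concrete sketch of that per-line handling, assuming OllamaReply carries a single `content` string (match your llmchat tutorial’s actual definition) and with the Binding bookkeeping reduced to an `inout String` for clarity:

```swift
import Foundation

// Assumed shape of OllamaReply; use the definition from your llmchat tutorial.
struct OllamaReply: Codable {
    let content: String
}

// Decode one SSE "data:" line and accumulate its content into hints.
// In llmPlay itself you would append to hints.wrappedValue instead.
func appendHint(from line: String, to hints: inout String) {
    guard line.hasPrefix("data:") else { return }   // ignore non-data lines
    let payload = String(line.dropFirst(5)).trimmingCharacters(in: .whitespaces)
    if let data = payload.data(using: .utf8),
       let reply = try? JSONDecoder().decode(OllamaReply.self, from: data) {
        hints += reply.content                      // accumulate arriving chunks
    }
}
```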

To recognize and handle a LatLon SSE event, you may want to read closely the llmChat(appID:chatt:errMsg:) code for handling the SSE Error event, along with the accompanying explanation in the llmchat tutorial. To implement SSE LatLon event handling, first add a LatLon enum case to the SseEventType at the top of your ChattStore file. Then, for each line of the incoming stream, when you detect an event tag, in addition to flagging an Error event, recognize and flag a LatLon event.
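A minimal sketch of that event-tag detection, assuming your SseEventType is a simple enum and that chatterd names the events "Error" and "LatLon" on the wire (verify both against your llmchat code and your backend):

```swift
import Foundation

// SseEventType with the new latLon case this spec asks for; the message
// case and the on-the-wire event names are assumptions from the tutorial.
enum SseEventType {
    case message, error, latLon
}

// Returns the flagged event type for an "event:" tag line, nil otherwise.
func eventType(for line: String) -> SseEventType? {
    guard line.hasPrefix("event:") else { return nil }
    let name = String(line.dropFirst(6)).trimmingCharacters(in: .whitespaces)
    switch name {
    case "Error":  return .error
    case "LatLon": return .latLon
    default:       return .message
    }
}
```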

When parsing a data line, if it’s part of a LatLon event, decode the line into a Location data class and call the winner closure with the decoded Location as its argument. Since winner can be nil, invoke it with optional chaining, which calls the closure only when it is non-nil:

                            winner?(location)

We will discuss what goes into the winner closure at llmPlay(appID:chatt:hints:winner:errMsg:) use site in SubmitButton later.

Game UI

The Game UI is simply the ContentView with a Map(position:) taking the place of ScrollViewReader(). We want to move the map’s camera around when there’s a winner notification.

First add to your ContentView.swift:

import MapKit

then add an observable cameraPosition property to your ContentView().

    @State var cameraPosition: MapCameraPosition = .userLocation(fallback: .automatic)

As we did when calling Map(position:) in the maps tutorial, call it with the cameraPosition defined above. The map’s camera will automatically move to match wherever this property points.
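One way to move the camera on a winner notification, sketched under the assumption that Location exposes lat and lon properties (check LocManager.swift for the actual names) and that a close-up camera distance suits your demo:

```swift
// Hypothetical winner closure passed to llmPlay(...) at its use site.
// Location's property names (lat, lon) and the distance are assumptions.
winner: { location in
    cameraPosition = .camera(MapCamera(
        centerCoordinate: CLLocationCoordinate2D(latitude: location.lat,
                                                 longitude: location.lon),
        distance: 10_000))   // meters above the winning city; tune to taste
}
```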

You can remove the scrollProxy property since we will no longer be showing a timeline of chatts.

Below all that, we want to show the hints returned by Ollama in a Text View above the TextField View where the user enters their guesses. The content of the Text View should be the observable hints property in ChattViewModel where you’ve collected the hints from Ollama. Put these two text boxes in a VStack().

Next to these two text boxes, we want to show the SubmitButton.
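Putting those pieces together, the bottom of the ContentView body might look roughly like this (`guess` and `viewModel` are placeholder names, not from the spec):

```swift
// Layout sketch: hints Text above the guess TextField, SubmitButton alongside.
HStack(alignment: .bottom) {
    VStack(alignment: .leading) {
        Text(viewModel.hints)                 // hints collected in ChattViewModel
        TextField("Your guess", text: $guess)
            .textFieldStyle(.roundedBorder)
    }
    SubmitButton()
}
.padding()
```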

Finally, as usual, ContentView must pop up an alert dialog box if there were any error messages.

Additional UX (optional)

The following UX feature is intended to increase the perceived responsiveness and interactivity of the app. You can choose to implement it to match the demo video, but you won’t lose points if you don’t (nor will there be extra credit if you do!).

While waiting for the first batch of hints from Ollama, the text box holding the hints should show Waiting for hints… and the SubmitButton should be disabled and show a ProgressView(), which changes back to the “paper plane” icon once the hints have finished arriving.
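If you do implement this, one sketch of the waiting state (`isWaiting` is a placeholder name for an observable Bool you’d set before launching llmPlay and clear when the first hints arrive):

```swift
// Inside SubmitButton's body; isWaiting is a hypothetical observable flag.
Button {
    // submit the guess ...
} label: {
    if isWaiting {
        ProgressView()                        // spinner while hints stream in
    } else {
        Image(systemName: "paperplane")       // the "paper plane" icon
    }
}
.disabled(isWaiting)

// And for the hints text box:
Text(viewModel.hints.isEmpty ? "Waiting for hints…" : viewModel.hints)
```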

That’s all for PA2!

Run and test to verify and debug

Be sure to run your front end against your backend. You will not get full credit if your front end is not set up to work with your backend!

Submission guidelines

Be sure you have submitted your modified backend in addition to submitting your updated frontend. As usual, we will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.

Push your front-end code to the same GitHub repo you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder pa2. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive Tutorials, only files needed for grading will be pushed to GitHub.

  reactive
    |-- chatterd
    |-- chatterd.crt
    |-- llmchat
    |-- pa2
        |-- swiftUIChatter
            |-- swiftUIChatter.xcodeproj
            |-- swiftUIChatter
    # and other files or folders

Verify that your Git repo is set up correctly: on your laptop, grab a fresh clone of your repo, then build and run your submission to make sure that it works. You will get a ZERO if your submission doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement from the previous lab’s, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.


Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, Sugih Jamin Last updated: August 13th, 2025