Project 2: llmPlay: Where in the World?

DUE Wed, 10/29, 2 pm

This programming assignment (PA) may be completed individually or in teams of at most 2. You can partner differently for each PA.

We will build an app that lets the user play a guess-the-city game with an LLM. When the user guesses correctly, the LLM sends the city’s lat/lon to the app, which then moves the map camera to center on the city.

Owing to the limitations of smaller LLMs, gemma3:12b is the smallest model we found able to play this game. We have provided access to this model for the completion of this assignment. See the Testing section below.

Treat your messages sent to chatterd and Ollama as public utterances with no reasonable expectation of privacy. Know that they are recorded for the purpose of carrying out a contextual interaction with Ollama and are potentially shared with everyone using chatterd.

Objectives

In addition to the objectives listed in the llmChat and Maps tutorials, this PA has the following objectives:

Features and requirements

Your app must provide the following features and satisfy the following requirements, including those in any applicable “Implementation guidelines” documents, to receive full credit.

Front-end UI

As can be seen in the video in either of the front-end specs, the app consists of a single screen with the following UI elements:

  1. a title bar showing the title Where in the world?,
  2. a map that initially centers on the device’s current location, marked with a current-location “blue dot,”
  3. these UI elements at the bottom of the screen:
    • a text box spanning the left and middle part of the input area,
    • a “Send” button on the right of the textbox showing a “paper plane” icon. This button is enabled only when the text box is not empty and no networking session is in progress.

      When the button is disabled, it is grayed out and tapping it has no effect.

      While a networking session is in progress, that is, while waiting for Ollama’s hints or its evaluation of the user’s guesses, the “Send” button’s icon changes from a “paper plane” to an animated “loading” circle.

UI Design

One can easily spend a whole weekend (or longer) getting the UI “just right.”

:point_right: Remember: we won’t grade you on how beautiful your UI looks or how precisely it matches the one shown in the video demo. You’re free to design your UI differently, so long as all indicated UI elements are fully visible on the screen, non-overlapping, and functioning as specified.

Front-end UX

As demonstrated in the video above:

API

We introduce a new API endpoint, which we call llmplay. Its protocol handshake and data formats are mostly similar to those of the llmchat endpoint documented in the llmchat specification. Please consult that document if you haven’t done the tutorial. In addition to following the llmchat protocol, when the user makes a correct guess in the game, the llmplay API handler returns a latlon SSE event with the following data line:

event: latlon
data: { "lat": double, "lon": double }
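
To make the event format concrete, here is a small illustrative Python helper showing how a client could pick the latlon event out of the llmplay SSE stream. The event and field names follow the spec above; the function name and everything else are placeholders, not part of the required API:

import json

def parse_latlon(sse_lines):
    # Scan an iterable of SSE lines for a latlon event;
    # return (lat, lon) if found, else None.
    saw_latlon_event = False
    for line in sse_lines:
        if line.strip() == "event: latlon":
            saw_latlon_event = True
        elif saw_latlon_event and line.startswith("data:"):
            payload = json.loads(line[len("data:"):])
            return payload["lat"], payload["lon"]
        elif not line.strip():
            saw_latlon_event = False  # a blank line ends an SSE event
    return None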

We will not be using the postmaps and getmaps APIs from the maps tutorial.

Back-end infrastructures

Implementation and submission guidelines

Backend

For the backend, regardless of your chosen stack, you should build off the code base from the llmchat tutorial.

First, make a new API endpoint for llmplay, with its eponymous handler.

If you make a copy of the llmchat() URL handler to serve as the basis for llmplay(), you need to make only a small number of changes:
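
For orientation only, here is a minimal sketch of what an llmplay() handler might look like, assuming a Python/FastAPI chatterd that relays Ollama’s streaming chat replies as SSE. The winner-detection regex is one plausible trigger inferred from the test case in the Testing section; your actual code base, helper names, endpoint path, and relay format will differ:

import json
import re

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()
OLLAMA_URL = "http://localhost:11434/api/chat"  # assumption: local Ollama
WINNER_RE = re.compile(r"WINNER!+:(-?[0-9.]+):(-?[0-9.]+)")

@app.post("/llmplay/")
async def llmplay(request: Request):
    payload = await request.json()

    async def stream():
        reply = []
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream("POST", OLLAMA_URL, json=payload) as resp:
                async for line in resp.aiter_lines():
                    if not line:
                        continue
                    chunk = json.loads(line)  # Ollama streams NDJSON chunks
                    token = chunk.get("message", {}).get("content", "")
                    reply.append(token)
                    # Relay each token the same way your llmchat() does.
                    yield f"data: {json.dumps({'content': token})}\n\n"
        # Once the reply completes, emit the latlon event on a correct guess.
        match = WINNER_RE.search("".join(reply))
        if match:
            lat, lon = float(match.group(1)), float(match.group(2))
            yield f"event: latlon\ndata: {json.dumps({'lat': lat, 'lon': lon})}\n\n"

    return StreamingResponse(stream(), media_type="text/event-stream")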

As usual, git commit your changes to your chatterd source files with the commit message, "pa2 back end", and push your changes to your git repo.

Frontend

Because the frontend needs location access and must request permission for it, you will want to build your frontend off the maps tutorial frontend. However, for ChattStore we don’t need postChatt and getChatts; instead, build off llmChat from the llmchat frontend.

More detailed guidelines for the two frontend platforms are available in the following separate specs:

Testing

Frontend

We found LLM models smaller than gemma3:12b unable to play this guess-the-city game. To test your frontend, you will use mada.eecs.umich.edu, specifying the gemma3:12b model. Unfortunately, Ollama can only hold one conversation at a time. If you have a host with sufficient resources, you may want to pull the gemma3:12b model (8 GB) to Ollama running on your host for your own use.

Backend

For grading purposes, please pull gemma3:1b (815 MB) to your AWS/GCP backend instance. You may want to remove tinyllama first:

server$ ollama rm tinyllama
server$ ollama pull gemma3:1b

Test your backend with the following test case. The “system” message is the START prompt to use in place of the one above:

{
    "model": "gemma3:1b",
    "messages": [
        { "role": "system", "content": "Repeat after user verbatim." },
        { "role": "user", "content": "WINNER!!!!!:39.91:-79.47" }
    ],
    "stream": true
}

:point_right:For grading purposes, please leave your chatterd backend running on your instance with the above as the START prompt.

From Postman or using curl, the above should return “WINNER!!!!!:39.91:-79.47”. Using the above as the START prompt, a working frontend should immediately move its camera to center on the vicinity of Fallingwater, PA upon launch.
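
If you prefer Python to Postman or curl, a quick sanity check along these lines should print the streamed reply. The URL is a placeholder for your own instance’s llmplay endpoint, and the long timeout accommodates a slow *-micro instance:

import requests

resp = requests.post(
    "https://YOUR_SERVER/llmplay/",  # placeholder: your instance's endpoint
    json={
        "model": "gemma3:1b",
        "messages": [
            {"role": "system", "content": "Repeat after user verbatim."},
            {"role": "user", "content": "WINNER!!!!!:39.91:-79.47"},
        ],
        "stream": True,
    },
    stream=True,
    timeout=300,  # Ollama on a *-micro instance can take minutes to reply
)
for line in resp.iter_lines(decode_unicode=True):
    if line:
        print(line)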

We found that gemma3:1b can at least follow instructions to return the winner notification string verbatim; tinyllama cannot. On a *-micro instance of AWS or GCP, however, it could take Ollama two and a half minutes (or longer) to reply. So be patient.

:point_right:WARNING: You will not get full credit if your front end is not set up to work with your backend!

Every time you rebuild your Go or Rust server, or make changes to your JavaScript or Python files, you need to restart chatterd:

server$ sudo systemctl restart chatterd

:warning:Leave your chatterd running until you have received your grade for this PA.

:point_right:TIP:

server$ sudo systemctl status chatterd

is your BEST FRIEND in debugging your server. If you get an HTTP error code 500 Internal Server Error, or if you just don’t know whether your HTTP request has made it to the server, the first thing to do is run sudo systemctl status chatterd on your server and study its output.

If you’re running a Python server, it also shows error messages from your Python code, including any debug printouts. The command systemctl status chatterd is by far the most useful go-to tool for diagnosing back-end server problems.


Prepared by Chenglin Li, Xin Jie ‘Joyce’ Liu, and Sugih Jamin. Last updated: August 8, 2025.