Project 3: llmAction: Agentic AI in Action

Course Schedule

The exciting thing about agentic AIs is that they can act on the physical world, not just look up information. They can do this when we give them tools that are not just pure functions, but tools with real-world, irreversible side effects. We call such tools “operative tools.” We will build such an operative tool in this project, albeit one with very limited capabilities. While this tool’s actions cannot be undone, and the resources it consumes cannot be reclaimed, their ultimate outcomes can still be rectified and nullified.

To guard against the potential harm operative tools can inflict upon their users and the physical world, we put limits and guardrails on their capabilities. They can only perform a defined set of actions in a contained system (we hope). We also impose a human-in-the-loop (HITL) check. In this project, we require authorization by an authenticated user before our operative tool can perform any of its actions.
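As a concrete illustration, the HITL check can be thought of as a gate that runs before every operative-tool invocation. The sketch below is a minimal Python illustration under assumed names (`is_authorized`, `expires_at`, `run_operative_tool`); it is not the required implementation, just the shape of the idea:

```python
import time

# Hypothetical in-memory token record; the field names are illustrative,
# not taken from the spec.
def is_authorized(token, now=None):
    """Return True only if a user-granted token exists and has not expired."""
    now = time.time() if now is None else now
    return bool(token) and token.get("expires_at", 0) > now

def run_operative_tool(token, action):
    """Refuse to run the operative tool without a valid HITL authorization."""
    if not is_authorized(token):
        raise PermissionError("operative tool requires user authorization")
    return action()
```

Your actual gate will verify the limited-lifetime authorization token obtained through Google Signin, but the principle is the same: no valid token, no action.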

The full specification for this assignment, including implementation guidelines, will be published after HW3 has been turned in. HW3 has you work on the design and implementation approaches of the project.

Partial credits

To help you manage your time, break your approach into smaller tasks, and structure your solution, you can earn partial credit by completing the following two tutorials by their deadlines, as listed in the Course Schedule:

Completing and submitting the tutorials by their respective deadlines is optional, though the features and functionality embodied in the tutorials are REQUIRED of this project.

This project may be completed individually or in teams of at most 2. You can partner differently for each project. Only ONE member of a team needs to submit the project and its tutorials.

Objectives

In addition to the objectives listed in the llmTools and Signin tutorials, this project has the following objectives:

Expected behavior

Sending a prompt to Ollama triggers operative tool use with a human-in-the-loop check. The human “approval” consists of verifying the validity of an authorization token obtained after authentication with Google Signin. Storage of, and access to, the limited-lifetime authorization token on the user’s device require biometric authentication:

DISCLAIMER: the video demos show you one aspect of the app’s behavior. They are not a substitute for the spec. If there are any discrepancies between the demos and this spec, please follow the spec. The spec is the single source of truth. If the spec is ambiguous, please consult the teaching staff for clarification.

Features and requirements

To receive full credit, your app must provide the following features and satisfy the following requirements, including those in any applicable “Implementation guidelines” documents.

Front-end UI

As can be seen in the video demo, the app consists of a single screen with the following UI elements:

  1. a title bar showing the title LlmAction with HITL,
  2. a timeline of posted prompts shown on the right of screen and LLM responses on the left,
  3. the following UI elements placed at the bottom of the screen:
    • a text box spanning the left and middle part of the input area,
    • a “Send” button, showing a “paper plane” icon, to the right of the text box. This button is enabled only when the text box is not empty and no networking session is in progress.

      When the button is “disabled”, it is grayed out and tapping on it has no effect.

      While there is a networking session in progress, that is, while waiting for Ollama’s response to a prompt, the “Send” button’s icon changes from a “paper plane” to an animated “loading” circle,

  4. the app allows the user to sign in with Google Signin,
  5. the app serves as a front end to obtain the user’s biometric authentication on device.
UI Design

One can easily spend a whole weekend (or more!) getting the UI “just right.”

:point_right: Remember: we won’t be grading you on how beautiful your UI looks or how precisely it matches the one shown in the video demo. You’re free to design your UI differently, so long as all indicated UI elements are fully visible on the screen, non-overlapping, and functioning as specified.

Front-end UX

To invoke the provided operative tool, the LLM must first obtain authorization from the user by invoking another provided tool, get_auth. When an LLM invokes get_auth, the app allows the user to sign in with Google Signin and obtain a limited-lifetime authorization token. Storage of, and access to, this authorization token on device are guarded by biometric authentication.

API

We use the /llmprep, /llmtools, and /adduser APIs from the previous tutorials; there are no new APIs.

Back-end infrastructures

The back end is expected to have the same requirements and provide all the tool-calling and communication infrastructure as described in the llmTools tutorial, including converting NDJSON to SSE streams.
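As a rough sketch of the NDJSON-to-SSE conversion (assuming the back end receives Ollama’s stream as an iterable of NDJSON lines; your actual chunking, keep-alive, and termination handling should follow the llmTools tutorial):

```python
import json

def ndjson_to_sse(lines):
    """Convert an iterable of NDJSON lines, as produced by Ollama's
    streaming API, into Server-Sent Events 'data:' frames.

    A sketch only: it skips blank lines and validates that each line
    is a JSON record before forwarding it as an SSE frame.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        json.loads(line)  # raise early on a malformed NDJSON record
        yield f"data: {line}\n\n"
```

Each SSE frame is the original JSON record prefixed with `data: ` and terminated by a blank line, per the SSE wire format.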

The back end provides an ollama_cli tool that LLMs can invoke to perform Ollama’s commands, such as listing models currently available, pulling and removing models, etc.
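For illustration only, an `ollama_cli` tool might be advertised to the LLM with a JSON-schema description along these lines; the argument name `command` and the description text are assumptions, not mandated by this spec:

```python
# Hypothetical tool description in the JSON-schema style Ollama uses
# for tool calling; adapt the fields to your back end's actual design.
OLLAMA_CLI_TOOL = {
    "type": "function",
    "function": {
        "name": "ollama_cli",
        "description": "Run an Ollama CLI command such as 'ls', 'pull', or 'rm'.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The Ollama subcommand to run, e.g. 'ls'",
                },
            },
            "required": ["command"],
        },
    },
}
```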

On mada.eecs.umich.edu, the Ollama instance the tool can access is not the one processing user prompts, so you don’t have to worry about crippling the Ollama instance serving the app. When testing your own back end with your own instance of Ollama that your app relies on, be careful not to delete the model you need to run your app!

Implementation and submission guidelines

Additional implementation guidelines will be published after the due date of HW3.

Back end

For the back end, regardless of your stack of choice, you should build off the code bases from both the llmTools and Signin tutorials.

Front end

The front end UI/UX is simply that of llmTools. To this, add the code from the Signin tutorial to use Google Signin and biometric authentication.

End-to-end Testing

Testing of llmAction is very similar to testing llmTools.

You will need a working front end to fully test your back end’s handling of tool calls. Once you have your front end implemented, first test it against the provided back end on mada.eecs.umich.edu. We found that models smaller than qwen3:8b (5.2 GB RAM) cannot conduct chained tool calls or assemble tool arguments from multiple sources.

With qwen3:8b specified as the model to use, send a request to mada.eecs.umich.edu with the prompt, “List all models available on Ollama.” After a (sometimes very long) thinking process, the model will call the get_auth tool on your device.

Once the model is granted authorization, it will call the ollama_cli tool on the back end (chatterd) to run the command ollama ls to list all available Ollama models and complete the prompt.
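The two-step chain above (authorize first, then act) can be sketched as a tool-call dispatch loop. All handler names below are hypothetical stand-ins for your own front- and back-end code, not part of the provided infrastructure:

```python
def handle_tool_calls(tool_calls, get_auth, ollama_cli):
    """Dispatch a sequence of model-issued tool calls, enforcing that
    get_auth succeeds before ollama_cli may run.

    get_auth: callable returning an authorization token (or None);
              on the real front end this triggers Signin + biometrics.
    ollama_cli: callable taking the tool's arguments and returning its output.
    """
    results = []
    token = None
    for call in tool_calls:
        name = call["name"]
        if name == "get_auth":
            token = get_auth()
            results.append({"name": name, "ok": token is not None})
        elif name == "ollama_cli":
            if token is None:
                results.append({"name": name, "error": "not authorized"})
            else:
                results.append({"name": name,
                                "output": ollama_cli(call["arguments"])})
        else:
            results.append({"name": name, "error": "unknown tool"})
    return results
```

The key design point is that the authorization state lives with the dispatcher, so an ollama_cli call arriving before a successful get_auth is rejected rather than executed.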

Since mada is a shared resource and Ollama serves one HTTP request at a time, you would have to wait your turn if others are using mada. If your laptop has the necessary resources (8GB+ HD, 5GB+ RAM), you may want to pull model qwen3:8b to the Ollama running on your laptop and use it to test your app locally before testing on mada. In any case, don’t wait until the deadline to test your code and then get stuck behind a long line of classmates trying to access mada.


Prepared by Xin Jie ‘Joyce’ Liu, Chenglin Li, Sugih Jamin Last updated: March 14th, 2026