Tutorial: Audio Compose

Cover Page

DUE Thu, 10/16, 2 pm

If you’re using the Android emulator to complete this lab, beware that the audio on some Windows machines has been reported to be very soft or unreliable.

Preparing your GitHub repo

:point_right: Go to the GitHub website to confirm that your folders follow this structure:

  reactive
    |-- audio
        |-- composeChatter
            |-- app
            |-- gradle
    |-- chatterd
    |-- chatterd.crt
    # and other files or folders

If the folders in your GitHub repo do not follow the above structure, we will not be able to grade your assignment and you will get a ZERO.

Dependencies

Add the following line to your build.gradle (Module:):

dependencies {
    // . . .
    implementation("androidx.compose.material:material-icons-extended:1.7.8")
}

and tap on Sync Now on the Gradle menu strip that shows up at the top of the editor screen.

Working with audio

Next up: how to record, play back, pause, fast-forward, rewind, stop, and upload audio. We will use Android’s MediaRecorder to record from the device’s microphone and save the recording to a local file, and MediaPlayer to play back audio on the device’s speakers. We’ll put all interactions with these audio subsystems in AudioPlayer, and the UI to access the audio player in AudioView.

Requesting permission

To record audio, we need the user’s permission to access the device’s microphone. Add this permission request to your AndroidManifest.xml file, right under the uses-permission request for INTERNET:

    <uses-permission android:name="android.permission.RECORD_AUDIO" />

This permission tag enables us to prompt users for permission to record audio later.

AudioPlayer

Before we start implementing the AudioPlayer, let’s add some resources we will be using in this app.

First, add some string constants to /app/res/values/strings.xml:

    <string name="audio">Audio</string>
    <string name="doneButton">done</string>
    <string name="rwndButton">rwnd</string>
    <string name="ffwdButton">ffwd</string>
    <string name="playButton">play</string>
    <string name="stopButton">stop</string>
    <string name="recButton">rec</string>
    <string name="waveform">waveform</string>
    <string name="trash">Trashcan</string>   

Controlling the audio player itself is rather straightforward. The more complicated part is handling the audio file. When the user taps a posted audio file, we play it back and let the user move the playback point back and forth. We call this the playbackMode. When the user starts recording, we enter a recordingMode and create a recorded audio file. After recording, the user may want to play the recording back before posting. They may decide to record over the file, or to delete it altogether and not post. After recording, they may also want to play back a posted audio clip before deciding what to do with the recording. The recordingMode spans all of these potential activities, including the “nested” playback. We exit recordingMode only once the user has made a final decision on what to do with the recorded audio file.

Create a new Kotlin file, call it AudioPlayer. Create an AudioPlayer class and put the following observable properties in it: its two modes, the audio files, and the audio player states, along with the audio player itself and its initialization:

class AudioPlayer(context: Context) {
    // playback and recording modes:
    var recordingMode by mutableStateOf(false)
    var playbackMode by mutableStateOf(false)

    // audio files:
    var playback by mutableStateOf<ByteArray?>(null)
    var recorded by mutableStateOf<ByteArray?>(null)

    private val audioFilePath = context.externalCacheDir?.let {
        "${it.absolutePath}/chatteraudio.m4a" }

    // audio player:
    private val mediaRecorder = MediaRecorder(context)
    private val mediaPlayer = MediaPlayer()
    // audio player states:
    var isRecording by mutableStateOf(false)
    var isPlaying by mutableStateOf(false)

    init {
        mediaPlayer.setOnCompletionListener {
            stopTapped()
        }
    }

    // audio player controls
}

The property audioFilePath must be initialized to point to a temporary audio file in Android’s external cache directory, accessible through the passed-in context. Both MediaRecorder and MediaPlayer require the audio data to be stored in a file. MediaRecorder also requires access to the provided context. In the initialization block, we install setOnCompletionListener, a callback handler invoked when playback completes.

Changes to AudioPlayer’s MutableState<T> properties can be observed and reflected by AudioView.
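
For example, a minimal (hypothetical) composable like the following would automatically recompose whenever isPlaying changes, because reading a MutableState-backed property inside composition subscribes the composable to it:

@Composable
fun PlayerStatus(audioPlayer: AudioPlayer) {
    // reading isPlaying subscribes this composable to the property;
    // playTapped()/stopTapped() flipping it triggers recomposition
    Text(text = if (audioPlayer.isPlaying) "playing" else "idle")
}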

From “standby”, the audio player can be put in recordingMode or playbackMode. Add the following AudioPlayer methods where the // audio player controls comment is:


    fun setupRecorder() {
        doneTapped()
        recordingMode = true
        playbackMode = false
    }

    fun setupPlayer(audioStr: String) {
        playbackMode = true
        mediaPlayer.reset()
        playback = Base64.decode(audioStr, Base64.DEFAULT)
        preparePlayer(playback)
        playTapped()
    }

    // preparing the player

When the player is set to playbackMode, we expect to be passed a base64-encoded audio string to be played back. This would normally be an audio clip associated with a posted chatt. We store the decoded string as the playback audio and prepare the MediaPlayer for playback.

MediaPlayer

Add the following preparePlayer() method to the AudioPlayer class at the // preparing the player comment.

    private fun preparePlayer(audio: ByteArray?) {
        val clip = audio ?: return  // nothing to prepare without audio data

        val os: OutputStream = try { FileOutputStream(audioFilePath) } catch (e: IOException) {
            Log.e("preparePlayer: ", e.localizedMessage ?: "IOException")
            return
        }
        os.write(clip)
        os.close()

        with (mediaPlayer) {
            setDataSource(audioFilePath)
            setVolume(1.0f, 1.0f) // 0.0 to 1.0 raw scalar
            prepare()
        }
    }

    // playback point controls

In preparePlayer(), we write the provided audio clip to a temporary file stored at audioFilePath. MediaPlayer requires the audio to be played back to be stored in a file, as opposed to in memory. We also set the playback volume and call the prepare() method of MediaPlayer.

With the MediaPlayer set up, we now fill in the functions to control the playback point. They go where the // playback point controls comment is, one after another, starting with playTapped(), which toggles the MediaPlayer between pause() and start():

    fun playTapped() {
        with (mediaPlayer) {
            if (isPlaying) {
                this@AudioPlayer.isPlaying = false
                pause()
            } else {
                this@AudioPlayer.isPlaying = true
                start()
            }
        }
    }

The fast-forward and rewind playback controls simply move the play head forward or backward by 10 seconds, respectively:

    fun ffwdTapped() {
        val timecode = mediaPlayer.currentPosition+10000
        if (timecode >= mediaPlayer.duration) {
            stopTapped()
        } else {
            mediaPlayer.seekTo(timecode)
        }
    }

    fun rwndTapped() {
        mediaPlayer.seekTo(max(mediaPlayer.currentPosition-10000, 0))
    }

According to the MediaPlayer state diagram, once the MediaPlayer is stopped, we cannot restart playback without preparing the player again (which could throw an IO exception that needs to be caught). When the stop button is tapped, if we’re in simple playbackMode, instead of stopping playback, we pause the player and reset the play head to the beginning of the audio clip (by calling seekTo() as we did in the rewind and fast-forward methods). If we’re playing back while “nested” in a recordingMode, however, we reset the MediaPlayer, set the playback audio file to the recorded audio file so that the user can resume working with the recorded audio file, whether to post, re-record, or delete it, and call preparePlayer() to play back the recorded audio.

    fun stopTapped() {
        with (mediaPlayer) {
            this@AudioPlayer.isPlaying = false
            if (playbackMode && recordingMode && recorded != null) {
                reset()
                playback = recorded
                preparePlayer(recorded)
            } else {
                if (currentPosition < duration) {
                    pause()
                    try {
                        mediaPlayer.seekTo(0)
                    } catch (e: Throwable) {
                        Log.e("stopTapped seekTo: ", e.localizedMessage ?: "MediaPlayer error")
                    }
                }
            }
        }
    }

MediaRecorder

It may be useful to consult the MediaRecorder State Diagram found in Android’s MediaRecorder documentation as you implement the recording function.
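
Roughly, the diagram says: setAudioSource() takes the recorder from Initial to Initialized, setOutputFormat() to DataSourceConfigured, prepare() to Prepared, and start() to Recording; stop() returns it to Initial, after which it must be fully reconfigured before it can record again. Our record() function below follows exactly this sequence.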

Once the recorder is set up, by a call to setupRecorder() defined earlier, recording is initiated by the user tapping the record button, which calls recTapped():

    fun recTapped() {
        if (isRecording) {
            isRecording = false
            stop()
            playback = recorded
            preparePlayer(recorded)
        } else {
            isRecording = true
            playback = null
            recorded = null
            record()
        }
    }

Similar to playTapped(), recTapped() toggles recording. When recording is stopped, we point playback to the resulting recorded file and prepare MediaPlayer for playback in case the user wants to play back the recorded audio before deciding what to do with it. The MediaRecorder requires some preparatory work to start and stop recording, which we put in the record() and stop() functions respectively.

    private fun record() {
        // reset player because we'll be re-using the output file that may have been primed at the player.
        mediaPlayer.reset()

        with (mediaRecorder) {
            // stop(): Once recording is stopped, you will have to configure it again as if it has just been constructed.
            setAudioSource(MediaRecorder.AudioSource.MIC)
            setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
            setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
            setOutputFile(audioFilePath)
            try {
                prepare()
            } catch (e: IOException) {
                Log.e("startRecording: ", e.localizedMessage ?: "IOException")
                return
            }
            start()
        }
    }

To start recording, we first ensure that the MediaPlayer is not otherwise using the temporary audio file we will be writing to, by calling the reset() method of MediaPlayer. Then we configure the MediaRecorder: specifying the audio source (mic), output format (MPEG-4), audio encoder (AAC), and output file (audioFilePath). Finally we call the prepare() method of MediaRecorder and start recording by calling its start() method.

Our recording stop() function waits about 500 ms before calling MediaRecorder’s stop() function, so as not to cut off the speaker abruptly. It then loads the recorded clip into the recorded property.

    private fun stop() {
        Thread.sleep(500L) // wait 500 ms to allow recording to finish, not to cut off speaker
        // Once recording is stopped, you will have to configure it again as if it has just been constructed.
        mediaRecorder.stop()

        try {
            val fis = FileInputStream(audioFilePath)
            val bos = ByteArrayOutputStream()
            var read: Int
            val audioBlock = ByteArray(65536)
            while (fis.read(audioBlock, 0, audioBlock.size).also { read = it } != -1) {
                bos.write(audioBlock, 0, read)
            }
            recorded = bos.toByteArray()
            bos.close()
            fis.close()
        } catch (e: IOException) {
            Log.e("AutoPlayer.stop(): ", e.localizedMessage ?: "IOException")
        }

    }

Once the user is satisfied with the recording, doneTapped() stops any ongoing recording and playback. If the audio player is in recordingMode and something has been recorded, even if we’re leaving the audio player, we’re not really done, since the user hasn’t yet decided what to do with the recorded file. So we set the playback file to point to the recorded file and prepare the player to play it back should the user wish to do so. Otherwise, we set the audio player back to “standby” mode.

    fun doneTapped() {
        if (isRecording) {
            stop()
            isRecording = false
        }

        mediaPlayer.reset()
        isPlaying = false

        if (recordingMode && recorded != null) {
            // restore recorded audio
            playback = recorded
            preparePlayer(recorded)
        } else {
            playback = null
            recordingMode = false
            playbackMode = false
        }
    }

The user can dispose of the recorded audio file in two ways: post it or delete it. Once the user has made their decision, we call endRecording() to finally exit recordingMode and delete the recording:

    fun endRecording() {
        isRecording = false
        recordingMode = false
        recorded = null
        playback = null
    }

Runtime permission request

Before instantiating our AudioPlayer, let’s follow up on the permission tag we added to AndroidManifest.xml and request permission from the user to record audio. In your MainActivity class, in the onCreate() method, before calling setContent(), prompt the user for permission to access the mic to RECORD_AUDIO. We will be using Android’s ActivityResultContracts to request permission. The name of the contract is RequestPermission (singular):

        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (!granted) {
                toast("Audio access denied")
                finish()
            }
        }.launch(Manifest.permission.RECORD_AUDIO)

Toast() and Extensions.kt

If you have completed the maps tutorial, you can copy (alt-drag) your Extensions.kt file over from the maps tutorial. Otherwise, read on to create this file.

If the permission request is denied, we use a Toast to inform the user. A Toast is a small pop-up that appears briefly on screen. Toasts can be very helpful while debugging and for notifying users of their current state in the app. Instead of using Toast directly, however, we have added a toast() extension to the Context class. The extension allows us to use Toast with some boilerplate arguments pre-set. By attaching the extension to Context, we can use it anywhere we have access to a Context. As the term progresses, we’ll collect all the extensions we’ll be using globally in one file. Create a new Kotlin file called Extensions.kt and put the following code in it:

fun Context.toast(message: String, short: Boolean = true) {
    Toast.makeText(this, message, if (short) Toast.LENGTH_SHORT else Toast.LENGTH_LONG).show()
}
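
With this extension in place, any code with a Context in scope, such as an Activity, can simply call toast("Audio access denied"), or toast("a longer message", short = false) for the longer-duration variant.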

Instantiating the AudioPlayer

We will have one instance of AudioPlayer for the whole app. Where to put it?

Every time your device experiences a configuration change, the currently visible Activity of your app is destroyed and re-created, so that it can redraw itself according to the new configuration. “Configuration change” includes orientation change, dark vs. light mode change, keyboard availability change, etc. Since our app has only one Activity, the whole app practically gets re-created on every configuration change.

Android Jetpack’s ViewModel architecture component is intended to hold UI states (the “data model” used in rendering views). The most important characteristic of the ViewModel for us is that states stored in a ViewModel are not destroyed and recreated upon configuration change. To maintain our instance of AudioPlayer across configuration changes, we instantiate and store it in our ChattViewModel in the MainActivity.kt file. Add the following properties to your ChattViewModel:

    val audioPlayer = AudioPlayer(app.applicationContext)
    var showPlayer by mutableStateOf(false)

We’ll use the showPlayer property to control the showing of AudioView later.
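
For reference, here is a minimal sketch of the surrounding ChattViewModel, assuming, as in the earlier tutorials, that it extends AndroidViewModel and so holds the Application instance app:

class ChattViewModel(private val app: Application) : AndroidViewModel(app) {
    // survives configuration changes along with the rest of the ViewModel
    val audioPlayer = AudioPlayer(app.applicationContext)
    var showPlayer by mutableStateOf(false)
    // ...plus the UI states carried over from the previous tutorials...
}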

With that, we are done with AudioPlayer! Now we declare the UI to go along with the audio player.

AudioView

The UI for our audio player consists of the buttons one would expect of an audio player: record, play, stop, rewind, and fast forward. In addition, we also have a “done” button, for when the user is done and wishes to exit the audio player, and a “trash” button, to delete recorded audio without posting it.

Create a new Kotlin file, call it AudioView, and put the following AudioView composable in it:

@Composable
fun AudioView() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    Row(horizontalArrangement = Arrangement.SpaceAround,
        verticalAlignment = Alignment.CenterVertically,
        modifier=Modifier.fillMaxWidth(1f)
            .padding(top = 8.dp)
            .padding(horizontal = 20.dp)
            .background(HeavenWhite, shape = RoundedCornerShape(40.dp))
    ) {
        Spacer(Modifier.size(width = 4.dp, height = 0.dp))
        RecButton()
        Spacer(Modifier.size(width = 8.dp, height = 0.dp))
        Icon(imageVector = Icons.Default.GraphicEq,
            modifier= Modifier.size(24.dp)
                .background(Color.Transparent, shape = CircleShape),
            contentDescription = stringResource(R.string.waveform),
            tint = NavyLight
        )
        Spacer(Modifier.size(width = 8.dp, height = 0.dp))
        StopButton()
        RwndButton()
        PlayButton()
        FfwdButton()
        Spacer(Modifier.size(width = 4.dp, height = 0.dp))
        Icon(imageVector = Icons.Default.GraphicEq,
            modifier= Modifier.size(24.dp)
                .background(Color.Transparent, shape = CircleShape),
            contentDescription = stringResource(R.string.waveform),
            tint = NavyLight
        )
        Spacer(Modifier.size(width = 8.dp, height = 0.dp))
        if (audioPlayer.recordingMode) {
            TrashButton()
        } else {
            DoneButton()
        }
        Spacer(Modifier.size(width = 4.dp, height = 0.dp))
    }
}

The playback buttons are pretty straightforward. They are enabled only when the audio player is playing. Put each outside your AudioView composable, starting with the BaseButton composable we employ to reduce some boilerplate code in each of the button definitions:

@Composable
fun BaseButton(
    action: () -> Unit,
    enabled: Boolean,
    label: @Composable (BoxScope.() -> Unit)
) {
    Box(
        modifier = Modifier
            .clickable(enabled = enabled, onClick = action)
            .size(48.dp)
            .padding(4.dp)
            .background(Color.Transparent, shape = CircleShape),
        contentAlignment = Alignment.Center,
        content = label
    )
}

@Composable
fun StopButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    BaseButton(action = { audioPlayer.stopTapped() },
        enabled = audioPlayer.isPlaying,
    ) {
        Icon(imageVector = Icons.Default.Stop,
            modifier=Modifier.size(28.dp),
            contentDescription = stringResource(R.string.stopButton),
            tint = if (audioPlayer.isPlaying) Moss else Color.LightGray
        )
    }
}

@Composable
fun RwndButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    BaseButton(action = { audioPlayer.rwndTapped() },
        enabled = audioPlayer.isPlaying,
    ) {
        Icon(imageVector = Icons.Default.FastRewind,
            modifier=Modifier.size(32.dp),
            contentDescription = stringResource(R.string.rwndButton),
            tint = if (audioPlayer.isPlaying) Moss else Color.LightGray
        )
    }
}

@Composable
fun FfwdButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    BaseButton(action = { audioPlayer.ffwdTapped() },
        enabled = audioPlayer.isPlaying,
    ) {
        Icon(imageVector = Icons.Default.FastForward,
            modifier=Modifier.size(32.dp),
            contentDescription = stringResource(R.string.ffwdButton),
            tint = if (audioPlayer.isPlaying) Moss else Color.LightGray
        )
    }
}

@Composable
fun PlayButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    BaseButton(action = { audioPlayer.playTapped() },
        enabled = !audioPlayer.recordingMode || audioPlayer.playback != null,
    ) {
        Icon(imageVector =
            if (audioPlayer.isPlaying) Icons.Default.Pause
            else Icons.Default.PlayArrow,
            modifier=Modifier.size(30.dp),
            contentDescription = stringResource(R.string.playButton),
            tint = if (audioPlayer.playback == null) Color.LightGray else Moss,
        )
    }
}

The play button toggles between play and pause, showing the corresponding icon as appropriate. The record button similarly toggles between recording and not recording. Further, it is shown and enabled only when the audio player is in recordingMode and is not playing.

@Composable
fun RecButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    BaseButton(action = { audioPlayer.recTapped() },
        enabled = audioPlayer.recordingMode && !audioPlayer.isPlaying,
    ) {
        Icon(imageVector =
            if (audioPlayer.isRecording) Icons.Default.StopCircle
                    else Icons.Outlined.Album,
            modifier= Modifier.size(28.dp),
            contentDescription = stringResource(R.string.recButton),
            tint = if (!audioPlayer.recordingMode || audioPlayer.isPlaying) Color.Transparent
            else if (audioPlayer.isRecording) Firebrick else Moss
        )
    }
}

In addition to calling doneTapped(), the done button also dismisses the AudioView:

@Composable
fun DoneButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    BaseButton(action = {
        audioPlayer.doneTapped()
        vm.showPlayer = false
    },
        enabled = !audioPlayer.isRecording,
    ) {
        Icon(imageVector =
            if (audioPlayer.recordingMode) Icons.Default.Share
            else Icons.AutoMirrored.Default.ExitToApp,
            modifier= Modifier.size(28.dp),
            contentDescription = stringResource(R.string.doneButton),
            tint = Moss
        )
    }
}

The trash button not only does what the done button does, it also calls endRecording() to reset the audio player’s recordingMode and delete any recorded audio. The trash button is enabled only when there is recorded audio and the audio player is not playing, including “nested” playback in the middle of recording. It is not shown at all unless the audio player is in recordingMode:

@Composable
fun TrashButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    BaseButton(action = {
        audioPlayer.doneTapped()
        audioPlayer.endRecording()
        vm.showPlayer = false
    },
        enabled = !audioPlayer.isPlaying && audioPlayer.recorded != null,
    ) {
        Icon(imageVector = Icons.Default.Delete,
            modifier= Modifier.size(24.dp),
            contentDescription = stringResource(R.string.trash),
            tint = if (!audioPlayer.recordingMode) Color.Transparent
            else if (audioPlayer.isPlaying || audioPlayer.recorded == null) Color.LightGray
            else Firebrick
        )
    }
}

Now we’re really done with the audio player and its Views!

The networking

Chatt

Add a new stored property audio to the Chatt class to hold the audio associated with each chatt:

class Chatt(var username: String? = null,
            var message: MutableState<String>? = null,
            var id: UUID? = null,
            var timestamp: String? = null,
            var audio: String? = null)

ChattStore

We update postChatt() by:

  1. adding an audio field to Chatt when preparing jsonObj, and
  2. pointing apiUrl to the postaudio API endpoint, as sketched below. The rest of the function remains the same as in the chatter tutorial.
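
A minimal sketch of these two changes, assuming postChatt() builds jsonObj from a map as in the chatter tutorial (SERVER_URL is a placeholder for your chatterd host):

    val jsonObj = mapOf(
        "username" to chatt.username,
        "message" to chatt.message?.value,
        "audio" to chatt.audio,  // null when the chatt carries no audio clip
    )
    val apiUrl = "https://SERVER_URL/postaudio/"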

Next we update getChatts(). First update the apiUrl to point to the getaudio endpoint. Then add decoding the audio field to chatts.add():

                    chatts.add(
                        Chatt(
                            username = chattEntry[0].toString(),
                            message = mutableStateOf(chattEntry[1].toString()),
                            id = UUID.fromString(chattEntry[2].toString()),
                            timestamp = chattEntry[3].toString(),
                            audio = if (chattEntry[4] == JSONObject.NULL) null else chattEntry[4].toString()
                        )
                    )

Note that since the audio field is nullable, it could contain JSON NULL, which we must manually deserialize into Kotlin’s null.
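
If you prefer to factor out this null check, a small hypothetical extension along these lines would do (JSONArray.isNull() is part of org.json; this assumes chattEntry is a JSONArray as in the chatter tutorial):

    fun JSONArray.stringOrNull(index: Int): String? =
        if (isNull(index)) null else get(index).toString()

The audio line above would then read audio = chattEntry.stringOrNull(4).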

The UI

Now we update the app’s UI.

Recording and posting audio

Let’s define an AudioButton to show the AudioView to control the AudioPlayer. Put the following in your MainView.kt file, outside the MainView class:

@Composable
fun AudioButton() {
    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

    IconButton(
        onClick = {
            if (!vm.showPlayer) {
                audioPlayer.setupRecorder()
                vm.showPlayer = !vm.showPlayer
            } else if (audioPlayer.playbackMode) {
                audioPlayer.setupRecorder()
            } else if (!audioPlayer.isRecording) {
                vm.showPlayer = !vm.showPlayer
            }
        },
        modifier = Modifier
            .size(55.dp)
            .background(Color.LightGray,
                shape = CircleShape)
    ) {
        Icon(
            imageVector = if (audioPlayer.recorded != null) Icons.Default.Mic
            else Icons.Default.MicNone,
            contentDescription = stringResource(R.string.audio),
            modifier = Modifier.size(28.dp),
            tint = if (audioPlayer.recorded != null ) Firebrick else Moss,
        )
    }
}

Add an icon for the AudioButton to the left of the OutlinedTextField View in MainView. When AudioButton is tapped, we want to show the AudioView above the AudioButton, actually spanning the whole width of the screen. In your MainView composable, grab the audioPlayer from ChattViewModel and put it in a local variable:

    val audioPlayer = vm.audioPlayer

Then replace the Row { } below ChattScrollView with the following:

                HorizontalDivider()
                Column {
                    AnimatedVisibility(vm.showPlayer) {
                        AudioView()
                    }
                    // Chatt input and submit
                    Row(horizontalArrangement = Arrangement.SpaceBetween,
                        verticalAlignment = Alignment.Bottom,
                        modifier = Modifier
                            .padding(top = 10.dp, start = 20.dp, end = 20.dp, bottom = 40.dp)
                    ) {
                        AudioButton()

                        OutlinedTextField(
                            state = vm.message,
                            placeholder = {
                                Text(text = vm.instruction, color = Color.Gray)
                            },
                            shape = RoundedCornerShape(40.dp),
                            modifier = Modifier
                                .weight(1f)
                                .padding(horizontal = 10.dp),
                            textStyle = LocalTextStyle.current.copy(fontSize = 18.sp),
                            colors = TextFieldDefaults.colors(
                                unfocusedContainerColor = HeavenWhite,
                                focusedContainerColor = HeavenWhite,
                                focusedIndicatorColor = Color.Transparent,
                                unfocusedIndicatorColor = Color.Transparent
                            ),
                            lineLimits = TextFieldLineLimits.MultiLine(1, 6),
                        )
                        SubmitButton(listScroll)
                    }
                }

When the user taps anywhere on the screen, we want to close the AudioView. Add the following lines to the detectTapGestures { } block of your Scaffold’s Modifier.pointerInput, before focus.clearFocus():

                    audioPlayer.doneTapped()
                    vm.showPlayer = false
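
For context, a sketch of the resulting pointerInput block, assuming the same focus-clearing tap handler as in the chatter tutorial (the focus variable comes from that tutorial):

    .pointerInput(Unit) {
        detectTapGestures(onTap = {
            audioPlayer.doneTapped()
            vm.showPlayer = false
            focus.clearFocus()
        })
    }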

In your SubmitButton, grab the AudioPlayer from ChattViewModel and put it in a local variable:

    val audioPlayer = vm.audioPlayer

then update the call to postChatt() with:

                vm.showPlayer = false
                postChatt(Chatt(vm.username,
                    mutableStateOf(vm.message.text.toString().ifEmpty { "Audio message" }),
                    audio = audioPlayer.recorded?.let { Base64.encodeToString(it, Base64.DEFAULT) }),
                    vm.errMsg)
                audioPlayer.doneTapped()
                audioPlayer.endRecording()

After the user has successfully posted a chatt, we stop all playback and recording, end the recording “session”, and delete the recorded audio.

Finally, update the conditions by which the SubmitButton is enabled to also check for availability of audio recording. Replace the modifier and enabled parameters of IconButton() with:

        modifier = Modifier
            .size(55.dp)
            .background(if (isSending || (vm.message.text.isEmpty()  &&
                        audioPlayer.recorded == null)) NavyLight else Navy,
                shape = CircleShape),
        enabled = !isSending && (vm.message.text.isNotEmpty() ||
                audioPlayer.recorded != null),

and change the tint parameter of the Icon() further down, showing Send, to:

                tint = if (vm.message.text.isEmpty() && audioPlayer.recorded == null) MaizeLight else Maize,

Displaying audio message notification

On the chatt timeline, if a chatt has audio data, we display a GraphicEq icon in its bubble, next to the message. Because the text bubbles alternate alignment depending on whether the user is the poster, we put the GraphicEq icon on the right or left of the message to match the bubble’s alignment. In ChattScrollView.kt, in the ChattView composable, add two local variables to grab the ChattViewModel and AudioPlayer:

    val vm: ChattViewModel = viewModel()
    val audioPlayer = vm.audioPlayer

and wrap the Text displaying the message in a Row so that a GraphicEq icon is displayed either to its left (isSender) or right (!isSender). Note that we move the modifiers for the text bubble to encompass the whole Row:

                Row(
                    verticalAlignment = Alignment.Top,
                    modifier = Modifier
                        .shadow(2.dp, shape = RoundedCornerShape(20.dp))
                        .background(if (isSender) Chartreuse else HeavenWhite)
                        .padding(12.dp)
                        .widthIn(min = 0.dp, max = 300.dp)
                ) {
                    if (isSender) {
                        chatt.audio?.let {
                            IconButton(
                                onClick = {
                                    audioPlayer.setupPlayer(it)
                                    vm.showPlayer = true
                                },
                                modifier = Modifier
                                    .padding(end = 8.dp)
                                    .size(25.dp)
                                    .align(Alignment.Top)
                            ) {
                                Icon(
                                    Icons.Default.GraphicEq,
                                    contentDescription = "audio message",
                                    tint = Moss
                                )
                            }
                        }
                    }
                    Text(
                        text = msg.value,
                        style = MaterialTheme.typography.bodyLarge,
                    )
                    if (!isSender) {
                        chatt.audio?.let {
                            IconButton(
                                onClick = {
                                    audioPlayer.setupPlayer(it)
                                    vm.showPlayer = true
                                },
                                modifier = Modifier
                                    .padding(start = 8.dp)
                                    .size(25.dp)
                                    .align(Alignment.Top)
                            ) {
                                Icon(
                                    Icons.Default.GraphicEq,
                                    contentDescription = "audio message",
                                    tint = Moss
                                )
                            }
                        }
                    }
                }

When the user taps the GraphicEq icon, we play back the audio message associated with the chatt and show the AudioView audio controls.

Congratulations! You’re done with the front end! (Don’t forget to work on the backend!)

Run and test to verify and debug

You should now be able to run your front end against your backend. You will not get full credit if your front end is not set up to work with your backend!

Front-end submission guidelines

We will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.

Push your front-end code to the same GitHub repo you’ve submitted your back-end code:

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder audio. Confirm that your repo has a folder structure similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive Tutorials, only files needed for grading will be pushed to GitHub.

  reactive
    |-- audio
        |-- composeChatter
            |-- app
            |-- gradle
    |-- chatterd
    |-- chatterd.crt
    # and other files or folders  

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo and build and run your submission to make sure that it works. You will get a ZERO if your tutorial doesn’t build, run, or open.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we’d know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement from the previous tutorial’s, please update your entry. If you’re using a different GitHub repo from the previous tutorial’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.

Prepared by Ollie Elmgren, Tiberiu Vilcu, Nowrin Mohamed, Xin Jie ‘Joyce’ Liu, Chenglin Li, and Sugih Jamin. Last updated: August 15th, 2025.