Tutorial: Images Compose
Cover Page
DUE Wed, 11/5, 2 pm
This tutorial requires Android 14, minSDK API Level 34. If your device cannot run Android 14, you can use the emulator to complete the tutorial. Be aware however that some students have reported the emulator audio to be very soft or unreliable on some Windows machines.
Preliminaries
Preparing your GitHub repo
- On your laptop, navigate to YOUR_TUTORIALS/
- Unzip your chatter.zip file. Double check that you still have a copy of the zip file for future reference!
- Rename your newly unzipped chatter folder images
- Remove the .gradle directory under your new images folder by running in a shell window:

laptop$ cd YOUR_TUTORIALS/images/composeChatter
laptop$ rm -rf .gradle

- Push your local YOUR_TUTORIALS/ repo to GitHub and make sure there're no git issues (git push):
  - Open GitHub Desktop and click on Current Repository on the top left of the interface
  - Click on your assignment GitHub repo
  - Add Summary to your changes and click Commit to main
  - If you have pushed other changes to your Git repo, click Pull Origin to synch up the clone on your laptop
  - Finally click on Push Origin to push changes to GitHub
Go to the GitHub website to confirm that your folders follow this structure outline:
reactive
|-- chatter.zip
|-- chatterd
|-- chatterd.crt
|-- images
|-- composeChatter
|-- app
|-- gradle
|-- llmprompt.zip
# and other files or folders
If the folders in your GitHub repo do not follow the above structure, we will not be able to grade your assignment and you will get a ZERO.
Dependencies
Add the following to your build.gradle (Module:). Inside the
kotlin {} block, under its compilerOptions {} subblock,
add another element to optIn such that the optIn line says:
optIn.addAll("androidx.compose.material3.ExperimentalMaterial3Api",
"androidx.media3.common.util.UnstableApi",)
We add two libraries: Coil, a third-party library for downloading and displaying images, and Exoplayer, a part of Google’s Media3 library for downloading and playing back videos. Note that we’re using Coil 2 instead of Coil 3 due to compatibility reasons:
dependencies {
// . . .
implementation("androidx.compose.material:material-icons-extended:1.7.8")
implementation("androidx.media3:media3-exoplayer:1.8.0")
implementation("androidx.media3:media3-ui:1.8.0")
implementation("io.coil-kt:coil-compose:2.7.0")
}
Tap on Sync Now on the Gradle menu strip that shows up at the top of the editor screen.
Adding camera feature and requesting permissions
Our application will make use of the camera feature. Navigate to your AndroidManifest.xml file and add the following inside the <manifest...> ... </manifest> block, above the android.permission.INTERNET line.
<uses-feature android:name="android.hardware.camera"
android:required="false" />
Setting android:required="false" lets users whose devices don’t have a camera continue to use the app. However, we would then have to manually check at run time whether a camera is present and, if not, disable picture and video taking.
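That run-time check is a one-liner on the package manager; the ImageViewModel we define later stores exactly this (shown here against a generic context):

val hasCamera = context.packageManager
    .hasSystemFeature(PackageManager.FEATURE_CAMERA_ANY)  // any camera, front or back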
Next we must declare that we will be asking the user’s permission to access the device’s camera, mic, and image gallery. Add these permission tags to your app’s AndroidManifest.xml file, under the android.permission.INTERNET line:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
<uses-permission android:name="android.permission.READ_MEDIA_VIDEO" />
<uses-permission android:name="android.permission.READ_MEDIA_VISUAL_USER_SELECTED" />
Without these permission tags, we wouldn’t be able to prompt the user for permission later on.
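Declaring a permission in the manifest does not by itself grant it; the user still decides at run time. For orientation, a single permission can be checked with a one-liner like this sketch (illustrative only; the tutorial instead requests all permissions at once with RequestMultiplePermissions() below):

val cameraGranted = ContextCompat.checkSelfPermission(
    context, Manifest.permission.CAMERA
) == PackageManager.PERMISSION_GRANTED  // true only if the user has already granted it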
We also need to declare that we will be querying for image-cropping capability from external Activities. Add the following to your AndroidManifest.xml, for example before the <application...> ... </application> block:
<queries>
<intent>
<action android:name="com.android.camera.action.CROP" />
<data android:mimeType="image/*" />
</intent>
</queries>
Inside the <application...> block, above the android:networkSecurityConfig line, add:
android:enableOnBackInvokedCallback="true"
This allows us to specify BackHandler() later.
Adding resources
We add some string constants to /app/res/values/strings.xml:
<string name="album">Album</string>
<string name="camera">Camera</string>
<string name="video">Video</string>
<string name="trash">Trashcan</string>
Working with images and videos
Images and videos can be uploaded to the server either by picking one from the device’s photo album or by taking a picture/video with the device’s camera. Android has separate APIs for taking pictures and recording videos. When posting a chatt, we will want a button to access the album, one for taking a photo, another for recording a video, and a preview of the images to be posted. On the chatt timeline, we will want posted images and videos to be downloaded and displayed alongside their corresponding chatts.
We break our work down into these parts:
- Allocating some scratch space to hold our working image and video files.
- Creating buttons to pick from the album, take a picture, or record a video. Each of these launches a separate “legacy” Android Views Activity and we need to define a separate function to launch each Activity and obtain its result.
- Displaying image and video previews.
Let’s create two Kotlin files: ImageView.kt and Media.kt. We’ll put the “main logic”
of working with images in the first file and helper functions and classes in the latter
file. As we will be switching back and forth between these two files, be attentive to
which file you’re supposed to update.
We start with video recording.
Video recording
First we allocate some scratch space to hold our working video file. To retain this space
across configuration changes, we put it in a ViewModel. Add the following ImageViewModel
class to your ImageView.kt file:
class ImageViewModel(app: Application): AndroidViewModel(app) {
val app = app
val content = app.contentResolver
val hasCamera = app.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA_ANY)
private var _videoStoreUri: Uri? = null
val videoStoreUri: Uri?
get() {
if (_videoStoreUri == null) {
_videoStoreUri =
mediaStoreAlloc(app, "video/mp4")
}
return _videoStoreUri
}
var videoUri by mutableStateOf<Uri?>(null)
var videoReloaded by mutableStateOf(true)
// image storage
// view model clean up functions
}
URI
URI stands for Uniform Resource Identifier and is a standard, hierarchical way to name things on the Internet, as defined in RFC 2396. It differs from a URL in that it doesn’t necessarily tell you how to locate the thing it names; a book’s ISBN, for example, identifies the book without telling you where to find a copy.
Add to your Media.kt file the mediaStoreAlloc() utility function, which allocates scratch space in Android’s MediaStore to hold our temporary image and video files:
fun mediaStoreAlloc(context: Context, mediaType: String): Uri? {
return context.contentResolver.insert(
if (mediaType.contains("video"))
MediaStore.Video.Media.EXTERNAL_CONTENT_URI
else
MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
ContentValues().apply {
put(MediaStore.MediaColumns.MIME_TYPE, mediaType)
put(MediaStore.MediaColumns.RELATIVE_PATH, Environment.DIRECTORY_PICTURES)
})
}
We next obtain the user’s permissions to record video (and for all the other image-related operations we will be doing in this tutorial). Add the following to your ImageView.kt file, outside your ImageViewModel.
@Composable
fun ImageButtons() {
val vm: ChattViewModel = viewModel()
val imagevm: ImageViewModel = viewModel()
var isLaunching by rememberSaveable { mutableStateOf(true) }
val getPermissions =
rememberLauncherForActivityResult(RequestMultiplePermissions()) { results ->
results.forEach {
if (!it.value) {
vm.errMsg.value = "${it.key} access denied"
}
}
}
LaunchedEffect(Unit) {
if (isLaunching) {
isLaunching = false
getPermissions.launch(
arrayOf(
Manifest.permission.CAMERA,
Manifest.permission.RECORD_AUDIO,
Manifest.permission.READ_MEDIA_IMAGES,
Manifest.permission.READ_MEDIA_VIDEO,
Manifest.permission.READ_MEDIA_VISUAL_USER_SELECTED,
)
)
}
}
// launch and obtain video recording activity
}
When the app runs and a system dialog box shows up prompting, “Allow composeChatter to access photos and videos on this device?”, please select “Allow all”. The tutorial is not equipped to handle the other permissions.
In the Audio tutorial, we use registerForActivityResult() to request permissions when the app launches its MainActivity. Here we use the Compose version, rememberLauncherForActivityResult(). The Compose version takes care of registering the ActivityResultContract at the right time during the activity launch sequence.
It turns out that launching the Android Views Activities to perform video recording etc. also requires rememberLauncherForActivityResult(). Android has a TakePicture() API to launch the picture-taking Activity, CaptureVideo() to launch the video-recording Activity, and two alternative APIs for picking media: GetContent() and PickVisualMedia(). Each has its own ActivityResultContract that we use to invoke it.
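All four thus follow the same skeleton, sketched below with illustrative names (SomeContract and contractInput stand in for a concrete contract and its input; the real versions appear throughout this section):

// register a contract and a result callback; remember the launcher
// so it survives recomposition
val launcher = rememberLauncherForActivityResult(SomeContract()) { result ->
    // handle the contract-specific result type (Boolean, Uri?, etc.)
}
// later, e.g., in a button's onClick:
// launcher.launch(contractInput)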
RecordVideo()
We wrap Android’s CaptureVideo() API in a RecordVideo() subclass to control the video quality and duration limit before calling CaptureVideo(). We will use RecordVideo() instead of CaptureVideo() to launch video recording. Let’s put the RecordVideo() helper in Media.kt:
class RecordVideo: ActivityResultContracts.CaptureVideo() {
override fun createIntent(context: Context, input: Uri): Intent {
val intent = super.createIntent(context, input)
// extend CaptureVideo ActivityResultContract to
// specify video quality and length limit.
with (intent) {
putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1) // 0 for low quality, but causes green striping on emulator
putExtra(MediaStore.EXTRA_DURATION_LIMIT, 5) // secs, there's a 10 MB upload limit
}
return intent
}
}
You can change the EXTRA_DURATION_LIMIT and EXTRA_VIDEO_QUALITY to different values. However, be mindful that our back-end server limits client upload size to 10 MB. Three seconds of video captured at a resolution of 1920x1080 results in about 3 MB of data, i.e., roughly 1 MB per second, so recordings much beyond ten seconds would exceed the upload limit.
On the emulator, when video recording is done, the emulator sometimes complains that Camera keeps stopping. This is ok; just click Close app and carry on.
forVideoResult
Now we create a launcher for RecordVideo() in ImageView.kt. Inside the
ImageButtons() function, replace the comment, // launch and obtain video recording activity
with:
val forVideoResult =
rememberLauncherForActivityResult(RecordVideo()) { hasVideo ->
if (hasVideo) {
imagevm.videoUri = imagevm.videoStoreUri
imagevm.videoReloaded = !imagevm.videoReloaded
} else {
vm.errMsg.value = "RecordVideo: failed or user cancelled"
}
}
// video record button
We have created and registered an ActivityResultContract of type RecordVideo().
We have also created a launcher for the activity and remembered it in the forVideoResult
variable, so that it won’t be created again when ImageButtons() is recomposed.
With the launcher, we have provided a callback function to handle the result
returned by the Activity.
When the activity completes, any recorded video will be stored in imagevm.videoStoreUri,
which we provide when we launch the launcher in the onClick action of the RecordVideoButton()
below. We store this Uri in imagevm.videoUri. The variable imagevm.videoReloaded is
how we will force VideoPlayer() to reload even when the video’s (re-used) Uri
hasn’t changed, but its content has.
RecordVideoButton()
The onClick action of the RecordVideoButton() below checks that imagevm.videoStoreUri has been allocated, or allocates it if not, and launches the RecordVideo() activity, passing it imagevm.videoStoreUri as the scratch space to put the recorded video. In your ImageButtons() composable, replace // video record button with:
@Composable
fun RecordVideoButton() {
IconButton(
onClick = {
imagevm.videoStoreUri?.let { forVideoResult.launch(it) }
},
enabled = imagevm.hasCamera
) {
Icon(
imageVector = if (imagevm.videoUri == null)
Icons.Outlined.Videocam else Icons.Default.Videocam,
contentDescription = stringResource(R.string.video),
modifier = Modifier.scale(1.5f),
tint = if (imagevm.videoUri == null) Moss else Firebrick
)
}
}
// launch and obtain picture taking activity
Note that RecordVideoButton() is enabled only if the device has a camera.
Photo taking
As with video, we first allocate some scratch space to hold our working image file.
Add the following to your ImageViewModel in ImageView.kt file, replacing the
comment // image storage:
private var _imageStoreUri: Uri? = null
val imageStoreUri: Uri? // by mutableStateOf<Uri?>(null)
get() {
if (_imageStoreUri == null) {
_imageStoreUri = mediaStoreAlloc(app, "image/jpeg")
//cropIntent?.data = imageStoreUri
}
return _imageStoreUri
}
var imageUri by mutableStateOf<Uri?>(null)
var imageReloaded by mutableStateOf(true)
// cropped image storage
Cropping photo
To allow the user to crop the picture they have taken before posting it, we allocate more scratch space for use by the cropper. Add to your ImageViewModel, replacing // cropped image storage:
private var _cropperStoreUri: Uri? = null
val cropperStoreUri: Uri?
get() {
if (_cropperStoreUri == null) {
_cropperStoreUri =
mediaStoreAlloc(app, "image/jpeg")
}
return _cropperStoreUri
}
private var _cropIntent: Intent? = null
val cropIntent: Intent?
get() {
if (_cropIntent == null) {
_cropIntent = CropIntent(app, cropperStoreUri)
}
return _cropIntent
}
fun resetCropper() {
_cropperStoreUri?.let { content.delete(it, null, null) }
_cropperStoreUri = null
_cropIntent = null
}
We have also added cropIntent in the above. It holds the Intent that launches the image-cropping Activity, obtained from CropIntent(). We added a resetCropper() method to clear the cropper’s scratch space after each use and to uninitialize cropIntent: Android seems to reuse buffers instead of overwriting them.
We put the following CropIntent() helper function in Media.kt file:
fun CropIntent(context: Context, croppedImageUri: Uri?): Intent? {
val intent = Intent("com.android.camera.action.CROP")
intent.type = "image/*"
val listofCroppers =
context.packageManager.queryIntentActivities(intent, PackageManager.ResolveInfoFlags.of(0L))
// No image cropping Activity registered
if (listofCroppers.size == 0) {
Log.e("CROP", "Device does not support image cropping")
return null
}
intent.component = ComponentName(
listofCroppers[0].activityInfo.packageName,
listofCroppers[0].activityInfo.name)
//https://android.googlesource.com/platform/packages/apps/Camera2/+/5f8c30e/src/com/android/camera/crop/CropActivity.java
// create a crop box:
intent.putExtra("outputX", 414.36)
.putExtra("outputY", 500)
.putExtra("aspectX", 1)
.putExtra("aspectY", 1)
// enable zoom and crop
.putExtra("scale", true)
.putExtra("crop", true)
croppedImageUri?.let {
intent.putExtra(MediaStore.EXTRA_OUTPUT, it)
} ?: run {
// NOT USED in tutorial
intent.putExtra("return-data", true)
}
return intent
}
We first check whether Android’s undocumented built-in image-cropping capability is available on the device. If it is, we’ll opportunistically provide this functionality. The cropped image will be put in the croppedImageUri parameter; we pass imagevm.cropperStoreUri as the argument to this parameter.
Now we’re set up to take pictures. Android’s camera API doesn’t allow the user to perform both image and video capture with one call; instead we need to launch two different ActivityResultContracts from two different buttons. For picture taking, we will use Android’s built-in TakePicture() API as is, with no further customization.
forPictureResult and forCropResult
As with video taking, we create the launcher with callback function for TakePicture()
in ImageView.kt. Inside your ImageButtons() composable, replace the comment
// launch and obtain picture taking activity with:
val forCropResult = rememberLauncherForActivityResult(StartActivityForResult()) { result ->
if (result.resultCode == Activity.RESULT_OK) {
result.data?.data.let {
imagevm.imageUri = it
}
} else {
// post uncropped image
imagevm.imageUri = imagevm.imageStoreUri
vm.errMsg.value = "Crop error: ${result.resultCode.toString()}"
}
imagevm.imageReloaded = !imagevm.imageReloaded
}
val forPictureResult = rememberLauncherForActivityResult(TakePicture()) { hasPhoto ->
if (hasPhoto) {
imagevm.cropIntent?.let { forCropResult.launch(it) }
} else {
vm.errMsg.value = "TakePicture: failed or user cancelled"
}
}
// picture taking button
We have created and registered two result contracts for two activities: TakePicture() for taking a picture and another for cropping the image. There is no custom ActivityResultContract for image cropping. Instead, we launch a generic StartActivityForResult to perform the cropping.
We created launchers with callback functions for these contracts. If picture taking was successful and the device has a cropper, we launch the cropper. The cropped image will be put in imagevm.cropperStoreUri, accessible as result.data?.data, which we then store in imagevm.imageUri. If there is no cropped image, e.g., if no cropper exists, the taken picture will be stored in imagevm.imageUri instead. The variable imagevm.imageReloaded is to force reloading of new content put in the (re-used) imagevm.imageUri.
TakePictureButton()
As with video, the onClick action of the TakePictureButton() first checks that imagevm.imageStoreUri has been allocated, or allocates it if not, and launches the TakePicture activity, passing it imagevm.imageStoreUri as the scratch space to put the taken picture. In addition, it also passes imagevm.imageStoreUri as the image to be cropped, if a cropper exists. Replace the comment, // picture taking button with:
@Composable
fun TakePictureButton() {
IconButton(
onClick = {
imagevm.resetCropper()
imagevm.imageStoreUri?.let {
imagevm.cropIntent?.data = it
forPictureResult.launch(it)
}
},
enabled = imagevm.hasCamera
) {
Icon(
imageVector = if (imagevm.imageUri == null)
Icons.Outlined.PhotoCamera
else Icons.Default.PhotoCamera,
contentDescription = stringResource(R.string.camera),
modifier = Modifier.scale(1.2f),
tint = if (imagevm.imageUri == null) Moss else Firebrick
)
}
}
// launch picking activity and obtain result
TakePictureButton() is also enabled only if the device has a camera.
Picking from the album
Android has two alternatives for picking media items: GetContent() or PickVisualMedia.
GetContent() allows you to pick all your media from both your Google Drive and your local
device’s Photos album. PickVisualMedia(), on the other hand, only allows you to pick from
your local Photos album and only recent photos and videos. As Google puts it, you can pick
only media “user has selected.” PickVisualMedia() does have a nicer, more “modern” UI.
For this tutorial, we use GetContent(). The launcher for both APIs are identical, you just need to specify your choice of ActivityResultContract to launch.
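For comparison, had we chosen PickVisualMedia() instead, the launcher would look roughly like this sketch (not used in this tutorial; the callback is assumed to mirror forContentResult below):

val pickMedia = rememberLauncherForActivityResult(PickVisualMedia()) { uri ->
    // PickVisualMedia() hands back a Uri? of the picked item
    uri?.let { /* same video/image handling as in forContentResult */ }
}
// pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))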
forContentResult
Add a launcher for GetContent() in your ImageButtons(). Replace the comment,
// launch picking activity and obtain result with:
val forContentResult = rememberLauncherForActivityResult(GetContent()) { uri ->
uri?.let {
if (imagevm.content.getType(uri).toString().contains("video")) {
imagevm.videoUri = uri
} else {
// cropper cannot work with original Uri, must copy
imagevm.imageStoreUri?.let {
uri.copyTo(imagevm.content, it)
imagevm.cropIntent?.data = it
imagevm.cropIntent?.let { forCropResult.launch(it) }
}
}
}
}
// album picking button
The video or image the user picked from the photo album is returned in the application’s content resolver, which we’ve made accessible as imagevm.content. If the user picked a video, we simply record the picked Uri. If, on the other hand, they picked an image, we want to allow the user to crop the image before posting. However, the cropper cannot work with content in the application’s content resolver, so we first copy it to our scratch space, imagevm.imageStoreUri. Then we pass this copy to the cropper as the image to be cropped and launch the cropper.
We declare the copyTo() function as an extension function on the Uri class. Create a new Kotlin file called Extensions.kt and put copyTo() in it:
fun Uri.copyTo(resolver: ContentResolver, target: Uri): Unit {
val inStream = resolver.openInputStream(this) ?: return
val outStream = resolver.openOutputStream(target) ?: return
val buffer = ByteArray(8192)
var read: Int
while (inStream.read(buffer).also { read = it } != -1) {
outStream.write(buffer, 0, read)
}
outStream.flush()
outStream.close()
inStream.close()
}
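Incidentally, the same copy can be written more compactly with Kotlin’s standard-library InputStream.copyTo() and use {} blocks, which close both streams even if an exception is thrown. A sketch of this equivalent version:

fun Uri.copyTo(resolver: ContentResolver, target: Uri) {
    resolver.openInputStream(this)?.use { input ->
        resolver.openOutputStream(target)?.use { output ->
            input.copyTo(output)  // loops over an 8 KB buffer internally
        }
    }
}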
PickMediaButton()
This button simply launches media picking activity, allowing picking of all media types
from the album. Replace // album picking button in ImageButtons() with:
@Composable
fun PickMediaButton() {
IconButton(
onClick = {
imagevm.resetCropper()
forContentResult.launch("*/*")
},
) {
Icon(
imageVector = if (imagevm.imageUri == null
&& imagevm.videoUri == null)
Icons.Outlined.PhotoLibrary
else Icons.Default.Photo,
modifier = Modifier.scale(1.1f),
contentDescription = stringResource(R.string.album),
tint = if (imagevm.imageUri == null && imagevm.videoUri == null)
Moss else Firebrick
)
}
}
// trash button
When you launch GetContent(), you may be presented with a list of files under Recent files. DO NOT pick from this list; GetContent() cannot retrieve from Recent files. Instead:
- if you see the Drive and Photos app icons under BROWSE FILES IN OTHER APPS, click on Photos to pick from the on-device photo album, or click on Drive to pick from Google Drive. From Photos we can post a photo or a video (≤ 3 secs long) with a chatt. But from Drive, we can only post a photo; trying to post a video from Drive results in an error message saying that the image must be larger than 50x50 pixels.
- if you don’t see the Drive and Photos icons, click on the hamburger menu at the top left corner to reveal the navigation drawer. Then choose Open from > Photos or Open from > Drive. The warning above about not being able to post videos from Drive applies.
Trash and ViewModel clean up
If the user decides not to post their chatt and its attached image(s), if any, we provide a TrashButton() to clear these. Replace // trash button in ImageButtons() with:
@Composable
fun TrashButton() {
IconButton(
onClick = {
vm.message.clearText()
imagevm.imageUri = null
imagevm.videoUri = null
},
enabled = vm.message.text.isNotEmpty() || imagevm.imageUri != null ||
imagevm.videoUri != null,
) {
Icon(imageVector = Icons.Default.Delete,
modifier = Modifier.scale(1.3f),
contentDescription = stringResource(R.string.trash),
tint = if (vm.message.text.isEmpty() && imagevm.imageUri == null &&
imagevm.videoUri == null) Color.LightGray
else Firebrick
)
}
}
// all together now
We also clean up all the scratch spaces we have allocated when ImageViewModel is
deallocated. Add the following to your ImageViewModel class, replacing the comment,
// view model clean up functions with:
fun clearUris() {
_imageStoreUri?.let { content.delete(it, null, null) }
_imageStoreUri = null
imageUri = null
_cropperStoreUri?.let { content.delete(it, null, null) }
_cropperStoreUri = null
_cropIntent = null
_videoStoreUri?.let { content.delete(it, null, null) }
_videoStoreUri = null
videoUri = null
}
override fun onCleared() {
super.onCleared()
clearUris()
}
Row of buttons
We close off the ImageButtons() composable by lining up the buttons defined above side by side in a row. Add the following to your ImageButtons() composable, replacing // all together now:
Row(
modifier = Modifier
.fillMaxWidth(1f)
.background(color = WhiteSmoke)
.padding(start = 4.dp, top = 0.dp, bottom = 0.dp, end = 80.dp),
verticalAlignment = Alignment.CenterVertically,
horizontalArrangement = Arrangement.SpaceEvenly,
) {
TrashButton()
Spacer(
modifier = Modifier
.size(50.dp)
)
PickMediaButton()
Spacer(
modifier = Modifier
.size(5.dp)
)
Row {
RecordVideoButton()
Spacer(
modifier = Modifier
.size(5.dp)
)
TakePictureButton()
}
}
Previewing image and video
To allow preview of the taken picture, recorded video, or picked picture or video before posting, we define an ImagePreview() composable that displays the video and/or picture side by side. Add the following to your ImageView.kt file, outside the existing class and composable:
@Composable
fun ImagePreview() {
val context = LocalContext.current
val vm: ImageViewModel = viewModel()
Row(
modifier = Modifier
.fillMaxWidth(1f)
.background(color = WhiteSmoke)
.padding(start = 60.dp, top = 0.dp, bottom = 10.dp, end = 60.dp),
horizontalArrangement = Arrangement.SpaceEvenly,
verticalAlignment = Alignment.CenterVertically
) {
vm.videoUri?.let { uri ->
VideoPlayer(
modifier = Modifier
.height(180.dp)
.background(Color.Transparent)
.aspectRatio(.6f, matchHeightConstraintsFirst = true),
uri, vm.videoReloaded, autoPlay = true
)
}
vm.imageUri?.let { uri ->
AsyncImage(
model = ImageRequest.Builder(context)
.data(uri)
.setParameter("reload", vm.imageReloaded)
.build(),
contentDescription = "Photo to be posted",
contentScale = FillHeight,
modifier = Modifier
.height(180.dp)
.background(Color.Transparent),
)
}
}
}
The one big advantage of both AsyncImage() and ExoPlayer(), on which VideoPlayer() builds, is that both automatically handle accessing and downloading remote URLs. We don’t have to worry about initiating a connection to the remote server nor managing the data download. Just give these APIs the URL of an image or video and they handle the downloading and display end-to-end, from network to screen.
AsyncImage() is an API of Coil, a third-party library for downloading images. We use Coil 2 instead of Coil 3 so that we can use vm.imageReloaded to force it to reload the data(uri) when the uri itself doesn’t change but we have changed its content. We were not able to make Coil 3 do the same.
Exoplayer
VideoPlayer() is our own wrapper around Google’s
Media3 Exoplayer
for playing back video. Add the following to your Media.kt file:
@Composable
fun VideoPlayer(modifier: Modifier = Modifier, videoUri: Uri, reload: Boolean = true,
autoPlay: Boolean = false) {
val context = LocalContext.current
val lifecycle = LocalLifecycleOwner.current.lifecycle
var showPause by rememberSaveable { mutableStateOf(true) }
val videoPlayer = remember { ExoPlayer.Builder(context).build() }
var playbackPoint by rememberSaveable { mutableStateOf(0L) }
// reset the videoPlayer whenever videoUri and/or reload change
LaunchedEffect(videoUri, reload) {
playbackPoint = 0L
with (videoPlayer) {
playWhenReady = autoPlay
setMediaItem(fromUri(videoUri))
seekTo(currentMediaItemIndex, playbackPoint)
prepare()
}
}
Box(modifier = modifier) {
AndroidExternalSurface(
modifier = modifier,
onInit = {
onSurface { surface, _, _ ->
videoPlayer.setVideoSurface(surface)
surface.onDestroyed { videoPlayer.setVideoSurface(null) }
}
}
)
IconButton(modifier = modifier,
onClick = {
with (videoPlayer) {
if (isPlaying) {
playbackPoint = 0L.coerceAtLeast(contentPosition)
pause()
} else {
if (playbackState == Player.STATE_ENDED) {
seekTo(currentMediaItemIndex, 0L)
}
play()
}
}
}
) {
Icon(imageVector =
if (showPause) Icons.Default.Pause
else Icons.Default.PlayArrow,
contentDescription = null,
modifier = Modifier.scale(2f),
tint = WhiteSmoke
)
}
}
DisposableEffect(Unit) {
val observer = LifecycleEventObserver { _, event ->
when (event) {
Lifecycle.Event.ON_START -> {
if (autoPlay) {
videoPlayer.play()
}
}
Lifecycle.Event.ON_PAUSE -> {
playbackPoint = 0L.coerceAtLeast(videoPlayer.contentPosition)
videoPlayer.pause()
}
else -> {}
}
}
lifecycle.addObserver(observer)
// Exoplayer event listener
val listener = object : Player.Listener {
override fun onIsPlayingChanged(isPlaying: Boolean) {
showPause = isPlaying
}
}
videoPlayer.addListener(listener)
onDispose {
// WARNING: also disposes on orientation change, prior to the change!
// Cannot tell a priori whether disposal is due to orientation change
// or dismissal
videoPlayer.removeListener(listener)
videoPlayer.release()
lifecycle.removeObserver(observer)
}
}
}
ExoPlayer.Builder() creates an instance of the ExoPlayer, which we put inside remember so that it is created only once at VideoPlayer launch, and not on recomposition nor orientation changes. We keep this remembered instance of ExoPlayer in videoPlayer.
VideoPlayer() takes parameters videoUri to play back and reload to indicate whether it should reload the Exoplayer. Reloading the Exoplayer is a side effect, so we put the code for reloading inside a LaunchedEffect(). However, instead of running the LaunchedEffect() only once, upon first launch, we want it to run every time videoUri or reload changes, hence we pass these as the keys/arguments to LaunchedEffect(). Inside LaunchedEffect(), setMediaItem() updates the Exoplayer with the current videoUri.
In MainView we use Scaffold() to lay out UI elements in a composable. Here we use AndroidExternalSurface() which “provides a dedicated drawing Surface as a separate layer positioned, by default, behind the window holding the AndroidExternalSurface composable. The Surface provided can be used to present content that’s external to Compose, such as a video stream (from a camera or a media player), OpenGL, Vulkan… The provided Surface can be rendered into using a thread different from the main thread.”
Finally, the DisposableEffect() block allows the video player to pause, play, and be disposed of on the app’s appropriate lifecycle events.
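Stripped of the player specifics, the shape of that block is simply (bare pattern only, not another addition to Media.kt):

DisposableEffect(Unit) {
    // set up: register observers and listeners on entering the composition
    onDispose {
        // clean up: unregister and release resources on leaving it
    }
}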
That will be our final addition to Media.kt.
Viewing posted image and video
One more composable for ImageView.kt before we leave it. ImageView() displays the downloaded video and/or picture associated with each posted chatt side by side, aligned to the right or left depending on whether the current user was the sender of the chatt. We will be calling this composable from ChattView later. It uses VideoPlayer() to play back video, as ImagePreview() does. For displaying images, it uses Coil’s SubcomposeAsyncImage() instead of AsyncImage(). SubcomposeAsyncImage() allows showing a CircularProgressIndicator() while the image is still downloading. Also, we put VideoPlayer() and SubcomposeAsyncImage() in a LazyRow() instead of a Row() here so that only chatts that are visible on screen will have their images downloaded and shown. Further, if a chatt has neither a video nor an image, LazyRow() will not take up any screen space.
@Composable
fun ImageView(chatt: Chatt, isSender: Boolean) {
LazyRow(verticalAlignment = Alignment.Top,
horizontalArrangement = Arrangement.spacedBy(10.dp),
modifier=Modifier
.widthIn(max=300.dp)
) {
chatt.videoUrl?.let {
item {
VideoPlayer(
modifier = Modifier
.height(150.dp)
.aspectRatio(.6f, matchHeightConstraintsFirst = true),
it.toUri()
)
}
}
chatt.imageUrl?.let {
item {
SubcomposeAsyncImage(
it,
contentDescription = "Photo posted with chatt",
loading = { CircularProgressIndicator() },
contentScale = FillHeight,
modifier = Modifier
.height(150.dp)
)
}
}
}
}
The networking
Chatt
Add two new stored properties to the Chatt class to hold the image and video URLs
associated with a chatt:
class Chatt(var username: String? = null,
var message: MutableState<String>? = null,
var id: UUID? = null,
var timestamp: String? = null,
var imageUrl: String? = null,
var videoUrl: String? = null)
ChattStore
We first update getChatts(). Update the apiUrl to point to the getimages endpoint.
Then add decoding for the imageUrl and videoUrl fields to chatts.add():
chatts.add(
Chatt(
username = chattEntry[0].toString(),
message = mutableStateOf(chattEntry[1].toString()),
id = UUID.fromString(chattEntry[2].toString()),
timestamp = chattEntry[3].toString(),
imageUrl = if (chattEntry[4] == JSONObject.NULL) null else chattEntry[4].toString(),
videoUrl = if (chattEntry[5] == JSONObject.NULL) null else chattEntry[5].toString(),
)
)
Note that since both the imageUrl and videoUrl fields are nullable, they
could contain JSON NULL, which we must manually deserialize into Kotlin’s null.
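If you find yourself repeating that check, it can be factored into a small helper; a sketch, assuming chattEntry is an org.json JSONArray (stringOrNull() is our own illustrative name, not part of the tutorial code):

// map org.json's JSONObject.NULL sentinel to Kotlin's null
fun JSONArray.stringOrNull(index: Int): String? =
    if (get(index) == JSONObject.NULL) null else get(index).toString()

// e.g., imageUrl = chattEntry.stringOrNull(4)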
multipart/form-data
Unlike other tutorials in this course, the data we want to post here is not short strings
that we can put in a JSON object. And, unlike the downloading of images and videos, we
cannot rely on a library to handle uploading from a URL for us. Instead we need to upload
our large data using HTTP multipart/form-data representation/encoding.
A web page with a form to fill out usually has multiple fields (e.g., name, address, net worth, etc.). Data from these multiple parts of the form is encoded using HTTP’s multipart/form-data representation. One advantage of this encoding is that binary data can be sent as is, not encoded into a string of printable characters, as we must if using JSON. Since we don’t have to encode the binary data into character string, we can stream it directly from file to network without loading it into memory first, allowing us to send much larger files. We use the multipart/form-data encoding with OkHttp3 to send images and videos in this tutorial.
To upload multipart/form-data without OkHttp3, using lower-level networking API, you will need more detailed knowledge of the HTTP protocol.
Replace your postChatt() function in ChattStore.kt with:
suspend fun postChatt(username: String?, message: String?, imageFile: File?, videoFile: File?, errMsg: MutableState<String>) {
val mpFD = MultipartBody.Builder().setType(MultipartBody.FORM)
.addFormDataPart("username", username ?: "")
.addFormDataPart("message", message ?: "")
imageFile?.let {
mpFD.addFormDataPart("image", "chattImage",
it.asRequestBody("image/jpeg".toMediaType()))
}
videoFile?.let {
mpFD.addFormDataPart("video", "chattVideo",
it.asRequestBody("video/mp4".toMediaType()))
}
val apiUrl = "${serverUrl}/postimages"
val request = Request.Builder()
.url(apiUrl)
.post(mpFD.build())
.build()
try {
val response = client.newCall(request).await()
if (!response.isSuccessful) {
errMsg.value = "postChatts: ${response.code}\n$apiUrl"
}
response.body.close()
} catch (e: IOException) {
errMsg.value = "postChatt: ${e.localizedMessage ?: "POSTing failed"}"
}
}
The method constructs the “form” to be uploaded as comprising:
- a part with key “username” whose value contains the username (or the empty string if null),
- a part with key “message” constructed similarly, and then
- an optional part with key “image” with data in the file imageFile. The image has been JPEG encoded. The string “chattImage” is how the data is tagged; it can be any string. The MediaType() documents the encoding of the data, and finally,
- the last part is also optional and has key “video”. It is handled similarly to the “image” part. If the File provided is in storage, the data is transferred directly from storage to network without first loading it into memory.
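For the curious, the encoded request body OkHttp assembles looks roughly like this on the wire (the boundary string, exact headers, and the sample values are all illustrative):

POST /postimages HTTP/1.1
Content-Type: multipart/form-data; boundary=BOUNDARY

--BOUNDARY
Content-Disposition: form-data; name="username"

someuser
--BOUNDARY
Content-Disposition: form-data; name="message"

Image(s) attached
--BOUNDARY
Content-Disposition: form-data; name="image"; filename="chattImage"
Content-Type: image/jpeg

(raw JPEG bytes, streamed from file)
--BOUNDARY--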
Note that the apiUrl of the request has been set to the postimages API endpoint.
The UI
Now we update the app’s UI.
Posting images
We put ImageButtons() as a row of buttons above the existing input area consisting of the OutlinedTextField and SubmitButton in MainView. Once the user has taken a picture, recorded a video, or picked something from the album to post, we present the image(s) in an ImagePreview() row above ImageButtons(). To that end, simply replace the Row { } below ChattScrollView with the following:
HorizontalDivider()
ImagePreview()
ImageButtons()
Row(horizontalArrangement = Arrangement.SpaceEvenly,
verticalAlignment = Alignment.CenterVertically,
modifier = Modifier
.fillMaxWidth(1f)
.imePadding()
.background(color = WhiteSmoke)
.padding(top = 4.dp, start = 20.dp, end = 20.dp, bottom = 40.dp)
) {
OutlinedTextField(
state = vm.message,
placeholder = {
Text(text = vm.instruction, color = Color.Gray)
},
modifier = Modifier
.weight(1f)
.padding(end = 12.dp)
.shadow(1.dp, shape = RoundedCornerShape(40.dp)),
textStyle = LocalTextStyle.current.copy(fontSize = 18.sp),
colors = TextFieldDefaults.colors(
unfocusedContainerColor = HeavenWhite,
focusedContainerColor = HeavenWhite,
focusedIndicatorColor = Color.Transparent,
unfocusedIndicatorColor = Color.Transparent
),
lineLimits = TextFieldLineLimits.MultiLine(1, 6),
)
SubmitButton(listScroll)
}
Replace your SubmitButton in MainView.kt with:
@Composable
fun SubmitButton(listScroll: LazyListState) {
val context = LocalContext.current
val imagevm: ImageViewModel = viewModel()
val vm: ChattViewModel = viewModel()
var isSending by remember { mutableStateOf(false) }
IconButton(
onClick = {
isSending = true
vm.viewModelScope.launch (Dispatchers.Default) {
var imageFile: File? = null
var videoFile: File? = null
imagevm.imageUri?.run {
toFile(context, vm.errMsg)?.let {
imageFile = it
} ?: run {
vm.errMsg.value = "Unsupported image format or file not on device"
}
}
imagevm.videoUri?.run {
toFile(context, vm.errMsg)?.let {
videoFile = it
} ?: run {
vm.errMsg.value = "Unsupported video format or file not on device"
}
}
val message = vm.message.text.toString()
postChatt(vm.username, message.ifEmpty { "Image(s) attached" },
imageFile, videoFile, vm.errMsg)
if (vm.errMsg.value.isEmpty()) { getChatts(vm.errMsg) }
vm.message.clearText()
imagevm.imageUri = null
imagevm.videoUri = null
isSending = false
withContext(AndroidUiDispatcher.Main) {
listScroll.animateScrollToItem(chatts.size)
}
}
},
modifier = Modifier
.size(55.dp)
.background(if (vm.message.text.isEmpty()
&& imagevm.imageUri == null
&& imagevm.videoUri == null)
NavyLight else Navy,
shape = CircleShape),
enabled = !(isSending || (vm.message.text.isEmpty()
&& imagevm.imageUri == null
&& imagevm.videoUri == null)),
) {
if (isSending) {
CircularProgressIndicator(
color = Gray88,
strokeWidth = 4.dp,
modifier = Modifier.size(24.dp)
)
} else {
Icon(
Icons.AutoMirrored.Filled.Send,
contentDescription = stringResource(R.string.send),
tint = if (vm.message.text.isEmpty()
&& imagevm.imageUri == null
&& imagevm.videoUri == null)
MaizeLight else Maize,
modifier = Modifier.size(28.dp)
)
}
}
}
We rely on an extension function on the Uri type to convert a Uri to a File. Add the following to your Extensions.kt:
fun Uri.toFile(context: Context, errMsg: MutableState<String>): File? {
if (!(authority == "media" || authority == "com.google.android.apps.photos.contentprovider")) {
// for on-device media files only
errMsg.value = "${authority.toString()}: media file not on device"
return null
}
var file: File? = null
if (scheme.equals("content")) {
val cursor = context.contentResolver.query(
this, arrayOf("_data"),
null, null, null
)
cursor?.run {
moveToFirst()
val col = getColumnIndex("_data")
if (col != -1) {
val path = getString(col)
if (path != null) {
file = File(path)
}
}
close()
}
}
return file
}
Displaying posted image(s)
On the chatt timeline, to display image(s) posted with a chatt, add:
ImageView(chatt, isSender)
below all the Text() elements inside the if (msg.value.isNotEmpty()) {}
block of ChattView() composable in the ChattScrollView.kt file.
Congratulations! You’re done with the front end! (Don’t forget to work on the backend!)
Run and test to verify and debug
You should now be able to run your front end against your backend. You will not get full credit if your front end is not set up to work with your backend!
Front-end submission guidelines
We will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.
Push your front-end code to the same GitHub repo you’ve submitted your back-end code:
- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on the GitHub repo you created at the start of this tutorial
- Add Summary to your changes and click Commit to main at the bottom of the left pane
- If you have pushed code to your repo, click Pull Origin to synch up the repo on your laptop
- Finally click Push Origin to push all changes to GitHub
Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder images. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission and, further, you may have problems getting started on later tutorials. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for Reactive Tutorials, only files needed for grading will be pushed to GitHub.
reactive
|-- chatter.zip
|-- chatterd
|-- chatterd.crt
|-- images
|-- composeChatter
|-- app
|-- gradle
|-- llmprompt.zip
# and other files or folders
Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo and build and run your submission to make sure that it works. You will get a ZERO if your tutorial doesn’t build, run, or open.
IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we’d know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.
Review your information on the Tutorial and Project Links sheet. If you’ve changed your teaming arrangement from previous tutorial’s, please update your entry. If you’re using a different GitHub repo from previous tutorial’s, invite eecsreactive@umich.edu to your new GitHub repo and update your entry.
References
Exoplayer and AndroidView
- How to use Exoplayer library to play videos
- Android Compose Videos with ExoPlayer
- Player events
- AndroidExternalSurface
Android Camera
- How to pick Image from gallery in jetpack compose
- Android: Let user pick image or video from Gallery
- Capturing Images from Camera in Android with Jetpack Compose: A Step-by-Step Guide
- ImageView disappear after changed orientation
Image download
- Loading images using coil in Jetpack Compose
- Coil: Getting Started
- Recompose the painter when the Uri passed in doesn’t change
- Benchmarking Image Loading Libraries on Android
- JPEG Formats - Progressive vs. Baseline
- Progressive JPEGs and green Martians
Image cropping
- Cropping saved images in Android
- com.android.camera.crop
- Package visibility in Android 11
- No, Android Does Not Have a Crop Intent
Image upload
OkHttp3
- Posting a multipart request (.kt, .java)
- java.io.FileNotFoundException: /storage/emulator/0/New_file.txt: open failed: EACCES (Permission denied)
- How to convert content://media/external/images/media/Y to file:///storage/sdcard0/Pictures/X.jpg in android?
- Adding content to RequestBody
- Adding image as bytearray
- Kotlin - OkHttp - Return from onResponse
MediaStore and scoped storage
- Demystifying internal vs external storage in modern Android
- How to save an image in Android Q using MediaStore
- How to save an image in a subdirectory on android Q whilst remaining backwards compatible
- How to save an image in Android Q using MediaStore?
- Scoped Storage in Android 10 & Android 11
- Storage Updates in Android 11
- The Quick Developers Guide to Migrate Their Apps to Android 11
- Granular media permissions (Android 13)
- Permissionless is the future of Storage on Android
- Grant partial access to photos and videos
Appendix: imports
| Prepared by Benjamin Brengman, Wendan Jiang, Alexander Wu, Ollie Elmgren, Tianyi Zhao, Nowrin Mohamed, Chenglin Li, Xin Jie ‘Joyce’ Liu, Yibo Pi, and Sugih Jamin | Last updated: October 19th, 2025 |