This week, we shifted focus toward real-world testing on the bike to evaluate how our adaptive audio responds under different ride conditions. The goal: determine which structure and sound design will work best for the final demo.
ML
Introduced a background InferenceService that runs machine-learning inference on live sensor data (pitch, roll, yaw, g-force) using a TensorFlow Lite model. It processes samples on a background thread, runs inference whenever the sample buffer fills, and broadcasts the results to the rest of the app. The service is tied into music playback and is controlled by the MusicService.
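For a sense of the shape of this, here is a minimal Kotlin sketch of a windowed-inference service, not our exact code: the window size, action name, model filename, and output shape are all assumptions for illustration.

```kotlin
import android.app.Service
import android.content.Intent
import android.os.Handler
import android.os.HandlerThread
import android.os.IBinder
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

class InferenceService : Service() {

    companion object {
        const val ACTION_RESULT = "com.example.ride.INFERENCE_RESULT" // assumed action name
        private const val WINDOW = 50   // samples per inference window (assumed)
        private const val FEATURES = 4  // pitch, roll, yaw, g-force
    }

    private lateinit var thread: HandlerThread
    private lateinit var handler: Handler
    private lateinit var interpreter: Interpreter
    private val buffer = ArrayDeque<FloatArray>()

    override fun onCreate() {
        super.onCreate()
        interpreter = Interpreter(loadModel("model.tflite")) // placeholder model name
        thread = HandlerThread("inference").also { it.start() }
        handler = Handler(thread.looper)
    }

    /** Fed one sample at a time: floatArrayOf(pitch, roll, yaw, gForce). */
    fun onSample(sample: FloatArray) {
        require(sample.size == FEATURES)
        // Hop to the background thread so inference never blocks the sensor callback.
        handler.post {
            buffer.addLast(sample)
            if (buffer.size >= WINDOW) runInference()
        }
    }

    private fun runInference() {
        // Shape the window as [1, WINDOW, FEATURES] to match the (assumed) model input.
        val input = Array(1) { Array(WINDOW) { i -> buffer.elementAt(i) } }
        val output = Array(1) { FloatArray(1) }
        interpreter.run(input, output)
        buffer.clear()

        // Broadcast the score so MusicService can adapt playback.
        sendBroadcast(Intent(ACTION_RESULT).putExtra("score", output[0][0]))
    }

    private fun loadModel(name: String): MappedByteBuffer =
        assets.openFd(name).use { fd ->
            FileInputStream(fd.fileDescriptor).channel.map(
                FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
            )
        }

    override fun onBind(intent: Intent?): IBinder? = null

    override fun onDestroy() {
        thread.quitSafely()
        interpreter.close()
        super.onDestroy()
    }
}
```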
Also added proper storage access and permissions for saving sensor recordings on Android 10+, updated the data format for logging, and included a placeholder .tflite model. The training script was updated to support g-force input, a more TFLite-friendly architecture, and data augmentation for better model performance.
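On the storage side, Android 10's scoped storage means writing through MediaStore rather than raw external paths. A hedged sketch of what that can look like; the directory, file name, and CSV format here are illustrative, not our actual logging code:

```kotlin
import android.content.ContentValues
import android.content.Context
import android.os.Build
import android.provider.MediaStore

/** Writes one recording as CSV into Download/RideRecordings via MediaStore. */
fun saveRecording(context: Context, fileName: String, csv: String) {
    // MediaStore.Downloads and RELATIVE_PATH require API 29 (Android 10).
    require(Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q)

    val values = ContentValues().apply {
        put(MediaStore.MediaColumns.DISPLAY_NAME, fileName)
        put(MediaStore.MediaColumns.MIME_TYPE, "text/csv")
        put(MediaStore.MediaColumns.RELATIVE_PATH, "Download/RideRecordings")
    }

    val resolver = context.contentResolver
    val uri = resolver.insert(MediaStore.Downloads.EXTERNAL_CONTENT_URI, values)
        ?: error("MediaStore insert failed")
    resolver.openOutputStream(uri)?.use { stream ->
        stream.write(csv.toByteArray())
    }
}
```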
Other Updates
- FMOD playback is now fully integrated into a background MusicService, allowing music to continue playing even when the screen is off. This ensures a seamless riding experience without interruption. The service properly manages lifecycle events, including cleanup when the app is swiped away from Recents or shut down (see the service sketch after this list).
- Fixed an issue with the reverse button that was caused by unsafe state changes. The listener now only responds when the state actually changes, preventing rapid toggling glitches (a sketch of the guard follows the list).
- Implemented a proper stop mechanism for FMOD playback to ensure music stops completely when the app is closed.
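Here is a minimal sketch of how a service like this can handle both the Recents cleanup and the hard stop. The MusicService internals and the FmodPlayer wrapper are hypothetical stand-ins, since the actual FMOD bindings aren't shown here:

```kotlin
import android.app.Service
import android.content.Intent
import android.os.IBinder

class MusicService : Service() {

    override fun onBind(intent: Intent?): IBinder? = null

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        FmodPlayer.startPlayback() // hypothetical FMOD wrapper
        // START_STICKY keeps the service (and the music) alive with the screen off;
        // in practice a foreground service with a notification is usually needed too.
        return START_STICKY
    }

    // Fires when the app is swiped away from Recents.
    override fun onTaskRemoved(rootIntent: Intent?) {
        stopPlaybackCompletely()
        stopSelf()
        super.onTaskRemoved(rootIntent)
    }

    override fun onDestroy() {
        stopPlaybackCompletely()
        super.onDestroy()
    }

    private fun stopPlaybackCompletely() {
        // Stop all channels and release the FMOD system so no audio
        // keeps playing after the app is closed (hypothetical wrapper calls).
        FmodPlayer.stopAll()
        FmodPlayer.release()
    }
}
```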
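And the state-change guard behind the reverse-button fix, reduced to its core idea; the ReverseToggleGuard name and callback shape are illustrative:

```kotlin
/** Forwards an event only when the state actually changes. */
class ReverseToggleGuard(private val onChanged: (Boolean) -> Unit) {
    private var lastState: Boolean? = null

    fun onStateEvent(newState: Boolean) {
        if (newState == lastState) return // drop rapid duplicate events
        lastState = newState
        onChanged(newState)
    }
}

// Usage: call guard.onStateEvent(isReversed) from the button listener.
```

Dropping duplicate events at the listener boundary is what prevents the rapid-toggle glitch: downstream code only ever sees genuine transitions.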
Next Steps
- Train the ML model on the recorded sensor data
- Finalize music structure for demo
- Record backup video for demo
- Write the report
