Predicting Osteoporosis Risk Through Psychological Wellness and Neural Network Classification

16 October 2025, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

This research examines the relationship between overall wellness and bone health, focusing on the prediction and early detection of osteoporosis (OP). Since bone health is influenced by a complex interplay of nutrition, exercise, and psychological well-being, the study proposes a combined biological and computational approach to assess OP risk more effectively. The Well-Being Inventory (WBI) is used as a holistic measure of physical and mental wellness to determine its correlation with bone density and OP susceptibility. Establishing this connection could allow for a cost-effective, non-invasive method of identifying individuals at risk before severe deterioration occurs. To complement this wellness-based evaluation, the study develops a deep learning pipeline that classifies bone density using audio data. Bone sounds are converted into mel-spectrograms through the librosa Python library, producing visual representations of sound frequencies. These spectrograms are then labeled as high-density or low-density and used to train a Convolutional Neural Network (CNN) in PyTorch. The model learns to recognize frequency patterns that differentiate strong from weak bone structures. When new recordings are analyzed, the CNN automatically predicts bone density, providing a radiation-free and data-driven alternative to traditional DEXA scans. By integrating wellness metrics and artificial intelligence, the study presents an innovative framework for assessing bone health. This dual approach emphasizes prevention, enabling earlier detection and intervention in populations at risk of OP while reducing healthcare costs and avoiding the limitations of existing diagnostic techniques.
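As a purely illustrative example of how the two signals described above might feed a single risk assessment, the hypothetical Python sketch below combines a Well-Being Inventory score with a CNN density prediction. The 0–100 scale, the threshold of 50, and the decision logic are assumptions for illustration, not the study's scoring method.

# Hypothetical combination of a wellness score and a CNN density label into
# a coarse risk flag. Scale, threshold, and logic are illustrative only.
def osteoporosis_risk(wbi_score: float, predicted_density: str) -> str:
    """wbi_score: Well-Being Inventory result (assumed 0-100, higher = better).
    predicted_density: CNN output, "high_density" or "low_density"."""
    low_wellness = wbi_score < 50
    low_density = predicted_density == "low_density"
    if low_density and low_wellness:
        return "high risk: refer for DEXA confirmation"
    if low_density or low_wellness:
        return "moderate risk: monitor and reassess"
    return "low risk"

print(osteoporosis_risk(42.0, "low_density"))  # high risk: refer for DEXA confirmation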

Keywords

Machine Learning
Artificial Intelligence
Osteoporosis/Osteopenia
Wellness
Bone Health

Supplementary materials

Python Code Used to Train AI Model

This Python script trains a Convolutional Neural Network (CNN) using PyTorch to classify spectrogram images (visual representations of bone audio recordings) into categories such as high or low bone density. It begins by importing the essential libraries, detecting whether a GPU is available for faster computation, and setting up the data directory containing the labeled spectrograms. The images are preprocessed through resizing and tensor conversion before being loaded in batches with a DataLoader. The custom-built CNN consists of two convolutional and pooling layers that extract spatial features, followed by fully connected layers that classify the images into two categories. The model optimizes its parameters using the Adam optimizer and cross-entropy loss over ten epochs, reporting loss and accuracy for each epoch. After training, it saves the model as “density_model.pth” for later use in automatically identifying bone density from spectrogram inputs.
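A minimal sketch of such a training script is given below. The folder layout (spectrograms/high_density, spectrograms/low_density), the 128×128 input size, the batch size, and the layer widths are assumptions inferred from the descriptions here, not taken from the authors' code.

# Minimal training sketch: two conv/pool blocks, two FC layers, Adam +
# cross-entropy, ten epochs, saved as density_model.pth. Folder names and
# hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([
    transforms.Resize((128, 128)),   # match the 128x128 input used at inference
    transforms.ToTensor(),
])

# Expects one subfolder per class, e.g. spectrograms/high_density/*.png
dataset = datasets.ImageFolder("spectrograms", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

class DensityCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32 * 32, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = DensityCNN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    total_loss, correct = 0.0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * images.size(0)
        correct += (model(images).argmax(1) == labels).sum().item()
    print(f"epoch {epoch + 1}: loss={total_loss / len(dataset):.4f} "
          f"acc={correct / len(dataset):.3f}")

torch.save(model.state_dict(), "density_model.pth")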
Python Code to Convert Wellness and Frequency Data into Risk of Osteoporosis

This Python script loads a trained Convolutional Neural Network (CNN) to predict bone density from a single spectrogram image. Using PyTorch, it defines the same CNN architecture as the training script: two convolutional and pooling layers followed by fully connected layers for classification. The predict() function loads the saved model weights (density_model.pth) and prepares the input image by resizing it to 128×128 pixels, converting it to a tensor, and adding a batch dimension. The image is then passed through the network in evaluation mode, with gradient computation disabled. The model outputs class scores, and the highest score determines the prediction: either “high_density” or “low_density.” The result is printed in a user-friendly format. The script can be run from the command line, taking the image path as an argument, and serves as a simple, automated tool for analyzing bone density from spectrogram data.
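A minimal sketch of the inference side follows. It assumes the DensityCNN class from the training sketch above (the saved weights must come from an identical architecture) and the alphabetical class ordering that torchvision's ImageFolder assigns (high_density = 0, low_density = 1).

# Minimal inference sketch: rebuild the architecture, load the weights,
# preprocess one spectrogram, and report the predicted class.
import sys
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

CLASSES = ["high_density", "low_density"]  # ImageFolder's alphabetical order

class DensityCNN(nn.Module):
    # Must match the training-time architecture exactly.
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32 * 32, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def predict(image_path, weights="density_model.pth"):
    model = DensityCNN()
    model.load_state_dict(torch.load(weights, map_location="cpu"))
    model.eval()                                  # evaluation mode
    transform = transforms.Compose([
        transforms.Resize((128, 128)),            # resize to 128x128 pixels
        transforms.ToTensor(),
    ])
    batch = transform(Image.open(image_path).convert("RGB")).unsqueeze(0)  # add batch dim
    with torch.no_grad():                         # disable gradient computation
        scores = model(batch)
    return CLASSES[scores.argmax(1).item()]       # highest class score wins

if __name__ == "__main__":
    print(f"Predicted bone density: {predict(sys.argv[1])}")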
Depiction of the Frequency of Bone Tapping Compared to Density

This Python script automatically converts audio recordings into mel-spectrogram images for use in machine learning models. Using the librosa and Matplotlib libraries, it processes .wav audio files stored in labeled subfolders (e.g., “high_density” and “low_density”) within a main directory. The function create_spectrogram() loads each audio file, generates a mel-spectrogram (a visual representation of sound frequencies over time), and converts the power scale to decibels for clearer contrast. It then plots the spectrogram without axes or borders, saving it as a clean .png image. The generate_spectrograms_by_class() function automates this for all audio files across multiple class folders, preserving the same structure in the output directory so that each spectrogram corresponds to its correct label. Overall, the script is the crucial preprocessing step that transforms raw bone audio data into standardized visual inputs suitable for deep learning classification.
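A minimal sketch of this preprocessing step is given below. The function names follow the description; the folder names (audio/high_density → spectrograms/high_density) and the figure settings are illustrative assumptions.

# Minimal preprocessing sketch: .wav files in class subfolders become
# borderless mel-spectrogram .png images with the same class structure.
import os
import numpy as np
import librosa
import librosa.display
import matplotlib
matplotlib.use("Agg")            # render off-screen; no display needed
import matplotlib.pyplot as plt

def create_spectrogram(wav_path, png_path):
    y, sr = librosa.load(wav_path)                     # load audio at default rate
    mel = librosa.feature.melspectrogram(y=y, sr=sr)   # mel-scaled power spectrogram
    mel_db = librosa.power_to_db(mel, ref=np.max)      # power -> decibels for contrast
    fig = plt.figure(figsize=(2, 2))
    ax = fig.add_axes([0, 0, 1, 1])                    # fill the canvas: no borders
    ax.axis("off")                                     # no axes, ticks, or labels
    librosa.display.specshow(mel_db, sr=sr, ax=ax)
    fig.savefig(png_path, dpi=64)                      # 2 in x 64 dpi = 128x128 px
    plt.close(fig)

def generate_spectrograms_by_class(audio_dir="audio", out_dir="spectrograms"):
    for label in os.listdir(audio_dir):                # e.g. high_density, low_density
        class_in = os.path.join(audio_dir, label)
        if not os.path.isdir(class_in):
            continue
        class_out = os.path.join(out_dir, label)       # mirror the class structure
        os.makedirs(class_out, exist_ok=True)
        for name in os.listdir(class_in):
            if name.endswith(".wav"):
                create_spectrogram(
                    os.path.join(class_in, name),
                    os.path.join(class_out, name.replace(".wav", ".png")),
                )

if __name__ == "__main__":
    generate_spectrograms_by_class()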
