Methodology

Explainable Parkinson's detection from speech features.

This route presents the project as a reviewable research system: data preparation, model selection, explainability, and the web interface that ties those stages together.

Primary modality

Speech features

Interpretability

SHAP-based

Presentation mode

Research showcase

Project overview

Built to balance predictive performance with traceable reasoning.

The platform combines machine learning inference with a presentation layer designed for scrutiny. Reviewers can move from raw feature input to model output, grouped SHAP (SHapley Additive exPlanations) drivers, and benchmark results without leaving the interface.

The emphasis is not just on whether the model predicts a likely Parkinson's pattern, but on how that output can be explained in ways that are useful for faculty review and technical discussion.

By pairing plain-language summaries with detailed feature-level tables, the system supports both fast scanning and deeper model interpretation during demos or project evaluations.

Review posture

Trust comes from visibility.

Prediction route: Outcome + confidence
Explainability route: Grouped SHAP evidence
Performance route: Readable model comparison

Pipeline

A six-step pathway from dataset to reviewable dashboard.

The pipeline below shows the research framing behind the interface, not just the UI screens.

Speech Dataset

Step 1

Parkinson's disease (PD) speech-feature records establish the input distribution used by the deployed interface.

Feature Extraction

Step 2

Acoustic descriptors such as pitch period entropy (PPE), detrended fluctuation analysis (DFA), recurrence period density entropy (RPDE), jitter, and shimmer are prepared for inference.
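To make the cycle-level measures concrete, here is a minimal sketch of local jitter and shimmer, the two simplest descriptors in the set. The toy period and amplitude values are illustrative, not taken from the project's dataset, and real pipelines extract them from voiced segments of audio.

```python
import numpy as np

def local_jitter(periods: np.ndarray) -> float:
    """Mean absolute difference between consecutive pitch periods,
    normalized by the mean period (classic local jitter)."""
    return float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))

def local_shimmer(amplitudes: np.ndarray) -> float:
    """Same cycle-to-cycle idea applied to peak amplitudes."""
    return float(np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes))

# Toy cycle-to-cycle measurements (periods in seconds, amplitudes unitless).
periods = np.array([0.0072, 0.0071, 0.0073, 0.0072, 0.0074])
amps = np.array([0.81, 0.79, 0.83, 0.80, 0.82])

features = {
    "Jitter(%)": 100 * local_jitter(periods),  # conventionally reported as a percentage
    "Shimmer": local_shimmer(amps),
}
```

PPE, DFA, and RPDE require more machinery (entropy estimation, scaling analysis, recurrence statistics) and are omitted here.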

Machine Learning Models

Step 3

Saved classifiers such as Logistic Regression, SVM, Random Forest, and XGBoost provide benchmark context.
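A benchmark table like the one this step describes can be sketched as follows. The code uses synthetic data as a stand-in for the speech-feature table and trains fresh models rather than loading saved ones; XGBoost is left out to keep the sketch dependency-light.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a table of acoustic descriptors.
X, y = make_classification(n_samples=300, n_features=22, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "Random Forest": RandomForestClassifier(random_state=0),
}

# Fit each candidate and record held-out accuracy for the comparison view.
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, model.predict(X_te))
```

In a deployed system the fitted models would instead be serialized once (e.g. with `joblib`) and only loaded at inference time.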

Prediction

Step 4

The interface produces a Parkinson's likelihood score together with confidence and model context.
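The likelihood and confidence pairing can be derived directly from a classifier's two-class probability output. A minimal sketch, with an illustrative probability vector standing in for a real `predict_proba` result:

```python
import numpy as np

# Hypothetical two-class probability output from a saved classifier
# (columns: [healthy-like, PD-like]); not real model output.
proba = np.array([0.23, 0.77])

likelihood = float(proba[1])      # Parkinson's likelihood score
confidence = float(proba.max())   # distance from the 50/50 decision boundary
label = "PD-like pattern" if likelihood >= 0.5 else "no PD-like pattern"
```

Reporting both numbers lets reviewers distinguish a confident negative (likelihood 0.05, confidence 0.95) from a borderline call (likelihood 0.52, confidence 0.52).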

Explainable AI

Step 5

Grouped SHAP signals and raw feature-level contributions surface why the score moved.

Research Dashboard

Step 6

The frontend packages prediction, interpretation, and benchmark review into one polished route structure.

Frontend

Next.js, Tailwind CSS, and Recharts provide the responsive shell, page hierarchy, and model visualizations.

Backend

Flask exposes the prediction, explainability, features, and metrics endpoints that the frontend depends on.

Interpretability

SHAP values and grouped feature summaries make the model output more defensible during technical review.

Dataset credibility

Voice-derived measurements anchor the project in a concrete clinical signal domain.

Reviewer usability

The interface is structured to let judges scan the story quickly and inspect the details when needed.

Scope clarity

This is a research and decision-support demo, not a clinical diagnosis tool.