Explainable AI research showcase

Parkinson's voice screening, presented with evidence instead of hype.

This platform turns acoustic voice features into a clear research workflow: upload inputs, compare saved models, inspect prediction confidence, and surface the grouped SHAP drivers behind each result.

Audience

Faculty, judges, and research reviewers

Focus

Prediction, interpretability, and model credibility

Live study snapshot

Headline model: XGBoost, with reviewer-ready output
Best accuracy: loaded live from the deployed benchmarks

Study features


1. Capture a voice sample profile

Use structured CSV input or the manual analysis workspace to submit acoustic measurements.
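As a rough sketch, the structured CSV path can be reduced to parsing one row into named acoustic measurements. The column names below are illustrative, not the deployed schema:

```python
import csv
import io

# Illustrative subset of the 22 acoustic indicators; the deployed
# app's actual column names may differ.
FEATURE_COLUMNS = ["jitter_percent", "shimmer_db", "hnr", "rpde"]

def parse_voice_sample(csv_text: str) -> dict:
    """Read one structured CSV row into a named feature mapping."""
    row = next(csv.DictReader(io.StringIO(csv_text)))
    missing = [c for c in FEATURE_COLUMNS if c not in row]
    if missing:
        raise ValueError(f"missing acoustic columns: {missing}")
    return {c: float(row[c]) for c in FEATURE_COLUMNS}

sample = parse_voice_sample(
    "jitter_percent,shimmer_db,hnr,rpde\n0.62,0.35,21.4,0.49\n"
)
```

Rejecting rows with missing columns keeps the manual workspace and the CSV upload on the same validated feature contract.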

2. Run the strongest saved model

Switch between the saved XGBoost and Random Forest models without changing the backend contract.
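One way to keep the backend contract stable while swapping models is a small registry behind a shared `predict_proba` interface. This is a hypothetical sketch with stub estimators, not the deployed code:

```python
class ModelRegistry:
    """Saved models behind one contract: predict_proba on a 2-D
    feature array, positive-class probability in column 1."""

    def __init__(self):
        self._models = {}

    def register(self, name, model):
        self._models[name] = model

    def predict(self, name, features):
        # Same call shape for every registered model.
        return self._models[name].predict_proba([features])[0][1]

class _StubModel:
    """Stand-in for a saved XGBoost or Random Forest estimator."""

    def __init__(self, positive_probability):
        self._p = positive_probability

    def predict_proba(self, rows):
        return [[1.0 - self._p, self._p] for _ in rows]

registry = ModelRegistry()
registry.register("xgboost", _StubModel(0.91))
registry.register("random_forest", _StubModel(0.87))
```

Because both scikit-learn's `RandomForestClassifier` and XGBoost's `XGBClassifier` expose `predict_proba`, real saved models can be registered without touching the route handlers.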

3. Inspect explainable evidence

Review grouped SHAP drivers, plain-language explanations, and model performance context.

Study posture

A sharper first impression without changing the research workflow.

The redesign keeps the deployed app architecture intact while reframing the interface around trust, readability, and evidence-led presentation.

Dataset

756

Voice-derived study samples available to the deployed interface.

Feature set

22

Selected acoustic indicators surfaced through the upload workspace.

Best accuracy

94.0%

Current headline benchmark from XGBoost.

Workflow

The platform now reads like a guided review instead of a disconnected set of demos.

Each route supports one part of the story: capture inputs, inspect the result, explain the model, and compare saved evaluation benchmarks.

A faculty-ready narrative that connects dataset quality, model choice, and interpretability.

An analysis workflow built to show both plain-language summaries and technical depth.

A restrained interface that foregrounds evidence instead of decorative UI noise.

Analysis workspace

Upload a CSV or enter feature values manually without leaving the review flow.

Clear result framing

Probability, confidence, model context, and next-step links are grouped for fast review.

Evidence trail

Explainability and performance routes turn the prediction into a defensible research narrative.
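The grouped result card described above could be assembled as a single payload. Field names, confidence thresholds, and route paths here are assumptions for illustration:

```python
def frame_result(probability, model_name, benchmark_accuracy):
    """Group probability, a confidence band, and model context into
    one reviewer-facing payload. Thresholds and route paths are
    illustrative assumptions, not the deployed values."""
    if probability >= 0.8 or probability <= 0.2:
        confidence = "high"
    elif probability >= 0.65 or probability <= 0.35:
        confidence = "moderate"
    else:
        confidence = "low"
    return {
        "probability": round(probability, 3),
        "confidence": confidence,
        "model": {"name": model_name, "benchmark_accuracy": benchmark_accuracy},
        # Next-step links into the evidence trail.
        "next_steps": ["/explainability", "/performance"],
    }
```

Grouping these fields in one object is what lets the interface render probability, confidence, context, and next steps as a single reviewable card.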

Explainability

Grouped SHAP outputs keep the science visible.

Instead of hiding the model behind a single percentage, the platform surfaces which families of signal changes drove the score.

Voice instability: 82%
Frequency pattern shifts: 68%
Energy variation: 56%
Wave-pattern complexity: 43%
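Grouping raw per-feature SHAP values into families like those above can be sketched as summing absolute contributions per family. Feature names and group membership here are illustrative:

```python
# Illustrative feature-family mapping; the real names and membership
# would come from the deployed 22-feature set.
FEATURE_GROUPS = {
    "Voice instability": ["jitter_percent", "jitter_abs"],
    "Frequency pattern shifts": ["mdvp_fo_hz", "mdvp_fhi_hz"],
    "Energy variation": ["shimmer_db"],
    "Wave-pattern complexity": ["rpde", "dfa"],
}

def group_shap(shap_values):
    """Sum absolute per-feature SHAP contributions by family, then
    express each family as a share of the total (in percent)."""
    totals = {
        group: sum(abs(shap_values.get(f, 0.0)) for f in features)
        for group, features in FEATURE_GROUPS.items()
    }
    overall = sum(totals.values()) or 1.0
    return {group: round(100 * t / overall) for group, t in totals.items()}
```

Taking absolute values before summing means a family scores high whether its features pushed the prediction up or down, which matches the "drivers" framing rather than a signed attribution.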

Performance framing

Model benchmarks stay readable for non-technical reviewers.

Plain-language metric summaries coexist with the original technical tables so judges can scan first and dive deeper second.
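A plain-language summary layer can be as small as a formatter per metric. The wording below is a sketch, not the deployed copy:

```python
def plain_accuracy(value):
    """Render the accuracy benchmark as a plain-language sentence;
    wording is an illustrative sketch, not the deployed copy."""
    return (
        f"Accuracy {value:.1%}: the model classified about "
        f"{round(value * 100)} of every 100 held-out voice samples correctly."
    )
```

The technical tables stay untouched; this layer only adds a scan-first sentence next to each metric.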

Headline model benchmark: XGBoost
Explainability route: Grouped drivers + raw SHAP
Review stance: Decision-support, not diagnosis

Next step

Open the analysis workspace and evaluate the full prediction-to-explanation flow.

The visual redesign is meant to support the live demo, so the strongest proof is the complete path from feature input to interpreted result.