Multiclass Matthews Correlation Coefficient Calculator

Compute the Matthews correlation coefficient (MCC) for multiclass classifiers. Enter a confusion matrix, review the score and supporting metrics, and download a report. Per-class detail shows where the model is reliable and where it slips.

Calculator

Rows represent actual classes. Columns represent predicted classes.


Formula Used

Multiclass MCC = (cN − Σ(Ak × Pk)) / √[(N² − ΣPk²)(N² − ΣAk²)]

Here N is the total number of samples, c is the number of correct predictions (the diagonal sum of the confusion matrix), Ak is the actual count of class k (its row total), and Pk is the predicted count of class k (its column total).

This formula covers both the two-class and the many-class case. It remains useful when classes are imbalanced and when simple accuracy hides important mistakes.
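The formula above can be sketched directly in Python. This is an illustrative implementation, not the calculator's own source code; it reproduces the page's example result.

```python
import math

def multiclass_mcc(matrix):
    """Multiclass MCC from a confusion matrix (rows = actual, columns = predicted).

    c is the number of correct predictions (the matrix trace), N the total
    sample count, A[k]/P[k] the row/column totals for class k.
    """
    n = len(matrix)
    N = sum(sum(row) for row in matrix)
    c = sum(matrix[k][k] for k in range(n))
    A = [sum(row) for row in matrix]                                # actual (row) totals
    P = [sum(matrix[i][k] for i in range(n)) for k in range(n)]     # predicted (column) totals

    numerator = c * N - sum(a * p for a, p in zip(A, P))
    denominator = math.sqrt((N**2 - sum(p * p for p in P)) * (N**2 - sum(a * a for a in A)))
    if denominator == 0:
        return None  # undefined, e.g. when all predictions fall in one class
    return numerator / denominator

# The example matrix used later on this page:
example = [[45, 3, 2],
           [4, 38, 5],
           [1, 6, 41]]
print(round(multiclass_mcc(example), 4))  # 0.7827
```

Returning None for a zero denominator mirrors the calculator's behavior of reporting the score as undefined rather than forcing a number.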

How to Use This Calculator

  1. Choose the number of classes in your model.
  2. Enter class names in the same order as the matrix.
  3. Fill each confusion matrix cell with nonnegative counts.
  4. Keep rows as actual labels and columns as predicted labels.
  5. Press Calculate MCC to view the score and supporting metrics.
  6. Download the result set as CSV or PDF for reporting.

Example Data Table

The sample below matches the default example loaded in this page.

Actual \ Predicted    Class A    Class B    Class C
Class A                    45          3          2
Class B                     4         38          5
Class C                     1          6         41

For this example, the multiclass MCC is approximately 0.7827. That indicates strong agreement across all three classes.

Why This Multiclass MCC Calculator Matters

Multiclass classification is common in applied statistics. It appears in medical screening, risk scoring, document tagging, image labeling, and customer segmentation. Many teams still report only accuracy. That can be misleading. A model may look strong while failing a minority class. The multiclass Matthews correlation coefficient gives a more balanced signal.

Balanced Evaluation for Real Datasets

MCC uses the full confusion matrix. It checks how predictions align with actual outcomes across every class. It also penalizes systematic error. This makes it stronger than accuracy when your classes are uneven. A model that always predicts the largest class can score well on accuracy. MCC will expose that weakness.
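The majority-class failure mode is easy to demonstrate. The sketch below (illustrative, with a hypothetical 95/5 binary split) shows a classifier that predicts only the larger class: accuracy looks strong while MCC, using the common convention of reporting 0 when the denominator is zero, flags the lack of real structure.

```python
import math

def mcc_and_accuracy(matrix):
    """Accuracy and multiclass MCC from a confusion matrix (rows = actual)."""
    n = len(matrix)
    N = sum(sum(row) for row in matrix)
    c = sum(matrix[k][k] for k in range(n))
    A = [sum(row) for row in matrix]
    P = [sum(matrix[i][k] for i in range(n)) for k in range(n)]
    num = c * N - sum(a * p for a, p in zip(A, P))
    den = math.sqrt((N**2 - sum(p * p for p in P)) * (N**2 - sum(a * a for a in A)))
    mcc = num / den if den else 0.0  # zero-denominator convention: report 0
    return c / N, mcc

# A classifier that always predicts the majority class on a 95/5 split:
always_majority = [[95, 0],   # 95 actual A, all predicted A
                   [5,  0]]   # 5 actual B, also predicted A
acc, mcc = mcc_and_accuracy(always_majority)
print(acc, mcc)  # 0.95 0.0
```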

Useful for Comparison and Validation

This calculator helps analysts compare models with a single interpretable score. Values near 1 show strong agreement. Values near 0 suggest weak or random behavior. Negative values show inverse structure. That means your classifier may be learning the wrong pattern. The calculator also reports macro precision, macro recall, macro specificity, macro F1, and per-class support. Those details help you explain where the model succeeds and where it slips.
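The macro metrics mentioned above come straight from the confusion matrix. This sketch computes per-class precision, recall, and F1 and averages them; it is an assumed implementation for illustration, and the calculator's exact output format may differ.

```python
def macro_metrics(matrix):
    """Macro precision, recall, and F1 from a confusion matrix
    (rows = actual, columns = predicted)."""
    n = len(matrix)
    precisions, recalls, f1s = [], [], []
    for k in range(n):
        tp = matrix[k][k]
        fp = sum(matrix[i][k] for i in range(n)) - tp   # column total minus TP
        fn = sum(matrix[k]) - tp                        # row total minus TP
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        precisions.append(precision)
        recalls.append(recall)
        f1s.append(f1)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

example = [[45, 3, 2],
           [4, 38, 5],
           [1, 6, 41]]
macro_p, macro_r, macro_f1 = macro_metrics(example)
print(round(macro_p, 4), round(macro_r, 4), round(macro_f1, 4))  # 0.8542 0.8542 0.8542
```

For this example the three macro averages coincide because each class's row and column totals happen to be equal; that is a property of this matrix, not of the metrics in general.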

Designed for Clear Reporting

You can enter any confusion matrix size from two to ten classes. Results appear immediately above the form for faster review. The export buttons help you move the output into presentations, validation notes, or audit files. This is useful in academic studies, quality assurance work, and production model monitoring.

Better Decisions with Better Metrics

If you evaluate multiclass models, MCC deserves a place in your workflow. It is compact, balanced, and statistically informative. Use it when you need a dependable summary of classification quality instead of a flattering but incomplete number.

FAQs

1. What does multiclass MCC measure?

It measures overall agreement between actual and predicted classes using the full confusion matrix. It summarizes performance across all classes in one balanced score.

2. What range can MCC take?

MCC ranges from -1 to 1. A value near 1 means strong agreement. A value near 0 suggests weak structure. Negative values indicate inverse prediction behavior.

3. Why use MCC instead of accuracy alone?

Accuracy can look strong when one class dominates. MCC checks the whole confusion matrix and reacts more honestly to imbalance and uneven error patterns.

4. Can I use this calculator for binary models?

Yes. Set the class count to 2 and enter a standard 2×2 confusion matrix. The same formula still works correctly.
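The binary compatibility claimed above can be checked numerically: on a 2×2 matrix, the multiclass formula reduces to the classic MCC written in terms of TP, TN, FP, and FN. A self-contained sketch with an assumed example matrix:

```python
import math

def binary_mcc(tp, fn, fp, tn):
    """Classic two-class MCC from the four cells of a 2x2 confusion matrix."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def multiclass_mcc(matrix):
    """The page's multiclass formula, repeated here for self-containment."""
    n = len(matrix)
    N = sum(sum(row) for row in matrix)
    c = sum(matrix[k][k] for k in range(n))
    A = [sum(row) for row in matrix]
    P = [sum(matrix[i][k] for i in range(n)) for k in range(n)]
    num = c * N - sum(a * p for a, p in zip(A, P))
    den = math.sqrt((N**2 - sum(p * p for p in P)) * (N**2 - sum(a * a for a in A)))
    return num / den if den else 0.0

# rows = actual, columns = predicted; positive class in the first row
m = [[40, 10],   # TP, FN
     [5,  45]]   # FP, TN
print(abs(binary_mcc(40, 10, 5, 45) - multiclass_mcc(m)) < 1e-12)  # True
```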

5. What if the denominator becomes zero?

The result becomes undefined. This usually happens when predictions or actual counts have no variation. The calculator shows that clearly instead of forcing a misleading score.

6. Should rows be actual or predicted classes?

Rows should be actual classes and columns should be predicted classes. Keep that order consistent because row and column totals are used in the formula.

7. Does this calculator show class-level detail?

Yes. It reports support, predicted totals, true positives, false positives, false negatives, precision, recall, specificity, F1, and balanced accuracy for each class.

8. When is MCC especially helpful?

MCC is especially helpful for imbalanced datasets, model comparison, validation reports, and classification audits where a single fair summary score is needed.


Important Note: All the calculators listed on this site are for educational purposes only and we do not guarantee the accuracy of results. Please consult other sources as well.