Four Pillars for a Trustworthy AI

A system that shows integrity operates as its designers specified, but whether that is enough depends on whether those design decisions are acceptable to society.

To trust an AI system, its algorithms, software, and production deployment systems must work within specified standards and behave in a predictable, understandable way, that is, with ML Integrity. ML Integrity is the core criterion a machine learning algorithm must demonstrate in practice and in production to earn trust.


Four Pillars of ML Integrity:
Many factors combine to generate a prediction. For the entire AI system and flow to behave with integrity, four pillars must be established. They are described below, with an illustrative sketch of each after the list:
  1. ML Health: The ML model and production deployment system must behave in production as expected and within the norms specified by the data scientist.

  2. ML Reproducibility: All predictions must be reproducible. If an outcome cannot be faithfully reproduced, there is no way to know for sure what led to it or to debug issues.

  3. ML Explainability: It must be possible to determine why the ML algorithm behaved the way it did for any particular prediction, and which factors led to that prediction.

  4. ML Security: The ML algorithm must remain healthy and explainable in the face of malicious or non-malicious attacks.
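
For ML Health, one common building block is checking that production inputs stay within the norms recorded at training time. The sketch below flags a feature whose production mean drifts too far from its training statistics; the norms, threshold, and function names are illustrative assumptions, not any particular monitoring product's API.

```python
# A minimal sketch of an ML Health check, assuming the data scientist has
# recorded per-feature means and standard deviations from training data.
# All names and thresholds here are illustrative.

import statistics

# Hypothetical training-time norms recorded by the data scientist.
TRAINING_NORMS = {
    "age":    {"mean": 42.0, "std": 12.0},
    "income": {"mean": 58000.0, "std": 21000.0},
}

Z_THRESHOLD = 3.0  # illustrative alert threshold


def check_feature_drift(feature: str, production_values: list) -> bool:
    """Return True if the production mean of `feature` stays within norms."""
    norm = TRAINING_NORMS[feature]
    prod_mean = statistics.fmean(production_values)
    # Z-score of the production mean relative to training statistics.
    z = abs(prod_mean - norm["mean"]) / norm["std"]
    return z <= Z_THRESHOLD


if __name__ == "__main__":
    recent_ages = [41.5, 44.0, 39.8, 46.2, 43.1]
    status = "OK" if check_feature_drift("age", recent_ages) else "DRIFT ALERT"
    print(f"age: {status}")
```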
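For ML Reproducibility, a prediction can only be replayed later if the exact inputs, model version, and sources of randomness are logged alongside it. Here is a minimal sketch of that idea; the toy "model" is a seeded random number, and the field names and version string are hypothetical stand-ins for what a real model registry would provide.

```python
# A minimal sketch of prediction-level reproducibility metadata, assuming
# each prediction is logged with everything needed to replay it exactly.

import hashlib
import json
import random

MODEL_VERSION = "fraud-model-v1.3.0"  # hypothetical model identifier


def predict_with_provenance(features: dict, seed: int = 0) -> dict:
    """Make a (toy) prediction and record the inputs needed to reproduce it."""
    rng = random.Random(seed)  # fixed seed makes the toy 'model' deterministic
    score = rng.random()       # stand-in for a real model's output

    # Hash the exact inputs so a later replay can verify it saw the same data.
    input_hash = hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()
    ).hexdigest()

    return {
        "model_version": MODEL_VERSION,
        "seed": seed,
        "input_hash": input_hash,
        "features": features,
        "score": score,
    }


if __name__ == "__main__":
    record = predict_with_provenance({"age": 41, "income": 52000}, seed=7)
    # Re-running with the same seed and inputs yields an identical record.
    assert record == predict_with_provenance({"age": 41, "income": 52000}, seed=7)
    print(record["score"])
```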
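For ML Explainability, the simplest case is a linear model, where each feature's contribution to a prediction is just its weight times its value. The sketch below breaks a score into per-feature contributions; the weights and feature names are made up, and nonlinear models would need attribution methods such as SHAP instead.

```python
# A minimal sketch of per-prediction explainability for a linear model.
# Weights and feature names are illustrative assumptions.

WEIGHTS = {"age": 0.02, "income": 0.00001, "bias": -1.2}  # hypothetical


def explain_prediction(features: dict) -> dict:
    """Break a linear score into per-feature contributions plus the bias."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    contributions["bias"] = WEIGHTS["bias"]
    return contributions


if __name__ == "__main__":
    parts = explain_prediction({"age": 41.0, "income": 52000.0})
    score = sum(parts.values())
    # Sorting by absolute contribution shows which factors drove the score.
    for name, value in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:>8}: {value:+.3f}")
    print(f"   score: {score:+.3f}")
```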
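For ML Security, one basic defensive layer is an input guard that rejects requests falling outside the ranges seen in training, whether the cause is malicious probing or a malformed upstream pipeline. This is only one layer among many (adversarial robustness is a much broader topic); the bounds and error type below are hypothetical.

```python
# A minimal sketch of an input guard, one common layer of ML Security:
# reject requests whose features fall outside training-time bounds.
# Bounds and names here are illustrative assumptions.

VALID_RANGES = {"age": (0.0, 120.0), "income": (0.0, 1_000_000.0)}


class SuspiciousInputError(ValueError):
    """Raised when a request looks malformed or adversarial."""


def validate_request(features: dict) -> None:
    """Reject unknown features and values outside training-time bounds."""
    for name, value in features.items():
        if name not in VALID_RANGES:
            raise SuspiciousInputError(f"unknown feature: {name}")
        low, high = VALID_RANGES[name]
        if not (low <= value <= high):
            raise SuspiciousInputError(f"{name}={value} outside [{low}, {high}]")


if __name__ == "__main__":
    validate_request({"age": 41.0, "income": 52000.0})  # passes silently
    try:
        validate_request({"age": -5.0, "income": 52000.0})
    except SuspiciousInputError as err:
        print(f"rejected: {err}")
```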
