Scope compliance uncertainty refers to the uncertainty that arises when an ML model is used in a context beyond its intended scope. SafeML is a statistical distance-based tool aimed at evaluating scope compliance uncertainty. The relevant statistical information is computed at design time, and the corresponding characteristics are monitored at runtime. The publicly available library is linked below.
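The general idea can be sketched as follows: compare the distribution of runtime inputs against the design-time (training) distribution using a statistical distance, and flag inputs whose distance exceeds a threshold fixed at design time. This is a minimal illustration of the distance-monitoring principle, not the SafeML library's actual API; the Kolmogorov-Smirnov distance and the threshold value are assumptions for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

def scope_distance(design_time_sample, runtime_sample):
    """Kolmogorov-Smirnov distance between a design-time feature sample
    and a runtime feature sample (one illustrative choice of distance)."""
    stat, _ = ks_2samp(design_time_sample, runtime_sample)
    return stat

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 2000)        # design-time data
in_scope = rng.normal(0.0, 1.0, 500)      # runtime data from the same distribution
out_of_scope = rng.normal(3.0, 1.0, 500)  # runtime data with a distribution shift

d_in = scope_distance(train, in_scope)
d_out = scope_distance(train, out_of_scope)
threshold = 0.2  # hypothetical value, calibrated at design time
print(d_in < threshold)   # runtime data within scope
print(d_out < threshold)  # runtime data outside scope
```

At runtime, only the distance computation and threshold comparison are needed, which keeps the monitoring step cheap relative to the design-time analysis.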
Statistical Model-agnostic Interpretability with Local Explanation (SMILE) is a tool aimed at providing explanations by making use of statistical distances. SMILE is currently under development and is applicable to use cases such as computer vision, regression, and LLMs.
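To illustrate the idea of a distance-weighted local explanation, the sketch below perturbs an input, queries a black-box model, weights each perturbation by a statistical distance to the original instance, and fits a weighted linear surrogate whose coefficients act as feature attributions. This is a generic sketch of the principle under stated assumptions (Wasserstein distance, an exponential weighting, and a hypothetical `black_box` model), not the SMILE tool's implementation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def black_box(X):
    # hypothetical model to be explained (exactly linear for the demo)
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2]

def distance_weighted_explanation(x, model, n_samples=500, seed=0):
    """LIME-style local surrogate whose sample weights come from a
    statistical distance (here Wasserstein) rather than a kernel."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, 0.5, size=(n_samples, x.size))  # perturbations
    y = model(X)
    # distance of each perturbed sample's feature values to the instance
    d = np.array([wasserstein_distance(x, row) for row in X])
    sw = np.sqrt(np.exp(-d))  # sqrt-weights for weighted least squares
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef  # per-feature attribution for the instance x

x0 = np.array([1.0, 2.0, 3.0])
coef = distance_weighted_explanation(x0, black_box)
print(coef)
```

Because the demo model is exactly linear, the surrogate recovers its coefficients; for a real nonlinear model the coefficients describe only the local behaviour around `x0`.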
Deep Knowledge is a method that enables effective quality assessment of DNNs through a generalisation-based test coverage criterion. The tool is currently under development and is being evaluated on several use cases; a runtime instantiation of the tool is also available.
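To give a flavour of what a test coverage criterion for neural networks measures, the sketch below computes classic neuron coverage on a toy network: the fraction of hidden neurons activated above a threshold by at least one test input. Deep Knowledge's generalisation-based criterion is more sophisticated; this is only a minimal example of the coverage-criterion idea, with a randomly initialised network standing in for a trained DNN.

```python
import numpy as np

rng = np.random.default_rng(1)
# tiny stand-in network: one hidden ReLU layer, random weights
W1 = rng.normal(size=(4, 8))
b1 = rng.normal(size=8)

def hidden_activations(X):
    return np.maximum(0.0, X @ W1 + b1)

def neuron_coverage(X, threshold=0.0):
    """Fraction of hidden neurons activated above `threshold`
    by at least one input in the test set X."""
    acts = hidden_activations(X)
    covered = (acts > threshold).any(axis=0)
    return covered.mean()

test_inputs = rng.normal(size=(100, 4))
cov = neuron_coverage(test_inputs)
print(cov)
```

A test suite that leaves coverage low exercises only part of the network's behaviour, which is the intuition that coverage-guided testing of DNNs builds on.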
Machine learning components are by definition dataset-oriented. Datasets lacking in quality can result in poor or misunderstood performance of the algorithms trained on them, so dataset quality assessment and evaluation is crucial. To this end, the D-ACE framework is under development for comprehensive evaluation of datasets.
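As an illustration of the kinds of checks a dataset quality assessment can include, the sketch below reports missingness, duplicate rows, and class imbalance for a small array-based dataset. These metrics and the function name are assumptions for the example, not the D-ACE framework's actual interface.

```python
import numpy as np

def dataset_quality_report(X, y):
    """Illustrative dataset quality checks: missingness, duplicates,
    and class balance (a generic sketch, not the D-ACE API)."""
    report = {
        # fraction of missing (NaN) feature values
        "missing_ratio": float(np.isnan(X).mean()),
        # number of exactly duplicated rows
        "duplicate_rows": int(len(X) - len(np.unique(X, axis=0))),
    }
    _, counts = np.unique(y, return_counts=True)
    # ratio of largest to smallest class; 1.0 means perfectly balanced
    report["class_imbalance"] = float(counts.max() / counts.min())
    return report

X = np.array([[1.0, 2.0], [1.0, 2.0], [np.nan, 4.0], [5.0, 6.0]])
y = np.array([0, 0, 0, 1])
report = dataset_quality_report(X, y)
print(report)
```

Such per-dataset summaries can be tracked over time, so that a drop in dataset quality is caught before it silently degrades model performance.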