- BOS-6-009
- SFO-6-010
- YVR-6-011
- BLR-6-012
- EDI-6-017
- LON-6-018
AI Transparency and Detecting Bias in AI Algorithms
Artificial Intelligence systems are often described as a "black box": they are highly complex, and it can be challenging to explain or identify which factors led to a given decision or prediction, and how. NTT DATA is seeking products and services that provide quantifiable explanations for the recommendations or determinations made by an AI algorithm. This transparency is important in AI-based applications where trust matters and predictions carry societal implications, such as criminal justice or financial lending.
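As a purely illustrative sketch of what a "quantifiable explanation" could look like, the Python snippet below computes permutation feature importance for a hypothetical lending-style classifier. The feature names, synthetic data, and the choice of scikit-learn and permutation importance are assumptions made for illustration only, not a required or endorsed technique.

```python
# Illustrative sketch only: quantify each feature's contribution to a model's
# decisions via permutation importance. Data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical lending-style features (placeholders, not a real data set).
feature_names = ["income", "debt_ratio", "credit_history_years"]
X = rng.normal(size=(1000, 3))
# Synthetic target that depends mostly on the first two features.
y = (X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Permutation importance: how much test accuracy drops when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: importance = {score:.3f}")
```

The per-feature scores give a simple, model-agnostic measure of which inputs drove the predictions; a production solution would likely go further (e.g. per-decision attributions), but this shows the kind of quantifiable output sought.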
Similarly, NTT DATA is also seeking products and services that identify potential bias in a data set on which an algorithm is to be trained, or in existing trained AI algorithms. Bias may be detected as insufficient sample size, skewed data, or other causes. Because algorithms are only as good, or as bad, as the data they are trained on, detecting data that may contain racial, gender, communal, or ethnic biases is important. Ideally, the solution scans for signs of bias, identifies potential bias, and provides recommendations.
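As an illustrative sketch of such a scan, the snippet below checks a hypothetical tabular data set for small group sample sizes and for a skewed positive-outcome rate across groups (the "80% rule" disparate impact ratio). The column names, thresholds, and synthetic data are assumptions for illustration only.

```python
# Illustrative sketch only: scan a data set with a protected attribute column
# ("group") and a binary label column ("approved") for sample-size and skew issues.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic example data with an under-represented group and a skewed approval rate.
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000, p=[0.85, 0.15]),
    "approved": rng.integers(0, 2, size=1000),
})
df.loc[df["group"] == "B", "approved"] = rng.choice(
    [0, 1], size=(df["group"] == "B").sum(), p=[0.7, 0.3]
)

MIN_SAMPLES = 200   # illustrative cut-off for insufficient sample size
DI_THRESHOLD = 0.8  # common "80% rule" cut-off for disparate impact

stats = df.groupby("group")["approved"].agg(["count", "mean"])
print(stats)

# Flag groups with too few examples to train on reliably.
for group, row in stats.iterrows():
    if row["count"] < MIN_SAMPLES:
        print(f"Warning: group {group} has only {int(row['count'])} samples")

# Disparate impact: ratio of the lowest to the highest positive-outcome rate.
di_ratio = stats["mean"].min() / stats["mean"].max()
if di_ratio < DI_THRESHOLD:
    print(f"Potential bias: disparate impact ratio {di_ratio:.2f} < {DI_THRESHOLD}")
```

A real product would cover many more bias signals and would pair each warning with recommendations, as described above; this sketch only shows the shape of an automated scan.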
- Related keywords
- AI
- ML
- Bias
- discrimination
- unethical and unfair results/consequences
- Social challenges to be addressed through collaboration
- Goal 10: Reduce inequality within and among countries / All SDGs
- Market size of collaboration business or business scale
- Indicative
- Assets and opportunities to be offered
<Opportunities>
Customers or segments