mlopscommunity / open-questions-ai-quality


Unbundling 'trust' into concrete expectations #38

Open adamboazbecker opened 4 months ago

adamboazbecker commented 4 months ago

Will the concept of 'trust' be unbundled into more concrete expectations? What are they likely to be?

y27choi commented 4 months ago

The way I build 'trust' in a system is by evaluating it in the specific scenarios that matter to my use case. Once I fully understand how the system behaves across different types of scenarios, I gain confidence in it and can ship my model with much less hesitation.
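To make this concrete, here's a minimal sketch of scenario-based evaluation. All names (`Scenario`, `run_model`, the checks) are hypothetical stand-ins, not a real API; the point is just that each scenario encodes one expectation you want to verify before shipping:

```python
# Hypothetical sketch: build trust by evaluating the system against
# the specific scenarios that matter to your use case.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str
    prompt: str
    check: Callable[[str], bool]  # passes when output meets the expectation

def run_model(prompt: str) -> str:
    # Stand-in for a real model call.
    return prompt.upper()

def evaluate(scenarios: list[Scenario]) -> dict[str, bool]:
    """Run every scenario and record pass/fail by name."""
    return {s.name: s.check(run_model(s.prompt)) for s in scenarios}

scenarios = [
    Scenario("uppercases input", "hello", lambda out: out == "HELLO"),
    Scenario("returns a string", "", lambda out: isinstance(out, str)),
]
results = evaluate(scenarios)
print(results)
```

A full suite would cover edge cases, adversarial inputs, and whatever failure modes matter for the specific use case; the per-scenario pass/fail record is what turns vague 'trust' into something you can inspect.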

skirmer commented 4 months ago

It may be tangential, but there's also a data security/privacy component to trust. Having a transparent AI ethics policy and taking your data consent processes seriously helps users and customers trust your AI both to do what it claims to do and to have fewer undesirable side effects, like data loss, misuse, or discriminatory outcomes.

aransbotham commented 3 months ago

In my experience, 'trust' has always been a broad term covering several concrete expectations that end users have about a product or service. If you are responsible for delivering a trusted product or service, you have to break down the system's underlying requirements, including measurable outcomes (oftentimes via SLAs or XLAs). All of that to say: trust will continue to require 'unbundling'. What it means to have a trustworthy or trusted AI application includes many of the same attributes that trust has always had for data-driven products and services; the harder part is enabling responsible AI applications at scale, given that not every individual or organization building AI-fueled applications is versed in the subject.

These are a couple of resources I've appreciated on Responsible AI Practices: