mlopscommunity / open-questions-ai-quality


Leading with quality and trust #9

Open adamboazbecker opened 1 month ago

adamboazbecker commented 1 month ago

How can I better lead my team by using quality, trust, and proper evaluation as a north star?

ktrnka commented 1 month ago

Leadership is very situationally dependent and complex, but hopefully I can get the discussion started!

On quality:

Trust can mean so many different things. Thinking back to doctors trusting our ML features, some of the things that helped:

BradCan2 commented 1 week ago

In part, concepts like trust, quality, and leadership are challenging because they are "essentially contested concepts" (Gallie, 1959) that resist definitive definition. But it is possible to acknowledge that rabbit hole of abstraction and relativism and focus on the pragmatic issues. As I argue here, trust in AI, or Trusted AI, involves trustworthiness. In short, trustworthiness develops when a series of risks are taken that result in positive experiences. New parents, or anyone who has watched a baby learn to walk, have seen trustworthiness in action: stepping forward while trusting that balance, the ground, etc. will catch the forward momentum builds trustworthiness in walking. In turn, trust and trustworthiness can't be separated from risks and risk-taking.

Risks occur when there are deviations from the "reference narrative" (i.e., the plan for the future). Risks are not changes to the status quo; they are deviations from the plan. When there are no or few deviations from the plan, risk is low. Conceived this way, deviations can be tracked via their robustness (strength) and resilience (duration), and they are often measurable before they fully materialize, so contingencies can be put in place to mitigate their impact. Best practice for the risks posed by large, complex systems such as AI systems is increasingly proactive mitigation rather than reacting to probabilities of deviation (Bent Flyvbjerg). In short, manage risk deviations by planning for their mitigation early and continually.
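To make the "risk as deviation from the plan" framing concrete, here is a minimal sketch of tracking deviations by their strength (robustness) and duration (resilience). The metric, thresholds, and data are all illustrative assumptions of mine, not anything from this thread:

```python
# Hypothetical sketch: risk as deviation from a "reference narrative" (the plan).
# Thresholds, metric names, and numbers are made-up assumptions for illustration.

def assess_deviation(planned, observed, robustness_threshold=0.1, resilience_periods=2):
    """Flag a risk when observed values deviate from the plan by more than
    `robustness_threshold` (strength) for at least `resilience_periods`
    consecutive periods (duration)."""
    consecutive = 0
    flagged = []
    for t, (p, o) in enumerate(zip(planned, observed)):
        deviation = abs(o - p) / p  # relative deviation from the planned value
        if deviation > robustness_threshold:
            consecutive += 1
        else:
            consecutive = 0  # deviation was not resilient; reset the streak
        if consecutive >= resilience_periods:
            flagged.append(t)  # a period where a planned contingency should trigger
    return flagged

# Planned monthly quality score vs. what actually happened (invented numbers):
plan = [0.90, 0.90, 0.90, 0.90, 0.90]
actual = [0.89, 0.75, 0.72, 0.88, 0.91]
print(assess_deviation(plan, actual))  # -> [2]: the dip persisted two periods
```

The point of the sketch is the proactive stance: the thresholds and the contingency hook are defined before any deviation occurs, rather than after the fact.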

As for Quality AI and its relationship with Trustworthy AI, defining terms is again important. I conceive of Quality in three fundamental ways. First, there is quality that is locked in; second, quality that moves things forward. A broad example: a religion locks in certain quality aspects of an established faith tradition (stability, trust, etc.), but this has an entropic, stagnant quality. Then a wild prophet comes along who renews the faith but also disrupts everything. Pirsig described this dual nature of Quality as an endless "ratcheting" process. The third way of thinking about Quality relates to the first two paradoxically: Quality AI (or quality anything) is aspirational in nature. It can only ever be partially "claimed," because the scientific (episteme) and technical (techne) expertise applied to it never fully reaches it. In short, Quality AI, or quality anything, is a personal process of virtue endlessly devoted to excellence (arete) in the AI system, not an end state.

So, to the question: does Trustworthy AI happen by maintaining AI Quality? If excellence is an essential characteristic of AI Quality, then it is a process of locking in Quality as well as constantly pursuing it. This amounts to mastery. Trustworthy AI relates to Quality AI (and vice versa) in a similar way. The risk-to-positive-result processes that build trustworthiness follow the same "quality ratcheting": move it forward, lock it in, move it forward, lock it in. Traditional, solid aspects of quality get locked down by structural mastery, and then innovative masters of the craft break things open for movement forward once again.

In AI Quality, it is arguable that the locked-down, traditional side should involve, for instance, regulations ensuring that practical wisdom (phronesis), the net good of humanity, is addressed first, both as a risk deviation and as a basis for building trustworthiness. To date that has not happened. As noted, disruption is not the only kind of quality, nor is it more important than the locked-down type. In many ways, the disruptive side of Quality AI techne is advancing without an equally strong lockdown of Quality AI phronesis. Disruption itself is not excellence. Technical disruption without a lockdown of practical wisdom ("the good" quality) in fact carries risks that look like existentially robust and resilient deviations from the plan for the future. People generally recognize this, as indicated by surveys showing low overall trust in AI. The trustworthiness process takes time to build, and constant change in AI means constantly rebuilding trustworthiness. Quality often has paradoxical characteristics, and here the phrase "go slow to go fast" comes to mind.

j-space-b commented 1 week ago

"Trust" needs to be clearly defined for legibility. My hypothesis is that trust correlates highly with transparency, and that this holds across various use cases (I'm willing to bet on it and prove it out).