It recently occurred to me that several features I wrote explainers for (built-in AI features) should probably be restricted to secure contexts. I just didn't realize it while writing the explainers.
A question in the questionnaire (which I did fill out) would have made me realize it sooner.
Although the full guidance on when features should be restricted is probably difficult to summarize (and perhaps still controversial), I think a question like
> Have you considered whether your feature would be best restricted to secure contexts?
would be helpful for others in my situation.