Content-generating machine learning systems, such as large language models, are starting to be integrated into collaborative environments. Examples include office applications and IDEs (e.g., for code generation).
How, and to what extent, should these capabilities be addressed in this document?
Based on discussion at the Task Force meeting, we should consider adding the following, subject to refinement.
Content generated automatically (e.g., by a machine learning system) should be flagged for manual, human review. For example, it could be designated as a suggested change. Such content may include material introduced to improve accessibility, for example automatically generated captions, descriptions of images or video, or summaries of text.
Automatically generated content should, if feasible, include accessibility-related alternatives, such as captions or descriptions. If this is not possible, human collaborators should be prompted to provide such alternatives.
Automatically generated summaries of tracked issues or comment threads may be useful in improving the accessibility of communications among collaborators.
We acknowledge that content-generating machine learning technology may be implemented locally on the user's device or remotely. Nothing in our requirements depends on this implementation choice.
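To make the first two proposals above more concrete, here is a minimal sketch of how a collaborative tool might represent machine-generated content as a suggested change with provenance and accessibility alternatives. All names and types here are hypothetical illustrations, not drawn from any existing specification or API.

```typescript
// Hypothetical sketch only: names and structure are illustrative.

/** Provenance of a piece of content in a collaborative document. */
type ContentOrigin = "human" | "machine-generated";

/** An accessibility alternative attached to generated content. */
interface AccessibilityAlternative {
  kind: "caption" | "image-description" | "summary";
  text: string;
  origin: ContentOrigin;
  reviewedByHuman: boolean;
}

/** A machine-generated contribution surfaced as a suggested change for human review. */
interface SuggestedChange {
  content: string;
  origin: ContentOrigin;  // flagged so collaborators know the content was generated
  status: "pending-review" | "accepted" | "rejected";
  alternatives: AccessibilityAlternative[];
}

/**
 * Returns the accessibility alternatives that the generator did not supply,
 * so the tool can prompt a human collaborator to provide them.
 */
function missingAlternatives(
  change: SuggestedChange,
  required: AccessibilityAlternative["kind"][]
): AccessibilityAlternative["kind"][] {
  const provided = new Set(change.alternatives.map(a => a.kind));
  return required.filter(kind => !provided.has(kind));
}
```

The key design point in this sketch is that provenance and review status travel with the content itself, so that downstream collaborators and assistive technologies can distinguish machine-generated material awaiting review from human-authored or accepted content.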