Fairness
Members of the Georgetown community should be mindful that AI outputs may contain bias. To promote fairness and meet Georgetown’s obligations under the law, it is important to take appropriate steps to avoid using AI in ways that could unintentionally result in discrimination against protected individuals or groups. Users of AI should make efforts to validate the accuracy of generated content against trusted first-party sources and to monitor the reliability of decisions made with the use of an AI tool. Users cannot assume that answers or content generated by AI are factually correct, because AI tools are known to “hallucinate” (i.e., generate factually incorrect responses). Nor can users assume that such answers or content will lead to equitable or unbiased decisions, because training data may contain biases that propagate through the tool’s outputs.
Such steps may include: reviewing the output of any AI tool for indications of bias; using vetted data to train Georgetown-developed AI tools; implementing and relying on AI tools for high-risk institutional functions only with appropriate institutional authorization; and using AI tools that have been evaluated by Georgetown University Information Services.