
Microsoft Copilot: Compliance and ethical considerations


The University of Utah has approved the use of Microsoft Copilot, though users should always log in to the university's instance for data and legal protections and follow guidelines.

Since OpenAI introduced ChatGPT in November 2022, a “space race” of generative artificial intelligence (AI) tools has been underway, with companies and organizations rolling out new large language models (LLMs) and promising to transform work and creativity. However, concerns about algorithmic bias in automated decision-making, legal challenges over copyright and fair use of training material, new opportunities for malicious actors to breach enterprise or personal data, and AI hallucinations (nonsensical or inaccurate outputs) have left users uncertain about whether the tools are safe to use.

Even Microsoft Copilot, the only commercial generative AI tool approved for use at the University of Utah, comes with risk. By following some key guidelines and behaviors, however, users of the U’s instance of Copilot can draft documents, summarize information, analyze data, and even enjoy a few laughs in a safer, more accurate, and more credible manner.

Powered by GPT-4, with DALL-E for images, Copilot ingests a user’s prompt, transmits it outside of the university’s self-managed IT environment, and accesses generative AI systems in Microsoft’s cloud to provide a response, also called an output. In the simplest terms, Copilot evaluates the prompt against the LLM’s training data, considers context cues from the prompt, and generates an output using sophisticated statistical modeling. Copilot’s AI systems are regularly tuned, adjusted, and updated to improve accuracy and reduce occurrences of bias and hallucination.

Critical legal protections are in place when using a verified university account (they are not present when using a personal Copilot account or open-source LLMs), so it’s important that users log in with their U credentials before using the tool. These protections are paramount because Microsoft is a “prompt processor”: when users disclose or share restricted data, it leaves the university’s IT environment and is processed or accessed by third parties, including Microsoft and its subcontractors, which may violate the law, expose trade secrets, infringe on trademarks, or put intellectual property at risk.

According to U regulations, users are required to limit prompts to approved data, meaning nonspecific or nonidentifiable, publicly available information.

Users are prohibited from disclosing protected health information (PHI) because certain legally required contracts, such as a Health Insurance Portability and Accountability Act (HIPAA) business associate agreement, are not in place under the U’s Microsoft Copilot license. Users also should never input other legally restricted data, such as confidential information, student data, privileged communications, trade secrets, personally identifiable information (including health or financial data), or unpublished intellectual property.

With these cautions in mind, students, faculty, and staff can experiment with prompt engineering in Copilot with little risk. When logged in with a U account, commercial data protections provide certain IT security and privacy configurations, delete prompt data, prohibit the transfer of ownership of prompt data or outputs to third parties, and prohibit Microsoft from using user input to train the underlying LLM(s).

When using Microsoft Copilot, consider compliance, ethics, communication, and editing — human behaviors that a machine can’t replicate.

Compliance

Compliance is a community effort and builds an important foundation for using technology.

Start by using university-approved services, such as Copilot, and university-managed devices to ensure data protection is enabled and IT security configurations are in place. Next, understand the nature of the data you are interacting with before disclosing it to a third party, such as Microsoft. This includes understanding the expectations around the handling of that data. Disclosing certain categories of data, such as restricted data, can lead to state or federal regulatory violations or civil legal liability.

Ethics

The ethics of using and developing skilled (and perhaps eventually sentient) AI tools are complex, with competing views and strong opinions. People across the globe are grappling with these questions, including scholars who are publishing ethical frameworks. When considering generative AI at a university, relevant topics include its place in pedagogy, clinical care, and workplace optimization, as well as data usage and governance.

Some tips for ethical use include:

  • Accessibility — Consider a training program to ensure that AI is equally accessible to students and employees, so that people who don’t consider themselves “technically savvy” aren’t left behind or led to view AI tools as incompatible with their teaching, learning, or work.
  • Bias and accuracy — Research indicates that algorithms can be biased against some groups, compounding systemic discrimination. Additionally, outputs can be wrong yet sound convincing and authoritative, and relying on inaccurate or biased information carries reputational and legal risks. Monitor and verify outputs before using them, check sources, and be mindful of when generative AI use is inappropriate.

Communication

Be transparent when you use Copilot or other generative AI tools.

Communicate specific expectations for generative AI use with your students, colleagues, and audience, relying first on university policy and use guidelines, and second on your collaborators’ comfort and internal workflows. Always cite the use of generative AI tools. Reading or viewing AI-generated content that isn’t labeled as such can be jarring and misleading, and can feel inauthentic.

Editing

Become a good editor and teach others to enhance their editing skills when using generative AI tools.

One of the most human things about you is your personal sense of style and taste; these are uniquely yours. You have developed your voice through your education and experiences. Your taste and editing skills are critical when using Copilot. Treat generative AI outputs as a first draft and edit them with a heavy hand to apply your personal voice and taste.

What are groups doing with Microsoft Copilot?


Copilot can support your work in a variety of ways, including:



  • Preparing first drafts of letters, kudos, or emails. Remember to review and edit for your voice and appropriate content.
  • Compiling and summarizing large amounts of information from various sources and preparing a summary or initial outline for a presentation. Be sure to check sources for accuracy and credibility.
  • Analyzing large data sets and publicly available documents to summarize information. Use caution to verify accuracy.
  • Generating appropriate images to brainstorm content for social media marketing or office materials. Be sure to follow approved university marketing guidelines.
  • Experimenting with prompt engineering and learning something new. This technology can be fun, and some outputs are inadvertently funny while you develop these skills.

U of U resources

Many resources are available for those interested in using Microsoft Copilot or getting involved with AI discussions at the university.

Have an information privacy topic you’d like to know more about? Contact Bebe Vanek, information privacy administrator for University of Utah Health Compliance Services, at bebe.vanek@hsc.utah.edu.


Last Updated: 8/28/24