
The data privacy ‘GUT Check’ for synthetic media like ChatGPT

The rise of synthetic media like OpenAI’s ChatGPT is changing the way many types of content are produced and consumed, from academia to entertainment. As with any innovation, synthetic media raises concerns, including data privacy and security, ethical issues, and the potential to spread misinformation. Ultimately, it’s up to you to determine whether the risks outweigh the benefits.

Synthetic media is loosely defined as any form of media (visual, textual, audio) generated by or in collaboration with artificial intelligence, such as large language models (LLMs). An LLM is “a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive data sets,” according to NVIDIA. Companies, scholars and organizations, including government agencies, mine data to compile those massive data sets, relying on the copyright principle of “fair use,” which permits the limited use of copyrighted material without permission.
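
For readers curious what “predict and generate” means in practice, here is a minimal, hypothetical sketch of the loop an LLM runs when producing text. The `model` object and its `next_token_probabilities` method are illustrative stand-ins, not any real library’s API:

```python
import random

def generate(model, prompt_tokens, max_new_tokens=50):
    """Produce text one token at a time -- the core loop behind LLM output.

    `model` is a hypothetical stand-in: given the tokens so far, it returns
    a probability for every token in its vocabulary, learned from the
    massive data sets described above.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Predict: score every possible next token given the context so far.
        probs = model.next_token_probabilities(tokens)  # dict: token -> probability
        # Generate: sample the next token in proportion to its probability.
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(next_token)
    return tokens
```

Everything the model “knows” lives in those learned probabilities, which is why the provenance of the training data matters so much for the privacy and copyright questions discussed here.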

Currently, the most noteworthy synthetic media platform is ChatGPT, which boasts over 100 million active users. Using a dialogue format, ChatGPT can “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” according to OpenAI. Other platforms include Google’s Bard, Meta’s LLaMA (Large Language Model Meta AI) and Synthesia.
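
That dialogue format is visible in OpenAI’s own API: each request carries the conversation as a list of role-tagged messages, which is how follow-up questions keep their context. Below is a minimal sketch using the interface of the OpenAI Python library from around the time of writing; model names and client interfaces change, so treat the details as assumptions:

```python
import openai  # pip install openai (0.x-era interface shown; newer versions differ)

openai.api_key = "YOUR_API_KEY"  # load from a secret store; never hard-code a real key

# The conversation is a list of role-tagged messages. Re-sending the history
# with each request is what lets the model answer follow-up questions.
messages = [{"role": "user", "content": "What is synthetic media?"}]
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
reply = response["choices"][0]["message"]["content"]

# Append the assistant's reply and the follow-up so the next answer has context.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Name one example from entertainment."})
follow_up = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up["choices"][0]["message"]["content"])
```

Note that everything in `messages` is sent to the provider’s servers, which is exactly why the data privacy habits discussed below matter.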

As with every innovation, there are immediate fans, stans and naysayers. Companies such as BuzzFeed and ESPN have voiced support for the technology, indicating that synthetic media will be used to create public-facing content, such as BuzzFeed’s quizzes and ESPN’s commentary. Architects and visual designers are also experimenting with visual synthetic media in their drafting to save time and money.

Some privacy professionals worry that innovations built on synthetic media will cause more harm than good because they increase the avenues for malicious attacks. Criminals have already created websites that impersonate ChatGPT and other OpenAI platforms to phish for personal and financial information or prompt users to download files containing malware. Other concerns include the technology’s capability to produce believable fake comments, videos and other media, fueling the spread of misinformation. Educational institutions are grappling with ChatGPT-written research papers, and ChatGPT has appeared as a co-author on at least four published papers and preprints, despite unsettled legal questions about who (or what) owns content generated by large language models.
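
No single check stops impersonation sites, but one protective habit is simple: verify the exact domain before entering credentials or payment details. Here is a hypothetical Python sketch of an allowlist check; the domains listed are illustrative, so confirm official addresses yourself before relying on any such list:

```python
from urllib.parse import urlparse

# Illustrative allowlist -- verify official domains through trusted channels.
OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}

def looks_official(url: str) -> bool:
    """Return True only if the URL's host is an allowlisted domain.

    Lookalikes such as 'chat-gpt-openai.com' or 'openai.com.example.net'
    fail this check even though they contain the brand name.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_DOMAINS or any(
        host.endswith("." + domain) for domain in OFFICIAL_DOMAINS
    )

print(looks_official("https://chat.openai.com/"))         # True
print(looks_official("https://openai.com.example.net/"))  # False: lookalike
```

The key design choice is matching the full hostname rather than searching for a brand name, since phishing domains deliberately embed the real name inside a different address.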

Some companies are working to protect user data by building technology that detects plagiarism and “deepfakes” created by synthetic media. Lawyers and legislators are raising legal and ethical questions and encouraging data protection-focused terms of use.

One thing is certain: Synthetic media is here to stay.

That’s why it’s important to increase our technological literacy and build healthy, curious and cautious data privacy habits that protect against malicious threat actors, accidental disclosure of sensitive or personal information, and legal liability. To that end, run a “GUT Check” whenever you use new technology to keep your data protected.

The “GUT Check” for data privacy

Need help?

Concerned about a data security incident? Contact the campus IT Help Desk at 801-581-4000, the hospital ITS Service Desk at 801-587-6000, or the Information Security Office’s Security Operations Center at SOC@utah.edu for immediate assistance.

Did you receive a malicious or suspicious email? Use the Phish Alert button in UMail or forward the email as an attachment to phish@utah.edu.

Want to learn more? Reach out to the offices below.

  • Office of General Counsel: Contact Ogc-admin@lists.utah.edu if you are evaluating a service for your organization and are provided with a contract for goods or services.
  • Privacy Office: Contact baa@utah.edu if a third-party vendor will be accessing, viewing, storing, or using university protected health information (PHI). If the terms of service or contract suggest data collection, a business associate agreement (BAA) may be legally necessary. Contact privacy@utah.edu with general inquiries about information privacy and your rights and responsibilities.
  • IT Governance, Risk, & Compliance: Contact ISO-GRC@utah.edu if you are assessing software or an information system for your organization. The U’s Information Security Office must evaluate the security of new software or hardware.
  • PIVOT: Contact the PIVOT Center (Partners for Innovation, Ventures, Outreach & Technology) if you have an idea for innovating systems using apps or software.

Have a privacy topic you’d like to know more about? Contact Bebe Vanek, information privacy administrator for University of Utah Health Compliance Services, at bebe.vanek@hsc.utah.edu.