
AI in an Art School

The ethical concerns of using AI at SAIC.


Illustration by Bei Lin

For some sci-fi fans, “Artificial Intelligence” (AI) may conjure iconic silver-screen references like Sarah Connor and Skynet, or HAL. For those unfamiliar with the ʼ80s classic “Terminator” or the 1968 film “2001: A Space Odyssey,” AI poses a less foreboding threat. Image generators, like DeepAIʼs Text to Image AI Image Generator or Lensaʼs digital avatar creations, offer fun, playful experiments for users. Other AI-based programs write essays and short stories, generate recipes, power autocomplete and predictive text features, and let navigation maps suggest routes that respond to current traffic.

What exactly is AI?

According to TechTarget, AI is “the simulation of human intelligence processes by machines, especially computer systems.” More recently, ChatGPT has taken center stage, as shared exchanges between humans and AI programs alert us to the reality that computer systems can, more and more, sound like humans. How will this impact the classroom and studio here at the School of the Art Institute of Chicago (SAIC)? AI is already used in the education sector to create study aids, measure learning and student performance, and assist with computing and data analysis. According to a June 6, 2020, Forbes article, AI is also used to help create access for people with disabilities and help online platforms identify fake news posts.

But when does AI go from being a useful tool, like grammar-checking software, to something that allows users to avoid learning or creating work altogether? In other words, when is it cheating? One writer, in a September 22 Slate article, posits the ethics question using an analogy to sports: Is using AI to write papers more like taking performance-enhancing drugs, or more like using performance-enhancing gear (which is encouraged)? That simplistic approach still doesnʼt encompass the ethical challenges that the use of AI poses, or the subtler questions raised by common features like grammar checks and predictive text in word processing programs. Faculty, staff, older students, and parents may recall a time when using a calculator was considered cheating. What about AI that autocorrects color in film and photography?

For Dr. Raja El Halwani, professor of philosophy at SAIC, the key ethical question is not the mere use of artificial intelligence, but how a student uses the technology. Is the student submitting work as their own, without acknowledgment, when it is not theirs? “The ethical dilemma is the same and has always been the same. It is not going to change. The ethical dilemma for the students is basically, should I cheat or should I not cheat?” says Halwani.

Similarly, according to a Feb. 5, 2023, article in Jumpstart magazine, even though content created by AI is not plagiarized from other sources, using that content raises questions of academic integrity if the AI platform isnʼt acknowledged and credited in the studentʼs work. So, as noted in the Slate article, the real challenge for institutions like SAIC may be less about whether AI is used than about detecting when AI content is used impermissibly.

“As a school of art and design, weʼve always had questions of what is the artistʼs work, what is an individualʼs content, what is someone elseʼs content,” says Paul Jackson, Associate Dean of Undergraduate Studies. “Itʼs the issue of appropriation: taking someone elseʼs work and spinning it in some way. Under what circumstances is that collaborative versus appropriation?” Jackson sees the nuance: “Itʼs not necessarily using the tool, but it is how the tool gets used and with what level of transparency.”

SAIC currently does not have a specific schoolwide policy on student (or faculty) use of AI. The current student handbook, instead, defines academic misconduct more broadly to include plagiarism and cheating, which

may consist of the submission of the work of another as oneʼs own; unauthorized assistance on a test or assignment; submission of the same work for more than one class without the knowledge and consent of all instructors; or the failure to properly cite texts or ideas from other sources.

Jackson says that SAIC recently provided faculty with optional language regarding AI that they may use in their course syllabi. This optional language, which faculty may revise, sets a clear boundary.

Meanwhile, Jackson says the School is currently discussing what policies and revisions to the student handbook might be needed. This does not mean, however, that SAIC plans to prohibit AI. In fact, as Jackson noted, “The first conversation is how, when, and to what extent should using these tools be a conduct issue? There are going to be many moments and methods of using AI generative technology that do not represent a conduct infringement.”

Halwani, who is currently on sabbatical, shared that colleagues at other institutions have leaned into AI as a generative tool while still demanding a level of human creativity. “To get around [appropriation], they ask students to submit work that uses ChatGPT and work with the students to build on that.” Halwani gives a sample assignment: use AI to write a one-page paper on Aristotleʼs theory of virtue. “Then, the teacher and the students work further on this paper. The teacher might push the student to think more about what the Chat has given, contrast it with what they learned in class, and make corrections.” Essentially, the teacher works with the technology instead of against it.

Similarly, according to a Feb. 12, 2023, Yale News article, some Yale professors encourage students to use ChatGPT while also exploring creative solutions that rely on studentsʼ own thinking. The Student Handbook for the Rhode Island School of Design (RISD) encourages risk-taking, while also noting that risks that challenge conventions “must at all times adhere to the fundamental value underlying academic conduct at RISD: honesty in the creation and presentation of oneʼs work as well as in oneʼs relations to others and their work.”

AI offers tremendous potential as a tool that can aid and advance academic and artistic research. Are the ethical implications of this new technology a modern version of the old debate over whether artists who used a camera obscura were really drawing?

“There are going to be plenty of interesting ways that students can use this tool that are going to be within bounds of our conduct policy,” says Jackson.

“So weʼre not going to rush to create those gates. It is simply a question of what that usage looks like — usage that has integrity versus usage that would lack integrity.”
