Chalmers Advanced Python

AI policy

Introduction

Generative AI (or simply AI) refers to tools that use probabilistic models and machine learning methods to generate new content based on patterns in previously published content. Examples of modern generative AI tools include ChatGPT (OpenAI), Gemini (Google) and Copilot (Microsoft). Many desktop applications and web search engines today also incorporate AI into their products by default, i.e. without you explicitly choosing to activate or use it.

The use of Generative AI is neither required nor prohibited in this course. Instead, we take a case-based approach, with different uses of AI classified as “permitted,” “problematic,” and “prohibited.” The following guide is intended to support you in making informed decisions about using AI in your studies, as well as to explain the course policy and provide information about the AI disclosure statement you are required to submit as part of the examination.

Permitted, problematic, and prohibited use

AI tools are powerful but not without risks and trade-offs. Beyond their environmental impact, privacy concerns, and inherent bias, AI tools can and frequently do produce output that is incorrect. Because this output usually looks convincing, the errors can be very hard to detect – especially if you are a beginner in the subject area.

Our general advice is not to use generative AI to complete tasks that you do not have the knowledge and skills to complete yourself. Put bluntly: you will not have access to any AI tools in the exam, so if you cannot complete a task without the help of AI, you will not be able to pass the exam.

To give you a clearer idea of what this means for this course, we outline the different categories of AI use below.

🟢 Permitted use

Using AI as a tutor to explain specific questions is considered permissible if it helps you understand a concept better (although, as always, there is no guarantee that its explanations are correct).

Example: “Explain why [::-1] can be used to reverse a string in Python”
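
Whenever you ask a question like this, it is also worth verifying the answer yourself in the interpreter. A minimal check of the slicing behaviour the example refers to:

    # A slice with step -1 walks the string backwards,
    # so s[::-1] produces a reversed copy.
    s = "Chalmers"
    print(s[::-1])  # prints "sremlahC"

Running a small snippet like this yourself is a good habit: it confirms (or refutes) whatever explanation the AI gives you.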

🟠 Problematic use

Using AI to look up things in the documentation of a language/library/framework is considered problematic. One reason is that API details frequently change between versions, and AI tools often lack the context to answer accurately for the version you are using. Moreover, being able to understand and use programming documentation is a learning outcome of this course, and therefore something you need to learn to do yourself.

Example: “Is count a keyword argument in Python’s str.replace() method?”
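
Questions like this are better answered by checking for yourself, since the answer can depend on your Python version. A quick way to do so in your own interpreter, using only the standard library:

    # Print the documented signature for your installed Python version.
    help(str.replace)

    # Or simply try it: passing count positionally works everywhere,
    # while passing it by keyword may raise TypeError on older versions.
    print("banana".replace("a", "o", 1))  # prints "bonana"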

Using AI to help in debugging is also considered problematic. Fixing bugs in your program requires a deep understanding of what your code does, which usually comes from careful analysis and iterative testing. AIs cannot perform any kind of reasoning; they can only repeat patterns they have seen before. While it is possible that your bug is common enough for an AI to recognise it, you will likely learn nothing by fixing your bugs in this way.

Example: “My code gives this error message, please fix the bug for me.”

🔴 Prohibited use

Using AI to write complete code for you is strictly prohibited. This is equivalent to getting someone else to do your work for you, or copying someone’s work and claiming it as your own, and thus constitutes cheating.

Example: “Give me Python code which implements this assignment description.”

Self-disclosure

At the end of the course, you must submit an AI self-disclosure in which you explain the extent to which you did (or did not) use AI tools during this course. This is to be completed individually.