AI Literacy: Responsible Use

Your guide to AI concepts, basic skills, tools, and responsible use.

Academic Integrity

Fairmont State University's Academic Integrity policy can be found in our Student Code of Conduct under the heading "Standards of Conduct".

You should read the entire Student Code of Conduct, but the relevant section when it comes to AI usage for your coursework is as follows:

"[5.B.1] Plagiarism

Using someone else’s words, ideas, or academic work without proper citation or permission. This includes submitting work completed by another individual or AI tools without attribution, or copying material from sources without proper acknowledgement."

Some professors will include an AI syllabus statement outlining their expectations for their course. It is your responsibility to check with your professors and clarify the extent to which you are able to utilize AI in each specific course. 

Environmental Impact

There are serious environmental considerations when it comes to the creation, expansion, and maintenance of AI data centers.

  • Carbon emissions: Data centers consume large amounts of electricity, much of which is generated from fossil fuels.
  • Water scarcity: Data centers use a lot of water for cooling.
  • Hazardous waste: Data centers create electronic waste as a byproduct of running AI tools.
  • Rare earth mining: AI hardware components require rare earth elements, which must be mined.

Laws and Regulations

Intellectual property

There are many open legal questions about how AI-generated content interacts with copyright, patent, and trade secret law.

Common legal questions revolving around AI include:

Legal accountability

As of now, the United States does not have comprehensive federal laws regulating AI, though President Biden's administration proposed a Blueprint for an AI Bill of Rights. The European Union has established laws governing AI, which could potentially serve as a model for other countries.

Transparency in AI decision making

Disclosures about how AI tools make decisions build trust among users, and it is important to verify that AI makes effective and fair decisions on behalf of real people. This is especially important when AI is used for critical tasks in industries like law enforcement, finance, and healthcare.

Your Data Privacy

In almost all cases, the AI tools you use collect and store your personal data. This data comes from three main sources:

I. Information that is already publicly available on the internet.

II. Information that is licensed from third party data brokers.

III. Information that users or human trainers freely provide.

 

Follow these practices to better manage what personal data is available:

  • Be selective of what information you post online (social media, forums, comments, etc.)
  • Review data sharing settings for your devices and browsers (location, contacts, microphone permissions, photo gallery/camera permissions, etc.). 
  • Do not enter sensitive information into AI tools (usernames/passwords, date of birth, SSN, address, email, phone numbers, etc.).

Further reading: AI, data privacy and you by UNC IT Services.

Bias

The following is a non-exhaustive set of examples of bias in AI.

Confirmation bias

A user-side bias: the tendency to favor information that aligns with prior assumptions, beliefs, and values. Users over-rely on AI when they uncritically accept recommendations that match their own expectations.

Implicit bias

These are unconscious biases present in the humans who produced the data an AI is trained on. These biases are transferred into the data.

Sampling bias 

These are biases in datasets that result from skewed data collection methods or the overrepresentation of certain subsets of data. Examples include data gathered from leading questionnaires, oversampling from convenience or volunteer sources, or a lack of randomization.

Algorithmic bias 

This is when algorithms make decisions that systematically disadvantage a group of people, often because these populations are underrepresented in the dataset.

Temporal bias 

These biases arise when datasets are limited to specific periods of time, creating a narrow data environment for AI to learn from. This is especially troublesome when the latest research in your field is not included in an AI's training data. Data subject to temporal bias is also likely to carry the implicit biases common to its historical period.