Fairmont State University's Academic Integrity policy can be found in our Student Code of Conduct under the heading "Standards of Conduct".
You should read the entire Student Code of Conduct, but the relevant section when it comes to AI usage for your coursework is as follows:
"Using someone else's words, ideas, or academic work without proper citation or permission. This includes submitting work completed by another individual or AI tools without attribution, or copying material from sources without proper acknowledgement."
Some professors will include an AI syllabus statement outlining their expectations for their course. It is your responsibility to check with your professors and clarify the extent to which you are able to utilize AI in each specific course.
There are serious environmental considerations when it comes to the creation, expansion, and maintenance of AI data centers, particularly their heavy consumption of electricity and water.
There are many open legal questions about how AI-generated content interacts with copyright, patent, and trade secret law.
Common legal questions revolving around AI include:
Legal accountability
As of now, the United States does not have comprehensive federal legislation regulating AI, although the Biden administration released a Blueprint for an AI Bill of Rights in 2022. The European Union has enacted the AI Act, which could serve as a model for other countries.
Transparency in AI decision making
Disclosures about how AI tools make decisions build trust among users, and it is important to verify that AI makes effective and fair decisions on behalf of real people. This matters most when AI performs critical tasks in industries like law enforcement, finance, and healthcare.
In almost all cases, AI tools you use collect and store your personal data. There are three main sources for these data:
I. Information that is already publicly available on the internet.
II. Information that is licensed from third-party data brokers.
III. Information that users or human trainers freely provide.
Following privacy best practices, such as reviewing each tool's privacy settings and avoiding sharing sensitive personal information in prompts, can help you manage what personal data is available.
Further reading: "AI, data privacy and you" by UNC IT Services.
The following is a non-exhaustive set of examples of bias in AI.
Confirmation bias
A user-end bias: the tendency to favor information that aligns with prior assumptions, beliefs, and values. Users over-rely on AI when they uncritically accept recommendations that align with their own predictions.
Implicit bias
Unconscious biases present in the humans who produced the data an AI is trained on. These biases are transferred into the data.
Sampling bias
Biases in datasets that result from skewed data collection methods or the overrepresentation of subsets of data. Examples include data from leading questionnaires, oversampling from convenience or volunteer sources, and a lack of randomization.
Algorithmic bias
Bias that occurs when algorithms make decisions that systematically disadvantage a group of people, often because those populations are underrepresented in the dataset.
Temporal bias
Biases created when datasets are limited to specific periods of time, creating a narrow data environment for AI to learn from. This is especially troublesome when the latest research in your field isn't included in an AI's training data. Data subject to temporal bias are also likely to carry the implicit biases common to a historical period.