Generative AI Research Guide

Hallucinations

AI hallucinations are a significant concern in the deployment of Large Language Models (LLMs) and other AI applications. A hallucination is the generation of plausible-sounding but misleading or factually incorrect content, often rooted in a model's reliance on diverse, potentially biased training data drawn from the Internet. Hallucinations undermine the reliability of AI outputs and pose a particular challenge for users who lack the subject-matter familiarity to verify what they read. AI-generated outputs therefore require critical evaluation and validation, together with a clear understanding of the limitations of AI systems.
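
Because chatbots can also hallucinate their own citations, one practical validation step is to confirm that the references they cite actually exist (the Aljamaan et al. article below develops this idea into a formal "reference hallucination score"). The sketch that follows is a minimal illustration, assuming Python 3 with the third-party requests package; it queries Crossref's public REST API, which answers HTTP 404 for DOIs it has no record of:

    import requests

    def doi_exists(doi: str) -> bool:
        # Crossref's public REST API returns HTTP 200 for registered DOIs
        # and HTTP 404 for DOIs it has no record of.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # One DOI from the reading list below and one invented DOI:
    for doi in ["10.2196/54345", "10.9999/not.a.real.doi"]:
        label = "found" if doi_exists(doi) else "NOT FOUND (possible hallucination)"
        print(f"{doi}: {label}")

Note that a resolving DOI is necessary but not sufficient: the cited work must still be read to confirm it supports the claim attributed to it.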

Read more:

          Butler, J., Puleo, J., Harrington, M., Dahmen, J., Rosenbaum, A., Kerkhoffs, G., … & Kennedy, J. (2024). From technical to understandable: artificial intelligence large language models improve the readability of knee radiology reports. Knee Surgery, Sports Traumatology, Arthroscopy, 32(5), 1077-1086. https://doi.org/10.1002/ksa.12133

          Buholayka, M., Zouabi, R., & Tadinada, A. (2023). Is ChatGPT ready to write scientific case reports independently? A comparative evaluation between human and artificial intelligence. Cureus. https://doi.org/10.7759/cureus.39386

          Aljamaan, F., Temsah, M., Altamimi, I., Al‐Eyadhy, A., Jamal, A., Alhasan, K., … & Malki, K. (2024). Reference hallucination score for medical artificial intelligence chatbots: development and usability study. JMIR Medical Informatics, 12, e54345. https://doi.org/10.2196/54345

          Rao, A., Pang, M., Kim, J., Kamineni, M., Lie, W., Prasad, A., … & Succi, M. (2023). Assessing the utility of ChatGPT throughout the entire clinical workflow: development and usability study. Journal of Medical Internet Research, 25, e48659. https://doi.org/10.2196/48659

          Temsah, M., Alhuzaimi, A., Almansour, M., Aljamaan, F., Alhasan, K., Batarfi, M., … & Nazer, R. (2024). Art or artifact: evaluating the accuracy, appeal, and educational value of AI-generated imagery in DALL·E 3 for illustrating congenital heart diseases. Journal of Medical Systems, 48(1). https://doi.org/10.1007/s10916-024-02072-0

Environment

Generative AI poses environmental challenges that warrant critical examination. Most notably, training and operating AI models requires significant energy, contributing to substantial carbon emissions. Studies of AI's environmental footprint point to the extensive electricity consumed by data centers and model training, with some estimates attributing considerable CO2 emissions to these activities.

Read more:

          Naeeni, S., & Nouhi, N. (2023). The environmental impacts of AI and digital technologies. AI and Tech in Behavioral and Social Sciences, 1(4), 11-18. https://doi.org/10.61838/kman.aitech.1.4.3

          van der Ven, H., Corry, D., Elnur, R., Provost, V., Syukron, M., & Tappauf, N. (2024). Does artificial intelligence bias perceptions of environmental challenges? Environmental Research Letters, 20(1), 014009. https://doi.org/10.1088/1748-9326/ad95a2

          Taherdoost, H. (2023). Towards artificial intelligence in sustainable environmental development. Artificial Intelligence Evolution, 49-54. https://doi.org/10.37256/aie.4120232503

          Smuha, N. (2021). Beyond the individual: governing AI's societal harm. Internet Policy Review, 10(3). https://doi.org/10.14763/2021.3.1574

Inaccuracy and Bias

Generative AI systems are subject to significant problems of inaccuracy and bias, with profound implications for their outputs and for societal trust. A prominent concern is that these models reflect, and often amplify, biases inherent in their training data: biases related to race and gender, for example, are perpetuated in outputs unless they are systematically identified and mitigated during development. This amplification raises critical ethical questions about the use of generative AI in sensitive areas such as healthcare, where incorrect or biased recommendations can adversely affect patient care. Inaccuracies can also stem from misleading or misrepresented data in the training corpus, producing outputs that do not align with factual information.

Read more:

          Ferrara, E. (2023). Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://doi.org/10.3390/sci6010003

          Parikh, A., Michael, C., Conger, J., McCoy, A., Chang, J., & Zhang-Nunes, S. (2024). Accuracy and bias in artificial intelligence chatbot recommendations for oculoplastic surgeons. Cureus. https://doi.org/10.7759/cureus.57611

          Grassini, S., & Koivisto, M. (2024). Understanding how personality traits, experiences, and attitudes shape negative bias toward AI-generated artworks. Scientific Reports, 14(1). https://doi.org/10.1038/s41598-024-54294-4

          Kouzelis, L., & Spantidi, O. (2024). Enhancing historical extended reality experiences: prompt engineering strategies for AI-generated dialogue. Applied Sciences, 14(15), 6405. https://doi.org/10.3390/app14156405

Lack of Attribution

The lack of attribution for AI-generated content raises significant ethical and legal challenges. As AI systems increasingly produce original works such as art, literature, and academic content, questions arise over who deserves credit for these outputs. The dilemma deepens when AI tools are fine-tuned on a user's prior contributions, blurring both attribution and accountability.

Additionally, the absence of clear frameworks for recognizing and protecting AI-generated works under intellectual property law compounds the problem. Current statutes struggle to articulate ownership rights in such content, often leaving creators uncertain of their legal standing. This lack of clarity can foster an environment in which AI-generated materials are copied and reused without attribution, undermining ethical standards in creative fields and academic settings.

Read more:

          Earp, B., Mann, S., Liu, P., Hannikainen, I., Khan, M., Chu, Y., … & Savulescu, J. (2024). Credit and blame for AI-generated content: effects of personalization in four countries. Annals of the New York Academy of Sciences, 1542(1), 51-57. https://doi.org/10.1111/nyas.15258

          Itanyi, N. (2024). Reconceptualizing the protection of AI-generated works in the digital age: an analysis of the intellectual property laws in Nigeria and the United States. Business Law Review, 45(6), 180-184. https://doi.org/10.54648/bula2024022

          Guo, H., & Zaini, S. (2024). Artificial intelligence in academic writing: a literature review. Asian Pendidikan, 4(2), 46-55. https://doi.org/10.53797/aspen.v4i2.6.2024

Confidentiality, Intellectual Property Risks, & Copyright

Generative AI presents significant confidentiality and intellectual property risks, largely because it processes and generates content from vast datasets that may include sensitive information. Because these systems can inadvertently reproduce confidential data drawn from their training sets, data leaks and privacy violations are a persistent concern.

Intellectual property risks further complicate the landscape. Generative AI can infringe third-party copyrights and trade secrets if it produces content that closely resembles existing works without appropriate licensing or attribution. Moreover, AI may create new works in which ownership rights are blurred, leaving creators uncertain about their entitlements.

Read more:

          Liu, Y., Huang, J., Li, Y., Wang, D., & Xiao, B. (2024). Generative AI model privacy: a survey. Artificial Intelligence Review, 58(1). https://doi.org/10.1007/s10462-024-11024-6

          Paul, B., & Anuradha, A. (2024). Artificial intelligence in different business domains (pp. 13-33). https://doi.org/10.4018/979-8-3693-1565-1.ch002

          Prinz, K. (2025). Managing the legal risks of artificial intelligence on intellectual property and confidential information. Consulting Psychology Journal: Practice and Research. https://doi.org/10.1037/cpb0000287

Unintended Uses

Unintended uses of generative AI in higher education present both opportunities and challenges that institutions must navigate carefully. One of the most pressing concerns is the potential for academic dishonesty. With tools like ChatGPT available, students may be tempted to use generative AI to complete assignments, leading to increased instances of plagiarism and undermining the integrity of educational assessments. 

Read more:

          Guo, H., & Zaini, S. (2024). Artificial intelligence in academic writing: a literature review. Asian Pendidikan, 4(2), 46-55. https://doi.org/10.53797/aspen.v4i2.6.2024