Understanding Artificial Intelligence
Defining Artificial Intelligence
Artificial Intelligence (AI) refers to the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. ‘Generative AI’ refers to the subset of models or algorithms that create brand-new output, such as text, photos, videos, code, data, or 3D renderings, from the vast amounts of data they are trained on. These models ‘generate’ new content by making fresh predictions grounded in the data they were trained on.
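To make the idea of prediction-based generation concrete, here is a minimal, hypothetical sketch: a toy bigram model that “trains” on a handful of words and then generates new text by repeatedly predicting the next word from patterns in its training data. The corpus and function names are invented for illustration; real generative models are vastly larger and more sophisticated, but the generate-by-prediction loop is the same in spirit.

```python
import random

# Toy training data: a tiny "corpus" of words.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words follow each word in the data.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=6, seed=0):
    """Generate new text by repeatedly predicting the next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        words.append(random.choice(options))  # prediction based on training data
    return " ".join(words)

print(generate("the"))
```

Note that the model can only ever recombine what it has seen: every word it emits comes from its training data, which foreshadows both the power and the limitations discussed later in this unit.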
Since ChatGPT launched on November 30, 2022, generative AI has gained increasing prevalence and will be a focus of this unit. Since 2022, there has been an explosion of other platforms utilizing generative AI, from Microsoft building OpenAI’s technology into its Copilot assistant, to Google’s Gemini, Anthropic’s Claude, and many more (see Lesson 2). (If your institution has recently updated your MS Office suite, for example, you will likely see AI options within your daily office tools.)
How ‘Smart’ is AI?
Artificial Intelligence (AI) refers to the capacity of machines to exhibit intelligence, which includes processes such as perception, synthesis, and inference. This is distinct from the intelligence displayed by humans and non-human animals. The leading AI scholar Kate Crawford, in her book Atlas of AI, highlights that this form of “Artificial Intelligence” is not truly artificial or intelligent. It heavily relies on human labor and human-generated data, primarily focusing on predicting outcomes rather than engaging in reasoning or understanding as humans do. However, others predict that the lines between human intelligence and artificial intelligence are becoming increasingly blurred (see, for example, Ray Kurzweil’s The Singularity is Nearer: When We Merge With AI).
Generative AI Capabilities
Less than six months after ChatGPT was released, people were already using it for a variety of purposes, such as organizing research and reading academic articles; writing speeches, resumes, and emails; and planning their workouts or learning a second language. Today’s generative AI capabilities are even more impressive.
Generative AI can generate outputs for a wide variety of purposes, including text, images, video, code, and data.
These are just a few examples of the possibilities. However, generative AI’s performance on any of these tasks may still contain fundamental flaws. For example, requested references are sometimes found to be “hallucinated,” or fabricated from plausible-looking patterns rather than drawn from real sources. As generative AI becomes more common and integrated into everyday platforms, it is imperative that we consider not only its abilities but also its limitations, so that we use this technology in ways that are responsible, ethical, and accountable.
Generative AI Limitations
In contrast to AI, much of human knowledge, thinking, and communication stems from goal-driven activities, social interactions, modeling others’ actions, and many different types of engagements and experiences in the real world (Kleiman, 2023). AI may never match the richness of the human experience, as it lacks (at least for now) innate human characteristics, qualities, and skills such as creativity, contextual awareness, emotional intelligence, consciousness, and empathy.
There are alternative perspectives: some argue that current AI could be conscious under certain definitions, or that it should be considered to have rights. Most computer scientists, however, argue that current AI system behavior is fully explained by its underlying algorithms and data.
According to Kathryn Conrad and Sean Kamperman in their presentation for the AI, Digital Literacy, & Ethics Educators’ Summit (2023), most LLMs cannot:
- Reliably verify the veracity of their outputs
  - This failure has been termed an AI “hallucination”: the AI authoritatively states content as correct when it has no basis in fact.
- Consistently produce factually reliable outputs
  - AI systems are not able to consistently separate fact from fiction, and will often use the patterns they have learned to generate text that is simply not true.
- Understand what they are talking about or what they are doing
- Recognize hateful, offensive, or biased speech without significant training/guardrails
  - Because AI datasets are assembled and curated by humans, the potential for bias is significant: training data may be incomplete or unrepresentative, and any biases embedded in it can transfer to the output. Biases may be subtle or severe, reflecting societal and structural discrimination.
AI-generated content may also be out of touch with current events and information, since a model only knows about what was included in its training data.
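How skew in training data can surface as skew in output can be shown with a deliberately tiny, hypothetical example: a “model” that predicts a pronoun simply by counting what followed an occupation word in its training sentences. The sentences and function name below are invented for illustration; real systems learn from billions of examples, but the underlying mechanism of inheriting the data’s imbalances is analogous.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data.
training_sentences = [
    "the doctor said he", "the doctor said he", "the doctor said he",
    "the doctor said she",
    "the nurse said she", "the nurse said she", "the nurse said she",
    "the nurse said he",
]

def pronoun_distribution(occupation):
    """'Predict' a pronoun by counting what followed the occupation in training."""
    pronouns = Counter(
        s.split()[-1] for s in training_sentences if s.split()[1] == occupation
    )
    total = sum(pronouns.values())
    return {p: n / total for p, n in pronouns.items()}

print(pronoun_distribution("doctor"))  # skewed toward "he"
print(pronoun_distribution("nurse"))   # skewed toward "she"
```

Nothing in the code is “biased” on its own; the skew comes entirely from the data, which is exactly why unrepresentative training data produces unrepresentative outputs.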