No, AI (Artificial Intelligence) did not write this post. But perhaps it could have? My written work exists on this blog and could be pulled into the vast pool of data that a Large Language Model (LLM) AI program uses to generate content. When I was challenged to explore AI tools, my first petulant instinct was to ask a program to write this post for me. I even used belle-of-the-ball ChatGPT for the first time (not my cup of tea). What could be a better "exploration" of AI tools? As this sophomoric protest might reveal, I am hardly a fan of the modern AI boom. Momentum has kept me a Chrome browser user for many years, but the inescapability of AI summaries in my Google searches might be what drives me away (I am hardly alone in my desire to live without this function).
Image Source: Blog Author
Logically and realistically, I recognize that the latest AI trends are a natural evolution of the way these technologies have already been integrated into my daily digital life. But the messaging that came with the public opening of ChatGPT and certain stable diffusion programs (as well as the push from companies that invested in this boom) has been fatiguing. Still, as public opinion ebbs and flows, data shows that these programs are making their way further into our lives. This piece from Wired looks at the friction between Americans' concerns about AI and their embrace of ChatGPT in their work lives.
The Future has AI in it
Realistically, we don't know what forms AI programming will take in the future. But it's the safest bet that it is here to stay. Even experts are conflicted about what world this builds for us; check out the variation in this Pew Research Center survey.
As an educator and information sciences professional, it would be negligent to let my personal opinions keep my students from learning about AI tools. I currently work with first graders who get most of their tech education during their extensive Library/Information Sciences class, so current AI trends haven't directly made their way into my class. (I've felt my stomach drop as my students confidently report whatever their Alexa told them as absolute truth, but I try to take this in stride and ask questions rather than shutting them down.) The higher grades seem to be feeling more of the pinch; maybe that is why high school teachers are more likely to have a negative view of AI in education, as reported in this survey.
Teaching With, Around, and Through AI
This last point is what I personally need to hold on to. I believe in instilling a growth mindset in students and challenging them to do hard things because I trust that the human mind is inquisitive. I am happy to currently work at a school that holds off on assigning traditional grades to elementary students so they can focus on learning as the goal instead of "performing".
Still keeping my acceptance slightly salty, I look for more concrete advice about ways to use AI, and chatbots specifically, to promote active learning. Ditch That Textbook has a gentle guide for educators that includes the note that it's okay to use traditional paper-and-pencil assignments during an adjustment period, as well as suggestions about how to assess memory and understanding while taking advantage of collaboration opportunities. Marc Hayes has this set of suggestions specifically for primary grades teachers, with acknowledgements of limitations.
One struggle I actively face with my first graders is that this is the year they really start writing as an expression of thought and a means to communicate, not just a physical action. Even in the very well staffed classroom I get to work in, it is difficult to spread around individual attention. This led me to actually be interested in Class Companion, a feedback generation program. I'm not sure I'd use it with my particular age group (they are way too likely to take every suggestion at exact face value and love to have something to copy), but I would try it with slightly more established writers who could benefit from the coaching.
So it seems I'm not immune to the charms of AI programs. But any instruction about or using modern generative AI would not be complete without looking at the ethical pain points. Articles like this one and plans like this one, which look at how teaching the ethics of AI needs to be part of any growing AI user's education, were therefore on my radar.
Harm Reduction: AI and the Environment
Image Source: Blog Author, screenshot from BlueSky
One of my frustrations with automatically generated AI summaries or features that cannot be disabled is that they force my complicity in the enormous ecological impact of LLM AI programs. An individual's personal use of resources is rarely the most important drain when large systems exist (reports about carbon footprints have already illuminated this), but when it happens automatically for every Google search? Any classroom using AI should be discussing this; luckily there are resources to help.
Teaching What You Abhor: AI Art Generation
Last year my school ran a program about using AI art to illustrate a book. I understand why they did this program. I know that being able to suddenly generate an image is enticing, especially to impatient youth who haven't had the time and practice to develop artistic skills and personal styles. But it also distressed me.
Science News Explores, writing here about the ethical debate over whether AI art is theft, concluded: "It depends on who you ask." Other reporting looks at the tricky legal landscape that is currently being disputed. But something can be legal without being ethical, so I asked. I asked the many friends and associates I have who work in the physical arts. The overwhelming majority did not want their art to be used to train AI's stable diffusion technology but were at a loss for how to share their work. They agreed with artist Julie Curtis, as quoted by curator Bianca Bosker in this podcast (ironically titled "Democratizing Art", a phrase AI art advocates use to describe a benefit of the technology): "An idea is not a painting. Painting is constant decision making."
My students love their art class. They experience the deep joy of creation. When arguing against fearing AI, John Spencer (post linked again) talks about how the joy of creation will drive people to keep making content even if AI is available. That may be true, but what he doesn't acknowledge is that the economic impact of AI art is a far more present concern than the existential question of why we make art. As someone on record as being concerned about the way technology will be wielded to disenfranchise people, this is my main concern about the full-throated embrace of AI-generated art, even if we can clear up the theft concerns.
Going From Here
It's useful to get uncomfortable sometimes, and working with explicit AI tools certainly makes me uncomfortable. I will have to learn to work with this; data shows students certainly are.
But my hope would be that there can be serious and hard discussions, and improvements made to how AI is used, discussed, and regulated, so that I can do so with only my personal discomfort holding me back.
Image Source: Free Malaysia Today