4 Ethical Questions Generative AI Poses
The answers will help determine the future of this transformational tool.
The prevailing wisdom about AI-generated text is that it will change everything about work and society, quite possibly on the scale of major societal shifts like the industrial revolution and the internet age.
4 QUESTIONS GENERATIVE AI POSES
- When is AI writing ethical?
- How will AI affect teaching?
- Will AI replace learning?
- Can knowledge be owned?
One way generative AI resembles those big, transformative leaps is in the ethical questions it compels us all to ask and answer. Technology innovations in our lifetime alone have forced us to grapple with frayed human connections, easy access to harmful products and influences, misinformation, threats to personal data and privacy, and an accelerating culture of immediacy, to name just a few. We are already seeing, and starting to discuss, the big social and moral questions surrounding generative AI.
Because I work for a company that serves education and educators, I know that's where some of the most difficult and confounding of these questions exist. I also think that, because education is a kind of functioning human laboratory, the AI-related questions that start in education are some of the best predictors of what will happen elsewhere. Here are a few of the ethical questions AI is already requiring us to ask and answer in education.
When Is AI Writing Ethical?
One of the first and most obvious questions is whether a line should be drawn between acceptable and unacceptable uses of AI-generated text. Is every use appropriate? Should every possible use be banned, or at least disfavoured?
My sense is that very few people firmly hold either absolute position. Most of us can probably agree that some uses of AI text generation are fine and even helpful, while for other uses the answer is probably no. Does anyone care if AI takes a box score from a baseball game and writes a 200-word summary? I doubt it.
At the same time, we have seen that AI software can pass law school and medical licensing exams. Letting students use AI for writing is one thing, but using generative AI in an exam setting raises the question: Who is really passing the exams?
The emerging minimum standard for AI writing isn't about quantity or even subject matter but rather disclosure of its use. Even if people don't care that AI is writing stories about sports, the stock market or new government reports, they will probably want to know so they can evaluate what they are reading accordingly.
The same standard should apply to academic endeavours and the workplace. It may be just fine to let a text bot write part of your research paper or marketing memo, but your teachers and bosses will more than likely want to know. And using it on an exam does not fairly assess your intellectual ability.
The question that follows, then, isn't so much ethical as practical: If AI can do your work, why does your company need to pay you?
I suspect we'll eventually get to a place where we accept that some parts of some work, and the whole of other work, are open to AI creation. Other areas, medical school for instance, we will regard differently.
How Will AI Affect Teaching?
We may be best served by first asking if AI actually can replace teaching or the delivery of knowledge and skills. The answer, as it is for most complex questions, is both yes and no.
AI can already tell anyone how to bake a birthday cake or change the oil in a car. If that’s teaching or delivering knowledge, then our answer to the above is yes. Then again, you can find the same knowledge on YouTube or any one of a million places on Google.
But is that teaching? Probably not.
Teaching is more human than that; it involves mentoring, role modelling and nurturing emotional as well as cerebral growth. AI can create a quiz and grade the answers, but that's not teaching.
More than likely we will see AI-generated text tools make teachers better by removing some of the mundane, repetitive tasks that consume their days. Writing permission slips, creating study guides and developing lesson plans and tests can all be taken over by AI, leaving teachers more time and energy to do the important stuff that AI can’t.
That is a development we are likely to see in other fields, too, such as law, accounting, marketing and manufacturing. It's easy to see how good AI tools could make the people who do these jobs more efficient and more focused on the more creative, more nuanced and more human aspects of their careers.
Will AI Replace Learning?
We have seen this argument before, just under different circumstances. If AI can write a 500-word summary about the Battle of the Bulge — and it can, by the way — what is the point in taking time to learn all that information? If AI can drive you to work, why learn to drive? If AI can explain calculus, why take calculus?
These are all good questions, but the answer to the question about AI replacing learning is no.
Calculators can do math. We still teach it and we still expect people to learn it because the concepts are foundational. Knowing how numbers relate to one another is essential for living. If you don’t understand the difference between subtracting and dividing, you can’t tell the calculator what you want it to do anyway.
Moreover, AI only knows what's already known. It cannot do new research, it cannot discover new information, and it cannot synthesize personal experiences with book smarts, at least not yet. So the process of learning new things will go on.
Furthermore, AI makes errors. It repeats what has already been written by someone else somewhere else, even if that information is inaccurate, misleading or incomplete. And what AI writes and adds to the sphere of knowledge will compound those inaccuracies.
This means that to live in a world where AI creates things, human beings have to know when AI is wrong, and the only way to know that is to know the right answer ourselves. That means we'll always be learning.
Can Knowledge Be Owned?
This has been an ethical question for hundreds of years.
On some level, knowledge should be free, belonging to no one. We should all be able to know whatever we want from the menu of everything that’s known.
On another level, we understand the need for patents, copyrights, degrees and licenses. Without placing some knowledge behind walls, or having some way to protect its creator, there would be little incentive to create more of it and no way to guard against those who pass off someone else's creation as their own.
The U.S. Copyright Office has already signalled that it will not extend protections to material created by AI. That is a big development, and it makes sense. Because AI can only repeat or rephrase what's already out there, things that may already belong to someone, protecting that output again would be odd.
So, at least as it applies to the knowledge that AI has to share, the answer is no one owns it because we all do.