Tech
Helping AI developers and future tech leaders practice ethical reasoning as they create new technologies
Technologies that use artificial intelligence are being developed at a rapid pace. How will they impact people? Will these technologies be good for society?
RadioIQ spoke with an engineer and a philosopher, both interested in teaching software developers how to include ethical reasoning as they create AI.
Traditionally, engineering students haven't been expected to take ethics or philosophy courses. In recent years, though, some schools, including Harvard, have started bringing philosophers and computer science professors together to teach future developers to ask ethical questions about the technologies they create.
Virginia Tech also has a new course to teach graduate engineering students how to be better critical thinkers.
“We need not just innovation as quickly as possible to make money; we need responsible innovation for some of these emerging technologies,” said Kendall Giles, one of the professors teaching the class. He said that with more AI systems being designed, he sees a growing need for developers to think through the possible outcomes and harms their technologies might create.
“It’s not like technologies of the past,” Giles said. “The scale, the potential benefits and the risks are extremely high.”
These risks include misinformation, job loss and bias. Cansu Canca is a philosophy professor at Northeastern University, where she works on several projects to help engineers think through ethical questions.
“Am I doing a good thing by creating this system? What are the types of harm that can happen?” Canca said.
Canca also launched a consulting company, called AI Ethics Lab, which works with developers to integrate ethical reasoning into every AI project.
“Ethics is very logic based and very mathematical, very analytical,” Canca said. “And once engineers realize that this is what’s going on, they actually quite enjoy engaging with it. Because logic forms the basis of writing code. And it forms the basis of ethical reasoning.”
She said some people worry that this adds extra steps and slows down innovation. She argues that, done right, ethics actually streamlines the creative process for developers.
“They don’t have to keep going back and asking, ‘Well, did we forget something? Are we doing something wrong?’ No, they understand the risks. They mitigate the risks and they move on to the next phase,” Canca said.
She added that a lot of developers she talks with are concerned about how AI technologies they make will impact society.
“They want to create. They want to try things out. Which is, generally speaking, a great thing,” Canca said. “But if you have the innovation ambition coupled with the demands of capitalism, without any safeguards, what you end up with is really just go ahead, do it, deploy it, and if it hurts, well they’ll figure it out later.”
She said investors can put pressure on companies to develop better policies around ethics. After all, if these technologies have ethical failings, that could be a liability for the company.
She said regulators have a place in this too, in creating safeguards for AI technology.
“Which are not, and should not be, very detailed,” Canca said. “Because we don’t want regulation to be micromanaging everything. But it really draws the boundaries of what is legally allowed, and within that, we have to figure out, we have to work on, what should be ethically allowed.”
A survey published earlier this year by the consulting company Wavestone found that 99 percent of tech CEOs recognize a need for ethical safeguards within their companies. But only 42 percent said ethical policies are in place.
Giles said the workforce leading these companies hasn’t traditionally been trained in ethical reasoning.
“The types of technical leaders that I’m seeing, they need this wider perspective,” Giles said.
He believes that if future developers learn to think through ethical issues, they’ll be better innovators and do more good with the technologies they create.