
RE: “Carleton professor designs emotion-detecting robot,” Jan. 15-22. 

In the Jan. 15-22 issue of the Charlatan, there was a story on Carleton professor Anthony Whitehead’s project to build a robot capable of reading people’s emotions by scanning facial expressions and classifying the emotions behind them.

Fascinatingly, Whitehead’s project arrives amid a new wave of computing breakthroughs, from self-driving cars to prosthetic limbs controlled by the mind.

However, some prominent figures in the tech sector have been voicing their worries about artificial intelligence. Entrepreneurs Elon Musk and Bill Gates, along with astrophysicist Stephen Hawking, have been bringing “robophobia” back into the public consciousness.

On Jan. 21, Musk, co-founder of Tesla Motors and PayPal, told American financial network CNBC that he invested in Vicarious, an artificial intelligence firm, in order to “keep an eye” on how artificial intelligence research is evolving.

For the past year, Hawking has argued in print and television interviews that if artificial intelligence overtakes human intelligence, machines could redesign themselves and “evolve” far faster than humans ever could.

In an op-ed he co-authored for the British newspaper The Independent, Hawking called the advance of artificial intelligence “potentially the best or worst thing to happen to humanity in history.”

Before we cast away our smartphones, let’s keep in mind that the robot apocalypse Musk, Gates, and Hawking are predicting is at least decades away.

Despite these advances, artificial intelligence is still in its infancy. Neither Siri nor Cortana is ready to follow in HAL’s footsteps quite yet.

However, these warnings come from staggeringly intelligent people, and they wouldn’t spend time on this issue if they weren’t truly concerned. While Skynet may lie far off in the future, the march of AI has serious ramifications for today’s world.

In a few short years, drone strikes have become the method of choice for the United States in its fight against global terrorism, striking targets in the Middle East. But their use has concerned many, and the UN issued a report on their use against civilian targets.

Just last year, the UN held two meetings on “killer robots,” autonomous drones that can attack without human input. In May, 87 member states met with experts to discuss the moral, legal, and practical ramifications of machines that can “decide” when to kill a human being. Another meeting was held in November, and a third is scheduled for this April to discuss banning autonomous weapons under the Geneva Conventions.

In February 2014, the European Parliament passed a resolution calling for a ban on the development, production, and use of fully autonomous weapons.

This is just one example of how artificial intelligence is shaping our world. Every day we edge closer to the worlds envisioned by writers like Philip K. Dick and Isaac Asimov, even if only one step at a time.

Musk and Hawking are right to question how artificial intelligence will affect our civilization, because it is doing so as we speak. This does not mean we should cower from marvelous projects like Whitehead’s emotion-detecting robot, but that we should discuss how we want our technology to reflect our values going forward.