Intel Develops Controversial AI to Detect Emotional States of Students
But debates on AI, science, ethics and privacy abound.
An Intel-developed software solution aims to apply the power of artificial intelligence to the faces and body language of digital students. According to Protocol, the solution is being distributed as part of the "Class" software product and aims to aid teachers by letting them see the AI-inferred mental state (such as boredom, distraction, or confusion) of each student. Intel eventually aims to expand the program into broader markets. However, the technology has been met with pushback that brings debates on AI, science, ethics, and privacy to the forefront.
The AI-based feature, developed in partnership with Classroom Technologies, is integrated with Zoom via Classroom Technologies' "Class" software product. It can be used to classify students' body language and facial expressions whenever digital classes are held through the videoconferencing application. Citing teachers' experiences with remote lessons during the COVID-19 pandemic, Michael Chasen, co-founder and CEO of Classroom Technologies, hopes the software gives teachers additional insights, ultimately improving remote learning experiences.
The software uses students' video streams, feeding them into the AI engine alongside contextual, real-time information that allows it to classify each student's understanding of the subject matter. Sinem Aslan, a research scientist at Intel who helped develop the technology, says the main objective is to improve one-on-one teaching sessions by allowing the teacher to react in real time to each student's state of mind (nudging them in whatever direction is deemed necessary).
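To make that description concrete, here is a purely illustrative sketch of what such a per-student classification step could look like. Intel has not published its features, thresholds, or model; every field, value, and rule below is an assumption made up for the example, standing in for whatever the real engine does with video-derived and contextual signals.

```python
from dataclasses import dataclass

# Hypothetical per-frame features extracted from a student's video stream.
# Intel has not published its feature set; these fields and thresholds are
# illustrative assumptions only.
@dataclass
class StudentFrame:
    gaze_on_screen: bool        # is the student looking at the shared content?
    head_movement: float        # 0.0 (still) .. 1.0 (constantly moving)
    seconds_since_input: float  # time since the student last spoke or typed

def infer_state(frame: StudentFrame, lesson_difficulty: float) -> str:
    """Toy rule-based stand-in for the classification step described above."""
    if not frame.gaze_on_screen and frame.seconds_since_input > 120:
        return "distracted"
    if frame.gaze_on_screen and lesson_difficulty > 0.7 and frame.head_movement > 0.5:
        return "confused"
    if frame.seconds_since_input > 300:
        return "bored"
    return "engaged"

if __name__ == "__main__":
    sample = StudentFrame(gaze_on_screen=True, head_movement=0.8, seconds_since_input=40.0)
    print(infer_state(sample, lesson_difficulty=0.9))  # -> "confused"
```

Even in this toy form, the crux of the debate is visible: the labels come out looking crisp and authoritative, regardless of how shaky the mapping from observable signals to mental states actually is.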
But while Intel and Classroom Technologies' aims may be well-intentioned, the basic scientific premise behind the AI solution - that body language and other external signals can be used to accurately infer a person's mental state - is far from settled.
For one, research has shown the dangers of labeling: the act of fitting information - sometimes even shoehorning it - into easy-to-perceive but frequently oversimplified categories.
We don't yet fully understand the external dimensions through which people express their internal states. The average human being, for example, expresses themselves through dozens (some say hundreds) of microexpressions (dilating pupils, for instance), macroexpressions (smiling or frowning), bodily gestures, and physiological signals (such as perspiration and increased heart rate).
It's worth asking how the AI technology's model - and its accuracy - holds up when the scientific community itself hasn't reached a definitive conclusion on mapping external signals to internal states. Building houses on quicksand rarely works out.
Another noteworthy caveat for the AI engine is that the expression of emotions also varies between cultures. While most cultures equate smiling with internal happiness, Russian culture, for instance, reserves smiles for close friends and family - being overly smiley in the wrong context can be construed as a lack of intelligence or honesty. Extend this across the myriad of cultures, ethnicities, and individual variations, and you can imagine the implications of these personal and cultural "quirks" for the AI model's accuracy.
According to Nese Alyuz Civitci, a machine-learning researcher at Intel, the company's model was built with the insight and expertise of a team of psychologists, who analyzed ground-truth data captured in real-life classes using laptops with 3D cameras. The psychologists then examined the videos, labeling the emotions they detected throughout the feeds. For a data point to be considered valid and integrated into the model, at least two of the three psychologists had to agree on how to label it.
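In other words, the ground-truth labels were consolidated by majority vote among the three annotators. A minimal sketch of that filtering step might look like the following; the annotations and label names here are invented for illustration and are not Intel's data.

```python
from collections import Counter
from typing import List, Optional

def consolidate(labels: List[str], min_agreement: int = 2) -> Optional[str]:
    """Keep a segment's label only if at least `min_agreement` annotators agree."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else None

# Hypothetical annotations from three psychologists for three video segments.
annotations = [
    ["bored", "bored", "distracted"],   # 2/3 agree -> kept as "bored"
    ["confused", "engaged", "bored"],   # no majority -> discarded
    ["engaged", "engaged", "engaged"],  # unanimous  -> kept as "engaged"
]

ground_truth = [consolidate(segment) for segment in annotations]
print(ground_truth)  # ['bored', None, 'engaged']
```

The catch, of course, is that a two-out-of-three vote only measures how often trained observers agree with each other, not whether the chosen label matches what the student was actually feeling.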
Intel's Civitci found it exceedingly hard to identify the subtle physical differences between possible labels. Interestingly, Aslan says Intel's emotion-analysis AI wasn't assessed on whether it accurately reflected students' actual emotions, but rather on whether its results were useful to, and trusted by, teachers.
There are endless questions to be posed about AI systems, their training data (which has severe consequences, for instance, for facial recognition tech used by law enforcement), and whether their results can be trusted. Systems such as these can prove beneficial, leading teachers to ask the right question, at the right time, of a currently troubled student. But they can also be detrimental to students' performance, well-being, and even academic success, depending on their accuracy and how teachers use them to inform their opinions of students.
Questions surrounding long-term analysis of students' emotional states also arise - could a report from a system like this be used by a company hiring students straight out of university, with labels such as "depressed" or "attentive" attached to them? How much access should the affected individuals have to this data? And what about students' emotional privacy - their capacity to keep their emotional states to themselves? Are we comfortable with our emotions being labeled and accessible to anyone - especially if there's someone in a position of power on the other side of the AI?
The line between surveillance and AI-driven, assistive technologies seems to be thinning, and the classroom is but one of the environments at stake. It gives an entirely new meaning to wearing our hearts on our sleeves.
Francisco Pires is a freelance news writer for Tom's Hardware with a soft spot for quantum computing.
hotaru251 Couldn't a parent list this as collecting a child's data w/o permission, which is against the law in some places (like CA)?
USAFRet
hotaru251 said: Couldn't a parent list this as collecting a child's data w/o permission, which is against the law in some places (like CA)?
This will go back and forth in the courts. Eventually, someone will lose.
DavidC1 From the article:
Systems such as these can prove beneficial, leading teachers to ask the right question, at the right time, of a currently troubled student. But they can also be detrimental to students' performance, well-being, and even academic success, depending on their accuracy and how teachers use them to inform their opinions of students.
In fact, that's the least controversial of the problems it can create. What about intrusion of privacy?
It's said that DARPA's motto is that everything has two sides, meaning it can be used for good and bad. The significant part is that they don't mean it in a general way - they mean that it will always be used for both good and bad.
Intel is also backing research that can read what you are thinking and translate it into an image or video on a screen. Remember the two-sides quote - what happens during future interrogations, where you have to struggle in your head to keep a secret? Where they can just jack you into a computer and see what you think?
The excuse is that it's going to be done to "help mental health patients and the disabled," or nonsense like that. Sure - but only if that's how the technology is actually used, and it never is used only that way.
Look at how governments around the world started becoming authoritarian during COVID, while technologies are starting to enable all the worst dystopian novels and movies put together. Not a good formula at all.