Key Facts:
- The AI system uses ten categories of social emotions to identify violations of social norms (a rough sketch of this idea follows the list below).
- The system has been tested on two large datasets of short texts, validating its models.
- This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
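To make the "social emotions as a signal for norm violations" idea concrete, here is a minimal sketch of one way such a system could work. This is not the paper's actual method: the ten emotion labels, the set treated as violation signals, and the threshold below are all illustrative guesses, and the off-the-shelf zero-shot classifier is a stand-in for whatever models the researchers trained.

```python
# Hedged sketch: score a short text against ten assumed social-emotion
# labels with a generic zero-shot classifier, then flag the text if the
# top emotion is one we (arbitrarily) associate with a violated norm.
from transformers import pipeline

# Hypothetical stand-ins for the "ten categories of social emotions".
SOCIAL_EMOTIONS = [
    "guilt", "shame", "embarrassment", "pride", "anger",
    "contempt", "disgust", "gratitude", "admiration", "envy",
]

# Assumed mapping: emotions we treat as evidence of a norm violation.
VIOLATION_SIGNALS = {"guilt", "shame", "embarrassment", "anger", "contempt", "disgust"}

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def flag_norm_violation(text: str, threshold: float = 0.5) -> dict:
    """Return the top-scoring social emotion and whether it implies a violation."""
    result = classifier(text, candidate_labels=SOCIAL_EMOTIONS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return {
        "emotion": top_label,
        "score": round(top_score, 3),
        "violation": top_label in VIOLATION_SIGNALS and top_score >= threshold,
    }

if __name__ == "__main__":
    print(flag_norm_violation(
        "I can't believe he cut in line and then yelled at the cashier."
    ))
```

The design point the sketch illustrates: the system never needs an explicit list of norms. It infers a violation indirectly, from the social emotion a text expresses, which is also why the cross-cultural framing matters, since the same emotion categories can transfer across cultures even when the underlying norms differ.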
Making an AI that detects violations of social norms itself sounds like a violation of social norms
Odd, the AI says it isn’t. It must be you that is in violation.
This sounds like a prequel to the show Psycho-Pass.
This will absolutely be used to oppress the neurodivergent at some point
Could be helpful if it silently (or at least subtly) warns the user that they’re approaching those boundaries. I wouldn’t mind a little extra assistance preventing those embarrassing after-the-fact realizations. It’d have to be done in a way that preserves privacy though.
Still dangerous: an authority could subtly shift those boundaries over time to push your behaviour in a desired direction.
Like most scientific and technical advances, it could be an amazing tool for personal use. It won’t, of course. It will be used to make someone rich even richer, and to control or oppress people. Gotta love humanity.
Can we like… maybe have some good, as in morally good, use cases for AI?
I know we had the medical diagnosis one, that was nice. Maybe some more like that?
deleted by creator
No, more like the ones that give early warning signs of like, dementia or something.