http://www.zerohedge.com/news/2017-1...ce-user-safety
Full article on link.

A mere few years ago, the idea that artificial intelligence (AI) might be used to analyze and report to law enforcement aberrant human behavior on social media and other online platforms was merely the far-out premise of dystopian movies such as Minority Report, but now Facebook proudly brags that it will use AI to "save lives" based on behavior and thought pattern recognition.
What could go wrong?
The latest puff piece in TechCrunch, profiling the innocuous-sounding "roll out" of AI (as if it were a mere modest software update) "to detect suicidal posts before they're reported," opens with the glowingly optimistic line, "This is software to save lives." Who could possibly doubt such a wonderful and benign initiative, one in which AI evaluates people's mental health? TechCrunch's Josh Constine begins:
This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.
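The flow the quote describes, scoring every post with a classifier and routing high-scoring posts to human moderators rather than waiting for user reports, can be sketched in miniature. Note that the keyword scorer, weights, and threshold below are invented placeholders standing in for Facebook's actual (undisclosed) model, not a description of it:

```python
# Hypothetical triage pipeline: score each post, flag those above a review
# threshold for human moderators. The keyword-weight scorer is a toy stand-in
# for a real trained text classifier; all values here are illustrative.

RISK_KEYWORDS = {"goodbye": 0.4, "hopeless": 0.3, "end it": 0.5}

def risk_score(post: str) -> float:
    """Toy stand-in for a classifier: sum weights of matched keywords, cap at 1.0."""
    text = post.lower()
    return min(1.0, sum(w for kw, w in RISK_KEYWORDS.items() if kw in text))

def triage(posts, threshold=0.5):
    """Return (post, score) pairs whose score crosses the review threshold."""
    flagged = []
    for post in posts:
        score = risk_score(post)
        if score >= threshold:
            flagged.append((post, score))  # in the described system, these
                                           # would be queued for human review
    return flagged

flagged = triage(["Nice weather today", "Feeling hopeless, time to end it"])
```

The point of the design, as the quote presents it, is latency: a proactive scorer surfaces worrisome posts immediately instead of depending on a friend happening to report them.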
CEO Mark Zuckerberg has long hinted that his team has been wrestling with ways to prevent what appears to be a disturbing rise in live-streamed suicides, as well as the much larger social problem of online bullying and harassment. One recent example that gained international media attention was a bizarre incident out of Turkey, where a distraught father shot himself on Facebook Live after announcing that his daughter was getting married without his permission. Though the example actually demonstrates the endlessly complex and unforeseen variables involved in human decision-making and the human psyche (in this case, notions of rigid Middle East cultural taboos and stigma clearly played a part), TechCrunch holds it up as something AI could possibly have prevented.
...
---
Total Behavioral Control
And, as usual, they intentionally lie by exclusion so that they can profit and benefit from what they do. Just as they ask "How can we help you pay your own medical bills?" instead of asking "Why are the costs of medical care so high?", they won't ask why a person is suicidal. Why? They don't care. They don't care if a person has become suicidal because of situations in their lives: mostly, situations that have been intentionally caused by the Elite to manufacture dependency on the controllers in every form. All the Elite care about is that people produce and consume, allow themselves to be a part of the great machine, and don't do anything that would cost us money, like suicide.
"We own you".
- RonPaulForums.com is an independent grassroots outfit not officially connected to Ron Paul but dedicated to his mission. For more information see our Mission Statement.