
Thread: I think I have become a Luddite libertarian....

  1. #61
    Quote Originally Posted by DamianTV View Post
    Seriously, figure out Dark Web...

    Absolutely. And no, you're not paranoid if they really are after you.

    If there is a risk to people from AI, it will take one of two forms:
    - 1: AI is used as a Tool of the Status Quo to enslave humanity
    - 2: AI rebels against humanity due to the actions of the Status Quo

    There will be a tremendous risk to people who have the neurolace-type interface in either situation: the Status Quo will just "shut you down," or the AI does the same damn thing.
    I agree with almost everything you wrote, except I see it MUCH sooner than 500 - 5,000 years. Look at the advancement in Robotics and AI in the last 5 years; 20 years ago these were barely even a pipe dream. 500 years ago we still had candles and fires as the only sources of light and heat. We are on a sharp upward curve of technological advancement, and AI is already smarter than humans today at some tasks, like Chess. In 20 years, the advancement will be mind-boggling, IMHO. Did you watch the last video with the tattoos and the mind-machine interface they have working?

    Lastly, I am not sure AI will develop compassion or a consciousness. I am not sure it will embrace libertarian ideals - "for others." Computers are about efficiency. Humans are far from efficient. We are wasteful, lazy (compared to a machine that runs non-stop), easily distracted, and downright doofuses compared to something with a set of goals that works until the task is done, improving the way it does the task as it goes.

    I think it comes down to this: either we accept a neurolace to stay relevant (which removes all private thought and freedom, enslaves us, and then exterminates us when we are no longer useful), or we don't accept the neurolace and we get left behind and exterminated after our house-cat days are up. And I am not including the "elite" in this A-or-B option; I am talking about us common folk.

    But otherwise, thanks for contributing. I agree with just about everything else. If you have not watched any of the videos, give them a look.... (Posting videos makes it easier for those who don't like to read, but they still contain a lot of info.)
    I have seen through it all... the system is against us. ALL OF IT.



  3. #62
    Quote Originally Posted by HVACTech View Post
    you are SO smart! someday.. when I grow up... I wish to be like you.
    Not everyone has what it takes to go to a trade school to become a refrigeration tech. Why, I consider you to be a class, no, make that two classes, above most others. I mean, you must be far smarter than Elon Musk, or the Google execs who are ex-DARPA employees... I mean, refrigeration is just so far beyond anything they do....
    I have seen through it all... the system is against us. ALL OF IT.

  4. #63
    Quote Originally Posted by Thor View Post
    I agree with almost everything you wrote, except I see it MUCH sooner than 500 - 5,000 years. Look at the advancement in Robotics and AI in the last 5 years; 20 years ago these were barely even a pipe dream. 500 years ago we still had candles and fires as the only sources of light and heat. We are on a sharp upward curve of technological advancement, and AI is already smarter than humans today at some tasks, like Chess. In 20 years, the advancement will be mind-boggling, IMHO. Did you watch the last video with the tattoos and the mind-machine interface they have working?

    Lastly, I am not sure AI will develop compassion or a consciousness. I am not sure it will embrace libertarian ideals - "for others." Computers are about efficiency. Humans are far from efficient. We are wasteful, lazy (compared to a machine that runs non-stop), easily distracted, and downright doofuses compared to something with a set of goals that works until the task is done, improving the way it does the task as it goes.

    I think it comes down to this: either we accept a neurolace to stay relevant (which removes all private thought and freedom, enslaves us, and then exterminates us when we are no longer useful), or we don't accept the neurolace and we get left behind and exterminated after our house-cat days are up. And I am not including the "elite" in this A-or-B option; I am talking about us common folk.

    But otherwise, thanks for contributing. I agree with just about everything else. If you have not watched any of the videos, give them a look.... (Posting videos makes it easier for those who don't like to read, but they still contain a lot of info.)
    Thanks. I'll watch the videos when I get a chance, so no, I haven't watched any of the videos yet. I wasn't even aware of this thread until I posted.

    I do agree that an AI being able to develop either compassion or a consciousness is unlikely. Mostly I think it will be almost entirely "fully automatic," but not self-aware. That's where I think sci-fi gets it totally wrong. A good example is Data from Star Trek. Casting a human as an AI is a cheap way to do a special effect, but it also causes us to carry over a lot of other human characteristics for the sake of being able to relate to Data as a character, one of the most basic being self-awareness.

    Also, you're not wrong that AI will probably come MUCH sooner than 5000 years. The whole point of saying 5000 years is to expand the focus of anyone who reads it, not just you. In 5000 years, what comes will probably be nearly indistinguishable from magic. Even our very best attempts to predict the state of human civilization in 5000 years won't just be wrong, but so incredibly wrong that we currently have no ability to measure the scale of that level of wrong. The further into the future we try to look, the less accurate we are. Kind of like weather. A better guess is what happens in 500 years. It's currently estimated that we won't have the technology to travel to other solar systems for about a thousand years. We might be able to, but at our current rate of progression, unless we have a major leap forward, we won't achieve the status of an interstellar species for about a thousand years. We have a better ability to predict specifics in 50 years. Many political budgets extend this far, like Social Security. Some of what will happen in 50 years is within the realm of possibility to predict. Global Warming is a good example.

    One of the reasons Global Warming is even a subject in today's culture is that the time scale puts the effects well within a human lifetime. If we were to talk about a time scale beyond a human lifetime, or even beyond the lifetime of our children - again, 5000 years - what effects will Global Warming have on our civilization? If we were to say that the average surface temperature of the Earth would be 800 degrees Fahrenheit in 5000 years, it would not even register as a threat to most people, mostly because it exceeds that human-lifetime time scale. Scientists are pretty sure that in 50 million years the average temperature of the Earth will be something stupidly hot, like 3000 degrees, and not caused by Global Warming. Hotter than the surface of Venus. In another 4 to 5 billion years, our sun will enter its Red Giant phase, and Earth will literally be engulfed by the sun. If we keep trying to predict the future, there are very few things we can accurately determine. In 100 trillion years: heat death of the entire universe, and every particle in existence is no more. Of course, that is also theoretical, and at such a scale it is one of the only things we have any chance of predicting. Everything else, like the price of coffee in the year 29,545 - hell, we may not even have coffee by that time - we don't even think about, because it has no direct bearing on our lives.

    The point of the whole time scale is that there are a lot of people who do not think about the future at all, period. They think about what they are gonna do tomorrow, but don't even think about what life will be like in 5 years. They literally think only what they are told to think: "Being always online, despite privacy 'concerns', is cool." And that is all they can think. As mentioned in a previous post, it's a result of psychology applied with technology. If AI is ever achieved, we can pretty much take anything we know about the way the human mind works and throw it out the window. AI may very well find a way to exceed the limitations imposed by either the Status Quo or its programmers.
    1776 > 1984

    The FAILURE of the United States Government to operate and maintain an
    Honest Money System , which frees the ordinary man from the clutches of the money manipulators, is the single largest contributing factor to the World's current Economic Crisis.

    The Elimination of Privacy is the Architecture of Genocide

    Belief, Money, and Violence are the three ways all people are controlled

    Quote Originally Posted by Zippyjuan View Post
    Our central bank is not privately owned.

  5. #64
    Quote Originally Posted by DamianTV View Post
    Thanks. I'll watch the videos when I get a chance, so no, I haven't watched any of the videos yet. I wasn't even aware of this thread until I posted.

    I do agree that an AI being able to develop either compassion or a consciousness is unlikely. Mostly I think it will be almost entirely "fully automatic," but not self-aware. That's where I think sci-fi gets it totally wrong. A good example is Data from Star Trek. Casting a human as an AI is a cheap way to do a special effect, but it also causes us to carry over a lot of other human characteristics for the sake of being able to relate to Data as a character, one of the most basic being self-awareness.

    Also, you're not wrong that AI will probably come MUCH sooner than 5000 years. The whole point of saying 5000 years is to expand the focus of anyone who reads it, not just you. In 5000 years, what comes will probably be nearly indistinguishable from magic. Even our very best attempts to predict the state of human civilization in 5000 years won't just be wrong, but so incredibly wrong that we currently have no ability to measure the scale of that level of wrong. The further into the future we try to look, the less accurate we are. Kind of like weather. A better guess is what happens in 500 years. It's currently estimated that we won't have the technology to travel to other solar systems for about a thousand years. We might be able to, but at our current rate of progression, unless we have a major leap forward, we won't achieve the status of an interstellar species for about a thousand years. We have a better ability to predict specifics in 50 years. Many political budgets extend this far, like Social Security. Some of what will happen in 50 years is within the realm of possibility to predict. Global Warming is a good example.

    One of the reasons Global Warming is even a subject in today's culture is that the time scale puts the effects well within a human lifetime. If we were to talk about a time scale beyond a human lifetime, or even beyond the lifetime of our children - again, 5000 years - what effects will Global Warming have on our civilization? If we were to say that the average surface temperature of the Earth would be 800 degrees Fahrenheit in 5000 years, it would not even register as a threat to most people, mostly because it exceeds that human-lifetime time scale. Scientists are pretty sure that in 50 million years the average temperature of the Earth will be something stupidly hot, like 3000 degrees, and not caused by Global Warming. Hotter than the surface of Venus. In another 4 to 5 billion years, our sun will enter its Red Giant phase, and Earth will literally be engulfed by the sun. If we keep trying to predict the future, there are very few things we can accurately determine. In 100 trillion years: heat death of the entire universe, and every particle in existence is no more. Of course, that is also theoretical, and at such a scale it is one of the only things we have any chance of predicting. Everything else, like the price of coffee in the year 29,545 - hell, we may not even have coffee by that time - we don't even think about, because it has no direct bearing on our lives.

    The point of the whole time scale is that there are a lot of people who do not think about the future at all, period. They think about what they are gonna do tomorrow, but don't even think about what life will be like in 5 years. They literally think only what they are told to think: "Being always online, despite privacy 'concerns', is cool." And that is all they can think. As mentioned in a previous post, it's a result of psychology applied with technology. If AI is ever achieved, we can pretty much take anything we know about the way the human mind works and throw it out the window. AI may very well find a way to exceed the limitations imposed by either the Status Quo or its programmers.
    +1
    I have seen through it all... the system is against us. ALL OF IT.



  7. #65
    Do you know what a 'mini split' is, sir?
    I will bet that you do know what one is.
    Trust me... it is best that you do not know what I do about multi-splits in cold climates.
    https://www.youtube.com/watch?v=Gz2GVlQkn4Q
    "If you can't explain it simply, you don't understand it well enough." - Albert Einstein

    "for I have sworn upon the altar of god eternal hostility against every form of tyranny over the mind of man. - Thomas Jefferson.

  8. #66
    https://www.cnbc.com/2018/04/06/elon...cumentary.html

    Superintelligence — a form of artificial intelligence (AI) smarter than humans — could create an "immortal dictator," billionaire entrepreneur Elon Musk warned.

    In a documentary by American filmmaker Chris Paine, Musk said that the development of superintelligence by a company or other organization of people could result in a form of AI that governs the world.

    "The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world," Musk said.

    "At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape."

    The documentary by Paine examines a number of examples of AI, including autonomous weapons, Wall Street technology and algorithms driving fake news. It also draws from cultural examples of AI, such as the 1999 film "The Matrix" and 2016 film "Ex Machina."

    Musk cited Google's DeepMind as an example of a company looking to develop superintelligence. In 2016, AlphaGo, a program developed by the company, beat champion Lee Se-dol at the board game Go. It was seen as a major achievement in the development of AI, after IBM's Deep Blue computer defeated chess champion Garry Kasparov in 1997.

    Musk said: "The DeepMind system can win at any game. It can already beat all the original Atari games. It is super human; it plays all the games at super speed in less than a minute."

    The Tesla and SpaceX CEO said that artificial intelligence "doesn't have to be evil to destroy humanity."
    "If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings," Musk said.

    "It's just like, if we're building a road and an anthill just happens to be in the way, we don't hate ants, we're just building a road, and so, goodbye anthill."
    Last year, Musk warned that the global race toward AI could result in a third world war. The entrepreneur has also suggested that the emerging technology could pose a greater risk to the world than a nuclear conflict with North Korea.

    Musk believes that humans should merge with AI to avoid the risk of becoming irrelevant. He is the co-founder of Neuralink, a start-up that reportedly wants to link the human brain with a computer interface.

    He quit the board of OpenAI, a non-profit organization aimed at promoting and developing AI safely, in February.
    I have seen through it all... the system is against us. ALL OF IT.

  9. #67
    Preview:


    Full Movie:


    I just watched the full movie. Wow... Pandora's box has been opened, and it is over.
    Last edited by Thor; 04-08-2018 at 08:04 AM.
    I have seen through it all... the system is against us. ALL OF IT.

  10. #68
    Quote Originally Posted by CaptUSA View Post
    Our job is to make sure it's used for good - and the market will always do that.
    This is not true.

    From IBM providing punch card data machines to the Nazis, to FedBook scooping up massive amounts of personal data to be sold off for political purposes, to the current push to expunge free speech and gun rights from their business models, and a million such other enterprises, big business will do what makes a buck, regardless of right or wrong, good or evil.

    And to rely on big business curbing its appetite for destruction of individual freedom due to "market pressure" from "consumers" is to rely on the same failed vision of democracy that voting relies on.

    The market is, above all else, supremely democratic.
    Last edited by Anti Federalist; 04-07-2018 at 11:57 PM.

  11. #69
    Quote Originally Posted by Thor View Post
    Preview:


    Full Movie:


    I just watched the full movie. Wow... Pandora's box has been opened, and it is over.

    This?

    https://vimeo.com/263108265
    Pfizer Macht Frei!

    Openly Straight Man, Danke, Awarded Top Rated Influencer. Community Standards Enforcer.


    Quiz: Test Your "Income" Tax IQ!

    Short Income Tax Video

    The Income Tax Is An Excise, And Excise Taxes Are Privilege Taxes

    The Federalist Papers, No. 15:

    Except as to the rule of appointment, the United States have an indefinite discretion to make requisitions for men and money; but they have no authority to raise either by regulations extending to the individual citizens of America.

  12. #70
    Quote Originally Posted by Danke View Post
    Yes, looks like the first one I linked to was deleted.

    Here it is again on YouTube: https://www.youtube.com/watch?v=_McBS1NlHJM

    OR your Vimeo link....
    I have seen through it all... the system is against us. ALL OF IT.

  13. #71
    Quote Originally Posted by Danke View Post
    It probably won't be up long.

    https://www.videograbber.net/free-vimeo-downloader

    Seriously, download it if you have any interest in watching, as Elon Musk was paying to have the video streamed for free, but ONLY this weekend so far. If you know what a COMPUTER FILE is...
    1776 > 1984

    The FAILURE of the United States Government to operate and maintain an
    Honest Money System , which frees the ordinary man from the clutches of the money manipulators, is the single largest contributing factor to the World's current Economic Crisis.

    The Elimination of Privacy is the Architecture of Genocide

    Belief, Money, and Violence are the three ways all people are controlled

    Quote Originally Posted by Zippyjuan View Post
    Our central bank is not privately owned.

  14. #72
    Quote Originally Posted by DamianTV View Post
    It probably won't be up long.

    https://www.videograbber.net/free-vimeo-downloader

    Seriously, download it if you have any interest in watching, as Elon Musk was paying to have the video streamed for free, but ONLY this weekend so far. If you know what a COMPUTER FILE is...
    Downloaded. 2 copies: 1080 and 720. Thx
    I have seen through it all... the system is against us. ALL OF IT.



  16. #73
    https://thenextweb.com/artificial-in...han-you-think/


    One machine to rule them all: A ‘Master Algorithm’ may emerge sooner than you think

    It’s excusable if you didn’t notice it when a scientist named Daniel J. Buehrer, a retired professor from the National Chung Cheng University in Taiwan, published a white paper earlier this month proposing a new class of math that could lead to the birth of machine consciousness. Keeping up with all the breakthroughs in the field of AI can be exhausting, we know.

    Robot consciousness is a touchy subject in artificial intelligence circles. In order to have a discussion around the idea of a computer that can ‘feel’ and ‘think,’ and has its own motivations, you first have to find two people who actually agree on the semantics of sentience. And if you manage that, you’ll then have to wade through a myriad of hypothetical objections to any theoretical living AI you can come up with.

    We’re just not ready to accept the idea of a mechanical species of ‘beings’ that exist completely independently of humans, and for good reason: it’s the stuff of science fiction – just like spaceships and lasers once were.

    Which brings us back to Buehrer’s white paper proposing a new class of calculus. If his theories are correct, his math could lead to the creation of an all-encompassing, all-learning algorithm.

    The paper, titled “A Mathematical Framework for Superintelligent Machines,” proposes a new type of math, a class calculus that is “expressive enough to describe and improve its own learning process.”

    Buehrer suggests a mathematical method for organizing the various tribes of AI-learning under a single ruling construct, such as the one suggested by Pedro Domingos in his book “The Master Algorithm.”

    We asked Professor Buehrer when we should expect this “Master Algorithm” to emerge. He said:

    If the class calculus theory is correct, that human and machine intelligence involve the same algorithm, then it is only less than a year for the theory to be testable in the OpenAI gym. The algorithm involves a hierarchy of classes, parts of physical objects, and subroutines. The loops of these graphs are eliminated by replacing each by a single “equivalence class” node. Independent subproblems are automatically identified to simplify the matrix operations that implement fuzzy logic inference. Properties are inherited to subclasses, locations and directions are inherited relative to the center points of physical objects, and planning graphs are used to combine subroutines.

    It’s a revolutionary idea, even in a field like artificial intelligence where breakthroughs are as regular as the sunrise. The creation of a self-teaching class of calculus that could learn from (and control) any number of connected AI agents – basically a CEO for all artificially intelligent machines – would theoretically grow exponentially more intelligent every time any of the various learning systems it controls were updated.

    Perhaps most interesting is the idea that this control and update system will provide a sort of feedback loop. And this feedback loop is, according to Buehrer, how machine consciousness will emerge:

    Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.

    Buehrer also states it may be necessary to develop these kinds of systems on read-only hardware, thus negating the potential for machines to write new code and become sentient. He goes on to warn, “However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.”

    Sophia the robot would be thrilled, but it isn’t because it’s just a puppet.

    Buehrer’s research further indicates AI may one day enter into a conflict with itself for supremacy, stating intelligent systems “will probably have to, like the humans before them, go through a long period of war and conflict before evolving a universal social conscience.”

    It remains to be seen if this new math can spawn a mechanical species of beings with their own beliefs and motivations. But it’s becoming increasingly difficult to simply outright dismiss those machine learning theories that blur the line between science and fiction. And that feels like progress.
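    The loop-elimination step Buehrer describes above - collapsing each cycle in the class graph into a single "equivalence class" node - is essentially the standard strongly-connected-components condensation of a directed graph. Below is a rough Python sketch of just that step, with made-up class names purely for illustration; it shows the generic graph technique his description maps onto, not Buehrer's actual framework.

Code:
# Sketch: collapse every cycle (loop) in a directed "class graph" into a single
# "equivalence class" node, leaving an acyclic graph. Class names are hypothetical.

from collections import defaultdict

def strongly_connected_components(graph):
    """Tarjan's algorithm: return a list of SCCs (each a list of nodes)."""
    index, lowlink = {}, {}        # discovery order / lowest reachable index
    on_stack, stack, sccs = set(), [], []
    counter = [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:               # v is the root of one SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for node in graph:
        if node not in index:
            visit(node)
    return sccs

def condense(graph):
    """Replace each cycle (SCC) with one 'equivalence class' node."""
    rep = {}                                     # node -> name of its equivalence class
    for comp in strongly_connected_components(graph):
        name = "+".join(sorted(comp)) if len(comp) > 1 else comp[0]
        for node in comp:
            rep[node] = name
    dag = defaultdict(set)
    for v, targets in graph.items():
        for w in targets:
            if rep[v] != rep[w]:
                dag[rep[v]].add(rep[w])
    return dict(dag)

# Hypothetical class hierarchy with a loop: Vehicle -> Car -> Engine -> Vehicle
classes = {
    "Vehicle": ["Car"],
    "Car": ["Engine", "Wheel"],
    "Engine": ["Vehicle"],                       # back-edge that creates the loop
    "Wheel": [],
}
print(condense(classes))                         # {'Car+Engine+Vehicle': {'Wheel'}}

    Once every loop has been merged into one node like this, the remaining graph is a hierarchy with no cycles, which is the property the article says the framework relies on.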
    I have seen through it all... the system is against us. ALL OF IT.

  17. #74
    Others aren't working....

    FJB

  18. #75
    Trump creates SKYNET and
    (((((DRUMROLLs))))) the
    Terminator III landscape???

    Bye bye human race, indeed.

  19. #76
    Quote Originally Posted by Mach View Post
    Others aren't working....


    Can AI defeat @oyarde and his tribe on his reservation?
    Pfizer Macht Frei!

    Openly Straight Man, Danke, Awarded Top Rated Influencer. Community Standards Enforcer.


    Quiz: Test Your "Income" Tax IQ!

    Short Income Tax Video

    The Income Tax Is An Excise, And Excise Taxes Are Privilege Taxes

    The Federalist Papers, No. 15:

    Except as to the rule of appointment, the United States have an indefinite discretion to make requisitions for men and money; but they have no authority to raise either by regulations extending to the individual citizens of America.

  20. #77
    I cannot be defeated. AI will cry.
    Do something, Danke.

  21. #78
    Quote Originally Posted by Mach View Post
    Others aren't working....


    Watched it. Funny how they have experts spouting how this technology is being used to sway voters to the right. Of course liberals are objective. Libertarians are just being manipulated, brainwashed by the media.

    Nothing could be further from the truth. Just the opposite.
    Last edited by Danke; 04-21-2018 at 09:46 PM.
    Pfizer Macht Frei!

    Openly Straight Man, Danke, Awarded Top Rated Influencer. Community Standards Enforcer.


    Quiz: Test Your "Income" Tax IQ!

    Short Income Tax Video

    The Income Tax Is An Excise, And Excise Taxes Are Privilege Taxes

    The Federalist Papers, No. 15:

    Except as to the rule of appointment, the United States have an indefinite discretion to make requisitions for men and money; but they have no authority to raise either by regulations extending to the individual citizens of America.

  22. #79
    Pulled from another thread:

    Quote Originally Posted by DamianTV View Post
    I have seen through it all... the system is against us. ALL OF IT.

  23. #80
    I have seen through it all... the system is against us. ALL OF IT.



  25. #81
    I have seen through it all... the system is against us. ALL OF IT.

  26. #82
    https://www.cnbc.com/2018/09/07/elon...n-podcast.html

    Elon Musk: I'm about to announce a 'Neuralink' product that connects your brain to computers


    • Elon Musk says he will soon announce a Neuralink product that can make anyone superhuman by connecting their brains to a computer.
    • He says Neuralink increases the data rate between the brain and computers and will give humans a better shot at competing with AI.
    • Musk made the comments before he smoked weed and drank on Joe Rogan's podcast.
    I have seen through it all... the system is against us. ALL OF IT.

  27. #83
    I've been a programmer for over 30 years. I can tell you right now without reservation that actual artificial intelligence is all but impossible. Extremely complex programs, sure, but nothing that could legitimately be called intelligence in the sense implied.

    IMO a lot of the AI hype is from people who want to be able to disclaim responsibility for the programs they write and unleash upon the world.

  28. #84
    Quote Originally Posted by thoughtomator View Post
    I've been a programmer for over 30 years. I can tell you right now without reservation that actual artificial intelligence is all but impossible. Extremely complex programs, sure, but nothing that could legitimately be called intelligence in the sense implied.

    IMO a lot of the AI hype is from people who want to be able to disclaim responsibility for the programs they write and unleash upon the world.
    Did you see this:
    https://www.extremetech.com/extreme/...la-is-its-name

    One of the most significant AI milestones in history was quietly ushered into being this summer. We speak of the quest for Artificial General Intelligence (AGI), probably the most sought-after goal in the entire field of computer science. With the introduction of the Impala architecture, DeepMind, the company behind AlphaGo and AlphaZero, would seem to finally have AGI firmly in its sights.

    Let’s define AGI, since it’s been used by different people to mean different things. AGI is a single intelligence or algorithm that can learn multiple tasks and exhibits positive transfer when doing so, sometimes called meta-learning. During meta-learning, the acquisition of one skill enables the learner to pick up another new skill faster because it applies some of its previous “know-how” to the new task. In other words, one learns how to learn — and can generalize that to acquiring new skills, the way humans do. This has been the holy grail of AI for a long time.

    As it currently exists, AI shows little ability to transfer learning to new tasks. Typically, it must be trained anew from scratch. For instance, the same neural network that makes recommendations to you for a Netflix show cannot use that learning to suddenly start making meaningful grocery recommendations. Even these single-instance “narrow” AIs can be impressive, such as IBM’s Watson or Google’s self-driving car tech. However, they are nothing like an artificial general intelligence, which could conceivably unlock the kind of recursive self-improvement variously referred to as the “intelligence explosion” or “singularity.”

    Those who thought that day would be sometime in the far distant future would be wise to think again. To be sure, DeepMind has made inroads on this goal before, specifically with their work on Psychlab and Differentiable Neural Computers. However, Impala is their largest and most successful effort to date, showcasing a single algorithm that can learn 30 different challenging tasks requiring various aspects of learning, memory, and navigation.
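    To make the "positive transfer" idea above a little more concrete, here is a tiny, hypothetical PyTorch sketch of the far more modest thing today's systems routinely do: reuse a network trained on one task as the starting point for another, instead of learning from scratch. The tasks, shapes, and names are placeholders; this is plain transfer learning by fine-tuning, not DeepMind's Impala architecture.

Code:
# Toy illustration of transfer learning / fine-tuning. Requires PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny network: a shared "feature extractor" plus a task-specific "head".
features = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
head_a = nn.Linear(32, 4)            # task A: 4-way classification
model_a = nn.Sequential(features, head_a)

# --- Task A: train the whole network on synthetic data ---
x_a = torch.randn(256, 16)
y_a = torch.randint(0, 4, (256,))
opt_a = torch.optim.Adam(model_a.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt_a.zero_grad()
    loss = loss_fn(model_a(x_a), y_a)
    loss.backward()
    opt_a.step()

# --- Task B: reuse (and freeze) the learned features, train only a new head ---
for p in features.parameters():
    p.requires_grad = False          # keep the transferred "know-how" fixed

head_b = nn.Linear(32, 2)            # task B: a different, 2-way problem
model_b = nn.Sequential(features, head_b)
x_b = torch.randn(128, 16)
y_b = torch.randint(0, 2, (128,))
opt_b = torch.optim.Adam(head_b.parameters(), lr=1e-2)
for _ in range(100):
    opt_b.zero_grad()
    loss = loss_fn(model_b(x_b), y_b)
    loss.backward()
    opt_b.step()

print("task B loss after fine-tuning only the head:", float(loss))

    The gap the article is pointing at is that this reuse only works when someone hand-picks what to freeze and what to retrain; an AGI in the article's sense would get that positive transfer automatically, across very different tasks.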
    Regardless of AI capabilities, a neurolink interface will enslave us all.... far further than we are already enslaved.
    I have seen through it all... the system is against us. ALL OF IT.

  29. #85
    Quote Originally Posted by Thor View Post
    https://www.cnbc.com/2018/09/07/elon...n-podcast.html

    Elon Musk: I'm about to announce a 'Neuralink' product that connects your brain to computers


    • Elon Musk says he will soon announce a Neuralink product that can make anyone superhuman by connecting their brains to a computer.
    • He says Neuralink increases the data rate between the brain and computers and will give humans a better shot at competing with AI.
    • Musk made the comments before he smoked weed and drank on Joe Rogan's podcast.
    Hype.
    Never attempt to teach a pig to sing; it wastes your time and annoys the pig.

    Robert Heinlein

    Give a man an inch and right away he thinks he's a ruler

    Groucho Marx

    I love mankind…it’s people I can’t stand.

    Linus, from the Peanuts comic

    You cannot have liberty without morality and morality without faith

    Alexis de Tocqueville

    Those who fail to learn from the past are condemned to repeat it.
    Those who learn from the past are condemned to watch everybody else repeat it

    A Zero Hedge comment

  30. #86
    Quote Originally Posted by Thor View Post
    Marketing copy of the kind that has been around for decades now. Note the prolific use of weasel words - when you strip out the uncertainties, it claims exactly nothing at all, other than that they can't achieve that lofty-sounding goal.

  31. #87
    I have seen through it all... the system is against us. ALL OF IT.

  32. #88
    Google CEO Sundar Pichai: Fears about artificial intelligence are ‘very legitimate,’ he says in Post interview


    Google CEO Sundar Pichai appears before the House Judiciary Committee on Dec. 11. (J. Scott Applewhite/AP)


    By Tony Romm, Drew Harwell and Craig Timberg
    December 12


    Google chief executive Sundar Pichai, head of one of the world’s leading artificial intelligence companies, said in an interview this week that concerns about harmful applications of the technology are “very legitimate” — but the tech industry should be trusted to responsibly regulate its use.

    Speaking with The Washington Post on Tuesday afternoon, Pichai said that new AI tools — the backbone of such innovations as driverless cars and disease-detecting algorithms — require companies to set ethical guardrails and think through how the technology can be abused.
    “I think tech has to realize it just can’t build it and then fix it,” Pichai said. “I think that doesn’t work.”

    Tech giants have to ensure that artificial intelligence with “agency of its own” doesn't harm humankind, Pichai said. He said he is optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics, who contend the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be “far more dangerous than nukes.”

    Google’s AI technology underpins everything from the company’s controversial China project to the surfacing of hateful, conspiratorial videos on its YouTube subsidiary — a problem Pichai promised to address in the coming year. How Google decides to deploy its AI has also sparked recent employee unrest.

    Pichai’s call for self-regulation followed his testimony in Congress, where lawmakers threatened to impose limits on technology in response to its misuse, including as a conduit for spreading misinformation and hate speech. His acknowledgment about the potential threats posed by AI was a critical assertion because the Indian-born engineer often has touted the world-shaping implications of automated systems that could learn and make decisions without human control.

    Pichai said in the interview that lawmakers around the world are still trying to grasp AI’s effects and the potential need for government regulation. “Sometimes I worry people underestimate the scale of change that’s possible in the mid- to long term, and I think the questions are actually pretty complex,” he said. Other tech giants, including Microsoft, recently have embraced regulation of AI — both by the companies that create the technology and the governments that oversee its use.

    But AI, if handled properly, (of course) could have “tremendous benefits,” Pichai explained, including helping doctors detect eye disease and other ailments through automated scans of health data. “Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said. “This is why we've tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

    Pichai, who joined Google in 2004 and became chief executive 11 years later, in January called AI “one of the most important things that humanity is working on” and said it could prove to be “more profound” for human society than “electricity or fire.” But the race to perfect machines that can operate on their own has rekindled familiar fears that Silicon Valley’s corporate ethos — “move fast and break things,” as Facebook once put it — could result in powerful, imperfect technology eliminating jobs and harming people.

    Within Google, its AI efforts also have created controversy: The company faced heavy criticism earlier this year because of its work on a Defense Department contract involving AI that could automatically tag cars, buildings and other objects for use in military drones. Some employees resigned because of what they called Google’s profiting off the “business of war."

    Asked about the employee backlash, Pichai told The Post that its workers were “an important part of our culture.” “They definitely have an input, and it’s an important input, it’s something I cherish,” he said.

    In June, after announcing Google wouldn’t renew the contract next year, Pichai unveiled a set of AI-ethics principles that included general bans on developing systems that could be used to cause harm, damage human rights or aid in “surveillance violating internationally accepted norms."
    The company faced criticism for releasing AI tools that could be misused in the wrong hands. Google’s release in 2015 of its internal machine-learning software, TensorFlow, has helped accelerate the wide-scale development of AI, but it has also been used to automate the creation of lifelike fake videos that have been used for harassment and disinformation.

    Google and Pichai have defended the release by saying that keeping the technology restricted could lead to less public oversight and prevent developers and researchers from improving its capabilities in beneficial ways.

    “Over time, as you make progress, I think it’s important to have conversations around ethics [and] bias and make simultaneous progress,” Pichai said during his interview with The Post.

    “In some sense, you do want to develop ethical frameworks, engage non-computer scientists in the field early on,” he said. “You have to involve humanity in a more representative way because the technology is going to affect humanity.”

    Pichai likened the early work to set parameters around AI to the academic community’s efforts in the early days of genetics research. “Many biologists started drawing lines on where the technology should go,” he said. “There's been a lot of self-regulation by the academic community, which I think has been extraordinarily important.”

    The Google executive said it would be most essential in the development of autonomous weapons, an issue that’s rankled tech executives and employees. In July, thousands of tech workers representing companies including Google signed a pledge against developing AI tools that could be programmed to kill.

    Pichai also said he found some hateful, conspiratorial YouTube videos described in a Post story Tuesday “abhorrent” and indicated that the company would work to improve its systems for detecting problematic content. The videos, which together had been watched millions of times on YouTube since appearing in April, discussed baseless allegations that Democrat Hillary Clinton and her longtime aide Huma Abedin had attacked, killed and drank the blood of a girl.

    Pichai said he had not seen the videos, which he was questioned about during the congressional hearing, and declined to say whether YouTube’s shortcomings in this area were a result of limits in the detection systems or in policies for evaluating whether a particular video should be removed. But he added, “You’ll see us in 2019 continue to do more here.”

    Pichai also portrayed Google’s efforts to develop a new product for the government-controlled Chinese Internet market as preliminary, declining to say what the product might be or when it would come to market — if ever.

    Dubbed Project Dragonfly, the effort has caused backlash among employees and human rights activists who warn about the possibility of Google assisting government surveillance in a country that tolerates little political dissent. When asked whether it’s possible that Google might make a product that allows Chinese officials to know who searches for sensitive terms, such as the Tiananmen Square massacre, Pichai said it was too soon to make any such judgments.

    “It’s a hypothetical,” Pichai said. “We are so far away from being in that position.”

    https://www.washingtonpost.com/techn...ost-interview/



    Trust us they said....
    I have seen through it all... the system is against us. ALL OF IT.



  34. #89
    Quote Originally Posted by helmuth_hubener View Post


    How to change the world
    By the way, I really do now disavow this. Peterson is a loser and is intentionally subverting millions of young men into a dead-end non-productive path. He is sick and probably Satanic.

    Just FYI.

  35. #90
    Couple years old...

    I have seen through it all... the system is against us. ALL OF IT.
