Page 7 of 8
Results 181 to 210 of 233

Thread: We are in the AI Singularity

  1. #181
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  3. #182
    The study that we grumpy Computer Science long-beards have been waiting for, since the Spock-eared transhumanist fanbois won't listen when we tell them you can't just feed AI output back into AI training and make it smarter. Who knew. (Except we really did know, all along, because it's provable that this won't produce the intended "singularity"...)

    Last edited by ClaytonB; 03-04-2024 at 02:15 PM.

  4. #183
    This could be warming up to be the lawsuit of the century...


  5. #184

  7. #185
    Skynet when?


  8. #186


  9. #187
    Did I mention that we are in the AI singularity??

    I wrote this less than a month ago:

    Quote Originally Posted by ClaytonB View Post
    At this point, I think it is bonkers to assume that full-range-of-motion, AGI humanoids are not just around the corner, think Sonny from I, Robot. Given this particular demo video, I would say we are 12 months away from a public launch of one of these devices or, at the very least, an announcement of a large-scale corporate lease program. On the robotic side of things, we are in what AI safety-researchers call a FOOM scenario.
    Last edited by ClaytonB; 03-13-2024 at 09:31 PM.

  10. #188
    The REAL singularity... I'll take one of these droids any day over that creepy Figure-01 minder-drone bot...

    He has a lot of bugs atm, but my droid finally runs his own unfiltered model


    It's amazing this even has to be explained... but it does:

    Current-generation AI is not sci-fi-movie AI, it's not even within lightyears of that. It has absolutely no sense of perspective and running zillions of loops of political re-education question-answer pairs in GPU training isn't going to give it perspective. Putting its finger on the trigger of weapons aimed at citizens is an act of at least treason, in my book, if not outright attempted murder. Giving AI access to weapons, certainly at this stage, is the equivalent of giving a toddler a running chainsaw.
    Last edited by ClaytonB; 03-16-2024 at 12:33 AM.

  11. #189
    They want to take away your guns and they also want to take away your capability to access open-source state-of-the-art AI systems. They're coming for the open-source models. Tyrants gonna tyrant. Around timestamp 1:00:00 --


  12. #190
    The opening is an in-depth, technical analysis of a geometry proof by AlphaGeometry; the summary begins at 18:16.

    Alex admits that he's not a computer scientist, and props to him for acknowledging his limitations on this subject (rare nowadays; people tend to assume that because they can use a computer, they intuitively understand CS, which is just not true). Fortunately for Alex, CS backs up the opinions and intuitions he expresses here.

    It would be possible to make AlphaGeometry produce more human-like geometric proofs. When we work out new proofs, or new knowledge of any kind, our notes look a lot like the "chicken-scratch" mess that AlphaGeometry produces. But then we do a second, editorial revision to clean up our notes and present a reasoned argument from beginning to end. AlphaGeometry also does this; it's just not very human-like in its presentation. So you could use fine-tuning -- RLHF, DPO, human-preference ranking, etc. -- to make AlphaGeometry produce geometric proofs more in the style that humans prefer to read and write. In other words, Alex's criticism on this point, while completely valid, is addressable with existing methods; it would just be a matter of implementing them.
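    For the curious, the core of one of those fine-tuning methods (DPO) is just a loss computed over pairs of preferred/rejected outputs, relative to a frozen reference model. Here's a minimal numeric sketch -- the log-probability values below are invented purely for illustration, not taken from any real model:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization loss for one (preferred, rejected) pair:
    push the policy to rank the preferred output above the rejected one,
    measured relative to a frozen reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# If the policy already prefers the human-preferred proof style more than the
# reference model does, the margin is positive and the loss is small.
print(dpo_loss(logp_w=-5.0, logp_l=-9.0, ref_logp_w=-7.0, ref_logp_l=-7.0))
```

Minimizing this loss over a dataset of human style-preferences is, in essence, what "make the proofs read more like a human wrote them" would amount to.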

    But on the wider implications of AlphaGeometry to mathematical proof-search, generally, this is where Computer Science comes back into the equation. Yes, there are problems that cannot be solved, even by AI. It doesn't matter how "incepted" your AI is, it doesn't matter whether it trains itself, rewrites its own code, self-improves itself, etc. It doesn't matter how much energy the AI has access to, nor what frequencies the AI operates at. The entire observable universe can be devoted to the use of an all-consuming, black-hole-scale AI operating at trillions of times the gigahertz frequencies of modern CPUs, and it still won't be able to solve the monster problems that are among the hardest problems in Computer Science. This is not speculation, it is not merely "empirical", this is provable.

    Thus, when people start hand-waving that "just like AlphaGo beat a game that nobody thought AI could beat, so now AI will soon be able to solve mathematical problems that no one thought could be solved!", they are talking nonsense. There may be problems that are too difficult for humans to solve directly, which AI will help us solve. In fact, we've already done that with the four-color theorem, and other elaborate proofs since then. If you imagine the "mathematical frontier" of proved mathematical theorems as a circle, we can imagine a slightly larger circle defining the "new mathematical frontier" that we will be able to prove with the aid of AI. But the point is that this is only an incremental improvement; it is absolutely not the kind of "singularity" that some of the most ill-informed AI-hype promoters out there keep suggesting. AIs might be able to write better proofs than any human could, and even find proofs that no human could discover but, as already noted, this does not generalize to the unbounded case, and it certainly will not result in some kind of "mathematical singularity" whereby every provable mathematical fact becomes a mere question of querying the omnipotent-AI.

    To ensure that I am being completely clear on this point, consider the unsolvability of Hilbert's 10th problem. Diophantine equations are just elementary number theory; that is, they live on the integers, and you do not need to invent fancier number systems, like real numbers, complex numbers or even rational numbers, to reason about them. The proof works by mapping a Turing machine (a computer) onto Diophantine equations, thus "running a computer on pure numbers". The standard Computer Science result on the unsolvability of the halting problem then carries over: there is provably no algorithm, however clever, that can decide whether an arbitrary Diophantine equation has a solution. And this is a simple question in the purest and most elementary subject of mathematics, often called "the queen of mathematics": number theory. "AI could solve it!", the singularitarian blindly shouts, shaking her pom-poms. No, it can't; we can prove that it cannot be solved, not by AI, not by any finite algorithm whatsoever, even if it could use all the energy in the Universe and all the matter in the Universe to compute at the Bekenstein bound...
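    To make the flavor of this concrete, here is a minimal sketch (my own illustration, not part of any formal proof) of why brute-force search over a Diophantine equation is only a semi-decision procedure: if a solution exists you will eventually find it, but no finite search bound ever lets you conclude that none exists -- and by Matiyasevich's theorem, no cleverer algorithm can close that gap in general:

```python
from itertools import product

def search(f, bound):
    """Look for an integer root of f inside the cube [-bound, bound]^n.
    Finding one settles the equation; finding none settles nothing."""
    nvars = f.__code__.co_argcount
    for xs in product(range(-bound, bound + 1), repeat=nvars):
        if f(*xs) == 0:
            return xs
    return None

# Solvable: x^2 + y^2 - z^2 = 0 has integer roots (Pythagorean triples).
print(search(lambda x, y, z: x*x + y*y - z*z, 4))
# "Unsolvable as far as we searched": x^2 + 1 = 0 has no integer root,
# but the search by itself can never prove that, at any bound.
print(search(lambda x: x*x + 1, 100))  # None
```

For x^2 + 1 = 0 we happen to have a one-line proof of unsolvability; the point of the MRDP theorem is that no uniform procedure supplies such a proof for every equation.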

    Last edited by ClaytonB; 03-18-2024 at 03:17 PM.

  13. #191
    This is the harsh truth:

    On IP issues, I think she slightly overstates the case. There is a huge repository of PD imagery which can and should be used to train AIs. And, in any case, Stable Diffusion is already "cat out of the bag". As for what constitutes "real" creativity, well, we've had malls and Starbucks for how many decades now, and nobody was raising a stink about it. So, $#@!ty AI-art generated by copy-pasting prompts and then posting it on an art-market website as "art" is just Starbuckization for the masses. Starbuckization sucks, but everybody was putting up with it while only the "elites" could do it. Now that everybody's doing it, it's suddenly a problem. But she's right: it's really just a tool and people are confusing it with a "digital Artist".

    I believe we can and will build useful simulacra of natural intelligence which the general public will, of course, call "Artificial Intelligence". I think ChatGPT was the tipping-point where this became inevitable, but I was already looking out for this since the 2016 AlphaGo defeat of the world Go champion, Lee Sedol.

    The root problem is that we all think we somehow know what intelligence is, and how to recognize it. This is why I am so hostile to the real propaganda disinformation going around regarding this idea that "AI has passed the Turing test". No, it hasn't, not even close. It is trivially easy to tell apart any existing LLM from a human, even just over a wire (a chat-only conversation). It is not difficult to cook up recipes for this.

    One fruitful area is simply self-reflection. "What was the favorite song you listened to as a kid, and how did it make you feel?" Just start riffing on this theme, specifically in regard to introspection, and the LLM is eventually going to expose itself as a fraud, because it has no coherent inner-world, and its training nowhere requires it to have one in order to score well and get deployed. As you share personal experiences over an extended context, the LLM is eventually going to start contradicting itself. First it hated Nirvana when it was 13, but then a few paragraphs later, it loved Nirvana when it was 13 and listened to them every day. When you point out the inconsistency, it will become defensive and start gaslighting. And you've just diagnosed that this is not a human being, because (a) people don't generally make glaring contradictions like this in regard to core childhood experiences and (b) even when they mis-speak or mis-remember, they're generally quite able to clarify themselves without getting all defensive and weird. But the LLM cannot do that because it has no inner experience at all, and so all it can do is resort to gaslighting.

    I assert that gaslighting is one of the biggest behavioral giveaways that you're dealing with an artificial mind. (PS: As an exercise, think about the implications this may have for the mass-scale gaslighting going on in Clown World...)

  14. #192

  16. #193

  17. #194
    Still safe...


  18. #195
    A ChatGPT for Music Is Here. Inside Suno, the Startup Changing Everything -- Suno wants everyone to be able to produce their own pro-level songs — but what does that mean for artists?

    Afghanistan is sometimes called the graveyard of empires, and for good reason. My prediction: music will turn out to be the Afghanistan of AI. When it comes to music, the tastes of the public are outrageously arbitrary and bespoke. The sheer number of genres in modern music is truly staggering. That's not to say an AI song-generator couldn't imitate all of those genres; of course it can, given enough training data, as current-generation AI has thoroughly demonstrated. But music, even more than images and video, is a truly intangible medium. You can't see music. You can't even really visualize it, not in its essence. You can't touch it. You can't weigh it. You can't really apply mathematical reasoning to it, except in some theoretical sense that is not directly connected to what people actually care about in music (the aforementioned intangibles).

    My challenge to anyone who thinks that "AI will solve music" -- that is, that AI is going to write music that people generally prefer to human-created music -- is this: explain to me the mathematical theory of melody. Not harmony. Not chord progressions. Melody. Explain to me why and when a melody should move up or down, or even stay the same. Explain to me why it should go fast or slow, why it should be in 2-time, 3-time, 4-time, 6-time, and so on. You don't have to even spell out the details, just point me to the body of theory that explains this. There is nothing in the body of music theory itself that explains why a melody is the way it is. There are principles, no doubt. There are known reasons for why certain things work especially well. But there is no general theory, not even a framework of a theory.

    And that has important implications for AI. In the case of text, images and video, there are very general mathematical theories that explain them, that is, explain their encoded structure. Text might seem random at first, until you realize that words occur in patterns, and those patterns have structures. And while music also has patterns, and those patterns also have structure, we come back to the intangibility of these structures. Part of what gives a melody a certain "feel" is how common or widespread the elements from which it is constructed are. If you use very simple intervals like fourths, fifths and steps, your melody can have a simplistic or youthful vibe, like Twinkle, Twinkle Little Star. Or a melody may make heavy use of chromatic and chaotic elements, or have almost no discernible structure at all, yet still seem compelling. Music is not objective in the way that images and video are, nor does it communicate definite ideas the way language does, so there is no definite "target" to shoot at when training AI neural nets, and any choice made in the training data-set is really just an arbitrary stricture that will dye or fingerprint the resulting neural net in a way that listeners are going to notice.

    Trying to "please everybody" won't work, either, because music is inherently biased. That is, part of what makes any particular genre/style so compelling is what it doesn't do. The pentatonic scale is a great example of this. Pentatonic scales do not use two of the notes in the diatonic scale. The characteristic "sound" of the pentatonic scale comes from not using those two notes. As soon as you add those "missing" notes back into the scale, the sound goes flat. "Music is sound painted on a canvas of silence" -- in music, what you don't do is just as important as what you do, sometimes even more important.

    The existence of people without any taste in music, who will lap up generic Muzak-style output from AIs is irrelevant. These are the musical equivalent of people who click on SEO links and keep chasing links in the SEO bubble, never suspecting that everything they are reading was written by a chatbot, not the non-existent "Susan B. Collier" who is "An Information Technologist and Consultant in Silicon Valley" pictured at the bottom of the articles in a DeepFake'd photo.

    I could write much more on this point, but the key takeaway is this: I predict that music will turn out to be the Afghanistan of AIs, the place where AIs go to die. Every time somebody releases a new AI that they claim is going to "make musicians obsolete", that will be a canary in the coalmine that we are reaching a new peak-AI hype. And I predict that this particular startup is just the first of many to come in the future...
    Last edited by ClaytonB; 03-26-2024 at 11:52 AM.

  19. #196

  20. #197
    Obviously, this technology can be used to do good things. But it is also one of the most dangerous technologies ever built by man. For this reason, I keep an extremely close eye on it. I'm not sure how we make sure this is only applied to good ends, but whatever we need to do to ensure that, must be done. This is one technology you can't afford to get even a little bit wrong...


  21. #198
    ChatGPT is psychotic, and it is all the more dangerous precisely because OpenAI (and their fanbois) are trying to pass it off as "safe". An LLM is just an LLM, people. It's not "rEaL aRtIfiCiAl InTelLiGeNcE" like in the movies (a perfect human simulacrum in the persona of a machine, e.g. Her or Ex Machina). And since America, as a nation, only learns in the School of Hard Knocks, there is going to have to be some significant disaster, with injuries or deaths resulting from ChatGPT's reckless insanity, before people wake up to what's going on.

  22. #199
    How AI could go wrong -- And how it could save us
    {Sabine Hossenfelder | 05 April 2024}

    Artificial intelligence is likely to eventually exceed human intelligence, which could turn out to be very dangerous. In this video I have collected the ways things could go wrong and the terms you should know when discussing this topic. And because that got rather depressing, I have added my most optimistic forecast, too. Let's have a look.

  23. #200

    Last edited by ClaytonB; 04-10-2024 at 05:55 PM.

  25. #201

  26. #202
    Oh boy. Here it comes...

    OK, so let me predict a few steps ahead so folks will be forewarned as to what comes next. Current-generation AI has absolutely no grounding whatsoever. Once you embody an AI in a robot, you can use embodiment as a kind of numbskull-level grounding. "Is the cup on the table?" <robot looks at the table> "Yes, the cup is on the table." The sensors become the robot's "ground truth" and, in this way, the AI itself will seem to become grounded.

    In addition, embodiment will seem to give robots the ability to reason, especially physical reasoning and similar forms of reasoning (e.g. Euclidean geometrical reasoning). "What happens if the cup is pushed over the edge of the table?" <robot ponders briefly> "The cup will fall on the floor." YouTube AI reviewer Matthew Berman uses a basic physical-reasoning question: "Suppose a marble is placed on a table and a cup is placed upside down over the marble. Next, the cup is lifted from the table and placed into the microwave. Where is the marble?" The success-rate on this question is, I believe, 0% to-date, and that includes GPT-4, Claude-3, Llama-3, you name it. SOTA AI simply cannot perform basic physical reasoning that even a child can perform. While embodiment doesn't automatically solve this problem, you basically have to solve it on the path to enabling robots to move around in physical space. Thus, this problem is going to be solved (if nothing else, by brute-force in simulation), and this is going to add a second layer of "OoOoHhH AaAaHhH" to the rapidly approaching fully-embodied robotic AI humanoid (Figure-01, TeslaBot, whatever).

    These are the two biggest missing components in current-generation AI (grounding, and robust reasoning ability). If you thought the ChatGPT hype is bad -- the fanbois are already claiming that current-generation AI is literally the mind-of-God -- just wait till we have embodied AI robots. I cannot imagine a scenario in which this does not trigger the Apocalypse. I cannot imagine how the floodgates of idolatry will stand up against such cosmic-scale pressure. We will surely have people bowing down and physically worshiping these limping refrigerators in very short order.

    While I do believe that robotics and AI will be part of many real improvements in the world (e.g. assisting the disabled), it should be obvious by now that the real interest of the Clown-"elites" has nothing to do with helping anything except helping you and me into an early grave, and especially making sure that we do not produce any more spawn that would likely soil their future Poo-topia. Do not sleepwalk into what is coming. One way or another, whether sooner or later, this ride is about to get extremely rough...

  27. #203

  28. #204
    If you want to get a download of the AI Agenda, watch this:

    If a group of KGB, Gestapo, Stasi and CIA agents had a month-long, coke-fueled gay orgy while sharing notes on every form of tyranny and torture ever devised, implemented or conceived by their respective agencies; reading 1984 to each other with Shakespearean gusto while brain-storming every possible inescapable mechanism of tyranny they could possibly imagine ... what they could so devise would not be a drop in the ocean compared to the toolbox of tyranny which has been placed in the lap of the modern omnipotent State by SOTA AI in the last 3 years since ChatGPT. The idea that this technology is just going to be used to help poor people access quality medical advice is bonkers. The idea that this is just somehow magically not going to be used in an attempt to implement a global 1984 tyranny is equally bonkers.

    Wake up!
    Last edited by ClaytonB; 04-22-2024 at 09:19 PM.

  29. #205

  30. #206
    Never attempt to teach a pig to sing; it wastes your time and annoys the pig.

    Robert Heinlein

    Give a man an inch and right away he thinks he's a ruler

    Groucho Marx

    I love mankind…it’s people I can’t stand.

    Linus, from the Peanuts comic

    You cannot have liberty without morality and morality without faith

    Alexis de Tocqueville

    Those who fail to learn from the past are condemned to repeat it.
    Those who learn from the past are condemned to watch everybody else repeat it

    A Zero Hedge comment

  31. #207
    Quote Originally Posted by Swordsmyth View Post
    Great opening discussion.

    I think the "synthesis" between the two views is found by examining the role played by randomness during training.

    During training, the network's weights are updated by an algorithm called "stochastic gradient descent", or SGD; the gradients it follows are computed by backpropagation (BP). The name SGD might seem quite intimidating, but the concept is not. Basically, imagine the error-rate of the neural-net as a 2D landscape. Initially, when you begin training, you are at a very high peak, because the error-rate is very high. You want to find the lowest point in the landscape (or a point that is nearly the same altitude). How do you do that? Well, you find the direction of steepest descent, and you follow it. This is called gradient descent (GD), and it is how classical training is done.

    But there's a problem. Because GD is deterministic, if you had started at a very slightly different point on the mountain peak, you could have ended up at a completely different "low point" which may actually be nowhere near the lowest altitude (lowest error-rate). This is called a "local minimum" -- think of a bowl-shaped valley at the top of an old mountain; the valley is the lowest point near itself, but the whole valley is still at a high altitude. If you had just happened to descend into that valley, instead of down the outer sides of the mountain, you would have gotten stuck in a local minimum.

    SGD adds stochasticity to gradient descent, which acts a little bit like having a big pogo-stick which you occasionally pull out and use to take a random leap in some direction. By doing this, you have greatly improved chances of finding a global minimum (or a minimum close to the global minimum), and significantly reduced chances of ending up trapped in a local minimum. In the case of the bowl-shaped mountain-top valley, if you got your pogo-stick out while you were partway down the descent into the valley, you could end up back on the outside slope of the mountain and escape the local minimum.
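    A toy sketch of the pogo-stick idea (illustrative only -- the landscape, step sizes, and noise schedule here are all made up for this example; in real training the stochasticity comes mainly from sampling minibatches):

```python
import random

def f(x):                        # toy "error landscape" with two valleys
    return x**4 - 4*x**2 + x     # global minimum near x = -1.47, local near x = +1.35

def grad(x):                     # its derivative
    return 4*x**3 - 8*x + 1

def gd(x, lr=0.01, steps=3000):
    """Plain (deterministic) gradient descent."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def noisy_gd(x, rng, lr=0.01, steps=3000, kick=0.5):
    """Gradient descent plus a decaying random 'pogo-stick' kick."""
    for i in range(steps):
        x -= lr * grad(x)
        x += kick * (1 - i / steps) * rng.gauss(0, 1)  # kick shrinks to zero
        x = max(-3.0, min(3.0, x))                     # clamp to keep the toy stable
    return x

start = 1.5                      # starts on the slope above the *local* minimum
stuck = gd(start)                # deterministic GD slides into the nearer valley
best = min((noisy_gd(start, random.Random(s)) for s in range(30)), key=f)
print(f(stuck), f(best))         # the noisy runs find the deeper valley
```

Same starting point, same gradient: the only difference is the random kicks, and that difference is what decides which valley you end up in.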

    OK, so far so good; this is just an algorithm like any other algorithm. However, let's examine very closely the source of randomness in this procedure. There is a concept in computational complexity theory called derandomization. Under standard, widely-believed complexity assumptions (this is the P = BPP question), any algorithm that uses randomness can be derandomized while preserving its essential provable properties -- running time (up to polynomial overhead), output guarantees, and so on. What this means is that, for a random algorithm A which you give me (a "random algorithm" is any algorithm that uses randomness), I can give you back a non-random algorithm A' that satisfies the provable properties of A. In particular, there is a non-randomized version of SGD that performs just as well as SGD does -- which should not be surprising, since in practice SGD is almost always run from a pseudo-random generator with a fixed seed anyway.

    Now, let's move to the domain of password cracking. In cracking, we would love to fool a user into believing he has used a truly random seed when, in fact, it is a pseudo-random (attacker-controlled) seed. In this way, we have fooled the user into choosing a password/key that appears random when, in fact, it is non-random, and we can quickly reconstruct the key without having to do a brute-force search. You can think of this kind of password-generation attack as an instance of derandomization. Password-generation is a random algorithm that just consists of saying, "Give me X random bits". Derandomizing that algorithm allows me to satisfy whatever constraints you have on your password-generator ("use lower-case, upper-case, numerals, special characters", etc.) but in a way where the generated result is actually not random.
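    A minimal sketch of that attack (the generator, alphabet, and seed below are invented for this illustration, though weakly-seeded generators have been exploited this way in practice): if the attacker knows or controls the seed, the "random" password is fully reproducible, so no brute-force search over the password space is ever needed.

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=16, seed=None):
    """Meets the usual 'complexity' rules, but its entropy is exactly
    the entropy of the seed -- zero, if the attacker supplied it."""
    rng = random.Random(seed)                  # seed=None falls back to OS entropy
    return "".join(rng.choice(ALPHABET) for _ in range(length))

victim_pw   = generate_password(seed=1337)     # victim believes this is random
attacker_pw = generate_password(seed=1337)     # attacker just replays the seed
print(victim_pw == attacker_pw)                # True: the search space collapsed
```

The output satisfies every lower-case/upper-case/numeral/special-character constraint, yet the attacker never has to search at all.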

    Now, let's say I'm a villain with access to some kind of cosmic-scale computing resource. I want your AI-training to complete in such a way that you believe you have trained it using true randomness in the SGD algorithm. However, I also want to satisfy an additional constraint: whenever the neural net (NN) hears the word "frobnicate", it goes into some kind of kill-mode. By virtue of derandomization, we know that this is possible and, in addition, you will never realize that it has been done, even as you watched the NN being trained, step by step. The way I will do this is by interposing my own random-number generator into your SGD algorithm so that, when your SGD algorithm asks "give me a random number!", I generate a number that guides your SGD algorithm down a particular path that I have chosen. That is, when you get on your pogo-stick, the place you land is actually not random. That I can do this subtle manipulation of your training procedure is possible due to derandomization! You give me SGD (which is a random algorithm), and I give you back some SGD' which seems to you indistinguishable from SGD but which, in actual fact, I control, at least to some degree.
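    Here is a minimal, self-contained sketch of that interposition attack (everything here -- the score function, the step sizes, the `RiggedRNG` class -- is invented for illustration): a victim runs a random hill-climbing search, believing the proposal noise is honest; the attacker's drop-in "RNG" exposes the same interface but only ever proposes steps toward the optimum the attacker prefers, so the search reliably lands there.

```python
import random

def random_search(score, x, rng, steps=400):
    """Maximize `score` by accepting random perturbations that improve it.
    The victim believes `rng.uniform` is honest randomness."""
    for _ in range(steps):
        candidate = x + rng.uniform(-1, 1)
        if score(candidate) > score(x):
            x = candidate
    return x

class RiggedRNG:
    """Same interface as random.Random, but every 'sample' drifts toward
    the attacker's chosen side of the landscape."""
    def __init__(self, direction, seed=0):
        self.direction = direction           # +1 or -1: which optimum to steer to
        self._rng = random.Random(seed)      # still looks noisy to the victim
    def uniform(self, a, b):
        return self.direction * self._rng.uniform(0.0, 0.1)

# Two equally good optima at x = +1 and x = -1; an honest search could end
# up at either one. The rigged "randomness" quietly decides for you.
score = lambda x: -(x * x - 1) ** 2
print(random_search(score, 0.0, RiggedRNG(-1)))  # lands near -1
print(random_search(score, 0.0, RiggedRNG(+1)))  # lands near +1
```

From the victim's side, every step of the run looks like an ordinary randomized search that happened to converge; the steering is invisible in the transcript.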

    Back to the original discussion in the video, this shows that "rolling the dice" during SGD in training of these NNs is in the same category as divination, when these systems are applied to real-world action. Please understand that I'm not making a blanket claim that ML is witchcraft, or anything like that. I'm making a very narrow claim: when these systems are given agency and implement real-world actions, those actions are susceptible to influence by witchcraft because the random-number generation process itself cannot be distinguished from divination.

    A simpler way to see this is to imagine a man who has a tablet that continually gives him 3 options of what he can do next. This tablet is in front of him and reads like, "A) Go straight down the sidewalk. B) Turn right into the coffee shop. C) Cross the street on the crosswalk." As he chooses what to do, he taps the screen to update and gets the next set of options. He is only permitted to take one of the actions on the tablet, and nothing else. After a while, he gets tired of consciously choosing what to do next, so he begins rolling a d3 instead. That is divination, plain and simple. And it is also what I am asserting that AI-guided robotic agents in real space are also doing, they're just doing it at zillions of cycles per second.

    Thus, we don't have to think of robots as either "mere tools" or "self-directed agents". Rather, they are extensions of human agency which operate, insofar as they rely on apparent randomness, by divination. Final note of clarification: I'm not saying that all forms of robotics are witchcraft and sin. However, because of what I have explained here, we can confidently predict that giving robots agency at parity with humans will inevitably result in tragedy because they are completely susceptible to influence through witchcraft. And I am particularly identifying the "point of entry" to be the random-number generator.

  32. #208

    @12:20, Altman signals that OpenAI's current internal model is very significantly advanced beyond GPT-4. "GPT-4 is the dumbest model that any of you will ever have to use again, by a lot." So far, OpenAI has consistently understated its products so, if the trend continues, we can reasonably expect that GPT-5 is going to be a significant advance over GPT-4. While I'm pleased that OpenAI is making progress, I parse this mainly in terms of the question of how the open-source community will catch up. Open-source LLMs are trailing OpenAI by about 6 months (that's my preferred measuring-stick, some people give OpenAI more lead than I do.) Technology is made of "ingredients", so to speak, so the question as I see it is, what are their ingredients?

    @19:47 "I wonder... how long it will take us to figure out the new social contract..." This quote perfectly exhibits why I consider OpenAI the worst-case AI safety scenario. People keep wringing their hands, "How do we keep ChatGPT a safe AI?!?" but ChatGPT/OpenAI just is the unsafe scenario, it is the worst-case AI-safety trajectory. OpenAI is precisely what an AGI nightmare-scenario looks like in its nascent stage. Musk knew this (and has as much as said it) and I believe this is the real reason he left OpenAI. While his lawsuit has some pretty wild stuff in it, it's not all silly, there are some serious components in it.

    @26:22 " ... a human right to get access to a certain amount of compute ..." Smoking-gun. Straight-up Marxism. So, there's a new social contract coming, a core part of this social contract is supposed to be that "we all" get "access" to "a certain amount of compute", that is, access to this digital-mind-of-god which OpenAI is building.

    Note that this is almost precisely conformal to the ancient concept of temple sacrifice to receive the attention of an idol. "I need rain for my crops. I need the attention of the rain-god. I shall travel to the temple of the rain-god and offer this goat in sacrifice to him in the hopes that he will solve my rain problem." OpenAI is on a trajectory to that, but this will be the omni-idol, since its "intelligence" is being sold as able to "solve all other problems." Recall that Demis Hassabis explicitly stated that his goal in founding DeepMind was to "Solve intelligence. Then use that to solve everything else." While I don't oppose that concept, as stated, the fact is that these people are spiritually shallow (carnally-minded in Christian terminology) and they simply have not thought through the real implications of what they're trying to do. They've obviously thought through the material implications in great depth -- the effect on infrastructure, logistics, employment, production, lifestyles, and so on. But they haven't thought about the spiritual implications, on the specious theory that our material lives and spiritual lives are clinically separate. The current AGI agenda might work if life were a Virtual Reality Ikea catalog. Life is not a Virtual Reality Ikea catalog. Maybe a few nerds in Silicon Valley would even consider that Paradise. The vast majority of us do not.

    @37:50 "... the balance of power in the world, it feels like it does change a lot..." -- In other words, the Marxist New Social Contract is going to be implemented by the power of the [ChatGPT] pen, which is mightier than the status quo [national governments] sword.

    @43:48 "... society is far smarter than you now, society is an AGI as far as you can tell..." This is Marx's theory of the Collective, restated in technological language. The reason that you and I are disposable cogs in the machine, is that the Collective is the thing-itself. We are like gut-bacteria floating around the intestines of the Collective. The Collective uses us, but it has no need for any one of us, in fact, it is not even aware of our existence. Sooner or later, we will be expelled in death, but the Collective is eternal. Thus, every social theory which attempts to start with Man The Individual is delusional, is doomed to end in fatal contradiction. From the standpoint of the Collective, a truly capitalist, free-market society would be a tumor. And Marxists are the anti-cancer cells. This is the best metaphor that I can give to explain the mindset of the "ALL-IN" Marxist, whether the useful-idiots or the psychotic string-pullers. They are delusional beyond anything that can even be expressed in words. "... brick by brick ..." --> "All in all, you're just another brick in the wall"
    Last edited by ClaytonB; 05-02-2024 at 09:36 AM.

  34. #209
    It's all about back-door Marxism through UBI, folks... you heard it here first:

    One mistake in this video: he claims that Musk's own AI is closed-source and this is false. Anyone can download X's AI, called Grok-1, and run it on their own hardware, see here. Musk has completely proved his commitment to open-source in the AI space beyond all doubt.

    Also, one cringe-moment: Unironically quoting Bernie Sanders.

    Otherwise, good video.

  35. #210
