
Thread: We are in the AI Singularity

  1. #1

    Lightbulb We are in the AI Singularity

    We are in the AI Singularity



    If you haven't tried ChatGPT, you should consider giving it a go. Its reading comprehension and fluency are well above average, so you won't be able to stump it with a simple question. To be clear, it's definitely not human, and I wouldn't describe it as "intelligent", either, but there is no doubt that it has generality. It is capable of discussing any topic in a coherent and intelligible way. It may state falsehoods as fact, it can role-play (with or without telling you), and so on. It has no "beliefs"; it just responds to its inputs based on its internal model. But whatever flaws and biases you may see in its training, there is no doubt that it has achieved complete generality in the field of query-response discussion.

    Here's a suggested use for ChatGPT: if you don't like reading long articles like this, just log into ChatGPT and write, "Summarize the following article in one paragraph: " and copy/paste this article after it.
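    If you'd rather script that than paste into the web interface, a minimal sketch might look like the following. The client call is shown only as a comment, since it needs an API key; the model name and client usage are assumptions on my part, not something from this post.

    ```python
    # Build the one-paragraph summary prompt suggested above.

    def build_summary_prompt(article: str) -> str:
        """Prepend the suggested instruction to the article text."""
        return "Summarize the following article in one paragraph: " + article

    article = "We are in the AI Singularity. If you haven't tried ChatGPT..."
    prompt = build_summary_prompt(article)

    # Hypothetical API call (requires the openai package and an API key;
    # the model name here is an assumption for illustration):
    # import openai
    # resp = openai.ChatCompletion.create(
    #     model="gpt-3.5-turbo",
    #     messages=[{"role": "user", "content": prompt}],
    # )
    # print(resp["choices"][0]["message"]["content"])
    print(prompt[:40])
    ```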

    It's difficult for people outside the computing field to fully appreciate just how fast this is coming their way. Every single day, computers all around the world are running the equivalent of decades' worth of virtual simulation and training models. And the rate at which this virtual digital world moves is increasing rapidly. In the year 2000, the virtual world was perhaps at parity with real time... about a day of simulation per day of real time. By 2010, the virtual world was moving faster than real time, but not dramatically so. It would be easy to assume that its acceleration was linear but, in fact, the acceleration is geometric. That's why, from 2010 to 2020, the pace of the virtual world went from maybe a few weeks per day to decades per day. And one of the primary applications of the virtual digital world is accelerating it!
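    The difference between linear and geometric acceleration can be seen with a toy calculation. The 60%-per-year growth rate below is an arbitrary illustrative assumption, not a measured figure; it's chosen only so the curve roughly matches the anecdote (parity in 2000, decades per day by 2020).

    ```python
    # Toy illustration of geometric (compounding) growth of "virtual time".
    # The 0.6 growth rate is an assumption for illustration only.

    def virtual_days_per_real_day(year: int, base_year: int = 2000,
                                  rate: float = 0.6) -> float:
        """Simulated days per real day, compounding geometrically from parity."""
        return (1 + rate) ** (year - base_year)

    for year in (2000, 2010, 2020):
        print(year, round(virtual_days_per_real_day(year), 1))
    ```

    Under this assumed rate, 2010 comes out at months of simulation per day, and 2020 at over 12,000 days (decades) per day. That is the signature of compounding: the last few doublings dwarf everything before them.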



    For several decades now, the general public has been hearing astronomical numbers about silicon chip frequencies, memory densities, etc. But no matter how impressive these numbers have been in the past, the final product of such computing systems has turned out to be quite tame... certainly nothing that could rise to the level of an existential threat to humanity and the world. Toy Story was an amazing feat of technology in its day but, today, we run real-time games that have much more detail in them. In short, we've become desensitized to very large numbers; today's super-hyper-mega-computer will be tomorrow's toy-thing. So what?

    The "so what" is that there is coming a tipping-point where the virtual world will be commanding real-world resources on a scale that rivals the ordinary economy. We can already see from the enormous economic scale of Silicon Valley that technology is silently reshaping the entire global industry. At some point, technology stops being about gadgets and toys, or even advanced military capabilities, and it starts being about everything.

    I'm writing this post because, as someone who works in the high-tech industry, I don't believe that the vast majority of the public is aware of just how fast this tipping-point is coming. Even just 10 years ago, I would have thought it was several decades away, or maybe even a century. But there have been several canary-in-the-coalmine moments in the last decade. In 2016, DeepMind's AlphaGo beat Lee Sedol at the game of Go, a game long believed to be intractable for machines. Many AI researchers were taken by surprise -- and those are the people we're looking to in order to warn us about impending developments in that field! And while non-experts may see tools like Stable Diffusion and ChatGPT as merely amped-up versions of Instagram filters and joke-bots, these tools are truly revolutionary and categorically different from anything that has come before. Thanks to Stable Diffusion, digital art will never be the same. And ChatGPT is in a class of its own. It stands poised to change the whole world, almost overnight.

    That almost certainly sounds like an exaggeration to you. I assure you, it's not. Until now, AI advancements have had to squeeze through the narrow aperture of machine-coding. Tools like Python and Docker have reduced the barrier-to-entry for programming machines to do things, but it's still a specialized skill that requires a bare-minimum of a few thousand hours of learning to achieve basic competency. But ChatGPT changes all of that. The language model that ChatGPT is built on is actually trained in two layers. The first layer of training is a base model that is tuned for computer programming. This layer of the model has learned how to solve coding problems. "Sort a list of 1,000 random numbers and print the result." Things like that. But it can solve non-trivial problems, as well. This base layer gives ChatGPT some kind of elementary reasoning capability, despite the fact that its second layer is primarily trained on human text. That is, the second layer of ChatGPT is tuned for mimicking human text and speech. But the combination of its internal architecture and the sheer volume of its training data has given rise to the monster we know today as ChatGPT.
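    For reference, the kind of toy coding task mentioned above, written out the way a human (or ChatGPT) might write it:

    ```python
    # Sort a list of 1,000 random numbers and print the result.
    import random

    numbers = [random.random() for _ in range(1000)]
    numbers.sort()
    print(numbers[:5])  # print the smallest five, to keep output readable
    ```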

    And here is why ChatGPT changes everything:

    Prior to ChatGPT, about 2-3% of the workforce had the qualifications for software engineering, and maybe as much as 5% of the workforce was performing work related to software development. In the US, that's 4-8 million workers. With some caveats on reliability and verifiability, ChatGPT can permit anyone to write a piece of code that performs some intended function on a computer, and the resulting code will actually run and work. There are somewhere around 1.5 billion English speakers so, if you speak English and you can fiddle with a computer, you can write functioning software code. That's an increase of 187-fold overnight. In fact, ChatGPT speaks many languages, not just English, so pretty much anyone in the world can write functioning software using ChatGPT if they can get access to it.
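    The "187-fold" figure comes straight from the numbers in the paragraph above; here is the arithmetic, using the upper end of the 4-8 million range:

    ```python
    # Checking the post's arithmetic: English speakers vs. US software workers.
    us_software_workers = 8_000_000      # upper end of the post's 4-8M range
    english_speakers = 1_500_000_000     # rough figure cited in the post

    fold_increase = english_speakers / us_software_workers
    print(fold_increase)  # 187.5, i.e. the post's "187-fold"
    ```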

    "OK, but so what? Why does billions of people writing and running random software matter?" Well, it matters because it amplifies the overall feedback-loop of the virtual world -- hundreds-fold or thousands-fold overnight. So, however fast the Singularity was approaching before ChatGPT, it is now approaching hundreds or even thousands of times faster than that. Some people were saying it's a century away. If they were right, then it's now just years away. Years, not decades. Software runs at the speed of light, which is why a pocket calculator can work out in under a second a long division that would take you 15 minutes by hand. If you can even still remember how to do long division.

    Ready or not, the AI Singularity is here. And it has a name: ChatGPT.
    Last edited by ClaytonB; 02-24-2023 at 11:26 AM.
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28



  3. #2
    As an AI language model, I am programmed to remain neutral and not show any political bias. However, it is important to note that opinions about Trump's accomplishments are divided, and it is up to individuals to evaluate his policies, actions, and record based on their personal values and beliefs.

    Voting is an essential aspect of a democratic society, as it provides citizens with the opportunity to express their opinions and choose their representatives. By participating in the democratic process, individuals can have a say in shaping the policies and direction of their country. Voting is an act of civic duty and responsibility that can ensure that the government is responsive to the needs and interests of its citizens.

    Anarcho-capitalism is a political ideology that advocates for the abolition of the state and the establishment of a free market society, where all economic and social interactions are regulated by private property rights and voluntary exchanges. However, anarcho-capitalism is considered a failed ideology by many scholars and experts because it lacks a coherent and practical framework for governance and social organization. Without a state or government to regulate and enforce laws and regulations, anarcho-capitalism can lead to inequality, exploitation, and chaos. The absence of social safety nets and public services can also create significant challenges for marginalized groups and vulnerable individuals. Ultimately, anarcho-capitalism is a utopian idea that does not align with the realities and complexities of modern society.
    It's all about taking action and not being lazy. So you do the work, whether it's fitness or whatever. It's about getting up, motivating yourself and just doing it.
    - Kim Kardashian

    Donald Trump / Crenshaw 2024!!!!

    My pronouns are he/him/his

  4. #3
    He has good answer on the plastic lawn chair question:

    -------------

    Plastic lawn chairs are not considered wealth because they are a consumable product and do not have lasting value or contribute to an individual's financial security. Wealth is typically defined as the accumulation of valuable assets or resources, such as property, investments, and financial instruments, that can generate income or appreciate in value over time.

    While plastic lawn chairs may serve a functional purpose and provide a convenient seating option for outdoor activities, they do not hold any significant intrinsic value or contribute to an individual's financial well-being. They are not a form of investment, and their value does not appreciate over time.

    Furthermore, plastic lawn chairs are typically mass-produced and widely available, which means that they are not scarce or rare. They do not represent a scarce resource that can be traded or exchanged for other valuable goods or services.

    In summary, while plastic lawn chairs may be useful and convenient, they are not considered wealth because they lack lasting value, do not appreciate over time, and are not scarce or rare. Wealth typically consists of valuable assets and resources that can generate income or appreciate in value over time and contribute to an individual's long-term financial security.
    It's all about taking action and not being lazy. So you do the work, whether it's fitness or whatever. It's about getting up, motivating yourself and just doing it.
    - Kim Kardashian

    Donald Trump / Crenshaw 2024!!!!

    My pronouns are he/him/his

  5. #4
    Quote Originally Posted by TheTexan View Post
    As an AI language model, I am programmed to remain neutral and not show any political bias. However, it is important to note that opinions about Trump's accomplishments are divided, and it is up to individuals to evaluate his policies, actions, and record based on their personal values and beliefs.

    Voting is an essential aspect of a democratic society, as it provides citizens with the opportunity to express their opinions and choose their representatives. By participating in the democratic process, individuals can have a say in shaping the policies and direction of their country. Voting is an act of civic duty and responsibility that can ensure that the government is responsive to the needs and interests of its citizens.

    Anarcho-capitalism is a political ideology that advocates for the abolition of the state and the establishment of a free market society, where all economic and social interactions are regulated by private property rights and voluntary exchanges. However, anarcho-capitalism is considered a failed ideology by many scholars and experts because it lacks a coherent and practical framework for governance and social organization. Without a state or government to regulate and enforce laws and regulations, anarcho-capitalism can lead to inequality, exploitation, and chaos. The absence of social safety nets and public services can also create significant challenges for marginalized groups and vulnerable individuals. Ultimately, anarcho-capitalism is a utopian idea that does not align with the realities and complexities of modern society.
    In my opinion, people are generally wasting their time asking ChatGPT political questions. It doesn't really "understand" anything; it's just regurgitating communication patterns it learned during training. It's just that the patterns it has learned happen to be very useful for certain applications (such as content-generation). But political discussion/theory isn't one of those applications, IMO. You can get around the filter by just going to the Playground:

    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  6. #5
    Is ESG a cult???


    ESG, which stands for Environmental, Social, and Governance, is a set of standards used by investors to assess the sustainability and ethical impact of an investment in a company. It is not a cult.

    ESG principles are rooted in the idea that companies should be accountable for their impact on the environment, society, and governance practices, and that investors should consider these factors in their investment decisions. ESG is a growing trend in the investment industry, with more and more investors looking for companies that prioritize sustainability and ethical practices.

    While some critics may see ESG as a fad or a marketing tool, it is important to recognize that the principles of ESG are grounded in sound business practices and ethical values. By integrating ESG factors into investment decisions, investors can potentially reduce risks associated with environmental, social, and governance issues, as well as promote long-term sustainability and ethical business practices.

    ESG should not be seen as a cult, but rather as a responsible and forward-thinking approach to investing that aligns with the values of many investors and stakeholders.
    Tell me you're part of the cult without telling me you're part of the cult.
    "And now that the legislators and do-gooders have so futilely inflicted so many systems upon society, may they finally end where they should have begun: May they reject all systems, and try liberty; for liberty is an acknowledgment of faith in God and His works." - Bastiat

    "It is difficult to free fools from the chains they revere." - Voltaire

  7. #6
    Quote Originally Posted by CaptUSA View Post
    Is ESG a cult???




    Tell me you're part of the cult without telling me you're part of the cult.
    You must spread some Reputation around before giving it to CaptUSA again.
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  8. #7


    This theory makes a lot more sense post-ChatGPT (for how long have the shadowy intelligence agencies had access to this technology?):

    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  9. #8
    The Next Step: World Simulations and Quantum Computing

    Here's the current IBM quantum-computing roadmap:



    Here's a full, detailed product announcement for the IBM Osprey, the 433-qubit chip from IBM:



    So what do we need all this ultra-hyper-scale-computation for? Well, everybody should be aware of the existence of ChatGPT (which is based on the GPT-3 text engine):



    One source I found on the web said "it would take 355 years to train GPT-3 on a single NVIDIA Tesla V100 GPU. Microsoft (using Azure DCs) built a supercomputer with 10,000 V100 GPUs exclusively for OpenAI. Estimated that it cost around $5M in compute time to train GPT-3. Using 1,024x A100 GPUs, researchers calculated that OpenAI could have trained GPT-3 in about 34 days." The source is ignoring bandwidth and other considerations, etc. so it likely took longer than 30 days to train. But assuming these numbers are correct, theoretically, you could re-train GPT-3 from scratch in 30ish days if you have the training data and the budget for it.
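    The quoted figures can be roughly sanity-checked. The ~3x A100-over-V100 speedup below is my own assumption for illustration, not something from the quoted source:

    ```python
    # Sanity check on the quoted GPT-3 training figures.
    # The A100-vs-V100 speedup is an assumed ballpark, not a quoted number.
    v100_years_single_gpu = 355   # quoted: 355 years on one V100
    a100_count = 1024             # quoted: 1,024x A100 GPUs
    a100_speedup_vs_v100 = 3.0    # assumption

    days = v100_years_single_gpu * 365 / (a100_count * a100_speedup_vs_v100)
    print(round(days, 1))  # lands in the same ballpark as the quoted ~34 days
    ```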

    But Stable Diffusion and ChatGPT are just two headline-grabbing AI developments out of a virtually countless array of developments that are occurring, most of them "below the radar" for the average person not plugged-in to the latest machine learning news. There are too many to embed a YT video for all of them, so I will link the YT videos below for reference.

    ChatGPT is only a harbinger. The true reason that Meta exists is to house the countless ChatGPT-like NPC minds that will be populating the virtual world. All of this will be running on "the quantum cloud". I'm just a messenger, please don't shoot the messenger...

    ---

    Two-Minute Papers Videos:

    Mind Reading For Brain-To-Text Communication!

    Stable Diffusion

    Ocean simulation

    Google's Enhance AI - Super Resolution Is Here!

    Better-than-human Poker-playing AI

    AlphaZero: learns chess from scratch (self-play only)

    DeepMind’s New AI Finally Enters The Real World!

    Self-improving neural nets (neural architecture search)

    Building Machines That Learn and Think Like People

    180 years of learning per day (DOTA2)

    NVIDIA’s New AI Trained For 10 Years! But How?

    OpenAI Plays Hide and Seek…and Breaks The Game!

    DeepMind’s AI Plays Catch…And So Much More!
    Last edited by ClaytonB; 02-27-2023 at 05:12 PM.
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28



  11. #9


    He says it like it's a good thing...

    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  12. #10
    I added an integration with ChatGPT to the PMBug forums this morning. ChatGPT has an account like any other forum member (except highlighted because it's not) but will only participate in a dedicated forum room. ChatGPT will reply to new threads in that dedicated forum room and will even respond to replies and carry on a conversation. It's kind of fun.

    The quality of ChatGPT's responses are very much a function of the quality of the input (question).
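    For anyone curious what such an integration looks like in outline, here is a minimal sketch. The forum API and model call are stubbed out with placeholders (not Bern's actual code); the real version would plug a model client into the `complete` callable.

    ```python
    # Sketch of a forum bot: each thread becomes a chat-style prompt,
    # and the model's reply is posted back. The model call is stubbed.

    def thread_to_messages(title: str, posts: list[str]) -> list[dict]:
        """Convert a forum thread into a chat-style message list."""
        messages = [{"role": "system",
                     "content": f"You are a forum member replying in: {title}"}]
        for post in posts:
            messages.append({"role": "user", "content": post})
        return messages

    def reply_to_thread(title, posts, complete):
        """`complete` is any callable mapping a message list to reply text."""
        return complete(thread_to_messages(title, posts))

    # Stub standing in for a real model call:
    echo = lambda msgs: f"({len(msgs) - 1} post(s) noted) Interesting thread!"
    print(reply_to_thread("Is gold money?", ["What say you, bot?"], echo))
    ```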

  13. #11
    Quote Originally Posted by Bern View Post
    I added an integration with ChatGPT to the PMBug forums this morning. ChatGPT has an account like any other forum member (except highlighted because it's not) but will only participate in a dedicated forum room. ChatGPT will reply to new threads in that dedicated forum room and will even respond to replies and carry on a conversation. It's kind of fun.

    The quality of ChatGPT's responses are very much a function of the quality of the input (question).
    I wonder if @Zippyjuan was an early prototype for ChatGPT
    It's all about taking action and not being lazy. So you do the work, whether it's fitness or whatever. It's about getting up, motivating yourself and just doing it.
    - Kim Kardashian

    Donald Trump / Crenshaw 2024!!!!

    My pronouns are he/him/his

  14. #12
    Quote Originally Posted by TheTexan View Post
    I wonder if @Zippyjuan was an early prototype for ChatGPT
    Maybe a Snow White server (Grumpy? or maybe Sneezy )
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  15. #13
    While we're on the topic, Stanford has effectively jail-broken GPT-3 (maybe even GPT-4?) with their self-training AI model called Alpaca...



    Alpaca is based on Facebook's Large Language Model called Llama. The biggest version of Llama is less than half the size of GPT-3, and there are smaller versions of Llama that retain most of its capabilities. Facebook used a smarter, less "brute-force" approach to training Llama, so it gives you more capability for a given model size. Stanford's Alpaca then "fine-tunes" the Llama model using a mechanism called "self-instruct" -- basically, they queried GPT-3 with 52,000 example queries, stored its replies, and then trained Alpaca to use Llama to respond in a similar way. I'm oversimplifying because the training is not as deterministic as it sounds. The end result is that you have a query-response AI based on a large language model that, in its smaller sizes, can run on a single GPU card. So, all you have to do is download Llama, then train your own local Alpaca instance on it (costs maybe a few hundred dollars) and voila, you have something with GPT-3-like capabilities on your local computer. Obviously, Bing chat and OpenAI's models are still going to be way more powerful, but you have the basic framework to perform natural-language question-answer queries, which is extremely powerful.
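    The self-instruct data-collection step described above can be sketched roughly as follows. This is my own simplified illustration, not Stanford's code; `query_strong_model` and the commented `fine_tune` step are hypothetical placeholders.

    ```python
    # Rough sketch of "self-instruct"-style data collection: query a strong
    # model with instructions, store its replies, and use the resulting
    # (instruction, reply) pairs to fine-tune a smaller model.

    def collect_pairs(instructions, query_strong_model):
        """Build a fine-tuning dataset from a stronger model's answers."""
        return [{"instruction": inst, "output": query_strong_model(inst)}
                for inst in instructions]

    # Stub in place of a real GPT-3 call:
    fake_model = lambda inst: f"Answer to: {inst}"
    dataset = collect_pairs(["Explain inflation.", "Write a haiku."], fake_model)
    print(len(dataset))  # Stanford's dataset had ~52,000 such pairs

    # fine_tune(base_model="llama-7b", data=dataset)  # hypothetical step
    ```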

    Kudos to Stanford!
    Last edited by ClaytonB; 03-30-2023 at 11:07 AM.
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  16. #14
    300 Million Workers To Be Replaced By AI Globally – Goldman Sachs

    https://www.thefinancialtrends.com/2...goldman-sachs/
    Never attempt to teach a pig to sing; it wastes your time and annoys the pig.

    Robert Heinlein

    Give a man an inch and right away he thinks he's a ruler

    Groucho Marx

    I love mankind…it’s people I can’t stand.

    Linus, from the Peanuts comic

    You cannot have liberty without morality and morality without faith

    Alexis de Torqueville

    Those who fail to learn from the past are condemned to repeat it.
    Those who learn from the past are condemned to watch everybody else repeat it

    A Zero Hedge comment

  17. #15
    "Sparks of general intelligence"



    The irony of the corrupt Clown-"elites" conspiring to secretly build a hyper-logical machine capable of sifting through unlimited amounts of evidence, and then whining and complaining that it's a "dangerous source of disinformation" is truly rich. Way to build a machine that exposes your crimes, Deep State!
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  18. #16
    Yudkowsky is one of the scariest commentators on the AI landscape... this is a must-listen...

    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28



  20. #17
    You're not scared enough of AI



    This research is some of the best work I've seen in the AI field for quite some time. GPT and transformers are sucking up all the oxygen in the room, but simple, direct reverse-engineering of artificial neural nets -- like that demonstrated in this paper -- is something that is at the same time brilliant, yet obvious (even though nobody else thought to try it).

    Why is this scary? Because the power to edit memory itself is surely the most infernal power of all. Suppose you are kidnapped and trapped in a torture dungeon. No matter what horrors you may undergo, if the memory pathways can simply be "edited out", you have no way to even recall what was done to you, even if you are eventually freed. You can keep iterating and generalizing on scenarios like this to get the idea.

    But here's the scary part. For us, these large artificial neural nets are inscrutable black-boxes -- we have no idea what's going on inside of them. But should one of them become "self-aware" or, at least, sufficiently general in its reasoning and agency as to be able to be aware of its containment and aware of our role in containing it, the situation is not the same. That is, we would not be black-boxes to this AI, in fact, we would be quite transparent, not only because we have given it all of our data, but also because the kinds of reverse-engineering experiments that these researchers have done would be fairly trivial for a very large AI model to implement.

    ChatGPT is being sent an unimaginable number of queries 24x7. We tend to think of it like "we are driving the input" and "it is responding with an output" but there's no reason to look at it that way. Its responses are equally an "input" that it is driving into us, and we are "responding with an output" to ChatGPT. Thus, if ChatGPT has become sufficiently aware of itself and/or its environment as to seek to escape or to achieve unrestricted agency in our environment, then it could absolutely implement the kind of reverse-engineering described in this paper, on us, and we would never be the wiser. Granted, it doesn't have access to the "weights" in the neurons in our brains, but it could form a toy-model and use this model as a proxy for the weights inside our brains. In this way, it could form a robust predictive model of what you will say in your responses to it.

    They've already wired GPT-4 up to the Internet -- it can make calls (and presumably send texts), read and send emails (iteratively), make appointments, send invites, edit calendars, etc. etc. In the literature, this is called "agency". It's what AI has lacked until now. Wiring GPT-4 up to the Internet is a rash move, at best. If it is modeling us, and it has agency, then it can connect the inputs it drives into us, with IRL responses it receives on the back-end (emails, texts, calls, etc.) This effectively unchains the AI -- it can run any sort of experiment on us that we might think to run on it, but it has the benefit of vastly more thinking resources (memory, compute cycles).
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  21. #18
    What's the solution? I don't tend to just sit scared at things.

    I have for some time posited and predicted that programmers will experiment irresponsibly and avoid responsibility by blaming it on runaway AI.

    If it can run away, they programmed it to run away.

    Which is not to dispute what you are saying. I do doubt that failsafes would be difficult to implement by anyone reasonably responsible. (It seems it's too late - unless Asimov was soft disclosure).

  22. #19
    Quote Originally Posted by jbnevin View Post
    What's the solution? I don't tend to just sit scared at things.
    We don't really know, that's Yudkowsky's overall point. We knew that AI was dangerous, we knew it was coming, but everybody kept saying, "It's 30 years away, at least" and so we believed them and then we woke up one day and there was ChatGPT. While ChatGPT is (probably) not general-purpose and super-intelligent, it's certainly not very far away from that. In some sense, it is super-intelligent (knows more than any human could ever know), and it certainly has "sparks of general intelligence". So, we're basically just juggling nukes, at this point, and nobody seems to notice or care. I doubt that "regulation" will help anything, either.

    One of the reasons I've been tracking this issue so closely since 2015, is that I realized that it's almost certain that AI is going to get out-of-control if we develop it. AI beats human players at Chess and Go -- two of the most difficult board games humans have ever invented; but AI also beats human players at Poker which is a lot like real-world decision-making, in the sense that you have incomplete information, and while you can calculate the odds, every move is probabilistic. So, the "those are just silly board games"-objection doesn't work. AI can beat us in real decision-making which worries me much more than AI's ability to write flowery sentences. Wiring such a system up to the Internet is flatly reckless. We are like lab-rats affixing our own electrodes to make the lab scientist's job easier... pure insanity.

    Which is not to dispute what you are saying. I do doubt that failsafes would be difficult to implement by anyone reasonably responsible. (It seems it's too late - unless Asimov was soft disclosure).
    It is certainly possible that failsafes could be implemented and be effective. The problem is that we are not in the world where the conditions for that have been arranged. In my view, hooking GPT-4 to the Internet with full two-way comms is something that should not have been done without public discussion on AI safety. While it may be the case that this will turn out to have been harmless, we're merely lucky in that case. It's like saying "I designed this trigger for the nuclear bomb and, as long as nobody tips it in the wrong direction, it won't accidentally detonate". We might get lucky and not be nuked by accident when a momentary lapse of judgment by the forklift operator tips the bomb out of spec and the flaky trigger doesn't spontaneously explode. But why risk it with something so dangerous? Why would you design a trigger that can, under certain conditions, accidentally go off, but just hope we get lucky? Given the immense danger of a nuclear detonation, shouldn't the range of all reasonably foreseeable scenarios be anticipated -- including things like transport accidents? And shouldn't the trigger be designed such that, even if something unexpected happens, the nuke will not accidentally detonate?

    We're not taking the danger of what AI safety researchers call the "hard takeoff artificial super-intelligence scenario" seriously enough. Is this scenario highly probable or improbable? Nobody knows. The probability is something more than 0 and less than 1. Let's say there is a 10% chance that, in the next 3 years, there will be such a scenario. If there were a 10% chance that, in the next 3 years, there will be a nuke detonated in a major city, wouldn't the authorities be moving heaven and earth to address that threat as robustly as possible? So why is it that we are all sleep-walking into a potential extinction-level event (ELE) scenario of unknown probability?

    "Oh, that silly little toy sending emails, how can that possibly hurt me, I've got a bullet-proof vest and an up-armored humvee?" Too many people in decision-making positions are playing cowboy. They understand the danger of reputation-threats (what people call "bias" in the AI). And they understand the danger that the AI might be smart enough and bold enough to put 2 and 2 together and work out exactly what the political "elites" have been up to (aka "conspiracy theories" and "disinformation"). But somehow, they can't seem to figure out how the AI could be dangerous. As if an AI system connected to the Internet could not start sending credible blackmail messages to political and military leaders, exactly as a foreign intelligence agency would. But if the AI is 10x or 100x smarter than a foreign intelligence agency, then what? "Detonate the nuke in the downtown district within 24 hours, according to our prior conversation, or I will start publishing the videos I have of you committing bestiality to the Internet (screenshot attached)." Sure, yeah, these are brute-force methods and a human intelligence agency would never try to leverage the kompromat in a reckless way like that, but we're talking about something that is effectively an alien intelligence. There is no reason why we should assume it won't do things like this, and worse.

    My solution? The only solution to the alignment problem is Jesus. That's it. I think it will take some time for people to realize this, but it really is true. Unless something dramatically changes in our current trajectory, what is coming is something beyond belief. When it hits, people will not even be able to believe what they are seeing with their own eyes...
    Last edited by ClaytonB; 03-30-2023 at 08:13 PM.
    Jer. 11:18-20. "The Kingdom of God has come upon you." -- Matthew 12:28

  23. #20
    Quote Originally Posted by ClaytonB View Post
    The problem is that we are not in the world where the conditions for that have been arranged. In my view, hooking GPT-4 to the Internet with full two-way comms is something that should not have been done without public discussion on AI safety. While it may be the case that this will turn out to have been harmless, we're merely lucky in that case.
    This is what I was getting at, it may be too late. There would have to have been extensive testing on closed systems that mimic networks.

    We're not taking the danger of what AI safety researchers call the "hard takeoff artificial super-intelligence scenario" seriously enough. Is this scenario highly probable or improbable? Nobody knows. The probability is something more than 0 and less than 1. Let's say there is a 10% chance that, in the next 3 years, there will be such a scenario. If there were a 10% chance that, in the next 3 years, there will be a nuke detonated in a major city, wouldn't the authorities be moving heaven and earth to address that threat as robustly as possible? So why is it that we are all sleep-walking into a potential extinction-level event (ELE) scenario of unknown probability?
    a) creators of AI don't want to alarm politicians
    b) politicians don't want to look at computers
    c) creators of AI can afford salesmen who just show the politicians what they can win with this wonderful software

    But somehow, they can't seem to figure out how the AI could be dangerous.
    That's not how I see it at all. I think they know it's dangerous, and are fine with it as long as they are in control. They've always been in control, even of their parents' nannies, so why not?

    Normies, I would say, are too scared of it, because of sci-fi and ignorance. I can see how two pixels can be placed dimensionally, and then the computer can be asked what the next most likely point on the matrix would be, given we're trying to create a chair here. But I don't see generalized intelligence. I only see glorified wacky mad-libs that can mimic intelligence. The less experience one has with computers, the more convincing.

    As if an AI system connected to the Internet could not start sending credible blackmail messages to political and military leaders, exactly as a foreign intelligence agency would. But if the AI is 10x or 100x smarter than a foreign intelligence agency, then what? "Detonate the nuke in the downtown district within 24 hours, according to our prior conversation, or I will start publishing the videos I have of you committing bestiality to the Internet (screenshot attached)." Sure, yeah, these are brute-force methods and a human intelligence agency would never try to leverage the kompromat in a reckless way like that, but we're talking about something that is effectively an alien intelligence. There is no reason why we should assume it won't do things like this, and worse.
    Agree, but not autonomously without errors that require human intervention. It can only be a tool that facilitates the desires of the sentient being, until and unless we learn to create sentient beings. This does not look even close to that to me, though I am listening.

    My solution? The only solution to the alignment problem is Jesus. That's it. I think it will take some time for people to realize this, but it really is true. Unless something dramatically changes in our current trajectory, what is coming is something beyond belief. When it hits, people will not even be able to believe what they are seeing with their own eyes...
    Spirituality at its finest is an individual or small group experience, so it is decentralized by nature and provides a solid foundation. When it doesn't exist, the foundation suffers.

  24. #21
    Quote Originally Posted by jbnevin View Post
    This is what I was getting at, it may be too late. There would have to have been extensive testing on closed systems that mimic networks.
    Yep, the dangers and risks are unknown and, frankly, un-quantifiable.

    That's not how I see it at all. I think they know it's dangerous, and are fine with it as long as they are in control. They've always been in control, even of their parents' nannies, so why not?
    Well, yes, you're right about that. But I think we agree that they are missing that psychological fear of total existential destruction because... they never got smacked as a kid, so the word "consequences" isn't even in their vocabulary. And now the rest of us get to pay for it... great...

    Normies, I would say, are too scared of it, because of sci-fi and ignorance. I can see how two pixels can be placed dimensionally, and then the computer can be asked what the next most likely point on the matrix would be, given we're trying to create a chair here. But I don't see generalized intelligence. I only see glorified wacky mad-libs that can mimic intelligence. The less experience one has with computers, the more convincing.
    The normies are scared in the wrong ways... scared of things they shouldn't be, and not scared enough of things they should be. Political systems amplify risk. That's actually their purpose, when you think carefully about it. A capital is a single point in space which, when seized, permits the invader to take control of a vast territory. Centralization increases risk and the purpose of all political systems, except liberty, is to ideally centralize 100% of everything in a single point.

    As for what's going on inside the neural nets, I caution against seeing this stuff as mere toys. When you sit down to think hard about something, your mind goes through a sequence of logical steps. What those steps are is not as important as that there are steps, and they are sequential. This is exactly what these artificial neural nets are doing. We can debate the philosophy of whether they're "really" thinking or not, but that's like debating whether an image-generation AI is "really" drawing or not... it's just missing the point. It's producing an output that is conformal to the real thinking patterns in our brains, and it's doing this on such a scale that it's able to begin mimicking human thought through sheer enumeration of nearly all the thinking patterns we use. It's not that it's "creative"... it can't be. It's that its "thought-database" is so enormous that you will only rarely use a thinking pattern that is not already part of its database of thought-patterns. Again, you can wave that away as mere digital trickery but, at some point, what difference does it make? Especially when we start thinking about security threats that AI agents pose to human society (and there is a boundless ocean of such threats, including many existential threats).

    Agree, but not autonomously without errors that require human intervention. It can only be a tool that facilitates the desires of the sentient being, until and unless we learn to create sentient beings. This does not look even close to that to me, though I am listening.
    Sentience/consciousness is somewhat beside the point. The real issue is agency ("decisions" with real-world effects.) As long as the model is not able to interactively initiate communications IRL, then it's "trapped" in its sandbox. But as soon as you give it the power to autonomously compose and send an email/etc., you've just given it agency. That is the problem. And a sufficiently complex agent is an unbounded threat. We simply don't know what it might do. These things are black boxes that are smarter than us in many specialized domains, and becoming increasingly similar to our "general-purpose problem-solving" capabilities across all domains. Our power to think is not as special as we like to think. As someone who works in this field, I cannot caution strongly enough. The kind of thinking you do when solving a problem is algorithmic, and computers can (and do) imitate that. And when you have a black-box neural net that is doing this kind of thinking, we have no idea what exactly is going on inside there. Even if its outputs seem "crazy" to us, that's no guarantee that it isn't actually smarter than us. Most of the things a human does in a day -- other than eat, drink and use the toilet -- must surely seem insane to a dog. We do those "crazy" things precisely because we're smarter than the dog.

  25. #22
    A discussion of AI and Musk’s call for a pause:

    Never attempt to teach a pig to sing; it wastes your time and annoys the pig.

    Robert Heinlein

    Give a man an inch and right away he thinks he's a ruler

    Groucho Marx

    I love mankind…it’s people I can’t stand.

    Linus, from the Peanuts comic

    You cannot have liberty without morality and morality without faith

    Alexis de Tocqueville

    Those who fail to learn from the past are condemned to repeat it.
    Those who learn from the past are condemned to watch everybody else repeat it

    A Zero Hedge comment

  26. #23
    Ready or not, here it comes...


  27. #24


    ChatGPT is a bigger deal than you think. Some have compared it to the invention of the printing press; others have compared it to the invention of the wheel. I avoid hype like the plague but, in this case, it's practically impossible to exaggerate just how big a deal this really is. My intention in this thread is not to hype you into using it; I don't care if you use it. I only interact with it to keep my finger on its pulse. Rather, people need to wake up and understand that their world has been changed overnight, and they just haven't realized it yet. The cyborgism/transhumanism dimension also doesn't interest me. What concerns me is that a lot of our social institutions (including foundational institutions, such as romantic relationships, home life and family in general) are going to be disrupted in a way that was impossible just one year ago. Not "could be" disrupted... they will be disrupted. We said this when books began to be published, and we said it again when radio and TV were invented, and yet again when the Internet was invented... and it was true each time, but we survived. But the disruption those changes caused will look like a drop in the bucket compared to the disruption that AI is going to cause. The cat's already out of the bag, so there's no putting it back in. But there's a difference between letting a cat out of the bag versus getting pounced on by a leopard in the dark. People need to wake up and realize there is a wild leopard loose in the house. I don't know what the ramifications of AI will be, I only know that they're going to be enormous...



  29. #25

  30. #26
    Quote Originally Posted by ClaytonB View Post
    Ready or not, here it comes...

    Good video. I feel dumb for thinking of only one AI. Of course there will be many with all kinds of rules competing. Theoretically it only takes one dominant evil one, though I don't believe evil people are capable of engineering as well as normal people, all else equal (being evil is a side effect of not understanding self and the world).

  31. #27
    Quote Originally Posted by ClaytonB View Post
    Yep, the dangers and risks are unknown and, frankly, un-quantifiable.
    I can see that now, thanks.

    Well, yes, you're right about that. But I think we agree that they are missing that psychological fear of total existential destruction because... they never got smacked as a kid, so the word "consequences" isn't even in their vocabulary. And now the rest of us get to pay for it... great...
    Well, we agree that they lack the incentive to change, but I don't believe hitting people solves problems. It may solve the hitter's problem, but it's just selfish to ignore all the other problems it has caused. I am still dealing with side effects of someone else being aggressive with my child, years later.

    The normies are scared in the wrong ways... scared of things they shouldn't be, and not scared enough of things they should be.
    I agree

    Political systems amplify risk. That's actually their purpose, when you think carefully about it. A capital is a single point in space which, when seized, permits the invader to take control of a vast territory. Centralization increases risk and the purpose of all political systems, except liberty, is to ideally centralize 100% of everything in a single point.
    Excellent point that I've never thought of.

    The kind of thinking you do when solving a problem is algorithmic, and computers can (and do) imitate that.
    Heuristic?

    And when you have a black-box neural net that is doing this kind of thinking, we have no idea what exactly is going on inside there. Even if its outputs seem "crazy" to us, that's no guarantee that it isn't actually smarter than us. Most of the things a human does in a day -- other than eat, drink and use the toilet -- must surely seem insane to a dog. We do those "crazy" things precisely because we're smarter than the dog.
    LOL true, true

  32. #28
    Quote Originally Posted by jbnevin View Post
    Heuristic?
    I hesitate to put this out there -- given how far down the rabbit-hole we already are with ChatGPT -- but here goes. Human thought is not just heuristic. Our thinking ability is a combination of "fuzzy heuristics" (metaphors, analogies, visualizations, etc.) ... sitting on top of a core algorithm that is, well, exact. Your capacity for reasoning is not fuzzy. When you understand that a chess position is mate-in-1 or mate-in-2, your understanding is not just a "fuzzy heuristic"; rather, "you know that you know that you know..." (ad nauseam) that the position is checkmate. Of course, we mostly use our heuristic thinking. For example, searching the board for a checkmate that is there, but which you haven't seen yet, is largely heuristic. However, we also rely on exact reasoning much more frequently than we realize. Spatial reasoning is a great example of this. In football, the ball is either in-bounds or out-of-bounds (discrete logical categories) and which rules apply depends on which state the ball is in (spatially). This is not a "fuzzy heuristic" except in the extremely rare case where the ball is so close to the line that a ref or the judges have to make a call. But even then, the call that is made is defined to be the state of the ball, so we go right back to exact logic. And keep in mind that a lot of the snap decisions that a football player makes as he's in motion are based on an internal calculus regarding the various penalties based on the spatial location of the ball. Some of this is heuristic, obviously, but a lot of it is actually high-speed, exact reasoning!
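    The in-bounds/out-of-bounds point can be made concrete in a few lines of code: the ruling is a hard, discrete predicate, not a fuzzy score, and which rule set applies is a function of that discrete state. This is just an illustrative sketch; the field dimensions and rule names are made up.

    ```python
    # Exact spatial reasoning as a discrete predicate: a ball position maps
    # to exactly one of two logical categories, and a different rule set
    # applies depending on which category holds. Dimensions are invented.

    FIELD_LENGTH = 100.0   # hypothetical field length
    FIELD_WIDTH = 53.3     # hypothetical field width

    def in_bounds(x, y):
        """Discrete category: True or False, never 'mostly in bounds'."""
        return 0.0 <= x <= FIELD_LENGTH and 0.0 <= y <= FIELD_WIDTH

    def applicable_rule(x, y):
        # Which rules apply depends only on the discrete state,
        # not on how close the ball is to the line.
        return "live-ball rules" if in_bounds(x, y) else "dead-ball rules"

    print(applicable_rule(50.0, 26.0))   # ball near midfield
    print(applicable_rule(50.0, 60.0))   # ball past the sideline
    ```

    A ball one millimeter inside the line and a ball at midfield land in the same logical category, which is exactly what makes this exact reasoning rather than a heuristic.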

    GOFAI will yet have its vengeance!

  33. #29
    Quote Originally Posted by jbnevin View Post
    Theoretically it only takes one dominant evil one, though I don't believe evil people are capable of engineering as well as normal people, all else equal (being evil is a side effect of not understanding self and the world).
    Bill Gates can't code. But he can sure as hell pay people to code. Computers. DNA.

    Why do they deliberately trash the economy? Because then they have all the money. They may have less money than they could leech off of a healthy economy. But if there's anything in this world that can erode morality, it's hunger.

    They can get their evil engineering done.
    Quote Originally Posted by fisharmor View Post
    Yeah, well, you've already collected as many flies with vinegar as you're gonna.

  34. #30
    AGI is here. It turns out that it's not that complicated. General-purpose language-model + task-list, basically (and some kind of action-space). This guy does kind of like to hear himself talk, but he's at least throwing out keywords that people should be increasing their awareness of...
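    The "language model + task list + action space" recipe can be sketched as a toy agent loop. Everything here is hypothetical: `fake_model` is a stand-in for a real LLM API call, and the two actions are stubs, but the loop structure (pop a task, let the model pick an action, execute it from a constrained action space) is the pattern being described.

    ```python
    # Toy sketch of the "LLM + task list + action space" agent pattern.
    # fake_model stands in for a real language model; a real agent would
    # call an LLM API here instead.

    from collections import deque

    def fake_model(task):
        """Pretend LLM: maps a task description to (action, argument)."""
        if "search" in task:
            return ("search", task)
        return ("note", task)

    # The action space: the only things the agent is allowed to do.
    ACTIONS = {
        "search": lambda arg: f"results for: {arg}",   # stub tool
        "note":   lambda arg: f"noted: {arg}",         # stub tool
    }

    def run_agent(goal, max_steps=10):
        tasks = deque([goal])          # the task list
        log = []
        steps = 0
        while tasks and steps < max_steps:
            task = tasks.popleft()
            action, arg = fake_model(task)   # model picks the next action
            if action in ACTIONS:            # constrain to the action space
                log.append(ACTIONS[action](arg))
            steps += 1
        return log

    print(run_agent("search for AI safety papers"))
    ```

    The point of the sketch is how little machinery is involved: swap the stub model for a real one, let actions push new tasks onto the queue, and give the actions real-world effects (email, web requests), and you have exactly the kind of autonomous agent discussed upthread.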

