A month ago Coleman Hughes, a young writer whose name I recognized from his many thoughtful essays in Quillette and elsewhere, set up a virtual "AI safety roundtable" with Eliezer Yudkowsky, Gary Marcus, and, err, yours truly, for his Conversations with Coleman podcast series. Maybe Coleman was looking for three people with the most widely divergent worldviews who still accept the premise that AI could, indeed, go catastrophically for the human race, and that talking about that is not merely a "distraction" from near-term harms. In any case, the result was that you sometimes got me and Gary against Eliezer, sometimes me and Eliezer against Gary, and occasionally even Eliezer and Gary against me … so I think it went well!

You can watch the roundtable here on YouTube, or listen here on Apple Podcasts. (My one quibble with Coleman's intro: extremely fortunately for both me and my colleagues, I'm not the chair of the CS department at UT Austin; that would be Don Fussell. I'm merely the "Schlumberger Chair," which has no leadership responsibilities.)

I know many of my readers are old fuddy-duddies like me who prefer reading to watching or listening. Fortunately, and appropriately for the subject matter, I've recently come into possession of a Python script that grabs the automatically-generated subtitles from any desired YouTube video, and then uses GPT-4 to edit those subtitles into a coherent-looking transcript (a sketch of what such a script might look like follows the excerpt below). It wasn't perfect (I had to edit the results further to produce what you see below), but it was still a huge time savings for me compared to starting with the raw subtitles. I expect that in a year or two, if not sooner, we'll have AIs that can do better still by directly processing the original audio (which would tell the AIs who's speaking when, the intonations of their voices, etc.).

Anyway, thanks so much to Coleman, Eliezer, and Gary for a stimulating conversation, and to everyone else, enjoy (if that's the right word)!

As a free bonus, here's a GPT-4-assisted transcript of my recent podcast with James Knight, about common knowledge and Aumann's agreement theorem. I prepared this transcript for my fellow textophile Steven Pinker and am now sharing it with the world!

Update: I've now added links to the transcript and fixed errors. And I've been grateful, as always, for the reactions on Twitter (oops, I mean "X"), such as: "Skipping all the bits where Aaronson talks made this almost bearable to watch."

COLEMAN: Why is AI going to destroy us? ChatGPT seems pretty nice. What's, uh, what's the big fear here? Make the case.

ELIEZER: We don't understand the things that we build. The AIs are grown more than built, you might say. They end up as giant inscrutable matrices of floating point numbers that nobody can decode. At this rate, we end up with something that is smarter than us, smarter than humanity, that we don't understand, whose preferences we could not shape. And by default, if that happens, if you have something around that is, like, much smarter than you and does not care about you one way or the other, you probably end up dead at the end of that.

GARY: Extinction is a pretty, you know, extreme outcome that I don't think is particularly likely. But the possibility that these machines will cause mayhem because we don't know how to enforce that they do what we want them to do, I think that's a real thing to worry about.

COLEMAN: Welcome to another episode of Conversations with Coleman.
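For the curious, here is a minimal sketch of what a subtitle-grabbing script like the one described above might look like. It is an illustration, not the actual script: it assumes the third-party youtube_transcript_api package and OpenAI's Python client, and the chunk size, prompt, and video ID are placeholders.

```python
# Hypothetical sketch of the pipeline described above: fetch a YouTube video's
# auto-generated subtitles, then ask GPT-4 to edit them into readable prose.
# The package, model name, and parameters are assumptions, not the real script.

from openai import OpenAI
from youtube_transcript_api import YouTubeTranscriptApi


def rough_transcript(video_id: str) -> str:
    """Concatenate the auto-generated subtitle fragments into one string."""
    fragments = YouTubeTranscriptApi.get_transcript(video_id)
    return " ".join(f["text"] for f in fragments)


def clean_transcript(video_id: str, chunk_size: int = 8000) -> str:
    """Send the raw subtitles to GPT-4 in chunks and stitch the edits together."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    raw = rough_transcript(video_id)
    chunks = [raw[i:i + chunk_size] for i in range(0, len(raw), chunk_size)]
    edited = []
    for chunk in chunks:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Edit these raw YouTube subtitles into a coherent "
                            "transcript: fix punctuation and obvious mis-hearings, "
                            "and label speaker turns where you can infer them."},
                {"role": "user", "content": chunk},
            ],
        )
        edited.append(response.choices[0].message.content)
    return "\n".join(edited)


if __name__ == "__main__":
    print(clean_transcript("dQw4w9WgXcQ"))  # placeholder video ID
```

The chunking exists because a multi-hour episode's subtitles won't fit in one request; splitting on raw character counts is crude, and a more careful script might split on the subtitle fragments' timestamps instead, so that no sentence is cut mid-word.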