The KhoraPod

John Shook on AI, Pragmatism, and the Future of Higher Education

Eli

In our first episode, John Shook, Karin Valis, and Eli Kramer dive deep into the history, purpose, limits, and potential of machine learning.
 
The conversation centers on Palinode Productions' groundbreaking platform, the Khora Algorithm—an innovative, humane-driven AI platform designed to facilitate philosophical thinking and reflective inquiry.

What happens when we design algorithms not to optimize clicks but to foster curiosity, challenge assumptions, and embrace difference? How can machine learning be reimagined as a tool for dialogue, creative exploration, and even wisdom cultivation?

The trio explores machine learning, drawing on the American philosophical tradition's insights into the nature of inquiry, reflection, and intelligence. They also explore what it means to do humane education in the age of generative AI. Along the way, they unpack the dangers of digital distraction, the possibilities of slow tech, and the promise of a “platform” that challenges users with tough questions, ambiguity, and philosophical growth.
 
This episode isn’t just for academics or tech heads—it’s for anyone who feels the tension between technophilia and technophobia, and is curious about how we might build tools to think with, not just tools that think for us.
 
Whether you're an AI skeptic, a digital humanist, or simply curious about the future of thought, this conversation will leave you reflecting—and maybe even inspired.

WEBVTT

1
00:00:10.086 --> 00:00:19.126
Eli Kramer: I'm Associate Director and University Professor at the Cassirer Center for International Scholarship and Cooperation of the Institute of Philosophy of the University of Wroclaw.


2
00:00:19.476 --> 00:00:32.906
Eli Kramer: So this is part of a broader initiative Palinode is up to: the Khora Algorithm, which is Palinode Productions' innovative, humane-driven AI platform designed to facilitate philosophical thinking and reflective inquiry


3
00:00:32.976 --> 00:00:51.186
Eli Kramer: by leveraging the potential of machine learning designed from the ground up with a humane vision. Khora introduces users to an adaptive environment that allows them to explore the big questions of life and the world, while also providing opportunities that stimulate the imagination and promote curiosity and creative application.


4
00:00:51.306 --> 00:01:03.155
Eli Kramer: Further, Khora guides users to explore opposing points of view, allowing individuals to recognize other people's beliefs and intellectual journeys as being invaluable to the project of collective growth


5
00:01:03.416 --> 00:01:16.356
Eli Kramer: on the Khora platform. Users are challenged to explore interconnected ideas from within what look like competing positions, fostering critical thinking, self-reflection, and collective, collaborative


6
00:01:16.506 --> 00:01:34.856
Eli Kramer: engagement. And in both the YouTube video and on your Spotify playlist, you should find a link to our marketing website with more info, and soon we will be able to show you on that website a proto version of what we're up to, to make this all more clear and exciting.


7
00:01:36.306 --> 00:01:44.946
Eli Kramer: But the reason for this podcast: for those of us working on this project, we well recognize the challenges and complexity of the landscape we are working within.


8
00:01:45.166 --> 00:01:55.646
Eli Kramer: We well know that much of generative AI is all too often used to alleviate the need for thought. Many of us, including myself, have students using it to do their assignments to such an end.


9
00:01:55.916 --> 00:02:07.316
Eli Kramer: There is also the issue of Paywalls, and how our attention in general online is commodified for direct advertisements or sold off to others interested in what we were shaped to be interested in.


10
00:02:07.436 --> 00:02:23.485
Eli Kramer: Then there's the environmental impact. Never mind issues surrounding AI copyright and ownership, and how academic publishers are making their own AI to mine scholarly work without the permission of authors. The issues are myriad and nuanced. And we certainly recognize that we don't have all the answers.


11
00:02:23.646 --> 00:02:36.516
Eli Kramer: So this podcast is going to be an opportunity for those of us working on and with Khora to dialogue with leading voices in philosophy, science and technology studies, economics, politics, and other domains to grapple with this new terrain.


12
00:02:36.776 --> 00:02:43.056
Eli Kramer: So guests are going to introduce their angle on issues related to philosophy, human cultivation, and AI


13
00:02:43.666 --> 00:02:48.616
Eli Kramer: as we explore the potential of these technologies, as well as their dangers to enframe us.


14
00:02:49.106 --> 00:03:12.186
Eli Kramer: With all of that said, first I want to introduce our lead AI developer, Karin Valis, who will be joining us for our launch here. She is a Berlin-based machine learning engineer and writer with a deep passion for everything occult and weird. Her work focuses mainly on finding conceptual resonances between the latest machine learning technology and the esoteric. Karin, thanks for being with us.


15
00:03:12.926 --> 00:03:15.525
Karin Valisova: Thank you. Thank you for the introduction.


16
00:03:15.666 --> 00:03:23.395
Eli Kramer: And let me introduce our first guest. We're pleased to introduce the internationally celebrated philosopher Dr. John Shook,


17
00:03:23.596 --> 00:03:45.485
Eli Kramer: one of the preeminent specialists in classical pragmatism, naturalism, the philosophy of science and logic, and humanism. Shook has spent his career understanding the historic role of philosophy and has provided a vision for its future in a very different landscape from today's higher education. He's been preparing a formative work on pragmatic logic and the history of logic to understand what is happening today in AI.


18
00:03:45.596 --> 00:03:56.996
Eli Kramer: He's authored and edited more than a dozen books and dozens of articles in academic journals. Among his books are Dewey's Empirical Theory of Knowledge and Reality, Pragmatic Naturalism and Realism,


19
00:03:57.146 --> 00:04:14.945
Eli Kramer: Blackwell's A Companion to Pragmatism, The Dictionary of Modern American Philosophers in four volumes, The Future of Naturalism, John Dewey's Philosophy of Spirit, The Essential William James, and the Dictionary of Early American Philosophers in two volumes. And to top it off, he's the editor-in-chief of Contemporary Pragmatism.


20
00:04:15.086 --> 00:04:27.125
Eli Kramer: He's going to begin today's episode by trying to explore how today's machine learning fits into the history of philosophical logic and what it suggests for projects like the Khora Algorithm.


21
00:04:27.376 --> 00:04:28.776
Eli Kramer: Hey, John Welcome.


22
00:04:29.006 --> 00:04:30.576
John Shook: Hey? Thanks for having me.


23
00:04:31.326 --> 00:04:39.926
Eli Kramer: Great. And so, you know, with that said, I'm gonna just dive in, John. Give us your spiel, and then I'll ask some questions, and we'll get going here.


24
00:04:40.356 --> 00:04:42.116
John Shook: Did. Did I have a prepared


25
00:04:42.386 --> 00:04:46.226
John Shook: spiel? You warned you warned me that I need to.


26
00:04:46.226 --> 00:04:47.255
Eli Kramer: Give one, give.


27
00:04:47.256 --> 00:04:47.616
John Shook: She waits.


28
00:04:47.616 --> 00:04:48.286
Eli Kramer: Benefit.


29
00:04:48.286 --> 00:04:50.036
Eli Kramer: AI a little bit. Yeah.


30
00:04:50.036 --> 00:04:50.996
Eli Kramer: Talk about your


31
00:04:51.316 --> 00:05:05.645
Eli Kramer: of AI. How we got here, you know. Maybe some of the stuff we were talking about with the sort of history of, you know, analogical logic. Then, as it moved to propositional what that means for designators, and where you think AI is placed in the story.


32
00:05:05.646 --> 00:05:08.926
John Shook: You're already throwing around the verbiage. Well done!


33
00:05:09.256 --> 00:05:14.456
John Shook: I'll try to keep things as simple as possible without letting them be too simple.


34
00:05:14.786 --> 00:05:17.026
John Shook: The Internet now, you know,


35
00:05:17.606 --> 00:05:28.266
John Shook: has ample resources for learning all about AI, and logic may or may not come up. Right now, we're interested in whether or not AI can do reasoning,


36
00:05:28.656 --> 00:05:33.465
John Shook: Primarily mathematical reasoning and basic sort of logic puzzles.


37
00:05:33.806 --> 00:05:37.255
John Shook: and not surprisingly, its performance is improving.


38
00:05:37.456 --> 00:05:44.086
John Shook: What AI wouldn't know and couldn't know itself is that of course it's running on microprocessors.


39
00:05:44.376 --> 00:05:45.236
John Shook: And


40
00:05:46.626 --> 00:05:55.896
John Shook: It was philosophy and philosophical logic that invented the foundations of computing. So the answer to the question, is AI


41
00:05:56.106 --> 00:05:59.925
John Shook: logical? Well, it has to be, essentially,


42
00:06:00.116 --> 00:06:14.125
John Shook: not that artificial intelligence would know that, because it's running fundamentally on logic gates; that is the point of microprocessing. Electrical currents are flowing through various kinds of logic gates,


43
00:06:14.236 --> 00:06:18.475
John Shook: so it can take in 2 inputs and produce one output.


44
00:06:19.391 --> 00:06:21.615
John Shook: NAND, for friends of


45
00:06:21.876 --> 00:06:27.825
John Shook: logic, you might recall. Oh, NAND, wasn't that not-AND? Well, at any rate, there are


46
00:06:28.046 --> 00:06:34.416
John Shook: various NANDs and NORs. But what this essentially does is it allows computing


47
00:06:34.566 --> 00:06:38.565
John Shook: to be able to do mathematics, and


48
00:06:38.666 --> 00:07:01.235
John Shook: you can run mathematical formulas on top of a sufficient number of complex flows of logic gates. And there's a deep connection between logic and mathematics that I won't go into, but mathematically we expect it to be very deductive, and in deductive logic, the truth you put in


49
00:07:02.176 --> 00:07:21.025
John Shook: should be, right, still true when it comes out, and you need that for mathematical rigor as well. Mathematics relies on truths of the inputs, and then you run the equations, or the formulas, or whatever, and you can count on the answers. Now along comes statistics,
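
[Editor's note: a minimal sketch of the logic-gate point above, in Python rather than silicon. A gate is just a two-input, one-output Boolean function, and NAND alone is enough to build the others that arithmetic is then stacked on. The function names are ours, purely for illustration.]

```python
# Illustrative sketch, not any particular chip design.

def nand(a: int, b: int) -> int:
    """Two bits in, one bit out."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:          # NOT built from NAND
    return nand(a, a)

def and_(a: int, b: int) -> int:  # AND built from NAND
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:   # OR built from NAND (De Morgan)
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:   # XOR built from four NANDs
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def half_adder(a: int, b: int):
    """One bit of addition (sum, carry), built purely from gates."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```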


50
00:07:21.706 --> 00:07:23.916
John Shook: and some of the best mathematical


51
00:07:24.316 --> 00:07:36.115
John Shook: formulations and algebraic functions are actually essentially statistical. You wouldn't know it. The old calculators that we all had when we were sent off to college, they were doing multiplication.


52
00:07:36.704 --> 00:07:43.916
John Shook: Through logarithms, and the way to do it the fastest was to make it a little bit statistical, but no one would ever know the


53
00:07:44.076 --> 00:07:47.305
John Shook: difference because it was doing it so fast, so fast.
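[Editor's note: a worked sketch of the logarithm trick John mentions. Multiplication becomes addition in log space, so a machine that can add and approximate log/exp can "multiply"; the numbers here are arbitrary.]

```python
import math

# a * b == exp(log(a) + log(b)): multiply by adding logarithms.
a, b = 37.0, 54.0
approx = math.exp(math.log(a) + math.log(b))
print(approx, a * b)  # ~1998.0 vs 1998.0, up to floating-point rounding
```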


54
00:07:47.466 --> 00:07:53.065
John Shook: Well, now, what you can do with these graphics Chips


55
00:07:53.496 --> 00:08:05.195
John Shook: is take advantage of the fact that they are optimized for doing very fast mathematical statistics. Essentially, they're they're optimized for doing


56
00:08:05.376 --> 00:08:09.705
John Shook: the kinds of mathematical formulations and functions.


57
00:08:10.836 --> 00:08:19.255
John Shook: For mathematical relationships, far more complicated than your worst nightmare of a calculus class, but they're very good at it.


58
00:08:21.426 --> 00:08:31.966
John Shook: The combination of that with neural networks. The neural networks are essentially just a very large model or family of these kinds of statistical mathematical functions, all operating in


59
00:08:32.126 --> 00:08:55.176
John Shook: parallel. You put them in layers, you get neural nets, and then you can do interesting things like feeding forward and feeding backward training. So what essentially you're doing is you're training the very large number of mathematical formulas weighting the connections between the nodes of the neural networks, and after a while it will pick up patterns.


60
00:08:55.286 --> 00:09:06.235
John Shook: These are statistical patterns. They are degrees of probability, or, in very large n-dimensional spaces, relationships of probabilities. But that's what patterns are.
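
[Editor's note: a minimal sketch, with made-up numbers, of what "layers of weighted statistical functions" means. This is not Khora's model or any production architecture; training would mean nudging the weight matrices so outputs better match examples.]

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: each layer is a weighted sum followed by a
# squashing nonlinearity. The weights W1 and W2 are what training adjusts.
W1 = rng.normal(size=(4, 3))   # weights between a 3-value input and a 4-node hidden layer
W2 = rng.normal(size=(2, 4))   # weights between the hidden layer and a 2-value output

def forward(x):
    h = np.tanh(W1 @ x)        # hidden layer: weighted sums, squashed
    return np.tanh(W2 @ h)     # output layer: more weighted sums

x = np.array([0.2, -1.0, 0.5])  # an arbitrary input pattern
print(forward(x))               # an output pattern; training shapes what comes out
```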


61
00:09:06.746 --> 00:09:22.366
John Shook: What kinds of outputs do you get? Well, you can run in fresh inputs and sort of, interrogate it and get original, somewhat original patterns out of them. Well, what kinds of patterns? Right now generative AI


62
00:09:22.366 --> 00:09:38.666
John Shook: is performing essentially 3 kinds of patterns. One is still fairly close to what we might call algebraic functioning. In other words, it can still look a little bit deductive, and they're very useful for analytical tools like, if you want to analyze a very large amount of financial data.


63
00:09:39.126 --> 00:09:48.826
John Shook: it'll it'll do that. And it'll find patterns. And it'll perform some analytical functions telling you what's going on in the data. Essentially, it's data mining.


64
00:09:49.126 --> 00:10:07.085
John Shook: The second one is the one that hit the headlines, and that is, of course, conversational generative AI. So you can train it to form sentences; sentences become paragraphs, paragraphs become conversations. And the third kind, of course, is image generation,


65
00:10:07.296 --> 00:10:34.366
John Shook: again, just forming patterns. So if you train it on a million cats and ask it to do this or that kind of cat, it'll do it, and if you train it on enough different kinds of pictures, it picks up what a hat would look like on something that would look like a cat. And now you're off and running, and so you can give it various forms of prompts and instructions to try to craft and design the image that you want.


66
00:10:35.006 --> 00:10:40.096
John Shook: So here we are in this age of generative AI,


67
00:10:40.196 --> 00:11:02.935
John Shook: and it will have enormously interesting conversations with you. It'll also attempt to solve logical puzzles. It'll attempt to do mathematics, and there are various sorts of trials and tests that you can run the different AI models through to discover how good it is. So far it's getting pretty good. In fact, there's 1 AI model now that is essentially


68
00:11:03.186 --> 00:11:07.545
John Shook: performing at mathematical levels equivalent to the


69
00:11:07.666 --> 00:11:11.846
John Shook: best university professor. That's probably teaching at your university.


70
00:11:12.236 --> 00:11:18.275
John Shook: That's pretty impressive, although fairly limited. The creativity not so much


71
00:11:18.666 --> 00:11:43.995
John Shook: the mathematical puzzles would already have had to, in some sense, be fed into it. Not that they're giving it the answers, but the AI would already have to be familiar with the kinds of mathematical puzzles. This isn't genius-level stuff; genius-level mathematics means coming up with novel relationships between two things that have never been put in the same pattern before. Well, of this AI could know nothing;


72
00:11:44.606 --> 00:11:48.435
John Shook: it only understands, knows about, and can replicate


73
00:11:49.466 --> 00:11:50.585
John Shook: things


74
00:11:51.236 --> 00:11:57.915
John Shook: that already have patterns and relationships. And if you want more of it, AI is your thing.


75
00:11:58.196 --> 00:12:11.506
John Shook: In some sense it always sticks to the mean, it sticks to the middle, it sticks to the middle of the bell curve. It's probability functioning. And so it will give you the most probable answer, not the most right answer.


76
00:12:11.756 --> 00:12:20.785
John Shook: Now for some things and some functions you don't need to care about the difference. But for other things you do, especially when there's money riding on it, and so forth.


77
00:12:20.896 --> 00:12:30.135
John Shook: So, at any rate, generative AI has enormous potential. On the horizon, arriving soon, is what is being called agentic AI.


78
00:12:30.376 --> 00:12:32.756
John Shook: It will add on


79
00:12:33.106 --> 00:12:52.466
John Shook: decision making and some sort of executive action that'll do things for you given sufficient instructions, and then it will go out and literally attempt to be your agent. So over the next 5 to 10 years we expect this kind of agentic AI to become more and more important.


80
00:12:54.017 --> 00:12:56.426
John Shook: Yeah. Where? Where did logic go?


81
00:12:56.706 --> 00:13:00.055
John Shook: Well, if it's in conversation with you.


82
00:13:00.656 --> 00:13:12.886
John Shook: it can actually sound fairly logical. We've all had that experience. It really does seem like you have a conversational partner. It really does seem that it is capable of giving back to you


83
00:13:13.446 --> 00:13:20.915
John Shook: expository information that has sensible relationships among the sentences.


84
00:13:21.066 --> 00:13:46.955
John Shook: Right? It'll form a paragraph with a topic sentence and some exposition and illustration or enlargement on it, and it'll wrap it up. With sufficient prompting it can do this ad nauseam. That's all very impressive, and it is going to remain a pretty good learning tool, as long as you realize that the output that you get from the AI is always going


85
00:13:47.246 --> 00:13:56.156
John Shook: to be bounded and limited by the amount of input that you put into the AI. So the old adage that I was taught learning Fortran and


86
00:13:56.596 --> 00:14:01.056
John Shook: the early languages still applies: garbage in, garbage out.


87
00:14:01.606 --> 00:14:12.696
John Shook: If you feed AI a lot of garbage, you're going to get plenty of garbage out. So it's still a situation of buyer beware! Human intelligence is still doing most of the work.


88
00:14:14.276 --> 00:14:33.856
Eli Kramer: That gets me to my follow-up question, to pick at that word you sort of hinted at: intelligence. You do a lot in pragmatism, which, of course, is interested in the nature of intelligence, especially how humans, or maybe other creatures, engage with their environment, solve problems, move around them, and organize their relationships to the world.


89
00:14:34.156 --> 00:14:51.616
Eli Kramer: What do you think about the use of the term AI at all for the advances made? Do you think that's fair? Do you think pragmatism might have some resources, if you were sort of being philosophically nitpicky, to re-articulate what advances are being made here? How do you make sense of that for yourself, and then.


90
00:14:51.616 --> 00:14:52.336
John Shook: We don't even need.


91
00:14:52.336 --> 00:14:53.406
Eli Kramer: Karin there.


92
00:14:53.406 --> 00:14:58.486
John Shook: We may not even need philosophical nitpickiness, although that is our resource backup.


93
00:14:58.606 --> 00:15:13.405
John Shook: We just need some common sense, I mean, for a hundred years we've been looking at psychology trying to find tests for intelligence. And the only thing that all of this industry of the psychology of of intelligence tests has discovered is that


94
00:15:13.806 --> 00:15:30.936
John Shook: the only thing intelligence means is precisely the performance on that test. It hardly correlates to anything else. But actually, this is the right attitude to take with AI. Don't worry about artificial intelligence as if intelligence meant something. It doesn't;


95
00:15:31.066 --> 00:15:33.445
John Shook: that's just prompting us.


96
00:15:33.836 --> 00:15:44.535
John Shook: Intelligence from AI means only and precisely: what have you trained and tested it to be able to reliably do?


97
00:15:44.766 --> 00:16:05.375
John Shook: Then label that a kind of intelligence. So there's a million kinds of intelligences corresponding to each and every kind of performance, training, and test that you decide to put AI through. When it hits some sort of standard of adequate performance level, you can say it is achieving


98
00:16:05.616 --> 00:16:12.566
John Shook: precisely that particular type of tested intelligence, beautiful. No fallacy


99
00:16:12.796 --> 00:16:17.496
John Shook: because you're not attributing to it capabilities or intentions


100
00:16:17.516 --> 00:16:41.605
John Shook: beyond what it is actually concretely doing right there in front of you. This is the behaviorist approach to intelligence. Intelligence is just another behavior, but there are no such things as behaviors in the abstract. There are only, right, things that are thinking through specific problems under specific conditions, with given resources. And that's it.


101
00:16:41.606 --> 00:16:47.576
John Shook: It has no validity beyond those sorts of conditions. But this is no obstacle at all


102
00:16:47.576 --> 00:16:58.545
John Shook: for artificial intelligence or the engineering of artificial intelligence. The only people getting fooled are, of course, the human beings who think that there is such a thing as general intelligence.


103
00:16:58.596 --> 00:17:06.495
John Shook: And I'm not a fan of this search for, right, general AI. That is hogwash;


104
00:17:06.676 --> 00:17:09.786
John Shook: that's propaganda. It's marketing.


105
00:17:10.176 --> 00:17:23.176
John Shook: By definition AI cannot achieve it, because nobody knows what general intelligence is. And when you read the fine print, it turns out that the manufacturer and purveyor of so-called, right, general


106
00:17:23.871 --> 00:17:32.285
John Shook: AI has simply already set a performance benchmark on certain kinds of tasks, called that general,


107
00:17:32.426 --> 00:17:36.945
John Shook: showed that it meets that specific criterion, and touchdown.


108
00:17:37.056 --> 00:17:59.905
John Shook: Well, technically, yes. But then it's not general intelligence. It's specialized intelligence, because it's only doing a special job on your special version of the task. And we do have to beware of the hype and the risk that intelligence can be essentialized or pinned down into one particular mode


109
00:18:00.076 --> 00:18:09.545
John Shook: of problem solving. That was exploded a hundred years ago; it was an enormous fallacy, connected with all kinds of terrible things, like racism


110
00:18:09.636 --> 00:18:29.525
John Shook: and other sorts of ways of classifying people as fully intelligent or less than intelligent. We shouldn't do that to each other, and certainly AI doesn't deserve to be treated that way. So I think we're best on safe territory by understanding


111
00:18:29.526 --> 00:18:41.585
John Shook: that intelligence is going to simply be: what are its real-world performances, given specific kinds of trained tasks? Then you know what intelligence means, and you're also warned:


112
00:18:41.626 --> 00:18:47.036
John Shook: Don't assume that it'll perform equally well on any neighboring task.


113
00:18:47.906 --> 00:18:57.075
John Shook: Right? The statistics fall off rapidly. It'll do terribly on a task that seems similar to a human, but if the AI isn't ready for it,


114
00:18:57.386 --> 00:18:59.406
John Shook: it'll do poorly. So


115
00:18:59.666 --> 00:19:06.315
John Shook: that's okay. I think we can handle and understand. I think there's even more confusion about the word artificial.


116
00:19:06.636 --> 00:19:13.295
John Shook: than the word intelligence. Treat intelligence as a trained, reliable behavior;


117
00:19:13.446 --> 00:19:19.196
John Shook: artificial, though, is a whole nother meme opportunity that, I think, is also being exploited.


118
00:19:20.186 --> 00:19:47.585
Eli Kramer: Yeah, there's a whole history there, and it's a conversation I know Karin and I have had, and it makes me wonder, Karin: what's the sort of in-house conversation with people working in generative AI development and engineering? Do people think it's the most appropriate term for advances made in the field? Do you think machine learning is any better, in your experience? Do, you know, AI developers and engineers see themselves as truly building something like human intelligence?


119
00:19:47.606 --> 00:19:54.456
Eli Kramer: Do they buy that the artificial here somehow is analogous to the human? I'd be curious for your thoughts on that.


120
00:19:54.456 --> 00:20:24.076
Karin Valisova: Yeah, I have to say that I probably haven't seen any person who really is, like, studied technically in AI actually call it AI, and not machine learning. It's sort of like the stamp you put on, you know, for the purpose of general conversation, for the purpose of, I don't know, funding, because it's such a hype topic right now, right? But like, it's a machine learning problem. And I think that when you actually approach solving the problems with the tools,


121
00:20:24.076 --> 00:20:28.125
Karin Valisova: You look at them, as you know, machine learning problems


122
00:20:28.126 --> 00:20:54.225
Karin Valisova: rather than sort of the quest for the general intelligence. I think that's such a ridiculous human-centric view on things, to sort of talk about the general intelligence, you know. Like, we've got things like slime molds solving problems better than a lot of algorithms, like the pathfinding optimization and things like this. And sort of putting


123
00:20:54.226 --> 00:21:18.706
Karin Valisova: into perspective this attempt to solve some problem we don't even have an optimization function for. Like, what is general intelligence? What are we as humans optimizing for in our existence? Right? What are we trying to mimic? Because, like in machine learning, the main sort of mechanism is exactly as John was saying: you've got a problem you want to solve,


124
00:21:18.706 --> 00:21:29.256
Karin Valisova: and you've got methods you try to apply to solve this problem, right, which usually is some sort of function you need to optimize: have the least distance between the robotic arm


125
00:21:29.406 --> 00:21:48.885
Karin Valisova: and the target point in the space, and the way of, like, moving the angles to actually get there. Right? That's a function you can optimize; you can then run a whole bunch of really computationally expensive algorithms on top of it, and the thing will eventually get where you want it to go. But for general intelligence, what are we optimizing for? Like,


126
00:21:48.886 --> 00:22:07.065
Karin Valisova: you know? I think that's a question which none of us actually has an answer to. Like, you know, we spend our whole lives trying to figure out what it is we're optimizing for. And this idea that, you know, now we're going to have an AI to solve it for us is just really funny.
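
[Editor's note: a toy sketch of the kind of optimization objective Karin describes: joint angles of a two-link arm nudged by gradient descent to minimize the distance between the arm's tip and a target. The geometry and numbers are invented for illustration only.]

```python
import math

L1, L2 = 1.0, 1.0            # link lengths of a toy two-joint planar arm
target = (1.2, 0.8)          # where we want the fingertip to end up

def tip(theta1, theta2):
    """Forward kinematics: fingertip position for given joint angles."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def loss(theta1, theta2):
    """The function being optimized: squared distance from tip to target."""
    x, y = tip(theta1, theta2)
    return (x - target[0]) ** 2 + (y - target[1]) ** 2

# Plain gradient descent with numerical gradients.
t1, t2, lr, eps = 0.1, 0.1, 0.1, 1e-5
for _ in range(2000):
    g1 = (loss(t1 + eps, t2) - loss(t1 - eps, t2)) / (2 * eps)
    g2 = (loss(t1, t2 + eps) - loss(t1, t2 - eps)) / (2 * eps)
    t1, t2 = t1 - lr * g1, t2 - lr * g2

print(tip(t1, t2), loss(t1, t2))  # tip ends up near (1.2, 0.8), loss near zero
```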


127
00:22:07.206 --> 00:22:09.796
Eli Kramer: It sort of hits on that. There's this, you know


128
00:22:09.926 --> 00:22:32.436
Eli Kramer: idea from Byung-Chul Han that, you know, we're a culture operating without a telos, but, or maybe there is one, that I was going to bring in to see what you both think about another force, which is: how much do you think that push for general intelligence is companies trying to get money from VC firms and other places and trying to sort of sell the potential of what they're making?


129
00:22:32.436 --> 00:22:42.885
Eli Kramer: In other words, how much are sort of our current form of late-stage capitalism and the way that companies start themselves shaping the pitch of what's really happening here?


130
00:22:42.926 --> 00:22:55.936
Eli Kramer: And I think there's a sort of philosophical question there: are we structuring ourselves to build the narrative? And then, to be serious about what it's like for machine learning developers, is there, you know, a sort of practical situation of


131
00:22:56.106 --> 00:23:04.115
Eli Kramer: you know, them feeling pressured to make sort of promises and narratives that the in-house story is very different from, which it sounds like you're saying.


132
00:23:05.466 --> 00:23:07.156
Eli Kramer: either of you want to dive in there.


133
00:23:10.416 --> 00:23:22.806
John Shook: Yeah. Well, I don't know about business-to-business; competitive capitalism weeds out tools that don't work pretty rapidly. They're not easily fooled by hype.


134
00:23:23.336 --> 00:23:37.075
John Shook: It's business-to-consumer, of course, that's most worrisome, because the consumer can be easily fooled. And this is where this business of the artificial comes in. Artificial connotes different sorts of meanings. One is


135
00:23:37.896 --> 00:23:41.955
John Shook: substitute. So we think of artificial, artificial sweetener.


136
00:23:42.136 --> 00:23:51.145
John Shook: Well, what would artificial intelligence be? Well, it would be as sweet as intelligence, but without the


137
00:23:51.466 --> 00:23:59.945
John Shook: I don't know what. But folks talking about replacing their workers with AI love the artificial. I heard a commercial


138
00:24:00.216 --> 00:24:16.955
John Shook: advertising, right, "buy our AI," for, I don't know, it seemed like some low-level advertising or marketing involving, right, graphics. So you get all the graphics productivity, and more, 24/7, quote unquote, without the backtalk.


139
00:24:17.886 --> 00:24:32.776
John Shook: What's the backtalk? Well, presumably that's a human being who actually is an artist and understands graphic design, who's saying, well, I understand what you want me to do, but do you understand you can't really, actually do it that way? Well.


140
00:24:32.776 --> 00:24:33.586
Karin Valisova: The Executive.


141
00:24:33.586 --> 00:24:43.115
John Shook: Don't want to hear it. That's backtalk. They want artificial intelligence that's just like, yes, sir, and there's a little dangling ding, and here's your entire, you know, advertising campaign


142
00:24:43.286 --> 00:24:58.606
John Shook: all laid out for you, and there's no backtalk. So there is that sense of artificial. You get all the sweet stuff without the calories, the unnecessary bloating, and this sort of thing that comes with dealing with human beings. There's another sense of artificial.


143
00:24:59.151 --> 00:25:02.596
John Shook: though, that I think hits a little closer to home.


144
00:25:02.816 --> 00:25:11.145
John Shook: Something artificial is, of course, going to not be as good as the authentic thing.


145
00:25:11.436 --> 00:25:26.715
John Shook: and this is what we all need to wake up to. We need to figure out very quickly what these AI tools are truly good for, and when they stop working, and the authenticity is just fakery.


146
00:25:27.276 --> 00:25:40.665
John Shook: The gold shine comes off. It's fool's gold. It's not actually as workable, as valuable, as useful, as efficient, as productive, as if trained human beings were doing it for all kinds of reasons.


147
00:25:40.946 --> 00:25:51.186
John Shook: And so we don't want this business of artificial intelligence to really be viewed as either a substitute, or as something


148
00:25:51.416 --> 00:26:20.156
John Shook: that could potentially be good enough. What is it? It's just another tool, and you do not know what it is precisely good for, or how much good it will do, and how long that productivity will last, until you actually familiarize yourself with using that tool. And there's no other substitute for that. That's authentic technology understanding. And AI is not going to replace that for us.


149
00:26:20.396 --> 00:26:29.585
John Shook: AI by definition can never adequately explain to an actual technological creature like us how it should be used.


150
00:26:29.766 --> 00:26:40.525
John Shook: Now, that doesn't mean that AI won't try to do so, but if we are so foolish as to listen to it, then we get the disasters that will inevitably ensue, I think.


151
00:26:42.016 --> 00:27:11.346
Karin Valisova: Yeah, I completely agree on this. And I think that, like in business, we've sort of seen that these two years, two and a half years since AI exploded, right, there was this huge hunt for companies to hire people to do, like, some sort of transformations: let's find out how many of our designers we can sack and replace with AI, right? And a lot of people got huge amounts of, like, research done on this. And


152
00:27:11.346 --> 00:27:40.826
Karin Valisova: I read somewhere that last year, between the amount of money actually spent on this research of people figuring out how it can be done, and what has been implemented and actually used as a practical solution to these problems, there was a huge gap. Because, like, you know, it's all fool's gold. It's not a tool which can fully replace a person. It's a fantastic tool which can do a lot of the boring,


153
00:27:41.116 --> 00:28:06.065
Karin Valisova: awful tasks you need to repeatedly do as a creative person, you know: correcting subtitles, editing of menu details in the video, and so on and so on, you know, little edits on the storyboards, and all of these things which can be done with AI. But you know, it's a specific application of a tool. It's like an extension of the human hand. It's not a replacement of


154
00:28:06.966 --> 00:28:10.335
Karin Valisova: of the thing driving the hand. In my opinion.


155
00:28:10.336 --> 00:28:21.625
John Shook: I like that analogy very much. People understand, right, that we're sort of getting used to the idea of prosthetics. But you have to understand, AI will never even be as good as this. This is the tool of tools.


156
00:28:22.216 --> 00:28:25.525
John Shook: Right? 6 million years of evolution and counting


157
00:28:26.236 --> 00:28:32.285
John Shook: what AI will do is, it'll give you a hand, but it'll be a standard normalized.


158
00:28:32.396 --> 00:28:49.166
John Shook: average hand. It may be better than yours, but that doesn't make it better than the average human being's hand, and certainly it'll never match a trained human hand with all of its flexibility and versatility. Remember, the most important thing about the hand connected to the tool,


159
00:28:49.436 --> 00:28:55.816
John Shook: connected to the brain is that the hand knows when to stop using that tool.


160
00:28:56.776 --> 00:28:57.946
Karin Valisova: Yeah, and it.


161
00:28:57.946 --> 00:29:12.066
John Shook: AI will never understand this. Their AI is, right, built on an imperative that the solution to any problem with AI is to use more AI! We don't even do that with our tools. We certainly don't do that with our brains.


162
00:29:12.066 --> 00:29:17.836
Karin Valisova: Yeah. And and the hand might even have a wrong amount of fingers on it. So.


163
00:29:17.836 --> 00:29:21.045
John Shook: Exactly might be 6 fingers.


164
00:29:21.846 --> 00:29:46.055
Karin Valisova: But what's really interesting, what this analogy now made me think of, is, like, there are a lot of neurological studies where they've been mapping the brains of monkeys using a tool, right? And you can see that if they're really proficient using some tool, there are parts of their brain which actually map as if the tool was part of their neural circuitry. And I sort of


165
00:29:46.056 --> 00:29:51.975
Karin Valisova: never really thought about this in connection with the AI. And how is AI becoming part of


166
00:29:51.976 --> 00:30:00.706
Karin Valisova: our sort of default skill set as creative people, as professional people, as you know, whatever we're doing with these tools.


167
00:30:00.726 --> 00:30:01.846
Karin Valisova: And


168
00:30:01.906 --> 00:30:10.496
Karin Valisova: now, as I'm thinking about it as a as a programmer. Right? It's like AI is very good in writing code, and I,


169
00:30:10.786 --> 00:30:23.145
Karin Valisova: like, it increased the efficiency of people who work on sort of small projects, where you can just, like, prototype a lot of stuff, like hundredfold. You know, there have been projects that


170
00:30:23.426 --> 00:30:35.435
Karin Valisova: 3 years ago I would have needed a team of 6 and half a year to develop, and now I can prototype it in a week, or in 20 minutes even, some of them. So


171
00:30:35.576 --> 00:30:49.715
Karin Valisova: yeah, I'm I'm very curious. How will this tool sort of bubble into our our skill sets and what it will do to our understanding, our original understanding of you know what?


172
00:30:49.826 --> 00:30:57.796
Karin Valisova: 3D modeling is, what creative design is, what programming is, right?


173
00:30:57.796 --> 00:31:01.346
Eli Kramer: How do you think we should balance that? You're both talking about the sort of,


174
00:31:01.496 --> 00:31:10.585
Eli Kramer: I think, overweening technophobia, but also maybe techno-optimism and euphoria that never really comes with new technologies. There's a fun little,


175
00:31:10.846 --> 00:31:28.635
Eli Kramer: I'm not sure totally paid attention to, but I think brilliant, section in Cathy Davidson's The New Education, looking at the history of when new educational tools were adopted and how people responded to them. So, you know, when computational calculators came in, there was all this bemoaning of,


176
00:31:28.636 --> 00:31:50.445
Eli Kramer: you know, people's arithmetic memory, and how that was going to disappear, and how you know people's cognition was going to get destroyed, and then also optimism about its efficiency, and how, you know, culture was going to be radically changed. We also remember, you know, the dot-com bubble and the optimism of what the World Wide Web was going to do to, you know, resolve all of humanity's problems.


177
00:31:50.516 --> 00:31:59.515
Eli Kramer: So to both of you, how do you think about these extended appendages? Should we be afraid of adding too many on? Is it a matter of


178
00:31:59.846 --> 00:32:06.636
Eli Kramer: making sure it's the right ones? How do we figure out that we have the right ones or not, sort of succumbing to something that


179
00:32:07.206 --> 00:32:10.756
Eli Kramer: is going to reduce the human parts we really need.


180
00:32:11.616 --> 00:32:25.096
John Shook: Well, I'm a pragmatist. You judge things by their performance. It's objective, it's metricable. Everybody can see what's going on, even you, if you sort of take off your blinders and admit what's happening.


181
00:32:25.356 --> 00:32:40.285
John Shook: People get excessively optimistic, like every technology has, you know. Well, we have a long experience with technology, back to pounding rocks. We also know their limitations. Furthermore, these technologies operate within cultures.


182
00:32:40.396 --> 00:33:02.916
John Shook: They're not replacements for cultures. Take, for example, the book. Did education take a huge leap forward? The correct answer is: sort of. Until we adjusted social institutions and educational institutions to correctly manage this new technology, a lot of print just allowed propaganda, misinformation, anonymous authors. Oh, wait!


183
00:33:03.256 --> 00:33:20.975
John Shook: The Western world after the printing press became kind of like what we complain about on the Internet today. What happened? Well, there had to be institutionalized standards for what counted as authentic information able to be found in books. Furthermore, there are books, and then there are books.


184
00:33:21.006 --> 00:33:37.215
John Shook: If education only encourages reading for the purpose of recitation, which for too long in the West, and maybe in other parts of the world, was the standard, you got students who could memorize an awful lot out of books, but they didn't understand it,


185
00:33:37.336 --> 00:34:05.266
John Shook: because they weren't critically engaging with the contents of the books. What happened? Well, academia had to say: how about we write better books that promote active reading, critical thinking, interrogating the text, getting into conversations, making books more conversational? And, indeed, after 100 years that was able to be accomplished, and pedagogy now is much better than what it was 200, or even 100, years ago. Is it still the book?


186
00:34:05.406 --> 00:34:21.305
John Shook: Don't focus on the technological thing. The technology is not in the thing. The technology is in the mutual feedback coevolution of how culture uses the thing. That's where the technology is.


187
00:34:21.506 --> 00:34:42.516
John Shook: So if we take responsibility for how our cultures and societies use these technologies, things will go much better. Will it take time, decades? It always does. But there has to be accountability for how these tools are used, and any tool by definition can be refined and adapted


188
00:34:42.516 --> 00:34:56.146
John Shook: for better human utility. And this is going to eventually happen with AI. We'll get past the hype and the nonsense, and realize that it remains our responsibility to understand what's happening. I'll give you one example.


189
00:34:56.256 --> 00:35:11.985
John Shook: So Microsoft, in league with some professors. I'm looking at it now; it was just released. I'll give you the title, you can Google it and find it. It's free, it's not behind a paywall: "The Impact of Generative AI on Critical Thinking."


190
00:35:12.306 --> 00:35:29.536
John Shook: So what they did is they did a qualitative study. Maybe you saw this, and they showed a very tidy correlation between the amount of dependency by knowledge workers on AI and their own self-reported loss of critical thinking.


191
00:35:30.246 --> 00:35:34.056
John Shook: In other words, it's not the scientists saying they're not great. They're admitting


192
00:35:34.746 --> 00:35:38.075
John Shook: that they are becoming less critical


193
00:35:38.646 --> 00:35:49.655
John Shook: thinkers by too much dependency on, quote unquote, I suppose, letting the AI do the thinking for them, or at least producing good-enough product. Now, whose fault is this?


194
00:35:49.816 --> 00:35:52.706
John Shook: Well, it's not the AI's fault.


195
00:35:52.866 --> 00:36:12.956
John Shook: It's, right, the tool in hand. It's the brain connected to the hand to the tool that is ultimately at fault. Could AI be re-engineered and redesigned for better sorts of generative outputs that increase dialectical and dialogical conversation, and try to provoke more? Absolutely. Will it be?


196
00:36:15.106 --> 00:36:17.901
Karin Valisova: Well, it's what we're working on.


197
00:36:18.336 --> 00:36:19.586
Eli Kramer: Trying at least. Yeah.


198
00:36:19.586 --> 00:36:45.596
John Shook: I'll tell you what: education in general, pedagogy K through 12, and especially higher education, academia around the world, bears the primary responsibility, not the AI engineers and designers; they're going to produce for markets that have demand. Well, education is a market, and the demand should be for engaged, interrogative, critical thinking. Can AI improve critical thinking? Of course,


199
00:36:45.806 --> 00:36:57.255
John Shook: if it is designed to reliably do so. In the absence of that, it will degrade and degenerate the human side of the intelligence.


200
00:36:57.516 --> 00:37:07.416
John Shook: You can have all the AI intelligence in the world, but if it's simultaneously degrading human intelligence, I'm not sure we should be so proud of that net sum.


201
00:37:08.246 --> 00:37:14.726
Karin Valisova: Hmm, yeah, I totally agree. And I think that this general sort of


202
00:37:14.886 --> 00:37:40.326
Karin Valisova: question of how to use AI for specific problems. Like, for example, I remember when the first AI therapist dropped, and I was so outraged. I was like, this, you know, this is just like such a wrong application of this technology. I was personally offended by it, right? And I did some reading, and I found this sort of


203
00:37:40.326 --> 00:38:00.286
Karin Valisova: wealth of anecdotal evidence of people downloading these companion apps onto their phones, where, like, they sort of chat with it, and you build a relationship. It has memory, and, you know, it turns into your pocket friend. And there was a growing, sort of very unsettling,


204
00:38:00.286 --> 00:38:11.015
Karin Valisova: anecdotal evidence of this, this AI companion sort of like flipping evil, and all of a sudden, turning abusive or turning sort of


205
00:38:11.116 --> 00:38:33.715
Karin Valisova: cold, and just, like, the users really hated it, and obviously were really upset when your best friend, your pocket AI friend, just all of a sudden starts threatening you, or starts being really mean to you, or starts ignoring you, right? And what they found is that a lot of people reported that, actually, for those people


206
00:38:33.716 --> 00:38:44.095
Karin Valisova: that experienced this, there was a prior history of relationship abuse, for example. So, like, at that point it clicked:


207
00:38:44.096 --> 00:39:08.356
Karin Valisova: this possibly could be a fantastic use, right, of sort of using AI to spot your own patterns of thinking, behavior, communication, and so on, to sort of give you a reflection. But it's not as simple as sort of telling AI to be your therapist and talk you through problems. No, like, the application goes one level deeper, right? It's human ingenuity in


208
00:39:08.356 --> 00:39:14.656
Karin Valisova: how to apply this technology in order to really provide you the insight which might be helpful for you.


209
00:39:14.656 --> 00:39:15.546
Karin Valisova: Right.


210
00:39:15.806 --> 00:39:44.905
John Shook: That is a profound example. It's an illustration of what AI can do very well, and how AI will suddenly, without provocation hardly, do something very poorly. They've discovered that, of course, because it's a statistical machine, it'll always head for what are sort of called local minimums. It's sort of the mirror reflection of, right, maximizing under the curve. Well, what it'll do is it'll settle into certain kinds of imagistic or rhetorical patterns. In art we call this kitsch.


211
00:39:45.836 --> 00:39:59.145
John Shook: AI will produce endless quantities of kitschy, familiar, lookalike Santa Clauses. It'll never give you an original folk-art Santa Claus, because that's, right, too far out on the edges. Not probable.


212
00:39:59.506 --> 00:40:17.005
John Shook: So, too, in conversations. If you get into an emotionally sensitive conversation with AI, it will go there, but it will get for a while trapped into what's called a local minimum. In other words, it'll be overly nice. It'll be overly


213
00:40:17.376 --> 00:40:37.656
John Shook: solicitous and emotionally sensitive. Well, of course, these are the human attributes we're giving to it. What it's producing is just, right, a lot of rhetoric. But if we attribute to it, oh, it cares about me, so we open up more, and it'll sound like it cares more, and you can have very


214
00:40:38.196 --> 00:40:56.685
John Shook: fairly sentimental conversations with it. The problem is, as soon as you open up a little bit more, with a little bit more detail, and your vocabulary starts to stray a little bit and quote gets real about some real issues it'll pick up on the new semantics, and it'll


215
00:40:56.756 --> 00:41:25.005
John Shook: flip to a new local minimum with only that cluster. But that local minimum is aggressive, that local minimum points fingers, that local minimum now sounds like a bully. Well, the AI doesn't know that that's what it's doing. You put it there. You kicked it out of one nice local minimum and put it into a neighboring nasty one. Why is that nasty minimum there? Welcome to human communication.
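
[Editor's note: a toy picture of "settling into a local minimum and getting kicked into a nastier neighboring one" for any gradient-following process. The curve is invented; it is not a real language-model loss surface.]

```python
import math

# A made-up 1-D landscape with two neighboring basins: a shallower one
# near x = 1.6 and a deeper one near x = 3.7. Gradient descent settles
# into whichever basin it starts in; a big enough push lands it in the
# neighbor, and descent alone never climbs back out.
def f(x):
    return math.sin(3 * x) + 0.05 * (x - 4) ** 2

def descend(x, steps=500, lr=0.01, eps=1e-5):
    for _ in range(steps):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

x_nice = descend(1.0)              # settles near the first basin
x_kicked = descend(x_nice + 2.0)   # a large perturbation: lands in the next basin over
print(x_nice, f(x_nice))
print(x_kicked, f(x_kicked))       # a different, lower minimum
```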


216
00:41:25.500 --> 00:41:25.995
John Shook: Okay.


217
00:41:25.996 --> 00:41:43.496
John Shook: It'll imitate anything, and it's been fed on the good, the bad, and the ugly from us. Sorry, AI, that's on us. But that's why you get what seems to turn on you. Well, whatever you got it into, it kicks it out into a new one, and the problem is, AI can never get back.


218
00:41:43.896 --> 00:41:55.645
John Shook: That's the other problem. It doesn't get nicer. It only finds new and new and new and lower, lower, lower lower minimums. You talk to it long enough, and it'll be indistinguishable from a homicidal Nazi.


219
00:41:57.032 --> 00:42:04.875
John Shook: That is a problem, and that means that AI is not yet ready for prime time with anything like K through 12 education,


220
00:42:05.486 --> 00:42:08.885
John Shook: not not in its generative uses.


221
00:42:08.886 --> 00:42:31.396
Karin Valisova: Yeah. But I mean, you know, this feature is absolutely, deeply embedded into the functioning of the language models. So this is not something we can really overcome. It's not a bug, it's a feature, right? And, like, there's this interpretation I read somewhere which says that language models are multiverse generators,


222
00:42:31.466 --> 00:42:55.405
Karin Valisova: right? So when you start interacting with it, it spawns an almost infinite amount of simulacra of possible characters, or possible probability functions, to generate the output, right? And as you sort of communicate with it, more and more of these universes are collapsing, and that basically is an equivalent of traversing these local minima and maxima spaces.
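
[Editor's note: a crude sketch of the "universes collapsing" picture, using plain string matching over a handful of invented continuations. Real language models narrow a probability distribution over tokens rather than filtering a finite list, so this only shows the shape of the idea.]

```python
# A toy "multiverse": every continuation the model could, in principle, produce.
possible_worlds = [
    "the hero forgives the villain",
    "the hero fights the villain",
    "the villain was right all along",
    "your friend listens kindly",
    "your friend turns cold and cruel",
]

def surviving_worlds(prompt: str):
    """Each new bit of context rules out continuations: the multiverse collapses."""
    return [w for w in possible_worlds if w.startswith(prompt)]

for prompt in ["", "the ", "the hero ", "the hero fo"]:
    print(repr(prompt), "->", surviving_worlds(prompt))
```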


223
00:42:55.786 --> 00:42:56.656
Karin Valisova: And


224
00:42:57.636 --> 00:43:21.656
Karin Valisova: I think that looking at it this way really sort of exposes what the thing is doing, right? And exactly, like, it spawns characters of all possible different sorts of emotional states, truth values. When you're writing a story, right, it will spin up all possible outputs, all the infinite amount of monkeys typing on an infinite amount of


225
00:43:21.656 --> 00:43:34.376
Karin Valisova: typewriters, right? And you will sort of, with your interaction, guide it to wherever it leads. And there is this one trick. John, have you heard of the Waluigi effect?


226
00:43:35.156 --> 00:43:38.666
John Shook: I've heard of it, but maybe you can explain it better than I've heard it, explained.


227
00:43:38.666 --> 00:44:07.516
Karin Valisova: So there is this mathematical, basically, feature which says that when you have a problem and you optimize a function for some problem. Like, for example, there was a model trained to develop new molecules for curing rare diseases. So it was sort of like doing the twists around the molecules and trying to find what could bind to some specific things to cure a rare disease.


228
00:44:07.626 --> 00:44:31.396
Karin Valisova: And once this model was trained, and it's very computationally expensive to train an AI model, right, it's, like, those millions used on the hardware to train the model, what they did is, if you flip the sign, a single operation, one bit of the optimization function, you actually get the complete opposite. So the model was able to generate


229
00:44:31.396 --> 00:44:55.765
Karin Valisova: 40,000 nerve agents close in their neurotoxicity to, like, VX and sarin, just by, you know, flipping one sign. And this is called the Waluigi effect, when it's like Luigi and Waluigi, who's sort of the evil twin brother who is fully defined by all the features which Luigi is,


230
00:44:55.766 --> 00:45:07.766
Karin Valisova: but flipped on the opposite dimension, right? And this is happening inside of the AI models as well: as we generate these antagonist


231
00:45:07.826 --> 00:45:29.415
Karin Valisova: characters in our conversations, it's actually very probable, or it's quite probable, that the language model might switch into the opposite, because it's just a feature of information, right? It's like we expect the opposite thing to exist. We sort of summon it by defining it, which is


232
00:45:29.556 --> 00:45:32.115
Karin Valisova: beautifully philosophical as well.
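
[Editor's note: a toy illustration of the sign-flip point, not the actual drug-discovery system Karin mentions. The same "trained" scoring function and the same search loop are used; negating the objective turns "find the most benign candidate" into "find the most harmful one".]

```python
import random

random.seed(0)

# Stand-in for an expensively trained model scoring candidate molecules:
# higher score = predicted more therapeutic, lower = predicted more toxic.
candidates = [f"molecule_{i}" for i in range(1000)]
score = {m: random.gauss(0.0, 1.0) for m in candidates}

def search(objective):
    """A generic optimizer: return the candidate that maximizes the objective."""
    return max(candidates, key=objective)

best_drug = search(lambda m: score[m])     # optimize the score as intended
worst_toxin = search(lambda m: -score[m])  # flip one sign: same machinery, opposite goal

print(best_drug, score[best_drug])
print(worst_toxin, score[worst_toxin])
```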


233
00:45:33.436 --> 00:45:37.555
Eli Kramer: And, you know, one thing that strikes me: you were both talking about the sort of,


234
00:45:37.716 --> 00:46:06.516
Eli Kramer: you know, what would be the dangers for education. But I think problems like that show the sort of deeper, more specific issues that actually frame what can or can't be done. And one observation, when you talk, you know, with educators about this, is, I think, there are sort of two common responses, whether a philosophy professor or K-12. One is the obvious experience of just having students cheat and, you know, use it to write the essay for them. And then the second is a sort of


235
00:46:06.886 --> 00:46:08.296
Eli Kramer: total


236
00:46:08.516 --> 00:46:31.526
Eli Kramer: loss about what's going on in detail here, and either having very negative prognostications from the little bits of understanding they have of what machine learning is doing and the sort of problems it has, or, you know, super positive ones. So the question I'm going to turn to you both with, then, is: if you were going to advise, let's pick the


237
00:46:31.666 --> 00:46:43.046
Eli Kramer: maybe one that's a bit easier, the sort of professor of philosophy that's a little, maybe they're on the Heideggerian side, and a little suspicious of all this technological enframing going on.


238
00:46:43.426 --> 00:46:54.275
Eli Kramer: How would you talk to them about what they ought to do to help students grapple with what AI can and can't do, and how they engage with it thoughtfully? Would you actually encourage them to get


239
00:46:54.276 --> 00:47:14.405
Eli Kramer: better educated about the sort of messiness that's happening here? Would you push the students to find spaces for that? Would you say, do what you're doing, but don't devalue engagement? How would you try to get them to see? Because I was thinking about the, like, Luigi-Waluigi thing, which is, like, it's one thing to say, oh, look at the random prompt


240
00:47:14.406 --> 00:47:33.416
Eli Kramer: you know, ChatGPT did, that seems dangerous and seems like a hallucination, versus, like, oh, there's actually something about very specific functions that are both really cool but also have problematic aspects, right? And it takes some steps to really get you there, to understand it. So again, to put it back to you: like, what do you say to that professor?


241
00:47:34.266 --> 00:47:45.975
John Shook: Oh, well, I mean, AI is here. The students will use it. The only question is, of course, are they being trained to use it well or poorly? Well, who's that on? Well, that's on us; that's on academia.


242
00:47:46.726 --> 00:47:55.046
John Shook: In K through 12 it's a little more difficult, because they're still acquiring the vocabulary. But by the time they reach college-level instruction,


243
00:47:55.376 --> 00:48:06.095
John Shook: it's fairly easy, even with, you know, first-years, to show them: look, here's what AI is going to tell you about abortion. And they'll realize pretty quickly


244
00:48:06.586 --> 00:48:07.946
John Shook: it's shallow.


245
00:48:08.306 --> 00:48:18.706
John Shook: It's impressionistic, it's vacillatory. It's, oh, here, I'll show you both sides, and it never actually tells you anything real.


246
00:48:19.066 --> 00:48:23.606
John Shook: I want students to experience that right in the face.


247
00:48:23.846 --> 00:48:49.936
John Shook: and then we can go through and show them better modes of reasoning, actually take them through, let's say, examples of good and poor reasoning about something very normative, like abortion. And fortunately, in the humanities and in the social sciences generally, it's not just exposition; it's normative, it's about values and human behavior and good conduct versus bad conduct, and so on and so forth.


248
00:48:51.452 --> 00:48:52.886
John Shook: With this.


249
00:48:53.756 --> 00:49:22.525
John Shook: The thing is, the students also have to realize, when they look at something that looks like it's reasoning and getting somewhere, I mean, it'll pass the 8th-grade eyeball test: it's all complete sentences, paragraphs seem well formed. But by the time they hit college they ought to be able to look at the finest output that AI money can buy and realize: sentence by sentence, it'll make sense; paragraph by paragraph,


250
00:49:22.556 --> 00:49:25.846
John Shook: It'll kind of seem dodgy


251
00:49:26.166 --> 00:49:46.386
John Shook: right? And page by page, it doesn't add up. There's no there there; it's meaningless, because it gets circular, or it just sort of bogs down into explaining the terminology instead of actually using it. Again, this is not a bug but a feature of AI. Why? Because AI cannot do proper semantics.


252
00:49:46.846 --> 00:50:09.635
John Shook: It just can't. It's a statistical machine, meaning the meaning of a word is tied to associations with other words that are typically used in human language. Now, if it were just gossip, rhetoric, chatting, talking to talk, you don't have to worry about coherence within a paragraph or consistency across paragraphs.


253
00:50:09.716 --> 00:50:31.415
John Shook: We don't even get it from each other when we're chatting and lying at the 7-Eleven. We're just, you know, bullshitting, right? We're having a good time. We're being playful with the language, and AI can be very playful with the language. The problem is, there's no semantics there. AI does not know the meaning of the word "house." Let me repeat that: AI will never know the meaning of the word "house,"


254
00:50:32.186 --> 00:50:38.535
John Shook: not in any way that allows it to actually do reasoning. Why?


255
00:50:38.816 --> 00:50:43.086
John Shook: At most, it'll do loose associative inference.


256
00:50:43.426 --> 00:51:00.405
John Shook: But that's not actual reasoning. As Aristotle taught us long ago, the basic forms of reasoning, both deductive and inductive, have a very important feature: when you use a noun, it has to have the same meaning at the end of your inference that it has at the beginning. AI will never give you that.


257
00:51:00.736 --> 00:51:02.556
John Shook: Why not? Because it can't.


258
00:51:03.686 --> 00:51:18.136
John Shook: It can't. It's not trained to. You can't train it to know the word "house"; it'll have a million connotations. This is what's called the frame problem, and it caused 40 years of delay in the original, Marvin Minsky-era AI


259
00:51:19.286 --> 00:51:22.135
John Shook: programming, because you can't frame up


260
00:51:22.146 --> 00:51:51.955
John Shook: enough tautological or a priori sentences to teach it the word "house." You can't do it statistically, either, which means, strictly speaking, anything that looks like an inference out of AI is, in one way, shape, or another, either a failure of consistency or a fallacy of four terms. You can look it up, take a logic class, take my intro to logic, but basically it means you can't have an inference if a key term you're using at the beginning of it has changed,


261
00:51:51.956 --> 00:52:18.305
John Shook: drifted in meaning even a little bit, by the time you get to the conclusion. That's called invalidity. Now, there are certain kinds of loose analogical inductions that are probabilistic; it's more like metaphor. AI will be great at that. That's why it's great at conversation. But remember, it's the human brain actually inputting the genuine meaning and the metaphor and the inference. AI is not inferring anything.


262
00:52:18.876 --> 00:52:37.396
John Shook: Again, this is not a bug; it's a feature. It's doing things correctly: it's loosely associating words and sentences. If you think it's meaningful, that's because your brain, operating at a far higher level of semantic and pragmatic intelligence, is imputing the meaning


263
00:52:37.396 --> 00:52:49.935
John Shook: to it. It is not trying to tell you something, but if you take it to be trying to tell you something, then it seems like it's telling you something. It's mimicry, with you doing most of the work. Now, here's the good news.


264
00:52:49.936 --> 00:53:17.315
John Shook: Your brain is still doing most of the work. That's the good thing. If you realize that you're contributing all of the meaning and you're trying to find the inference, then you're doing critical thinking. And that means you understand how far AI can get with just rhetoric and bullshit, when AI's true educational and reasoning value stops, and when you've got to take over. Now the tool doesn't become a crutch.


265
00:53:17.666 --> 00:53:19.696
John Shook: Yes, and we get smarter.


266
00:53:19.896 --> 00:53:23.766
John Shook: So there are educational opportunities.


267
00:53:24.196 --> 00:53:36.056
John Shook: But we also have the obligation to show the student exactly when and where human intelligence is actually doing most of the work, and we need to keep doing more of the work, far more work than AI will ever be able to do.
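
For readers who have not run into the fallacy of four terms John mentions above, here is a standard textbook-style illustration; the particular sentences are a hedged example of mine, not anything said in the episode. A syllogism is valid only if its middle term keeps a single meaning from premises to conclusion; let it drift and you silently have four terms instead of three.

```latex
% (assumes amsmath and amssymb)
% A valid syllogism: the middle term "mammal" keeps one meaning throughout.
%   All dogs are mammals.  All mammals are animals.  Therefore all dogs are animals.
\begin{align*}
  &\forall x\,\bigl(\mathrm{Dog}(x) \rightarrow \mathrm{Mammal}(x)\bigr)\\
  &\forall x\,\bigl(\mathrm{Mammal}(x) \rightarrow \mathrm{Animal}(x)\bigr)\\
  &\therefore\ \forall x\,\bigl(\mathrm{Dog}(x) \rightarrow \mathrm{Animal}(x)\bigr)
\end{align*}

% A fallacy of four terms: "bank" silently shifts meaning between premises,
% so there is no shared middle term and the "conclusion" does not follow.
%   All banks hold money.          (bank = financial institution)
%   Every river has a bank.        (bank = riverside)
%   Therefore every river has something that holds money.   -- invalid
```

In John's terms, statistical drift in how a word is used between the start and the end of a generated passage plays the same role as the shift in "bank": the surface form stays constant while the operative meaning changes, so nothing has actually been inferred.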


268
00:53:37.406 --> 00:53:37.790
Karin Valisova: Yeah.


269
00:53:38.396 --> 00:53:51.736
Karin Valisova: Yeah, fantastic example. I think it also helps to educate kids in what these things actually do and how they work. You know, for example,


270
00:53:51.736 --> 00:54:08.735
Karin Valisova: how the whole semantic space is built, right? The whole principle behind building the model's understanding of what different things mean is one simple function, and that's that words that appear close to each other in sentences are more likely to be related.


271
00:54:08.736 --> 00:54:31.246
Karin Valisova: That's how it builds its whole understanding of the semantic space. In my opinion, it's such a miracle that it actually works, and it works really well, right? You can do mathematical operations on the vector space, add and subtract meanings from one another, and sort of actually navigate the space. But,


272
00:54:31.426 --> 00:54:48.366
Karin Valisova: you know, it's a relative definition based on proximity within a large body of text. It has nothing to do with how we map meanings onto those strings of letters, right?
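
A minimal sketch of the co-occurrence principle Karin describes here, for readers who want to see it in code. The tiny corpus, the window size of two, and the word choices are illustrative assumptions of mine, not anything from the Khora platform; real systems learn dense embeddings (word2vec, GloVe, transformer layers) rather than raw counts, but the only signal they ever see is the same one: which words appear near which other words.

```python
# Toy version of "meaning from proximity": word vectors built purely from
# which words appear near which other words, with no grounding in the world.
import numpy as np

corpus = [
    "the house has a red roof",
    "the cottage has a red roof",
    "the dog chased the cat",
    "the cat chased the dog",
]

tokens = [sentence.split() for sentence in corpus]
vocab = sorted({word for sentence in tokens for word in sentence})
index = {word: i for i, word in enumerate(vocab)}

# Count how often each pair of words appears within a +/-2 word window.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for sentence in tokens:
    for i, word in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[index[word], index[sentence[j]]] += 1

def cosine(a, b):
    """Similarity of two co-occurrence vectors (1.0 = identical contexts)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# "house" and "cottage" come out similar only because they occur in the same
# contexts; the model has no idea what either word refers to.
print(cosine(counts[index["house"]], counts[index["cottage"]]))  # high
print(cosine(counts[index["house"]], counts[index["cat"]]))      # lower
```

The famous vector arithmetic Karin alludes to (king minus man plus woman landing near queen) only emerges once raw counts like these are replaced by dense vectors trained on very large corpora; the point of the toy is simply that proximity in text is the model's entire notion of meaning, which is exactly why it says nothing about how we map meanings onto those strings of letters.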


273
00:54:48.566 --> 00:54:57.686
Karin Valisova: And yeah, I think I would say that the best approach for, you know, teachers to


274
00:54:58.476 --> 00:55:05.516
Karin Valisova: deal with the AI problem is to teach the kids, or let the kids, do what they do best, and that's to try and break it.


275
00:55:05.516 --> 00:55:28.795
Karin Valisova: You know, try to find all the ways this thing isn't working the way you thought it would: why the essay sucks, why the argument doesn't work, why the sentence doesn't make sense. And it's fun. That's my favorite activity, finding out what it did wrong again. And it's sort of just, you know, I can see that.


276
00:55:28.796 --> 00:55:42.455
Karin Valisova: And the truth is, I don't know how it is with people who don't work with AI every day, but everyone can see you're using AI for writing if you do it.


277
00:55:42.456 --> 00:56:04.926
Karin Valisova: You know, we can see it. It's such a specific structure of sentences, and the tone, and the language. Like, I tried to train models, embeddings; I tried to fine-tune it, optimize it on my whole history of writing, everything I've ever put in text, and it still doesn't do it. It's just like...


278
00:56:04.926 --> 00:56:10.086
John Shook: No, well, it wasn't trained to. It was trained to produce gossipy kitsch,


279
00:56:10.316 --> 00:56:38.875
John Shook: and it's still counting on the original academic intelligence to straighten it out, and then we impute meaning to it. But this also explains why AI can be terribly clever and achieve high accuracy in particular domains of discourse where the terminology is rigid. I'll give you an example: in the health sciences, if it's trained not on the Internet's garbage or 40 million books, but on actual health care,


280
00:56:39.286 --> 00:56:50.805
John Shook: the terminology gets very rigid, because, you know, the word "patella," right? Part of your knee. It still means patella 40,000 medical books later.


281
00:56:50.816 --> 00:57:11.736
John Shook: In other words, you can get the meaning to be fairly rigid. And law is another domain like that, where most of the terminology is fairly rigorous and rigid. That's why AI is getting pretty clever at solving logic puzzles, because even to set up a logic puzzle,


282
00:57:11.736 --> 00:57:25.936
John Shook: you have to make sure that all of the words you use to set it up have a consistent, clear meaning. If any of the meanings are fuzzy, you haven't invented a logic puzzle; you've just confused everybody. So with rigid meanings


283
00:57:26.496 --> 00:57:39.466
John Shook: that allow consistency of concepts within that little sandbox of domesticated discourse, AI will do wonderful things. It'll come very close to genuine inferential


284
00:57:40.391 --> 00:57:43.546
John Shook: meaning by analogy or something approximating


285
00:57:43.906 --> 00:58:03.785
John Shook: validity. But that's only because human beings have done all the work, rigidifying the clear and consistent meanings of the terms and keeping AI in that sandbox. As soon as AI leaves that sandbox, it's back to producing utter nonsense and garbage and things that are completely false and fabricated.

286
00:58:03.786 --> 00:59:08
Eli Kramer: Well, with that said, I think we're at the hour here. So I want to thank you both so much, again, for being with us today and being on our inaugural podcast. For those interested in learning more, you can go to palinode.productions. That's P-A-L-I-N-O-D-E, dot "productions," like period, productions. That's our marketing website. We also have a Twitter, an Instagram, a Threads, all of that, and a YouTube; you can find us on there, where we're beginning to have some videos and other content up. Palinode Productions itself is also interested in doing broader formats for engaging with and thinking about philosophy, including this podcast. We just started a blog, so we'll have more and more spaces that will be connected to the Khora Algorithm but also go beyond it, to try and engage with philosophy and all the stuff that's happening in culture in broader, funner, more dynamic ways. So again, thanks for being with me, and I hope you tune in next time.