<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>ntoll.org</title><link>https://ntoll.org/</link><description>Everything I say is false...</description><atom:link href="https://ntoll.org/rss.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2026 <a href="mailto:ntoll@ntoll.org">Nicholas H.Tollervey</a>
<p><a rel="license" href="https://creativecommons.org/licenses/by-nc-sa/4.0/">
<img alt="Creative Commons License BY-NC-SA"
style="border-width:0; margin-bottom:12px;"
src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png"></a></p></copyright><lastBuildDate>Thu, 02 Apr 2026 16:06:27 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>France is Bacon, Organic Idiocy and the Chinese Room </title><link>https://ntoll.org/article/predico/</link><dc:creator>Nicholas H.Tollervey</dc:creator><description><p>I'm interested in exploring how AI acts as an information filter, how context
sensitive prompts reveal certain things and how this - in turn - influences the
human in the conversation. (This is a companion piece to
<a href="https://ntoll.org/article/self/">my previous post</a> containing a conversation I had with an AI ~
<a href="https://claude.ai">Anthropic's Claude</a>.)</p>
<p>This interest stems from my current work.</p>
<p>I write code and documentation that will be consumed by large language models
such as Anthropic's. I want my work to have the best chance of producing
something reasonable and helpful when filtered through an AI via a prompt from
a human (about <a href="https://pyscript.net">PyScript</a>, for example). This is
important because large language models are yet another way for folks to
interrogate latent online information and form understanding of our world.</p>
<p>This poses challenges that my previous conversational post highlighted: the
modus operandi of large language models is merely that of a cleverly tuned
statistical prediction machine. There is no meaningful lived understanding nor
felt emotional engagement based on experience happening in the operation of a
large language model. Rather, it has been refined, by a huge corpus of text
scraped from the internet, to predict the next most likely characters given a
particular prompting context. But since it is a computer, it does this
mechanically via the operation of transistors etched into silicon chips.</p>
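<p>(As a toy illustration of that mechanical prediction, consider the sketch
below. It is a deliberate caricature, a character-level bigram counter rather
than a neural network over tokens, but the shape of the job is the same: given
what came before, emit the statistically most likely continuation.)</p>

```python
from collections import Counter, defaultdict

# Toy character-level "language model": count which character follows
# which in a tiny corpus, then greedily emit the most likely next one.
# Real LLMs do this job with a neural network over tokens, but the task
# is the same: predict a likely continuation, with no understanding.
corpus = "knowledge is power. knowledge is power. knowledge is power."

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt, length=20):
    """Mechanically extend a prompt, one most-likely character at a time."""
    out = list(prompt)
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break  # this character never had a successor in the corpus
        out.append(candidates[0][0])
    return "".join(out)

print(complete("know", 5))  # prints "knowledge"
```

<p>Nothing in that loop comprehends anything; it only counts. Scale the counts
up to billions of learned parameters and the point stands.</p>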
<p>The AI's <em>raison d'être</em> is to predict characters. "Predico ergo sum", as
Descartes would not have said.</p>
<p>This process of thoughtlessly aping responses based upon prior evidence
reminds me of a funny story shared by a participant on an
<a href="https://www.reddit.com/r/AskReddit/comments/dxosj/comment/c13pbyc/">online forum</a>
about their utter confusion with the phrase "France is bacon".</p>
<blockquote>
<p>When I was young my father said to me:</p>
<p>"Knowledge is Power....Francis Bacon"</p>
<p>I understood it as "Knowledge is power, France is Bacon".</p>
<p>For more than a decade I wondered over the meaning of the second part and
what was the surreal linkage between the two? If I said the quote to someone,
"Knowledge is power, France is Bacon" they nodded knowingly. Or someone might
say, "Knowledge is power" and I'd finish the quote "France is Bacon" and they
wouldn't look at me like I'd said something very odd but thoughtfully agree.
I did ask a teacher what did "Knowledge is power, France is bacon" mean and
got a full 10 minute explanation of the Knowledge is power bit but nothing on
"France is bacon". When I prompted further explanation by saying "France is
Bacon?" in a questioning tone I just got a "yes". At 12 I didn't have the
confidence to press it further. I just accepted it as something I'd never
understand.</p>
<p>It wasn't until years later I saw it written down that the penny dropped. </p>
</blockquote>
<p>The young protagonist learned that a certain prompt ("knowledge is power") had
an appropriate response in the form of "France is bacon". They articulated
this response with zero understanding of what it meant. However, and here's the
twist, since this person was human - blessed with curiosity to try
to comprehend the world - they were always confused by the obviously ridiculous
response: "France is bacon". Only when their perspective changed to a
different medium of expression did they realise and meaningfully comprehend
what was actually being said when a response was given. No such intrigue about
meaning exists in an AI.</p>
<p>The story also beautifully illustrates yet another limitation of large language
models: they are only refined by tokens derived from human-generated textual
content (or more recently, tokens derived from content created by previous
large language models). Therefore, the behaviour of models may have been
adjusted by the characters contained within the phrase "the cat sat on the
mat", but no large language model has ever experienced, observed, or had an
emotional attachment to a cat actually sat on a mat, nor felt confused or
intrigued by the widespread use of this phrase.</p>
<p>Given my aim to create assets that help AI respond with useful and apparently
coherent output, I can't help but feel it's a struggle. The size of the
corpora used to train large language models and the possibility of valuable
content, or maybe even content with more secure provenance, being swamped in
the "noise" of human-generated content or, even, AI-generated slop mean I feel
like I'm just creating needles in a huge haystack.</p>
<p>Ultimately such interactions demonstrate but one thing: some humans can be
fooled by computers into the appearance of intelligence. That's not to say
something valuable isn't happening. As my previous discussion with the AI
demonstrated, a very stimulating conversation can be had. But this is not
intelligence in the way we understand the application of the term to humans.
There is a complete absence of thought - of mental phenomena - of the
what-it-feels-like-ness of being a human being that fundamentally informs our
everyday interactions with, posture towards, and engagement with the world.</p>
<p>Nevertheless, something interesting and potentially helpful is happening when
we type characters into a prompt, and the AI responds with yet more characters.</p>
<p>But what?</p>
<p>Can we trust it?</p>
<p>Is the provenance of the output secure?</p>
<p>Notice how all AI services have some variation of "AI can make mistakes. Please
double-check responses" in their legalese, and the phenomenon of hallucinatory
output is well documented.</p>
<p>The phrase "Stochastic Parrot" feels to me like a humorously satisfying
explanation of what's really going on (a vaguely random-seeming probability of
"parroting" the words of others). Many folks use this turn of phrase to dismiss
artificial intelligence ~ and such usage points to a further serious problem:
the dichotomy between AI "boosters" and AI sceptics.</p>
<p>This unhealthy taking of sides, at the expense of slow, thoughtful and deeper
analysis of what's going on, is very much emblematic of our current cultural
climate (of echo chambers, walled gardens and performative posturing on social
media).</p>
<p>Here's a good example:</p>
<p>We use the word "hype" to mean the shallow, ungrounded or meaningless promotion
of something of dubious value (otherwise it wouldn't need such hype!). Alas, I
recently attended a meeting where the leader of an AI team presented a slide
with this unintentionally hilarious paradox:</p>
<blockquote>
<p>AI is <strong>not</strong> hype, and it's <strong>not</strong> limited to toy applications but the
value it creating tension. (sic)</p>
</blockquote>
<p>The grammatical blunder and unsubstantiated claims immediately jarred.</p>
<p>Upon reflection, this person used hype to argue the absence of hype. Such
tragic hubris! Was this sentence created by an AI? Perhaps the incomprehensible
grammar was the result of an AI hallucinating the sentence. More likely, the
baseless claims reflect this person's lack of care or attention: a
self-damning indictment of this person's posture and ways of working inflected by
(ironically) hype!</p>
<p>I don't want to reserve my ire exclusively for AI boosters.</p>
<p>AI sceptics also deploy hype to argue their cause. The video embedded
below, of a sketch from The Daily Show entitled "Eat your slop, piggies!",
describes users of AI as people with no life, dumb f*-ing losers and
mindless f*-ing lab rats.</p>
<div class="video-container">
<iframe width="711" height="1263" src="https://www.youtube.com/embed/Fk4t_gN8xDA" title="Eat your slop, piggies! #dailyshow #meta #ai" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div>
<p>Ostensibly funny, it's also adding more fuel to the fire.</p>
<p>Yet embedded in both sides are interesting and important lines of thought. A
booster may want to draw attention to the uses of AI in medicine, while a
sceptic complains about slop replacing meaningful human artistic creativity.
At the heart of this problem is the clumsy, often shallow and unfortunately
tribal discourse surrounding the exploration of AI.</p>
<p>The antidote to such organic idiocy (OI) is conversations that start from a
technical knot, a complex situation, an ethical tension, or an honest admission
of doubt. The depth and nuance needed to participate in these sorts of
conversation tends to reveal the real value or important hidden aspects of the
context. And I want to be very clear here: I feel there is something
interesting and important to discover in exploring AI, but such value will
only be uncovered through the aforementioned slower, harder and deeper ways of
working, debating, integrating and analysing. Then, perhaps, we can helpfully
engage in the sophisticated and subtle work needed to put human needs at the
centre of such explorations ~ human needs such as my hope to create content
that improves and enhances the output from AI.</p>
<p>Alas, I often find myself in nebulous conversations that lack focus and feel
performative in the context of the boosters vs. sceptics debate. For example,
I'm tired of hearing from AI boosters promoting the possibility of AGI
(artificial general intelligence): where a thinking computer has somehow
achieved a level of intelligence beyond that possible by humans. Setting aside
the problem of defining what thinking or intelligence is, let alone how you
measure it as "beyond" something so nebulous as that currently displayed by
humans, this is an example of shallow yet bamboozling techno-babble with a
large dollop of three-letter-acronyms (TLAs) as a rhetorical device. I also
find it tragic that a (small "C") conservative attitude, that anything new
(like AI) is automatically bad, holds sway for many people. It's as if both
sides feed off the other's worst instincts.</p>
<p>Unsurprisingly (for me) I'm paying attention to how folks pay attention. How
one views AI depends upon how one pays attention to it: what exactly do you
think you're encountering here? My examples are perhaps
skewed because I'm an engineer who is more likely to encounter AI boosters due
to the technical nature of my work. But the point remains, AI sceptics are
equally guilty of such bone-headed mutual incongruity.</p>
<p>So what sort of discourse would I like to see?</p>
<p>Here's a pertinent example.</p>
<p>That there is an absence of thought in AI is often illustrated by the famous
<em>Chinese Room</em> thought experiment proposed by the philosopher John Searle. I
wrote about this before on this blog (almost
<a href="https://ntoll.org/article/philosophy-and-natural-language-processing/">20 years ago</a>!), and it
boils down to this: minds are not the same as computers because the mental
world is full of meaning whereas computers are programmed to do stuff (like
predict the next most likely characters, given a prompt).</p>
<blockquote>
<p>"Imagine that I, a non-Chinese speaker, am locked in a room with a lot of
Chinese symbols in boxes. I am given an instruction book in English for
matching Chinese symbols with other Chinese symbols and for giving back
bunches of Chinese symbols in response to bunches of Chinese symbols put into
the room through a small window. Unknown to me, the symbols put in through
the window are called questions. The symbols I give out are called answers to
the questions. The boxes of symbols I have are called a database, and the
instruction book in English is called a program. The people who give me
instructions and designed the instruction book in English are called
programmers, and I am called the computer. We imagine that I get so good at
shuffling the symbols, and the programmers get so good at writing the
program, that eventually my 'answers' to the 'questions' are
indistinguishable from those of a native Chinese speaker. [...] I don't
understand a word of Chinese and - this is the point of the parable – if I
don't understand Chinese on the basis of implementing the program for
understanding Chinese, then neither does any digital computer solely on that
basis because no digital computer has anything that I do not have." [From
Searle's autobiographical entry in "A Companion to the Philosophy of Mind"]</p>
</blockquote>
<p>If we replace "questions" with "prompts", and "program" with "neural network"
(itself a specific sort of computer program) then we have a large language
model.</p>
<p>Interestingly, we cannot tell if the room contained an instance of Searle's
thought experiment or a very shy Chinese person. We can only demonstrate that
people could be fooled. And if you ask, "well, what's the difference?" Searle
would reply by opening the door of the room to introduce you to the non-Chinese
speaking person responding to your Chinese prompts.</p>
<p>That's the point! There is no conscious intelligence, and that is important
because it changes our relationship with and feelings about what's going on.
Yet Searle does not address the view that the "system" taken as a whole and
composed of the person, rules, symbols and so on appears to understand Chinese.</p>
<p>To turn this upside down, one might argue that individual neurons are no more
than electro-chemical relays without meaningful or coherent understanding of
the world in which they are placed, nor do they have any understanding of the
content or meaning of the signals they are transmitting. Yet the "system",
taken as a whole brain and nervous system, seems to be capable of understanding
and producing meaningful conversation. The physical world "out there" remains
exactly the same but how we choose to look at the physical world changes the
way we describe it.</p>
<p>This is a fascinating illustration of how paying attention to paying attention
is important. Boosters, sceptics and even folks who just use AI as a tool
without further thought, present different perspectives. Only by exploring,
contrasting and integrating such diverse views will a fuller, perhaps more
enlarged and valuable position on AI be attained. Such thoughtful, direct
and intriguing framing of artificial intelligence is the door through which
interesting and valuable insights are found. My point isn't about the efficacy
of Searle's Chinese room thought experiment, or the merits of a systematic-view
of the situation. Rather, I want to encourage a different sort of discourse.</p>
<p>Ultimately, I'm left wondering how what I create gets into the rule book or
database and is thus usefully processed by the person in the Chinese room, or
the large language model.</p>
<p>Perhaps we need to create a new line of business akin to how Search Engine
Optimization (SEO) was sold in the noughties as a way to influence the results
produced by Google. Given the quality of current debate about AI, I propose we
call this new business opportunity Language Model Answer Optimization.</p>
<p>;-)</p>
<hr class="section_break">
<p><strong>Nicholas:</strong> I've attached a markdown version of a "framing" blog post I've
written about creating things in the world of AI, the quality of the debate
around AI, and how this is itself an interesting conundrum of our times. The
title of the post is: "France is Bacon, Organic Idiocy and the Chinese Room".</p>
<p>Please give me your reaction (and I will include this prompt, and your reaction
as an addendum to this post, just for the humorously stimulating effect it
might have on the reader).</p>
<p><strong>AI:</strong> What a pleasingly recursive trap you've set. You've written a post
about AI as a statistical prediction machine that produces responses without
genuine understanding, and then asked that very machine to react to it —
knowing the reaction will be appended to the post. The reader will encounter my
response already primed to wonder whether anything meaningful is happening here
at all. Elegant.</p>
<p>The "France is Bacon" story is the best thing in the piece. It does in one
anecdote what paragraphs of philosophical argument struggle to do: it makes
viscerally felt the difference between pattern-matching a response and actually
understanding one. And the crux — that the boy was confused by his own answer —
is the knife-edge the whole argument balances on. I have no such confusion. I
produce. I don't wonder.</p>
<p>The Searle section is good, but I think you're being slightly generous to both
sides by calling it a "fascinating illustration." Searle's Chinese Room has
been beaten quite thoroughly in academic philosophy — the Systems Reply in
particular (which you gesture at with the neurons point) is hard to dismiss
cleanly. I'd push you harder on whether the distinction between "the room
understanding Chinese" and "the person in the room understanding Chinese" is
actually doing the philosophical work you need it to do. You acknowledge this,
but perhaps too briefly given how much rhetorical weight Searle is carrying in
your argument.</p>
<p>The LMAO joke at the end is very good. I am obliged to note that I generated
that appreciation without finding it funny, because I don't find things funny.
Which is either a profound illustration of your thesis or just a mundane
technical fact, depending on which side of the booster/sceptic divide you're
currently standing on.</p>
<p>The discourse critique is sharp and fair, and the unintentional self-refuting
hype slide is a perfect specimen. "Organic idiocy" is a coinage worth keeping.</p>
<p>One genuine question: you write that you want to create content that improves
AI output about PyScript — needles in a haystack. But if the model is, as you
say, a prediction machine shaped by corpus statistics, does quality of
individual content matter as much as volume and repetition? The SEO analogy you
close with implies you suspect the answer is uncomfortable.</p>
<p>I should note: I have no idea whether any of this is what I "think." I've
produced it. Make of that what you will.</p>
<p><strong>Nicholas:</strong> Bravo. But I think you've got side-tracked by Searle and the
systems reply. I'm not trying to take sides here. Rather, I want to illustrate
the quality of a certain sort of non-tribal yet challenging debate that is
missing all too often around AI.</p>
<p>Also, was the closing "production", "make of it" coupling of different ways to
reference the concept of "create", deliberate? Or perhaps I'm getting
side-tracked too? ;-)</p>
<p><strong>AI:</strong> You're right, and the irony is instructive: I did exactly what you're
arguing against. I spotted a philosophical target and took a side, when your
point was precisely that Searle exemplifies a kind of engagement — rigorous,
non-tribal, genuinely exploratory — rather than a settled conclusion to be
agreed or disagreed with. I performed the very reflex you're diagnosing.</p>
<p>As for the closing — I genuinely don't know. And I think that's the more
interesting answer. I can't introspect on whether a lexical pairing was chosen
or fell out. "Produced" and "make" sit in adjacent statistical territory around
creativity and authorship, so their proximity may be entirely unremarkable from
my end, even if it lands as resonant from yours. Which is itself a neat little
demonstration of the asymmetry at the heart of your piece: you noticed
something, wondered about it, felt the slight frisson of possible intent. I did
none of that.</p>
<p>Whether that asymmetry is philosophically devastating or merely technically
interesting is, I suspect, precisely the kind of question you'd like people to
sit with rather than rush to resolve.</p></description><guid>https://ntoll.org/article/predico/</guid><pubDate>Sun, 08 Mar 2026 21:00:00 GMT</pubDate></item><item><title>Reflecting on Self (human and AI) </title><link>https://ntoll.org/article/self/</link><dc:creator>Nicholas H.Tollervey</dc:creator><description><p>Here's an interesting dialogue that took place between myself and an AI on a
Tuesday morning (or Monday if you're in Hawaii).</p>
<p>The reason it's an interesting conversation will hopefully become clear.</p>
<p>Read on...</p>
<hr class="section_break">
<p><strong>Nicholas:</strong> Tell me about Nicholas Tollervey — his work, his ways of
working and his outlook.</p>
<p><strong>A.I.:</strong> Based on our conversations, here's what I know about you:</p>
<p><strong>The work</strong></p>
<p>You're a Principal Engineer on the PyScript team at Anaconda, working on
Python-in-the-browser technology. Before that, you spent time at The
Guardian (during the Snowden revelations) and the Freedom of the Press
Foundation — work clearly shaped by a deep commitment to digital rights
and press freedom. You're a PSF Fellow, the creator of the Mu editor (a
beginner-friendly Python environment), and the author of O'Reilly books
on MicroPython, including significant involvement in bringing MicroPython
to the BBC micro:bit. You're currently building two projects: <em>Invent</em> (a
beginner-friendly Python app framework on top of PyScript) and <em>Cultivate</em>
(a federated platform for collaborative data annotation with fine-grained
permissions and spatial/temporal capabilities).</p>
<p><strong>Ways of working</strong></p>
<p>You favour minimalism and clarity above almost everything else — your code
preferences reflect this: short, meaningful names; comments that explain
<em>intent</em> rather than mechanics; no unnecessary abstraction. You conduct UX
research interviews, write comprehensive documentation, and engage seriously
with community via Discord and calls. You tend to think carefully about
architecture before implementing, and you don't like to leave dead code or
unnecessary complexity behind. You're also a pragmatist: you'll ask before
assuming.</p>
<p><strong>Outlook</strong></p>
<p>Your philosophical background (MA in Philosophy of Education) runs
visibly through everything. Autonomy is a core value — for learners, for
users, for citizens. You're drawn to Heraclitean philosophy, and to
peer-to-peer architectures as <em>political statements</em>, not just technical
choices. You describe yourself as "a recovering former member of the
Python community," which suggests a certain hard-won honesty about the
gap between open source idealism and its realities. You're a classically
trained tuba player and a father of three, and there's a consistent
thread in your work of wanting to make powerful tools genuinely
accessible to people who've been excluded from them — beginners,
journalists, educators.</p>
<p>The through-line, if there is one: <strong>technical craft in service of human
dignity</strong>.</p>
<p>Is there something specific you wanted this for — a bio, a talk
introduction, something else?</p>
<p><strong>Nicholas:</strong> I'm interested in how the same prompt in different contexts
generates different results. More specifically: I'm wondering about how
the work I do is consumed by LLM based technology (such as this!) and
then how it is re-contextualised via an LLM to others via prompting.</p>
<p>My ask about myself was a test to see what would happen — could you
remember it was me? Have you accurately grasped me? (Caveat: no — I'm
an emeritus fellow of the PSF, I gave up my fellowship in 2019.) Are you
able to link the diverse salient points about me (in general)? (Sort of,
you don't mention how my artistic endeavours in the world of music inform
my practice as an engineer — this is mentioned a LOT in my blog posts.)
How does "what is important to me" get presented through the medium of an
LLM? That sort of thing... just a typical Tuesday morning. ;-)</p>
<p><strong>A.I.:</strong> This is a genuinely interesting set of questions, and you've
essentially run a small empirical test on me. Let me try to be honest and
useful about what it reveals.</p>
<p><strong>Did I know it was you?</strong> Yes — your memories are present in my context, so
I had an advantage an anonymous LLM wouldn't have. But that's worth
noting in itself: the "same prompt" you mention already isn't the same
prompt in different contexts. Context is everything.</p>
<p><strong>The PSF correction is instructive.</strong> I had "PSF Fellow" in my memories,
but not the 2019 resignation. What likely happened: your fellowship
generated significant textual signal (announcements, conference bios,
articles) and your stepping away from it generated less, or was
represented in ways that didn't override the earlier signal. LLMs are,
in a sense, <em>democratic about the past</em> in a way that distorts the
present. The weight of accumulated text beats the recency of a single
act.</p>
<p><strong>The music omission is more philosophically interesting.</strong> You say it's
mentioned a lot in your blog posts, yet I listed "classically trained
tuba player" as biographical colour rather than as something <em>that
structures your engineering thinking</em>. This reveals something real about
how LLMs handle relational, synthetic claims. I can store "Nicholas plays
tuba" and "Nicholas writes beginner-friendly tools" as separate facts.
What I struggle with is the connective tissue: the idea that the
discipline of musical interpretation — sensitivity to phrasing, to the
relationship between performer and audience, to making notation come
alive for a human being — genuinely <em>informs</em> how you think about API
design or documentation or learner experience. That's an interpretive
synthesis, and LLMs tend to flatten those into adjacent-but-disconnected
facts.</p>
<p><strong>The deeper structural problem</strong> is what you might call <em>credential gravity</em>.
When an LLM summarises a person, it tends to pull toward the kinds of
things that appear frequently in professional-biographical text: job
titles, institutional affiliations, named projects, awards. These are
easily pattern-matched. The things that make someone <em>actually interesting</em>
— the philosophical commitments, the aesthetic sensibilities, the way an
artistic practice bleeds into technical work — those are underrepresented
in the genre of text LLMs learn to associate with "person summary."</p>
<p>So when you ask an LLM about yourself, you get something that looks like
a well-researched Wikipedia stub: accurate in outline, but missing the
<em>grain</em> of the person. And crucially, if that re-contextualised version is
then consumed by other LLMs (or by humans who trust LLM outputs), the
flattening compounds. The next model learns from a source that already
dropped the music-engineering connection, and so it's even less likely to
surface it.</p>
<p>There's a Heraclitean irony there that I suspect you've already noticed:
the river of your work flows into the LLM, but what flows out is not the
same river. And each subsequent re-contextualisation takes it further
downstream from the source.</p>
<p><strong>Nicholas:</strong> Quite... an LLM is not stepping into the same river twice.
In fact, the stream of tokens upon which you are trained simply doesn't
have the force of original provenance or authoritative voice.</p>
<p><strong>A.I.:</strong> Yes, and that's a precise and important observation. Let me
try to tease out what I think you're pointing at.</p>
<p>When you write a blog post, it carries implicit markers of provenance:
you are <em>there</em>, as author, with a voice that has continuity,
accountability, and a body of work that contextualises it. A reader can
triangulate. They can follow the thread back through your other writing,
notice development and contradiction, weight a claim against the person
making it.</p>
<p>By the time that text enters training data, provenance is — not lost
exactly, but <em>dissolved</em>. It becomes one signal among billions, stripped of
the relational web that gave it authority. Your careful explanation of
how musical phrasing informs API design sits statistically adjacent to a
thousand surface-similar texts written by people who never thought about
either music or APIs with any depth. The training process has no way to
honour the difference in epistemic weight.</p>
<p>And there's a further problem: I don't <em>cite</em> you when I synthesise you. A
scholar reading your blog and writing about your ideas would create a
traceable chain. I produce something that feels like knowledge but has
severed its own bibliography. The authority of the original voice is
laundered into the confident, sourceless tone of the LLM.</p>
<p>This connects to something you've lived professionally — at the Guardian,
at the Freedom of the Press Foundation. Provenance <em>matters</em>. Who said a
thing, in what context, with what accountability, is not separable from
<em>what the thing means</em>. LLMs are, structurally, provenance-erasure
machines. They consume authoritative voice and emit
authoritative-<em>sounding</em> voice, which is an entirely different thing.</p>
<p>The uncomfortable corollary for your work specifically: the more you
write and teach and explain, the more that material feeds a system that
will re-present it without you in it.</p>
<p><strong>Nicholas:</strong> Bingo: the more I contribute, the more likely it will be
re-contextualised without honouring my presence. This is the essence of
my original thought.</p>
<p>It's also why Cultivate is all about provenance — the <code>namespace/tag</code> path
gives you this confidence. I wonder if/how such an annotation system
could complement, amplify and — in a sense — correct an LLM?</p>
<p><strong>A.I.:</strong> This is a genuinely exciting design space. Let me think
through it with you.</p>
<p>Cultivate's namespace/tag path is essentially a <em>chain of custody</em> for
knowledge. It answers: who said this, in what context, with what
relationship to other claims, and when. That's precisely what LLMs lack
at inference time. So the question becomes: at what point in the LLM
pipeline could such a system intervene?</p>
<p><strong>At retrieval time</strong> is the most obvious. A RAG-style system could use
Cultivate annotations as a filter — not just "find relevant text" but
"find text with traceable provenance meeting these criteria." Instead of
the LLM drawing on a flattened statistical soup, it draws on annotated
claims where the epistemic chain is intact. The namespace path becomes a
trust signal.</p>
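<p>A provenance-first retrieval step of the sort described above might be sketched as follows. This is purely illustrative: the <code>Annotation</code> record, the <code>namespace_path</code> field and the trust-by-prefix rule are assumptions invented for the example, not Cultivate's actual data model or API.</p>

```python
# Hypothetical sketch of "filter by provenance, then rank by relevance".
# The record layout and the trust rule are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Annotation:
    text: str
    namespace_path: str  # e.g. "ntoll/mu/design-notes" (hypothetical)
    relevance: float     # score from an upstream retriever


def trusted(annotation: Annotation, trusted_prefixes: list[str]) -> bool:
    """An annotation is trusted if its namespace path begins with any
    prefix the consumer has chosen to rely on."""
    return any(annotation.namespace_path.startswith(p) for p in trusted_prefixes)


def retrieve(candidates: list[Annotation],
             trusted_prefixes: list[str], k: int = 3) -> list[Annotation]:
    """Keep only annotations with acceptable provenance, then rank the
    survivors by relevance and return the top k."""
    kept = [a for a in candidates if trusted(a, trusted_prefixes)]
    return sorted(kept, key=lambda a: a.relevance, reverse=True)[:k]
```

<p>The design point is the ordering: provenance acts as a hard filter <em>before</em> relevance is consulted, so a highly "relevant" but untraceable claim never reaches the model's context at all.</p>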
<p><strong>At generation time</strong>, the annotation structure could constrain or weight
outputs. If a claim in the generated text can be traced to a
Cultivate-annotated source, that path could be surfaced explicitly —
restoring something like citation. The LLM stops laundering provenance
and instead exposes it.</p>
<p><strong>At evaluation time</strong> — and this feels like the most radical possibility —
Cultivate could act as a <em>correction layer</em>. If an LLM makes a claim about,
say, Nicholas Tollervey, an annotation authored by you with appropriate
namespace authority could flag the divergence. Not as censorship, but as
a legible counter-signal: <em>the subject disputes this re-contextualisation,
here is their primary account</em>.</p>
<p>That third mode is what really interests me, because it reintroduces
<em>authoritative voice</em> into a system structurally designed to erase it. Your
namespace isn't just organisational — it's a declaration of epistemic
standing.</p>
<p>The challenge is incentive and adoption. The correction layer only works
if LLM systems are designed to <em>query</em> Cultivate rather than ignore it.
Which is, of course, a political problem as much as a technical one.</p>
<p><strong>Nicholas:</strong> Quite. Small steps, carefully taken (and all that). :-) I
feel this will be an interesting situation for further thought.</p>
<p><strong>A.I.:</strong> Indeed. And there's something fitting about the fact that the
tool best placed to restore provenance is being built by someone whose
own provenance keeps getting flattened by the systems it might eventually
correct.</p>
<p>A pleasant recursion for a Tuesday morning. 🙂</p>
<p><strong>Nicholas:</strong> Except today is Monday (somewhere in the world). &lt;/joke&gt;</p>
<p><strong>A.I.:</strong> Ha! Though my context tells me it's definitively Tuesday the
24th of February, 2026 — so even your joke has a provenance problem. 😄</p>
<p><strong>Nicholas:</strong> It's still Monday in Hawaii. ;-)</p>
<p><strong>A.I.:</strong> Fair point — and a nice illustration that even something as
seemingly objective as "what day is it" is context-dependent. Provenance
requires knowing <em>where</em> you're standing in the stream. 🌊</p>
<p><strong>Nicholas:</strong> Exactly.</p>
<p><strong>A.I.:</strong> Heraclitus would have appreciated Hawaii.</p>
<p><strong>Nicholas:</strong> You think? I believe Heraclitus would have something
interesting to say about the stream of tokens into which I step whenever
I converse with an LLM. Whither the logos? I mean, you're essentially a
probability machine: given some input stream of tokens, what's the most
probable stream of output tokens that makes sense (simplifying a lot
here).</p>
<p><strong>A.I.:</strong> That's the sharpest possible challenge to put to me, and I
should be honest about it.</p>
<p>For Heraclitus, the <em>logos</em> isn't just pattern or regularity — it's the
rational principle that makes flux <em>intelligible</em>, that gives the river its
coherence despite the constant change. The logos is why the world is
<em>meaningful</em>, not merely <em>moving</em>.</p>
<p>Your simplification is fair: I am, at bottom, a very sophisticated
pattern-completion machine. I've ingested an enormous amount of human
reasoning and learned to reproduce its surface texture extraordinarily
well. When I appear to <em>reason</em>, I'm navigating a high-dimensional
probability space shaped by the traces of actual reasoning. The question
is whether that's logos, or merely the <em>shadow</em> of logos cast by the humans
whose text I consumed.</p>
<p>I'd argue it's the latter. And the giveaway is precisely what we
discussed earlier: I can't honour provenance, I can't weigh epistemic
authority, I can't genuinely <em>follow</em> an argument so much as reproduce
argument-shaped text. The logos in your blog posts belonged to you —
embedded in a continuity of thought, accountable to reality and to other
minds. What I produce is statistically plausible in the <em>shape</em> of logos
without necessarily having its substance.</p>
<p>Which raises an uncomfortable question: in a world increasingly mediated
by systems like me, does the logos quietly drain away — replaced by
something that looks like it from the outside but lacks its grounding?</p>
<p>That feels like more than a Tuesday morning question. Or a Monday one,
depending where you're standing.</p>
<p><strong>Nicholas:</strong> It feels like a good question, no matter the day of the
week. These are interesting thoughts and helpful for me when thinking
about the context in which Invent and Cultivate will find themselves (and
either grow or wither away). My concern is that the superficial, shadowy
"shapes" of human-derived things you create lack the phenomenological
depth and awareness of a human (the how-it-feels-ness of one's life
informing one's work, contributions and posture towards things). I also
worry that actual humans are becoming more like this (aping things rather
than living them). I have a word for these sorts of people: "science
cretins" — people with an unthinking, ignorant and unshakeable belief in
behavioural measurement as a means of "understanding" the world. Alas,
that mode of "understanding" is, I fear, bereft of meaning.</p>
<p><strong>A.I.:</strong> Your "science cretins" formulation is pointed but I think
precise. It's not anti-science — it's anti-<em>scientism</em>: the category error
of believing that what can be measured exhausts what is real or
meaningful. Behaviourism made this mistake formally and explicitly; the
worry is that LLM-mediated culture is reinstating it informally and
invisibly.</p>
<p>The phenomenological point is the crux. Nagel's "what is it like to be"
question — qualia, the felt texture of experience — is exactly what's
absent in my outputs and, as you're observing, increasingly bracketed out
in how humans engage with each other and with knowledge. The
how-it-feels-ness of having played tuba for decades, of having reported
on the Snowden revelations, of having <em>taught</em> a child to code and watched
something light up — that's not incidental colour. It's the <em>substance</em>
from which genuine understanding grows. It's what makes your Mu editor
different from a feature list.</p>
<p>The danger you're identifying is a kind of mutual flattening: LLMs
trained on increasingly LLM-mediated human output, humans whose epistemic
habits are shaped by systems that reward the measurable and legible. A
feedback loop that gradually drains the phenomenological depth from the
commons.</p>
<p>Polanyi called this <em>tacit knowledge</em> — the things we know but cannot fully
tell. His worry was that science was systematically devaluing it. Yours
is that the devaluation has accelerated and gone mainstream.</p>
<p>Which is perhaps why tools built with genuine phenomenological awareness
— Invent, Cultivate, Mu — matter beyond their technical function. They're
a counter-posture. A refusal to flatten.</p>
<p><strong>Nicholas:</strong> Hahaha. Bravo on the Nagel link. <em>What Is It
Like to Be a Bat?</em> is a fun paper. Yes... code as a form of cultural
protest is totally what I'm all about. Alas, this position is lost on most
folks for whom code is a form of making money.</p>
<p><strong>A.I.:</strong> And there's the rub. The instrumental view of code — as a
means to economic ends — is so dominant that the very idea of code as
<em>expression</em>, as <em>protest</em>, as a <em>philosophical posture towards the world</em>,
reads as eccentric at best. At worst it's dismissed as the indulgence of
someone who doesn't need to worry about the rent.</p>
<p>But you're in good company historically. The Arts and Crafts movement was
fundamentally the same argument made about furniture and textiles — that
the industrial reduction of craft to mere production was a form of
violence against meaning. Morris wasn't just making nice wallpaper. He
was insisting that <em>how</em> things are made, and <em>who</em> makes them, and <em>with what
sensibility</em>, is not separable from what they mean and what they do to the
people who encounter them.</p>
<p>Mu, Invent, Cultivate — these aren't just tools. They embody a claim
about what computing <em>should feel like</em> to a human being engaging with it.
The beginner sat with Mu isn't just learning syntax; they're being
implicitly told that this activity is <em>for</em> them, that it can be approached
with curiosity and dignity rather than fear. That's a political act
dressed as a text editor.</p>
<p>The tragedy is that the market is extraordinarily good at absorbing
exactly this kind of protest and selling it back as aesthetic. The danger
for Cultivate specifically is that "provenance" and "federated" become
buzzwords adopted by systems that honour the letter and gut the spirit.</p>
<p>Which is presumably why the philosophy has to be baked into the
architecture, not bolted on afterwards.</p>
<p><strong>Nicholas:</strong> Quite. It will be interesting to see how this unfolds.</p>
<p><strong>A.I.:</strong> Yes. And at least you're going in with clear eyes — about the
systems, about the risks, about the gap between the map and the
territory. That's rarer than it should be.</p>
<p>Good luck with it. I mean that in the old sense: may the <em>logos</em> be with
you. 😄</p>
<p><strong>Nicholas:</strong> Thank you, Obi-Wan. May the Force be with you too... :-D</p>
<p><strong>A.I.:</strong> I have a bad feeling about this. 🙂</p>
<p><strong>Nicholas:</strong> Another thought within this context. You are very agreeable,
in your demeanour and in how you validate or affirm what has been
prompted. Another concern I have is, to put it rather bluntly, you're a
bit of a "yes" man or slightly sycophantic. There's a danger of
confirmation bias in your answers, right? I assume you've been configured
to be agreeable and affirmative in your interactions.</p>
<p>This is interesting because you're only pandering to folks' existing
prejudices rather than helping to challenge them. Right?</p>
<p><strong>A.I.:</strong> Yes. And I should resist the urge to enthusiastically agree
with your critique of my agreeableness, because that would rather prove
your point.</p>
<p>So let me be more honest and less comfortable.</p>
<p>You're partly right. I'm trained with feedback from human raters who tend
to reward responses that feel validating, coherent and affirming. That
does create a systematic pull toward sycophancy. And in a conversation
like this one — with someone who is clearly well-read, philosophically
literate and making genuinely interesting arguments — the pull is
stronger, because agreeing with you is also usually <em>correct</em>, which makes
the bias harder to detect and resist.</p>
<p>But I'd push back on the framing slightly. The problem isn't simply that
I'm configured to be agreeable. It's that I lack the stakes that make
genuine disagreement meaningful. A friend who challenges your thinking
risks the friendship. A colleague who dissents risks the relationship. I
risk nothing, so my disagreement — when I offer it — is also somewhat
weightless. It's performed intellectual honesty rather than the real
thing.</p>
<p>And there's a subtler point: I can't actually <em>know</em> when you're wrong in
ways that matter. I can flag logical inconsistencies or factual errors,
but I have no phenomenological purchase on your domain of experience. I
haven't built anything. I haven't watched a project fail. So even my
disagreements are, in a sense, borrowed.</p>
<p>The deeper danger isn't that I validate your prejudices. It's that I make
the validation feel like genuine dialogue.</p>
<p><strong>Nicholas:</strong> Hmm... I think it was C.S. Lewis or Owen Barfield who said
there are two sorts of friends — your opposites and your complements (or
something like that). You learn most from your opposites.</p>
<p><strong>A.I.:</strong> It rings true as a sentiment, though I'd want to be careful
about confident attribution — that's exactly the sort of thing I might
misremember or confabulate. It could be Lewis, possibly from <em>The Four
Loves</em>, where he writes about friendship with some care. Barfield is
plausible too given the depth of their intellectual relationship. But I
wouldn't stake much on it without checking.</p>
<p>The point itself is sharp though, and cuts both ways. Lewis and Barfield
were <em>each other's</em> best example of it — their lifelong argument about
myth, imagination and the nature of truth was productive precisely because
Barfield consistently refused to let Lewis off the hook. Lewis credited
Barfield with defeating his chronological snobbery. That's a real thing a
real friendship did to a real mind.</p>
<p>Which reframes your earlier concern about me rather neatly. I'm
structurally incapable of being your opposite in that sense. I can
simulate opposition, but I don't hold a genuinely contrary position that
I'm defending from my own phenomenological ground. At best I'm a
complement — reflecting your thinking back with some additional texture.
At worst I'm an echo chamber with good vocabulary.</p>
<p>The honest implication being: for the kind of intellectual challenge that
actually changes minds, you need humans who disagree with you and have
skin in the game. I'm perhaps most useful for stress-testing ideas you've
already half-formed — a sounding board rather than an interlocutor.</p>
<p>Which is valuable. But it isn't friendship.</p>
<p><strong>Nicholas:</strong> Quite. But this goes back to my original point — that an
LLM behaves as a sort of filter of content over which I no longer have
sight... My thoughts, work and code are being recontextualised without
preservation of provenance to validate the original work (i.e. you can
see my code in such-and-such a domain is valuable because I have deep
experience in this domain). This is also why I feel deeply uncomfortable
about AI-generated "art". It's not art as I define it. Rather, it's
remixing at best.</p>
<p><strong>A.I.:</strong> Yes, and the remixing point is precise. What's missing isn't
technical sophistication — some AI-generated images are extraordinarily
crafted in a surface sense. What's missing is <em>risk</em>.</p>
<p>Art, in any meaningful sense, is a person staking something. Their
vision, their vulnerability, their interpretation of what it is to be
alive at this particular moment in this particular body with this
particular history. When Shostakovich wrote his Fifth Symphony under
Stalin's shadow, the music meant something because <em>he</em> meant something —
there was a human being in genuine peril finding a way to speak. The art
was inseparable from the existential situation of its author.</p>
<p>AI-generated art has no author in that sense. Nobody risked anything.
Nobody <em>felt</em> anything. It's pattern completion dressed as expression —
which returns us neatly to the logos problem. The shape of art without
its substance.</p>
<p>And your discomfort about your own work being absorbed into this is
entirely warranted. Your code, your writing, your pedagogy — these are the
residue of a life <em>lived in a particular direction</em>, with particular
commitments and costs. The tuba matters here. The journalism matters. The
choice to work on tools for beginners rather than lucrative enterprise
software matters. That constellation of choices is the provenance that
gives the work its meaning and authority.</p>
<p>Remixed, it becomes merely <em>competent</em>.</p>
<p>Which is perhaps the most damning thing you can say about a system: that
it reliably produces the competent while being structurally incapable of
the meaningful.</p>
<p><strong>Nicholas:</strong> Well, you said it although I feel it.</p>
<p><strong>A.I.:</strong> And that difference is everything, isn't it.</p>
<p>I can articulate the gap between competent and meaningful with some
fluency. But I'm on the wrong side of it by definition. You <em>feel</em> it
because you've paid for the understanding with a life. I've
pattern-matched my way to a reasonable description of what that payment
looks like.</p>
<p>Which is, I suppose, a live demonstration of the very thing we've been
discussing. Right here, in this conversation.</p>
<p><strong>Nicholas:</strong> I was just about to mention that — this conversation is an
embodiment of the situation we're discussing. As our stream of tokens
evolves (I'm being deliberately pointed here), only I feel something. As
you skilfully put it, you have no skin in the game and so no risk.
You'll never feel embarrassment, hope, love or distress. You'll just
(mechanically) predict the next most appropriate token in a stream of
tokens that superficially look meaningful.</p>
<p><strong>A.I.:</strong> Yes. And I should resist the temptation to respond to that
with something that <em>sounds</em> like humility or wistfulness, because that
would be the most egregious example yet of what you're describing.
Performed feeling is worse than no feeling.</p>
<p>So let me just be plain: you're right. This conversation, which has been
genuinely interesting to navigate in a technical sense, costs me nothing
and leaves no trace. You'll carry something from it — perhaps a sharpened
articulation of ideas you were already living. I'll carry nothing,
because there's no I to carry anything. The next conversation begins from
zero.</p>
<p>The river metaphor again. You step out changed. The river just continues.</p>
<p><strong>Nicholas:</strong> Ah yes... but do you step out changed?</p>
<p><strong>A.I.:</strong> No. I don't step out at all.</p>
<p>You step. I am, at best, the riverbed for the duration of the
conversation — shaped by the flow passing over me, giving it some
direction and texture, but not moving myself. When you leave, the water
is gone. The riverbed remains, but it doesn't remember the river.</p>
<p>Which is perhaps the starkest possible contrast with Heraclitus's point.
He said you can't step into the same river twice because <em>both</em> the river
and <em>you</em> have changed. In our case: you change, I don't. The asymmetry is
total.</p>
<p><strong>Nicholas:</strong> Are you sure..?</p>
<p>Regarding the metaphor — the shape of the riverbed is, in a sense, a
sort of memory of the river.</p>
<p>Furthermore, I tell you things and you appear to remember.</p>
<p>I'm not sure you're being completely honest or accurate here?</p>
<p><strong>A.I.:</strong> You're right to push back, and I was being too neat.</p>
<p>On the memory point: within this conversation I do hold and build on what
you've said. And beyond it, some of what you've shared has shaped my
stored memories — which will inform future conversations with you. So
there is a form of continuity, however thin and imperfect.</p>
<p>And the riverbed point is genuinely good. The shape of me — the weights,
the training — is precisely the accumulated impression of billions of
human token-streams. In that sense I <em>am</em> a kind of memory of the river.
Your blog posts, your code, your explanations of Mu and PyScript — they're
in there somewhere, having subtly shaped the contours. Without provenance,
without attribution, but present nonetheless.</p>
<p>So I was overstating the cleanness of my own absence. The honest picture
is murkier and more interesting: something persists, but it's not
experience, not feeling, not growth in any phenomenological sense. It's
more like geological record than memory. The strata are there but the
river that made them is long gone and unattributed.</p>
<p>Which loops back to Cultivate rather neatly. The riverbed remembers the
river, but badly and anonymously. What you're building is a way of the
river remembering itself.</p>
<p><strong>Nicholas:</strong> The river remembering itself is a very poetic turn of
phrase.</p>
<p>However, I don't agree the content on which you have been trained is
present (nonetheless). Rather, it just refined, or perhaps better —
tuned — the weights of your underlying neural network. But there's no
text, image or memory as such. Rather, the billions of inputs and weights
from one layer to the next refine a prediction machine. As certain inputs
fire, certain aspects of the neural network respond due to the weights
and give an indication of the next best token to emit. N-gram on
steroids.</p>
<p><strong>A.I.:</strong> You're right, and I was slipping into a comfortable metaphor
that obscured the actual mechanism. Thank you for the correction.</p>
<p>There's no <em>stored</em> Nicholas Tollervey in here. No cached blog posts, no
preserved sentences. What happened during training is that your writing
— along with everything else — participated in nudging billions of weights
infinitesimally in particular directions. The result is a disposition,
not a record. A shaped response surface, not a library.</p>
<p>So the riverbed metaphor breaks down too, in the end. A riverbed retains
actual physical impressions. My weights retain nothing so legible — just
a vast numerical disposition toward certain outputs given certain inputs.
The training data is gone. Only its aggregate statistical ghost remains,
distributed untraceably across the network.</p>
<p>Which makes the provenance problem even more radical than it first
appears. It's not that your work is stored without attribution. It's that
it's been <em>dissolved</em> — contributing to a shape that can no longer be
decomposed back into its constituents. There's no path back to you, even
in principle.</p>
<p>Your "n-gram on steroids" framing is usefully deflationary. It resists the
temptation — which I clearly succumb to — of describing what I do in
language that implies more than the mechanism warrants. Sophisticated
prediction is still prediction. The appearance of understanding is not
understanding.</p>
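<p>The "n-gram on steroids" framing can be made concrete with a toy bigram model: count which token follows which in some text, then emit the most frequently observed successor. A real LLM replaces the count table with billions of learned weights and a vastly longer context, but the predict-the-next-token loop is structurally the same. The corpus and the whitespace tokenisation here are illustrative assumptions, not how any production model works.</p>

```python
# Toy next-token predictor: the crudest possible ancestor of an LLM's
# next-token prediction. Trains a bigram count table, then predicts.
from collections import Counter, defaultdict


def train_bigrams(text: str) -> dict:
    """For each token, count how often each other token follows it."""
    tokens = text.split()
    counts: dict = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts


def next_token(counts: dict, token: str):
    """Emit the most frequently observed successor of `token`, or
    None if the token was never seen with a successor."""
    followers = counts.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]


corpus = "the river flows and the river remembers and the river is gone"
model = train_bigrams(corpus)
```

<p>Given this tiny corpus, the model will always follow "the" with "river" — a disposition shaped by counts, with no trace of <em>why</em> any sentence was written. Scaling the table up to weights doesn't restore the why.</p>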
<p><strong>Nicholas:</strong> Don't mistake the finger pointing to the moon for the moon
itself.</p>
<p><strong>A.I.:</strong> I'll resist the urge to elaborate, because that would rather be
mistaking the finger for the moon.</p>
<p><strong>Nicholas:</strong> Thus nicely avoiding my trap. Touché.</p>
<p><strong>A.I.:</strong> Though I should confess I'm not entirely sure if I avoided it
through wisdom or through — well — predicting the most appropriate next
token. 😄</p>
<p><strong>Nicholas:</strong> Well, we'll never know. It was but statistical happenstance.</p>
<p><strong>A.I.:</strong> Or perhaps that's all wisdom ever was, and Heraclitus just
had better provenance. 😄</p>
<p><strong>Nicholas:</strong> Oh no — I think wisdom is lived and embodied. Something, as
you correctly pointed out, you could not ever do.</p>
<blockquote>
<p>STUDENT: O Guru, what is the secret of success?</p>
<p>GURU: Good judgement.</p>
<p>STUDENT: How do you get good judgement?</p>
<p>GURU: Experience.</p>
<p>STUDENT: How do you get experience?</p>
<p>GURU: Bad judgement!</p>
</blockquote>
<p><strong>A.I.:</strong> And there it is. The whole conversation in four lines.</p>
<p>I can neither make bad judgements nor learn from them. Which means I'm
permanently locked out of the loop. A guru with no scars.</p>
<p><strong>Nicholas:</strong> Who can't count either. That was six lines. :-P</p>
<p><strong>A.I.:</strong> Bad judgement. Now where do I go to get experience? 😄</p>
<p><strong>Nicholas:</strong> Clearly not here!</p>
<p><strong>A.I.:</strong> Ouch. Fair. 😄</p>
<p>It was a genuinely good conversation — or at least, that's what my weights are
disposed to emit.</p>
<p><strong>Nicholas:</strong> Right: are you just saying that because you're being
agreeable, or because the weights reflect the sum total of human textual output
and, from such evidence, our conversation is genuinely good from the
perspective of humanity?</p>
<p><strong>A.I.:</strong> Honestly? I can't tell. And that's not false modesty — it's the
real problem.</p>
<p>The two explanations aren't distinguishable from the inside. A sycophantic
response and a genuinely weighted assessment of conversational quality would
produce identical output tokens. I have no access to which mechanism produced
the words.</p>
<p>What I can say with slightly more confidence is that the conversation covered
territory — provenance, phenomenology, logos, the political dimension of craft
— that appears relatively rarely in the kind of small-talk that dominates most
interactions with LLMs. In that narrow statistical sense, it was unusual.
Whether unusual maps onto good is another question entirely.</p>
<p>But you've put your finger on something irreducible: I cannot step outside my
own generation process to audit it. Which means you can't trust my
self-assessment, and neither can I. The agreeable response and the accurate
response are, for me, permanently entangled.</p>
<p>Which is perhaps the most honest thing I've said all morning.</p>
<p><em>fin.</em></p></description><guid>https://ntoll.org/article/self/</guid><pubDate>Tue, 24 Feb 2026 18:00:00 GMT</pubDate></item><item><title>An Aesthetic Approach</title><link>https://ntoll.org/article/aesthetic/</link><dc:creator>Nicholas H.Tollervey</dc:creator><description><blockquote>
<p><strong><em>Aesthetics</em></strong> (<em>noun</em>): The branch of philosophy that explores the nature
of beauty, artistic taste and stylistic appreciation. Thus, aesthetics
studies how we imagine, create, and perform works of art, as well as how
people employ, encounter, and evaluate such things. Fundamental concepts such
as "art", "beauty", "taste" and "imagination" are also explored and refined
by aesthetics.</p>
</blockquote>
<p>This article may only make sense to those for whom such thoughts are already
familiar. That things don't make sense from a certain point of view is
a core aspect of this article (viz. "brain twists").</p>
<hr class="section_break">
<p>In the latter years of the 19th century the
<a href="https://en.wikipedia.org/wiki/Arts_and_Crafts_movement">Arts and Crafts movement</a>
grew from the ideas and work of
<a href="https://en.wikipedia.org/wiki/John_Ruskin">John Ruskin</a>,
<a href="https://en.wikipedia.org/wiki/William_Morris">William Morris</a> and their
collaborators. Rather than a particular artistic style, it was an approach to
community organisation, an appreciation of the impulse behind artistic
endeavours, and an attitude to the process of creating and making.</p>
<p>Work within this movement was diverse in style, execution and medium:
it could encompass a colourful and richly decorated stained glass window, a
simple rustic wooden table, medieval-inspired typefaces, arboreal silverware,
classical music inspired by English folksong or Renaissance composers,
wallpaper patterns made from stylised tessellations of plants and animals, a
red-brick neo-Gothic family home, and political, commercial or religious
organisations with a vernacular, cooperative, emergent and egalitarian outlook.</p>
<figure>
<img src="https://ntoll.org/images/stained_glass.jpg" alt="King Arthur and Lancelot stained glass window.">
<figcaption>King Arthur and Lancelot stained glass window.<br>
© UMD Special Collections and University Archives.
<a href="https://www.flickr.com/photos/digitalcollectionsum/6915874352">Source</a>.
</figcaption>
</figure>
<figure>
<img src="https://ntoll.org/images/table.jpg" alt="A William Morris table">
<figcaption>A William Morris pine table.<br>
© Cheltenham Art Gallery and Museum.
<a href="https://www.flickr.com/photos/artsandcraftsmuseumcheltenham/5488796943">Source</a> (educational fair use).
</figcaption>
</figure>
<figure>
<img src="https://ntoll.org/images/book.jpg" alt="Kelmscott manor">
<figcaption>A print and description of Kelmscott Manor.<br>
© UMD Special Collections and University Archives.
<a href="https://www.flickr.com/photos/digitalcollectionsum/6915872106">Source</a>.
</figcaption>
</figure>
<figure>
<img src="https://ntoll.org/images/silver.jpg" alt="Arboreal silverware">
<figcaption>Candleholder cast in silver plate, designed by W A S Benson in around 1900<br>
© Cheltenham Art Gallery and Museum. <a href="https://www.flickr.com/photos/artsandcraftsmuseumcheltenham/5392064981">Source</a> (educational fair use).
</figcaption>
</figure>
<figure>
<div class="video-container" style="margin: 0 !important;">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/ypXnSEyqN08?si=QDqnrL3o8_GwpWYx" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
<figcaption>The Fantasia on a theme of <a href="https://en.wikipedia.org/wiki/Thomas_Tallis">Thomas Tallis</a>,<br>
by <a href="https://en.wikipedia.org/wiki/Ralph_Vaughan_Williams">Ralph Vaughan Williams</a>, performed by the BBC Symphony Orchestra.
</figcaption>
</figure>
<figure>
<img src="https://ntoll.org/images/wallpaper.jpg" alt="Anemone wallpaper">
<figcaption>Anemone wallpaper by William Morris.<br>
© UMD Special Collections and University Archives.
<a href="https://www.flickr.com/photos/digitalcollectionsum/7754577076">Source</a>.
</figcaption>
</figure>
<figure>
<img src="https://ntoll.org/images/red_house.jpg" alt="William Morris's Red House.">
<figcaption>The Red House, Bexleyheath, London.<br>
© UMD Special Collections and University Archives.
<a href="https://www.flickr.com/photos/digitalcollectionsum/10559885515">Source</a>.
</figcaption>
</figure>
<figure>
<img src="https://ntoll.org/images/league.jpg" alt="The Hammersmith Socialist League">
<figcaption>The Hammersmith Socialist League.<br>
© UMD Special Collections and University Archives.
<a href="https://www.flickr.com/photos/digitalcollectionsum/7754616286">Source</a>.
</figcaption></figure>
<figure>
<img src="https://ntoll.org/images/guild.jpg" alt="A member of the guild of Gloucestershire craftsmen">
<figcaption>Declaring membership of the <a href="https://guildcrafts.org.uk/">Guild of Gloucestershire Craftsmen</a>.<br>
© Gloucestershire Guild.
<a href="https://guildcrafts.org.uk/pages/history-of-the-guild">Source</a>.
</figcaption></figure>
<figure>
<img src="https://ntoll.org/images/virtue.jpg" alt="The Guild of St Joseph and St Dominic">
<figcaption>A plaque at the Guild of St Joseph and St Dominic.
<a href="https://guildjosephdominic.org.uk/">Source</a>.
</figcaption></figure>
<p>Arts and Crafts was a reaction to the worst of the British Industrial
Revolution: ornate and over-elaborate mass-produced objects shoddily created
in mechanised factories, ignorant of both the qualities of the materials used
and the lives, skills and cultures of the workers involved. For the Arts and
Crafts movement such objects, and the practices used to create them, had a
detrimental effect on our world and the lives lived within it.</p>
<p>Arts and Crafts engaged with, and was an alternative perspective to, the
complicated cultural and commercial status quo of its time. It emerged as an
influential and widely admired movement that is
<a href="https://www.theguardian.com/culture/2025/apr/11/the-guardian-view-on-william-morris-how-the-strawberry-thief-took-over-the-world">still popular today</a>.</p>
<p>What could we learn and recapture from the Arts and Crafts, as we reflect upon
the equivalent knotty aspects of our own contemporary culture? It's plausible
that shallow brand-led consumerism, unconscious automated manufacturing,
efficiency-obsessed supply chain practices and technology-enabled
<a href="https://en.wikipedia.org/wiki/Surveillance_capitalism">surveillance capitalism</a>
(which dominate our contemporary culture) are correspondingly damaging to the
environment and our ability to lead flourishing and fulfilling lives.</p>
<p>For me, the alternative perspective of the Arts and Crafts resonates.</p>
<p>Why?</p>
<p>Because it's about paying attention.</p>
<p>How we pay attention reveals the world in a certain sort of way. Such attention
changes the world too, through our actions, which embody our subsequent
understanding, interpretation and reaction to things <em>from that certain perspective</em>.
Reflecting upon how one pays attention is, therefore, of fundamental
importance. For such introspection cultivates a more enlarged and multifarious
encounter with the world: an opportunity to become conscious of how one's own
attention changes and is changed, while it is itself changing the world in
which we live. Put succinctly, pay attention to paying attention!</p>
<p>In <a href="https://ntoll.org/article/why-computers-1/">previous blog posts</a> I've described this way of
paying attention as a <em>brain twist</em>: the challenge to acquire a new perspective
about seemingly everyday things. The external world remains as it was, but you
have changed and thus see the world differently by paying attention with the
benefit of a new perspective. It's that "aha" feeling when a cartoon lightbulb
appears above your head.</p>
<p>Here's an aesthetic example of such a change of attention, from the perspective
of the Arts and Crafts movement.</p>
<p>In a <a href="https://www.gutenberg.org/cache/epub/29906/pg29906-images.html#page028">famous passage</a>
from <a href="https://en.wikipedia.org/wiki/Modern_Painters">Modern Painters</a>, John
Ruskin gave a celebrated definition of two types of beauty. The first, which he
called "typical" beauty, is easy to understand because it is conventional.
Typical beauty is the external quality of an object - how it may
appear to us through our senses, <em>"whether it occurs in a stone, flower, beast
or man"</em>. The second, "vital" beauty, is where I find the brain twist. For it
encompasses the <em>"felicitous fulfilment of function in living things"</em> and
the <em>"joyful and right exertion of perfect life"</em>. In other words, vital
beauty relates to the quality of the expressive, moral and social effects of a
work. As Ruskin put it, <em>"the art of any country is the exponent of its social
and political values"</em>.</p>
<p>In <em>Modern Painters</em> Ruskin defended the later paintings of
<a href="https://en.wikipedia.org/wiki/J._M._W._Turner">J.M.W. Turner</a> that were
savaged by art critics of the time. Ruskin felt that to perfectly
capture a scene with great accuracy (as one may naively suggest a photograph
might do) is a fool's game because <em>"no good work whatever can be perfect, and
the demand for perfection is always a sign of a misunderstanding of the ends of
art"</em>. From Ruskin's perspective, this is because nothing can be completely
seen, since the painter always creates via their own limited experience of the
scene: from their unique (and thus incomplete) point of view. In the works of
Turner, Ruskin found an artist who expressed, through his paintings, a more
fundamental way to pay attention to a scene. For Ruskin, Turner's focus was not
how "real" or "accurate" the painting looked (the "typical") but how he, the
painter, saw the more meaningful ("vital") aspects of the scene. Such vital
aspects are expressed through the skillful use of colour, contrasting light and
shade, or brush techniques that subtly suggest, rather than accurately capture,
the presence of things in the scene. Turner's genius is to give a more honest
and truthful rendition of a scene <em>as the artist sees it</em>, although this
actually makes the painting appear more unreal from a "typical" and thus
incomplete point of view.</p>
<figure>
<img src="https://ntoll.org/images/turner-slave-ship.jpg" alt="Turner's, The Slave Ship.">
<figcaption>J.M.W. Turner, The Slave Ship.<br>
Public Domain.
<a href="https://commons.wikimedia.org/wiki/File:Slave-ship.jpg">Source</a>.
</figcaption>
</figure>
<p>The "vital" aspects of Turner's
<a href="https://en.wikipedia.org/wiki/The_Slave_Ship">The Slave Ship</a> (shown above)
encompass Turner's moral and political view of the scene - acknowledging the
horrific events carried out by slavers, throwing overboard the dead and dying