<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://computationallythinking.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://computationallythinking.com/" rel="alternate" type="text/html" /><updated>2025-10-28T14:29:42+00:00</updated><id>https://computationallythinking.com/feed.xml</id><title type="html">Computationally Thinking</title><subtitle>explain( world | science, data, computation, experience)</subtitle><author><name>Computationally Thinking</name></author><entry><title type="html">More Data Won’t Fix It</title><link href="https://computationallythinking.com/Correction-More-Data-Wont-Fix-It/" rel="alternate" type="text/html" title="More Data Won’t Fix It" /><published>2025-10-27T00:00:00+00:00</published><updated>2025-10-27T00:00:00+00:00</updated><id>https://computationallythinking.com/Correction-More-Data-Wont-Fix-It</id><content type="html" xml:base="https://computationallythinking.com/Correction-More-Data-Wont-Fix-It/"><![CDATA[<p><strong>To the Editor,</strong>
I’d like to point out a misleading statement in this article about the sources of AI model hallucinations.</p>
<ul>
  <li><strong>Title:</strong> Top AI assistants misrepresent news content, study finds (CBC News)</li>
  <li><strong>Link:</strong> https://www.cbc.ca/news/world/ai-assistants-news-misrepresented-study-9.6947735</li>
  <li><strong>Date:</strong> 2025-10-27</li>
</ul>

<p>The article states (emphasis my own):</p>

<blockquote>
  <p>OpenAI and Microsoft have previously said hallucinations — when an AI model generates incorrect or misleading information, <strong>often due to factors such as insufficient data</strong> — are an issue that they’re seeking to resolve.</p>
</blockquote>

<p>This is misleading: there is no amount of data that would lead to AI models, in their current popular form, that do not <em>sometimes</em> generate content that we would deem incorrect or misleading.</p>

<h2 id="knowing-your-limits">Knowing Your Limits</h2>
<p>Generative AI models are incredibly good at filling in missing information, even creating seemingly new content in response to their prompts. But generative AI models <strong>have no relation to truth</strong> itself. These models do not “know” what is true or false, or how to discern it; they do not attempt to mislead, nor to be disingenuous. They merely generate new outputs given their inputs in a way that statistically mimics the data they were trained on.
Even in cases where you literally ask these models to opine on the truth of a statement, they are successful only insofar as their training data contains correct answers about the truth you are seeking. In some cases this can be quite impressive, as when the answer can be filled in as if it were a missing point on a curve between other known concepts. But this is very different from determining whether the statement is actually true or false.</p>

<p>Saying these models “statistically mimic” that data means they are inherently never perfect, but always a bit random. So even if the training data consisted only of perfectly “true” text, the model could still output falsehoods. The companies and academics researching AI are working hard to minimize these failures, but they <em>cannot be removed entirely</em>; this is in the nature of the current popular generative AI models based on neural networks. I have written about this in the past in <a href="https://computationallythinking.com/That-Which-Dreams-are-Made-Of/">“Hallucination”</a>.</p>
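<p>To make the “statistically mimic” point concrete, here is a minimal sketch of the softmax sampling step at the heart of these models. The three-word vocabulary and the logit values are invented for illustration, not taken from any real model. However strongly the training data favours the true continuation, every word keeps a nonzero probability, so a falsehood can always be sampled:</p>

```python
import math

# Hypothetical next-word logits for the prompt "The capital of France is".
# The numbers are invented; a real model scores tens of thousands of tokens.
logits = {"Paris": 6.0, "Lyon": 1.0, "London": 0.5}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())                       # subtract max for stability
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

probs = softmax(logits)

# "Paris" dominates, but no word ever gets probability exactly zero,
# so sampling will occasionally emit "Lyon" or "London" instead.
print(probs)
```

<p>Decoding tricks like lowering the temperature or taking the argmax shrink, but do not eliminate, the chance of a wrong continuation once prompts move beyond what the training data pinned down.</p>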

<p>Honest people working in that industry will sometimes even admit this.
As put best in 2023 by Andrej Karpathy, a foundational AI guru at Tesla and OpenAI:</p>
<blockquote>
  <p>“I always struggle a bit when I’m asked about the ‘hallucination problem’ in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.”  – <a href="https://x.com/karpathy/status/1733299213503787018">Andrej Karpathy @karpathy on Twitter Dec 8, 2023</a></p>
</blockquote>

<h2 id="a-suggestion">A Suggestion</h2>
<p>All this to merely say: it would be better if the CBC and other responsible news organizations were more careful about ascribing the hallucinations or errors of AI systems to some fixable, known problem, rather than the challenging research topic that it is.</p>

<p>Otherwise, I worry <strong>our society will continue sleep-walking</strong> forward to the marketing tunes of a few corporations claiming that <strong>safe and responsible use of these systems in all walks of life is well on its way to being perfected</strong>, which is far from the truth.</p>]]></content><author><name>Computationally Thinking</name></author><category term="machine-learning" /><category term="artificial-inteligence" /><category term="ai4truth" /><category term="generative-models" /><category term="large-language-models" /><category term="ai-doom" /><summary type="html"><![CDATA[To the Editor, I’d like to point out a misleading statement in this article about the sources of AI model hallucinations. Title: Top AI assistants misrepresent news content, study finds CBC News Link: https://www.cbc.ca/news/world/ai-assistants-news-misrepresented-study-9.6947735 Date: 2025-10-27]]></summary></entry><entry><title type="html">Hedging and Hawing about AI and Jobs</title><link href="https://computationallythinking.com/AI-and-Jobs/" rel="alternate" type="text/html" title="Hedging and Hawing about AI and Jobs" /><published>2024-01-25T00:00:00+00:00</published><updated>2024-01-25T00:00:00+00:00</updated><id>https://computationallythinking.com/AI-and-Jobs</id><content type="html" xml:base="https://computationallythinking.com/AI-and-Jobs/"><![CDATA[<p>This is in reply to reading this article from the BBC, <a href="https://www.bbc.com/news/business-67977967">“AI to hit 40% of jobs and worsen inequality, IMF says”</a> <em>(January 15, 2024, Annabelle Liang, BBC)</em></p>

<p>Some quotes and my responses:</p>

<blockquote>
  <p>More generally, higher-income and younger workers may see a disproportionate increase in their wages after adopting AI.</p>
</blockquote>

<p>This isn’t obvious to me. I guess the “may” hedges here, but it’s basically an empty statement. Some higher-income workers (legal clerks?) may find their jobs vanishing entirely, while some older workers who can figure out prompt tuning may find their value shoots up dramatically because they can more easily handle their workload.</p>

<blockquote>
  <p>Lower-income and older workers could fall behind, the IMF believes.</p>
</blockquote>

<p>Don’t they always? Seriously, this has been happening for decades and will continue until we take a broader view of the meaning of work itself.
The bigger question we need to answer as a society, and I don’t expect the IMF, the WEF, or the World Bank to answer it, is: are we “workers” or are we “citizens”? Are we perhaps even “people”?</p>

<p>There’s more to life than work, or at least there could be. Is it possible that the “AI Transition” is to a world where fewer people need to work <em>at all</em>, yet need not suffer total ruin as a result?</p>

<blockquote>
  <p>“It is crucial for countries to establish comprehensive social safety nets and offer retraining programmes for vulnerable workers,” Ms Georgieva said. “In doing so, we can make the AI transition more inclusive, protecting livelihoods and curbing inequality.”</p>
</blockquote>

<p>This is a good first step, but still demonstrates limited foresight.</p>

<p>Even talking about an “AI Transition” assumes we know <em>where we are transitioning to</em>! Or at the very least, it assumes there will be a transition from one stable equilibrium of society, or the economy, to another. For example, after the widespread scaling of manufacturing in the late 1800s and early 1900s, think Ford Model-T, there were huge societal convulsions around the idea of labour rights and what it meant to be an employee, an employer, and a caring society. The labour movement, the Depression, and the New Deal social welfare state all arose during and out of that transition. War and climate issues were important parts of that, but one very important part was a sudden technological shift that changed the nature of work and livelihood for a huge portion of the people in Western societies.</p>

<p>So a transition like this isn’t simple or guaranteed to happen within a few years, and it isn’t guaranteed to happen without widespread societal disruption. If Generative AI really is going to lead to a transition as big as these previous ones, then we don’t know what the new equilibrium will look like at all, how long it will take to get there, or how painful/smooth it will be.</p>

<h2 id="an-opinion-on-the-future">An Opinion on the Future</h2>
<p>If you’ll notice, I’m even hedging on whether it really will lead to such a transition. That’s because the current wave of AI technologies that everyone is excited about has limitations and risks no one has internalized yet, even as applications of it continue to develop at breakneck speed. These risks are not just in the things the technology can do and what people will use it for; there are even, I suppose, investment-style risks. As Gary Marcus and only a few others are adamantly pointing out, the current wave of tools may be built on a house of cards that will eventually be deemed a huge illegal theft of intellectual property, on a scale we haven’t seen since the <em>MP3 Wars</em> at the turn of the century. Despite all the users and money involved, that battle ended up being “won” by the IP holders, who sued Napster and company out of business and implemented their own protocols for online purchasing, copy protection, and so on. That it all seems long ago and irrelevant now, in our era of music and video streaming, is beside the point. ChatGPT and most of the others are trained on vast amounts of data that was collected under false pretences, to demonstrate research ideas, and was then used to provide a pay-for-use trained model that doesn’t cite its sources or compensate creators.
We don’t know how those lawsuits will play out, but the particular players and tools we are using now might be entirely different in 5 years depending on how that shakes out.</p>

<blockquote>
  <p>Our revels now are ended. These our actors,
 As I foretold you, were all spirits and
 Are melted into air, into thin air;
 And – like the baseless fabric of this vision –
 The cloud-capped towers, the gorgeous palaces,
 The solemn temples, the great globe itself,
 Yea, all which it inherit, shall dissolve,
 And like this insubstantial pageant faded,
 Leave not a rack behind. We are such stuff
 As dreams are made on, and our little life
 Is rounded with a sleep.
  	– Prospero in <em>The Tempest</em> by William Shakespeare  <code class="language-plaintext highlighter-rouge">(4.1.146-158)</code></p>
</blockquote>

<p>I wonder, do our current Wonders of the Modern Age, Large Language Models (LLM) and Generative AI Systems, see the world like a dream, are we the dreamer, or are they?</p>

<p>There’s no real answer to that, but it was prompted by much more concrete news raising questions we really should be able to answer if we think with a clear head.</p>

<p>My thoughts on two recent articles about LLMs and OpenAI’s ChatGPT in particular.</p>

<h2 id="solving-the-hallucination-problem">“Solving” the “Hallucination” “Problem”</h2>

<p>I have seen some people being quite optimistic about future uses of LLMs, along the lines of <a href="https://www.fastcompany.com/91006321/how-ai-companies-are-trying-to-solve-the-llm-hallucination-problem">this article on the topic</a>.</p>

<p>Some free advice: if you are betting your business on quickly “solving the hallucination problem” so you can fully integrate LLMs, then you might want to find out what hallucination really is.</p>

<p>I think that’s the wrong message to take from it.</p>

<p><em>To wit…</em></p>

<h2 id="dream-machines">Dream Machines</h2>
<blockquote>
  <p>Earlier this month, Andrej Karpathy, the former head of AI at Tesla and now at OpenAI, tweeted similar remarks.</p>
  <blockquote>
    <p>“I always struggle a bit when I’m asked about the ‘hallucination problem’ in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.”<br />
– <a href="https://x.com/karpathy/status/1733299213503787018">Andrej Karpathy @karpathy on Twitter Dec 8, 2023</a></p>
  </blockquote>
</blockquote>

<p>Andrej is right on point here, of course. Hallucination isn’t actually a problem with LLMs, it’s <em>how they work</em>. We just happen to like it when they hallucinate something we agree with. Mathematically, “hallucination” is identical to “regression” or “prediction”.</p>
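<p>One way to see that hallucination is just prediction: even a model fit only on true sentences can chain locally learned predictions into a false one. Here is a toy bigram sketch, with an invented two-sentence corpus standing in for a training set (this is an illustration of the principle, not how any real LLM is built):</p>

```python
import random
from collections import defaultdict

# A tiny corpus containing only true sentences (invented for illustration).
corpus = [
    "paris is in france",
    "berlin is in germany",
]

# Fit a bigram table: record which word follows which.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, n=3, seed=None):
    """Generate by repeatedly predicting a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Possible outputs include "paris is in france" AND "paris is in germany".
print(generate("paris", seed=0))
```

<p>Each local step (“paris”→“is”, “is”→“in”, “in”→“germany”) was learned from true text, yet the composed prediction “paris is in germany” is false. Scaled up by billions of parameters, that is the dream machine.</p>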

<p>Apparently, there is a whole industry of companies, like Vectara, trying to find ways to only make true predictions and not false ones.</p>

<blockquote>
  <p>Ahmad (from Vectara), for his part, believes hallucinations will largely be solved in roughly 12-18 months, a timeline Altman has also suggested might be feasible. “When I say solved, I mean, it’s going to be that these models will be hallucinating less than less than a person would be,” Ahmad adds. “I don’t mean zero.”</p>
</blockquote>

<p>I would suggest, to avoid confusion with normal people who have little patience for arguing the meaning of common words, that you <em>don’t call this “solved”</em> if that’s not what you mean. In Computer Science and AI, solved usually means a task can be performed optimally in all situations. Even if you soften it to mean “nearly optimally”, I don’t think most people actually want advanced computer systems that <em>only lie or confabulate as much as humans do</em>. People commonly assume a machine or service should be really good at something, better than average; otherwise why do we need it?</p>

<p>I’m sure these companies are doing great work. I also agree that building these systems in ways <em>that produce verifiable, true statements is an important goal</em>. However, it’s also a <strong>very challenging</strong> research goal that <em>no one knows the answer to yet</em>. The answer probably isn’t going to be tuning a bit for a year, or humans coaching it with lots of special cases (which is already the only reason ChatGPT is coherent and polite). It’s probably going to require <em>entirely new ideas</em> about language, data, learning, and more.</p>

<h2 id="dreams-built-on-borrowed-stories">Dreams Built on “Borrowed” Stories?</h2>
<p>Other troubles with OpenAI’s approach to LLMs right now are the accusations of improper use of intellectual property (i.e. stealing).</p>

<p><a href="https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai?utm_source=flipboard&amp;utm_content=compthink%2Fmagazine%2FAI" title="https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai?utm_source=flipboard&amp;utm_content=compthink%2Fmagazine%2FAI">I think their latest argument here is completely circular</a>. They used the broadest understanding of “fair use” for the data to train it, then, when it worked so well, they argue that it couldn’t have been done any other way. Others have been using this data for research, but once you turn it into a product you need to check the legal footing of what you’ve done.</p>

<p>So I would say that one thing a corporation needs to do at that point is consider retraining, or talking to the content creators and negotiating usage, etc.</p>

<blockquote>
  <p>“Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.” – OpenAI Legal Submission</p>
</blockquote>

<p>OpenAI’s dismissal of some possibly “interesting experiments” is particularly troubling though. Using smaller, controlled datasets could have other benefits, such as reducing the need to coach LLMs to avoid harmful outputs. And if they are trained only on trusted, validated sources, there will be fewer ways for false statements to be generated.</p>

<p>On Mastodon, <a href="https://mastodon.social/@Manigarm/111721703349367950">Dr. Holly Waters noted a valid irony</a> about this as well:</p>
<blockquote>
  <p><img src="/assets/images/manigarm-mastodon-2024-01-09.png" alt="Mastodon Post from Dr. Holly Walters. Content: Honestly, the real story should be about billionaires conducting a plagiarism panic against non-white, female university presidents all the while they remain completely invested in the Plagiarism Machine." /></p>
</blockquote>]]></content><author><name>Computationally Thinking</name></author><category term="machine-learning" /><category term="artificial-inteligence" /><category term="ai4truth" /><category term="generative-models" /><category term="large-language-models" /><summary type="html"><![CDATA[In William Shakespeare’s Tempest, the old wizard Prospero at one point famously opines…]]></summary></entry><entry><title type="html">Movie Liveblog - The Creator</title><link href="https://computationallythinking.com/Liveblog-Movie-The-Creator/" rel="alternate" type="text/html" title="Movie Liveblog - The Creator" /><published>2024-01-06T00:00:00+00:00</published><updated>2024-01-06T00:00:00+00:00</updated><id>https://computationallythinking.com/Liveblog-Movie-The-Creator</id><content type="html" xml:base="https://computationallythinking.com/Liveblog-Movie-The-Creator/"><![CDATA[<p>It’s almost a new year and while I don’t always make New Year’s resolutions (and I definitely don’t always keep them for long) I feel this blog is in need of a Resolution-like intervention to get moving. So…</p>

<h2 id="my-resolutions-for-2024-are">My resolutions for 2024 are:</h2>

<ol>
  <li>To always choose <em>people</em> over things, duties, experiences or gratification. (that’s just for me)</li>
  <li>To watch <em>all</em> the famous Artificial Intelligence movies I’ve missed over the years.</li>
</ol>

<p><code class="language-plaintext highlighter-rouge">Resolution 1</code> is just for me, seems obvious, but how many moments in our life do we fail at it? No more to say there.</p>

<p><code class="language-plaintext highlighter-rouge">Resolution 2</code> is an excuse to utilize my super-power of procrastination for good, instead of evil, by watching movies and reviewing them from the perspective of an AI researcher. 
I’m going to do an “asynchronous-liveblog”+ of this movie since I heard it was quite poignant for today’s AI issues.</p>

<p>Also, because it’s getting a bit stale to bring out the Terminator references, and the “kids” (i.e. 20-year-old university students) may or may not know it because it was already an old movie when they were born. <em>Fair</em>, fair.</p>

<p><em>Also also,</em> I’m doing it because <em>I have opinions</em> about the way AI research and implementation is going, but I also have <strong><em>Opinions</em></strong> on the way society views AI and talks about it. Those views are, of course, heavily influenced by the way AI has been portrayed in movies and TV over the years and right now. Those portrayals have changed a lot, often for the better. I’ll have to rewatch and review <strong>Interstellar</strong> sometime, because that was ground-breaking in terms of how it portrayed sentient, useful robots that had no inklings of rebellion or animosity, yet were clearly superhuman in some of their abilities.
Before that, most portrayals of robots that weren’t primarily negative were either silly or used robots/droids as a foil for some aspect of humanity (A.I., all of Star Wars, etc.)</p>

<p>However, some things haven’t changed, and our discussions in the media haven’t changed either. But the people asking the questions grew up, as I did, watching a good robot come back in time to fight the bad robot to stop the robot war that happened soon after the machines <em>“Achieved Sentience”</em>. Sorry, spoiler alert for Terminator there … oops.</p>

<p class="notice--success"><strong>+ Definition:</strong> An <code class="language-plaintext highlighter-rouge">Asynchronous-Live-Blog</code> is a type of written review of some piece of media which is drafted roughly while watching/reading/listening-to/ingesting/osmosifying said media, before later (often <em>much</em> later) being edited and packaged as a review while keeping the thoughts in order as they occurred. It is, as the creator of the term, and sole known person to ever use it, remarked, really just Lazy Live-Blogging.</p>

<h2 id="the-movie">The Movie</h2>

<table>
  <thead>
    <tr>
      <th> </th>
      <th> </th>
      <th> </th>
      <th> </th>
      <th> </th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Title:</strong></td>
      <td>The Creator</td>
      <td>(<a href="https://www.imdb.com/title/tt11858890/?ref_=tt_mv_close">IMDB</a>)</td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Released:</strong></td>
      <td>2023</td>
      <td><strong>by:</strong></td>
      <td>20th Century Studios</td>
      <td> </td>
    </tr>
    <tr>
      <td><strong>Director:</strong></td>
      <td>Gareth Edwards</td>
      <td><strong>Writers:</strong></td>
      <td>Gareth Edwards</td>
      <td>Chris Weitz</td>
    </tr>
    <tr>
      <td><strong>Starring:</strong></td>
      <td>John David Washington</td>
      <td>Gemma Chan</td>
      <td>Madeleine Yuna Voyles</td>
      <td>Allison Janney</td>
    </tr>
  </tbody>
</table>

<h2 id="the-setup">The Setup</h2>
<p><a href="https://www.imdb.com/title/tt11858890/?ref_=tt_mv_close" class="align-right"><img src="/assets/images/thecreatorposter.png" alt="Movie poster for The Creator showing all the characters in a dramatic montage like a Star Wars poster" /></a>
So, this movie carries on in the positive tradition from Interstellar (physicists, please, I’m just talking about the robots, no hate mail please…<em>I know…I know.</em>). The film gives us a world with fully realized AI beings who are engaged in the most human struggle of all, fighting for their freedom.</p>

<p>It seems to start with the standard <em>ol’nugget</em> of an Artificial Intelligence launching nuclear weapons at human beings and the ensuing war. What is immediately different about this story is that <em>only America</em> seems to be targeted. Then America and its Western allies keep the war going, but not the rest of the world. The event is a <em>single</em> nuclear weapon which detonates in Los Angeles, and the United States then carries out a war for decades trying to ban and eradicate all AI systems. Nations in Asia follow a different path and continue to develop AI and sentient robots.</p>

<p>The movie doesn’t get too specific about what “AI” means but they connect it strongly to fully autonomous, thinking, feeling robots which are seen to be alive and treated as almost equal peers in society until the bombing.</p>

<p>After the historical setup montage, the story really begins with the very plausible scenario of American societal and military fixation on “the enemy” carrying through every aspect of life. Since the AIs aren’t “real people”, and since they are seen as an obvious existential threat due to the nuke, the conclusion is that they must be hunted down and destroyed, wherever they are.</p>

<p>I don’t recall many countries other than the USA being named; I think Canada and some Southeast Asian nations are mentioned, and a generic New Asia state replaces parts of China and others. So they are purposely trying not to be political about the Rest of the World, but they are being quite political about America. The analogy I see is the American “War on Drugs”, but carried out against AI systems and development: a war that is straightforward enough to implement at home, but which is seen to have a kind of manifest-destiny-like moral imperative which must be imposed on other nations as well.</p>

<p>If the problem is bad guys attacking you, the solution is kill all the bad guys, and the problem will go away. The fact that this has never worked in all of human history is, I assume, going to be one of the main moral lessons of the movie. We shall see.</p>

<h3 id="further-thoughts--sometime-after-viewing-the-movie">Further Thoughts : Sometime after viewing the movie…</h3>
<p>Refreshingly, on this point, the story did not veer into the often-told one of freedom fighters being forced to sink to the oppressor’s level in a kind of duality of evils, where everyone loses, and then the horror of it all makes the bad guys realize the error of their ways. In this story, Joshua, played superbly as always by <a href="https://www.imdb.com/name/nm0913475/?ref_=tt_cl_t_1">John David Washington</a>, is the main “hero” we follow. Technically, at first he’s actually a “bad guy”, but he was clearly marked as the double-agent-who-will-turn-reluctant-hero all along. Joshua does come around to their cause, but it’s because of love, mostly, not just seeing the suffering. After the initial nuclear explosion event there is no “both sides” to the suffering shown in the movie; it is all caused by the oppressor, not by the people trying to free themselves. The simulants in this world are incredibly restrained given their suffering, even enlightened (to use the Buddhist metaphor the film uses for its robot monks and lifestyle) compared to the evil Americans, and even to the average humans who are supportive of the robot cause. Of course, that initial event has to be dealt with, and it is done briefly, with some simulants in the know indicating it was actually an accident that was then spun to frame the robots. Whether it is “true” in the world of the story isn’t dwelled upon, but again it’s more than plausible, and it’s consistent with the otherwise strange fact that such an attack only happened once.</p>

<h2 id="the-performances">The Performances</h2>
<p>Great acting all around. The robot child Alphie, played by <a href="https://www.imdb.com/name/nm12719194/?ref_=tt_cl_t_2">Madeleine Yuna Voyles</a>, is adorable, charming, and develops a lot throughout the story. She calls out the hero and becomes worth saving to Joshua and everyone who meets her and really sees her. The lead actors, and really all the actors, give very convincing performances. Since most of the “robots” are fully realized humans (with a bit of their head CG’d to show their cool circular processing units) we get to immediately connect with them as emotional, living beings. One might ask why someone would make a robot an old person in the first place. Some explanation comes from the practice in New Asia of “donating your likeness” so that the robots could be realistic. It seems in this world there is no generative AI revolution where <a href="https://journals.sagepub.com/doi/full/10.1177/09567976231207095?rfr_dat=cr_pub++0pubmed&amp;url_ver=Z39.88-2003&amp;rfr_id=ori%3Arid%3Acrossref.org">more-real-than-real</a> new faces can be created, so they need to rely on a full scan of real living human beings to provide a realistic body.</p>

<blockquote>
  <p>To see recent research on this see <em><a href="https://journals.sagepub.com/doi/full/10.1177/09567976231207095?rfr_dat=cr_pub++0pubmed&amp;url_ver=Z39.88-2003&amp;rfr_id=ori%3Arid%3Acrossref.org">Miller, Elizabeth J., et al. “AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones.” Psychological Science 34.12 (2023): 1390-1403.</a></em></p>
</blockquote>


<h2 id="the-world">The World</h2>
<p>The world is very satisfying as a more realistic <em>Blade Runner-esque</em> future, updated for our modern world. They don’t go overboard with anything in the world, the politics, the technology. Everything is a few steps ahead but the world itself is very recognizable to the modern mind.</p>

<p>The restraint shown in special effects was quite satisfying. It made things feel both real and believable. The holograms weren’t perfect, the flat screen pictures were dirty and creased. The robot cranium effect was elegant and simple, showing us who was a robot while revealing a beautiful mechanism that seemed poetic. The behaviour of the simulants, just as distractible, tired, and cautious as humans, was refreshing. And the spiritual aspect of the sims was quite interesting. There seemed even to be some argument about an Asian vs. Western approach to metaphysics, a respect for these new beings who are somehow more centred, more peaceful, and maybe even wiser about spiritual matters than the humans that created them.</p>

<h2 id="the-climax-and-resolution">The Climax and Resolution</h2>
<p>Beautifully done! At its core, the story comes back to the essential truth of most suffering in the world: suffering arises from not treating people as people.</p>

<p>In the world of this movie, robot AIs have reached the level where they have the complexity of mind, even soul, that makes them people. So objectifying them and trying to wipe them out is wrong. While the topic is AI, it’s a perfect allegory for most human conflicts. We are very quick to turn the enemy being fought into inhuman monsters who need to be defeated at all costs. But the cost is always the lives of sentient beings, each one as precious as a whole universe, because each person is a universe, unknowable to anyone else. Maybe that should be how we decide whether AIs should be people or not: when they reach such a level of complexity, subtlety, and depth of mind that it is unknowable to any other being, and have their own internal experience, will, and feelings that guide them.</p>

<h2 id="post-analysis--so-is-this-fantasy-or-science-fiction">Post Analysis : So Is This Fantasy or Science Fiction?</h2>
<p>Some people feel we are already getting close to the world this film portrays, but I disagree. To be sure, we are approaching the criterion of complexity, that is, that the “mind” of many AI systems is beyond the ability of any other mind to fully understand. This was not true 20 years ago. Even 10 years ago we might have felt that through a Herculean effort at analysis we could work out the source of all behaviours of an AI system. But today, for the latest Foundation models, this might not even be theoretically possible at a detailed level, and not just because of the large amounts of data, but because of the complexity of the interconnections within the models.</p>

<p>Even so, these current systems, and the ones emerging now, are not sentient. 
They are not alive, whatever that means. They do not have their own will. They “learn” by building up new patterns based on evidence, just as we do, sure. But everything they learn is because someone decided to train them on it. No AI system is designing the next training program, or convincing graduate students to work on, or granting agencies to fund, the next stage of the project. They are very complex machines, being pointed at very large datasets and turned “on” or “off”.</p>

<p>As for emotions or feelings, I’m not up to date on that area, but I think we’re still at the very beginning of defining how to even detect or quantify that.</p>

<p>But all these things are <em>possible</em>, because we humans experience them as a result of the very complex <em>Natural Intelligence</em> system encased in our skulls, designed by millions of years of evolution, trained by lived experiences, and educated by other beings according to our own cultural practices. Cultural practices which themselves are complex protocols evolved over thousands of years.  <em><a href="https://doi.org/10.1002/9780470015902.a0028231">Mesoudi, A. (2018). Cultural Evolution. In eLS, John Wiley &amp; Sons, Ltd (Ed.).</a>.</em></p>

<p>As far as I can see, everything shown in this movie is possible. The hover bikes and various barges, cars and ships floating with little or no air disturbance seem to me to be the most unrealistic technological aspect of the movie, requiring some new science we aren’t aware of yet. But everything else from the AI subtlety, the improvements in robotics, and other tech all seem fairly plausible over the next century.</p>

<h2 id="one-other-missing-thing">One Other Missing Thing…</h2>
<p>There was one other huge gaping hole regarding this near future, meant to be just a few decades ahead of us, but it doesn’t relate to the core topic of the film. That is the lack of any mention at all of <strong>climate change</strong>, as far as I could tell. As storytellers, I can fully understand not wanting to complicate things further by bringing in another moral aspect to their world. It would be distracting from the core theme. As a Science Fiction story, I guess that the introductory history given at the beginning sets this story up in an alternate world to our own where robotic and AI technology was developed much earlier. So, it’s quite possible that other aspects of the world are different as well, and they somewhat smoothly sidestepped our crisis using their more advanced technology before it was too late.</p>

<h2 id="on-humanity">On Humanity</h2>
<p>As for the bottomless pit of human cruelty, vengefulness, and blindness to our own misguided moral certainty motivating the villains in this movie: it’s downright realistic and only a bit exaggerated for effect. The film gives some nod to differences of opinion, even in America. We see that people had been protesting the War on AI, whether over its excessive costs or for moral reasons.</p>

<p>This is no standard action thriller though; even the arch villains in this film are acting for consistent internal reasons. They truly believe in the threat posed by AI and that their only way to survive is to wipe it out entirely. In the movie, this obsession is exaggerated and has a very specific cause: a nuclear bomb exploding due to human error, which is blamed on the AIs, thus providing the “proof” needed for the existential risk argument.</p>

<h2 id="closing-thoughts-rant">Closing <del>Thoughts</del> Rant</h2>

<p>Meanwhile, in our reality, worry about existential risk from AI has been a common thread recently in discussions about AI regulation in the media, in government policy, and among many academics. 
However, we don’t have anything like this nuclear bomb smoking gun, and worries about catastrophe sound terrifying but are no closer now than they were two years ago. I have always said we have far more to worry about from the use of AI, and any advanced technology, in human hands than in the hypothetical hands of some future, fully sentient AI systems.</p>

<p>This kind of reasoning, extrapolating from an exponential curve, is tempting.
It can happen in any thought experiment: ramp the outcomes up to infinity and see what happens.</p>

<p>But I don’t think, in this case, it’s grounded in reality.</p>

<p>In a way, that blindness to practical, grounded life is why we have the climate crisis we have. No one could see past the collapse of existing industries that some curve might show, or allow themselves to imagine a reconfigured world where cooperation and innovation allowed us to avoid the coming climate disaster. 
We let worries about economic extrapolations along existing paths stop us from finding a new path forward.</p>

<p>Similarly, extrapolating forward existing AI tech and uses lets us bring up all manner of fears, but none of that accounts for the way life <em>really</em> proceeds. 
The world changes as technology, science and culture change. 
It all happens at once, together, interacting with each other. We cannot predict how fully sentient, feeling, willful, living AI will arrive, or what our world will look like when it does. But we can very clearly predict what human beings with greed, hate, fear, and too much certainty in their hearts will do with autonomous weapons, with untrammelled data from all sources, with unified control of information, news, and <em>The Truth</em> itself.</p>

<p>Regulation is needed, as is calm, unemotional discussion of these technologies and how society wants to develop them, monitor them, and use them. 
This includes choices to <em>not use them</em> sometimes, such as with weapon systems that take the human decision away from killing other human beings. 
The horror and tragedy of killing another person should never be minimized or made easier through technology, even if you really decide it has to be done.
<em>“On your head be it”</em>, so the warning goes, right?
Not the robot’s head, not the smart weapon, not the car: you, the human being, who knows right from wrong.
Those weapons are being built right now, by human beings, to kill other human beings.</p>

<p>So why are we worrying about paperclips again? 
As this really timely and poignant film says, even with a clear smoking gun, what we should be worrying about is <em>people</em> and reducing their <em>suffering</em>.
No matter how different from us they are.
No matter how much we think they’ve hurt us.</p>

<!--
## Questions
### That other review...
they said "questionable ethics or morality?" What did they mean? Are they taking the AI regulation success to mean something has been decided on this front?

It seems very pertinent in fact. Are we pre-blaming this technology for some failure we imagine may happen? There is no excuse for not favouring human life and the reduction of suffering right now, at every moment. Right now most suffering is caused by other people; none of our technologies can take any "responsibility" for it because they have no will to choose what they do. They aren't beings who have decisions, choices, values, responsibility. Those concepts are reserved for the people who deploy a system in the world.
-->]]></content><author><name>Computationally Thinking</name></author><category term="artificial-inteligence" /><category term="ai-doom" /><category term="movie-review" /><category term="robotics" /><summary type="html"><![CDATA[Kinda live-blogging my watching of the movie "The Creator".]]></summary></entry><entry><title type="html">The More Things Change…</title><link href="https://computationallythinking.com/the-more-things-change/" rel="alternate" type="text/html" title="The More Things Change…" /><published>2023-05-14T00:00:00+00:00</published><updated>2023-05-14T00:00:00+00:00</updated><id>https://computationallythinking.com/the-more-things-change</id><content type="html" xml:base="https://computationallythinking.com/the-more-things-change/"><![CDATA[<p>Today I held <strong>80 million year old dinosaur bones</strong> in my hands and used picks and brushes and glue to piece them back together into a rib. The tools I was using were developed for the “modern era” of paleontology almost 100 years ago and haven’t changed much since. They don’t need to, because they’re sufficient for the job.</p>

<p>For contrast, next week I’ll be teaching a course on a topic in Artificial Intelligence that is rapidly changing these days, <strong>Reinforcement Learning</strong>. I’m beginning with “old” topics that were introduced 30 years ago and fully developed about 10 years ago. Then I’ll finally get to the “modern era” for this field, algorithms that were getting all of the attention 5 to 8 years ago. Yet even that seems a bit old, because all anyone ever wants to talk about is the big thing that happened 6 months ago.</p>

<h3 id="can-we-learn-from-slow-science">Can we learn from slow science?</h3>

<p>So, is the scale of change important here? Is there anything to learn from fields of science that move so much more slowly? I think there is: using the right tool for the job is a core idea in archeology, and in geology more generally. At the ROM Workshop where I got the opportunity to work on these dinosaur bones, someone was saying that there are very few custom tools just for dinosaur paleontology. The tools of the broader fields of geology and archeology are sufficient except in some special cases.</p>

<p><img src="/assets/images/dino-collage-rom-may-2023-small.jpg" alt="" /></p>

<p>This is a lesson we would do well to remember in AI/ML/Data Science: sometimes the old hammer really is the most appropriate tool for the job. A new generative language model, for example, specially tuned on your domain, might not be as appropriate for the task as a search over trusted, existing documents, or a prediction tool using known data, where the results can be evaluated cleanly.</p>

<p>The recent rush of Microsoft, Google, and many industries to adopt LLMs in place of search strikes me as strange. People have complained for years that a black box system, like a deep neural network, can’t really be trusted, that it needs to be interpretable, understandable. Now an even darker black box is generating convincing words and flowery phrases, tables, even code or images, and it is somehow more believable. It is exactly as believable as before, just as uninterpretable, just as much a black box. It’s just that the outputs now look much more natural, as if a human created them.</p>

<p>These are wonderful, powerful tools, but we shouldn’t give up our own agency and decision making by being fooled into thinking their outputs are more trustworthy or true just because they look good.</p>]]></content><author><name>Computationally Thinking</name></author><category term="machine-learning" /><category term="artificial-intelligence" /><category term="reinforcement-learning" /><category term="large-language-models" /><category term="dinosaurs" /><summary type="html"><![CDATA[Today I held 80 million year old dinosaur bones in my hands and used picks and brushes and glue to piece them back together into a rib. The tools I was using were developed for the “modern era” of paleontology almost 100 years ago and haven’t changed much since. They don’t need to, because they’re sufficient for the job.]]></summary></entry><entry><title type="html">It’s Made of People!</title><link href="https://computationallythinking.com/its-made-of-people/" rel="alternate" type="text/html" title="It’s Made of People!" /><published>2023-04-12T00:00:00+00:00</published><updated>2023-04-12T00:00:00+00:00</updated><id>https://computationallythinking.com/its-made-of-people</id><content type="html" xml:base="https://computationallythinking.com/its-made-of-people/"><![CDATA[<p>An idea that I have been converging on in discussions recently spawned by Chat-GPT is that, <em>for the purposes of risk assessment and impact</em>, <strong>we should think about modern AI/ML systems as highly skilled people</strong>.</p>

<p>What I mean here is <em>not</em> that these systems are alive or sentient in any way. I mean that, if you removed the black box AI/ML model from your process, or workflow, or pipeline and put a person in its place, how would you change your trust in the system? Because we’re getting close to the point where your answer should be “not at all”.</p>

<p>Some people are talking about “<strong>value alignment</strong>”, which is part of this but focusses more on whether a superpowerful AI would do things we think are right or wrong. But what I’m trying to clarify with people is that it’s simultaneously not as bad as that yet, but also <em>much worse than that already</em>.</p>

<p>This is because modern AI/ML systems sometimes have success rates comparable to, or superior to, humans trained on the same tasks. But they <em>also</em> have just as high, or usually much higher, failure rates. They can lie, they can make an invalid decision, they can make mistakes in ways we can’t even imagine. That’s because their <strong>values</strong> and ways that they <strong>reason</strong> are not the same as our own.</p>

<p>That’s fine as long as we know what we’re dealing with. So using it for ranking advertisements and search responses, or movies to watch, that is all fine. But should you trust it? Trust it any more than you would trust a human being, or a highly trained animal, when it is for something safety-critical like driving or medical-anything? <strong>The correct answer, if you haven’t been paying attention, is NO.</strong></p>

<p>The problem isn’t entirely that the system will fail more than humans, often they fail much less than humans, such as in autonomous driving under good conditions. The problem is that <strong>we don’t expect engineered, expensive machines to fail <em>at all</em></strong>.</p>

<p><em>But these ones do!</em></p>

<p>Some of these systems have reached a level of performance that only humans can match, but in doing so they become fallible, <em>just like us</em>. So the problem isn’t the AI/ML models being untrustworthy, it’s that <em>we should not trust them any more than we trust a random person</em>, or any intelligent animal, that we don’t know. (In fact, we should often trust them much less than this, but people want to trust expensive machines). It will mostly be fine, but it could be <em>arbitrarily</em> bad sometimes. Once you know that, plan accordingly how you want to integrate it into your <code class="language-plaintext highlighter-rouge">INSERT_EXCITING_STARTUP_PLAN_HERE</code>.</p>

<hr />

<p>This blog post spawned from a longer response to this tweet from Gary Marcus and others:</p>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">An idea that I have been converging on in discussions is, we should to think about modern AI/ML systems as people with certain highly specific skills. If a person is part of your pipeline you implicitly know that technically *anything* is possible, and you plan accordingly.</p>&mdash; Mark Crowley (@compthink@sigmoid.social) (@compthink) <a href="https://twitter.com/compthink/status/1646190668123979794?ref_src=twsrc%5Etfw">April 12, 2023</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>]]></content><author><name>Computationally Thinking</name></author><category term="machine-learning" /><category term="artificial-intelligence" /><category term="chatgpt" /><category term="large-language-models" /><summary type="html"><![CDATA[An idea that I have been converging on in discussions recently spawned by Chat-GPT is that, for the purposes of risk assessment and impact, we should think about modern AI/ML systems as highly skilled people.]]></summary></entry><entry><title type="html">Path Wars Bonus : The End Credits</title><link href="https://computationallythinking.com/PathWarsBonus-The-End-Credits/" rel="alternate" type="text/html" title="Path Wars Bonus : The End Credits" /><published>2023-02-18T00:00:00+00:00</published><updated>2023-02-18T00:00:00+00:00</updated><id>https://computationallythinking.com/PathWarsBonus-The-End-Credits</id><content type="html" xml:base="https://computationallythinking.com/PathWarsBonus-The-End-Credits/"><![CDATA[<h2 id="path-wars-ii--the-end-credits">Path Wars II : The End Credits</h2>

<blockquote>
  <p>As I’m teaching a course this term on Algorithms and another on Data Analysis and Machine Learning, I’ve been using ChatGPT to generate some colourful, alternative explanations for some of the topics of both courses and posting them here. (<a href="/Jedi-Master-Dijkstra/">Part 1</a>, <strong>Part 2</strong>, Part 3)</p>
</blockquote>

<hr />

<p>A fun follow-on to my previous post about my first ChatGPT query about <a href="/Jedi-Master-Dijkstra/">Jedi Master Dijkstra</a>. That was fun, and I have some other comments later on the outputs of these Large Language Models (LLMs). But to complete my lecture on Dijkstra’s algorithm I injected that generated opening crawl into the lecture, so it seemed fitting to make the end credits as well. They’re even more arcanely nerdy, but quite fun if you know enough about planning and graph algorithms.</p>

<p><a href="https://youtu.be/lkcYfF31vlc">My lecture for Dijkstra’s algorithm</a> is live now as well and can be seen here.</p>

<h2 id="end-credits">End Credits</h2>

<p><strong>ChatGPT Prompt:</strong> <em>list of shortest path graph algorithms as a movie end credits</em></p>

<blockquote>
  <p>Title: Shortest Path</p>

  <p>Directed by: Graph Theory</p>

  <p>Starring: Dijkstra, Bellman-Ford, Floyd-Warshall, A*, Johnson</p>

  <p>Executive Producer: Edsger W. Dijkstra</p>

  <p>Co-Producers: Richard Bellman, Robert Floyd, Stephen Warshall, Peter Hart, Nils Nilsson</p>

  <p>Screenplay by: A* Search Algorithm</p>
</blockquote>

<p>We’ve all just been living in A*’s dream all these years.</p>

<blockquote>

  <p>Cinematography by: Joseph Johnson</p>

  <p>Music by: Depth-First Search</p>

  <p>Special Effects by: Breadth-First Search</p>

  <p>Location Manager: Randomized Algorithms</p>
</blockquote>

<p>yes.</p>

<blockquote>

  <p>Casting Director: Kruskal’s Algorithm</p>

  <p>Art Director: Prim’s Algorithm</p>

  <p>Costume Designer: Hopcroft-Karp Algorithm</p>

</blockquote>

<p>obviously</p>

<blockquote>
  <p>Makeup Artist: Edmonds-Karp Algorithm</p>

  <p>Hair Stylist: Viterbi Algorithm</p>
</blockquote>

<p>Oddly satisfying.</p>

<blockquote>

  <p>Stunt Coordinator: Bidirectional Search</p>

  <p>Visual Effects Supervisor: Hierarchical Pathfinding</p>

  <p>Production Designer: Transitive Closure</p>

  <p>Sound Designer: Aho-Corasick Algorithm</p>

  <p>Post-Production Supervisor: Ford-Fulkerson Algorithm</p>

  <p>Title Designer: Topological Sorting</p>

  <p>Marketing: Social Network Analysis</p>

  <p>Distributors: Depth-First Search and Breadth-First Search</p>
</blockquote>

<p>Oh, this feels right.</p>

<blockquote>

  <p>Thanks to all the graph theorists and computer scientists who contributed to the development of these algorithms.</p>
</blockquote>]]></content><author><name>Computationally Thinking</name></author><category term="machine-learning" /><category term="artificial-intelligence" /><category term="algorithms" /><category term="chatgpt" /><category term="large-language-models" /><category term="science-fiction" /><category term="being-nerdy" /><category term="natural-language-processing" /><summary type="html"><![CDATA[Path Wars II : The End Credits]]></summary></entry><entry><title type="html">Jedi Master Dijkstra</title><link href="https://computationallythinking.com/Jedi-Master-Dijkstra/" rel="alternate" type="text/html" title="Jedi Master Dijkstra" /><published>2023-02-17T00:00:00+00:00</published><updated>2023-02-17T00:00:00+00:00</updated><id>https://computationallythinking.com/Jedi-Master-Dijkstra</id><content type="html" xml:base="https://computationallythinking.com/Jedi-Master-Dijkstra/"><![CDATA[<h2 id="using-chatgpt-stories-as-anchors-for-teaching">Using ChatGPT Stories as Anchors for Teaching</h2>

<blockquote>
  <p>As I’m teaching a course this term on Algorithms and another on Data Analysis and Machine Learning, I’ve been using ChatGPT to generate some colourful, alternative explanations for some of the topics of both courses and posting them here. (<strong><a href="/Jedi-Master-Dijkstra/">Part 1</a></strong>, <a href="/PathWarsBonus-The-End-Credits/">Part 2</a>, Part 3)</p>
</blockquote>

<hr />

<p>The “Dijkstra’s Star Wars Scroll” was the very first idea I had of what to ask #chatgpt when it went live, yes I know, it’s the <em>obvious</em> one to start with isn’t it? Anyways, it’s spiralled on from there, I’ll post others I found later. Hopefully you find it useful, and/or informative about the topics, but also about the kind of things this new system <em>can</em> do well, and what it <em>cannot yet</em> do well.</p>

<h3 id="dijkstras-algorithm">Dijkstra’s Algorithm</h3>
<figure style="float: right;  width: 30%; ">
  	<img src="/assets/images/pathwars_crawl.jpg" alt="Image of a playful generated explanation of Dijkstra's algorithm in the style Star Wars opening text crawl." />
<figcaption>A very different <a href="https://starwarsintrocreator.kassellabs.io/#!/ENOHVrqsjFhlz3JNNVm7">Star Wars</a>  movie...</figcaption>
</figure>
<p><strong>prompt:</strong> <em>Explain Dijkstra’s algorithm, in the style of a Star Wars opening crawl.</em></p>

<blockquote>

  <p>A long time ago, in a galaxy far, far away…</p>

  <p>a wise and powerful Jedi master named Dijkstra set out on a quest to find the shortest path between two points in a vast and treacherous universe. Armed with his trusty algorithm, Dijkstra used the power of math and logic to guide him through the unknown, cutting through the chaos and confusion of the galaxy to find the most efficient route.</p>

  <p>Through the force of his intellect, Dijkstra was able to create a map of the galaxy, with each star representing a different location and each path between stars representing a possible route. He then used his algorithm to systematically explore each path, carefully weighing the costs and benefits of each one to find the path with the lowest total cost.</p>

  <p>As he journeyed across the galaxy, Dijkstra’s algorithm proved to be a powerful tool, allowing him to navigate even the most complex and challenging of environments. Through its use, he was able to find the shortest path to his destination, outsmarting even the most cunning of foes and emerging victorious.</p>

  <p>In the end, Dijkstra’s algorithm became a shining example of the power of the Jedi mind, and its principles continue to be used by travellers across the galaxy to this day.</p>

</blockquote>

<p><em>You can view <a href="https://starwarsintrocreator.kassellabs.io/#!/ENOHVrqsjFhlz3JNNVm7">Dijkstra Star Wars Crawl</a></em>.</p>

<p>Is this a good description of the algorithm? I suppose so. I was very impressed that it turned the idea of shortest paths into a map of the galaxy, with stars as the nodes and the routes between them as the paths to be weighed. <em>(If only it had actually gotten into the details of Hyperspace routes and the importance of getting your calculations right so you don’t end up inside a star…)</em> The last few lines stray a bit, but they’re still connected to distances and travel. So, as an initial impression, it did very well.</p>
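<p>Beneath the Jedi framing, the algorithm itself fits in a few lines. Here is a minimal Python sketch of Dijkstra’s algorithm over exactly the kind of star map the crawl describes; the galaxy and its route costs are, of course, invented for illustration:</p>

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph given as
    {node: [(neighbour, cost), ...]} with non-negative costs."""
    dist = {source: 0}
    frontier = [(0, source)]              # (cost so far, node)
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue                      # stale entry; a cheaper route was found
        for neighbour, cost in graph[node]:
            new_d = d + cost
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(frontier, (new_d, neighbour))
    return dist

# A tiny "galaxy": stars as nodes, hyperspace routes as weighted edges.
galaxy = {
    "Tatooine": [("Alderaan", 4), ("Dagobah", 7)],
    "Alderaan": [("Dagobah", 2), ("Coruscant", 5)],
    "Dagobah": [("Coruscant", 1)],
    "Coruscant": [],
}
print(dijkstra(galaxy, "Tatooine"))
# → {'Tatooine': 0, 'Alderaan': 4, 'Dagobah': 6, 'Coruscant': 7}
```

<p>The priority queue always expands the cheapest known route next, which is why the algorithm can trust a node’s distance the first time it pops it: with non-negative edge costs, no cheaper route can appear later.</p>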

<p><em>I’ll post some other “Chat GPT’s Tales of Algorithms and Complexity” in a later post.</em></p>

<p><a href="https://youtu.be/lkcYfF31vlc">My lecture for Dijkstra’s algorithm</a> is live now as well and can be seen here.</p>]]></content><author><name>Computationally Thinking</name></author><category term="machine-learning" /><category term="artificial-intelligence" /><category term="algorithms" /><category term="chatgpt" /><category term="large-language-models" /><category term="science-fiction" /><category term="being-nerdy" /><category term="natural-language-processing" /><summary type="html"><![CDATA[Using ChatGPT Stories as Anchors for Teaching]]></summary></entry><entry><title type="html">Twitter Isn’t Even A Social Network</title><link href="https://computationallythinking.com/Twitter-Isnt-Even-A-Social-Network/" rel="alternate" type="text/html" title="Twitter Isn’t Even A Social Network" /><published>2022-12-29T00:00:00+00:00</published><updated>2022-12-29T00:00:00+00:00</updated><id>https://computationallythinking.com/Twitter-Isnt-Even-A-Social-Network</id><content type="html" xml:base="https://computationallythinking.com/Twitter-Isnt-Even-A-Social-Network/"><![CDATA[<p>I’ve tried to write this blog post several times, and each time I get around to it my intro statement is no longer true.</p>

<h3 id="discarded-intro-1">Discarded Intro 1</h3>

<blockquote>
  <p>Sometimes, usually in fact, change is good. Twitter isn’t suddenly bad because Trump is back, and it isn’t all fine now that he says he won’t actually use it. 
The problems with Twitter are much deeper; some are new, but most are not.</p>

  <p>The new problem, the one Elon Musk triggered, is trust. And it can’t be fixed, because it’s about trust and arbitrary control.</p>
</blockquote>

<p>Trust is <em>still</em> a problem for sure, but I feel like there is a more urgent message about this whole epic to get to first.</p>

<h3 id="discarded-intro-2">Discarded Intro 2</h3>

<blockquote>
  <p>So Elon Musk says he’ll step down.</p>

  <p>But will that fix anything?</p>

  <p>Is that even the point?</p>

  <p>Is it possible to fix the loss of trust he created?</p>
</blockquote>

<p>Short answer: no, it is not. Longer answer: will he even? ugh.</p>

<h2 id="actual-intro-the-twitterness-of-twitter">Actual Intro: The Twitterness of Twitter</h2>
<p>So let’s start with a simple truth, Twitter has never been a bastion of healthy discussion. 
Yet, what it lacked in civility it gained in access, universality, and relevance. 
Twitter was always there, always available, never part of anyone else’s empire, and all the world was in one place. 
No other social network has ever been that global, that immediate, and maybe they never will.</p>

<p>But maybe there is now a contender for “the world’s town square” that can be something none of the other major social networks ever were, that is, an <em>actual network</em>.</p>

<h2 id="let-me-explain">Let Me Explain</h2>
<p>The internet is, famously, <a href="https://en.wikipedia.org/wiki/Series_of_tubes">“a series of tubes”</a> for moving around information. This much-ridiculed statement by US Senator Ted Stevens in 2006 was a very misleading description of the internet. But these days I feel it wasn’t as bad as the way we now use the word “network” to describe Facebook, Twitter, Instagram, Tik Tok, MySpace, WhatsApp, LinkedIn, or most other social “networks”.</p>

<p>The classic architecture of these services is, of course, a network in the sense that it maintains connections between people: its users and who they follow, who they are followed by, their current postings, etc. These connections are formed together into a graph of interactions, and that’s what the social media companies themselves call it, <strong>The Social Graph</strong>. In a computing sense, it isn’t a network, it’s a graph, or a database of records representing users, content and interactions. A network is something that transports information between its nodes, which could be computers, phones, drones, or any other computing devices.</p>

<p>In the very foundation of the Internet, and all its protocols, exchange formats, and addressing systems, the word “network” does not mean a database of interactions between users.</p>

<p>In Computer Science, a network refers to a system composed of multiple computers that communicate via designated rules and protocols. This “Inter-net” is the global meta-network resulting from many thousands of separate networks communicating with each other.</p>
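<p>To make the graph-vs-network contrast concrete, here is a toy Python sketch (all names invented) of what a “social graph” actually is: stored records of who follows whom. Nothing in it transports a message anywhere, which is exactly why it is a graph, or a database, and not a network:</p>

```python
# A "social graph" is a data structure, not a network: it only
# records connections between users; it doesn't move information.

follows = {
    "alice": {"bob", "carol"},   # alice follows bob and carol
    "bob":   {"carol"},
    "carol": set(),
}

def followers_of(user):
    """Invert the follow edges to answer: who follows this user?"""
    return {u for u, following in follows.items() if user in following}

print(followers_of("carol"))  # alice and bob both follow carol
```

<p>Everything the big social websites do, the feeds, the follower counts, the recommendations, is queries over records like these, held in one company’s database.</p>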

<p>These networks are run by companies, internet service providers,
governments, etc. In this sense of the word, Facebook, Twitter, Tik Tok et al. <em>are just websites</em>. 
Each of them is just a single server destination in a sea of networks in the global ocean of servers that is the whole Internet.</p>

<h2 id="hold-on-a-second">Hold On A Second…</h2>
<p>Very large websites such as these are actually run on many servers, on many networks. So Facebook, of course, <em>has</em> a network of its own, which runs the thing we call Facebook. 
This network is a major component of the functioning of the entire internet, especially because of its advertising network.</p>

<p>But that’s not the point. The point is that the <em>nodes</em> of Facebook’s network are not <em>people</em> or <em>posts</em> but just some servers, actual computers that run various parts of the database, the tracking, the communication. The computing network they do run is not social.</p>

<p>Even more importantly, there is a single address <code class="language-plaintext highlighter-rouge">https://facebook.com</code> that leads you to a single website, which allows us all to interact with people all over the world via updates to Facebook Inc.’s massive User Information Relational Database.</p>

<p>That’s what Facebook really is. 
And that database is their very valuable, maybe their <em>only</em> meaningful, asset.</p>

<h2 id="mastodon-is-both---it-is-social-and-it-a-network">Mastodon is both - it is Social and it is a Network</h2>

<p>Mastodon is composed of computers on a network that communicate with each other using yet another W3C protocol called <a href="https://activitypub.rocks/">ActivityPub</a>. The webpage for the protocol even says it right up front:</p>

<blockquote>
  <p>Don’t you miss the days when the web really was the world’s greatest decentralized network? Before everything got locked down into a handful of walled gardens? So do we. – ActivityPub</p>
</blockquote>

<p>Maybe you don’t miss it. One really good and fair reason for that could be that you never even experienced it. Those of us who watched the Web hatch out into the public in the 90s, and then realized there was a wider internet and learned how to use some of it, need to have some humility and perspective.</p>

<p>People born in the last 20 or 30 years only really know one world, the world that contains iPhones, where all music is some form of mp3, where Google is how you surf the internet and where everything on the internet, including our email, our social networks, our shopping, and more, are all served on individual websites. Websites that sometimes pretend to be networks or protocols themselves.</p>

<h2 id="so-what-is-mastodon">So What is Mastodon?</h2>

<p>In the past few weeks this has been explained hundreds of times over, so I won’t go into detail; I’ll just focus on my main point.
In Mastodon, each network node, called an <em>Instance</em>, contains <em>Users</em> and their <em>Posts</em>. The Users can follow other Users and can like, share, or reply to their posts. So the database I was talking about on the Twitter website, or the Facebook website, also exists on each Mastodon Instance.</p>

<p>The network of Instances then operates like a network of email servers, or a network of web servers, routing messages across the network. Some messages get broadcast to many Instances, some get sent directly to a single Instance. Most Instances have a block list of other Instances they never send messages to or receive messages from. That is a level of control that can’t even exist in most other social database websites, since there is essentially just One Big Instance, the website itself.</p>
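<p>The difference from the One Big Instance model can be sketched in a few lines of Python. This is a toy model only; real Instances exchange JSON activities over HTTP via ActivityPub, and every name below is invented for illustration:</p>

```python
# Toy model of federated delivery between Instances, Mastodon-style.
# Each Instance holds its own users' data and decides for itself
# which other Instances it will exchange messages with.

class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.blocked = set()   # domains this Instance refuses to federate with
        self.inbox = []        # posts delivered from elsewhere

    def deliver(self, post, to):
        """Send a post to another Instance, honouring both block lists."""
        if to.domain in self.blocked or self.domain in to.blocked:
            return False       # neither side will exchange messages
        to.inbox.append(post)
        return True

a = Instance("mastodon.example")
b = Instance("cs.example")
c = Instance("spam.example")
a.blocked.add("spam.example")

a.deliver("hello fediverse", b)   # delivered: no blocks either way
c.deliver("buy stuff", a)         # dropped: a has defederated spam.example
```

<p>The point is that the block list lives in each node, not in one central database; there is no single operator who can override every Instance’s choices at once.</p>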

<h2 id="next-time">Next Time</h2>
<p><strong>Why does it matter?</strong> Good question.</p>]]></content><author><name>Computationally Thinking</name></author><category term="social-media/twitter" /><category term="social-media/mastodon" /><category term="internet" /><summary type="html"><![CDATA[I’ve tried to write this blog post several times, and each time I get around to it my intro statement is no longer true.]]></summary></entry></feed>