
The Philosophy of AI


I was thinking about the movie Enthiran, where Dr. Vaseegaran introduces his robot to the world. I was wondering how Shankar (Sujatha) envisioned the introduction of the most advanced technology to the world. They booked a mandapam, gathered the hotshots, and the scientist did the big reveal, the sole fruition of his toil for ages. Cut to 2025: Grok 4 was released yesterday, and it was Elon Musk sitting with a bunch of Asians, candidly talking about the Kardashev scale while the team proudly boasted of all the compute they were mobilizing to make the AI break every barrier and outperform the benchmarks. It indeed takes a village. And Sujatha was a man of science, a genius who inspired a lot of us kids. In his mind, AI might have been just the best algorithm that humans would eventually discover. But it turns out we need industries competing, poaching talent, training on paywalled content, and importing power plants to power more compute in order to evolve AI. Not to mention the AI Gold Rush and the Shovel Sellers (plus Snake Oil): a very distinct game of economics and politics came into play, and now NVIDIA is the leading company in the world in terms of market capitalization.

If you are still here, I am sad but happy to say that I have lured you into a discourse on not subscribing to narratives. Yes, this article that you are currently reading is not about AI. Turn back. NOW!!!

Ok. Where do we start?

LLMs are great. They are. No fucking doubt about it. But the narrative set by Elon et al that this is the path to absolute AGI is bullshit. Let's rewind a little. Shall we?

How does a language model actually operate? You load a program into your computer. A vast amount of text from books, websites, articles, and other sources has been used to train this program. It has developed a statistical understanding of how words naturally occur together in human language rather than learning the entire text by heart. Imagine it as someone who has read millions of pages and has a firm grasp of the normal flow of sentences. Internalised, as in a mind map of patterns, rather than memorised. When you provide the model with a few words, say three words from a sentence, it performs the following actions:

  1. It examines each word separately to determine its meaning and typical usage.

  2. It also takes into account how those words complement one another.

  3. It then predicts, based on its internal map of linguistic patterns, the most natural word to come next.

This process repeats one word at a time, each prediction drawing on its understanding of language, enabling it to produce entire sentences, paragraphs, or even pages. So LLMs are basically good at answering questions, summarizing, translating, coding, even reasoning, all by predicting what comes next in a conversation or task. It is too early to call that intelligence. It simulates intelligence. Are we even ready to dream of living through Kardashev-scale progress, even if that progress is only 0.1%?
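The predict-and-append loop described above can be sketched with a toy model. A real LLM uses a neural network with billions of parameters trained on trillions of tokens; here, purely as an illustration, word-pair counts over a twelve-word corpus stand in for the "internal map of patterns":

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "millions of pages" a real model trains on.
corpus = "the cat sat on the mat the cat sat on the log".split()

# Steps 1-2: build a statistical map of which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Step 3: pick the statistically most likely next word.
    return following[word].most_common(1)[0][0]

# The autoregressive loop: feed each prediction back in, one word at a time.
words = ["the"]
for _ in range(4):
    words.append(predict_next(words[-1]))
print(" ".join(words))  # the cat sat on the
```

The toy never "understands" a sentence; it only continues the most familiar pattern, which is the same mechanism, scaled down absurdly, behind an LLM completing your prompt.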

To be continued...