Sunday, December 28, 2025

Civilisations & Philosophical Traditions

Triggered by the recent workshop on the Epistemology and Ontology of Academic Knowledge organized by the Kerala State Higher Education Council, I started reading up on Indian philosophy. The endeavor led me to read up on several other philosophical traditions as well, including Zoroastrianism and Judaism. Here, I present a summary of these civilizations and the philosophical pursuits/status of each, which I learned about using ChatGPT 5.2 and asked it to summarize in the form of a table.

Period (approx.) | Region | Civilization / Tradition | What exists | Philosophical status
3200–2000 BCE | Mesopotamia | Sumerian civilization | Cuneiform, law, myth, kingship | No reflective philosophy
3100–2000 BCE | Nile Valley | Ancient Egypt | Hieroglyphs, maʿat, afterlife texts | Didactic ethics, not critical philosophy
2600–1900 BCE | South Asia | Indus Valley civilization | Urbanism, undeciphered script | Worldview unknown (script unreadable)
3000–1100 BCE | Aegean | Mycenaean Greece (incl. Minoan) | Palaces, Linear scripts, mythic order | Pre-philosophical
from 1600 BCE | China | Shang dynasty | Oracle bones, ritual divination | Recorded ritual order
from 300 BCE | Mesoamerica | Maya civilization | Writing, calendars, astronomy | Ritual–cosmic order
1500–1200 BCE | South Asia | Rigveda (Early Vedic) | ṛta, ritual + questioning | Earliest known reflective tradition
1400–1000 BCE | Iranian world | Zoroastrianism (Gāthās) | Truth vs lie, moral choice | Earliest known ethical philosophy
1100–800 BCE | Greece | Greek Dark Age | Oral epics, social reorganization | Pre-philosophical
800–500 BCE | India | Upaniṣads | Self, ultimate reality | Axial philosophy
800–500 BCE | West Asia | Judaism (classical) | Law, covenant, history | Axial moral–legal philosophy
600–300 BCE | Greece | Greek philosophy (Archaic → Classical) | logos, nature, ethics | Axial philosophy
600 BCE | India | Jainism | Non-violence, karma | Axial ethical philosophy
500 BCE | India | Buddhism | Suffering, no-self | Axial soteriology
600–300 BCE | China | Confucianism; Daoism | Role-ethics; natural order | Axial philosophy
600 BCE–300 CE | South India | Keeladi | Iron Age urbanism, literacy | Civilizational substrate
300 BCE–300 CE | South India | Tamil Sangam philosophy | Ecology, virtue, social life | World-affirming ethics
1st c. CE | Mediterranean | Christianity | Salvation theology | Theological philosophy
7th c. CE | Arabia | Islam | Law, monotheism | Theological–legal system
200 BCE–300 CE | India | Proto-Śaiva/Vaiṣṇava | Temple/ascetic devotion | Pre-Bhakti
600–1200 CE | India | Bhakti movement | Vernacular devotion | Devotional synthesis
Early CE onward | Korea/Japan | Confucianism & Buddhism | Imported philosophies | Derivative traditions
Timeless (oral) | Australia | Aboriginal traditions | Dreaming cosmologies | Oral ethics/metaphysics
 

What this tells us is that the Vedic and Zoroastrian philosophies are the earliest known reflective philosophical traditions.

Below are the core ideas attributed to each philosophical tradition (again with the help of ChatGPT 5.2):

  • Vedic → ṛta, ritual maintenance of order
  • Zoroastrian → aša vs druj, ethical dualism
  • Greek (early) → nature explained via logos, not myth
  • Greek (Plato/Aristotle) → forms, causation, virtue ethics
  • Judaism → covenant, law, historical time
  • Jainism → non-theism, karma as binding constraint, ahimsa
  • Buddhism → suffering, impermanence, no-self
  • Tamil Sangam → akam/puram tiṇai, aram–poruḷ–inbam
  • Christianity → salvation, incarnation, theology
  • Islam → law, obedience, prophetic continuity


I will continue to write my understanding of some of these philosophical traditions in more detail. 

 

 

 

Saturday, November 08, 2025

The Role of a Human Teacher in the AI Era

We probably remember the discussions around GPT a couple of years ago, when it could not even solve simple math problems, such as calculating the time taken by 4 cars to cover a specified distance at a given speed. GPT multiplied the time taken by one car by the number of cars and gave an answer four times the correct one. Its capacity for quantitative reasoning was poor.
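(An illustrative case with made-up numbers, not the original query: if each of four cars covers 100 km at 50 km/h, every car takes 100 ÷ 50 = 2 hours; multiplying that time by the number of cars gives 8 hours, four times the correct answer.)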

Over the next two years, the developments have been tremendous. GPTs can now perform complex calculations, write thousands of lines of code, write essays, and even ‘guide’ learning! While teachers have been impressed by these developments and are breathlessly trying to keep up with the ever-increasing sophistication of AI and the applications that leverage it, it is natural that a number of questions arise in many minds: “Given the advancements in AI, what is my role in education as a teacher? Will there be no teachers a few years from now? What will be the future of education and educational institutions?”.

While there is no doubt that educational institutions will undergo a sea change in their modes of functioning and that the modern learning process will undergo a paradigm shift, in this article we outline the limitations of AI and why a human teacher will remain essential in educating the young. It is true that the slogan that AI cannot replace the teacher has been around, but without much explicit articulation, and it has not kept the ‘fears’ of teachers at bay.

This article is an attempt to alleviate the fears of teachers and to distinguish between those functions of teachers that AI can perform better than they can, and those functions that it may not be able to perform in the immediate future. This calls for a careful consideration of the functions of teachers inside and outside the classroom.

GPT models have been trained on vast amounts of data, primarily textual, to learn ‘patterns’ in the data. An oft-quoted example to explain this is as follows: take the sentence “The King and the ______ left for their summer palace”. It is easy to guess that the most likely word in the blank is ‘queen’, because this phrase almost always occurs as such in many stories and books. It is such correlations and co-occurrences between words in textual corpora that GPT models ‘learn’.

The example above is only meant to develop an intuitive understanding of how a GPT model learns – in reality, the surface patterns of correlation and co-occurrence it learns are much more complex, thanks to developments in AI algorithms. It is with this that it is able to synthesize a large amount of text and ‘organize’ it when given a prompt.
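As a toy illustration of this idea, here is a deliberately simplified sketch that predicts the next word from co-occurrence counts over a made-up three-sentence corpus (real GPT models learn far richer patterns with neural networks, not simple counts):

from collections import Counter, defaultdict

# A tiny, made-up corpus (illustrative only)
corpus = [
    "the king and the queen left for their summer palace",
    "the king and the queen hosted a feast",
    "the king and the minister discussed taxes",
]

# Count which word follows each pair of preceding words (a toy trigram model)
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        follow_counts[context][words[i + 2]] += 1

# 'Predict' the most likely word after "and the"
context = ("and", "the")
prediction = follow_counts[context].most_common(1)[0][0]
print(prediction)  # prints 'queen'

Here the model ‘predicts’ queen purely because that word follows “and the” more often in the corpus – which is the intuition behind the pattern learning described above.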

In AI, an intent is the user's underlying goal or purpose behind a query or action. The way current AI functions, it has no intent of its own. Only an entity with a mind has intents and goals. Intent is what we humans bring in when we provide prompts.

Imagine you are teaching a class of 60 students and are aiming to design a learning task that requires the students to think critically about the topic you are engaged with. In designing your task, you keep in mind the possible responses of the students, your immediate educational goals embedded within an educational philosophy, the background knowledge and understanding of your students, and your prior experience in teaching (both this set of students and other sets of students).

This is a complex design task – it is complex because there are multiple goals and constraints to be satisfied simultaneously. The level of creativity and strategic thinking required in the design task – drawing from prior teaching experience, aiming at a novel educational goal, anticipating student responses – is beyond the reach of AI, simply because it is only trained to synthesize from existing data, not to design creatively with intent. This is where it cannot replace a human: the human touch in teaching is a creative and strategic role, not simply that of delivering a textbook. This is also a wake-up call for teachers to move from simply donning the hat of deliverers of knowledge to that of developers of the higher-order capabilities of students.

While there is considerable research around the world on modelling how the mind-brains of humans interpret other people’s thoughts, make moral judgments, and comprehend their belief systems [1], at least for now it does not seem that existing AI models are capable of replacing a teacher.


[1] https://betterworld.mit.edu/spectrum/issues/fall-2009/theory-of-mind/

Tuesday, October 21, 2025

Computational Modeling in Physical Biology: The AI Opportunity

For decades, teaching computational modeling in biology at the senior undergraduate or early postgraduate level was profoundly challenging. Mastering it demanded a level of understanding that was difficult for students, particularly those from traditional life-sciences backgrounds with limited exposure to programming or advanced mathematics. The difficulty stemmed not only from the need to grasp the physics behind the models (interpreting results, identifying assumptions, and adding parameters to enhance realism) but also, and crucially, from the need to code. Students often became slaves to the syntax, spending more time debugging semicolons than engaging with scientific principles.

The advent of powerful AI tools and prompting has fundamentally overcome the barrier of coding and syntax. When any student can instantaneously generate code or retrieve information, education focused on rote memorization or basic coding syntax becomes futile. The critical skill for the next generation of scientists is no longer the manual labor of programming, but the expert oversight required to interpret, critique, and guide the AI's output. This shift transforms the student from a slave to programming into the director of scientific thought.

What follows is an example of a five-part pedagogical framework that leverages AI to move beyond code generation and directly address the core challenge of modeling in physical biology. Using the central problem of the competition between deterministic forces and thermal energy—a key concept in physical biology—this framework outlines how students can be coaxed into becoming active scientific critics, equipped with the higher-order thinking skills necessary to thrive in an AI-assisted research environment.

This framework achieves multiple goals: it develops higher-order thinking abilities and computational thinking/modeling ability for biological phenomena, all while simultaneously teaching prompting as the essential bridge between scientific thought and computational directives.

 

Example Exercise

Part 1: Discovering the Basics 

Goal: Understand the code structure and how the two main parameters, Drift Force and Thermal Jiggle, affect the visual output of the particle's movement.


Task:
a)      Copy and paste the code below into Google Colab.


import numpy as np
import matplotlib.pyplot as plt

# --- Customizable Parameters ---
drift_force = 1.0       # F_drift: The constant pull (deterministic force)
thermal_jiggle = 0.5    # T_jiggle: The strength of random molecular impacts (thermal noise)
time_steps = 1000       # N: Total number of steps to simulate
dt = 0.01               # Delta t: Time step size

# --- Simulation Setup (Implicitly includes fluid drag, Gamma) ---
# For simplicity, we assume mass=1 and a constant friction coefficient (gamma=1).
# The movement update follows the heavily damped Langevin equation:
#   dx = (F_drift/gamma) * dt + sqrt(2*kT*dt/gamma) * N(0,1)
# Here, T_jiggle is proportional to sqrt(2*kT/gamma).

# Initialize position and time arrays
position = 0.0
path = [position]
time = [0.0]

# --- Simulation Loop ---
for i in range(1, time_steps):
    # 1. Deterministic Movement (Drift)
    deterministic_step = drift_force * dt

    # 2. Stochastic Movement (Thermal Jiggle)
    # np.random.normal(0, 1) generates a random number from a standard normal distribution (N(mean=0, std=1))
    noise_amplitude = np.sqrt(2 * dt) * thermal_jiggle
    stochastic_step = noise_amplitude * np.random.normal(0, 1)

    # Update position: Total movement = Drift + Jiggle
    position += deterministic_step + stochastic_step

    # Record results
    path.append(position)
    time.append(i * dt)

# --- Plotting the Results ---
plt.figure(figsize=(10, 5))
plt.plot(time, path, label=f'Drift={drift_force}, Jiggle={thermal_jiggle}')
plt.title("Particle Movement: Drift vs. Thermal Jiggle")
plt.xlabel("Time (s)")
plt.ylabel("Position (arbitrary units)")
plt.grid(True, linestyle='--', alpha=0.6)
plt.legend()
plt.show()

 


b)    Activity: Play and Plot


Run the simulation with the default settings (Trial A). Then, change only the indicated parameter for the subsequent trials (B, C, and D) and record your observations.


Trial | drift_force | thermal_jiggle | Observation (Describe the path: erratic, straight, fast, slow, etc.)
A (Default) | 1.0 | 0.5 |
B | 1.0 | 2.0 (Increase Jiggle) |
C | 4.0 (Increase Drift) | 0.5 |
D | 0.5 | 2.0 |
 

Question: Look at the plot for Trial B. The path becomes very erratic, or “messy”. Why do you think the particle’s movement is so jagged and unpredictable when you increase the thermal_jiggle parameter?


Part 2: Questioning the Physics

Look closely at your plot for Trial C (high drift force). Even though the force is constant, the particle’s speed does not increase infinitely; it reaches a steady, constant average speed. Isn’t this contradictory to Newton’s law F = ma, where a constant force F should cause a constant acceleration a? What do you think is happening here, and what physical process is secretly included in the model’s math to prevent the particle from accelerating forever?
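(Instructor’s note, drawing only on the assumptions already stated in the code comments: the update rule dx = (F_drift/gamma) * dt + sqrt(2*kT*dt/gamma) * N(0,1) is the heavily damped form of the Langevin equation, in which a fluid drag force proportional to velocity, with friction coefficient gamma, balances the applied drift force. The average velocity therefore settles at F_drift/gamma instead of growing without bound.)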

Part 3: Contextualizing the Biology: The Motor Protein in a Changing Cell

Imagine this simulation models a myosin motor protein walking along a cellular track.

·       The drift_force is the energy driving the motor.
·       The thermal_jiggle is the water molecules pushing it around.

This ideal situation is rarely the case in a real cell. What other parameters do you think can be added to the model to make it more realistic (e.g., related to fuel, physical environment, or biological obstacles)?

Part 4: Learning to Prompt

The code for the above simulation is reproduced below for ease of reference: 


import numpy as np
import matplotlib.pyplot as plt

# --- Customizable Parameters ---
drift_force = 1.0       # F_drift: The constant pull (deterministic force)
thermal_jiggle = 0.5    # T_jiggle: The strength of random molecular impacts (thermal noise)
time_steps = 1000       # N: Total number of steps to simulate
dt = 0.01               # Delta t: Time step size

# --- Simulation Setup (Implicitly includes fluid drag, Gamma) ---
# For simplicity, we assume mass=1 and a constant friction coefficient (gamma=1).
# The movement update follows the heavily damped Langevin equation:
#   dx = (F_drift/gamma) * dt + sqrt(2*kT*dt/gamma) * N(0,1)
# Here, T_jiggle is proportional to sqrt(2*kT/gamma).

# Initialize position and time arrays
position = 0.0
path = [position]
time = [0.0]

# --- Simulation Loop ---
for i in range(1, time_steps):
    # 1. Deterministic Movement (Drift)
    deterministic_step = drift_force * dt

    # 2. Stochastic Movement (Thermal Jiggle)
    # np.random.normal(0, 1) generates a random number from a standard normal distribution (N(mean=0, std=1))
    noise_amplitude = np.sqrt(2 * dt) * thermal_jiggle
    stochastic_step = noise_amplitude * np.random.normal(0, 1)

    # Update position: Total movement = Drift + Jiggle
    position += deterministic_step + stochastic_step

    # Record results
    path.append(position)
    time.append(i * dt)

# --- Plotting the Results ---
plt.figure(figsize=(10, 5))
plt.plot(time, path, label=f'Drift={drift_force}, Jiggle={thermal_jiggle}')
plt.title("Particle Movement: Drift vs. Thermal Jiggle")
plt.xlabel("Time (s)")
plt.ylabel("Position (arbitrary units)")
plt.grid(True, linestyle='--', alpha=0.6)
plt.legend()
plt.show()


Task: 

a)     Generate a prompt that can reproduce the above code. Use the commented lines in the code as hints to develop your prompt (an illustrative sample prompt is given after this list).
b)    Check whether the output from your code and the output from the code above are qualitatively similar (for instance, the behavior of the plot observed in Trial B or Trial C).
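To illustrate what such a prompt might look like, here is one possibility (illustrative only; many phrasings would work, and students should develop their own): “Write a Python script that simulates a single particle moving in one dimension under the heavily damped Langevin equation, assuming mass = 1 and friction coefficient gamma = 1. Make the constant drift force and the thermal noise strength customizable parameters, use 1000 time steps of size dt = 0.01, update the position each step as dx = (F_drift/gamma)*dt + sqrt(2*dt)*T_jiggle*N(0,1), record the trajectory, and plot position versus time with matplotlib, labelling the curve with the parameter values.”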


Part 5: Revise the Model using Prompting

In Part 3, you identified a few parameters that can be added to make the simulation more realistic. Use prompting to add the parameters to your simulation and critically evaluate the behavior. [You can either evaluate by making observations as in Part 1, or, evaluate the underlying physics as you did in Part 2.]
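To give a flavour of what such a revision might look like, here is a minimal sketch that makes the drift force depend on a hypothetical ATP concentration (one of the revisions mentioned in the conclusion); the parameter names and the simple saturation law are assumptions for illustration, not a prescribed solution:

import numpy as np
import matplotlib.pyplot as plt

# --- Customizable Parameters (illustrative values) ---
max_drift_force = 4.0       # Maximum pull when ATP is plentiful (assumed)
atp_concentration = 0.2     # Hypothetical ATP level (arbitrary units)
atp_half_saturation = 0.5   # ATP level giving half the maximum drift force (assumed)
thermal_jiggle = 0.5        # Strength of random molecular impacts
time_steps = 1000
dt = 0.01

# Drift force now depends on fuel via a simple saturating (Michaelis-Menten-like) law
drift_force = max_drift_force * atp_concentration / (atp_half_saturation + atp_concentration)

position = 0.0
path = [position]
time = [0.0]

for i in range(1, time_steps):
    deterministic_step = drift_force * dt
    stochastic_step = np.sqrt(2 * dt) * thermal_jiggle * np.random.normal(0, 1)
    position += deterministic_step + stochastic_step
    path.append(position)
    time.append(i * dt)

plt.figure(figsize=(10, 5))
plt.plot(time, path, label=f'ATP={atp_concentration}, Drift={drift_force:.2f}, Jiggle={thermal_jiggle}')
plt.title("Motor Protein Movement with Fuel-Dependent Drift")
plt.xlabel("Time (s)")
plt.ylabel("Position (arbitrary units)")
plt.legend()
plt.show()

Lowering atp_concentration weakens the drift relative to the thermal jiggle, so the trajectory should begin to resemble Trial D more than Trial C.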

Conclusion: Teaching Scientific Directorship in the AI Era

This pedagogical framework for computational modeling in physical biology represents a fundamental strategic pivot: leveraging AI to address a historical teaching bottleneck and, in doing so, maximizing the development of higher-order cognition in students. The ultimate goal is not merely to teach students with AI, but to teach them how to lead AI.

The challenge in teaching computational concepts to students from traditional biology backgrounds was that the necessity of mastering coding and debugging created a significant extraneous cognitive load. This mandatory struggle with syntax diverted the student's finite working memory away from the actual germane load—the complex intellectual work of scientific analysis and model creation. It is crucial to emphasize that this strategy does not undermine the ultimate value of coding; rather, it makes a strategic, context-dependent choice to remove this technical barrier for a specific audience.

A Safeguard Against Cognitive Offloading

The scientific merit of this five-part structure lies in its meticulous sequencing, which serves as a safeguard against cognitive offloading—the central tension identified in AI education literature.

1.    Instruction First, Prompting Later: The student is rigorously taught analysis, critique, and model enhancement in Parts 1-3. The provided code is used as a neutral object of study, allowing students to develop mastery of the scientific process (e.g., interpreting implicit assumptions like fluid drag, proposing biological revisions like ATP concentration) before touching the AI tool.

2.   AI as Expert Assistant: The student is coaxed into prompting only after mastering the scientific requirements. The subsequent task of generating a computational directive (Parts 4-5) becomes the highest-order learning activity. This ensures the student is performing the necessary mental work (germane load), using the AI to execute their demands.

This intentional scaffolding operationalizes the expert oversight that is now the critical ability of the next generation. By automating the extraneous technical burden, the framework effectively elevates students into the “learner-as-leader” paradigm. They are taught to be the director of scientific thought, validating their ability to govern and refine complex computational systems—a necessary prerequisite for innovation in the AI-driven research environment of tomorrow.

Ultimately, this strategy transforms a technological challenge into a pedagogical triumph, ensuring that computational tools accelerate, rather than replace, genuine scientific education.



*This document and the exercises were refined and enhanced using Gemini 2.5 with the initial idea and subsequent prompts given by the author, Vigneshwar Ramakrishnan. In essence, AI was used as an expert assistant in developing this document.