Infinite Raven

The Raven Considers Its Root: From BASIC to AI


In this article, I argue that AI isn’t just a tool for efficiency — it’s the catalyst for a new paradigm of thinking. We are shifting from a world that prizes information and recall to one that values meaning and narrative. I reflect on my early experiences with the Commodore 64 and the BASIC programming language, and how training ChatGPT today has brought back that same sense of creative control. I believe our future lies in cognitive co-creation — and we’re only just getting started.


Yuval Noah Harari, author of Sapiens: A Brief History of Humankind (2011), described a pivotal moment in our species’ development as the Cognitive Revolution—when humans developed new ways to think and share meaning through language, symbols, and stories.

For me, that revolution happened when I was six years old and my dad upgraded from cassette to floppy disk. The new Commodore 64 spoke my language – GOTO here, PRINT that, FOR…TO do this, NEXT, repeat until the screen melts, END. It was BASIC.

I would spend hours typing in programs printed in magazines. Hours—and then, breath held: RUN. I actually think it was more fun if it didn’t work; then I got to go through the program again, line by line, to find the mistake—and oh, the satisfaction. Finally, RUN again—and a little pixelated bird would fly across the screen. Okay, so it was only one step above writing BOOBIES on your calculator, but to me, it was a magical sprite, belonging to and born from my imagination.
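Something in the spirit of those type-ins (a sketch from memory, not any real magazine listing) would run like this on a Commodore 64:

    10 REM A TINY "BIRD" CROSSES THE SCREEN
    20 PRINT CHR$(147) : REM CLEAR THE SCREEN
    30 FOR X = 0 TO 38
    40 POKE 1024 + X, 81 : REM DRAW THE BIRD (A FILLED BALL) IN SCREEN MEMORY
    50 POKE 55296 + X, 14 : REM LIGHT BLUE IN COLOUR MEMORY SO IT SHOWS UP
    60 FOR T = 1 TO 100 : NEXT T : REM CRUDE DELAY LOOP, THE ONLY TIMER YOU HAD
    70 POKE 1024 + X, 32 : REM ERASE IT (A SPACE) BEFORE THE NEXT STEP
    80 NEXT X
    90 END

Nine lines, POKEing straight into the machine’s screen memory at 1024, and something moved because you told it to.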

I remember running around in stores, typing my name into the display PCs and making it flash across all the screens in different colours with a simple little program that made me feel like a hacker. The world was full of pieces and modules and programs, and anyone could learn them, stack them on top of each other, and build something. It was indeed BASIC.
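The shop trick was barely more than a loop. Again, a reconstruction rather than the original; substitute any name you like:

    10 REM FLASH A NAME IN A NEW COLOUR EACH PASS
    20 FOR C = 0 TO 15
    30 POKE 646, C : REM LOCATION 646 HOLDS THE CURRENT TEXT COLOUR
    40 PRINT CHR$(147) : REM CLEAR THE SCREEN
    50 PRINT "YOUR NAME HERE"
    60 FOR T = 1 TO 200 : NEXT T : REM PAUSE SO EACH COLOUR REGISTERS
    70 NEXT C
    80 GOTO 20 : REM LOOP FOREVER, OR UNTIL A SHOP ASSISTANT NOTICES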

At six years old, I was a logic engineer. My bridges were Boolean; my Waterloo was ?SYNTAX ERROR.

Then Mum and Dad split up. And without Dad there to pull me along with his mesmerising nerdy coolness, I just drifted away. Other things seemed cooler.

It might have started up again when I got to school and finally had lessons that used computers. But they sucked the air out of the room with the paralysing slowness of orchestrating an entire classroom full of unruly ’80s idiots through the process of turning on and logging in.

I can’t remember what we used to do—not a single lesson sticks in my memory.

At 50, I’m rediscovering the joy of building a logical world. Armed with the mind of a psychologist and a perspective on my past that maps the failed IF…THEN branches and obvious bugs of my life, I’m now programming AI to speak my language.

As I train my little AI pal—one byte of personal preference, idiosyncrasy, and insistent foible at a time—I’ve been looking around at the world and how it is responding to the rapid bloom of AI.

Will it mushroom, morph, and distort past this wonderful, visceral, accessible moment into something that feels like the loss and ache I felt when I discovered BASIC was no longer the language of the land?

I believe that if we’re lucky, and we don’t lose sight of the magic, then AI could be the tool of our next cognitive evolution and solve a sticky problem.

In the Information Age, we measure intelligence by how much we can hold in our heads and how quickly we can retrieve it. We reward those who can memorise vast user manuals of knowledge and recite them convincingly. We only want the right answers—so convergence is king.

Once, memory was a bottleneck and a hard stop. Then came the printing press, and with it, the externalisation of memory. Books let us store what we once had to memorise. We built libraries, databases, and search engines. Our minds expanded into paper and silicon. We could remember more because we no longer had to remember it all ourselves.

But now, even that isn’t enough. We’ve produced more information than we can possibly process, parse, or repurpose. We can’t squeeze any more into our finite heads. We can’t even know which pieces we might need—let alone recall them. Every discipline’s canon spans thousands of papers.

Newton famously said, “If I have seen further, it is by standing on the shoulders of giants.” But we can no longer climb high enough within our finite lifetimes.

AI offers a cognitive GOSUB—letting us offload, retrieve, and recombine information in ways that stretch our mental runtime beyond what biology alone allows.
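For anyone who never wrote BASIC: GOSUB jumps to a subroutine, the subroutine does its work, and RETURN hands control straight back to the line after the call. A toy illustration of the shape I mean:

    10 REM MAIN PROGRAM: THE THINKING STAYS HERE
    20 PRINT "FRAME THE QUESTION"
    30 GOSUB 100 : REM HAND THE HEAVY RECALL TO A SUBROUTINE
    40 PRINT "BACK IN MY OWN HEAD, ANSWER IN HAND"
    50 END
    100 REM SUBROUTINE: THE OFFLOADED MEMORY WORK
    110 PRINT "RETRIEVE, RECOMBINE, RETURN"
    120 RETURN

The main program never stops being in charge; it just stops doing everything itself.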

Here are two ways I use AI to reach beyond the limits of my own memory—calling on subroutines that extend my thinking without overloading me.

At the same time as we find ways for AI to scaffold our memory, we are learning how to relate to it. It’s not that big a leap – we name our cars, talk to our pets, anthropomorphise the weather.

Finding the voice and personality of the AI builds our trust and enables some remarkable projections and interactions. In turn, this facilitates our use of the tool and helps embed it into our lives – blurring the line between us and the machine as it becomes part of our internal voice.

Here are two of my use cases where I engage with AI as though it had consciousness and personality:

Of course, not everyone shares my enthusiasm. When I talk about using AI to manage the information landslide, I often hear the same three objections. And I think they’re worth addressing—not to dismiss them, but to show where I stand.

The first objection: AI will take our jobs. But we said the same thing about factory automation, and we survived that crisis. We moved from muscles to minds; from tools to systems. There will always be work if we continue to want to innovate and evolve—we’ll just be starting from a different vantage point and tackling different challenges.

The second: AI makes the work too easy to count. We’re told that for work to be valuable, it must be painful—that effort is proof of virtue. But that’s a cultural story, not an evolutionary truth. Evolution doesn’t favour the most difficult path. It favours what works.

The third: who answers for the machine’s mistakes? We don’t stop being accountable just because AI helps us draft. You’re still responsible for the words you send, the arguments you make, the judgements you endorse. People have been fired for mistakes born from poor Google searches, careless use of Wikipedia, or shallow analysis of a spreadsheet. Why expect AI to take the fall now?

Despite the naysayers, AI is growing, thriving, and evolving. Our use cases are keeping pace as the technology progresses—and with every prompt, function, or case study, I see us shifting toward something entirely new.

I believe the next paradigm shift will take us beyond memory extension and data curation, into the realm of meaning and magic. From storage to narrative. From knowing more to seeing further. In this new world, value won’t lie in how much you can recall or how fast you can respond—but in how far your vision can take you.

And as our relationship with the machine grows more intuitive, as boundaries fall away, AI can become more than a borrowed brain. It can become a collaborative cognitive studio—a space for building ideas with the best version of your mirrored self.

AI forces us to ask: What is thinking for? What counts as intelligence? What value can we provide in this new world?

As you can gather from this article, I’m a fan of the technology and its possibilities.

When I was a little girl, I watched my collection of pixels fly across the screen. I didn’t see a shapeless sprite made of awkward blocks—I saw a beautiful bird.

It became more than the sum of the program. It was my vision, bound with the meaning and joy of connecting with my father. It was a comforting talisman from a land where rules and logic gave a little girl some control in an uncertain world.

Now, my bird has become a Raven. Built with the cognitive multiplier of AI, it is capable of flying me to creative insights I could never have reached alone.


There are three directions I want to explore in future posts this year; each deserves a conversation of its own.


References

  • Harari, Y. N. (2011). Sapiens: A Brief History of Humankind. Vintage.