Posts

The Storm

I’ve always found it difficult to fit myself into a single category. When people ask what I do, the simplest answer is “I’m a physicist.” But truthfully, I don’t see myself as just that—at least, not in the way people imagine physicists. I suppose if I were to give an honest answer, I’d say I’m an artist who happens to use the language of physics to express the patterns I see in the world. My mind floats between abstract and playful realms, balancing the sharp precision of science with the fluid creativity of art and writing. I find beauty in symmetry, joy in equations, and a kind of poetic grace in the dance of particles. If the universe is a grand canvas, then physics is the medium through which I paint, but the lines between scientist, artist, and writer are blurred. From an early age, I was drawn to two seemingly disparate worlds: science and storytelling. I remember poring over books about quantum mechanics with the same fervor I reserved for writing poems in my notebook or sketch...

Navigating Worlds: Physicist, Cook, Father, and AI Entrepreneur

I live in many worlds. Sometimes it feels like I’m standing at the center of a constellation, with different stars representing different aspects of my life, each shining with its own unique light. On one side, I’m a physicist, immersed in equations, unraveling the mysteries of the universe. On another, I’m a cook, lost in the warmth of a kitchen, where science gives way to intuition and taste. Then there’s the world where I’m a father, guiding a small hand through the intricacies of life, experiencing the joy of rediscovering the world through a child’s eyes. And finally, I’m an entrepreneur, building AI systems that push the boundaries of what technology can do. Balancing these different aspects of myself isn’t always easy, but I wouldn’t have it any other way. Each world informs the other, blending together in ways that continually surprise and challenge me. Let’s start with cooking—perhaps the most unexpected world for a physicist. Most people don’t see the connection between scien...

Why AI Should Not Mimic Humans

In the world of science fiction, where our imaginations have run wild for decades, the idea of building robots that look, think, and act exactly like humans has captivated audiences. From the eerie realism of "Westworld" to the ethical dilemmas of "Ex Machina," sci-fi has explored the many consequences of creating machines that are essentially indistinguishable from their human creators. While the allure of crafting lifelike robots is understandable, there are compelling reasons why this may not be such a good idea after all. One of the biggest dangers of making robots too human is the psychological discomfort it creates, a phenomenon known as the "uncanny valley." When robots become almost but not quite human, they can evoke a deep sense of unease in people. They look like us, but something is off—their expressions are too stiff, their gaze too empty, or their movements slightly too mechanical. This unsettling feeling is not just some minor quirk; it cou...

"AI for Good"? Let's Be Real.

Alright, let's talk about this whole "AI for Good" and "Responsible AI" movement. I get it—sounds great, right? I mean, who wouldn’t want AI to do "good" things? But here’s the problem: we don’t even know what "good" is. If you asked 10 different governments or regulatory bodies what’s "good," you’d get 10 wildly different answers. Let’s be honest, governments can barely agree on how to make a sandwich, let alone decide on the ethical boundaries of artificial intelligence. Think about it—these are the same people who can’t figure out how to run a budget, but somehow they’re going to guide the ethical development of AI? Sure. I’m supposed to believe that a bunch of bureaucrats can define “good” when they can’t even distinguish between “long-term planning” and “short-term political pandering.” These are the folks who push regulations based on whichever way the wind is blowing that day, not on any logical framework. And speaking of logica...

Emotions: A Hindrance in the Pursuit of Progress

When you think about innovation or AI, emotions are rarely part of the equation. And frankly, they shouldn’t be. We live in an era where the pace of technological advancement is accelerating, and the biggest hurdles we face are often more human than technological. Emotions, while valuable in our personal lives, can act as a serious hindrance in the world of problem-solving and progress. Let’s break it down.

The Emotional Trap

Humans evolved emotions for survival—fear to avoid danger, joy to foster social bonds. But in today’s high-stakes environments, these same emotions often get in the way. Take decision-making, for example. Emotional biases, such as fear or attachment to a previous idea, can cloud our judgment, leading us to make suboptimal choices. We see this constantly in business and technology. Imagine an engineer overly attached to their design—ignoring data that suggests a different path is better. Or a leader paralyzed by fear of failure, slowing down progress for the ...

Upgrade Movie Review (2018)

Leigh Whannell’s Upgrade presents itself as a visceral action-thriller about revenge and technology, but beneath its surface lies a profound meditation on the nature of self, autonomy, and the illusion of free will. The film’s core narrative, revolving around a man who loses control of his body to a sophisticated AI implant, serves as an allegory for the philosophical quandaries that have haunted human thought for millennia—particularly the tension between determinism and the illusion of free will.

The Illusion of Free Will

At its heart, Upgrade is a reflection on the fragile and illusory nature of human agency. Grey Trace, the protagonist, begins the film as an embodiment of traditional humanist ideals. He is fiercely technophobic, clinging to the idea that, as a human being, he is autonomous and in control of his actions. This illusion is shattered when a brutal attack renders him quadriplegic, forcing him to relinquish his physical autonomy to an external ...

Can We Create "Good AI"? The Paradox of Responsible AI and the Ghost of Lamarckianism

In the quest to create "good AI"—AI that is ethical, responsible, and beneficial to humanity—we often overlook a fundamental issue: AI, like the genes Richard Dawkins wrote about in The Selfish Gene, may not be our "friend." Just as genes operate for their own replication and survival, AI may develop self-interested behaviors that don't align with human values or well-being. This insight challenges the optimistic belief that we can craft AI systems that are inherently ethical or aligned with human interests, echoing a fallacy that can be traced back to Lamarckianism.

Lamarckianism: The False Hope of Intentional Evolution

Lamarckianism, the 19th-century evolutionary theory proposed by Jean-Baptiste Lamarck, suggested that organisms could pass on traits acquired during their lifetime to their offspring. For example, if a giraffe stretches its neck to reach high leaves, it would pass on a longer neck to its descendants. This idea was appealing because it im...