On The AI Singularity
By now you’ve probably heard the expression popular in Silicon Valley: the singularity, the hypothetical point at which large language models, or artificial intelligence more broadly, supposedly surpass human intelligence. The idea is that once AI reaches this point, it will create even smarter AI, and so on, leaving us behind forever. It’s a bit like the concept of the Big Bang in physics.
Let me stop right here and say clearly: I’m still skeptical.
I’m not skeptical because I don’t appreciate how intelligent these systems are. I use them every day, probably more than most people. To give you a sense, here are the licenses I currently hold:
1. ChatGPT Pro – $200/month
2. Cursor Pro – $20/month
3. Whisper API (Cloud) – Usage-based
4. Descript Pro – $30/month
5. Lovable.dev Scale – $100/month
6. Grok Premium – $16/month
7. Perplexity Pro – $20/month
8. Claude Pro – $20/month
9. GitHub Copilot for Individuals – $10/month
10. Eleven Labs Starter – $5/month
11. Gemini Advanced – $20/month
12. Suno Pro – $10/month
13. Canva Pro – $15/month
14. Runway Standard Plan – $15/month
I pay for these licenses to stay informed (it’s my job), and these tools are incredibly helpful. They take me from okay to good at many skills, especially when I’m facing tough questions. But, and here’s the important part, they do not make me an expert at anything. In fact, if you think you’re an expert and ChatGPT can consistently outperform you, maybe you’re not truly an expert. At the very least, you should be open to that possibility.
For instance, I recently met a team trying to fine-tune an AI model, confidently throwing around terms like LoRA (Low-Rank Adaptation), RIFE (Real-Time Intermediate Flow Estimation), and Deforum. I offered them a conversation with someone who is a real expert, a person cited four times out of roughly 70 citations on Wikipedia’s reinforcement learning page, a very selective list. They didn’t take up the offer. Yet if you asked ChatGPT a question that only someone like him could answer, it would fail, not just because of hallucination, but because it genuinely doesn’t know.
Similarly, there’s no way ChatGPT could match a top-level coder on Codeforces. Competitors rated above 2000 regularly outperform the best GPT agents, because problem setters deliberately craft questions these agents can’t easily solve.
So, yes, I’m skeptical. Some might argue that AI hasn’t caught up yet only because of limits on compute or context window size. But even as context windows exceed 100,000 tokens, already costing around $50 an hour, what happens when we reach 10 million tokens? Will we really bridge that gap? I’m not convinced.
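For a sense of where a number like that can come from, here is a rough back-of-envelope sketch. The per-token price and call rate below are assumptions I made up for illustration, not quotes from any provider; they are just one combination that lands near $50 an hour.

```python
# Back-of-envelope cost of an agent that re-sends a large context on every call.
# All figures below are assumptions for illustration only.
ASSUMED_PRICE_PER_MTOK = 10.00   # assumed dollars per 1M input tokens
CONTEXT_TOKENS = 100_000         # tokens re-sent with each call
CALLS_PER_HOUR = 50              # assumed agent call rate

# Input-token spend per hour: tokens sent per hour, priced per million tokens.
hourly_cost = CONTEXT_TOKENS * CALLS_PER_HOUR * ASSUMED_PRICE_PER_MTOK / 1_000_000
print(f"~${hourly_cost:.0f}/hour in input tokens alone")  # ~$50/hour under these assumptions
```

Scale the context up to 10 million tokens under the same assumptions and the input bill grows a hundredfold, which is why I doubt bigger windows alone close the gap.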
These systems, while incredibly smart-sounding, still learn from us. Imagine training an AI on billions of dog images but never showing it more than five dogs in any single image. It would become exceptional at counting up to five dogs, better and faster than humans. Yet presented with an image containing eight dogs, it would fail, because it never learned that scenario. That’s narrow intelligence, not general intelligence. Many people recognize this but claim backpropagation and large datasets will eventually lead to general AI. As a scientist, I can’t dismiss this possibility outright, but mathematically there’s no current proof we’re headed there. We might be two years away, or a thousand, or never.
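To make the dog-counting example concrete, here is a minimal sketch, assuming NumPy and scikit-learn are available. The synthetic “images” are hypothetical stand-ins for a real dataset; the point is simply that a classifier whose training labels stop at five cannot answer eight, no matter how much data it saw.

```python
# Minimal sketch of the "never saw more than five dogs" failure mode.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_image(n_dogs, size=64):
    """Toy 'image': a noisy vector with exactly n_dogs bright spots."""
    img = rng.normal(0.0, 0.05, size)
    spots = rng.choice(size, n_dogs, replace=False)
    img[spots] += 1.0
    return img

# Training data only ever contains 1 to 5 dogs per image.
X_train = np.array([make_image(n) for n in range(1, 6) for _ in range(2000)])
y_train = np.array([n for n in range(1, 6) for _ in range(2000)])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# In-distribution: counting up to five works well.
X_test5 = np.array([make_image(5) for _ in range(100)])
print("accuracy on 5 dogs:", (clf.predict(X_test5) == 5).mean())

# Out-of-distribution: eight dogs. The label set is {1..5},
# so the model cannot answer 8 even in principle.
X_test8 = np.array([make_image(8) for _ in range(100)])
print("predictions for 8 dogs:", np.unique(clf.predict(X_test8)))
```

The second print never contains an 8: the model is excellent within the world it was shown and silently wrong outside it, which is exactly the narrow-versus-general distinction.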
Don’t get me wrong: I’m not advocating gatekeeping or telling you to consult an expert every time. I don’t like gatekeepers who constantly insist “consult a doctor” or “consult an attorney.” Most doctors and attorneys are just okay! I’m the first to challenge that approach. But I am seeing AI give many non-experts misplaced confidence that they’re now experts. AI tools wonderfully help people learn faster and give them the courage to start (vibe) coding or exploring data science, but they also smooth out nuanced ideas and sometimes push people over a cliff.
Now, more than ever, we should value genuine experts, the very people these AI engines are learning from.