I recently watched Andrej Karpathy's entire series Neural Networks: Zero to Hero. After finishing it, I had a much clearer understanding of what an LLM is, what GPT is, and how ChatGPT differs from the base GPT model.
My original reason for watching the series was simple: I believe everyone should understand what an AI assistant really is. We should understand what it can do, what it cannot do, and where its limitations are. AI is no longer just a tool used by researchers or big tech companies. It is becoming part of daily life, study, and work.
Before this, I treated AI mostly as my second brain. I used it every day: for learning, writing, coding, thinking, and even for daily life questions. I knew AI could help developers complete some tasks faster, but I still felt that human experience was something AI could not easily replace.
But after learning more about the path from GPT to ChatGPT, especially the idea of reinforcement learning from human feedback, I started to feel something different.
For tasks where the answer can be verified, AI can improve very quickly. Coding, math, logic, tests, and many structured problems are like this. There is a clear way to check whether the answer is correct. The code can compile or fail. The test can pass or fail. The math answer can be checked. The logic can be evaluated.
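This idea of automatic verification can be sketched in a few lines. The following is a minimal, hypothetical illustration (not from the series): a candidate answer is scored by running it against known test cases, producing a clear pass/fail signal that a training loop could use as a reward. All names here are made up for illustration.

```python
# Minimal sketch of a "verifiable reward": score a candidate solution
# automatically by checking it against known test cases.

def reward(candidate_fn, test_cases):
    """Return 1 if the candidate passes every test case, else 0."""
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0
        except Exception:
            # A crash counts as a failure, just like a compile error would.
            return 0
    return 1

# Example: two candidate implementations of absolute value.
tests = [((3,), 3), ((-5,), 5), ((0,), 0)]

good = lambda x: x if x >= 0 else -x
bad = lambda x: x  # wrong for negative input

print(reward(good, tests))  # 1
print(reward(bad, tests))   # 0
```

The key point is that no human judgment is needed anywhere in this loop: the check is mechanical, so it can run millions of times, which is exactly why AI improves so quickly on this kind of task.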
As a developer, this made me feel, for the first time, that my career might be at risk.
Developer work is full of logic. We may think our experience is special, and in some ways it is. But developers also love sharing. We write blogs, create best practices, summarize lessons, design patterns, architecture principles, and reusable solutions. In other words, we often turn our experience into structured knowledge. We convert messy real-world work into logic, patterns, and rules.
And that is exactly the kind of knowledge AI can learn from.
This thought stayed in my mind for days. When I was not working or studying, I kept thinking about it. If AI becomes very good at coding, math, and logic, what does that mean for humans? What does that mean for students? What does that mean for the future of education?
Then another thought came to me: our world is built on math.
Science, engineering, software, finance, physics, architecture, AI itself — all of them depend on math and logic. These are not just school subjects. They are the foundation of human development.
So if AI can do more and more of this work, should humans still learn these things deeply?
I think the answer is yes. Maybe even more than before.
The danger is not that AI can do math or coding. The danger is that humans may stop learning how to think. If students use AI too early, before they build strong fundamentals, they may only learn how to ask for answers instead of how to reason. They may get the result, but not the ability behind the result.
This is especially important because math, logic, and structured thinking are not just skills for passing exams or getting developer jobs. They are part of how humans understand the world.
Technology itself is good. It can improve productivity, reduce repetitive work, and help people learn faster. But capital is different. Capital usually chases profit. If a company sees AI only as a way to replace workers and reduce costs, then AI will not be used mainly to help humans grow. It will be used to remove humans from the process.
That is why I feel worried.
We should not reject AI. That is not realistic, and it is not useful. AI is powerful, and it can be a great assistant. But we need to be careful about how we use it, especially in education and early career training.
AI should not replace the learning process. It should support it.
For developers, the future may not be about writing every line of code by hand. But we still need to understand systems, logic, tradeoffs, architecture, debugging, security, and product value. We need enough knowledge to judge whether AI's answer is right or wrong.
If humans lose the ability to verify, question, and reason, then we become dependent on the machine.
So maybe the real question is not:
Can AI do the work?
The better question is:
Can humans still understand the work after AI does it?
That may become one of the most important questions of the AI era.
