For about two years I used both ChatGPT and Claude.
What changed was the tooling. In the past year, and especially early 2026, Skills and Claude Code became a real part of how I work. Not copy-pasting into Claude anymore. Working inside it. Building with it.
ChatGPT became basically a search bar, the place I go when I do not want to spend Claude tokens on a question. What to do about a pimple? How to fix x? Where to buy something?
ChatGPT has custom GPTs, memory, and instructions too. The configuration options exist on both sides. I just like Claude more. The outputs are better for how I work, the UI is nicer, and the company building it thinks differently.
why does the company behind it matter?
I have deep respect for Dario Amodei.
He has written two essays:
The first, Machines of Loving Grace, opens by saying most people are underestimating how radical the upside of AI could be, just as they are underestimating how bad the risks could be. Not some marketing essay. A serious attempt to describe what the world looks like if powerful AI goes right: compressing decades of biological and medical research, lifting economic floors, expanding access to things that have only ever belonged to the wealthy. He writes that his predictions will be radical by most standards, but he means them earnestly and sincerely.
The second, The Adolescence of Technology, published in January 2026, is the other side. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems have the maturity to wield it. He describes the moment we are in as a rite of passage where capability races ahead of governance and institutional maturity. He calls the essay a possibly futile attempt to jolt people awake.
Then watch what he does when it costs something. Anthropic drew two hard lines in their DoD contract: no mass domestic surveillance of Americans, no fully autonomous weapons. The Pentagon demanded unrestricted access. Anthropic refused. Amodei said publicly they cannot in good conscience accede to the demands. The Trump administration designated Anthropic a supply chain risk, a label previously reserved for foreign adversaries, never before applied to an American company. Anthropic sued the federal government. They also chose to forgo several hundred million dollars in revenue to cut off access by firms linked to the Chinese Communist Party.
Their Constitutional AI runs on a priority stack: safety, ethics, compliance, helpfulness, in that order.
As long as I have a choice, I would rather give my time, data, and energy to a company whose position on AI and human agency I am aligned with. The essays, the DoD refusal, the CCP revenue they walked away from.
That is a pattern of behavior, and it speaks volumes to me.
what are the 4Ds of AI fluency?
Anthropic built their framework around a question: what should stay human?
Delegation is deciding what to hand off and what stays with you. Description is communicating intent precisely. Discernment is evaluating output critically until the reasoning holds. Diligence is transparency about what AI contributed. Together they are a governance model at the individual level. Who is accountable when the output is wrong?
That philosophy shows up in how Claude behaves. It was designed to be curious and considerate rather than rush to answers. It pauses, considers multiple angles, and admits uncertainty. I use it for thinking, not just executing. It pushes back when the logic does not hold, and yes, that is deliberate: my settings explicitly ask it to push back on any nonsense I say.
Anthropic Academy launched in March 2026. Free, 13 self-paced courses, verifiable certificates. The flagship is the 4D AI Fluency course. Dedicated courses on the Claude API, MCP, and Claude Code. The GitHub repo has hands-on notebooks for developers who prefer running code over watching lectures. A company that publishes its thinking and then builds the tools to teach it is doing something different.
how do you make the switch from ChatGPT to Claude?
If you are a purpose-led founder or running a mission-driven startup, Claude is the right move; it is built for the kind of work we do: writing that needs to sound like a person, strategy that needs nuance, client communication that cannot afford to be generic.
Of course, results depend on the right setup. Great context. Projects keep client work separated. Custom Instructions load into every conversation with no character limit. Skills are markdown files that layer into your operating context and can execute specific workflows, even entire roles. The more Claude knows about your business, your clients, and your voice, the better its results.
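To make the Skills idea concrete, here is a minimal sketch of what one of those markdown files can look like. The SKILL.md layout with a YAML frontmatter block (name, description) follows Anthropic's published convention; the skill name and the instructions themselves are an invented example, not a real file from my setup.

```markdown
---
name: client-update-email
description: Drafts weekly client update emails in my voice, using the project brief for context.
---

# Client update email

When asked for a weekly client update:

1. Pull the client's goals and current status from the project context.
2. Lead with the single most important outcome of the week.
3. Keep it under 200 words, plain language, no jargon.
4. Close with the one decision or input needed from the client.
```

The frontmatter description is what Claude reads to decide when the skill applies; the body only loads once it is relevant, which is why a library of these stays cheap on context.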
I built a free setup guide at ninaverse.blog/blueprint: a 7-section template you fill out once for your custom instructions. After that, it carries into every session. For the full tactical walkthrough of switching, Ruben Hassid has written the most thorough guide I have seen.
If you build, Claude Code is worth the learning curve. It is a terminal agent that reads your codebase, works across files, and manages git. I build my site, client tools, and automations with it.
where do Claude and ChatGPT stand in 2026?
ChatGPT has image generation and a more mature voice mode. For image generation I use Gemini or Nano Banana. For deep research I use Perplexity. For voice input I use Whispr Flow. Claude's web search handles most of what I need day to day: an AI that reads my context, follows my instructions, writes in my voice, and produces work I can use.