Anthropic—the company that builds Claude—sends every engineering candidate an email before their interview: “Note that use of AI tools during this interview is not permitted.” Meanwhile, Meta hands you Claude, GPT-5, and Gemini inside the interview and says “go ahead, use them all.” Both companies still test you on algorithms and data structures. Google puts “experience with data structures and algorithms” in every job posting, from intern to senior staff. OpenAI does the same. So if the companies building AI still want you to know algorithms, what does that tell us about what actually matters in 2026?
1. The Great Contradiction
Tech Twitter and LinkedIn repeat the same message: coding is dead, DSA is dead, just learn to prompt. Agents will write all your code. Anthropic’s CEO predicted that 90% of code would be AI-generated—and in practice, we’re getting close. AI tools already write a lot of code.
But here’s the contradiction. Anthropic—the company building Claude—still hires engineers and still asks DSA questions. OpenAI requires a “strong foundation in data structures, algorithms, and software engineering principles” for roles like Research Engineer and ML Engineer. Google’s Software Engineering Intern posting lists “experience with data structures and algorithms” as a core requirement. Their Senior SWE role for Agentic Planning and Memory demands 5 years of DSA experience—even for an AI/agents role. Staff SWE for AI Data requires 8 years.
Don’t blindly believe social media. Check job postings at OpenAI, Meta, Google, and Anthropic yourself. The reality differs sharply from the narrative.
2. What’s Actually Changing
Companies like Meta are experimenting with AI-assisted coding interviews. Instead of “here’s a LeetCode medium, solve it on a whiteboard,” they let you use AI tools—sometimes state-of-the-art models—during the interview. Meta’s format includes a 60-minute CoderPad with an AI-assist chat window, multi-file codebase, and access to models like GPT-4o mini, GPT-5, Claude Sonnet, Gemini, and Llama. Around 20–30% of CoderPad customers now use AI in interviews, with over 35,000 AI-assisted interviews run.
If you hate DSA, that might sound exciting. “Finally, no more dynamic programming!” Not exactly. These AI-assisted interviews still test algorithms and data structures. You still need to know graph traversals, hash maps, when to use BFS vs DFS. The knowledge hasn’t gone away.
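To make that concrete, here is the kind of fundamental these rounds still probe, sketched in Python (the adjacency-list graph shape is a hypothetical example): BFS finds the shortest path in an unweighted graph because it explores level by level, which is exactly the "when to use BFS vs DFS" judgment you're expected to bring.

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Shortest path length (in edges) in an unweighted graph.
    graph: dict mapping node -> list of neighbors (hypothetical shape)."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:          # mark on enqueue, not on dequeue
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_shortest_path(graph, "A", "D"))  # 2
```

DFS would also find *a* path here, but not necessarily the shortest one; knowing which guarantee you need is the knowledge the interview is checking.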
What’s changed is the format. You’re no longer tested purely on writing a syntactically correct solution from memory. You’re tested on whether you can work with AI to arrive at a correct, performant, well-tested solution. The companies doing this aren’t making it easier—they’re making it harder. If you have an AI assistant, expectations go up. Problems get more complex. The bar for “good” shifts.
AI-assisted means you need to know enough to evaluate what the AI gives you. That’s a higher bar, not a lower one.
3. The Bar Didn’t Drop — It Shifted
In a traditional coding interview, the implicit test was: can you recall an algorithm, implement it correctly, and optimize it? Syntax, logic, complexity analysis.
In an AI-assisted interview—or any modern engineering role—you’re evaluated on different criteria:
- Correctness: Not just “does it pass test cases,” but “did you verify it’s solving the right problem?” People leaning heavily on AI often skip this. They get code that runs and assume it’s correct.
- Judgment: Can you look at AI-generated code and tell if it’s good? Spot when it deleted something important? Identify an edge case it missed? Explain why one approach beats another?
- Communication: You must explain your reasoning. Why did you prompt the AI this way? Why accept this suggestion but reject that one? You can’t just paste AI output and say “done.”
- Testing instinct: Unit tests, edge cases. Knowing that “works on the happy path” doesn’t mean “works.” With AI generating plausible-looking code in seconds, the ability to verify and validate is arguably the most important engineering skill.
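As a toy illustration of that testing instinct (a hypothetical example, not from any real codebase), here is code that passes the happy path and still isn't done:

```python
# Passes the obvious test case...
def average(xs):
    return sum(xs) / len(xs)

assert average([2, 4, 6]) == 4        # happy path: passes
# average([])                          # edge case: ZeroDivisionError

# A safer version, after asking "where could this break?"
# (returning 0.0 for an empty list is one assumed policy; raising
# a clear error is another valid choice):
def average_safe(xs):
    return sum(xs) / len(xs) if xs else 0.0

assert average_safe([]) == 0.0
```

The first version would sail through an AI-assisted session if nobody thought to ask about empty input. That question is the skill being tested.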
Data backs this up. CodeRabbit’s 2025 report found AI-generated code has 10.83 issues per PR vs 6.45 for human code—about 1.7x more. AI PRs contain 1.4x more critical issues and 1.7x more major issues. Cortex’s 2026 benchmark showed PRs per author up 20%, incidents per PR up 23.5%, and change failure rates up ~30%.
Two Cautionary Code Examples
Example 1: AI misses data type assumption. You ask AI to remove duplicate users from a list, keeping the first occurrence. AI returns code that works for lists of strings or integers. But if users are dictionaries, the code fails with TypeError: unhashable type: 'dict'. AI wasn’t wrong—you didn’t specify the input type. Your job is to understand use cases and edge cases.
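A minimal sketch of that failure, with hypothetical data shapes:

```python
# The kind of dedupe code an AI typically returns: keep first occurrence.
def remove_duplicates(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:       # requires hashable items
            seen.add(item)
            result.append(item)
    return result

print(remove_duplicates(["a", "b", "a"]))  # ['a', 'b'] -- fine for strings

users = [{"id": 1}, {"id": 2}, {"id": 1}]
try:
    remove_duplicates(users)               # dicts are not hashable
except TypeError as e:
    print(e)                               # unhashable type: 'dict'

# After specifying the input type, dedupe on a hashable key instead
# (the "id" key is an assumption about the user records):
def remove_duplicate_users(users, key="id"):
    seen = set()
    result = []
    for u in users:
        if u[key] not in seen:
            seen.add(u[key])
            result.append(u)
    return result

print(remove_duplicate_users(users))  # [{'id': 1}, {'id': 2}]
```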
Example 2: AI deletes critical code during “optimization.” You ask AI to make a rate-limiting function faster. The original has rate limiting, input validation, and audit logging. AI “refactors” it to just execute the query and return the result. Tests pass with correct inputs. In production, the system breaks under load. AI didn’t know your constraints. You must communicate the full context.
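A condensed sketch of the same trap, with hypothetical helper names and a stand-in for the real backend:

```python
import time

def execute(query):
    """Stand-in for the real database call."""
    return f"result for {query}"

_calls = []
RATE_LIMIT = 100  # max calls per second (assumed constraint)

# Hypothetical original: rate limiting, validation, and audit logging.
def run_query(query):
    now = time.time()
    # Rate limiting: keep only calls from the last second, reject if over.
    _calls[:] = [t for t in _calls if now - t < 1.0]
    if len(_calls) >= RATE_LIMIT:
        raise RuntimeError("rate limit exceeded")
    _calls.append(now)
    # Input validation.
    if not isinstance(query, str) or ";" in query:
        raise ValueError("invalid query")
    # Audit logging (stand-in for a real logger).
    print(f"AUDIT: {query!r}")
    return execute(query)

# The AI's "faster" refactor: correct on happy-path tests, but the
# rate limiting, validation, and audit trail are silently gone.
def run_query_fast(query):
    return execute(query)
```

Both versions return identical results for a well-formed query, so a happy-path test suite stays green while three production safeguards disappear.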
If you were hoping AI would make interviews easier, it’s doing the opposite for anyone who relied on memorization rather than understanding.
4. The Skill That AI Can’t Replace
So if the interview landscape is evolving but fundamentals aren’t going away, what skill actually matters? One word: thinking.
Every competent engineer runs some version of a mental checklist when writing or reviewing code. It doesn’t matter if the code was AI-generated or hand-written. The checklist is the same:
- Am I solving the right problem? Someone can spend an hour building an elegant solution to the wrong problem.
- Is this correct? Not “does it compile” or “does it pass one test case.” Is it actually, provably correct?
- What are the edge cases? What am I missing? Where could this break?
- What are the trade-offs? Every solution has them. If you can’t articulate the trade-offs, you don’t fully understand it.
- Is this maintainable? Will someone—maybe future you—understand this in six months?
- Can I explain this? If you can’t explain your solution clearly, you probably don’t fully understand it.
This habit of rigorous thinking separates engineers who use AI effectively from engineers who are just along for the ride. It’s exactly what interviews—traditional or AI-assisted—are trying to detect.
5. So What Should You Actually Do?
First: Look at the actual interview process for the companies you’re targeting. Not what Twitter says about interviews in general. What does this company do? If they run DSA rounds, prepare for DSA. If they run AI-assisted coding, prepare for that. Don’t let ideology about what interviews should be stop you from preparing for what they are.
Second: Invest in fundamentals regardless. Algorithms, data structures, complexity analysis—this knowledge doesn’t expire. It’s the foundation that lets you evaluate what AI gives you. If anything, it’s more valuable now because it differentiates you from someone who can only prompt.
For traditional DSA rounds: Learn patterns (two pointers, sliding window, trees, recursion, backtracking, dynamic programming). Practice a variety of problems. Master complexity analysis.
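As one example of those patterns, here is the sliding-window technique in its classic form (maximum sum over any window of size k), which turns an O(n·k) brute force into an O(n) pass:

```python
def max_window_sum(nums, k):
    """Maximum sum over any contiguous window of size k."""
    if k <= 0 or k > len(nums):
        raise ValueError("k must be in 1..len(nums)")
    window = sum(nums[:k])       # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add new element, drop old
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9  (window [5, 1, 3])
```

Recognizing that each window differs from the previous one by a single element in and a single element out is the pattern; the rest is bookkeeping.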
For AI-assisted rounds: Learn AI tools—how to use different models, what prompts work, what outputs to expect. Provide context when prompting; include scenario, domain, and constraints. Practice code review: look at AI-generated code, find problems, ask “is it solving the correct problem?” Explain your reasoning and articulate trade-offs.
Universal skills (both formats): Problem decomposition. Trade-off analysis. Reading others’ code. System thinking—how your application behaves at scale, how users interact, real-world constraints.
Finally: Stop waiting for interviews to get easier. Every time the format changes, the bar adjusts. When AI tools enter the picture, the problems get harder to compensate. Companies aren’t trying to find people who can use AI—everyone can use AI. They’re trying to find people who can think. If you can think, you’ll be fine regardless of how the format evolves.
Practice with high-quality problems. Use resources like Smart Interview Grind to build a custom plan based on your expertise level, target companies, and patterns you need to master.
Video Explanation
For a detailed discussion with real examples and deeper insights, check out the video:
If you need personalized guidance on preparing for coding interviews or building a study plan, you can schedule a one-on-one session to discuss your specific questions.
For more coding solutions and resources, check out my GitHub repository and all my helpful resources.